Choosing the Right CPU (Xeon CPU)


#5

Thanks Bjorn, great info.

By the way, I currently have a Core i7 920 at 3.4 GHz (overclocked) that I'm not going to junk. It has worked well since I bought it 5 years ago, but I now really need more speed for rendering and simulation. The new workstation's main use would be rendering and simulation.


#6

I don't trust CPU Mark or PassMark for rendering benchmarks. Their scores don't match up with results from actual rendering engines. I also suspect they aren't very well optimized for lots of cores, so high core counts end up performing similarly to low core counts, like when comparing dual to single CPUs of the same CPU model.
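
Just to illustrate the point: if a benchmark has even a modest serial (single-threaded) fraction, Amdahl's law says its score plateaus as cores go up. A quick sketch, with made-up serial fractions (not measurements of PassMark or any other benchmark):

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n),
# where s is the serial (non-parallel) fraction of the workload.
def speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Illustrative serial fractions only: a well-threaded renderer
# vs. a poorly-threaded benchmark.
for s, label in [(0.01, "renderer, 1% serial"), (0.20, "benchmark, 20% serial")]:
    for n in (6, 14, 28):  # e.g. hex-core i7, E5-2697 v3, dual E5-2697 v3
        print(f"{label}: {n} cores -> {speedup(n, s):.1f}x")
```

With a 20% serial fraction, going from 14 to 28 cores barely moves the score, which would explain dual CPUs benchmarking close to single ones.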

These new Xeons stay in turbo mode for quite a while, which is 3.6 GHz for the E5-2697 v3. It's not until 8-10 cores are in use that the clock speeds start dropping toward their base speed.
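
As a rough sketch of what those turbo bins mean for total throughput (the per-core-count clocks below are placeholders I made up, not Intel's actual bin table; only the 2.6 GHz base and 3.6 GHz max turbo of the E5-2697 v3 are published figures):

```python
# Rough aggregate-throughput model for per-active-core turbo bins.
# Bin values are assumed placeholders: active cores -> GHz.
turbo_bins = {2: 3.6, 4: 3.4, 8: 3.2, 10: 3.0, 14: 2.6}

def clock_for(active_cores):
    """Return the assumed clock for a given number of active cores."""
    for cores, ghz in sorted(turbo_bins.items()):
        if active_cores <= cores:
            return ghz
    return min(turbo_bins.values())

for n in (2, 8, 14):
    print(f"{n} active cores: ~{clock_for(n)} GHz each, "
          f"~{n * clock_for(n):.1f} GHz aggregate")
```

The point being that aggregate throughput still climbs with more active cores even as the per-core clock drops.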

The newest 8-core i7s are good when overclocked, though ~4.4 GHz generally seems to be the highest long-term stable clock frequency they run at.

IMO I'd only go with Xeons if you need/want dual CPUs and/or more than 64 GB of RAM… or if you're stuck with a certain vendor and that's all they offer. For rendering, though, dual Xeons are beasts.


#7

To the best of my knowledge, Cinebench is one of the very few benchmarks that correctly measures more than 64 threads (32 cores with HT and above). Below that there can be some differences as well, but above that point most benchmarks become unusable.


#8

Yeah, I agree. IMO Cinebench is the standard and best rendering benchmark for judging raw raytrace performance.


#9

Thanks for the replies,

Finally I decided to buy a Xeon. I have two choices and I'm not sure which one would be better:
Xeon E5-2697 V3
Xeon E5-2687W V3

What's your suggestion?


#10

At first glance the 2687W looks like the way to go, but there was an article somewhere that showed the 2697 to actually be better, because its much broader turbo range made it faster than the 2687W in every core-usage scenario. In the end, the 2697 ends up better than the 2687W in every possible way, though it's also more expensive.


#11

Maybe a bit off track, but have you considered GPU rendering at all, if it's available in your line of work?


#12

Not enough memory is available on GPUs, and fewer shader options are available. That, and legacy CPU render farm infrastructure is hard to abandon if you've already invested heavily in it for years.


#13

With engines like Redshift, memory is not that big an issue; it renders huge scenes without a single problem.
Older infrastructure is one thing, but consider the cost savings when you get 10 times more speed out of 10 times fewer workstations: fewer licenses, less power consumption, and, not to mention it again, much faster rendering. Those savings pay back the money invested in transferring from CPU to GPU, plus the move can be done gradually by adding GPUs, which is easier than updating CPUs.
Back to the RAM issue: it was a problem years ago, not so much these days, with advances both in how engines handle RAM and in the rapidly increasing amounts of RAM on cards.

And with hardware it is always more or less the same situation: you invest a lot, and in a couple of years it's obsolete and simply can't do its work any more, and it's time to let go :slight_smile:


#14

In the OP's position I'd go with a GPU rendering solution, hands down. Octane and Redshift are now available for 3ds Max; personally I'm a big fan of Redshift because it will go out of core and use the available system RAM to supplement GPU RAM.
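
For anyone unfamiliar with the term, here's a conceptual sketch of what "out of core" means: assets that don't fit in VRAM get kept in system RAM and pulled across PCIe when needed. This is not Redshift's actual implementation, and the sizes are made up:

```python
# Conceptual out-of-core split: keep what fits in VRAM, spill the rest
# to system RAM (accessed over PCIe, slower). Sizes are made-up examples.
vram_gb = 4.0
scene_assets_gb = {"geometry": 2.5, "textures": 3.0}

in_core, out_of_core, used = {}, {}, 0.0
for name, size in sorted(scene_assets_gb.items(), key=lambda kv: -kv[1]):
    if used + size <= vram_gb:
        in_core[name] = size   # resident in GPU memory
        used += size
    else:
        out_of_core[name] = size  # spilled to system RAM

print("in VRAM:", in_core, "| spilled to system RAM:", out_of_core)
```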

If you go with an i7 5930K and several GPUs, the render capability will far outweigh any dual-Xeon workstation, and at lower overall cost as well.


#15

For the price of a system with 2 high-end Xeons you can easily set up an i7 3930K on an X99 mobo with 4x 970 cards, and it will out-render that Xeon system by far.

Right now I have 3 comps with a total of 10 GPUs in them (one with 4x Titan, one with 4x 970, and a third smaller one with a 780 and a 970 card), and I'm using them to render a whole TV series we are working on in HD resolution. Let's just say they render faster than we can send scenes to them :slight_smile:


#16

you can easily set up an i7 3930K on an X99 mobo

X99 mobos are socket 2011-3, so they aren't compatible with a 3930K, which is the older socket 2011. X99 can only be used with the 5820K, which will be limited to 3-way graphics cards as it only has 28 PCIe lanes, or the 5930K & 5960X, which have 40 PCIe lanes and so can do 4-graphics-card setups.
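
The lane math is easy to check if you assume each card needs at least a x8 link, which is the usual minimum in multi-GPU setups (a sketch; real boards also cap you at however many physical x16 slots they provide):

```python
# PCIe lane budget: how many GPUs fit if each needs at least a x8 link?
def max_gpus(cpu_lanes, lanes_per_gpu=8):
    return cpu_lanes // lanes_per_gpu

for cpu, lanes in [("5820K", 28), ("5930K", 40), ("5960X", 40)]:
    print(f"{cpu}: {lanes} lanes -> up to {max_gpus(lanes)} GPUs at x8")
# 5820K: 28 lanes -> 3 GPUs; 5930K/5960X: 40 lanes -> 5 by raw lane count,
# though in practice boards only wire up 4 usable x16-size slots.
```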


#17

I meant the 5930K as the slightly more expensive option, depending on budget, or he can take a step back to the older 3930K on an X79 board, like for example the P9X79-E WS from ASUS with 4 PCIe x16 slots.

Anyway, budget-wise it should come in below or close to a dual-Xeon system, but with much more speed at hand as well.


#18

The 5930K is a pointless CPU for 3D work.
The 5820K is just as fast. The extra PCIe lanes of the 5930K make no difference for rendering, even GPU rendering, because it's not bandwidth-limited enough.
The 5820K or 5960X are the ones to get.


#19

On what grounds would you suggest the 5960X over the 5930K?
The 5960X is twice the price and supports the same number of PCIe lanes.

Also, the 5920 can support only 3 GPUs, while the 5930K can support 4 of them installed.
It is not a matter of speed over the lanes but of the number of cards that can be installed.

So the 5930K is actually the sweet spot: a bit fewer cores than the 5960X, but a faster clock, and it supports 4 GPUs.

edit: 4 GPUs are not supported in 3-way SLI on the 5920, but SLI is not used in rendering anyway. I'm not sure if 4 GPUs can physically be installed on the 5920k, so I would need to confirm that :slight_smile:


#20

You mean the 5820K, right? No such thing as a 5920K ;)


#21

Yea yea, sorry, hehe. 5930, 5960… my fingers automatically continued :slight_smile:


#22

I'm saying the 5930K is pointless for CPU rendering vs the 5820K because the clock-speed difference is nothing.
And it's pointless for GPU rendering vs the 5820K because the extra PCIe lanes just provide more bandwidth, and GPU rendering isn't that dependent on bandwidth. They won't let you use more GPUs, just get better transfer speeds on those GPUs, and you won't hit those limits in GPU rendering, only in gaming, and even then only in gaming with more than 2 graphics cards.
On the other hand, even though it's quite expensive, the 5960X provides 2 extra cores = +33% for CPU rendering (a bit less in practice, given its lower stock clocks).
So I'm saying, for the CG artist, only the 5820K & 5960X are worth looking at. The 5930K is a waste of money.
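
As a sanity check on that, here's a naive cores x base-clock comparison using the published stock clocks (it ignores turbo bins and IPC and assumes rendering scales linearly with cores):

```python
# Naive CPU-rendering throughput proxy: cores x base clock.
# Base clocks are the published stock figures for these CPUs.
cpus = {"5820K": (6, 3.3), "5930K": (6, 3.5), "5960X": (8, 3.0)}

baseline = cpus["5820K"][0] * cpus["5820K"][1]
for name, (cores, ghz) in cpus.items():
    score = cores * ghz
    print(f"{name}: {cores} x {ghz} GHz = {score:.1f} "
          f"({score / baseline - 1:+.0%} vs 5820K)")
# The 5930K buys only ~+6% over the 5820K, while the 5960X's two
# extra cores net ~+21% even at its lower stock clock.
```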

P.S. I went back & edited where I got the CPU names wrong; I don't want to contribute to any confusion.


#23

Thanks sentry66 and the other guys for the replies.
Finally, I decided to buy a 2697 v3.

@mirkoj, my main use is rendering with CPU-based render engines like V-Ray, Mental Ray and so on, but in some cases I'll use GPU-based engines/software like iray, Lumion3D and so on. But as I first mentioned, the CPU has priority.

I wanted to buy one GTX Titan Black, but someone said it would be a hot choice and that I should look at other options like the GTX 980. Do you guys agree with him?


#24

A hot choice?
Potentially overpriced if you have no use for DP floats, sure; you basically pay a big premium for the 6 GB of RAM, and it probably won't be long before 9xx cards with 6 or 8 GB come out at affordable enough prices.

Hot? What does he mean exactly? All NVIDIA cards from Kepler on will try to hit 80°C under stress all the time and then microthrottle their unit-block clocks to stay right there (or darken some silicon if that's more convenient, I believe); it's how they are designed. The stock cooling, though, is plenty to keep it under control with very little noise (I own a Titan, btw).

The draw of a GHz edition Black under load averages around 220 W and can peak close to 400 W for a while.
The more you cool it and clock it, the more it will draw, of course (since it will try to hit 80°C and then throttle down), so extremely well-cooled Blacks are known to have averaged 250-260 W, but not at stock AFAIK.

The 980, depending on cooling, will obviously output less than that, given it has less than 70% as many cores and only a slightly higher clock (which can be throttled), and it's a Maxwell, which is more power-efficient in general. But you are still looking at 160-170 W averages, and long peaks of 360 W or so.

Cooling-wise there isn't a lot more that the Black will require, and neither is a particularly dangerous card. I wouldn't put them in a poorly conditioned farm rack close to the heat pipes :stuck_out_tongue: but in any semi-decent gaming or workstation case it's absolutely not a concern.

The basic thermal design of Kepler and Maxwell (Titan and 9xx) GPUs means they basically always have the same thermal impact temperature-wise; the more you cool the case, the more they will draw and, of course, radiate. The difference in peak, though, is in the 10-20% range depending on load and cooling, which is nothing to base a purchasing decision on, IMO.
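
Plugging in the draw figures quoted above bears that out (a quick check using only the numbers already in this post, with 165 W as the midpoint of the 160-170 W average):

```python
# Compare the quoted draw figures against the "10-20% peak difference" claim.
draws_w = {"Titan Black": {"avg": 220, "peak": 400},
           "GTX 980":     {"avg": 165, "peak": 360}}  # 165 = midpoint of 160-170

for key in ("avg", "peak"):
    black, gtx = draws_w["Titan Black"][key], draws_w["GTX 980"][key]
    print(f"{key}: Titan Black {black} W vs GTX 980 {gtx} W "
          f"(+{(black - gtx) / gtx:.0%})")
# avg: +33%, peak: +11% -- the peak difference lands in the quoted range.
```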