4 Quad Xeon MoBo for Render Farm??


#1

Hi all,

Considering upgrading my render power for my home office.
I already have 3 PCs running 3ds Max 2010 and V-Ray 1.5.
I want to get rid of two i7 860 systems and buy new rack-mount machines, keeping the i7 2600K (4.4 GHz OC) with a Quadro 4000 as my main workstation.

I am considering either getting:

4 × single quad-core Xeon E5 machines (SSD, low-price motherboard, onboard VGA, 8-16 GB RAM, decent PSU and fan)
or
2 × quad-socket machines with four quad-core Xeons each (Supermicro motherboard, SSD, 32 GB RAM, water cooling, good PSU).

The second option is a tad cheaper than the first since I only have to buy 2 SSDs, cases, etc., and maintaining 2 machines is easier than 4. (Oh, I forgot that with the first option I'd have to buy 2 more licenses for V-Ray and Win 7, which I'd rather avoid.)

However, my main question is: can V-Ray or Max use that kind of setup (the 2nd option)?
I know from searching this forum that V-Ray can use all 32 cores while rendering (assuming I get the 4-core Xeons, not the 6-8 core ones). But will they "understand" that motherboard/CPU count?

My main PC (the one with the i7 2600K) will be slower than the slaves; however, I think that outside of rendering Max can only use 1 core, so I'm fine with my current workstation setup.

Any advice or recommendation is valuable,
thanx…


#2

I don't use V-Ray, but my understanding is that per license you are allowed up to 10 network "nodes", and each physical machine is (or should be) considered a node…

So… why would the CPU count inside a single machine be confusing to the software license? It's a single machine; core count or CPU count per node shouldn't matter.


#3

Thanks for the answer.
I'll check my V-Ray docs; I remember buying licenses for two extra PCs, which is why I'm confused.

About the other thing:
What I meant was, there are some threads mentioning that while rendering, mental ray can use all the cores as buckets (e.g. the i7 2600 has 4 cores and I see 8 buckets), but V-Ray has some problems, probably above 32 cores (4 × 4-core Xeons with HT is 32, isn't it?). If that is the case and 32 is fine, then no problem.

I guess what I wanted to ask was: just like you can't make 32-bit Win 7 recognize more than 4 GB of RAM, maybe Max/V-Ray can't recognize and make use of a system with 4 CPUs…
Does that seem logical/plausible?

Btw, I made a calculation error: the 1st option means 4 CPUs and the 2nd is 8 CPUs, so it will probably be more expensive because of the CPU count, but that is not the main issue.


#4

Four quad-core E5-4603s aren't a good idea for a render machine, I think. At 2 GHz they are rather slow. A single current 6-core i7 will be about 1/2-2/3 as fast and will cost only a fraction. It will also scale a lot better with single-threaded calculations.

As for RAM, no 32-bit Windows can use more than 4 GB of RAM. Any application you run on such a system will be able to address a maximum of 1.6-3 GB of RAM. I would not take any 32-bit system into consideration for a render machine.
You can take a look here for info on RAM usage. It's for CINEMA 4D but applies to most current applications as well:
http://www.bonkers.de/cinema/cinema.html#memory


#5

Oh no, I am already using 64-bit Windows; it was just an example.

Hmm, so I should consider getting 6-core Xeons too, then.

But the same question remains,
i.e.:
4 separate systems with 6-core CPUs
or
2 systems with dual-CPU motherboards?

Btw, I think 6-core chips are still twice the price of quads. Do you think two 6-cores (24 cores total) will be faster than four quads (32 cores total) because of the newer architecture, etc.?
It does not seem so to me, but I'm not sure; then again, that is why I am here.


#6

With every new generation of Intel processors you can roughly expect a speed increase of 10% at the same clock rate per core. Your actual speed depends a lot on what you are rendering and what your software supports; you should determine that before you start selecting hardware…
The way you are currently looking at things (number of cores/sockets) won't help you much; you need to be much more specific and compare detailed configurations.
Cheers
Björn


#7

Thanks for the info Srek.
As I prefer working freelance, there is nothing specific that I render; whatever the job is, whether fluid sims or arch viz, my setup must be generalist too…

I will dig a little deeper into the configs and costs.
But I gather that the best solution for me is either dual-CPU or single-CPU machines, preferably with 6-core CPUs…

cheers…


#8

If I were in your shoes, this is how I would do it…

I would first find a good baseline comparison between what you want to replace and its near equivalent in Xeon (it will be impossible to get a one-to-one match)… but something like:

http://cpuboss.com/cpus/Intel-Xeon-E5-2609-vs-Intel-Core-i7-860#differences

then do a quick passmark lookup:
e5-2609 single cpu score: 4389
e5-2609 dual cpu score: 8404
i7 860 score: 5169

I believe you could expect a similar result in real-world rendering calculations for a dual-CPU E5-2609 setup versus a single i7 860: about a 60% increase.
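For what it's worth, here is that arithmetic as a quick Python sketch using the Passmark scores quoted above (the helper function and variable names are just for illustration):

```python
# Quick sketch of the comparison arithmetic above, using the Passmark
# scores quoted in this post. The helper name is mine, not from any tool.
def relative_gain(new_score, old_score):
    """Percent improvement of new_score over old_score."""
    return (new_score / old_score - 1.0) * 100.0

i7_860     = 5169  # single i7 860
e5_2609_x1 = 4389  # single E5-2609
e5_2609_x2 = 8404  # dual E5-2609

print(f"dual E5-2609 vs i7 860:   {relative_gain(e5_2609_x2, i7_860):+.0f}%")  # ~ +63%
print(f"single E5-2609 vs i7 860: {relative_gain(e5_2609_x1, i7_860):+.0f}%")  # ~ -15%
```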

This will at least give you a good idea of the bottom-line product, price, and performance you're aiming at. Ultimately, it will come down to your budget, which dictates whether you end up with a single-CPU or dual-CPU solution. By applying the same comparison method above to what falls within your budget, you'll be able to make that decision.

There are other factors to consider, like power consumption… in the long run, this could very well fill the gap between cost/performance in single-CPU and dual-CPU systems… it's hard to find good real-world figures here, but they're out there… you'll just need to sit down and do the math for your locale. Making a higher initial investment with lower operational costs in the long run could be beneficial.


#9

Win 7 can only use 2 physical processors.

You need Win Server 2008 or 2012 to support 4 physical CPUs, which costs $700 for 2 CPUs, then another $670 for an additional 2 CPUs. So a $1370 OS for Windows… or run Linux for free - if only Max ran on it.

The question in my mind is: does Max run well on Win Server 2012? And can V-Ray fully utilize that many cores? I'm sure it can, but double check.

I just built my renderfarm based on the i7 3930k, all running at 4.7-4.9 GHz 24/7.
Nothing in the Xeon world comes close to that performance for the money.

My $1400 render nodes running linux hold their own quite well even when compared to $6000 dual xeon rackmount machines.

I get into debates with IT guys all the time about renderfarm computers. They'd rather spend 4-6x the amount of money per equivalent-performing machine to get that 5-year warranty, 4-hour on-site turnaround time, Xeons, ECC memory, redundant power supplies, etc. than have 4-6x the processing power and troubleshoot and fix the machines themselves when something breaks. Any single part on an i7 system can be torn down and replaced in 15-20 minutes or less and uses standard parts. Meanwhile, it'll take Dell up to 4 hours, or half a work day, to get the machine back to you.

On a large scale though, sure, I get it. Several mid-size ATX cases aren't tiny and will take up a large footprint even on large 3-row carts compared to rackmounts or blades. At some point you also can't be expected to maintain 100+ machines and still do animation, but if it's just a small number of machines, replacing the occasional bad power supply or a fan isn't a big deal. I'd rather spend 15 minutes replacing a bad part and go home at a normal time than spend 3 hours optimizing a render and go home late because I only have 1/6 the processing power available to me, with all the money tied up in warranties, Xeons, and fancy memory.

I actually don't think power consumption is that huge a deal when comparing multiple i7 machines vs fewer dual-Xeon systems. The Xeons are slightly more efficient with power, but the power consumption of a dual Xeon 2670 is barely better than an overclocked i7 3930k system while the performance is roughly the same in a lot of cases. You either have two 90-watt Xeon CPUs or one 130-watt chip overclocked 50% higher, so it becomes 180 vs 195 watts (actually more like 180 vs 220 since overclocking loses efficiency) to power the CPUs.
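As a rough sanity check on those wattage figures: dynamic CPU power scales roughly with clock speed times core voltage squared, so a sketch like the one below reproduces the 180 W vs ~195-220 W comparison. The frequency and voltage ratios are my own illustrative guesses, not measurements.

```python
# Rough sanity check on the CPU-power comparison above.
# Dynamic CPU power scales roughly with frequency * voltage^2.
# The ratios below are illustrative guesses, not measured values.
def scaled_power(tdp_watts, freq_ratio, volt_ratio=1.0):
    """Estimate CPU power after scaling clock and core voltage."""
    return tdp_watts * freq_ratio * volt_ratio ** 2

dual_xeon = 2 * 90                                 # two 90 W Xeon E5-2670s
i7_naive  = scaled_power(130, freq_ratio=1.5)      # 130 W i7, +50% clock, same voltage
i7_real   = scaled_power(130, freq_ratio=1.5,
                         volt_ratio=1.07)          # plus a small vcore bump

print(dual_xeon, round(i7_naive), round(i7_real))  # 180 195 223
```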


#10

I'm not so sure about the price difference, but I would agree (after spending a couple hours digging) that the power consumption difference is not all that great.

It was actually difficult to find real-world stats on a comparable E5 Xeon server and its power use… but I did come across this article:

http://www.tomshardware.com/reviews/supermicro-6027r-n3rf4-tyan-gn70-k7053-intel-r2208gz4gc,3150-15.html

At a quick look, averaging their 3 comparisons, the system would consume about 430 W (dual Xeon E5-2690). If an E5-2665 were used, it would be ~40 W lower. So 390 W would be a fair measure of a dual Xeon E5-2665 at 100% load.

Compared to that, my i7-3930k at stock 3.2 GHz uses on average about 370 W at 100% load… I don't overclock, but using a calculator I can guess that bumping to 4.5 GHz at 1.4 vcore increases power consumption by 120 W. I used this site's calc to guesstimate it:

http://www.extreme.outervision.com/PSUEngine

Overclocking the i7-3930k system to 4.5 GHz would, I think, raise it to ~450 W… I believe that is conservative, but I could be wrong.

If we only look at the 60 W difference:

My house has a 13 SEER AC unit, and roughly 5000 BTU/hr is what it takes to cool 100 sq. ft.
(http://www.energystar.gov/index.cfm?c=roomac.pr_properly_sized)

What we really want to do is calculate the cost required to offset the heat produced… so we take the wattage difference and find the system's BTU/hr production. This is (at 60 W) about 205 BTU/hr. My 13 SEER unit would need to draw ~16 W to offset this, or about $0.04/day (using an average of $0.10/kWh for electricity).

Then we need to add the cost of the direct extra power use of the CPU (the 60 W difference), which would be ~$0.15/day.

So, only ~$0.19/day, or about $70 a year more, to run the i7-3930k overclocked at 4.5 GHz/1.4 vcore versus the dual Xeon E5-2665… 24/7 at 100% load.

You can pick up a pair of Xeon E5-2665s ($3000) and a mainboard ($580) for, let's say, $3600… versus the i7 at about $900.

Yes, with a difference of $2700 and a saving of $70 a year… the ROI on power savings would take 38.5 years.
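Pulling those numbers into one place, here is a small Python sketch of the whole cost chain. Every input is one of the estimates from the preceding paragraphs, so treat the output as ballpark only (it lands within rounding of the figures above).

```python
# Ballpark version of the cost chain above; every input is an estimate
# taken from the preceding paragraphs, not a measurement.
WATT_DIFF    = 60      # ~450 W (OC i7-3930k) minus ~390 W (dual E5-2665)
KWH_PRICE    = 0.10    # $ per kWh
SEER         = 13      # AC efficiency, BTU per Wh
BTU_PER_WATT = 3.412   # 1 W of heat is ~3.412 BTU/hr

heat_btu_hr = WATT_DIFF * BTU_PER_WATT            # ~205 BTU/hr of extra heat
ac_watts    = heat_btu_hr / SEER                  # ~16 W of AC power to remove it
daily_cost  = (WATT_DIFF + ac_watts) * 24 / 1000 * KWH_PRICE

price_gap     = 3600 - 900                        # dual-Xeon build vs i7 build
payback_years = price_gap / (daily_cost * 365)

print(f"extra running cost: ${daily_cost:.2f}/day, ${daily_cost * 365:.0f}/yr")
print(f"payback on power savings: {payback_years:.0f} years")  # roughly 40 years
```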

But if we look at Passmark differences… the i7 @ 4.5 GHz seemed to score around 17,818 while the dual Xeon scored 18,128, a ~2% difference.

(I had to browse through here to find some posted scores on the overclock: http://forums.overclockers.co.uk/showthread.php?p=22269060 )

However, if you do not overclock the i7, the E5 system comes out about 33% ahead… the power consumption difference at that point is also negligible.

In the end, putting in the extra work required to get a stable overclock on the i7 can be worth the trouble… it just takes some time, know-how, and a bit more money for a solid cooling solution.

Personally, I'm using an i7-3930k… 'tis great, I don't overclock it, and the turbo mode works just fine when I load it up. I think my dumb math here is going to be a little off (especially the BTU/hr to cool my room), but it's close enough.


#11

There are two things, though, that come to mind for me… it would be nice to see some benchmarks for the 8-core/16-thread Xeons (like the E5-2665)… more cores, less clock speed. It has nearly always been the case in the past that the more cores you can throw at it, the quicker it will be… even at slightly lower clock speeds.

The other thing I would like to see is for someone to build a consumer "outdoor" AC unit for enclosed-computer enthusiasts… so I could pipe my system to the wall and push all that heat outdoors. Damn, it really gets warm in my office. It is usually 10 degrees warmer in here or more than the rest of my house… just with me, my PC, and a TV on.


#12

I agree with all points tswalk just mentioned.

The Sandy Bridge-E i7s such as the 3930k actually are Xeons, but with 2 cores and 8 MB of L3 cache disabled. They were not chosen to be Xeons because they leak slightly more current than the chips Intel selects to live on as Xeons.

As far as overclocking goes, our renderfarm is small, just 10 machines dedicated to rendering only, and the rest being workstations. Once you get a base overclock setting dialed in, you can save it to a USB stick and load it as the starting point for all the machines.

After that, it literally takes just 30-60 minutes to tweak 2-4 BIOS parameters for each particular machine if you know what you're looking for. Then you just let them sit for an hour or two crunching, looking for a crash. I worked on 2-3 machines at a time and got them all dialed in over an afternoon. Workstations take more time to overclock because there are more I/O loads at differing degrees, so it's harder to account for all situations until you experience them.


#13

http://www.tomshardware.com/reviews/xeon-e5-2687w-benchmark-review,3149-8.html

That article doesn't include an overclocked i7, but you can see how a regular 3930k compares to the top-end 16-core/32-thread dual Xeon. Imagine the i7 being 40-50% faster: it would finish that benchmark in around 80-90 seconds, which is only 10-22% slower than a machine that costs 4-6x more. The flipside is that any single-threaded task would be around 20% faster with the overclocked i7.

Mainly I just look at the Max space flyby render. Cinebench is too idealized; it only exercises perfect-scaling raytrace features, which is useful to know but not real-world for modern scenes that have to do precalculations, loading, memory juggling, tessellation, lightmaps, etc.

So compared to a single E5-2665, an overclocked 3930k would utterly destroy the E5-2665 in every performance category, except for a select few types of calculations that could fully reside in the 20 MB L3 cache of the Xeon but not the 12 MB cache of the i7.

I think it mainly depends on the scene you're rendering. More cores are always better, but with multiple cheap machines you get more cores and faster cores for the same money.


#14

Is DBR included with your V-Ray license, or do you have to have a separate license per server node for it?

Their site is a bit… under-informative.


#15

Thank you all for the info about the power consumption, but to be honest, it is not one of my priorities while setting up the system.
After all, to my understanding, if I am going to worry about whether or not I can cover the added electricity cost of new PCs, I should just continue with the setup I have and not invest in new computers…

@sentry66: Would you mind giving more info about your setup?
Besides i7 3930k, what psu, cooling, ram are you using?
I run my 2600k at 4.4 GHz (from the ASUS UEFI BIOS, I just clicked on turbo mode, nothing else); the fan seems to work a little louder, but otherwise it's pretty stable.
It's usually not recommended to OC render nodes, as they will work 24/7; have you had any problems cooling/PSU-wise, given that 4.7-4.9 is more than the "standard" turbo mode?

@tswalk: I'm a little ashamed: I never managed to get a stable DBR setup.
Sometimes it works, sometimes it doesn't.
It works with one machine and not with another, and I have no idea what the problem is, so I stopped bothering. I generally use Backburner, split frames across the PCs, and get separate renders for animation; for single renders I use only the main machine.
But once I get the new setup and NAS, I will have to find a solution.

According to cpuboss's 2600k vs 3930k comparison, the 3930k is naturally 25-30% better in almost all aspects than the 2600k except single-core performance, which is probably due to the 3930k running at 3.2 GHz compared to the 2600k's 3.4 GHz.

Well, so far so good; as we imagined, 4 × hexa-core 3930k setups then…
However, I read something like this:
"This chip, like the Core i7-3960X, requires an X79 Express motherboard with an LGA2011 socket; a discrete video card, as the chip sports no integrated graphics system (despite its use of the Sandy Bridge microarchitecture, which provided one on the Core i7-2600K and Core i5-2500K); and a separate cooler, as Intel does not bundle one with any CPU in this line."

I was already gonna get a separate/better cooler or go the liquid route; however, I do not want to invest in a graphics card for my render nodes.
Any ideas on that? Or is a VGA card only needed if it's going to be a workstation/main computer, and can I install Windows/Max etc. with onboard VGA?

cheers…


#16

You could easily get away with onboard video… or even without video at all. I've built images for systems that were headless before; it's not too difficult. Look into DISM for Windows to build a custom unattended install for each machine. That way you have a base image for distribution and can configure custom system details with a simple XML file. Once you get the hang of that, it's a breeze.
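To make the "base image plus a small XML file per machine" idea concrete, here is a rough Python sketch that stamps out a per-node answer-file fragment from one template. The XML is only an illustrative fragment (a real unattend.xml needs the full schema), and the node names and output folder are made up.

```python
# Sketch: stamp out per-node answer-file fragments from one template.
# The XML is an illustrative fragment only; a real unattend.xml needs
# the full schema. Node names and paths are made up for this example.
from pathlib import Path
from string import Template

TEMPLATE = Template("""\
<!-- fragment of an unattend answer file, for illustration only -->
<settings pass="specialize">
  <component name="Microsoft-Windows-Shell-Setup">
    <ComputerName>$computer_name</ComputerName>
  </component>
</settings>
""")

nodes = ["RENDER01", "RENDER02", "RENDER03", "RENDER04"]

out_dir = Path("unattend")  # one file per render node
out_dir.mkdir(exist_ok=True)
for name in nodes:
    path = out_dir / f"{name}.xml"
    path.write_text(TEMPLATE.substitute(computer_name=name))
    print(f"wrote {path}")
```

The idea is that the same base image gets applied to every node and only this small per-machine file differs.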


#17

Our render closet has its own air conditioner and dedicated power line, so it's always 70 degrees F at ambient temp.

Our render nodes are:
ASUS Sabertooth X79 motherboard - one of the better purpose-built overclocking boards
Obviously a 3930k
Noctua NH-D14 SE2011 cooler
Corsair Vengeance 4x4 GB 1600 MHz low-profile RAM (low profile is important for the Noctua cooler)
cheap $30 GeForce 210 (I wish the motherboard had onboard video :( )
cheap hard drive (or could be a USB3 stick) for the OS
Corsair 200R cases - fantastic cases for render nodes BTW, really quick to install
Corsair 750-watt gold-certified power supply

A 550-650 watt would be ok. I prefer Seasonic power supplies, but the corsair was $10 cheaper for the 750 watt and has a good reputation. IMO having gold certification is important since it lowers energy costs, produces less heat, and makes your air conditioning work less.

When everything is added up, it comes in around $1400 without an OS.

I have a high speed 110CFM 120mm exhaust fan out the rear
I have a normal 120mm exhaust fan out the top of the case in the rear position
I have a high-speed 100 CFM 140mm intake fan on the top of the case in the front position. Normally people might use this fan position as an exhaust, but I don't see the point in that since the CPU cooler is right there.
I place the graphics card in the 2nd PCI-E slot, away from the CPU.

Then I place a 120mm fan on top of the graphics card, angled at 45 degrees, blowing air down at the motherboard and the base of the CPU cooler. IMO this is a fairly critical fan placement. I zip-tie one of its corners to one of the Noctua's fan clips so it stays put.

Don't go with a liquid cooler for 24/7 overclocked rendering! Water pumps can fail; heatsinks can't. You won't be around to know the pump failed… and the pump software usually isn't Linux or Mac compatible. IMO liquid coolers should only be used for systems you are actively in front of, or high-end servers with a 24/7 IT staff on duty. Even if a pump has good reliability, on an overclocked system the failure rate is something like 20, even 30% sometimes, because they have to work hard all the time.

The other thing: I've had power supplies fail because their internal fan failed. I don't like the entire fate of the power supply, and thus the computer, hinging on that single fan, so I take an 80mm fan and zip-tie it to the rear of the power supply to act as a secondary backup exhaust fan. I put a fan guard on it if I have any and route the power cable in through an open PCI slot. I've had low-priority computers run for years off of these tiny backup fans with a dead main fan in the power supply. This is my form of power supply redundancy - a $2.50 fan.

No one here will probably be willing to do this, but I also take a little time to lap the CPUs and heatsinks since they're not even close to flat. Lapping shaves a few degrees off, gives you some more headroom for overclocking, and helps the chip last longer by transferring heat to the heatsink quicker. You can lap two CPUs at the same time, one in each hand, if you have a large enough sheet of glass, in about 40 minutes if you start with 320-grit sandpaper. It voids your CPU warranty and has a risk of damaging the CPU if you're not careful. I check that the CPU works before doing it.

I don’t know if anyone else has noticed this, but CPU temperatures are lower by a few degrees C in Linux than in Windows when rendering mental ray scenes or even just sitting at idle.

If we didn’t have to pay for all these expensive maya MR batch and Smedge licenses, I would rather have gone with 2600k i7 machines in mini ATX form factor with smaller cases and less extreme overclocks. We’d have 3x as many machines, roughly 50% more cpu power for the same money, and the machines would be semi-disposable coming in under $600 each. In the end the 3930k 2011 platform was better for us as it lowers the machine maintenance, physical machine footprint, and software licensing costs. It’s also nice in that our workstations are 3930k’s so it makes render times more predictable.


#18

Great writeup sentry66, thank you.
I am not nearly as technically capable or informed as you, so I'll have to re-read that post a few more times to take it all in.

The difference between my setup and yours is, well, besides all those custom-made things you mentioned :), that I already have a 46U AV rack with 20U of room to put all the necessary parts in (in case Google is not mistaken, Corsair 200R cases are tower style):
4 × 2U rackmount cases, 2U for a NAS, 1U for an Ethernet hub, and some spare room (whatever I may need, probably another UPS for starters).

But the main problem I see/will face is the cooling of the components in the rack.
The rack has only 4 fans on the top, 3 in, 1 out, and that's it.
With all the AV gear, it's already barely holding at 80°F while the fans run about 30% of the time.
The defining factor here is that even though the rack is in my basement, the ambient temp is never lower than 75.

I could get another rack (I'd prefer not to), let's say 22U, but the cooling solution will be the same, at the top with max 4 fans, unless I build something custom.
Maybe I can put 4 fans on the back door, but then every time I want to unplug something I have to deal with the fans on the door (I might keep the power cables long; that would solve that problem :))
But hot air rises and such, so the fans on the back door will be less effective.

If you think I can solve the cooling problem (to a degree) with such a setup, instead of getting a new rack I would like to use the one I already have, but I'm afraid that since it's taller, the fans will make less of a difference…

Btw, the only solution the rack company recommended was to get another 4 fans (3 in, 1 out) and put them directly on top of the highest heat source (which will be the PC cases).
But then wouldn't that only mean that I'd blow hot air onto the component above the fans (let's say the receiver) and blow the already hot air (ambient in the rack, 80) onto the cases?!! That looks pretty lame to me.

What do you recommend?


#19

Yeah, the Corsair 200R cases are regular aluminum tower cases, so they won't work in a rack and don't have the strength in the front for mounting.

Here’s a link to the carts we’re using that we stack them on:
http://www.homedepot.com/p/t/202361002?langId=-1&storeId=10051&catalogId=10053&R=202361002&catEntryId=202361002
I think their price is quite reasonable, though you have to assemble them. We can fit 12 machines on each cart, 3 rows of 4. I know it’s not an “enterprise solution” and the IT guys will sometimes tease me, but the hell if I care. It works great for a small setup.

What I like about the carts is that I can move them around. Even in a small space, I can slowly rotate them to get to the back of a machine, or easily relocate them. Racks are generally stationary and heavy with all the heavy-duty steel construction the cases have - steel is also worse than aluminum for letting heat out.

I'm really the wrong person to ask for rack advice because I avoid racks at all costs unless space has finally become an issue and smaller-footprint machines are required. What I do know is that the cooling in rack cases absolutely sucks most of the time unless you have those custom A/C enclosures the big renderfarms or super-computing centers use. I'd have to see your setup to really understand it, but it sounds like you're probably right about recycling hot air into the other rack components.

With rack cases, usually the only intake or exhaust points are the front and rear of the case. Very few rack cases seem to have side intake or exhaust locations. Typically the majority of the fans are up front, creating a positive pressure case scenario which will bring in a lot more dust into the case and not do well at getting the heat physically out of the case.

Negative pressure cases are better for 24/7 heavy computing while positive pressure setups are better for short term heavy computing or gaming. Positive pressure will get the heat off a component quickly in the short term, but usually creates more turbulence which will slowly bake the ambient case temps.

The smaller rackmount machines have those crazy high-speed small fans that sound like loud dentist drills. When I first started working where I am, we had a bunch of that type of small rackmount machine and the sound would drive you nuts. I figure we had a whole room to use up, so why not go with larger machines that can use larger fans that won't drive you insane?

So anyway, those issues combined with the low profile of the units mean I really don't think I'd recommend overclocking in rackmount cases, except maybe 4U ones that have some good high-powered exhaust fans. Even 4U cases aren't tall enough for a Noctua NH-D14 SE2011 cooler, so you'd have to go with a weaker cooler. Given that normal i7s run slightly hotter than Xeons, IMO it could just be asking for trouble.


#20

I know what you mean about the noise, especially in the 1U cases; that's why I was gunning for 2U: at least they have slightly bigger fans (8cm vs 6cm, I think), so they turn slower/quieter!

And, no, I do not think I'll overclock them, unless I invent some stable and constant nitrogen cooling technology :)
Still, for best bang per buck the 2600k is in the lead, but if I get 3 × 3930k instead of 4 × 2600k I'll be able to cut some expenses from all the other components and invest more in cooling. Meh, we'll see; I've got to learn the heat differences between the 2600 and the 3930 too.

Thank you for your help, mate. I'll read up some more on cooling solutions for taller racks and will check back with my findings, if I can find any…

cheers…