Need advice about Xeon render blade please


#3

Ah, I wasn’t aware of that. Thanks for the tip! Is the E5-2600 also available in an ATX form factor?

Looking at this page, how can I tell if it’s 1U or 2U?
http://www.supermicro.com.tw/products/system/4U/F617/SYS-F617R2-RT_.cfm


#4

Those are actually 8 two-socket systems in a 4U chassis. Currently that should be the highest packing density you can get.
Keep in mind that this is not actually a blade system, but 8 independent computers that only share a redundant PSU. There is no common backplane etc.
This kind of system is useful if you need many plain standalone systems in a tight package. For a render farm it should be quite good, provided you have the rack space to mount it. This system will be very loud and will need a climate-controlled environment.

Cheers
Björn


#5

Following what Srek said, if you’re at the point where you have to ask about such things, I wouldn’t recommend going for such high density per square foot.

Buying components for a farm involves a lot more than just picking the right units.
What you can or can’t afford in power, climate control and floor space are important determining factors.

Don’t you have a reseller around who has set up other farms before and whom you can rely on? A preferred provider, maybe.

You will also need the right network and storage bandwidth so that a dense farm doesn’t become more of a bottleneck than a help.


#6

Hey man,

The ATX-sized parts are what you’ll find in a normal desktop or a large, cheap server. They’re bigger and cheaper than what you might find in a compact server. They make ATX motherboards that will hold a pair of E5-2600 chips.

-AJ


#7

Thanks for your input, Jaco. That’s the thing - there are only a few studios in Thailand with render blades (mostly Dell) and rarely anyone with a decent level of knowledge about them. The 3 other vendors I talked to knew nothing about 3D rendering, and I seriously doubt Dell’s sales people would know any better. I think my studio already arranged for a sales rep to come talk to us this week; I’ll find out.

I guess I have some vague idea what I should be going for… 6 blades, 2U, twin Xeon E5-2600 sockets, 24GB of RAM. On top of that, a file server with decent write speed and at least 3-4 NICs.

We’ll probably need a climate-controlled environment no matter how dense the blades are. I almost melted on my 5-minute walk to lunch today.


#8

If you need only six render nodes, I’d look at what gives you the best bang for the buck. High-density server hardware isn’t it, but it offers benefits for large deployments. Forget the 1U enclosures and 6U rack unless you need it to be portable or you plan to scale the farm out to many nodes (dozens or hundreds).


#9

Thanks Olsen. The most we’ll ever have is probably 3 boxes, 18 nodes total.


#10

What software will you be using on the farm? Cinema 4D, Maya, After Effects, Nuke, etc.


#11

Yeah, if you’re only getting 6 blades (which are going to be expensive), I wouldn’t bother with blades. Surely you have enough room in your studio for 6 mid-tower cases. With the money saved, you could probably get 8-10 non-blade machines of equal performance. Xeons are expensive either way, but you pay a small premium for having them shrunk down into blade rackmounts.


#12

Do you have space problems?
Or do you already have a climate controlled area for things such as servers with the racks already set up?

High density is rarely the way to go unless you tick both the above boxes.

A taller, easier-to-cool setup with more straightforward networking might leave you better off.

Also what engines and requirements?
Once you get a wider array (many CPUs and cores), with a lot of it being dual or quad procs, you also have to bear in mind that they share memory, and, guaranteed, you will at some point want to split jobs up more granularly so you can have multiple running per CPU.

If your average job now can cap 24GB on a workstation, then for a farm with duals seriously consider at the very least 32GB, if not more.
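
As a rough sanity check on that figure, here’s the kind of multiplication I mean (the 24GB is just your example from above; the overhead number is an assumption):

```python
# Rough RAM sizing if jobs get split so each socket runs its own task.
# The 24 GB figure is the example from above; the overhead is an assumption.
peak_job_ram_gb = 24   # what a heavy job caps out at on a workstation today
jobs_per_node = 2      # one per socket on a dual-CPU box
os_overhead_gb = 4     # assumed headroom for the OS and services

comfortable = peak_job_ram_gb * jobs_per_node + os_overhead_gb
print(f"comfortable: ~{comfortable} GB per node, 32 GB as the bare floor")
```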

Wattage is also important.
How expensive is power there?
High density means lower power consumption per cycle, but higher power consumption on cooling per watt produced (residual heat means cooling can’t ever be lazy, which means it hardly ever cycles off).
If you have expensive power bills but an already under-capacity climate-controlled area, they are great. If it’s the opposite, then you might want to go lower density with a taller, easier-to-cool rack.
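
To put rough numbers on the power side, a back-of-the-envelope like this is enough; every input is an assumption, so swap in your own draw, kWh price and cooling overhead:

```python
# Rough yearly running-cost sketch -- every input here is an assumption.
nodes = 8              # e.g. one 4U twin chassis
watts_per_node = 350   # draw under render load (assumed)
pue = 1.6              # cooling overhead: 1.6 means 0.6 W of cooling per 1 W of compute
hours_per_year = 24 * 365
price_per_kwh = 0.12   # in USD (assumed)

kwh = nodes * watts_per_node * pue * hours_per_year / 1000
print(f"~{kwh:,.0f} kWh/year, roughly ${kwh * price_per_kwh:,.0f}/year")
```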

Just keep these things in mind.

Other things like cost of licenses and licensing schemes also contribute.

A farm doesn’t finish costing you money once it’s set up; it just starts there, and people often underestimate how big an impact a poorly chosen one can have on your power and software bills.


#13

Oh, so RAM is shared? Do they share the power supply and NICs too?

@sentry66, @Jaco thanks for your replies, they really got me thinking. Apart from saving space, are there any major pros to going with blades? We have no space problem. Actually, we have lots of space. And since our country is really hot, our room is already air-conditioned.

Would an air-conditioned room be considered climate controlled? Is there more to it than that?

I’m thinking the farm would be used primarily for Vray; I believe Vray for Maya comes with 10 distributed/standalone licenses. Some nodes may also run 3Delight and PRMan; it all depends on our investor.


#14

What they share depends on the build, type and so on.

The PSU is usually shared, in the sense that you only need one plug per blade and internally it will draw and distribute what it needs, but there are many offers where you have two, and good blades often have redundant power supplies in case one fails (which is something you can put in any server-tailored case too, if you need to).

NICs depend on the model: some have multiple regardless of the number of CPUs and mobos (plates) hosted, some have one plug and deal with splitting and presenting multiple IDs in a managed way, and some will have Fibre Channel too.

Climate control is about the inside of the chassis staying at reasonable temperatures, so if you have an air-conditioned room hosting forty workstations comfortably, it won’t be a problem to add 10 more. Whether you use them for distributed rendering only or seat someone in front of them matters absolutely nothing; if they don’t overheat, they will keep churning out frames. That’s the whole extent of climate control.
With racks you have to be more specific and careful because it’s a lot of heat in a small space, but the principles don’t change.

Workstations are, from a power and running-costs point of view, very rarely advantageous over more compact solutions.
They will be cheaper in terms of casing and management, but they usually aren’t as optimized in heat and power draw as blades can be. It doesn’t mean they aren’t an option though. Again, space and power are the difference between a computational centre and a bunch of workstations.

There is nothing magic about hardware inside blades. If it can fit in one, the equivalent can fit in a case if you prefer that.

That’s why I was stressing those points.
People think of a renderfarm as if it’s some sort of magical, abstract entity… It’s not, it’s just a bunch of computers, end of story.

Power, space and computational needs and constraints dictate whether you need one in racks, or you can pile up some cases on a desk.
It’s all about logistics, end of story.


#15

Thanks Jaco. Yes, you’re right, at first everyone at my studio (me too) thought a render farm was something specialized. I’m relieved to know that each blade is just like a regular PC :slight_smile:

How much faster would a Xeon 2690 be than a 2650? The CPU alone is almost twice as expensive.


#16

It depends on the task.
For some you could see something close to time and a half, for others you will hardly notice the difference.

Whether the 1k-per-CPU difference is justified depends, as with the other things before, on many factors.

If your CPU-less blade cost is (for the sake of argument) $800, and you have no space, racking or network limits, it’s not worth it, because a configured blade costs 5k with the 90s and 3k with the 50s, making an additional blade the better ROI.

If a CPU-less blade costs you 2k, and you are license-capped with some expensive software that brings in a per-CPU license as a factor, adding, say, $500 per CPU, then the 90s will be better value for money.

If you have licenses in excess, space in excess, cooling in excess, and power doesn’t cost a fortune, then it boils down to naked blade costs, and in that case many cheaper CPUs will usually win over a few top-of-the-line ones you pay a ridiculous premium for.
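
To make that concrete, here is the kind of back-of-the-envelope I mean; all the prices and the 1.4× speed factor are made up purely for illustration:

```python
# Back-of-the-envelope cost per unit of render throughput.
# Every number below is made up purely for illustration -- use your own quotes.

def cost_per_throughput(barebones, cpu_price, license_per_cpu, rel_speed, cpus=2):
    """Total node cost divided by its relative rendering speed."""
    total = barebones + cpus * (cpu_price + license_per_cpu)
    return total / rel_speed

# cheap barebones, no per-CPU licensing: the cheaper CPUs win
print(cost_per_throughput(800, 1100, 0, 1.0))     # dual 2650-ish -> 3000.0
print(cost_per_throughput(800, 2000, 0, 1.4))     # dual 2690-ish -> ~3428.6

# pricier blade plus a $500-per-CPU license: the faster CPUs win
print(cost_per_throughput(2000, 1100, 500, 1.0))  # -> 5200.0
print(cost_per_throughput(2000, 2000, 500, 1.4))  # -> 5000.0
```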

Is it worth the money? It depends. As I said before, a farm is about balancing all things out from a purely logistical and financial point of view; if someone who doesn’t know your studio and constraints gives you a straight yes or no answer to that question, don’t trust them :stuck_out_tongue:


#17

A Xeon 2690 is at most 35% faster than a 2650 at single-threaded tasks and at most 45% faster at multithreaded tasks. In the real world though, performance doesn’t scale perfectly linearly and will likely be lower than that.
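
For what it’s worth, those ceilings just fall out of the published clock speeds, assuming I have them right (2650 at 2.0/2.8 GHz, 2690 at 2.9/3.8 GHz, both 8-core):

```python
# Assuming the published clocks are right (2650: 2.0 GHz base / 2.8 GHz turbo,
# 2690: 2.9 GHz base / 3.8 GHz turbo, both 8-core), the ceilings are just ratios:
single_threaded = 3.8 / 2.8 - 1   # ~0.36 -> roughly 35% faster on one core
multi_threaded = 2.9 / 2.0 - 1    # 0.45  -> 45% faster with all cores loaded
print(f"{single_threaded:.0%} single-threaded, {multi_threaded:.0%} multithreaded")
```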

Don’t compare CPU price vs CPU price. Compare system price vs system price


#18

Hm? Did you think about pure i7-2600K / 3820 / 3770 / 3930K CPUs?
They have THE BEST money/speed ratio…


#19

Only if you’re not using an expensive renderer (for example RenderMan). It doesn’t make sense to spend $2,000 per render license and put it on an $800 machine, because you run up a huge bill in licenses. Since in this case there are 10 complimentary licenses (according to the OP), single-processor nodes would be a good bang for the buck.


#20

Does each node need a full Maya license if I’m doing primarily Maya batch? Is there a cheaper license for render nodes?


#21

This is all stuff you can find on AD’s website, or from your reseller.
From their doco for 2013 (and AFAIK it hasn’t changed for 2014) assuming you mean MRay:

Network rendering using the render command line utility
Each Maya license allows you to render in Maya interactively on one machine and run batch rendering on five machines. You can therefore perform mental ray for Maya rendering on up to 6 machines.
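
If you want a quick way to turn that into a seat count, something like this does it (assuming every render node has to be covered by a Maya seat’s 5-machine batch allowance and nothing else picks up the slack):

```python
# Quick seat count for batch-only nodes, assuming every node has to be
# covered by a Maya seat's 5-machine batch allowance and nothing else.
import math

def maya_seats_needed(render_nodes, batch_per_seat=5):
    return math.ceil(render_nodes / batch_per_seat)

print(maya_seats_needed(18))  # the ~18 nodes mentioned earlier -> 4 seats
```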

Other engines will be a different deal, and how Maya seats are used is up to how things are set up over there.


#22

Thanks Jaco. I guess the 5-batch-machine allowance also applies to Vray. 1 Vray license every 10 nodes, 1 Maya license every 5 nodes. Doesn’t sound too bad.

Including the software price, building 16× 3930Ks actually comes really close to an 8-blade twin Xeon 2660 setup. Now it’s just a matter of space I guess :open_mouth: I have the impression that Xeons will probably have a longer lifespan and more endurance than i7s, so we might go that route for longevity.