Any render farms using processing virtualization?

  4 Days Ago
Any render farms using processing virtualization?

A senior IT person just told me that buying additional render nodes is old-fashioned and that the future is in virtualized processing, where all CPUs are virtualized and then divided up as needed to provide users with high-availability processing at any time via dynamic virtual machine clients. He claims it's far more cost-effective than purchasing additional hardware.

Renderers are generally well-threaded these days, so this just seems like slicing up a pie into smaller pieces while potentially costing more in software licenses. But according to this person (who has zero CG experience), render engines likely aren't very efficient with CPU resources. It sounds like rehashed marketing BS, but maybe I'm wrong?

I've never heard of a render farm virtualizing its CPUs. Maybe it's so new that major studios haven't used the technology yet? Or maybe it doesn't work well for rendering? Who is doing this, and how much better is it, if at all?

Last edited by sentry66 : 4 Days Ago at 09:47 PM.
 
  4 Days Ago
Sounds like something that V-Ray or similar per-system-licensed software would not support.
__________________
[Invivo Animation Reel]
 
  4 Days Ago
I'm guessing that that's a fancy way of saying "You'll have 12 remote servers with 24 CPU cores and 32GB RAM each rendering for you, but your 3D software will see the whole thing as 1 PC with 288 CPU cores and 32GB of RAM".

In other words, you won't need some kind of distributed rendering client or network rendering system anymore.

You will see 1 remote PC that you control and render with, but that can actually give you hundreds or thousands of CPU cores on-demand, depending on how much CPU horsepower you need and can pay for at a given time.

That is the only sensible explanation I can come up with for what he said - hundreds of virtualized CPU cores across dozens of networked remote servers that act like 1 big-ass computer with a massive number of CPU cores.

Yes, that would make remote rendering a lot easier I suppose. You'd essentially remote control just 1 virtual PC for rendering that is actually dozens of networked servers in real life.

Or put another way - you could have an entire rack full of 24-core servers sitting in the corner of your office, and you'd be able to control that whole rack like 1 big PC.

You could run 1 copy of Maya on it and just hit regular "render", without having to worry about networking, distributed rendering, whether the right files are on each machine, and so on.

Sounds like a good idea, except that none of the CG software makers will let you do that without milking money from you for every single CPU core in that virtual machine.
 
  4 Days Ago
He wasn't sure if you could combine all the CPUs into one giant virtual machine. It was more that he wants to take our existing machines and slice them up thinner to become more nodes, with just enough memory on each node to render our files. He wants to spend money to double the memory on the current machines.

I told him we can already run multiple parallel jobs on a single machine if it has enough memory, simply by giving each job fewer cores. We can even slice up each frame. In some auxiliary passes like mattes, where it takes more time to load the file than to render the frame, having more jobs in parallel can help, but the primary passes are almost fully multi-threaded.

We use V-Ray and mental ray. The only time the engines are not using all cores these days is when loading a large file (partially bottlenecked by network bandwidth) and toward the very end of a render, where a tile or two might hang on a single thread. I think at best we'd gain 5-10% more performance, at the expense of more network traffic and higher render license costs, but he seems to think we'll see more like a 2x gain.
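
For example, splitting a 24-core node into two 12-core jobs is basically just this on a Linux node. This is only a sketch of what I mean - the render executable and its flags are placeholders, you'd substitute your actual V-Ray or mental ray standalone command line:

Code:
# Sketch: split one 24-core node into two parallel render jobs by pinning
# each job to half the cores with taskset (Linux). The render command and
# its -scene/-frames flags are placeholders, not a real renderer CLI.
import subprocess

RENDER_CMD = ["render_standalone", "-scene", "shot010.vrscene"]  # placeholder

jobs = [
    {"cores": "0-11",  "frames": "1-50"},    # job A gets cores 0-11
    {"cores": "12-23", "frames": "51-100"},  # job B gets cores 12-23
]

procs = []
for job in jobs:
    cmd = ["taskset", "-c", job["cores"]] + RENDER_CMD + ["-frames", job["frames"]]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()  # block until both halves finish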

Last edited by sentry66 : 4 Days Ago at 11:46 PM.
 
  4 Days Ago
You can do this with Amazon EC2 (Elastic Compute Cloud).

The great thing is, when you run out of RAM or need more CPUs, you can log out, go to the dashboard, select how many extra CPUs and how much RAM you need, then log back in and finish your renders.

To answer your last 2 questions:
Who is doing this - I read about one guy years ago who started doing this and wrote a tutorial; unfortunately his blog is down now.
Is it worth it - Well, that depends on your situation. You're saying the render is hanging at the end, so you're obviously already getting output, which is great. I would work out why it's bottlenecking - are you sure it's not just saving out the render passes? That part (the last 5%) can easily take 10-20 minutes on its own depending on file size/format. I would investigate this first, then maybe look at other options.
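
If you wanted to script that resize instead of clicking around the dashboard, it looks roughly like this with boto3. The instance ID and instance type below are made-up examples, and EC2 only lets you change the type while the instance is stopped:

Code:
# Rough boto3 sketch of the EC2 "stop, resize, start" cycle.
# Instance ID and instance types are example values only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # example only

# 1. Stop the render node (the type can only be changed while stopped).
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# 2. Bump it to a bigger type for more cores/RAM.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "c4.8xlarge"},  # whatever size the job needs
)

# 3. Start it back up and carry on rendering.
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])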
__________________
James Vella
3D Visualization
Portfolio

Last edited by NorthernDoubt : 4 Days Ago at 07:49 AM.
 
  3 Days Ago
Originally Posted by sentry66: He wasn't sure if you could combine all the CPUs into one giant virtual machine. It was more that he wants to take our existing machines and slice them up thinner to become more nodes, with just enough memory on each node to render our files. He wants to spend money to double the memory on the current machines.
Sounds like the senior IT person got taken out to a nice lunch by the MS Server sales rep. Reminds me of the old joke: "Sure, we lose money on every sale, but we'll make it up in volume."
 
  3 Days Ago
Originally Posted by NorthernDoubt: You can do this with Amazon EC2 (Elastic Compute Cloud).

The great thing is, when you run out of RAM or need more CPUs, you can log out, go to the dashboard, select how many extra CPUs and how much RAM you need, then log back in and finish your renders.

To answer your last 2 questions:
Who is doing this - I read about one guy years ago who started doing this and wrote a tutorial; unfortunately his blog is down now.
Is it worth it - Well, that depends on your situation. You're saying the render is hanging at the end, so you're obviously already getting output, which is great. I would work out why it's bottlenecking - are you sure it's not just saving out the render passes? That part (the last 5%) can easily take 10-20 minutes on its own depending on file size/format. I would investigate this first, then maybe look at other options.

Interesting, but our renders are not hanging. They render perfectly normally, like anything else. An hour-long render might have 15 seconds of loading for a 2 GB file across the network, then 15 seconds at the end where the CPU cores finish their work - the hyperthreads are the last couple of threads still processing, since they're so much slower than real CPU cores.

So far the technology sounds interesting for its flexibility, but as a practical matter I'm not sure how often people are going to want to change the render farm node config just to chase that last 5% of performance for a particular project.

Anyway, hearing someone call adding more render nodes an "old fashioned" way of adding performance is something I don't think anyone has ever heard before.
 
  3 Days Ago
I don't know if this is the correct answer, but I think you should read this:
Redhat Cloud
 
  3 Days Ago
Did you check which buckets are being processed by which CPU to determine this slowdown? I've never heard of this, as all CPUs/threads should run at equal speed unless something is specifically using them (such as a manual affinity override). Typically the operating system won't hog that much. How much time in total are you trying to save here, per render?

Are these last buckets usually slowing down in similar areas, such as metals, glass or light fittings?
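
If you want to actually measure it, something as simple as this (psutil on the render node, run during the last minute or two of a frame) will tell you whether only a core or two is still doing work - just a quick sketch, the 50% threshold and 30-second window are arbitrary:

Code:
# Quick check of per-core load while a render is finishing up. If only
# one or two cores stay busy, that's your straggler bucket/thread.
import psutil

# cpu_percent(interval=1) blocks for 1 second per sample, so this runs ~30s
for _ in range(30):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busy = [i for i, pct in enumerate(per_core) if pct > 50]
    print("busy cores:", busy, "(%d/%d above 50%%)" % (len(busy), len(per_core)))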
__________________
James Vella
3D Visualization
Portfolio

Last edited by NorthernDoubt : 3 Days Ago at 04:29 PM.
 
  2 Days Ago
Hey guys, Alex here from CGIFarm.com.

We are actually using virtualization at one of our locations, and I'll give you the usage scenarios and why this is good in practice at large scale:

We have 64- and 96-core machines that we can virtualize. The main reasons to virtualize your hosts are these:

1. It's easy to deploy an image with the software and plugins you need, which makes maintenance simple: you mainly maintain one single image that gets propagated to all the machines. It takes 30-60 seconds to create 200 machines from a single 120 GB image (there's a rough sketch of how this works after point 3).

2. We can customize the VM processing capabilities depending on the job requirements. For example, say a job takes a really long time to load into RAM, because there are a lot of props etc., but renders in 5 minutes on a 96-core machine. You would keep about 90 cores idle while the job loads into RAM, wasting a lot of resources. In that case we would give 24 cores to each VM, so multiple machines load the scene into RAM and produce 4 frames at once; render time per frame increases, but the resources are used more efficiently.

3. Networking and security: having your system virtualized lets you create virtual networks very easily between the instances rendering one particular project, and having the nodes on a specific virtual network ensures that no other nodes have access to the rendered frames. It's pretty tricky to do this on bare metal - so tricky that probably no one would go through the process, as you would need a lot of scripts and agents to set these things up for you.

For VMs you can set the network configuration at creation time and delete the machine when it finishes the job. This way you always start a new machine from the fresh image, and the risk of a node being compromised during render time and having that propagate to future jobs is zero.
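
To give a rough idea of how the cloning and sizing in points 1 and 2 work, here's a very simplified sketch using qemu-img linked clones and virt-install. The paths, names, network and sizes are made-up examples, and our production setup is driven by its own scheduler rather than a script like this:

Code:
# Simplified sketch: spin up N render-node VMs as linked clones of one base
# image, each sized to the job (vCPUs/RAM). All paths/names/sizes are
# example values only.
import subprocess

BASE_IMAGE = "/srv/images/rendernode-base.qcow2"  # the single maintained image

def spawn_node(name, vcpus, ram_mb, network="render-net"):
    disk = "/srv/vms/%s.qcow2" % name

    # Linked clone: the new disk only stores differences from the base image,
    # which is why creating a couple hundred of them takes seconds, not hours.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2", "-F", "qcow2",
         "-b", BASE_IMAGE, disk],
        check=True,
    )

    # Define and boot the VM with exactly the cores/RAM this job needs,
    # attached to the project's own virtual network.
    subprocess.run(
        ["virt-install", "--name", name, "--vcpus", str(vcpus),
         "--memory", str(ram_mb), "--disk", "path=%s,format=qcow2" % disk,
         "--network", "network=%s" % network, "--import", "--noautoconsole"],
        check=True,
    )

# e.g. carve a 96-core host into four 24-core nodes for a scene that loads
# slowly but renders quickly
for i in range(4):
    spawn_node("job1234-node%02d" % i, vcpus=24, ram_mb=64 * 1024)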

In terms of processing power, if you were to virtualize a 24-core node you won't gain much, as most jobs run 30-60 minutes per frame on 24 cores, and that's already using the node pretty intensively. You lose about 5-8% of the CPU power by virtualizing your system, because of the extra virtualization layer.

I would say that virtualization at this point is for 64-core-plus machines with high RAM, 480 GB+...

For your office render farm, I would say it's much more profitable to run the machines as bare metal and use software like a remote installer to keep your nodes updated with the latest plugins. Licenses for good virtualization software are expensive, and you also need to build your own software to spawn the required nodes and delete them after the job is completed.

Plus, you only pay for 1 OS license if you are using Windows for your images.
 