ITs, TDs: Nuke is not using all resources in the render farm

Old 10 October 2013   #1

I have Nuke and 10 licenses to render on the farm.
I'm using just one node with 32 threads and 64 GB of RAM;
the rest is reserved for Arnold. I compared the resources
used by both, rendering a super-heavy Nuke script full of
expensive nodes and a decent Arnold scene, each on a
different node with the same configuration.

The discrepancy is huge: Arnold sits around 90% utilization and Nuke around 10%.

I'd appreciate any help you can give me.
Old 10 October 2013   #2
I've never tried to use Nuke over the network, but it's not going to scale like a 3D render. Nuke only uses about half of your CPU cores at the best of times because of the difficulty of threading its various operations, so adding network bandwidth bottlenecks on top of that is likely going to cut that down even further.
Old 10 October 2013   #3
Hi David,
Thanks for your reply.
Have you ever found any workaround (a Python script, something on Nukepedia)? Or have you tried running Nuke multiple times and assigning each instance the same script with a different frame range?
Old 10 October 2013   #4
Nuke can use all of the processor cores most of the time but a few of the nodes won't. If the job is CPU bound then by all means run concurrent Nuke jobs to utilize all of the hardware. On the other hand it might also be I/O bound in which case running concurrent jobs will exacerbate the problem. For example if you're on a cheap NAS that does 25 MB/s then it'll take forever to render compositions no matter how many cores you have. When running a Nuke job take a look at the network traffic. Also what kind of network storage are you using?
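To put rough numbers on that I/O question, here is a back-of-the-envelope sketch. The 25 MB/s figure comes from the post above; the per-frame size is an illustrative assumption, not a measurement:

```python
# Rough I/O-bound check: can the storage feed frames fast enough
# to keep the CPUs busy? Frame size here is an assumed example value.

def max_frames_per_second(throughput_mb_s, frame_size_mb):
    """Upper bound on frames/s the storage can deliver for one stream."""
    return throughput_mb_s / frame_size_mb

# A cheap NAS at 25 MB/s serving ~12 MB frames tops out around
# 2 frames per second per read stream, regardless of core count:
rate = max_frames_per_second(25, 12)
print(round(rate, 2))
```

If that rate is well below what the cores could process, adding concurrent jobs only makes the contention worse.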
Old 10 October 2013   #5
Hi David,
We have a state-of-the-art render farm, and the NAS is not the problem; the network traffic is fine. We're running a Foundry render license on one node with 32 cores and 64 GB, and it just takes too long. We can see that it's only using around 10% of the node's resources overall.

We also tried an inelegant Python script that runs Nuke multiple times and assigns each instance the same script with a different frame range. It blew through the RAM and shut down the node.

We are now playing with the -m and -s flags.
Any ideas?
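A safer version of that multi-instance script caps how many Nuke processes run at once, so they can't collectively exhaust the node's RAM. A minimal sketch; the script path, executable name, and the -x/-F flag syntax are assumptions to check against your Nuke version:

```python
# Sketch: split a frame range into contiguous chunks, one per Nuke
# instance, and build the command lines to render each chunk.
# Paths and flags below are illustrative, not verified for your install.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def chunk_range(first, last, chunks):
    """Split [first, last] into up to `chunks` contiguous (start, end) pieces."""
    total = last - first + 1
    size = -(-total // chunks)  # ceiling division
    out = []
    start = first
    while start <= last:
        end = min(start + size - 1, last)
        out.append((start, end))
        start = end + 1
    return out

def render(span, script="/path/to/comp.nk"):
    """Run one Nuke instance over a frame span; returns its exit code."""
    start, end = span
    cmd = ["nuke", "-x", "-F", "%d-%d" % (start, end), script]
    return subprocess.call(cmd)

spans = chunk_range(1, 240, 4)  # e.g. 4 concurrent instances
# To actually launch (requires Nuke on PATH), bound the concurrency:
# with ThreadPoolExecutor(max_workers=4) as pool:
#     exit_codes = list(pool.map(render, spans))
print(spans)
```

Capping max_workers is what prevents the RAM blowout: memory use scales with the number of live Nuke processes, not with the number of chunks.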
Old 10 October 2013   #6
What makes you so sure the network isn't a bottleneck? If you want to try concurrent Nuke jobs, the queue manager should be able to do that for you. If you're manually starting jobs on each node, check out this.

I had done concurrent queues before for other stuff like Houdini and RealFlow, but Dave turned me onto this new library in Python 3.x which makes it so much easier.
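The library isn't named above, but concurrent.futures (in the standard library since Python 3.2) fits the description. A minimal sketch of fanning commands out to a bounded worker pool; the echo commands are stand-ins for real render invocations:

```python
# concurrent.futures sketch: run several commands at once, but never
# more than max_workers, so a node can't be overcommitted.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run one command to completion and return its exit code."""
    return subprocess.call(cmd)

# Stand-ins for real render commands (e.g. per-frame-range Nuke jobs):
jobs = [["echo", "frames %d-%d" % (i, i + 9)] for i in range(1, 31, 10)]

with ThreadPoolExecutor(max_workers=2) as pool:
    exit_codes = list(pool.map(run, jobs))

print(exit_codes)
```

Threads (rather than processes) are fine here because each worker just blocks waiting on a subprocess; the GIL isn't a factor for that workload.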
Old 10 October 2013   #7
Yeah, that Python 3.x library makes it ridiculously easy to keep jobs going. If you can't keep a node busy with that, then the limit is definitely your network.
Old 10 October 2013   #8
Thread automatically closed

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
