ITs, TDs: Nuke is not using all resources in the render farm


#1

Hi,
I have Nuke with 10 render licenses on the farm. I'm using just one node
with 32 threads and 64 GB of RAM for Nuke; the rest of the farm is for
Arnold. To compare the resources used by each, I rendered one super
heavily loaded Nuke script (with all the expensive nodes) and a
reasonably demanding Arnold scene.

Each ran on its own node with the same configuration. The discrepancy is
huge: Arnold uses around 90% of its node and Nuke only about 10%.

I'd appreciate any help you can give me.
Cheers,
-Alexis


#2

I’ve never tried to use Nuke over the network, but it’s not going to scale like a 3D render. Nuke only uses about half of your CPU cores at the best of times because of the difficulty of threading its various operations, so adding network bandwidth bottlenecks on top of that will likely cut that down even further.


#3

Hi David,
Thanks for your reply.
Have you ever found a workaround, a Python script, or something on Nukepedia? Or what about running Nuke multiple times and assigning each instance the same script with a different frame range?
Cheers,
-Alexis


#4

Nuke can use all of the processor cores most of the time, but a few of the nodes won’t. If the job is CPU bound, then by all means run concurrent Nuke jobs to utilize all of the hardware. On the other hand, it might be I/O bound, in which case running concurrent jobs will exacerbate the problem. For example, if you’re on a cheap NAS that does 25 MB/s, it’ll take forever to render compositions no matter how many cores you have. What kind of network storage are you using? When running a Nuke job, take a look at the network traffic.
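If your farm monitoring doesn’t already graph this, a quick sampler run on the render node will tell you whether the link is busy during a render. A rough sketch, assuming the third-party psutil package is installed (pip install psutil):

```python
# Sample network throughput on this node every few seconds.
# Run it while a Nuke job renders; Ctrl-C to stop sampling.
import time

import psutil

INTERVAL = 5.0  # seconds between samples

prev = psutil.net_io_counters()
while True:
    time.sleep(INTERVAL)
    cur = psutil.net_io_counters()
    # Convert the byte-counter deltas into MB/s over the interval.
    recv_mbs = (cur.bytes_recv - prev.bytes_recv) / INTERVAL / 1e6
    sent_mbs = (cur.bytes_sent - prev.bytes_sent) / INTERVAL / 1e6
    print("in: %6.1f MB/s   out: %6.1f MB/s" % (recv_mbs, sent_mbs))
    prev = cur
```

If that sits near your link’s capacity during a render, the job is I/O bound and more concurrency won’t help.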


#5

Hi David,
We have a state-of-the-art render farm, and the NAS is not the problem; the network traffic is fine. Using a Foundry render license, the renders just take too long on one node with 32 cores and 64 GB, and we can see the job is only using around 10% of the node’s resources overall.
We also tried an inelegant Python script that runs Nuke multiple times and assigns each instance the same script with different frames, but it just blew through the RAM and shut down the node.
We are playing with the -m and -s flags.
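Roughly, the shape of what we’re attempting looks like this. A minimal sketch only: the script path and frame numbers are invented, and I’ve added the -c cache flag alongside -m as a guess at containing the RAM, so check nuke --help on your build for the exact flag behaviour:

```python
# Split a frame range into chunks and launch one capped Nuke process
# per chunk, so the instances can't collectively exhaust the node.
import subprocess

SCRIPT = "/jobs/show/comp/shot010_v03.nk"  # hypothetical script path
FIRST, LAST = 1001, 1100                   # hypothetical frame range
PROCS = 4                                  # concurrent Nuke instances
THREADS_PER_PROC = 8                       # -m: threads per instance
CACHE_PER_PROC = "8G"                      # -c: cache memory per instance

# Ceiling division so every frame lands in exactly one chunk.
chunk = (LAST - FIRST + 1 + PROCS - 1) // PROCS
procs = []
for i in range(PROCS):
    start = FIRST + i * chunk
    end = min(start + chunk - 1, LAST)
    cmd = ["nuke", "-x",
           "-m", str(THREADS_PER_PROC),
           "-c", CACHE_PER_PROC,
           SCRIPT, "%d-%d" % (start, end)]
    procs.append(subprocess.Popen(cmd))

# Wait for every chunk to finish and report failures.
for p in procs:
    if p.wait() != 0:
        print("render chunk failed with exit code", p.returncode)
```

The idea is that capping each instance’s threads and cache should stop four concurrent renders from eating all 64 GB, but we’re still tuning the numbers.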
Any ideas?
Thanks,
-Alexis


#6

What makes you so sure the network isn’t a bottleneck? If you want to try concurrent Nuke jobs, the queue manager should be able to do that for you. If you’re manually starting jobs on each node, check out this:

http://docs.python.org/3.3/library/concurrent.futures.html

I’d done concurrent queues before for other things like Houdini and RealFlow, but Dave turned me onto this new library in Python 3.x, which makes it so much easier.
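For example, keeping a fixed number of render chunks in flight on one node looks roughly like this. A sketch only: the nuke command line and frame chunks are placeholders, so substitute your real render command:

```python
# Run several Nuke renders concurrently with concurrent.futures.
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def render(frame_range):
    """Render one chunk of frames in its own Nuke process."""
    cmd = ["nuke", "-x", "-m", "8",
           "/jobs/show/comp/shot010_v03.nk", frame_range]
    return frame_range, subprocess.call(cmd)

chunks = ["1001-1025", "1026-1050", "1051-1075", "1076-1100"]

# max_workers caps how many Nuke instances run at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(render, c) for c in chunks]
    for f in as_completed(futures):
        frames, code = f.result()
        print("%s finished with exit code %d" % (frames, code))
```

A thread pool is enough here because each task just blocks on an external render process, so the GIL never gets in the way.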


#7

Yeah, that Python 3.x multithreading is ridiculously fast. If you can’t keep a job busy with that, then the limit is definitely your network.

