View Full Version : farm setup / maintenance??
10-20-2005, 04:48 PM
I've set up a RenderMan render farm at my site that consists of 5 dual G5's running OS X as render nodes, dispatching from a few G4 PowerBooks as user machines running Maya 6.0.1 and RAT 6.5.1. The file server is a 1 TB RAID array mounted through a G4 Xserve.
Unfortunately, I don't have complete say over how things are set up or used at my facility, because the G5's are often used for other things as well. This includes the way backups are done on our file server.
My first question to you all: with reference to workspaces, what have you found to be the best way to handle jobs/projects? Right now everything is read from and written to this file server, but I believe this is causing serious bottleneck issues. Should I set up a separate workspace for each project and have the files read from and written to the user machine? Or is a per-person workspace the better route? Basically, I'm looking for advice based on your experiences with this.
Second: how do you all handle and schedule backups of your file server? In my experience, running a backup on the server while rendering at the same time causes it to crash. Do you simply not render during a backup? If so, what do you do during crunch times?
10-20-2005, 06:42 PM
When you say things are being read from and written to the file server, do you mean that all I/O during renders occurs over the network? If so, one way to cut down on file-server/network load is to set up an easily mirrored project workspace: during a render, the project and related files are copied to each node, and once a frame is complete it is sent back to the file server.
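That mirroring step could be sketched as follows, assuming rsync is available on the nodes; the hostnames and paths here are entirely hypothetical. The idea is to build one rsync command per node so render-time reads hit local disk instead of the network:

```python
def mirror_commands(project_dir, nodes, remote_dir="/var/tmp/renderfarm"):
    """Build one rsync command per node to copy the project workspace
    onto that node's local disk before the render starts."""
    return [
        ["rsync", "-a", "--delete", project_dir + "/",
         f"{node}:{remote_dir}/"]
        for node in nodes
    ]

# Hypothetical project path and node names -- adjust to your farm.
for cmd in mirror_commands("/Volumes/raid/projects/shot010", ["g5-01", "g5-02"]):
    print(" ".join(cmd))
```

Running each command through ssh/rsync (with key-based logins set up between the machines) would keep the copy step scriptable from the dispatching PowerBook.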
Also, you mention that your G5's are often assigned to other tasks. Have you considered adding some inexpensive Linux boxes to act as dedicated render nodes? You can get quite a machine for under 400 USD these days.
If backups are conflicting with the rest of the file-server activity, you need to figure out how long the backups take and isolate the server from direct I/O during that period. If your project files are mirrored on the nodes, you should still be able to render; just hold off on sending the completed frames back to the server.
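A minimal sketch of that "hold off during backups" idea, with an entirely hypothetical backup window (adjust to your actual schedule) — nodes keep finished frames on local disk whenever the current time falls inside the window:

```python
from datetime import time

# Hypothetical nightly backup window -- adjust to your schedule.
BACKUP_START = time(1, 0)   # 1:00 AM
BACKUP_END = time(4, 0)     # 4:00 AM

def in_backup_window(now, start=BACKUP_START, end=BACKUP_END):
    """Return True if `now` (a datetime.time) falls inside the backup
    window, in which case finished frames should stay on the node."""
    if start <= end:
        return start <= now < end
    # Window wraps past midnight.
    return now >= start or now < end
```

A post-frame script would call this before copying anything back, and a later sweep could flush whatever the nodes accumulated once the window closes.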
10-24-2005, 05:29 PM
Would there be a way to set pre- and post-render scripts to copy the project files to each of the nodes at render time? i.e.:
1. I start job
2. Renderman copies all associated files to each node.
3. Job Renders
4. Finished frames are copied to file server (if backup is not running)
Or would I have to copy all the needed files to each node before starting a job?
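The steps above could be sketched as a pair of wrapper scripts, assuming rsync and passwordless SSH between the machines; the hostnames, paths, and lock file here are all hypothetical. The pre script handles step 2, and the post script handles step 4, skipping the copy whenever the backup's lock file is present:

```python
import os
import subprocess

# Hypothetical paths -- adjust to your setup.
BACKUP_LOCK = "/Volumes/raid/.backup-running"
NODE_DIR = "/var/tmp/renderfarm"

def pre_render(project_dir, nodes):
    """Step 2: push the project workspace onto every node's local disk."""
    for node in nodes:
        subprocess.run(["rsync", "-a", project_dir + "/",
                        f"{node}:{NODE_DIR}/"], check=True)

def post_render(node, frame_path, dest_dir, lock_file=BACKUP_LOCK):
    """Step 4: pull a finished frame back to the file server, unless the
    backup has left its lock file, in which case the frame stays on the
    node to be flushed later."""
    if os.path.exists(lock_file):
        return False
    subprocess.run(["rsync", "-a", f"{node}:{frame_path}", dest_dir],
                   check=True)
    return True
```

The backup script would touch the lock file when it starts and remove it when it finishes; a cron job could then sweep up any frames the nodes are still holding.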
Thanks for the help.
10-25-2005, 02:13 AM
Are you handy with any scripting languages, such as Perl or Ruby? I have been working on a distributed render manager in Ruby for a while. Basically, the server knows where the files are located and which frames need to be rendered. Each client requests a frame and is sent the files necessary to render it. The central server doles out the frames to prevent duplicated effort.
If you copy the files manually, you have to make sure you split the RIB files up amongst the various machines so that you don't render the same frame twice.
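The manual split could look something like this — a sketch that deals a frame range out round-robin so every machine gets a disjoint set and no frame is rendered twice:

```python
def split_frames(first, last, machines):
    """Deal frames [first, last] out round-robin so every machine gets a
    disjoint set; the union of all buckets covers the range exactly once."""
    buckets = {m: [] for m in machines}
    for i, frame in enumerate(range(first, last + 1)):
        buckets[machines[i % len(machines)]].append(frame)
    return buckets
```

Each machine's bucket then tells you which per-frame RIB files to copy to it (a pull-based manager like the one described above avoids the static split, but this works for a one-off job).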
I haven't used Maya with RAT in a long while; check whether there are hooks for scripts. I know Houdini has pre- and post-render script options. As a hack, I believe you could add a Tcl box to each project to handle the file transfer, but I think a centralized render manager would be more maintainable.