backburner pb: multiple tasks on a single computer

09 September 2009, 04:11 PM
Hello everyone,

I'd like to submit a job to Backburner, but it has to be computed on a single machine and must not be stopped and restarted by another (I'm computing a GI map with V-Ray), and I don't really know which switches to activate to do so... I guess "nonconcurrent" could be one, maybe "blocktasks", though I suspect that one doesn't block all tasks for a single computer. There are also "NonStoppable" and "SrvLimitCount" nodes in the Backburner XML file (I'm actually manipulating this XML to send the job, so I'm not limited to the 3dsmaxcmd.exe or cmdjob.exe switches)...
So I'm asking here in case someone knows the solution directly, before losing hours trying by myself :)


09 September 2009, 04:25 PM
cmdjob supports a tasklist feature. Each line of the list corresponds to a machine, so if the list only contains one machine, then the job will only be processed by one machine. I would make the job critical (-priority:0); as long as you don't send another critical job, it won't be interrupted by any other jobs. Use -servers to assign it to a specific machine, and only one machine. Set an extremely large value for -timeout: (like 60000 or so) to make sure it doesn't time out, though this can have downsides.
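Put together, a submission along those lines might look like this (a sketch only: the manager name, server name, and batch file are placeholders, and -manager/-jobName are assumed to be available alongside the switches named above):

```
rem Pin a critical, long-timeout job to a single render node (hypothetical names)
cmdjob.exe -manager:MANAGERHOST -jobName:"GI_precalc" ^
    -priority:0 -timeout:60000 ^
    -servers:rendernode01 ^
    precalc_gimap.bat
```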


09 September 2009, 05:12 PM
If you're saving I-Maps, I would save one per render with a callback like so...

The function "f360lib.digittostring <number> 4" returns a four digit frame number.
You'll have to create your own OutPath based on your network.

Add this callback before submittal...

-- Add persistent callback (run before submitting the job)
I_MapPath = < network Location >
ExStr = "z = f360lib.digittostring (currentTime.frame as integer) 4 \n"
ExStr += "OutPath = (\"" + I_MapPath + "_\" + z + \".vrmap\") \n"
ExStr += "makeDir (getFilenamePath OutPath) \n"
ExStr += "renderers.current.saveIrradianceMap OutPath \n"
callbacks.addScript #postRenderFrame ExStr id:#CB_SaveIrradianceMap persistent:true

I have the Imapviewer.exe tool located in the root dir of max for convenience, and call a merge of all the maps with something like this... then apply the merged map when I do a final render.

ldmergemaps = ""
svmergemap = < merged map filename >.vrmap
MapsToMerge = getFiles (< network location > + "*.vrmap")
-- build the list of -load arguments (+= appends to a string; append is for arrays)
for i = 1 to MapsToMerge.count do
    ldmergemaps += (" -load \"" + MapsToMerge[i] + "\"")
-- quote the exe path in case the max root contains spaces
doscommand ("\"" + (getDir #maxroot) + "imapviewer.exe\"" + ldmergemaps + " -save \"" + svmergemap + "\" -nodisplay")

This way you can have the whole farm work on the I-map calculation. Although the DR is pretty solid now, that's an option for small scenes... >80 MB

09 September 2009, 09:51 AM
to Eric:
Actually cmdjob is what I was using until now, but it has several shortcomings compared to 3dsmaxcmd, and a major one for my script is that the netrender method "<job>.handle" only returns the handles of 3dsmaxcmd jobs (why?), so I have to send the job with 3dsmaxcmd...
I already use the tasklist feature for Fusion jobs sent to BB, and it works well, I admit!

to "Kramsurfer":

I'm not sure I understood everything you said (by the way, why don't you just use the formattedPrint command to format numbers?), but the aim I intend to reach is (obviously) to lower computing time. To do so I set the irradiance computation to "Multiframe incremental", which doesn't recompute previously sampled zones... but since the map is kept in memory, it can't be computed on multiple nodes. I already thought of computing multiple maps and gathering them with Imapviewer.exe, but I guess it would take much more time than using "Multiframe incremental".
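Incidentally, formattedPrint can do the zero-padding directly in MaxScript, with no helper library (a minimal sketch):

```
-- "0012" for frame 12
z = formattedPrint (currentTime.frame as integer) format:"04d"
```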
I didn't manage to find a way (admittedly I haven't searched a lot...) to send DR jobs and other jobs (scanline, MR) at the same time, but that may be the solution...

I also found an "IgnoreJobShare" node in the BB XML, and, great, there is no documentation on it either...

09 September 2009, 05:37 PM
In your .xml file you could set <Servers All="No"> and then include only one server in the server section. That may do what you want; I never use the approach you're using, so I can't help beyond what I can find from a simple examination of a standard Backburner .xml file.
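For illustration, that part of the job XML might look roughly like this (the <Servers All="No"> attribute comes from the post above; the inner element name and attribute are assumptions, and the exact schema varies by Backburner version):

```
<Servers All="No">
    <Server Name="rendernode01"/>
</Servers>
```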

Hope this helps some and good luck,

09 September 2009, 09:40 AM
Yes, but that would mean using an arbitrarily chosen server, instead of letting BB pick an available one by itself, for instance...

After analyzing two jobs, one with a .tga output and another with a .mov output (BB should treat my irradiance-map job the way it treats a QuickTime job, that is to say computing all the frames in a row on one machine), it seems that the only nodes that change are NonConcurrent and NonStoppable, so I'm going to explore that direction... but I could swear I've already tried it and it didn't prevent stopped jobs from being resumed by another computer.
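For reference, in the job XML those two nodes would presumably look something like this (the element names come from the comparison above; the value format is an assumption):

```
<NonConcurrent>1</NonConcurrent>
<NonStoppable>1</NonStoppable>
```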
Thanks guys!


09 September 2009, 05:03 PM
Yeah, what we're doing is computationally longer, but when spread across all the nodes it's much faster in artist time and allows for more iteration. What's an artist's time worth vs. computer render time? And like anything else, this value varies with workload and due dates.

As far as DR goes, I simply use a cmdjob that launches the vraySpawner on the 'DR Slave' systems and assign those to the submitted job. We have a handful of render nodes that are prioritized to be DR nodes first and Max render nodes second. It's not 100% press-and-go: you need to be mindful of the server statuses while using DR.

10 October 2009, 03:31 PM
Yeah, what we're doing is computationally longer but when spread across all the nodes, much faster in artist time and allows for more iteration. What's an artist's time worth vs computer render time?
Actually the other nodes are computing other jobs, so it's not really costing "artist's time", since in the end all the passes come together...
Bad news for me on the other hand: the "NonStoppable" switch seems to be f***ed up, and certain jobs manage to stop it anyway (those with a higher priority, for example)... what it does do reliably is restart the job from zero when another server picks it up... I lost DAYS of rendering time before I noticed that (and of course there is no trace in the logs when a job is restarted like this, so it's invisible; everything seems to work fine...)
I don't know what to do... changing software maybe...

P. (completely upset :cry: )

10 October 2009, 05:20 PM
What about keeping it at priority 0 and assigned to one machine? And make sure everyone else understands the consequences of submitting other jobs at zero... namely, don't do it!

If you have Python installed, I can give you some code to set the priority to -127... then no one will EVER jump it.. ha ha...

Either way, you really shouldn't render to .mov with network rendering. Individual frames are the way to go: you can re-render one frame, or paint on another to fix something... frames, frames, frames.

10 October 2009, 07:29 PM
It sounds like you are trying to hack your way through a problem that would be more easily solved by just creating a batch file, removing the machine from the render farm, and executing the batch with 3dsmaxcmd. That way you know it won't be interrupted, unless it crashes.


CGTalk Moderation
10 October 2009, 07:29 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.