View Full Version : another mental ray memory woes thread

04 April 2007, 12:13 PM
ok we all know mental ray crashes on very large scenes with lots of nodes - but i've got one scene i just can't get to render: always 'out of memory' errors when batch rendering (from cmd line). i have 6.5 on XP, 2.2GHz X2 w/ 4GB ram, 3GB switch set. i've searched the archives and tried everything... increasing the global memory limit up to 2400 (MR batch never actually gets that high but still crashes), and setting it low to 300 and enabling large BSP as someone suggested in the archives. nothing worked; MR still blows past any memory limit i set, which makes me think the global limit does nothing lol.

one thing i've noticed: it never actually gets to "rendering in progress", it always dies while translating, so my guess is the sheer volume of nodes in my scene is killing it (maybe 5,000 or so nodes, pretty standard for architecture).

i've tried setting export tessellated polygons and export objects on demand, and increasing BSP size, lowering depth - still no help... comping is not an option due to the scene's structure and time constraints, so my only remaining option might be to combine objects...? that would be a hassle, but is there anything else anyone can think of...? please don't say use X64, 'cause the scene has been tested on a 4GB X64 box with the same problem.

04 April 2007, 01:25 AM
First of all, to take advantage of 64-bit, you need a 64-bit Maya.

Increasing the memory limit does not help. Think of it as a warning track: it does no good if it's set so high that you hit the wall before ever reaching it. With batch, I'd set it to between 1000-1200, even smaller if there is a lot of Maya structure in memory. With standalone 32-bit mray, I'd set it to 1400. Even with the 3G switch, the application has to be made 'aware' (large-address-aware), since a process is normally limited to 2G on XP. I'm not sure if Maya 6.5 is.
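The arithmetic behind those suggested limits can be sketched roughly like this (my reading of the advice above; the overhead figures are guesses for illustration, not measured values):

```python
# Back-of-envelope sketch (assumed numbers, not official figures):
# a 32-bit process on XP gets 2 GB of user address space by default,
# 3 GB with the /3GB switch *only if* the exe is large-address-aware.

ADDRESS_SPACE_MB = 2048   # default 32-bit user address space on XP
MAYA_OVERHEAD_MB = 600    # guess: Maya scene data + translator structures
MISC_OVERHEAD_MB = 250    # guess: DLLs, stacks, heap fragmentation

safe_limit = ADDRESS_SPACE_MB - MAYA_OVERHEAD_MB - MISC_OVERHEAD_MB
print(safe_limit)  # ~1200 MB, in line with the 1000-1200 batch suggestion
```

The point is that the limit has to leave headroom below the hard address-space ceiling, which is why batch (Maya resident in memory) gets a lower number than standalone mray.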

Decreasing to the right limit considering your OS and memory is what helps when used along with features that can flush objects from memory, like Large BSP, objects on-demand, fine approximation, cached textures (tiled .map files if using mib_* nodes, in 6.5 they will not work with maya file nodes).

Exporting tessellated polygons may also be going against you. As mentioned, fine approximation allows for tessellated objects to be flushed and reconstructed on demand.

Combining objects, assuming the results are not tiny, would also go against you, especially if you use objects on demand. Consider each object to be an ice cube in a glass: larger objects (big cubes) may not fit or be managed well in your glass, while lots of very small objects (crushed ice :)) may increase the overhead of flushing them out and in.
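The ice-cube analogy can be demonstrated with a toy LRU cache (purely illustrative; mental ray's real flushing is internal and more sophisticated than this):

```python
from collections import OrderedDict

def render_pass(cache_mb, object_sizes_mb, passes=2):
    """Touch every object `passes` times through an LRU cache; return
    (objects too large to ever cache, total number of (re)loads)."""
    cache = OrderedDict()            # index -> size, in LRU order
    used = 0.0
    too_big = 0
    reloads = 0
    for p in range(passes):
        for i, size in enumerate(object_sizes_mb):
            if size > cache_mb:      # this ice cube never fits the glass
                if p == 0:
                    too_big += 1
                continue
            if i in cache:
                cache.move_to_end(i) # cache hit: mark as recently used
                continue
            reloads += 1             # miss: flush LRU objects, then load
            while used + size > cache_mb:
                _, freed = cache.popitem(last=False)
                used -= freed
            cache[i] = size
            used += size
    return too_big, reloads

# One huge 2000 MB combined mesh can never live in a 1200 MB cache:
assert render_pass(1200, [2000]) == (1, 0)

# 5000 tiny 1 MB objects each fit, but the cache thrashes: every object
# gets flushed and reloaded again on the second pass (10000 loads total).
assert render_pass(1200, [1] * 5000) == (0, 10000)
```

Both extremes lose: the oversized cube can't be managed at all, and the crushed ice spends all its time being swapped in and out.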

04 April 2007, 03:52 AM
If you're running short on time, you could always try a workaround script that renders sequences through Maya itself.

I remember seeing a much more recent thread with a more recent script, but the script in the linked thread is essentially the same, I think.

05 May 2007, 01:37 AM
Decreasing to the right limit considering your OS and memory is what helps when used along with features that can flush objects from memory, like Large BSP, objects on-demand, fine approximation

thanks for the replies and sorry it took me so long to get back - the boss has finally caved and bought 8.5, so it should get easier once that's installed :) another question that's sorta related... i'm having trouble keying visibility. even experimenting with the global options for 'optimize animated visibility', 'non-animated display visibility' etc. doesn't work 100% of the time, probably because the grouping structure and driven channels in my scenes can get quite complex. as a fix, it's just as easy for me to key objects' translation out of the scene by 100000 or some huge number from one frame to the next (hehe, luckily we don't have to render motion blur in maya). but i was wondering: do objects outside the clipping planes still have an impact on memory or scene translation...? (which would suck). i've heard of people keying scale down to 0%, but if the objects are still in-frame, wouldn't they use memory in some form?

05 May 2007, 11:47 PM
Under Render Settings > Translation > Performance, note:

Export Objects on Demand

This will make it so that objects are only loaded if a ray hits their bounding box.

If objects are outside the viewing area, they will not get loaded unless some reflection or FG ray hits them.
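A toy sketch of that placeholder idea (my illustration, not mental ray internals): only a bounding box is kept per object, and the full geometry is pulled in the first time a ray actually hits that box.

```python
def ray_hits_box(bbox, origin, direction):
    """Standard slab test: does the ray hit the axis-aligned box?"""
    lo, hi = bbox
    tmin, tmax = 0.0, float("inf")
    for a in range(3):
        if abs(direction[a]) < 1e-12:          # ray parallel to this slab
            if not (lo[a] <= origin[a] <= hi[a]):
                return False
        else:
            t1 = (lo[a] - origin[a]) / direction[a]
            t2 = (hi[a] - origin[a]) / direction[a]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

class OnDemandObject:
    loads = 0                                  # objects actually loaded

    def __init__(self, name, bbox):
        self.name = name
        self.bbox = bbox                       # only the box is in memory
        self.geometry = None                   # placeholder, nothing loaded

    def intersect(self, origin, direction):
        if not ray_hits_box(self.bbox, origin, direction):
            return False                       # never hit -> never loaded
        if self.geometry is None:              # first hit: load from "disk"
            OnDemandObject.loads += 1
            self.geometry = "mesh:" + self.name  # stand-in for real data
        return True

# One object in view, one keyed 100000 units away (as in the post above):
near = OnDemandObject("chair", ((-1, -1, -1), (1, 1, 1)))
far = OnDemandObject("chair_off", ((99999, -1, -1), (100001, 1, 1)))
for obj in (near, far):
    obj.intersect((0, 0, -5), (0, 0, 1))       # a camera ray along +Z

assert OnDemandObject.loads == 1               # only the visible one loaded
```

This also speaks to the earlier question about keying objects out of frame: with objects on demand, geometry that no ray ever reaches stays unloaded, though reflection or FG rays can still pull it in.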

05 May 2007, 12:18 AM
hi, i had a tough scene and rendered everything out by applying all of the above suggestions and setting the memory limit to 0, in GUI mode or from the cmd line. give it a try

05 May 2007, 12:36 AM
another suggestion: if you can, DON'T use scanline - it keeps much more memory in RAM. i don't know if it's a bug or simply a scanline limitation

some test:

max 9 mray 3.5

13,612 teapots x 25,600 polys each = 348,467,200 polygons!
raytracing: 29s
FGmb (1 bounce) + AO + raytracing: 2m 20s

this scene is absolutely impossible to render with scanline on
i'm using raytracing with grid acceleration and all other memory optimizations

maya 8.5 mray 3.5

270 ajax x 544,566 polys = 147,032,820 polygons
FG + AO + raytracing

(my pc specs: Quad FX 2.6GHz (x4), 2GB RAM)

06 June 2007, 11:13 PM
Is that instances, or duplicates?

Because yeah, i've had MR run out of memory before tessellation even occurs, just from the sheer number of objects to catalogue.

It happens quicker than you would think.

06 June 2007, 12:30 AM
Just a question about 'export objects on demand': though I've read the lamrug notes about it, I can't understand how to set the threshold. I read about placeholders and other stuff, but I can't get it! Can someone please explain this to me?

CGTalk Moderation
06 June 2007, 12:30 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.