3.000.000 Polys - How much RAM needed ??


maxwater
11-26-2005, 12:24 PM
Hi everyone.

I need to render a scene with about 3,000,000 polygons at an output size of about 7000x3500 pixels. (I am planning to use AO instead of FG, and tile rendering if needed.)

It seems it won't even start with 1GB (out of memory message).

Would 2GB of RAM be enough?

Has anyone had (and solved) the same problem?
Thanks in advance...


greets,
maxwater

LehaS
11-26-2005, 02:28 PM
IMHO, since you are going to use just AO (I assume you are going to render a separate AO pass), you will be rendering only an occlusion pass, without actual lights and shadows - only the AO shader.
This means you can take real advantage of not rendering all your 3 million polygons at once. For instance, you could split it into 3 x 1 million by setting your AO shader's distance parameter (at least for the built-in mib_amb_occlusion) so that geometry is occluded by nearby objects only; far-away or hidden objects are not taken into account (the shader just interpolates to your "bright" color value). Geometry you hide this way won't be cached by MR during rendering, so the probability of MR crashing is minimized.
Depending on the scene this can be hard to accomplish, but rendering your scene as a separate AO pass will still speed things up a lot.
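
A minimal MEL sketch of that distance trick, assuming a mib_amb_occlusion node named mib_amb_occlusion1 (the node name is hypothetical; max_distance is the shader's occlusion falloff parameter):

// Limit occlusion to nearby geometry so distant/hidden objects stop
// mattering; beyond this distance (in scene units) the shader simply
// returns the "bright" color.
setAttr "mib_amb_occlusion1.max_distance" 10;
// More samples give smoother AO at the cost of render time.
setAttr "mib_amb_occlusion1.samples" 64;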

DJ_
11-26-2005, 02:39 PM
I have rendered scenes with 3 million polys on my machine (with only 1GB RAM) without any major problems...

What renderer are you using?

If you are using mental ray, try checking your "memory limit" first; it may be set lower than it should be (with 1GB RAM, about 650MB is recommended). You may also use diagnostics to find the best BSP depth settings (in the acceleration method). If your 3 million polygons are quite evenly distributed over the scene, try "grid" as the acceleration method; otherwise you can use "large BSP" - it takes about 20% longer, but renders any scene.

NOTE: mental images recommends 2GB RAM (or more).

floze
11-26-2005, 03:22 PM
The amount of physical RAM is less important than the fact that your 32-bit OS can't handle more than 4GB of total memory. If you were on 64-bit you could technically handle 8GB of memory.

However, the fact that your render is crashing indicates that you're on Windows - Linux, for instance, seems to be a lot more stable when it comes to heavy scenes. There are a couple of things you can do though; some time ago there was another thread about that:

http://forums.cgsociety.org/showthread.php?t=278636

Try using 'Export Tessellated Geometry' from the Translation > Performance tab in the render globals - this helps save memory. You might also apply this option per object if it causes trouble for some elements, using the boolean 'miTriangles' attribute on your shape nodes.

Try using .map format textures - they don't eat up any memory, as they are memory mapped.

Try using large BSP as DJ_ stated - the large BSP can flush its cache if the memory limit is reached, avoiding crashes.

Try using 'Load Objects On Demand' - this helps avoid crashes at translation time (imo the most critical part when dealing with large scenes).

Try using subd approximations instead of the meshsmooth - this also helps stabilize translation, handing memory over to render time, where the cache can be flushed.

Apply approximation nodes to all NURBS blends if there are any - they sometimes blow up the triangle count for no reason.

Many of these points are explained in detail on this page:
http://www.jupiter-jazz.com/wordpress/wp-content/data/tr4kv2/html/chapter4-MEM.html

gl and hf...
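
A rough MEL sketch of two of those tips - the per-object miTriangles attribute and the .map texture conversion (the shape name and texture paths are made up for illustration; imf_copy is the converter that ships with mental ray, and -p builds a filtered pyramid):

// Per-object 'export tessellated geometry': add the boolean miTriangles
// attribute to a (hypothetical) shape node and enable it.
string $shape = "heavyMeshShape";
if (!attributeExists("miTriangles", $shape))
    addAttr -longName "miTriangles" -attributeType bool $shape;
setAttr ($shape + ".miTriangles") 1;

// Convert a texture to the memory-mapped .map format; assumes imf_copy
// is on the PATH.
system("imf_copy -p sourceTexture.tif sourceTexture.map");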

Jackdeth
11-26-2005, 05:29 PM
It's only about 2.7GB for a 32-bit app, but it's nice having some extra RAM for filesystem caching and the OS. If you are doing heavy raytracing, displacement maps, and motion blur, then you might not be able to render that scene. Break it up into passes, and try to avoid raytracing on most of them.

maxwater
11-26-2005, 10:49 PM
THX leha_sokol, floze, DJ_, and Jackdeth!!!

I will work through your suggestions and let you know if it works for me.

Special thanks to floze for his great participation in the forums - always great, detailed answers (just read your physical light thingy).

Where can I find that MR memory limit, or is it only available for MR standalone?
(I am working with Maya 7 / MR)

greets,
maxwater

dagon1978
11-27-2005, 12:05 AM
Where can I find that MR memory limit, or is it only available for MR standalone?
(I am working with Maya 7 / MR)

render settings > mental ray > memory and performance > memory limits

This is a test with the mr approx (instead of meshsmooth):
Athlon XP 2500+ @ 2400, 1GB RAM
10,000,000 polygons at 4000x3000, with raytracing, final gather and IBL :buttrock:
http://img333.imageshack.us/img333/9255/dragone2oy.jpg

RC 0.0 info : wallclock 0:26:05.41 for rendering
RC 0.0 info : allocated 371 MB, max resident 676 MB
GAPM 0.0 info : triangle count excluding retessellation : 10712840
GAPM 0.0 info : triangle count including retessellation : 10712840
DB 0.0 info : disk swapping: 0 objects (0 KB) swapped out in 0.00 s,
DB 0.0 info : 0 objects (0 KB) swapped in in 0.00 s, max swap space used 0 KB, 0 KB failed
LINK 0.0 info : mrLibrary: No memory leaks detected.
MSG 0.0 info : wallclock 0:26:15.19 total
MSG 0.0 info : allocated 2 MB, max resident 676 MB
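
For anyone scripting the same swap, a hedged MEL sketch (the mentalraySubdivApprox node type is part of Maya's mental ray integration; the attribute names below are assumptions, and attaching the node to a shape is easiest via Window > Rendering Editors > mental ray > Approximation Editor):

// Create a subdivision approximation so smoothing happens at render
// time instead of via a heavy meshsmooth in the scene.
string $approx = `createNode mentalraySubdivApprox -n "subdivApprox1"`;
// Assumed attribute names for the min/max subdivision levels.
setAttr ($approx + ".minSubdivisions") 0;
setAttr ($approx + ".maxSubdivisions") 2;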

Arcon
11-27-2005, 05:39 AM
The amount of physical RAM is less important than the fact that your 32-bit OS can't handle more than 4GB of total memory.


floze has a good point about memory limitations on 32-bit Windows, but in practice the limit isn't 4GB for Maya, as applications have shared limits within the 4GB of RAM allowed.

More importantly, I've noticed that basically whenever a Maya thread gets over around 1.3GB of usage (GUI or batch), it will crash immediately.

I think Maya is getting a bit behind the times, to be honest; XSI is 64-bit already, and I don't think that's going to happen with Maya in the next release.

Komarcic
11-27-2005, 11:14 AM
DJ_: why is it recommended to use only 650MB if you have 1GB of memory? I usually set this to about 1000MB in MR.

floze
11-27-2005, 04:42 PM
floze has a good point about memory limitations on 32-bit Windows, but in practice the limit isn't 4GB for Maya, as applications have shared limits within the 4GB of RAM allowed.

More importantly, I've noticed that basically whenever a Maya thread gets over around 1.3GB of usage (GUI or batch), it will crash immediately.

I think Maya is getting a bit behind the times, to be honest; XSI is 64-bit already, and I don't think that's going to happen with Maya in the next release.
Yeah, you're right. I just felt a bit stupid repeating myself all over; that's why I linked to the other thread where I posted that (http://forums.cgsociety.org/showpost.php?p=2659376&postcount=2). Here's a nice explanation of virtual memory in Windows, for anyone nerdy enough:
http://aumha.org/win5/a/xpvm.php

I've seen Maya crash at ~2GB for the maya(batch) process alone, with the /3GB switch enabled... this means there's actually quite some space to fill. Putting the memory limit at around 1GB should be OK in most situations though, since the limit does not mean that mr will never cross it - it only marks the point where it starts trying to flush its caches. Imo the stability of mr for Maya has become quite reliable since 6/6.5 compared to older implementations, especially if you apply the stuff I mentioned above. But btw, I forgot the most important thing:

DO A BATCH RENDER! :twisted:

DJ_
11-27-2005, 05:28 PM
DJ_: why is it recommended to use only 650MB if you have 1GB of memory? I usually set this to about 1000MB in MR.

650MB is actually a little bit too high if you only have 1GB of RAM, but it works fine for me. The rule of thumb is to set the limit to (your RAM in MB) minus 400 to 500 - so roughly 500-600MB with 1GB of RAM, or around 1500MB with 2GB.

As floze said: it is not THAT important, but it is one thing to test while debugging. If your scene doesn't render, the limit may be set too low; if Maya or MR just crashes, it may be set too high.
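
As a worked version of that rule of thumb (plain MEL arithmetic; the 450 is just the midpoint of DJ_'s 400-500 range):

// Rule-of-thumb memory limit from DJ_'s post: RAM in MB minus ~450.
int $ramMB = 2048;            // physical RAM (example value)
int $memLimit = $ramMB - 450; // leaves headroom for the OS and Maya itself
print ("suggested mr memory limit: " + $memLimit + " MB\n");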

maxwater
12-04-2005, 12:47 PM
Hi everyone... thanks again for your replies!!!

Now I have 2GB of RAM and am trying to render it...
In Maya with batch rendering, it basically works fine up to a resolution of 4K.

Command-line rendering gives me strange results; it looks like it is rendering a different layer.

The fact is, I have only one render layer in my scene (called "layer3").

When I use the command-line renderer, I tell it: render -l layer3 filename.mb

But it gives me a different result than the in-Maya batch rendering.

http://mitglied.lycos.de/xpsoldiers/bilder/renderprob.jpg

I'd like to use the command-line renderer to free some memory.

Is there maybe a small MEL script which gives me all the parameters used by the in-Maya batch render command? Resulting in a command-line-ready "render -x 1234 -y 1234 -l layer -etc filename.mb".

Any hints?

Would be very nice.

thx,
maxwater
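
There's no stock command for that, but a rough MEL sketch along these lines could get close (defaultRenderGlobals and defaultResolution are standard Maya nodes; extend the string with whatever flags you need and double-check the spelling against render -help):

// Print an approximate command line built from the current render globals.
int $w = `getAttr defaultResolution.width`;
int $h = `getAttr defaultResolution.height`;
// Note: currentRenderer returns e.g. "mentalRay", while the command line
// spells it "mr", so adjust the value before pasting.
string $renderer = `getAttr defaultRenderGlobals.currentRenderer`;
string $scene = `file -q -sceneName`;
print ("render -r " + $renderer + " -x " + $w + " -y " + $h + " " + $scene + "\n");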

Hamburger
12-04-2005, 12:59 PM
You need to use the -r mr switch to turn on mental ray for command-line rendering.

maxwater
12-04-2005, 01:19 PM
Thanks Hamburger,

"render -r mr" solved my problem so far...

Thanks!

maxwater
12-04-2005, 02:54 PM
Now that command-line rendering works fine for smaller resolutions (without the -r mr flag it rendered out black/gray), I tried rendering the 3-million-poly scene at 8000x6000 (MR memory limit 1000MB).

This happens:



.......
JOB 0.9 progr: 99.9% rendered on studiomachine.9
JOB 0.11 progr: 99.9% rendered on studiomachine.11
JOB 0.5 progr: 99.9% rendered on studiomachine.5
JOB 0.9 progr: 99.9% rendered on studiomachine.9
JOB 0.4 progr: 100.0% rendered on studiomachine.4
RC 0.0 info : rendering statistics
RC 0.0 info : type number per eye ray
RC 0.0 info : eye rays 54025126 1.00
RC 0.0 info : transparent rays 2880964 0.05
RC 0.0 info : reflection rays 18861457 0.35
RCI 0.0 info : main bsp tree statistics:
RCI 0.0 info : max depth : 40
RCI 0.0 info : max leaf size : 326
RCI 0.0 info : average depth : 35
RCI 0.0 info : average leaf size : 15
RCI 0.0 info : leafnodes : 107875
RCI 0.0 info : bsp size (Kb) : 7305
PHEN 0.0 progr: calling output shaders
PHEN 0.0 progr: maya_shaderglow(): Computing glow...
MEM 0.0 info : allocation of 313590544 bytes in C:\engserv\rbuild\101\client\Maya\src\MentalRay\mayabase\src\mayaglow.c line 900 failed: flushing
MEM 0.0 info : try '-memory 520' for future runs
MEM 0.0 progr: scene cache flushed 12 MB in 0.00s, now: 508 MB
MEM 0.0 info : allocation of 313590544 bytes in C:\engserv\rbuild\101\client\Maya\src\MentalRay\mayabase\src\mayaglow.c line 900 failed: flushing
MEM 0.0 info : try '-memory 507' for future runs
.....................................
mental ray: out of memory
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 info : cleaning up memory mapped frame buffers



Now, lowering the MR memory to 507 (MEM 0.0 info : try '-memory 507' for future runs) gives me this result (rendering goes to 100% as before):


...............
PHEN 0.0 progr: calling output shaders
PHEN 0.0 progr: maya_shaderglow(): Computing glow...
MEM 0.0 info : allocation of 313590544 bytes in C:\engserv\rbuild\101\client\Maya\src\MentalRay\mayabase\src\mayaglow.c line 900 failed: flushing
MEM 0.0 info : try '-memory 479' for future runs
MEM 0.0 progr: scene cache flushed 10 MB in 0.00s, now: 470 MB
MEM 0.0 info : allocation of 313590544 bytes in C:\engserv\rbuild\101\client\Maya\src\MentalRay\mayabase\src\mayaglow.c line 900 failed: flushing
MEM 0.0 info : try '-memory 469' for future runs
.......
mental ray: out of memory
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 fatal 031008: can't allocate 313590544 bytes.
MEM 0.0 info : cleaning up memory mapped frame buffers


Any idea what I could do to solve this problem?


thx in advance,
maxwater

DJ_
12-04-2005, 05:47 PM
Your problem seems to be in the "mayaglow" postprocess. Have you tried rendering your image without it? (You can always use Photoshop to add glow.)

maxwater
12-04-2005, 06:41 PM
There is no glow in the scene. For testing purposes I also deleted all the shaders; atm I am only using the initial lambert shader on all objects.

But it helped to disable the "Output Shader" in the "miDefaultOptions - Features":

http://mitglied.lycos.de/xpsoldiers/bilder/output.jpg

I tried that because of this:


PHEN 0.0 progr: calling output shaders
PHEN 0.0 progr: maya_shaderglow(): Computing glow...
MEM 0.0 info : allocation of 313590544 bytes in C:\engserv\rbuild\101\client\Maya\src\MentalRay\mayabase\src\mayaglow.c line 900 failed: flushing
MEM 0.0 info : try '-memory 520' for future runs


Now my scene renders just fine... (at least with the lambert shader)

I'll let you know if it still works with different shaders, FG, higher settings etc...

Thanks for your help - hope this will help some others, too.

greets,
maxwater
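
For scripted/batch setups, the same "Output Shader" feature toggle should be settable via MEL; the attribute name below is an assumption, so confirm it with listAttr miDefaultOptions first:

// Turn off mental ray output shaders (the stage that ran maya_shaderglow).
// 'outputShaders' on miDefaultOptions is an assumed attribute name.
setAttr "miDefaultOptions.outputShaders" 0;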

sacslacker
12-05-2005, 03:50 AM
I think Maya is getting a bit behind the times, to be honest; XSI is 64-bit already, and I don't think that's going to happen with Maya in the next release.

Maybe so, but if you've used the 64-bit OSes, no doubt you've realized that they are pieces of shiat. Maybe that's just my opinion; at least in my case, they are quite unstable.

I figure Maya will go 64-bit just soon enough for the OS guys to have gotten it right.

Aneks
12-05-2005, 05:26 AM
There is already a 64-bit version of Maya in beta. Whether it actually uses memory and other 64-bit features efficiently is another question.

maxwater
12-05-2005, 01:06 PM
OK, now it renders out up to a resolution of about 7000x5000.
(8000x6000 gives an error after 100% rendering: cannot write .tga file.)


JOB 0.3 progr: 99.9% rendered on studiomachine.3
JOB 0.2 progr: 99.9% rendered on studiomachine.2
JOB 0.8 progr: 100.0% rendered on studiomachine.8
RC 0.0 info : rendering statistics
RC 0.0 info : type number per eye ray
RC 0.0 info : eye rays 36274569 1.00
PHEN 0.0 progr: calling output shaders
PHEN 0.0 progr: writing image file C:/Maya_project/m7/render002.tga (frame 1)
RC 0.0 progr: rendering finished
RC 0.0 info : wallclock 0:10:22.57 for rendering
RC 0.0 info : allocated 259 MB, max resident 364 MB
GAPM 0.0 info : triangle count excluding retessellation : 0
GAPM 0.0 info : triangle count including retessellation : 0
DB 0.0 info : disk swapping: 0 objects (0 KB) swapped out in 0.00 s,
DB 0.0 info : 0 objects (0 KB) swapped in in 0.00 s, max swap space used 0 KB, 0 KB failed
MSG 0.0 info : wallclock 0:11:50.51 total
MSG 0.0 info : allocated 8 MB, max resident 364 MB

Problem is:

http://mitglied.lycos.de/xpsoldiers/bilder/prob002.jpg

I mean, it says rendering is 100% done, but there are still many areas which are obviously not rendered.

Any hints how to solve this? Could "wrong" BSP settings cause such problems, or is it something else?

Help would be great! Thanks in advance.


greets,
maxwater

maxwater
12-05-2005, 02:24 PM
OK, now I changed the MR physical memory limit from 650MB to 1300MB and use Large BSP with the standard settings. It seems to work fine now.

I'll let you know if I encounter more problems. Right now I am rendering my 3-million-polygon scene with FG at a resolution of 7016x4961 (DIN A2 at 300 dpi).

greets,
maxwater

DJ_
12-05-2005, 02:27 PM
JOB 0.9 progr: 99.9% rendered on studiomachine.9
JOB 0.11 progr: 99.9% rendered on studiomachine.11
JOB 0.5 progr: 99.9% rendered on studiomachine.5
JOB 0.9 progr: 99.9% rendered on studiomachine.9
JOB 0.4 progr: 100.0% rendered on studiomachine.4

Are you using satellite? I find it very buggy, especially when batch rendering. That could explain why your image is half black: if your satellites are not as fast as your main machine, you may see 100% done on your machine while it is still waiting for the remaining tiles (the 78.8% and so on) from the other satellites.

maxwater
12-05-2005, 06:39 PM
Hi DJ,

No, I don't use satellites - only batch rendering on a single-processor machine (with 2GB of RAM).
Atm I am trying to optimize the BSP tree, because with FG on and a resolution of 6000x3000 it fails again.

Any suggestions for good BSP settings for a 3-million-polygon scene (FG on)?
And what is the "task size" in the MR options responsible for?

greets,
maxwater

maxwater
12-06-2005, 05:29 AM
It's driving me nuts. Now it renders to 100% again, taking 2.17 hours, and then it tells me "frame buffer 0 invalid" (I used a 3x8 RGB framebuffer).

command: render -x 7000 -y 5000 -r mr render004.mb
max MR memory: 1500
BSP size: 30
BSP depth: 20


JOB 0.8 progr: 99.9% rendered on studiomachine.8
JOB 0.0 progr: 99.9% rendered on studiomachine.0
JOB 0.14 progr: 99.9% rendered on studiomachine.14
JOB 0.11 progr: 99.9% rendered on studiomachine.11
JOB 0.6 progr: 100.0% rendered on studiomachine.6
RC 0.0 info : rendering statistics
RC 0.0 info : type number per eye ray
RC 0.0 info : eye rays 39301722 1.00
RC 0.0 info : finalgather rays 209580 0.01
RC 0.0 info : fg points computed 2105 0.00
RC 0.0 info : fg points interpolated 34567719 0.88
RC 0.0 info : on average 56.86 finalgather points used per interpolation
RCI 0.0 info : main bsp tree statistics:
RCI 0.0 info : max depth : 20
RCI 0.0 info : max leaf size : 42231
RCI 0.0 info : average depth : 19
RCI 0.0 info : average leaf size : 1128
RCI 0.0 info : leafnodes : 3379
RCI 0.0 info : bsp size (Kb) : 8606
PHEN 0.0 progr: calling output shaders
PHEN 0.0 progr: writing image file C:/Maya_project/m7/render004.tga (frame 1)
PHEN 0.0 error 051003: frame buffer 0 invalid, cannot create file C:/Maya_project/m7/render004.tga
RC 0.0 progr: rendering finished
RC 0.0 info : wallclock 2:16:22.45 for rendering
RC 0.0 info : allocated 417 MB, max resident 492 MB
GAPM 0.0 info : triangle count excluding retessellation : 0
GAPM 0.0 info : triangle count including retessellation : 0
DB 0.0 info : disk swapping: 0 objects (0 KB) swapped out in 0.00 s,
DB 0.0 info : 0 objects (0 KB) swapped in in 0.00 s, max swap space used 0 KB, 0 KB failed
MSG 0.0 info : wallclock 2:17:51.19 total
MSG 0.0 info : allocated 11 MB, max resident 492 MB


What could I do to solve this?


thx,
maxwater

DJ_
12-07-2005, 11:03 AM
BSP size : 30
BSP depth : 20
RCI 0.0 info : main bsp tree statistics:
RCI 0.0 info : max depth : 20
RCI 0.0 info : max leaf size : 42231
RCI 0.0 info : average depth : 19
RCI 0.0 info : average leaf size : 1128

I think you should lower the size and increase the depth (closer to the defaults).

Take a look at the links floze posted in a thread I started a while ago...
http://forums.cgsociety.org/showthread.php?t=282669

Take a look here too... you may find some tips...
http://www.jupiter-jazz.com/wordpress/wp-content/data/tr4kv2/html/chapter4-MEM.html
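
If you'd rather script the BSP experiments than click through the globals each time, a hedged MEL sketch (bspSize/bspDepth are the attribute names on miDefaultOptions as I recall them from Maya 7, and the diagnostics value is an assumption - verify with listAttr miDefaultOptions):

// Move back toward mental ray's defaults (leaf size 10, max depth 40)
// instead of the size 30 / depth 20 tried above.
setAttr "miDefaultOptions.bspSize" 10;
setAttr "miDefaultOptions.bspDepth" 40;
// Optionally overlay BSP diagnostics on the render to judge the tree
// (assumed enum: 1 = visualize depth).
setAttr "miDefaultOptions.diagnoseBsp" 1;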

Steve McRae
12-07-2005, 11:48 AM
Something that helps as well with these large images is to lower the frame buffer data type (from 16-bit back to 8-bit, if you have raised it).
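
In MEL this should correspond to the data type on the default framebuffer node; the node name, attribute, and enum value below are assumptions to check against your Maya version:

// Drop the primary frame buffer to 8-bit, shrinking what the output
// stage has to hold in memory; 'datatype' and the value 2 are assumed.
setAttr "miDefaultFramebuffer.datatype" 2;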

CGTalk Moderation
12-07-2005, 11:48 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.