View Full Version : Segment Memory Question

kangonto
10 October 2004, 10:15 AM
Since I recently had problems with hi-res renders, I did a simple calculation:

Render size 720 x 576

It gives 720 x 576 pixels = 414720 pixels

Each pixel has 128 bits, so 414720 x 128 = 53084160 bits

53084160 bits are 6635520 bytes (dividing by 8)

6635520 bytes are 6480 KBytes (dividing by 1024)

6480 KBytes are 6.3 MBytes (dividing by 1024)

So if the memory needed to store a frame at 720 x 576 is 6.3 MBytes, why does LightWave need at least 17 MBytes to render in 1 segment?

I remember using LW on my old Amiga with only 16 MBytes of RAM (without virtual memory). I can't remember how LW 4 managed segment memory, but I suppose it rendered in plain 32 bits (about 1.5 MB to store a frame at 720 x 576). It would be nice if we could adjust the way LW manages this; it could save a lot of memory when rendering at huge resolutions. I suppose it's a stupid idea due to technical issues, but I'm curious about it.
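The arithmetic above is easy to check in a few lines. The 128 bits/pixel figure is the poster's assumption of four 32-bit float channels (RGBA); this is just the same calculation, not a measurement of what LW actually allocates:

```python
def frame_memory_bytes(width, height, bits_per_pixel):
    """Bytes needed to hold one uncompressed frame buffer."""
    return width * height * bits_per_pixel // 8

# 720 x 576 at 128 bits/pixel (RGBA as four 32-bit floats)
size = frame_memory_bytes(720, 576, 128)
print(size, "bytes =", round(size / 1024 / 1024, 2), "MB")  # 6635520 bytes = 6.33 MB
```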
gerardo
10 October 2004, 06:55 AM
Yeah, my old Amiga could do that too! :D I don't remember where I read it (I think it was the manual), but although LW calculates dynamic ranges of 128 bits, these are expandable up to 320. If that's right, LW may be calculating more than 128 bits per pixel, in which case the calculation would be the following:

Render size 720 x 576

It gives 720 x 576 pixels = 414720 pixels

Each pixel would have 320 bits, so 414720 x 320 = 132710400 bits

132710400 bits are 16588800 bytes (dividing by 8)

16588800 bytes are 16200 KBytes (dividing by 1024)

16200 KBytes are 15,8 MBytes (dividing by 1024)

Almost what you say :shrug:

Gerardo

NanoGator
10 October 2004, 08:08 AM
LW has a few buffers to play with when it renders.

gerardo
10 October 2004, 08:32 AM
LW has a few buffers to play with when it renders.

Perhaps that explains it better (or complete the equation) :p

Gerardo

dmaas
10 October 2004, 08:19 PM
LW needs more than just the RGBA data for each pixel. Some extra memory is probably required for the multi-pass AA algorithm.

Thank goodness memory is so cheap that multi-segment renders are a thing of the past :)
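dmaas's point can be made concrete with a back-of-the-envelope sketch. The buffer list below is purely hypothetical (the thread doesn't document LW's actual internals), but it shows how a few extra per-pixel buffers push 128 bits/pixel up toward the ~320 bits gerardo mentions:

```python
# Hypothetical set of per-pixel buffers a renderer might keep alive
# during multi-pass AA; the exact set LW uses is an assumption here.
buffer_bits = {
    "rgba_float": 128,      # 4 channels x 32-bit float
    "aa_accumulator": 128,  # running average across AA passes
    "depth": 32,
    "coverage_or_id": 32,   # e.g. alpha coverage / object id
}

width, height = 720, 576
bits_per_pixel = sum(buffer_bits.values())
mb = width * height * bits_per_pixel / 8 / 1024 / 1024
print(f"{bits_per_pixel} bits/pixel -> {mb:.1f} MB")  # 320 bits/pixel -> 15.8 MB
```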

kangonto
10 October 2004, 08:36 PM
Thank goodness memory is so cheap that multi-segment renders are a thing of the past :)
Try rendering a scene with 2,000,000 polys at 7000 x 5000. :banghead:
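The per-frame formula from the first post shows why resolution alone brings segments back: at 7000 x 5000, even a single 128-bit RGBA buffer is enormous (this is straight arithmetic, not measured LW usage):

```python
# One 7000 x 5000 frame buffer at 128 bits (16 bytes) per pixel
width, height = 7000, 5000
rgba_mb = width * height * 128 / 8 / 1024 / 1024
print(f"{rgba_mb:.0f} MB")  # ~534 MB for the RGBA float buffer alone
```

Any additional per-pixel buffers multiply that figure, which is exactly the situation where a 32-bit address space runs out.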

Srek
10 October 2004, 08:56 PM
You can't compare rendering with simply storing a bitmap. A lot of information is created during rendering that far exceeds the size of the final bitmap. It's like comparing the effort of storing a book with the effort of writing it.
Cheers
Srek

kangonto
10 October 2004, 10:08 PM
You can't compare rendering with simply storing a bitmap. A lot of information is created during rendering that far exceeds the size of the final bitmap. It's like comparing the effort of storing a book with the effort of writing it.
Cheers
Srek
Nice comparison! :)

Yes, that's fairly intuitive, but... I brought up the example of my old Amiga for a reason. An Amiga with 16 MB rendered a frame at 720x576 (1 segment) without any problem. Modern LW takes more than 16 MB to render that same frame.

And yes, it's also obvious that we are in 2004 and LW is much more powerful than 8 or 9 years ago. I'm simply floating the idea of trading away part of that power in some specific situations, like the one I'm suffering through right now (very high resolution renders combined with heavy scenes and LW's memory limitations). Would it be too difficult to reduce the number of bits used to store the frame (in memory, not in the final file) on user demand? I mean: 'I don't need the alpha channel, or the depth channel, or whatever, so I disable them and save the memory needed to render this f***** huge render'.

At least I see light at the end of the tunnel: 64-bit technology is here and LW is going to support it. No memory limitations at last! :bounce:
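kangonto's suggestion (let the user switch off channels they don't need before rendering) can be sketched like this; the channel names and bit depths are illustrative, not LW's real options:

```python
# Illustrative per-channel bit depths, not LW's actual buffer layout
CHANNEL_BITS = {"rgb": 96, "alpha": 32, "depth": 32, "motion": 64}

def frame_mb(width, height, enabled):
    """Frame-buffer memory if only the 'enabled' channels are stored."""
    bits = sum(CHANNEL_BITS[c] for c in enabled)
    return width * height * bits / 8 / 1024 / 1024

full = frame_mb(7000, 5000, CHANNEL_BITS)  # everything on
lean = frame_mb(7000, 5000, ["rgb"])       # colour only
print(f"full: {full:.0f} MB, rgb-only: {lean:.0f} MB")
```

With these made-up numbers, dropping everything but colour cuts the frame buffer by more than half, which is the kind of saving the post is asking for.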

Srek
10 October 2004, 07:33 AM
During rendering you can't lose this information without breaking the render. There are ways to compress bitmaps in RAM, but I'm not too sure it's worth the effort, except when it comes to really huge multilayer textures. A 64-bit address space will bring much more available memory, but at the same time it brings bigger overhead. Also, CPU <-> RAM bus speed trails way behind CPU speed, so accessing more RAM will take more time, slowing down rendering. In the end I think 64 bit will bring more freedom to programmers and users, but it is no cure-all.
Cheers
Srek

kangonto
10 October 2004, 01:40 PM
I mean before rendering, not during rendering.

kangonto
10 October 2004, 01:51 PM
And by the way, do you mean 64-bit computing will be slower than 32-bit? Is 32-bit computing slower than 16-bit? That's a strange way of thinking.

Do you mean an ultra-modern truck is slower than a wheelbarrow because it has to carry a much bigger load?

I like comparisons too :).

Srek
10 October 2004, 02:29 PM
When addressing memory with 64 bits instead of 32, there is simply twice as much address data to process. Think of it as longer street names that take longer to read because they store more information.
This is a bit simplified, but the outcome is that at the same clock speed and with the same general architecture, a 64-bit system will be a bit slower than a 32-bit system.
On the other hand, the much larger addressable memory space allows algorithms and solutions that were not usable previously, which might not only make up for the small loss but reverse it. However, this depends heavily on the problem at hand.
It's too early to start speculating here; it will take some time until developers learn how to make the best use of 64-bit systems. In the meantime, simply having more RAM is in itself a huge advantage when it comes to large scenes or render sizes.

Cheers
Srek
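Srek's "longer street names" are just wider pointers: every address in a 64-bit process takes 8 bytes instead of 4, so pointer-heavy data structures grow even before any extra pixels are stored. A quick check on whatever interpreter runs this (the node count is arbitrary):

```python
import struct

ptr_bytes = struct.calcsize("P")  # native pointer size: 4 on 32-bit, 8 on 64-bit
nodes = 1_000_000  # arbitrary linked structure, one pointer per node
link_mb = nodes * ptr_bytes / 1024 / 1024
print(f"{ptr_bytes}-byte pointers: {link_mb:.1f} MB just for the links")
```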

