View Full Version : Idea to speed up GI renders
12-19-2005, 05:27 PM
I originally posted this in the hardware/technology forum and got no response. I wish I had posted it here instead, so here goes.
Sorry for the slop: I cut and pasted this from my original post and was short on time, but I think the idea still comes across.
I was wondering (and this is WAY beyond my scope of work) if anyone in the CG programming field has thought about implementing any of Stephen Wolfram's work in their designs.
I can imagine a superior statistical sampling (Monte Carlo-ish) method to use for the integration/sampling of data (3D space and matter)... Although I guess it wouldn't even be quasi-Monte Carlo, because it would follow patterns (I haven't gotten that far yet) from this "new math".
Imagine the pattern the calculations follow, instead of being pseudo-random, being similar to a procedural fractal pattern. Applied via Wolfram's "New Kind of Science", the calculations themselves would be simple, just highly recursive.
This could probably enable REALLY quick global illumination renders, using a new method to "fill in" sampling areas/locations by following some of his newly discovered fractal-like patterns. They're called cellular automata, and they supposedly tell us how nature formed itself.
Right now GI uses either an error tolerance or a Monte Carlo method, which uses pseudo-random numbers as starting points for the sampling engine to grow from. My idea would use Wolfram's simple rules, applied highly recursively, to "fill in the grid" instead of the Monte Carlo method. Because this new science works through very simple rules over millions of cycles, instead of heavy vector calculus, I think it could speed up GI renders quite a bit.
Anyone want to help me try to get some stats? Wolfram has a program on his website to explore this further. I'm just an idealist with very light programming skills who hasn't programmed in ages, although I used to write a little assembly on my Apple //e... LOL!
peace and merry Christmas!
12-19-2005, 06:26 PM
You don't make it clear exactly what the cellular automata would be used to do. Are you talking about sampling? Or something else?
12-19-2005, 07:24 PM
Imagine that instead of a virtual grid of 3D space being calculated traditionally (mapped through a Monte Carlo method of pseudo-random numbers and vector calculus), it would follow the very simple rules of this new science, applied millions of times, to map that space.
Since Wolfram shows that even basic cells (0s and 1s) with very basic rules exhibit highly complicated behaviour when recursed to a high degree, and since the math is lighter, the 3D sampling should happen at very high speed.
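To make the "0s and 1s with very basic rules" claim concrete, here is a minimal sketch (my own illustration, not code from Wolfram's site) of an elementary cellular automaton, Rule 30: each cell looks only at itself and its two neighbours, yet the evolution from a single seed cell looks chaotic.

```python
# Elementary cellular automaton, Rule 30: one row of 0/1 cells,
# updated synchronously from each cell's 3-cell neighbourhood.
RULE = 30  # the rule number's 8 bits encode the outcome for each neighbourhood

def step(cells):
    """Apply one synchronous update with wrap-around boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right  # neighbourhood as a 3-bit index
        out.append((RULE >> idx) & 1)
    return out

def run(width=31, steps=15):
    """Evolve from a single seed cell; return all rows including the first."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The update is just bit shifts and list indexing, which is the "lighter math" being argued for; whether that maps onto light transport is a separate question the later replies address.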
12-20-2005, 08:00 AM
Do you have a URL for more information?
12-20-2005, 09:11 AM
Cellular automata have been around since at least the 1970s (Conway's Game of Life), and arguably the 1940s (von Neumann), so they're not that new. What they definitely are is incredibly slow as the size or dimensionality is raised.
Mapping light transport onto a 3D grid has been looked at before, I think; it's probably called a light field or something like that.
To get good results, you'd need your "light field" to be at least at the resolution of the image. So for a standard 1024*1024 Cornell box render, you'd need 1024*1024*1024 cells, or just over 1 billion. You'd therefore need roughly 10 gigabytes of memory just to hold a colour at each cell. In fact, for a cellular automaton it's worse still, because you need two copies of the grid (before and after each update) at any one time.
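The grid-size estimate above checks out with quick arithmetic; the 12 bytes per cell (RGB as three 32-bit floats) is my assumption, which lands slightly above the rough 10 GB figure:

```python
# Back-of-the-envelope check of the light-field grid size claimed above.
resolution = 1024
cells = resolution ** 3           # one cell per voxel of the cubic grid
bytes_per_cell = 3 * 4            # assumed: RGB colour as three 32-bit floats
grid_bytes = cells * bytes_per_cell

print(f"cells:              {cells:,}")                      # just over a billion
print(f"one grid:           {grid_bytes / 2**30:.1f} GiB")
print(f"double-buffered CA: {2 * grid_bytes / 2**30:.1f} GiB")
```

With these assumptions a single grid is 12 GiB, and the double-buffering a synchronous CA needs doubles that, which supports the point that memory alone makes this impractical at image resolution.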
I programmed a CA a while ago for simulating watercolour paint. Even on a 2D grid it got very slow very quickly. Updating something big enough to do accurate light transport would take aeons.
Ultimately, I don't think automata are well suited to the light transport problem. They're at their best when simulating diffusion processes (e.g. the lattice-Boltzmann method for fluid simulation), which light transport is not.
There are, however, some things that are well modelled by a CA; light scattering in a participating medium is one. A guy here did something like that for subsurface scattering on Harry Potter 3: he discretized the surface illumination onto a 3D grid, then did a "blur" on that information to scatter the light. In some ways it's a nice solution: it elegantly handles the boundary between the scattering medium and the air, as well as blocking geometry. The downside is that you're introducing a major source of aliasing (the 3D grid) very early in the pipeline, which is always a bad idea in graphics. Thinking about it, it might be possible to improve on that using something like the LBM, storing photons in each cell and updating according to CA rules.
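As a rough sketch of the "discretize then blur" idea just described (reduced to 1D for brevity; the function name and rate parameter are my own invention, not the Harry Potter implementation), one CA-style diffusion step moves each cell toward the average of its neighbours, which is exactly a discrete blur:

```python
# Discrete diffusion as a cellular-automaton update: each cell holds a
# light intensity and one step relaxes it toward its neighbours' average.
def diffuse(cells, rate=0.5, steps=1):
    """Run `steps` synchronous diffusion updates; edges are clamped."""
    cells = list(cells)
    n = len(cells)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            left = cells[max(i - 1, 0)]
            right = cells[min(i + 1, n - 1)]
            # move `rate` of the way toward the neighbour average
            nxt.append(cells[i] + rate * ((left + right) / 2 - cells[i]))
        cells = nxt
    return cells

if __name__ == "__main__":
    # a single bright cell spreads out over successive steps, like
    # illumination scattering through a participating medium
    field = [0.0, 0.0, 1.0, 0.0, 0.0]
    for s in range(4):
        print(diffuse(field, steps=s))
```

In 3D the same local update applies over a voxel neighbourhood; the aliasing objection above comes from the coarse grid the intensities are snapped onto, not from the update rule itself.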
12-20-2005, 02:20 PM
Thanks for all the interest, which is why I posted. This is something you guys need to be looking at, I think.
I'm going to have to do a lot more reading over the Christmas break once I get his book; the online version has its limits for sure. For this to work the way I was thinking, the whole rendering system would have to be highly material-aware for the automata to chew through it fast enough to show real time gains on paper. :shrug:
I've been going in the front door at http://www.wolframscience.com/, which lets you read the book online.
Here are some really juicy tidbits to get one's brain chugging...
12-23-2005, 08:13 AM
This might work well for one thing: parallelism. If you could actually hardwire a large cubic grid, everything could update at once, which would be very fast. But you would still be limited by the size of the grid.
12-23-2005, 08:13 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.