
Rendering Algorithm Idea (Feedback wanted)


Nyx2095
04-12-2006, 08:31 PM
Alright. Most of you probably saw my thread on lens simulation and the idea of "forward raytracing" (tracing from the lights to the camera), and probably went "forward raytracing, why???". The reason is that I had an idea for a new rendering algorithm. At least I believe it's new. I decided I might as well share it with you, because I will need some help with the mathematical details, and more people thinking about it could mean more ideas. I also think it's good to share resources. So in other words, this thread is a request for feedback, and possibly for help if you're interested (I do plan on implementing it).

A little introduction...

A long time ago (in the 70s, I believe), someone had the idea that by geometrically simulating light transport (individual photons) from the lights to the camera (forward raytracing), it might be possible to generate synthetic images. However, the idea was quickly abandoned, as calculations showed it would take decades to render any 3D scene on any computer of the time. So forward raytracing was born, and died very soon after. It was only when Whitted introduced backward raytracing that the whole concept of raytracing became conceivable and interesting to computer scientists in general. Many techniques were then developed to achieve efficient global illumination through extensions of backward raytracing.

To this day, the idea of forward raytracing has remained pretty much dead. It is still impractical: on a modern computer, it can take weeks to generate a decent image. If you think about it, though, this is also true of backward raytracing to some extent. If perfectly naive backward Monte Carlo path tracing is performed (bouncing rays randomly, hoping they eventually hit lights), it can take weeks to generate an image too!

The idea...

What if we actually used the intelligent techniques developed for backward raytracing to do forward raytracing? I can hear some people say "HUH!" already... But wait...

1) Biasing the rays...

What if, instead of tracing rays and making them bounce randomly, we biased rays at each point *towards the camera*? This should greatly increase the rate of convergence, much like biasing rays towards the lights in path tracing. A pinhole camera could be used (bias rays towards the camera origin), but a camera with an actual thin lens would be more natural.

However, a single scene point may map to multiple points on the image plane, depending on the lens parameters and where the point lies. Hence, a method may be needed to compute, for a given scene point, which lens points carry it onto which image-plane points. All of that point's contributions to image-plane pixels could then be computed *at once*, so depth of field would come *for free*.

The niceness here is that we are biasing rays towards the camera, but there is *only one camera*, and it should be very fast to know if a point is within the visible frustum of the camera and if it's worth doing some computations at all.
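As a rough illustration of point 1, here is a minimal C++ sketch of connecting a scene point to a simple pinhole camera: project the point, reject it cheaply if it is outside the frustum, otherwise find the pixel it maps to. The Vec3/PinholeCamera types and the occlusion/splat step in the trailing comment are hypothetical placeholders, not part of any existing renderer.

// Hedged sketch: cheap frustum rejection and pixel lookup for a pinhole camera.
// All names here are hypothetical placeholders.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct PinholeCamera {
    Vec3 origin, forward, right, up;   // orthonormal camera basis
    double tanHalfFovX, tanHalfFovY;   // frustum half-angle tangents
    int width, height;                 // image resolution in pixels

    // Project a world point onto the image plane. Returns false if the point
    // lies outside the viewing frustum (the cheap rejection test mentioned
    // above); otherwise px/py receive the pixel it maps to.
    bool project(Vec3 p, int& px, int& py) const {
        Vec3 d = sub(p, origin);
        double z = dot(d, forward);
        if (z <= 0.0) return false;                          // behind the camera
        double x = dot(d, right) / (z * tanHalfFovX);        // in [-1,1] if visible
        double y = dot(d, up)    / (z * tanHalfFovY);
        if (x < -1.0 || x > 1.0 || y < -1.0 || y > 1.0) return false;
        px = (int)((x * 0.5 + 0.5) * (width  - 1));
        py = (int)((y * 0.5 + 0.5) * (height - 1));
        return true;
    }
};

// At every photon path vertex, one would then do something like:
//   if (cam.project(hitPoint, px, py) && !scene.occluded(hitPoint, cam.origin))
//       film.splat(px, py, contribution);
// where scene.occluded and film.splat stand in for whatever visibility test and
// pixel accumulator the renderer provides.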

2) Photon mapping...

A photon map could be built *as the path tracing is being performed*, storing information about outgoing directions rather than irradiance (incoming photons). What should probably be stored are the directions of photons that eventually ended up contributing to the image plane, and possibly their degree of contribution. From this information, it would be possible to estimate how likely a given emission direction is to produce contributions to the image plane, which could be used for importance sampling (more photons towards the more important directions).
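One possible reading of this "outgoing-direction map" is a per-light guiding table: bins over emission directions whose weights grow whenever a photon emitted through them ends up contributing to the film, and which are then sampled preferentially. This is only a sketch under that assumption; the binning scheme, the uniform starting weights, and the EmissionGuide name are my own choices, not part of the proposal, and the resulting non-uniform emission pdf has to be divided out of the photon power to keep the estimate correct.

// Hedged sketch of guiding photon emission with a coarse directional histogram.
#include <cstdlib>
#include <vector>

struct EmissionGuide {
    enum { kBins = 64 };                       // coarse direction grid (hypothetical size)
    std::vector<double> weight;
    EmissionGuide() : weight(kBins, 1.0) {}    // start uniform so no direction is excluded

    // Record that a photon emitted through bin `b` contributed `c` to the film.
    void record(int b, double c) { weight[b] += c; }

    // Pick a bin with probability proportional to accumulated contribution.
    int sampleBin() const {
        double total = 0.0;
        for (double w : weight) total += w;
        double u = total * (std::rand() / (double)RAND_MAX);
        for (int b = 0; b < kBins; ++b) {
            u -= weight[b];
            if (u <= 0.0) return b;
        }
        return kBins - 1;
    }

    // Probability of choosing bin `b`. To keep the estimator correct, each
    // photon's power should be scaled by (1.0 / kBins) / pdfBin(b), i.e. the
    // ratio between uniform bin sampling and this guided sampling.
    double pdfBin(int b) const {
        double total = 0.0;
        for (double w : weight) total += w;
        return weight[b] / total;
    }
};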

3) Progressive refinement

If we need more samples at a particular point on the image plane to reduce variance, it might be possible to "request" them by computing the world points that are visible from that image point and then biasing photons towards those points.
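A small sketch of how such "requests" could be driven, assuming per-pixel statistics are kept while splatting: estimate each pixel's variance from running sums, pick the noisiest pixel, and use the scene point seen through it as the target for the next photon batch. The bookkeeping below is standard; the names and the targeting step in the trailing comment are hypothetical and renderer-specific.

// Hedged sketch: choose the pixel that most needs extra samples from a running
// variance estimate; the actual photon targeting is left as a comment.
#include <cstddef>
#include <vector>

struct PixelStats {
    double sum = 0.0, sumSq = 0.0;
    long   n   = 0;

    void add(double sample) { sum += sample; sumSq += sample * sample; ++n; }

    // Variance of this pixel's mean estimate; 0 until at least 2 samples exist.
    double varianceOfMean() const {
        if (n < 2) return 0.0;
        double mean = sum / n;
        double var  = (sumSq - n * mean * mean) / (n - 1);
        return var / n;
    }
};

// Index of the pixel whose estimate is currently the noisiest.
std::size_t worstPixel(const std::vector<PixelStats>& film) {
    std::size_t worst = 0;
    for (std::size_t i = 1; i < film.size(); ++i)
        if (film[i].varianceOfMean() > film[worst].varianceOfMean()) worst = i;
    return worst;
}

// Next step (renderer-specific, hypothetical): trace a camera ray through the
// chosen pixel, find the first visible surface point, and emit the next batch
// of photons with directions biased so their paths pass near that point.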

Speed...

I don't know exactly how fast this all would be... But it should be much faster than plain forward path tracing. It may even be able to compete with backward path tracing in fact, because of its inherent simplicity. If we assume that there is only *one* camera in the scene, then biasing rays so they hit this single camera should be fairly easy.

I find there is some "neatness" in such an idea because forward raytracing is more "natural" than backward raytracing. It may also be easier to simulate some optical phenomena in a more realistic manner.

playmesumch00ns
04-13-2006, 07:35 PM
I'm having trouble figuring out which situations would be better simulated this way.

It seems to me that even if you do know "very quickly" which points are inside the camera frustum or not, you still have to generate those points, which means tracing rays, which is slow.

The advantage of backward raytracing is that even if a point does not connect to any light source, it still contributes to the image, whereas with forward raytracing you could easily expend a whole lot of effort for no gain at all.

I think all the advantages you're imagining would also be present in a bidirectional MLT/ER algorithm, for instance mutating the first segment of the path for motion blur and dof.

Nyx2095
04-14-2006, 06:09 AM
Quick "proof of concept" implementation. No colors because no shaders for this implemented atm. I might also have some parameters slightly off in the light distribution. Rendering time is 5 minutes.

http://www.xgameproject.com/renders/render21.png

tciny
04-14-2006, 08:40 AM
I find the idea interesting but I'm having trouble figuring out in what cases it'd really be an advantage in terms of rendering time or image quality...
While I think this concept works well in an environment like a Cornell box, it might give you a headache in large scenes with lots of fine detail, where it becomes harder to bias the rays towards the camera... A combination of forward tracing for the emitted light and backward tracing for composing the final image just seems like it has less overhead :)

MattTheMan
04-14-2006, 02:08 PM
yeah, tciny has a point...
But still, I'm impressed with the results- 5 minutes... wow! Any way to like increase the sampling though? Any easy way?

Still, impressive - and all done forward...

Nyx2095
04-14-2006, 04:17 PM
This scene is nearly an ideal case for such an algorithm... The only way to increase the sampling might be to send more samples towards areas that are less likely to reach the camera, by using techniques like photon mapping to guide the process. It's not complete though. This technique seems to work well for indirect lighting, but it would never work for specular surfaces (especially purely specular ones), since a mirror bounce has a fixed outgoing direction and cannot be biased towards the camera.

Carina
04-15-2006, 12:20 PM
I'm afraid I struggle to see how beneficial this would be as well. Another consideration is the bias toward the camera: while I can see this being very useful for direct light transfer, how useful would it be when you're dealing with light "bouncing" between surfaces? Surely this will bias the light distribution in the scene? Or am I missing some vital logical step here...
