Ok, I will put some ideas down here…
- Adaptive sampling.
It seems that a lot of rendering time is wasted when a very plain area is passed 16 times.
The renderer should only give a pixel another pass if it looks like it needs it.
This should be calculated on a probability basis.
The likelihood of a pixel being re-rendered should be a function of how much its colour has changed with each pass: if the colour has changed on every pass so far, it is a safe bet that it will need more passes, and the more it has changed on average, the more passes it needs.
However, this may cause a problem in the following instance:
If there is a plain blue sky and motion blur is on, then the last pass in a frame may be the only one in which the pixel changes. Using the above method, that pass is not likely to be rendered, as the algorithm will have decided it is not likely to be needed.
However, some pixels in the last pass will still be rendered, since the decision is made per pixel on a probability basis. This will cause a stippled effect, with a pixel rendered here and a pixel not rendered there. This would be partly dealt with by the soften option that has already been implemented, and any texturing or film grain would also hide the stippling. Besides, if this method increases the render speed, then there can be more passes overall, resulting in better quality where it counts.
It might also be a good idea to have each pixel render its passes in a different order. In the method above, the first pass always gets rendered and the last one is the one that gets left out, so the pass order should be shuffled per pixel.
The probability of a pixel being rendered could also be adjusted on the basis of the previous frame or frames.
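The idea above could be sketched roughly like this (a minimal sketch, not the renderer's actual code: the function names, the base probability, and the gain factor are all assumptions — the probability starts low and grows with the average colour change seen over the passes so far):

```python
import random

def resample_probability(pass_colours, base_prob=0.1, gain=4.0):
    """Probability that a pixel gets another pass, driven by how much
    its colour has changed over the passes rendered so far.
    pass_colours: list of (r, g, b) tuples in 0..1, one per pass."""
    if len(pass_colours) < 2:
        return 1.0  # always take at least two passes before deciding
    # Mean per-channel change between consecutive passes.
    total = 0.0
    for prev, cur in zip(pass_colours, pass_colours[1:]):
        total += sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
    mean_change = total / (len(pass_colours) - 1)
    return min(1.0, base_prob + gain * mean_change)

def needs_another_pass(pass_colours, rng=random):
    """Coin flip per pixel, so some 'unlikely' pixels still get rendered."""
    return rng.random() < resample_probability(pass_colours)
```

A flat blue sky would settle at the base probability after a couple of passes, while a noisy penumbra would keep its probability near 1.0.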
- User-defined sampling.
It would make sense to let the user adjust the sampling probability. A high-contrast, important character in the foreground should get more passes than a low-contrast, simple background object. It might even be possible for the renderer to work out which objects in the render tree will need more passes, so that this can be done automatically.
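The user weight could simply scale the adaptive probability, with a small floor so that no object is starved of passes entirely (a sketch — the parameter names and the floor value are assumptions):

```python
def effective_probability(base_prob, weight=1.0, floor=0.02):
    """Scale the adaptive resampling probability by a user-assigned
    importance weight (1.0 = neutral, >1.0 = more passes), clamped so
    every pixel keeps a small chance of a re-render."""
    return max(floor, min(1.0, base_prob * weight))
```

An automatic version might derive `weight` from an object's contrast against its surroundings, but that is a harder problem.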
- Adaptive ray-traced shadows.
Where you have more than one sample on a shadow, you can end up with a stippling effect. This is particularly noticeable where the light is large and the shadow is far away from its source object. Since a ray-traced shadow knows how far it is from the source object, it should be able to adapt so that it renders more samples where it needs them (further away, where the penumbra is more spread out). A maximum and a minimum could be set on the number of shadow samples for each light. It should also be possible to detect where only one sample is needed (where there is no penumbra at all).
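The sample count could grow with a rough proxy for penumbra width — light size times occluder-to-receiver distance — clamped to the per-light min/max. This is a sketch under those assumptions (the scale factor of 8 is arbitrary):

```python
def shadow_samples(light_radius, occluder_to_receiver,
                   min_samples=1, max_samples=64):
    """More shadow samples where the penumbra is wide. Penumbra width
    grows roughly with light size times the distance from the occluder
    to the shadow receiver; a point light gives a hard shadow."""
    if light_radius == 0.0 or occluder_to_receiver == 0.0:
        return 1  # hard edge: one sample suffices
    penumbra = light_radius * occluder_to_receiver
    samples = int(round(min_samples + penumbra * 8.0))
    return max(min_samples, min(max_samples, samples))
```

So a contact shadow near its object gets one or a few samples, while the soft far edge of a big light's shadow gets the maximum.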
- Under-sampling.
There could even be under-sampling, where passes that are considered unlikely to make a big difference would render with larger pixels. This method would be incompatible with the shuffled-order method.
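Under-sampling might look something like this (again just a sketch — the thresholds and block sizes are made up): a low-importance pass shades one sample and splats it over a block of pixels instead of shading each pixel.

```python
def block_size(resample_prob):
    """Coarser blocks for passes judged unlikely to matter."""
    if resample_prob > 0.5:
        return 1   # full resolution
    if resample_prob > 0.2:
        return 2   # one sample per 2x2 block
    return 4       # one sample per 4x4 block

def splat(image, x, y, colour, n):
    """Write one shaded sample over an n x n block of pixels.
    image is a dict of (x, y) -> colour, just to keep the sketch
    self-contained."""
    for dy in range(n):
        for dx in range(n):
            image[(x + dx, y + dy)] = colour
```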
Aside from this, I suspect that the lower level code could be optimised somewhat.
Please tell me if I am talking rubbish, as I don't know much about this stuff. Do other renderers do this kind of thing?
Cheers,
John.

