Raytracing algorithms


I just want to know which raytracing algorithm is the most widely used. I’ve been doing some research and I’m thinking of going with the ZZ-buffer technique, but I want to know which technique commercial raytracers like Mental Ray and Brazil use. (From what I’ve heard about Brazil being inaccurate, I suspect it uses a take on the multiresolution point sampling method.)

Anyways, any help would be greatly appreciated.


I think every commercial renderer by now uses a hybrid approach, combining a scanliner and a raytracer. AFAIK all the “decent” raytracers use distributed raytracing with binary trees or octrees for speed optimization, and probably some other optimization tricks too (have a look at the “Graphics Gems” books).
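For what it’s worth, here is a minimal sketch of the tree idea in Python (hypothetical scene types, nothing like a production octree): each node keeps a conservative bounding sphere, and a ray that misses the bound skips the whole subtree, which is where the speed-up over testing every object comes from.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def hit_sphere(origin, direction, center, radius):
    """Nearest positive t for a normalized ray vs. a sphere, or None."""
    oc = sub(origin, center)
    b = dot(direction, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

class BVHNode:
    """Binary tree over (center, radius) spheres, split on the x median.
    Each node stores a conservative bounding sphere of its contents."""
    def __init__(self, spheres):
        cx = [sum(s[0][i] for s in spheres) / len(spheres) for i in range(3)]
        self.center = cx
        self.radius = max(math.dist(cx, s[0]) + s[1] for s in spheres)
        if len(spheres) <= 2:
            self.leaf, self.children = spheres, []
        else:
            spheres.sort(key=lambda s: s[0][0])  # sorts in place (sketch only)
            mid = len(spheres) // 2
            self.leaf = []
            self.children = [BVHNode(spheres[:mid]), BVHNode(spheres[mid:])]

    def intersect(self, origin, direction):
        # Prune: if the ray misses this node's bound, skip the whole subtree.
        oc = sub(origin, self.center)
        b = dot(direction, oc)
        c = dot(oc, oc) - self.radius * self.radius
        if c > 0 and (b > 0 or b * b - c < 0):
            return None
        hits = [hit_sphere(origin, direction, ctr, r) for ctr, r in self.leaf]
        hits += [child.intersect(origin, direction) for child in self.children]
        hits = [t for t in hits if t is not None]
        return min(hits, default=None)
```

The payoff grows with scene size: a naive raytracer tests every ray against every object, while the tree makes that roughly logarithmic for well-distributed scenes.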


Thanks, I actually kept doing some research and found that out rather quickly. They also sometimes use hybrid approaches to raytracing algorithms, I think; I read that in a paper somewhere.


Combining a scanliner and a raytracer? How should I imagine that being done?


I guess you use a scanline algorithm instead of raycasting for primary visibility, then use raytracing to compute reflections etc.
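A toy sketch of that two-pass split, with a made-up scene format of screen-space rectangles: pass 1 resolves visibility with a plain z-buffer and casts no rays at all, and pass 2 only spawns secondary rays for pixels whose visible surface is reflective.

```python
def render(width, height, rects):
    """rects: list of (x0, y0, x1, y1, depth, reflective),
    with x1/y1 exclusive screen-space bounds."""
    depth = [[float("inf")] * width for _ in range(height)]
    winner = [[None] * width for _ in range(height)]
    # Pass 1: scanline-style visibility, the cheap part -- nearest
    # rectangle wins each pixel, no ray intersections anywhere.
    for rect in rects:
        x0, y0, x1, y1, z, _ = rect
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                if z < depth[y][x]:
                    depth[y][x] = z
                    winner[y][x] = rect
    # Pass 2: collect only the pixels that actually need secondary rays.
    ray_pixels = [(x, y) for y in range(height) for x in range(width)
                  if winner[y][x] is not None and winner[y][x][5]]
    return winner, ray_pixels
```

With a matte floor and one small reflective object, only the object’s few pixels ever enter the (expensive) raytracing stage.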


That sounds logical, but I doubt it will result in a great improvement in rendering time. OK, when not using GI and with only a few lights in the scene it will speed things up a lot, but when lots of lights or GI are added it will hardly make any difference. Or am I wrong here?


Considering that the large majority of production renders don’t involve GI or raytraced shadows, this does result in significant speed improvements.


Look at the Lightwave renderer. If you put a floor object down, then a sphere above it, and hit the render button, it renders the sphere first and then renders the rest of the scene in a second pass.

Most renderers I’ve come across will render the scene top-down and one line at a time; that’s what I interpret as “scanline”. But it still uses ray tracing techniques. It just does so in a linear pattern (left to right, top to bottom).

Don’t forget frustum culling for optimizations (basically hidden-surface removal): if it’s not visible, you don’t need to ray-intersect test it. Maya’s renderer renders in blocks at a time rather than one dot at a time, which might take up more memory. The idea is you split the scene into regions, then determine which objects are inside which region. When you ray trace a region you only test against the objects in that block.
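A sketch of that bucketing step (the tile size and scene format are made up): each object’s screen-space bounding box decides which tiles it lands in, so a tile’s rays only get tested against that tile’s own list.

```python
TILE = 16  # hypothetical bucket size in pixels

def bucket_objects(width, height, objects):
    """objects: (name, x0, y0, x1, y1) inclusive screen-space bounds.
    Returns {(tile_x, tile_y): [names]} so each tile only ray-tests
    the objects overlapping it, not the whole scene."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    buckets = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for name, x0, y0, x1, y1 in objects:
        for ty in range(max(0, y0 // TILE), min(tiles_y, y1 // TILE + 1)):
            for tx in range(max(0, x0 // TILE), min(tiles_x, x1 // TILE + 1)):
                buckets[(tx, ty)].append(name)
    return buckets
```

A small object ends up in exactly one bucket, so rays in every other tile never even see it.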


If you are not using GI or raytraced shadows, then why use a raytracer at all? Why not a total scanliner?


Well yeah, for the main part. After all, prman (Reyes) only added raytracing in its last version, and then only to keep up with mentalray.

There are some situations where you just need raytracing, where only an accurate refraction or reflection will look right. The rest of the time you can fake it.

For instance, in A Bug’s Life, when Hopper unleashes all those seeds(?) from the glass bottle, the glass had to be raytraced. By contrast, in Finding Nemo, the refractions of both the fish tank and the plastic fish bag were faked.

Maya’s renderer is a hybrid scanline/raytracer. Even mentalray does occlusion calculations with a modified z-buffer algorithm first I believe (although I’m probably talking out of my arse here)


Originally posted by Oogst
If you are not using GI or raytraced shadows, then why use a raytracer at all? Why not a total scanliner?

Render the typical checkerboard with a chrome ball - why raytrace everything? I think every current commercial renderer is going to cast rays for the reflections only, rendering the rest with a z-buffer algorithm. Raytracing is just too expensive, so it’s only being used where really necessary.
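For the chrome ball, the only extra per-pixel work is one mirror ray; its direction comes from the standard reflection formula r = d − 2(d·n)n (a sketch, assuming the surface normal n is already normalized):

```python
def reflect(d, n):
    """Mirror direction for a perfect reflector: r = d - 2 (d . n) n.
    Assumes n is a unit vector; d is the incoming ray direction."""
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return [a - k * b for a, b in zip(d, n)]
```

The checkerboard pixels never call this; they are shaded straight from the z-buffer pass, which is exactly why the hybrid is so much cheaper than raytracing everything.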


Ah, well, but then I do not think you can speak of a raytracer that is made faster using scanline techniques. This is what even the MAX scanline renderer already does: everything is scanline except for raytraced shadows, reflections and refractions. That is raytracing indeed, but I do not think the MAX scanline renderer is a raytracer. I would call it a scanliner with some raytracing on top.


…or a hybrid raytracer. I believe that’s the proper term?

Ah phooey, it’s all a bunch of mish-mash anyways. Btw Oogst, your raytracer’s looking quite cool. What other features are you planning on adding next?


What I am wondering about is why I see hardly any renderers that accelerate using the GPU. Is it still not precise enough, or is more research in the field needed? The hybrid rendering you mention could be sped up using the GPU.

Currently I do not have time to work on my own raytracer, but I am planning on adding importers for different file types. The creator of a modeller mailed me that he wanted to support my renderer, so I guess that will be the first thing to do. If only because he said he can add the features my renderer has to the materials in his modeller, making them work well together. That would be fun.


The GPU’s just a big parallel vector processing unit. You can offload all sorts of calculations onto it…

Future graphics cards will be geared much more towards accelerating high-end rendering. We’re only seeing the first little icicle on the tip of a monster iceberg that’s coming our way very soon. It’s gonna be cool!


But why am I not seeing it already? The current ATI and Nvidia cards boasted being able to do this sort of stuff a year ago; is that not long enough to see results yet?


ATI and Nvidia don’t specialize in professional cards. Sure they have chipsets for DCC (FireGL, Quadro) but not quite the same as something from the likes of 3DLabs. 3DLabs have pro video cards that can accelerate rendering, but only in specific applications (such as Maya).

To have that level of control over the graphics chip you’d need inside knowledge of it, the kind driver writers have, and that costs $$, if ATI or Nvidia are even willing to license it.


I think that hybrid is the way to go for these types of things. Anyway, about using a GPU for this: what about letting the GPU handle the geometry and layering the raytrace stuff on top of it?

Since they use secondary rays for reflections and such anyway, why not have the GPU handle anything scanline and have the software handle the rest, then add (or multiply) the results to make the final image? The only thing I can’t figure out is how the geometry would be anti-aliased, among other things. But I don’t see why this couldn’t be done now without actually having driver access, because it can be done as an OpenGL extension to the raytracer (a module perhaps, if you use a client/server approach while defining the whole renderer with a kernel-like design). So you could have a renderer with a ray server that handles all the raytracing stuff, a light server that handles all the lighting stuff, and then a geometry server that actually runs the scanline renderer through a custom graphics engine using the GPU (written in OpenGL). Then I guess you could have a comp server (or buffer) that combines all of these elements together (maybe that should be written into the scanline GPU renderer itself).
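The “add (or multiply) the results” step is the easy part; a sketch with grayscale float buffers standing in for the GPU scanline pass and the software ray pass:

```python
def composite(scanline_pass, ray_pass, mode="add"):
    """Combine a GPU scanline pass with a software ray pass per pixel.
    Buffers are rows of grayscale floats in [0, 1]; 'add' clamps at 1.0,
    'multiply' is the usual modulate blend."""
    out = []
    for row_a, row_b in zip(scanline_pass, ray_pass):
        if mode == "add":
            out.append([min(1.0, a + b) for a, b in zip(row_a, row_b)])
        else:  # "multiply"
            out.append([a * b for a, b in zip(row_a, row_b)])
    return out
```

In a real pipeline this blend would itself run on the GPU (glBlendFunc does exactly additive and modulate blending), so only the ray pass stays in software.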

Now, for anyone who has programmed for games or has programmed the GPU before: will this work? If not, why not? (Explanations would be very welcome.) This is for my own research purposes; after I’m done I’ll be testing some of my ideas on this client/server based renderer and see if it works. I haven’t written anything yet.


And why not build an ultrafast hardware renderer that does its thing completely in hardware? It would of course not be a superb GI renderer, but things like bump mapping, shadow maps and specular maps can all be done very well today using buffers and shaders on the GPU. For a lot of purposes that would be enough quality.
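Those hardware-friendly effects all reduce to buffer lookups. Shadow maps, for instance, boil down to one depth comparison per fragment (a sketch with a hypothetical 2D depth array standing in for the light’s depth texture; on real hardware this runs in the fragment shader):

```python
def in_shadow(shadow_map, texel, frag_depth, bias=1e-3):
    """Classic shadow-map test: the fragment is lit only if nothing
    nearer to the light was recorded at its texel. The small bias
    avoids self-shadowing ('shadow acne') from depth quantization."""
    u, v = texel
    return frag_depth > shadow_map[v][u] + bias
```

No rays anywhere: the expensive visibility question “can the light see this point?” was answered once, when the depth map was rendered from the light’s point of view.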


The GPU’s very good at handling lots of vector and matrix transformations in parallel. You don’t want it to actually draw stuff, you just want to harness that power for the calculations it’s good at.