
Raytracing algorithms


Apoclypse
02-27-2004, 06:51 PM
I just want to know which raytracing algorithm is the most widely used. I've been doing some research and I'm thinking of going with the ZZ-buffer technique, but I want to know which technique is most used in commercial raytracers like Mental Ray and Brazil. (I think, from what I've heard about Brazil being inaccurate, that it uses a take on the multiresolution point sampling method.)

Anyways, any help would be greatly appreciated.

stew
02-27-2004, 07:34 PM
I think every commercial renderer is by now using a hybrid approach, combining a scanliner and a raytracer. AFAIK all the "decent" raytracers use distributed raytracing with binary trees or octrees for speed optimization, and probably some other optimization tricks too (have a look at the "Graphics Gems" books).
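
For illustration, here's a minimal sketch of the kind of ray/box test those trees are built on, in C++. Vec3 and the function names are mine, not from any particular renderer: every cell of such a tree is an axis-aligned box, and you only descend into cells the ray actually hits.

#include <algorithm>

struct Vec3 { float x, y, z; };

// Slab test: does the ray (origin o, per-axis inverse direction invD)
// hit the axis-aligned box [lo, hi]? invD is precomputed once per ray.
bool rayHitsBox(const Vec3& o, const Vec3& invD, const Vec3& lo, const Vec3& hi)
{
    float tmin = 0.0f, tmax = 1e30f;
    const float orig[3] = { o.x,    o.y,    o.z    };
    const float inv[3]  = { invD.x, invD.y, invD.z };
    const float bmin[3] = { lo.x,   lo.y,   lo.z   };
    const float bmax[3] = { hi.x,   hi.y,   hi.z   };
    for (int a = 0; a < 3; ++a) {
        float t0 = (bmin[a] - orig[a]) * inv[a];
        float t1 = (bmax[a] - orig[a]) * inv[a];
        if (t0 > t1) std::swap(t0, t1);  // ray points along the -axis direction
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;   // slab intervals don't overlap: miss
    }
    return true;                         // ray passes through the cell
}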

Apoclypse
03-04-2004, 04:40 PM
Thanks, I actually kept on doing some research and found that out rather quickly. They also sometimes use hybrid approaches to raytracing algorithms, I think; I read that in a paper somewhere.

Oogst
03-08-2004, 03:40 PM
Combining a scanliner and a raytracer? How should I imagine this being done?

playmesumch00ns
03-08-2004, 04:56 PM
I guess you use a scanline algorithm instead of raycasting for primary visibility, then use raytracing to compute reflections etc.
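
Something like this sketch; this is hedged guesswork rather than any actual renderer's code, and all the types and helpers are stand-ins:

// Hybrid sketch: a scanline hider resolves primary visibility,
// rays are spawned only for secondary effects like reflections.
struct Color { float r = 0, g = 0, b = 0; };
struct Ray   { float origin[3], dir[3]; };
struct Hit   { bool valid = false; bool reflective = false; Ray reflection; };

Hit   rasterize(int, int)        { return {}; } // stub: z-buffer scan conversion
Color trace(const Ray&, int)     { return {}; } // stub: recursive raytracer
Color directLighting(const Hit&) { return {}; } // stub: local shading + lights

Color shadePixel(int x, int y)
{
    Hit h = rasterize(x, y);              // primary visibility, no rays cast
    if (!h.valid) return Color{};         // background pixel
    Color c = directLighting(h);          // ordinary scanline shading
    if (h.reflective) {                   // raytrace only where it matters
        Color refl = trace(h.reflection, 1);
        c.r += refl.r; c.g += refl.g; c.b += refl.b;
    }
    return c;
}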

Oogst
03-08-2004, 06:28 PM
That sounds logical, but I doubt it will result in a great improvement in rendering time. OK, when not using GI and with only a few lights in the scene it will speed things up a lot, but when lots of lights or GI are added, it will hardly make any difference. Or am I wrong here?

stew
03-08-2004, 09:36 PM
Considering that the large majority of production renders don't involve GI or raytraced shadows, this does result in significant speed improvements.

Vertizor
03-08-2004, 10:18 PM
Look at the Lightwave renderer. If you were to put a floor object down, then a sphere above it, and hit the render button, it renders the sphere first and then continues to render the rest of the scene in a second pass.

Most renderers I've come across render the scene top-down, one line at a time; that's what I interpret as "scanline". But they still use raytracing techniques, just in a linear pattern (left to right, top to bottom).

Don't forget frustum culling for optimization (basically hidden-surface removal): if it's not visible, you don't need to run a ray intersection test against it. Maya's renderer renders in blocks at a time rather than one dot at a time, which might take up more memory. The idea is that you split the scene into regions, then determine which objects are inside each region. When you raytrace a region, you only test against the objects in that block.
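
A hedged sketch of that bucketing idea in C++; ScreenBox is a made-up screen-space bounding rectangle per object, assumed to be already clipped to the image:

#include <vector>

struct ScreenBox { int x0, y0, x1, y1; };  // object's screen bounds, in pixels

// Bin object indices into fixed-size buckets. Rays traced inside a bucket
// then only test that bucket's short list, not the whole scene.
std::vector<std::vector<int>> binObjects(const std::vector<ScreenBox>& objs,
                                         int width, int height, int bucket)
{
    int bx = (width  + bucket - 1) / bucket;          // buckets per row
    int by = (height + bucket - 1) / bucket;          // bucket rows
    std::vector<std::vector<int>> bins(bx * by);
    for (int i = 0; i < (int)objs.size(); ++i) {
        const ScreenBox& b = objs[i];                 // assumed pre-clipped
        for (int y = b.y0 / bucket; y <= b.y1 / bucket; ++y)
            for (int x = b.x0 / bucket; x <= b.x1 / bucket; ++x)
                bins[y * bx + x].push_back(i);        // overlaps this bucket
    }
    return bins;
}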

Oogst
03-09-2004, 08:22 AM
If you are not using GI or raytraced shadows, then why use a raytracer at all? Why not a total scanliner?

playmesumch00ns
03-09-2004, 09:15 AM
Well yeah, for the main part. After all, PRMan (a REYES renderer) only added raytracing in its last version, and then only to keep up with mental ray.

There are some situations where you just need raytracing, where only an accurate refraction or reflection will look right. The rest of the time you can fake it.

For instance, in A Bug's Life, when Hopper unleashes all those seeds(?) from the glass bottle, the glass had to be raytraced. By contrast, in Finding Nemo, the refractions of both the fish tank and the plastic fish bag were faked.

Maya's renderer is a hybrid scanline/raytracer. Even mental ray does occlusion calculations with a modified z-buffer algorithm first, I believe (although I'm probably talking out of my arse here).

stew
03-09-2004, 09:56 AM
Originally posted by Oogst
If you are not using GI or raytraced shadows, then why use a raytracer at all? Why not a total scanliner?
Render the typical checkerboard with a chrome ball: why raytrace everything? I think every current commercial renderer is going to cast rays for the reflections only, rendering the rest with a z-buffer algorithm. Raytracing is just too expensive, so it's only used where really necessary.

Oogst
03-09-2004, 09:44 PM
Ah, well, but now I do not think you can talk about a raytracer that is made faster using scanline techniques. This is what even the MAX scanline renderer already does: everything is scanline except raytraced shadows, reflections and refractions. It is indeed raytracing, but I do not think the MAX scanline renderer is a raytracer. I would call that a scanliner with some raytracing stuff.

playmesumch00ns
03-10-2004, 09:56 AM
...or a hybrid raytracer. I believe that's the proper term?

Ah phooey, it's all a bunch of mish-mash anyway. BTW Oogst, your raytracer's looking quite cool. What other features are you planning on adding next?

Oogst
03-10-2004, 11:15 AM
What I am wondering about is why I see hardly any renderers that accelerate using the GPU. Is it still not precise enough, or is more research in the field needed? The hybrid rendering you mention could be sped up using the GPU.

Currently I do not have time to work on my own raytracer, but I am planning on adding importers for different file types. The creator of a modeller mailed me that he wanted to support my renderer, so I guess that will be the first thing to do. If only because he said he can add the features my renderer has to the materials in his modeller, making them work well together. That would be fun.

playmesumch00ns
03-10-2004, 01:42 PM
The GPU's just a big parallel vector processing unit. You can offload all sorts of calculations onto it...

Future graphics cards will be geared much more towards accelerating high-end rendering. We're only seeing the first little icicle on the tip of a monster iceberg that's coming our way very soon. It's gonna be cool!

Oogst
03-10-2004, 01:49 PM
But why am I not seeing it already? The current ATI and Nvidia cards boasted about being able to do this sort of thing a year ago; is that not long enough to see results yet?

Vertizor
03-10-2004, 04:07 PM
ATI and Nvidia don't specialize in professional cards. Sure they have chipsets for DCC (FireGL, Quadro) but not quite the same as something from the likes of 3DLabs. 3DLabs have pro video cards that can accelerate rendering, but only in specific applications (such as Maya).

To have that level of control over the graphics chip you'd need inside knowledge about it, the kind driver writers have, and that costs $$, if ATI or Nvidia are even willing to license it at all.

Apoclypse
03-10-2004, 07:51 PM
I think hybrid is the way to go for these types of things. Anyway, about using a GPU for this: what about letting the GPU handle the geometry and having the raytrace stuff sit on top of it?

Since secondary rays are used for reflections and such anyway, why not have the GPU handle everything scanline and have the software handle the rest, then add (or multiply) the results to make the final image? The only thing I can't figure out is how the geometry would be anti-aliased, among other things, but I don't see why this couldn't be done now without actually having driver access, because it could be done as an OpenGL extension to the raytracer (a module, perhaps, if you use a client/server approach while defining the whole renderer itself using a kernel-like approach). So you could have a renderer with a ray server that handles all the raytracing stuff, a light server that handles all the light stuff, and then a geometry server that actually handles the scanline rendering through a custom graphics engine that uses the GPU (written in OpenGL). Then I guess you could have a comp server (or buffer) that combines all of these elements together (maybe that should be written into the scanline GPU renderer itself).

Now, for anyone who has programmed games or programmed the GPU before: will this work? If not, why not? (Explanations would be very welcome.) This is for my own research purposes (after I'm done I'll be testing some of my ideas on this client/server based renderer and see if it works). I haven't written anything yet.
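
If it helps to picture that split, here's a tiny C++ interface sketch of it. Every name is made up, and this says nothing about whether the scheme is actually workable:

// One abstract "server" per stage, as described above.
struct FrameBuffer { /* pixel storage */ };

struct GeometryServer {                       // GPU scanline pass (OpenGL)
    virtual FrameBuffer renderScanline() = 0;
    virtual ~GeometryServer() = default;
};

struct RayServer {                            // CPU pass: reflections etc.
    virtual FrameBuffer renderSecondary() = 0;
    virtual ~RayServer() = default;
};

struct CompServer {                           // adds/multiplies the layers
    virtual FrameBuffer combine(const FrameBuffer& scan,
                                const FrameBuffer& rays) = 0;
    virtual ~CompServer() = default;
};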

Oogst
03-10-2004, 07:57 PM
And why not build an ultrafast hardware renderer that does its thing completely in hardware? It would of course not be a superb GI renderer, but things like bump mapping, shadow maps and specular maps can all be done very well today using buffers and shaders on the GPU. For a lot of purposes that quality would be enough.

playmesumch00ns
03-11-2004, 09:49 AM
The GPU's very good at handling lots of vector and matrix transformations in parallel. You don't want it to actually draw stuff, you just want to harness that power for the calculations it's good at.

stew
03-11-2004, 03:03 PM
Have a look at Pixie (http://www.cs.berkeley.edu/~okan/Pixie/pixie.htm).
It's a RenderMan-compliant REYES renderer that can optionally use OpenGL for hiding, while using the CPU for shading.

Note that it's not doing any ray tracing on the GPU; that would be inefficient. Ray tracing involves lots of unpredictable random memory access, which is something GPUs aren't very good at. They're much better at processing large consecutive areas of memory, as is done in REYES rendering. I know there are some proof-of-concept ray tracing implementations in Cg that run on modern graphics cards, but they're only of academic value.

If you have money to spend, there's also RenderDrive (http://www.renderdrive.com/) to get ray tracing on hardware, but that's not what you'd usually consider a graphics card in the traditional sense.

Apoclypse
03-17-2004, 05:10 PM
Oh, okay, so the GPU would be used to do the actual scanline computations, and then you could have the software take care of the rest. Are there any resources you guys have that show how to do this type of thing? Thanks for all your help so far.

kiaran
03-23-2004, 09:17 AM
First let it be known that I'm a pretty green programmer with only limited rendering algorithm knowledge. But it seems to me that offloading vector and matrix calculations to a GPU would be a waste of time, considering a standard dual-CPU setup would likely be much more efficient at handling and communicating the necessary math.

It is my understanding that GPUs currently use 'hacks' and workarounds to generate raytraced reflections, HDRI lighting, and things like that. Perhaps someone out there could explain whether these hacked methods would be sufficiently accurate to be used in high-end renders. I suspect not.

BTW Oogst: My God! Your renderer is amazing. What resources have you used to learn all these algorithms / techniques? Could you recommend any books?

rendermaniac
03-23-2004, 10:13 AM
You can gain by using hardware for rendering. If you have a dual-CPU system you now have a third processor, essentially for free. Plus I think memory speeds on graphics cards are much better than CPU memory speeds, as long as the data stays on the card.

One of last year's SIGGRAPH Stupid RAT Tricks, by Hal Bertram (I think he's at MPC now), was utilising graphics hardware to speed up rendering. I would love to know more details, as I didn't see the talk and his notes haven't been posted on renderman.org.

I think that was basic transforms and lighting though, not raytracing, but it did speed up rendering.

I may be wrong of course - hardware isn't my specialty.

Simon

pete016
03-26-2004, 10:02 PM
The solution: a raytracing PCI add-in card with 8 of ART's AR350 raytracing chips. It's the new PURE card from good old ART.
Check it out: www.artvps.com

I hear it's now only $3000, and it now supports radiosity as well. It supports both Max & Maya, and RenderMan!!!

Oogst
03-27-2004, 09:59 AM
Indeed a great solution. $3000. For that much money I can buy myself an incredible workstation which will accelerate not only the rendering but also everything else I do. As such a card does nothing but rendering, it would be a complete waste of money. Though for a renderfarm it might be worth the dime.

playmesumch00ns
03-27-2004, 11:38 AM
Originally posted by kiaran
First let it be known that I'm a pretty green programmer with only limited rendering algorithm knowledge. But it seems to me that offloading vector and matrix calculations to a GPU would be a waste of time, considering a standard dual-CPU setup would likely be much more efficient at handling and communicating the necessary math.

It is my understanding that GPUs currently use 'hacks' and workarounds to generate raytraced reflections, HDRI lighting, and things like that. Perhaps someone out there could explain whether these hacked methods would be sufficiently accurate to be used in high-end renders. I suspect not.


The GPU is just a big, massively parallelized vector processor. You can already perform ray/triangle intersections faster on an NV35 than on a Pentium 4 2.8GHz, provided you present the data in the right way. Add to this the fact that GPU speed is increasing much faster than Moore's law, and raytracing on the GPU starts to look very attractive.
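
For reference, this is the kind of kernel being talked about: a standard Moller-Trumbore ray/triangle test in C++. Whether it beats a CPU depends entirely on data layout, and the names here are mine:

#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore: intersect ray (orig, dir) with triangle (v0, v1, v2).
// On a hit, returns true and the distance t along the ray.
bool intersect(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t)
{
    const float EPS = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;     // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                  // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                       // distance along the ray
    return t > EPS;                             // hit in front of the origin
}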

kiaran
03-27-2004, 10:50 PM
Originally posted by playmesumch00ns
The GPU is just a big, massively parallelized vector processor. You can already perform ray/triangle intersections faster on an NV35 than on a Pentium 4 2.8GHz, provided you present the data in the right way. Add to this the fact that GPU speed is increasing much faster than Moore's law, and raytracing on the GPU starts to look very attractive.

Cool. So the GPU wouldn't necessarily be used for video card features, but rather as a helper CPU. Anything to help speed up raytracing would be greatly appreciated.

I hope mental images looks into this...

ThE_JacO
04-08-2004, 04:05 PM
Originally posted by kiaran
Cool. So the GPU wouldn't necessarily be used for video card features, but rather as a helper CPU. Anything to help speed up raytracing would be greatly appreciated.

I hope mental images looks into this...

They already did: since XSI v3 (mental ray 3.1) you can choose hardware acceleration among the rendering options (and it correctly translates into an .mi file token, so I think it's safe to assume it's a Mental Images chunk of code and not SI-only).

What it does is basically use the GPU to sort out what it's best at, but it needs to stick to the basics for compatibility reasons (no funky Nvidia Cg or pixel shaders).

The downside is that you can notice reduced precision: to keep the speed bump consistent it doesn't re-iterate too much, so in scenes at small scales and with dense detail you can sometimes notice sorting errors.

That said, it's unarguably something that's gonna be common in the future, at least for workstation users. For farms... not sure: high-end video cards are too big and heat up a bit too much for a 1U enclosure, and more often than not the additional space and ventilation needs aren't affordable for large farms. But it's only a matter of time :)

CGTalk Moderation
01-17-2006, 02:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.