Raytracing algorithms


#21

Have a look at Pixie.
It’s a RenderMan-compliant REYES renderer that can optionally use OpenGL for hiding (hidden-surface removal), while using the CPU for shading.

Note that it’s not doing any ray tracing on the GPU; that would be inefficient. Ray tracing involves lots of unpredictable random memory accesses, which is something GPUs aren’t very good at. They’re much better at processing large consecutive areas of memory, as is done in REYES rendering. I know there are some proof-of-concept ray tracing implementations in Cg that run on modern graphics cards, but they’re only of academic value.
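
To make the memory-access argument concrete, here’s a minimal CPU-side sketch (my illustration, not code from the thread) that times a pass over an array in order versus through a shuffled index table. The gap you measure is the same effect, in miniature, that makes incoherent ray/scene lookups expensive compared with REYES-style streaming over grids.

```cpp
// Hypothetical micro-benchmark (illustration only): streaming through memory
// in order is far cheaper than chasing random indices.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;              // ~16M floats, larger than any cache
    std::vector<float> data(n, 1.0f);

    // Index tables: one sequential, one shuffled to defeat the prefetcher.
    std::vector<std::size_t> seq(n);
    std::iota(seq.begin(), seq.end(), 0);
    std::vector<std::size_t> rnd = seq;
    std::shuffle(rnd.begin(), rnd.end(), std::mt19937{42});

    auto time_pass = [&](const std::vector<std::size_t>& idx) {
        auto t0 = std::chrono::steady_clock::now();
        float sum = 0.0f;
        for (std::size_t i : idx) sum += data[i];
        auto t1 = std::chrono::steady_clock::now();
        std::printf("(sum=%g) ", sum);          // keep the loop from being optimized away
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    };

    std::printf("sequential: %.1f ms\n", time_pass(seq));
    std::printf("random:     %.1f ms\n", time_pass(rnd));
}
```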

If you have money to spend, there’s also RenderDrive to get ray tracing in hardware, but that’s not a graphics card in the traditional sense.


#22

Oh okay, so the GPU would be used to do the actual scanline computations, and then you could have the software take care of the rest. Are there any resources you guys have that show how to do this type of thing? Thanks for all your help so far.


#23

First let it be known that I’m a pretty green programmer with only limited rendering algorithm knowledge. But it seems to me that offloading vector and matrix calculations to a GPU would be a waste of time, considering a standard dual-CPU setup would likely be much more efficient at handling and communicating the necessary math.

It is my understanding that GPUs currently use ‘hacks’ and workarounds to generate ray-traced reflections, HDRI lighting, and things like that. Perhaps someone out there could explain whether or not these hacked methods would be sufficiently accurate to be used in high-end renders. I suspect not.

BTW Oogst: My God! Your renderer is amazing. What resources have you used to learn all these algorithms / techniques? Could you recommend any books?


#24

You can gain by using hardware for rendering. If you have a dual-CPU system you now have a third processor essentially for free. Plus I think memory speeds on graphics cards are much better than on the CPU side, as long as the data stays on the card.

One of last year’s SIGGRAPH Stupid RAT Tricks, by Hal Bertram (I think he’s at MPC now), was utilising graphics hardware to speed up rendering. I would love to know more details, as I didn’t see the talk and his notes haven’t been posted on renderman.org.

I think that was basic transforms and lighting though - not raytracing, but it did speed up rendering.

I may be wrong of course - hardware isn’t my specialty.

Simon


#25

The solution: a raytracing PCI add-in card with 8 of ART’s AR350 raytracing chips. It’s the new PURE card from good old ART.
Check it out: www.artvps.com

I hear it’s now only $3000, and it now supports radiosity as well. It supports both Max & Maya, and RenderMan!!!


#26

Indeed a great solution. $3000. For that much money I can buy myself an incredible workstation which will accelerate not only the rendering, but also everything else I do. As such a card does nothing but rendering, it would be a complete waste of money. Though for a renderfarm it might be worth the money.


#27

Originally posted by kiaran
[B]First let it be known that I’m a pretty green programmer with only limited rendering algorithm knowledge. But it seems to me that offloading vector and matrix calculations to a GPU would be a waste of time, considering a standard dual-CPU setup would likely be much more efficient at handling and communicating the necessary math.

It is my understanding that GPUs currently use ‘hacks’ and workarounds to generate ray-traced reflections, HDRI lighting, and things like that. Perhaps someone out there could explain whether or not these hacked methods would be sufficiently accurate to be used in high-end renders. I suspect not.
[/B]

The GPU is just a big, massively parallelized vector processor. You can already perform ray/triangle intersections faster on an NV35 than on a Pentium 4 2.8GHz, provided you present the data in the right way. Add to this the fact that GPU speed is increasing much faster than Moore’s law, and raytracing on the GPU starts to look very attractive.
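
For anyone curious what that ray/triangle test looks like, below is a sketch of the Möller–Trumbore intersection in plain C++ (my illustration, not code from the thread). It’s written with a single combined accept test at the end, so it maps naturally onto a wide vector processor where the comparison becomes a mask; on the GPU you’d evaluate it for many rays or triangles in parallel with the data packed into textures or vertex streams.

```cpp
// Möller–Trumbore ray/triangle intersection, written branch-light so it maps
// well onto a wide vector processor. Names and structs are illustrative only.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                            a.z * b.x - a.x * b.z,
                                            a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and writes t if the ray (orig + t*dir) hits triangle (v0,v1,v2).
bool intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;     // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                  // barycentric u
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                // barycentric v
    *t = dot(e2, q) * inv;                      // distance along the ray
    // Single combined test: easy to express as a mask on SIMD/GPU hardware.
    return u >= 0.0f && v >= 0.0f && (u + v) <= 1.0f && *t > eps;
}

int main() {
    float t;
    bool hit = intersect({0, 0, -1}, {0, 0, 1},
                         {-1, -1, 0}, {1, -1, 0}, {0, 1, 0}, &t);
    std::printf("hit=%d t=%f\n", hit, t);       // expect hit=1 t=1.0
}
```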


#28

The GPU is just a big, massively parallelized vector processor. You can already perform ray/triangle intersections faster on an NV35 than on a Pentium 4 2.8GHz, provided you present the data in the right way. Add to this the fact that GPU speed is increasing much faster than Moore’s law, and raytracing on the GPU starts to look very attractive.

Cool. So the GPU wouldn’t necessarily be used for video card features, but rather as a helper CPU. Anything to help speed up raytracing would be greatly appreciated.

I hope mental images looks into this…


#29

Originally posted by kiaran
[B]Cool. So the GPU wouldn’t necessarily be used for video card features, but rather as a helper CPU. Anything to help speed up raytracing would be greatly appreciated.

I hope mental images looks into this… [/B]

They already did: since XSI v3 (mental ray 3.1) you can choose hardware-accelerated rendering among the render options (and it correctly translates into an MI file token, so I think it’s safe to assume it’s a Mental Images chunk of code and not SI-only).

What it basically does is use the GPU for what it’s best at, but it needs to stick to the basics for compatibility reasons (no funky nVidia Cg or pixel shaders).

The downside is that you can notice reduced precision: to keep the speed bump consistent it doesn’t re-iterate too much, so in scenes built at small scales and with dense detail you can sometimes notice sorting errors.
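
Those scale-related sorting errors sound like a quantization effect; the toy sketch below rests on that assumption (the post doesn’t name the mechanism) and just shows how two depths that a full float keeps distinct can collapse into the same bucket once squeezed into a 16-bit depth value, at which point their sort order is anyone’s guess.

```cpp
// Toy illustration (assumption, not from the post): two nearby depths that a
// full float keeps distinct can land in the same 16-bit depth bucket, so the
// hardware can no longer tell which surface is in front.
#include <cstdint>
#include <cstdio>

int main() {
    const float near_d = 0.500000f;      // normalized depths in [0,1]
    const float far_d  = 0.500004f;      // a hair behind the first surface

    auto quantize16 = [](float z) {      // map [0,1] onto a 16-bit integer depth
        return static_cast<std::uint16_t>(z * 65535.0f + 0.5f);
    };

    std::printf("float:  %.6f vs %.6f -> distinct\n", near_d, far_d);
    std::printf("16-bit: %u vs %u -> %s\n",
                quantize16(near_d), quantize16(far_d),
                quantize16(near_d) == quantize16(far_d)
                    ? "same bucket (sorting error)"
                    : "still distinct");
}
```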

That said, it’s arguably something that’s going to be common in the future, at least for workstation users. For farms… not sure: high-end video cards are too big and heat up a bit too much for a 1U enclosure, and more often than not for large farms the additional space and ventilation requirements aren’t affordable. But it’s only a matter of time :slight_smile:

