
totally basic renderer writing


kuui
05-23-2005, 01:21 PM
hi everyone!

First off, I'm pretty familiar with C++, as I have to use it at work for engineering tools etc.

Now I'm thinking about starting my own renderer, first as a little raytracer that basically draws an object and so on, and then adding features once I'm deeper into the whole rendering stuff.

My problem: I don't know where to start. I know some basic RSL, but I don't think that will help me much here, and I'm totally lost.

I got the book "Physically Based Rendering" because a lot of people recommend it, but it doesn't seem to be a good book for total newbies to 3D programming.

Can you give me some hints on where to start totally from scratch, maybe books I could buy or online material?

Sorry if this has been asked before...

I don't mind learning another language either; I also use VB and a bit of Perl, although I would like to stay with C++ :)

chaoticbob
05-23-2005, 03:09 PM
I would recommend this tutorial:

http://www.flipcode.com/articles/article_raytrace01.shtml

Jacco Bikker walks through writing a basic ray tracer step by step and does an excellent job explaining what each step is all about.

Vertizor
05-23-2005, 03:34 PM
Language is the first barrier, environment is the next. By that I mean: let's assume you're developing for Windows, so you'd need to know a little bit about the Windows environment and how its drawing routines work.

Here's what I did. I found equations for sphere intersection and the basic ray tracing algorithm. Great, so I can ray trace a sphere, but how do I actually see the results? You have to know a little bit of graphics programming in Windows. It's quite simple really. Just as a test, I set the color of the pixels one by one in a plain window. It's not the most elegant way to do it, but it was a proof of concept, because as soon as I moved the window and its contents were refreshed, my picture was gone; it was not maintained. A better solution is to hold the pixel color data in an "off screen" buffer and use it to repaint the window whenever necessary. If you know a little bit of Windows programming, you'll know when that is necessary.
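Just to make the off-screen buffer idea concrete, here is a minimal sketch assuming plain Win32 GDI: the pixels live in a 32-bit buffer that a ray tracer would fill, and WM_PAINT just blits that buffer into the window. The window class name, buffer layout and the gradient test fill are placeholders, not anything from a particular tutorial.

#include <windows.h>
#include <vector>
#include <cstdint>

const int W = 640, H = 480;
std::vector<uint32_t> g_pixels(W * H);   // off-screen buffer, 0x00RRGGBB per pixel

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        // Repaint from the off-screen buffer whenever Windows asks for it.
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = W;
        bmi.bmiHeader.biHeight      = -H;        // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;
        StretchDIBits(hdc, 0, 0, W, H, 0, 0, W, H,
                      g_pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    // A ray tracer would write its pixel colors here; this is just a gradient.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            g_pixels[y * W + x] = (uint32_t)(((x * 255 / W) << 16) | ((y * 255 / H) << 8));

    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("RayWindow");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("RayWindow"), TEXT("Ray tracer output"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             W, H, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}

Because the buffer survives independently of the window, moving or covering the window no longer destroys the picture; WM_PAINT simply redraws it.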

But at that point all I had was a plain, flat-shaded circle. More work was needed to add real shading and make it appear 3D. There are tons of resources out there, algorithms and examples etc., but from what I found they mostly focus on the ray tracing methods and algorithms only; you're left to put it all together yourself. My example above was just my way of doing that.

arnecls
05-23-2005, 05:45 PM
I would recommend the "bible":
Real-Time Rendering
http://www.realtimerendering.com/

It has pretty much everything you need. Physically Based Rendering is a great book, but not for beginners - you're right about that.

Some terms you should look up:

- Triangle intersection tests using barycentric coordinates (see the sketch after this list)
- The rendering equation (Kajiya)
- BRDFs (have fun with this - quite complex for beginners, but necessary)
- Bounding volume hierarchies (plus Goldsmith / Salmon if you want to be fast)
- Some of the Kay and Kajiya papers on CiteSeer - they did a lot of work in that area
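For the first item, one common barycentric-coordinate approach is the Möller-Trumbore test. Here's a rough, self-contained sketch; the Vec3 type and the function names are just my own placeholders, not from any particular library:

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3   sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3   cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true if the ray (orig, dir) hits the triangle (v0, v1, v2).
// On a hit, t is the distance along the ray and (u, v) are the barycentric
// coordinates of the hit point.
bool intersectTriangle(Vec3 orig, Vec3 dir,
                       Vec3 v0, Vec3 v1, Vec3 v2,
                       double& t, double& u, double& v)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;      // ray is parallel to the triangle
    double invDet = 1.0 / det;
    Vec3 s = sub(orig, v0);
    u = dot(s, p) * invDet;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    v = dot(dir, q) * invDet;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * invDet;
    return t > EPS;                              // hit must lie in front of the ray
}

The (u, v) values you get back are exactly the barycentric coordinates, which later come in handy for interpolating normals and texture coordinates across the triangle.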

The main concept is quite easy.
You've got a camera position p, looking in direction d (normalized), and an up-vector u (normalized). That's enough to create an image plane (you only need a "right-vector", which can be calculated as d cross u). This plane lies some distance in front of the camera (the "near" distance, e.g. 1).
Now you have to "shoot" rays from this plane into the scene. This is done by dividing the image plane according to the desired image resolution (e.g. 640x480). This is easy too: just step from

p + d*near - up*(height/2) - right*(width/2)

to

p + d*near + up*(height/2) + right*(width/2)

to create the starting points. The ray's direction can be calculated by

startingPoint - p

Now you test every triangle in the scene against each generated ray (yes, that's a lot of calculations). If you hit one or more triangles, take the hit with the smallest distance and shade that point (you'll need the rendering equation for that). The resulting color is painted at the pixel the ray came from, e.g. (0, 0) for the first ray or (639, 479) for the last. If you hit nothing, the color is black.
Pretty simple.
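To make that concrete, here is a rough sketch of the whole loop, building on the Vec3 / intersectTriangle sketch above. The Triangle struct, the scene container and the shade() placeholder are just assumptions for illustration, not anyone's actual renderer:

#include <vector>
#include <cstddef>
#include <cmath>

Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 mul(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
Vec3 normalize(Vec3 a)     { return mul(a, 1.0 / std::sqrt(dot(a, a))); }

struct Triangle { Vec3 v0, v1, v2; };

// Placeholder "shading": just return white; the rendering equation goes here.
Vec3 shade(const Triangle&, Vec3) { return { 1.0, 1.0, 1.0 }; }

void render(Vec3 p, Vec3 d, Vec3 u,
            const std::vector<Triangle>& scene,
            std::vector<Vec3>& image,            // width*height pixels, row by row
            int width, int height,
            double planeWidth, double planeHeight, double near)
{
    Vec3 right = normalize(cross(d, u));
    Vec3 up    = normalize(cross(right, d));     // re-orthogonalize, just in case

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Starting point on the image plane, stepping from one corner
            // (-width/2, -height/2) to the other (+width/2, +height/2).
            double sx = (x + 0.5) / width  - 0.5;
            double sy = (y + 0.5) / height - 0.5;
            Vec3 start = add(p, add(mul(d, near),
                             add(mul(right, sx * planeWidth),
                                 mul(up,    sy * planeHeight))));
            Vec3 dir = normalize(sub(start, p)); // ray direction = startingPoint - p

            // Test every triangle, keep the closest hit.
            double bestT = 1e30;
            const Triangle* hit = 0;
            for (std::size_t i = 0; i < scene.size(); ++i)
            {
                double t, bu, bv;
                if (intersectTriangle(start, dir,
                                      scene[i].v0, scene[i].v1, scene[i].v2,
                                      t, bu, bv) && t < bestT)
                {
                    bestT = t;
                    hit = &scene[i];
                }
            }

            Vec3 color = { 0, 0, 0 };            // black if nothing was hit
            if (hit)
                color = shade(*hit, add(start, mul(dir, bestT)));
            image[y * width + x] = color;
        }
    }
}

The brute-force inner loop over every triangle is exactly the part that the bounding volume hierarchies mentioned below are meant to speed up.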
If you want reflections / refractions, you have to generate a new ray at the position where your ray hit the triangle, with the new direction, trace that ray recursively until no new ray is created or your maximum recursion depth is reached, and use the result as the color.

There are some ways to speed up the whole thing using bounding volume hierarchies - which I really recommend, because done correctly they can cut render times from hours down to seconds or less.

Once you've mastered that concept, you should have a look at photon mapping or Monte Carlo ray tracing for more realistic images.

Everything I said here is covered in the Real-Time Rendering book, too.

kuui
05-24-2005, 06:45 AM
Very cool, thank you a lot for the kind information. I'll get myself Real-Time Rendering; it seems to be the book that can give me what I need.

I've also got a lot of papers here, some of them by Kajiya, and have read through them. I think with some time and effort I'll get my head around it.

I also think I'll first start with a very basic raytracer to get comfortable with the rendering process itself, as I've only been coding engineering stuff until now, which sometimes involves 3D, but not in the part I work on.

Thanks a lot for the explanation; I really get now how a raytracer basically works. I know all this is a lot of math, linear algebra etc., but that's not a big problem. Well, at least not until it comes to the more complicated stuff, hehe.

Yes, I'm developing for Windows, as I know next to nothing about the Mac and personally I don't use Linux very much.

OK, time to start getting my head around this!

arnecls
05-24-2005, 08:18 AM
> Yes, I'm developing for Windows, as I know next to nothing about
> the Mac and personally I don't use Linux very much.

Well, there isn't that much difference between those systems if you use cross-platform libraries like SDL (which could be useful for you because of its 2D image functions). There are some differences between MSVC and GCC, but those are mostly small ones.
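For example, here is a minimal sketch of getting a pixel buffer on screen with SDL. This assumes the old SDL 1.x API that was current at the time; the resolution and the gradient fill are arbitrary placeholders where a ray tracer would write its own colors:

#include <SDL/SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);

    // Write pixel colors straight into the surface (here just a gradient).
    SDL_LockSurface(screen);
    Uint32* pixels = (Uint32*)screen->pixels;
    int pitch = screen->pitch / 4;               // pitch is in bytes, we index Uint32s
    for (int y = 0; y < 480; ++y)
        for (int x = 0; x < 640; ++x)
            pixels[y * pitch + x] =
                SDL_MapRGB(screen->format, x * 255 / 640, y * 255 / 480, 128);
    SDL_UnlockSurface(screen);
    SDL_Flip(screen);

    // Keep the window open until it is closed.
    SDL_Event e;
    while (SDL_WaitEvent(&e) && e.type != SDL_QUIT) {}

    SDL_Quit();
    return 0;
}

The same code compiles on Windows, Mac and Linux, which is the whole point of going through SDL instead of the native drawing routines.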
The real advantage of the Mac platform, in my opinion, is the AltiVec instruction set, because Intel's SSE documentation and handling are rather painful. If you want to do fast raytracing you should look at that too, as you can get close to realtime with it. A friend of mine at the University of Koblenz wrote a raytracer that gets 3-5 fps on a dual 2 GHz G5 that way (OK - plus a bunch of other tricks :).

Vertizor
05-24-2005, 03:47 PM
You're not going to be able to use double precision floats with AltiVec though, so you'd wind up falling back on the FPU (which isn't really a bad thing, hehe). SSE on the other hand (SSE2, specifically) does double precision float SIMD.
