How do video games render images?
Do they use raytracing? I would think not, since rendering even a simple sphere takes a few seconds. Game engines must use something even simpler?
Currently, video games use hardware-supported APIs to render images. Raytracing is just one way to calculate lighting, and it is normally not used in realtime game rendering. If you are interested in how images are produced, you can look at some books about Computer Graphics.
The process is called “rasterisation”. In contrast to the raytracing principle (shooting rays into the scene, checking whether something is hit, calculating the color at that hit point and coloring the pixel with it), it transforms triangles from 3D to 2D coordinates (in short).
The main principle is to transform vertices via some 4x4 matrices and fill the space in between three of them (a triangle). One key technique for that “filling the space in between” is the Z-buffer, which (in short) stores a z-value at each pixel. That’s necessary to paint overlapping triangles without (!) sorting them.
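To make that concrete, here is a minimal C++ sketch of that transform-and-divide step. The names (Vec4, Mat4, toScreen) are made up for illustration and don’t come from any particular API; it just shows the idea of multiplying a vertex by a 4x4 matrix and mapping the result to a pixel plus a depth value.

    // Minimal sketch of the vertex-transform step, assuming a row-major
    // matrix and a column-vector convention. Names are placeholders.
    #include <array>

    struct Vec4 { float x, y, z, w; };
    using Mat4 = std::array<std::array<float, 4>, 4>;  // 4x4 matrix, row-major

    // Multiply a 4D vector by a 4x4 matrix.
    Vec4 transform(const Mat4& m, const Vec4& v) {
        float in[4]  = { v.x, v.y, v.z, v.w };
        float out[4] = { 0, 0, 0, 0 };
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                out[row] += m[row][col] * in[col];
        return { out[0], out[1], out[2], out[3] };
    }

    // Clip-space position -> pixel coordinates plus a depth value for the Z-buffer.
    struct ScreenVertex { float x, y, depth; };

    ScreenVertex toScreen(const Vec4& clip, int width, int height) {
        // Perspective divide: this is what makes distant things smaller.
        float ndcX = clip.x / clip.w;
        float ndcY = clip.y / clip.w;
        float ndcZ = clip.z / clip.w;   // kept for the depth test later
        // Map from [-1, 1] normalized device coordinates to pixel coordinates.
        return { (ndcX * 0.5f + 0.5f) * width,
                 (1.0f - (ndcY * 0.5f + 0.5f)) * height,
                 ndcZ };
    }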
The whole process is a lot faster than raytracing but less accurate. Mirrors, for example, are harder / nearly impossible to create with that technique, and lighting is often far from “real” in terms of photometric consistency.
If you want to know more about it I would recommend a book like
Shirley, “Fundamentals of Computer Graphics”
Games render 3D objects by drawing thousands of textured triangles.
These triangles are drawn one horizontal line at a time, and the depth at each pixel is compared with the Z-buffer to determine if another triangle is occluding it. If no triangle is occluding the pixel, the depth at that pixel replaces the depth stored at that position.
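Roughly, that per-pixel test looks like the C++ sketch below. The buffer layout and the names (Framebuffer, plot) are assumptions for illustration, not any specific API; the interesting part is the comparison against the stored depth.

    // Hedged sketch of the per-pixel depth test described above.
    #include <vector>
    #include <limits>
    #include <cstdint>

    struct Framebuffer {
        int width, height;
        std::vector<uint32_t> color;   // one packed RGBA value per pixel
        std::vector<float>    depth;   // the Z-buffer: one depth value per pixel

        Framebuffer(int w, int h)
            : width(w), height(h),
              color(w * h, 0),
              depth(w * h, std::numeric_limits<float>::max()) {}  // start "infinitely far"

        // Called for every pixel covered by a triangle while walking its scanlines.
        void plot(int x, int y, float z, uint32_t rgba) {
            int i = y * width + x;
            if (z < depth[i]) {     // nearer than whatever was drawn here before?
                depth[i] = z;       // remember the new nearest depth
                color[i] = rgba;    // and overwrite the color
            }
            // else: an earlier triangle already occludes this pixel; do nothing.
        }
    };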
Raytracing is slow partly because it is not hardware accelerated. Modern games require video cards with special processors (GPUs) designed for drawing textured triangles really fast.
Yes, unfortunately, the raw power needed to raytrace a game at 30 frames/sec is just insane… literally, the machine would have to have a processor for every few screen pixels, just to handle it… but boy wouldn’t that be something. Give it a few years!
I would really love to understand how to develop a good graphics engine, for I have game ideas falling out of my ears… but alas, even my artwork is far behind the curve of my imagination… mostly I have to resign myself to writing, conceptualizing, heck, directing my games…
Now if only I had the networking panache necessary to land some funding! :bounce:
About realtime rendering, you also have to consider that the programs you use apply ‘antialiasing’ when making renders, whereas a realtime engine such as the one in a game often doesn’t apply antialiasing.
Stenciled shadows are sort of starting to appear now.
We used Volume shadows in Goliath. They look pretty good even if I do say so myself. The screenshots section should have some shots showing off the shadows.
At this point, game engines don’t do much with low level rendering. They mostly set up lists of triangles to draw and send them to the hardware, which does its thing.
What you’re probably asking is more about what game graphics hardware does to render an image. It really is not fundamentally different from how many 3d app renderers work, except for more modern GI renderers. A good analog to a game engine renderer is the 3dsmax scanline renderer, or Renderman.
Older hardware has what’s called a “Fixed Function Pipeline”. This means that you give the hardware some data, like triangles and textures, flip a few switches to say stuff like “use fog”, “use textures”, “don’t use vertex colors”… and the hardware just churns through it and puts something on the screen. Granted, a gross oversimplification, but that’s basically what happens.
Newer hardware uses shaders, which let developers alter the pipeline. It is no longer “fixed”.
To understand the difference, I’ll use 3dsmax as an example. Imagine you only got to use the Phong shader. Everything you do has to use Phong. For maps, you can use Bitmap. That’s all. Feel free to change some parameters for each, but you’re limited to that Material and that Map type. This is similar to a fixed function pipeline. For lighting, you can have point, spot, and directional lights. For shadows, you can have shadow maps if you render your scene more than once and use the results as a bitmap, projected onto a surface using phong.
See where I’m going with this? Fixed function pipelines are an exercise in creative use of limited assets.
Programmable pipelines on the other hand are wide open. They are analogous to being able to use the entire palette of materials and maps in 3dsmax, with the additional ability to program your own. You can also use any light type, including area lights. The only thing that will limit you is the speed at which the graphics hardware can run the shaders you write.
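If it helps, here is a loose C++ analogy for that difference. All of the names below are made up for illustration and are not actual Direct3D or OpenGL types: the fixed-function pipeline is basically a struct of switches you can flip, while the programmable pipeline accepts your own per-vertex and per-pixel routines.

    // A loose analogy, not real graphics-API code.
    #include <functional>

    struct Color  { float r, g, b; };
    struct Vertex { float position[3]; float normal[3]; float uv[2]; };

    // Fixed-function style: the pipeline is a black box, you only flip switches
    // from a finite list the hardware vendor decided on.
    struct FixedFunctionState {
        bool useFog         = false;
        bool useTexture     = true;
        bool useVertexColor = false;
        // ...and so on.
    };

    // Programmable style: you hand the pipeline your own "shaders" and it
    // runs them for every vertex and every pixel.
    struct ProgrammablePipeline {
        std::function<Vertex(const Vertex&)>             vertexShader;
        std::function<Color(const Vertex& interpolated)> pixelShader;
    };

    // Example: a toy shader pair doing simple diffuse lighting however the
    // developer likes, which is exactly the freedom fixed function never allowed.
    ProgrammablePipeline makeLambertPipeline(Color lightColor) {
        return {
            [](const Vertex& v) { return v; },   // pass-through vertex shader
            [lightColor](const Vertex& v) {
                // assume normals are normalized and the light shines down the +Y axis
                float ndotl = v.normal[1] > 0.0f ? v.normal[1] : 0.0f;
                return Color{ lightColor.r * ndotl,
                              lightColor.g * ndotl,
                              lightColor.b * ndotl };
            }
        };
    }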
So in that respect, games and pre-rendered stuff are growing more similar as time goes on.
This is a pretty good, though dated, article that I found real quick on Google…
Ehrm… trunks15, you are wrong too.
A Z-buffer is essentially a per-pixel depth buffer used to determine which polygon/triangle is closest to the viewport/camera at each pixel. The closest one will be rendered fully, the ones behind it only partially or not at all, depending on the depth values stored in the Z-buffer.
The Z-buffer is used in every engine and/or technology today. Some use other methods (like the W-buffer), but the purpose of all of these kinds of buffers is the same: to determine which polygons are rendered fully, partially or not at all.
DirectX uses it, as well as OpenGL. But most of the time the programmer isn’t confronted with these tasks, because both technologies are designed to handle them themselves.
Just open a book on graphics programming (I use ‘Computer Graphics: Principles and Practice - the C edition’), and you will see that Z-buffering (or another method like it, such as W-buffering) is a fundamental required step at some stage in the rendering process.
So if Max does its scanline render using the CPU, does that mean video cards implement the scanline renderer through the GPU?
Sort of, yeah. Not the exact same renderer of course, because every renderer differs from the others, but the rasterisation principles are the same. Max has a lot of additional features compared to your GPU (aside from the fact that modern GPUs are catching up). Many software developers realize this, and they program their software in such a way that it uses both the CPU and the GPU for rendering.
The poly rendering stuff and basic material/shader stuff will be done by your GPU for example, and more complex stuff like raytracing and GI will be done by the CPU, although I have seen a very nice implementation in the DirectX 9 SDK that fakes GI in a very convincing way.
You must be talking about the newest versions of DirectX, lau, because in the past programmers had to use something called ‘the painter’s algorithm’, which had (and has) the same purpose as the Z-buffer (yes, I knew what it is for) but was harder to deal with, because when you use a Z-buffer the renderer does it all for you.
I didn’t know that. I’ve only been into graphics programming for a year or two, so that ‘painter’s algorithm’ is pretty new to me.
I only knew about the Z/W buffers. But now you’ve made me curious. Is it faster than a Z/W-buffer implementation? Are there any advantages to using that system? I’ve been programming software rasterizers for the last few months, and if it could speed up my rendering process, I would be greatly interested!
Btw, that red font is really hard to read. I hope it’s not because you’re mad at me because I was being a smartass (I know I tend to be from time to time; it’s one of the things I’m working on to better myself).
Actually, there’s a realtime raytracing board being developed at a university in Germany. Here’s a link:
The painter’s algorithm is slower and less accurate, but it uses much less memory. The PlayStation 1 used a painter’s algorithm because it had no Z-buffer; they also chintzed on the UV calculations (which led to the “squiggle-wall” effects in racing games and the like).
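For reference, the basic idea in C++ looks roughly like the sketch below. Triangle and drawTriangle are placeholders, not from any particular engine: you sort the triangles far-to-near and just draw them in that order, with no Z-buffer needed.

    // Minimal painter's-algorithm sketch (placeholder types, not a real engine).
    #include <algorithm>
    #include <vector>

    struct Triangle {
        float depth;   // e.g. the average camera-space Z of its three vertices
        // ...vertex positions, texture reference, etc.
    };

    void drawTriangle(const Triangle&) {
        // rasterize and texture the triangle; omitted here
    }

    void paintersAlgorithm(std::vector<Triangle>& triangles) {
        // Back to front: the nearest triangles are drawn last and simply
        // overwrite whatever was painted behind them.
        std::sort(triangles.begin(), triangles.end(),
                  [](const Triangle& a, const Triangle& b) { return a.depth > b.depth; });
        for (const Triangle& t : triangles)
            drawTriangle(t);
        // Note: sorting by one depth value per triangle breaks down when triangles
        // intersect or overlap cyclically, which is part of why this is less
        // accurate than a per-pixel Z-buffer.
    }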
This “squiggle wall” you’re talking about, is that the jittery effect when things move around? I noticed that in all PS games.
It wasn’t really jitter, any jitter you saw was probably z-fighting. I wonder if I can find a screenshot, hang on.
[screenshot of a PS1 game showing the texture warping near the camera]
That’s the best one I could find. You can see the texture start to deform and stretch right there where it’s really close to the camera. It has to do with a trick where you only do one of the two calculations necessary to get your UVs to map properly. So it’s hella fast, but if the textures are too close to the camera the errors get exaggerated and your textures distort. In most games this manifested itself when the camera was very close to a wall: you would see a normally straight texture start to zig-zag up and down as it got closer and closer to the camera.
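In code, the difference being described is roughly the one below (a hedged sketch; u is the texture coordinate and w the perspective term at the two endpoints of a span, and t is the position along it). The PS1-style version skips the per-pixel divide, which is exactly where the distortion comes from.

    // Affine (PS1-style): interpolate u directly in screen space. Cheap, but it
    // ignores perspective, so textures "swim" when the span gets close to the camera.
    float affineU(float u0, float u1, float t) {
        return u0 + (u1 - u0) * t;
    }

    // Perspective-correct: interpolate u/w and 1/w linearly, then divide per pixel.
    // That extra division per pixel is the cost the PS1 avoided.
    float perspectiveU(float u0, float w0, float u1, float w1, float t) {
        float uOverW   = (u0 / w0) + ((u1 / w1) - (u0 / w0)) * t;
        float oneOverW = (1.0f / w0) + ((1.0f / w1) - (1.0f / w0)) * t;
        return uOverW / oneOverW;
    }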
Yeah, I was always amazed by the atrocious rendering on the PSone. No texture filtering (I think) plus the texture deformation makes for ugly, ugly games. I liked the N64 better, but of course it kinda sucked because its texture sizes couldn’t have been much greater than 64x64 :).
Hm… simple mirrors are no problem. For that, the right technique is stencil mirrors. Another way to render reflections on more complex geometry like spheres is dynamic cube maps.
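For anyone curious, here is a rough outline of that stencil-mirror trick in classic fixed-function OpenGL. drawMirrorQuad() and drawScene() are placeholders for your own drawing code, and the sketch assumes the mirror lies in the y = 0 plane; the exact ordering of passes varies between implementations.

    // Rough stencil-mirror outline (classic fixed-function OpenGL).
    #include <GL/gl.h>

    void drawMirrorQuad();   // draws just the mirror surface (application code)
    void drawScene();        // draws everything else (application code)

    void renderWithMirror() {
        // 1. Mark the mirror's pixels in the stencil buffer (no color/depth writes).
        glEnable(GL_STENCIL_TEST);
        glClear(GL_STENCIL_BUFFER_BIT);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawMirrorQuad();

        // 2. Draw the scene mirrored about the mirror plane, but only where the
        //    stencil value is 1, i.e. only "inside" the mirror.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glPushMatrix();
        glScalef(1.0f, -1.0f, 1.0f);   // reflect about y = 0
        glCullFace(GL_FRONT);          // the reflection flips triangle winding
        drawScene();
        glCullFace(GL_BACK);
        glPopMatrix();
        glDisable(GL_STENCIL_TEST);

        // 3. Draw the mirror surface itself (often blended so the reflection
        //    shows through), then the rest of the scene normally.
        drawMirrorQuad();
        drawScene();
    }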