Game Lighting


OneToe
01-16-2007, 11:07 AM
Hi guys,

As I look through the newest releases of computer games, I notice that game development must have made a huge step concerning realistic lighting techniques. Could you please explain some basic techniques and how they produce / fake such realistic lighting?
I don't know much about engines and special features, just the basics. Maybe someone has some links about new lighting features in engines...

Thanks
onetoe

QMag
01-16-2007, 01:40 PM
The most basic lighting technique is dynamic lights. These have a drawback, though: current hardware usually supports only eight of them at a time, and plain dynamic lighting doesn't look very good. Another technique, probably the most used in AAA games at the moment, is per-pixel lighting. With per-pixel lighting and normal maps / displacement maps you can make a low-res mesh look like a high-res one. Normal mapping works by having a shader combine the mesh's interpolated normal with the normal stored in the normal map, thus faking the detail of a high-res model. Virtual displacement mapping modifies the texture coordinates according to the displacement map supplied to the shader, thus giving the model additional faked depth. I don't know too much about non-virtual displacement mapping, but it should be more realistic than the virtual kind, although it probably also demands more firepower from your machine. Displacement maps can also be used to shade the model and add more depth to the texture.
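To make per-pixel lighting concrete, here is a minimal sketch in plain Python standing in for a pixel shader (illustrative only, not real shader code): the per-pixel normal is decoded from a normal-map texel and drives the Lambert N.L diffuse term. All names and values are made up for the example.

# Minimal sketch of per-pixel, normal-mapped diffuse (Lambert) lighting.
# Plain Python standing in for a pixel shader; all values illustrative.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decode_normal(rgb):
    # Map a normal-map texel from [0,255] color to a [-1,1] unit vector.
    return normalize(tuple(c / 255.0 * 2.0 - 1.0 for c in rgb))

def diffuse(normal, light_dir, light_color, albedo):
    # Lambert term: brightness scales with the cosine of the angle to light.
    n_dot_l = max(0.0, dot(normal, normalize(light_dir)))
    return tuple(a * l * n_dot_l for a, l in zip(albedo, light_color))

# A flat wall faces +Z, but the normal map says this texel tilts sideways:
texel = (190, 128, 220)              # raw RGB read from the normal map
n = decode_normal(texel)             # perturbed per-pixel normal
print(diffuse(n, (0.3, 0.5, 1.0), (1.0, 1.0, 1.0), (0.8, 0.6, 0.4)))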

One very impressive looking lighting technique is ambient occlusion. It calculates the occlusion of ambient light (per pixel?) and produces very nice looking ambient occlusion maps, depending on the implementation. This is not too popular in games yet, maybe due to its slowness, but at least the nVidia guys seem to have implemented an ambient occlusion technique that runs at a somewhat nice speed.

With lightmapping you can also achieve pretty nice static results. Light maps can be animated too, but this could require a slow recalculation of a portion of the light map if dynamic shadows are desired. Basically, light mapping can be great for static portions of the scene.
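A minimal sketch of how a baked lightmap is typically applied at draw time, assuming the common scheme where the baked light simply modulates the base texture (values are illustrative; real engines do this per-texel on the GPU):

# Apply a baked lightmap: multiply base color by the stored light value.
def apply_lightmap(albedo, lightmap_texel):
    return tuple(a * l for a, l in zip(albedo, lightmap_texel))

albedo = (0.8, 0.7, 0.6)        # base texture color at this point
baked_light = (1.0, 0.9, 0.7)   # warm lighting baked offline into the map
print(apply_lightmap(albedo, baked_light))   # lit color, no runtime lights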

This might clear things up a bit ;)

Ian Jones
01-16-2007, 02:50 PM
OpenGL or DirectX?

OneToe
01-16-2007, 03:04 PM
As far as I know, the majority is based on DirectX... I don't know why they decided DirectX was the better one... nice question, too ;)
onetoe

Ian Jones
01-16-2007, 03:36 PM
Sorry, who is "they"? We're lacking some context here. What is it you're wanting... general resources on DirectX lighting?

As for techniques, well, there are many. Often the best are the ones which fake lighting phenomena. "Light maps", for example, are texture elements which are pre-rendered in 3D software and then used as textures on models to fake the lighting. Often this is combined with realtime lights as well, but for the sake of speed this method is very useful. If you are familiar with AO (Ambient Occlusion) rendering in 3D software, well, this is often used to generate light maps for this purpose. An AO pass generates a very useful generic lighting solution which doesn't indicate any specific light direction (highly useful for the dynamic nature of realtime gfx).

Pixel shaders offer control over texture effects and can be used to simulate lighting phenomena as well. These are basically small programs which run per fragment (candidate pixel), calculating pixel values based on many factors such as the relative angle to light sources.

Normal maps offer a texture-based solution to generate fake lighting, which makes a flat polygon respond more realistically to lighting direction. A sort of advanced bump-map effect, if you like.

Well, there are heaps of things out there... shadow maps, raytracing. I'm not an expert though, so I'll leave you with that. ;)

OneToe
01-16-2007, 06:35 PM
Hi Ian,

Thank you for your reply!
I see, I have to explain how I want to approach this subject. :)
I read an article about the Hellgate: London engine, and I also read up on raytracing and rasterization. As my sources told me, rasterization is still the main realtime rendering technique, because raytracing needs a level of performance that home users' hardware can't provide at the moment (I downloaded a realtime raytracer engine (http://www.winosi.onlinehome.de/Ravi.htm), which gives some very simple examples). But then I found out about different lighting techniques in current game engines which also seemed to involve raytracing techniques, for example raytraced shadows. Maybe I misunderstood the information, but it did not sound like a faked version of the "real" raytraced shadow. Maybe it's also affordable at the current standard performance level to render some features, modules, or parts of the raytracing technology in realtime. Maybe...
I hope I made clear what confuses me.

Well, now I'm trying to get some understandable basic information on the current standard of technology in game engines. So I'm asking what they, the developers of games, decided to use and why they consider it the best choice. Concerning the OpenGL/DirectX question, I think it's a matter of current gamers' hardware, which is built to perform best with DirectX-based games. But in fact I do not know what makes DirectX more useful for games than OpenGL.

Concerning light maps: I've heard of those. I work with Cinema 4D and also used the bake function in a game design project to bake my shaders into maps. I noticed the option to bake the AO channel into an AO map, but at the time I didn't know what to do with it. Now it's getting a bit clearer, as you tell me about faking the realistic spread of light on different surfaces. But what if the light source changes its position? The map is static, so wouldn't it be a mistake to "burn" a static lighting situation into the environment?
I hope again you see what confuses me and where I'm thinking the wrong way...

Concerning pixel shaders: I can't follow you... "calculating pixel values"... could you give some more detail?

Concerning normal maps: I know about those "immaterial, optical" ways to fake higher surface detail without fixing the lighting situation of the surface into the map. I'm a bit confused about the difference between normal maps, bump maps, and displacement maps... Earlier I thought normal maps "really" deform the surface because they hold more information than bump maps, but that wouldn't be logical in game design... Could you please explain the main differences briefly?

You see, I have more gaps than clear knowledge... :rolleyes:
Thank you very much for your efforts! I hope my English is not too bad...
OneToe

HalfVector
01-16-2007, 11:59 PM
Hi.

Well, I'm not an expert but I'll try to clarify some points.

for example raytraced shadows. Maybe I misunderstood the information, but it did not sound like a faked version of the "real" raytraced shadow. Maybe it's also affordable at the current standard performance level to render some features, modules, or parts of the raytracing technology in realtime. Maybe...
There are two major techniques used nowadays in games. Stencil Shadow Volumes and Shadow Maps.

Well, now I'm trying to get some understandable basic information on the current standard of technology in game engines. So I'm asking what they, the developers of games, decided to use and why they consider it the best choice. Concerning the OpenGL/DirectX question, I think it's a matter of current gamers' hardware, which is built to perform best with DirectX-based games. But in fact I do not know what makes DirectX more useful for games than OpenGL.
Well, I personally prefer Direct3D, but it's not a matter of better performance over OpenGL (possible performance differences between these two APIs are due more to driver optimization than anything else); it's because of its API design.

But what if the light source changes its position? The map is static, so wouldn't it be a mistake to "burn" a static lighting situation into the environment?
I hope again you see what confuses me and where I'm thinking the wrong way...
Obviously, in the case of dynamic lights, you have to do per-vertex or per-pixel dynamic lighting.

If per-vertex, the lighting equation is evaluated for each vertex (faster but poor quality, especially when dealing with specular highlights). If per-pixel, the lighting is evaluated for each fragment (slower but very high quality, especially if you use normal maps to add fine details). If you ask me, with current hardware there's no reason not to use per-pixel lighting, although it will depend on the situation. For example, there's no need to evaluate a complex formula for each fragment when an object is so distant from the camera that you can barely see its details. In that case, why not use a per-vertex solution and a less complex lighting model (see below)? The sketch just below shows the difference in practice.
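Here is a minimal Python sketch of that per-vertex versus per-pixel difference, contrived so a specular highlight falls in the middle of a triangle, where per-vertex shading cannot see it (all vectors and the shininess value are made up for the example):

# Per-vertex (Gouraud) vs per-pixel (Phong) evaluation of a specular term.
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def specular(normal, half_vector, shininess=64):
    return max(0.0, dot(normalize(normal), half_vector)) ** shininess

half = (0.0, 0.0, 1.0)                      # halfway between light and view
vertex_normals = [(0.5, 0.0, 0.86), (-0.5, 0.0, 0.86), (0.0, 0.5, 0.86)]

# Per-vertex: evaluate at the corners, then interpolate the *results*.
per_vertex = sum(specular(n, half) for n in vertex_normals) / 3.0

# Per-pixel: interpolate the *normals*, then evaluate at the center pixel.
center_normal = tuple(sum(c) / 3.0 for c in zip(*vertex_normals))
per_pixel = specular(center_normal, half)

print(f"per-vertex: {per_vertex:.4f}  per-pixel: {per_pixel:.4f}")
# The per-pixel value is far brighter: the highlight sits between vertices.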

Anyway, once you choose one method or the other, you have to decide what lighting model to use to evaluate the lighting equation. For example, Phong, Blinn-Phong, Oren-Nayar, Cook-Torrance, etc. Keep in mind that some models are more complex than others, and hence take more time to evaluate. Also, some models are more appropriate for certain materials than others.
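For a concrete feel of two of those models, here is a minimal Python sketch of the Phong and Blinn-Phong specular terms (illustrative vectors, not shader code): Phong reflects the light direction about the normal and compares it with the view vector, while Blinn-Phong uses the cheaper half-vector.

# Phong vs Blinn-Phong specular terms, side by side.
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(n, l, v, shininess):
    # Reflect L about N: R = 2(N.L)N - L, then compare R with the view vector.
    n_dot_l = dot(n, l)
    r = tuple(2 * n_dot_l * nc - lc for nc, lc in zip(n, l))
    return max(0.0, dot(r, v)) ** shininess

def blinn_phong_specular(n, l, v, shininess):
    # Half vector H = normalize(L + V); compare it with the normal instead.
    h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))
    return max(0.0, dot(n, h)) ** shininess

n = normalize((0.0, 0.2, 1.0))   # surface normal
l = normalize((0.3, 0.4, 1.0))   # direction to the light
v = normalize((-0.2, 0.1, 1.0))  # direction to the viewer
print(phong_specular(n, l, v, 32), blinn_phong_specular(n, l, v, 32))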

Also, as graphics hardware evolves and becomes more powerful, new techniques are being developed to achieve real-time global illumination (or at least faked global illumination), but I think these techniques will take some time to settle down into the game industry (again, I'm not an expert, so I could be wrong!).

I'm a bit confused about the difference between normal maps, bump maps, and displacement maps... Earlier I thought normal maps "really" deform the surface because they hold more information than bump maps, but that wouldn't be logical in game design... Could you please explain the main differences briefly?
A bump map is a grayscale image where each pixel defines a height, while a normal map is a color map where each pixel encodes a normal. You can extract a normal map from a bump map by calculating the slope at each pixel (the height difference between adjacent pixels, horizontally and vertically). Neither of these techniques disturbs the geometry.
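A minimal sketch of that extraction in plain Python: the slope at each pixel comes from the height differences between neighbors, and the result is normalized into a per-pixel normal (the strength factor is an illustrative choice):

# Derive a normal map from a height (bump) map via finite differences.
import math

def height_to_normals(height, strength=2.0):
    h = len(height)
    w = len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Differences between neighbors, clamped at the image border.
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            l = math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx / l, ny / l, nz / l)
    return normals

# Tiny 3x3 height map with a bump in the middle:
bump = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
for row in height_to_normals(bump):
    print(["(%.2f, %.2f, %.2f)" % n for n in row])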

On the other hand, we have the displacement map, which is like a bump map (a grayscale image where each pixel represents height), but in this case the geometry is displaced along the normal by as much as the displacement map indicates at the point of interest.
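A minimal sketch of that displacement step, assuming the height values have already been sampled from the displacement map for each vertex:

# Displacement mapping on real geometry: p' = p + n * height * scale.
def displace(vertices, normals, heights, scale=0.5):
    out = []
    for (p, n, h) in zip(vertices, normals, heights):
        out.append(tuple(pc + nc * h * scale for pc, nc in zip(p, n)))
    return out

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
norms = [(0.0, 0.0, 1.0)] * 3          # flat patch facing +Z
hmap = [0.2, 1.0, 0.6]                  # sampled displacement-map values
print(displace(verts, norms, hmap))     # vertices pushed up along +Z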

Hope that helps!

OneToe
01-17-2007, 12:45 AM
Thanks for replying, HalfVector!

I think I got the difference between per-vertex and per-pixel. So obviously the quality of the per-vertex technique depends on the polycount of the objects? I didn't get how this decision influences your choice of lighting model...

Concerning the choice of OpenGL/DirectX: I see, it depends on your preference regarding the API design, but aren't there other factors to consider?

As far as I know GI is based on raytracing; how could they implement it in a traditional engine system? I mean, it was a huge step for game development to light the scene with GI, but how do they work it out?

Concerning the map types: in fact, all three maps can be used in a game engine to simulate higher surface detail without a "real" increase in polycount. Both bump and displacement maps are grayscale maps, so could you just take a bump map and get some displacement using it? Does the normal map contain information about all three coordinates of a vertex, so it can deform the vertices in "all directions"?

Thanks for your help!
onetoe

QMag
01-17-2007, 06:19 AM
As far as I know GI is based on raytracing; how could they implement it in a traditional engine system? I mean, it was a huge step for game development to light the scene with GI, but how do they work it out?

OpenGL is based on raytracing? Huh? Never heard of that before...

Probably all DirectX techniques can be implemented in OpenGL as well. At the moment there isn't too much difference between these APIs, I think, but when the new DirectX comes out the situation might change.

HalfVector
01-17-2007, 03:54 PM
I think I got the difference between per-vertex and per-pixel. So obviously the quality of the per-vertex technique depends on the polycount of the objects?
Exactly. If you want high quality per-vertex lighting, you'll need a high poly count. And even then, you could get wrong specular highlights.

I didn't get how this decision influences your choice of lighting model...
One of the reasons to choose one lighting model or another is the material you're trying to emulate. For example, if the material is plastic, I'll use Phong/Blinn-Phong, but if the material is some kind of metal, I'll go for Cook-Torrance. If the material has anisotropic highlights, I'll go for anisotropic Ward or something like that. In the case of hair I'll go for something like Kajiya-Kay (or some similar technique).

But as I said, it'll depend on your hardware target, because some of those techniques take a long time to evaluate. Moreover, some of them take a lot of instructions, so not all of them can be executed on every graphics card. For example, if I remember correctly, Oren-Nayar didn't fit in a shader model 2.0 pixel shader. In such cases, you could use look-up tables to speed things up / reduce the instruction count (but getting worse quality in return). For example, I think Doom 3 uses the Phong model and look-up tables, but obviously, the base hardware target for Doom 3 was really low.
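A minimal sketch of that look-up-table trick in plain Python (the table size and shininess are illustrative choices, not what any particular game uses): the expensive pow() is precomputed once and replaced by a cheap indexed read, at the price of some quantization.

# Precompute pow(x, shininess) into a small 1D table, built once at load.
TABLE_SIZE = 256
SHININESS = 64
SPEC_LUT = [(i / (TABLE_SIZE - 1)) ** SHININESS for i in range(TABLE_SIZE)]

def specular_lut(n_dot_h):
    # Approximate n_dot_h ** SHININESS with a clamped table read.
    i = int(max(0.0, min(1.0, n_dot_h)) * (TABLE_SIZE - 1))
    return SPEC_LUT[i]

print(specular_lut(0.98), 0.98 ** SHININESS)   # close, but quantized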

Concerning the choice of OpenGL/DirectX: I see, it depends on your preference regarding the API design, but aren't there other factors to consider?
Yes. For example, portability. If you need UNIX-like OS compatibility, your only choice is OpenGL. On the other hand, if you only care about Microsoft platforms, then you could use Direct3D (in the case of the Xbox it's the only choice).

As far as I know GI is based on raytracing; how could they implement it in a traditional engine system? I mean, it was a huge step for game development to light the scene with GI, but how do they work it out?
There are some papers/books (the ShaderX series, for example) that describe techniques to get global illumination. I can't go into detail on this because I haven't experimented with them, but as far as I know they're not based on ray tracing (there are techniques other than raytracing to obtain GI).

so could you just take a bump map and get some displacement using it?
Yes, but only if you are using a displacement mapping technique.

Does the normal map contain information about all three coordinates of a vertex, so it can deform the vertices in "all directions"?
Not really. The normal map doesn't encode vertex coordinates but per-pixel normals.

OneToe
01-18-2007, 04:57 PM
OK,
I now know something about:
lighting models, dynamic lights, shadow maps, stencil shadows, surface-detail maps (bump, displacement, normal), OpenGL/DirectX.

I would like to put this information into some relation now,
so I'll try to analyze screenshots of current computer games.
Here are some examples I would like to discuss:

1. Oblivion screenshot (http://www.looki.de/kmx/modul_cms/uploads/2006/3/1142698490_43765_big.jpg)
I see a realistic overbright sky, a nice contrast.
I heard this is produced by an HDR technique - could you explain this?

2. Oblivion screenshot (http://www.looki.de/kmx/modul_cms/uploads/2006/3/1142698435_43752_big.jpg)
This is a nice lighting example. I see very realistic lighting/shadowing on the surfaces of the objects - but they do not seem to cast shadows...
Could you get these bumps on the wall using bump maps or normal maps?

-------
I've got more pics, but that's enough for now ;)
Good day
onetoe

HalfVector
01-18-2007, 08:10 PM
1. Oblivion screenshot (http://www.looki.de/kmx/modul_cms/uploads/2006/3/1142698490_43765_big.jpg)
I see a realistic overbright sky, a nice contrast.
I heard this is produced by an HDR technique - could you explain this?
That's right. HDR lets you work with values outside the [0,1] range, and that means you can obtain that kind of contrast. Now, because the output to the monitor still needs to be in the [0,1] ([0,255]) range, you have to map those values back into that range (using tone mapping). Also, you can apply a bloom filter in post-processing to obtain the glare effect you can see exaggerated here (http://www.beyond3d.com/interviews/oblivion/images/es4_ss2.jpg).
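A minimal sketch of that tone-mapping step; the simple Reinhard operator used here is just one common choice, not necessarily what Oblivion does:

# Tone mapping: compress HDR radiance from [0, inf) into [0, 1).
def reinhard(hdr_color):
    # Reinhard operator, per channel: x / (1 + x).
    return tuple(c / (1.0 + c) for c in hdr_color)

bright_sky = (4.0, 3.5, 3.0)     # HDR radiance, well above 1.0
print(reinhard(bright_sky))      # ~ (0.80, 0.78, 0.75): detail preserved
print(reinhard((0.2, 0.2, 0.2))) # dark values are compressed only slightly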


2. Oblivion screenshot (http://www.looki.de/kmx/modul_cms/uploads/2006/3/1142698435_43752_big.jpg)
This is a nice lighting example. I see very realistic lighting/shadowing on the surfaces of the objects - but they do not seem to cast shadows...
Could you get these bumps on the wall using bump maps or normal maps?
Yes, you can obtain those bumps with a normal map. Moreover, I think Oblivion uses parallax mapping, which uses a normal map plus a height map to increase the sense of depth.

What I don't know is which kind of parallax Oblivion uses: plain parallax mapping, or parallax occlusion mapping, which is a more advanced technique (and slower too!) developed, I think, by ATI's Natalya Tatarchuk.
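A minimal sketch of plain parallax mapping's core step, with illustrative scale/bias values: the texture coordinate is shifted along the tangent-space view direction by an amount read from the height map, so high texels appear to slide toward the viewer. Parallax occlusion mapping instead ray-marches the height field, which is why it is more accurate and slower.

# Plain parallax mapping: offset uv along the view direction by the height.
def parallax_offset(uv, height, view_ts, scale=0.05, bias=-0.025):
    # view_ts is the normalized view direction in tangent space (x, y, z).
    h = height * scale + bias
    return (uv[0] + view_ts[0] * h, uv[1] + view_ts[1] * h)

uv = (0.50, 0.50)
print(parallax_offset(uv, height=1.0, view_ts=(0.6, 0.2, 0.77)))  # tall texel
print(parallax_offset(uv, height=0.0, view_ts=(0.6, 0.2, 0.77)))  # low texel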

Parallax occlusion mapping is better because it is more accurate than parallax mapping, and you can obtain soft self-shadows! Take a look at this video (http://www.arrakis.es/~jonathan01/stratos/pom_self_shadows_enabled.avi) for a demonstration of parallax occlusion mapping with soft self-shadows (hard to believe what you see is just a plane built from two triangles :)). It's from an engine I was developing with a friend of mine. The engine was formerly known as Haddd, but now it's open source and its name is Jade (http://www.codeplex.com/JADENGINE/Wiki/View.aspx?title=Features).

Also, to extend the info, you could take a look at a technique similar to parallax occlusion mapping: relief mapping (http://fabio.policarpo.nom.br/relief/index.htm).

Hope that helps.

OneToe
01-20-2007, 07:42 PM
Good afternoon,

The bloom/HDR thing.
My level of knowledge: the bloom filter is an effect that imitates the human eye's light reception and processing. It processes every frame after it has been rendered, using 2D techniques, and adds glow to bright parts of the frame depending on the lighting situation; in Call of Duty 2 the bloom filter adds a strong glare when the view moves toward the sky. This effect is very fast to render, but its use is often a bit overdone. The HDR technique can simulate the same thing; it is slower, but its result is better
... I didn't quite understand how it works, but it must be a post-effect too...
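For reference, a minimal sketch of that bloom pipeline on a single row of pixels (the threshold and blur kernel are illustrative choices): keep only values above a threshold, blur them, and add the result back onto the frame.

# Bloom post-process: bright pass, blur, additive combine.
def bloom(row, threshold=0.8):
    bright = [max(0.0, p - threshold) for p in row]      # bright pass
    blurred = []
    for i in range(len(bright)):                         # 3-tap box blur
        window = bright[max(0, i - 1): i + 2]
        blurred.append(sum(window) / len(window))
    return [p + b for p, b in zip(row, blurred)]         # add glow back

frame_row = [0.2, 0.3, 1.5, 0.4, 0.2]    # one hot pixel (e.g. the sun)
print(bloom(frame_row))                   # glow bleeds onto the neighbors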

I'll post more stuff later; I've got a lot of work these days.
Thank you!
onetoe

--- later post ---

The map thing.
My level of knowledge: there are two types of maps used for adding detail to a surface: the normal map and the height map. You can use a height map (grayscale) for bump mapping or for displacement mapping.
Bump mapping works by shading the surface as if it were bumpy and covering it with the newly shaded texture. This simulates higher surface detail and is very fast to process. If you look at the contour of a bump-mapped surface you will notice no change, because bump mapping does not deform the surface; the deformation is only mapped onto the surface as a flat texture. It's view-dependent and polycount-independent.
Displacement mapping requires the same type of map (a grayscale height map), but it "physically" deforms the surface by moving the vertices to new positions depending on the height map information, generally along the normal. Its quality depends on polycount, and if you want to simulate surface detail similar to bump mapping, the polycount has to be relatively high, so it's much slower to process. The advantages are a properly deformed contour and the possibility of visually overlapping structures.
Normal maps contain much more information than height maps. The three color channels R, G, B correspond to the three dimensions X, Y, Z, so you can define an exact surface orientation on a 2D image, relative to a base mesh. You can also store reflection information in a normal map by using the alpha channel.

My questions:
1. Does bump mapping work via the pixel shader, and displacement mapping via the vertex shader? Or did I misunderstand the idea of these shaders?
2. Is normal mapping a vertex-dependent deformation? Does it work the same way displacement mapping works?
3. How does parallax mapping combine the normal map and the height map, and how is it possible to render soft shadows with it? I think a disadvantage of parallax mapping is the color-map stretching at the border points of the deformed surface. Nevertheless, this technique is unbelievable.

Shadow question: do you render the self-shadowing of an object and the shadow it casts in the same process, the same "engine module"?

Thanks!
onetoe

OneToe
01-21-2007, 01:59 PM
I just found out normal mapping is only a visual deformation technique, too. So it's similar to bump mapping, but it seems to have more features and advantages...
onetoe

PS: sorry for double-posting

--- edit1 ---

I found a very nice link: normal map photography (http://zarria.net/nrmphoto/nrmphoto.html)
This explains how you can create a normal map yourself.
I think a normal map contains the surface information needed for any light direction,
so it matches the surface shading to the actual lighting situation.
Maybe this is faster to render than bump mapping, or is there another advantage?
onetoe

Ian Jones
01-21-2007, 03:01 PM
Normal maps are better at representing the surface than bump maps. Bump maps are just grayscale maps which indicate raised or depressed areas based on the grayscale value. Normal maps indicate the surface direction (the surface normal) at every pixel, which is much more useful when trying to determine how the light should react to it. That's a simplified explanation.
