disappointed in Doom3 graphics


ViPr
08-21-2003, 03:23 PM
i've been making my own engine that can use diffuse, specular, normal, height, and gloss maps. Doom3 cannot do gloss maps, which i consider essential. also i looked at some of the Doom3 textures from the leaked alpha demo and a lot of what we think is bump-mapping is faked just like in old engines, which is strange considering Doom3 can do bump mapping.

many programmers want to make the final generation of 3d engines, but the big problem is that there are no models for us to test or show off our engines. Doom3 models are not satisfactory (because of the lack of gloss maps and the misuse of the other textures by the id software artists). the 3d programming community needs artists who can donate models that use all the types of textures i mentioned above properly, without faking anything, so we can advance game graphics technology at a better pace.

ngrava
08-22-2003, 03:06 AM
pardon my ignorance, but what are gloss maps?

-=GB=-

playmesumch00ns
08-22-2003, 09:22 AM
Specular maps under another name? Game programmers have weird names for things.

ViPr: it's all a fake anyway!

ViPr
08-22-2003, 10:13 AM
all the 3d artists should know what gloss mapping is. it's not the same as specular mapping. a specular map controls how bright, and in what colors, specular highlights and other reflections appear on a surface, while a gloss map controls how blurry those highlights and reflections look. gloss mapping is used to state how rough or smooth a surface is in a general sense, for bumps too small to pick out individually with bump mapping.
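for example, here's a rough sketch of how the two maps could feed a Blinn-Phong highlight (just an illustration; the exponent remap and the names are made up, not how any particular engine does it):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(dot3(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Blinn-Phong highlight for one texel.
 * spec_rgb comes from the specular map: how bright and in what color the highlight is.
 * gloss comes from the gloss map (0..1): how tight or blurry the highlight is,
 * remapped here to a specular exponent (the remap range is arbitrary). */
static vec3 specular_term(vec3 n, vec3 l, vec3 v, vec3 spec_rgb, float gloss)
{
    vec3 half_vec = { l.x + v.x, l.y + v.y, l.z + v.z };
    vec3 h = normalize3(half_vec);
    float ndoth = dot3(n, h);
    if (ndoth < 0.0f) ndoth = 0.0f;
    float exponent = 2.0f + gloss * 126.0f;   /* rough surface -> broad highlight, smooth -> tight */
    float s = powf(ndoth, exponent);
    vec3 result = { spec_rgb.x * s, spec_rgb.y * s, spec_rgb.z * s };
    return result;
}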

playmesumch00ns
08-22-2003, 11:06 AM
AAaaahhh I see. Ok that makes sense. Sorry;)

Gooberius
08-22-2003, 11:37 AM
so a gloss map controls the specular power on a per texel basis? if that's the case then this is easy to implement on pixel shader 2.0 class cards. If the gloss map is a real floating point surface then you can just read from that texture and use that result in the pow instruction later. Presumably this isn't in Doom3 because the art pipeline had to be stabilised reasonably early in development, and cards around that time (GF2, at best GF3/Radeon8500) couldn't do such an operation, so it's not worth including in the art resources. Even if they could do that op on the card there probably wouldn't be enough free instruction slots to do the full lighting calculation *and* the per texel specular power lookup. (ok, maybe multipass, but we're potentially heading into nasty fillrate problems)

There are good reasons, I'm sure. Perhaps this is why Carmack recently said "I have one more rendering engine to write" ;)

ViPr
08-22-2003, 01:22 PM
yah, i believe Carmack's last engine will finally add gloss mapping and support floating-point instead of just integer textures.

Gooberius
08-22-2003, 09:28 PM
and quite probably to implement a generalisation of shadowing. Stencil shadow volumes are pretty naff really. Shadow buffers are definitely the way to go, but require some work from the IHVs before they're really usable.
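The core of a shadow buffer lookup is tiny; per shaded point it's something like this (grossly simplified - no filtering, and the bias is just a placeholder value):

/* Shadow buffer test: the point has already been projected into the light's
 * clip space (x, y, z all in [-1, 1]). shadow_map is the depth buffer
 * rendered from the light's point of view, size x size texels. */
static int point_is_lit(float x, float y, float z,
                        const float *shadow_map, int size, float bias)
{
    int u = (int)((x * 0.5f + 0.5f) * (size - 1));   /* clip space -> texel coords */
    int v = (int)((y * 0.5f + 0.5f) * (size - 1));
    if (u < 0 || u >= size || v < 0 || v >= size)
        return 1;                                    /* outside the map: treat as lit */

    float stored = shadow_map[v * size + u];         /* nearest depth the light saw */
    return (z * 0.5f + 0.5f) <= stored + bias;       /* anything deeper is in shadow */
}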


(all this is IMO, of course :))

erilaz
08-23-2003, 03:30 AM
i looked at some of the Doom3 textures from the leaked alpha demo ...

This is because it's an alpha demo. In a recent Carmack interview I read, I vaguely recall him saying the engine etc. was not complete when that alpha was "released".

ViPr
08-23-2003, 07:23 AM
i also read a recent interview, and it said that the Doom3 graphics engine was completed many years ago and has not changed since that time.

erilaz
08-23-2003, 09:45 AM
Well there you go! I didn't read that one!:D Ah well...

PoR3
08-08-2004, 09:59 PM
Very interesting man, for sure you know your subject :)
Didn't know about that; all the info we could gather about Doom3 was interviews about gameplay, release dates, etc...
Thanks for the info!

Sashelas
08-09-2004, 03:50 PM
Doom3 failed on Win2k3 Server (most of my machines) and ran slowly on my XP laptop at 1024x768, while most other games I play, such as City of Heroes and Dark Age of Camelot, run smoothly at 1280x1024.

I'm not too impressed. Are the other first person shooter game engines faster? I'm particularly curious about the new Unreal engine. Does anyone have the Unreal engine sdk?

Tom Pawlik
08-19-2004, 11:19 AM
As mentioned earlier, the Doom3 engine was developed for GF1-class hardware, built on the NV rasterizer path and not on fragment shaders. This is the point! The normal maps in Doom3 are only 8-bit in precision, and that is not enough to store something really smooth.
Doom3 is coming out now, but the techniques used in this engine are already dead.
Stencil shadows will not survive the next 2 years; they are much too inefficient and tricky (Carmack's reverse is patented = it sucks very badly).
Look here at what the next engines will look like:
http://www.unrealtechnology.com/html/technology/ue30.shtml
This one does not use stencil shadows; it uses shadow mapping with manipulation of the offscreen rendertarget for the shadow, all at fp precision.
Doom3 is cool, but from a technical point of view it was dead before it was out. Carmack was always crying out for fp precision, but for Doom3 it was too late to build the whole game upon it.

For people who want to use newer techniques:
You have to speak with the artists; they are also human. For example, look here:
http://209.132.68.66/zbc/showthread.php?t=20310
There are two meshes with normal maps and all the stuff. Try these for your engines (the normal maps are 16-bit).
The only catch is that if normal maps are stored at 16 bits you often have only two components, because the normal's length is always 1 and you can calculate the third component from the two given. Talk to ZBrush2 artists if you need good objects with normal maps; i don't know another program that is so intuitive and that uses normal maps so efficiently.

cu
Tom

ThE_JacO
08-19-2004, 12:02 PM
The only catch is that if normal maps are stored at 16 bits you often have only two components, because the normal's length is always 1 and you can calculate the third component from the two given. Talk to ZBrush2 artists if you need good objects with normal maps; i don't know another program that is so intuitive and that uses normal maps so efficiently.

cu
Tom

care to detail about that please?
maybe I'm thinking of vectors in too abstract a way, but afaik for a 3D vector you ALWAYS need 3 values.
even thinking in terms of displacement vectors, having a starting point and a length referenced in world space, for the same X and Y values and length you would always have at least 2 solutions, Z=c and Z=-c, and since you assume that it's always normalized you can't even use the trick of using a signed value for the length.

I've seen implementations of the quadrants system to reserve more detail for the values, but it's always been RGB=XYZ with the quadrant stored elsewhere or partially limiting one of those channels.

also, the purpose of normal maps is to spend more storage but simplify calculations; isn't it a bit nonsensical to have a large data set AND require further processing for every single pixel of the map?

are you sure you aren't thinking of ortho matrices, where the 3rd axis can always be obtained with a cross product?

I'd love to be proven wrong, it's very likely it's a shortcoming of my maths, but I never heard of figuring out the 3rd value of a 3D vector given only 2 and the assumption it's already normalized.

Tom Pawlik
08-19-2004, 08:55 PM
I'd love to be proven wrong, it's very likely it's a shortcoming of my maths, but I never heard of figuring out the 3rd value of a 3D vector given only 2 and the assumption it's already normalized.

That's it; i also thought i was going the wrong way by eliminating the 3rd component. At first i thought that computing a vector from 2 components would always run into precision problems even when normalized, and i was right for 8-bit normal maps representing xyz in the rgb channels: when i implemented a technique to eliminate the 3rd component i saw a little flickering because of the rounding error at 8 bits.
At 16 bits the problem was gone. Storing XYZ->RGB at 16 bits per channel would mean 48 bits of normal data for a single fragment. With two components the rounding error was small enough not to be seen, and some tests showed me that even the computation of the missing component is fast enough to be an alternative.
I am not allowed to show you code or screenshots, but in my case the artists were very, very pleased that a normal map was only 32 bits per texel at a level of detail you would expect from 48-bit maps.
Of course it depends on the space you calculate your normals in, but at 16 bits of precision per channel the maps are detailed enough that you can assume the normal's length is 1. At 8-bit precision i often saw rounding errors push the normal's length over 1.

Try it for your project; for me it works very well, and i do not need 100fps, i like it more when i can see much more microdetail.

cu
Tom

P.S.: 3Dc rocks; look at the new compression format from ATI designed specifically for normal maps.

ThE_JacO
08-19-2004, 09:06 PM
what I really can't seem to grasp is how you compute that third value.
I don't need a snippet, but could you at least tell me how you are doing that from a maths/assumptions standpoint please?

are you limiting the perturbation range somehow? what space do you usually reference? averaged samples by proximity, or do you just reference a poly normal, build an arbitrary ortho set from that, and compute from there?

thanks in advance for anything you can spare.

ViPr
08-19-2004, 09:12 PM
z=sqrt(1-x*x-y*y)
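rough sketch of the same thing in code (assumes the two channels are stored biased into 0..1 and always takes the positive root, so it only covers a hemisphere):

#include <math.h>

/* rebuild a unit normal from a two-channel (x, y) normal map sample */
static void decode_normal_xy(float u, float v, float n[3])
{
    float x = u * 2.0f - 1.0f;               /* [0,1] -> [-1,1] */
    float y = v * 2.0f - 1.0f;
    float zz = 1.0f - x * x - y * y;
    float z = zz > 0.0f ? sqrtf(zz) : 0.0f;  /* guard against rounding pushing zz below 0 */
    n[0] = x; n[1] = y; n[2] = z;
}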

ThE_JacO
08-19-2004, 09:48 PM
ViPr thanks a lot, but that is what i meant when I said the best you can get will still leave you with a double solution.
it returns the absolute value of z, so it's effectively unsigned.
so I assume you limit the range of the perturbation to half a sphere.

plus square roots tend to be a bit expensive, especially doing 1024*1024 of them.

I'm sorry if these questions may seem banal to some, but while I do have some maths ground to stand on, and have written quite a few tools, normal maps are something I haven't used much yet (especially in their gaming implementations), and I'm very interested in seeing what direction the optimization is taking.

P.S.
mine is genuine curiosity, not headbutting.

dassbaba
08-19-2004, 10:05 PM
if you're disappointed by the doom 3 engine wait till the halflife2 engine comes out

Tom Pawlik
08-19-2004, 10:35 PM
it returns the absolute value of z, so it's effectively unsigned.
so I assume you limit the range of the perturbation to half a sphere.

plus square roots tend to be a bit expensive, especially doing 1024*1024 of them.



It's true that it is unsigned, but this is what i can tell you (what i am allowed to tell you):
At 16 bits the normal is precise enough that you can spare one bit for the sign.

The second thing i am allowed to say is that the aspect ratio between texture space and world space (hope you know what i mean) is very important. Limit how closely the camera can approach a surface so that you never see the rounding error of the z-component computation. It depends on what dimensions you want to use. Our team wanted a BSP-like engine for closed rooms, so i could live with an error of approx. 0.025 of a lit pixel and simply did not allow the artists to go closer than that.

I also thought that sqrt would be expensive, but it is not. The memory limits you hit if you store all 3 components are much more of a problem (at least for my team; the artists ALWAYS wanted more normal maps in our project).

I would like to tell you more, but i am not allowed to. Sorry that i can't tell you what the problem in our development was, but i can say that even 15 bits of precision are enough to show very smooth characters.

cu
Tom

ThE_JacO
08-19-2004, 10:57 PM
you told me most of what I wanted to hear mate, much appreciated.

the point for me is that I'm a developer only to the extent I need to be to support the rest of my skillset (technical animation and pipeline engineering).
this means that for what I do I can get pretty much anywhere, and given enough time I can make it pretty too ;), but to a game developer or full R&D person my standards are quite lousy.

there are a couple of things I've learnt.
one that always applies to "technology scavengers" like me is that, no matter how brilliant some of my thoughts can seem to me, if people who do ONLY that thought of another way, there's bound to be a good reason for it, and it's only sensible to pick their brains about it.

another is that, on second thought, being anal retentive about the abstraction of a process is often counterproductive.
i.e.: while signing the normal seems impossible (the moment you square everything the sign is lost, whether you keep a bit reserved to sign it later on or not), for my purposes I now realize it would be useless; that normal would be in the clipped or hidden space anyway, and only some non-real-world cases would need it.

the last one is that while I'm interested in normal maps more for a different field (post 3D lighting a la Norman/Illusion/XSI is one, my field is films not games) than for games, the boundaries between our fields are blurring nowadays, and game developers take economy of resources one step further for obvious reasons, so it's always worthwhile to hear from them.
I have nothing but the utmost respect for you people.

right now I still displace at subpixel level more than I use 3rd-party normals, so the whole space issue for me is of a different nature, but it's nonetheless interesting, especially now that I'm digging into gelato and how much it could save me in rendertimes.

thanks again.

Tom Pawlik
08-19-2004, 11:21 PM
Hi again. A mate on my team told me that there is a demo of 16-bit normals available at the ATI site: http://www.ati.com/developer/demos/r9700.html It is the demo with the car. It uses 16-bit x and y components and computes z from them, assuming the normal's length is always 1.
As i said, i hope you got the picture, but this was 4 long months of work, and as much as i always want to share experience, i can't.
Are you coding offline rendering stuff? This is what i do in my spare time, and i also wanted to implement the z-saving in Maya to save memory. This is cool for stills, but it tends to show a little "noise" effect in animation (perhaps you know the effect in MR when you use too few photons for Final Gather; it looks similar).
It's been getting really exciting since ZBrush.

The only thing for 3D is: try not to see any difference between the vertex and pixel stages (after deform). We are at a time where a pixel is 128 bits wide and can hold more information than i can imagine (hehe, HDRI rocks!). Do not try to make everything perfect; just try to "live with the error" and limit the camera; that is enough.

cu
Tom

ThE_JacO
08-20-2004, 03:13 AM
yes, I'm doing software rendering stuff and trying to exploit hardware rendering for the same purposes.

while for normal rendering I don't need much done (maya, XSI and HDN can all provide excellent normals output with their default tools), the things I'm studying, more than doing, about hardware rendering can use every bit of optimization there is. I'm still far away from taking advantage of this stuff completely, but I reckon it's going to be relevant in the future.
renderfarms equipped with a quadro card for each box are a bit unlikely, but since every studio I worked for switches the workstations into rendermode at night... it doesn't take a genius to see that there's potential already.

btw did you try gelato? there's a free version out there now, and if your hobby is SW rendering while your work is HW rendering, you could find it the most interesting gap bridge ever.

playmesumch00ns
08-20-2004, 08:31 AM
the last one is that while I'm interested in normal maps more for a different field (post 3D lighting a la Norman/Illusion/XSI is one, my field is films not games)...
How do you know Norman?

Hugh
08-20-2004, 08:51 AM
If he's talking about the Norman you think he's talking about, then it's been mentioned in various articles... (I vaguely remember something about it in the Tomb Raider 2 one...)


The_JacO: If you consider that any point you've got a normal for is going to be facing towards you, then you don't necessarily need to worry about the sign of Z - you've only got a hemisphere that it could potentially be in...

The other way that you could get a full-sphere normal from 2 values is a 3D polar coordinate system - storing two angles rather than x and y - since the magnitude of a unit normal is always 1 anyway...

However, both are pretty much moot points, as you've got space for 3 values and, as you said, the whole idea is to reduce processing... that bit of extra space is worth it to have the xyz value of the normal right there at your fingertips....
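Just to illustrate the polar idea, it would look something like this (one possible convention for the angles, not anything any engine actually ships):

#include <math.h>

/* Encode a unit normal as two angles and decode it again.
 * theta is measured from the +Z axis, phi around it - two stored values
 * cover the full sphere, at the cost of trig on decode. */
static void normal_to_angles(const float n[3], float *theta, float *phi)
{
    *theta = acosf(n[2]);          /* [0, pi]   */
    *phi   = atan2f(n[1], n[0]);   /* [-pi, pi] */
}

static void angles_to_normal(float theta, float phi, float n[3])
{
    float s = sinf(theta);
    n[0] = s * cosf(phi);
    n[1] = s * sinf(phi);
    n[2] = cosf(theta);
}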

HomerS
08-20-2004, 10:10 AM
Hmm... I found the Doom 3 engine much better than I expected....

ViPr
08-20-2004, 11:06 AM
if your normal maps are in tangent space then the component for how far the vector points outward is always positive.

i'm looking forward to HL2 more than Doom3 because i think Valve said something about the shaders being editable in that game. i'm really upset about the omission of gloss maps in Doom3 because it makes it impossible to make certain parts of models look dry while others look wet and slimy for that icky, creepy look, and it's a horror game for chrissake! have you looked at the screenshots? the mouths and wounds look totally dry, and the teeth, claws, and intestines look metallic, probably because they are white on a specular map instead of white on a gloss map. i think the artists, although brilliant at art, have no clue about the technicalities of this stuff. when i looked at some of the texture maps i could tell from the diffuse map what direction the light was coming from; there is supposed to be no indication of directional lighting in diffuse maps. i also saw that some of the height/bump maps were just the normal maps greyscaled, and on the specular maps it looked like they painted the specular highlights onto the texture rather than painting the specularity. they looked like they had no idea what they were doing. but i have to excuse them a little, because without gloss mapping it's not possible to do this stuff properly, and it makes normal mapping quite pointless because they just went back to their old habit of painting the lighting into the texture maps themselves.

if an engine wants to use normal maps then it has to go all the way with diffuse, gloss, and specular maps as well, otherwise there is no point. you either use photographic textures and really downplay the lighting in the engine, or you have the engine do the lighting, but then you have to give it all these textures to fully describe the surface so the engine can do its job.

anyway the only thing that can save Doom3 for me is if they make the shaders editable to the point where they can add gloss mapping, but i highly doubt they will allow this because it would make Carmack's next engine in a year rather pointless.

i suggest we all wait till then before we buy another id engine. hopefully by then my own engine will be ready too :)

i was wondering if someone could find out for me whether HL2 can do gloss maps. btw, i think that although HL2 can do normal mapping and a lot of cool stuff, they are really not using it all that much. i think their game is a combination of old and new methods: maybe only about 15% of the pixels on the screen are using normal maps, whereas in Doom3 i think every pixel uses them.

Neil
08-21-2004, 04:49 PM
i've been making my own engine that can use diffuse, specular, normal, height, and gloss maps. Doom3 cannot do gloss maps, which i consider essential. also i looked at some of the Doom3 textures from the leaked alpha demo and a lot of what we think is bump-mapping is faked just like in old engines, which is strange considering Doom3 can do bump mapping.

many programmers want to make the final generation of 3d engines, but the big problem is that there are no models for us to test or show off our engines. Doom3 models are not satisfactory (because of the lack of gloss maps and the misuse of the other textures by the id software artists). the 3d programming community needs artists who can donate models that use all the types of textures i mentioned above properly, without faking anything, so we can advance game graphics technology at a better pace.
Can we see your engine? It's easy to complain about other people's work, but I seriously doubt Carmack is sleeping on the job or "not smart enough". There are reasons behind his decisions. If you just want normal-mapped characters (models), why not just post a request in the game art section on here? There are usually a few characters finished monthly.

thebigMuh
08-22-2004, 12:21 PM
anyway the only thing that can save Doom3 for me is if they make the shaders editable to the point where they can add gloss mapping, but i highly doubt they will allow this because it would make Carmack's next engine in a year rather pointless.

All the fragment programs are in base/pak000.pk4, directory "glprogs". Use WinZip to open the file. The big main shader is interaction.vfp. You can happily add as many additional passes as you want, including gloss maps. For an example, look at the parallax mapping mod here: http://www.fileplanet.com/files/140000/144453.shtml

Ciao, Muh!

ViPr
08-22-2004, 12:50 PM
if they can add gloss mapping to doom3 then i totally love doom3. however, adding gloss mapping requires creating gloss texture maps, not just adding some code. is it possible to tell the code to use extra textures, or only to do more things with the existing textures?

btw, i don't see how parallax mapping is possible when Doom3 does not have complete height maps for its surfaces, because it relies primarily on normal maps. bump/height maps are required for parallax mapping.
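as far as i understand the technique, the height value is what drives the texture coordinate shift, roughly like this (rough sketch; the scale and bias are made-up tuning values):

/* parallax mapping in a nutshell: shift the texture coordinate along the
 * view direction (expressed in tangent space) by an amount taken from the
 * height map, so high texels appear to occlude low ones.
 * view_ts must be normalized; scale and bias are artistic tuning values. */
static void parallax_offset(float u, float v, float height,
                            const float view_ts[3], float scale, float bias,
                            float *u_out, float *v_out)
{
    float h = height * scale + bias;   /* remap the [0,1] height sample */
    *u_out = u + view_ts[0] * h;       /* shift along the view's tangent-space xy */
    *v_out = v + view_ts[1] * h;
}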

btw, do you think HDRI rendering and full-screen glow could be added to Doom3 as well?

schmu_20mol
08-22-2004, 04:20 PM
if they can add gloss mapping to doom3 then i totally love doom3.

errr... :banghead:

ThE_JacO
08-22-2004, 10:10 PM
How do you know Norman?
it's fairly public by now; it's been mentioned in more than one article on various sites and, but don't quote me on this, even in Cinefex I think.

the concept isn't drastically new; the first time I saw it done was in Illusion with a plug-in that dated back to '95 or '96 I think, so the actual idea behind Norman has never been kept that secret.

I've heard bits and bobs more here and there from some friends in MPC or colleagues who happened to be ex-MPC, but don't worry, the source hasn't been leaked ;)

ThE_JacO
08-22-2004, 10:15 PM
If he's talking about the Norman you think he's talking about, then it's been mentioned in various articles... (I vaguely remember something about it in the Tomb Raider 2 one...)


The_JacO: If you consider that any point you've got a normal for is going to be facing towards you, then you don't necessarily need to worry about the sign of Z - you've only got a hemisphere that it could potentially be in...

The other way that you could get a full-sphere normal from 2 values is a 3D polar coordinate system - storing two angles rather than x and y - since the magnitude of a unit normal is always 1 anyway...

However, both are pretty much moot points, as you've got space for 3 values and, as you said, the whole idea is to reduce processing... that bit of extra space is worth it to have the xyz value of the normal right there at your fingertips....
hey Hugh.
for the normal part, yeah, I figured out a couple of posts later that I could see why an unsigned value would suffice from a practical standpoint; that's why I said that my curiosity was more about the maths than about the implementation.

as for the processing cost, I still want to have a look into that as soon as I have the time.
I truly can't see how millions of square roots for every image can be inexpensive, but maybe for a pipeline that leans so heavily on graphics cards, and relies on the GPU to turn the map into dot products, the saving in memory could be worth it.

mental
08-22-2004, 10:53 PM
my apologies for pushing this thread further off topic but...

@ThE_JacO:

how much success have you had with post 3D lighting? i'm curious as to how far you have been able to push the techniques and what limitations have you encountered with re-lighting in 2D?

thanks!
-mental :surprised

ThE_JacO
08-22-2004, 11:20 PM
so far I haven't spent much time writing anything; I've tried some concepts and done some research, but it's all in time nicked off work, other personal projects and suchlike.

I've seen, and done myself, some things at different levels in places where I've worked, and it's not bad at all, but it's still enhancement/correction, nowhere close to contributing a key light from nothing.

I'm a bit on the tight side because I honestly don't know how much falls outside my last 2 NDAs' boundaries, but sticking to personal tests, I've squeezed something nice out of it for fur, though it still needs lots of refining.

placing rim lights on fur has always been damn hard for the artists; it's so easy to completely burn something out or have it disappear that it takes a lot of test rendering, and that can be expensive.
on the other hand, normals for fur can be calculated quite fast (especially when you use hair shaders that rely on planes along the RiCurve/MI_Hair with perturbed normals to create the tubes).

the 2 problems I'm having are that the resolution for the normal pass has to be subatomic, and that the flickering so far is quite obvious for the same reason. while you can oversample the hell out of a render to antialias it, it becomes a lot harder or less convenient if you have to over-render those passes just to do the post; the storage requirements would become ridiculous, normal maps are huge, and lossless compression is almost useless on a rainbow.
I don't know if those problems can be overcome; as I said, I'm not in the same league as some of the people here, and my field really is technical animation, but I like toys I can break :)

I'm also experimenting with the simplest combo right now, normals+depth, and applying simple illumination models to it; but I can see potential in post-texturing and per-object depth along the normal to layer textures and other similar things, very much like doing primitive raymarching on it, but in post.
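just to show how simple the first step of that combo is, per pixel it boils down to something like this (names made up, the light is just a direction, and a real version would reconstruct the position from depth instead of only masking with it):

/* dirt-simple post relight: add a Lambert contribution for one extra light
 * using only a normal pass and a depth pass. normals are assumed stored
 * in [0,1] per channel; depth here only masks out empty pixels. */
static float relight_pixel(const float nrm_rgb[3], float depth,
                           const float light_dir[3], float intensity)
{
    if (depth <= 0.0f)                    /* nothing was rendered here */
        return 0.0f;

    float nx = nrm_rgb[0] * 2.0f - 1.0f;  /* unpack [0,1] -> [-1,1] */
    float ny = nrm_rgb[1] * 2.0f - 1.0f;
    float nz = nrm_rgb[2] * 2.0f - 1.0f;

    float ndotl = nx * light_dir[0] + ny * light_dir[1] + nz * light_dir[2];
    return ndotl > 0.0f ? ndotl * intensity : 0.0f;
}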

don't know where this will get me (most likely a dead end), but there's so much to be learned on the way :)

playmesumch00ns
08-23-2004, 08:55 AM
it's fairly public by now; it's been mentioned in more than one article on various sites and, but don't quote me on this, even in Cinefex I think.

the concept isn't drastically new; the first time I saw it done was in Illusion with a plug-in that dated back to '95 or '96 I think, so the actual idea behind Norman has never been kept that secret.

I've heard bits and bobs more here and there from some friends in MPC or colleagues who happened to be ex-MPC, but don't worry, the source hasn't been leaked ;)
Yes, I believe ILM has one called light or something. What I meant was, are you ex-MPC yourself?

ThE_JacO
08-23-2004, 04:29 PM
Yes, I believe ILM has one called light or something. What I meant was, are you ex-MPC yourself?
sadly not yet.
had a call to arms some time ago but was stuck on another flick.

there will be a chance sooner or later anyway :)
btw, Hugh knows me; I'll be back in London in the first days of September, drop by for a beer.
