Does a 3d normal map texture require tangent space?


DudeChump
04-13-2011, 12:29 AM
I have a procedurally generated mesh, and I want said mesh to use normal mapping. Problem is, it is hard to procedurally generate 2d uv coordinates for an arbitrary 3d mesh without obvious artifacts, stretching and so on, so I decided to use 3d textures instead.
I have written a normal map shader for meshes with 2d uv coordinates. There I had to transform the light and eye vectors with the TBN (tangent, bitangent, normal) matrix so that they are in the same space as the texture (I can not say I understand why I need to do this), and for that I needed to calculate the tangent for every vertex.
Most methods for doing so need you to have the uv coordinates for those vertexes, which I obviously do not, but this leads me to think that maybe a tangent basis is not necessary?
But when I sample the 3d normal map texture in the fragment shader straight off, it looks, well, weird, and obviously not correct.
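For context, the TBN transform in question can be sketched like this (a minimal illustration with hypothetical, hand-picked basis vectors, not actual shader code):

```python
# Sketch of what the TBN matrix does: it re-expresses a world-space
# vector (light or eye) in the tangent frame of the surface, so it can
# be compared against the tangent-space normals stored in the map.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, t, b, n):
    """Express world-space v in the (T, B, N) frame: each component
    is the projection of v onto one basis vector."""
    return (dot(v, t), dot(v, b), dot(v, n))

# Hypothetical orthonormal frame: surface facing +Z, tangent along +X.
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
light_world = (0, 0, 1)   # light coming straight along the surface normal
print(to_tangent_space(light_world, T, B, N))  # (0, 0, 1)
```

Transforming the light and eye vectors this way puts them in the same coordinate frame as the texels of the normal map, which is why the lighting math then works out.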

I am kind of tired now, so nothing makes sense anymore, but I'll return to this tomorrow.

Basically what I am wondering is:
1) Does it make sense at all to have a 3d texture as a normal base?

And if so,

2) Would I need to use the TBN matrix on the light/eye vectors when using that normal base?

gruhn
04-15-2011, 06:35 AM
Bear in mind I don't know what I'm talking about...

Sure, a 3d texture is fine. All you need for a normal map is a way of specifying a color at a location on the surface. It's just a map. "Given this point in space, return a value."

Two things going on with the normal calculation at render:
Get the color
Figure out how to shade the pixel

Get the colors of maps
This is just your 3d texture look up. "Oh, I'm at x,y,z on the object therefore I'm blue."

Figure out how to shade the pixel
Given that blue, how well lit is it? This involves the angle between the surface and the light.
Add a specular component. This involves the angle from the camera to the surface to the light.
And THAT angle is wiggled about depending on the normal map.
So, for all this I can well see why they'd all want to be in the same space. So yeah, I bet you have to do that.

The sources you are working with will also have a transform from point on surface into texture space (uvw) in order to get diffuse and normal (etc.) map data. Now if your textures are all calculated in object space then you're good. IIUC some solid textures use a normalized space so you may have to do a uvw conversion too. "Oh, the top of Godzilla's head is at 3543,12993,40932 but the noise texture that defines his scales is 0 to 1 based, so normallookup(my_noise, x/maxx, y/maxy, z/maxz)."

Right, just in case you understand all that and I'm missing the important point... "most involve knowing the uvws of the verts"... hm...

OK, http://www.terathon.com/code/tangent.html says that the tangent at the vert is aligned to the uvw. Right, duh... a normal is a unique vector, but tangent to an object is really a plane not a vector.
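For reference, the per-triangle computation behind that link can be sketched like this, a hedged Python transcription of the idea (solve for T and B from the triangle's edge vectors and UV deltas), not the article's exact code:

```python
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Tangent and bitangent for one triangle, solved from its UV layout.
    The edge vectors are expressed as combinations of T and B via the
    UV deltas, then the 2x2 system is inverted."""
    e1 = [b - a for a, b in zip(p0, p1)]
    e2 = [b - a for a, b in zip(p0, p2)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)   # degenerate UVs would divide by zero
    tangent   = [(dv2 * a - dv1 * b) * r for a, b in zip(e1, e2)]
    bitangent = [(du1 * b - du2 * a) * r for a, b in zip(e1, e2)]
    return tangent, bitangent

# A triangle whose UVs match its XY positions gets the obvious answer:
t, b = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
print(t, b)  # [1.0, 0.0, 0.0] [0.0, 1.0, 0.0]
```

Note the dependency: without UVs there is no system to solve, which is exactly the poster's problem.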

...because the tangent space normal map distorts the normal based on r=x g=y b=z at this location on the object rather than world space. So that distortion (0.2 units along the x axis) varies in real world direction based on the location on the object. If the direction that the normal map thinks x is is not in the same space as the light angle calculations you will shift incorrectly and your bumping will be wrong.

IIUC tangent space is like a phong interpolated virtual smooth surface wrapping the object. It probably contacts it at the verts. The circle surrounding the inscribed polygon (to use a 2d analogy). Now if all you had to do was phong shading then the normals would be all you'd need to calculate the correct shading in relation to the light: two normals imply three points, which imply one plane, which implies one angle, which determines one shade.

What you could do is say that every location on the surface has a triplet associated with it that shifts the normal by a specified amount before it is fed to the lighting calculation. Hey, it's magenta there (255, 0, 255) so that means shift the vector positive x and positive z now calculate the light.

BUT what that means is that behaviour is not consistent with color. On the front of a house that yellow would shift the normal east and up to catch the morning sun. On the back of a house that yellow would shift the normal east and up to catch the morning sun. Except as you paint it (who paints normal maps?) yellow means "tilt right" and "tilt left" depending on where on the object you are and that's not that obvious.

Wouldn't it be good to have a system where cyan meant "do nothing" and magenta meant "tilt towards the left no matter where this bit of texture gets applied"? So we have tangent space normal mapping. I'm making this up as I go along. Which is great because you can use the exact same pixels for a rivet anywhere on the map because the normal map is only looked at up close by an ant crawling across the object.

!!! But there's the rub. Which way is "left"? !!!

Left is negative u. So the tangent is used to rotate the distortion vector (the scaling inherent in uvw mapping probably isn't important (at a guess)) of the normal map into alignment with the lighting calcs so that the surface normal is perturbed in the right (left) direction.

So, in short, if you want to use a tangent space normal map you need SOME mechanism for determining the world (or object or light) space meaning of the normal map's xyz perturbation. That task is usually handled by the uvw mapping.
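That rotation can be sketched as follows (a hedged illustration; the frame vectors here are hypothetical and assumed orthonormal):

```python
# Sketch: take an RGB texel from a tangent-space normal map, unpack it
# to [-1, 1], and rotate it into world space using the T, B, N basis.

def unpack_normal(rgb):
    """Map 8-bit channel values [0, 255] to components in [-1, 1]."""
    return tuple(c / 127.5 - 1.0 for c in rgb)

def tangent_to_world(n_ts, t, b, n):
    """world = n_ts.x * T + n_ts.y * B + n_ts.z * N"""
    return tuple(n_ts[0] * t[i] + n_ts[1] * b[i] + n_ts[2] * n[i]
                 for i in range(3))

# The classic "flat" normal-map blue (128, 128, 255) unpacks to roughly
# (0, 0, 1) and leaves the surface normal unchanged:
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
flat = unpack_normal((128, 128, 255))
print(tangent_to_world(flat, T, B, N))
```

With a different T and B the same texel would perturb the surface in a different world direction, which is the "which way is left" problem in a nutshell.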

DO YOU want to use tangent space looking normal maps? I'm thinking... Bump maps are easy. Fill the map space with black and populate it with little fuzzy white spheres and you get bumps wherever a polygon slices a sphere and makes a white dot on the surface. White is always "out" and black is always "in".

My first inclination is that your procedural texture is easier to generate if you don't have to worry about what it looks like depending on where it falls. Just make a noise texture based on purple, pink, and cyan and it will be right so long as you know which way is right and up. Magenta is always "left" and cyan is always "right" (or whatever the colors map to).

But that assumes you can generate the map correctly. Think of that little white sphere in the black again. If it gets intersected by the front of the house you want the east side of it to be magenta. If it gets intersected by the back of the house you want the east side of it to be cyan.

!!! The COLORING of the normal map is DEPENDENT on the part of the object it corresponds to. That sounds like a bad way to define a texture.

Same problem with world space.

The meaning of a ... texture voxel (volume texel?) in the 3d map is dependent on the surface that is using it. I intuit that that's a generic "nature of normal mapping" issue and if the solution exists it is beyond the scope of this article.

If all you want is some noise stuff it may not matter (we still have yet to solve the "define x" problem). Even large scale Perlin noises and the like mayn't be too harsh.

From the comments: "Procedural normal mapping can be done any time you have a procedural height map - you just take partial derivatives of the height function dh/du and dh/dv, and let the normal be (1.0, -dh/du, -dh/dv). (Of course you must normalize it before using for lighting.)" Which sounds to me like bump mapping, not normal mapping. But it does jibe with my first guess at "legit solutions".
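That recipe can be sketched like this, using a hypothetical height function standing in for "a few octaves of noise" (the central differences approximate dh/du and dh/dv):

```python
import math

# For a procedural height function h(u, v), take partial derivatives
# and form (1, -dh/du, -dh/dv), then normalize, per the quoted comment.

def h(u, v):
    # Hypothetical stand-in for an octave-summed noise function.
    return 0.1 * math.sin(u) * math.cos(v)

def height_normal(u, v, eps=1e-4):
    dhdu = (h(u + eps, v) - h(u - eps, v)) / (2 * eps)  # central difference
    dhdv = (h(u, v + eps) - h(u, v - eps)) / (2 * eps)
    n = (1.0, -dhdu, -dhdv)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Where both derivatives vanish the normal is just the unperturbed axis:
print(height_normal(0.0, math.pi / 2))
```

The appeal here is that a single scalar height field needs no tangent frame to author, which matches the "bump mapping, not normal mapping" reading above.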

1) not really
2) yes

The real world may know better answers.

DudeChump
04-15-2011, 07:05 AM
Thanks for the answer. I am actually quite surprised; it seems that on most forums, if you come up with a question that is different from the staple ones like "der, how do I do steep parallax mapping?!?" (which there are already a gazillion tutorials for; now, I am all for someone going "der, I am trying to do steep parallax mapping, but this thing here is wrong, what could be the cause?"), you never get a halfway decent answer.

I got the idea of a 3d bump map texture from:
http://something-constructive.com/projects/writeup/
The dude explains a bit how he distorts the normals with a 3d noise texture (although the normal distortion does not look like it is "a few octaves of perlin noise"; I was thinking that maybe he sampled his 3d texture several times and multiplied/added together the results). It seems to me that it is only one channel of noise, so I suppose the dude is doing it the old school bump mapping way, which I have tried but abandoned, can't remember why though.

Anyways, to my question regarding whether it would make any sense to use a 3d texture as a normal base when rendering, you replied "not really", so what would you do?
Is there a nifty way of creating good 2d uv coords per vertex?
Or should I just go for tri-planar texturing?
I do have a density volume from which the mesh was generated, perhaps there is some way I can use that?

Basically what I am trying to achieve is texturing arbitrary meshes generated from a marching cubes algorithm, with a fair bit of detail, and without it repeating itself too much. I have tried triplanar texturing with terrain splatting, but then I wound up doing 12 texture lookups per fragment. So then I tried to precompute 2d uv coordinates (using the normal of the vertex) to avoid three times the texture lookups from triplanar texturing, but then I got really ugly texture artifacts (they are kind of hard to explain, I do not have a pic right now).
So now I am trying to use 3d textures. Problem is that they are too small and don't have very much detail (right now they are composed of three 8 bit channels of perlin noise, with different seeds). Also, my tangent basis sucks. On a sidenote, I have been looking like mad for some kind of explanation of the cost of a 3d texture lookup as compared to a 2d one; does anyone happen to know a place where I can see this?
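For reference, the triplanar pattern mentioned above usually boils down to weighting three planar projections by the surface normal. A hedged sketch with a hypothetical sample2d fetch, not any particular engine's code:

```python
# Triplanar blend weights: each axis projection is weighted by how much
# the surface normal faces that axis, so three 2D lookups per map blend
# into one value. `sample2d` stands in for a texture fetch.

def triplanar_weights(normal, sharpness=1.0):
    w = [abs(c) ** sharpness for c in normal]
    total = sum(w)
    return [c / total for c in w]   # weights sum to 1

def triplanar(sample2d, pos, normal):
    wx, wy, wz = triplanar_weights(normal)
    x, y, z = pos
    return (wx * sample2d(y, z)     # projection along X
          + wy * sample2d(x, z)     # projection along Y
          + wz * sample2d(x, y))    # projection along Z
```

This is where the 3x lookup multiplier comes from: every map sampled this way costs three fetches plus the blend.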

Thanks for the response.

Robert Bateman
04-15-2011, 04:30 PM
> Bear in mind I don't know what I'm talking about...
It shows.

> ...because the tangent space normal map distorts the normal based on r=x g=y b=z at this location on the object rather than world space.
I should hope the tangent space normal map contains tangent space normals. I should also hope that when querying a normal from that map, a deformation is NOT applied.


> So that distortion (0.2 units along the x axis) varies in real world direction based on the location on the object. If the direction that the normal map thinks x is is not in the same space as the light angle calculations you will shift incorrectly and your bumping will be wrong.
Why distort data that is perfect to begin with?


> What you could do is say that every location on the surface has a triplet associated with it that shifts the normal by a specified amount before it is fed to the lighting calculation. Hey, it's magenta there (255, 0, 255) so that means shift the vector positive x and positive z now calculate the light.
(255, 0, 255) ??? Forcing a normalisation per pixel is not a good idea. A better idea would be to store a normal.


> BUT what that means is that behaviour is not consistent with color. On the front of a house that yellow would shift the normal east and up to catch the morning sun. On the back of a house that yellow would shift the normal east and up to catch the morning sun. Except as you paint it (who paints normal maps?) yellow means "tilt right" and "tilt left" depending on where on the object you are and that's not that obvious.
Its behaviour should be consistent with any mapping, colour included. i.e. Value = map(u, v)
No one paints normal maps. They model a high poly object, then bake it out. Worst case, they paint height values in bump maps, which are converted to normals.


> Wouldn't it be good to have a system where cyan meant "do nothing" and magenta meant "tilt towards the left no matter where this bit of texture gets applied"?
No it would be insane. Which way is left?


> So we have tangent space normal mapping. I'm making this up as I go along. Which is great because you can use the exact same pixels for a rivet anywhere on the map because the normal map is only looked at up close by an ant crawling across the object.
If by 'crawling across' you mean 'impacted at the speed of light into', then yes, something like that....


> !!! But there's the rub. Which way is "left"? !!!
Who cares? More importantly, where are my normals?


> Left is negative u. So the tangent is used to rotate the distortion vector (the scaling inherent in uvw mapping probably isn't important (at a guess)) of the normal map into alignment with the lighting calcs so that the surface normal is perturbed in the right (left) direction.
Scaling is important.


> So, in short, if you want to use tangent space looking normal map you need SOME mechanism for determining the world (or object or light) space meaning of the normal map's xyz perturbation. That task is usually handled by the uvw mapping.
But why create a mapping to a mapping to another mapping, when just a mapping will do?

> DO YOU want to use tangent space looking normal maps? I'm thinking... Bump maps are easy. Fill the map space with black and populate it with little fuzzy white spheres and you get bumps wherever a polygon slices a sphere and makes a white dot on the surface. White is always "out" and black is always "in".
Bump mapping is a more convoluted way of doing normal mapping.

> My first inclination is that your procedural texture is easier to generate if you don't have to worry about what it looks like depending on where it falls. Just make a noise texture based on purple, pink, and cyan and it will be right so long as you know which way is right and up. Magenta is always "left" and cyan is always "right" (or whatever the colors map to).
In relation to what co-ordinate frame? Why not use a normal map as just that: a mapping to some surface normals? Random values will not look as good as normal vectors.

> But that assumes you can generate the map correctly. Think of that little white sphere in the black again. If it gets intersected by the front of the house you want the east side of it to be magenta. If it gets intersected by the back of the house you want the east side of it to be cyan.
But where are your normal vectors? It's one thing to define a tangent space deformation lattice over your normals, but without any normals you're scuppered before you'd get that far.


> !!! The COLORING of the normal map is DEPENDENT on the part of the object it corresponds to. That sounds like a bad way to define a texture.
I think you are confusing the term 'map' with 'texture'. A texture is a map, but a map is not a texture. A mapping turns one set of values into another. In this particular case, you give it a surface coordinate (UV), and it returns a normal vector at that point.

> The meaning of a ... texture voxel (volume texel?) in the 3d map is dependent on the surface that is using it. I intuit that that's a generic "nature of normal mapping" issue and if the solution exists it is beyond the scope of this article.

A 3D mapping is no different to a 2D mapping apart from the insane overhead of storing 3D textures. Really though, using a 3D normal map would be pointless. For example, a 128x128x128 normal map, would end up storing 2,097,152 normals - the vast majority of which would never get used on the surface. You'd be better off storing vertex normals, or simplifying it with a 2D mapping.
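A quick back-of-envelope check of that count, assuming 8-bit RGB normals (3 bytes each):

```python
# 3D vs 2D normal map storage at the same edge resolution, assuming
# 3 bytes (8-bit RGB) per stored normal.
voxels = 128 ** 3
texels = 128 ** 2
print(voxels)                   # 2097152 normals in the volume
print(voxels * 3 / 2 ** 20)     # 6.0 -> ~6 MB for the 3D map
print(texels * 3 / 2 ** 10)     # 48.0 -> 48 KB for the 2D map
```

So the volume costs roughly 128x the memory of the equivalent 2D map, and only the thin shell of voxels the surface passes through is ever sampled.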

The only time a 3D normal map might make sense would be as a function, i.e.:

vec3 genSphereNormal(vec3 xyz, vec3 minBounds, vec3 maxBounds)
{
    vec3 extents = maxBounds - minBounds;
    vec3 result = (xyz - minBounds) / extents; // position in [0,1] within the bounds
    result = result * 2.0 - vec3(1.0);        // remap to [-1,1]
    return normalize(result);                 // unit normal pointing away from the centre
}

DudeChump
04-15-2011, 05:54 PM
Okay, thanks for the answers, both of you. So we can conclude that using 3d normal maps for normal distortion for a fragment on the face of a polygon is silly? The reason I got into it was because of this dude's cool cave generating thingy:

http://something-constructive.com/projects/writeup/

Apparently, "The surface of the cave is rendered with a procedural bump map that samples a 3D texture that holds a few octaves of Perlin noise. Color is also generated from this texture by blending between two colors based on the noise at a given point."

Maybe I understand "procedural bump map that samples a 3D texture that holds a few octaves of Perlin noise." wrong?

Basically, the way he renders his stuff is something along the lines of what I want. I have got a marching cubes algorithm applied to a procedural density volume, and I get nice normals, splatting and an ambient occlusion factor, but so far all my attempts to generate nice 2d uv coords have failed miserably. I tried using triplanar projection in the shader, but that meant that I wound up with 24(!) texture lookups per fragment (4 for bump maps, 4 for the texture map, multiplied by 3 for the triplanar projection, not to mention that I blend between these values). Does anyone have any tips on how I would go about texturing my mesh? With a fair amount of detail, and not too repetitive?
I think the dude's procedural cave is shaded very nicely, and from those few lines of text I have tried to deduce how he did it, but alas, my version looks bad.

By the by, can someone please help me with the cost of a lookup on a 2d texture as opposed to a 3d one? Is it very much more expensive? I figured that the memory requirements would be cubed instead of squared (obviously), but I thought the lookup was about the same; I thought they had something along the lines of a hash function that takes the coords when doing lookups?
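As a point of comparison, filtered lookups blend neighbouring texels rather than hashing anything: bilinear (2D) blends 4 samples, trilinear (3D) blends 8. A sketch of that interpolation:

```python
# Bilinear (2D) filtering blends 4 texels with 3 lerps; trilinear (3D)
# blends 8 texels with 7 lerps. Addressing is direct array indexing.

def lerp(a, b, t):
    return a + (b - a) * t

def bilinear(c00, c10, c01, c11, tx, ty):
    return lerp(lerp(c00, c10, tx), lerp(c01, c11, tx), ty)

def trilinear(front, back, tx, ty, tz):
    """front/back are 4-tuples of texels for the two neighbouring slices."""
    return lerp(bilinear(*front, tx, ty), bilinear(*back, tx, ty), tz)

# The midpoint of a 0..7 corner ramp lands on the average:
print(trilinear((0, 1, 2, 3), (4, 5, 6, 7), 0.5, 0.5, 0.5))  # 3.5
```

So per fetch a 3D lookup reads twice as many texels, and in practice the bigger cost is usually the worse cache behaviour of volume data.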

Anyways, thanks for responding, I really appreciate it.

Edit:
Oh, and also: the only reason I want to use 3d texture maps is because I fail to generate any nice 2d uv coords at all. If there is indeed some kind of algorithm that generates fairly good 2d coords for a mesh defined by a density volume, then please let me know.

Gravedigger
04-16-2011, 10:07 AM
I haven't read through all these long posts, but I'm pretty sure that I agree with Robert.

First of all, if you have a vector component you want to use on your model, there are two situations: your object is static, or your object is deforming.

When the object is static you can use world space, object space or whatever you like, since the vertices are consistent. However, when your object is deforming, the ONLY space that will work is tangent space, because it is the only space that stays consistent even while the object is deforming. So if you want to use normal maps with deforming objects, you have to do it in tangent space!

The thing with the caves rather looks to me like vector displacement. Again, your object is static here, so you don't necessarily need to have your vectors in tangent space.

grs
Patrik

CGTalk Moderation
04-16-2011, 10:07 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.