Normal vs. Bump. Always better to use normal?


Enough with the wrong assumptions.

Parallax mapping has come and gone. Using texture memory to fake geometry, when geometry is getting cheaper and cheaper and texture memory more precious, is NOT the way of the future. If you really want to look into the crystal ball, you could watch Sparse Voxel Octrees; normal, parallax, and silhouette mapping and whatnot are not, in their current form, going to shape the coming “next-gen”.

So, normal maps are, in every sense of the word, trivial, at least if you’re serious about working in the real-time entertainment industry. Read up or shut up.


Waddaya mean,… the postman should know? Meshugenah



Ha ha, you are welcome to your opinion.


Are you having a bad day? :slight_smile:


I use displacement, normal map and bump, depending on the situation.

Displacement + Bump = I use it when I don’t need to render fast (example: illustrations and personal stuff). The displacement gives the general form and the bump gives the small, sharper details.

Retopo + Normal Map (most used here) = I use it when I need to render fast or for animation work. A clean mesh with a normal map on top adds a lot of flexibility at render time and makes edits fast.

Bump = I use it to add procedural textures if needed. It’s also good for adding extra small details or forms, using a second texture channel.

When using SSS with Mental Ray (the native SSS skin shader), I prefer displacement + bump, or retopology + bump, because I get weird lighting artifacts when combining a normal map with SSS. If I do need to use a normal map with SSS, I use Jonas Thornqvist’s skin shader.
To produce normal maps, I use the NVIDIA plug-in for Photoshop, ZBrush, and 3ds Max’s Render to Texture.

This is what I use for now, but my pipeline may change in the future.



Yeah, your stuff is high poly… nice work on your site. :slight_smile:


Oh, oops, forgot to mention! Yeah Kanga, hi-poly.


Normal maps are better, as they display detail more precisely, but bump maps are a lot easier to create and work with yourself.


As much to make sure I understood this as anything, I made an illustration of what (I understand) happens with bump, normal, displacement and normal+displacement maps. I used world space (xyz) rather than texture space (uvn) for my labels, since that’s probably easier for most people to understand. But if you’re going to be picky, the “Normal X” value should be “Normal U” and the “Normal Z” should be “Normal N.”

I’m not entirely clear which direction the normals would face with displacement: would they keep the same vectors as the original surface, or be recalculated as perpendicular to the displaced surface? I show them retaining their prior values. Likewise, I show displacement occurring along the vectors calculated from the normal map in the Normal + Displacement image, but I suppose (depending on how the displacement shader is written) the proper result might be the Displacement image with the modified normals taken from the Normal image.
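For what it’s worth, the “recalculated” option is easy to sketch. For a heightfield z = h(x, y), the recalculated normal is perpendicular to the displaced surface: normalize(-∂h/∂x, -∂h/∂y, 1). A minimal sketch in Python/NumPy (the function name and the ramp setup are my own illustration, not any renderer’s API):

```python
import numpy as np

def normals_from_heightfield(height, scale=1.0):
    """Recalculate per-pixel normals for a displaced heightfield z = h(x, y)."""
    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1
    # (columns, x), using central differences in the interior.
    dy, dx = np.gradient(height.astype(np.float64) * scale)
    # For a heightfield, the (unnormalized) normal is (-dh/dx, -dh/dy, 1).
    n = np.dstack((-dx, -dy, np.ones_like(dx)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A flat patch displaced into a ramp: the recalculated normals tilt with
# the slope, whereas "retained" normals would all stay (0, 0, 1).
ramp = np.tile(np.linspace(0.0, 4.0, 5), (5, 1))
n = normals_from_heightfield(ramp)
```

On the ramp every recalculated normal leans back against the slope; retaining the prior values would instead leave them all pointing straight up, which is exactly the difference the illustration is about.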


So far the best explanation I have seen is in the reference docs for this app.
A great app.


Ah I just read this.

Regardless of your opinions or whether this post was directed at me or not this use of language is offensive to native English speakers. You may be as pompous and arrogant as you like as long as you don’t direct derogatory remarks at members of this forum. If you find this impossible I strongly suggest you move on.

For your own information, pedantic discussions about technique rarely benefit those who truly wish to learn about popular processes that are very much in use at this moment. I can’t discern how much practical experience you have on this subject, as I am unable to see any evidence that you have made use of it anywhere.

Let’s get back to a useful discussion, shall we?


As usual, you have to also consider: what is your project’s work-flow and deployment. If you are doing “a game,” say, then it might matter whether it’s a low-end or a high-end one; whether it needs to be able to run on “an ordinary 1960’s-era television” (i.e. varying and possibly-wretched output resolution), and so on and on and on.

If you are doing work that involves “node-based” compositing and other types of things, these various types of maps are simply very-useful inputs, for you to make use of as you may. (The ruling constraint of real-time rendering is removed.) You are free to construct a “pipeline” which uses these data in whatever way you might dream up. All of these maps are subtly different; therefore, useful in their own way and even more useful (perhaps) in combination.

The “bump map,” being in effect a point-by-point “third dimension” (expressed as a distance along the normal-vector), can actually be quite a useful thing because it is expressed in terms of the model.

The “normal map,” being a pure expression of how light bounces off the surface, can be more easily “baked” ahead of time … and it typically is.

You can use that information “in the typical game-way,” or you can use the two pieces of information together. When you do that, you wind up with a polar coordinate. (“Vector + distance.”)

“Displacement map?” Sure! By actually “moving points around” (without really moving them at all), you can describe variations of how the light behaves that would be difficult, or perhaps impossible (depending on the situation…) to describe by other means. You are affecting the behavior of faces by directly manipulating vertices, in a non-destructive way. AFAIK, only a displacement-map can describe that, in those terms.
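The “vector + distance” idea above can be sketched in a few lines (Python/NumPy; the function name and inputs are my own illustration, not any renderer’s API): take a direction — the surface normal, possibly perturbed by a normal map — and push the point along it by the bump/height distance.

```python
import numpy as np

def displace_vertex(position, direction, distance):
    """The "vector + distance" idea: offset a vertex along a given
    (possibly normal-mapped) direction by a bump/height distance.

    A renderer does this on the fly at render time, leaving the stored
    mesh untouched -- hence "non-destructive".
    """
    d = np.asarray(direction, dtype=np.float64)
    d /= np.linalg.norm(d)  # unit direction
    return np.asarray(position, dtype=np.float64) + distance * d

# A vertex on a flat plane, pushed 0.25 units straight along its normal.
p = displace_vertex([1.0, 2.0, 0.0], [0.0, 0.0, 1.0], 0.25)
```

Swap in a direction decoded from a normal map and you get the “Normal + Displacement” combination discussed earlier in the thread.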

Since this is a digital computer, and these days it has a bumper-crop of both fast CPUs and fast memory, it is truly meaningless to “debate” which one is “better.” (And it’s just plain rude to snipe at other people.) Just learn about them all. And, let this thread become one in which we discuss and explain how we’ve used them. (For everyone who’s talking, there could be a hundred people listening and trying to learn something.)

Is what I blathered-on about strictly correct? Not exactly. Normal map and bump map information are used differently in different circumstances. Real-time games have very different (“time is of the essence”) computational requirements. What’s most critical to see is how the information in each type of map is different. If you are not doing a game, you can do anything.


OK, I had that same question a few days ago, but I investigated the topic and did some tests (Maya and Mudbox + some lovely textures from 3DTotal).

Normal maps are maps that contain normal elevation information encoded in a color. The normal is at a 90-degree angle to the surface. So if you have a really high-res model with lots of detail in, say, Mudbox or ZBrush, you may not want to export that directly to your main 3D software (in my case Maya). Instead you take your low-poly model, generate the normal map from the hi-res model, and apply it to the low-res one. When you render, the model bounces light as if the surface were the hi-res one, but in reality it is a low-res model, so render times are really fast compared to using real geometry or a displacement map. Normal maps are extremely popular in video game development, and are used intensively on the PS3 and Xbox 360 to achieve those awesome real-time graphics (but now you know they aren’t true hi-res models; it’s an optical trick).

Bump maps are simpler: they are just black-and-white maps that contain per-pixel information that makes light bounce or not on a surface. Imagine a brick wall: it is actually completely flat, but a bump map will make the darker areas (between bricks) look like holes and the brighter areas look like elevations. It is also a trick, but more for surfaces with less detail than a normal map handles.

Displacement maps are like normal maps, BUT THEY ACTUALLY CREATE THE GEOMETRY. So if you really want extremely good detail (given that you sculpted that detail in Mudbox or ZBrush), you should use a displacement map, which is NOT A TRICK; the actual geometry is modified. But be warned: depending on your model, it can be really, really time-consuming at render time. And I really don’t think it is commonly used for real-time projects (video games).


Un-fortunately, all three of these descriptions are actually more-or-less in-correct…
- The word “elevation” does not belong there. A normal is a vector; it has no elevation. Nor does the application of normal maps have anything to do with “elevation” … but see the next bullet point.
- A bump map doesn’t determine whether “light bounces or not.” (Are you thinking of “alpha” or “reflection”? It is also possible that your use of the word “elevation” in the previous item suggests a confusion between the two concepts.)
- A displacement map does not create geometry, but rather exerts a non-destructive, on-the-fly effect of changing it (by “displacing” the affected vertices).
As you see for yourself, the terminology can be very confusing … and the distinctions, subtle though they may be, are extremely important. Since the end-result of all three techniques is (frequently) “more or less the same visual thing,” confusion is very, very understandable.
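To make the vector-versus-elevation distinction concrete: a tangent-space normal map stores a direction per texel, not a height. A minimal decode sketch (Python/NumPy; the function name is my own, and the (n + 1) / 2 mapping is the common convention rather than any particular package’s API):

```python
import numpy as np

def decode_tangent_normal(rgb):
    """Decode an 8-bit tangent-space normal-map texel to a unit vector.

    The common encoding maps each component n in [-1, 1] to a color
    channel c in [0, 255] via c = round((n + 1) / 2 * 255); we invert
    that mapping and renormalize.
    """
    n = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# The familiar "flat" normal-map blue, (128, 128, 255), decodes to a
# vector pointing (almost exactly) straight out of the surface.
flat = decode_tangent_normal([128, 128, 255])
```

That overall bluish tint of normal maps is just this encoding: the z (out-of-surface) component is nearly always positive, so the blue channel is nearly always high.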


Oh, I am very sorry then. I mean… I understand how it works and what the result of using these techniques is, but yeah, surely I have a mess of concepts in here, so please forgive my mess. And thanks at the same time, because you helped me understand it better now!


“Sorry?” “Forgive?” :shrug:

We’re friends here. Just talkin’ about something we love :curious: to do. Ain’t no apologizin’ needed.


Could someone clear a few things up for me, please?
Suppose one were to use a displacement and a normal map together, with the idea of using the displacement for the general form and the normal map for extra details, and generate them from some app like, say, ZBrush.
What subdivision level should the normal map be generated from? The lowest one, or the one the displacement map was generated to “emulate” (which in this case wouldn’t be the highest one)?
Or is it advisable to just generate both from the same subdivision levels?
Also, should these maps be gamma-corrected when using a linear workflow, and how does smoothing the mesh in the 3D app affect the final result?


The quickest way to get used to the workflow is to run tests on simple shapes. Learn how to generate displacement and normal maps and try combining them.


Yes, of course. I have been doing a lot of tests and have definitely had good and bad results. However, I’m wondering if some general rules apply here.


For my Morgan Freeman render I used normal maps. Why? Because with displacement maps the same image took five times longer to render on my old crap computer. I didn’t notice a better contour or sharper detail with displacement, so I decided to just use the normal map. Only a few people appreciate that level of quality, and it probably wasn’t necessary.

But of course we should always aim for high quality — displacement, bigger maps, etc. — for the good of the team, and, when we can, reduce the quality where necessary. For me, the final image sets the rules.