Normal Pass?
04 April 2011, 11:43 AM
I'm looking for the normal information relative to the camera: a 'normal pass'. I need an animated sequence of normal maps, where each pixel of the normal map corresponds with the final rendered object.
I tried RenderMap, but it generates a blank purple map whenever the space 'relative to UV basis' is enabled. Yes, I have set Automatic Basis to several texture projections, including a Camera projection. Object and local space normal maps seem to work fine.
I tried Ultimapper, but it requires a high- and a low-res model, which I don't have. I'm not trying to add detail to a low-poly model; the end product is just the normal map.
Tried using a CAV, Tangent, and Binormal property mix, which almost worked. I was able to render the tangents using a color_map_lookup, but the normal information isn't quite right. Normals facing the camera change values depending on the position of the camera, which shouldn't happen.
Admittedly, I am new to normal maps. Is there a way to produce the image I want?
There is one workaround: rumor has it that Photoshop has a plugin that can convert a depth/height/bump map into a normal map. I could render out a camera-based depth sequence and convert it in Photoshop, but that's annoying, and I doubt the results would be as accurate as actually calculating the 3D surface in XSI.
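For the curious, the conversion that kind of plugin performs can be sketched in a few lines. This is a minimal NumPy sketch, not the plugin's actual algorithm: take the height field's gradients, build a per-pixel normal, and remap it into image range.

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a 2D height map to a tangent-space normal map (sketch)."""
    # Gradients along image rows (y) and columns (x).
    dy, dx = np.gradient(height.astype(np.float64))
    # A surface z = h(x, y) has (unnormalized) normal (-dh/dx, -dh/dy, 1).
    n = np.dstack((-dx * strength, -dy * strength,
                   np.ones_like(height, dtype=np.float64)))
    # Normalize each pixel's vector to unit length.
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Remap components from [-1, 1] to [0, 1] so negatives survive in an image.
    return n * 0.5 + 0.5

# A flat height map yields the familiar uniform (0.5, 0.5, 1.0) color.
flat = height_to_normal(np.zeros((4, 4)))
```

The quality concern in the post is real: gradients of a depth image only approximate the surface, so silhouettes and steep slopes come out wrong compared to sampling the true 3D normals in XSI.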
(clarification stuff. Skip if you know what I'm trying to get)
A normal value is a relationship, a difference. Usually it's a relationship between the surface and the world orientation. Often it is the relationship between a smaller surface (a high-poly triangle) and a larger surface (a low-poly triangle). I need the normal value for the difference between the surface and the camera.
XSI does this fluently with the incidence shader, but the incidence shader only measures deviation from the camera vector, not the direction of deviation, which a normal map calculates.
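The difference between incidence and a normal pass can be made concrete. Incidence is essentially a dot product with the view vector, which collapses direction; a camera-space normal keeps the full vector. A small illustrative sketch (the vectors are made up, not from the scene):

```python
import numpy as np

view = np.array([0.0, 0.0, 1.0])            # direction toward the camera
n_left = np.array([-0.5, 0.0, 0.8660254])   # tilted left of the view axis
n_right = np.array([0.5, 0.0, 0.8660254])   # tilted right by the same angle

# Incidence: only the angle to the camera vector survives,
# so both tilts produce the same value.
inc_left = float(np.dot(n_left, view))
inc_right = float(np.dot(n_right, view))

# A camera-space normal keeps the full vector, so left vs. right
# differ in the red channel after the usual [-1, 1] -> [0, 1] remap.
color_left = n_left * 0.5 + 0.5
color_right = n_right * 0.5 + 0.5
```

Here `inc_left` equals `inc_right`, while `color_left` and `color_right` differ, which is exactly the directional information the incidence shader throws away.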
04 April 2011, 06:50 AM
For a normal pass of the whole scene, you should look into the "normal" render channel (framebuffers section in the render manager window). If you want to isolate specific objects, just put them on a separate pass with the normal framebuffer active on that pass.
The downside is that there seems to be a problem with the way framebuffers are rendered out of XSI mental ray; most of the time they come out black or with some of the data corrupted. Usually the information is still there, though; you might just need to resample the image in Nuke or whatever app you're using to fetch that data.
Also, I believe the normal framebuffer in XSI refers to tangent-space normals. For camera space or object space you might need to build your own custom shader in the render tree and use a "Store Color in Channel" node to feed the data to a custom buffer.
05 May 2011, 11:31 AM
Thanks loads for your help oktawu, but it looks like you're right, we're going to need to do a custom shader. Do you have any suggestions for how to approach building a custom camera-space normal shader in the render tree?
I played around a bit and ended up with:
Vector_State (Normal Vector) > Store_Vector_in_Channel > Vector_to_Color > color input of a Constant shader
This seems to give me world space, but only half of it. I only get the positive xyz values; the inverted values are black (the negative z axis is usually yellow, and I've got no yellow). I tried a vector-vector subtraction to invert the vectors, but then my sphere turned all black instead of half black.
How can I get the Camera space normals?
How can I get the negative axis value colors?
Thanks again for the help
05 May 2011, 05:05 PM
If you want world-space normals, then yes, all the negative axes will look black. There's information there, but it's out of display range, since you're displaying negative values...
For camera space normals, maybe something like:
Vector State (Normal) > Vector Coordinate Converter (to Camera) > Vector to Color
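That coordinate-conversion step amounts to rotating the world-space normal by the inverse of the camera's orientation. A hedged NumPy sketch of the math (the `look_at_rotation` helper is hypothetical, built here for illustration; it is not an XSI function):

```python
import numpy as np

def look_at_rotation(eye, target, up=(0.0, 1.0, 0.0)):
    """Camera rotation matrix: columns are the camera's right/up/back axes."""
    back = eye - target
    back = back / np.linalg.norm(back)            # camera looks down its -z
    right = np.cross(np.asarray(up, float), back)
    right /= np.linalg.norm(right)
    cam_up = np.cross(back, right)
    return np.column_stack((right, cam_up, back))

def world_to_camera_normal(n_world, cam_rot):
    # Rotation matrices are orthonormal, so the inverse is the transpose.
    return cam_rot.T @ n_world

# Camera at (0, 0, 100) looking at the origin, like the setup in this thread.
rot = look_at_rotation(np.array([0.0, 0.0, 100.0]), np.zeros(3))
# A surface facing the camera has world normal +z...
n_cam = world_to_camera_normal(np.array([0.0, 0.0, 1.0]), rot)
# ...and in camera space it stays (0, 0, 1): pure blue before any remap.
```

Note this only handles the coordinate change; whether the result displays correctly still depends on how the vector is mapped to a color afterwards.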
05 May 2011, 03:49 AM
[Fantastic! Thanks everybody for the help. It looks like all is well now.
CiaranM, the vector_coordinate_converter is purrfect, thanks.
Someday I'll need to take a comprehensive course on the render tree. The tool is so key, but so elusive.]
I spoke too soon. It looks like my orientation is still off. Lots of sources insist that blue, representing the z axis, should point directly at the camera, but if you look at 99% of normal maps on the web, true z-axis blue points off to the bottom left somewhere, while most flat surfaces facing the camera are a grey-purple color.
I've succeeded in getting z-axis true blue to point directly into the lens, representing a perpendicular surface, but that's not giving the results we want. It's not following the standards of normal mapping.
Using the pass normal render channel from the z axis (with the camera at x0, y0, z100, facing the origin), XSI produces an accurate normal map that matches normal-orientation standards, but it is in world space or tangent space (if I ever understand tangent space), so the normal render channel is useless from our camera's required location.
The vector_coordinate_converter seems so close. It manages to convert the world-axis-aligned normals to point 'true z-blue' into the camera; now I need it to convert non-world-axis-aligned normals to point 'soft purple' into the camera.
I don't even know the terminology for what I want. Am I trying to 'bend' the normal vectors to the camera? Any suggestions for the next step?
05 May 2011, 04:11 AM
Okay, I think we've got it. I needed to change the Vector_to_Color node's conversion_method from 'direct' to 'normal'.
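That fix makes sense if (and this is an assumption about what those modes do, not something from the XSI docs) 'direct' copies the vector components straight into RGB, clamping negatives to black, while 'normal' remaps [-1, 1] into [0, 1]. A tiny sketch of the two behaviors:

```python
def vector_to_color_direct(n):
    """Assumed 'direct' mode: copy components; negatives clamp to black."""
    return tuple(max(0.0, c) for c in n)

def vector_to_color_normal(n):
    """Assumed 'normal' mode: remap [-1, 1] -> [0, 1] so negatives stay visible."""
    return tuple(c * 0.5 + 0.5 for c in n)

# A camera-facing normal (0, 0, 1):
direct = vector_to_color_direct((0.0, 0.0, 1.0))   # pure blue
remap = vector_to_color_normal((0.0, 0.0, 1.0))    # (0.5, 0.5, 1.0): grey-purple
# A normal pointing along -x: black in 'direct', still visible after the remap.
lost = vector_to_color_direct((-1.0, 0.0, 0.0))
kept = vector_to_color_normal((-1.0, 0.0, 0.0))
```

The remap also explains the earlier symptoms in this thread: the half-black sphere and the missing grey-purple on camera-facing surfaces are both what you get when negative components are clipped instead of remapped.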
Our realtime lighting is now behaving as expected. Thanks again for all the help guys :)
05 May 2011, 04:11 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.