the hardest working mother ****ers!


Here is some of my reference and info behind the research for the hair shader, and some test renders in its current state.

:Hair reference sheet:

:Hair Notes:

:Research Papers of Interest:

Light Scattering from Human Hair Fibers

Real-Time Hair Shading

Rhythm and Hues Narnia Paper


:Test Render wip:

:Test Animations:

Short Blond Test

Long Hair Test


So far the shader in its current state takes into account three anisotropic highlights for the hair. One is a sheen coat or shine on top of all the highlights; the second is the primary highlight of the hair, shifted towards the tip; the third is the secondary highlight, shifted towards the root. All highlights can control their roughness factor and their shift along the hair for more control. The shine and primary highlights have color values that can be mapped and changed to match certain hair colors. The secondary highlight used to have its own color value as well, but from my observations it was always just a more saturated version of the color picked for the primary highlight. So now the secondary highlight simply takes its color from the primary and lets you control its saturation to achieve the secondary highlight color.

The hair diffuse component is a very basic Lambertian hair model derived from Kajiya and Kay's diffuse model for hair. I have yet to concept up some ideas for maybe taking single or multiple scattering into account in the hair. Actually this might not be needed, since a simple constant ambient term does an OK approximation, even more so if we stick to darker hair types.

The hair transparency is already set in place as well, allowing root, tip and border control, plus an overall transparency control so the whole hair can be softened. It uses a simple control on the shaders to force them to be transparent and to control the shadow color, since hair is very dependent on the shadows looking transparent and on the shadow color itself.
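As a rough sketch of the model described above, assuming a Kajiya-Kay base and the usual tangent-shifting trick from the real-time hair shading literature (the shift and exponent numbers here are made-up placeholders, not the shader's actual values, which are the artist-tunable shift/roughness controls):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def kajiya_kay_diffuse(T, L):
    # Kajiya-Kay diffuse: sin of the angle between hair tangent T
    # and light direction L, i.e. sqrt(1 - (T.L)^2).
    return math.sqrt(max(0.0, 1.0 - dot(T, L) ** 2))

def shifted_specular(T, N, L, V, shift, exponent):
    # Shifting the tangent along the normal slides the highlight
    # along the strand: positive toward the tip, negative toward the root.
    Ts = normalize(tuple(t + shift * n for t, n in zip(T, N)))
    H = normalize(tuple(l + v for l, v in zip(L, V)))  # half vector
    sinTH = math.sqrt(max(0.0, 1.0 - dot(Ts, H) ** 2))
    return sinTH ** exponent

def hair_shade(T, N, L, V):
    # Three anisotropic lobes as described in the post: a tight sheen
    # on top, a primary lobe shifted tip-ward, a secondary lobe shifted
    # root-ward (which would reuse the primary's color, resaturated).
    return {
        "diffuse":   kajiya_kay_diffuse(T, L),
        "sheen":     shifted_specular(T, N, L, V, shift=0.0,  exponent=200),
        "primary":   shifted_specular(T, N, L, V, shift=0.1,  exponent=80),
        "secondary": shifted_specular(T, N, L, V, shift=-0.1, exponent=30),
    }
```

All inputs are unit vectors; the lobes would each be tinted by their mapped color and summed in the real shader.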

There are still other ideas I'm toying with to improve the quality of the hair. The main one is some sort of hair noise to mimic the glinting seen in hair due to its scaly nature. I might also try to implement some sort of hair reflection that samples the environment, so depending on how shiny the hair is it could reflect its surroundings, or look like hair coated with some shiny substance, i.e. gel, water, etc… I've also been playing around with ideas on how to fake or take global illumination into account in the hair, and possibly occlusion as well.

When I get some more time I'll post up some more recent render tests…


:Current Shader Attributes:


JESUS CHRIST there has to be a better way!..


I got the bright idea that it would make a nice feature in the book …if the texture artist, after completing the highly detailed map …could easily whip out ZBrush…make a new layer for high-frequency disp…and gently inflate the precious surface details from an alpha mask…

until I tried to export the ****ing thing…

  1. I get the alpha depth factor…to correct for the gamma (because ZBrush samples a mid-gray displacement range)

  2. I use some ****ing thing called the Displacement Exporter …to put it into 32-bit disp…that Maya likes…typing in some bullshit serial code…(my porn site has less protocol than this)

  3. Then I set some MR node for the subdivisions…

a couple other steps…and I'm praying that something renders…

by this time I need a drink…

****ing A…

need some help…there has got to be a better way…this used to be about reading a ****ing black and white map…how hard is that?


hey paul,
that’s a great idea you have. definitely do this! now, by the time you get to that section of the book, mudbox2 will be out and this won’t even be a problem (you can do this in mudbox now, actually, but mb2 is already proving to be a big upgrade)…i know i’m a mudbox slut.

so the workflow will go somefin’ like this:
i’ll send you my highest-res mudbox file, you can apply your textures onto it (preexisting or not), add another level of subdivision or two (depending if you’re workin’ with heaps of ram, like 8 gigs and a 64-bit OS), use your greyscale map to displace the mesh/carve in, then inflate your tertiary-level skin pores and wrinkles, etc. for that last added sweetener. when that’s all done, we can extract the displacement maps from mudbox2.

NOTE: i have yet to extract displacement maps from mb2. i’m pretty sure it’ll be good. last resort is we extract 2k 32-bit floating-point displacement maps from cyslice. i’m wishing on mb2 being a one-stop shop for sculpting and extracting…but there’s always cyslice, which does a beautiful job extracting maps.


**** I hate Z…

I didn’t know you could take a map and pull out the high-frequency detail…from the map…in Mudbox…

I will upgrade my box to 64-bit…just to do this…

we should really stick it to ****ing ZBrush…Aaron Sims can have it…

let’s promote Mudd!

any documentation on how to make a mask from a loaded texture map and pull out a disp? (in Mudd?)


ZBrush is not that bad. But either way the mental ray problem is a pain. I never use the MR subdivision node thing; I never could get it to work. Also, most machines I’ve had couldn’t handle such high displacement. I prefer a technique I found at Headus ages ago. I export a lower subdivision level out of ZBrush or whatever package you’re using. For example, if my mesh has 6 levels, I’ll generate a displacement map and normal map from level 3 and export that level 3 as an OBJ for rendering in Maya. It seems you can get faster renders if, on the imported level 3 mesh in Maya, you turn off feature displacement and let Maya only push the points according to the map for the silhouette of the model, and let the normal map render all the missing fine detail that would otherwise cause Maya to go nuts tessellating. Maybe not the best method, but it works great for stills, speeds render times greatly, and comes very close to the original sculpt…


Do people have all these problems in Mudbox?

I pretty much gave up on 32 bit disp out of Z-brush…

  1. Do you do anything for the gamma correction? (the -2.2 / 1.1 rule?)
  2. For 16-bit maps, do you have to do that conversion thing?
  3. For 16-bit maps, do you have to do a node correction for subdivisions? (I remember this being easier years ago)


as far as the high-frequency detail? it’s your models…your concepts…so …tell me what you want to do …where bump and disp meet…

It would be much easier to keep the high-frequency detail in the bump…no? …do what you guys do and I will follow your lead for whatever creature…



So I’ve been out of town on travel, and I meant to put this up about your mental ray rendering problem. As I said before, I never use the subdivision approximation node MR gives you for rendering the mesh. Instead I’ll bring in a lower-res version of my mesh, in this example level 3, as the base to start my rendering on. Since I’m using a base from the original sculpt, I already have some silhouette info there, so I won’t have to displace too much of the mesh. First I’ll turn feature displacement off on the mesh so it won’t cause Maya and mental ray to tessellate. This will only push your vertices out according to the displacement map that you provide, which is really all we need. Also, I just used an 8-bit displacement texture; it seemed to work just fine.
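The "feature displacement off" behavior described above boils down to pushing each existing vertex along its normal by the signed map value, with no adaptive tessellation, which is why the normal map has to carry the fine detail. A minimal sketch of that idea (pure Python for illustration, not Maya's actual internals):

```python
def push_vertices(vertices, normals, displacements):
    """Displace each vertex along its (unit) normal by a signed amount.

    With feature displacement off, this vertex push is all that happens at
    render time: no extra tessellation, so the silhouette comes from the
    displacement map and the fine detail comes from the normal map.
    """
    out = []
    for (vx, vy, vz), (nx, ny, nz), d in zip(vertices, normals, displacements):
        out.append((vx + nx * d, vy + ny * d, vz + nz * d))
    return out
```

A vertex with a zero displacement value stays put, so only the silhouette-level shape of the sculpt gets reconstructed.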

As for the displacement map texture itself, it needs the correct alpha gain number so you don’t push the vertices way too far out. Usually I get this number from ZBrush when it makes my displacement map. The formula goes like this: your main number goes into the alpha gain, and the alpha offset is the same number divided by 2, always negative.
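The gain/offset rule above, written out as a tiny helper (`scale` standing in for whatever number ZBrush reports for the map):

```python
def alpha_settings(scale):
    """Return (alphaGain, alphaOffset) for Maya's file texture node,
    per the rule above: gain = scale, offset = -scale / 2."""
    return scale, -scale / 2.0

def displacement(pixel, scale):
    # With these settings, mid-gray (0.5) maps to zero displacement,
    # white to +scale/2, and black to -scale/2 -- which is the whole
    # point, since ZBrush bakes the map around a mid-gray zero level.
    gain, offset = alpha_settings(scale)
    return pixel * gain + offset
```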

Here is a quick example I made with this method I like to use. By the way, its origins come from here: Headus Example; it goes into much further detail and has examples to download so you can see the shading networks. This is the lower base mesh, which I simply subdivided once in Maya with a subdivision node; the poly count is 34,272 polys. The left side is this base level plain on its own, and the right side adds the generated normal map, which deals with all the high-frequency detail we need from the mesh, while the displacement map still pushes out the silhouette to complete the reconstruction of the sculpt. This renders much faster than letting Maya subdivide the mesh for displacement, captures those really fine details, and still leaves room to include more bump maps or normal maps if needed for other fine details, all handled in the Hypergraph shading network.

High-res screen snap for comparison. I tried to render it in Maya, but of course mental ray crashed on me. This sculpt is 548,352 polys, compared to the 34,272 polys above. You’re always gonna lose some detail, but not too much.

Screen snap of the shading network. It’s a very simple shading network; mainly the normal map is capturing the fine detail…


Thanks Miguel! Will give it a shot this weekend…btw Joe Alter hasn’t gotten back to me yet…Daniel is trying to get a hold of him…



No prob! Hey that’s cool; guess it’s still not needed yet. Models are still being developed. Maybe we should have backup ideas for hair in case that doesn’t go through…

Speaking of which, I was rebuilding some topology on a model of mine and was wondering if anyone is gonna do a section in the book about retopology. It might make a good section. I’m sure Stefano would agree with me that using the NEX plugin for Maya to rebuild topology is pretty sweet!

BustRetopNEX <-- slow but you get the point…

Some other tools of interest…

:Normal Mapping:
Photoshop Normal map Filter
Crazy Bump

:Maya UV Tools:
Pelting Tools


ah, we will get a license for Shave or I will freaking buy one (or Daniel…can ya help a brother out)


Maybe we can pass a donation plate around and buy it…

Found this free screen capture software for anyone interested, works pretty well.
Cam Studio


Good Eye color reference images for look dev…
Eye Ref

Here is also a siggraph video detailing a method for eye rendering…
Eye Rendering


Joe Alter has donated a license of Shave for us…!

Big thanks Joe…

(We need to plug him)

Miguel …Daniel will hook you up with the license…



that is great news, Paul! We definitely need to plug him… Thanks, you guys rock…


quick question about Mudbox…(was playing around with it this weekend)

I know MB can make a stencil from a photo…a free-floating mask (like you would in real-life airbrushing)…


can it make a stencil derived from a UV map?..ideally load a texture and pull an alpha matte off of it?

can MB 2009 pull this off?


Sounds like you’re trying to pull that ZBrush trick where you take the greyscale bump and inflate the mesh for details? I’ve used Mudbox to a certain point, but to my knowledge I don’t think you can alpha-mask from a texture using the UV map coords. There might be some sort of workaround or trick to do something similar, or the new 2009 might be able to… although I’m pretty sure it can’t. I could be wrong!