Realistic Fur Pipeline

Old 04 April 2009   #31
had to edit the post for some reasons

Last edited by Rainroom : 04 April 2009 at 03:12 AM.
 
Old 04 April 2009   #32
Originally Posted by playmesumch00ns: Well if you're using a dome then you don't want ambient occlusion, or you'll be doubling up environment shadowing. Actually, I hate ambient occlusion. People seem to think you just turn it on and it makes your scene look better, but it ends up just making everything look dirty. The shadowing doesn't respect the incident light distribution, and that always looks fake.

Wasn't talking about ambocc, which I agree looks dodgy when transmitted body to fur (but is actually quite freakin' cool to approximate by the volume if you do feathers or matted hair and are close enough).

I meant affecting the specular model by running it through a look-up of the reflection occlusion (transmitted or approximated), or even just modulating it last by that look-up.
Especially if you have an env that you got the basic dome lights set from, that usually pays in spades. Or have you found it superfluous, or found an alternative?
Or are you just pushing through an inordinate amount of deep shadow maps?

Btw, feel like sharing what the numbers for an average outdoor dome look like? Number of lights, distribution tricks, and number and res of the maps?
I'm too far away from the lighting dept to be arsed to go and ask those ex-colleagues of yours.
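For what it's worth, the modulation being described, multiplying the specular result by a reflection-occlusion look-up as a last step, boils down to something this simple. A Python stand-in, purely illustrative; all names are mine, not from any production shader:

```python
# A minimal sketch of the idea above: multiply the specular result by a
# reflection-occlusion value looked up from a baked (or approximated) map.
# All names here are illustrative, not from any production shader.

def shade_specular(spec_color, spec_intensity, refl_occ):
    """Attenuate a hair specular lobe by reflection occlusion.

    spec_color     -- (r, g, b) specular result from the hair BRDF
    spec_intensity -- scalar specular gain
    refl_occ       -- 0..1 visibility along the reflection direction,
                      e.g. read from a reflection-occlusion look-up
    """
    return tuple(c * spec_intensity * refl_occ for c in spec_color)
```

The point is just that the occlusion term scales the whole lobe down where the environment is blocked, rather than darkening diffuse the way ambocc does.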
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles

Last edited by ThE_JacO : 04 April 2009 at 01:22 PM.
 
Old 04 April 2009   #33
Originally Posted by ThE_JacO: Wasn't talking about ambocc, which I agree looks dodgy when transmitted body to fur (but is actually quite freakin' cool to approximate by the volume if you do feathers or matted hair and are close enough).

I meant affecting the specular model by running it through a look-up of the reflection occlusion (transmitted or approximated), or even just modulating it last by that look-up.
Especially if you have an env that you got the basic dome lights set from, that usually pays in spades. Or have you found it superfluous, or found an alternative?
Or are you just pushing through an inordinate amount of deep shadow maps?

Btw, feel like sharing what the numbers for an average outdoor dome look like? Number of lights, distribution tricks, and number and res of the maps?
I'm too far away from the lighting dept to be arsed to go and ask those ex-colleagues of yours.


We're looking into ways of getting better approximations of the environment in the highlights now... I don't want to talk about those yet, though... got to keep some tricks up our sleeves.

I never really considered doing what you suggest with the reflection occlusion. I found in most cases the dome shadows offer a good enough visibility approximation for environment lighting; you're usually more concerned with the highlights from the key lights, so we tend to ramp down the specular on the fills to avoid confusing reflections everywhere. Plus the hair BRDF plays much nicer with explicitly shadowed sampling than with env/occlusion hacks, especially for the secondary highlight, caustic and transmission components.

On Caspian our domes were normally 12-20 lights, but went up to 32 in some cases. They were generated with Debevec's median cut algorithm, with an extra merging stage at the end to get a better approximation. TDs would normally start with a dome of 16 or 32, then merge individual dome sources to reduce the number of shadows. There are better sampling methods for environment maps than median cut out there, but we found it worked well enough in practice not to bother implementing anything more advanced. We did implement a k-means search as well and used that for the more uniform environments, since median cut creates very regular patterns of lights if there are no strong sources.
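For anyone following along, the core of median cut is easy to sketch: keep splitting the most energetic region of the map along its longer axis at the half-energy point, then drop one light at each region's energy centroid. A bare-bones Python version, illustrative only (it skips the solid-angle weighting a real latlong map needs, and the merging stage mentioned above; all names are mine):

```python
import numpy as np

def median_cut_lights(lum, n_lights):
    """Bare-bones median cut on a 2D luminance image. Repeatedly splits
    the most energetic region along its longer axis at the half-energy
    point, then places one light at each region's energy centroid.
    Returns a list of (row, col, energy) tuples. Sketch only: no
    solid-angle weighting, no source merging."""
    regions = [(0, lum.shape[0], 0, lum.shape[1])]
    while len(regions) < n_lights:
        # pick the region containing the most energy
        r0, r1, c0, c1 = max(regions, key=lambda r: lum[r[0]:r[1], r[2]:r[3]].sum())
        regions.remove((r0, r1, c0, c1))
        if (r1 - r0) >= (c1 - c0):
            # split rows where cumulative energy reaches half the total
            acc = lum[r0:r1, c0:c1].sum(axis=1).cumsum()
            cut = min(max(r0 + 1 + int(np.searchsorted(acc, acc[-1] / 2.0)), r0 + 1), r1 - 1)
            regions += [(r0, cut, c0, c1), (cut, r1, c0, c1)]
        else:
            # same thing along columns
            acc = lum[r0:r1, c0:c1].sum(axis=0).cumsum()
            cut = min(max(c0 + 1 + int(np.searchsorted(acc, acc[-1] / 2.0)), c0 + 1), c1 - 1)
            regions += [(r0, r1, c0, cut), (r0, r1, cut, c1)]
    lights = []
    for r0, r1, c0, c1 in regions:
        block = lum[r0:r1, c0:c1]
        rr, cc = np.mgrid[r0:r1, c0:c1]
        e = block.sum()
        # light position = energy centroid of the region
        lights.append(((rr * block).sum() / e, (cc * block).sum() / e, e))
    return lights
```

The merging pass described above would then collapse nearby low-energy lights to get the shadow count down.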

The maps themselves were normally 256 pixels square. For our domes we always used the standard-shadow/occlusion trick, except on a few hero Reepicheep shots. Those were towards the end of the show, and I'd just bump the dome res up to 512, with full-on deep shadows for all 32 sources. By that point the farm was pretty empty and I was too tired to spend hours optimising shadow sampling settings.
__________________

You can have your characters photoreal, fast or cheap. Pick two.
 
Old 04 April 2009   #34
:thumbsup:

Great discussion on fur! Something I've always hunted for on forums.

Besides 3Delight and PRMan, I feel SiTex Air also does a great job of rendering fur (although I haven't found its deep-shadows equivalent, fragmented maps, to be particularly helpful so far). It's got some decent-looking out-of-the-box shaders for fur and hair too. And considering its price, it's well suited for small-to-medium-sized studios. Of course, you'll need to couple it with Animal Logic's MayaMan to use it with Maya.

We use Shave and a Haircut and, its dynamics aside, I find it great for production. In the newer version there's even support for attaching Maya hair systems for dynamics. I guess that's better than using its own system. While we have mostly been using off-the-shelf tools, we are trying to get into more R&D on fur/hair shaders and grooming tools. The deformer method of grooming fur sounds awesome! Checking out some VFXWorld articles right away.

For feathers, do most of you use the hair/fur tool itself with custom shaders to render them out as feathers? Our shader that does that works fine, but we suffer from a problem with root darkening in such cases (though we are baking out surface normals as well). And we try to fake occlusion by using a dome setup too, but we don't sample any environment maps. Would it be advisable to do so? Also, playmesumch00ns, are 256 maps enough to get occlusion shadow detail, or do you multiply the short-distance pre-baked occlusion over it? Because in our renders, to get enough occlusion detail in fur, we really had to crank up our shadow map res (probably because we are not using deep shadows and you are), which in some cases causes unwanted self-shadowing on characters (like a more detailed shadow of the character's hand on the torso than required).

Since we are just getting into RenderMan shading, we are trying to implement the short-distance fur occlusion baking method described in Pixar's paper, but are really not sure about the part of the paper that talks about adding this occlusion info into the fur shader.

Quote: Ambient occlusion calculation during render time only requires two texture lookups based on scalp (s,t) and parametric "v" and a blend.


As I understand it, the ambient occlusion has already been converted to a flat texture map (using ptrender, as mentioned in the paper). So, does the fur shader have some attribute that's looking up the greyscale value at a particular point to multiply over the actual fur colour? And since the paper mentions that the occlusion is calculated at four different ranges, how are these separate occlusion values at different lengths along the fur (v, I guess) interpolated? Am I making any sense?

What I tried is adding another attribute to our fur shader where we call the baked occlusion map, and it is simply multiplied over the final colour output of the fur (similar to the root darkening or root/tip multiplier you find in some fur shaders). That seems overly (and incorrectly) simple to me, and I'm sure I'm missing out on shadow detail by doing this. How can I improve on this and understand the paper better?

Last edited by noizFACTORY : 04 April 2009 at 08:15 PM.
 
Old 04 April 2009   #35
Please excuse the crappy GIMP diagram. Basically you're linearly reconstructing the occlusion along each hair from the 4 maps. Each map represents an occlusion result at some distance in v along the hair. All you need to do is figure out which two maps you are 'between' based on the v coordinate of the point you are shading, then linearly interpolate (mix) the values stored in those maps based on how far you are between them.

For the example in the diagram this will be something along the lines of:

 mix(
 float texture( map1[0], s_root, t_root ),
 float texture( map2[0], s_root, t_root ),
 (v - map1Position)/(map2Position-map1Position)
 );
 


Assuming 4 samples along the curve, map1Position will be 0.333... and map2Position will be 0.666...

Then you just multiply the result into the diffuse component of your hair shader. Jobbed.
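Picking the two bracketing maps and the blend factor generalises to any number of evenly spaced maps. A quick Python stand-in for the shader-side lookups (the function name and layout are mine, illustrative only; it assumes the maps sit at v = 0, 1/(n-1), ..., 1, which matches the 0.333/0.666 example above):

```python
def occlusion_at_v(maps, v):
    """Reconstruct occlusion at parametric position v along the hair
    from n baked occlusion values, assumed evenly spaced at
    v = 0, 1/(n-1), ..., 1 (so 4 maps sit at 0, 0.333, 0.666, 1,
    matching the example above)."""
    n = len(maps)
    f = min(max(v, 0.0), 1.0) * (n - 1)  # fractional index into the map stack
    i = min(int(f), n - 2)               # lower of the two bracketing maps
    t = f - i                            # blend factor between the two maps
    return maps[i] * (1.0 - t) + maps[i + 1] * t
```

Here each entry of `maps` stands in for one texture lookup at the scalp (s,t); the shader would do the two lookups and the same mix per shading point.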
[Attached image: hairocc.jpg]
__________________

You can have your characters photoreal, fast or cheap. Pick two.
 
Old 04 April 2009   #36
Thank you so much for the simple yet effective illustration playmesumch00ns!

Hopefully, I should be able to implement whatever I understand of this in theory, in our shading network too.
 
Old 04 April 2009   #37
wow, all that is a bunch of very very useful info.

So how do you manage to make a dome light with the colours of your HDRI?
 
Old 04 April 2009   #38
Originally Posted by Yeminius: wow, all that is a bunch of very very useful info.

So how do you manage to make a dome light with the colours of your HDRI?


http://gl.ict.usc.edu/Research/MedianCut/
__________________

You can have your characters photoreal, fast or cheap. Pick two.
 
Old 04 April 2009   #39
HA!

cooool

A bit off topic, but... a weird problem I'm having now; not sure if it is because of the fur or just mental ray.

If I change, let's say, a light parameter, the fur then renders bluish in some areas. I can save the scene after that, restart Maya, and it renders OK. It happens about a third of the times I change something... what could it be?
 
Old 04 April 2009   #40
If anyone's interested, David on DJX has just posted some results he's got with the p_shader set using Maya fur.. http://www.djx.com.au/blog/2009/04/...hader_replacer/

The results are excellent, and hopefully anyone who's had issues trying to get render passes with Shave straight out of Maya (shadow pass!!) will realise the benefits here. Going to try his findings with Shave as soon as I get an opportunity.
__________________
LinkedIN
VIMEO

- My thoughts are my own and should not be confused with anyone else's.
 
Old 04 April 2009   #41
Very cool Dominic, thanks for the link
__________________
Mike Rhone
-VFX Artist-

Dust Rig - tutorial for Maya

Tonga the Fox - Free cartoony rig for Maya!
 
Old 04 April 2009   #42
mmmm...


as I said a couple of posts before, when I change something I get this.



Does anyone know why this happens?

... I can hit save, go and open the scene again.



voilà

It is pretty annoying :P, and as you can see, the light is not matched... so think how long it would take to re-open the scene every time I change a parameter ;O

Some people say it's mental ray memory management issues... not sure what that means.
 
Old 04 April 2009   #43
penetration

Has anybody come across a situation where the Shave guides penetrate into the body during animated scenes, in the case of fully furred animals, as a result creating a bald patch on the body?
For example: I have a fully furred dog, and in some animated scenes the guides seem to penetrate into the body, especially near the knee joints or the palm areas.

Did you guys face any problem like that with the animals in Narnia or Kong?

P.S.: combing the Shave guides manually in every frame where there's a bald patch isn't really a good solution, since you never know, it might appear on some other body part in some other frame.
__________________
Character FX TD
 
Old 04 April 2009   #44
Well, I'm not sure if I understood what earlyworm said about Kong's fur styling, but as far as I understand he kind of converted the Shave-styled hair into Maya fur, is that right?

If so, is there any other way to convert stylised hair made in Shave into Maya fur without proprietary plugins?
 
Old 04 April 2009   #45
Originally Posted by Yeminius: mmmm...
some people say its mental ray memory management issues... not sure what that means


There are a LOT of folks who have far more production experience than me in this thread (which is friggin' awesome, by the way).

That said, are you using 32-bit framebuffers by chance? I've noticed that I get that posterized/tie-dye look sometimes in my interactive renders if I leave 32-bit on. It doesn't always do it, and it changes from reboot to reboot. Batch renders always come out properly, though.

I've also seen issues like that when Shave satellite renders aren't sharing their shadow map directory properly.

--T
 