mental ray Z depth in Post


Redsand1080
10-12-2010, 02:38 PM
I've been trying for some time to get decent results out of a mental ray Z depth channel in a post application and have never been satisfied with the results. I was wondering if anyone could offer some advice on how to do this properly and not end up with edge artifacts once the blur is applied. I've seen plenty of people get nice results, but I've never been able to achieve them no matter what workflow I used.

I am currently under the impression I am doing everything correctly, but if I'm not please let me know. I'd love to be able to get acceptable results.

My goal is to render a Z depth pass with NO antialiasing at 3 times my normal render res and then reformat that down in Nuke to get my antialiasing back. I'm trying to get correct edges in the render with no 'smoothing' between different depths. I know you can achieve this in Renderman with the 'Hider' but I'm trying to get this to work in mental ray.

Here is my workflow:

1) I apply the p_z shader to my geometry
2) I set my framebuffer to 32-bit float
3) Set the p_z shader to 'float point output'
4) Render to the EXR file format
5) I render at 3 times my normal render resolution. So if the normal render is HD 720 I render my Z depth pass at 3840 x 2160
6) I set my sampling to 'fixed' and sample exactly once per pixel...that equates to a setting of 0 in the UI.
7) I set the filter to triangle because according to Zap the box filter can still do some smoothing, so triangle is best to try and get no smoothing of edges.
8) Apply ZBlur in Nuke (rough sketch of the Nuke side below).
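
In Python terms, the Nuke end of that workflow would look something like this. This is just a minimal sketch; the ZBlur knob names are from memory and may differ between Nuke versions, and the values are placeholders:

import nuke

# 3x-resolution, non-antialiased 32-bit depth pass from mental ray
depth = nuke.nodes.Read(file='zdepth.%04d.exr')

# scale back down to comp resolution to recover antialiasing
down = nuke.nodes.Reformat(type='scale', scale=1.0/3.0)
down.setInput(0, depth)

# depth-driven defocus; tweak per shot
blur = nuke.nodes.ZBlur()
blur.setInput(0, down)
blur['focal_point'].setValue(5.0)   # hypothetical focus distance
blur['size'].setValue(15)           # hypothetical max blur size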

Am I doing anything incorrectly?

Thanks so much for any advice!

-Justin

Redsand1080
10-12-2010, 03:18 PM
I just watched the Round 6 blog and noticed that those guys did an erodeDilate on the Z depth pass before they scaled it down and applied the Frischluft filter. So their node tree looks like this: 1) Depth pass double res no antialiasing, 2) erodeDilate, 3) Scale, 4) Lenscare.
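
As a standalone illustration of why the dilate step comes before the scale, here's a NumPy/SciPy sketch of the idea (not their actual setup; which direction you grow depends on how your depth is encoded — this assumes near = small values):

import numpy as np
from scipy import ndimage

def dilate_then_scale(depth, grow=3, factor=2):
    # grow the foreground's (nearest, i.e. smallest) depth values outward
    # so edge pixels end up carrying foreground depth, not a fg/bg mix
    grown = ndimage.grey_erosion(depth, size=(grow, grow))
    # crude box-average downscale standing in for the Scale node
    h, w = grown.shape
    h, w = h - h % factor, w - w % factor
    return grown[:h, :w].reshape(h // factor, factor,
                                 w // factor, factor).mean(axis=(1, 3))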

They must have experienced a similar edge problem without the erodeDilate.

Does anyone have any experience using a technique such as this to get decent edges?

Thanks!

-Justin

Galakgorr
10-12-2010, 05:43 PM
that method works well for edges around objects that don't have huge depth contrasts.

we do the same thing in after effects... use mattes to isolate objects in your depth channel and dilate the isolated object's zdepth a bit, then throw that precomp on top of your existing zdepth. it's tedious but it's the only foolproof method i'm aware of that will get rid of those edges.
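
outside of AE, that per-object step looks roughly like this (a numpy/scipy sketch, not our actual comp; assumes a near-is-small depth encoding and a 0..1 matte):

import numpy as np
from scipy import ndimage

def dilate_object_depth(scene_depth, object_depth, object_matte, grow=2):
    k = 2 * grow + 1
    # spread the object's nearest (smallest) depth values a few pixels outward
    grown_depth = ndimage.grey_erosion(object_depth, size=(k, k))
    grown_matte = ndimage.grey_dilation(object_matte, size=(k, k))
    # "throw the precomp on top": grown object depth over the scene depth
    return grown_matte * grown_depth + (1.0 - grown_matte) * scene_depth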

i don't think you actually want any antialiasing at all in your depth channel for frischluft, but i could be wrong. from what i understand, though, and from what my compositors have told me, the depth channel should never be anti-aliased.

if you have extreme changes in depth you'll definitely have to render out the foreground and background objects as separate passes so you have color information behind your foreground characters for when they're blurred.

Redsand1080
10-12-2010, 08:26 PM
if you have extreme changes in depth you'll definitely have to render out the foreground and background objects as separate passes so you have color information behind your foreground characters for when they're blurred.

So what you are saying is the Z depth channel will only work for objects that aren't very far apart depth-wise? And when I say 'work' I mean not produce nasty-looking edge artifacts.

So is there no way to just have one Z depth image of the whole scene and use that one image with one ZBlur node to blur the scene? Is it necessary to break the scene up into parts that consist of objects that are relatively close to each other? Let's assume that there are no moving characters to worry about...it's just a still image.

I was hoping a double res 32-bit non-antialiased image would have accurate enough data to hold up from the foreground all the way to the background.

Thanks again!

-Justin

Galakgorr
10-12-2010, 09:38 PM
the problem isn't that there aren't enough values for Z in a floating point image. the problem is that when you blur things, the foreground and background objects blur together. you get artifacts because in a flat image there is no data to fill in the values BEHIND your foreground objects. that's why you don't get artifacts if your objects at different depths are rendered separately.

dilating the depth channel around specific objects is a hack, but it works pretty well for smaller differences in depth. for major differences you should render things separately or consider just rendering depth with a lens shader (if you have the time).
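
the separated version is simple to comp; something like this in nuke python (just a sketch, file names made up):

import nuke

bg = nuke.nodes.Read(file='bg.%04d.exr')
fg = nuke.nodes.Read(file='fg.%04d.exr')   # premultiplied, with alpha

bg_blur = nuke.nodes.ZBlur()
bg_blur.setInput(0, bg)

fg_blur = nuke.nodes.ZBlur()
fg_blur.setInput(0, fg)

# 'over' comp: blurred foreground edges now reveal real background
# pixels behind them instead of smearing into nothing
comp = nuke.nodes.Merge2(operation='over')
comp.setInput(0, bg_blur)   # input B
comp.setInput(1, fg_blur)   # input A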

ndeboar
10-13-2010, 12:27 AM
Yup. If you want to blur any foreground objects, they have to be separate elements. It's where post DOF falls apart.

Redsand1080
10-13-2010, 01:08 AM
Thanks a lot for the great feedback. This finally puts this issue to rest for me...so even if getting post DOF isn't as easy as I would like, it is possible. It would be so darn convenient if I could just use the Z depth pass the way I would like, but I guess that's just not going to happen. It's nice to know that I haven't been doing anything 'wrong' necessarily, just that my nice and neat approach isn't possible.

Thanks again for the clarification!

-Justin

weerawan
10-13-2010, 08:06 AM
I would just like to say a big thanks to all you guys here. Very informative.
You guys rock!!! :buttrock:

Redsand1080
10-19-2010, 05:53 PM
Looks like some really smart guys wanted a good solution to this problem as well and invented 'deep compositing.' Really cool stuff. Can't wait until we can actually use this tech! Looks like it might get rid of all the workarounds and edge artifacts associated with classic Z-depth passes that are such a royal pain!

Linkity-link (http://www.deepimg.com/)

InfernalDarkness
10-20-2010, 12:48 AM
Thanks a lot for the great feedback. This finally puts this issue to rest for me...so even if getting post DOF isn't as easy as I would like, it is possible. It would be so darn convenient if I could just use the Z depth pass the way I would like, but I guess that's just not going to happen. It's nice to know that I haven't been doing anything 'wrong' necessarily, just that my nice and neat approach isn't possible.

I feel your pain my friend. It bothers me that a one-click "done" solution implemented in a $99 piece of software at the lowest end of the CG app scale (Bryce 3D) has working, proper Zdepth out of the box. But in Maya and mental ray of course, we have to fight it every step of the way, only to end up back at zero.

You're not the only one who sucks at z-depth! I finally got it working only to find out that LumDepth didn't support my alpha channels, and poof, no instancing following that. What a mess!

sentry66
10-20-2010, 01:30 AM
I just use a basic depth shader and render it at 2x resolution as a 16-bit IFF. Might not be as nice as a 32-bit format, but it's still reliable, quick, and works every time.

Redsand1080
10-20-2010, 01:42 AM
I just use a basic depth shader and render it at 2x resolution as a 16-bit IFF. Might not be as nice as a 32-bit format, but it's still reliable, quick, and works every time.

So this method works for you even with extreme distances between foreground and background objects? No matter what I did using the 'render out the foreground and background in one shot' approach, I got artifacts around my objects that were very much in the foreground and overlapping the extreme background.

I feel your pain my friend. It bothers me that a one-click "done" solution implemented in a $99 piece of software at the lowest end of the CG app scale (Bryce 3D) has working, proper Zdepth out of the box. But in Maya and mental ray of course, we have to fight it every step of the way, only to end up back at zero.

You're not the only one who sucks at z-depth! I finally got it working only to find out that LumDepth didn't support my alpha channels, and poof, no instancing following that. What a mess!

Deep image compositing looks like it can solve all this! Not quite usable for most users at this point, but some studios have gotten it working already. But yes, regular Z depth is a little bit more work than I would like to get some decent results.

sentry66
10-20-2010, 07:25 AM
So this method works for you even with extreme distances between foreground and background objects? No matter what I did using the 'render out the foreground and background in one shot' approach, I got artifacts around my objects that were very much in the foreground and overlapping the extreme background.


I can't say whether I've encountered scenes with the depth scale you're using or not, but I've done city shots and got perfectly good Z depth between buildings up in your face and buildings 5 miles away.

By artifacts, do you mean banding in the depth file or something with the blur effect you're using? Or do you mean where the blur gets a harsh cut off?

I usually don't have an issue using a depth pass or two (maybe an additional one with some objects hidden), an alpha channel, and Minimax to get what I want with just the basic lens blur DOF in After Effects. For blurs getting cut off with a harsh edge, you can use a Minimax filter and an additional layer without it to get rid of that in a lot of situations. It's just not so simple when you're blurring foreground elements while focusing on a midground area. For that:

1) Do a normal depth blur to blur the background.
2) Take the depth pass, invert it, and clamp it with a levels or curves adjustment to create a foreground mask (sketched below), then use it to cut out the foreground element.
3) Comp the cut-out foreground back on top with a depth blur.
4) For the cut-out background, use a Minimax filter to fill in the cut-out areas on the underlying image, then depth blur its foreground as well as the actual foreground layer.

If you need to rack the focus from foreground to background, you'd need to animate a few things and opacities... it gets complicated, but it works for extreme DOF blurs.
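
That mask-building step is just a remap of the depth values. A rough Python/NumPy illustration (the near/far thresholds here are made-up numbers; assumes near = small depth values):

import numpy as np

def foreground_mask(depth, near=2.0, far=10.0):
    # invert the depth so near objects become bright...
    inverted = far - depth
    # ...then normalize and clamp, like a levels or curves adjustment,
    # so everything nearer than 'near' is solid white
    return np.clip(inverted / (far - near), 0.0, 1.0)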

THExDUKE
10-20-2010, 09:23 AM
We also suffer from the lack of zdepth possibilities in Maya. Our workarounds sometimes remind me of MacGyver! "I just need a SamplerInfo, a Ramp and a Lambert!" :D

However... we have two other solutions which might work.

1) Apply a ramp as a projection on a surface shader and use that as a zdepth. Advantage is that you can render with AA!
Disadvantage is maybe that it needs a bit of tweaking.

2) Apply a VolumeCube to your scene and adjust it according to what you want.
Advantages and disadvantages are the same as with the surface shader approach.

Both work, but it's not as sexy as using real DOF (lens shaders like bokeh or physical DOF and so on). A script version of the samplerInfo setup is sketched below.
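
For what it's worth, the samplerInfo version wires up in a few lines of Maya Python. A minimal sketch — the near/far distances are placeholder values you'd tweak per scene:

import maya.cmds as cmds

info = cmds.shadingNode('samplerInfo', asUtility=True)
remap = cmds.shadingNode('setRange', asUtility=True)
shader = cmds.shadingNode('surfaceShader', asShader=True)

# camera-space Z is negative in front of the camera in Maya
cmds.connectAttr(info + '.pointCameraZ', remap + '.valueX')
cmds.setAttr(remap + '.oldMinX', -100.0)  # placeholder far distance
cmds.setAttr(remap + '.oldMaxX', -0.1)    # placeholder near distance
cmds.setAttr(remap + '.minX', 0.0)        # far renders black
cmds.setAttr(remap + '.maxX', 1.0)        # near renders white

# feed the remapped depth into all three colour channels
for ch in ('R', 'G', 'B'):
    cmds.connectAttr(remap + '.outValueX', shader + '.outColor' + ch)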

sentry66
10-20-2010, 05:54 PM
Yeah, that's a good method, or there are some shaders that can do it with expressions you can adjust. But it's nice having the AA. The only reason at all that I render at a higher resolution is to maintain good AA after I beat up the depth pass with some levels or curves adjustments and then shrink it down to the comp size.

InfernalDarkness
10-20-2010, 06:11 PM
I'm sure my method is flawed. I'm sure it's possible, and I have no problems with zdepth renders when the objects aren't too complex. Compositing straight lines isn't difficult. It's when I have heavy alpha-channel textures such as tree leaves and flowers going on that zdepth gets messy. When using the Luminance Depth preset, I know that one can duplicate the shader and add the alpha channel file in a transparency slot, but my objects aren't that straightforward, and when I do get it to work, the compositing is difficult due to lack of pixel data.
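
For the simple cases, that transparency hookup is just one connection. A hypothetical maya.cmds sketch (node and file names are placeholders):

import maya.cmds as cmds

# duplicate-shader trick: give the depth shader the same cutout
# transparency as the beauty texture
depth_shader = cmds.shadingNode('surfaceShader', asShader=True, name='depth_SS')
leaf_tex = cmds.shadingNode('file', asTexture=True, name='leafColor')
cmds.setAttr(leaf_tex + '.fileTextureName', 'leaf_color.tif', type='string')

# the file node's alpha drives the depth shader's transparency,
# so leaf cutouts stay cut out in the depth pass
cmds.connectAttr(leaf_tex + '.outTransparency', depth_shader + '.outTransparency')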

Also, all that workflow requires a separate (even if simultaneous) render, which is not a "pass" at all. I don't know how to pull z-depth out of the framebuffer for example, to have it render alongside my main pass as another layer in my .exr file... Certainly have a lot to learn here when it comes to compositing.
