The right way to render a depth pass


Instead of creating a new thread I will continue with this one.

I am also trying to figure out how to render a usable depth pass.
And it's madness! (No, no, it's not Sparta.)

I'm using Mental Ray for Maya, and I have set up a render layer with a depth pass.

Question 1:

I tried what you guys were saying, but I have a little issue I can't overcome.
I render my depth pass in a separate layer, without anti-aliasing, but when I look at the border edges of my depth pass in Nuke I still get in-between values.

The picture is at full resolution, without any reformatting or anything else.
I used an AA setting of 0-0, unchecked Premultiply Alpha (it doesn't matter whether it's checked or not, I still have the same issue), unchecked Interpolate, and my filtering is set to Box (1/1).
The values range from 0.9 for the white, 0.3 for the grey, and 0.1 for the darker areas.
The result is that my edges look terrible with depth of field.

If I understand it right, I should not have these in-between values: the darker pixels, which are my foreground object, should all have roughly the same value, and the grey pixels should not be there at all.

But how do you achieve that?

Answer 1:

I found the answer while writing this post. The issue was the depth pass setup itself: you have to uncheck the Filter box in the Attribute Editor of your depth pass to get unfiltered values; otherwise you will get these in-between (filtered) values.
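To see why a filtered depth pass breaks DOF, here is a toy sketch (plain Python, not a Nuke or Maya API) using the same depth values as above. Averaging depths across an edge manufactures a depth that belongs to no surface in the scene:

```python
# Hypothetical edge between a foreground object (depth 0.1) and the
# background (depth 0.9) -- the values quoted in the post above.
foreground, background = 0.1, 0.9

# A filtered (anti-aliased) depth buffer averages the two at the edge:
filtered_edge = (foreground + background) / 2

# 0.5 is a depth that belongs to NO object in the scene, so a DOF node
# blurs that edge pixel as if it sat halfway between the two objects --
# which is exactly the edge artifact described above.
print(filtered_edge)                                 # 0.5
print(filtered_edge in (foreground, background))     # False

# With the Filter box unchecked, each pixel keeps the depth of the one
# surface covering its center (point sampling), so only valid depths appear:
unfiltered_edge = foreground   # or background, but never a mix of the two
print(unfiltered_edge in (foreground, background))   # True
```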

Question 2:

By the way, it still doesn't give me accurate depth of field, even though I found that using a depth pass at the same resolution as the beauty was better than doubling the resolution.
When I use a depth pass at twice the resolution and reformat it to fit my beauty, I lose some pixels. I didn't apply any filtering with the Reformat node, because otherwise you get the same result as using AA (in-between values).
Using the same resolution seems to give you the right contour, but you still get artifacts from depth of field.
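The "I lose some pixels" part can be illustrated with a toy 1D scanline (plain Python, an assumption for illustration, not any real node): an unfiltered (impulse) downscale just keeps every other pixel, so a thin one-pixel feature in the double-resolution depth pass can simply vanish:

```python
# Hypothetical scanline from a 2x-resolution depth pass: a thin
# foreground feature (depth 0.1) one pixel wide, against a 0.9 background.
depth_2x = [0.9, 0.9, 0.9, 0.1, 0.9, 0.9]

# An unfiltered ("impulse") 2x downscale keeps every second pixel...
depth_1x = depth_2x[::2]

# ...and the foreground feature lands on a discarded pixel, so it is gone:
print(depth_1x)            # [0.9, 0.9, 0.9]
print(0.1 in depth_1x)     # False -- the thin feature was dropped
```

A filtered downscale would keep some trace of the feature, but only by reintroducing the invalid in-between depths, which is why downscaling the depth pass is a bad move either way.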

Answer 2:

After reading about this issue in the Lenscare documentation, it seems that this double-resolution trick is easy to misunderstand.

"Conversely, when there is no anti-aliasing on the depth buffer but there is anti-aliasing on the image, then those two images don't match exactly, which results in more or less visible artifacts. One way to deal with the second problem is to render in doubled resolution, apply Depth Of Field and then resize back to normal resolution.

In order to reduce the rendering time for the raytracing, it is possible to decrease anti-aliasing on the image by the same factor the resolution has been increased. For example, if 16 times oversampling is enabled, with a doubled resolution only 4 times oversampling is needed. If you already rendered out your image in normal resolution, it is acceptable to increase image size in After Effects and render out only the depth map in doubled resolution. Then use that z-buffer to apply Depth Of Field on the bigger resized image."

If I understand it right, you don't render just your depth pass at double resolution, but all your passes, and then resize them back down.

Or at least scale your beauty pass up to match your depth pass, apply the DOF, and resize it back down to your normal resolution.
But you should not reformat your depth pass down to match your beauty resolution and then apply DOF, like I did.
Maybe it was clear to you guys, but just in case someone makes the same mistake I did.
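The workflow above can be sketched in a few lines of Python. Every function here is a hypothetical stand-in for a node in your comp package (an unfiltered Reformat, a DOF plugin such as Lenscare, etc.), not a real API; the point is only the order of operations:

```python
def upscale_2x(image):
    """Nearest-neighbor 2x upscale (stand-in for an unfiltered Reformat)."""
    return [[px for px in row for _ in (0, 1)] for row in image for _ in (0, 1)]

def box_downscale_2x(image):
    """Box-filter 2x downscale; averaging is safe AFTER the DOF step,
    because by then we are filtering ordinary color, not depth."""
    return [[(image[y][x] + image[y][x + 1]
              + image[y + 1][x] + image[y + 1][x + 1]) / 4
             for x in range(0, len(image[0]), 2)]
            for y in range(0, len(image), 2)]

def apply_dof(beauty, depth):
    """Placeholder: the real plugin blurs `beauty` based on `depth`."""
    return beauty

# A tiny 2x2 beauty render and its 4x4 unfiltered depth pass.
beauty_1x = [[0.25, 0.75], [0.5, 0.125]]
depth_2x = [[0.1, 0.1, 0.9, 0.9]] * 4

# The order that works: scale the beauty UP to the depth pass,
# apply DOF at double resolution, and only then resize back down.
beauty_2x = upscale_2x(beauty_1x)         # beauty now matches depth_2x
dof_2x = apply_dof(beauty_2x, depth_2x)   # depth stays untouched at 2x
final_1x = box_downscale_2x(dof_2x)       # filtering here is harmless
```

The depth pass is never resampled at all; only the beauty (and the finished, already-blurred image) ever goes through a filter.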

Sorry about this long post discussing with myself; I prefer to write it down and share it with whoever needs it rather than keep the solution to myself.
