The right way to render a depth pass


wellHARD
07-14-2010, 06:43 PM
Hey guys..

I'm working on a project at the moment and I'm having trouble rendering out a decent depth pass. I looked around on the net and a lot of people recommended the "luminance depth" render layer preset.
It sounded good so I tried it out, but I'm struggling to change the depth falloff (white to black) settings to customize the DOF; whatever value I try in any field doesn't seem to do anything...
Can someone please help me out with this, or
does anyone know of any good tuts on DOF?
(using Maya 2011)

many thanks

TinyCerebellum
07-14-2010, 07:51 PM
The luminance depth render layer is indeed very easy to set up and use. The new shader it creates actually takes the near and far clipping plane from your default camera. Thus, if those values are very large while the scene is very small, the gradient in the render will likely appear as all white, or with very faint gradients. Personally, I found that the best way to get the most values in the gradient is to change your camera's near and far clipping planes, watch that they don't cut off your objects, then manually replace those values in the Old Min and Old Max attributes of the setRange node that is connected to the new Surface shader. Obviously, you'll have to break the connection first for those two attributes.
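For anyone who'd rather script that than click through the Attribute Editor, here's a rough Python sketch of the same idea. The node names ("cameraShape1", "setRange1") are placeholders; substitute whatever the luminance depth preset actually created in your scene.

import maya.cmds as cmds

cam = 'cameraShape1'   # placeholder: your render camera's shape node
near = cmds.getAttr(cam + '.nearClipPlane')
far = cmds.getAttr(cam + '.farClipPlane')

# Break the incoming connections first, or the values snap right back.
for attr in ('setRange1.oldMinX', 'setRange1.oldMaxX'):
    src = cmds.connectionInfo(attr, sourceFromDestination=True)
    if src:
        cmds.disconnectAttr(src, attr)

# Hardcode the clipping plane values into the setRange node.
cmds.setAttr('setRange1.oldMinX', near)
cmds.setAttr('setRange1.oldMaxX', far)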

badmanjam
07-15-2010, 12:51 PM
thanks for the advice..:)

This DOF seems to be very linear... it only really goes from one focus point to the other. How would I get the effect where I can isolate an area that I want in focus, with everything in front of and behind that area out of focus? Is this possible with this shader?

many thanks

animatedfox
07-15-2010, 02:13 PM
The answer is "Yes you can..." but just how easy it will be depends on your compositing program of choice.

With the data in your depth pass you can do all sorts of depth effects. Some compositing programs (or plug-ins) have a depth range, allowing for this foreground and/or background blurring. If your program of choice doesn't have this functionality, you can probably still use it to remap the color values of the depth pass into a blur map that does what you're looking for, as sketched below.
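To make that remap concrete, here's the math in plain Python; the parameter names are made up for illustration. The idea is zero blur inside a band around the focus distance, ramping up with distance outside it.

def blur_amount(depth, focus, band, falloff):
    """0 inside the in-focus band, ramping to 1 with distance outside it."""
    d = abs(depth - focus) - band * 0.5
    if d <= 0.0:
        return 0.0                    # inside the band: fully sharp
    return min(d / falloff, 1.0)      # outside: blur grows with distance

# e.g. focus at 250 cm, a 50 cm sharp band, full blur 100 cm past the band
print(blur_amount(200.0, 250.0, 50.0, 100.0))   # prints 0.25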

Let us know what compositing program you are using and maybe someone can help you out...or you could check in those forums.
~Ben

wellHARD
07-15-2010, 04:39 PM
Let us know what compositing program you are using and maybe someone can help you out...or you could check in those forums.
~Ben

Hey there

I'm going to be using Nuke as my compositing app of choice. Would you be able to guide me through your workflow for doing this in Nuke, if possible?

many thanks

Galakgorr
07-15-2010, 05:13 PM
if you're using nuke, you're going to want a floating-point depth channel.

you can either use the render passes system in maya to create a "camera depth" pass and be done with it, or you can make a custom surface shader that outputs depth in floating point... 1cm = 1.0 luminance.

the shader is pretty easy-- create a surface shader, a samplerInfo node, and a multiplyDivide node. connect samplerInfo.pointCameraZ --> multiplyDivide.input1Z, then set multiplyDivide.input2Z to -1. this is because cameras always face down the negative Z axis in maya, so when pointCameraZ says that something is -123 cm away from your camera, it will now return 123. then connect multiplyDivide.outputZ --> surfaceShader.outColor. that's it. the render will look completely white, but if you look at it in nuke and turn your exposure down, the values are all there, without any banding or clipping plane settings or anything.
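if you'd rather build that network with a script, a minimal Python version might look like this (since outputZ is a scalar, it gets wired into each color channel separately):

import maya.cmds as cmds

shader = cmds.shadingNode('surfaceShader', asShader=True)
info = cmds.shadingNode('samplerInfo', asUtility=True)
mult = cmds.shadingNode('multiplyDivide', asUtility=True)

# pointCameraZ is negative in front of the camera, so flip the sign
cmds.connectAttr(info + '.pointCameraZ', mult + '.input1Z')
cmds.setAttr(mult + '.input2Z', -1)

# fan the (scalar) depth value out to R, G and B
for ch in ('R', 'G', 'B'):
    cmds.connectAttr(mult + '.outputZ', shader + '.outColor' + ch)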

wellHARD
07-16-2010, 07:45 AM
if you're using nuke, you're going to want a floating-point depth channel.
1cm = 1.0 luminance.


Ahhhh... that's where I was going wrong, I forgot that I should check the actual pixel data rather than just trusting what I see on screen...
Thanks for the reply, I will try it out as soon as I'm at my desk.

Many thanks

Eshta
07-16-2010, 05:50 PM
There are plugins for Nuke that will make it easy for you to compile the DOF effect.
http://www.vfxtalk.com/forum/t-lensblur-ok-t23893.html

wellHARD
07-18-2010, 09:30 AM
There are plugins for Nuke that will make it easy for you to compile the DOF effect.
http://www.vfxtalk.com/forum/t-lensblur-ok-t23893.html

awesome, I will check that out..
thanks

Kyron
08-01-2010, 06:41 AM
The luminance depth render layer is indeed very easy to set up and use. The new shader it creates actually takes the near and far clipping plane from your default camera. Thus, if those values are very large while the scene is very small, the gradient in the render will likely appear as all white, or with very faint gradients. Personally, I found that the best way to get the most values in the gradient is to change your camera's near and far clipping planes, watch that they don't cut off your objects, then manually replace those values in the Old Min and Old Max attributes of the setRange node that is connected to the new Surface shader. Obviously, you'll have to break the connection first for those two attributes.

Hmm, I tried fiddling around with these but still no luck. It doesn't seem to be a 2011 bug though, because when I open an old 2009 scene everything works fine.

I tried breaking the connection to the sampler info and the multiplyDivide node (it still pops back to min 0.1, max 10000 when I reconnect).
I made a whole new scene with some primitives, spread them out, scaled them up and down, and tried different renderers and settings, and I still only get pure white (like an alpha) and no gradient. This is really driving me insane, because the old luminance depth preset used to work fine for me; dunno what's going on.

Edit: I opened Maya 2009, made a few primitives and rendered with the luminance depth preset. It worked fine :shrug:. Strange that I can actually import a 2009 scene into 2011 and that works as well. Anyone know what's been changed between those versions and how to fix it? I haven't tried the other presets besides ambient occlusion (but that one worked fine in 2011).

I tried importing the working zdepth 2009 scene into 2011 and it worked fine. Thinking I could outsmart Maya, I tried importing my original scene into the 2009 zdepth scene, but as soon as I did that, the objects turned pure white again, with no gradient... Beginning to think that someone intentionally did this to annoy me :argh:

Nick2970
08-01-2010, 12:25 PM
If you are using 2011, just add (and associate) a 'camera depth remapped' pass in the Passes tab. There is nothing more you need to do.

Nick

chafouin
08-01-2010, 01:55 PM
Nick is right, there is a pass for that. You can set the near and far distance (which is why it's called remapped) in the Attribute Editor. Note that the simple camera depth pass can also do it, you just have to check "Remap depth values".

But since you need to render the Z-depth at twice the size, with no anti-aliasing, the best approach is to create a new render layer, assign a surface shader to everything (so the RGB render won't take ages, since the beauty is always rendered), override the render settings for resolution and sampling (fixed, 0), turn off Framebuffer -> Interpolate Samples, and render with the camera depth pass in a 16-bit or 32-bit format, of course :) A rough script version of this setup is sketched below.
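Here is that layer setup as a Python sketch, assuming mental ray is the active renderer; the attribute names are from Maya 2011 and may differ elsewhere, and the doubled resolution values (3840x2160 for a 1920x1080 master) are just an example:

import maya.cmds as cmds

# new render layer containing all the geometry
layer = cmds.createRenderLayer(cmds.ls(geometry=True), name='zdepth',
                               makeCurrent=True)

# cheap constant shader so the always-rendered beauty is fast
shader = cmds.shadingNode('surfaceShader', asShader=True)
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True)
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
cmds.sets(cmds.ls(geometry=True), edit=True, forceElement=sg)

# per-layer overrides: min == max samples gives fixed sampling at 0,
# and the resolution is doubled
for attr, value in (('miDefaultOptions.minSamples', 0),
                    ('miDefaultOptions.maxSamples', 0),
                    ('defaultResolution.width', 3840),
                    ('defaultResolution.height', 2160)):
    cmds.editRenderLayerAdjustment(attr, layer=layer)
    cmds.setAttr(attr, value)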

Kyron
08-01-2010, 02:22 PM
Thanks for the replies guys.

Nick: Hmm, care to elaborate on what you mean / how to do that?

dot87: You lost me between Framebuffer and 16/32-bit. :)
I will try and google up on that a bit.

Thx again

Redsand1080
08-01-2010, 02:32 PM
override the render settings for resolution and sampling (fixed, 0)

In 2010 there is no right-click menu to override 'Sampling Mode' to Fixed. I've tried to do this a few times before and never gotten it to work. Do you use a piece of MEL code to override this, since the normal right-click functionality doesn't exist for that control?

chafouin
08-01-2010, 02:41 PM
You want your Z-depth pass to be floating point, so you have to render to a 32-bit float image format (EXR, for example). You set it in the Render Settings, Quality tab, under Framebuffer: select a data type, RGB (Float) 3x32bits for example.

And unchecking Interpolate Samples makes sure that your image stays aliased.

Redsand: you're right. The idea is to use Custom Sampling, set it at 2-0 for your master layer, and override the values to 0-0 for your Z-depth render layer.

Sorry I didn't see your message since you posted when I was writing :)
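For reference, those Framebuffer settings live on mental ray's globals nodes, and the data type enum indices vary between Maya versions, so it's safer to query the list than to hardcode a number. A sketch, with the attribute names as assumptions from Maya 2011:

import maya.cmds as cmds

# list the available data types with their indices, then pick the float one
enums = cmds.attributeQuery('datatype', node='miDefaultFramebuffer',
                            listEnum=True)[0].split(':')
for i, name in enumerate(enums):
    print(i, name)

# cmds.setAttr('miDefaultFramebuffer.datatype', <index of the float type>)

# switch off sample interpolation so the depth stays unfiltered
cmds.setAttr('miDefaultFramebuffer.interpolateSamples', 0)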

Redsand1080
11-13-2010, 01:44 AM
But since you need to render the Z-depth at twice the size, with no anti-aliasing,

So do you reformat your depth pass back down to regular res before or after you apply your blur node of choice? In my experience, if I render out the depth pass at double res with no AA, then reformat it back down to regular res and _then_ do the blur, it looks horrific. The only thing I've been able to do is take my beauty render and reformat that _up_ to the same res as the depth pass, do the blur, then reformat the result back down to normal res. It softens the edges of the beauty slightly, but that's the only way I've been able to get rid of nasty edge artifacts so far. Any other workflows for doing this would be greatly appreciated!

Bitter
11-14-2010, 08:37 AM
This always gets complicated... but it's not really.

If you use the regular Camera Depth renderpass, switch off Filtering.

Render to at least RGBA 16 (half) to get the correct precision, set under Framebuffer in the Quality tab (EXR filetype).

If you need to know the depth, you can open the render pass from the Maya Render View: File > Load Render Pass. This opens imf_disp. Select Layer > depth (whatever the image name is with "depth" appended).

If you pass the cursor over the image you will get pixel values at the bottom. You can even change the exposure to see the depth levels. If you drag your cursor over where you want focus, it will give you the exact pixel value you can use in Nuke.
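The same pick-the-focus-depth trick works in Nuke's Script Editor: nuke.sample() reads a channel value at a pixel, which you can feed straight into your defocus node. The node names below are placeholders, and the focus-plane knob being called 'center' (as on ZDefocus) is an assumption to check in your version:

import nuke

read = nuke.toNode('Read1')                   # the rendered depth EXR
z = nuke.sample(read, 'depth.Z', 960, 540)    # depth at image centre
print('focal distance: %f' % z)

# feed it into the defocus node's focus plane knob
nuke.toNode('ZDefocus1')['center'].setValue(z)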

Sidenote: fcheck, when you hit 'z' on the keyboard, will usually print the depth near and far values in the terminal window. However, I'm having issues with 16-bit half and 32-bit EXRs. You'd think someone would have rewritten fcheck by now...

I have no idea why you're being told to render it separately or render it twice as big etc. It's not necessary for correct Z-depth unless there's a bug I am unaware of.

You don't want anti-aliased Z-depth, because an edge pixel will contain "some" information from the object behind mixed with "some" information from the object in front. Once filtered, the result is a value that corresponds to the location of neither object; for example, a pixel filtered between an object at 10 cm and a background at 100 cm might read 55 cm, a depth where nothing actually exists.

scrawford
11-14-2010, 08:46 AM
I've never gotten this far into it, but I think they double the resolution of the depth pass because it won't be anti-aliased while the other passes will. I think.

Bitter
11-14-2010, 08:58 AM
Ahh, ok.

Well, you don't want it to be anti-aliased anyway for the reasons explained above.

The pixel should be the depth of the object that covers the majority of the pixel, even if it's just 51%.

Fringing around the DOF is called leaking, and it's an inherent problem of doing DOF as a post-process. Pushing the DOF to an extreme level can exacerbate it.

chineseboy
11-14-2010, 10:09 AM
I use the software renderer's environment fog to make a Z channel. My English is not great; maybe this video will help: http://v.youku.com/v_show/id_XOTE1MTYzNjg=.html

PrayingMantis
12-15-2010, 09:28 AM
Instead of creating a new thread I will continue with this one.

I am also trying to figure out how to render a usable depth pass.
And it's madness! (No no, it's not Sparta.)

I'm using mental ray for Maya, and I have set up a render layer with a depth pass.

Question 1:

I tried what you guys were saying, but I have a little issue I can't overcome.
I render my depth pass in a separate layer, without anti-aliasing, but when I look at the border edges of my depth pass in Nuke I still have in-between values.
http://forums.cgsociety.org/attachment.php?attachmentid=158328&stc=1

The picture is at full resolution without any reformatting or anything else.
I used an AA setting of 0-0, unchecked Premultiply Alpha (it doesn't matter if it's checked or not, I still have the same issue), unchecked Interpolate, and my filtering is at Box (1/1).
The values range from 0.9 for the white, 0.3 for the grey and 0.1 for the darker areas.
The result is that my edges suck with depth of field.

If I understand it right, I should not have these in-between values: the darker pixels, which are my foreground object, should all have roughly the same value, and the grey pixels should not be there at all.

But how do you achieve that?

Answer 1:

I found the answer while writing this post: the depth pass itself was the issue. You have to uncheck the Filter box in the Attribute Editor of your depth pass to get unfiltered values; otherwise you get these in-between (filtered) values.


Question 2:

By the way, it still doesn't give me an accurate depth of field, even though I found that using a depth pass at the same resolution as the beauty was better than doubling the resolution.
When I used a depth pass at twice the resolution and reformatted it to fit my beauty, I lost some pixels (I didn't apply any filtering with the Reformat node, or you get the same result as using AA: in-between values).
Using the same resolution seems to get you the right contours, but you still get artifacts from the depth of field.

Answer 2:

After reading about this issue in the Lenscare documentation, it seems there is something misleading about this double-resolution trick. From the docs:
Conversely, when there is no anti-aliasing on the depth buffer but there is anti-aliasing on the image, then those two images don't match exactly, which results in more or less visible artifacts. One way to deal with the second problem is to render in doubled resolution, apply depth of field and then resize back to normal resolution.

In order to reduce the rendering time for the raytracing, it is possible to decrease anti-aliasing on the image by the same factor the resolution has been increased. For example, if 16 times oversampling is enabled, with a doubled resolution only 4 times oversampling is needed.

If you already rendered out your image in normal resolution it is acceptable to increase image size in After Effects and render out only the depth map in doubled resolution. Then use that z-buffer to apply depth of field on the bigger resized image.

If I understand it right, you don't render just your depth pass at double resolution; you render all your passes at double resolution, then resize.

Or at least double the size of your beauty pass to match your depth pass, apply the DOF, and resize it back to your normal resolution (see the Nuke sketch below).
But you should not reformat your depth pass down to match your beauty resolution and then apply DOF, like I did.
Maybe it was clear for you guys, but just in case someone makes the same mistake as I did.
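One way to wire that conclusion up in Nuke, as a rough Python sketch; the node classes are standard, but the Copy node's A/B input order and the ZDefocus2 knobs are worth double-checking in your version, and the Read node names are placeholders:

import nuke

beauty = nuke.toNode('Read_beauty')   # normal-res beauty
depth = nuke.toNode('Read_depth')     # double-res, unfiltered depth

# scale the beauty up to match the depth pass
up = nuke.nodes.Reformat(type='scale', scale=2.0)
up.setInput(0, beauty)

# copy the depth channel onto the upscaled beauty (check A/B input order)
copy = nuke.nodes.Copy(from0='depth.Z', to0='depth.Z')
copy.setInput(0, up)
copy.setInput(1, depth)

# defocus at double res, then back down to the original resolution
zd = nuke.nodes.ZDefocus2(channels='rgba')
zd.setInput(0, copy)
down = nuke.nodes.Reformat(type='scale', scale=0.5)
down.setInput(0, zd)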



Sorry about this long post discussing things with myself; I preferred to write it down and share it with whoever needs it instead of keeping the solution to myself.

CGTalk Moderation
12-15-2010, 09:28 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.