Z-Depth - How to.


Guys, can somebody tell me how to extract a z-depth pass from a file, i.e. which nodes to use? Or perhaps point me to a tutorial.

Much appreciated.


The question is a little confusing (particularly the word “extract”). Are you trying to create a rack focus or depth of field (via selective blurring) in an image or clip that contains a z channel, or are you just looking to swap the z-depth channel to another channel and write that out as a greyscale file?

For the first, look at the ZDefocus and ZBlur nodes in the Filters tab; for the latter, use the Reorder node in the Color tab (there is an example of that use of Reorder, i.e. making the z channel visible, in the manual).

Also be aware that most z-depth channels are extended-depth (16+ bit) data, so if you swap it to RGB, make sure you write that to a file format that supports extended-depth greyscale; otherwise you will clip or quantize the data.
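A quick way to see the problem (a hypothetical Python sketch, nothing to do with Shake itself): rounding float depth values into an 8-bit greyscale format collapses nearby distances onto the same step and clips anything outside the 0-1 range.

```python
# Hypothetical sketch: what happens when extended-depth z data
# is written to an 8-bit greyscale format.

def quantize_8bit(z):
    """Clamp a normalized float depth to [0, 1] and round it to 8 bits."""
    z = max(0.0, min(1.0, z))   # values outside 0-1 are clipped
    return round(z * 255)

# Two nearby depth values that a float format keeps distinct...
a, b = 0.5010, 0.5020
print(quantize_8bit(a), quantize_8bit(b))  # both land on the same 8-bit step: 128 128
```

That banding is exactly why an extended-depth format (float EXR, 16-bit greyscale, etc.) matters for z data.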

P.S. I spotlighted up (is that a verb?) the example in the manual (was page 657):

The Reorder node lets you shuffle channels. The argument to this command specifies
the new order. A channel can be copied to several different channels. The letter “l”
refers to the luminance pseudo-channel which can be substituted in place of the RGBA.
If an expression is on a channel that does not exist, Shake creates the channel. You can
use the Z channel as well. For example:
shake -reorder zzzz
places the Z channel into the RGBA channels for viewing.
To copy a channel from another image, use the Copy node.


Sorry for the late reply. I think I don't understand how the z-depth pass is rendered in an image. I thought it was a separate pass, but am I to believe that it is actually embedded into a colour pass as a 32-bit image? Then when you bring it into Shake you can use a z-focus node to utilise the depth pass?


Yes, Shake maintains z and alpha channels in images (internally). So, if your 3D package is capable of writing a combined image (RGB, alpha (if you need it) and z channel) to a file format that supports it (like .exr), you can read that file in and just use one of the z-blur nodes directly.

If you need (or want) to write the z-depth information out of the 3D package separately, then read both the image and z-depth files in (two FileIns) and use a Copy node to copy the z-depth info onto the image's z channel (i.e. merge them). Then use a ZBlur node on the combined (RGB+z) image.
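Conceptually, the Copy step is just grafting the separate depth render onto the beauty image as an extra channel. A hypothetical per-pixel sketch in Python (not Shake code; the dict layout is made up for illustration):

```python
# Hypothetical sketch of what the Copy step does conceptually: take the
# beauty image's RGB(A) and graft the separate depth render on as a z channel.

def merge_z(beauty_pixel, depth_value):
    """beauty_pixel: dict with 'r', 'g', 'b' (and maybe 'a');
    depth_value: a single float from the separate z-depth render."""
    combined = dict(beauty_pixel)
    combined["z"] = depth_value   # now a z-blur style node has depth to work with
    return combined

print(merge_z({"r": 0.2, "g": 0.4, "b": 0.6}, 12.5))
```

In Shake the node graph does this for the whole frame at once, but the result is the same: one image carrying both colour and depth.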


I've been having a look at the framebuffer in MentalRay for Maya and I'm a bit confused. There are options for RGBA 4x32-bit but only one Depth 1x32, which would indicate that the z-depth is a separate pass. I don't know if you know Maya very well, but there is a flag under the renderable camera that I assume should be on, called ‘z-depth channel’.

Will the z-depth be embedded in the RGBA 4x32bit if I turn this flag on?


Only slightly familiar with Maya, and that was a long time ago and a much earlier version, so I can't help with MR render settings.

You can certainly write a combined file from Maya, but I think you are going to have some issues. As I recall, both Maya's internal renderer and MR write peculiar (and different) z-depth formats (not straight 16- or 32-bit int, but I am really reaching here). Both are floats, but as I recall Maya's is negative and inverted, so infinity is zero and anything towards the camera is negative between 0 and -1 (-1 is the lens). There is even a macro (included) called the “Maya zdepth macro” in the Shake cookbook which translates the z-depth data to straight (positive) values that Shake can work with. Not sure if that is necessary anymore (you can verify your z data by swapping it into RGB via the Reorder node and visually inspecting).
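Based only on that (half-remembered) description of Maya's format, one plausible mapping the cookbook macro would have to do looks like this. This is a guess in hypothetical Python, not the actual macro:

```python
# Hypothetical sketch of what the cookbook's Maya z-depth macro has to do,
# assuming (per the description above) Maya writes z as a float in [-1, 0],
# with -1 at the lens and 0 at infinity.

def maya_z_to_positive(z):
    """Map Maya's inverted negative depth to a positive 0-1 range,
    where 0 is the lens and 1 is infinity."""
    return 1.0 + z   # -1 (lens) -> 0.0, 0 (infinity) -> 1.0

print(maya_z_to_positive(-1.0))   # 0.0  (at the lens)
print(maya_z_to_positive(0.0))    # 1.0  (infinity)
print(maya_z_to_positive(-0.25))  # 0.75 (most of the way out)
```

The visual check with Reorder mentioned above is the way to confirm which convention your renderer actually uses before trusting any conversion.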

My best guess on file format would be IFF, RLA (Wavefront) or EXR (the Shake supported file formats table is on pgs 171-173).

It seems complex (and is) and a lot of work (and for one shot it certainly is), but once you get the pipeline set it should be a breeze (or thereabouts :wink:)


Okay, it sounds like a lot of unnecessary work, but like you say, once you have the pipeline in, it'll all make sense. Thanks for your help; I'll be coming back to this thread as I tackle the process.

You’ve been a great help.


I’ll be following this post if you come up with anything; I’d appreciate you posting your findings.

Good luck.


Hey Guys,
Ran across this in the VFX downloads section

It is a Shake node (macro) that has two inputs: an image and a greyscale image for depth, and it applies blur based on the greyscale image. So basically, if you would rather not mess with creating (or exporting) an embedded z channel, you can use this macro (I didn't look at it; I assume it copies the input-2 greyscale image into the input-1 image's z channel and then applies ZBlur). At any rate it's free, and I thought it might be useful to you.


Looks interesting. I assume you would have to light your scene in such a way as to have the closest objects brighter than those further away, and then render the pass out?

A friend of mine ran this past me for Maya for a manual setup…

"Go to Render Globals, Render Options, Environment Fog… click the check box and it will automatically assign an environment fog. Set the color to black and uncheck ‘color based transparency’. Saturation distance is the max scene distance that saturation will have an effect on; experiment with that one. Also, under the ‘clipping planes’ tab, set the fog near/far distance according to the min/max object-camera distances of your scene.
Assign a surface shader to all the scene objects and set the ‘out color’ to white. Also, if there are any lights in your scene, turn them off."

It works as a software pass.
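The result of that setup is a greyscale depth plate: with a white surface shader and black fog, each pixel's brightness falls off with distance between the fog near and far planes. A hypothetical Python sketch of that plate (assuming a simple linear falloff; Maya's actual fog curve depends on the fog settings):

```python
# Hypothetical sketch of the depth plate the fog setup above produces:
# a white surface shader fading through black fog, linearly between the
# fog near and far distances (linear falloff is an assumption here).

def fog_luminance(distance, near, far):
    """Luminance of the fog-based depth plate at a given camera distance."""
    t = (distance - near) / (far - near)  # 0 at the near plane, 1 at the far plane
    t = max(0.0, min(1.0, t))             # clamp outside the fog range
    return 1.0 - t                        # near objects white, far ones black

print(fog_luminance(0.0, 0.0, 100.0))    # 1.0 -> white at the camera
print(fog_luminance(50.0, 0.0, 100.0))   # 0.5 -> mid grey
print(fog_luminance(100.0, 0.0, 100.0))  # 0.0 -> black at the far plane
```

That luminance ramp is exactly what the macro mentioned above expects on its second input.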


I think you are misunderstanding his description (though it is a bit obtuse). I think that when he says “The depth-image needs no zChannel! It works with the luminance of your image.” he means it blurs based on the luminance of your depth image (input #2), not on the luminance of your primary image (input #1).

He is saying that you don’t have to input an image with a z channel, that his macro will blur based on the lum value of the GS image you input for depth.
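In other words, the macro maps the depth plate's luminance to a per-pixel blur amount. Since neither of us has looked inside it, here is only a guess at that mapping, in hypothetical Python (the `max_radius` and `focus_luma` parameters are invented for illustration):

```python
# Hypothetical sketch of a luminance-driven defocus: input #1 is the beauty
# image, input #2 a greyscale depth plate, and the blur radius for each pixel
# comes from the depth plate's luminance, not from a z channel.

def blur_radius(depth_luma, max_radius=10.0, focus_luma=1.0):
    """Blur radius per pixel: zero at the focus luminance,
    growing as the depth plate departs from it."""
    return abs(focus_luma - depth_luma) * max_radius

print(blur_radius(1.0))  # 0.0  -> in focus (white = nearest)
print(blur_radius(0.5))  # 5.0  -> half blur at mid grey
print(blur_radius(0.0))  # 10.0 -> maximum blur at black (farthest)
```

Shifting `focus_luma` would rack the focus to a different depth, which is presumably how such a macro exposes its focal-distance control.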

I agree, his description is not very clearly written. But hey, he is giving it away…

P.S. And yes, depth fogging is a common technique for generating a depth plate and works OK for depth blurring. If you are using the z channel for depth compositing, you definitely need 16 bits of resolution in the depth, so your renderer has to be able to output extended-depth images (or you need to use the standard z-depth pass). The disadvantage to using fog is that it requires more setup and a discrete pass, but you are sure what you are getting (relative distance from lens represented by 8- or 16-bit int luma values).

