Arnold Z-Depth Pass


#1
  • While learning and testing things in Arnold (MtoA), I noticed that scaling the camera locator (via the channel box, not the locator scale attribute) in the viewport affects how Arnold renders depth of field. Is it a bug or something? I could not find anything about it in Solid Angle's help docs.
    I am using Maya 2014 & MtoA v1.0.0.1

And I was following this guide - https://support.solidangle.com/display/mayatut/Z+Depth+AOV

How do we get a normalized Z-depth pass from Arnold, or is it better to normalize it in Nuke and keep the wide range in my pass?

Is depth of field done from a Z-depth pass less accurate than depth of field rendered within Maya or any other 3D application? I have heard a few people say that they render the D.O.F in Maya rather than compositing it in an external application; isn't that a destructive workflow? If there is a blurred detail you need, you can't possibly get it back without re-rendering, can you?

Thank You
-G


#2

Changing the LocatorScale in the camera attributes does not change the Z channel (just tested it with 1.0.0.3).

To get the best quality, depth of field should be rendered directly. However, this is normally much more expensive than rendering without depth of field, especially in renderers like mental ray: you have to increase the number of samples to get a more or less noise-free image, and in a renderer less intelligent than Arnold you multiply the complete shader and light evaluation by several magnitudes. That is why depth of field is often done in post instead.

If it is used carefully with layering you can get quite good results, but it is limited. For example, if you want blurred elements in the foreground you have problems: blurred foreground objects reveal objects in the background, and in compositing you don't have any background information at that position. The huge advantage, though, is that you can play around and change the image in seconds instead of hours.
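If you just want to test in-camera D.O.F quickly, something like this works from the Script Editor (a minimal sketch; the attribute names are the extension attributes MtoA usually adds to the camera shape, and the camera name and values are placeholders):

```python
# Minimal sketch: enabling Arnold's in-camera depth of field on a Maya camera.
# Attribute names are the ones MtoA normally adds; values are placeholders.
import maya.cmds as cmds

cam = 'cameraShape1'                           # assumed camera shape name
cmds.setAttr(cam + '.aiEnableDOF', 1)          # turn on Arnold depth of field
cmds.setAttr(cam + '.aiFocusDistance', 150.0)  # distance to the focus plane, in scene units
cmds.setAttr(cam + '.aiApertureSize', 0.3)     # bigger aperture = shallower depth of field
```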


#3

I am sorry, I might not have been very clear. The locator scale doesn't change the depth of field, but scaling the camera's transform seems to create a very shallow depth of field.
While IPR is on, scale the camera while D.O.F is active.

I have made a quick video of what I mean:

Here is a link to it (124 MB - SkyDrive).

http://1drv.ms/1oldtQt

And thank you for the info!!
By the way, the help docs say that the Z-depth pass is written into the red channel, but I am getting my Z-depth in the alpha channel. I am rendering 32-bit EXRs.
Is that okay?
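If you want to check for yourself where the depth ends up, you can list the channels of the rendered EXR; here is a rough sketch using the Python OpenEXR bindings (the file path is just a placeholder):

```python
# Rough sketch: listing the channels stored in a rendered EXR to see whether
# the depth landed in R, A or a dedicated Z channel.
import OpenEXR

exr = OpenEXR.InputFile('/path/to/render.exr')   # placeholder path
print(list(exr.header()['channels'].keys()))     # e.g. ['A', 'B', 'G', 'R'] or [..., 'Z']
```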


#4

Scaling the camera is never a good idea. We had several problems with scaled cameras, and not all render translators fix it on the fly, so simply do not scale the camera. This is not a bug: if you scale the camera up, you effectively shrink the rest of your scene relative to it.

Concerning Z channel versus alpha: well, I have to admit that I always create an extra AOV and write it directly into the EXR, which results in a separate channel called Z.
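Roughly like this, if you prefer to script it (a sketch only; the mtoa.aovs API names are assumed and may differ between MtoA versions):

```python
# Sketch: registering a dedicated Z AOV through MtoA's Python API so the depth
# is written to its own 'Z' channel in the EXR. API names assumed; verify in
# your MtoA version.
import mtoa.aovs as aovs

aovs.AOVInterface().addAOV('Z')   # adds the built-in Z (depth) AOV to the render settings
```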


#5

Yeah, usually I use the LocatorScale option to make my camera locator bigger in the viewport; this happened while I was testing the Z-depth in Arnold.

I do the same!
Well, honestly I know it doesn't really matter in a way, since you need to extract the Z-depth from wherever it is anyway.
The reason I asked is that I was following the help doc and wasn't getting exactly what they described, even after following the steps.

  • Is it better to normalize the Z-depth in Maya (if so, what's the best way for Arnold), or is it safer to normalize it in Nuke?

Thanks a lot for the help


#6

What exactly do you mean by “normalize”? A 1/z value? I'd try to do it in post if possible. But why do you need a normalized pass at all?


#7

By ‘normalize’, I mean getting the values into a range I can actually use on my ZDefocus in Nuke.

The Z-depth values I get out of Arnold are so big that I need to grade them (in V-Ray I can input my black & white values; can we do that in Arnold?).
Is it because I work in real-world units that I get these big values, like 10000/12000, on my Z pass?
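For reference, one way to bring those raw distances into a 0-1 range before ZDefocus is a simple Grade on the depth channel; a minimal Nuke Python sketch (the whitepoint and file path are assumptions, set the whitepoint to your far distance):

```python
# Minimal sketch (Nuke Python): remapping a raw Arnold depth channel that holds
# real-world distances (e.g. 0-12000 units) into a 0-1 range before ZDefocus.
import nuke

read = nuke.nodes.Read(file='/path/to/render.exr')             # placeholder path
norm = nuke.nodes.Grade(channels='depth', whitepoint=12000.0)  # far distance maps to 1.0
norm.setInput(0, read)
```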