Lenscare Workflow - Vray Depth Pass


#1

Hello,

Just searched CGTalk, but there doesn’t seem to be an answer to this yet.

What is the ideal way to map the depth map’s black-to-white values? If the distance from the camera to the furthest object in an animated sequence is 20 units, should I be mapping the black value to 20, or doubling it for better results? How accurate should I be?
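For reference, here is how I understand the mapping, as a plain Python sketch (the 20-unit range is just my example scene, and `depth_value` is my own illustration, not a Maya or Lenscare function):

```python
def depth_value(distance, near, far):
    """Map a camera distance to a 0..1 pixel value: white (1.0) at near, black (0.0) at far."""
    t = (distance - near) / (far - near)
    return 1.0 - max(0.0, min(1.0, t))

# Scene spans 0..20 units. Mapping black exactly to 20 units:
mid_tight = depth_value(10, 0, 20)   # 0.5 -- uses the full value range
# Doubling the black distance to 40 squeezes the scene into the upper half:
mid_loose = depth_value(10, 0, 40)   # 0.75 -- half the gray levels go unused
```

As far as I can tell, mapping black to the actual furthest distance uses the full range, while doubling it only throws away precision.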

Have you guys mapped the distance to the black and white values yourselves? Does this method cause the DOF to shift around unexpectedly?

Or should I just let the camera decide the black and white values instead?

Which file format should I use: 32-bit or 16-bit? And the aliasing thing is an old myth, right?

Any other good practices I should know about to get really decent results? I was looking at Alex Roman’s The Third & The Seventh. Are we sure he did that in Lenscare and not with 3D DOF renders?

As a reference, I am using Maya 2010 with After Effects CS5 and Lenscare 1.45.
Thank you!


#2

He did indeed use Lenscare.

I use V-Ray in C4D, which isn’t as advanced as other versions, so perhaps there are better ways of doing this in other programs.

We render the depth pass as an extra render pass with the standard V-Ray zdepth pass, no mapped falloff or anything. Remember that alpha channels in textures won’t be transparent in the depth map; in those cases a linear falloff in camera space can be a workaround.
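A quick Python sketch of what that falloff means, just to illustrate the math (this is my own toy function, not V-Ray’s actual shader):

```python
import math

def camera_space_falloff(point, camera, start, end):
    """Distance-based gradient shaded on the surface: white at 'start', black at 'end'.
    Because it is evaluated on the shaded surface, texture alpha cuts it out
    correctly, whereas a raw z-buffer ignores transparency and writes the
    polygon's depth across the whole card."""
    d = math.dist(point, camera)
    t = (d - start) / (end - start)
    return 1.0 - max(0.0, min(1.0, t))

# A point 10 units in front of the camera, falloff running from 0 to 20 units:
shade = camera_space_falloff((0.0, 0.0, -10.0), (0.0, 0.0, 0.0), 0.0, 20.0)  # 0.5
```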

If Lenscare in After Effects can use subpixels, you should render the depth pass at double the resolution and without any AA; this gets the most accurate results. If not, render the depth pass with fixed antialiasing, not adaptive.
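The reason AA hurts the depth pass can be shown with a couple of numbers (my own toy example, not output from any renderer):

```python
# At an object edge: foreground surface 2 units away, background 20 units away.
fg, bg = 2.0, 20.0

# Antialiasing blends coverage, so the edge pixel stores an averaged depth:
aa_edge = (fg + bg) / 2.0   # 11.0 -- a distance that exists nowhere in the
                            # scene, so the blur radius computed there is wrong.

# A double-resolution pass with no AA only ever stores real depths, and a
# subpixel-aware blur can pick the correct one per sample:
subpixels = [fg, fg, bg, bg]
```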

16-bit should be enough in most cases.
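Some rough numbers on why, assuming the 20-unit scene from post #1:

```python
depth_range = 20.0                  # units covered by the black-to-white ramp

step_16bit = depth_range / 65535    # ~0.0003 units between adjacent gray levels
step_8bit = depth_range / 255       # ~0.08 units -- coarse enough to band the blur
```

At 16 bits the quantization step is far below anything visible; at 8 bits adjacent depth levels are nearly a tenth of a unit apart.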

Also remember that Lenscare often has to invent new detail where something is blurred at the edges, so there will always be some artefacts there.


#3

Does anyone actually render the beauty pass in 8-bit and then set up a separate render pass for 16-bit depth?

Can anyone else confirm that Lenscare can use subpixels in After Effects, as mentioned?

I really want to know more about Alex Roman’s DOF workflow. It looks flawless; the artifacting is non-existent.

As for HDRI probes that are also used as the background: do you guys usually put a sphere just behind everything to make sure the depth doesn’t run to infinity? Would this help with the blurring of the edges?


#4

I would really like to know about this too. What I’ve done in the past is use Mental Ray for rendering the zdepth, or just avoid DOF altogether when there were so many artifacts.

I really wonder how you get a zdepth pass from V-Ray that actually works with Lenscare, especially once displacement maps, alpha maps, and camera distortion come into the picture.

I’d really like to get some insight on this! Come on, someone has to know, right?


#5

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.