Yes, let me second Frischluft Lenscare as well - it is light-years ahead of other Z-depth plugins as far as quality and flexibility go. It can do stuff that other AE (and Nuke and Fusion) plugins simply cannot.
Adding to the Lenscare comments: as long as you have a good depth pass with a nice range on the Z, you can mimic a rack focus in most cases with Frischluft. If you have a very shallow, contrasty depth map then you won't be able to do much (probably the same deal with the 3D z data too).
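To put numbers on the rack-focus idea: a minimal numpy sketch of how a plugin like Lenscare can drive blur from a depth pass by animating the focal plane. The depth values, the linear circle-of-confusion model, and `max_blur` are all made up for illustration, not Frischluft's actual math.

```python
import numpy as np

# Hypothetical 0-1 depth map: near object ~0.1, midground ~0.5, far wall ~0.9.
depth = np.array([[0.1, 0.5, 0.9]])

def blur_radius(depth, focus, max_blur=20.0):
    """Per-pixel blur radius in pixels: zero at the focal plane,
    growing with distance from it (a crude linear CoC model)."""
    return np.abs(depth - focus) * max_blur

# Animating the focal plane from foreground to background = rack focus.
for focus in (0.1, 0.5, 0.9):
    print(focus, blur_radius(depth, focus))
```

With a shallow, contrasty depth map most pixels share nearly the same depth value, so this difference is close to constant and there's nothing to rack between - which is the limitation mentioned above.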
I used to use RLA and RPF years ago but had to stop once I got bigger jobs with longer render times and needed to render on a farm using NET (RPFs don't work on NET). In that respect, those formats are a catch-22: they work great on a single box, but you can't really use them on heavy jobs if you need a farm. I may be wrong as it's been a while, but I don't believe you can get bokeh/blooming lens effects out of your RPF via the 3D depth of field effect in AE; it just blurs the footage. But that may not be important for the work you are doing.
After all these years, I still don't have a perfect DOF workflow, and I'm not sure there is one. I will sometimes even resort to making custom depth maps using 3D world gradients in camera space for the most control.
Chris recently updated the Zblur2 plugin to include multi-processor support, so it’s super speedy over net render. It’s honestly the best way to go for flexible DOF in post.
I think this is a holy grail of sorts - to have full & high quality control over DOF in post. I’m hoping that once deep compositing becomes more available & economical (which is just a matter of time), this may become possible.
Another way this might eventually be attained is if the geometry and lights/textures of a scene could be baked and brought into a compositing app. While technically possible right now with certain apps, it's a bit of a headache, and there isn't a solid workflow available.
Wouldn’t the upcoming Cineware, the unholy union of Adobe and Maxon, address this very thing?
I usually use Frischluft in AE as well, but I think the crown has been stolen by this plugin for Nuke:
The fact that it is used for the beautiful bokeh effects in the new Pixar short “The Blue Umbrella” speaks volumes for it. I have never used it, but it does have “deep data” support, which is part of the new EXR 2.0 spec. I usually struggle to avoid artifacts on edges with post DOF, so having EXR 2.0 export from C4D and support in Frischluft would be a great thing!
The raytraced DOF in C4D certainly looks better to me than the DOF in Vray, and along with the MB support it's one of the best reasons for sticking with the “physical render” (jeez I hate that name though!) in C4D for certain jobs.
“The raytraced DOF in C4D certainly looks better to me than the DOF in Vray”
They look exactly the same in Vray if you set the bias in the bokeh settings to 0.0. Vray has extra control over the lens type, so you can tune to different looks. The default is 0.5 at the moment; if you want the one like C4D, simply set the bias to 0.
The MB at the moment works only in the physical render, yes. In the update we are working on, it also works in Vray (quite a bit faster than in physical); this part is already working in internal builds. Hopefully not too far away to give to all.
Ok, so, I got confirmation from Chris at Biomekk that his Zblur2 plugin also exports Z-depth data.
“You can already get z channel information using the Position Pass post effect with Space set to Camera. The Blue channel will be your distance from camera.”
So, long story short, get Zblur2 for the depth data and Lenscare for the DOF in AE.
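Based on that quote (Blue channel = distance from camera), here's a minimal numpy sketch of turning a camera-space Position Pass into the kind of 0-1 depth map Lenscare wants. All the pixel values and the near/far planes are made up; in practice the pass would come out of a 32-bit EXR render.

```python
import numpy as np

# Hypothetical camera-space Position Pass (H x W x RGB), 32-bit float.
# Per the quote above, the Blue channel holds distance from camera.
pos_pass = np.zeros((2, 2, 3), dtype=np.float32)
pos_pass[..., 2] = [[100.0, 250.0], [400.0, 550.0]]  # scene units

z = pos_pass[..., 2]

# Remap to a 0-1 depth map between chosen near/far planes,
# clamping anything outside that range.
near, far = 100.0, 550.0
depth_map = np.clip((z - near) / (far - near), 0.0, 1.0)
print(depth_map)
```

Picking near/far yourself is the point: it's the "nice range on the Z" mentioned earlier, and it gives you the full 0-1 ramp across the part of the scene you actually want to rack through.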
Interesting. So how does the 3d depth plugin in AE read the depth data? What format do you render?
EDIT: oh… so you are saying Lenscare can read 3D z-depth data. Did not know that; trying it now…
Dunno, but I’m going to try it the moment I can get some air :). Trying to get this project done so I can get XParticles :). I want it SO BAD.
Yes, you can use the z-depth channel from RPF/RLA files. You apply the 3d channel extract filter to get the depth, then precompose the resulting depth channel and use that as your depth channel for the Frischluft plugin. It does produce a better result than the native depth channel due to the latter’s AA.
Did not know about the Position Pass post effect.
The Position Pass can be used in 3D-capable compositing applications to create all sorts of effects.
In short, this pass contains 32-bit encoded positional information. The compositing application knows the location of each pixel in 3D space and can, depending on the application itself, calculate effects such as new lighting, 3D masks or other three-dimensional effects.
Since the position information is 32-bit encoded and can also contain negative values, such a pass can only be saved in 32-bit image formats: B3D, OpenEXR, PSD, PSB, TIF.
OpenEXR is the recommended format.
Note that Photoshop CS5 often has difficulty handling negative values and also cannot correctly read OpenEXR layers.
Make sure of the following when using this post effect:
Multi-Pass is enabled
The Multi-Pass Post Effect is enabled
Position Pass is enabled
Edge antialiasing has intentionally been omitted from the Position Pass function because this would render it unusable.
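To illustrate why AA would make the pass unusable, a tiny numpy sketch with made-up camera-space points on either side of an object edge:

```python
import numpy as np

# Two surfaces meeting at an edge, in camera space (made-up numbers):
foreground = np.array([0.0, 0.0, 2.0])    # object 2 units from camera
background = np.array([0.0, 0.0, 50.0])   # wall 50 units away

# An antialiased edge pixel would blend the two positions 50/50...
aa_pixel = 0.5 * (foreground + background)
print(aa_pixel)  # z component is 26: a point floating in empty space

# ...which belongs to neither surface, so any effect driven by the
# pass (relighting, 3D masks, post DOF) gets garbage at every edge.
```

This is the same reason edge artifacts keep coming up with post DOF in this thread: averaged depth/position values at edges describe 3D points that don't exist in the scene.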
No, sorry, I’m not sure that it does. That was my poor hurried way of saying “I don’t always do DOF in post, but when I do, I use 3D Z-depth data or Lenscare using a depth map. Both of which can be created using Zblur2”. Not being careful about reading comprehension and writing has gotten me in trouble more than once in this forum.
This couldn’t have happened at a more perfect time. Sweet!
Isn’t a camera-oriented Position Pass z channel the same as a normal depth pass, just ‘calibrated’ differently (different start and end points, colour values related to real-world units)?
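To put numbers on what I mean, a quick numpy sketch (made-up values) showing the two differ only by a linear remap, so you can convert back and forth without losing anything:

```python
import numpy as np

# Camera-space z from a Position Pass: real-world units from camera.
z_world = np.array([120.0, 300.0, 480.0])

# A conventional depth pass normalizes between near/far planes:
near, far = 120.0, 480.0
depth_pass = (z_world - near) / (far - near)
print(depth_pass)  # 0 at the near plane, 1 at the far plane

# Invert the remap to recover world-unit z: same data,
# just 'calibrated' differently.
z_back = depth_pass * (far - near) + near
print(z_back)
```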
I seem to have gone down the position pass rabbit hole this morning. Lots of info out there.
My compositing approach is usually more streamlined and somewhat rudimentary, so I have not been using EXRs. Here is a post from Cineversity (which includes a separate link to a more detailed thread) if any are interested. Forgive me if this is all common knowledge or, even worse, a dead end.
PS: this does not relate to the original Vray vs. physical renderer conversation, but it might be useful.
Exactly right, except the more modern OpenEXR is the clear choice, in AE anyway. After several hours of obsessive research, this much I know. Going to be using the Position Pass a lot more; you can do a ton with it! Zblur does NOT create z-depth info; that comes straight from Cinema. Lenscare cannot use 3D z-depth data directly; it needs a map that can be extracted exactly as Adam laid out. If I can get it to work in my project tomorrow, I'll try and make a quick and dirty tut on it.
Yeah, fxphd had several awesome Position Pass classes in Background Fundamentals a couple of terms back. I've been dying to get into using it, but the AE position pass plug-in isn't the best.
I’m assuming you guys are working with Nuke or Fusion?
AE here, but I know this sort of workflow resides in the murky waters approaching the edges of AE's current abilities. I don't have high hopes, but I'm finding it all interesting. On an even more unrelated note, I came across this relighting plugin for AE which uses a normal map and position pass out of C4D.
Haven’t taken an fxphd course in about a year; I think I'm due for another. I've put off Nuke for way too long.
I’ve been eyeing Fusion for a little while now (it's $2400 vs Nuke's $4000); the Adobe CC events have made me wonder if I can get Fusion to take over the functions of both After Effects and Photoshop. I also think Fusion has a better future in store than AE, as Adobe will lean on plug-in developers as long as they can. And if I really need AE for a single job, it's $20 a month.