best DOF


donvlatko
07-07-2009, 11:03 AM
Can anyone explain/tell me what is best way to export depth of field pass for compositing ?

thanks

bigbossfr
07-07-2009, 11:07 AM
Just create a Z-depth pass, no?

donvlatko
07-07-2009, 11:11 AM
I hear it's not so good, kind of hard to control.

cbamber85
07-07-2009, 03:12 PM
I'd also like to know this, I find using z-channels gives horrendous 'halo-ing' on near/far edges, then if I manually soften the edge it gives even weirder problems - because I'm effectively extruding the edges backwards...

Stellios
07-07-2009, 05:52 PM
Render your Z-depth at twice the resolution, set its size to 50% in post, and use Frischluft Lenscare (not sure on the spelling)... This will take care of most of your Z-depth problems... I hear there are ways to slice up multiple Z-depth files, but it sounds scary...

cbamber85
07-07-2009, 06:04 PM
Thanks, I'll look into that!

Undseth
07-07-2009, 07:31 PM
I suspect that the professional way is to use a non-antialiased depth of field with a coverage pass. Yet nobody talks about this or shows examples, so I suspect that this method is either flawed/imperfect or it is some kind of super secret.

Stellios
07-07-2009, 07:38 PM
I suspect that the professional way is to use a non-antialiased depth of field with a coverage pass. Yet nobody talks about this or shows examples, so I suspect that this method is either flawed/imperfect or it is some kind of super secret.

Z-depth will never be perfect; it's a hack at trying to get DOF. Z-buffers weren't even supposed to do that job: they were a part of the rendering pipeline that gave internal information to the renderer. People just realized they could be used for DOF in post, but it will never be more than a cheap trick.

Also, you should NEVER anti-alias your Z-buffer. Changes in greyscale value represent a change in distance from the camera; each value is a specific point in space that should not be tampered with. The halfway solution is to render out your Z-buffer at twice the resolution and shrink it in post.
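To put numbers on that, here is a tiny Python sketch with made-up depth values (not from any real scene):

```python
# Hypothetical depths in scene units: a foreground edge at 1.5, the wall
# behind it at 20.0. Nothing in the scene exists between those two distances.
fg_depth, bg_depth = 1.5, 20.0

# An anti-aliased edge pixel that is half covered by the foreground object
# ends up storing a blend of the two depths...
aa_pixel = 0.5 * fg_depth + 0.5 * bg_depth     # 10.75

# ...a distance where no surface actually sits, so a post defocus keyed off
# this value blurs that edge by the wrong amount: the classic halo.
print(aa_pixel)
```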

cbamber85
07-07-2009, 07:58 PM
Also, you should NEVER anti-alias your Z-buffer.

The halfway solution is to render out your Z-buffer at twice the resolution and shrink it in post.

Surely that results in the same pixel graduation?

I suspect that the professional way is to use a non-antialiased depth of field with a coverage pass.

Excuse my ignorance, but what is a coverage pass? Is it just a matte layer for every object involved?

Stellios
07-07-2009, 07:59 PM
Surely that results in the same pixel graduation?


No, upping the resolution of an image does not change the values of the pixels.

cbamber85
07-07-2009, 08:12 PM
The halfway solution is to render out your Z-buffer at twice the resolution and shrink it in post.

Shrinking it will. If you halve the res of a picture, in the space of one pixel of your new image there used to be four, so the system will take the average value of those four old pixels to define the new one, the exact value depending on your resampling method. Either way you end up with a pixel that is not the value of the foreground object or of the background object, but a distance roughly halfway between them, giving exactly the same effect as anti-aliasing.
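A quick NumPy sketch of that argument, again with made-up values: averaging a 2x2 block of a double-resolution Z pass gives exactly that kind of in-between depth, while a plain nearest-neighbour pick keeps a depth that actually exists.

```python
import numpy as np

# A 2x2 block from a double-resolution Z pass straddling an edge: two
# foreground samples at 1.5 and two background samples at 20.0.
block = np.array([[1.5, 20.0],
                  [1.5, 20.0]])

box_average = block.mean()     # 10.75 -- a depth that exists nowhere in the scene
nearest     = block[0, 0]      # 1.5   -- a depth an actual surface sits at

print(box_average, nearest)
```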

Kev3D
07-08-2009, 03:25 AM
Maybe a good compositing package will still read the image at full resolution, only properly applying its resampling algorithms if the layer is visible at render time. I could be way off here, so correct me if I'm wrong.

I too am interested in the method for the best depth of field. I want to render an animation where a camera travels down the side of a bottle. No chance here for rendering out different objects in separate passes. Lots of refractions, so render times would probably be insane if I wanted to use proper, in-camera depth of field and have no visible grain.

leif3d
07-08-2009, 11:22 PM
Doing depth of field in post is a hack, period.

With that said, don't be discouraged, because you can pull off very convincing results with several methods.

-One of them is rendering foreground and background separately. Old school.

-Another is rendering your beauty without antialiasing, but at 3 times the resolution, and then shrinking it down in post to match the Z-buffer (which also needs to be rendered at 3 times the res); a rough sketch of that shrink step follows after this list. I would do this in MR about a year ago and it worked very well. RenderMan has a subpixel hider attribute which allows for an aliased output that will match the beauty and Z passes perfectly in post. Keep in mind that filtering your image (shrinking) will not be as nice as what a 3D renderer can do, so you need to evaluate the trade-offs.

-The only real way of doing a perfect defocus is doing it as part of your render.
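As a rough illustration of the shrink step in the second option, under my own assumptions about the filtering (not necessarily how any renderer or compositor does it): area-average the beauty so the downscale acts as the anti-aliasing, but decimate the Z with nearest-neighbour so its depths are never blended.

```python
import numpy as np

def shrink(img, factor, average=True):
    """Downscale by an integer factor: area-average (acts like the missing AA)
    or nearest-neighbour decimation (leaves the stored values untouched)."""
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    img = img[:h, :w]                      # trim to a multiple of the factor
    if not average:
        return img[::factor, ::factor]     # pick one sample per block, no blending
    blocks = img.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

# Hypothetical usage -- names and shapes are assumptions, not anyone's pipeline:
# beauty = 3x-resolution HxWx3 float render with antialiasing turned off
# zpass  = 3x-resolution HxW   float Z pass, also unfiltered
# beauty_out = shrink(beauty, 3, average=True)    # averaging acts as the AA filter
# z_out      = shrink(zpass,  3, average=False)   # depths stay valid, never blended
```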

That's the beauty of CG. With moving images, you can either fake something and get it 95% there, or you can do it the "real" way and deal with huge render times and be limited in post. Either way, no one will ever notice; the same goes for fake reflections (camera projections), fake displacements (bumps/normals), fake shadows (depth maps), fake motion blur, fake grain, fake vignettes, etc... as long as it looks great, that's all that matters.

Now...for a still that will be used in a poster at 300 dpi, then you probably want to be as accurate as possible.

All this depends on your toolset also, because a raytracing renderer will be very slow doing a defocus, but a REYES renderer will be very, very fast. The opposite goes for raytracing operations. So you might need to pick your toolset for the job at hand.

donvlatko
07-09-2009, 10:21 AM
"One of them is rendering foreground and background separately. Old school."

This looks to me like the only easy and safe way :)

lazzhar
07-09-2009, 10:33 AM
"One of them is rendering foreground and background separately. Old school."

This looks to me like the only easy and safe way :)

Be aware that it could increase render time if the foreground is hiding some parts that are expensive to render.
In the end, nothing can beat rendering 3d DOF.

chovasie1
07-09-2009, 10:37 AM
If you're not planning to render some feature film shots, a Z-depth pass composited in Fusion or some other software will do just fine! ;)
It's fast and easy to control... yes, it will never look like rendered DOF, but it works just fine for most people...

el diablo
07-09-2009, 06:09 PM
Do a Z pass at beauty render resolution, then unpremultiply the Z pass. Pipe that into your DOF node... el diablo
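Not how Fusion or any particular DOF node is actually implemented, just a naive Python/NumPy sketch of the general idea a Z-driven defocus builds on: map each pixel's depth to a blur amount and pick from a small stack of pre-blurred copies of the beauty. The parameter names and focus/scale values are made up; real plug-ins handle bokeh shape and edge bleeding far better.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def zdefocus(beauty, z, focus, scale, max_sigma=8.0, steps=9):
    """Naive Z-driven defocus. Assumes beauty is an HxWx3 float array and
    z an HxW float array of distances (not any package's real data layout)."""
    # Per-pixel blur amount, clamped to the largest blur in the stack.
    sigma = np.clip(scale * np.abs(z - focus), 0.0, max_sigma)

    # Pre-blur the beauty at a few fixed amounts (level 0 is the sharp original).
    levels = np.linspace(0.0, max_sigma, steps)
    stack = [beauty] + [
        np.dstack([gaussian_filter(beauty[..., c], s) for c in range(beauty.shape[-1])])
        for s in levels[1:]
    ]

    # For each pixel, use the pre-blurred level nearest to its blur amount.
    idx = np.rint(sigma / (max_sigma / (steps - 1))).astype(int)
    out = np.zeros_like(beauty)
    for i in range(steps):
        out = np.where((idx == i)[..., None], stack[i], out)
    return out
```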

thematt
07-10-2009, 09:03 AM
One last thing: render your Z pass in 32-bit!! Very important; it will give you much more control and precision on the depth.
Also, there is a depth node in Fusion that works quite well, and also a plug-in (DOF PRO by Richard Rosenman, found here: http://photoshop.pluginsworld.com/plugins/adobe/717/richard-rosenman/dof-pro-v4-0.html). I used it once in a production pipeline and it works wonders.
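To put a number on the 32-bit advice, a tiny sketch with a made-up clip range: an 8-bit Z pass only has 256 possible distances across the whole range, so nearby surfaces can collapse onto the same grey value, while a 32-bit float pass keeps them apart.

```python
near, far = 0.1, 100.0                 # made-up camera clip range, scene units
d1, d2 = 12.30, 12.40                  # two surfaces only 0.1 units apart

def to_8bit(depth):
    # How an 8-bit Z pass would store the depth: normalised into 0..255.
    return round(255 * (depth - near) / (far - near))

print(to_8bit(d1), to_8bit(d2))        # 31 31 -- both collapse to the same grey
print((far - near) / 255)              # ~0.39 units per 8-bit step

# A 32-bit float Z pass simply stores 12.30 and 12.40 and keeps them distinct.
```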

cheers

jeremybirn
07-11-2009, 04:31 PM
I want to render an animation where a camera travels down the side of a bottle. No chance here for rendering out different objects in separate passes. Lots of refractions, so render times would probably be insane if I wanted to use proper, in-camera depth of field and have no visible grain.

For that situation, I'd consider rendering out the environment around the bottle and blurring the whole environment somehow, maybe replacing what's around the bottle with a blurred environment cube, so when you render without DOF, the background and what you see through refractions and in reflections would be out of focus already.
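Just to sketch the pre-blur idea (generic Python/NumPy, made-up sizes and blur amount, nothing package-specific):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# env: the lat-long environment image as an HxWx3 float array, loaded with
# whatever image IO you have at hand; random data stands in for it here.
env = np.random.rand(512, 1024, 3).astype(np.float32)

# Pre-blur it heavily; sigma is an arbitrary, made-up amount of defocus.
blurred = np.dstack([gaussian_filter(env[..., c], sigma=25) for c in range(3)])

# Write `blurred` back out, map it onto the environment (or a cube) around the
# bottle, and render without DOF: the background seen through refractions and
# in reflections comes out already soft.
```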

-jeremy

CGTalk Moderation
07-11-2009, 04:31 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.