Originally Posted by Troyan
This raises an interesting question. Or a couple, actually.
First, when using the z-depth info Cinema outputs in OpenEXR or RPF, how is transparency that's less than 100% handled? As you can see in that simple cell test, one of the cells that moves behind another semi-transparent cell isn't blurred at all, then is heavily blurred as it comes out from behind. That's one problem with making a map for Zblur2.
Second, in the case of a position pass or z-depth data, why doesn't it just generate a z gradient? Why are there objects in the depth maps at all? If all we're controlling is where the focus point sits relative to the camera, and how far forward and back from that point things get blurred, why is it necessary to include objects? Shouldn't z-depth data just be plotted points in 3D space? Or am I only making sense to myself?
The p-pass needs the objects in order to determine where each pixel is in space. Each pixel is assigned not only RGB values but also XYZ coordinates. Without any objects to generate pixels from, you wouldn't have that info. But I like the idea of the space itself determining the z-depth, though I don't know if that's possible.
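To sketch that in code: here's a minimal illustration (in Python, with made-up names; this is not an actual Cinema 4D or Zblur2 API) of why the p-pass needs objects, and of how a depth value then maps to blur strength around a focus point.

```python
import math

def depth_from_position(pixel_xyz, camera_xyz):
    """A position pass stores a world-space xyz per pixel; depth is just
    the distance from the camera to that stored position. A pixel with no
    object behind it has no xyz to store, so empty space contributes
    nothing to the map -- which is why objects have to be there."""
    return math.dist(pixel_xyz, camera_xyz)

def blur_amount(z, focus_dist, falloff, max_blur=1.0):
    """Blur grows with distance from the focus plane, in front or behind,
    clamped at max_blur. (Illustrative linear falloff, not the actual
    curve any particular plugin uses.)"""
    return min(abs(z - focus_dist) / falloff, 1.0) * max_blur

z = depth_from_position((0.0, 0.0, 10.0), (0.0, 0.0, 0.0))  # 10.0
blur_amount(z, focus_dist=10.0, falloff=4.0)    # 0.0: exactly in focus
blur_amount(14.0, focus_dist=10.0, falloff=4.0) # 1.0: fully blurred
```

So the "z gradient" intuition is right about the mapping itself, but the gradient only exists where surfaces give each pixel a position to measure from.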