Luminance depth outputs with noise


#1

Hello,

I am working on a project for which I need to take thousands of depth images of pieces of clothing to build a large database. The goal is to train an AI on this database so that a computer learns to distinguish between the clothing pieces.

I have two questions. I followed the tutorials that show how to take depth images with the Luminance Depth preset on a render layer. The output images have four channels: RGB plus a mask, I guess. The weird thing is that the values in R, G and B are not exactly the same, and I think they should be identical for a luminance image. They vary only slightly, but I am not sure which one to choose. Curiously, when I set the mask not to be rendered, the R, G and B values are the same.
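For now, since the differences are tiny, one cheap workaround I could use is to average the three color channels and ignore the mask (a sketch with NumPy; the HxWx4 array layout is an assumption about how the image loads):

```python
import numpy as np

def to_single_channel(rgba):
    """Collapse an RGB(A) depth image to one channel by averaging R, G, B.

    rgba: array-like of shape (H, W, 4) or (H, W, 3); the 4th (mask)
    channel, if present, is ignored.
    """
    rgba = np.asarray(rgba, dtype=np.float64)
    return rgba[..., :3].mean(axis=-1)
```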

Suppose I set the minimum luminance distance to 110 and the maximum to 180. I then save the image as a .png, which gives normalized pixel values where 0 corresponds to 180 and 1 corresponds to 110. To recover the distance for each pixel I compute: Distance = 110 + (1 - pixel value) * 70
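In code, the recovery step looks like this (a sketch with NumPy, assuming the .png is read as 8-bit values in 0-255 and normalized by 255):

```python
import numpy as np

NEAR, FAR = 110.0, 180.0  # luminance-depth min/max used at render time

def depth_from_luminance(pixels):
    """Map 8-bit luminance values (0-255) back to distances.

    A normalized value of 1.0 maps to NEAR (110) and 0.0 maps to FAR (180),
    matching Distance = NEAR + (1 - value) * (FAR - NEAR).
    """
    value = np.asarray(pixels, dtype=np.float64) / 255.0
    return NEAR + (1.0 - value) * (FAR - NEAR)
```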

The second weird thing is that the output values do not correspond exactly to the real distance from the camera to the object. Before starting with the clothes, I tried with a cube: knowing the real distance is, for example, 150 cm, the computed distance comes out between 149.6 and 150.4 cm. Do you have any recommendation to reduce the noise in the rendered images?
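One generic workaround I could apply on my side, if the jitter comes from sampling noise, would be to render the same view several times and average per pixel (a sketch, not Maya-specific; random noise should shrink roughly as 1/sqrt(N)):

```python
import numpy as np

def average_depths(depth_maps):
    """Per-pixel mean over repeated renders of the same view.

    depth_maps: sequence of equally-shaped 2D depth arrays.
    """
    stack = np.stack([np.asarray(d, dtype=np.float64) for d in depth_maps])
    return stack.mean(axis=0)
```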

Thank you in advance for any suggestion.

Enric Corona


#2

I don’t really know what to tell you about your luminance issues, but is there any reason you aren’t just using Camera Depth passes instead of render layers? Render passes render alongside your main render and add almost no extra render time, since the depth data is simply read from the buffer and written to a file. You can have as many as you want, and with Camera Depth Remapped you can change all the settings you want for the various passes.

For example, sometimes I need a z-depth image for my faraway trees/hills, and a cleaner, closer one for my foreground elements. At rendertime, mental ray spits these out into separate subfolders in the /project/images folder, and it’s very convenient.


#3

Hello,

Thank you very much for the idea. I tried Camera Depth passes first, but the only file format that produced a readable depth image was .iff (Maya IFF). With any other extension the file came out corrupted and unreadable.

And with .iff I have to open the images one by one in fcheck. Is there any way to automate this? Can Matlab or Python read this format?

Thanks

Enric Corona


#4

I had the same problem, but found that the .tiff format works great. The example files above were uncompressed .tiffs, although Imgur converted them to .jpgs when I uploaded them. Hope that helps.
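If you need to process thousands of them, a sketch for batch-loading the .tiffs into arrays in Python (assuming Pillow and NumPy are installed; the path pattern is hypothetical):

```python
import glob

import numpy as np
from PIL import Image

def load_depth_tiffs(pattern):
    """Load every .tiff matching the glob pattern.

    Returns a dict mapping file path -> float64 depth array, so the whole
    database can be iterated without opening files one by one in fcheck.
    """
    depths = {}
    for path in sorted(glob.glob(pattern)):
        with Image.open(path) as img:
            depths[path] = np.asarray(img, dtype=np.float64)
    return depths

# Example (hypothetical path):
# depths = load_depth_tiffs("/project/images/depth/*.tif")
```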