Hello,
I am working on a project for which I need to take thousands of depth images of pieces of clothing in order to build a large database. The goal is to train AI models on this database so that a computer learns to distinguish between the clothing pieces.
I have two questions. I followed the tutorials that show how to take depth images using luminance presets on a Depth layer. The output images have four channels: RGB + mask, I guess. The strange thing is that the values in R, G and B are not exactly the same, and I think they should be for luminance images. They vary only slightly, but I am not sure which channel to choose. Curiously, when I set the mask not to be rendered, the R, G and B values are identical.
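For reference, this is roughly how I am checking the channel differences (a minimal Python sketch, assuming Pillow and NumPy are available; "depth_render.png" is just a placeholder filename). For now I am averaging the three channels, but I do not know if that is the right thing to do:

```
import numpy as np
from PIL import Image

# Load the rendered PNG (RGBA if the mask is enabled).
img = np.asarray(Image.open("depth_render.png"), dtype=np.float64)

r, g, b = img[..., 0], img[..., 1], img[..., 2]
print("max |R-G|:", np.abs(r - g).max())
print("max |R-B|:", np.abs(r - b).max())

# If the differences are only a level or two, averaging the three
# channels seems like a safe way to get a single depth channel.
depth_gray = (r + g + b) / 3.0
```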
Suppose I set the luminance minimum to 110 and the maximum to 180. I then save the image as a .png, whose normalized pixel values map 0 -> 180 and 1 -> 110. To compute the distance I do: Distance = 110 + (1 - pixel value) * 70.
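In code, the decoding step looks like this (a sketch assuming the PNG is 8-bit, so raw values 0..255 normalize to 0..1; again Pillow and NumPy, and the filename is a placeholder):

```
import numpy as np
from PIL import Image

NEAR, FAR = 110.0, 180.0  # luminance preset limits, in cm

raw = np.asarray(Image.open("depth_render.png"), dtype=np.float64)
value = raw[..., 0] / 255.0                      # normalize one channel to [0, 1]
distance = NEAR + (1.0 - value) * (FAR - NEAR)   # = 110 + (1 - value) * 70
```

Note that if the PNG really is 8-bit, the depth resolution is (180 - 110) / 255, about 0.27 cm per gray level, so perhaps part of what I am seeing below is just quantization.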
The second strange thing is that the output values do not correspond exactly to the real distance from the camera to the object. Before starting with the clothes, I tried with a cube: knowing the real distance is, for example, 150 cm, the computed distance varies between 149.6 and 150.4 cm. Do you have any recommendations for reducing the noise in the rendered images?
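One thing I have been considering, in case it is a reasonable workaround: rendering each scene several times and averaging the decoded depth maps, since independent per-render noise should shrink roughly with the square root of the number of frames. A sketch of that idea (file names "render_000.png" etc. are hypothetical; it reuses the decoding from above):

```
import numpy as np
from PIL import Image

NEAR, FAR = 110.0, 180.0

def decode(path):
    """Decode one 8-bit luminance render into distances in cm."""
    raw = np.asarray(Image.open(path), dtype=np.float64)
    return NEAR + (1.0 - raw[..., 0] / 255.0) * (FAR - NEAR)

# Average N renders of the same scene to suppress per-render noise.
frames = [decode(f"render_{i:03d}.png") for i in range(8)]
depth = np.mean(frames, axis=0)
```

Would that be sensible, or is there a setting on the render side that avoids the noise in the first place?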
Thank you in advance for any suggestions.
Enric Corona

