I don’t see that. Load the Dalai Lama image from here in Photoshop and apply a Gaussian blur filter to it. It will turn into a gray image instead of a blurred photo of the Dalai Lama.
Then convert the image to 32-bpc. Apply the same filter again and you get the correct result.
Again, non-linear 8-bpc images are processed as non-linear in Photoshop; that's precisely the reason why they need to be converted to 32-bpc. Photoshop processes the 8-bpc image in log space and not in linear space, as the bilinear filter assumes. What happens when we convert to 32-bpc is that Photoshop linearizes the image according to the working/image color space gamma, and then applies a kind of LUT to show the result in log space - not linear. What we are seeing in a 32-bit converted image is a linear image, but we don't notice it because it has the LUT applied. Just to prove this:
Take the image
[img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_gray.jpg[/img]
and linearize it with a simple 0.4545 gamma value.
[img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_linear.jpg[/img]
Notice we are still in 8-bpc (yes, you'll get better results by converting it first to 16-bpc, but just to prove the point, leave it at 8-bpc). Now we have an 8-bpc image in linear gamma. Re-scale the image:
[img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_linear2.jpg[/img]
We have processed the 8-bpc image in linear space.
Apply back a 2.2 gamma correction:
[img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_log.jpg[/img]
Correct result. If Photoshop processed non-linear 8-bpc images in linear space, we wouldn't need any manual linearization/gamma-correction step. But watch out: not all scaling algorithms should be performed in linear space. Genuine Fractals, for example, assumes non-linear images and should be applied in log space. Other Photoshop filters, however (like blurs, sharpening, etc.), should be applied in linear space.
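Just as an illustration, here is a minimal Python/numpy sketch of the same linearize -> process -> re-encode round trip outside Photoshop (the file names are only placeholders, and I'm assuming a plain 2.2 power curve instead of the exact sRGB curve):
[code]
import numpy as np
from PIL import Image

# Placeholder file name; any gamma-encoded (~2.2) 8-bpc photo will do.
img = np.asarray(Image.open("gamma_dalai_lama_gray.jpg").convert("L"),
                 dtype=np.float64) / 255.0

linear = img ** 2.2                      # linearize (same idea as the 0.4545 gamma step above)

# Down-scale 50% with a simple 2x2 box average, performed on the linear values.
h, w = linear.shape
h, w = h - h % 2, w - w % 2
small = linear[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

out = small ** (1 / 2.2)                 # apply the 2.2 gamma correction back
Image.fromarray(np.uint8(np.round(out * 255))).save("rescaled.png")
[/code]
Skip the two power steps and you are processing the non-linear pixels directly, which reproduces the washed-out result described at the top.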
I read this, but in 3ds Max, which I use, it’s set by default I guess (which makes sense). I don’t know if you use RenderMan and whether it must be set manually there. So unfortunately I skipped this part, as I don’t know what application it has in 3ds Max.
Would like to help you with that, but I don't use Max. Maybe someone else here could help you to implement that principle more appropriately for Max.
While I understand how gamma correction works, and how linear images are displayed and gamma-corrected, I still miss what role our non-linear perception plays here. As was stated, the monitor's tube does not show us a linear result due to technical limitations. OK. Then it’s compensated by a gamma-corrected image, and we get our linear display back. So where does our non-linear vision play a role here? Sorry, I was tired today, so maybe I'm missing something that lies on the surface.
It's just that our non-linear response to light is approximately the inverse of the monitor response. It's just another perspective on the same phenomenon.
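Just to put rough numbers on it (a sketch only - I'm using the CIE L* curve as a stand-in for the eye's response and a plain 2.2 power for the monitor):
[code]
import numpy as np

def monitor(v):
    # CRT-style display response: roughly a 2.2 power curve
    return v ** 2.2

def lightness(y):
    # CIE L*: an approximation of our perceptual response to linear luminance
    return np.where(y > 0.008856, 116 * np.cbrt(y) - 16, 903.3 * y)

code_values = np.linspace(0.05, 1.0, 6)      # evenly spaced gamma-encoded code values
print(np.round(lightness(monitor(code_values)), 1))
# -> roughly evenly spaced perceived-lightness steps:
#    the encoding approximately cancels the eye's curve
[/code]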
About the perception of gradients, “The Art and Science of Digital Compositing”, 2nd edition, p. 421:
You would find that, visually, it’s almost impossible to distinguish the difference between two of the brightest colors, say number 99 and number 100. In the darker colors, however, the difference between color number 1 and color number 2 would still remain noticeable.
This is right on a linear-intensity scale, as I showed before.
[img]http://imagic.ddgenvivo.tv/forums/LCSexplained/linramp.png[/img]
The text implies tones 99-100 are bright and tones 1-2 are dark.
In the darker colors, however, the difference between color number 1 and color number 2 would still remain noticeable. This is not merely due to the human to particular brightness levels, but also to the fact that the eye being more sensitive to the amount of change in those brightness levels.
Guess this sentence is incomplete:
[i]This is not merely due to the human to particular brightness levels[/i]
Otherwise it makes no sense. Guess it should be: This is not merely due to the human [b]visual response[/b] to particular brightness levels. Anyway, they are referring here to the fact that human vision is more sensitive in darker areas, since tones 1-2 are in the darker areas of the scale and show the most noticeable differences.
In our 100-color example, a white value of 100 is only about 1.01 times brighter than color number 99. At the low end, however, color number 3 is twice as bright as color number 2.
Again, this was shown in the linear-intensity scale earlier. This linear way of capturing values is present in RAW photographs, and it is the reason why half of the 12-14-bpc data is dedicated to the brighter areas, and why people tend to underexpose photographs to avoid blowing out the highlights - which only wastes bits on the darker areas. And this is the reason why digital camera manufacturers have compensated for this 'under-exposing syndrome' in their histograms and built-in light meters, and why the 18% middle gray is now 12.5% (or even less depending on the model and make).
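A tiny sketch of that bit distribution, assuming a hypothetical 12-bit linear sensor, shows that half of all code values land in the single brightest stop:
[code]
# Code values per photographic stop for a hypothetical 12-bit linear capture.
levels = 4096
top = levels
for stop in range(1, 7):
    low = top // 2
    print(f"stop {stop} below clipping: {top - low} code values")
    top = low
# -> 2048, 1024, 512, 256, 128, 64 ... the shadows get almost nothing
[/code]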
On this occasion, I would like to ask about some interesting thoughts you expressed throughout this topic, which left me with questions:
1. Why does tone mapping fit more information into the image when converting? Doesn’t it just add more contrast and convert the gamma?
Tone mapping, or better said, dynamic range mapping (DR-mapping), allows us to recover details in the bright and dark areas of HDR images that simple gamma corrections can't in only 256 steps (8-bpc). So it does indeed fit more relevant data into the same small 'space' that our LDR monitors are able to display:
[img]http://imagic.ddgenvivo.tv/forums/tonemapping/dptm/linnotm.png[/img]
[img]http://imagic.ddgenvivo.tv/forums/tonemapping/dptm/tm2.png[/img]
[font=Arial](one never knows the little details that can be hidden there...)[/font]
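Just as an illustration (this is not the operator used for the images above), a minimal global Reinhard-style sketch in Python shows why highlight detail survives DR-mapping while a plain gamma correction simply clips it:
[code]
import numpy as np

# Hypothetical linear HDR samples spanning several stops above display white.
hdr = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])

gamma_only  = np.clip(hdr, 0.0, 1.0) ** (1 / 2.2)   # plain gamma: everything above 1.0 clips
tone_mapped = (hdr / (1.0 + hdr)) ** (1 / 2.2)      # global Reinhard compression, then gamma

print(np.round(gamma_only * 255).astype(int))    # [ 11  31  90 255 255 255 255]
print(np.round(tone_mapped * 255).astype(int))   # [ 11  31  86 186 244 254 255]
[/code]
With the plain gamma, everything above display white becomes the same 255; with the DR-mapping, the bright samples still land on distinct, displayable steps.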
- From what I’ve read, video has the gamma correction burnt in, whereas other images contain a LUT for this, which is not what MasterZap wrote in his blog (saying the gamma is in the pixels, for JPEGs, for example).
There are indeed cameras which have built-in gamma corrections, but there are also cameras (like the Red One) that can shoot in RAW (linear), where we are able to choose among 4 gamma curve responses independently of the gamut's chromaticities. But I think you have forgotten your question here, right? :)
- “16-bit is NOT FP.” Do you mean that a 15-bit per-channel image does not necessarily mean a floating-point image, just as HDRI does not necessarily mean a floating-point representation?
Yes, that may be the case, but I think I was referring to the fact that 16-bpc images are not full floating point (4294967296 usable values), just half-precision floating point (32767 usable values), or mediumDR.
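A tiny sketch of that precision difference, half float versus full 32-bit float:
[code]
import numpy as np

x = 1000.06
print(np.float16(x))   # 1000.0  - the fractional part is lost; half-float steps are 0.5 here
print(np.float32(x))   # 1000.06 - preserved with plenty of room to spare in single precision
[/code]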
4. “Avoid sRGB color space; screens need their own color profile.” I’m not sure what you meant by this?
It means that for a more accurate reproduction of colors, computers need to use their monitor's color profile - at least the generic one, though a customized profile is desirable - not the sRGB profile. Monitor color profiles describe the way our monitors are able to reproduce a given color. Since monitor color spaces are different from the sRGB color space, the chromaticities can be very different these days. If we use the sRGB color space, we are not only losing a more accurate color reproduction, but we might also be wasting the monitor's capability to reproduce a wider range of colors.
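As a sketch of the idea in code (the monitor profile path is hypothetical - you would point it at your own, ideally calibrated, ICC profile), Pillow's ImageCms module can remap an sRGB image into the monitor's own color space:
[code]
from PIL import Image, ImageCms

# Hypothetical path; ideally a custom-calibrated profile for this specific monitor.
monitor_profile = ImageCms.getOpenProfile("/path/to/my_monitor.icc")
srgb_profile = ImageCms.createProfile("sRGB")

img = Image.open("gamma_dalai_lama_gray.jpg").convert("RGB")

# Convert pixel values from sRGB to the monitor's own color space.
converted = ImageCms.profileToProfile(img, srgb_profile, monitor_profile)
converted.save("for_my_monitor.png")
[/code]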
I have a question.
I understand gamma correction; however, I don't understand the difference between correcting in a third-party app compared to correcting in the app itself?
A relevant difference is that it's not a good idea to save 8-bpc images in linear space. Also, depending on the 3D app and the third-party app, the gamma corrections might not be as accurate - when the correction is not a simple power function/gamma curve, for example.
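A quick sketch of the first point - why storing linear values in only 8-bpc falls apart in the shadows (again assuming a plain 2.2 encoding gamma):
[code]
import numpy as np

# A smooth dark gradient covering the bottom of the scene, in linear light.
linear = np.linspace(0.0, 0.05, 1000)

gamma_8bit  = np.round((linear ** (1 / 2.2)) * 255)   # stored gamma-encoded (the usual way)
linear_8bit = np.round(linear * 255)                  # stored linear in 8-bpc (the bad idea)

print(len(np.unique(gamma_8bit)))    # ~64 distinct shadow levels survive
print(len(np.unique(linear_8bit)))   # ~14 distinct levels: visible banding in the darks
[/code]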
Gerardo