Gamma correction - do you care?


#262

On the other hand, the effect on bump maps and such does not change based on your output gamma. The bump map will change the surface normal by the same amount when you’re rendering to gamma 2.2 as when you’re rendering to 1.0.
At least it will be very evident with normal maps, particularly in the seams that appear where the UVWs meet.

As has been discussed some pages back, linearization must be performed on all components that affect colors in the final render: plain colors, textures, lights, environment maps. Not on scalar textures, bump or normal maps.
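To make that rule concrete, here's a minimal Python sketch (illustrative only; `degamma` and the plain 2.2 exponent are my assumptions, not any renderer's actual API). Color inputs get linearized before shading; data inputs are left untouched:

```python
def degamma(value, gamma=2.2):
    # Convert a gamma-encoded color value back to linear light.
    return value ** gamma

albedo_texel = 0.5   # sample from an sRGB-encoded color texture
bump_texel = 0.5     # sample from a scalar height map - just numbers, no encoding

albedo_linear = degamma(albedo_texel)  # ~0.218: linearize color data
bump_as_is = bump_texel                # leave bump/normal/scalar data alone
print(albedo_linear, bump_as_is)
```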

The eye perceives blacks more sensitively than brights
I read that it has nothing to do with our eyes, but with the brightness difference between dark tones and bright ones. The dark tones have more contrast and more tonal variation.

Don’t know the context of that sentence since I didn’t write it, but it’s indeed true. Our visual response to brightness is more sensitive in dark areas than in bright ones, and that’s the reason why 18% of a given luminance appears to us about half as bright, and why dark tones seem to have more tonal gradations in a gamma-encoded (perceptually linear) scale, which has a nonlinearly-increasing intensity:

or fewer tonal gradations in a linear-intensity scale, which has a linearly-increasing intensity.
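That “18% looks about half as bright” figure is easy to sanity-check numerically, assuming our response is roughly the inverse power of the monitor's (~1/2.2) - a rough model, not an exact CIE lightness formula:

```python
# 18% of a given luminance, pushed through an approximate ~1/2.2 response:
perceived = 0.18 ** (1 / 2.2)
print(perceived)  # ~0.46, i.e. roughly half as bright
```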

On LDR displaying devices, purpose of gamma correction is to display images perceptually linear for human vision.
I don’t get it… first we say that gamma is needed because monitors can’t show us a linear result due to a technology limitation (it was one, but it is kept with TFTs for convenience), and now we say it has to do with our perception. How do these correlate: our non-linear perception and the measured non-linear photon emission of a bright monitor pixel?
I also wanted to ask: if in 3ds max we set our input gamma (for bitmaps) to 2.2, does this mean that the renderer “knows” it’s 2.2 but doesn’t apply it again, because it makes no sense to apply it twice, as our texture from Photoshop already has a gamma of 2.2?

Maybe this helps to understand it more easily:
Our non-linear response to light is something like this:

Monitor’s non-linearity is something like this:

Then, the gamma correction is something like this:

This is why a linear input appears darker to us, why a gamma correction is necessary (in the monitor or baked into images), and why what is perceptually linear for us is in fact non-linear, or better said, roughly logarithmic.
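Here's the same cancellation written out as a tiny sketch (assuming an idealized pure 2.2-power monitor, which real displays only approximate):

```python
def encode(x):
    # Gamma correction baked into the image (or applied by the video LUT).
    return x ** (1 / 2.2)

def monitor(v):
    # CRT-like display response.
    return v ** 2.2

x = 0.5
print(monitor(encode(x)))  # back to ~0.5: the full chain is linear
```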

Yes, this is where I get lost too. Gamma correction was used because of a limitation of CRT tech; now that TFTs can use a linear input, what’s the purpose of gamma correction?

Unless, as I suspect, it’s a matter of GPUs still having to be compatible with CRT technology and the whole legacy workflow.

At least for TFT-LCDs, the native behavior oscillates around an approximation of a power function of 1.13-1.15 (pretty close to linear, but not linear yet); it’s an approximation because the response is not really a power function but a more complex formula. The problems there were the blacks, and the fact that image systems are gamma-encoded with a gamma near 2, so manufacturers opted to modify the lightness response of TFT monitors to match the standard image systems.

Photoshop works with an 8-bit 2.2-gamma image as if it were a linear one. The only solution is to work with floating-point data. Or to use gamma 1.0 for 8-bit images while viewing them as 2.2 gamma (a lot of fuss).

Non-linear 8-bpc images are processed as non-linear in Photoshop.

Gerardo


#263

I don’t see that. Load the Dalai Lama image from here in Photoshop and apply a Gaussian blur filter to it. It will turn into a gray image instead of a blurred photo of the Dalai Lama.
Then convert the image to 32bpp. Apply the same filter again and you get the correct result.


#264

Thank you for the reply, Gerardo, and your interesting input in this topic.

I read this, but in 3ds max, which I use, it’s set by default I guess (which makes sense). I don’t know if you use RenderMan and it must be set manually there. So unfortunately I skipped this part, as I don’t know what application it has in 3ds max.

While I understand how gamma correction works, and how linear images are displayed and gamma-corrected, I still miss what role our non-linear perception plays here. As was stated, the tube of the monitor does not show us a linear result due to technical limitations. OK. Then it’s compensated by a gamma-corrected image. We get our linear display back. So where does our non-linear vision play its role here? Sorry, I was tired today, so maybe I’m missing something that lies on the surface. :cool:
About perception of gradients, “The art and science of digital compositing”, 2-nd edition, p. 421:
“You would find that, visually, it’s almost impossible to distinguish the difference between two of the brightest colors, say number 99 and number 100. In the darker colors, however, the difference between color number 1 and color number 2 would still remain noticeable. This is not merely due to the human to particular brightness levels, but also to the fact that the eye being more sensitive to the amount of change in those brightness levels. In our 100-color example, a white value of 100 is only about 1.01 times brighter than color number 99. At the low end, however, color number 3 is twice as bright as color number 2”

On this occasion, I would like to ask about some interesting thoughts mostly you expressed throughout this topic, and which left me with questions:
1. Why does tonemapping fit more information into the image when converting? Doesn’t it just add more contrast and convert gamma?

  2. From what I’ve read, video has the gamma correction burnt in, whereas other images contain a LUT for it, which is not what MasterZap wrote in his blog (he says the gamma is in the pixels, for JPEGs, for example).

  3. “16-bit is NOT FP” Do you mean that a 16-bpc image does not necessarily mean a floating-point image, just as HDRI does not necessarily mean a floating-point representation?

  4. “Avoid sRGB color space, screens need their own color profile.” I’m not sure what you meant by this?


#265

I have a question.

I understand gamma correction, however I don’t understand the difference between correcting in a 3rd-party app compared to in-app?


#266

The difference lies in the fact that renderers are linear, and if the texture data fed into them is not, then the gamma-corrected output will be “messed up”, since the data processed was non-linear and a 3rd-party app can’t fix that.
If I understand it correctly.


#267

I don’t see that. Load the Dalai Lama image from here in Photoshop and apply a Gaussian blur filter to it. It will turn into a gray image instead of a blurred photo of the Dalai Lama.
Then convert the image to 32bpp. Apply the same filter again and you get the correct result.

      Again, non-linear 8-bpc images are processed as non-linear in Photoshop; that's precisely the reason why they need to be converted to 32-bpc. Photoshop processes the 8-bpc image in log space and not in linear, as the bilinear filter assumes. What happens when we convert to 32-bpc is that Photoshop linearizes the image according to the working/image color space gamma, and then applies a kind of LUT to show the result in log space - not linear. What we are seeing in a 32-bit converted image is a linear image, but we don't notice it because it has the LUT applied. Just to prove this:
        
      Take the image 
      
      [img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_gray.jpg[/img]
      
      and linearize it with a simple .4545 value. 
      
      [img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_linear.jpg[/img]
      
      Notice we are still in 8-bpc (yes, you'll get better results by converting it first to 16-bpc, but just to prove the point, leave it as 8-bpc). Now we have an 8-bpc image in linear gamma. Re-scale the image:
      
      [img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_linear2.jpg[/img]
        
      We have processed the 8-bpc image in linear space.  
      Apply back a 2.2 gamma correction:  
      
      [img]http://imagic.ddgenvivo.tv/forums/LCSexplained/gamma_dalai_lama_log.jpg[/img]
        
      Correct result. If Photoshop could process non-linear 8-bpc images in linear space, we wouldn't need any linearization/gamma-correction step. But watch out: not all scaling algorithms should be performed in linear space. Genuine Fractals, for example, assumes non-linear images and should be applied in log space. Other Photoshop filters, however (like blurs, sharpens, etc.), should be applied in linear space.
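The same round trip is easy to script outside Photoshop. A hedged sketch with NumPy/Pillow (using a plain 2.2 power as the linearization, like the .4545 step above, rather than the exact sRGB curve; the filenames are just placeholders):

```python
import numpy as np
from PIL import Image

# Load the gamma-encoded image and normalize to 0..1.
img = np.asarray(Image.open("gamma_dalai_lama_gray.jpg").convert("RGB")) / 255.0

linear = img ** 2.2            # "apply .4545" = undo the 2.2 encoding

# Process in linear space - here a crude 2x2 box downscale as the example.
h, w = linear.shape[:2]
small = linear[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))

encoded = small ** (1 / 2.2)   # apply the 2.2 gamma correction back

Image.fromarray((encoded * 255).round().astype(np.uint8)).save("rescaled.jpg")
```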

I read this, but in 3ds max, which I use, it’s set by default I guess (which makes sense). I don’t know if you use RenderMan and it must be set manually there. So unfortunately I skipped this part, as I don’t know what application it has in 3ds max.

      Would like to help you with that, but I don't use Max. Maybe someone else here could help you implement that principle more appropriately for Max.

While I understand how gamma correction works, and how linear images are displayed and gamma-corrected, I still miss what role our non-linear perception plays here. As was stated, the tube of the monitor does not show us a linear result due to technical limitations. OK. Then it’s compensated by a gamma-corrected image. We get our linear display back. So where does our non-linear vision play its role here? Sorry, I was tired today, so maybe I’m missing something that lies on the surface. :cool:

      It's just that our non-linear response to light is approximately the inverse of the monitor response. It's simply another perspective on the same phenomenon.

About perception of gradients, “The art and science of digital compositing”, 2-nd edition, p. 421:
You would find that, visually, it’s almost impossible to distinguish the difference between two of the brightest colors, say number 99 and number 100. In the darker colors, however, the difference between color number 1 and color number 2 would still remain noticeable.

      This is right for a linear-intensity scale, as I've shown before.
      
      [img]http://imagic.ddgenvivo.tv/forums/LCSexplained/linramp.png[/img]
      
      The text implies tones 99-100 are bright and tones 1-2 are dark.  

In the darker colors, however, the difference between color number 1 and color number 2 would still remain noticeable. This is not merely due to the human to particular brightness levels, but also to the fact that the eye being more sensitive to the amount of change in those brightness levels.

      Guess this sentence is incomplete:  
      [i]This is not merely due to the human to particular brightness levels[/i]  
      Otherwise it makes no sense. Guess it should be: This is not merely due to the human [b]visual response[/b] to particular brightness levels. Anyway, they are referring here to the fact that human vision is more sensitive in darker areas, since tones 1-2 are the darkest part of the scale and show the most noticeable differences.

In our 100-color example, a white value of 100 is only about 1.01 times brighter than color number 99. At the low end, however, color number 3 is twice as bright as color number 2".

      Again, this was shown in the linear-intensity scale above. This linear way of capturing values is present in RAW photographs; it's the reason why half of the 12-14-bpc data is dedicated to the brightest areas, and why people tend to under-expose photographs to avoid blowing out the highlights - which only wastes bits and starves the darker areas. And this is the reason why digital camera manufacturers have compensated for this 'under-exposing syndrome' in their histograms and built-in light meters, and why the 18% middle gray is now 12.5% (or even less, depending on make and model).
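The "half the data goes to the highlights" part is plain arithmetic in a linear encoding. A quick illustration with a 12-bit example:

```python
# In linear light, each stop down halves the signal, so with 4096 linear
# code values the brightest stop alone spans codes 2048..4095:
levels = 2 ** 12
codes_per_stop = []
hi = levels
for stop in range(6):
    lo = hi // 2
    codes_per_stop.append(hi - lo)  # code values covering this stop
    hi = lo

print(codes_per_stop)  # [2048, 1024, 512, 256, 128, 64] - the shadows starve
```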

On this occasion, I would like to ask about some interesting thoughts mostly you expressed throughout this topic, and which left me with questions:
1. Why does tonemapping fit more information into the image when converting? Doesn’t it just add more contrast and convert gamma?

      Tone-mapping, or better said Dynamic Range mapping (DR-mapping), allows us to recover details in the bright and dark areas of HDR images that simple gamma corrections can't in only 256 steps (8-bpc). It indeed fits more relevant data into the same small 'space' that our LDR monitors are able to display:
      
      [img]http://imagic.ddgenvivo.tv/forums/tonemapping/dptm/linnotm.png[/img]
      [img]http://imagic.ddgenvivo.tv/forums/tonemapping/dptm/tm2.png[/img]
      [font=Arial](one never knows the little details that can be hidden there...)[/font]
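For anyone who wants to poke at this, here's a minimal sketch using the simple Reinhard operator x/(1+x) as a stand-in for a real tone mapper (actual DR-mapping curves are more sophisticated; this is just the idea):

```python
def gamma_only(x):
    # Plain 2.2 correction: anything above 1.0 simply clips to white.
    return min(x, 1.0) ** (1 / 2.2)

def reinhard(x):
    # Compress the whole HDR range into 0..1 first, then gamma-correct.
    return (x / (1.0 + x)) ** (1 / 2.2)

for x in (0.05, 0.5, 4.0, 16.0):
    print(x, round(gamma_only(x), 3), round(reinhard(x), 3))
# gamma alone maps 4.0 and 16.0 both to 1.0 (details blown out);
# Reinhard keeps them distinct (~0.904 vs ~0.973)
```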
  2. From what I’ve read, video has the gamma correction burnt in, whereas other images contain a LUT for it, which is not what MasterZap wrote in his blog (he says the gamma is in the pixels, for JPEGs, for example).
      There are indeed cameras which have built-in gamma correction, but there are also cameras (like the Red One) that can shoot RAW (linear), where we are able to choose among 4 gamma curve responses independently of the gamut's chromaticities. But I think you have forgotten your question here, right? :)
  3. “16-bit is NOT FP” Do you mean that a 16-bpc image does not necessarily mean a floating-point image, just as HDRI does not necessarily mean a floating-point representation?
      Yes, that may be the case, but I think I was referring to the fact that 16-bpc images are not full floating point (4294967296 usable values), just half-precision floating point (32767 usable values), or mediumDR.

4. “Avoid sRGB color space, screens need their own color profile.” I’m not sure what you meant by this?

      It means that for a more accurate reproduction of colors, computers need to use their monitor color profile - at least the generic one, though a customized profile is desirable - not the sRGB profile. Monitor color profiles describe the way our monitors are able to reproduce a given color; since monitor color spaces are different from the sRGB color space, chromaticities can be very different these days. If we use the sRGB color space, we are not only losing more accurate color reproduction, but we might also be wasting the monitor's ability to reproduce a wider range of colors.

I have a question.

       I understand gamma correction, however I don't understand the difference between correcting in a 3rd-party app compared to in-app?   
      A relevant difference is that it's not a good idea to save 8-bpc images in linear space. Also, depending on the 3D app and the third-party app, the gamma corrections might not be equally accurate - when the encoding is not a simple power function/gamma curve, for example.
      
      
      
      Gerardo

#268

Thanks Gerardo. I will clarify some of my questions:
-what’s the role of color profiles in reading gamma correction?
-where does the gamma correction live? Baked into the pixel values, or in some kind of LUT that comes with JPEGs, etc.? I thought it was in a color profile.
-why does Photoshop automatically show, read, and save correct gamma, whereas 3D applications (as well as compositing programs) don’t?

I mentioned video because it was said that it bakes 2.2 gamma into the footage. I thought this was opposed to JPEGs etc. (which do not bake it in, but rather store a special LUT for it), but perhaps it was meant as opposed to film, where higher-dynamic-range data is preserved in the formats.

OK, I will try to illustrate the gamma-correction process from image acquisition to the final output, so you can see what I’m missing (if anything). I will return to the issues of gradient perception and how Photoshop works with non-linear images a bit later, as they are less important for this discussion than the questions I asked now. :slight_smile:




Tone mapping
The tonal or gamma correction is encoded directly into the image data stored in every JPEG file produced by those digital cameras. This standard is known as sRGB.
The sRGB standard for digital photos is flexible enough that, in practice, every camera manufacturer has devised a proprietary transfer curve that is not a simple power function, in order to try to get an edge on their competitors; but the transfer curves are still roughly similar to a gamma of 2.2, and the standard allows each manufacturer to use their own curve anyway.
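For reference, even the baseline sRGB curve itself is not a pure power function: it's a short linear segment near black spliced onto a 2.4-exponent power section, which together approximate gamma 2.2. A quick sketch comparing the two:

```python
def srgb_encode(linear):
    # The standard sRGB encoding: linear toe + 2.4-power shoulder.
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

for x in (0.001, 0.18, 0.5):
    print(x, round(srgb_encode(x), 4), round(x ** (1 / 2.2), 4))
# nearly identical through the midtones, but they diverge near black
```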


#269






Software mistakes
The main mess has come about because the people who built the first graphics programs were not color scientists, and so just picked “what looked right” (hence PCs and Macs being different).
Much software working with 8-bit images treats them as linear, making tonal and color mistakes. The solution is to work in 32-bit float. Or to use gamma 1.0 for 8-bit images while viewing them as 2.2 gamma (a lot of fuss).

The first thing that needs to be done is calibrating your monitor.

Links I found useful from this thread, which tell you “why”, and not only “how”:
http://livedocs.adobe.com/en_US/Photoshop/10.0/help.html?content=WS79B1AB30-AF45-4a89-9F0A-F83B1D991390.html
http://mymentalray.com/wiki/index.php/Gamma
http://www.ypoart.com/tutorials/tone/camera_dynamics.php - a good tonemapping explanation with camera examples. The points about 3D programs are correct, but the correction methods look a bit outdated for some packages.

Damn, it looks so boring. And I thought I would make it more understandable and funny. :smiley:


#270

Gerardo, I’ve read the whole thread now, and there are some questions.

Gamut - the range of colors (what is the difference from a color model then?)
Color model - the relationship between values (like those 3D tables you showed)
Color space - the absolute meaning of those values as colors (maximum values, as I understand).
“Color spaces contain info about the color matrix (gamut), gamma, white points, rendering intents, LUTs, etc.”
Color management is not the same as linear workflow.
The purpose of a color management workflow is colorimetric consistency and predictability. The purpose of a linear workflow is realistic light behavior.

sRGB is the color space most monitors use (OK, some use Adobe RGB, but I missed how that concerns this discussion).
rec709 is a HDTV color space.
They are similar.
Why do we care about those 2 color spaces? sRGB is a monitor standard, OK, but what about rec709 - why is it important?
In CG we try to use spaces compatible with our output media (do you mean the output renders which we save with 2.2 gamma?).
The chromaticities (color characteristics) of rec709 and sRGB are similar; that’s why we use gamma 2.2, which is similar too (to those spaces?).
That is to say, it’s not an accurate solution, but it works for practical purposes.
People who work for TV are more concerned with colorimetric consistency. And is that where the difference between gamma 2.2 and rec709 occurs?
“We need to pay attention that the shape and size of our working colorspace contains - as much as possible - the color space of our output medium. But this is only critical when we work for output mediums with wide color spaces (like in film, laser projections, RealD…).”
And pure CG guys (and gals) may not need to worry about this stuff if they work only with CG and don’t combine it with other media.
MasterZap rant:
“And to the other question floating around, yes, gamma 2.2 is “for all practical purposes” identical to sRGB. YES, if you do a round-trip conversion repeatedly the wrong way, YES you get errors, and they grow. Totally known. This all dwarfs in relation to the massive error of using gamma=1. Heck, even gamma=2 (i.e. approximating it with squares and square roots) is sufficient to “look nice” compared to the atrocious mess that is oldschool nonlinear rendering”
Is that the difference between what, exactly? Between previewing with gamma 2.2 and baking the sRGB gamma into an image by tonemapping? Or for saving the output?
What information do the color profiles of individual files like JPEGs contain?
Floating-point space has no gamma (right - it has gamma 1.0, which is the same thing).
Do you propose saving renders in the sRGB or aRGB space instead of 2.2, depending on the output medium? You also said that it’s better to choose a working color space which covers any of your output spaces (the widest one). But choosing for what - for saving 8-bit images, or for viewing information in the 3D package?
The best thing is to save as a floating-point image, from what I understand. Then those gamma-related things become unimportant (except for inputting textures - or is that where it’s crucial for precise colorimetry?).
“A device-independent color space is the simplest way to keep color consistency among several different applications and image devices in the long term.” What kind of device?
Thank you.


#271

In that case, I probably just misunderstood what you meant by “8-bpc images are processed as non-linear”. My interpretation was that Photoshop is aware that the image is non-linear and applies non-linear scaling and filter operations, to get a result identical to doing the same operations on a linear image. The current way Photoshop operates is likely counterintuitive to anyone who doesn’t know the math behind the scenes. At least, in my experience it is hard to explain to our customers why they should expect different results in Photoshop when applying the same operation to the same image, depending on what format they saved the image in. And even knowing the math, I can’t really come up with scenarios in which I would want to apply a linear filter operation to non-linear data.


#272

In my current scene I have serious problems with the AA.
At the moment I use min 0 and max 3 with a threshold of 0.02, and it’s impossible to get clean results in some areas; you can see that the generated sampling is way too low. Even if I increase min to 2 and max to 4 the problem stays (and much higher render times are an additional issue). Of course it’s better, but the problem stays. I tested rendering with gamma 2.2 in the mia exposure node and it worked great with min 0 and max 3 without any problems. So, are there some workarounds for these problems?

Additional info: for viewing, the rendering is corrected with gamma 2.2 (gamma correction for the viewport), but internally it renders linear. The same goes for compositing: at the end, there is a gamma correction for viewing on screen. But there’s the problem - if you apply the gamma 2.2 for final viewing, the bad AA becomes visible. So for this, I use color curves (contrast changes only, no gamma changes) and tweaks on my image inputs that darken the bad AA areas, so they are not visible after the gamma 2.2 at the end of compositing, for viewing on the monitor.

#273

Excuse my ignorance, but even after reading most of this thread carefully, I still don’t understand some very basic and fundamental ideas about gamma, and would like to ask a few idiot questions. :slight_smile:

First, I don't understand this graph taken from [Wikipedia's Gamma correction page](http://en.wikipedia.org/wiki/Gamma_correction). The explanation there is written for people much smarter than me and I will greatly appreciate if this can be explained in a simpler way. 

[img]http://upload.wikimedia.org/wikipedia/en/5/5a/Gamma06_600.png[/img]


More specifically, I would like to know what the values on the horizontal and vertical axes represent. 
 Does a point on this graph represent a color by indicating a change of a color? If so, which colors, and to what degree? I'm asking because I don't get any difference when I apply any gamma to the RGB colors 255 0 0, 0 255 0, and 0 0 255. My guess is that these colors sit at the 0 or 1 points on this graph, so my question is: where is the position of these colors on the graph?
 It appears that 0 is the darkest and 1 is the lightest. If so, it looks like applying a gamma of 2.2 should make images darker, which is consistent with the example on the [Wikipedia's Gamma correction](http://en.wikipedia.org/wiki/Gamma_correction) page.

BUT, if I apply a gamma of 2.2 to an image in a program, it becomes lighter, and 0.45 makes it darker. Then in Photoshop, if I choose 32-bit Preview Options… from its View menu and apply a gamma preview of 2.2, images become darker. I’m confused by all this and would greatly appreciate some clarification.
Another confusion along these lines comes from examples like the one above, where the straight line indicating a linear image can mean two entirely different things: one is the original linear image created by using a linear workflow in a 3D rendering program, or captured by digital photo sensors in a raw format; the other is the final corrected image that appears perceptually linear to our eyes but in fact contains very different color data from the original input image. If this is so, then a point on the straight line of the gamma graph means different colors in different cases. I hope you can see my confusion here and give some help.
I really appreciate all the help from those who shared their understanding in this thread, and I feel pretty low because I’m still not getting it. What’s worse, I get more confused even by what appears to be a helpful example like this one:

I don’t get this at all, because I have no idea how our response to light can be measured other than by painting what we see and comparing it with the same thing captured linearly. If so, then won’t the curve actually be like this,
which would make our perception the same as the output from a monitor?

I will greatly appreciate any input regarding my questions
and thanks for all the help.


#274

The axis going from 0 to 1 represents the dark and bright values of the image: 0 is the darkest, 0.5 the midtones, and 1 the brightest tone.
Yes, you won’t get any changes by applying gamma to 0 and 1, because gamma is a power function: the value raised to an exponent (2 to the power of 5 is 2×2×2×2×2 = 32). Raise 0 or 1 to any power and you get the same number back. That’s why the gamma curve is pinned at the endpoints and isn’t a straight line in between.
Oh yes, the values are normalized, which means your 0-255 RGB values are not treated as 128, 240, etc., but are mapped into the 0-1 range. So your 128 value becomes ~0.5, and 255 becomes 1.
If the curve goes higher, the values increase - in our case, become brighter - and vice versa. Simply look at where the new output values sit in comparison with the former input ones.
The diagonal line on the graph is your input, which in our case is straight and gray. The input values run along the horizontal axis, and the output is measured on the vertical axis against the new curve.
So take an input of 0.5 and draw a vertical line up to the gray diagonal - that's your unchanged 0.5. Then look at the new curve (the value has changed) and read across to the vertical axis to find the new output value.
Gamma is a tricky thing: sometimes it brightens and sometimes it darkens, depending on how it’s implemented in a particular piece of software.
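A quick worked example of all of the above (values normalized to 0..1; both exponent conventions shown, since software differs on which one “gamma 2.2” means):

```python
for x in (0.0, 0.25, 0.5, 1.0):
    print(x, round(x ** (1 / 2.2), 3), round(x ** 2.2, 3))
# 0.0  -> 0.0,   0.0     the endpoints never move;
# 0.25 -> 0.533, 0.047
# 0.5  -> 0.73,  0.218   x**(1/2.2) brightens midtones, x**2.2 darkens them
# 1.0  -> 1.0,   1.0
```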


#275

I should have been clearer with my question about this. Photoshop darkens images when using a gamma higher than 1, like 2.2 (and vice versa), only when using the 32-bit Preview Options… from its View menu, which simulates monitor gamma; this behavior is as expected going by the gamma graph I showed in my previous post.
Otherwise, applying gamma in Photoshop via Exposure in the Image menu > Adjustments, like in any other program I’ve used, makes images lighter with a gamma higher than 1 and darker with a gamma less than 1 - which is not what I would expect going by the gamma graph. So this is not random across the different programs, but rather a consistent behavior with some logic that I don’t understand, and it’s very confusing.

Thank you for explaining how to read the graph. Very much appreciated. :thumbsup:.


#276

I’m not sure why, but gamma doesn’t work well with floating-point images, simply because values below 0 and above 1 don’t behave as predictably (simple math: what happens when you raise values above 1 or below 0 to a power? Right - not just a midtone lift). Some systems treat superblack values (below 0) specially, i.e. ensuring they won’t become bright under a gamma operation. But superwhites (above 1) can have some issues. That’s just my guess, though - I haven’t used floating-point images much.
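Easy to see with a couple of out-of-range samples (a NumPy sketch; how negatives are handled varies by system, as noted above):

```python
import numpy as np

vals = np.array([-0.2, 0.5, 4.0])
print(vals ** (1 / 2.2))
# -0.2 -> nan:   a fractional power of a negative number is undefined in the reals
#  0.5 -> ~0.73: the familiar midtone lift
#  4.0 -> ~1.88: a superwhite stays above 1 and may clip later
```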


#277

But as you suggested earlier by using the word ‘normalize’, any range of numbers can be mapped to the 0-1 range for the purpose of gamma calculations, so at least in theory that should not be a problem. No?


#278

So my question at the moment is:
When we apply a gamma of 0.45 to make an 8-bit sRGB image texture (or even a color swatch) linear, the result is a darker image/color. I wonder how this would look on a gamma graph. It is definitely not this gamma graph, because according to it the image/color should become lighter, not darker.


#279

It’s a bit different for floating-point images. Normalized values are for mathematical convenience. Sorry, I’m not experienced enough with this to comment.
But think about this: if you apply the whole gamma curve to, say, 48 stops (taking the lowest and highest points as mapped to 0 and 1), or to whatever a floating-point image holds (floating-point does not automatically mean HDRI, though it usually is, just as integer images are usually not HDRI, though they can be, however nasty that is), it will produce something quite different from what you visually expect, since your visible range is only about 8 stops.


#280

The graph’s backwards, that’s why. In common parlance, a “gamma of y” is basically x raised to the power (1/y).

So a gamma of ~0.45 is x to the power of 2.2, which is the solid line in the graph you linked to. Needlessly confusing, I know.


#281

Because you’re applying gamma correction, not gamma.

Gamma correction is the inverse; it’s correcting for the gamma function.

Gamma function of 2.2 = Gamma Correction function of 0.454545…
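A one-liner round trip to convince yourself (plain Python):

```python
x = 0.5
encoded = x ** (1 / 2.2)   # the gamma-correction function (~"gamma 0.4545")
decoded = encoded ** 2.2   # the display's gamma function
print(encoded, decoded)    # ~0.73, then back to ~0.5
```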