Thank you for your help, folks. Your explanations may be above my level of intelligence:blush: and with my poor math skills I have a hard time comprehending them.:) But to confirm whether I got your explanations right:
Applying [b]gamma[/b] of 2.2 means the image gets darker?
Applying [b]gamma CORRECTION[/b] of 2.2 means image gets lighter?
Does this also mean that if this [gamma graph](http://upload.wikimedia.org/wikipedia/en/5/5a/Gamma06_600.png) becomes a gamma [b]correction[/b] graph, the numbers on the axes should run from 1 to 0, not 0 to 1 as they currently do?
Do you, by any chance, know a link that shows a gamma [b]correction[/b] graph, not just a gamma graph?
again, I apologize for my utter ignorance in this matter.
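The darker/lighter question above can be checked with plain arithmetic. A minimal sketch in Python, with pixel values normalized to 0-1 (note that whether each operation is called "gamma" or "gamma correction" varies by application, as this thread shows):

```python
# Raising a normalized midtone to the power 2.2 pushes it down (darker);
# raising it to the power 1/2.2 pushes it up (lighter).
mid = 0.5
darker = mid ** 2.2          # ~0.218
lighter = mid ** (1 / 2.2)   # ~0.730

assert darker < mid < lighter
```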
Gamma correction - do you care?
so how's that helping?... yeah, I have my thick head on today!
Either the exposure preview in Max is now useless and should be ignored at all costs (you should only trust the actual render), or the gamma is not worth bothering with, as it makes the preview look 'wrong'.
what's the result on this then?
…so if I disable gamma and I'm using the mr photographic exposure control, it [mr] puts in a gamma of 2.2 for me…
with this, my previews and renders look the same.
if however I enable gamma and set it to 2.2 as you described, then my previews look brighter than my renders…
perplexing!
so am I right in thinking that if I use mental ray and the photographic exposure, then I should not set up any gamma, as mr does this by default?
If someone can explain in plain english how to set up mental ray in max regarding gamma when using the photographic exposure, and how to get the preview looking the same as the render, then all would be happy valley sunshine!

Sorry for keeping on about this, I just want to make sure I understand your reply. By saying it is backwards, do you mean that in this gamma graph, instead of 2.2 at the bottom it should say 0.45, and at the top it should say 2.2?
I know this should help me, but again I'm getting more confused the more I try to understand it. LOL.
So if I take a linear image rendered using a linear workflow, or a raw uncorrected image from a camera, and apply a gamma correction in Photoshop by typing 2.2, will I be technically correct if I say: I applied a Gamma Correction of 0.45?
And will I also be correct if I say: I applied a Gamma of 2.2 to the image?
Good lord, it is a total mess in my head. 
Sorry for being so eager, I’m really curious and confused about all this:)
A color profile is basically a (mathematical) description of a color space, which can contain info about the color matrix, gamma, white points, black points, color transformations, 1D/3D LUTs, viewing conditions, chromatic adaptations, etc. Some of this data should definitely be there; some of it might not be. But gamma is definitely present in any color profile. Even though LUTs can be embedded in a color profile depending on the format, there's commonly no gamma correction performed within the color profile itself. There's commonly just a gamma specification/tag (per channel) that a color engine from a Color Management (CM) system (at app level or at OS level) is able to use/apply when displaying the images. Since the gamma specified within a color profile can be edited, its gamma value/formula can be used for linearizing or gamma-correcting images accurately. Linear profiles (gamma=1.0) can be used for linearizing images and log profiles (gamma>1.0) can be used for gamma correction/expansion.
-where does the gamma correction live? Baked into the pixel values, or in some kind of LUT that comes with JPEGs etc.? I thought in a color profile.
Though LUTs (commonly 64x64x64 matrices) can be embedded at record time in the RAW files of some digital cinema cameras, they are never used in 8-bpc JPGs, not even by digital photographic cameras, and even less for gamma correction. It's not necessary, since color profiles take care of gamma in JPG files. However, the way gamma is managed depends on the bit depth, the image device and the app. Talking in general terms about images generated within a computer (2D/3D): if the app has some sort of basic CM capability, it will use a log profile to display images. In that context, we model the overall contrast of our images with gamma>1. When we save the image in 8 or 16 bpc, this intended contrast is saved with the image, and we say that the gamma is baked into the image. But what we have really done is use this gamma to distribute the brighter and darker areas of our image over the available 256/65,536 steps in a logarithmic way. When we load the image again, the same log profile (gamma>1) is used again to display it, and the intended contrast is preserved. If we assign another color profile to this image, with a different gamma value/formula, the intended original contrast will be lost. In some apps working with 32-bpc images, we can create and process images in linear space but preview them in log space (gamma-corrected). In those cases we say that a kind of LUT has been applied (a kind of, because it might not be a real LUT). Later, we can save this image with its linear gamma. In those cases we say that the gamma is not baked into the image, though we have the chance to bake it if we want to.
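The "baked gamma" idea above can be sketched numerically. This toy encoder (my own, not from any particular app) shows how a gamma>1 curve spends more of an 8-bpc file's 256 codes on the dark tones:

```python
GAMMA = 2.2

def encode_8bpc(linear):
    """Gamma-encode a 0-1 scene-linear value into an 8-bit code."""
    return round((linear ** (1 / GAMMA)) * 255)

# Codes spent below 18% scene-linear (roughly mid grey):
gamma_codes = encode_8bpc(0.18)        # ~117 of 255 codes
linear_codes = round(0.18 * 255)       # only ~46 codes if stored linearly
```

Storing the file linearly would leave the shadows with far fewer distinct codes, which is the banding problem gamma encoding avoids.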
-why does Photoshop automatically show, read, and save correct gamma, whereas 3D applications (as well as compositing programs) don't?
This can be done not only by Photoshop, but by any app able to support ICC/ICM profiles.
Photoshop can indeed show, read, and save correct gamma in ICC/ICM profiles automatically because its color correction/transformation engine (ACE) is able to manage the gamma tagged/embedded in these profiles. But be aware that it depends on how we have configured Photoshop: if its CM policies have not been configured properly, the intended gamma and gamut can be lost. I think this is simpler in AE when CM is enabled. Excepting Lightwave 3D, other 3D apps do not have ICC/ICM support and cannot even display an image with its intended gamma & gamut, because of the lack of any kind of real CM support. Though a bit buggy at the beginning, the only other 3D package that I'm aware has LUT support is Houdini (all other packages need third-party plugins like ColorSymmetry).
I mentioned video because it was mentioned that it bakes 2.2 gamma into the footage. I thought this was opposed to JPEGs etc. (which do not bake it in, but rather store a special LUT for it), but perhaps it was meant as opposed to film, where a higher dynamic range is preserved in the formats.
As we have discussed in previous pages, even though HD video assumes a .4545 input, the output gamma is not 2.2; it's a gamma formula (Rec.709) that is difficult to approximate with a simple gamma exponent. It can be more or less approximated by linearizing the image with a .4545 value and gamma-correcting with a simple gamma exponent of about 1.94-1.95, or by linearizing the image with a .5556 value and gamma-correcting with a simple gamma exponent of 2.2.
Also, as we have seen above, JPGs do not have built-in LUTs.
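For reference, the Rec.709 transfer formula mentioned above can be written out. A sketch comparing it with a plain 0.45 exponent (the divergence near black is why a single exponent is only an approximation):

```python
def rec709_oetf(L):
    # Rec.709 encoding: a linear segment near black, then an offset power law.
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def power_045(L):
    return L ** 0.45

near_black = (rec709_oetf(0.005), power_045(0.005))  # ~0.023 vs ~0.092: far apart
midtone = (rec709_oetf(0.5), power_045(0.5))         # ~0.706 vs ~0.732: much closer
```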
Ok, I will try to illustrate the gamma correction process from image acquisition to final output, so you can see what I'm missing (if anything). I will return to the issues of gradient perception and how Photoshop works with non-linear images a bit later, as they are less important for this discussion than the questions I asked now.
Ok! looks like a very useful initiative from you
Let’s see…
[[img]http://img638.imageshack.us/img638/3426/1copyic.jpg[/img]](http://img638.imageshack.us/i/1copyic.jpg/) [[img]http://img229.imageshack.us/img229/2600/2copyjz.jpg[/img]](http://img229.imageshack.us/i/2copyjz.jpg/) [[img]http://img716.imageshack.us/img716/8026/3copyf.jpg[/img]](http://img716.imageshack.us/i/3copyf.jpg/) [[img]http://img40.imageshack.us/img40/2099/4copyy.jpg[/img]](http://img40.imageshack.us/i/4copyy.jpg/)
Screen1: Those devices have very different curve responses. Since color profiles are there to describe color spaces, they are useful not only for display purposes, but also for creating, editing, processing, color transforming and outputting images as accurately as possible (well, it's possible to be even more accurate through the PCS, but that's another topic). The consistency concept comes from the fact that the colors in an image stop being ambiguous: we are able to apply the real meaning that a color value has for an image created, recorded, filmed, printed or projected on every different image device within the production pipeline. We can then process, display and output these images according to a consistent and predictable color flow.
Screen2: Black and white ranges for 8-bpc images are commonly expressed as 0-255, while the range for 32-bpc is expressed as 0-1 (because of its floating-point values). Although some 8-bpc images certainly don't have any color profile tagged/embedded, in a CM workflow we can assume an approximate color space and assign it.
Floating-point images can indeed contain color profiles and color models: a few FP formats (like ProEXR) are able to retain color space data, and DepthX supports CMYK images. RAW files also support color space metadata and even 1D/3D LUTs, and we can always assign the camera color profile to a RAW file when converting it to a more conventional HFP format. Consider also that HFP images are commonly saved in log space (not linear).
Screen3: Some processes benefit from being applied in linear light, but others do not. Color blending, motion blur, depth of field, glows, sharpen filters, etc. behave better in linear light, while other operations like color keying, artistic and stylizing filters, color grading, film grain, etc. behave better in log space.
As for using linear images with FP formats, it's good advice I think. Just consider also that HFP images are commonly not assumed to be linear by most apps, so in those cases we might want to store them in log space.
Screen4: In tone mapping/DR mapping what we are mainly changing is the contrast ratio of the image, and the way this is done does not necessarily involve clipping the W/B points; it depends on the way the tone-mapping operator works.
Tone mapping
the tonal or gamma correction should be encoded directly into the image data stored in every JPEG file produced by those digital cameras. This standard is known as sRGB.
The sRGB standard for digital photos is flexible enough that, in practice, every camera manufacturer has devised a proprietary transfer curve that is not a simple power function, in order to try to get an edge on their competitors; but the transfer curves are still roughly similar to a gamma of 2.2. The standard allows each manufacturer to use their own transfer curve anyway.
Cameras don’t use tone-mapping operators when saving JPG files. That’s not precisely tone mapping, and this is why many photographers prefer the term DR mapping, to make the real function of ‘tone mapping’ unequivocal. What the camera does is choose the linear middle portion of the exposed image that can be contained in 8 bpc, and it assigns - without taking the camera color profile into account - a color profile (sRGB/aRGB) with its consequent gamma. This is the reason why RAW formats in digital cameras provide much more control over exposure, gamma, B/W points, noise level, etc. for obtaining the maximum image quality. These RAW formats are what camera manufacturers keep proprietary. Btw, on this topic, there’s an interesting chapter in Christian Bloch’s HDRI Handbook.
Gerardo
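As a side note to the sRGB point above: the standard sRGB curve that such a camera JPG is tagged with is itself not a pure power function. A sketch of the official piecewise formula next to a plain 1/2.2 exponent:

```python
def srgb_encode(L):
    # sRGB encoding: linear toe near black, then an offset 1/2.4 power law
    # that overall resembles a simple gamma of about 2.2.
    return 12.92 * L if L <= 0.0031308 else 1.055 * L ** (1 / 2.4) - 0.055

mid_srgb = srgb_encode(0.5)       # ~0.735
mid_power = 0.5 ** (1 / 2.2)      # ~0.730, close but not identical
```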
Screen8: A LUT doesn’t work precisely the same way as a curve filter; it can be similar, but faster. What a real LUT basically does is perform a color transformation according to a ‘correction table’ that specifies key points used to replace the color values of an image. Instead of calculating the exact value for each pixel in the image, a color transformation engine commonly looks at the key point values in the table and interpolates the intermediate values - that’s why it is called a look-up table.
The higher the resolution of the table (matrix), the higher the accuracy of the result. FP pipelines (and low-res matrices) need more sophisticated interpolation algorithms. 1D LUTs are simple tables that map single values for each RGB channel (viable for decoupled-channel devices like CRT monitors), while 3D LUTs use color cubes that take into account how the value of one color channel affects the other channels, for every RGB channel. As we may notice, 3D LUTs provide more accurate color reproduction.
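The "look up and interpolate" behaviour described above can be sketched with a tiny 1D LUT (the table size and helper name are mine, just for illustration):

```python
def apply_lut_1d(value, table):
    """Map a 0-1 value through a 1D LUT, linearly interpolating
    between the two nearest key points."""
    pos = value * (len(table) - 1)
    i = int(pos)
    if i >= len(table) - 1:        # clamp at the top of the table
        return table[-1]
    frac = pos - i
    return table[i] * (1 - frac) + table[i + 1] * frac

# A coarse 5-point LUT sampling a gamma-correction curve:
lut = [x ** (1 / 2.2) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
approx = apply_lut_1d(0.375, lut)   # interpolated between two key points
exact = 0.375 ** (1 / 2.2)          # small error from the coarse table
```

A finer table (or a smarter interpolator) shrinks the error, which is the resolution/accuracy trade-off mentioned above.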
Screen9: Gamma corrections do not produce unexpected results when applied to HDR images. Values are re-mapped with log-base-2 increments, proportionally for the darker and brighter values, the same as LDR images. This is what the LUT we were talking about previously commonly does for preview purposes.
Software mistakes
The main mess has come about because the people who built the first graphical programs were not colour scientists and so just picked “what looked right” (hence PCs and Macs being different).
A lot of software working with 8-bit images treats them as linear, introducing tonal and color mistakes. The solution is to work in 32-bit float, or to use gamma 1.0 for 8-bit images while viewing them at 2.2 gamma (a lot of fuss).
I wouldn’t consider the gamma differences between PCs/Macs a lack of science, nor the way that 8-bpc images are handled. On the contrary. The gamma differences between Win/Mac systems are just two different (and clever) ways to solve the same issue. Win systems use sRGB gamma, which is a special gamma formula that not only approximates our non-linear perception of light, but also behaves better in the blacks than the simple 2.2 power function (which matters when storing 8-bpc images). Mac systems, on the other hand, use a 1.8 gamma curve so that it behaves better in the darkest areas, while using a kind of LUT (1.4) to compensate for our log-base-2 perception; so when saving the images, the 1.8 gamma is used, but when displaying the image, a compensation toward 2.2 gamma is applied. Also, the way that 8-bpc images are handled doesn’t look like a mistake, but rather like the sacrifice of choosing the lesser evil at the time. A filter algorithm has no proper way to know whether the gamma of an image is sRGB, 2.2, 1.8, Rec.709, log Cineon, etc. So the option of locking in a linearization value was ambiguous, and a higher bit-depth conversion was also necessary and very time consuming back then. The other option was to assume a linear input, let the user perform the linearization, and apply the filter in the fastest possible way. They opted for this last option. Nowadays, FP formats are more and more common and linear inputs will be even more common in the future. But I think they should add a checkbox to perform this linearization behind the scenes for gamma-corrected images. Faster processing and CM capabilities allow something like this these days.
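The "behaves better in the blacks" point can be illustrated with the first non-zero 8-bit code. A sketch, decoding it with a pure 2.2 power versus the standard sRGB linear toe:

```python
code = 1 / 255                 # first non-zero 8-bit code, ~0.0039

pure_power = code ** 2.2       # ~5e-6: an implausibly dark first step
srgb_toe = code / 12.92        # ~3e-4: sRGB's linear segment keeps it usable
```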
Gerardo, I’ve read the whole thread now, and there are some questions.
Gamut - range of colors (what is the difference from a color model then?)
Color model - relationship between values (like those 3D tables you showed)
Color space - absolute meaning of those values as colors (maximum values, as I understand).
A color model is basically a particular scheme to describe (mathematically) the colors reproducible by different image devices. Each scheme, or particular way to describe these colors, is designed with different objectives according to what is relevant to represent; thus, we have the XYZ color model (accurate colorimetry/ambiguous luminosity), the RGB color model (though there are some drawbacks in its implementation, it’s used for additive colors), the CMYK color model (used for subtractive colors/print), HSV and HSL (for hue, saturation, value and lightness transformations), etc. Gamut, on the other hand, is, as you say, the range of colors that can be represented in a color space. We have already discussed what a color space is.
sRGB is a color space model most monitors use (ok, some use aRGB, but how this concerns this discussion I missed).
sRGB, as we have seen, is not a color model; it’s a color space. Ideally, monitors do not use sRGB or aRGB color spaces; monitors use their own color spaces. What happens when we talk about sRGB monitors or aRGB monitors is that a particular monitor’s color space can have a gamut similar in size to sRGB or aRGB, but that doesn’t mean its color space is precisely sRGB or aRGB.
rec709 is a HDTV color space.
They are similar.
Rec.709 is the most common color space for HDTV, but not the only one. Please notice that sRGB and Rec.709 are not similar color spaces: their gamuts are similar, but their gammas are very different. xvColor is a color space for HDTV that is similar to Rec.709, but its color gamut is expandable (to double) and it can be used for HDR displays and LDR displays indistinctly.
Why do we care about those two color spaces? sRGB is a monitor standard, ok, but what about Rec.709, why is it important?
Since Rec.709 is used for HDTV, it’s indeed important for people who output their work for TV (VFX for TV series, for example) or digital cinema for TV (i.e. TV spots/music videos). Before film LUTs, Rec.709 gamma was common for people who worked in motion picture production, too.
In CG we try using compatible spaces for our output media (do you mean for output renders which we save with 2.2 gamma?).
Chromaticities (color characteristics) of Rec.709 and sRGB are similar, that’s why we use 2.2 gamma, which is similar too (to those models?).
That is to say, it’s not an accurate solution, but it works for practical purposes.
I didn’t say that, I said:
[i]Now, something that many people are not aware of is that the gamut range of sRGB is pretty similar to Rec.709’s (Rec.709 covers a bit more of the yellows while sRGB covers a bit more of the blues).
Why is this important? Because - except for Lightwave users - most people cannot manage colors within their 3D packages (unless you have an expensive color management system working at OS level). In those cases, we have two solutions: either we manage colors before the CG work, or we try to use color spaces compatible with our output media. Since the chromaticities of sRGB and HDTV are similar, this solution of using both is very similar to the 2.2 and sRGB thing we are talking about. That is to say, it’s not an accurate solution, but it works for practical purposes.
[/i]
I was saying there that sRGB gamma is not 2.2; it’s a gamma formula, but people who cannot apply an accurate linearization for sRGB can approximate it with a simple 2.2 value for practical purposes. In the same way - but this is another subject entirely - as happens with sRGB gamma, people who are not able to manage colors within the 3D package can mix sRGB chromaticities with Rec.709 chromaticities without worrying much about color inconsistency, because their color gamuts are similar.
People who work for TV are more concerned with colorimetric consistency. And that’s where the difference between gamma 2.2 and Rec.709 occurs?
No, the people most concerned about colorimetric consistency are those who work in motion picture production and print. Since the critical part of colorimetric consistency is the color gamut (not gamma), people who work for TV are the least worried about these topics.
“We need to pay attention that the shape and size of our working colorspace contains - as much as possible - the color space of our output medium. But this is only critical when we work for output mediums with wide color spaces (like in film, laser projections, RealD…).”
And pure CG guys (gals) may not need to worry about this stuff if they work only with CG and are not combining it with other media.
Ideally, people who don’t work for any other output medium won’t need to worry much about colorimetric consistency, since WYSIWYG (what you see is what you get). I say ideally because, you know, we have those old Macs and PCs
…knowing how to save that difference, you don’t need to worry much about ‘this stuff’.
MasterZap rant:
“And to the other question floating around, yes, gamma 2.2 is “for all practical purposes” identical to sRGB. YES, if you do a round-trip conversion repeatedly the wrong way, YES you get errors, and they grow. Totally known. This all dwarfs in relation to the massive error of using gamma=1. Heck, even gamma=2 (i.e. approximating it with squares and square roots) is sufficient to “look nice” compared to the atrocious mess that is oldschool nonlinear rendering”
Is it the difference between what? Between previewing with gamma 2.2 and baking sRGB gamma into an image by tonemapping? Or for output saving?
sRGB gamma and 2.2 gamma are quite similar and, most of the time, viable for practical purposes. However there are some differences, which we have discussed here.
Those differences arise if we linearize with a simple gamma value and then, in post, apply the correct gamma formula. To avoid this, if we have used a simple gamma linearization in the CG process, we have to use a simple gamma correction in the post process.
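The mismatch described above is easy to see numerically. A sketch mixing a plain 2.2 linearization with the sRGB formula on the way back:

```python
def srgb_encode(L):
    # Standard sRGB formula: linear toe, then offset 1/2.4 power law.
    return 12.92 * L if L <= 0.0031308 else 1.055 * L ** (1 / 2.4) - 0.055

v = 0.2
linearized = v ** 2.2                    # simple-gamma linearization
matched = linearized ** (1 / 2.2)        # simple-gamma correction: back to 0.2
mismatched = srgb_encode(linearized)     # formula correction: ~0.186, drifted
```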
What information do the color profiles of individual files like JPEGs contain?
We have already seen this previously; a color profile can contain several kinds of data, but the most common - and relevant - are the color model, color matrix, gamma, white points, profile connection space and default rendering intent. (I’m referring to an ICC profile context; ICM/XML profiles can be slightly or very different.)
Floating-point space has no gamma (right, it has gamma 1.0, which is the same).
Just in case, for CM workflows and LCS workflows, gamma value of 1.0 is indeed relevant.
Do you propose to save renders with sRGB or aRGB spaces instead of 2.2, depending on the output medium?
No, as a general rule, I’d recommend saving renders in some FP format in the same linear working color space used for generating the image.
You also said that it’s better to choose a working color space which covers any of your output spaces (the widest). But choosing for what - for saving 8-bit images, or for viewing information in the 3D package?
The working color space is the color space chosen for generating/processing the images.
The best thing is to save a floating-point image, from what I understand. Then those gamma-related things become unimportant (except for inputting textures (or is that where it’s crucial for precise colorimetry?)).
The gamma- (and gamut-) related stuff is indeed important for input images, preview and output purposes, since we will model the color & hue relationships and overall contrast according to an output color space.
“A device-independent color space is the simplest way to keep color consistency among several different applications and image devices in the long term.” What kind of device?
Image devices used in the production pipeline. Let’s say you save your images in your monitor’s color space, which has an sRGB-like range, and you want to show those images to your client, who has an aRGB monitor: what will happen to the color reproduction of your images? There won’t be color consistency. Or let’s say your monitor fails and you buy a new one from a different brand. The same thing can happen with your monitor, cameras, scanners, printers, projectors, or any other image device. To avoid this, an appropriate conversion to a device-independent color space is highly advisable.
Gerardo
In that case, I probably just misunderstood what you meant by “8-bpc images are processed as non-linear”. My interpretation was that Photoshop is aware that it’s non-linear and applies non-linear scaling and filter operations to get the identical result as if I had done the same operations on a linear image. The current method of how Photoshop operates is likely to be counterintuitive to anyone who does not know the math behind the scenes. At least, it was my experience that it is hard to explain to our customers why they should expect different results in Photoshop when applying the same operation on the same image, depending on what format they saved the image in. And even knowing the math, I can’t really come up with scenarios in which I would want to apply a linear filter operation on non-linear data.
I’ve realized we were talking about the same thing but with different explanations
As we have discussed above, there are indeed some filters that behave better in log space and others in linear space. It would be great if a filter algorithm could be aware of the built-in gamma of images and act accordingly; unfortunately it can’t, but Photoshop could add a checkbox to internally make a 16-bit conversion and an appropriate linearization/gamma correction according to the image/working color space.
Excuse my ignorance, but even after reading carefully most of this thread, I still don’t understand some very basic and fundamental ideas about gamma and would like to ask a few idiot questions.
First, I don't understand this graph taken from [Wikipedia's Gamma correction page](http://en.wikipedia.org/wiki/Gamma_correction). The explanation there is written for people much smarter than me and I would greatly appreciate it if this could be explained in a simpler way. [img]http://upload.wikimedia.org/wikipedia/en/5/5a/Gamma06_600.png[/img] More specifically, I would like to know what the values on the horizontal and vertical axes represent. Does a point on this graph represent a color by indicating a change of a color? If so, which colors and to what degree? I’m asking this question because I don’t get any difference when I apply any gamma to these RGB colors: 255 0 0, 0 255 0, and 0 0 255. My guess is that these colors are probably at the 0 or 1 points on this graph, so my question is: where are these colors positioned on the graph?
It appears that 0 is the darkest and 1 is the lightest; if so, it looks like applying a gamma of 2.2 should make images darker, which is consistent with the example on Wikipedia’s Gamma correction page. BUT, if I apply a gamma of 2.2 to an image in a program it becomes lighter, and 0.45 makes it darker. Then in Photoshop, if I choose 32 Bit Preview Options… from its View menu and apply a gamma preview of 2.2, images become darker. I’m confused by all this and will greatly appreciate some clarification here.
Another confusion along these lines comes from examples like the one above, where the straight line indicating a linear image can mean two entirely different things: one is the original linear image that is created by using a linear workflow in a 3D rendering program, or captured using digital photo sensors as a raw format; the other is the final corrected image that appears perceptually linear to our eyes but in fact has very different color data from the original input image. If this is so, then a point on the straight line of the gamma graph means different colors in different cases. I hope you can see my confusion here and give some help.
I really appreciate all the help from those who shared their understanding in this thread, and I really feel very low because I’m still not getting it. And what is worse, I get more confused even by what appear to be helpful examples like this one:
The wikipedia graph shows the values for gamma corrections as Per-Anders has already explained so well.
I don’t get this at all because I have no idea how our response to light can be measured, other than by painting what we see and comparing it with the same thing being linearly captured. If so, then won’t the curve actually be like this?
which will make our perception the same as the output from a monitor?
I will greatly appreciate any input regarding my questions and thanks for all the help.
Our visual response is indeed like this:
[img]http://imagic.ddgenvivo.tv/forums/LCSexplained/expansion.png[/img]
That’s the reason why, as we have discussed previously, 18% of a given luminance appears to us about 50% as bright.
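That 18% → ~50% relationship drops straight out of a ~1/2.2 perceptual curve. A one-line check (a sketch, treating perception as a simple power law):

```python
perceived = 0.18 ** (1 / 2.2)   # ~0.46: 18% luminance lands near mid-scale
```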
[i]I know this should help me, but again I’m getting more confused the more I try to understand it. LOL.
So if I take a linear image rendered using a linear workflow, or a raw uncorrected image from a camera, and apply a gamma correction in Photoshop by typing 2.2, will I be technically correct if I say: I applied a Gamma Correction of 0.45?
And will I also be correct if I say: I applied a Gamma of 2.2 to the image?
Good lord, it is a total mess in my head.
Sorry for being so eager, I’m really curious and confused about all this:)[/i]
As we have also discussed previously, we say linearization when we apply gamma values <1 and gamma correction when we apply gamma values >1. I think that may avoid confusion... or at least further confusion 
Gerardo
That definitely helped - before, I never knew the importance of gamma vs gamma correction - but I’m still confused a lot. It would be very nice if someone could confirm the validity of the following statements:
Applying Gamma of 2.2 means the image gets darker?
Applying Gamma Correction of 2.2 means image gets lighter?
Gamma of 2.2 and Gamma Correction of 0.454545 are two different ways to say the same thing? (edit: other than the obvious fact that ‘Correction’ implies reversing the effect of something that already has gamma)
This is important because I've noticed that in some programs the controls are named gamma but the effect is gamma correction, which is really confusing.
Thank you for the explanation - it makes clear what your graph reflects. I was imagining a graph reflecting where people would position middle gray on a ramp in relation to its linear position.
That sounds good:) so we can understand what we are talking about here, but as I said above, I would like to know what gamma and gamma correction mean when software controls use these terms. Unfortunately, they don’t use ‘linearize’ as the name of the controls.
edit: Also your statement doesn’t fit very well with the gamma correction control in Photoshop which allows you to apply values lower than 1.
Thank you for your input Quadart, I appreciate it.
Well, here's the thing: the purpose of my question was to confirm that it is really important to understand the difference between the terms [b]Gamma[/b] and [b]Gamma Correction[/b] and what they really mean when we use them. I started to read this thread again, paying attention to how people use these two terms, and it appears that a lot of us do not realize how important it is to use the correct term. And in my opinion that includes your message too:)
To illustrate the importance of this issue with an example: open a file in Photoshop and choose Image > Adjustments > Exposure. Pay attention to the name of the last slider - it is [b]Gamma Correction[/b]. Higher values make the image brighter and lower values darker. Now go to Image > Mode and make sure that it is 32 Bits/Channel; if not, make it so. Then choose View > Preview > 32 Bit Preview Options... Pay attention to the name of the last slider - now it's just [b]Gamma[/b]. Higher values make the image darker and lower values brighter, which is the exact opposite of [b]Gamma Correction[/b]. Next, go again to Image > Mode and this time choose 8 Bits/Channel. Photoshop will show the HDR Conversion box; again, pay attention to the name of the last slider, which, as in the preview options, is just [b]Gamma[/b] and does the same thing - higher values make the image darker and lower values brighter.

This makes me think that my questions are valid. It also makes me think that using [b]Gamma[/b] and [b]Gamma Correction[/b] this way in these programs is needlessly confusing. Applying [b]Gamma[/b] implies that the image is being changed without purposely compensating for previously applied gamma, like what monitors naturally do to an image. And applying [b]Gamma Correction[/b] implies that it is done for the purpose of compensating an image for a gamma curve or a display. Because gamma modifications applied by a user may have very different reasons, I think the Photoshop designers made this needlessly confusing. They should have made all gamma-related controls behave the same, either as [b]Gamma[/b] or as [b]Gamma Correction[/b]. Then there are the other programs with similar nonsense, also naming gamma-related controls in a completely inconsistent manner.
And regarding the explanation in the rest of your message, Quadart, I believe it shows more confusion than understanding about the purpose of using gamma/gamma correction, but I don't feel qualified enough to go over each point, so instead I would simply like to thank you for sharing your thoughts on this.:)
This thread has pretty impressively spiralled off into confuso-land! It feels like we’re trying to redefine, for our own needs, terms that have been used in imaging for a long time.
Gamma and gamma correction can be used fairly interchangeably but a more proper definition says that “gamma” simply refers to the response of a display, or the encoding in an image (i.e. a numerical value), while “gamma correction” is the process of applying a gamma to an image to ensure it displays correctly.
Neither term innately has anything to do with linearising image data for rendering. When you “apply a gamma” to an image you are simply raising the pixel values in that image to the power (1/gamma). That means you could either be raising or lowering the gamma - which one you want depends on the context.
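To make the two competing conventions concrete, here is a minimal Python sketch. The function names are invented for illustration; which of the two exponents a given application means by "gamma" is exactly the ambiguity this thread keeps running into.

```python
# Two conventions for "applying a gamma" to a normalized pixel value
# in the range 0.0-1.0. Both helper names are made up for this sketch.

def encode_gamma(value, gamma):
    """Brightening / "gamma correction" sense: value ** (1/gamma)."""
    return value ** (1.0 / gamma)

def display_gamma(value, gamma):
    """Darkening / "display response" sense: value ** gamma."""
    return value ** gamma

mid_grey = 0.5
print(encode_gamma(mid_grey, 2.2))   # ~0.73, brighter than 0.5
print(display_gamma(mid_grey, 2.2))  # ~0.22, darker than 0.5
```

The two operations are exact inverses of each other, which is why applying one and then the other with the same number gets you back to the original value.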
Abusing “gamma correction” to describe the process of linearising data for rendering seems stupid to me. In the past I’ve seen the phrase “applying an inverse gamma” used, which to me makes much more logical sense and doesn’t stomp all over “gamma correction” which as we’ve seen has quite well-defined meanings in computer graphics applications already.
Quadart - monitors darken the images because the output of the electron gun in a CRT is (like most things in nature) non-linear. It’s just a happy accident that the curve is almost exactly opposite to the response of the human visual system, which is why a linear ramp looks “right” without gamma correction.
@playmesumch00ns to your post above:
My ensuing confusion stemmed from terminology blurriness, and/or brain blurriness. I’m leaning toward the latter.
Your clarification resets my previous understanding of how gamma works.
As RossRoss alluded to: a Gamma of Y applies the exponent Y, while a Gamma Correction of Y applies the exponent 1/Y (*in the usual context).
The clarification takes the question of monitor image darkening with increased gamma off the table again. Yes, an input voltage value of 50% translates to an 18% light-intensity output when a power of 2.5 is applied to the voltage (or data) value, making for darker, contrastier display images on a CRT/LCD than lower Y values would give.
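The 50%-to-18% figure quoted above is easy to verify; a two-line sketch (assuming, as above, that CRT light output is roughly the input raised to the display gamma):

```python
# CRT response sketch: light output ~ (input voltage) ** gamma.
# With gamma = 2.5, a 50% input drives only about 18% light output.
voltage = 0.5
crt_gamma = 2.5
intensity = voltage ** crt_gamma
print(round(intensity, 3))  # 0.177, i.e. about 18%
```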
I never thought about gamma and 3D rendering before this thread. As a result of it I’m using a basic “LWF” when needed, which takes a lot of the hassle out of getting my renders to look much closer to what was intended.
Thanks for the thread, stew, and all of the other helpful contributors.
To add to the nuisance, PS CS4 just labels it as gamma now.
Must be that interchangeability thang. 
LOL, you actually think this “gamma thing” has ever been clear in this thread? I’m reading it for the second time and haven’t found any clarity on the subject. Have you, and where?
…It feels like we’re trying to redefine terms that have been used in imaging for a long term for our needs…
No, we are trying to find where and how these terms were defined. So far I haven’t found any source that everyone accepts as a clear and complete definition. This makes me wonder what people have in mind when using terms like Gamma and Gamma Correction, and where they learnt them. The first gamma definition I read was the Wikipedia page quoted as a definition source on the page that MasterZap recommended in his post in this thread. As you said previously, the gamma graph on the Wikipedia page is needlessly confusing. And yes, I am confused. What confuses me is that the graph clearly shows a gamma of 2.2 making colors darker, yet if I change gamma to 2.2 in programs like 3ds Max or Maya it makes colors brighter.
So can anyone answer this simple question: does a gamma higher than 1 make colors darker or brighter? And if it makes colors brighter, I would like to see a gamma graph that shows this. I haven’t seen one yet and no one so far has provided one.
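For what it's worth, in the pure power-function sense that the Wikipedia graph uses, the question is directly checkable with a few lines of Python:

```python
# "Gamma 2.2" in the graph's sense means: output = input ** 2.2.
# For any input strictly between 0 and 1 this always yields a smaller
# (darker) value. An app that *brightens* at "gamma 2.2" must therefore
# be applying the exponent 1/2.2 instead - i.e. gamma correction.
for x in (0.25, 0.5, 0.75):
    darkened = x ** 2.2
    assert darkened < x  # pure gamma > 1 always darkens
    print(x, "->", round(darkened, 3))
```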
The only attempt so far in this thread that makes some sense of this issue was made by Per-Anders in [his post](http://forums.cgsociety.org/showpost.php?p=6361432&postcount=280) on the previous page. If I understand it correctly, it means that using a [b]Gamma[/b] of 2.2 or a [b]Gamma Correction[/b] of 0.4545 will perform the same calculation on a given color. Unfortunately no one so far has clearly confirmed or denied this, or [the rest of my questions](http://forums.cgsociety.org/showpost.php?p=6363541&postcount=287):sad:
While Per-Anders's explanation is consistent with the naming and calculations of the gamma-related controls in Photoshop, it also means that, if Per-Anders is right, the names of the gamma controls in 3ds Max and Maya are confusingly wrong for not being named Gamma Correction. However, one thing that supports the "used interchangeably" statement is a node in Maya that is named gammaCorrect but whose attribute is named Gamma. So, go figure:shrug:
And it is another question entirely why this already very complicated matter needs to be made more complicated by giving the same calculation two different names, [b]Gamma[/b] and [b]Gamma Correction[/b], with reciprocal numbers.
…Gamma and gamma correction can be used fairly interchangeably but a more proper definition says that “gamma” simply refers to the response of a display, or the encoding in an image (i.e. a numerical value), while “gamma correction” is the process of applying a gamma to an image to ensure it displays correctly…
“A more proper definition says” sounds like you found some explanation that fits your understanding:) Why doesn’t everyone here share their definition, or definition source, so we can see which one is most widely accepted as complete and clear?
Initially I thought gamma was a certain calculation that changes the luminance values of given colors from linear to nonlinear or the opposite, and that’s it. I never expected that its usage would be complicated by getting the same result by applying it the opposite way and calling it something different, “gamma correction”. This simply doesn’t make sense to me, because it only indicates a purpose that doesn’t necessarily reflect the actual usage.
…Abusing “gamma correction” to describe the process of linearising data for rendering seems stupid to me. In the past I’ve seen the phrase “applying an inverse gamma” used, which to me makes much more logical sense and doesn’t stomp all over “gamma correction” which as we’ve seen has quite well-defined meanings in computer graphics applications already…
I completely agree with you on this. It should just be called gamma, and it should always do the same thing when the same number other than 1 is used - either darken or brighten colors. The way the gamma is applied should not indicate the intent, as this is an unnecessary complication with an unrealistic purpose.
…It’s just a happy accident that the curve is almost exactly opposite to the response of the human visual system, which is why a linear ramp looks “right” without gamma correction.
In my understanding this is not the case, and nothing linear looks right without gamma correction on a monitor with gamma.
I don’t think there is anything blurry with your brain, it is those who implemented gamma in the different programs in this exciting way - high impact radial blur with a lot of noise, dust, and smoke effects.LOL
LOL, yeah, everything about gamma is so boringly simple that shipping each copy of Photoshop with a different gamma naming convention is really exciting for the users.
This is what my PS CS4 shows me:
[img]http://img404.imageshack.us/img404/2991/photoshopgammacorrercti.jpg[/img]

I guess all things gamma are inverted in the OS X anti-gamma-gamma, parallel universe. :argh:
That definitely helped - before, I never knew the importance of gamma vs gamma correction - but I’m still quite confused. It would be very nice if someone could confirm the validity of the following statements:
Applying Gamma of 2.2 means the image gets darker?
Yes. The assumed gamma for an imaging device describes the response curve of that device. So, understanding gamma as a power operator, a 2.2 value will darken the image.
Applying Gamma Correction of 2.2 means image gets lighter?
Yes. Commonly, tools labeled as gamma in most apps treat gamma as gamma correction and behave inversely to power functions. This happens in apps, and also colloquially when we refer to gamma. What gamma curves are meant to do in this context is compensate for the output response curves of imaging devices, so that these devices behave in a perceptually linear way.
Gamma of 2.2 and Gamma Correction of 0.454545 are two different ways to say the same thing? (edit: other than the obvious fact that ‘Correction’ implies reversing the effect of something that already has gamma)
Yes. The same as happens with a gamma operator and a pow operator, since a gamma function is the inverse of a power function (well, for our purposes at least; mathematically this is not strictly accurate).
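Under the convention described above (a "gamma" of g applies the exponent g, a "gamma correction" of g applies the exponent 1/g), the equivalence in the question is a one-liner to check:

```python
# Gamma 2.2 and gamma correction of 1/2.2 ~= 0.454545 are the same
# operation: both raise the pixel value to the power 2.2.
x = 0.5
via_gamma = x ** 2.2                    # "gamma 2.2"
via_correction = x ** (1.0 / 0.45454545)  # "gamma correction 0.4545"
print(abs(via_gamma - via_correction) < 1e-6)  # True
```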
This is important because I’ve noticed that in some programs the names of the controls are gamma but the effect is gamma correction which is really confusing.
Yes, what happens is that gamma in physics and electronics refers to the response curve of an imaging device, while in photography and digital image generation gamma always refers to gamma correction/adjustment. This is, as you say, confusing. And that was the reason Adobe eliminated the term gamma from Photoshop’s Exposure adjustment as of CS3. The term used in the CS2 version caused several complaints from users, because the name suggests adjustment/correction, yet when you moved the slider to 2.2 your image got darker - the opposite of what people expected from a gamma correction tool. Later versions use the term gamma correction, and the control behaves as a gamma-correction operator. The term gamma only appears when converting a 32-bpc image to a lower bit depth, since FP files are assumed to be linear, so we get to choose the output response curve (power function) assumed for an 8- or 16-bpc image.
That sounds good, so at least we can understand what we are talking about here, but as I said above I would like to know what gamma and gamma correction mean when the software controls use these terms. Unfortunately they don’t use "linearize" as the name of the controls.
edit: Also, your statement doesn’t fit very well with the gamma correction control in Photoshop, which allows you to apply values lower than 1.
It fits well in an LCS workflow environment (like in this thread). They don’t use the term linearize (and they won’t) because that term is only proper in an LCS workflow, while the term gamma correction covers more general usage (artistic purposes, for example).
To add to the nuisance, PS CS4 just labels it as gamma now.
Must be that interchangeability thang.
Photoshop now uses gamma correction (well, since CS3) for the Exposure adjustment tool, and gamma for one of its HDR conversion methods. As we have seen - at least in PS - they are not interchangeable.
Gerardo
Gerardo, great feedback to my questions. I really appreciated it and you can’t imagine how happy I feel to have you here:thumbsup:. This helps a lot and clears a lot of things, it also covers all the questions in the other post I made.
Thanks a million:bowdown:
It is actually labeled gamma correction via Image > Adjustments > Exposure, and gamma in the Exposure adjustment layer. Typo? I guess it doesn’t matter. I hardly ever apply adjustments directly, using the adjustment-layer option instead. The sliders produce the same results in both cases: increasing the gamma above 1 brightens the image in both. In CS3 both Exposure options are labeled gamma correction. Just an FYI.
Thanks for pointing this out, Quadart; it is the same on PC. I forgot to check that control before saying ‘the controls in Photoshop are named properly’. After seeing this inconsistency, I’m sorry to say I have to take my words back.
And this is another case in support to this statement:
…It is actually labeled gamma correction via image>adjustments>Exposure and gamma in the Exposure adjustment layer… Increasing the gamma greater than 1 brightens the image in both cases…
In Photoshop CS4, the gamma operator in the HDR conversion method (labeled just gamma) works as expected (as a power function), but the gamma operator in the Adjustment Layer (also labeled just gamma) behaves like the gamma-correction operator from the Exposure adjustment tool. However, the Photoshop CS4 manual defines the gamma tool in Exposure as a "simple power function" (not as a gamma function, like gamma correctors) and expressly says that "Exposure is primarily for use in HDR images", which makes sense. Besides, it is named correctly in the Exposure adjustment tool, and in Photoshop CS3 the gamma operator in the Adjustment Layer is labeled gamma correction, as presumably it should be. So I wouldn’t say the terms gamma and gamma correction are interchangeable in Photoshop - by definition they are not - but I would say this control in the current Photoshop version is inconsistently named in that one case. Looking at the manual, at other tools in the same version that perform the same operation, and at the previous Photoshop version, which does not have this inconsistency, it clearly looks like an error, and one that indeed confuses people. Other apps like HDR Artizen or Photomatix have these tools labeled correctly.
Gerardo

which will make our perception the same as the output from a monitor?