Gamma correction - do you care?


#122

I’ve been experimenting all day with gamma correction on a human head with SSS. With standard IBL settings the effect is way too strong. I have to lower the color gain of the HDRI map and I’m losing a lot of the FG occlusion, so the image looks quite flat. Seems like I now have to compensate for that with an additional AO pass.
What did you guys have to change in your workflow since you started using gamma correction?


#123

I recommend using the mia_exposure_photographic shader (I use it nearly 100% of the time now) because then instead of having to change the intensity of your IBL you can just lower the exposure of the scene.


#124

We recently changed our workflow to gamma corrected. We had to read a lot of articles and tutorials about it, and we tested a few methods for achieving it.

Btw, we use 3dsmax 9 and Vray.
One of the methods was the ColorCorrect plugin. We didn’t like it because of the need to add it to the diffuse slot first and only then add the bitmap, its settings, plus additional settings for gamma.

Vray has a similar plugin, but it had similar issues, so we looked for other ways.

So we went with this setup: http://www.aversis.be/tutorials/vray/essential_gamma_01.htm

The first page is about gamma and its benefits, and the next two pages walk you through the setup. Again, this is for 3dsmax and Vray, although I think it will work with other render engines too.

Thanks to all the posters here. Great stuff! :thumbsup:


#125

Hi,
Don’t forget to adjust your materials to compensate for the new linear feedback you’re getting. Your surfaces will have to be much darker to achieve a correct look. Obviously a material that looked right when displayed non-linearly will look incorrect without adjustment. Also, remember that your color picker will not be displayed linearly, so don’t be put off by what you might think is an overly dark value.
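To make that concrete, here is a rough sketch in plain Python (my own illustration, not tied to any particular renderer or color picker) of the standard sRGB transfer functions. Note how a mid-grey that reads 0.5 on screen corresponds to a much darker linear value, which is why surfaces need to be set darker:

```python
def srgb_to_linear(c):
    """Approximate sRGB decode: display (perceptual) value -> linear intensity."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Approximate sRGB encode: linear intensity -> display (perceptual) value."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A swatch picked as 0.5 in a (non-linear) color picker is only ~0.21 linear:
print(round(srgb_to_linear(0.5), 3))   # ~0.214
```

So a material that "looked right" at 0.5 under a non-linear display really represents a linear reflectance of roughly 0.21, not 0.5.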


#126

Doesn’t matter whether you’re extracting from a photo or painting by eye: it’ll still be non-linear.

Of course, being completely correct isn’t necessarily a virtue. We don’t bother doing that on any of our data textures here, partly because we can’t be bothered, but mainly because keeping the midpoint at 0.5 is important in many cases. Instead we provide a contrast control to let artists crunch up spec and bump maps where necessary.
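A midpoint-preserving contrast control like the one described could be sketched as follows (my own Python illustration of the idea, not their actual tool): values are scaled around a 0.5 pivot, so the midpoint stays put while the rest of the map gets crunched.

```python
def crunch(value, contrast):
    """Scale a texture value around a 0.5 pivot.
    contrast > 1 increases contrast; the result is clamped to [0, 1]."""
    out = 0.5 + (value - 0.5) * contrast
    return max(0.0, min(1.0, out))

# The 0.5 midpoint is untouched, while values spread away from it:
print(crunch(0.5, 2.0))           # 0.5
print(round(crunch(0.7, 2.0), 3)) # 0.9
print(round(crunch(0.3, 2.0), 3)) # 0.1
```

This is why a contrast control can stand in for full linearization on spec and bump maps: it lets artists push the data around without moving the 0.5 reference point that those maps depend on.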


#127

I was surfing earlier and found this:

http://www.digitaltutors.com/store/product.php?productid=3470&cat=5&page=1

Digital Tutors just came out with a DVD set that covers a lot of what was discussed in this thread - including HDRI and Tone Mapping. Might be good to look at.


#128

:thumbsup::thumbsup:

Thanks for all the insight on the subject! Thoughts of colorspace and how to use them correctly have been rolling around in my head for a while now–I just didn’t know where I wanted/needed to start. This thread has cleared up those questions and provided a new look at how to produce the best imagery possible!

Thanks again to all who have contributed to this thread! It has been extremely refreshing!:beer:


#129

oops wrong post sry


#130

A lot of useful info here that could be thrown into the cgwiki. But mainly I wanted to say a big thanks for this thread. I haven’t fully read through it yet, but I had no clue about any of this before. Good to learn something new.


#131

I must say that this is a subject I’d love to understand better, and this thread has helped me A LOT toward this goal. But, I must say, there’s something I’d like to ask the Linear Workflow wise sages around this forum :wise: :

1 - I have two Samsung CRT monitors set up at home (SyncMaster 796MB and 793DF). I’ve already been through all the hassle of trying to calibrate them, but one problem persists - in the monitors’ built-in menu there is an option to turn sRGB ON or OFF.

Currently I have this option turned OFF, since when I turn it ON all the colors, wallpapers, and photos look foggy/greyed/washed out. With it OFF the colors are more “alive”. Am I wrong in doing this? Should I turn it ON and calibrate the colors/brightness/contrast/gamma through the NVidia Control Panel?

2 - Besides this, in Max and Mental Ray, I’ve done what I read here - turned gamma correction ON and set the value 2.2 for gamma and for the bitmap file Input and Output.

After I do this (and when using photometric lights), do I need to apply any other type of correction, namely exposure control, when using Mental Ray? What about Scanline?

I know it takes a while to swallow all this linear workflow information and it really changes the way we look at our digital images, but I BEG, on my knees with hands raised to the sky, you, O Lords of linearization, to look with pity on my shameful ignorance and shed some light in my brain! :bowdown:


#132

Wow, this is a very useful thread and it has kinda changed my view on gamma correction and all that. Before, I just used to render out a scene or image, and as long as it looked good to me that’s all that mattered - but now I see the light :P. I will be reading more into this gamma thing.


#133

Sorry for the double post, but I’m beginning to fully grasp this LWF method. I’ve been reading this thread every day to fully understand it, but I have a few questions:
-for textures: do bump, displacement, normal and specular maps need to be ungamma’d too?
-is there a procedure for this in Photoshop?
Check the image below and see if there’s more I need to do to ensure I’m using LWF (any help or info will be appreciated).


#134

Um, I’m less sure about the monitor gamma side of things, but on my Mitsubishi Diamond Pro 2070SB monitor I have it set to sRGB mode. I believe that locks the colour temperature to 6500K and stops you being able to manually adjust everything. In my Windows Colour Management I’ve set it to use the standard sRGB monitor profile. In Photoshop I’m also using the standard sRGB working profile. They seem to all match up that way. I guess you’re right that you can then use the NVidia colour management to make any system-wide adjustments. Just make sure you don’t also have that Adobe Color thingy starting up at system startup, adding further complications.

It’s been too long since I used Max for me to give you specific details, but yes, you still need to use the mr exposure control to add gamma 2.2 back onto your final image (unless you’re rendering to floating point). Scanline… I have no idea!

:thumbsup:

Now moving on to younglion:

I wouldn’t ungamma normal maps! I assume they’ve been programmed to be correctly interpreted the way they are. You don’t ‘see’ their colours; they represent surface normal direction. To ungamma an 8-bit texture in Photoshop, the way I do it is to convert it to 16-bit (so that you don’t lose any data), then use Image > Adjustments > Exposure and set the Gamma control to 0.454.

I’m not sure if that’s the best way. At one point I was using Edit > Convert to Profile and assigning a custom 1.0 gamma profile, but that’s a bit confusing because the image ends up looking the same in Photoshop, which compensates for you.
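The 'ungamma' step itself is simple enough to sketch outside Photoshop. Here is my own Python illustration (not Photoshop's actual implementation): promote the 8-bit value to float first so shadow precision isn't quantized away, then apply the 2.2 decode (the counterpart of that 0.454 gamma control):

```python
def ungamma_8bit(value_8bit, gamma=2.2):
    """Convert an 8-bit (0-255) gamma-encoded texture value to linear,
    working in float so the shadows aren't quantized away."""
    normalized = value_8bit / 255.0   # promote to float first
    return normalized ** gamma        # decode: remove the 2.2 display gamma

# Mid-grey 128/255 ends up at roughly 0.22 linear:
print(round(ungamma_8bit(128), 2))  # ~0.22
```

Converting to 16-bit in Photoshop before applying the adjustment serves the same purpose as the float promotion here: the darkening operation would otherwise crush many distinct 8-bit shadow values into the same output level.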

:scream:

Instead of using the framebuffer 0.454 method to boost the gamma of your renders, I highly recommend using the mental ray mia_exposure_simple or mia_exposure_photographic lens shaders. With the framebuffer setting, mental ray does some automatic compensation behind the scenes on certain shader attributes but not on others, so you can still end up with wrong results. Plus, the lens shaders are nicer for dealing with overbrights etc.


#135

Thanks Jozvex, I will continue to experiment with this workflow.


#136

Just be careful that this fits into your workflow OK. I don’t know about Maya, but in XSI, custom render channels (framebuffers) are not adjusted by lens shaders, so you might be surprised to find framebuffers rendering out much darker than expected.
I generally use the exposure lens shaders for preview purposes only, remove them for final rendering, then do all gamma correction in post.


#137

These Mentalray render settings in Maya suggest that you will be rendering to a file that supports 32-bit float. If you haven’t done so, this also requires you to select a file format that supports 32-bit float, like OpenEXR, in the Common tab of the Render Settings. The Mentalray integration in Maya is not very intelligent and will let you select framebuffer settings that are not supported by the currently selected output file format.
Also, to get the intended output you have to batch render, because the Render View doesn’t support 32 bit and will give you a low dynamic range 8-bit image, which simply kills most of the tonal range coming out of the renderer without any tone mapping.
Now, if you’ve done all of the above - meaning you are batch rendering to a 32-bit float file - the gamma setting of 0.454 will have no effect whatsoever on the output and will affect (ungamma) only input that is 8- and 16-bit texture files. The framebuffer gamma setting affects only 8- and 16-bit texture input and 8- and 16-bit output; it doesn’t do anything to 32-bit input and output. Unlike mia_exposure_simple or photographic, framebuffer gamma is not a tone mapping tool that nicely brings the high dynamic range coming from the renderer down to the low dynamic range of the monitor’s limited display with a lot of user controls.
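The behaviour described above can be summarized in a few lines of Python. This is my own rough model of what was just explained, not Mentalray's actual code, and the exact exponent convention is simplified: 8/16-bit data gets the gamma treatment, while 32-bit float passes through untouched.

```python
def framebuffer_pass(pixel, bit_depth, fb_gamma=0.454):
    """Rough model of the framebuffer-gamma behaviour described above:
    8/16-bit data is gamma-corrected, 32-bit float stays linear."""
    if bit_depth in (8, 16):
        return pixel ** (1.0 / fb_gamma)  # ungamma LDR data (~x^2.2)
    return pixel                          # 32-bit float: passes through linear

# 32-bit float values are untouched; 8-bit values are linearized (darkened):
print(framebuffer_pass(0.5, 32))  # 0.5 (unchanged)
```

In other words, with a float EXR pipeline the 0.454 setting is purely an input convenience for LDR textures, and tone mapping must still happen somewhere downstream.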

When rendering to a 32-bit float file format that will be tone mapped outside Maya (that is, without using mia_exposure_simple or photographic), the framebuffer gamma can be used as an alternative to gamma nodes for ungamma-ing 8- and 16-bit textures. However, its calculations on the input are not exactly the same as a gamma node’s calculations on textures, even though both actions have the same goal. Because of that, the results are also slightly different, and it is hard to say which one is better. Correcting textures with framebuffer gamma is less hassle, but it also doesn’t gamma correct the color swatches, which still need to be taken care of.

Regarding your Photoshop settings, they are fine, but what is selected in the View > Proof Setup menu is more important. By default it is Working CMYK, which on 32-bit images will apply a gamma of 2.2 on top of your rendered image, making it look twice as bright. To avoid that, change it to Monitor RGB.


#138

Thanks for the info, guys. I set the framebuffer back to its default and used the mia exposure lens shader, and afterwards in Photoshop I applied a curves adjustment to it (an S curve).
render with mia_simple exposure

Curves adjustment

So my question now is: is using LWF meant to give you greater control of your work in post production?
Is that the main purpose of it?
Or am I totally off track here?


#139

Yes, and you can think of mia_exposure_simple and photographic as an alternative to - or even as an actual - post production process that all happens within Maya. Although technically it happens at render time and has some minor advantages in sampling the color of the final pixels, functionally it plays the role of a post production process.

The fact that you did a better job in Photoshop shows that you are more experienced with it than with the mia_exposure tools. You could have simply omitted the mia_exposure node from your rendering setup by deleting it and done all the tone mapping in Photoshop.

You want to use external post production programs if you want to composite your rendering with other images or if, as in your case, you are more comfortable using the tools of another tone mapping program.

In these cases you can still use mia_exposure_simple or photographic for previewing purposes and disable them for the final rendering to get a pure linear image without any tone mapping applied. There is a preview option in the mia_exposure nodes which allows you to quickly tune the tone mapping by loading a rendered file, instead of making an actual rendering for each adjustment. This function is a pure post production process that you can use even for images that were not produced in Maya. For example, you can use mia_exposure_photographic to color correct HDR images or the raw format photos taken with your digital camera. You are also free to connect other nodes to the mia_exposure node, like Contrast and RGB to HSV, which do a very similar job to Photoshop’s Curves.

When rendering from Maya, your goal with the preview feature is to approximate the result of the post production program, so that you can specify the tone mapping at one stage of your scene and then continue with the other aspects, test rendering them against that final post production approximation as you go. Finally, when you are done, just disable the mia_exposure node and render a linear image without any corrections.

by the way, nice 3D car :thumbsup:


#140

Thanks emil3d for taking the time to explain. Looks like it will take me some time to fully understand and get used to this workflow. Thanks once again.


#141

Thanks Jozvex for taking the time to help me. So far, the main benefit I’ve seen is using a smaller number of lights to achieve a decent and even distribution of illumination. And photometric lights don’t seem as strange as they used to :smiley: