View Full Version : Linear workflow for gamma help

06 June 2010, 10:56 AM
Okay, started this as a separate thread so that maybe it'll be useful to other people as well instead of buried in my general thread about my scene.

I notice there are TWO primary ways to do this:

-one method, as espoused by djx, that involves adding a mia_exposure_simple under your camera's lens shaders (or changing its values, if you already have one), and also individually adding mip_gamma_gain nodes between the file node and the shader for each material.

-and another method, by Ashraf Aiad, also using the aforementioned mip_gamma_gain nodes, but at the end he doesn't add/change mia_exposure_simple settings; instead he adds a mip_gamma_gain itself as a lens shader on the cam. He doesn't say whether he's doing this INSTEAD of the per-material mip_gamma_gain nodes, or in ADDITION to them.

Another thing: when you add a mip_gamma_gain, a problem presents itself where textures are corrupted in the viewport. This is mentioned in Ashraf's tut and he points to a fix, but that page seems to refer to the display of the swatch in the Hypershade, not of the texture in the viewport (and it doesn't do anything in my case anyway). Then the next time you see Ashraf's viewport in the tut it seems to be displaying correctly, leaving me in the dust scratching my head!

If anyone could do a step-by-step to fill in the blanks, I (and I'm sure many others new to linear workflow) would be eternally grateful.

06 June 2010, 04:56 PM
Here's what I do, but it's going to be slightly different for everyone.

1. I put regular gamma nodes between the colors/textures and the shader inputs they plug into. You leave diffuse weight, reflectivity, bump, and other non-color values alone. I punch 0.454545 into the gamma value. You might notice in the viewport this changes your shader to display as gray. If you still need your colors separated out like most of us like, create a ramp, delete two of the three colors in the ramp, plug the gamma node's output into the remaining ramp color, then adjust the default color value to your liking. Then just plug the ramp's output into your shader.

2. I use a light with an HDR amount of light, usually an MR dome with an HDR image mapped, but sometimes I use the physical sun for some scenes. Everyone's HDR image is going to have a different intensity in how far beyond 1 it goes. For physical sun you might end up adjusting (or not) the intensity of the sun.

3. Create a mia_exposure_simple node with a gamma of 2.2. The other values will vary from person to person depending on preference. The gain value will depend on how strong your HDR image or physical sun is. I often use 0.15 for the gain with my HDR scenes because an HDR image I like using goes very high in intensity. Knee and compression go together and are a preference thing, since it depends on whether you like compressing the overbrights in your compositing package or just want to strictly bring the exposure down - which is simpler, and in which case make the compression 0 and the knee 1 (though I guess the knee value doesn't matter if compression is 0). The mia_exposure_simple node is only used for Render View previewing within Maya... though maybe 2011 has tools in place that make this unnecessary?

4. When you render, make sure you're using EXR as the output format and that you're using 16-bit float (or 32-bit if that's needed).

5. Remove the mia_exposure_simple when rendering your final frames.

6. Note that for the subsurface scattering MR shaders, you'll have to uncheck the "screen" checkbox that's under (I think) Algorithm Control, or you'll get gray where it should be super-bright.

7. When you're in your compositing package, apply a 2.2 gamma correction to your footage (might be 0.454545 depending on the program or filter) to darken the gamma, and lower the exposure to bring the HDR-lit images back under control so they appear normal. Ultimately, the gamma and exposure are up to you. You don't exactly have to use 2.2 gamma; you just need to find the combination that looks good and correct - because maybe you didn't get your colors quite right within Maya anyway, or you just find you like the look of a different gamma/exposure combination better.

Rendering as HDR makes grain flicker go away, because those pixels have a huge range to average themselves into. It also makes final gather and GI more accurate and less grainy/noisy. Having all that extra exposure light within your images also lets you save shots that would normally have been too bright or too dark when rendered with clamped black or white values. With HDRs, you can still use that footage instead of having to do a re-render.
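The decode/re-encode arithmetic behind steps 1 and 3 can be sketched in plain Python (this is just the math, not Maya code; as far as I know Maya's gammaCorrect node raises each channel to 1/gamma, so punching in 0.454545 applies an exponent of roughly 2.2):

```python
# A rough sketch of the math behind the steps above (plain Python, not Maya).
# A gammaCorrect node set to 0.454545 applies an exponent of ~2.2,
# converting an sRGB-ish texture value to (approximately) linear.

def decode_texture(c, gamma=0.454545):
    """What a gamma node set to 0.454545 does to a color channel."""
    return c ** (1.0 / gamma)          # 0.5 -> ~0.218 (linear)

def preview_gamma(c, gamma=2.2):
    """What mia_exposure_simple's gamma 2.2 does for Render View preview."""
    return c ** (1.0 / gamma)          # linear -> display

# Round trip: a mid-gray texture pixel, linearized for rendering,
# then gamma'd back up for preview, lands right back near its original value.
tex = 0.5
linear = decode_texture(tex)           # ~0.2176
shown = preview_gamma(linear)          # ~0.5
print(round(linear, 3), round(shown, 3))
```

This is why the render looks washed out without the lens shader: the data going to the Render View is linear, and the 2.2 preview gamma is what makes it look right on an sRGB monitor.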

06 June 2010, 11:51 AM
I have another question: what does this workflow have over this one? In my other thread, someone told me to use djx's method but didn't say why.

djx puts mip_gamma_gain nodes on each texture/material, while 3dlight's tutorial instead changes settings in the MR framebuffer. Do both of these methods work equally well? Why would anyone take the time to use djx's method when 3dlight's is so much simpler? (Both use a mia_exposure_simple lens shader for render preview purposes and turn it off for final output.)

06 June 2010, 12:10 PM
Because using the framebuffer gamma correction will also affect bump/normal/displacement textures, as well as alpha/mask textures, and you don't want that :)

PS: Using gamma nodes can be as quick as setting the framebuffer gamma when you're using a script :) Haven't tried this one, but it can apparently do it.

06 June 2010, 12:35 PM
Ohhh I see, thank you for that dot :)

What are your thoughts on Ashraf Aiad's method, where all he does is add a mip_gamma_gain as a lens shader and set its gamma to 0.455? (He does it right near the end of the vid.)

Or will this also affect all bumps and other maps too?

06 June 2010, 01:49 PM
As he said, he's just showing how you can use the mip_gamma_gain node, not how to set up a linear workflow. Using it as a lens shader will apply it to the whole image, which is not what you want for a proper linear workflow.

To sum up what I usually do in Maya (you can compare with sentry workflow, but it's actually very similar):

- Put gamma correct nodes on what needs to be corrected: diffuse color, reflection color, refraction color, etc.
This includes textures (even if they are greyscale) and color swatches.
This doesn't concern values like diffuse weight, reflection, glossiness, etc. But if you ever use a texture for these values, then you might have to gamma correct it (I still need more info on these).
This doesn't concern displacement/bump/normal map textures.

- use a mia_exposure_simple with a 2.2 gamma, to have the right preview in the Render View.

- when you render your final image before compositing, change the mia_exposure_simple gamma back to 1.

- Of course, use a 16 or 32 bits float image format like exr.

This technique works for Maya 2010 and previous versions. I haven't had the time to test the new Render View of 2011, which apparently can show float images, and maybe wouldn't need the mia_exposure_simple node to get the right preview.
But be careful if you want to use Maya 2011 color management, which is supposed to make the linear workflow really simple to set up, but doesn't work as it should. You can have a look at this thread.

I can only advise people to use the deex shaders, which are mia_materials that make life much easier for rendering, with an "almost-one-click" linear workflow and render passes setup.

And by the way, this is only my understanding of the linear workflow, if I made mistakes, please correct me :)
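The sorting rules in the list above can be sketched as a little lookup (my own categorization of a few hypothetical input names, just to make the decision explicit; it's not any Maya API):

```python
# A sketch of "which shader inputs get a gamma node" per the list above.
# The input names are illustrative, not real attribute names.

COLOR_INPUTS = {"diffuse_color", "reflection_color", "refraction_color"}
SCALAR_INPUTS = {"diffuse_weight", "reflectivity", "glossiness"}
DATA_MAPS = {"bump", "normal", "displacement"}

def needs_gamma_node(input_name, is_texture=False):
    """True if the input should be linearized with a gamma correct node."""
    if input_name in COLOR_INPUTS:
        return True          # colors: always, textures and swatches alike
    if input_name in DATA_MAPS:
        return False         # geometric data: never
    if input_name in SCALAR_INPUTS:
        # Plain values: no. A painted texture driving them is the debatable
        # case (the poster above says he still needs more info on it).
        return is_texture
    raise ValueError("unknown input: " + input_name)

print(needs_gamma_node("diffuse_color"))   # True
print(needs_gamma_node("bump"))            # False
print(needs_gamma_node("glossiness"))      # False
```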

06 June 2010, 02:57 PM
I'm writing a set of gamma tools as we speak that are going to make this process much easier and automate a bunch of the annoyances of working with these nodes. I've used all the other tools out there but in many cases I still end up having to do things by hand, or the process of working with these nodes ends up still being too cumbersome for my taste. I'm leaving for vacation today actually, so I'll be gone this coming week, but I only have a couple features left to write. Hopefully week after this one they'll be done. :)

As for actually setting up a proper linear workflow, you can always remove the exposure node from your camera at render time and render to 32-bit float. Then do all your tone-mapping in post, with no look 'baked in' from the exposure node. The only caveat to this method is that you will most likely end up with aliasing on edges with super-bright pixels (pixels with values that exceed 1). You'll need to deal with this aliasing in your compositing package using glows or blooms. This is similar to what happens in an actual digital camera when it records super-bright values. And, depending on your camera, it may actually record those edges as aliased as well.

Another method is to leave the exposure photographic node connected to your camera but set the gamma to 1, crush blacks to 0, and burn highlights to 1. You'll end up with linear, un-tonemapped data this way as well. But you'll have to deal with those edges in post again.

Another way is to leave exposure photographic connected, but set the gamma to 1. You'll get slightly faster render times because you don't waste time working with super-bright samples, but your image also won't contain the full dynamic range, because it has been tone-mapped in the render by crush blacks and burn highlights. The plus side is you don't have to worry about your edges in post. The downside is you can't make drastic changes to the image in post, because you don't have the full dynamic range.

Each one of these methods assumes you are working with linear data from your textures and you don't touch that framebuffer gamma control. It's an old control, you should never use it.
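The "superbright edge" caveat from the first option can be illustrated with toy numbers (plain Python; "tone-mapping" here is just a clamp, which stands in for whatever curve you'd really use):

```python
# Toy numbers for the super-bright edge aliasing caveat above.
# An antialiased edge pixel is the average of a very bright sample and a
# dark one. If tone mapping (here: just clamping to 1.0) happens AFTER the
# samples are averaged -- i.e. deferred to post -- the edge pixel stays at
# white, so the edge looks hard and aliased. If each sample were tone-mapped
# BEFORE averaging (what a baked-in exposure does), the edge comes out as a
# smooth mid gray.

def clamp(x):
    return min(x, 1.0)

bright, dark = 10.0, 0.0                 # HDR sample values across an edge

tonemap_after = clamp((bright + dark) / 2)          # -> 1.0, edge stays white
tonemap_before = (clamp(bright) + clamp(dark)) / 2  # -> 0.5, smooth gray edge
print(tonemap_after, tonemap_before)
```

That difference is exactly why you end up reaching for glows or blooms in the comp when you render fully linear and un-tonemapped.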

06 June 2010, 09:28 PM
I have a question. Right when you are about to batch render your image and you remove your mia_exposure_simple lens shader, do you keep the gamma nodes connected to your texture files? Or do you unplug them as well?

06 June 2010, 09:48 PM
No, you leave them on. The gamma nodes are bringing your file textures back down to linear space.

06 June 2010, 07:37 AM
Should I also apply gamma nodes to my HDR image I'm using for IBL? (not using maya's built in IBL, but just a sphere with a surface shader mapped to it which has the HDR image). So yeah, do I need a gamma node between the file and surface shader for this material?

06 June 2010, 09:22 AM
I would say no, HDR images are already in linear.

07 July 2010, 08:09 PM
I would say no, HDR images are already in linear.

Quite right. HDRIs are linear by nature and need absolutely no gamma correction nodes...ever. This whole 'gamma correction node' phenomenon only applies to 8 and 16 bit images that are saved in a non-linear space, such as sRGB.
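Toy numbers make it obvious why "linearizing" an already-linear HDRI is harmful (plain Python, exponent 2.2 standing in for a gamma node set to 0.454545):

```python
# Why you don't put a gamma node on an HDR image: its values are already
# linear, so decoding it again (exponent 2.2) skews the lighting ratios,
# and values above 1.0 blow up dramatically.

def decode(c, exponent=2.2):
    return c ** exponent

for v in (0.5, 1.0, 4.0):
    print(v, "->", round(decode(v), 3))
# 0.5 gets crushed to ~0.218 while 4.0 explodes to ~21.1 -- the
# relative intensities the HDRI captured are no longer what was shot.
```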

07 July 2010, 03:17 AM
Also applies to 2D and 3D procedurals.

07 July 2010, 03:16 PM
Quite right. HDRIs are linear by nature and need absolutely no gamma correction nodes...ever. ..... Unless, someone opens 8 bit image in Photoshop, changes the mode to 32 bit and saves it as hdr or exr and sends it to you without telling you this.

07 July 2010, 06:22 PM
Unless, someone opens 8 bit image in Photoshop, changes the mode to 32 bit and saves it as hdr or exr and sends it to you without telling you this.

Wow...that's quite the scenario! Can't say I've had that happen myself, but it sounds like you definitely have. There should be a rule against doing that! And if the person breaks that rule they should have to fork over a month's questions asked. It's the least they can do. :)

CGTalk Moderation
07 July 2010, 06:22 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.