
View Full Version : Linear Workflow: couple basic questions


LML
01-28-2012, 06:06 PM
hi all

I'm experimenting with a linear workflow in Maya and V-Ray, and I've had luck so far with a simple test scene, but I've come up with a couple of general questions that aren't really application specific:

1) I am a bit confused about whether I should be shooting for an overexposed, washed-out look as I go to post (i.e. After Effects). Is the goal to have something that needs to be "brought back" under control in post? Or should I still be shooting for well-balanced lighting, close to what I want my final image to look like (in my case, while viewing in the V-Ray frame buffer with the sRGB view enabled)? I think the confusion comes from a few blog/tutorial posts that say something like, "the washed-out look is normal; you'll just have to get used to it". But I'm thinking to myself: why not adjust f-stop, shutter speed, or light intensity in Maya to get a good-looking result to start with?

2) The second question is related: I read that overbright, burned-out areas can be recovered in post if you save out to a 16-bit or 32-bit format such as EXR. How are these areas brought back? Simply with curves and levels adjustments, or is it something more technical than that?

thanks very much!

LML

CHRiTTeR
01-28-2012, 09:04 PM
The recommended approach is to make it look as good as possible in the renderer and then adjust further in post.

1.
The washed-out look is because of low contrast. Adjusting f-stop, shutter speed, and ISO will not help you with this; they only help in cases of over- or underexposure.

If things look a bit washed out, you add an S-curve in post, which adds contrast.

2.
The Curves and Levels tools in Photoshop aren't that good with high-dynamic-range images. If I'm not mistaken, you can't even use the Curves tool on 32-bit images.

Later versions of Photoshop have the HDR Toning tool, which gives you quite a bit of control, but I personally don't find it easy to get the expected results with it.

If you want to compress highlights, use V-Ray's Reinhard tone mapper.
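For reference, the classic Reinhard operator simply maps x to x/(1+x); V-Ray's version adds a burn parameter on top of this, so treat the sketch below as the general idea rather than V-Ray's exact formula:

```python
def reinhard(x):
    # Classic Reinhard tone mapping: compresses values above 1
    # smoothly toward 1 while leaving dark values nearly linear.
    return x / (1.0 + x)

# A burned-out highlight of 8.0 is pulled back into displayable range:
print(reinhard(8.0))   # ~0.889
print(reinhard(0.1))   # ~0.091, dark tones barely change
```

Because the curve never reaches 1.0, no highlight ever hard-clips; detail above 1 is squeezed rather than thrown away.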

molgamus
01-30-2012, 07:36 PM
A linear workflow is about making sure that 2+2=4 at all times. The reason images sometimes look weird on monitors is that the screen is adjusted with an sRGB look-up table (LUT).

If you are painting textures in Photoshop, a pixel value of .5 should become .25 if you duplicate your layer and choose Multiply as the blending mode. However, a value of .5 appears a lot darker than it should on a regular monitor if Photoshop doesn't compensate for that with an sRGB LUT. When you render things in 3D you can select which LUT gets applied to textures. If the textures are linear, you don't have to do anything: .2+.2 is still .4 when you render it.

I think it is a good idea to render out linear files with a bit depth higher than 8-bit, because then we can store image data in a range above 1, while 1+1 is still 2 and the math stays correct.

However, if you were to render an EXR file that already compensates for the monitor's LUT, then the math is broken. sRGB is roughly a 2.2 gamma, which means that .5+.5 ≠ 1 once encoded: the values have been changed to match our perception of the pixel value.
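Here is a quick sketch of why the math breaks, using a pure 2.2 power curve as a stand-in for sRGB (the real sRGB transfer function also has a small linear segment near black):

```python
def encode(v, gamma=2.2):
    # Approximate sRGB encoding as a pure power curve.
    return v ** (1.0 / gamma)

# In linear space, adding two half-intensity lights gives full intensity:
print(0.5 + 0.5)                   # 1.0
# But the encoded value of 0.5 is ~0.73, and summing encoded values
# does not land on the encoded value of 1.0:
print(encode(0.5) + encode(0.5))   # ~1.46, while encode(1.0) is 1.0
```

Any operation a renderer or compositor performs that involves adding light (GI, motion blur, layering passes) gives wrong answers if the stored values are already gamma-encoded.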

I go by these rules: make sure textures are in a linear space, make sure the renderer outputs linear images, and view those images through an sRGB LUT to compensate for the monitor. In the Maya Render View you can set this up by having the viewer "correct" the images with an sRGB LUT, while what you write out to disk is still the mathematically correct linear file.

A linear file appears quite dark on your monitor until you apply a LUT, so it should not look bright, unless you've applied an sRGB LUT to the rendered image AND are viewing it through an sRGB LUT in the viewer, i.e. a double correction.

Nuke and Shake are great tools for manipulating images with larger bit depths. I've heard great things about Eyeon Fusion, but haven't used it myself. Photoshop lacks proper 32-bit compatibility.

Panupat
02-03-2012, 05:28 AM
I always view my render with sRGB enabled in V-Ray, since that is ultimately what the final comp should look like. Once I'm happy with that, I send the raw render without sRGB to my comp guys; they know what to do from there. I try not to overexpose my renders at all: it's easy to brighten things in post if you want, and a ton harder to fix what's overexposed.

molgamus
02-03-2012, 12:17 PM
I'm not sure where this method of overexposing comes from. It sounds crazy to me! If you were shooting live action on film you would have a range of 12 stops; this image data would have to be stored in a 12-bit file, or a 10-bit file if stored as log. If you shoot on a digital SLR you have 8 bits, and using Cinestyle helps you get the most relevant image data into that 8-bit file. If you under- or overexpose, you have not recorded all the image data that was available on set, and there is no way of getting it back in post.

If you are rendering, then it is a whole different story. We are storing data; a pixel value could be 10 000 if you like, as long as we are using a floating-point file format such as TIFF or OpenEXR. A beauty render is expected to have its pixel values in the 0-1 range, but the data stored in the highlights above 1 can still be brought "back". Applying an sRGB LUT or any other kind of weirdness to the raw file should be avoided in my view. LUTs should be used with a viewer, not baked into the file.
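A small illustration of the difference, assuming NumPy and made-up pixel values: an 8-bit file clips everything above 1, while a float file keeps the highlight, so an exposure pull in post can recover it:

```python
import numpy as np

hdr = np.array([0.2, 1.0, 7.5], dtype=np.float32)  # 7.5 is a bright highlight

# 8-bit storage clips at 1.0, so the highlight detail is destroyed:
eight_bit = (np.clip(hdr, 0.0, 1.0) * 255).astype(np.uint8)
print(eight_bit)        # [ 51 255 255] -- 7.5 and 1.0 are now identical

# In float, pulling exposure down two stops brings the highlight back:
print(hdr / 4.0)        # [0.05 0.25 1.875] -- detail above 1 survived
```

This is what "recovering burned-out areas in post" amounts to: the data was never lost in the first place, it was just above the displayable range.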

CHRiTTeR
02-03-2012, 05:38 PM
I'm not sure where this method of overexposing comes from. It sounds crazy to me.

It's not that crazy. If you use V-Ray, for example, its adaptive DMC sampler lets dark parts be rendered with fewer samples to speed things up; since we can't see much in dark tones anyway, we won't notice the noise as quickly. This can get you into trouble in post in some (fairly rare) cases where you need to brighten the shadows quite a bit.

One of the solutions to this is to render overexposed (so darker parts get brighter and thus receive more samples), save to a floating-point format, and lower the exposure back down in post.
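Sketched numerically with a hypothetical shadow value: pushing exposure up two stops multiplies every pixel by 4, and because a floating-point file stores the result exactly, dividing by 4 in post gets the original value back:

```python
stops = 2
gain = 2.0 ** stops          # +2 stops = 4x brighter

shadow = 0.02                # a dark value the sampler would undersample
rendered = shadow * gain     # renderer now sees 0.08 -> more samples spent
recovered = rendered / gain  # post: exposure pulled back down

print(recovered)             # 0.02 exactly; a power-of-two scale only
                             # shifts the float exponent, losing nothing
```

The trick only pays off because the round trip through float is lossless; doing the same through an 8-bit intermediate would quantize the shadows you were trying to protect.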

But in general you don't need to do this, and it's only helpful if you use such an adaptive renderer.

molgamus
02-05-2012, 12:22 PM
Then it would make sense. Do people often find that they need to brighten their shadows? Most renderers let you view a background image so that you can match your plate. Would it not be cheaper to render a separate shadow pass that is exposed a couple of stops higher? How does adaptive sampling affect SSS, speculars and reflections when you overexpose?

I understand that most of us prefer to get everything correct straight out of the renderer, but sometimes there is only time to do it in post.

Jag7799
02-13-2012, 06:42 PM
Hi,

It seems to me that you should try to understand exactly what an LWF is.

I came across this PDF recently which is great at explaining exactly what is happening.

http://www.pixsim.co.uk/downloads/The_Beginners_Explanation_of_Gamma_Correction_and_Linear_Workflow.pdf

I hope this helps.

bullfrog
02-20-2012, 10:50 PM
Nuke and Shake are great tools for manipulating images with larger bit depths. I've heard great things about Eyeon Fusion, but haven't used it myself. Photoshop lacks proper 32-bit compatibility.

I'm also trying to get the linear workflow going, and I think I understand everything about it. But I'm stuck at the post-production stage: I open my 32-bit floating-point image in Photoshop and can't really make any adjustments. If I convert my file to 8 bits, I lose my linear workflow.

How come Photoshop doesn't handle it properly? Are there any plug-ins to install?

Jag7799
02-21-2012, 03:31 PM
Hey,

First of all, what format are you saving to when it's 32-bit? If it's EXR and you've saved without gamma correction (but in a LWF), then when Photoshop loads the image it actually adds gamma compensation and makes the image overbright. So then you need to pull that back out again.

If you've saved out the image with gamma correction, then it will look correct. It's worth noting that in almost all cases you should save it out as a linear image without gamma correction.

To make adjustments, convert the image in Photoshop to half float (16-bit).

molgamus
02-21-2012, 03:56 PM
Converting a 32-bit floating point image to 16-bit half floating point would not break the linear workflow. A value of two would still be two and so on. However you lose a lot of precision!

A value in a 32-bit file is made of 1 sign bit, 8 exponent bits and 23 significand precision bits.

A value in a 16-bit file is made of 1 sign bit, 5 exponent bits and 10 significand precision bits.

As you can see, this is a huge difference. Once you convert a 32-bit file to 16-bit, you can never restore that data.
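With NumPy you can see both effects of that smaller bit layout directly (the values here are just illustrative): half float drops fractional precision on large values and has a far smaller maximum:

```python
import numpy as np

value = np.float32(1000.123)
as_half = np.float16(value)      # round-trip through half precision
print(as_half)                   # 1000.0 -- the fraction is gone; near
                                 # 1000, half float steps in 0.5 increments

print(np.finfo(np.float16).max)  # 65504.0 -- half float tops out here
print(np.finfo(np.float32).max)  # ~3.4e38 for full float
```

For render output this is rarely a problem, since pixel values cluster near the 0-1 range where half precision is dense, but it is exactly the data loss being warned about here.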

Saving an image (TIFF or JPEG) as 8-bit usually applies a gamma correction, which is very dangerous if you are working on a texture or anything else that will be processed by a rendering engine!

I'm not sure why Adobe doesn't fully support 32-bit image manipulation in Photoshop. I don't think there are plugins that can work around the fact that most of Photoshop's filters and tools don't support images with higher bit depths. For texturing you have other tools at your disposal, like Mudbox, Mari or BodyPaint.

CHRiTTeR
02-21-2012, 04:29 PM
Converting a 32-bit floating point image to 16-bit half floating point would not break the linear workflow. A value of two would still be two and so on. However you lose a lot of precision!

16-bit (half float) is still enough precision for 99% of all renders out there, though.

I'm not sure why Adobe doesn't fully support 32-bit image manipulation in photoshop.

My guess is that they're having trouble with Photoshop's core, which is based on medieval technology.

molgamus
02-21-2012, 04:31 PM
Yes, 32-bit precision is almost always overkill, but you should be aware of the data loss.

CGTalk Moderation
02-21-2012, 04:31 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.