
View Full Version : Rendering Out each Light as its own layer?


Redsand1080
11-13-2008, 04:31 AM
I have seen a bunch of people render out each light's contribution separately and composite the lights' contributions to the surface in post. In post I think I understand how to composite the different light contributions: I think the light must first be multiplied by the ambient color and then that result must be added to the ambient color? But how do I actually render out each light's contribution separately? i.e. ONLY the key light, ONLY the fill light, ONLY the rim light?


I'm using Maya btw.

Can anyone help??

Thanks!:thumbsup:

frizDog
11-13-2008, 06:21 PM
You should be able to just turn off all the lights except the one you want. (I think you can just hide the lights you don't want).
And in your compositing package, just add all the passes back together. No need to do the multiplication thing you mentioned...

theotheo
11-13-2008, 08:07 PM
Another way to do it if you only have three lights (or three distinct lighting setups) is to color each light with a different color, i.e. key=red, fill=blue, rim=green, and use each color channel as a matte for adjusting brightness, or just add the channels together.

-theo
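theo's channel-packing trick can be sketched per pixel in Python (a minimal illustration; the gain values and channel assignments are just the ones suggested above):

```python
# Each light is rendered into one channel of a single RGB image:
# R = key, G = rim, B = fill (monochrome light contributions).
def recombine(pixel, key_gain=1.0, fill_gain=1.0, rim_gain=1.0):
    """Recombine a packed 3-light pixel into a single light value,
    applying a per-light brightness adjustment."""
    key, rim, fill = pixel  # unpack the R, G, B channels
    return key * key_gain + fill * fill_gain + rim * rim_gain

# A pixel lit 0.8 by the key, 0.3 by the rim, 0.1 by the fill,
# with the fill light doubled in the comp:
print(round(recombine((0.8, 0.3, 0.1), fill_gain=2.0), 6))  # 1.3
```

In a real comp the same per-channel gain would be applied across the whole image rather than pixel by pixel.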

jeremybirn
11-13-2008, 08:28 PM
If you have Maya 2009, then each light can be a member of a different pass contribution map, and all your light passes will render at once without much extra rendering time required per light.

If you have Maya 2008 or earlier, you'd create several layers, each layer would have all the geometry, but only 1 light (or 1 group of lights.)

Either way, you can just "add" together the layers in your compositing program. That's all, just "add" each layer.

theo's approach would work also, of course, but for that you'd probably want the objects to be pure white in that render layer, and to render the colors and textures of the objects as a separate ambient pass you'd multiply your light channels with.

-jeremy
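The two recombination options Jeremy describes (straight "add" of fully shaded light passes, or white-shader light passes multiplied by a separate texture/color pass) can be sketched per pixel like this (pixel values invented for illustration):

```python
def comp_pixel(light_passes, texture=None):
    """Additively recombine per-light passes for one pixel.
    If the lights were rendered on pure-white objects, multiply the
    summed lighting by a separately rendered texture/color pass."""
    total = sum(light_passes)
    return total * texture if texture is not None else total

# Three fully shaded light passes: just add them.
print(round(comp_pixel([0.5, 0.2, 0.1]), 6))               # 0.8
# White-shader light passes times a texture value of 0.6:
print(round(comp_pixel([0.5, 0.2, 0.1], texture=0.6), 6))  # 0.48
```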

Redsand1080
11-13-2008, 08:50 PM
Wow, thanks so much for the awesome replies. That helps out a lot. I have both Maya 2008 and Maya 2009, so I will try both methods.


once again...thanks much! :thumbsup:

Redsand1080
11-14-2008, 08:02 PM
jeremybirn: If you have Maya 2009, then each light can be a member of a different pass contribution map, and all your light passes will render at once without much extra rendering time required per light.



Thanks much for the advice. It worked out perfectly... but the only thing is I took a big hit in render times when I used the render passes. Which makes no sense to me... because it's exporting framebuffers which are already present when the initial beauty render is complete, so it should take the same amount of time. And all the documentation I've seen says it should take the same amount of time... but that just isn't the case for me. Here are the render times for the same scene with and without passes:

1) NO PASSES: RC 0.1 info : wallclock 0:01:39.30 for rendering

2) PASSES: RC 0.2 info : wallclock 0:03:11.51 for rendering

Any idea why that might be? Is it possible to set up the passes in an inefficient manner?


Thanks much! :thumbsup:

1armedScissor
11-19-2008, 01:30 AM
Another way to do it if you only have three lights (or three distinct lighting setups) is to color each light with a different color, i.e. key=red, fill=blue, rim=green, and use each color channel as a matte for adjusting brightness, or just add the channels together.

-theo

Be wary that using this method will likely result in artifacts due to multiplication errors because of the antialiasing and pixel filtering that is present in the rendered images.

cheers.

1armedScissor
11-19-2008, 01:34 AM
Thanks much for the advice. It worked out perfectly... but the only thing is I took a big hit in render times when I used the render passes. Which makes no sense to me... because it's exporting framebuffers which are already present when the initial beauty render is complete, so it should take the same amount of time. And all the documentation I've seen says it should take the same amount of time... but that just isn't the case for me. Here are the render times for the same scene with and without passes:

1) NO PASSES: RC 0.1 info : wallclock 0:01:39.30 for rendering

2) PASSES: RC 0.2 info : wallclock 0:03:11.51 for rendering

Any idea why that might be? Is it possible to set up the passes in an inefficient manner?


Thanks much! :thumbsup:

Are you rendering your images with motion blur or depth of field? If so, these need to be applied to each framebuffer, which becomes far more expensive.

With that being said, I'm not certain that Maya's contribution maps & passes actually leverage mentalray framebuffers the way mentalray intends for them to be used (I'm guessing really, I haven't looked at them in depth myself).

Can you post a sample scene? There are many variables that could be the source of your problems.

Cheers

1armedScissor
11-19-2008, 01:36 AM
(ps this thread is probably more suited to the maya rendering sub forum here)

jeremybirn
11-19-2008, 01:44 AM
Be wary that using this method will likely result in artifacts due to multiplication errors because of the antialiasing and pixel filtering that is present in the rendered images.

Hmmm... It shouldn't. The data you get out of the red (or blue or green) channel of an image should be 100% as reliable as the data you get out of an alpha channel, and making 3-packs as render passes is something that's done in a lot of productions.

Just to make sure I'm following you, could you be more specific about the problem you're seeing, or post an image?

-jeremy

1armedScissor
11-19-2008, 04:04 AM
Hmmm... It shouldn't. The data you get out of the red (or blue or green) channel of an image should be 100% as reliable as the data you get out of an alpha channel, and making 3-packs as render passes is something that's done in a lot of productions.

Just to make sure I'm following you, could you be more specific about the problem you're seeing, or post an image?

-jeremy

It's a common misconception that this works, and you're correct, Jeremy, that many studios use this technique, but it's still, unfortunately, incorrect. It just works some of the time. It's not just about the red, green, or blue channels, but rather how that data is being used.

It's about using monochrome lighting data as a multiplier with its color texture component in the composite. The math involved is incorrect due to the antialiasing and pixel filtering that is present in the rendered images. When rendering, the lighting is calculated prior to any antialiasing or pixel filtering; that's why the artifacts don't appear in the beauty render.

This isn't the case when breaking the rendering equation up into its component parts and trying to assemble them in the composite.

Here's a good example of what I'm referring to. (This particular example shows a single monochrome RGB image being used for the lighting pass, but it could easily contain one light per channel and be split into three separate lights in the composite. You'll have the same problems either way.)

http://mymentalray.com/forum/showthread.php?t=1491

jeremybirn
11-19-2008, 06:07 AM
1armedScissor -

Thanks for the terrific link you posted above! I think the list of times you need separate shadow passes is a little longer than what Master Zap provides, but I certainly agree that you should try to do post-reconstruction of the scene additively. If you need separate shadow passes at all, then ideally they should be applied to the individual lighting passes one at a time, before those passes are added together.

Since we're in a "General Techniques" forum, and that example may be software-specific, I should mention that Mental Ray does adaptive over-sampling based on contrast, so a very bright edge in one render layer might get sampled more than a lower contrast edge in another render layer. MasterZap's example seems to show a case where a sub-pixel-sized discontinuity between how two layers are sampled could result in a matte line in a comp.

The main idea with matte lines between 3D elements is to back up and try to fix them at their source, rather than correcting for them in the comp the way a compositor would try to fix a badly pulled key. I think if he were using fixed 8x8 pixel samples, a common setting in other popular production renderers, he wouldn't have had the problem. Even in MR, fixed non-adaptive oversampling is an option, albeit an expensive one. And perhaps multiple MR user framebuffers output from the same render layer could have matching samples, avoiding the problem he saw when the elements were separate render layers?

-jeremy

1armedScissor
11-19-2008, 02:33 PM
The example provided isn't software-specific. Any 3D renderer will produce the same problems unless you're not antialiasing your images.

You're right about fixing the problem at the source instead of trying to fix it downstream, but the problem in this case IS with the source material. Regardless of what renderer you use, regardless of what sampling you use, whether it's fixed or adaptive is irrelevant, because this is due to the antialiasing and pixel filtering. The problem persists even if your sampling matches 100%.

One way around this is to render out your source images supersampled, with zero antialiasing or pixel filtering, perform all of your compositing on the supersampled images, and then resize at the end to the desired resolution, using the filter of your choice, to remove all of the aliasing. The problem with this method is that for production quality you're looking at compositing with images that are at least 16 times larger than what you'd normally require, which is obviously too expensive for almost everyone. Not to mention texture filtering: it causes the same problems, and you'll have considerable problems if you try to turn it off and simply rely on compositing with a supersampled image, hoping that downsizing will provide reliable texture filtering. There are many more issues with compositing on supersampled images, but that's another discussion altogether.

In the particular case presented in the link above, the only way to get rid of the artifacts for that scene is to render each element (both spheres and the plane) separately and then composite the rendering equation for each one individually, prior to simply performing over operations based on their depth from camera. This is a very simple case, however, and it wouldn't work for more complicated geometry that overlaps upon itself and/or has textures with areas of high contrast.

Set up a scene similar to the one in the link I posted above and render out the elements for yourself. Take a look at any problem pixel in the final image and follow the math that's being used in your composite to produce that final pixel. You'll see that the math doesn't work once you get to the multiply operation.

After the image has been antialiased you're screwed. You will have areas of high contrast that have been blended together. In the above link, where the spheres overlap, one is very bright and the rear one is dark. For argument's sake we'll say the light area has a value of 1 and the dark a value of 0, but because of the antialiasing there are now values in between 1 and 0. It's along these interpolated values that the math doesn't work correctly, thus producing the artifacts you see. When the beauty pass was being computed, these values didn't exist.

Technically the math works just the way it's supposed to; it's not wrong. The expectation of what the result SHOULD be is the problem, and in the end the problem is with the source material to begin with.
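The mismatch at an antialiased edge can be reproduced numerically with two sub-pixel samples (the light and texture values here are invented for illustration):

```python
# One edge pixel, half covered by a lit surface (light=1.0,
# texture=0.9) and half by an unlit one (light=0.0, texture=0.2).
samples = [(1.0, 0.9), (0.0, 0.2)]  # (light, texture) per sub-sample

# Beauty render: shade first (light * texture), then filter.
beauty = sum(l * t for l, t in samples) / len(samples)

# Comp reconstruction: filter each pass first, then multiply.
avg_light   = sum(l for l, _ in samples) / len(samples)
avg_texture = sum(t for _, t in samples) / len(samples)
recomposed  = avg_light * avg_texture

print(round(beauty, 6))      # 0.45
print(round(recomposed, 6))  # 0.275  <- the edge artifact
```

The average of products is not the product of averages, so the recomposed pixel differs from the beauty render wherever the filter mixed samples of differing brightness.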

Cheers.

1armedScissor
11-19-2008, 03:03 PM
1armedScissor -
If you need separate shadow passes at all, then ideally they should be applied to individual lighting passes one at a time, before those passes are added together.
-jeremy

This still requires a multiplication operation using antialiased images and will still result in the same problems. It only works (appears to work) when the areas that have been antialiased are of low contrast.

The lowest level of granularity for any lighting pass that can provide consistently accurate results is to have the multiply operation baked into the pass. For example, one light could be broken into the following images and reliably composited (additively, without artifacts):
- diffuse (diffuse lighting * color texture)
- specular (specular lighting * specular color texture)
- (I'm purposely leaving out reflections, as things work differently for environment reflections than for raytraced reflections; that's another topic)

Simply adding these two renders in the composite will produce a result that is consistent without artifacts.
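A quick numeric sketch of why the baked passes add cleanly (sample values invented): pixel filtering is just a weighted sum, so filtering each pre-multiplied pass and adding them is identical to filtering the full shaded result.

```python
# Two sub-pixel samples per pass; the multiply (light * texture)
# is baked into each sample before filtering.
diffuse_samples = [0.9 * 0.8, 0.1 * 0.3]  # diffuse light * color tex
spec_samples    = [0.4 * 1.0, 0.0 * 1.0]  # spec light * spec tex

def filtered(samples):
    """A one-pixel box filter: plain averaging of the samples."""
    return sum(samples) / len(samples)

# Filtering is linear, so adding the filtered passes equals
# filtering the summed shading -- no edge artifacts.
comp   = filtered(diffuse_samples) + filtered(spec_samples)
beauty = filtered([d + s for d, s in zip(diffuse_samples, spec_samples)])
print(abs(comp - beauty) < 1e-9)  # True
```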

Keep the monochrome shadow/lighting pass, however, as it can still be used as a luminance mask for color correcting, but be wary that if you're going to perform multiplicative operations in the composite with antialiased images, you're potentially introducing artifacts.

cheers

Seraph135
11-19-2008, 07:52 PM
You can in many cases still create the proper passes in your compositing package that will work correctly with multiplication.

Take, for example, the scenario on the mental ray forum that was posted. If you render a lighting pass (diffuse textures and lighting together) out of your 3D package, and also render a diffuse pass, you can then divide the lighting pass by the diffuse pass to create what I would call a "raw lighting" pass (white shaders with just lighting information). This version of the raw lighting pass will have problems around its edges. However, you can color correct it and then multiply it back over the diffuse pass and get perfect results that match what comes out of the render.

This only reliably works if you're rendering in floating point.
Tim J
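The divide / color-correct / multiply round trip Tim describes can be sketched per pixel in float (values invented; dividing the shaded lighting pass by the diffuse texture is what recovers the raw lighting):

```python
# Per-pixel values: the diffuse texture pass, and the shaded
# "lighting" pass (texture * light) as rendered.
diffuse  = 0.8
lighting = 0.8 * 0.6  # texture * light = 0.48

# Divide to recover a "raw lighting" value. Keep the result
# unclamped (floating point only) and guard against zero texture.
raw = lighting / diffuse if diffuse > 0.0 else 0.0

# Color-correct the raw lighting, then multiply back over diffuse.
corrected = raw * 1.5  # e.g. brighten the light by 50%
result = corrected * diffuse

print(round(raw, 6))     # 0.6
print(round(result, 6))  # 0.72
```

With no color correction applied, the round trip reproduces the original lighting pass exactly, edge pixels included, which is why the edge problems in the intermediate raw pass cancel out.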

1armedScissor
11-19-2008, 08:57 PM
You can in many cases still create the proper passes in your compositing package that will work correctly with multiplication.

Take, for example, the scenario on the mental ray forum that was posted. If you render a lighting pass (diffuse textures and lighting together) out of your 3D package, and also render a diffuse pass, you can then divide the lighting pass by the diffuse pass to create what I would call a "raw lighting" pass (white shaders with just lighting information). This version of the raw lighting pass will have problems around its edges. However, you can color correct it and then multiply it back over the diffuse pass and get perfect results that match what comes out of the render.

This only reliably works if you're rendering in floating point.

Tim J
Please bear with me as perhaps I'm not understanding what you're saying.

For the purpose of this discussion we'll ignore any specular, reflective, or SSS contributions and focus strictly on the diffuse part of the rendering equation. Also, "lighting pass" will refer to a render comprised of (light * diffuse color texture) only, and "diffuse pass" will refer to (diffuse color texture) only.

If I follow your description, then I don't see how this would be useful outside of bloating the composite. With the method you're proposing, color correcting the "lighting pass" requires three separate operations (divide, color correct, multiply) and two source images (the "lighting pass" and the "diffuse pass"). Yet the results are the same as if you were to use only the "lighting pass" and a single color corrector.

Why would I want to use more source images and operators in my composite when there is no added benefit in doing so? Am I missing the benefit perhaps? Is there something I'm forgetting or not considering?

Again, please bear with me as maybe I'm not following what you're proposing exactly or I'm missing some other way it can be applied.

cheers


ps - good discussion everyone

Seraph135
11-19-2008, 10:00 PM
If I follow your description, then I don't see how this would be useful outside of bloating the composite. With the method you're proposing, color correcting the "lighting pass" requires three separate operations (divide, color correct, multiply) and two source images (the "lighting pass" and the "diffuse pass"). Yet the results are the same as if you were to use only the "lighting pass" and a single color corrector.

Why would I want to use more source images and operators in my composite when there is no added benefit in doing so? Am I missing the benefit perhaps? Is there something I'm forgetting or not considering?

Well, you're correct, it does bloat the file some. If all you want to do is change the color of the light, then it's better to just work with a single lighting pass. I was just pointing out that it was possible to create a raw lighting pass that would work correctly if you were so inclined.

Tim J

1armedScissor
11-19-2008, 11:18 PM
Well, you're correct, it does bloat the file some. If all you want to do is change the color of the light, then it's better to just work with a single lighting pass. I was just pointing out that it was possible to create a raw lighting pass that would work correctly if you were so inclined.

Tim J

Ahh I see, it's an alternative to rendering the raw lighting data. Sorry I thought you were proposing a solution to the problems I've outlined in this thread, and that I was missing something in regards to the application.

Thanks for the clarification Tim. Cheers.

rendermaniac
11-20-2008, 09:44 AM
The other things that can cause multiplication problems are motion blur (which is still filtering - just over time), and overlapping semi-transparent surfaces. This REALLY shows up badly on fur and feathers and cannot be fixed in comp.

If you do want to go down this route, the only way to do it properly is to keep 3d information around - this is exactly what Lightspeed http://people.csail.mit.edu/jrk/lightspeed/ does and Lpics http://www.vidimce.org/publications/lpics/ (at least what has been released publicly) doesn't do.

Simon


CGTalk Moderation
11-20-2008, 09:44 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.