render passes, order + operation


pbarnes
08-29-2009, 06:10 PM
So I am learning compositing, and I will be using mental ray for the rendering.

Now, there are something like 47 passes that Maya's mental ray will output by default (I have turned them all on; I want to learn them all).
Could someone help me out here?

I want to know what order the composite goes in, i.e. diffuse, then spec, and so on.

I also want to know what layer/merge operation is needed for each,
i.e. over, plus, multiply.

I have attached a .txt with a list of the passes.

Could someone edit it into order, with operations, please?

I know it's a BIG ask, but it would help me out a lot.

Many thanks,

pbarnes.

EDIT: BTW, I'm comping in Nuke.
Also, I'm not rendering this; it is for a database of passes.

scrimski
08-31-2009, 04:02 PM
Start with the diffuse as the base.

ao_ao - multiply
refl_reflection - screen
shd_shadow - multiply
spec_specular - screen
matte_matte - used for masks (luma or alpha)

camz_depthremapped - used as input to a blur node or as a mask for a blur node

normal_normalcam - normal passes are usually used for relighting. Multiply with a color, desaturate, and screen the result wherever you need to.

Don't know most of the passes, so it's rather incomplete.
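If you were wiring that stack up in Nuke (the OP mentions Nuke), a minimal Python sketch might look like the following. The node names are placeholders for wherever the passes actually come from, and note that Aneks argues further down that in a linear float comp you would plus rather than screen:

import nuke

def merge(op, bg, fg):
    # Merge2: input 0 is B (background), input 1 is A (foreground)
    return nuke.nodes.Merge2(operation=op, inputs=[bg, fg])

# placeholder node names for the individual passes
diffuse = nuke.toNode('diffuse')
comp = merge('multiply', diffuse, nuke.toNode('ao'))    # ao_ao
comp = merge('screen',   comp, nuke.toNode('refl'))     # refl_reflection
comp = merge('multiply', comp, nuke.toNode('shd'))      # shd_shadow
comp = merge('screen',   comp, nuke.toNode('spec'))     # spec_specular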

pbarnes
08-31-2009, 04:04 PM
Cheers.
Any information is very useful.

Aneks
09-01-2009, 01:49 PM
In a linear colour pipeline, or in float, you should never really screen anything. Instead you want to emulate the renderer's behaviour and add all lighting and primary passes like reflection, specular, etc.

Ambient occlusion should only be used to multiply the diffuse component to calculate the diffuse contribution. Add the occluded ambient to this and then add all the other primary passes in linear float. Don't multiply reflection or spec by ambient occlusion; instead use reflection occlusion.
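To put that in plain arithmetic, a minimal per-pixel sketch of the rebuild (all pass names here are just placeholders) would be:

def rebuild_beauty(diffuse, ambient, ao, spec, refl, refl_occ):
    diffuse_contrib = diffuse * ao        # AO darkens the diffuse component only
    ambient_contrib = ambient * ao        # the occluded ambient
    refl_contrib = refl * refl_occ        # reflection occlusion, not AO
    return diffuse_contrib + ambient_contrib + spec + refl_contrib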

I am very guilty of dispensing bad advice, as I used to work in 8-bit sRGB many, many years ago and had no real grasp of the process. I would assume that anyone with a solid RenderMan or mental ray pipeline would already be across this, but amazingly people are still referring to old, hacky stuff that was done years ago.

scrimski
09-01-2009, 05:09 PM
In a linear colour pipeline, or in float, you should never really screen anything.

Interesting to hear, because I had another post here in this forum where I was told never to add or subtract passes but to use multiply or screen instead. Who's right?

Aneks
09-01-2009, 05:15 PM
It depends on what you are trying to do and in which colour space you are working.

In linear float you should definitely add instead of screen. Screen is basically a flawed operation: it inverts both images, does a weighted add, and then inverts the result. This is meant to emulate the process of double exposure and is probably designed to work well with log images. People have been using it with sRGB for ages; I know I have been guilty of that in the past. But in linear float it is bad.
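To see the difference in numbers, here is a tiny sketch using the usual screen formula, 1 - (1 - A)(1 - B), against a straight add (the sample values are arbitrary float pixels):

def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

def plus(a, b):
    return a + b

for a, b in [(0.2, 0.3), (0.8, 0.9), (1.5, 0.5)]:
    print(a, b, 'plus =', plus(a, b), 'screen =', screen(a, b))

# For dark values the two are close (0.5 vs 0.44), but screen compresses
# bright values (1.7 vs 0.98) and misbehaves above 1.0 (2.0 vs 1.25),
# which is why adding is preferred in a linear float comp.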

Subtracting in linear is another tricky one: you run the risk of negative values in your comp. In 8- and 16-bit integer formats it is not possible to go below 0, but in float you can, and that would be bad.

Multiplying by occlusion is fine. Mostly, multipass is about emulating the mathematical behaviour of a render, in this case mental ray. I am more experienced with RenderMan-based renderers, but fundamentally it should be similar. In the most basic sense, the renderer will take the ambient and diffuse values of the surface (these will be multiplied by occlusion if it is enabled), then add in the contributions from specular and reflection (which are in turn multiplied by reflection occlusion), and then add additional lights. Shadows are calculated differently depending on the renderer. Renderers are almost exclusively linear and floating point in their calculations, and some then apply lookup correction as a post process. Mental ray for Maya does this first, to negate the sRGB gamma on non-linear textures, and then at the end as part of its display frame buffer.

If you are compositing 8-bit sRGB then it doesn't really matter, as you cannot really emulate the internal mechanics of your renderer. So in that workflow it is pretty much a massive hack and you might as well go for it.

pbarnes
09-01-2009, 05:18 PM
From another source I was told that the general rule was 'plus' for light passes (diff, spec, reflec, etc.) and multiply for shadow, AO, etc.

Also, I take it motion vector and normal go last, after all the color and shadow, AO, etc.?

Aneks
09-01-2009, 05:26 PM
Normals, zDepth, ID, motion vectors, UVs, point position, etc. are all referred to as secondary passes. None of these secondary passes need to be manipulated like that.

Normals allow you to do things like create additional lighting, re-texture, and some other sophisticated hacks. These need to be linear and float (32 bits per channel) if they are to have any real value. The same goes for depth and motion vectors. I have seen a lot of people colour correct depth passes or normalise them (put them into a range of 0-1); you should not have to do this. Things like depth or world position need to have values which accurately reflect the 3D scene. If your compositing system is not able to deal with true float depth, for example, I suggest you go looking for one that does!
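As a rough illustration of the relighting idea (nothing package-specific), dotting the normal pass against a chosen light direction gives you a mask you can grade and add back over the rebuilt beauty. A numpy sketch with made-up names:

import numpy as np

def relight_mask(normals, light_dir):
    # normals: float array of shape (height, width, 3), straight from the pass
    # light_dir: an arbitrary direction chosen in comp
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    # Lambert-style term, clamped so back-facing pixels contribute nothing
    return np.clip((normals * l).sum(axis=-1), 0.0, None)

# e.g. beauty += light_colour * relight_mask(normal_pass, (0.3, 0.8, 0.5))[..., None]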

Also, I take it motion vector and normal go last, after all the color and shadow, AO, etc.?

If you are going to apply zDepth or motion blur in compositing, then these operations should come after you have rebuilt your primary passes.

pbarnes
09-01-2009, 05:31 PM
Normals, zDepth, ID, motion vectors, UVs, point position, etc. are all referred to as secondary passes. None of these secondary passes need to be manipulated like that.

Normals allow you to do things like create additional lighting, re-texture, and some other sophisticated hacks. These need to be linear and float (32 bits per channel) if they are to have any real value. The same goes for depth and motion vectors. I have seen a lot of people colour correct depth passes or normalise them (put them into a range of 0-1); you should not have to do this. If your compositing system is not able to deal with true float depth, I suggest you go looking for one that does!



If you are going to apply zDepth or motion blur in compositing, then these operations should come after you have rebuilt your primary passes.


Cool.

I have little comp experience, and it is basic: just sticking to diff, spec, reflec, shadow, zDepth and motion vectors at the moment.


I am trying to automate the process of loading openEXR files in Nuke.
I have got the script working, and now I am creating a database for MR so it knows what order to set up your shuffles and merge nodes in, along with the correct operation for each.
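A rough sketch of what I mean (the layer names and the operation table are just placeholders for whatever the database ends up holding):

import nuke

PASS_OPS = [                 # (layer name in the EXR, merge operation)
    ('diffuse',    None),    # base of the comp
    ('ao',         'multiply'),
    ('shadow',     'multiply'),
    ('specular',   'plus'),
    ('reflection', 'plus'),
]

def build_comp(read_node):
    comp = None
    for layer, op in PASS_OPS:
        shuffle = nuke.nodes.Shuffle(inputs=[read_node])
        shuffle['in'].setValue(layer)     # pull this layer into rgba
        if comp is None:
            comp = shuffle                # first pass becomes the base
        else:
            comp = nuke.nodes.Merge2(operation=op, inputs=[comp, shuffle])
    return comp

# build_comp(nuke.toNode('Read1'))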

Aneks
09-01-2009, 06:26 PM
And really, you need to be careful multiplying anything, too.

http://mymentalray.com/forum/showthread.php?t=1491

Aneks
09-01-2009, 06:29 PM
Interesting to hear, because I had another post here in this forum where I was told never to add or subtract passes but to use multiply or screen instead. Who's right?

Depends on what they were saying and in which circumstance. I am referring to a linear float workflow where you are using RenderMan or similar for full-fidelity renders in, say, feature film VFX. This is how it is done in large studios. There are many solutions that work just fine in a variety of circumstances where you can get away with it. Remember, doing any kind of post processing of renders, and even the render itself, is often a hack. It just depends on what your pipeline considers to be acceptable hacks....

PEN1
09-14-2009, 10:13 PM
Hi All,

I am trying to learn how to render passes in Maya 2009 and composite in Photoshop. Here are some of the passes. I rendered them all out together from the master layer in Maya. If I should render each object separately, I was wondering if someone could direct me to some tutorials that explain how to do that in Maya 2009.

The first image is the master beauty and the AO combined. The master beauty layer was set to multiply and the AO was set to normal.

The second image is the master beauty, the third is the AO, the fourth is the beauty, and the fifth is the indirect. I did not use the last two passes because I know they did not come out right. I cannot figure out why I cannot at least composite the master beauty and the AO without the image coming out so grainy. The reflection, shadow, specular and ambient passes all came out black, so I know I set up the passes wrong. I tried to place the objects on separate layers, but could not get the passes to render at all. I batch rendered to get the results that I have....

Any info or direction would be greatly appreciated.

Thanks
:banghead: :banghead: :banghead:



http://img10.imageshack.us/img10/4540/picture33v.png
http://img182.imageshack.us/img182/7895/masterbeauty.jpg http://img441.imageshack.us/img441/2773/76567066.jpg
http://img338.imageshack.us/img338/2015/beautyn.jpg
http://img142.imageshack.us/img142/1537/indirect.jpg

Aneks
09-14-2009, 11:18 PM
This is probably approaching the sort of thing you need to post in one of the rendering forums, but let me give you some quick answers. Firstly, you need to understand what colour space is. This refers to the colour format the image is encoded in. Mental ray is linear internally, so by default its output should be linear, but Maya and many other host applications tweak this so it is more like sRGB, using lookups or gamma encoding.

For most folks you will probably work in sRGB, as the whole linear thing is a little tricky unless you actually want to roll up your sleeves and learn how compositing works. Basically sRGB looks good on the monitor, so... anchors away!

Many people comping in sRGB space, say in an app like Photoshop, will choose to multiply the AO (ambient occlusion) pass over a beauty or diffuse pass. If you are using a beauty pass then ideally it should already have AO calculated into the result; from the looks of yours, it doesn't. Never mind: set the beauty to normal and set the AO to multiply, NOT the other way around. A lot of folks do it this way; it is a bad technique, but I am as much responsible for folks doing it this way as anyone, so.......

Anyway, the main thing with multipass renders is that you are re-creating the internal process which combines things like spec, reflection, shadow and diffuse, to produce a result which is identical to the beauty pass. So in a real multipass workflow the beauty is not used.

I don't know why those passes are black without seeing the Maya file, but I would say you have set them up wrong.

Your AO pass is grainy because you do not have enough samples in your shader. Increase them and it will improve. It's probably set to something like 16 by default; try 64 or 128. Also reduce the max distance; yours is set too high, and it is showing lots of occlusion where there should not be any.
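Assuming the occlusion is coming from a standard mib_amb_occlusion texture node (the node name below is just a guess), the relevant attributes can be bumped from script, something like:

import maya.cmds as cmds

ao_node = 'mib_amb_occlusion1'                 # hypothetical node name
cmds.setAttr(ao_node + '.samples', 64)         # more rays = less grain
cmds.setAttr(ao_node + '.max_distance', 5.0)   # scene-dependent; a smaller value
                                               # stops far-away geometry from
                                               # darkening everything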

You want tutorials? Well, I wrote a bunch years ago for multipass compositing in Shake that have been republished just about everywhere. They are old and badly out of date and, frankly, contain a lot of things that I now consider inaccurate and misinformed, but there ya go.

The best place for more current ones is the set I did for FXPHD.com. Almost no one I have ever seen re-creates multipass rendering correctly. But you can find my old Shake one here:

http://www.creativecrash.com/shake/tutorials/general/c/multipass-compositing

and my compositing ambient occlusion ones here:

http://www.creativecrash.com/shake/tutorials/general/c/compositing-ambient-occlusion-and-channel-based-lighting-

PEN1
09-15-2009, 11:43 PM
Thanks a lot Aneks for the help and all the info!

:applause: :applause:

sundialsvc4
09-16-2009, 01:16 AM
"Passes," a.k.a. node-based rendering pipelines, are where the computers conspire to remind you that they really are digital computers, and that, to them, your "beautiful images" really are just enormous files of binary data.

Setting your sights firmly upon your "final output," which is (say...) "a great big matrix of (R,G,B) tuples," your objective is to systematically explore all of the possible ways that you can get there.

Each of the various data sources that you are working with contains (when mapped into an X,Y,Z coordinate space...) probably many channels of information in addition to R, G, and B. But in any case, they can supply these channels of information, somehow, in terms of such a coordinate space. In the end, (say...) only "R, G, and B" will make it to the final digital output.

The rendering node-network is literally a data flow diagram of how the various data sources that you have available to you "flow down-stream" to arrive at the final destination of "a frame." And... the world truly is your oyster. The potential for creativity is unlimited.

So how and where do you start? Learn the alphabet. One by one, look at what all of the possible channels of information are. (R, G, B, Alpha, ZDepth, specularity, you name it...) Then, separately, consider what all of the possible rendering-nodes that are available in your software can do ... what they take as inputs, what they can produce as outputs, and what "knobs" they have. If your software allows third-party nodes to be added, do some "surfing" to see what's out there.

Also, a thought question ... "gee! I wonder why this-or-that feature was included in this program?" Why, indeed.

Even though every mainstream 3D program is different, all of their authors are going to SIGGRAPH every year. :) Drinkin' the same drinks and dreamin' the same ideas. "What were they thinking? And why?"

Aneks
09-16-2009, 11:06 AM
I have absolutely no idea what you are on about. Why did you choose to post that in a thread that had up until now been kind of productive and useful?

sundialsvc4
09-17-2009, 03:49 AM
Find out what I am "on about." That's the point.

"Pass" is an archaic term: they are "data processing stages." The data being processed is not simply "RGB" and it's not simply being run sequentially. Every step "throws away a little sand and picks up a little dirt" ... none of them "add" new information. When you ask about "render passes, order and operation," this is the point-of-view you have to take.

I am not trying to be an :rolleyes: here. You posted a list of 47 things that a piece of software can do. Every one of them is just like this: they take inputs, they produce outputs, they do things along the way that are probably very "lossy" and that also inject noise. You're going to "wire them together" in a way that minimizes data-loss and noise.

I am not trying to be! What I am saying is ... "it depends." You're going to need to look at your various input-files and what's most important in terms of what you get out of them, and experiment. But the fundamental nature of some of these transforms ... loss and noise ... make some combos very "obviously wrong." Specularity and diffusion, for example. Wire 'em up, first one then the other, and then flip 'em. You'll see the difference instantly.

Go follow the data. Every single one is a mathematical function: output = f(input). The math makes you go :surprised and I do not claim to fully understand it either. But "when you feed noise into noise, the noise multiplies."

beaker
09-17-2009, 09:30 AM
anyways....

I plus most stuff and multiply the AO

If you have Maya 2010, it includes Toxik. Maya includes a plugin that creates a Toxik script with all your render layers set up in a comp. You could use that, see how Toxik assembles them, and then just replicate it in Nuke.

I know you're using MR, but here is the formula from the PRMan manual:
result = SpecularDirect - SpecularDirectShadow + SpecularIndirect +
SpecularEnvironment + Ambient +
DiffuseDirect + Translucence - DiffuseDirectShadow +
DiffuseIndirect + DiffuseEnvironment + Backscattering +
Subsurface + Rim + Refraction + Incandescence

pbarnes
09-17-2009, 02:14 PM
anyways....

I plus most stuff and multiply the AO

If you have Maya 2010, it includes Toxik. Maya includes a plugin that creates a Toxik script with all your render layers set up in a comp. You could use that, see how Toxik assembles them, and then just replicate it in Nuke.

I know you're using MR, but here is the formula from the PRMan manual:
result = SpecularDirect - SpecularDirectShadow + SpecularIndirect +
SpecularEnvironment + Ambient +
DiffuseDirect + Translucence - DiffuseDirectShadow +
DiffuseIndirect + DiffuseEnvironment + Backscattering +
Subsurface + Rim + Refraction + Incandescence

Cool, I will be upgrading to 2010 in a month or so, so I will have a look at that.
PRMan is also very useful to know, as I will probably be having a look at it soon.


From what I understand it's common practice to do multipass renders.
I think Weta only gave the comp team two passes: one of Gollum and the other of his hair.
However, the point of this thread is for those who do multipass.

Although it is very interesting to see the flaws of multipass compared to a single render.

PEN1
09-18-2009, 08:22 PM
Thanks Aneks, Sundialsvc4, Beaker, and Pbarnes for the info....

I was wondering if anyone has any experience with using mia_x_pass shaders? I am using Maya 2009 and have followed the DT render passes DVD. While working with the DVD files things come out correctly, but trying it on my own scene things haven't gone so well.... I used mia_x shaders in my scene, but I'm having trouble rendering the passes. Am I correct that you have to upgrade the shaders to mia_x_passes? I've done that and still seem to have trouble getting the correct results. Wondering if anyone might be able to help.

Thanks

sundialsvc4
09-24-2009, 04:08 AM
I don't do Maya but in my own world I do this:

Get every distinct characteristic of the shot "out onto the hard drive as a separate file." (Or at least know exactly, distinctly, what they are.)

Everything falls into one of two categories. Some of those elements are visual-data sources, which "contribute pixels," while the others are modulators (my term) whose only purpose is to affect something else. (Strictly speaking, these are distinctions not of "the signal channels themselves," but of "how they can be applied.") One produces data; the other consumes it. Fire. Ice. The visual data sources, then, are "the pure music." The modulators, inevitably, are the "static." "Music first, noise last."

On the left side of the screen, each "data source" is lined up on its own. Then, modulations are attached to it one at a time, left to right. Even if the same input is piped to more than one of them, each has "only one noise, so far." Continuing to the right, the streams of data start to combine, leading to the final output(s) on the right side. Each time this happens, all of the "noises" in each input are in effect multiplied. (But it also matters less, especially when "very noisy" data is going to be masked out or otherwise excluded anyway.)

There are several ways that any input channel can be combined ... addition, average, maximum/minimum, and so on. Choose wisely. Stick viewers onto every intermediate output and build up the network one step at a time.

The "music to noise" rule of thumb really helps to clear my thoughts. If you've got eighteen or so inputs running around in a shot, there are literally thousands of ways to combine them. Sort them out ("music," "noise"); keep them on parallel paths as long as you can; pay attention to noise as you combine them.

Aneks
09-24-2009, 10:26 AM
What I am saying is ... "it depends." You're going to need to look at your various input-files and what's most important in terms of what you get out of them, and experiment. But the fundamental nature of some of these transforms ... loss and noise ... make some combos very "obviously wrong." Specularity and diffusion, for example. Wire 'em up, first one then the other, and then flip 'em. You'll see the difference instantly.

Sundialsvc4: I wouldn't usually be drawn on a topic like this, but your post really annoys me. Some people on this list composite for a living, in pipelines where people have spent a lot of time determining how and why multipass rendering is useful for their work. People come to this list with a question about professional practice, and maybe they want to learn the skills and techniques to participate in visual effects production. To this end, it matters how and why compositing decisions are made.

Talking about vagaries and nuances which are not really relevant is itself exactly the kind of 'noise' you mention. This is not black magic or composing a symphony; this is a technique used to make achieving a goal simpler and more versatile.

Without being overly rude, if your intention is to help people then I suggest you take the time to learn before dispensing advice. If your intention is to inject rambling sentiment into a conversation about an existing professional practice, then... mission accomplished.

Sybexmed
10-06-2009, 08:37 PM
My question is, how are you able to get these render passes out of Maya without any problems? I'm having problems with render layers and Maya's render passes.

PEN1
10-06-2009, 09:38 PM
In the Render Settings, inside the Common tab, create a name under "File name prefix"; the passes will be located in folders under that title. To get the render passes, batch render from the masterLayer in Render Layers. Once the render is done you will get a message in the bottom window that says "see console". Go through your Maya directory, or do a search for the name that you created in the "File name prefix" field. You will find the folders and passes there.
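For what it's worth, if you prefer setting the prefix from script, something like this should do the same thing (the prefix string is just an example):

import maya.cmds as cmds

# same as typing a name into "File name prefix" in the Common tab
cmds.setAttr('defaultRenderGlobals.imageFilePrefix', 'myPasses', type='string')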

Hope this helps....
Good Luck

Sybexmed
10-06-2009, 11:26 PM
I guess we all have to wait for Core Render Pass by Mercuito to simplify things for us.

wobagi
10-18-2009, 10:07 PM
In linear float you should definitely add instead of screen. Screen is basically a flawed operation: it inverts both images, does a weighted add, and then inverts the result. This is meant to emulate the process of double exposure and is probably designed to work well with log images. People have been using it with sRGB for ages; I know I have been guilty of that in the past. But in linear float it is bad.


Thanks for all your remarks, they are very valuable. However, I'm not sure what you mean by "weighted add"; it's a multiply. The formula for screen is 1 - [(1 - A) x (1 - B)] = A + B - A x B, so it's not flawed; you just have to be careful.
