Renderman - evaluating attached shaders


-Vormav-
01-18-2006, 01:23 PM
Not to flood this board with questions, but a simple question occurred to me with Renderman shaders that I couldn't quite figure out.

Coming from Mental Ray, I'm having trouble seeing where exactly in the shader a Renderman shader evaluates an attached shader.
Let's say you had a simple shader with 2 different colors, color1 and color2. If the shading point being evaluated is the backside of a polygon, the shader returns color2. But if the shading point being evaluated is the front side, color1 is returned. In cases like this, it'd be a huge waste to actually calculate both shaders, and then decide which one to use after the fact.
In MR, tags that point to the attached shaders are passed as an argument to all shaders automatically. From there, the attached shaders are never actually evaluated until you explicitly tell the shader to do so (typically with one of the mi_eval_* functions, though there are other functions for this as well).

Of course, the format of Renderman shaders is rather different. From what I can tell, the actual return values of attached shaders are what gets passed as arguments to the shader functions, not pointers to the shaders or anything of the sort - meaning it's more a case of evaluating all attached shaders and then deciding whether or not they're needed after the fact. That's at least the pattern I keep seeing.
If I'm wrong on this, which I definitely could be, I'd love to know where exactly those shaders are being evaluated. Otherwise, I would at least assume that Renderman has a built-in way of supporting shaders in this format - so if anyone could point me in the right direction for that, I'd love that too. :)

playmesumch00ns
01-18-2006, 02:06 PM
Remember that you cannot attach more than one surface shader to any surface in prman. So the renderer doesn't decide which shader to execute, the shader itself has to decide which path to take depending on some conditional.

It's up to you whether you execute both paths or not, and that entirely depends on whether the conditional is uniform or varying, and what you're doing inside each path.
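To illustrate that point with a hedged sketch (the shader name, parameter, and the occlusion() call standing in for "something expensive" are all made up for the example): with a uniform condition, every point on the shading grid agrees, so the untaken branch is never executed; with a varying condition the renderer generally has to run both branches for whichever points need them.


surface uniform_switch(uniform float useSecond = 0)
{
    color c;
    /* uniform condition: the whole grid takes one branch, so the
       other branch's code is skipped entirely */
    if (useSecond == 0) {
        c = color(1, 0, 0);     /* cheap branch */
    } else {
        /* never executed when useSecond == 0 */
        c = color(1) * (1 - occlusion(P, normalize(N), 64));
    }
    Ci = Os * c;
    Oi = Os;
}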

-Vormav-
01-19-2006, 06:51 AM
Heh, either I'm not understanding you, or you're not understanding me.

I'm not so much referring to attaching multiple surface shaders to a surface, but more to working with shaders in a shader network. So, take a shader like this:


surface color_switch(color color1 = color(1,0,0); color color2 = color(0,0,1))
{
    /* someCondition stands in for whatever test selects between the inputs */
    if (someCondition) {
        Ci = color1;
    } else {
        Ci = color2;
    }
    Oi = Os;
}

Say that, working in SLIM, I attach the output of an ambient occlusion shader to color1 and the output of a reflection shader to color2, and then attach this color_switch shader as the surface shader of a sphere (all of which you can definitely do - although you might have to direct the output of the reflection and AO through other nodes for SLIM to allow the connection; I have definitely made such shader connections in the past). The question then is: since color1 and color2 are both provided as direct arguments of the surface shader, is it even possible to control how these shaders are called - i.e., don't bother evaluating the color1 shader in the network unless I've reached the conditional block where I would actually use it?

Sorry if I'm just misunderstanding you, though.

pgregory
01-19-2006, 09:40 AM
Heh, either I'm not understanding you, or you're not understanding me.
Say that, working in SLIM, I attach the output of an ambient occlusion shader to color1 and the output of a reflection shader to color2, and then attach this color_switch shader as the surface shader of a sphere (all of which you can definitely do - although you might have to direct the output of the reflection and AO through other nodes for SLIM to allow the connection; I have definitely made such shader connections in the past). The question then is: since color1 and color2 are both provided as direct arguments of the surface shader, is it even possible to control how these shaders are called - i.e., don't bother evaluating the color1 shader in the network unless I've reached the conditional block where I would actually use it?

Be aware that SLIM is 'cheating' to a degree. When you see multiple 'shader nodes' at authoring time, that is actually all combined into a single shader at generation time. There is no concept of multiple 'shaders' in RenderMan. As such, I imagine (I've not seen the output shader code from SLIM so can't be sure) that the calculation of each 'node' will happen irrespective of whether it is needed or not.

You can certainly achieve the level of control you're after if you write the shaders by hand, i.e. put the calculation of your 'ambient occlusion' and 'reflection' values inside the if statement itself; then the shader execution engine won't run the code that isn't needed.
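A minimal hand-written sketch of that idea, with the expensive calls moved inside the branches (the backfacing test, sample count, and trace() usage here are assumptions for illustration, not SLIM output):


surface color_switch_inline(float Kr = 1)
{
    normal Nn = normalize(N);
    color c;
    if (Nn . I > 0) {
        /* back side: only now pay for the occlusion call */
        c = color(1) * (1 - occlusion(P, Nn, 64));
    } else {
        /* front side: only now pay for the reflection trace */
        vector R = reflect(normalize(I), Nn);
        c = Kr * trace(P, R);
    }
    Ci = Os * c;
    Oi = Os;
}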

Sorry I can't be more help on the SLIM front, the only way I could offer any more advice would be to see the generated .sl file from your shader network.

Cheers

Paul Gregory

-Vormav-
01-19-2006, 11:23 AM
Ahh... I wasn't aware that that's how SLIM handles things (which is why what playmesumch00ns was saying didn't really make sense to me ;)). But that makes sense now, and certainly changes things. Thanks. :)
I would assume, though, that if you were to set up the network shaders as separate color functions (not just as surface shaders), it wouldn't be too difficult to build some kind of shader node for SLIM that would pass, say, the names of connected shaders and then use some kind of eval statement within your surface shader to get this kind of flexibility. That should work even if SLIM compiles all nodes into a single shader. Might be fun to try...

playmesumch00ns
01-19-2006, 12:04 PM
No, I'm afraid this will certainly not work!

As Paul said, the RiSpec does not allow for any concept of "surface shader chaining" like you see in mental ray, gelato or even Aqsis now (nice one Paul). The only types of shaders you are allowed to attach are surface, displacement, light, and atmosphere shaders. There is no concept of a "function" shader in RenderMan.

This is a slightly annoying limitation of the spec. Who knows, maybe Pixar will change it for PRMan 13 (doubt it, but we can live in hope).

There are two ways of getting around this.

The first is to do what Slim does and build a network of functions. These functions are then called by the main body of the auto-generated surface shader to evaluate the shading. In most cases it is perfectly possible to evaluate these functions only when they're needed, but Slim often takes the lazy approach and evaluates them all the time.

The second method is to use light shaders. RSL allows you to pass variables between light and surface shaders, and you can attach as many light shaders as you like to an object (note: you're attaching them to the OBJECT). By using light categories you can create special functions, like raytraced reflections, ambient occlusion etc., as interchangeable modules and "connect" them to any surface shader that understands the message passing by attaching them to the surface. There are overheads and caveats involved with this method, and you can end up with huge lists of variables being passed around.
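A hedged sketch of that message-passing pattern (all names here - __occ, the "occlusionlight" category, the sample count - are made up for the example): the light shader computes a value and exports it via an output variable; the surface pulls it in with a category-restricted illuminance loop and lightsource().


light occlusionlight(string __category = "occlusionlight")
{
    output varying color __occ = 0;
    illuminate(point "shader" (0,0,0))  /* run the light over the surface */
    {
        __occ = color(1) * (1 - occlusion(Ps, 64));
        Cl = 0;  /* contributes no direct illumination */
    }
}

surface uses_occ()
{
    color occ = 1;
    illuminance("occlusionlight", P)
    {
        color lv = 0;
        /* lightsource() returns nonzero if the light exports __occ */
        if (lightsource("__occ", lv) != 0)
            occ = lv;
    }
    Ci = Os * Cs * occ;
    Oi = Os;
}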

Hope this makes it slightly clearer

-Vormav-
01-19-2006, 02:00 PM
The first is to do what Slim does and build a network of functions. These functions are then called by the main body of the auto-generated surface shader to evaluate the shading. In most cases it is perfectly possible to evaluate these functions only when they're needed, but Slim often takes the lazy approach and evaluates them all the time.
That's actually a lot like what I was getting at: instead of writing surface shaders, just write functions (see ARMAN p. 333 for a direct example of what I mean).
One such function could be:


/* include functions */
color eval_color(string shaderName; color constant)
{
    color C = 0;
    if (shaderName == "lambert") {
        /* in this context, lambert() is a function, NOT a surface shader */
        C = lambert(/*parameters*/);
    } else if (shaderName == "ambient_occlusion") {
        C = ambient_occlusion(/*parameters*/);
    } else {
        C = constant;
    }
    /* etc. - obviously you'd want a better way of doing this; it's just a quick example */
    return C;
}


Then, a surface shader that supports this might be in the form of:


/* include eval functions and other functions */

surface color_switch(string color1 = ""; string color2 = "")
{
    if (someCondition) {
        Ci = eval_color(color1, color(0));
    } else {
        Ci = eval_color(color2, color(0));
    }
    Oi = Os;
}


From there, all you'd really have to do is find a way of passing the parameters for the functions along to the eval function (which, as you mentioned, could potentially be done with light shaders). And of course the big drawback is that for this to work, every surface shader would have to be written in two forms: one as a regular surface shader, and another as an equivalent function that just returns a color (meaning it would take a bit of work to even get all of the standard shaders working properly).
That's what I was getting at. In hindsight it probably wouldn't be worth it - it'd take a lot of effort to make the process automatic with a setup like this. But I don't see why this wouldn't actually work through RSL (although I'm sure getting it to work through SLIM would be an entirely different matter...).
Just to clarify: I'm not talking about attaching multiple shaders, and I'm not talking about calling a surface shader as if it were a function. All it is is creating eval functions that take some parameter from surface shaders and then run the appropriate function (equivalent to the surface shader you'd be using in the "network", but coded separately - not that it'd be too hard to convert them anyway). But yeah... again, consider all the work it'd take to make that automated.

Shaderhacker
01-19-2006, 05:33 PM
This sounds easy for your simple example, but trust me - when a whole production is developed this way, it becomes very, very aggravating and frustrating. Simply put, this is not a slight irritation but a monumental setback of Renderman. I would much prefer to develop on Mental Ray. It's more flexible and robust. While it doesn't have some really good abstract functions (i.e. illuminate) or pre-evaluate a grid of points at once, it's the best development API for a bunch of shader writers making multiple changes/additions in the pipeline.

-M

rendermaniac
01-19-2006, 05:55 PM
This is the very reason that Pixar introduced Generative functions into Slim in RAT 6.5.

Before, you had exactly the problem you are stating - you would generate the result from both functions and then pick the one you want. This is very wasteful - especially if one or both of the functions call raytracing, occlusion, irradiance etc.

Pixar have got around this limitation with a bit of a fudge - basically the branches of your switch are rolled into the switch statement instead of being called before it. So you go from:


surface switch_example(float switcher = 0;)
{

    color branch1() {
        /* ...do something clever here... */
    }

    color branch2() {
        /* ...do something equally clever here... */
    }

    color switch(color a; color b; float sel;) {
        if (sel == 0) return a;
        else return b;
    }

    color tmp1;
    color tmp2;
    color tmp3;

    tmp1 = branch1();
    tmp2 = branch2();
    tmp3 = switch(tmp1, tmp2, switcher);

    /* ... */

    Ci = tmp3;
}


Now with generative functions Slim gives you:


surface switch_example(float switcher = 0;)
{

    color branch1() {
        /* ...do something clever here... */
    }

    color branch2() {
        /* ...do something equally clever here... */
    }

    color switch(float sel;) {
        if (sel == 0) {
            color a = branch1();
            return a;
        } else {
            color b = branch2();
            return b;
        }
    }

    color tmp1;

    tmp1 = switch(switcher);

    /* ... */

    Ci = tmp1;
}


Slim is basically just a Tcl preprocessor that generates shader code, but you always end up with a single surface shader.

Also, your idea of passing around function names would have to be done by a preprocessor - which is basically the same as Slim. RSL is not dynamic enough to do it, i.e. there is no eval function - once your shader is compiled you cannot change it at run time (it would be neat if you could!).

Simon

PS Paul, does Aqsis's layered shading do any culling of shaders in conditional code that don't need to be run? It would be nice if it did.

pgregory
01-19-2006, 06:00 PM
PS Paul does Aqsis's layered shading do any culling of shaders in conditional code which don't need to be run? Would be nice if it does.
At the moment, no - the layers are just run sequentially. In fact the layers have no inherent knowledge of one another, so determining which layers need to be evaluated is not currently possible, and would take a lot of thought to enable.

PaulG

rendermaniac
01-19-2006, 06:29 PM
At the moment no, the layers are just run sequentially, in fact the layers have no inherent knowledge of one another, so determining which layers would need to be evaluated is not currently possible, and would need a lot of thought to enable.

PaulG

Fair enough. That's the hard bit ;)

I guess the easiest - and neatest - way would be to use conditional RIB for switching. You'd have to be able to switch on a shader value by running the conditional shaders (and any upstream nodes) first and using the results to cull and execute the main branches.

How you'd deal with varying results I do not know!

And multiple levels of conditionals could be sticky too ;)

Simon

playmesumch00ns
01-19-2006, 07:54 PM
Vormav: ooooohhh ok I see what you mean. You're using string parameters like function pointers.

Leaving aside the production pipeline issues, the surface shader would be huge, as it would contain every possible shader! Although all those ifs wouldn't hurt it much in normal shading, in raytracing it would be incredibly slow.

It's a neat idea, but prman won't let you get round its limitations that easy!

Shaderhacker: you'd really rather develop shaders for mental ray than prman? The word "robust" nearly made me choke on my coffee :)

Shaderhacker
01-19-2006, 10:39 PM
Shaderhacker: you'd really rather develop shaders for mental ray than prman?

Yeap. Having spent some time on Renderman this past year, I can clearly see the benefits of developing shaders with a bunch of shader writers using Mental Ray's API. Notice I said "developing" shaders... not the renderer itself (which is another matter).

The concept of writing a small shader that can be "linked" into a shader network, without having to worry about the final shader code being enormous, is clearly a plus when working with several developers. Also, there doesn't seem to be any way to debug RSL files at render time the way Mental Ray can.

The word "robust" nearly made me choke on my coffee :)
"robust" was the wrong word to use. :) I should've used "modular: Designed with standardized units or dimensions, as for easy assembly and repair or flexible arrangement and use" - which fits better and is definitely great for programmers. ;)

-M

pgregory
01-19-2006, 11:55 PM
... Also, there doesn't seem to be any way to debug RSL files at render time the way Mental Ray can.

I have no experience of Mental Ray whatsoever. I would be very interested to hear just what facilities it provides to assist shader debugging. This is a subject that has been brought up a number of times in the Aqsis community. Any information you can give would be great.

Cheers

Paul Gregory

-Vormav-
01-20-2006, 12:46 AM
Vormav: ooooohhh ok I see what you mean. You're using string parameters like function pointers.

Leaving aside the production pipeline issues, the surface shader would be huge as it would contain every possible shader! Although all those if's wouldn't hurt it in normal shading, in raytracing it would be incredibly slow.
Yeah, another reason why it probably wouldn't be worth it. Just an idea, though. :p

Shaderhacker
01-20-2006, 01:25 AM
I have no experience of Mental Ray whatsoever. I would be very interested to hear just what facilities it provides to assist shader debugging. This is a subject that has been brought up a number of times in the Aqsis community. Any information you can give would be great.

Cheers

Paul Gregory

All shaders in MR are compiled C/C++ code (i.e. dynamically-linked libraries, similar to shadeops in PRMan), so you can work in any C/C++ IDE (Windows or Unix) and set breakpoints in the code. There is no IDE (that I know of) for RSL. Right now we debug with 'print' statements (with the exception of shadeops). :eek: While you may never run across NaNs that corrupt renders, you still may run across values that are negative when they should be positive, or precision errors, which can make troubleshooting time-consuming.
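For reference, that print-statement debugging in RSL usually looks something like the sketch below (the shader name is made up, and the exact format characters printf accepts for points and colors may vary between renderers):


surface debug_me()
{
    color c = diffuse(normalize(N));
    /* dump shading values to the render log for inspection */
    printf("P = %p  N = %p  (s, t) = (%f, %f)  c = %c\n", P, N, s, t, c);
    Ci = Os * Cs * c;
    Oi = Os;
}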

Btw, I'm working on a series of DVD tutorials on Mental Ray shader writing in production, using Maya as a front-end. It's going slowly, but it's coming along. It will start from the beginning with Mental Ray and set a student up in a Visual C++ IDE with the correct paths, libs, etc. to get started writing basic shaders.


-M

fred_lemaster
01-27-2006, 11:05 AM
Emacs is an available IDE for RSL.

rendermaniac
01-27-2006, 03:11 PM
The closest I have seen to stepping through shaders is Ian Stephenson's Buffy (http://www.dctsystems.co.uk/RenderMan/buffy.html).

Of course this is specific to Angel - his renderer - and to BMRT, as the .slo files are binary compatible.

Simon

CGTalk Moderation
01-27-2006, 03:11 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.