
View Full Version : Dataflow in a simple shading network?


wdavidlewis
03-19-2009, 09:21 PM
Here's the setup: you have a single mesh connected to a lambert and a texture.

Create a simple scene with a poly sphere and hook it up to lambert1. Set lambert1 to get its diffuse color from a file. In the file node, select a texture. You get the texture wrapped on the sphere. If you delete history, you basically have three nodes: the place2dTexture node with multiple connections to the file node, and the file node with one or two connections to lambert1 (outColor and outTransparency, with outColor being the important one).
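For reference, here's roughly what that wiring looks like in API terms - a rough sketch using MDGModifier, assuming a plugin context, an existing MObject for lambert1, and only the two key connections (the Hypershade actually makes several more from place2dTexture to file):

#include <maya/MDGModifier.h>
#include <maya/MFnDependencyNode.h>
#include <maya/MObject.h>
#include <maya/MStatus.h>

// Wire place2dTexture -> file -> lambert1, roughly as Hypershade does.
MStatus wireNetwork(const MObject& lambert1)
{
    MDGModifier dg;
    MObject place2d = dg.createNode("place2dTexture");
    MObject file    = dg.createNode("file");
    dg.doIt();  // commit node creation before looking up attributes

    MFnDependencyNode placeFn(place2d), fileFn(file), lambertFn(lambert1);

    // place2dTexture feeds the file node its (possibly transformed) UVs...
    dg.connect(place2d, placeFn.attribute("outUV"),
               file,    fileFn.attribute("uvCoord"));
    // ...and the file node's sampled color drives the lambert's color input.
    dg.connect(file,     fileFn.attribute("outColor"),
               lambert1, lambertFn.attribute("color"));
    return dg.doIt();
}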

The outColor attribute on the file node is a compound attribute with three floats: outColorR, outColorG, and outColorB. But if I do a getAttr on outColor, it always returns 0, 0, 0.

So, the basic question is: how does the lambert node get all the texture data?

While the texture has three channels, there's a ton more data than three floats. It seems like there's more going on here than data flowing across the outColor connection.

The next question is: if I were implementing the lambert shader in C++, how do I get access to all the texture data? It doesn't seem that I can just read it from outColor - or can I? (Also remember that the place2dTexture node can affect the data, so I can't just read it from the file specified in the file node.)

Thanks!

--- David

tbaypaul
03-20-2009, 01:12 AM
The place2dTexture node passes its computed UV value to the file node, which uses it to look up the color of the image bitmap at that UV location. Then the file node passes that color value to the lambert shader. All of this happens at render time, driven by the sampler that runs during the render operation.

The sampler "renders" each pixel of the scene one at a time and saves its color value to a buffer before writing the completed image to the image file. So the entire shading graph is evaluated for each pixel; you are really working a point at a time, passing just enough info for that one point's contribution to the pixel's color. Usually the sampler renders multiple points for each pixel and then "averages" the colors into the final pixel color - so it samples the scene multiple times per pixel, which is why it is called the sampler. The sampler can also sample the scene at different times and average the results into one color; that is called "motion blur".
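To make that concrete, here is a stripped-down sketch of the lambert side as a C++ plugin node, loosely based on the devkit's lambertShader example. The class and attribute names are my own, initialize() is omitted, and I've hard-coded one light direction where the real node iterates the aLightData array the renderer provides:

#include <maya/MPxNode.h>
#include <maya/MDataBlock.h>
#include <maya/MDataHandle.h>
#include <maya/MFloatVector.h>
#include <maya/MPlug.h>

class SimpleLambert : public MPxNode
{
public:
    virtual MStatus compute(const MPlug& plug, MDataBlock& block);

    // Built in initialize() (omitted) with MFnNumericAttribute.
    static MObject aColor;         // input: the file node connects here
    static MObject aNormalCamera;  // input: filled in by the sampler
    static MObject aOutColor;      // output: this sample's shaded color
};

MStatus SimpleLambert::compute(const MPlug& plug, MDataBlock& block)
{
    if (plug != aOutColor && plug.parent() != aOutColor)
        return MS::kUnknownParameter;

    // One RGB value per call: whatever the file node computed for the
    // UV of the point currently being sampled. Never the whole image.
    MFloatVector& surfColor = block.inputValue(aColor).asFloatVector();
    MFloatVector& normal    = block.inputValue(aNormalCamera).asFloatVector();

    // Fixed light direction for brevity; a real lambert iterates the
    // lights the renderer supplies through aLightData.
    MFloatVector lightDir(0.0f, 0.0f, 1.0f);
    float nDotL = normal * lightDir;   // dot product
    if (nDotL < 0.0f) nDotL = 0.0f;

    MDataHandle outHandle = block.outputValue(aOutColor);
    outHandle.asFloatVector() = surfColor * nDotL;
    outHandle.setClean();
    return MS::kSuccess;
}

So the shader never pulls "all the texture data"; it just reads its color input plug, and the connection guarantees that value is the file node's output for the current sample point.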

The sampler also provides a lot of global render information at render time - things like the point position of the geometry being rendered in camera space, object space, etc. You capture those values by declaring attributes on your C++ node with the same names Maya uses for them. See the docs for Appendix C: Rendering attributes.
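For example, a texture node can declare the implicit uvCoord attribute like this (the node name is hypothetical; the pattern follows the devkit texture examples, and registering the node with a "texture/2d" classification is also required but not shown):

#include <maya/MPxNode.h>
#include <maya/MFnNumericAttribute.h>
#include <maya/MFnNumericData.h>

class MyTexture : public MPxNode   // hypothetical 2D texture node
{
public:
    static MStatus initialize();
    static MObject aUV;
};

MObject MyTexture::aUV;

MStatus MyTexture::initialize()
{
    MFnNumericAttribute nAttr;
    MObject u = nAttr.create("uCoord", "u", MFnNumericData::kFloat);
    MObject v = nAttr.create("vCoord", "v", MFnNumericData::kFloat);

    // "uvCoord" is one of the implicit names from Appendix C. Because the
    // attribute uses that exact name, the sampler (or an upstream
    // place2dTexture) supplies the current sample's UV automatically.
    aUV = nAttr.create("uvCoord", "uv", u, v);
    nAttr.setStorable(false);
    nAttr.setHidden(true);
    return addAttribute(aUV);
}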

wdavidlewis
03-23-2009, 09:18 PM
Thanks for the reply - it was very helpful. The key idea I was missing is that this graph describes the per-pixel dataflow.

Is this true for hardware shaders as well? I've inherited a shader node that derives from MPxHardwareShader. The render() function of the shader basically pulls out the uniform constant values, passes them to D3D, and then calls DrawIndexedPrimitive() for every primitive. I'm a D3D noob, so it might be doing something per-pixel, but it really looks like it runs per "frame" (or whatever the right term is).
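To check my own understanding, here's the mental model I've ended up with as a plain C++ sketch - not real Maya or D3D calls, just stand-ins showing the shape of the two flows:

struct Sample { float u, v; };
struct Color  { float r, g, b; };

// Software shading: the sampler evaluates the whole network per sample.
Color fileNode(Sample s)     { Color c = { s.u, s.v, 0.0f }; return c; } // stand-in lookup
Color lambertNode(Color tex) { return tex; }                             // stand-in shading

void softwareRender(const Sample* samples, int count, Color* out)
{
    for (int i = 0; i < count; ++i)
        out[i] = lambertNode(fileNode(samples[i]));  // graph runs once per sample
}

// Hardware shading: render() runs once per object on the CPU; the
// per-pixel work happens on the GPU inside the pixel shader.
void bindUniforms() {}          // stand-in for setting D3D shader constants
void drawIndexedPrimitive() {}  // stand-in for the D3D draw call

void hardwareRender(int objectCount)
{
    for (int obj = 0; obj < objectCount; ++obj) {
        bindUniforms();             // per object, not per pixel
        drawIndexedPrimitive();     // GPU then shades every covered pixel
    }
}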

I will take a look at Appendix C. It looks like there's some useful stuff in there.

--- David

CGTalk Moderation
03-23-2009, 09:18 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.