Simple raytrace shadow question


#1

So I made a simple scene with a sphere, ground plane, and directional light with ray traced shadows enabled. All geo is using a generic white lambert material. Here is the mental ray render:

http://i.imgur.com/7VSKags.png
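For reference, here is roughly how the scene is built as a Python (maya.cmds) sketch; the node names (testSphere, groundPlane, keyLight, whiteLambert) are just illustrative stand-ins:

    # Minimal repro sketch of the scene in Python (maya.cmds)
    import maya.cmds as cmds

    sphere = cmds.polySphere(name='testSphere')[0]
    cmds.move(0, 1, 0, sphere)
    ground = cmds.polyPlane(name='groundPlane', width=20, height=20)[0]

    # Directional light with raytraced shadows enabled
    cmds.directionalLight(name='keyLight', rotation=(-45, 30, 0),
                          useRayTraceShadows=True)

    # Generic white lambert assigned to all geometry
    lam = cmds.shadingNode('lambert', asShader=True, name='whiteLambert')
    cmds.setAttr(lam + '.color', 1, 1, 1, type='double3')
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name='whiteLambertSG')
    cmds.connectAttr(lam + '.outColor', sg + '.surfaceShader')
    cmds.sets(sphere, ground, edit=True, forceElement=sg)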

Now, just to be sure I’m not insane I tried connecting the out color from the lambert to a surface shader and then assigned the surface shader to the geometry. Same exact result:

http://i.imgur.com/ehoYjeq.png
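In script form that passthrough is just one connection (a sketch reusing the hypothetical names from above):

    # Route the lambert's computed color through a surfaceShader
    import maya.cmds as cmds

    ss = cmds.shadingNode('surfaceShader', asShader=True, name='passthroughSS')
    cmds.connectAttr('whiteLambert.outColor', ss + '.outColor')
    ssSG = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                     name='passthroughSG')
    cmds.connectAttr(ss + '.outColor', ssSG + '.surfaceShader')
    cmds.sets('testSphere', 'groundPlane', edit=True, forceElement=ssSG)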

Next I connected a bulge texture, mapped as a camera projection, to the surface shader:

http://i.imgur.com/UwvxCoF.png
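A sketch of that projection setup, assuming the default persp camera as the projection camera:

    # Bulge fed through a perspective (camera) projection into the surface shader
    import maya.cmds as cmds

    bulge = cmds.shadingNode('bulge', asTexture=True, name='bulgeTex')
    proj = cmds.shadingNode('projection', asUtility=True, name='camProj')
    cmds.setAttr(proj + '.projType', 8)  # 8 = perspective (camera) projection
    cmds.connectAttr('perspShape.message', proj + '.linkedCamera')
    cmds.connectAttr(bulge + '.outColor', proj + '.image')
    cmds.connectAttr(proj + '.outColor', 'passthroughSS.outColor', force=True)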

Then I connected the out color from the original lambert to the U Width and V Width of the bulge texture (using a reverse node):

http://i.imgur.com/4MHN0Rn.png
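And the reverse-node wiring, as a sketch (single float channels into the bulge's uWidth/vWidth):

    # Lambert shading result inverted through a reverse node, driving the
    # bulge line thickness (scalar channels into the float uWidth/vWidth)
    import maya.cmds as cmds

    rev = cmds.shadingNode('reverse', asUtility=True, name='invertShading')
    cmds.connectAttr('whiteLambert.outColor', rev + '.input')
    cmds.connectAttr(rev + '.outputX', 'bulgeTex.uWidth')
    cmds.connectAttr(rev + '.outputY', 'bulgeTex.vWidth')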

WHY IS THERE NO SHADOW?!?!

If I switch to Maya software for the render, the shadow suddenly appears:
http://i.imgur.com/7Wj0z5q.png

This is the shading network:

http://i.imgur.com/GbvSUv3.png

Can someone please explain to me why this is happening???


#2

So it appears that the rabbit hole goes a little deeper.

In trying to troubleshoot the source of the issue I directly connected the output of the lambert to the projection node and then into the surface shader (bypassing the bulge texture):

http://i.imgur.com/QmhvKWi.png
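Which, in script form, is just (reusing the hypothetical names from the earlier sketches):

    # The isolation test: lambert straight into the projection, bulge bypassed
    import maya.cmds as cmds

    cmds.connectAttr('whiteLambert.outColor', 'camProj.image', force=True)
    cmds.connectAttr('camProj.outColor', 'passthroughSS.outColor', force=True)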

So directly connecting the lambert color output to the surface shader worked just fine (as you can see in the second image of my first post). But the image above makes it clear that if you attempt to run that color output through a projection node then mental ray no longer appears to calculate shadows.

Out of curiosity I decided to test a blinn material as well. Here is the same scene with a reflective blinn assigned to the geometry:

http://i.imgur.com/atSSyTH.png

We can clearly see the reflections on both the ground and the sphere. But here is the render with the blinn output connected to the projection node:

http://i.imgur.com/xQB5k5M.png
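The blinn A/B test as a sketch: the first render has the blinn assigned directly to the geometry; this rewires it through the projection (the 0.5 reflectivity is arbitrary):

    import maya.cmds as cmds

    blinn = cmds.shadingNode('blinn', asShader=True, name='reflBlinn')
    cmds.setAttr(blinn + '.reflectivity', 0.5)
    cmds.connectAttr(blinn + '.outColor', 'camProj.image', force=True)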

So we can see that mental ray is also not calculating reflections through the projection node. Likewise, here is the same scene rendered with ambient occlusion:
http://i.imgur.com/dLduoZk.png

I had assumed that if I connected the occlusion output through the projection node I would get either an all-white or all-black result, hypothesizing that mental ray simply doesn’t calculate any raytracing when evaluating input connections prior to a projection node. But the actual result was considerably more bizarre:

http://i.imgur.com/2WEbXW1.png
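For anyone reproducing this, the occlusion test was wired roughly like this, assuming mental ray's mib_amb_occlusion node (requires the Mayatomr plug-in to be loaded):

    import maya.cmds as cmds

    occ = cmds.createNode('mib_amb_occlusion', name='occTest')
    cmds.setAttr(occ + '.samples', 32)
    cmds.connectAttr(occ + '.outValue', 'camProj.image', force=True)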

The noise makes it clear that mental ray is calculating something. It's an obvious result of multi-sampling. But I can't otherwise explain what else is going on.

Finally, I tried a test where I turned off the ray traced shadows and instead used depth map shadows. But, surprisingly, mental ray still did not evaluate them (the result looked identical to the first image – no shadow).
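For completeness, swapping raytraced shadows for depth maps is just two attributes on the light shape (name carried over from the earlier sketch):

    import maya.cmds as cmds

    cmds.setAttr('keyLightShape.useRayTraceShadows', 0)
    cmds.setAttr('keyLightShape.useDepthMapShadows', 1)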

So, yeah. I'm at a loss.


#3

It seems like the shadow shader of the default Maya materials gets lost somewhere in these particular setups. Try using mental ray materials like mia_material_x instead.


#4

I’m not the least bit surprised that attaching color values to UV coordinates breaks something. When you’re experimenting with things the software wasn’t designed to do, don’t be surprised if it doesn’t work.

Shadows and shading are not calculated simultaneously. It's possible that mental ray calculates them in a different order, so the out color does not contain them. The surface shader will never receive real shadows on its own.


#5

VanDerGoes: I've tried it with the mia material. I created a generic matte mia material and tested its result output; no dice. I've also tried connecting the diffuse result to the shader, with no success.

guccione: “I’m not the least bit surprised that attaching color values to UV coordinates breaks something.”

Actually, connecting color values to UV coordinates always seems to work just fine. However, that's not what I was doing anyway. I didn't connect color values to UV coordinates; I connected color values to the U width and V width parameters of the texture, which just control the thickness of the lines. And that also works fine. The problem arises when I attempt to project that result back into the scene. I can skip the 2D texture node altogether and I still have the problem. The issue doesn't appear to be related to UV coordinates, but rather that shadows are not calculated for any shader that is run through a projection node when rendered with mental ray.

guccione: “The surface shader will never receive real shadows on its own.”

If I connect the outcolor of a lambert material to a surface shader then the shadows totally appear and everything works fine. But if I attempt to take that color output and project it onto the geometry then suddenly the shadows disappear.

guccione: “When you’re experimenting with things the software wasn’t designed to do, don’t be surprised if it doesn’t work.”

The shader I am testing is included, by default, from Autodesk, in the Visor. And, not surprisingly, it works just fine… so long as you are rendering with Maya software. But if you switch to mental ray it gives you a different result. However, unlike the layered shader, for example, I can't find any information saying that this particular shading network does not evaluate correctly when rendered with mental ray.


#6

“Always seems to work fine” does not mean it was designed to work that way.

Shadows are only appearing on the surface shader because they're being converted to color beforehand, so it's not actually receiving shadows. It's just colored dark by the lambert's output.

It could be that MR considers it too slow to evaluate all the channels of the shader going into the surface shader. Or that MR sees the sphere is not receiving any shadows and applies that to the whole shader even though it's on more than one object; it may be confused by one shader on two objects. Maybe two separate lamberts and surface shaders would do it.

There are a hundred nodes included by default; that doesn't mean you can plug anything into anything else and expect it to work the way you imagine it. I've used plenty of hacks myself, and sometimes they work, but if they don't, I don't consider the software to be broken. A direct color-to-UV-channel connection could contain more than just a float number, like a flag for which channel is R, G, or B. It's safer to attach a single color channel (red, for example), but then float values or values outside of the 0.0-1.0 range can still cause problems.
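For example, a single-channel connection would look like this (node names borrowed from the sketches earlier in the thread):

    import maya.cmds as cmds

    # Scalar R channel into the float parameter, rather than the full RGB
    # output; force=True replaces the earlier reverse-node connection
    cmds.connectAttr('whiteLambert.outColorR', 'bulgeTex.uWidth', force=True)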


#7

I fear the problem lies deep in the shader evaluation. Let's look at what's happening (a toy sketch follows the list):

- First the surface shader is evaluated; it does little more than evaluate its inputs.
- Next the projection shader is evaluated. Inside it, the shader globals are manipulated: the world-space coordinates are transformed into the local space of the projection shader, in this special case into camera coordinates. After its inputs are evaluated, the original state is restored.
- So the projection's input surface/shadow shaders are evaluated with a completely wrong point position, which leads to a "wrong" result. Maybe Maya software always transforms the shader globals back into the correct space for evaluation, but it seems that mental ray knows nothing about this transformation.

You can check that the lighting is wrong as well if you move the light around.


#8
Agreed.

No, the actual CAST SHADOWS appear just fine when routing color output from a lambert through a surface shader, not just the diffuse illumination from the light source.

I'm not talking about connecting random nodes. I'm saying the ENTIRE SHADING NETWORK is right out of the Visor. Like this SPECIFIC EFFECT, the nodes, the connections, EVERYTHING, is right out of the Visor. So it's not really "the way I imagine it" so much as "the way this sample network was designed, by Autodesk, to work."

Well, if there is only one light in the scene, with an intensity of 1, and the only material in the scene is a generic white lambert with a diffuse value of 1, then I'm fairly certain we can safely say that, so long as we don't render with any other lighting (final gather, GI, etc.), you will never get color values for a given outColor that fall outside the range of 0.0 to 1.0. Therefore we can fairly well establish that any given outColor value from the lambert will be a simple float ranging from 0.0 to 1.0 that effectively represents the luminosity of the assigned object with regard to that sole light source.

Given this simple setup we shouldn't ever have values outside the 0.0 to 1.0 range, because the brightest the lambert can be illuminated is 1.0, and a complete lack of illumination results in a simple 0.0, with no reason to get anything above or below that.

As such, any input that accepts a float between 0.0 and 1.0 should have no problems receiving a lambert outColor value in this particular setup.

So...

I think you nailed it. I can't speak to the specifics of your explanation, but this is clearly the result of some failed evaluation when working through the projection node. I had assumed that the lambert would be evaluated in the correct local/world space, seeing as it works fine in Maya software, but it clearly fails somewhere along the line in mental ray (which is obvious from that bizarre occlusion result I posted).

The real reason I found the problem was that I wanted to try the HalftoneDots.ma toon shading example from the Visor along with soft shadows, occlusion, and final gather, just to see what it would look like. Which it appears I cannot do.

C'est la vie.

Thanks for all your help on this, guys. I really appreciate it.