Getting the "incidence angle" of a locator relative to a mesh



Hopefully I can get some creative minds to help me here.

Maybe the samplerInfo node is the answer; however, I'm unsure how…

I would like a toon shader where a locator, rather than a light, determines where the lighter areas of the shader fall.

Think of a point light: it doesn't need a forward direction or an up direction; the effect is based purely on a center point (here, the locator).

Essentially it is a way to force a light-incidence effect based on a locator, not a light.

Don't worry about the reasons or situation, this is strictly a theory-crafting exercise. :smiley:

I'll keep doing my own research, but if anybody has an idea it would be much appreciated.


Well… you could set up a locator, parent a point light under it, then light-link it to the object so it only contributes specular… maybe?


I'm trying to avoid using any lights at all.

You know how you can hook a surfaceLuminance node into the color of a surfaceShader and it pretty much becomes a Lambert?

Well, I'm looking for a way to simulate that surfaceLuminance node, but measuring against a locator, not a light.


Plug the translate of your locator into the rayDirection of a samplerInfo node.
Plug the facingRatio of the samplerInfo node into the vCoord of a ramp.
Plug the ramp into the surfaceShader's color.
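A rough stand-in for what that network computes, in plain Python (not Maya code; the exact facingRatio formula here is an assumption based on how the node behaves, and the clamp mirrors the 0–1 range the ramp's vCoord expects):

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def facing_ratio(surface_normal, ray_direction):
    """Roughly what samplerInfo.facingRatio yields once the locator's
    translate is wired into rayDirection: the dot product of the surface
    normal and the negated, normalized ray direction."""
    n = normalize(surface_normal)
    d = normalize(ray_direction)
    dot = n[0] * d[0] + n[1] * d[1] + n[2] * d[2]
    # the ramp's vCoord expects a 0..1 value, so clamp the negative half
    return max(0.0, -dot)
```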



That's a good start: I can control the ramp with the locator. However, the viewing camera is still being factored into the shader; this setup is not independent of the camera's facing ratio.

For example: I can move the locator to any arbitrary position and the ramp will adjust to follow the locator.
But if I move the camera to a different side of the mesh and then barely move the locator, the entire shader changes its appearance to compensate for the moved camera.


Well, I built a solution that should have worked, but for some reason, Maya said nope! I’ll detail it for you just so you have a general idea.

Basically, if you want to get the incidence for a surface relative to an arbitrary point, you'll want to evaluate the dot product between the surface normal and the vector pointing from the surface to that arbitrary position (the locator).
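That dot-product incidence can be sketched in plain Python (a stand-in for the node math; the function names are mine, not Maya's):

```python
import math

def normalize(v):
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def incidence(surface_point, surface_normal, locator_pos):
    # vector pointing from the surface point to the locator
    to_locator = tuple(l - p for l, p in zip(locator_pos, surface_point))
    n = normalize(surface_normal)
    d = normalize(to_locator)
    # dot product: 1 when the surface faces the locator head-on,
    # 0 edge-on, negative when facing away
    return n[0] * d[0] + n[1] * d[1] + n[2] * d[2]
```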

So, getting the vector that points from the surface to the position:

I used a layeredTexture. The vector between two points is the difference of their x, y, and z positions. For the locator that's easy: it's just the translate X, Y, and Z values. For the surface you have to do it for each point on the surface, so use a samplerInfo node and its pointWorld attribute. The layeredTexture can then subtract one from the other.

If you apply this texture to a surfaceShader and assign the shader to your object, you will see some parts that are totally black (which just means those vectors point in the negative x/y/z directions. Fine.). Note that these vectors are not normalized, but that part is easy to figure out, so I'll leave it to you. That's basically it for this part.
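The subtraction step, sketched in plain Python (a stand-in for the layeredTexture math; the "preview" clamp is only there to illustrate why unnormalized difference vectors render as black in places):

```python
import math

def surface_to_locator(point_world, locator_translate):
    # per-channel subtraction, as the layeredTexture performs it
    diff = tuple(l - p for l, p in zip(locator_translate, point_world))
    # viewed as a color, negative channels display as 0, which is why
    # some regions of the surfaceShader preview render pure black
    preview = tuple(max(0.0, c) for c in diff)
    # normalize before feeding the vector into a dot product
    mag = math.sqrt(sum(c * c for c in diff)) or 1.0
    direction = tuple(c / mag for c in diff)
    return direction, preview
```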

Getting the surface normal should be straightforward, but I think this is the part where the system breaks. The samplerInfo node gives you camera-space normals; what we need, however, is world-space normals, since our other vector is in world space. To convert, I used a vectorProduct node: connect the samplerInfo node's normalCamera attribute into input1 of the vectorProduct and leave input2 blank. To transform the vectors from camera space to world space, we need the transform matrix that relates the two, and it just happens to be included in the samplerInfo node: matrixEyeToWorld is precisely this matrix (according to Google and other posts I've read). Connect that attribute to the matrix attribute of the vectorProduct node and set the operation to Vector Matrix Product. Now you have world-space normals.

The only thing left should just be taking the dot product of the two vectors we have calculated, which is again done with a vectorProduct node: plug the surface-to-locator vector into one input and the world-space normals into the other, and set the operation to Dot Product. If you haven't normalized the vectors (or even if you have, probably), you should just be able to check Normalize Output and have the incidence come out normalized (between -1 and 1; anything from -1 to 0 is facing away from the point and would not receive light, so this is fine).

But the weird thing: it works in the viewport, but if you try to render with Maya Software or mental ray, the result is incorrect (I basically just plugged the output into a surfaceShader to see what happens).
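The camera-to-world conversion and the final dot product, sketched in plain Python (a stand-in for the two vectorProduct nodes, under the assumption that Maya's matrices are row-major with translation in the fourth row, which directions ignore):

```python
import math

def vector_matrix_product(v, m):
    # vectorProduct's "Vector Matrix Product": treat v as a row vector
    # with w = 0, so only the upper 3x3 of the 4x4 matrix applies and
    # the translation row is ignored (correct for directions/normals)
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

def dot_normalized(a, b):
    # vectorProduct's "Dot Product" with Normalize Output checked
    def norm(v):
        mag = math.sqrt(sum(c * c for c in v)) or 1.0
        return tuple(c / mag for c in v)
    a, b = norm(a), norm(b)
    return sum(x * y for x, y in zip(a, b))
```

With a 90-degree rotation as the eye-to-world matrix, a camera-space normal pointing down +Z maps to world +X, and the translation row has no effect.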

Maybe if you’re not trying to render it directly it will work, I don’t know. If this still fails, consider using the surface luminance node. This would factor in all lights and incidences, without looking at specularity.



I have not tried it yet, but I just wanted to say first that this is exactly the kind of response I was looking for.
Thanks for breaking it down fully.

Now to go test it.


Alright, I tried it out. It renders just fine in Maya Software, the default viewport, the high-quality viewport, and mental ray; not in V-Ray or Viewport 2.0.

These results will work just fine. Thank you for your great math.


Really? That's interesting. Keep in mind you could bake these to file textures if you wanted to use them with V-Ray, etc. It didn't work for me in Maya Software or mental ray, but that might be my version of Maya (2013 SP2).

And no problem- being a Physics major definitely helps with technical things like this. :smiley:


Physics major, you say? Awesome. That's an area I'd like to research more myself once I've had some fun doing art for a while.

Yeah, I know baking the texture would work, but the pipeline workflow I need this for has to remain modular… baking kind of linearizes things… but if it has to be done, it has to be done.

Thanks again for the great help.

