Well, I built a solution that should have worked, but for some reason Maya said nope! I’ll detail it for you so you have a general idea.

Basically, if you want to get the incidence of a surface relative to an arbitrary point, you want to evaluate the dot product between the surface normal and the vector pointing from the surface to that arbitrary position (the locator).
So, getting the vector that points from the surface to the position:
I used a layered texture. The vector that points between two points is the difference between the x, y, and z positions of the two objects. For the locator that’s easy: it’s just the translate x, y, and z values. For the surface you have to do it for every point on the surface, so you use a sampler info node and take its pointWorld attribute. Then, using the layered texture, you can subtract one from the other. If you apply this texture to a surface shader and assign the shader to your object, you will see some parts that are totally black (which just means the vectors are pointing in the negative x/y/z directions; that’s fine). Note that these vectors are not normalized, but that part is easy to figure out, so I’ll leave it to you. That’s basically it for this part.
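Here’s a minimal Python (maya.cmds) sketch of that first half of the network. All node and locator names are placeholders, and I’ve swapped the layered texture for a plusMinusAverage node set to Subtract, which does the same vector subtraction a bit more directly:

```python
import maya.cmds as cmds

# The locator whose position we measure the incidence against
# ('incidenceLocator' is just a placeholder name).
loc = cmds.spaceLocator(name='incidenceLocator')[0]

# samplerInfo provides per-sample surface data at render time.
info = cmds.shadingNode('samplerInfo', asUtility=True)

# plusMinusAverage set to Subtract computes (locator - surface point).
# The original setup used a layeredTexture for this step instead.
sub = cmds.shadingNode('plusMinusAverage', asUtility=True)
cmds.setAttr(sub + '.operation', 2)  # 2 = Subtract: input3D[0] - input3D[1]
cmds.connectAttr(loc + '.translate', sub + '.input3D[0]')
cmds.connectAttr(info + '.pointWorld', sub + '.input3D[1]')

# Preview the raw (unnormalized) vector through a surface shader.
shader = cmds.shadingNode('surfaceShader', asShader=True)
cmds.connectAttr(sub + '.output3D', shader + '.outColor')
```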
Getting the surface normal should be straightforward, but I think this is the part where the system breaks. The sampler info node gives you camera-space normals; what we need, however, is world-space normals, since our other vector is in world space. To convert, I used a vectorProduct node: connect the normalCamera attribute of the sampler info node into input1 of the vector product and leave input2 blank. To transform the vectors from camera space to world space we need the transform matrix that relates the two, and it just happens to be included in the sampler info node: matrixEyeToWorld is precisely this matrix (according to Google and other posts I’ve read). Connect that attribute to the matrix attribute of the vectorProduct node and set the operation to Vector Matrix Product. Now you have world-space normals.

The only thing left is taking the dot product of the two vectors we have calculated. This is again done with a vectorProduct node: plug the surface-to-locator vector into one input and the world-space surface normal into the other, and set the operation to Dot Product. Whether or not you normalized the vectors earlier, you can just check Normalize Output, and the incidence comes out normalized, i.e. between -1 and 1 (anything from -1 to 0 is facing away from the point and would not receive light, so this is fine).

But here’s the weird thing: it works in the viewport, but if you try to render with Maya Software or mental ray, the result is incorrect (I basically just plugged the output into a surface shader to see what happens).
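Continuing the sketch, here is the second half: the camera-to-world normal transform and the dot product. It assumes the nodes from the previous snippet (info, sub, shader) still exist; again, all names are placeholders:

```python
import maya.cmds as cmds

# vectorProduct #1: rotate normalCamera into world space using the
# eye-to-world matrix that samplerInfo already provides.
toWorld = cmds.shadingNode('vectorProduct', asUtility=True)
cmds.setAttr(toWorld + '.operation', 3)  # 3 = Vector Matrix Product
cmds.connectAttr(info + '.normalCamera', toWorld + '.input1')
cmds.connectAttr(info + '.matrixEyeToWorld', toWorld + '.matrix')
cmds.setAttr(toWorld + '.normalizeOutput', True)

# vectorProduct #2: dot the world-space normal with the surface-to-locator
# vector. With normalizeOutput on, the result falls in [-1, 1].
dot = cmds.shadingNode('vectorProduct', asUtility=True)
cmds.setAttr(dot + '.operation', 1)  # 1 = Dot Product
cmds.connectAttr(toWorld + '.output', dot + '.input1')
cmds.connectAttr(sub + '.output3D', dot + '.input2')
cmds.setAttr(dot + '.normalizeOutput', True)

# Feed the scalar incidence into the surface shader for inspection,
# replacing the preview connection from the previous snippet.
cmds.connectAttr(dot + '.output', shader + '.outColor', force=True)
```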
Maybe it will work if you’re not trying to render it directly; I don’t know. If it still fails, consider using the surfaceLuminance node instead: it factors in all lights and their incidence angles without looking at specularity.
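For completeness, a quick sketch of that fallback. surfaceLuminance outputs a single scalar (outValue), so it gets fanned out to all three color channels of a fresh surface shader:

```python
import maya.cmds as cmds

lum = cmds.shadingNode('surfaceLuminance', asUtility=True)
shader2 = cmds.shadingNode('surfaceShader', asShader=True)

# outValue is a scalar, so connect it to each color channel separately.
for ch in 'RGB':
    cmds.connectAttr(lum + '.outValue', shader2 + '.outColor' + ch)
```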