View Full Version : RSL:Point Space -> UVW lookup help
10 October 2006, 03:01 PM
I need someone to please explain to me how space conversion works in RSL when you want a point to look up its uvw location for a procedural noise.
I am stuck in my ray marcher when it comes to the noise: I want it to "stick" with a point. I know I am asking a lot by wanting an in-depth explanation, but if someone could point me to some reading material it would really help.
Maya fluid has that sticky noise property and I want the same for my shadeop, and the 3d-space-to-uvw conversion is what I am not getting.
Also, it would be nice if someone could explain what uvw IS... I mean, is it xyz? Or uv, with w being some varying height value? And for a fluid box, how does that convert? The API simply gives a vague definition of some sort of texture coordinates...
Thanks in advance.
10 October 2006, 01:25 PM
I'm not particularly familiar with maya fluids, so I couldn't tell you exactly what it's doing...
uvw is just a mapping from P to some other space, you can call it anything you like, and define it in any way you like!
Getting noise to "stick" with a volumetric effect is quite difficult. My guess is Maya fluid is probably diffusing the initial P coordinates through the fluid sim as it evolves. If you want to replicate that behaviour, your best bet is probably to see if you can export that data in a format which you can read in the shader.
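To make that guess concrete, here is a minimal Python sketch (not Maya's actual implementation — every name here is illustrative) of how a coordinate field initialised to the grid's rest positions could be convected through a velocity field with a semi-Lagrangian backtrace, which is one common way to get "sticky" coordinates:

```python
# Hypothetical sketch: initialise a uvw field to the grid's rest
# positions, then advect it through the fluid's velocity field each
# step so the coordinates follow the flow. 1D for brevity.

def advect_coords(uvw, vel, dt, dx):
    """Semi-Lagrangian advection of a 1D coordinate field.

    uvw: coordinate values stored at cell centres
    vel: velocities at the same centres (grid units per second)
    """
    n = len(uvw)
    out = []
    for i in range(n):
        # Backtrace: where did the material now at cell i come from?
        x = i - vel[i] * dt / dx
        x = max(0.0, min(n - 1.000001, x))  # clamp to the grid
        i0 = int(x)
        f = x - i0
        # Linearly interpolate the old coordinate value there
        out.append(uvw[i0] * (1.0 - f) + uvw[i0 + 1] * f)
    return out

# At t = 0 the coordinates are just the rest positions 0..1
uvw = [i / 7.0 for i in range(8)]
vel = [1.0] * 8                       # uniform flow: one cell per step
uvw = advect_coords(uvw, vel, dt=1.0, dx=1.0)
```

After one step the coordinate pattern has shifted one cell downstream, i.e. it moves with the fluid — which is exactly the behaviour that makes noise evaluated at uvw appear to stick.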
10 October 2006, 04:39 PM
Yes, right now the shadeop is doing just that. I bake out the density and velocity data. The shader passes the point P to the shadeop and I get whatever information I need at that point.
The Maya API DOES allow you to retrieve this so-called uvw mapping:
"MFnFluid::getCoordinates( float *& u, float *& v, float *& w )"
Implementing the ray marcher shader given in the Advanced RenderMan book, you usually pass the same point in different spaces: current space and object space for the volume light function, and object space for the volume density function.
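For anyone following along, the overall shape of that kind of ray marcher can be sketched in plain Python (a toy stand-in for the RSL/shadeop version, not the book's actual code; density() is a hypothetical callable):

```python
import math

# Minimal sketch of a ray marcher accumulating opacity through a
# volume, in the spirit of the Advanced RenderMan example. The
# density() callable stands in for the shadeop lookup.

def ray_march(origin, direction, length, step, density):
    """March from origin along direction, accumulating opacity."""
    accum = 0.0
    t = 0.5 * step                    # sample at segment midpoints
    while t < length:
        p = tuple(o + dc * t for o, dc in zip(origin, direction))
        dens = density(p)             # object-space lookup at p
        # Beer's-law style compositing over this segment
        accum += (1.0 - accum) * (1.0 - math.exp(-dens * step))
        t += step
    return accum

# Toy density: constant 1.0 inside the unit slab along z
dens = lambda p: 1.0 if 0.0 <= p[2] <= 1.0 else 0.0
opacity = ray_march((0, 0, 0), (0, 0, 1), 1.0, 0.1, dens)
```

With constant density 1 over a unit-length march this converges to 1 - exp(-1), which is a handy sanity check for a marcher.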
I am just thinking out loud here, but if that point were somehow converted to uvw space, then for every shading point P you would end up with the same noise-density value, because Maya is keeping track of the uvw distortion. But now the question becomes: how does one convert object space to uvw? RSL has no such definition except for uv, so I am guessing there is a formula, or code, or an explanation out there to do the conversion...
10 October 2006, 04:48 PM
The transformation to uvw is completely arbitrary: it could be anything, and is defined to be whatever Maya wants it to be. As I suggested, it may well be the initial [0,1) positions of the cells in the grid, convected to follow the flow, but it could be something else.
I'd get the uvw info as a grid using that call you quoted, and look it up in a shadeop using P.
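As a sketch of what that shadeop lookup might do — 1D for brevity, where the real thing would trilinearly interpolate the three baked channels over eight cell corners — assuming P has already been mapped into the fluid's normalised local space:

```python
# Hypothetical shadeop-style lookup: given a baked coordinate grid and
# a normalised position in [0,1], linearly interpolate the stored
# value. All names are illustrative.

def lookup(grid, x):
    """Linearly interpolate a list of cell values at x in [0,1]."""
    n = len(grid)
    fx = min(max(x, 0.0), 1.0) * (n - 1)   # clamp, scale to cells
    i = min(int(fx), n - 2)                # lower cell index
    f = fx - i                             # fractional offset
    return grid[i] * (1.0 - f) + grid[i + 1] * f

u_grid = [0.0, 0.25, 0.5, 0.75, 1.0]   # a baked u channel, 5 cells
u = lookup(u_grid, 0.3)                 # interpolated u at P.x = 0.3
```

In the shader you would do this once per channel (u, v, w) per sample point and assemble the result into a vector.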
10 October 2006, 05:26 PM
Thanks playmesumch00ns for that quick reply.
But that brings me to the main question: how do I do the conversion? My shadeop can only retrieve data for a given point P in the voxel grid. It doesn't know what kind of data it is dealing with, only that it is a float or a vector.
Say in my ray-marching loop, for P, I get the uvw vector value from my shadeop. Now what? This is where I am stuck. RSL has no conversion where I can go like:
transform("object", "uvw", P); Do I multiply the uvw vector with the point? Do I simply pass that uvw to the fbm noise function? You mention the conversion is arbitrary and I agree; who knows, maybe Maya's uvw is completely opposite to RSL's, but I have no way of getting to that stage when I have no way to convert it into something RenderMan understands.
I apologise if my question is really stupid; maybe I am missing something that is terribly obvious.
10 October 2006, 09:38 AM
You're not being stupid, this kind of stuff can be difficult to get your head around :)
Your second guess is correct: you just use the uvw value straight in the noise function. Simply by looking up that uvw value using P, you have effectively done the transform( "uvw", P ).
Think about it like 2d texture mapping. What's really going on there? You're defining another coordinate system based on P, by attaching points in that 2d space to specific points on your 3d object, and interpolating in between. You're effectively transforming P to a 2d coordinate system, (s,t).
That transform isn't defined "procedurally", like a world->object or camera->ndc transform with a single matrix. It's defined by explicitly parameterising your geometry. This is the bit Maya has already done for you in its fluid internals: you're just grabbing the result and using it, in exactly the same way as Maya already sets up all the texture-coordinate stuff for you and you just grab the result as s and t to do texture mapping.
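Putting the two halves together, the pattern is: look up the advected uvw at P, then evaluate the noise at uvw rather than at P. A toy Python sketch of that pattern (vnoise/fbm here are home-made stand-ins, not RSL's built-in noise()/fbm, and lookup_uvw is a hypothetical grid lookup):

```python
import math

def vnoise(x):
    """Toy 1D value noise: smooth blend of per-integer hash values."""
    i = int(math.floor(x))
    f = x - math.floor(x)
    h = lambda n: (math.sin(n * 12.9898) * 43758.5453) % 1.0
    s = f * f * (3.0 - 2.0 * f)          # smoothstep weight
    return h(i) * (1.0 - s) + h(i + 1) * s

def fbm(x, octaves=4):
    """Sum of octaves of vnoise with halving amplitude."""
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * vnoise(x * freq)
        amp *= 0.5
        freq *= 2.0
    return total

def sticky_density(p, lookup_uvw):
    """Noise evaluated at the advected coordinate, not at p itself."""
    return fbm(lookup_uvw(p) * 8.0)
```

The "sticky" property falls out for free: if the fluid carries a point downstream, its uvw value travels with it, so the noise evaluated there is unchanged.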
10 October 2006, 01:43 AM
Thanks for that, it makes much more sense now. I will try to implement it and I think this will work.