A little vector maths help, please !!!


Hi there,

I am attempting to write an expression which puts motion vectors into rgbPP for regular Maya particles.

I got as far as this:

float $mult = 0.5;

vector $vel = particleShape1.velocity;
float $xvel = $vel.x;
float $yvel = $vel.y;
float $zvel = $vel.z;

// red = x velocity, green = y velocity
particleShape1.rgbPP = <<$mult * $xvel, $mult * $yvel, 0>>;


which does indeed do the job, just as long as the camera is pointing exactly down the z-axis. Fine, but of limited use. Now I need a bit of help with some vector transformation maths.
Ideally I want to get the motion vectors to work from my camera, whatever direction it is pointing.
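To see why copying world-space velocity straight into rgbPP only works with the camera pointing down the z-axis, here is a plain-Python sketch (illustrative only, not Maya code) of what a camera rotation does to a world-space vector:

```python
import math

# Rotating the camera 90 degrees about y swaps the roles of the
# x and z components of a world-space vector, so world x/y no
# longer match what you see on screen.

def to_camera_space(v, yaw_radians):
    """Rotate a world-space (x, y, z) vector into the frame of a
    camera yawed about the world y-axis."""
    c, s = math.cos(yaw_radians), math.sin(yaw_radians)
    x, y, z = v
    return (c * x - s * z, y, s * x + c * z)

world_vel = (1.0, 0.0, 0.0)          # particle moving along world +x

# Camera looking down the z-axis: screen x/y match world x/y.
print(to_camera_space(world_vel, 0.0))          # (1.0, 0.0, 0.0)

# Camera yawed 90 degrees: the same motion now appears along z,
# so copying world x/y into rgbPP would be wrong.
print(to_camera_space(world_vel, math.pi / 2))  # ~ (0.0, 0.0, 1.0)
```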

I think I can get the unit vector to the camera:

vector $camPos=`getAttr camera1.translate`;
float $xCamPos = $camPos.x;
float $yCamPos = $camPos.y;
float $zCamPos = $camPos.z;
vector $camUnitAngle = unit(<<$xCamPos, $yCamPos, $zCamPos>>);
float $xCamUnitAngle=$camUnitAngle.x;
float $yCamUnitAngle=$camUnitAngle.y;
float $zCamUnitAngle=$camUnitAngle.z;

(is that right???)

I think I need to multiply the inverse of this unit vector by the particle’s velocity but I am not sure how to do that. (I should have paid more attention to my maths teacher in that hot summer of 1976…) Any help would be gratefully received !

I should also let you know that I have no Python at all (yet)




I didn’t understand what you are trying to do. Are you trying to project 3D vectors to the image plane (viewed from the camera)? If this is the case, then you should use the inclusive matrix inverse of the camera, and the projection matrix.


Hi Zoharl,

Thanks for your message.

Basically what I am trying to do is to re-create the mv2DToxik motion vectors from scratch and stick them into the red and green channels of the particle’s rgbPP. This is so that I can render out motion vectors without having to instance an object to the particle.

Here are my results:


What I am missing is a way to get the particle velocity in ‘screen space’.

Is there a way to ‘transform’ the velocity of a particle by the camera position or aim direction or some vector operation or matrix operation (which I have little understanding of) so that I can get the velocity of the particle ‘relative to the camera’ ?

Thank you for your suggestion. I tried to access the worldInverseMatrix but it looks like it has 16 components. How can I use that ?

Thanks for your help, it is appreciated very much


Sorry, but I’m not familiar with ‘mv2DToxik motion vectors’, so if you would be able to define your problem more abstractly (math / physics) maybe I could be more helpful.

But I think if you take two points in 3D space and convert them to image space (screen coordinates), you’ll be able to convert your vectors as you wanted.

In graphics we work with homogeneous coordinates, which means our matrices are 4x4 and we convert our 3D vectors to 4D by adding a 1 at the end of the vector:

(x,y,z) -> (x,y,z,w=1)

To convert a 4D vector back to 3D, you’ll need to divide by this last coordinate:

(x,y,z,w) -> (x/w, y/w, z/w)
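A minimal plain-Python sketch (no Maya) of this homogeneous-coordinate round trip, appending w = 1 to go to 4D and dividing by w to come back:

```python
# Homogeneous round trip: (x,y,z) -> (x,y,z,1), then divide by w.

def to_homogeneous(p):
    x, y, z = p
    return (x, y, z, 1.0)

def from_homogeneous(p4):
    x, y, z, w = p4
    return (x / w, y / w, z / w)

p = (2.0, 4.0, 6.0)
print(to_homogeneous(p))                        # (2.0, 4.0, 6.0, 1.0)
print(from_homogeneous(to_homogeneous(p)))      # (2.0, 4.0, 6.0)

# After a projection-matrix multiply, w is generally no longer 1,
# and the divide is what produces the perspective foreshortening:
print(from_homogeneous((2.0, 4.0, 6.0, 2.0)))   # (1.0, 2.0, 3.0)
```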

For accurate behaviour the worldInverseMatrix won’t be enough; you’ll also need the projection matrix. Needless to say, in Python your life would be much easier. Anyway, see if the following links help you:



This MEL script might be handy for converting a 3D point into screen space (I use the procedures in it whenever I need screen-space stuff)…


The thing to do would be for each particle to query its 2D position at the start and end of each frame. Once you have that screen-space vector, calculate its length; that should give you what you want. Although I’d be tempted to say doing this over lots of particles is going to be slow in MEL.
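This two-sample idea can be sketched in plain Python, with a toy pinhole projection standing in for the screen-space conversion procedures (the `project` function here is a hypothetical stand-in, not the actual Maya code):

```python
import math

# Screen-space motion vector from two camera-space samples of a
# particle, using a simple pinhole projection as a stand-in.

def project(p, focal_length=1.0):
    """Project a camera-space point onto the image plane.
    Maya cameras look down -z, hence the division by -z."""
    x, y, z = p
    return (focal_length * x / -z, focal_length * y / -z)

def screen_motion(p_start, p_end):
    """2D motion vector and its length between two samples."""
    x0, y0 = project(p_start)
    x1, y1 = project(p_end)
    dx, dy = x1 - x0, y1 - y0
    return (dx, dy), math.hypot(dx, dy)

# Particle moving one unit in x over the frame, 5 units in front
# of the camera:
vec, length = screen_motion((0.0, 0.0, -5.0), (1.0, 0.0, -5.0))
print(vec, length)   # (0.2, 0.0) 0.2
```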


Hi Zoharl,

Thanks for your speedy reply!

As I am trying to do this co-ordinate conversion in a MEL expression, I don’t know if Python would be much use anyway.

So, if I have a particle with worldVelocity = << Wx, Wy, Wz>>
to convert that to image space I need to, er, make that into a 1x4 matrix,
then multiply that by the camera’s WorldInverseMatrix, and then…
um… oh, just take the first three values of the resulting matrix
and that should be the ScreenSpace velocity vector.

Here goes…

float $mult = 0.5;

// get the particle's world-space velocity
vector $vel = particleShape1.worldVelocity;
float $xVel = $vel.x;
float $yVel = $vel.y;
float $zVel = $vel.z;

// build a 1x4 row matrix from the velocity;
// w = 0 because a velocity is a direction, not a point,
// so the camera's translation must not be added to it
matrix $WSvel[1][4] = <<$xVel, $yVel, $zVel, 0>>;

// get the camera's world inverse matrix (16 floats -> 4x4)
float $v[] = `getAttr camera1.worldInverseMatrix`;
matrix $camWIM[4][4] = << $v[ 0], $v[ 1], $v[ 2], $v[ 3];
                          $v[ 4], $v[ 5], $v[ 6], $v[ 7];
                          $v[ 8], $v[ 9], $v[10], $v[11];
                          $v[12], $v[13], $v[14], $v[15] >>;

// multiply the velocity row matrix by the camera's world inverse
// matrix to get the velocity in camera space
matrix $SSvel[1][4] = $WSvel * $camWIM;

// convert that matrix back into a vector and scale it
vector $result = <<$SSvel[0][0], $SSvel[0][1], $SSvel[0][2]>>;
float $xResult = $mult * $result.x;
float $yResult = $mult * $result.y;
float $zResult = $mult * $result.z;

// red = x, green = y, as for mv2DToxik-style motion vectors
particleShape1.rgbPP = <<$xResult, $yResult, 0>>;
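For anyone checking the math, here is a plain-Python sketch of this row-vector multiply (the matrix values are made-up example data, not from Maya). One subtlety: a point takes w = 1, while a direction such as a velocity takes w = 0, so the camera's translation does not leak into the result:

```python
# Multiply a 1x4 row vector by a 4x4 matrix, MEL-style
# (result = rowVec * matrix).

def mul_row_vec4(v, m):
    """Row-vector times 4x4 matrix (m is a list of 4 rows)."""
    return tuple(sum(v[k] * m[k][j] for k in range(4)) for j in range(4))

# Example world-inverse matrix of a camera translated to (0, 0, 10):
# the inverse carries translation (0, 0, -10) in its last row.
cam_wim = [
    [1.0, 0.0,   0.0, 0.0],
    [0.0, 1.0,   0.0, 0.0],
    [0.0, 0.0,   1.0, 0.0],
    [0.0, 0.0, -10.0, 1.0],
]

world_vel = (1.0, 2.0, 0.0)

# Direction (w = 0): the translation row has no effect.
print(mul_row_vec4((*world_vel, 0.0), cam_wim))  # (1.0, 2.0, 0.0, 0.0)

# Point (w = 1): the translation row shifts the result.
print(mul_row_vec4((*world_vel, 1.0), cam_wim))  # (1.0, 2.0, -10.0, 1.0)
```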


So far so good. It looks like it is working.


Thanks again, Zoharl, you have been very helpful indeed !


Hi Earlyworm,

Thank you for your reply. I’ve been staring at 185vfx.com for quite some time now and I think some of it is seeping in. It does indeed look like their code is the key to solving my problem.

Thanks for the tip and yes, I have one million particles so it will take some computing grunt to get through them all.

Cheers !



Just a note for future reference: you can execute the python() command from MEL, and thus run Python code. This is one way to use Python from expressions.

