# world space coordinate to image space

09 September 2012, zaskar (Dresden, Germany)

Hi guys, I've been struggling with a project for a while now, unfortunately with no success. I want to calculate the equivalent coordinate (x, y) in a camera's view rectangle for a world space coordinate (x, y, z). Until now I tried to figure out what all the cameraShape attributes like filmOffset, lensSqueezeRatio, preScale, postScale and so on mean, and to mix them all in a MEL script. So far it works partially, because some attribute combinations (for example lensSqueezeRatio other than 1.0 in conjunction with cameraScale other than 1.0 and filmOffsetV/H other than 0.0) just make the script fail: the world space point no longer matches the right pixel in the Render View image. Now I want to try MFnCamera.projectionMatrix in conjunction with the camera's transformation matrix to get this working, but I have no clue how to do that. I can read out the matrix values using some MEL-to-Python calls, but I'm stuck on how to use those values. Maybe someone has done this before and could give some hints? If so, thanks in advance; any suggestions would be greatly appreciated!
09 September 2012, skeelogy (Skeel Lee, FX TD, Singapore)

Hey zaskar,

Have you tried this script from Rob Bedrow? http://www.185vfx.com/resources/screenSpace.mel He wrote it quite a while back, but it should still work pretty well!

However, if you still want to use MFnCamera.projectionMatrix: just multiply the world space point by the camera's world inverse matrix, and then by the projection matrix:

```python
point = MPoint(worldPointX, worldPointY, worldPointZ)
projectedPoint = point * cameraInverseWorldMatrix * projectionMatrix
# after this, divide each component by z, so that you get (x/z, y/z, 1)
```

The way you transform points in Maya is by post-multiplication, i.e. you multiply the transformation matrices on the right-hand side, one by one, in sequence, as in the example above. You can get cameraInverseWorldMatrix using:

```python
# get the camera's DAG path
cameraDagPath = MDagPath()
selList = MSelectionList()
selList.add(camera)  # "camera" is the name of the camera, of type string
selList.getDagPath(0, cameraDagPath)

# get the camera's world inverse matrix
cameraInverseWorldMatrix = cameraDagPath.inclusiveMatrixInverse()
```

I don't have Maya with me now to test these snippets (and to be honest, I haven't used it in a while; I'm mostly using Houdini now). See if you can figure something out from these. If not, I might have to launch Maya at work and try to give you a more detailed answer. These snippets are Python, by the way, using Maya's Python API.

Skeel
http://cg.skeelogy.com/

Last edited by skeelogy : 09 September 2012 at 12:03 PM. Reason: providing more details
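The post-multiplication (row-vector) convention described above can be sketched without Maya at all. The following plain-Python snippet only illustrates the order of operations; the matrices are made-up placeholders, not real camera data:

```python
# Row-vector convention as used by Maya's API: a point is a 1x4 row
# vector multiplied on the LEFT of each 4x4 matrix, in sequence.
# Plain Python, no Maya required; matrices here are toy placeholders.

def vec4_times_mat4(pt, m):
    """Multiply a 4-component row vector by a 4x4 matrix (row-major)."""
    return [sum(pt[k] * m[k][i] for k in range(4)) for i in range(4)]

# identity as a stand-in for cameraInverseWorldMatrix
identity = [[1.0 if i == k else 0.0 for i in range(4)] for k in range(4)]

# a toy "projection" that scales x and y by 2 and copies z into w
toy_projection = [
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
]

point = [1.0, 2.0, 3.0, 1.0]                    # (x, y, z, w=1)
cam_space = vec4_times_mat4(point, identity)     # point * M1
projected = vec4_times_mat4(cam_space, toy_projection)  # ... * M2
print(projected)                                 # [2.0, 4.0, 3.0, 3.0]
```

The same chain with real data would use the camera's inclusive inverse matrix and MFnCamera's projection matrix in place of the placeholders.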
09 September 2012, zaskar (Dresden, Germany)

Thanks skeelogy, as soon as I get my hands on Maya again I will give it a try!
10 October 2012, zaskar (Dresden, Germany)

Hello again, I did have a look at Rob Bedrow's script. It transforms a world point into the camera's object space by post-multiplying it with the camera's inverse matrix, but then it only uses some angular calculations to get the pixel position. It doesn't involve the internal projection transformations using all the different film back manipulators. So I tried to multiply by the projection matrix, and this is what I scripted so far:

```
proc float[] cTtransformPoint(float $mtx[], float $pt[]) // multiply matrix with point
{
    float $res[] = {};
    $res[0] = $pt[0] * $mtx[0] + $pt[1] * $mtx[4] + $pt[2] * $mtx[8]  + $mtx[12];
    $res[1] = $pt[0] * $mtx[1] + $pt[1] * $mtx[5] + $pt[2] * $mtx[9]  + $mtx[13];
    $res[2] = $pt[0] * $mtx[2] + $pt[1] * $mtx[6] + $pt[2] * $mtx[10] + $mtx[14];
    return $res;
};

proc float[] cGetProjectionMatrix(string $shape) // get camera projection matrix
{
    float $res[] = {};
    if(`objExists $shape` && `nodeType $shape` == "camera"){
        python "import maya.OpenMaya as om";
        python "selList = om.MSelectionList()";
        python ("selList.add('" + $shape + "')");
        python "depNode = om.MObject()";
        python "selList.getDependNode(0, depNode)";
        python "camFn = om.MFnCamera(depNode)";
        python "pMtx = camFn.projectionMatrix()";
        for($i = 0; $i <= 3; $i++){
            for($k = 0; $k <= 3; $k++)
                $res[`size $res`] = `python ("pMtx(" + $i + ", " + $k + ")")`;
        };
    };
    return $res; // checked from inside the API, result ok
};

//--- create a scene with a cam and a locator; the locator is positioned
//--- to match the upper right corner of the camera's resolution gate at
//--- default resolution 640x480
file -f -new;
camera -fl 35 -hfa 1.41732 -vfa 0.94488 -ovr 1.5 -dr 1;
xform -ws -translation 3.778129 4.049427 0.776683;   // random position
xform -ws -rotation -18.217245 16.8 0;               // random rotation
spaceLocator -name "somePt";
xform -ws -translation 5.654142 4.606433 -11.286961; // upper right corner of view rect
//---
string $cam = "|camera1";
string $camShape = "|camera1|cameraShape1";
float $worldPt[] = `xform -q -ws -t "somePt"`;
float $cam_invMtx[] = `getAttr ($cam + ".worldInverseMatrix")`;
float $cam_projMtx[] = `cGetProjectionMatrix $camShape`;

// get camera object space coordinates of the world point
float $pointInCamSpace[] = `cTtransformPoint $cam_invMtx $worldPt`;
// result: 5.282721 3.96996 -10.280733

float $pointProjected[] = `cTtransformPoint $cam_projMtx $pointInCamSpace`; // this gets us what?
// result: 10.271957 10.292463 -10.08277

print($pointProjected[0] / $pointProjected[2] + " " + $pointProjected[1] / $pointProjected[2] + "\n");
// result: -1.018763452 -1.020797189

// setting camera1.filmTranslateH to .666 and camera1.filmTranslateV to .5 aligns
// the locator at the center of the view rectangle; the calculation then gives
// -0.3396873028 -0.3410429739
```

I think that I missed the point. Are these values normalized to the side length of the view rectangle? Then why are they negative? And shouldn't the centered point give values around .5 for each dimension?

Last edited by zaskar : 10 October 2012 at 09:55 AM. Reason: some errors in the script
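As an editorial aside: the projected values look like normalized device coordinates, which span [-1, 1] with the origin at the image centre rather than [0, 1], so a centred point lands near 0 rather than 0.5; the stray negative signs come from dividing by z (which is negative in front of a Maya camera, looking down -Z) instead of by w. A minimal Python sketch of the NDC-to-pixel mapping, assuming the 640x480 resolution of the test scene:

```python
# Hedged sketch: map normalized device coordinates (NDC) in [-1, 1],
# origin at the image centre, to pixel coordinates. The resolution
# defaults match the 640x480 test scene above; no Maya required.

def ndc_to_pixels(ndc_x, ndc_y, res_x=640, res_y=480):
    """Shift [-1, 1] to [0, 1], then scale by the image resolution."""
    return ((ndc_x / 2.0 + 0.5) * res_x,
            (ndc_y / 2.0 + 0.5) * res_y)

print(ndc_to_pixels(1.0, 1.0))    # (640.0, 480.0)  top-right corner
print(ndc_to_pixels(-1.0, -1.0))  # (0.0, 0.0)      bottom-left corner
print(ndc_to_pixels(0.0, 0.0))    # (320.0, 240.0)  image centre
```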
10 October 2012, zaskar (Dresden, Germany)

I think it works now. The problem was the missing w component when using MEL to query a point, and thus the shortened first matrix multiplication procedure. Using the API (or Python API) and an MPoint to catch the position implies the use of the fourth vector component, and the MPoint method for multiplying with an MMatrix also involves the w component, of course. Finally, the resulting x/y components have to be divided by the w component. The result is then a 2D coordinate with the center of the image as origin, ranging from (-1, -1) at the bottom-left corner to (1, 1) at the top-right. So some denormalization has to be done, multiplying by the current render globals width/height. In case somebody else has any use for it:

```
global proc float[] cTtransformPoint(float $mtx[], float $pt[]) // multiply 4x4 matrix with 4-vector
{
    float $res[] = {};
    if(`size $pt` == 3)
        $pt[3] = 1.0;
    for($i = 0; $i < 4; $i++){
        float $tmp = 0;
        for($k = 0; $k < 4; $k++){
            $tmp += $pt[$k] * $mtx[$k * 4 + $i];
        };
        $res[$i] = $tmp;
    };
    return $res;
};

global proc float[] cGetProjectionMatrix(string $shape) // get camera projection matrix
{
    float $res[] = {};
    if(`objExists $shape` && `nodeType $shape` == "camera"){
        python "import maya.OpenMaya as om";
        python "selList = om.MSelectionList()";
        python ("selList.add('" + $shape + "')");
        python "depNode = om.MObject()";
        python "selList.getDependNode(0, depNode)";
        python "camFn = om.MFnCamera(depNode)";
        python "pMtx = camFn.projectionMatrix()";
        for($i = 0; $i <= 3; $i++){
            for($k = 0; $k <= 3; $k++)
                $res[`size $res`] = `python ("pMtx(" + $i + ", " + $k + ")")`;
        };
    };
    return $res;
};

global proc float[] cWorldSpaceToImageSpace(string $camera, float $worldPt[])
{
    string $camShape[] = `ls -dag -type "camera" $camera`;
    if(!`size $camShape`)
        return {};
    string $cam[] = `listRelatives -p -f $camShape`;
    int $resX = `getAttr "defaultResolution.width"`;
    int $resY = `getAttr "defaultResolution.height"`;
    float $cam_inverseMatrix[] = `getAttr ($cam[0] + ".worldInverseMatrix")`;
    float $cam_projectionMatrix[] = `cGetProjectionMatrix $camShape[0]`;
    float $ptInCamSpace[] = `cTtransformPoint $cam_inverseMatrix $worldPt`;
    float $projectedPoint[] = `cTtransformPoint $cam_projectionMatrix $ptInCamSpace`;
    float $resultX = (($projectedPoint[0] / $projectedPoint[3]) / 2 + .5) * $resX;
    float $resultY = (($projectedPoint[1] / $projectedPoint[3]) / 2 + .5) * $resY;
    return {$resultX, $resultY};
};

//--- create a scene with a cam and a locator; the locator is positioned
//--- to match the upper right corner of the camera's resolution gate at
//--- default resolution 640x480
file -f -new;
camera -fl 35 -hfa 1.41732 -vfa 0.94488 -ovr 1.5 -dr 1;
xform -ws -translation 3.778129 4.049427 0.776683;   // random position
xform -ws -rotation -18.217245 16.8 0;               // random rotation
spaceLocator -name "somePt";
xform -ws -translation 5.654142 4.606433 -11.286961; // upper right corner of view rect
//---
cWorldSpaceToImageSpace "camera1" (`xform -q -ws -t "somePt"`);
// result: 639.726838 480.273826
```

Besides, there seems to be a bug when using orthographic cameras with a "cameraScale" attribute other than 1.0 in conjunction with mental ray as renderer. MR simply ignores the value until one of the film back attributes in the camera's Attribute Editor is changed, starting with "Pre Scale" as the first of them; the script then misses the right pixel. Perspective cameras seem to work fine, however.

Last edited by zaskar : 10 October 2012 at 07:18 PM. Reason: providing more information
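For readers without Maya at hand, the same pipeline can be sketched in plain Python: build an OpenGL-style perspective matrix in the row-vector convention Maya's API uses, multiply, divide by w, and map the [-1, 1] result to pixels. The field-of-view, near/far and resolution values below are illustrative assumptions, not the camera settings from the thread:

```python
# Maya-free sketch of camera-space -> pixel projection. Assumptions:
# symmetric frustum given by horizontal/vertical FOV, camera looking
# down -Z, row-vector convention (point * matrix), 640x480 image.
import math

def vec4_times_mat4(pt, m):
    """Multiply a 4-component row vector by a 4x4 matrix (row-major)."""
    return [sum(pt[k] * m[k][i] for k in range(4)) for i in range(4)]

def perspective(h_fov_deg, v_fov_deg, near=0.1, far=1000.0):
    """OpenGL-style perspective matrix, laid out for row vectors."""
    sx = 1.0 / math.tan(math.radians(h_fov_deg) / 2.0)
    sy = 1.0 / math.tan(math.radians(v_fov_deg) / 2.0)
    a = -(far + near) / (far - near)
    b = -2.0 * far * near / (far - near)
    return [[sx, 0.0, 0.0,  0.0],
            [0.0, sy, 0.0,  0.0],
            [0.0, 0.0, a,  -1.0],   # -1 in the last column puts -z into w
            [0.0, 0.0, b,   0.0]]

def camera_space_to_pixels(pt_cam, proj, res_x=640, res_y=480):
    """Project a camera-space (x, y, z) point to pixel coordinates."""
    p = vec4_times_mat4(pt_cam + [1.0], proj)  # homogeneous w = 1
    return ((p[0] / p[3] / 2.0 + 0.5) * res_x,  # divide by w, then
            (p[1] / p[3] / 2.0 + 0.5) * res_y)  # denormalize to pixels

proj = perspective(90.0, 90.0)
print(camera_space_to_pixels([0.0, 0.0, -10.0], proj))   # image centre: (320.0, 240.0)
print(camera_space_to_pixels([10.0, 10.0, -10.0], proj)) # ~ (640, 480), top-right corner
```

A world-space point would first be taken into camera space with the camera's world inverse matrix, exactly as cWorldSpaceToImageSpace does above.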
10 October 2012, CGTalk Moderation

Thread automatically closed. This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
