Object exported in crazy coordinates

03 March 2005, 07:39 PM
In my previous post, Bobo responded with an answer I think is about what I want to do now. My exporter is working (somewhat, still have a bug somewhere to find), but when I print out the values of the vertices stored in the file, they look like this:

650.325684, -1166.419922, 0.000000
650.325684, -578.594482, 0.000000
650.325684, -1166.419922, 161.909348
650.325684, -578.594482, 161.909348
0.000000, 0.000000, 0.000000
0.000000, 0.000000, 0.000000
0.000000, 0.000000, 0.000000
0.000000, 0.000000, 0.000000

That is supposed to be just a simple box. Is there a way I can get those coordinates to sort of center themselves? Right now when my object draws, it draws waaaay out in the middle of nowhere (in the program I've written), because those values are so far from the origin, when really they don't have to be.

Does that make sense? I can attempt to rephrase it if not. So I think I need to convert the values to something called object space (as Bobo put it). If so, what all does that mean/entail?

Thanks in advance!

03 March 2005, 09:00 PM
I know exactly what you mean :)

Here is a short explanation of coordinates in 3D (you are a programmer, but I have no idea how well you know the 3D world).

Basically, each geometry primitive in 3ds max has a base object that supplies some construction data. From the beginning, or somewhere along the modifier stack, this data may be converted to a TriMesh, containing vertices with LOCAL coordinates relative to the object's local center, and faces referencing them by index.

For example, the Sphere base object always creates its vertices at distances from the local center [0,0,0] equal to the radius of the sphere. If you created this sphere using MAXScript, it would end up at the world origin [0,0,0] (in most other 3D packages, this is what would happen to ANY object in any case, but Max lets you create objects anywhere by supplying the node transformation interactively - one more reason to love the program ;))
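A minimal MAXScript sketch of this (the radius and position values are just illustrative):

```maxscript
-- Created with no explicit position, the node lands at the world origin.
s1 = sphere radius:10
-- Same LOCAL geometry, but the node transform places it elsewhere in the world.
s2 = sphere radius:10 pos:[100,200,300]
```

In both cases the vertices in the base object's local space are identical; only the node transformation differs.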

All modifiers you add to the modifier stack like Bend, Taper etc. are so-called OBJECT SPACE MODIFIERS. They modify the geometry in its local space, before the object has actually been positioned in the scene. The gizmo and center sub-objects of these modifiers let you control how exactly they work, but in general, the position, rotation and scale of the final Sphere have no effect on how a Bend modifier works.

After all modifiers have been applied, the NODE TRANSFORMATIONS are being applied - typically, Position, Rotation and Scale controllers supply translation, rotation and scaling data that is combined into a single Transformation Matrix which is used to transform the complete object into world space. For example, if the position is [100,200,300], the scale is [1,1,1] and the rotation is [0,0,0], the sphere will be simply shifted from the world origin [0,0,0] to the world coordinates [100,200,300]. At this point, asking for the WORLD COORDINATE of a sphere's vertex will return a Point3 value which contains both the offset from the center of the sphere AND the offset of the sphere from the world origin.
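As a sketch of that last point (the object and values are hypothetical, matching the example above):

```maxscript
b = sphere radius:10 pos:[100,200,300]
localVert = getVert b.mesh 1        -- LOCAL coordinate from the TriMesh on top of the stack
worldVert = localVert * b.transform -- multiplied by the node matrix -> WORLD coordinate
-- worldVert now contains the vertex's offset from the sphere's center
-- PLUS the sphere's offset from the world origin.
```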

After the node transformations have been applied, the WORLD SPACE MODIFIERS (or Space Warps) are applied. These are modifiers that affect the geometry of the object in world space, so they are dependent on the values in the Position, Rotation and Scale controllers. If you create a Bend SpaceWarp and bind the sphere to it, the bend effect will vary as you move the sphere around the world!

The SnapshotAsMesh function I used in the example snapshots ALL these transformations, including the World Space Modifiers / Space Warps, AND the Node Transform. So the values you are reading are the FINAL positions of vertices in world space, not the local positions in the object's local space relative to its origin. This is a positive thing if you have Space Warps, but not as cool if you are exporting a single box. Note that if the position, rotation and scale of the imported object have been left at their defaults ([0,0,0], [0,0,0] and [1,1,1] respectively), the object will remain perfectly aligned to the original. In a way, the node transformations like position, rotation and scale are BAKED into the mesh, so if you have an animated box, you don't have to separately export any object transformation data - exporting each frame to a mesh file would be sufficient to capture the whole motion!
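For the animated case, a per-frame export loop might look like this sketch (the node name $Box01 is hypothetical, and the actual file writing is left as a comment):

```maxscript
for t = animationrange.start to animationrange.end do
    at time t
    (
        tmesh = snapshotAsMesh $Box01  -- world-space mesh at frame t, transforms baked in
        -- ...write the tmesh vertices/faces to your file format here...
        free tmesh                     -- release the TriMesh copy when done
    )
```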

Since you want the local coordinates, you could do one of two things - either grab the .mesh property, which returns the TriMesh on top of the stack BEFORE the Node Transformations and Space Warps (as long as you don't want them!), OR back-transform the vertices to local space by multiplying each coordinate by the inverse of the node transformation matrix:

temp = (getVert tmesh v) * inverse theObj.transform
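Putting both options together in a sketch (the node name is illustrative):

```maxscript
theObj = $Box01  -- hypothetical node name

-- Option 1: grab the TriMesh from the top of the stack,
-- BEFORE node transform and Space Warps are applied:
localMesh = theObj.mesh

-- Option 2: snapshot everything, then back-transform each vertex into local space:
worldMesh = snapshotAsMesh theObj
invTM = inverse theObj.transform
for v = 1 to worldMesh.numverts do
    print ((getVert worldMesh v) * invTM)
```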

Hope this helps!


03 March 2005, 09:29 PM
Thanks Bobo. It's getting closer! A couple of questions.

Does the getNormal function return the normals already normalized? The SDK doesn't really say. Anyway, if they are, I shouldn't have to apply the same inverse multiplications to them as well, right?

These are my new values for the vertices:

-169.583191, -158.146912, 0.000000
169.583191, -158.146912, 0.000000
-169.583191, 158.146912, 0.000000
169.583191, 158.146912, 0.000000
-169.583191, -158.146912, 73.868324
169.583191, -158.146912, 73.868324
-169.583191, 158.146912, 73.868324
169.583191, 158.146912, 73.868324

Are they messy like that because that is most likely the lowest common denominator of the original positions? I would prefer something like:

1.000, 3.5000, 0.000

You know? Just the respective lengths of the box. Perhaps that's just how big I've drawn the box in max...

Anyway it's definitely getting closer. My generic box seems to envelop the entire screen, and my cow object is getting drawn with huge patches missing, but again these two things could just be flaws in my program.

Thanks again for all your help, Bobo!

03 March 2005, 10:03 PM

Does the getNormal function return the normals already normalized? The SDK doesn't really say. Anyway, if they are, I shouldn't have to apply the same inverse multiplications to them as well, right?

The normals ARE normalized (in the sense of having a length of 1.0), BUT if you are reading them from a TriMesh taken using SnapshotAsMesh(), they will be pointing in WORLD space, so you would have to multiply them with the inverse of the node matrix.
In fact, it is a good idea to create an intermediate variable like theTM or something and store the node's inverse transformation in it to use for the multiplications, otherwise you are performing a matrix3 inverse for each vertex and normal, which might be slower...
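A sketch of that, with normals treated as directions (the node name is illustrative; note that a direction should only pick up the rotation part of the matrix, never the translation):

```maxscript
theObj = $Box01                   -- hypothetical node
tmesh = snapshotAsMesh theObj
invTM = inverse theObj.transform  -- computed ONCE, reused for every vertex below
invRot = invTM.rotationpart       -- rotation only - translation must not move a direction
for v = 1 to tmesh.numverts do
(
    localPos  = (getVert tmesh v) * invTM
    localNorm = normalize ((getNormal tmesh v) * invRot)
    format "% %\n" localPos localNorm
)
free tmesh
```

(If the node carries non-uniform scale, the normals would need the inverse-transpose treatment instead; for plain position and rotation the rotation part is enough.)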

03 March 2005, 03:54 PM
Ahh yes, that makes sense. I was thinking that since they're normalized, it doesn't matter that the shape is way out in the middle of nowhere. However, since the object is in its world state, it isn't necessarily just translated - any rotation would change the direction of the normal vectors.

Thanks, Bobo!

CGTalk Moderation
03 March 2005, 03:54 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.