Don’t thank me yet. I already have one failed idea under my belt.
Anyway, the idea I had in mind was to find a way to constrain an object to a mesh, but use the mesh’s UV coordinates to control the constrained object’s position. Sort of like the built-in surface constraint, but for polygonal meshes.
To better illustrate my point, here’s what can already be done with the built-in surface constraint:
http://vimeo.com/6930811
If the same thing could be done with polygonal meshes, it would be a lot more flexible, since a polygon mesh can have multiple UV maps and you can lay them out however you please.
So if this hypothetical constraint existed, you would need to specify the constraining geometry and the UV map, and then just animate or position objects by changing the UV coordinates on the constraint.
That’s the idea I had in mind, but it’s not easy to implement. The biggest issue is mapping a point on the UV map to the corresponding position on the mesh’s surface. There’s a way to obtain the UV coordinates from the XYZ coordinates on a surface (sort of, at least), but not the other way around. At least not one that I could find in the SDK.
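For what it’s worth, here’s a rough, brute-force sketch of how the UV-to-XYZ lookup could work in plain Python, with no SDK calls and all names made up: find the triangle whose UV footprint contains the point, compute barycentric weights in UV space, and apply the same weights to the triangle’s 3D vertex positions. It assumes the triangles have already been pulled out of the mesh as ((p0, p1, p2), (t0, t1, t2)) tuples, where p* are 3D vertex positions and t* are the matching 2D UV coordinates.

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def barycentric(uv, t0, t1, t2):
    # Barycentric weights of `uv` with respect to the UV triangle (t0, t1, t2).
    e0, e1, e2 = _sub(t1, t0), _sub(t2, t0), _sub(uv, t0)
    d00, d01, d11 = _dot(e0, e0), _dot(e0, e1), _dot(e1, e1)
    d20, d21 = _dot(e2, e0), _dot(e2, e1)
    denom = d00 * d11 - d01 * d01
    if abs(denom) < 1e-12:  # degenerate UV triangle, skip it
        return None
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w)

def uv_to_position(uv, triangles, eps=1e-6):
    # Return the 3D point on the mesh that corresponds to `uv`, or None.
    for (p0, p1, p2), (t0, t1, t2) in triangles:
        weights = barycentric(uv, t0, t1, t2)
        if weights is None:
            continue
        a, b, c = weights
        if a >= -eps and b >= -eps and c >= -eps:  # uv falls inside this triangle
            return tuple(a * x + b * y + c * z for x, y, z in zip(p0, p1, p2))
    return None  # uv is outside the map, or in a gap between UV islands

A linear scan like this obviously wouldn’t be fast enough to evaluate a constraint every frame on a dense mesh; you’d want to bin the UV triangles into some kind of 2D grid or tree first. But it at least suggests the UV-to-surface mapping is doable in principle, even without direct support from the SDK.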
Thoughts? Ideas?