Aligning camera tracking point cloud to photogrammetry sourced model


I’m trying to find the neatest way to match a camera track from a video sequence to a real-world model generated from photogrammetry.

I’m currently using Boujou to create a solve. This gives me nodes and a camera track I can export to Maya or Blender. I then add a complex OBJ from the photogrammetry (scaled correctly), but I’m finding it very difficult and time-consuming to match the two manually by eye.
Is there a better solution or method I’m not aware of?


1. Identify a point in your point cloud that corresponds to an identifiable vertex on your model. Mark that point somehow: recolor it, change its shape or size, snap a small sphere to it, whatever.
2. Make a locator and snap its location to the vertex on your geometry, then parent the geo under the locator.
3. Move the locator to the position of the designated point in your point cloud. Now you have a pivot point where both the model and the point cloud are in registration.
4. You should be able to finish solving the geometry’s location by rotating the locator around that pivot.
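If you'd rather not eyeball the rotation either, you can compute the whole rigid transform at once from three or more matched point/vertex pairs using the Kabsch algorithm, then apply it to the geometry with a script. Here's a minimal numpy sketch (the point names and example coordinates are made up for illustration; in practice you'd read the matched positions out of Maya or Blender):

```python
import numpy as np

def kabsch_align(src, dst):
    """Return (R, t) such that R @ p + t maps src points onto dst points.

    src, dst: (N, 3) arrays of corresponding points, N >= 3 and
    not all collinear.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection sneaking into the solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical example: three model vertices and the matching
# positions of the same features in the tracker's point cloud.
model_pts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0]])
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
cloud_pts = model_pts @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = kabsch_align(model_pts, cloud_pts)
aligned = model_pts @ R.T + t
print(np.allclose(aligned, cloud_pts))  # True
```

Since your OBJ is already scaled correctly, rotation plus translation is all you need; if scale were also unknown you'd use the Umeyama variant, which additionally estimates a uniform scale factor. Picking points that are well spread out (not nearly collinear) keeps the solve stable.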