Accuracy in Camera Match & Tracking: 2 Part Problem


I need to do what should be a fairly simple motion tracking job, though I’m not entirely sure what the smartest approach would be in my particular case. It’s a two-fold problem.

Part 1: I’d like to do a highly accurate camera match to a segment of video where the camera is not moving. I have measured markers on my background plate, what I believe to be correct camera info (lens focal length, image sensor dimensions), and a good approximation of the distance from the camera to the subject’s focal plane.

Part 2: I need to track a moving object in the video and match an exact CG replica of that same object in a highly precise way. The object in the video was derived directly from the CAD model I am trying to match-move, so theoretically I should be able to get a very accurate match. I’ve even gone so far as to put tiny tracker points on the object before the video was shot, and of course took measurements of the distances between those tracker points.
To clarify further, here’s a link to the actual background plate video. On the left side, you’ll see some 5mm black dots stuck to the scrim (for initial camera match) and on right side, tiny black pen-point dots on the white handle of the product object.

Regarding Part 1, I’ve already taken a stab at a camera match using the Camera Calibrator tag and its plane helper object. I’ve used this toolset on many projects, but I never seem to get particularly accurate results, and this time is no different. I ran through the typical process, and the calculated results looked somewhat accurate in terms of the angle of the reference plane to the camera, but still a little off in the following ways:

  • After running the Camera Calibrator, the dimensions of my track-marked rectangle in the photography should match exactly the dimensions of a CG rectangle I created, but they’re off by a visible amount, particularly in the horizontal dimension (see Image 1).
  • The Camera Calibrator yields a focal length and sensor size that don’t coincide with my real-world input. (The focal length should be 50mm and the sensor size 23.5mm; instead, I get 80.71mm and 36mm, respectively. See Images 1 & 2 side by side.)

After seeing this, I cloned the camera, deleted the tag on the cloned camera, corrected the focal length and sensor size settings, then moved this new camera along its local axis vector to see if my reference geo would line up any better with the markers in the photography. The results are similar, but similarly imperfect, so it’s making me wonder which is more physically accurate: the camera whose settings were created by the Calibrator, or the one whose settings and position I dialed in manually. I’m wondering if anyone out there has dealt with the same little conundrum and how you might’ve resolved it. The camera I shot with was a Nikon D7100.

I also wonder whether having all the real-world dimensional info is more a hindrance than a benefit. Assuming I do manage to get highly accurate settings and transform info for the camera itself, will the Motion Tracker toolset be able to give me anything more than an approximation of moving planes? Will having exacting dimensional data for the product I’m trying to track actually help me get a highly accurate match?
This is quite long-winded, but I do hope someone can give some pointers, either on specifics, on general approach, or both.
Thanks ahead of time to all responders!


A couple of points. First of all, this is NOT “a fairly simple motion tracking job”; rather, it’s a very difficult and maybe impossible one :slight_smile:

If the camera is not moving, you don’t need to track it. Also, you have only 4 points on a plane, so there’s not enough data to get a very accurate calibration.
For a proper object track you need a minimum of 7 or 8 tracking points, not 4.
The object motion is quite subtle, and there’s not much 3D parallax or rotation to get any useful 3D information for the object.

Overall, I think this is more a job for a planar tracker rather than a 3D tracker. Unfortunately, I’m not very well versed in planar tracking (Mocha, etc.).

I’d be curious to hear other opinions as well.


NoseMan is right: without camera movement, no tracking software out there can do a true 3D solve of a scene. Parallax is required for that, and a fixed camera doesn’t provide any. You’ll need to track lots of points and do a lot of manual hacking.

It might be surprising, but a shaky moving camera is an easier tracking job than a tripod shot, be that with C4D, SynthEyes, or other tracking tools.



As Noseman says, this is an object tracking problem rather than a camera tracking problem. I’ve not used the C4D built-in camera trackers, so I’m not sure whether they can do it. In the past I’ve used the dedicated tracking program SynthEyes to successfully track objects with both fixed and moving cameras. (I’m on quite an old version, as I don’t use it enough to justify upgrading, but even the old version can track objects pretty well, and I know that a load more object tracking features have been added over the years.)

I’m not sure using a planar tracker will give you the results you need, as you will only get 2D information out of that (I think). You will need a program that can do proper object tracking if you want good results. I mention SynthEyes because it’s the one I have, but I’m sure there are alternatives.



SynthEyes is the best tool out there for the money.


Trig, IceCaveMan, NoseMan,

I really appreciate all your responses and insights!
I may give SynthEyes a try based on the recommendation, though I’m wondering if any of you could say something about its ease of use and learning curve. Is it realistic to think I could learn enough about the program in a day to be able to accomplish the object tracking I need?

Also, there’s an Intro version and a Pro version at different price points, and I wonder whether the Intro version contains all the toolsets for what I need to accomplish.



Russ, the guy behind SynthEyes, has a load of tutorials on YouTube about object tracking, for example:

(that particular one may be a bit complex but it’s the first one I found).

So I’d watch a few of them to get an idea. If you are pushed for time, you may want to try another approach (perhaps a 2D track combined with manual orientation). A lot of the time it’s easy to get bogged down in minutiae that no one else will ever notice. (I spent a few hours recently making sure a faked shadow was entirely accurate in a 1-second shot, when in reality I could have done something much quicker and people wouldn’t have noticed.) You are not creating a true simulation, you are creating a video; “smoke and mirrors” can cover a lot up.



With regards to the initial results you were getting using the Camera Calibrator, they actually look fairly good to me. The only result that you care about when solving a camera match is the field of view; in isolation, the focal length and sensor size are meaningless.

If you make two cameras and plug in your values, they actually match fairly closely: an FOV of 25.145° for one and 26.449° for the other. The discrepancy arises because in Cinema the camera and lens are considered perfect, but in the real world a camera/lens system always shoots with distortion, i.e. lines that should be straight are imaged with a positive or negative bowing. This is pin-cushion or barrel distortion.
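For anyone who wants to sanity-check those numbers, the horizontal FOV follows from the pinhole relation fov = 2·atan(sensor_width / (2·focal_length)); a quick sketch:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal field of view (degrees) for an ideal pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Real-world values: 50mm lens on a 23.5mm-wide sensor
print(round(horizontal_fov(50.0, 23.5), 3))    # 26.449
# Values the Camera Calibrator reported: 80.71mm on a 36mm sensor
print(round(horizontal_fov(80.71, 36.0), 3))   # 25.145
```

Many different focal-length/sensor pairs produce nearly the same FOV, which is why the Calibrator’s numbers can look “wrong” while the match is still reasonable.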

If you need your 3D tracking to be ‘perfect’, then you have to get into an un-distort/re-distort workflow where you calibrate for the distortion in your shot.

This is much more complicated of course.
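For intuition, here is a minimal one-parameter radial model of that distortion (illustrative only; real lens profiles fit several radial and tangential terms, and the k1 value below is made up):

```python
def distort(x, y, k1):
    """Apply radial distortion in normalized coords centred on the optical
    axis: k1 < 0 gives barrel, k1 > 0 gives pin-cushion distortion."""
    scale = 1.0 + k1 * (x * x + y * y)
    return x * scale, y * scale

def undistort(xd, yd, k1, iterations=10):
    """Invert distort() by fixed-point iteration (there is no closed form)."""
    x, y = xd, yd
    for _ in range(iterations):
        scale = 1.0 + k1 * (x * x + y * y)
        x, y = xd / scale, yd / scale
    return x, y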

Cheers, Simon W.


Trig, Simon,

Thanks, I appreciate the shared insights, once again!

Trig - I’ll check out the SynthEyes vids, and yes, there have been too many occasions where I’ve labored on something longer than necessary. By the looks of my project scheduling, there’s a fair chance I’ll need to simply limit my frame range and match things by eye and transform gizmo, frame by frame.

Simon - I suspected it might be the case that I’d need to run some lens correction, but thanks for confirming my suspicions, as it sounds like you have some experience with this type of thing. I watched a few tutorials on lens correction, though I wonder if there’s any particular recommended standard workflow for taking out lens distortion (and putting it back in).

Nuke? SynthEyes? It looks like both of these have functions for doing this - I’m wondering if you’d have a recommendation between them.

The one thing I’ve tried so far was the Optical Compensation effect in After Effects; I rendered a pass from C4D of the already camera-matched size-reference plane and tried to dial in the subtle distortion on the photography plate to match the CG, though I wasn’t really getting what I needed out of it. I’d guess AE is maybe not the best tool for the job, but it’s the program I’m most familiar with.

I’d also wonder if there’s anything available out there that facilitates a more scientific approach, where you just type in the lens’s profile specs and it automatically calibrates the distortion correction. Evidently Adobe Labs has an app called Lens Profile Creator, which I imagine could be useful for debarrelizing based on existing parameters, though it looks like it could be a convoluted workflow. Do you have any familiarity with this?



Lynda has a couple of SynthEyes online courses too. Perhaps not the best if you’re in a hurry, but worth a look. Sub for a month and you can investigate as you see fit.


Cinema does have a comprehensive set of tools for dealing with lens distortion within the tracking module.

You would need to shoot a distortion grid, using the same camera and lens you filmed your live-action plates with, to calibrate from - this then generates a lens distortion profile that you can use to un-distort your footage before tracking.
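To make the “generate a profile” step concrete: conceptually it is a least-squares fit of distortion coefficients to the photographed grid. A toy sketch with a single radial term (real profiles, including C4D’s, fit richer models; the coordinates and k1 here are invented for illustration):

```python
def fit_k1(ideal_pts, measured_pts):
    """Least-squares fit of one radial term k1 from (undistorted, distorted)
    grid-point pairs in normalized coords, under the model
    (xd, yd) = (xu, yu) * (1 + k1 * r^2)."""
    num = den = 0.0
    for (xu, yu), (xd, yd) in zip(ideal_pts, measured_pts):
        r2 = xu * xu + yu * yu
        num += xu * r2 * (xd - xu) + yu * r2 * (yd - yu)
        den += r2 * r2 * r2          # (xu^2 + yu^2) * r^4 = r^6
    return num / den
```

In practice the “ideal” points come from knowing the printed grid is regular, and the “measured” points from detecting its corners in the shot frame.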

This does get a million times more complicated than just fudging things but it is the only way to do things properly.

Cheers, Simon W.


Anthony - Many thanks! I happen to have a Lynda account so I’ll get a look at them.

Simon - I appreciate the insights, again. Thanks to an unnamed search-engine giant, and a little luck, I was able to find an image of the distortion grid for the particular camera lens used. I generated a profile using C4D’s Lens Distortion tool and the accompanying Post Effect. I didn’t get quite the correct distortion I was hoping for - the size-reference object is still a little narrow compared to my background markers, as if the pixel aspect ratio were different, though I doubt this could be the case.

It’s also possible that the markers are dimensionally off slightly, as I didn’t have complete supervision over the photoshoot.
I’d also wonder whether the lens’s focus setting at the time of exposure would make a difference to the FOV value.

In any event, due to time constraints, I’m just gonna go ahead and fudge it and line things up by eye. However, I expect to get more work like this in the future (tripod, tabletop), so I’m wondering if you’d have further recommendations in addition to these:

  • Next time I’ll shoot a distortion grid.
  • I’ll take better care recording lens setting info and measuring marker distances.
  • I’ll provide more tracking points, including some along the Y-axis.

Otherwise, some questions:

  • Will the lens focus setting indeed make a difference to the FOV number, even on a lens with no zoom mechanism?
  • Would you happen to know if there’s a sure-fire way to get the FOV number, or at least the absolute focal length, into the metadata of video clips recorded with a high-end DSLR?

Finally, I looked at several tutorial vids on the subject of camera & object tracking and lens correction, specifically using C4D. I didn’t see a single one where the tutorial’s creator ran the lens correction on the footage first and only afterwards ran a camera match using the Camera Calibrator. Rather, the common workflow seemed to be to generate a lens profile from the distortion grid, then apply the profile at render time using the LD Post Effect in an already camera-matched scene. This applies distortion to scene objects, but not to the background image. This works obviously enough, though the approach seems backwards to me - wouldn’t the CG camera calibration be more accurate if the footage were corrected before trying to calculate the camera match?
Aside from texture-mapping the footage as an image sequence to a plane and floating it in front of the camera, is there a way in C4D to make the camera see the viewport background image as a scene object, and hence apply the Lens Distortion as a post effect at render time?



  • Take note of the focal length and camera used so you can work out the sensor size.
  • Film a distortion grid with each lens/camera combination you will be shooting with.
  • Use the focal length as a guide only - lenses are most definitely not what they say they are, and focal length should always be solved during the camera matching process.
  • Lenses do ‘breathe’, so focal length does change on cheap lenses as you focus near/far - eliminating this is what makes true cinema lenses so expensive.
  • Depending on how much time you have, you can shoot multiple distortion charts and solve each one to give you a range of values to use when tracking. However, it is more important to get a well-shot initial distortion grid, so make sure you print a big chart and fill the frame with it.
I’ve only worked at companies where shots are un-distorted first and the un-distorted plates are used for camera tracking and 3d line-up.
You end up with double the footage and have to render over-sized so you can re-apply your distortion when compositing your 3D over your BG plate, but this is the most traditional way of doing things.
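For a rough sense of how much over-size is needed: with barrel distortion, un-distorting pushes the frame corners outward, so the render must at least cover that corner displacement. A first-order estimate under the simple one-term radial model (an approximation; your actual lens profile dictates the real number):

```python
def overscan_factor(k1, width, height):
    """Approximate linear overscan so an undistorted plate (and the
    oversized CG render) still covers the frame corners. Single radial
    term k1 (barrel: k1 < 0); coords normalized so half frame width = 1."""
    rd2 = 1.0 + (height / width) ** 2        # squared corner radius
    # first-order inverse of r_d = r_u * (1 + k1 * r_u^2)
    return max(1.0, 1.0 - k1 * rd2)

# e.g. mild barrel distortion (k1 = -0.05, a made-up value) on a 1920x1080
# plate: factor ~1.066, i.e. render roughly 6-7% larger in each dimension
factor = overscan_factor(-0.05, 1920, 1080)
```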

Cheers, Simon W.


Many thanks Simon, for dropping the wisdom. A lot of stuff here I wouldn’t have given consideration to otherwise.

>>> I’ve only worked at companies where shots are un-distorted first and the un-distorted plates are used for camera tracking and 3d line-up.

I’m sure there’s a small range of software capable of doing the un-distortion, but could you tell me which are the most common ones in your experience?



I would start out by seeing how far you can get using the built in tools in Cinema before branching out and spending more money. If you find yourself wanting more then I would second the recommendation for Syntheyes.

However, you should have everything you need to get started doing undistort/redistort tracking in Cinema, and since it’s native you don’t have to worry about importing the track data properly afterwards.

Cheers, Simon W.