Markerless, video-based facial capture technology


Here’s a little tech preview we created at Motek Entertainment, in collaboration with Dynamixyz and Ivo Diependaal.

Some quick info:

  • Head mounted camera solution

  • Capture speeds of 30, 60 or 120 fps

  • Operates in the visible light or infrared spectrum

  • Integrated with full body motion capture system

  • Compatible with all 3D packages / game engines

  • Point-based motion retargeting to a joint-based face rig

  • No pose interpolation or loss of motion data (unlike pose-based retargeting)

  • Morph / blendshape workflow available as well (although not shown)

For more information contact

Check here for some results:


Very interesting!! For some reason all good facial mocap solutions have been services so far, and not products you can buy. I really hope you plan on releasing software that can be bought? :slight_smile:

How do you solve the chin/forehead? I don’t see any deformers there, but it seems to follow your moves very well…


That looks great!
Why do you call it markerless - what are the dots on the performer’s face?


Those are just the solver’s markers tracked onto the face :wink:

Pretty impressive stuff!


You have done a great job. I like it…


Excellent… but is this going to be a hardware product? Only software? Cloud-based? Desktop? A service? Affordable?

(And thanks for Brekel Kinect by the way, very fun to use!)


Sorry for the late replies; using a password manager for your browser is very nice until you move to a new machine and have forgotten your old password :slight_smile:

Most facial mocap solutions probably are full hardware/software solutions, as it’s usually not as simple as running a magic solver in software and getting good results.
It’s important to capture good quality data and track it well, and to get good results (especially without needing a lot of keyframing to enhance them) you need a good retargeting solution and a rig, which takes some expertise to build.

You can buy the tracker and hardware from Dynamixyz, we have worked very closely with them for the last few months helping to mature things.

For the example movie we brought a lot to the mix in terms of custom tools and expertise for rigging & retargeting the 2D tracking points.
We are offering that as a service at Motek, including integration with body motion capture, and are looking into productizing our Maya-based toolset in the near future if there is enough interest.

The chin and forehead deformations aren’t tracked (since our goal is to do the tracking without markers), but their deformation behavior can be derived from the motion of the lips and brows, so it then becomes a rigging solution.
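Since the chin isn’t tracked, its motion has to be derived in the rig from what the lips are doing. As a minimal sketch of that idea (the function name, weights and limits here are hypothetical, not the actual Motek/Dynamixyz rig):

```python
# Hypothetical sketch: drive an untracked chin joint from tracked
# lower-lip point offsets. Weights and clamp values are illustrative.

def chin_offset(lower_lip_points, weight=0.5, max_follow=1.5):
    """Average the tracked lower-lip offsets (x, y) from the neutral
    pose and let the chin follow a damped fraction of that motion."""
    n = len(lower_lip_points)
    avg_x = sum(p[0] for p in lower_lip_points) / n
    avg_y = sum(p[1] for p in lower_lip_points) / n
    # The chin follows only part of the lip motion, clamped so extreme
    # lip shapes can't drag the chin unrealistically far.
    def follow(v):
        return max(-max_follow, min(max_follow, v * weight))
    return (follow(avg_x), follow(avg_y))
```

Here the chin simply follows a damped, clamped fraction of the averaged lower-lip motion; a production rig would add per-axis weights and blend in jaw data as well.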


Eightbit & Laserschwert:

Good question, we call it markerless since the actor doesn’t have to wear any markers or makeup during the capture.
The dots & lines on the video are indeed the tracking results that are drawn on top to see the quality of the track.

If you look at the dots & lines in the 1080p stream, you can see that they are actually computer drawn, as they are pixel perfect and in fully saturated colors, while the video data is greyscale.

I understand the confusion and thanks for bringing it up, I’ll update the description.



Dynamixyz offers a package containing a helmet with camera and tracker software.
We at Motek offer services that include capture of a face with or without full body, as well as tracking, rigging and retargeting, and we work directly and closely with Dynamixyz.

For more information about the services you can contact as I’m not sure about all the details yet, besides the technical stuff that is :slight_smile:

Nice to hear you enjoy my Brekel Kinect personal side project :slight_smile:


Er, I kinda have a hard time imagining that. Brows and lips can move pretty much independently of the underlying bone structure, and you can open your jaw without opening your mouth, or part your lips while clenching your jaws shut.

In fact it’s a relatively basic move for any lip sync to start opening the jaw a few frames before the mouth starts to follow, one of the reasons to implement ‘sticky lips’ deformers or controls in a face rig.


Aaaaaah I think there is some mixup of chin vs jaw.

You are definitely right that the jaw and the lips move independently, and that distinction is crucial.
In fact this is where we slightly differ from the default Dynamixyz implementation as I do get tracking data from the jaw and use it in the rig, independently from what the lips are doing.
(There’s a small yellow tracking point in the video)

I thought the original post mentioned the deformation that happens on the area of skin under the lower lip. If you put tension on your lower lip, the skin buckles and wrinkles there. That is not tracked, and it could be recreated with a normal map that is either rigged to the lip width or driven by a pose detector.

The brow tracking easily handles asymmetric shapes, so wrinkling of the forehead can easily be rigged by relating it to what certain points of each brow are doing and blending in wrinkles on either the geometry or the texture maps. (The example wrinkles the actual geometry.)
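As a toy illustration of that brow-to-wrinkle relation (the units and ranges are made up; the real rig relates several points per brow):

```python
# Illustrative sketch: derive a 0..1 forehead-wrinkle blend weight from
# how far a tracked brow point has risen above its neutral position.
# Computed per brow, so asymmetric raises give asymmetric wrinkling.
# All values are hypothetical, not taken from the actual rig.

def wrinkle_weight(brow_y, neutral_y, raise_range=10.0):
    """Map brow elevation (in tracker units) to a clamped 0..1 weight
    that can drive a wrinkle blendshape or normal-map blend."""
    raised = brow_y - neutral_y
    return max(0.0, min(1.0, raised / raise_range))
```

Evaluating the left and right brows independently is what lets the forehead wrinkle asymmetrically when only one brow goes up.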


Right, so this was just a misunderstanding then :slight_smile:

It’s interesting to see how video-based face capture is finally getting into some off-the-shelf tools. I expected it to happen sooner once Avatar had been released.

Also, there doesn’t seem to be a definitive solution yet; for example this one requires special hardware, and I wonder how it’d work with an indirect system, where you want to drive a FACS-based blendshape rig instead of bones and direct transformations…


The reason we use dedicated hardware is that any video-based solution depends on having good source images, and we use some hardware to get good quality video.

Having a head-mounted camera ensures the face is always visible and gives the actor a lot of freedom compared to sitting in front of a static camera or within a confined space that is 3D scanned at a high framerate.

The head mount also allows you to add a little light to the setup to ensure consistent illumination when moving around.
Working in the visible light spectrum means the actor has some annoying lights shining into his/her face, so tracking in the infrared spectrum is nice, as the light is invisible to the human eye.

The tracker can work with any camera; it’s just that we chose one with a low weight and a particularly high frame rate. As humans can move pretty quickly during blinking and speaking, 60 fps really is a must-have if you want to do minimal cleanup.

So as you can see the reason for the hardware is to get a very high base quality with the minimum amount of restrictions for the actor.

I’ve experimented with a blendshape rig as well (and btw it’s bloody difficult to perform all FACS poses), however we found that a method like that always throws away data.
Either you stuff the system with all the possible pose/shape relations you can think of (FACS is only a start) and they end up fighting with each other, or you limit the number of poses/shapes at the expense of interpolating on some of the frames.
It’s a fine balance: the biggest advantage of this method is that the shapes can be finely art directed, however the interpolations are more difficult.

With a direct rigging approach, tuning the retargeting may be a bit trickier, but once that is done you never throw away any performance data. In the worst case scenario the outcome may be a bit distorted, but the motion detail is always there, giving you a lot of options for motion cleanup or keyframe augmenting.

And of course you can do a hybrid approach and extend a direct approach with some pose/shape tweaks, which is what the Alien creature uses.
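The direct, point-to-joint idea can be sketched roughly like this (the mapping is purely illustrative; the actual custom plugin does considerably more):

```python
# Minimal sketch of "direct" retargeting: every tracked 2D point maps
# straight to a joint translation via a calibrated neutral pose and
# scale, so no frame of the performance is interpolated away.
# Names and the calibration format are hypothetical.

def retarget_frame(points_2d, calibration):
    """points_2d: {name: (x, y)} tracked positions for this frame.
    calibration: {name: (neutral_x, neutral_y, scale)} per joint.
    Returns {name: (tx, ty)} joint translations in rig space."""
    joints = {}
    for name, (x, y) in points_2d.items():
        nx, ny, scale = calibration[name]
        # The joint translation is simply the deviation from the
        # neutral pose, scaled into rig space; every frame keeps
        # its own tracked data, nothing is collapsed into poses.
        joints[name] = ((x - nx) * scale, (y - ny) * scale)
    return joints
```

Because each frame is mapped independently, cleanup and keyframe augmenting can happen on top of the full motion detail rather than on interpolated poses.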

But even though this is interesting stuff, I’ll probably stop ranting for now :slight_smile:



Very nice demo, but just so I understand better… is this a commercial product that the Dynamixyz guys sell? How does it integrate with Max, for example?

And how easy is it to capture and transfer the result to your own model?



The answer to your question is a bit of yes and no. :slight_smile:

Dynamixyz sells the hardware and the tracker.
The movie also shows the retargeting and rigging that we offer as a service at Motek. (including capture with or without full body)

The default Dynamixyz implementation currently can export the 2D points to an ASCII format, FBX or C3D files, but you’ll have to transfer them to your rig yourself.
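Just to illustrate what consuming such an export could look like: the sketch below assumes a made-up `frame name x y` ASCII layout, one point per line, purely for illustration; the actual Dynamixyz file format may well differ.

```python
# Hypothetical reader for a 2D point export. The line layout
# "frame name x y" is an assumption for this sketch, not the
# documented Dynamixyz ASCII format.

def load_points(text):
    """Parse lines of 'frame name x y' into {frame: {name: (x, y)}}."""
    frames = {}
    for line in text.strip().splitlines():
        frame, name, x, y = line.split()
        frames.setdefault(int(frame), {})[name] = (float(x), float(y))
    return frames
```

Once the points are in a per-frame dictionary like this, transferring them to a rig is a matter of mapping each point name to a control or joint.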

The outcome of what we deliver at Motek is a fully animated face rig.


This looks pretty cool,

What does the facial bone rig look like?


If you look at the second part of the movie, with the talking head, you can see a playblast of the low-res mesh and the joints that are used for the deformation.

Internally there is some clever transforming going on in a custom plugin to do the retargeting and the results are piped into the translate/rotate of those joints.

For the high-res renderable mesh we also added some wrinkling effects, mainly in the forehead and brow/nose regions.


Hello, I wanted to know if there is a link to the price structure for Performer. Also, have you used DAZ3D characters? I can export my characters to FBX format for retargeting with iPiSoft software; does your service retarget in the same manner? I only need facial animations, but is hair included in the facial animation if the character has hair? Thanks in advance


Dynamixyz offers a product called Performer, but doesn’t seem to indicate a price or where to purchase; do you have any information about that? Also, if I were to use the Performer product, would you be able to use a DAZ3D FBX export to retarget the facial motion capture? DAZ also exports BVH for its characters. What is the average price of your service for a 5-second facial animation? Thanks in advance


I don’t have prices on hand, but for purchasing a Dynamixyz helmet & software you can contact them for pricing info.
For prices regarding capture & retarget services from Motek you can drop a mail to