Markerless, video-based facial capture technology

  01 January 2012
Markerless, video-based facial capture technology

Here's a little tech preview we created at Motek Entertainment, in collaboration with Dynamixyz and Ivo Diependaal.



Some quick info:
- Head-mounted camera solution
- Capture speeds of 30, 60 or 120 fps
- Operates in the visible light or infrared spectrum
- Integrated with full body motion capture system

- Compatible with all 3D packages / game engines
- Point-based motion retargeting to a joint-based face rig
- No pose interpolation or loss of motion data (compared to other pose-based retargeting)
- Morph / blendshape workflow available as well (although not shown)

For more information contact info@motekentertainment.com


Check here for some results:
http://www.youtube.com/watch?v=3R2K5yXpWIM
http://www.youtube.com/watch?v=EuGFYyXJ10A
 
  01 January 2012
Very interesting!! For some reason all the good facial mocap solutions have been services so far, not products you can buy. I really hope you plan on offering software that can be bought?

How do you solve the chin/forehead? I don't see any deformers there, but it seems to follow your moves very well.
__________________
-=TLU, INSERTCOIN, CGS=- -=LW, SOFTIMAGE, Xsens MVN, RED ONE, Messiah:S, MaxwellRender etc=-
 
  01 January 2012
That looks great!
Why do you call it markerless - what are the dots on the performer's face?
__________________
If animation couldn't change the world, it wouldn't be such a Mickey Mouse place.
 
  01 January 2012
Originally Posted by EightBit: what are the dots on the performer's face?
Those are just the solver's markers tracked onto the face

Pretty impressive stuff!
 
  01 January 2012
great job

Originally Posted by brekel: Here's a little tech preview we created at Motek Entertainment, in collaboration with Dynamixyz and Ivo Diependaal. [...]

You have done a great job. I like it.
 
  01 January 2012
Excellent... but is this going to be a hardware product? Software only? Cloud-based? Desktop? A service? Affordable?

(And thanks for Brekel Kinect by the way, very fun to use!)
 
  01 January 2012
Sorry for the late replies; using a password manager in your browser is very nice until you go to a new machine and realize you've forgotten your old password.


Sniffet:
Most facial mocap solutions are probably sold as full hardware/software packages because it's usually not as simple as running a magic solver in software and getting good results.
It's important to capture good quality data and track it well, and to get good results (especially without needing a lot of keyframing to enhance them) you need a good retargeting solution and rig, which takes some expertise to build.


You can buy the tracker and hardware from Dynamixyz; we have worked very closely with them over the last few months helping to mature things.

For the example movie we brought a lot to the mix in terms of custom tools and expertise for rigging and retargeting the 2D tracking points.
We are offering that as a service at Motek, including integration with body motion capture, and we are looking into productizing our Maya-based toolset in the near future if there is enough interest.



The chin and forehead deformations aren't tracked (since our goal is to do the tracking without markers), but their deformation behavior can be derived from the motion of the lips and brows, so it then becomes a rigging solution.
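
To make that a bit more concrete, here's a rough Maya Python sketch of the idea (not our actual toolset, just an illustration with made-up node and attribute names): a driven key maps the height of a retargeted brow joint onto a forehead-wrinkle blendshape weight, so the wrinkling comes along for free once the brow itself is retargeted.

```python
# Illustration only: map the translateY of a retargeted brow joint onto a
# forehead-wrinkle blendshape weight using set driven keys.
# All node/attribute names below are made up.
import maya.cmds as cmds

driver = 'brow_L_jnt.translateY'          # joint driven by the retargeted track
driven = 'foreheadWrinkles_BS.browL_up'   # wrinkle target on the face mesh

# Neutral brow -> no wrinkle, fully raised brow -> full wrinkle
cmds.setDrivenKeyframe(driven, currentDriver=driver, driverValue=0.0, value=0.0)
cmds.setDrivenKeyframe(driven, currentDriver=driver, driverValue=1.5, value=1.0)

# Flat tangents so the wrinkle eases in and out smoothly
cmds.keyTangent(driven, inTangentType='flat', outTangentType='flat')
```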
__________________
~~~~
Jasper Brekelmans
Senior Character/Mocap TD, custom Tool/Pipeline development
j@brekel.com
http://www.brekel.com
twitter: @brekelj
 
  01 January 2012
Eightbit & Laserschwert:

Good question, we call it markerless since the actor doesn't have to wear any markers or makeup during the capture.
The dots & lines on the video are indeed the tracking results that are drawn on top to see the quality of the track.

If you look at the 1080p stream you can see that the dots & lines are actually computer drawn: they are pixel perfect and in fully saturated colors, whereas the video data is greyscale.


I understand the confusion, and thanks for bringing it up; I'll update the description.
__________________
~~~~
Jasper Brekelmans
Senior Character/Mocap TD, custom Tool/Pipeline development
j@brekel.com
http://www.brekel.com
twitter: @brekelj
 
  01 January 2012
EricM:

Dynamixyz offers a package containing a helmet with camera and tracker software.
We at Motek offer services that include capture of the face (with or without full body), tracking, rigging and retargeting, and we work directly and closely with Dynamixyz.

For more information about the services you can contact info@motekentertainment.com, as I'm not sure about all the details yet, besides the technical stuff that is.


Nice to hear you enjoy my Brekel Kinect personal side project
__________________
~~~~
Jasper Brekelmans
Senior Character/Mocap TD, custom Tool/Pipeline development
j@brekel.com
http://www.brekel.com
twitter: @brekelj
 
  01 January 2012
Originally Posted by brekel: The chin and forehead deformations aren't tracked (since our goal is to do the tracking without markers), but their deformation behavior can be derived from the motion of the lips and brows, so it then becomes a rigging solution.


Er, I kinda have a hard time imagining that. Brows and lips can move pretty much independently of the underlying bone structure, and you can open your jaw without opening your mouth, or part your lips while clenching your jaws shut.

In fact it's a relatively basic move for any lip sync to start opening the jaw a few frames before the mouth starts to follow, one of the reasons to implement 'sticky lips' deformers or controls in a face rig.
__________________
Tamas Varga
 
  01 January 2012
Aaaaaah, I think there's some mix-up between chin and jaw.


You are definitely right that the jaw and the lips move independently, and that distinction is crucial.
In fact this is where we differ slightly from the default Dynamixyz implementation, as I do get tracking data from the jaw and use it in the rig independently of what the lips are doing.
(There's a small yellow tracking point in the video.)

I thought the original post referred to the deformation that happens on the area of skin below the lower lip. If you put tension on your lower lip the skin buckles and wrinkles there. That is not tracked, and could be recreated with a normal map which is either rigged to the lip width or driven by a pose detector.
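
As a rough illustration of the "rigged to the lip width" variant (again, not our production setup, just a sketch with made-up node names): measure the distance between the two lip-corner joints and use it to fade a wrinkle normal map in and out.

```python
# Illustration only: derive skin tension from the distance between two
# tracked lip-corner joints and use it to blend a wrinkle normal map.
# All node names are made up; the layered texture would then feed the
# bump/normal input of the face shader.
import maya.cmds as cmds

dist = cmds.createNode('distanceBetween', name='lipWidth_dist')
cmds.connectAttr('lipCorner_L_jnt.translate', dist + '.point1')
cmds.connectAttr('lipCorner_R_jnt.translate', dist + '.point2')

# Remap the measured width (rest ~4.0 units, stretched ~6.0) to a 0-1 weight
remap = cmds.createNode('remapValue', name='lipWidth_remap')
cmds.connectAttr(dist + '.distance', remap + '.inputValue')
cmds.setAttr(remap + '.inputMin', 4.0)
cmds.setAttr(remap + '.inputMax', 6.0)

# Layer the wrinkled map over the neutral one, weighted by the lip width
layered = cmds.createNode('layeredTexture', name='chinWrinkle_layers')
cmds.connectAttr('chinWrinkle_map.outColor', layered + '.inputs[0].color')
cmds.connectAttr('chinNeutral_map.outColor', layered + '.inputs[1].color')
cmds.connectAttr(remap + '.outValue', layered + '.inputs[0].alpha')
```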


The brow tracking easily handles asymmetric shapes, so wrinkling of the forehead can easily be rigged by relating it to what certain points of each brow are doing and blending in wrinkles on either the geometry or the texture maps. (The example wrinkles the actual geometry.)
__________________
~~~~
Jasper Brekelmans
Senior Character/Mocap TD, custom Tool/Pipeline development
j@brekel.com
http://www.brekel.com
twitter: @brekelj
 
  01 January 2012
Right, so this was just a misunderstanding then

It's interesting to see how video-based face capture is finally getting into some off-the-shelf tools. I expected it to happen sooner after Avatar was released.

Also, there doesn't seem to be a definitive solution yet; for example this one requires special hardware, and I wonder how it'd work with an indirect system, where you want to drive a FACS-based blendshape rig instead of bones and direct transformations...
__________________
Tamas Varga
 
  01 January 2012
The reason we use dedicated hardware is that any video-based solution depends on having good source images, and we use some hardware to get good quality video.


Having a head-mounted camera ensures the face is always visible and gives the actor a lot of freedom compared to sitting in front of a static camera or within a confined space that is 3D scanned at a high framerate.

The head mount also allows you to add a little light to the setup to ensure consistent illumination when moving around.
Working in the visible light spectrum means the actor has some annoying lights shining into his/her face, so tracking in the infrared spectrum is nice as the light is invisible to the human eye.


The tracker can work with any camera; we simply chose one with low weight and a particularly high frame rate. Humans can move pretty quickly while blinking and speaking, so 60 fps really is a must-have if you want to do minimal cleanup.


So as you can see, the reason for the hardware is to get a very high base quality with a minimum of restrictions for the actor.


I've experimented with a blendshape rig as well (and by the way, it's bloody difficult to perform all FACS poses), but we found that a method like that always throws away data.
Either you stuff the system with all the possible pose/shape relations you can think of (FACS is only a start) and they end up fighting with each other, or you limit the number of poses/shapes at the expense of going into interpolation on some of the frames.
It's a fine balance: the biggest advantage of this method is that the shapes can be finely art-directed, but the interpolations are more difficult.

With a direct rigging approach, tuning the retargeting may be a bit more tricky, but once that is done you never throw away any performance data; in the worst case the outcome may be a bit distorted, but the motion detail is always there, giving you a lot of options for motion cleanup or keyframe augmenting.
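
For anyone curious what "direct" means in practice, here's a bare-bones sketch (again, not our toolset, just an illustration with made-up names and scale): every tracked 2D point keys one face joint per captured frame, so nothing from the performance is discarded.

```python
# Illustration only: point-based retargeting where every tracked 2D point
# drives one face joint per captured frame.
import maya.cmds as cmds

SCALE = 0.02  # pixels -> rig units, tuned per character during retarget setup

def retarget(tracks, neutral, joint_map, start_frame=1):
    """tracks[name]  = list of (x, y) pixel positions, one per frame
       neutral[name] = (x, y) pixel position on the rest frame"""
    for point, joint in joint_map.items():
        nx, ny = neutral[point]
        for i, (x, y) in enumerate(tracks[point]):
            # image y runs downward, rig y runs upward, hence the flip
            cmds.setKeyframe(joint, attribute='translateX',
                             time=start_frame + i, value=(x - nx) * SCALE)
            cmds.setKeyframe(joint, attribute='translateY',
                             time=start_frame + i, value=(ny - y) * SCALE)

# Purely illustrative mapping of a few tracked points to face joints
joint_map = {'browInner_L': 'browInner_L_jnt',
             'lipCorner_R': 'lipCorner_R_jnt',
             'jaw': 'jaw_jnt'}
# retarget(tracks, neutral, joint_map)
```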

And of course you can do a hybrid approach: extend a direct approach with some pose/shape tweaks, which is what the alien creature uses.


But even though this is interesting stuff, I'll probably stop ranting for now
__________________
~~~~
Jasper Brekelmans
Senior Character/Mocap TD, custom Tool/Pipeline development
j@brekel.com
http://www.brekel.com
twitter: @brekelj
 
  01 January 2012
Brekel

Very nice demo, but just so I understand better... is this a commercial product that the Dynamixyz guys sell? How does it integrate with Max, for example?

And how easy is it to capture a performance and transfer it to your own model?

Nildo
 
  01 January 2012
Thanks.
The answer to your question is a bit of yes and no.

Dynamixyz sells the hardware and the tracker.
The movie also shows the retargeting and rigging that we offer as a service at Motek (including capture, with or without full body).

The default Dynamixyz implementation can currently export the 2D points to an ASCII format, FBX or C3D files, but you'll have to transfer them to your rig yourself.
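
If you want to roll your own transfer from the ASCII export, a tiny loader along these lines could feed a retargeting step like the sketch a few posts up. The actual file layout isn't documented in this thread, so the "frame name x y" format below is just a placeholder assumption.

```python
# Hypothetical loader for an ASCII 2D point export; the real layout may
# differ, this assumes one "frame point_name x y" entry per line.
from collections import defaultdict

def load_ascii_tracks(path):
    tracks = defaultdict(list)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip headers and blank lines
            _frame, name, x, y = parts
            tracks[name].append((float(x), float(y)))
    return dict(tracks)

# tracks  = load_ascii_tracks('take01_2dpoints.txt')
# neutral = {name: pts[0] for name, pts in tracks.items()}  # frame 1 as rest pose
```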

The outcome of what we deliver at Motek is a fully animated face rig.
__________________
~~~~
Jasper Brekelmans
Senior Character/Mocap TD, custom Tool/Pipeline development
j@brekel.com
http://www.brekel.com
twitter: @brekelj
 