Getting facial mocap data onto a head model


#1

I need help getting some facial mocap data that I captured with my mocap system onto my head model. So far I have the actor face moving with the mocap data, but I have no idea how to transfer it to my actual head model.

By the way, the head model I am using has no blendshapes. I saw no use for them; that's why I bought a mocap system, so I could just take the data, transfer it to the model, and be done, unless I'm mistaken. Or do I still have to create blendshapes even if I'm using mocap data? If that's the case, then what is the mocap data for, and wouldn't the result be limited to the blendshapes? I would really appreciate it if someone could help me with this. I am totally new to MotionBuilder; I have version 7.5 Extensions and Maya 2008 x64.

I thank you all in advance.

Joey


#2

Hi porckchop,

If you want to use MotionBuilder to handle your facial captures, you need blend shapes or clusters for MB to drive based on the actor's facial expressions (afaik). That's one of the many reasons I decided to bring the motion data (.trc in my case) directly into Maya, where it is used to deform a facial muscle system we built.

Many good ideas on this process can be found in CG Toolkit's "The Art of Rigging: Vol. 3" (http://www.cgtoolkit.com/book3.htm).


Daniel


#3

So then, if I use blend shapes, I am not going to get my actual facial expressions; instead I'll be limited by the blend shapes that were made. Then what is the point of facial mocap systems? I thought it was: get your head model, record facial motion data, and bind the data to the model. Making blend shapes really isn't productive, and I kind of feel ripped off with the mocap system now.

So that's it?


#4

You must have misunderstood me.

If you need to use MotionBuilder, you have two options:

  1. Pre-defined blend shapes.
  2. Cluster deformation of the head mesh.

If you can make do without MotionBuilder, you can import the raw mocap data into your favourite 3D package (e.g. Maya, Max…) and use it to deform the mesh in whatever way you see fit.

As I wrote, what we are currently using is a pseudo muscle system which influences the shape of the head mesh. These muscles are deformed with the help of the raw mocap data imported into Maya.

If this is an approach you feel like testing out, I’d suggest buying the Art of Rigging book I linked to, as it explains this procedure in great detail.


Daniel


#5

OK, the raw data approach sounds good, but is there, say, a tutorial on building such cluster-deformation rigs? Before I go out and buy a book, I would really like to see something work. After spending so much money and not getting anywhere fast, I would like to find at least a tutorial somewhere that explains how to set up a facial rig with muscles or clusters, whichever is best. Again, I'm not trying to be difficult; I'm just down quite a few bucks already, and if I keep buying things I'll probably go broke.


#6

Do you know of any other tutorials that could be useful?


#7

1. You have to convert the mocap data from world space to head space if your mocap camera is not constrained to the head.

2. You need to rig your model first and use the mocap data to drive the rig.

here is my test link: http://fimg1.5460.net/radios/01/52/01/076553.mov
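A minimal sketch of step 1, assuming the head's rigid motion can be recovered from three stable reference markers (say, the forehead and both temples). The frame-building convention and marker choices here are illustrative, not from any particular mocap system:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def head_basis(origin, right_ref, up_ref):
    """Build an orthonormal head frame from three reference markers."""
    x = normalize(sub(right_ref, origin))          # across the face
    z = normalize(cross(x, sub(up_ref, origin)))   # out of the face
    y = cross(z, x)                                # up
    return (x, y, z)

def to_head_space(p, origin, basis):
    """Re-express a world-space marker position in head-local coordinates."""
    d = sub(p, origin)
    return tuple(dot(axis, d) for axis in basis)
```

Once every facial marker is in head space, the head's own translation and rotation are factored out, and what remains is pure facial deformation you can use to drive the rig.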


#8

OK, my mocap data isn't coming out as point clouds; instead it's in .bvh, which creates a bone setup for the face. But now you mention rigging the face. How do you go about rigging the face of a model for mocap? I would like to know, because I think I'm going about it totally wrong.

By the way, what mocap system do you have, and what software are you using for your point clouds?

Thank you.
Joey


#9

Would you mind sending me a file of your mocap data, or some screenshots?

I use a Vicon system and rig the model in 3ds Max.

For the point cloud processing, I use MAXScript.


#10

Hi, I use Max as well, but how can I drive my face model with mocap data? Are there any plugins, or do you know of any tutorials on this subject? Please help.


#11

With point cloud data, I would think you would replicate the points on the mesh vertices and place a rivet bone or joint at each of those positions. You would then weight the rivets according to the way the muscles move the facial features, using appropriate falloff between the rivet positions. Next, you would strike a neutral pose for the face, much like a T-pose for a character rig. The point cloud deltas, i.e. the difference between the rest position and the deformed position, are then used to drive the mesh rivets. It seems to me one could also add sliders that exaggerate or smooth the data, giving you additional control over the acting recorded and transferred to the facial rig.
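The delta-driving idea above can be sketched as follows: each rivet is offset by its marker's delta from the neutral pose, with a gain slider where 1.0 reproduces the capture, above 1.0 exaggerates it, and below 1.0 smooths it out. All names here are illustrative:

```python
def rivet_positions(rest_markers, frame_markers, rest_rivets, gain=1.0):
    """Offset each rivet by its marker's delta from the neutral pose.

    rest_markers:  marker positions in the neutral (rest) pose
    frame_markers: marker positions for the current capture frame
    rest_rivets:   rivet joint positions matching the neutral pose
    gain:          exaggerate (>1) or smooth (<1) the captured motion
    """
    out = []
    for rest_m, cur_m, rest_r in zip(rest_markers, frame_markers, rest_rivets):
        delta = tuple(c - r for c, r in zip(cur_m, rest_m))
        out.append(tuple(r + gain * d for r, d in zip(rest_r, delta)))
    return out
```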

Best Regards
Randy


#12

Go to http://www.3dbuzz.com/vbforum/sv_home.php . In the MotionBuilder menu, go to Issue 4 > Maya - setting up clusters + cluster shapes (this will help you understand what to do in MB). You should then be able to drive your character's face with the actor face.

Alternatively, after setting up the clusters in Maya, you can use relations/expressions to map the animation from the BVH onto the clusters. This will be a lot more work, and impractical if you have more than a few facial markers. Zign Track uses relatively few markers.
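A rough sketch of what those relations/expressions would compute, assuming each facial bone from the BVH drives one cluster through a per-bone scale factor. The bone names, cluster names, and scales below are made up for illustration; a real BVH file defines its own hierarchy:

```python
# Per-bone mapping: BVH bone -> (target cluster, scale factor).
# Illustrative values: dampen the jaw slightly, exaggerate the brow.
MAPPING = {
    "jaw":   ("jaw_cluster",   0.8),
    "browL": ("browL_cluster", 1.2),
}

def drive_clusters(bvh_frame, rest_pose):
    """Turn one frame of BVH bone translations into cluster offsets.

    bvh_frame: {bone: (x, y, z)} translations for the current frame
    rest_pose: {bone: (x, y, z)} translations in the neutral pose
    """
    offsets = {}
    for bone, (cluster, scale) in MAPPING.items():
        delta = tuple(c - r for c, r in zip(bvh_frame[bone], rest_pose[bone]))
        offsets[cluster] = tuple(scale * d for d in delta)
    return offsets
```

This is also why the approach scales poorly: every additional marker means another hand-written mapping entry (or relation node), which is manageable for a handful of markers but not for dense captures.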

good luck.


#13

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.