facial animation setup?


Unfortunately it doesn’t work like that…

You need to be able to control where the character looks, and you need the things he/she looks at. You need to be able to tilt the head, sometimes you might need a hand to touch the face, and so on. So you need to move a LOT of your scene into XSI, and you need to duplicate parts of your rig with their complete functionality. If you use any app-specific elements, it might not even be possible at all.

Then there are the deformations. What if it’s a single mesh for the entire body, like if you’re doing a Hulk, and you have the rest of the rig in Maya? Even with a regular character you need deformations from the neck and the collar bones on the same mesh… where to cut the pipeline and insert the cached stuff from XSI and how to combine it with the rest?

So no, unfortunately it’s not that simple. And I haven’t even touched on how you really, really need fully shaded and rendered previews to evaluate human facial animation, because so much is changed by shadows, SSS shading and speculars, even by things like eyelashes, and of course you want to see eyebrows as well. Having to transfer the data regularly to a different app can make test renders quite complicated too; you need to export the cache again and again…


Thanks for answering!

Actually, I was thinking about moving into XSI as much as I can, but it is not possible with this project; I have no time to move completely from 3ds Max, which I’ve used since version 5, to a completely new package… I thought about doing all the animation in XSI, but rendering would still be in 3ds Max. We’re also thinking about using stress maps to displace the wrinkles, but then I’d need to export animated textures from XSI at 4K, and I’m not sure it’s worth it.

Well, the good point about this project is that it is sci-fi; the characters wear helmets and suits, and that makes finding the place to cut the pipeline easy. But still… lots of other export/import issues, even though XSI now has a “send to Max/Maya” button.

Well, I’ll try to stick to the software I know then: Max. Talking about mocap and blendshapes, how do I set that up? I mean, there are a lot of small movements in a human face, so I’ll need about 6-8 points for the mouth, plus controls for the nose, cheeks, brows, etc. What can you advise about that? What is the best way to control morphs (blendshapes) with mocap data?


I would advise against using 3D mocap and would look into a 2D, facecam-based solution like Faceware. Nearly every studio uses this method; the difference is that movie VFX houses write their own solvers and sometimes use 2 or 4 cameras, whereas Faceware is more common for games and works with a single camera.

I’m actually not aware of any off-the-shelf system that could use 3D data (in C3D form) to drive a custom blendshape rig; the only such solution I’ve ever heard about is Face Robot. Most studios using 3D mocap are driving a bone rig with it, and the results are mixed, although the new game from the Heavy Rain team looks quite good; I think it’s called Beyond.
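Whatever the tracking source, the core retargeting step is usually the same idea: normalize each tracked landmark’s displacement against a calibrated extreme pose and use the result as a blendshape weight. A minimal sketch of that idea in plain Python; the function name, landmark values, and calibration poses are all hypothetical, not from any particular product:

```python
# Minimal sketch: map a tracked 2D landmark displacement to a blendshape
# weight via per-channel calibration. All names/values are hypothetical.

def solve_weight(neutral, current, extreme):
    """Project a landmark's displacement onto the calibrated direction
    toward an extreme pose, normalized and clamped to [0, 1]."""
    dx = current[0] - neutral[0]
    dy = current[1] - neutral[1]
    ex = extreme[0] - neutral[0]
    ey = extreme[1] - neutral[1]
    denom = ex * ex + ey * ey
    if denom == 0.0:
        return 0.0
    w = (dx * ex + dy * ey) / denom
    return max(0.0, min(1.0, w))

# Example: a mouth-corner landmark, calibrated against a full smile.
neutral = (100.0, 200.0)
smile_extreme = (110.0, 190.0)   # mouth corner at maximum smile
frame = (105.0, 195.0)           # current tracked position

weight = solve_weight(neutral, frame, smile_extreme)
print(weight)  # 0.5 -> drive the "smile" shape at half strength
```

Real solvers do considerably more (multi-shape least-squares fits, temporal filtering, per-actor calibration sessions), but one channel per calibrated pose like this is the basic contract between tracker and rig.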


Also, can’t say much about rigging in Max, I think you should look into Paul Neale’s stuff if you need help or something.


The rigging approach is almost the same in any 3D app; I can adapt Maya/XSI examples if you have any.

Actually, I thought about Faceshift (which can do either 3D or 2D tracking, from both the RGB and depth sensors of a Kinect) or some other 2D tracking programs, but not Faceware, since, as I understand it, you need to send them your video and they track it on their end… a strange approach, in my opinion.


Well, the one thing I tried was replicating Maya’s chained blendshapes in Max.

So I put a Morpher modifier on top of a mesh that I used as a target for another Morpher, and set the top modifier to auto-reload its targets. It was very, very slow, whereas in Maya the different architecture means there’s practically no noticeable slowdown at all.

This means that the method I use for sticky lips in Maya is just not feasible in Max… although it’s not a question of features but architecture and application speed.
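The chaining itself is simple to express; the cost difference between the two apps comes from how often each stage has to re-evaluate. A rough illustration of the data flow in plain Python (not Max or Maya API code; the meshes and shape names are made up):

```python
# Rough illustration of chained blendshapes: stage 2 uses the OUTPUT of
# stage 1 as its base mesh, so any upstream weight change forces the
# whole chain to recompute. Plain Python, not Max/Maya API code.

def morph(base, target, weight):
    """Linear blend of two meshes (lists of (x, y, z) vertices)."""
    return [tuple(b + weight * (t - b) for b, t in zip(bv, tv))
            for bv, tv in zip(base, target)]

neutral   = [(0.0,  0.0, 0.0), (1.0,  0.0, 0.0)]
jaw_open  = [(0.0, -1.0, 0.0), (1.0, -1.0, 0.0)]
lips_seal = [(0.0, -0.5, 0.0), (1.0, -0.5, 0.0)]

# Stage 1: open the jaw fully.
stage1 = morph(neutral, jaw_open, 1.0)
# Stage 2: a "sticky lips" shape is layered on stage 1's result,
# pulling the lip verts halfway back toward the sealed pose.
stage2 = morph(stage1, lips_seal, 0.5)
print(stage2)  # [(0.0, -0.75, 0.0), (1.0, -0.75, 0.0)]
```

In a dependency-graph architecture like Maya’s, only dirty nodes re-evaluate, so a chain like this stays cheap; forcing the Max Morpher to reload its targets, as described above, effectively rebuilds stage 1 from scratch on every change.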


Yeah, I did not mean that everything is similar, but some general things can be transferred.

As far as I know, there are other ways to create sticky lips in max, using modifiers, scripts, etc.

Well, thanks a lot, I’ll try to dig into morph rigs then.



Personally, I really like Blur’s facial animation; they use Face Robot. But maybe there is no need for it, and I can stick with 3ds Max, since all the rendering will be done there…

Check out the tracking with:


Checked that one; I don’t really think it gives good enough results. It needs some testing.


I have another question about facial animation, but I thought it should go in a software-specific forum; here it is:


A few questions about FR: Is there a way to attach an FR rig to a character rig? When working with an FR and 3ds Max pipeline, how do you export FR face animation to a 3ds Max character? In general, what is the FR/3ds Max pipeline, and are there any tutorials on that? FR/Maya pipeline tutorials would fit too; I think I’ll find a way to convert it to 3ds Max if no specific tools are used. Thanks!


Faceware Analyzer is the latest release in the Faceware software suite that allows you to do the video analysis portion of the pipeline in-house, without ever needing to send a video off-site. FTI still offers video tracking as a service, but with Analyzer you have the ability to have your team run the analysis which lets you capture, analyze, and retarget animation all on-site.


Hi all,

I finished my iBook on facial articulation and got it published on Monday.





Hi Hippydrome,

The book looks great! Any plans to release it in any other format for us poor non-iPad users?


Oooo, Ima have to check that out.

Incidentally, I just did a tutorial for Digital Tutors showing how I do facial setups on Grimm.

It uses NURBS curves with joints “riding along” the curves. Combined with PSDs, the rig is very versatile. I hope to get some feedback on it, haha.
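The “riding along” idea can be sketched outside any DCC: pin each joint to a fixed parameter on a curve, and when the curve’s control points move, the joints re-evaluate and follow. A toy version in plain Python, with a cubic Bézier standing in for the NURBS curve; the lip CVs and parameter values are invented for illustration, not taken from the tutorial:

```python
# Toy sketch of joints "riding along" a curve: each joint is pinned to a
# fixed parameter t, and moving the curve's control points moves the
# joints. A cubic Bezier stands in for the NURBS curve here.

def bezier_point(cvs, t):
    """Evaluate a cubic Bezier at parameter t via de Casteljau."""
    pts = [list(p) for p in cvs]
    while len(pts) > 1:
        pts = [[a + t * (b - a) for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])

# Control points for a neutral lip line (hypothetical values).
lip_cvs = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
joint_params = [0.0, 0.25, 0.5, 0.75, 1.0]  # where each joint rides

joints = [bezier_point(lip_cvs, t) for t in joint_params]
# Animate the curve (e.g. raise the middle CVs for a smile); the joints
# simply re-evaluate at their pinned parameters and lift with it.
lip_cvs = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.5), (3.0, 0.0)]
smiled = [bezier_point(lip_cvs, t) for t in joint_params]
print(smiled[2])  # (1.5, 0.375) -> the middle joint has lifted
```

Since the joints drive skinning as usual, animators only touch a few curve controls while the joint count stays high enough for smooth deformation, which is presumably the appeal of the setup.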


I’ve been waiting for this! Now I guess I’ll have to wait a while longer until it is available on non-Apple gear. I don’t like Apple’s “walled garden”. Hopefully it won’t be too much longer for that…

Which one is that? I went and looked for it, but DT doesn’t let you search by Author’s name (that I could see). Could you post a link?


lol, I think it goes live on the 1st. Sorry, I jumped the gun.



First written review of my iBook, The Art of Moving Points.





Here is the preview for the Digital Tutors tutorial I worked on. Hope you guys check it out; it airs on Monday.



Excellent, I’ll be sure to watch that :beer: !



Here is a link to the meshes that I used for my iBook, “The Art of Moving Points”.

These meshes are a good starting point to learn about the topology needed for good articulation and deformation. They were used as the starting point for almost all of my character work at Pixar.

Note that the spans are roughly laid out and could use further polishing to improve the final deformations.