Fitting one head to another


#1

I have a task that I may have to repeat any number of times on different data, so I’m looking for a way to do it with minimal manual effort.

The problem: given two models of a human head, which I will call the source and target models, move the vertices of the target model so that it matches the individual facial characteristics of the source while retaining the target’s own topological organization (arrangement of polygons and edge flows).

The motivation is that the source model presents the desired facial characteristics but does not have topology that is suitable for animation, while the target has proper topology but does not bear the desired facial characteristics.

An important constraint is that points in the deformed target model continue to represent the same face parts as they did before; for instance, if a point was on the tip of the nose before, it should be located at the tip of the source model’s nose, not at some other point along the bridge of the nose or elsewhere.
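
One way to picture this constraint is as a landmark-driven warp: a handful of hand-picked feature points on the target get mapped exactly onto their counterparts on the source, and everything in between is interpolated smoothly. Purely to illustrate the idea, here is a rough Python sketch using SciPy’s RBFInterpolator; the landmark coordinates are made up and the file names are placeholders, so this is a picture of the constraint rather than a solution.

```python
# Illustration only: a thin-plate-spline warp driven by corresponding landmarks.
# Each target landmark is sent exactly onto its source counterpart, so a vertex
# pinned to "nose tip" stays the nose tip after the warp.
import numpy as np
import trimesh
from scipy.interpolate import RBFInterpolator

target = trimesh.load("target.obj")  # placeholder file name

# Hand-picked landmark pairs (made-up coordinates): nose tip, eye corners, chin, forehead.
target_landmarks = np.array([
    [ 0.00,  0.00, 1.00],
    [ 0.30,  0.25, 0.70],
    [-0.30,  0.25, 0.70],
    [ 0.00, -0.40, 0.80],
    [ 0.00,  0.45, 0.60],
])
source_landmarks = np.array([
    [ 0.00,  0.02, 1.08],
    [ 0.33,  0.27, 0.72],
    [-0.33,  0.27, 0.72],
    [ 0.00, -0.45, 0.83],
    [ 0.00,  0.48, 0.58],
])

# Fit a smooth displacement field through the landmark pairs and apply it everywhere.
warp = RBFInterpolator(target_landmarks, source_landmarks - target_landmarks,
                       kernel="thin_plate_spline")
verts = np.asarray(target.vertices)
target.vertices = verts + warp(verts)
target.export("target_warped.obj")
```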

I have played with “Shrink Wrap” tools in DCC applications, but the algorithms these tools use are very simplistic. So far I have found that they invariably produce very bad results when applied mechanically, in part due to the complex nature of the face and the places where the surface self-occludes, such as the creases around the nostrils. An inherent problem with these algorithms is that they do not respect the association of vertices with particular facial landmarks.
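
To illustrate what I mean by simplistic: a typical shrink-wrap mode is essentially nearest-surface-point snapping, as in the minimal Python sketch below using the trimesh library (file names are placeholders). Nothing in it knows which facial feature a vertex belongs to, which is exactly the problem.

```python
# Naive "shrink wrap" as closest-point projection (sketch, not any real tool's code).
import trimesh

source = trimesh.load("source.obj")  # model with the desired facial characteristics
target = trimesh.load("target.obj")  # model with the animation-friendly topology

# For every target vertex, find the nearest point on the source surface...
closest, distance, triangle_id = trimesh.proximity.closest_point(source, target.vertices)

# ...and snap the vertex there. Vertices near creases (nostrils, eyelids) can easily
# land on the wrong feature, because proximity is the only criterion.
target.vertices = closest
target.export("target_wrapped.obj")
```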

I am aware that some of these tools will allow controlling the projection of vertices using a second, deformed target model. I imagine an accurate copy could be achieved by manipulating the local detail in both target models so that a line drawn between the corresponding vertices on the two targets would intersect the source surface at the point representing the same detail. However, in doing so the artist has largely reproduced the source head by hand, twice over at that, defeating the purpose of using an automatic tool!
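
As I understand it, that guided projection boils down to a per-vertex ray cast: from a vertex on the first copy, through the same vertex on the deformed copy, stopping at the nearest hit on the source surface. A rough Python sketch of that rule with trimesh follows; it assumes the two copies share vertex order, and the file names are placeholders.

```python
# Projection guided by a second, deformed copy of the target (rough sketch).
import numpy as np
import trimesh

source  = trimesh.load("source.obj")
targetA = trimesh.load("target.obj")         # target in its original shape
targetB = trimesh.load("target_guide.obj")   # hand-deformed guide copy, same vertex order

origins    = np.asarray(targetA.vertices)
directions = np.asarray(targetB.vertices) - origins  # ray through the matching vertex

# Skip vertices whose guide copy was not moved (zero-length direction).
valid = np.flatnonzero(np.linalg.norm(directions, axis=1) > 1e-12)
locations, index_ray, _ = source.ray.intersects_location(origins[valid], directions[valid])

# Move each vertex to the nearest hit along its ray; vertices whose ray misses stay put.
new_verts = origins.copy()
best = np.full(len(origins), np.inf)
for hit, j in zip(locations, index_ray):
    i = valid[j]                              # map back to the original vertex index
    d = np.linalg.norm(hit - origins[i])
    if d < best[i]:
        best[i] = d
        new_verts[i] = hit

targetA.vertices = new_verts
targetA.export("target_projected.obj")
```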

As an engineer as well as an artist, my instinct was that some more automatic way of handling this must exist, since all faces, while quite varied in the proportions of their features, basically follow a parametric template. In fact, I did find a fairly old research paper describing pretty much what I’m trying to do:

http://www.graphicsinterface.org/proceedings/2002/134/paper134.pdf

They are working from a scanned point cloud instead of a source model, but clearly one could use a subdivision surface on the source model to produce an arbitrarily dense point cloud which could drive the algorithm in the same manner as a point cloud from a 3D scanner.
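
Producing such a stand-in scan could be as simple as densely sampling the source surface. Here’s a quick Python sketch with trimesh (placeholder file names; note that trimesh’s subdivide() is plain 4-to-1 face splitting rather than a smoothing subdivision surface, so a DCC application’s subdivision would give a nicer cloud):

```python
# Turning the source model into a dense point cloud as stand-in scanner data (sketch).
import trimesh

source = trimesh.load("source.obj")

# Option 1: subdivide a few times and take the vertices as the cloud.
dense = source
for _ in range(3):                     # each pass quadruples the face count
    dense = dense.subdivide()
cloud_from_subdivision = dense.vertices

# Option 2: sample points uniformly over the surface area.
cloud_sampled, face_index = trimesh.sample.sample_surface(source, count=200000)

trimesh.PointCloud(cloud_sampled).export("source_cloud.ply")
```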

Unfortunately, I don’t have time to implement this myself and I’m not aware of any commercial or open implementations of such an algorithm, although I’m thinking they must be out there somewhere. Perhaps someone knows of one?

I have also briefly looked at FaceGen. It’s very interesting and has parameterized facial variations (as I expected was possible), but it uses its own fixed model internally. There is a way to make it deform a user model instead, but in order for this to work the user must first fit their own model to its internal reference model, which again brings me right back to square one.

Does anyone know of software designed to do facial fitting automatically? Or are there other tools that in some combination can be used to accomplish this in a manner that isn’t very tedious?

Any advice much appreciated!


#2

I am aware that some of these tools will allow controlling the projection of vertices using a second, deformed target model. I imagine an accurate copy could be achieved by manipulating the local detail in both target models so that a line drawn between the corresponding vertices on the two targets would intersect the source surface at the point representing the same detail. However, in doing so the artist has largely reproduced the source head by hand, twice over at that, defeating the purpose of using an automatic tool!

If you’ve already got the two models, are they close to the same vertex count (if not exactly the same)? My immediate thought when reading the paragraph above was to try the Morph Target modifier: if the two had a matching poly-count, it would simply move the verts of the target onto the source. If one has fewer verts than the other, you could always add in extra loops and see if the extra geometry lets you use a Morpher.
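
Under the hood a morph target is just per-vertex linear blending, something like this rough Python sketch (it assumes the two meshes have the same vertex count and matching vertex order; file names are placeholders):

```python
# What a Morpher boils down to: per-vertex linear blending (sketch).
import numpy as np
import trimesh

target = trimesh.load("target.obj")
source = trimesh.load("source.obj")

assert len(target.vertices) == len(source.vertices), "vertex counts must match"

weight = 1.0   # 0.0 = original target, 1.0 = fully morphed onto the source positions
target.vertices = ((1.0 - weight) * np.asarray(target.vertices)
                   + weight * np.asarray(source.vertices))
target.export("target_morphed.obj")
```

The catch is that vertex N of the target already has to correspond to vertex N of the source for the blend to land on the right features, which two independently built heads generally won’t satisfy out of the box.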