Cineversity CV-AR


#1

How do I extract data from the CV-AR model and apply it to another 3D object?

Thanks


#2

Your answer is in the title of your thread. There is a series of tutorials on Cineversity that covers getting the data into C4D and how to apply it.


#3

If you drop the CV-AR object into an XPresso window, it has data output ports you can reveal for all 30+ blendshapes detected by ARKit. Using XPresso you can pipe them into the morph targets or other parameters on other meshes or objects.
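If you'd rather script the wiring than connect each port in XPresso, roughly the same thing can be done with a Python tag on the target mesh. This is only a sketch: the way the CV-AR values are exposed (here assumed to be copied onto user data entries named after each blend shape) and the source object name "CV-AR Face" are placeholders you'd need to adapt to your own scene.

```python
import c4d

# Python tag on the target mesh: copy blendshape values from the CV-AR
# object into Pose Morph sliders of the same name.
# Placeholder assumptions: the incoming values live on user data entries
# named after each blend shape (e.g. "eyeBlinkLeft", "jawOpen"), and the
# source object is called "CV-AR Face". The Pose Morph tag must be in
# Animate mode so the strength sliders exist.

def main():
    doc = op.GetDocument()
    src = doc.SearchObject("CV-AR Face")   # placeholder source object
    tgt = op.GetObject()                   # mesh this tag sits on
    morph_tag = tgt.GetTag(c4d.Tposemorph)
    if src is None or morph_tag is None:
        return

    # Gather the incoming values by name from the source's user data.
    values = {}
    for ud_id, bc in src.GetUserDataContainer():
        values[bc[c4d.DESC_NAME]] = src[ud_id]

    # Index 0 is the base pose; drive every morph slider whose name matches.
    for i in range(1, morph_tag.GetMorphCount()):
        name = morph_tag.GetMorph(i).GetName()
        if name in values:
            morph_tag[morph_tag.GetMorphID(i)] = values[name]
```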

Here’s an example of CV-AR data driving a face rig I created… Not perfect, since I still need to tune my morph targets… And this was done with the first version, before it supported 60 fps capture.

https://www.dropbox.com/s/5qkkpscnn516zi4/CV-AR_newBlendShapes.mp4?dl=0


#4

By the way, is there a limit to how close the phone can be to the face?
And does it only work when the phone is upright, or does it also work when the phone is tilted horizontally?


#5

There is a sweet spot for face recognition, from around 25 cm to about a meter and a half. I can’t recall whether horizontal works, but it seems more natural to record in portrait, especially close up.


#6

Ah thanks. ^^

I was asking because I haven’t bought it yet, but I’m planning to make a head rig for a phone for simultaneous face mocap and body mocap sometime in the near future.


#7

Others have built those kinds of rigs for Maya:

https://www.youtube.com/watch?v=MfnNJaVCLM8


#8

Yeah, that was my inspiration. ^^


#9

I’m wondering if there is a way to decouple the generic facial mesh from the blend shape XPresso in order to gauge precisely what each of the sliders does in response to the facial recognition. Or it would be helpful if there were a chart/guide. Many of them are straightforward, like “left eye blink”, but for others one can intuit roughly what “mouth pucker” means, yet precisely what it looks like is hard to gauge, especially in isolation from the other movements. If there were some sort of T-pose for the facial expression blend shapes, like state-1 and state-2 denoting each end of the slider for each pose, my life would be a hell of a lot easier. Do you have any helpful tips?


#10

Apple’s ARKit documentation has very good visual examples of each blend shape on a generic mesh:

https://developer.apple.com/documentation/arkit/arfaceanchor/blendshapelocation

Click on each blend shape name to see it demonstrated.
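If you also want to preview the morphs inside C4D itself, something like this in the Script Manager can isolate one slider at a time (a rough sketch; it assumes the mesh with the blend shapes is selected and carries a Pose Morph tag in Animate mode, with index 0 as the base pose):

```python
import c4d

# Step through the Pose Morph sliders on the selected object one at a
# time, so each blendshape can be viewed in isolation. Change `index`
# (or wrap this in a loop with keyframes) to inspect the others.

obj = doc.GetActiveObject()
tag = obj.GetTag(c4d.Tposemorph)

index = 3  # pick any morph index; 0 is the base pose
for i in range(1, tag.GetMorphCount()):
    tag[tag.GetMorphID(i)] = 1.0 if i == index else 0.0

print(tag.GetMorph(index).GetName())  # which blend shape is now at 100%
c4d.EventAdd()
```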