Yes he truly is a genius. I was constantly amazed at how knowledgeable he is in so many areas.
If the blendshapes are correct, you only need 2D position information for a few points.
Not that I’m such an expert but I’d say that there still are a few AUs that aren’t easy to identify from a front view… But I suppose it ‘just’ takes a lot of software development and fine-tuning.
Me neither. I'm just guessing.
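For what it's worth, the "2D points are enough" idea can be sketched as a least-squares solve: given a blendshape basis, recover the weights that best explain the tracked 2D marker positions. Everything below is illustrative (random basis, made-up point counts), not any studio's actual pipeline:

```python
import numpy as np

# Illustrative only: 3 hypothetical blendshapes acting on 4 tracked 2D points.
# Each column of B is the flattened (x, y) displacement a shape adds at weight 1.
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 3))        # 4 points * 2 coords, 3 shapes
neutral = rng.standard_normal(8)       # neutral 2D point positions

true_w = np.array([0.7, 0.2, 0.5])     # ground-truth weights
observed = neutral + B @ true_w        # what a head cam would track

# Least-squares solve: recover blendshape weights from 2D observations alone.
w, *_ = np.linalg.lstsq(B, observed - neutral, rcond=None)
print(np.allclose(w, true_w))          # True: weights recovered from 2D data
```

In practice you'd also need a non-negativity constraint on the weights and some handling of head pose and lens distortion, which is presumably where the "lot of software development and fine-tuning" comes in.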
Heh. Hopefully there'll be some good stuff on the DVD/BR.
Yep, fantastic interview, and Jim's knowledge is impressive. The IR LEDs were a clever touch. The bar is set very high for any Avatar "making of" video after this one.
Avatar, the reviews are unobtaniumly good.
Avatar, the reviews will make cynics turn blue.
Avatar, the reviews have been far above average.
Avatar, the reviews are raping your eyeballs.
Avatar, you should see your faces.
Avatar, the reviews aren’t in Kansas anymore.
Avatar, the reviews are linking in.
Avatar, the reviews are one with Eywa.
Avatar, sky people go see.
Avatar, the reviews are getting some!
Avatar, the reviews will eat your eyes like Jujubees.
I’m not quite sure I understand what JC meant in that Pop Science interview when he talked about additional reference cameras. He said that with their setup, his stereoscopic camera was being mocapped to the virtual camera, the mocap cams around the room were capturing the actors’ body movements, and their head cams were capturing the facial movements. He said this allowed them to bring several other reference cameras into the mocap volume as well. My question is: why would he need additional reference cameras? Are they for the animators, since he said the facial capture rig was limited to a somewhat distorted front view?
+1 for this one
Bravo! Well done! :applause:
This part of the interview made me go wide-eyed just imagining it… This is pretty revolutionary filmmaking. I wish I could visit the set.
Oh! Yet another interesting interview! http://marketsaw.blogspot.com/2009/12/exclusive-james-cameron-interview-talks.html
The whole “chase cam” thing is invaluable for all the subtleties; I think that’s something future versions of performance capture rigs will still need to improve on. They derived a great deal of information from the face cam, but a lot of the extremely important subtleties came from human beings working it over, using the other witness-cam footage for reference. I believe there are some matte-shaded examples of the side-by-sides in the “Learning to Fly” featurette.
I think that’s a big part of what was/is missing in some other efforts, Beowulf for instance. Most of the time the faces just had verts moving in basically 2D from a front view, and nothing shifted front-to-back on the face; it was just freaky at times.
Avatar, The reviews have been 10 years in the making
Avatar, The reviews will revolutionize review making
Avatar, The reviews are finding a diplomatic solution
Haha, I don’t know, just throwing these out there.
So they know what they’ve captured, I guess. Since the helmet-cam footage looks distorted to the point that it’s unwatchable, and the transfer of a facial performance to a CG character isn’t a real-time process, they would need witness cameras to help decide which take to use, to check whether the performance has translated to CG correctly, and to have something to stick on the DVD extras.
Great interview, Cameron clearly knows his stuff.
Thanks, that audio interview on MarketSaw explained the reference camera footage much better.
Evidently it provides a way to shoot both your primary shots of a particular scene and additional coverage from multiple different angles, all at the same time. Say he’s shooting a wide shot of two characters interacting and one of them hands the other an object; he can splice in footage from one of the reference cameras during the edit to see if it enhances the scene. If it does, he can then go into the mocap data for that scene, place a virtual camera at the same location as the reference camera, and get that angle lit and rendered out for the film. Basically it’s a more immediate way to try different angles and cuts than repeatedly going in and out of the mocap data.
Plus, in the case above, he’s got high-def reference of the hand and finger articulation as the character hands over the object. This would be good for the animators if he then chooses to use that shot because, as I would imagine, a lot of the subtle, minute articulation of hands and fingers is probably lost or not even picked up by traditional mocap.
At the end of the day, the reference footage is more or less a backup in case he chooses to change camera angles or blocking after the actors are gone. It’s just another example of Cameron’s genius and his habit of always planning ahead.
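The virtual-camera trick above, dropping a CG camera at the witness camera's surveyed position and re-rendering the mocap data from there, boils down to a pinhole projection of the captured points. A minimal sketch, with entirely hypothetical marker positions, camera pose, and focal length:

```python
import numpy as np

def project(points_3d, cam_pos, cam_rot, focal=35.0):
    """Pinhole-project world-space mocap points through a virtual camera.

    cam_pos: camera position in world space; cam_rot: world-to-camera
    rotation matrix. The camera looks down its local +Z axis.
    """
    cam_space = (points_3d - cam_pos) @ cam_rot.T   # world -> camera coords
    return focal * cam_space[:, :2] / cam_space[:, 2:3]

# Hypothetical: two captured marker positions and a virtual camera placed
# at the (surveyed) spot where a witness camera stood.
markers = np.array([[0.0, 0.0, 5.0],
                    [1.0, 0.5, 5.0]])
cam_pos = np.zeros(3)
cam_rot = np.eye(3)          # looking straight down +Z

print(project(markers, cam_pos, cam_rot))
# the first marker projects to the image center, (0, 0)
```

Since the mocap volume gives you full 3D data, any camera position you can survey (or invent) can be "filmed" from after the fact, which is exactly why the reference footage works as a backup for late blocking changes.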
I think it’s very much a hybrid system, but I can’t say for sure what was ultimately developed. When I was there, James Jacobs was developing a rig-based facial animation system using an anatomical model (muscles, skeleton, etc.) of a human face, while Jeff Unay and his team were sculpting face shapes to complement the creatures’ rigs. Iain Matthews was involved in developing the facial tracking system used.
I do apologize for name-dropping, but I wholeheartedly believe that within this industry, at least, you should know some of the names of the very talented people involved in this process. I also apologize for leaving out the names of the various other people involved in the facial animation pipeline at Weta; I know I’ve left off some very important names, and I would love to name-drop every single one of you. But I am basing this on my knowledge from almost two years ago.
Hopefully the Cinefex article and/or the FXguide interview will shed more light on what and who was involved in the process.
Here’s a link to the “official” Avatar credits list. I should point out, however, that it is a very flawed list. Many of my fellow artists were simply left off entirely or credited incorrectly. A nasty habit of this industry, unfortunately.
Shame on Fox!! :banghead:
Please use this thread for discussion about this film now that it is out: