Projecting a live plate onto geometry ... ideas?

05 May 2009, 04:48 PM
In theory ... is it possible to project a live action plate onto a piece of geometry, deform the geometry, and have that deform that area of the plate, to later be comped back into the plate?

So imagine you have a video sequence of, say, a tree (and it's a moving, tracked camera too) ... project the tree plate onto a proxy version of the tree, then maybe bend the tree geometry and have that deform the plate tree ... confusing, huh?

This can be done a number of different ways, but I'm investigating a pipeline method that involves projecting a plate onto geo, then deforming the geo so that it deforms the plate, which is then comped back into the plate ... um, yeah, don't worry, I'm not even quite sure I understand that ...

Any thoughts? Anyone see where I'm going with this? :)


05 May 2009, 05:45 PM
Well, that was a wee bit confusing to read, but I guess for the projection itself the UVProject modifier might be useful. Not sure how to approach the rest of it; I guess you could just render images of the tree, and only the tree, in layers to comp back, maybe?
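As an aside, the idea behind a UVProject-style camera projection is simple enough to sketch outside Blender: each vertex is pushed through the camera, and its screen position becomes its UV coordinate, so the plate lines up with the geo from the camera's point of view. A minimal plain-Python sketch (pinhole camera; the function name and the simplified camera model are illustrative, not Blender's API):

```python
# Minimal pinhole-camera UV projection, the idea behind a
# UVProject-style modifier: a vertex's screen position becomes its UV.
# The camera model here is deliberately simplified.

def project_uv(vertex, focal=1.0):
    """Project a camera-space vertex (camera at origin, looking down -Z)
    onto the image plane and remap to 0..1 UV space."""
    x, y, z = vertex
    if z >= 0:
        raise ValueError("vertex is behind the camera")
    # Perspective divide: position on the image plane.
    sx = focal * x / -z
    sy = focal * y / -z
    # Remap from [-1, 1] screen space to [0, 1] UV space.
    return (0.5 + 0.5 * sx, 0.5 + 0.5 * sy)

# A vertex straight ahead of the camera lands in the middle of the plate.
print(project_uv((0.0, 0.0, -2.0)))   # (0.5, 0.5)
```

With a moving, tracked camera you would recompute this per frame; the point of the later "sticky" trick is to freeze these UVs at one frame instead.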

05 May 2009, 06:54 PM
Yeah, the key being: can you project a 2K sequence onto the geo, then deform the geo to deform that area of the plate? Sounds odd, but it does have a practical use :)


05 May 2009, 04:23 AM
Uhmmm..... Did James Cameron ask you that question during an AVATAR set visit?

j/k... :P

Are you thinking of something like.. "Use Frame Sequence as Displacement Map"?


05 May 2009, 06:09 AM
Do you mean video texturing onto a UVmapped plane, then deforming that (so the tree appears to bend)? If so, then yes:)

05 May 2009, 08:56 AM
Do you mean video texturing onto a UVmapped plane, then deforming that (so the tree appears to bend)? If so, then yes

Pretty much, but it has to be projected. Then, using the example of the tree, bend the geo, which in turn (assuming the projected plate sticks to the geo) will bend that area of the plate ...


05 May 2009, 12:02 PM
Ok, so take the plate ... 3D track the focal object, animate the object as desired ... project the plate onto the geometry. This SHOULD pin the plate to the geo, even if the geo is animated ... UNWRAP the geo and render out the unwrapped version as a sequence ... do whatever to this ... then reproject the sequence onto the mesh, comp, and replace in the original plate ...

Ummm ... yeah, I know I'm onto something ... it's all gonna click into place shortly, I can feel it :)


05 May 2009, 12:23 PM
Dave, please provide some sketches or something to help us understand what should be happening.

I'm quite intrigued by this.


05 May 2009, 12:41 PM
Oh, it's just more theory at the moment, but I'm curious if the same techniques can be done in Blender ...

It's the process of taking a live action character, such as a dog or lizard, and animating the plate character ... I'm sure you've seen films where the dogs are talking ... well, this is the process used, as above: they create a CG double, then, using various projection techniques, deform the plate via the animated geometry and comp it back into the plate ... of course this involves creating a CG double of the head, 3D tracking it seamlessly, and lots of post comping ...



05 May 2009, 10:49 PM
Well, in the current issue of Blenderart Magazine (Issue 21), there's a tutorial on how to use Blender to animate photographs of live people.

That's not exactly the same, but I suspect some of the techniques to animate one frame of live action can help figure out how to do the same for a whole reel.

As for talking dogs, I always thought the solution involved simply putting CGI over sections of the animal from the very beginning of the scene and using match-moving? So that if you replaced, say, the snout or mouth of the dog from the very beginning of a scene, no one would notice it was replaced? Or is that impractical?

I never thought a whole head replacement was needed, because leaving some sections live would enhance the reality and give a good "goalpost" for the composite. If one can approximate the live sections left in the plate, then the result should fool any audience.

But yes, I have seen the kind of animation you're talking about. :)

05 May 2009, 08:44 AM
Yeah, you're right, to a point.

Using a dog as an example, the jaw of the dog would be recreated in 3D, then match-moved to the plate, and the plate projected onto the geometry/3D jaw ... this takes away the need for relighting, shaders, etc. ... the 3D jaw is then animated as desired, which drives the plate being projected ... then roto and comp take over and put it back into the plate ...


05 May 2009, 08:51 AM
That sounds about right, sir!

I think a test is in order! :)

There are many ways to make this thing work.
My own favorite method of making texture maps is by heavily editing and mish-mashing live action photographs of the objects.

The other way would be to build a very, very detailed CGI replication of the jaw and snout, lit and textured to match the exact plate ...

Even if a total match cannot be attained this way, some colour grading applied in post production can finish the job ... like that "yellow sand fog" Michael Bay likes to use in films like The Island, Bad Boys, and Transformers.

05 May 2009, 09:10 AM
Yeah, it can be done many ways, but the advantage of projecting the plate back is that you have all the texture/lighting information baked in from the plate itself, including subtle effects such as fur moving, etc. ...

Don't recall the frog :)


05 May 2009, 09:25 AM
Yeah, it can be done many ways, but the advantage of projecting the plate back is that you have all the texture/lighting information baked in from the plate itself, including subtle effects such as fur moving, etc. ...

Don't recall the frog :)


I suppose the mesh-work can be very, very minimal ... I think in BlenderArt 21, the guy only actually animates the opening of the mouth and the eyes themselves, not bothering with a sculpted shape ...

But even then, the mouth itself would have to exhibit some lighting changes....

Using only the barest minimum as moving meshes could ensure you retain things like the "moving hairs". But if the hair is under the chin, the chin would have to move down at times, and the hairs would be in the wrong perspective even if they did move downwards ... in those cases you'd need a mesh chin too.

It could come down to deciding it shot by shot, depending on what the dog did. :P

With regards to the "fog"...

Here's an example:

The last thing added is a kind of "Yellow Color" to everything and I figure it's a "glue". You see the same amount of yellowing in ALL elements and so your eyes accept that everything there must have existed together. It's a clever trick to sell the illusion.. at least that's my belief.

It may be just coincidence but this "sun-drenching Yellow mist" appears to be also in The Island and other films.

05 May 2009, 10:12 AM
Frog! ... ok, I need more coffee :)

05 May 2009, 10:22 AM
Oh yeah, sticking a filter over everything and grading tends to pull it all together; the closer you get to duotone, the easier it becomes to sell the illusion, because you're removing the visual cues the brain picks up on to determine what's real. Though this should be kept to a minimum ... if you get a chance, check out The Mist; there was a black/white version on the special edition DVD, and it looked really cool with the creature work. It's something you rarely see today ...


05 May 2009, 10:49 AM
Hi all.

If I understand correctly what you want, you can use camera mapping and then make that projection sticky. I used this method for motion blurring a background photo in my BMW Z4 image. After the projection is made sticky, you can transform, deform, etc. the geometry, and the image will follow.

See for more info.

Hope this helps, and excuse if this is not what you meant... ;-)
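The "sticky" trick described above can be sketched outside Blender too: UVs are computed from the projection camera once, at a rest frame, then frozen, so deforming the mesh afterwards drags the projected image along with it. A toy plain-Python sketch (simplified pinhole camera; class and function names are illustrative, not Blender's API):

```python
# Sketch of "sticky" camera mapping: UVs are baked from the camera once,
# then frozen, so deforming the mesh afterwards drags the projected
# image along with it.

def camera_map_uv(vertex, focal=1.0):
    """Camera at origin looking down -Z; screen position -> 0..1 UV."""
    x, y, z = vertex
    sx, sy = focal * x / -z, focal * y / -z   # perspective divide
    return (0.5 + 0.5 * sx, 0.5 + 0.5 * sy)

class StickyMesh:
    def __init__(self, verts, focal=1.0):
        self.verts = list(verts)
        # Bake UVs once from the projection camera ("make it sticky").
        self.uvs = [camera_map_uv(v, focal) for v in self.verts]

    def deform(self, fn):
        # Move the points; the baked UVs are deliberately left alone,
        # so the plate texture follows the deformation.
        self.verts = [fn(v) for v in self.verts]

mesh = StickyMesh([(0.0, 0.0, -2.0), (1.0, 1.0, -2.0)])
before = list(mesh.uvs)
mesh.deform(lambda v: (v[0] + 0.3, v[1], v[2]))  # e.g. bend the tree
assert mesh.uvs == before  # UVs unchanged: image stays pinned to the geo
```

The design point is simply that the projection happens once and the result is stored per vertex, instead of being recomputed from the camera every frame.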

05 May 2009, 12:25 PM
Ok, so the steps I'm interested in, to see if this can be done in Blender, are:

Project (aka camera map) a sequence onto an object ... not just a single still, but a sequence, ideally at 2K. Remember, in the above examples the 3D object would be 3D tracked to the plate ...

Next, when we have this ... can we then bake the sequence back out? i.e. unwrap the object and bake the sequence, so if you saw this playing in a QuickTime it would look like a typical unwrapped colour map, but animated ...

At this point the baked sequence would be edited outside Blender, cleaned up, etc. ...

Then bring the sequence back into Blender ... and apply it back to the geometry ...

The object, say a dog's jaw, is then animated as desired, rendered out, and recomped into the original plate ...

Note the whole advantage of this is to keep the plate information, such as original lighting and textures, effectively warping/deforming the plate to animate the required area ...



P.S. ... how do you get a texture to preview in the Blender viewport? Whenever I select the texture preview mode thingy, the object goes bright pink! ...
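The bake step in that pipeline amounts to: for each point on the unwrapped mesh, look up where that point projects into the plate, and copy that pixel into the unwrapped map (repeated per frame of the sequence). A toy plain-Python sketch with a tiny nested-list "image" standing in for one frame; all names here are illustrative, not Blender's baking API:

```python
# Toy sketch of baking a camera-projected plate into UV space:
# each unwrapped point stores the plate pixel it "sees" through the camera.

def sample(plate, u, v):
    """Nearest-neighbour lookup of (u, v) in 0..1 on a 2D 'image'."""
    h, w = len(plate), len(plate[0])
    px = min(int(u * w), w - 1)
    py = min(int(v * h), h - 1)
    return plate[py][px]

def bake_to_uv(plate, samples):
    """samples: list of (unwrap_uv, plate_uv) pairs for points on the mesh.
    Returns the plate colour each unwrapped point should store."""
    return {unwrap_uv: sample(plate, *plate_uv)
            for unwrap_uv, plate_uv in samples}

plate = [["A", "B"],
         ["C", "D"]]          # 2x2 stand-in for one 2K frame
baked = bake_to_uv(plate, [((0.0, 0.0), (0.9, 0.1)),   # this mesh point
                           ((1.0, 0.0), (0.1, 0.9))])  # sees these pixels
print(baked)   # {(0.0, 0.0): 'B', (1.0, 0.0): 'C'}
```

Run over every frame, this produces the animated unwrapped colour map described above, which can then be cleaned up externally and reapplied to the geo through its ordinary unwrap UVs.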

05 May 2009, 12:29 PM
Ok, so does anyone have a 2K sequence of a hairless dog? ... or maybe a lizard ... where the head moves? ...

05 May 2009, 12:41 PM
Well Dave there are places like this one:

But I think they will cost you.

If you have the right camera though you could film an animal....

05 May 2009, 01:38 PM
Yeah, that is a good one too, relatively cheap ... I saw a few reptile shots that would be good test subjects, but I'll need to wait until my credits build back up.


05 May 2009, 11:27 PM
Visualizing the effect in my mind, I do have concerns that a lot of the peripheral movement around the mouth will have to be covered up too, in the case of lizards and dogs.

I have a feeling that parts on the underside of the head and jaws will be expected to tuck under or change orientation (hence change of light and shade) as the animal "talks".

If you deform the geometry correctly but force the plate to "stay the same" in parts, you could end up with the same visual distortion that is done on purpose on some trackside sponsor logos-on-grass at F1 races: a skewed image locked straight on to the camera looks like it is "floating and upright" against the natural background, and does not conform to the perspective the eye expects.

But then again, that's what testing is for. :)

05 May 2009, 08:42 AM
Yeah, the animation in this technique is limited; you wouldn't be able to turn the whole head. It's more for facial animation, where the head may already be moving and, say, the eyes and mouth are fixed.


05 May 2009, 08:49 AM
I really think it will go shot by shot ... or rather, plate by plate. Each one may demand specific solutions.

I won't be surprised if we need to go "Mr. Beaver" on them.. hehehe..
You know.. 100% CGI. Depends on what lines you are trying to mouth and what emotion is required and what plates you end up with.

But while we're on the idea of using it for possibly a longer feature, it will come down to comparing what each scene needs and how useful the plates are.

The other difficulty is that there can be varying disparities between the plates and the target end results ... it could end up being a case of re-inventing the solution per scene, occasionally surrendering to a 100% CGI actor.

And that's not even considering Blender's limitations. I would guess many if not all 3D packages will have problems with the deformation and perspective changes.

Still.. it is an interesting endeavor.

Kaptain Kubrick
05 May 2009, 09:12 AM
Hey - well, I can't do 2K at all, but we have a pug puppy like the one in MIB; if you want some snaps or video of him, I might be able to get you some footage. However, my camera isn't the greatest, so let me know if you're interested.

Proof of concept or something


CGTalk Moderation
05 May 2009, 09:12 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.