|01-23-2013, 01:40 PM||#1|
Join Date: Jan 2013
Robot eye replacement in Nuke, PFTrack & Maya?
First post here!
I'm trying to replace an actor's eye with a more robotic one, à la the Terminator eye videos on YouTube. It can be a bit silly, but it looks like fun and might help me understand how to do some other interesting shots.
Most guides point to doing this in 3DS Max, but as I'm on a Mac I thought I'd have a go at solving it myself.
I have an actor with tracking dots on the non-moving parts of the face (for this shot anyway) - forehead, nose, etc. - and I've filmed it with two cameras to improve tracking if needed.
I've tried the shot two ways:
Method 1: geometry moving, camera static
1a. I used PFTrack to do a user track on both cameras, then used a mocap solver. Next I made a geometry track of the head with helper trackers coming from the mocap solver. I should point out that I used a generic head that matched up in some ways but was a little thin. This worked fairly well and gave me a good track.
1b. I then exported the scene to Maya, where I can see my geometry moving about as the real head did. I imported a robot head, lit it and rendered it out with alpha channels intact. A bog-standard beauty render - I'm not worried about it looking good just yet.
1c. I imported the renders from Maya into Nuke. I unpelted the original actor's head plate with a ScanlineRender node in 'uv' projection mode. I then used this 'flattened' face to add digital make-up around the eye with Merge nodes, and reprojected the unpelted face back onto the geometry with another ScanlineRender so the make-up would move correctly in relation to the face movement.
I then unpelted the rendered robot face and performed the same operation as with the digital make-up, this time rotoing out just the eye and comping it back over the unpelted face.
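The unpelt/reproject round trip in 1c can be sketched outside Nuke: baking the plate into UV space means writing each screen pixel's colour at that pixel's (u, v) on the head geo, and reprojecting is just sampling the flat map back through the same UVs, so anything painted in UV space sticks to the moving face. Here's a minimal, hypothetical sketch (the pixel values, UVs and function names are all made up for illustration - this is not Nuke's API):

```python
# Sketch of the unpelt -> paint -> reproject round trip from step 1c.
# All data below is invented; in Nuke the ScanlineRender node in 'uv'
# projection mode produces the flattened (unpelted) plate.

def unpelt(plate, uv_of_pixel):
    """Bake the camera plate into UV space: each screen pixel has a
    known (u, v) on the head geo, so write its colour there."""
    flat = {}
    for pixel, uv in uv_of_pixel.items():
        flat[uv] = plate[pixel]
    return flat

def reproject(flat, uv_of_pixel):
    """Sample the flattened map back through the same UVs, so paint
    done in UV space lands back on the moving face."""
    return {pixel: flat[uv] for pixel, uv in uv_of_pixel.items()}

# Hypothetical 2x2 plate (single-channel) and its UV assignment.
plate = {(0, 0): 0.2, (1, 0): 0.4, (0, 1): 0.6, (1, 1): 0.8}
uvs   = {(0, 0): (0.0, 0.0), (1, 0): (1.0, 0.0),
         (0, 1): (0.0, 1.0), (1, 1): (1.0, 1.0)}

flat = unpelt(plate, uvs)
flat[(0.0, 0.0)] = 1.0           # "digital make-up" painted in UV space
out = reproject(flat, uvs)
print(out[(0, 0)])               # the paint lands back on that pixel: 1.0
```

As long as the same geometry and UVs are used on both sides, the round trip is lossless for the pixels the geo covers, which is why the make-up tracks the face.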
Method 2: camera moving, geometry static
2a. I used PFTrack to do a user track on both cameras, and then used a mocap solver. I solved the camera and oriented the scene.
2b. I brought the camera and tracks into Maya and lined my robot face up in the middle so that it moved correctly with the original face plate. I cut away the faces of the head until I just had the eye left, then rendered the eye out.
2c. Imported the eye render footage and straight slapped it over the face plate with a merge.
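The straight Merge in 2c is Nuke's default "over" operation, which on premultiplied pixels is A + B × (1 − alpha_A). A single-pixel sketch with invented RGBA values:

```python
# Nuke's Merge 'over' on premultiplied pixels: A + B * (1 - a_A).
# The RGBA samples below are invented single-pixel values.

def over(a, b):
    """Comp premultiplied RGBA pixel a over pixel b."""
    alpha_a = a[3]
    return tuple(ca + cb * (1.0 - alpha_a) for ca, cb in zip(a, b))

eye_render = (0.5, 0.5, 0.5, 1.0)   # opaque robot-eye pixel
face_plate = (0.8, 0.6, 0.4, 1.0)   # background plate pixel

print(over(eye_render, face_plate))      # eye wins where alpha is 1
print(over((0, 0, 0, 0.0), face_plate))  # plate shows through at alpha 0
```

This is also why the eye render needs its alpha channel intact: wherever the render's alpha is zero, the original plate passes through untouched.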
Is there a better method 3 to be had? Am I way off the mark with my efforts? Method 1 seems to be a lot more accurate than method 2.
Method 2 works instantly BUT has a myriad of problems and seems like a dead end - I can't change the eye shape any more as it can't be adjusted, whereas method 1 lets me keep changing the size of the eye as needed.
But in method 1, I have a problem reprojecting both unpelted images back onto the head: the robot eye looks pinched by the eye socket of the original face geometry.
The original head of course goes back over with no problems.
As I said earlier, I didn't use a perfectly matching head to track and later project back onto. Therefore the nose of the geo causes a little occlusion of my eye at some points.
Do I need a much more accurate geo head if I want to do it this way?
Method 1 seems better and offers more possibilities to alter the shot; it could also be used to interact with RealFlow, for example, since the geometry is moving.
Any better ways to do this with the software above?
|01-23-2013, 01:40 PM||#2|
Join Date: Sep 2003
Thread automatically closed
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
Note that as CGTalk Members, you agree to the terms and conditions of using this website.