Stereoscopic compositing in Nuke - workflow


#1

Hello!

Recently I have become interested in compositing stereoscopic footage.

The single most important thing for me is how to place 3D CG objects within the stereo space. I'm looking for a way to create a “continuity” of stereo depth between the 3D CG elements and the film footage. There is no conversion involved - the footage was shot in native stereo 3D.

I’m using Nuke and Ocula - I’ve got a few ideas of my own, but I want to start from the right place so I don't ruin the workflow of the whole project.

One idea: I generated a Nuke point cloud so I can more easily place CG objects within specific parts of the scene. Then I move the 3D stereo rig into the scene at the appropriate scale, so there should be no errors in the stereo base or in the camera and lens data. Unfortunately, I think this will not work for a scene with a lot of motion (of actors and other objects). The same applies to slow or static camera moves.
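
For clarity, the setup I've got so far, as a rough Python sketch - node names are the stock NukeX ones, the Read/Camera names and the Axis position are placeholders, and the analysis itself is run from the PointCloudGenerator's buttons in the GUI:

```python
import nuke

plate = nuke.toNode('Read1')    # placeholder: the stereo plate
cam = nuke.toNode('Camera1')    # placeholder: the matchmoved hero-eye camera

# Input order is from memory - check Source vs. Camera on the node itself.
pcg = nuke.nodes.PointCloudGenerator(inputs=[plate, cam])

# Once the cloud is generated, a CG element can be parented to an Axis
# placed at a position read off the cloud (example coordinates only).
axis = nuke.nodes.Axis2()
axis['translate'].setValue([1.2, 0.0, -4.5])
```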

My last idea is to use the Ocula solver to create a disparity map, which gives us a “model” of depth. The question is: how do I work with this tool? And how do I match the scale of the scene to the camera? Maybe, using the data from the analysed shot, one could build a 3D rig with parameters equal to those of the “real” cameras.
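
So far I've only got as far as the basic chain - a sketch from memory of the Ocula node names, and exactly the part I'd like confirmed:

```python
import nuke

stereo_plate = nuke.toNode('Read1')   # placeholder: left/right views joined upstream

# O_Solver matches features between the eyes and passes the solve
# downstream; O_DisparityGenerator then writes the disparity channels.
solver = nuke.nodes.O_Solver(inputs=[stereo_plate])
disp = nuke.nodes.O_DisparityGenerator(inputs=[solver])
```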

Thank you for any kind of hint or help! :)


#2

The simplest solution is to replicate your stereo camera rig in the CG software. Fortunately, using stereo cameras can actually make matchmoving more accurate since you have twice the data available for solving. PFTrack has a decent stereo solver, although it takes some time to learn how to use it effectively.

First, let me admit that most of my experience in stereo compositing has been in Fusion, but the principles will hold. I’m familiar with Nuke and Ocula, though, so I’ll try to use the appropriate node names when I can remember them.

I haven’t done much work using 3D objects within the compositing system in a stereo workflow. At most, I’ve used a few cards with projected textures and displacement maps for backgrounds. Instead, I prefer a stereo CG render. Assuming the CG cameras were reasonably close to correct and the scene scale was accurate, you shouldn't need to worry much about interaxial adjustments: the apparent internal depth of the CG objects should be pretty accurate. However, it’s common for the object as a whole to simply sit at the wrong depth. That’s very easy to correct with a simple horizontal image translation (HIT) upstream of the merge. Use a Position node to perform the translation rather than a Transform, because the Position won’t filter your pixels.
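
One explicit way to wire that up, as a sketch (in practice you'd probably just split the Position's translate knob per view in the GUI; node names and pixel values here are placeholders):

```python
import nuke

cg = nuke.toNode('CG_Render')   # placeholder: the stereo CG branch

# One Position per eye with equal and opposite x shifts (example values),
# then re-join the views. Position moves whole pixels with no filtering.
left = nuke.nodes.OneView(inputs=[cg])
left['view'].setValue('left')
pos_l = nuke.nodes.Position(inputs=[left])
pos_l['translate'].setValue([3.0, 0.0])

right = nuke.nodes.OneView(inputs=[cg])
right['view'].setValue('right')
pos_r = nuke.nodes.Position(inputs=[right])
pos_r['translate'].setValue([-3.0, 0.0])

joined = nuke.nodes.JoinViews(inputs=[pos_l, pos_r])
```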

If you like, you can generate a disparity map of both your plate and your CG and use that to guide your placement. You’ll have to normalize the disparity for viewing with a Grade. You don’t need to look at the entire range, just the portion that covers your CG and environs. Adjust the HIT on the CG until the disparity values match visually.
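
For example (a sketch - the disparity layer name is from memory and the black/white points are examples; probe your own values with the Viewer's pixel sampler first):

```python
import nuke

disp = nuke.toNode('O_DisparityGenerator1')   # placeholder: node carrying disparity

# Remap just the disparity range around the CG into 0-1 for viewing.
g = nuke.nodes.Grade(inputs=[disp])
g['channels'].setValue('disparityL')   # layer name as written by Ocula (from memory)
g['blackpoint'].setValue(-12.0)        # example: nearest disparity of interest
g['whitepoint'].setValue(8.0)          # example: farthest disparity of interest
```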

Of course, you can alter the interaxial if you need to, but be aware that there may be some distortion or artifacts as a result.

If you want to get really precise, you can even adjust the CG’s disparity map and then use a new-view node (O_NewView in Ocula) to reconstruct the eye from it. For instance: you have a CG character who is supposed to touch something. The hand doesn’t quite make contact, but the feet are solidly on the floor. A soft mask around the hand and a color correction on the disparity map can be used to very slightly stretch the hand's depth so that it makes contact.
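
A rough sketch of that trick, with node names from memory and placeholder values:

```python
import nuke

cg = nuke.toNode('CG_Render')        # placeholder: stereo CG with disparity channels
roto = nuke.nodes.Roto()             # soft mask drawn around the hand

# Grade only the disparity layer, limited by the mask, then let O_NewView
# rebuild the eye from the edited disparity. Multiply value is an example;
# repeat for the other eye's disparity layer as needed.
tweak = nuke.nodes.Grade(inputs=[cg, roto])
tweak['channels'].setValue('disparityL')   # layer name from memory
tweak['multiply'].setValue(1.05)           # ~5% disparity stretch under the mask
tweak['maskChannelInput'].setValue('rgba.alpha')

rebuilt = nuke.nodes.O_NewView(inputs=[tweak])
```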