I've been thinking about it, and I believe I've come up with something that might help you with the texturing.
Take your uniformly-lit photos as normal and run them through Photofly to obtain the new mesh. Then:

1. Bring the new model and cameras into Maya, and parent the cameras to the new model.
2. Align the new model to the old one. You can do this with the ICP (iterative-closest-point) algorithm, which has been implemented all over the place, if you want a precise automatic fit; otherwise do it by hand. Because the cameras are parented to the model, this aligns them to the old mesh as well.
3. Create a fake zDepth shader: a grayscale ramp set to projection, with the placement node scaled so it covers the entire head and oriented towards the camera. Bake this texture onto the retopologized mesh (see the script sketch below).
4. Project the photo from that camera onto the retopologized mesh and bake that texture as well.
5. This gives you a color texture plus a camera-space depth map texture that you can control: move the placement node if you want more of this camera's texture to show through (white) or less (black).
6. Use the depth map as an alpha on the color texture, so that anything far away, or not visible from the camera, gets masked out.

Doing this for all photos should result in a nicely blended, uniform texture.
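If it helps, here is a rough Maya Python sketch of steps 3-4 for a single camera. The names (photoCam1, headRetopoMesh, the photo path and output file names) and all the placement transform values are just placeholders I made up; you'd eyeball those in the viewport and repeat the whole thing per photo/camera.

```python
import maya.cmds as cmds

cam   = 'photoCam1'                      # one of the aligned Photofly cameras (placeholder name)
mesh  = 'headRetopoMesh'                 # the retopologized head mesh (placeholder name)
photo = 'sourceimages/photo_01.jpg'      # the photo taken from that camera (placeholder path)

# --- Step 3: fake zDepth, a grayscale ramp projected in camera space ---
ramp = cmds.shadingNode('ramp', asTexture=True, name='depthRamp1')
# white near the camera, black far away (swap the entries if it bakes out reversed)
cmds.setAttr(ramp + '.colorEntryList[0].position', 0.0)
cmds.setAttr(ramp + '.colorEntryList[0].color', 1, 1, 1, type='double3')
cmds.setAttr(ramp + '.colorEntryList[1].position', 1.0)
cmds.setAttr(ramp + '.colorEntryList[1].color', 0, 0, 0, type='double3')

depthProj = cmds.shadingNode('projection', asUtility=True, name='depthProj1')
place3d   = cmds.createNode('place3dTexture', name='depthPlace1')
cmds.setAttr(depthProj + '.projType', 1)   # planar projection
cmds.connectAttr(ramp + '.outColor', depthProj + '.image')
cmds.connectAttr(place3d + '.worldInverseMatrix[0]', depthProj + '.placementMatrix')

# Parent the placement under the camera so it stays camera-oriented, then rotate/scale it
# so the ramp's gradient runs along the view axis and covers the whole head.
# These transform values are guesses; adjust them to your scene scale.
place3d = cmds.parent(place3d, cam)[0]
cmds.setAttr(place3d + '.rotate', 90, 0, 0, type='double3')
cmds.setAttr(place3d + '.translate', 0, 0, -50, type='double3')
cmds.setAttr(place3d + '.scale', 30, 30, 50, type='double3')

# Bake ("Convert to File Texture") onto the retopo mesh -> camera-space depth mask
cmds.convertSolidTx(depthProj, mesh,
                    resolutionX=2048, resolutionY=2048,
                    fileFormat='tif', fileImageName='headDepthMask_cam1')

# --- Step 4: project the photo from the camera and bake it to the same UVs ---
photoFile = cmds.shadingNode('file', asTexture=True, name='photoFile1')
cmds.setAttr(photoFile + '.fileTextureName', photo, type='string')

colorProj = cmds.shadingNode('projection', asUtility=True, name='colorProj1')
cmds.setAttr(colorProj + '.projType', 8)   # perspective (camera) projection
cmds.connectAttr(photoFile + '.outColor', colorProj + '.image')
camShape = cmds.listRelatives(cam, shapes=True)[0]
cmds.connectAttr(camShape + '.message', colorProj + '.linkedCamera')

cmds.convertSolidTx(colorProj, mesh,
                    resolutionX=2048, resolutionY=2048,
                    fileFormat='tif', fileImageName='headColor_cam1')
```

Then the baked depth mask just becomes the alpha for that camera's baked color map when you layer all the cameras together.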
Hopefully this makes sense :D