Crossing the uncanny valley WIP

  02 February 2012
You might have difficulty doing that, as your cameras won't match up 100% to your mesh like they do now.

If you're going to go all out with the world-space normal map, give shooting with polarizers a try. See here:

If you stay still enough, you can take both pictures (with spec and without), and subtract the one without spec from the one with spec to get a true spec map. If you capture a world-space normal map for both the diffuse and spec passes, you can then use the hybrid normal skin shader by Debevec et al.
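The subtraction step can be sketched in a few lines of NumPy. This is just an illustration of the idea, not a production pipeline: the arrays stand in for two aligned photographs (one cross-polarized, diffuse only; one parallel-polarized, diffuse plus specular), which in practice you would load with an image library.

```python
import numpy as np

# Stand-in pixel values for two aligned exposures of the same subject.
with_spec = np.array([[0.4, 0.9],
                      [0.5, 0.3]])     # parallel-polarized: diffuse + specular
without_spec = np.array([[0.4, 0.6],
                         [0.2, 0.3]])  # cross-polarized: diffuse only

# The specular contribution is the difference between the two shots.
# Clip at zero so sensor noise or slight misalignment can't go negative.
spec_map = np.clip(with_spec - without_spec, 0.0, 1.0)
```

If the subject moved between shots, the difference picks up registration error as well as spec, which is why staying still (or a multi-camera rig) matters.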
  02 February 2012
Yep, I know about the ICT lab and their work.
  02 February 2012
I've been thinking about it, and I believe I've come up with something that might help you with the texturing.

Take your uniformly-lit photos as normal. Run them through Photofly and obtain the new mesh. Bring the new model and cameras into Maya, and parent the cameras to the new model. Align the new model to the old one; if you want 100% accuracy you can do this with the ICP (iterative closest point) algorithm, which has been implemented all over the place, otherwise do it by hand. This aligns the cameras to the old mesh as well.

Next, create a fake zDepth shader: make a grayscale ramp set to projection, and scale the placement node so that it covers the entire head and is oriented towards the camera. Bake this texture onto the retopologized mesh. Then project the photo from the camera onto the retopologized mesh and bake that texture too. This gives you a colour texture as well as a camera-space depth map texture that you can control: change the position of the placement node if you want more of this camera's texture to show through (white) or less (black). You can use this depth map as an alpha in your texture, so that areas that are far away, or not visible from the camera, are masked out. Doing this for all photos should result in a nicely blended, uniform texture.
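The final blending step above could be sketched like this (a minimal NumPy illustration, with made-up function and variable names, not an actual Maya workflow): each camera contributes a baked colour texture plus its baked depth/visibility map, and the depth maps act as per-texel weights so every texel is dominated by the camera that sees it best.

```python
import numpy as np

def blend_projections(textures, depth_masks, eps=1e-6):
    """Blend per-camera baked textures using their depth masks as weights.

    textures:    list of HxWx3 colour arrays, one per camera.
    depth_masks: list of HxW arrays (white = favour this camera, black = mask out).
    """
    textures = np.asarray(textures, dtype=float)             # (N, H, W, 3)
    weights = np.asarray(depth_masks, dtype=float)[..., None]  # (N, H, W, 1)
    total = weights.sum(axis=0)
    # Normalised weighted average; eps guards texels no camera sees.
    return (textures * weights).sum(axis=0) / np.maximum(total, eps)

# Two 1x1 textures: camera A sees red, camera B sees blue, equal weights.
blended = blend_projections(
    [[[[1.0, 0.0, 0.0]]], [[[0.0, 0.0, 1.0]]]],
    [[[1.0]], [[1.0]]],
)
```

With equal weights the texel averages to an even mix; setting one mask to black would let the other camera's texture show through entirely, which is exactly the control the placement node gives you.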

Hopefully this makes sense

Last edited by NextDesign : 02 February 2012 at 11:38 PM.
  02 February 2012
Good thinking!

Unfortunately Photofly can get very confused by this kind of shooting, since the light source would change between shots (I won't be using a multi-cam setup).

So for texturing and aligning photos, I'm going to use PFTrack to create a calibrated camera setup with image planes and locators as placeholders, then align the model by hand (but I'm going to dig into the ICP thing, because I really don't know how to do that kind of alignment in Maya). Also, thanks for the zDepth tip, I'm definitely trying that out.
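For what it's worth, the core of ICP is a rigid-alignment solve that is easy to sketch outside Maya. Below is a minimal single-step version (an illustration, not a Maya feature): given matched point pairs between the two meshes, the Kabsch/SVD method finds the rotation and translation that best align source to target. Full ICP just repeats this after re-matching each source point to its closest target point.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) minimising ||R @ src_i + t - dst_i|| over
    corresponding Nx3 point sets, via the Kabsch/SVD method."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known 90-degree rotation about Z plus a translation.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + t_true
R_est, t_est = best_fit_transform(src, dst)
```

The "closest point" matching is what makes ICP sensitive to the initial pose, which is why a rough hand alignment first is the usual workflow.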

  02 February 2012
Interesting. Is that from a tutorial, or something you've made? I'd be interested in taking a look at that. I'm guessing what you're doing is aligning the cameras to hand-placed survey markers?
  02 February 2012
Yeah, it's something I've been working on (before I came across Photofly). It's done via the motion capture node. The key is to feed the tracker plenty of information about the camera sensor, focal length, lens distortion, etc., and then painfully place the survey points by hand. Basically it's the same process I want to use later for the motion capture trials, only with still frames.
  02 February 2012
Have you thought about using FaceWare?

I've played with it, and it's very nice.
  02 February 2012
@NextDesign: Nope, I want to do it by hand.

Slightly off-topic, but still 'photo-scanning' related.




another one:

Last edited by kybel : 03 March 2012 at 04:15 PM.
  03 March 2012
Back on topic:

A quick SSS render to check the thickness of the tissue.

Last edited by kybel : 03 March 2012 at 12:48 AM.
  03 March 2012
It's looking very nice! With a quick glance at your render, it would seem that it could scatter a bit more, but it might be different when it's actually connected to the head.
  03 March 2012

Modeling done!

Next up: create UVs and extract displacement information from the scan data.

Last edited by kybel : 03 March 2012 at 04:53 PM.
  03 March 2012
UVs are done, and I'm pretty happy with the result: one piece, one seam, only minor texture stretching.

  03 March 2012
Still one of my favorite WIP threads here! Can't wait to see the final result...keep it up.

  03 March 2012
Stop! Displace time!
Which comes with a few problems. The main one: I don't have as much scan data as I would like.

The top of the head (for me quite irrelevant) and especially the neck area are missing, so I needed to improvise.

I downloaded the brilliant royalty-free head scan by Lee Perry-Smith (all credit and thanks go to Lee) and van Goghed his ears.

Then, with the help of Maya and TopoGun, I fitted my UV'd mesh onto the Infinite scan, resulting in a perfect match.

At this stage I could grab the displacement map right from TopoGun, but I needed some control over the data, so I decided to go for ZBrush. This is how it looks after the projection.

The result? All the fine scanned & sculpted details fitted into the desired UV space, and thus compatible with my model.

There's still a lot of work to be done in ZBrush & Photoshop, since I don't want to just copy/paste the whole thing onto my model. So at this point I'm just happy that the workflow is tested and working fine.

Last edited by kybel : 03 March 2012 at 08:53 PM.
  03 March 2012
Nice workaround... I'm curious whether the heavy workload of all those steps pays off in the end.
