Crossing the uncanny valley WIP


#21

Well, the greatest benefit of Photofly is that it's free and gives you the results back super quickly thanks to cloud computing. The downside is that you have to send your data to external servers (I smell some government data-mining conspiracy, jk :slight_smile: ).

As an alternative I tried Agisoft PhotoScan, but couldn't get results as satisfying; it's also pretty time- and hardware-heavy.

I was also checking out ScannerKiller, but I think it also requires a projector input and some serious calibration. With Photofly you can just grab your camera, go anywhere and scan anything.

And I'm definitely looking forward to this: http://vimeo.com/29888889


#22

How did you do this?
Great work, by the way.
Simon


#23

Hi Simon, and thanks for the compliment.

It's done via a technique I learned from Ryan Kingslien:


I took the raw scan into ZBrush, smoothed it with the Smooth Subdiv brush and applied the texture to it.

Then I converted the texture into an intensity mask.

From this mask you can then extract the fine skin details with the Inflate and Smooth sliders in the Deformation panel.
It's important to do this on the highest possible subdivision level, and also not to overdo this step, otherwise the result looks sketchy and unnatural.
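Just to illustrate the idea outside of ZBrush (this is only a rough numpy sketch of the principle, not what ZBrush actually does internally): the mask-driven inflate boils down to pushing each vertex along its normal by an amount proportional to the texture intensity at that vertex.

```python
import numpy as np

def inflate_by_mask(verts, normals, mask, strength=0.05):
    """Push each vertex along its normal, weighted by a per-vertex mask.

    verts, normals : (N, 3) arrays
    mask           : (N,) intensities in 0..1 sampled from the texture
    Intensities above 0.5 inflate, below 0.5 deflate, which roughly
    mirrors a centred intensity mask plus the Inflate slider.
    """
    offset = (mask - 0.5)[:, None] * strength
    return verts + normals * offset

# toy usage: four vertices, all normals pointing up
verts = np.zeros((4, 3))
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
mask = np.array([0.2, 0.5, 0.8, 1.0])
print(inflate_by_mask(verts, normals, mask))
```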

Hope this was somewhat helpful. Cheers :wink:


#24

Really interesting tool!
I read something about this software a few months ago, but I never tried it.
Did you use ZBrush to retopologize the mesh?
Do you think this method is quicker than the usual one (reference images and poly modelling)?
Thanks, and keep this interesting project going!


#25

Ciao Marco.

I used Topogun for retopology, and yes, I think it's definitely quicker and cleaner. Since you know the model is spot on, you can focus purely on clean, proper topology.

But this is, IMHO, also a great approach when it comes to poly modeling.
You can still delete the scanned mesh (or use it just as a guide) and work with the cameras that are already set up and aligned; all you need to do is plug in some image planes. And you can have as many cameras as you could possibly need, with perspective and everything. I personally can't imagine a better reference when it comes to facial modeling.

Much, much more accurate than the classical front, side and 45° views, and definitely more accurate than plugging image planes into ortho views.
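For anyone wondering what "plugging in image planes" looks like in practice, here's a minimal maya.cmds sketch (the camera names and file paths are placeholders for whatever your scene actually contains):

```python
import maya.cmds as cmds

# placeholder names: substitute the cameras exported from Photofly
# and the matching photos from your shoot
photos = {
    'photofly_cam_01': '/path/to/shot_01.jpg',
    'photofly_cam_02': '/path/to/shot_02.jpg',
}

for cam, image in photos.items():
    # attach the photo as an image plane seen through that camera
    cmds.imagePlane(camera=cam, fileName=image)
```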


Since this stage I've been working on the project the classical way: poly modeling the mouth, nose and mainly the eyes. Having so many views to check the model against is really a treat. :slight_smile:


#26

Turns out I'm gonna need more cameras. Much MOAR! (34 cameras, to be precise.)

It may seem like overkill, but since the ear is such a complex shape (and I want it to be very accurate to the reference), every single one of the cameras is very helpful. The more angles you have the better, because each one reveals a new subtle curvature, thickness, form, etc.

So far I'm just doing a 'dirty-ish' model; my main concern is to get the shape right, not to worry about the topology. Once I'm happy with it, I'll give it some love and care in ZBrush and then retopologize it in Topogun.


#27

ScannerKiller has been used extensively on Hollywood blockbusters. XYZ RGB uses it as their own internal scanning software. It does require calibration, but it's not very difficult. A projector isn't actually needed, except when you're scanning things without any obvious texture, like a mannequin torso. Otherwise it will pick up the key points from the images just like the other solutions.

Also, give this a try if you can. Using the normal map you extracted via CrazyBump, you can use a MoGraph Displacer deformer in Cinema 4D to displace the geometry via the normals in the image. This should give much nicer detail than what ZBrush is giving you, as I believe ZBrush is just using the grayscale intensity to displace the mesh.
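Roughly, the difference is this (just a numpy sketch of the idea, not Cinema 4D's actual code, and it assumes you already have per-vertex tangent frames and the normal-map colour sampled at each vertex's UV):

```python
import numpy as np

def displace_by_normal_map(verts, tangents, bitangents, normals,
                           sampled_rgb, strength=0.02):
    """Offset vertices along the direction encoded in a tangent-space
    normal map, instead of along the surface normal by a gray value.

    verts, tangents, bitangents, normals : (N, 3) arrays
    sampled_rgb : (N, 3) normal-map colours in 0..1 at each vertex's UV
    """
    n_ts = sampled_rgb * 2.0 - 1.0                 # decode 0..1 -> -1..1
    direction = (tangents   * n_ts[:, 0:1] +       # rebuild the direction
                 bitangents * n_ts[:, 1:2] +       # in the vertex's own
                 normals    * n_ts[:, 2:3])        # tangent frame
    return verts + direction * strength
```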


#28

Thanks for the displacement tip. I'm definitely gonna try that out :slight_smile:


#29

No problem. :slight_smile: Just make sure that you turn down the shape recognition in CrazyBump, as that might affect the result; too much will distort the mesh. You already have the low-frequency detail from the scan; what you want in the normal map is the high-frequency information from the images.
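Put another way, the high-frequency part is whatever remains after you remove a blurred copy of the photo. A tiny numpy/scipy sketch of that separation (the sigma and the mid-gray re-centring are arbitrary choices here, not anything CrazyBump exposes):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(photo, sigma=8.0):
    """Keep only the fine detail (pores, small wrinkles) of a grayscale
    photo by subtracting its blurred, low-frequency version."""
    low = gaussian_filter(photo, sigma)
    return np.clip(photo - low + 0.5, 0.0, 1.0)   # re-centre on mid-gray

# toy usage with random data standing in for a photo in 0..1
detail = high_pass(np.random.rand(256, 256))
```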

Also, if you haven't done so already, it would probably be best to project the images onto your UVed mesh now. The automatic textures generated by Photofly aren't the best quality or resolution. With a higher resolution texture, you'll be able to get much more accurate and dense normals.


#30

I'm planning to shoot the textures separately, since they have to be lit uniformly. I'm also planning to light and compose world-space normal maps… just to challenge myself :slight_smile:


#31

You might have difficulty doing that, as your cameras won't match up 100% to your mesh like they do now.

If you’re going to go all out with the world-space normal map, give shooting with polarizers a try. See here: http://onsetvfxtips.blogspot.com/2009/06/cross-polarization-photography-and-skin.html

If you stay still enough, you can take both pictures (with spec and without) and subtract the spec-free, cross-polarized one from the one with spec to get a true spec map. If you get a world-space normal map for both the diffuse and spec passes, you can then use the hybrid-normal skin shader by Debevec et al.: http://www.cmlab.csie.ntu.edu.tw/~liubiti/HybridNormal/index.html
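The subtraction itself is trivial; a minimal sketch, assuming the two exposures are linear and pixel-aligned (the subject really did stay still):

```python
import numpy as np

def extract_spec(parallel_pol, cross_pol):
    """Cross-polarization trick: the specular term is the difference
    between the parallel-polarized photo (diffuse + spec) and the
    cross-polarized photo (diffuse only). Inputs are aligned float
    arrays in 0..1."""
    return np.clip(parallel_pol - cross_pol, 0.0, 1.0)
```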


#32

Yep, I know about the ICT lab and their work :slight_smile:


#33

I’ve been thinking about it, and I believe I’ve come up with something that might help you with the texturing.

1. Take your uniformly-lit photos as normal. Run them through Photofly and obtain the new mesh.
2. Bring the new model and cameras into Maya, and parent the cameras to the new model.
3. Align the new model to the old one. You can do this with the ICP (iterative closest point) algorithm, which has been implemented all over the place, if you want 100% accuracy (see the sketch below); otherwise do it by hand. This will align the cameras to the old mesh as well.
4. Create a fake zDepth shader by making a grayscale ramp set to projection, and scale the placement node so that it covers the entire head and is oriented towards the camera. Bake this texture onto the retopologized mesh.
5. Then project the texture from the camera onto the retopologized mesh and bake that texture too.

This will give you a texture, as well as a camera-space depth map texture that you can control: change the position of the placement node if you want more of this camera's texture to show through (white) or not (black). You can use this depth map as an alpha in your texture, so that things that are far away, or not visible from the camera, are masked out. Doing this for all photos should result in a nicely blended, uniform texture.
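For the ICP step, here is a very small rigid-ICP sketch in numpy/scipy (nearest-neighbour pairing plus a Kabsch/SVD fit each iteration; real implementations also reject bad pairs, subsample, etc.):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Align `source` points (new scan) to `target` points (old scan).
    Both are (N, 3) / (M, 3) arrays; returns the aligned source plus
    the accumulated rotation and translation."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # 1. pair every source point with its closest target point
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. best rigid transform between the pairs (Kabsch / SVD)
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. apply and accumulate the transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total
```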

Hopefully this makes sense :D

#34

Good thinking :slight_smile:

Unfortunately Photofly can get very confused by this kind of shooting, since the light source would change (I won't be using a multi-cam setup).

So for texturing and aligning the photos, I'm gonna use PFTrack to create a calibrated camera setup with image planes and locators as placeholders, then align the model by hand (but I'm gonna dig into the ICP thingy, because I really don't know how to do such an alignment in Maya). Also, thanks for the zDepth tip, I'm definitely trying that out :slight_smile:



#35

Interesting. Is that from a tutorial, or something you’ve made? I’d be interested in taking a look at that. I’m guessing what you’re doing is aligning the cameras to hand-placed survey markers?


#36

Yeah, it's something I've been working on (before I came across Photofly). It's done via the motion capture node. The key is to feed the tracker plenty of information about the camera sensor, focal length, lens distortion, etc., and then painstakingly place the survey points by hand. It's basically the same process I want to do later on for the motion capture trials, only with still frames.


#37

Have you thought about using FaceWare? http://image-metrics.com/Faceware-Software/Overview

I’ve played with it, and it’s very nice.


#38

@NextDesign nope, I want to do it by hand.

Slightly off topic, but still 'photo-scanning' related.

source:

scan:

result:

another one:


#39

Back on topic:

Quick SSS render to check the thickness of the tissue.


#40

It's looking very nice! From a quick glance at your render, it seems it could scatter a bit more, but that might change once it's actually connected to the head.