Critique my workflow? (Digital Doubles)


I hope this is the right place to post this. If not, I apologize.

Anyway, I’ve been making digital doubles, characters, and creatures a certain way for a game on APKNite for a while now, but it occurred to me today that my workflow might not be optimal, or might be out of date. It’s always good to check yourself against your peers every once in a while. Thing is, I usually work alone as a contractor, so I don’t have anyone in person to bounce ideas off of. So here’s my workflow, with some pictures mixed in. Let me know if you think any link in the chain is weak or might be reducing the accuracy and quality of the final piece.

  1. Capture. I only have one camera, which I know isn’t ideal for capturing something as squishy and prone to change as a human face. But I have access to a human-sized powered turntable and studio lighting, so I’m able to take 80-100 pictures in about 2-3 minutes (sketch 1 after the list breaks down the spacing). I process the pictures with RealityCapture because it does the best job in my experience. Obviously, if I’m making a creature, I just sculpt this part by hand.

  2. Clean up. The anatomical volumes and forms are usually accurate, but the surface is usually pretty messy due to the limitation of having just one camera. This current WIP turned out even messier than usual. So I smooth the surface, trim off whatever I don’t need, and re-sculpt some of the secondary details (sketch 2 below shows the smoothing idea in script form).

  3. Retopology. This is pretty straightforward. I use Wrap3 to shrink-wrap one of its base heads onto my sculpt (sketch 3 below shows the core of the idea).

  4. UV Mapping. Next, I take it into Maya to model in a mouth cavity and eye sockets, and then unwrap the UVs (sketch 4 below shows the unwrap commands).

  5. Fine Details. Because I only have the one camera, I can’t capture the pores, wrinkles, and microsurface, so I use high-res displacement maps from SurfaceMimic or Texturing.xyz. I don’t like polypainting the detail in ZBrush because polypaint resolution depends on the density of the mesh, not on your UV map, and I don’t get good results; the fidelity isn’t enough to capture individual pores, in my experience. So I use Mudbox to stencil the maps onto my low-res, UV-mapped model. I don’t know the bit depth of the textures from SurfaceMimic, but I export them as 16-bit just in case. I load that texture into ZBrush as an alpha, mask the model by alpha, then use the inflate brush to bring the details out (sketch 5 below shows the math of that inflate step).

  6. Baking the colour. Next, I bake the original colour map in ZBrush onto the ORIGINAL scan as polypaint. Then I transfer that polypaint onto my clean, organized mesh using ProjectAll (sketch 6 below shows the projection idea). I use a morph target so the projection doesn’t mess up the sculpted details of the model. Then I create a new 8K texture from that polypaint.

  7. Baking Displacement and Normals. Nothing fancy here. I use the Multi Map Exporter to bake an 8K normal map and displacement map. The displacement is 32-bit float (sketch 7 below shows why the bit depth matters).

  8. Rendering. Lastly, I render it in Arnold, but I’m planning to switch to Octane soon. I guess that’s irrelevant here.
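
Sketch 1 (capture spacing). Nothing clever, just the arithmetic behind the capture pass, using 90 shots in 150 seconds as midpoints of my usual 80-100 shots in 2-3 minutes:

```python
# Capture spacing for a single-camera turntable pass.
# 90 shots / 150 s are just midpoints of the figures above.
shots = 90
pass_seconds = 150

angle_step = 360.0 / shots            # degrees between neighbouring shots
shot_interval = pass_seconds / shots  # seconds to fire and settle per shot

print(f"{angle_step:.1f} deg/shot, {shot_interval:.2f} s/shot")
# -> 4.0 deg/shot, 1.67 s/shot. Photogrammetry wants heavy overlap
#    between neighbouring shots, and ~4 degrees gives plenty of it.
```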
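
Sketch 2 (clean-up smoothing). I do the smoothing by hand in ZBrush, but the underlying operation is basically Laplacian smoothing. Here’s a minimal batch version with the trimesh Python library; the file names are placeholders:

```python
# Minimal Laplacian smoothing of a noisy scan with trimesh
# (pip install trimesh). Placeholder file names; not my actual
# ZBrush pass, just the same operation in script form.
import trimesh
from trimesh.smoothing import filter_laplacian

mesh = trimesh.load("raw_scan.obj", force="mesh")
filter_laplacian(mesh, lamb=0.5, iterations=10)  # smooths in place
mesh.export("scan_smoothed.obj")
```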
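
Sketch 3 (shrink-wrap). The core of what the Wrap3 step does at the very end: snap every vertex of the clean base head to the nearest point on the raw scan. Wrap3 itself does proper non-rigid registration with control points first, so treat this as the final projection only, with the two meshes assumed roughly aligned already:

```python
# Naive shrink-wrap: move each base-head vertex to the closest point
# on the scan surface. Assumes rough alignment; placeholder file names.
import trimesh

scan = trimesh.load("raw_scan.obj", force="mesh")
base = trimesh.load("base_head.obj", force="mesh")

# closest_point returns (closest points, distances, triangle ids)
closest, distance, tri_id = trimesh.proximity.closest_point(scan, base.vertices)
base.vertices = closest
base.export("wrapped_head.obj")
```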
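
Sketch 4 (UVs). The unwrap itself as a Maya Script Editor snippet. I’m going from memory here, so treat the exact calls as assumptions; it needs the bundled Unfold3D plugin, and "headMesh" is a placeholder for your object. In practice I cut the seams by hand first:

```python
# Rough Maya unwrap pass: project, unfold, pack. Run from the Script
# Editor; "headMesh" is a placeholder object name.
import maya.cmds as cmds

mesh = "headMesh"
cmds.polyAutoProjection(mesh + ".f[*]")  # quick starting projection
cmds.u3dUnfold(mesh + ".map[*]")         # Unfold3D relax
cmds.u3dLayout(mesh + ".map[*]")         # pack shells into 0-1 space
```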
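
Sketch 5 (inflate by alpha). What the mask-by-alpha plus inflate step amounts to mathematically: sample the displacement tile at each vertex’s UVs and push the vertex out along its normal. You’d need the mesh subdivided to a few million points before pores actually read; file names are placeholders and the map is assumed to be a single-channel 16-bit image:

```python
# Displace vertices along their normals by a height map sampled at
# each vertex's UVs. Assumes a single-channel image and a mesh dense
# enough to carry pore detail; placeholder file names throughout.
import numpy as np
import trimesh
from PIL import Image

mesh = trimesh.load("head_subdivided.obj", force="mesh", process=False)
disp = np.asarray(Image.open("pores_disp_16bit.png"), dtype=np.float32)
disp /= disp.max()                        # normalise to 0-1
h, w = disp.shape

uv = mesh.visual.uv                       # (n, 2); the mesh must carry UVs
px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
py = np.clip(((1.0 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)

strength = 0.05                           # world units at full white, eyeballed
offset = (disp[py, px] - 0.5) * strength  # 0.5 is the neutral mid-grey
mesh.vertices = mesh.vertices + mesh.vertex_normals * offset[:, None]
mesh.export("head_detailed.obj")
```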
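
Sketch 6 (colour projection). The idea behind the ProjectAll transfer: for each clean-mesh vertex, find the nearest point on the scan and blend the scan’s vertex colours with barycentric weights. ZBrush’s version is smarter (normal-aware, with distance limits), so this is just the concept:

```python
# Transfer per-vertex colour from the raw scan to the clean mesh by
# nearest-surface sampling. Conceptual stand-in for ProjectAll;
# placeholder file names, meshes assumed roughly aligned.
import numpy as np
import trimesh

scan = trimesh.load("scan_with_colour.ply", force="mesh", process=False)
clean = trimesh.load("wrapped_head.obj", force="mesh", process=False)

closest, dist, tri = trimesh.proximity.closest_point(scan, clean.vertices)

# barycentric weights of each closest point inside its scan triangle
bary = trimesh.triangles.points_to_barycentric(scan.triangles[tri], closest)

# blend the three corner colours of each triangle with those weights
corners = scan.visual.vertex_colors[scan.faces[tri]][..., :3].astype(float)
cols = np.einsum("ij,ijk->ik", bary, corners).astype(np.uint8)

clean.visual = trimesh.visual.ColorVisuals(mesh=clean, vertex_colors=cols)
clean.export("head_with_colour.ply")
```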
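
Sketch 7 (bit depth). The reason the displacement export is 32-bit float, in numbers: the smallest height step an integer map can represent over a given range. The 20 mm range is just my guess at a whole head’s displacement span:

```python
# Quantisation step of integer displacement maps over an assumed
# 20 mm total displacement range.
range_mm = 20.0
for bits in (8, 16):
    step_um = range_mm / (2 ** bits - 1) * 1000.0
    print(f"{bits}-bit: {step_um:.2f} microns per level")
# -> 8-bit : ~78 microns/level (visible stepping in skin)
#    16-bit: ~0.31 microns/level (fine for pores)
# 32-bit float stores real distances, so there's no fixed ladder at all.
```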

My question is this: is there any link in the chain where I’m losing quality and accuracy in the details? I’m satisfied with how my models look, but I know the skin material won’t hold up in an ultra-closeup, like a shot of just the eye. That’s because the superfine details from SurfaceMimic aren’t based on my actor, so the fine details of his colour map don’t line up with the sculpted pores. That’s a limitation of the way I scan them in the first place, with a single camera. Is there a way I could capture high-res displacement maps with my single camera at the same time I capture the colour?

Thanks in advance for any tips you might have! And feel free to critique the work in general if something doesn’t look right to you!