New AI Can Make Jointed, Textured 3D Model Of A Person From A Few Seconds Of 2D Video

  5 Days Ago

This may be wonderful for indie game developers in particular:

http://www.sciencemag.org/news/2018...w-seconds-video

https://youtu.be/nPOawky2eNk


Originally Posted by me:
The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
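The silhouette step in the first stage is the easiest piece to picture. The paper uses learned segmentation, but a crude classical stand-in, assuming a locked-off camera and a made-up clip name, might look like this in Python with OpenCV:

```python
import cv2

# Hypothetical input: a few seconds of someone turning in front of a static camera.
cap = cv2.VideoCapture("person_turning.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground mask: 255 = person, 127 = shadow
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    silhouettes.append(mask)  # one binary silhouette per frame
cap.release()
print(f"Extracted {len(silhouettes)} silhouettes")
```

The real system then feeds each silhouette to a network that regresses body shape and joint locations; the subtractor above is only meant to show what "a silhouette per frame" means in practice.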
 
  5 Days Ago
Big studios also use 3D photogrammetry, so this should streamline the process for all artists.
Imagine if it could create perfectly retopologized models in 10 years. And then, even from a concept design? Who knows.
 
  3 Days Ago
Things are going so fast that it probably won't be 10 years.

More like 3 or 4 years, I think.
 
  3 Days Ago
There's already realtime 3D capture that produces high-quality results, though it isn't a rigged mesh; it captures a new mesh every frame. For faces they can get topology mapped on automatically, but it would be a different matter for a full person because of differences in clothing.
__________________
The Z-Axis
 
  1 Day Ago
It's so frustrating watching this stuff develop and knowing I won't be getting my hands on it anytime soon. I recently popped into the Poser forums and read part of a thread asking what users would do if they were in charge of the program's development... It was all predictably incremental stuff... focus on a new rig and leave the program alone, forget about providing characters and make the rigging easier... stuff like that.
Machine learning would be great for Poser as most of its users still seem to prefer making stills to animating or exporting assets... I said just a few weeks ago that machine learning would soon allow mo-cap from video, and that's almost the reverse of what this software is doing.  
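For what it's worth, per-frame joint tracking from plain video is already what off-the-shelf pose estimators do. A minimal sketch using Google's MediaPipe (my library choice for illustration, nothing tied to this paper, and the clip name is made up):

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("actor.mp4")  # hypothetical clip to "mo-cap"

tracked_frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 landmarks with normalized x/y and a rough relative depth z.
        joints = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
        tracked_frames.append(joints)
cap.release()
pose.close()
print(f"Tracked {len(tracked_frames)} frames of skeleton data")
```

Retargeting those joints onto a Poser rig is the unsolved part; the tracking itself is commodity.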
I wonder if anyone is using machine learning for materials? Poser, not being PBR, has traditionally made creating realistic metal materials difficult. I'm waiting for machine learning to be trained on the appearance of various metals in different lighting conditions, so that it could then modify the shader nodes to produce the closest match to what it determines "gold" should look like in your scene...
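The "what does gold look like" half of that is actually well understood without any learning. In PBR terms a metal is mostly characterized by its measured reflectance at normal incidence (F0); for gold that is roughly linear RGB (1.00, 0.77, 0.34), and Schlick's approximation gives the view-dependent falloff. A minimal sketch of just that piece:

```python
GOLD_F0 = (1.00, 0.77, 0.34)  # gold's normal-incidence reflectance, linear RGB (approx.)

def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos_theta)^5."""
    k = (1.0 - cos_theta) ** 5
    return tuple(c + (1.0 - c) * k for c in f0)

# Viewed head-on, the surface reads as gold-colored...
print(schlick_fresnel(GOLD_F0, 1.0))  # -> (1.0, 0.77, 0.34)
# ...while at a true grazing angle every metal reflects white.
print(schlick_fresnel(GOLD_F0, 0.0))  # -> (1.0, 1.0, 1.0), up to float rounding
```

Where learning would earn its keep is the inverse problem described above: given a non-PBR node graph and a target appearance, find the node settings that best approximate it.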
Eventually I see machine learning routines being combined into a "final pass" tool... For example, I run video of an older game like Fable through the program and it identifies various things in my scene, such as a person running, a burning fire, a waterfall... It could then replace or alter those elements to make them look more natural... Maybe it looks for poke-through/self-intersection in the clothing... Maybe it can replace static modeled hair with synthesized hair that has the appearance of buoyancy and inertia. Looping billboard flames could be given a more random, more volumetric appearance. It could even be trained to recognize the distribution of body mass and apply soft-body effects as a post effect.

I had the same idea with deepfakes... Instead of replacing an actor with an actor, how close to realism could we get replacing a 15-year-old DAZ model with an actor? I've said for years that we would eventually stop "brute-forcing" everything and begin synthesizing entire images/sequences from very minimal input, but only now do I really see things finally moving in that direction.
There are so many little things I expect we'll take for granted in the near future... Like color-correcting a render in post... I want to be able to bring the corrected image back into the program that produced it and use AI to reverse-engineer the lighting and material changes so that the rendered image exactly matches the post-corrected image.
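The simplest version of that round trip doesn't even need AI. If the grade was a global color correction, it can be recovered as a best-fit affine color transform between the two images with ordinary least squares. A sketch with NumPy, using a synthetic "grade" so it runs standalone:

```python
import numpy as np

def fit_color_transform(render, graded):
    """Least-squares affine map (3x3 matrix plus offset) taking render -> graded.

    Both inputs are (H, W, 3) float arrays.
    """
    src = render.reshape(-1, 3)
    dst = graded.reshape(-1, 3)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous pixel values
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (4, 3) transform
    return M

def apply_color_transform(image, M):
    px = image.reshape(-1, 3)
    px_h = np.hstack([px, np.ones((len(px), 1))])
    return (px_h @ M).reshape(image.shape)

# Synthetic check: invent a global grade, recover it, re-apply it.
rng = np.random.default_rng(0)
render = rng.random((64, 64, 3))
graded = render * 0.9 + 0.05                   # stand-in for the artist's correction
M = fit_color_transform(render, graded)
print(np.abs(apply_color_transform(render, M) - graded).max())  # ~1e-15
```

Anything non-global (masked regions, per-channel curves) is where the learned reverse-engineering would have to take over, and pushing the recovered grade back into lights and materials is harder still.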
It seems like we have gotten quite good at teaching machines to recognize a 3D form from its silhouette. I wonder how far we've come in teaching one to determine contours from highlight/shadow changes? Or if there is one that recognizes lines of symmetry in faces, cars, etc.? Single-image photogrammetry would be great!
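That last one is surprisingly tractable for frontal images, no learning required: mirror the picture about each candidate column and keep the column where the two halves disagree least. A toy sketch in NumPy (the test image is made up):

```python
import numpy as np

def symmetry_axis(gray):
    """Return the column about which a grayscale image is most mirror-symmetric."""
    _, w = gray.shape
    best_col, best_err = w // 2, np.inf
    for c in range(w // 4, 3 * w // 4):       # search the central band only
        half = min(c, w - c)                  # widest strip that fits both sides
        left = gray[:, c - half:c].astype(float)
        right = gray[:, c:c + half][:, ::-1].astype(float)  # mirrored right strip
        err = np.mean((left - right) ** 2)    # disagreement between the halves
        if err < best_err:
            best_col, best_err = c, err
    return best_col

# Toy test: a bright square centered on column 40 of an 80-pixel-wide image.
img = np.zeros((60, 80))
img[20:40, 30:50] = 1.0
print(symmetry_axis(img))  # -> 40
```

A face or a car would of course need the search to be robust to lighting and pose, which is where the learned version comes in, but the underlying cue really is that simple.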
 