New AI Can Make Jointed, Textured 3D Model Of A Person From A Few Seconds Of 2D Video

  1 Week Ago

This may be wonderful for indie game developers in particular:

http://www.sciencemag.org/news/2018...w-seconds-video

https://youtu.be/nPOawky2eNk


Originally Posted by me: The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
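
In rough pseudocode, the pipeline described above might look something like this. Every function here is a hypothetical stub made up to show the structure; none of it is the researchers' actual code:

[code]
# Structural sketch of the three-stage pipeline from the article.
# All helpers are hypothetical stubs, not a real API.

def segment_person(frame):
    """Stage 1a: silhouette separating person from background (stub)."""
    return frame  # a real system would return a binary mask

def estimate_body(silhouette):
    """Stage 1b: rough ML estimate of 3D shape and joint locations (stub)."""
    return "shape", "joints"

def to_t_pose(shape, joints):
    """Stage 2a: unpose one per-frame estimate into a canonical T-pose (stub)."""
    return (shape, joints)

def fuse_estimates(unposed):
    """Stage 2b: combine all T-posed estimates into one refined model (stub)."""
    return unposed

def bake_texture(model, frames):
    """Stage 3: project recorded hair, clothing, and skin color onto the model (stub)."""
    return "texture"

def reconstruct_person(video_frames):
    per_frame = [estimate_body(segment_person(f)) for f in video_frames]
    unposed = [to_t_pose(shape, joints) for shape, joints in per_frame]
    model = fuse_estimates(unposed)
    return model, bake_texture(model, video_frames)
[/code]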
 
  1 Week Ago
Big studios also use 3D photogrammetry, so this should streamline the process for all artists.
Imagine it creating perfectly retopologized models in 10 years. And then generating them straight from concept designs? Who knows.
 
  1 Week Ago
Things are going so fast that it probably won't be 10 years.

More like 3 or 4 years, I think.
 
  1 Week Ago
There's already real-time 3D capture that produces high-quality results, though it isn't a rigged mesh; instead it captures a new mesh every frame. For faces, topology can be mapped on automatically, but it would be a different matter for a full person because of differences in clothing.
__________________
The Z-Axis
 
  1 Week Ago
It's so frustrating watching this stuff develop and knowing I won't be getting my hands on it anytime soon. I recently popped into the Poser forums and read part of a thread asking what users would do if they were in charge of the program's development... It was all predictably incremental stuff: focus on a new rig and leave the program alone, forget about providing characters and make the rigging easier... things like that.
Machine learning would be great for Poser, as most of its users still seem to prefer making stills to animating or exporting assets... I said just a few weeks ago that machine learning would soon allow mo-cap from video, and that's almost the reverse of what this software is doing.
I wonder if anyone is using machine learning for materials? Poser, not being PBR, has traditionally made creating realistic metal materials difficult. I'm waiting for machine learning that has been trained on the appearance of various metals in different lighting conditions and can then modify the shader nodes to produce the closest match to what it determines "gold" should look like in your scene...
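
One naive way to pose that as a learning problem is plain regression: map a lighting descriptor to shader parameters, trained on renders an artist already approved. A toy sketch, with invented features, parameter names, and numbers, just to make the shape of the idea concrete:

[code]
# Toy sketch: learn lighting-descriptor -> metal shader parameters.
# Features, parameter names, and numbers are all invented.
from sklearn.neighbors import KNeighborsRegressor

# Per training scene: [mean luminance, color temperature in kilokelvin]
lighting = [[0.2, 3.2], [0.5, 5.6], [0.9, 6.5]]
# Shader settings an artist judged to "look like gold" in each scene:
gold = [
    [0.95, 0.05, 1.00],  # [metallic, roughness, specular]
    [0.95, 0.12, 0.95],
    [0.95, 0.20, 0.90],
]

model = KNeighborsRegressor(n_neighbors=2)
model.fit(lighting, gold)

# What should "gold" look like in a new scene?
metallic, roughness, specular = model.predict([[0.6, 5.0]])[0]
print(metallic, roughness, specular)
[/code]

A real version would need far richer lighting features and many more examples, but the plumbing is no more exotic than that.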
Eventually I see machine learning routines being combined into a "final pass" tool. For example, I run video of an older game like Fable through the program and it identifies various things in my scene, such as a person running, a burning fire, or a waterfall, and then replaces or alters those elements to make them look more natural. Maybe it looks for poke-through and self-intersection in the clothing. Maybe it replaces static modeled hair with synthesized hair that has the appearance of buoyancy and inertia. Looping billboard flames could be given a more random, more volumetric appearance. It could even be trained to recognize the distribution of body mass and apply soft-body effects as a post effect.

I had the same idea with deepfakes: instead of replacing an actor with an actor, how close to realism could we get replacing a 15-year-old DAZ model with an actor? I've said for years that we would eventually stop "brute-forcing" everything and begin synthesizing entire images and sequences from very minimal input, but only now do I really see things moving in that direction.
There are so many little things I expect us to take for granted in the near future... like color correcting a render in post. I want to be able to bring the corrected image back into the program that produced it and have AI reverse-engineer the lighting and material changes so that a re-render would exactly match the post-corrected image.
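
Graphics research already has a name for that reverse-engineering step: inverse rendering, i.e., optimize the scene parameters until a re-render matches the target image. A toy sketch with a made-up one-parameter "renderer", just to show the loop (nothing here is a real renderer's API):

[code]
# Toy inverse rendering: recover an exposure-like parameter so the
# "render" matches a post-corrected target. The renderer is a stand-in;
# real systems differentiate through full light transport.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((16, 16))        # stand-in for scene radiance

def render(exposure):
    return np.clip(scene * exposure, 0.0, 1.0)

target = render(1.4)                # the color-corrected image we want

exposure, lr = 1.0, 0.5
for _ in range(200):
    diff = render(exposure) - target
    # Gradient of mean squared error w.r.t. exposure (ignoring the clip).
    grad = 2.0 * np.mean(diff * scene)
    exposure -= lr * grad

print(f"recovered exposure: {exposure:.3f}")  # converges to ~1.4
[/code]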
It seems like we have gotten quite good at teaching machines to recognize a 3D form from its silhouette. I wonder how far we've come in teaching one to determine contours from highlight and shadow changes? Or whether there is one that recognizes lines of symmetry in faces, cars, etc.? Single-image photogrammetry would be great!
 
  4 Days Ago
Quote: Things are going so fast that it probably won't be 10 years.

More like 3 or 4 years, I think.

Agreed... to be honest, maybe even less than that. This AI business has been moving crazy fast. I mean, have you seen those AI face swaps (not the porn stuff)? They're getting as good as someone spending a ton of time putting a different face on a person in Photoshop, with results that are almost better.

https://www.youtube.com/watch?v=dkoi7sZvWiU

Also, have you seen the real-time stuff and how good it's getting? Those render programs need to get with the times and cut their render times to about a fifth of what they are now.
I mean, if they can do this stuff in real time, why can't a render engine do a fully realistic, next-to-Pixar-quality image at one frame per 5-10 minutes?

https://www.youtube.com/watch?v=9owTAISsvwk

Behind the scenes there are a ton of things that will come out of the blue that we don't even know are out there yet. I didn't know about this program until today.
__________________
www.howtomakeyourownanime.com
 
  3 Days Ago
Originally Posted by ilovekaiju: I mean, if they can do this stuff in real time, why can't a render engine do a fully realistic, next-to-Pixar-quality image at one frame per 5-10 minutes?


If offline rendering sped up by, say, 5×, people would only buy one render node license for Vray or Octane or Renderman, rather than the five they bought before to get the necessary speed.

Basically, render engine makers would lose 80% of their revenue per customer from a 5× speedup in rendering.

So there is no economic incentive to make offline rendering any faster.

The slower the render engine, the more render node licenses they can sell you, and the more CPU render boxes or GPUs also need to be bought.
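
For what it's worth, the arithmetic behind that 80% figure checks out (the license price below is an invented round number, and it assumes a studio's demand for rendering stays fixed):

[code]
# A studio that needed 5 render-node licenses to hit deadlines
# would only need 1 after a 5x speedup. Price is invented.
price = 1000                      # per render-node license
before = 5 * price                # spend at today's render speeds
after = 1 * price                 # spend after a 5x speedup
drop = 100 * (before - after) / before
print(f"revenue per customer: {before} -> {after} ({drop:.0f}% drop)")  # 80%
[/code]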
 