New AI Can Make Jointed, Textured 3D Model Of A Person From A Few Seconds Of 2D Video

  04 April 2018
New AI Can Make Jointed, Textured 3D Model Of A Person From A Few Seconds Of 2D Video

This may be wonderful for indie game developers in particular:

http://www.sciencemag.org/news/2018...w-seconds-video

https://youtu.be/nPOawky2eNk


Originally Posted by me: The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
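
For anyone curious about the stage-two trick, here is a tiny numpy toy of just that fusion step (invented data and placeholder numbers, not the researchers' code): every frame contributes a noisy T-posed estimate, and averaging them gives a noticeably more accurate single model.

Code:
import numpy as np

# Toy illustration of the stage-2 fusion idea only: every frame yields a
# noisy, "unposed" estimate of the body surface in a canonical T-pose, and
# averaging those estimates gives one more accurate model. Made-up data,
# not the paper's method or code.

rng = np.random.default_rng(0)
true_shape = rng.normal(size=(1000, 3))          # stand-in for T-posed vertices

def per_frame_estimate(noise=0.05):
    """Simulate one frame's rough, already-unposed shape estimate."""
    return true_shape + rng.normal(scale=noise, size=true_shape.shape)

estimates = [per_frame_estimate() for _ in range(120)]   # a few seconds of video
fused = np.mean(estimates, axis=0)                       # combine into one model

print("single-frame error:", np.abs(estimates[0] - true_shape).mean())
print("fused-model error: ", np.abs(fused - true_shape).mean())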
 
  04 April 2018
Big studios also use 3D photogrammetry, so this should streamline the process for all artists.
Imagine it creating perfectly retopologized models in 10 years. And then, even from concept designs? Who knows.
 
  04 April 2018
Things are going so fast that it probably won't be 10 years.

More like 3 or 4 years I think.
 
  04 April 2018
There's already realtime 3D capture that produces high-quality results, though it isn't a rigged mesh; it captures a new mesh every frame. For faces they can get topology mapped on there automatically, but it'd be a different matter for a full person because of differences in clothing.
__________________
The Z-Axis
 
  04 April 2018
It's so frustrating watching this stuff develop and knowing I won't be getting my hands on it anytime soon. I recently popped in to the Poser forums and read part of a thread asking what users would do if they were in charge of the program's development...  It was all predictably incremental stuff... focus on a new rig and leave the program alone, forget about providing characters and make the rigging easier...  stuff like that.
Machine learning would be great for Poser as most of its users still seem to prefer making stills to animating or exporting assets... I said just a few weeks ago that machine learning would soon allow mo-cap from video, and that's almost the reverse of what this software is doing.  
I wonder if anyone is using machine learning for materials?  Poser, not being PBR, has traditionally made creating realistic metal materials difficult. I'm waiting for machine learning that has been trained on the appearance of various metals in different lighting conditions and can then modify the shader nodes to produce the closest match to what it determines "gold" would look like in your scene...
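Roughly what I have in mind, as a toy sketch only; the little analytic "renderer" and all the numbers below are made up and have nothing to do with Poser's actual shader nodes:

Code:
import numpy as np

# Made-up "renderer": a tiny analytic stand-in for a metal shader with a base
# colour and a roughness that shapes the highlight. A real version would drive
# actual shader node parameters instead.
def toy_render(base_color, roughness, light_dir):
    spec = max(light_dir[2], 0.0) ** (1.0 / max(roughness, 1e-3))
    return np.clip(np.asarray(base_color) * 0.7 + spec * 0.3, 0.0, 1.0)

reference = np.array([0.83, 0.69, 0.22])   # target "gold" patch (invented values)
light = np.array([0.3, 0.4, 0.87])

# Crude random search for the parameters whose render best matches the target.
best, best_err = None, np.inf
rng = np.random.default_rng(1)
for _ in range(5000):
    params = {"base_color": rng.random(3), "roughness": float(rng.random())}
    err = np.abs(toy_render(params["base_color"], params["roughness"], light) - reference).sum()
    if err < best_err:
        best, best_err = params, err

print("fitted parameters:", best, "  error:", best_err)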
Eventually I see machine learning routines being combined into a "final pass" tool...  For example, I run video of an older game like Fable through the program and it identifies various things in my scene, such as a person running, a burning fire, a waterfall... It could then replace or alter those elements to make them look more natural...  Maybe it looks for poke-through/self-intersection in the clothing... Maybe it can replace static modeled hair with synthesized hair that has the appearance of buoyancy and inertia. Looping billboard flames might be given a more random, more volumetric appearance. It could even be trained to recognize the distribution of body mass and apply soft-body-type effects as a post effect. I had the same idea with deepfakes... Instead of replacing an actor with an actor, how close to realism could we get replacing a 15-year-old DAZ model with an actor?  I've said for years that we would eventually stop "brute-forcing" everything and begin synthesizing entire images/sequences from very minimal input, but only now do I really see things finally moving in that direction.
There are so many little things I expect to see us take for granted in the near future...  Like color correcting a render in post...  I want to be able to bring the corrected image back into the program that produced it and use AI to reverse-engineer the lighting and material changes so that the rendered image exactly matches the post-corrected image.
It seems like we have gotten quite good at teaching machines to recognize a 3D form from its silhouette.  I wonder how far we've come in teaching one to determine contours from highlight/shadow changes? Or whether there is one that recognizes lines of symmetry in faces, cars, etc.? Single-image photogrammetry would be great!
 
  04 April 2018
Quote: Things are going so fast that it probably won't be 10 years.

More like 3 or 4 years I think.

Agreed... to be honest, maybe even less than that. This AI business has been moving crazy fast. I mean, have you seen those AI face fakes... not the porn stuff... but it's getting as good as someone spending a ton of time putting a different face on a person in Photoshop, with almost better results.

https://www.youtube.com/watch?v=dkoi7sZvWiU

Also, have you seen the realtime stuff and how good it is getting? Those render programs need to get with the times and cut their render times to about 1/5th of what they are now.
I mean, if they can do this stuff in realtime, why can't a render engine do a full, realistic, next-to-Pixar-quality image at one frame every 5-10 minutes?

https://www.youtube.com/watch?v=9owTAISsvwk

Behind the scenes there are a ton of things that are going to come out of the blue that we don't even know are out there yet. I didn't know about this program till today.
__________________
www.howtomakeyourownanime.com
 
  04 April 2018
Originally Posted by ilovekaiju: I mean, if they can do this stuff in realtime, why can't a render engine do a full, realistic, next-to-Pixar-quality image at one frame every 5-10 minutes?


If offline rendering sped up by, say, 5×, people would only buy one render node license for, say, Vray or Octane or Renderman, rather than the five they bought before to get the necessary speed.

Basically, render engine makers would lose 80% of their per-customer revenue from a 5× speedup in rendering.

So there is no "economic incentive" to make offline rendering any faster.

The slower the render engine, the more render node licenses they can sell you, and the more CPU render boxes or GPUs also need to be bought.
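
To put rough numbers on that 80% (made-up figures, assuming a studio buys only as many node licenses as a fixed overnight deadline requires):

Code:
import math

# Back-of-envelope arithmetic behind the 80% figure, assuming a studio buys
# only as many node licenses as a fixed overnight deadline requires.
# All numbers are made up.
frames_per_night = 2000
frames_per_node_now = 400                       # one node's overnight output today
frames_per_node_fast = frames_per_node_now * 5  # hypothetical 5x speedup

licenses_now = math.ceil(frames_per_night / frames_per_node_now)    # 5 licenses
licenses_fast = math.ceil(frames_per_night / frames_per_node_fast)  # 1 license

print("revenue per customer:", licenses_fast / licenses_now)        # 0.2, i.e. an 80% drop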
 
  05 May 2018
"So there is no "economic incentive" to make offline rendering any faster.
The slower the render engine, the more render node licenses they can sell you"

Go tell this to Chaosgroup (just to name the renderer I use, but I'm sure there are many other examples as well). Going from Vray 2.5 to 3.6, many scenes see speedups of at least 10× between improvements in GI and sampling, the denoiser, adaptive lights, etc. Do you actually do rendering for a living? I ask because I do, and it's not unusual to get a 2× speedup just from a small free incremental update of the software, let alone the big updates. The rule is that no matter what your software and hardware are capable of, you will always find ways to get slower render times (while increasing your quality, though).
__________________
www.3drenderandbeyond.com
www.3dtutorialandbeyond.com
www.facebook.com/3drenderandbeyond
 
  05 May 2018
Question is, is the rendering math in VRay or any other render engine so complex that you couldn't get that speed optimization way back in V 1.0 ?

You see, the fewer lines of code the rendering math is written in, and the fewer floating point operations are executed per pixel, the faster the rendering finishes on CPU, GPU or anything else.

If V 1.0 was complete and utter bloatware - very poorly written, with 30 to 40 times more code lines and math operations than actually needed to get the job done - and then V 2.0 got rid of some of that bloat, and then V 3.0 got rid of a little more, and then V 3.6 some more, then yes, you may see a 10× speedup in rendering over a few versions, because you went from 90,000 very poorly written lines of code being executed per pixel on average down to a better-optimized 9,000 lines. It might actually be possible to do it in 900 lines of code or even fewer if you have a really good coder or two on the team.

If you are in the business of selling render node licenses, however, there is no economic incentive whatsoever to optimize the crap out of rendering code.

Unless some competing render engine pulls far ahead of you in terms of speed, keeping your rendering code nice and slow sells licenses. Many, many licenses.

Then - version by version - you can gradually tighten your code and give your customers nice little 1.5×, 2×, or 3× speedups here and there, so they pay for upgrades every year or two.


If you were a render engine company CEO, would you rather sell 400,000 node licenses a year with slow code, or just 40,000 node licenses with code that runs 10 times faster on the same CPU?

Would you want 10 times the revenue, or 1 times the revenue? Large offices with a lot of staff, or a small corner office with just 10 people working for you?


It's pure economics - if you were to write a real speed demon of a render engine, with super-tight code and super-clever math, running many, many times faster than anything done before, and then did not charge a huge price per render node, you'd probably put both yourself and the rest of the render engine manufacturers out of business.

So I stand by my point - there is probably no economic incentive to optimize rendering code, and there is no way to check whether the code is optimized properly at all, because everything other than, say, Blender Cycles is closed source.

You will literally never see the actual code and math running under the hood of Vray, Octane, Corona or whatever you use, unless that company goes out of business and open sources the code.

That won't happen either because - economics, economics - some rival will quickly buy up that rendering code for pennies and prevent it from ever being open sourced in the first place.


Slow rendering code and slow rendering math sell node licenses, software seats, CPUs, GPUs, RAM and everything else you need for CG.


Hardware-accelerated raytracing may be possible with a little 200 to 400 dollar ASIC card in 2018, yet nobody makes those cards for rendering - it would kill the sale of render node licenses and of a lot of other things: CPUs, GPUs. You'd need fewer software seats and fewer CG artists working, because the render-time bottleneck would be completely obliterated. Everything would be completed and delivered much faster than is possible today, unless you have a really generous budget for render slaves sitting in your office.


It's a bit like electric cars. Everybody said for decades, "They won't sell, they won't be practical, they won't have any real range, they won't be fast, they'll be too expensive to recharge."

Then Tesla came along and proved all that wrong in just a few years.


What makes you think that render engine development is a super-special, super-ethical industry where everybody works 100 hour weeks to give you the fastest rendering technically possible?
 
  05 May 2018
I've been here long enough to read every kind of sensational daydreaming nonsense, including incredible quantum hardware available in a few years (that was 5 years ago), GPU rendering 100×/1000× faster than CPU, and now I keep hearing from people like you about cheap hardware-accelerated rendering (which was actually promised one or two years ago but never delivered). The closest thing to somebody actually constraining the hardware is Intel (and in the past also Nvidia, a bit). This is just because until recently Intel had no competitor; as soon as AMD provided much-needed competition, Intel started to lower prices and add more cores. That being said, even now with healthy competition Intel is struggling to get their 10nm hardware available. If somebody could make a PCIe acceleration card capable of really faster render times, then you can be sure that they would make it and that people would buy it in quantities. Sadly, none of that ever happened. You probably like conspiracy theories a lot, but here in the CG world, if one producer can make his software a lot faster than the others, you can be sure that they will do it. That's because here in the real world there's a lot of competition between render engines. For your information, a lot of companies offer cheap and competitive prices for render nodes (Autodesk Arnold was an isolated case), so this is not where their income comes from. Many artists I know don't even need slaves, or they simply use online farms (which don't pay full price for the license).
You should render more and dream/overthink less; eventually you will appreciate all the speedups that software manufacturers offer us (even for free sometimes). My experience is that in the last few years my rendering time has remained the same; the difference is that until a couple of years ago I was using biased GI and every sort of trick to get decent render speed, while now I can get brute-force rendering, without touching any settings, in the same amount of time (a thing out of the question a couple of years ago). That's because my engine's performance increased a lot, and I paid little more than €300 to upgrade from the old version, including a 10-node license.
__________________
www.3drenderandbeyond.com
www.3dtutorialandbeyond.com
www.facebook.com/3drenderandbeyond

Last edited by sirio : 05 May 2018 at 12:36 PM.
 
  05 May 2018
Originally Posted by moogaloonie: "It's so frustrating watching this stuff develop and knowing I won't be getting my hands on it anytime soon. I recently popped in to the Poser forums and read part of a thread asking what users would do if they were in charge of the program's development...  It was all predictably incremental stuff... focus on a new rig and leave the program alone, forget about providing characters and make the rigging easier...  stuff like that.
Machine learning would be great for Poser as most of its users still seem to prefer making stills to animating or exporting assets..."





I was a Poser user from version 2 up until about five years ago, when I switched to DAZ Studio
to access the higher-quality Genesis model rigs,
along with Reallusion iClone Pro for character animation.

Poser users today, by and large, are not interested in any of these advances in "machine learning".

The overwhelming majority of them are still-image portrait makers
pathetically clinging to the 12-year-old "Victoria 4" model, as it is the last
DAZ model in native Poser format.

The Poser animation tools (such as they are)
have not been effectively updated since the 1990s.

The few lingering online Poser communities all have cult-like atmospheres
where all of the many shortcomings of the decrepit, cruft-ridden Poser software
are blamed on their "enemies", such as DAZ.

The base DAZ Studio software and figure bases are free.

Smith Micro still charges hundreds of dollars for Poser, yet the native figures that ship with Poser
are a cruel joke by year-2000 standards, to say nothing of how they compare to what we see in the prefab market in 2018.

Poser is a vestigial relic from a bygone century
and indeed the very last place we will see any of
these new AI algorithms implemented.

Last edited by ThreeDDude : 05 May 2018 at 12:49 PM. Reason: spelling
 
  05 May 2018
I can imagine this as a popular app - quick photogrammetry you can make funny animations with. Something like Apple's animoji, but full body. 

I don't think it's actually useful for anything else, and it's not much of a time saver. Photogrammetry is already fast and easy, and so is rigging a body. So long as you've already got a skeleton, you can rig a body like the ones in the video in seconds. The hard part of rigging is making the deformations look natural: weighting, cloth/soft-body physics, etc., all of which is much more difficult.
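
To illustrate what I mean: the skinning step itself is just a weighted blend of bone transforms per vertex (standard linear blend skinning), so with a skeleton and weights in hand it is trivial; the slow, skilled part is choosing those weights. A minimal numpy toy, nothing to do with the rig in the video:

Code:
import numpy as np

# Minimal linear blend skinning: each vertex moves by a weighted blend of bone
# transforms. Attaching a skeleton is the quick part; choosing the per-vertex
# weights so the deformation looks natural is the slow, skilled part.
def skin(vertices, bone_mats, weights):
    """vertices: (N,3), bone_mats: (B,4,4), weights: (N,B) with rows summing to 1."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])   # (N,4) homogeneous
    per_bone = np.einsum('bij,nj->bni', bone_mats, homo)        # each bone's result
    blended = np.einsum('nb,bni->ni', weights, per_bone)        # weighted blend
    return blended[:, :3]

# Toy "arm": three vertices, two bones, a 90-degree bend about an elbow at x=1.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
bend = np.array([[0.0, -1.0, 0.0,  1.0],
                 [1.0,  0.0, 0.0, -1.0],
                 [0.0,  0.0, 1.0,  0.0],
                 [0.0,  0.0, 0.0,  1.0]])
bones = np.stack([np.eye(4), bend])
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])

print(skin(rest, bones, weights))   # elbow vertex stays put, forearm tip rotates up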
 
  05 May 2018
When I watched this video, all I could focus on was how horrible the results were,
and how much it looked like the animation work done in those insurance TV commercials for The General.
If you want horrible late-1980s-style animation, AI seems to be really good at it.

-ScottA
__________________
My Gallery
 
  05 May 2018
Originally Posted by sirio: I've been here long enough to read every kind of sensational daydreaming nonsense, including incredible quantum hardware available in a few years (that was 5 years ago), GPU rendering 100×/1000× faster than CPU, and now I keep hearing from people like you about cheap hardware-accelerated rendering (which was actually promised one or two years ago but never delivered). The closest thing to somebody actually constraining the hardware is Intel (and in the past also Nvidia, a bit). This is just because until recently Intel had no competitor; as soon as AMD provided much-needed competition, Intel started to lower prices and add more cores. That being said, even now with healthy competition Intel is struggling to get their 10nm hardware available. If somebody could make a PCIe acceleration card capable of really faster render times, then you can be sure that they would make it and that people would buy it in quantities. Sadly, none of that ever happened. You probably like conspiracy theories a lot, but here in the CG world, if one producer can make his software a lot faster than the others, you can be sure that they will do it. That's because here in the real world there's a lot of competition between render engines. For your information, a lot of companies offer cheap and competitive prices for render nodes (Autodesk Arnold was an isolated case), so this is not where their income comes from. Many artists I know don't even need slaves, or they simply use online farms (which don't pay full price for the license).
You should render more and dream/overthink less; eventually you will appreciate all the speedups that software manufacturers offer us (even for free sometimes). My experience is that in the last few years my rendering time has remained the same; the difference is that until a couple of years ago I was using biased GI and every sort of trick to get decent render speed, while now I can get brute-force rendering, without touching any settings, in the same amount of time (a thing out of the question a couple of years ago). That's because my engine's performance increased a lot, and I paid little more than €300 to upgrade from the old version, including a 10-node license.

First, you're correct in some aspects here. "Quantum" computing isn't even quantum at all, and it's a complete joke. So I can see where you're coming from in general.

But in specific? Skeebertus and almost all of us here do a LOT of rendering - for a living. No need to insult him, he's been around the block far more than you have and is a very active member of this community.

Intel had no competitor? AMD has been their chief rival for 20 years. It appears you missed the last 18 years of CPU tech. "PCIe" acceleration card? Yes, we all already have those - they're called GPUs. Yes, they deliver those 100×/1000× advantages, with a few disadvantages too, so it's up to the user to decide if that works for them. Unreal Engine alone is making a huge dent in arch-viz now - in realtime, at 4K.

Yes, there's a lot of competition between rendering engines, and there always has been; I don't see how that's a point. If your render times remained the same, it's because you weren't updating your computers often enough - I don't see how that's the render engine's fault. Yes, we all optimize as much as we can to hit the sweet spot between speed and quality in rendering, but that's always been true. Yes, you could have "brute forced" your renderer the entire time but chose not to, for that exact reason.

It seems like you're picking on people while ignoring your own contradictions here.
__________________
Commodore 64 @ 1MHz
64KB RAM
1541 Floppy Drive


"Like stone we battle the wind... Beat down and strangle the rains..."
 