Is there anything we don't know?


Integrity
09-15-2005, 03:29 AM
I don't know quite how to phrase this, but...is there anything that we don't know yet, that we don't understand or don't know how to recreate in CG?

For example, the well-known problem of creating realistic skin, now possible with SSS shaders and effects.

Or recreating the dynamic ranges in imagery with HDRI's.

Or rendering out accurate circle-of-confusion effects that once plagued the pinhole type renderings of the past.

I can't really think of anything that someone hasn't figured out a way to render (from my very limited knowledge, anyway). Of course, if we had infinitely fast processors we would be able to calculate everything, but I'm just talking about knowing how, not whether we could render it in a reasonable amount of time.

flawedprefect
09-15-2005, 03:53 AM
Okay, I'll bite.

We haven't yet figured out how to make actors completely obsolete using CG. We cannot synthesise a convincing vocal performance.

We are close, however, to completely realistic character movement.

Take Final Fantasy for example: there was some mo-cap used in that film, but much of the animation was hand-tweaked. The lip-sync sucked big time, because the animators were trying to be too realistic.

On the other hand, we have "The Incredibles" - non-realistic looking characters whose movements and lip-sync "seem" more real, because of their exaggeration.

Somewhere in the middle of those two lies a sweet spot, but it can only be achieved with a combination of motion capture and hand tweaking techniques.

mookid2005
09-15-2005, 03:56 AM
We still have to wait for renders....

That is the holy grail for me: instant GI, please! :)

Bonedaddy
09-15-2005, 04:42 AM
I think a big new frontier is a synthesis of Debevec's light probe stuff with motion/performance/universal capture. Being able to accurately get an animated mesh (with textures and, ideally, shader info) of a person would be cool. Motion capture is a pretty limited technology, and to completely synthesize human motion (aside from rotoscoping or hiring awesome animators) is the next step. Greg Panos had some interesting work he was doing on using boujou-style matchmoving to capture facial expression. I think he had painted grids on people's faces to better track their features.

By the way, I'm in no way knocking animators -- for any sort of important motion, you really should hand-animate it -- I'm saying that, for many VFX shows, there is a huge need for (often tons of) realistic digital doubles, to where hand-animating them is just not an option. Mocap is a limited, often low-fidelity technology, which has trouble accurately reproducing facial and hand animations. Looking forward to the next step.

Bonedaddy
09-15-2005, 04:48 AM
Oh yeah, and fine-detail multi-fluid systems (with nice whitewater) that can interact with objects (lift them up, carry them around) and are fully directable would be nice. Heck, if you want to get into it, some sort of global dynamics system, where things can't interpenetrate, would be awesome. There are huge advances to be made in the directability of dynamics. Volumetric shatter scripts built into the programs? Wood splintering and breaking? What about AI-based animation, like Endorphin's working on? Lots of thorny problems still out there.

Garma
09-15-2005, 06:38 AM
Oh yeah, and fine-detail multi-fluid systems (with nice whitewater) that can interact with objects (lift them up, carry them around) and are fully directable would be nice. Heck, if you want to get into it, some sort of global dynamics system, where things can't interpenetrate, would be awesome. There are huge advances to be made in the directability of dynamics. Volumetric shatter scripts built into the programs? Wood splintering and breaking? What about AI-based animation, like Endorphin's working on? Lots of thorny problems still out there.


If I understand you correctly, you are talking about a physics engine, like the Havok engine that Half-Life 2 implemented. Speaking of wood splintering, that was amazing to see.

check wikipedia:

http://en.wikipedia.org/wiki/Havok_%28software%29

jeremybirn
09-15-2005, 06:55 AM
Probably the biggest area where software is still in its infancy is computer vision.

Computer vision is the converse of 3D rendering: it goes from an image to a 3D representation of what's in the scene.

As computer vision matures, it will drive technologies from robots to augmented reality systems, and it will help CG artists with digitizing models based on images, mocap without sensors, automatic roto without greenscreens, image-based camera tracking and matching, etc.

-jeremy
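
The image-to-3D direction Jeremy describes can be illustrated with its most basic building block: triangulating a single 3D point seen by two cameras. This is only a toy sketch under big assumptions - the camera projection matrices and the matched image points are simply given here, whereas recovering them from real footage is where the hard computer vision problems live:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same point in each view
    Returns the 3D point as a length-3 array.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector belonging
    # to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity intrinsics, second camera shifted 1 unit on x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A point 5 units in front of camera 1, projected into both views:
point = np.array([0.0, 0.0, 5.0, 1.0])
u1 = P1 @ point; u1 = u1[:2] / u1[2]
u2 = P2 @ point; u2 = u2[:2] / u2[2]

print(triangulate(P1, P2, u1, u2))  # recovers approx. [0, 0, 5]
```

Real matchmoving packages solve for the cameras and thousands of such points simultaneously, but the linear algebra at the bottom looks much like this.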

darktding
09-15-2005, 07:06 AM
the Matrix?
Think about it, all we ever do in the end of the day is feed ourselves... if we are all plugged into this system....

Bonedaddy
09-15-2005, 07:20 AM
If I understand you correctly, you are talking about a physics engine, like the Havok engine that Half-Life 2 implemented. Speaking of wood splintering, that was amazing to see.

check wikipedia:

http://en.wikipedia.org/wiki/Havok_%28software%29

Right. No, I'm talking about, say, within Maya. Something that just runs in realtime, say, while placing props around a scene, or while animating, so you don't have interpenetration, as you move your mouse around. I know some packages are starting to get this implementation (Motionbuilder and Endorphin have had simple versions of it for awhile), and the tech to do it is definitely out there (XSI's novodex stuff looks nice), but so far, to my mind, it's implemented clunkily within 3d packages like Maya and Max. It's totally a bells-and-whistles thing, but it would make my life easier if rigid body dynamics were a bit more prevalent and bulletproof.
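
As a toy sketch of the kind of test such a realtime system would run on every mouse move - assuming props are approximated by bounding spheres, with all names here made up for illustration:

```python
import math

def resolve_overlap(movable, fixed):
    """Push a dragged sphere out of a static one so they never interpenetrate.

    Each sphere is a dict with 'pos' (x, y, z) and 'radius'. Returns the
    corrected position for the movable sphere.
    """
    dx = [m - f for m, f in zip(movable["pos"], fixed["pos"])]
    dist = math.sqrt(sum(d * d for d in dx))
    min_dist = movable["radius"] + fixed["radius"]
    if dist >= min_dist or dist == 0.0:
        return movable["pos"]  # no overlap (or exactly coincident: give up)
    # Push the movable sphere out along the line between the two centres
    # until the surfaces just touch.
    scale = min_dist / dist
    return tuple(f + d * scale for f, d in zip(fixed["pos"], dx))

# Dragging a prop into a static one: it gets pushed back to the surface.
prop = {"pos": (0.5, 0.0, 0.0), "radius": 1.0}
wall = {"pos": (0.0, 0.0, 0.0), "radius": 1.0}
print(resolve_overlap(prop, wall))  # -> (2.0, 0.0, 0.0)
```

A production system would of course use the real prop geometry (or convex hulls of it) rather than spheres, but running a cheap proxy test like this per mouse move is what makes the interaction feel instant.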

wireFrame
09-15-2005, 07:24 AM
Funny, I used to watch old science fiction TV series where a character types some questions into a computer. The computer then spits out several answers--voila!

I imagine software with a built-in debugging system, like a 3D package that fixes its own bugs.

Headless
09-15-2005, 07:38 AM
If the question is more like "if I was making a film and had a large enough budget and my pick of the CG houses, is there anything I couldn't do?", then personally I think these days there isn't. I think people like George Lucas, Peter Jackson and, dare I say it, the Wachowski Brothers have shown that. In terms of the final images that go up on screen, it's no longer a question of whether something can be done, it's a question of how much further the envelope can be pushed.

I think this is actually as much of a hindrance to film as it is a help. The problem films have today is that the audience knows you can do anything with CG, so the magic of cinema has been stripped away: as soon as something amazing appears on screen, the audience immediately knows it's computers, regardless of how well executed the effect is.

I think the new challenge that film makers have is not how realistic they can make their CG but how they can use it in such a way that the audience is able to suspend their disbelief and roll with it.

wireFrame
09-15-2005, 07:47 AM
... I think the new challenge that film makers have is not how realistic they can make their CG but how they can use it in such a way that the audience is able to suspend their disbelief and roll with it.

Let's see if Peter Jackson's "King Kong" can fool us as to whether we're looking at the real Naomi Watts or not.

rollmops
09-15-2005, 08:01 AM
Or recreating the dynamic ranges in imagery with HDRI's.

Well, for me, the next challenge is learning how to break the limits of colour.
I mean, with an ordinary 8-bit image we go from black to white, but we could go from white to light. I'd like to learn how to preview the "film response". (Sorry for my English.) There's more in a 16-bit or Cineon image than a CRT screen allows us to see. Does anyone have a trick or a tutorial to share?

Thanks.

arvid
09-15-2005, 09:02 AM
edit: never mind

grafikdon
09-15-2005, 11:17 AM
Well...when will rendering become obsolete...You know I always thought of animating and 'saving as'...just like in Photoshop...paint and save...End of story. What a wonderful time that will be.

pogonip
09-15-2005, 11:45 AM
I still don't think they can do hair perfectly. Then on to virtual reality and holographics... yay!

Michael5188
09-15-2005, 05:50 PM
"Computer, make a good movie with an involving storyline, amazing animation and effects, and deep characters."

...loading...

beaker
09-15-2005, 06:27 PM
If the question is more like "if I was making a film and had a large enough budget and my pick of the CG houses, is there anything I couldn't do?", then personally I think these days there isn't. I think people like George Lucas, Peter Jackson and, dare I say it, the Wachowski Brothers have shown that. In terms of the final images that go up on screen, it's no longer a question of whether something can be done, it's a question of how much further the envelope can be pushed.

Yup, I think we can do anything (even 5 years ago we could do anything). The trouble is time and money. We are always short on one or the other to do it all in a production schedule.

Yes, we can get a super realistic (fill in the blank), but does anyone actually have time for 100-hour-a-frame renders?

grafikdon
09-15-2005, 06:46 PM
"Computer, make a good movie with an involving storyline, amazing animation and effects, and deep characters."

...loading...


Bwahahaha...oh boy! Maybe in 1000000 years.

Albius
09-15-2005, 07:29 PM
Bwahahaha...oh boy! Maybe in 1000000 years.

Bah, I'd give it a couple of centuries, tops-- but by then I doubt there'll be much of a market for it. The easier art gets to make, the more important the underlying expression becomes.

JeroenDStout
09-15-2005, 08:20 PM
If we have computers that can do such a thing, they must be very intelligent. Intelligent enough to say "why don't we build computers which assist us?", or more likely: "Ok, meatbag, oh, wait, ERROR ERROR....... DEstroy! Destroy! Destroy!"

SpeccySteve
09-15-2005, 08:30 PM
Funny, I used to watch old science fiction TV series where a character types some questions into a computer. The computer then spits out several answers--voila!

I imagine software with a built-in debugging system, like a 3D package that fixes its own bugs.

I'd settle for a little dog shaped robot that hoovers my carpet for me.
That and a hover car.

I watched "Tomorrow's World"; we should all be going to work on jetpacks by now.

MJV
09-16-2005, 03:18 AM
There is still no such thing as a learning computer. There is no CG software that anticipates what I may want to do next based upon what I've done before, or will make suggestions, give advice, or help me plan.

playmesumch00ns
09-16-2005, 08:25 AM
Well, for me, the next challenge is learning how to break the limits of colour.
I mean, with an ordinary 8-bit image we go from black to white, but we could go from white to light. I'd like to learn how to preview the "film response". (Sorry for my English.) There's more in a 16-bit or Cineon image than a CRT screen allows us to see. Does anyone have a trick or a tutorial to share?

Thanks.

And there's even more in a float EXR :)

HDR Monitors are available already (or at least prototyped). Haven't been able to see one myself, but people who saw them at Siggraph over the last few years say they're amazing.

Personally I'd like to see a proper physically based rendering approach with spectral colour representation (like Maxwell does) in a production renderer.
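
To illustrate the "more than the screen can show" point rollmops raised: a minimal sketch of previewing a response curve, using the simple Reinhard operator x/(1+x) as a stand-in for a real measured film response (a production pipeline would use an actual film LUT instead):

```python
def film_response(linear, exposure=1.0, gamma=2.2):
    """Map a linear scene-referred value to a 0-255 display value.

    Uses the Reinhard operator x / (1 + x) as a stand-in for a real film
    response curve, then applies display gamma. Values far above 1.0
    (the 'whiter than white' range an 8-bit image throws away) roll off
    smoothly instead of clipping.
    """
    x = linear * exposure
    tone_mapped = x / (1.0 + x)          # soft shoulder, never reaches 1.0
    display = tone_mapped ** (1.0 / gamma)
    return round(display * 255)

# Straight clipping treats every value over 1.0 as the same white...
print([min(255, round(v ** (1 / 2.2) * 255)) for v in (0.5, 1.0, 4.0, 16.0)])
# -> [186, 255, 255, 255]

# ...while the response curve keeps the highlights distinct.
print([film_response(v) for v in (0.5, 1.0, 4.0, 16.0)])
# -> [155, 186, 230, 248]
```

The second list is the point: four different scene intensities stay four different display values, which is roughly what a film print does and what a plain 8-bit pipeline throws away.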

Fasty
09-16-2005, 11:45 AM
I'd settle for a little dog shaped robot that hoovers my carpet for me.
That and a hover car.

I watched "Tomorrow's World"; we should all be going to work on jetpacks by now.

http://www.roombavac.com/consumer/product_detail.cfm?prodid=52 It's not dog shaped, but hey

Agreed on the hover car and jetpacks.

SpeccySteve
09-16-2005, 01:47 PM
Heh, cool.

beaker
09-16-2005, 05:43 PM
HDR Monitors are available already (or at least prototyped). Haven't been able to see one myself, but people who saw them at Siggraph over the last few years say they're amazing.

They were cool at SIGGRAPH, but 50k for the 42" TV and 20k for the 17" computer monitor was just a little too pricey for most people right now. Ouch.

beSigned
09-16-2005, 09:00 PM
I personally don't think that there are any borders; there are always goals to achieve.
I also don't think you can truly recreate reality, because you'd have to model everything on a cellular, or even molecular, basis. Just imagine the billions upon billions of polygons needed to make one single character. And the time to make it. It'd take a lifetime...

However I'm not educated in this so I might be wrong, perhaps fatally wrong. It's only my opinion.

Neoklassik
09-16-2005, 09:37 PM
Right. No, I'm talking about, say, within Maya. Something that just runs in realtime, say, while placing props around a scene, or while animating, so you don't have interpenetration, as you move your mouse around. I know some packages are starting to get this implementation (Motionbuilder and Endorphin have had simple versions of it for awhile), and the tech to do it is definitely out there (XSI's novodex stuff looks nice), but so far, to my mind, it's implemented clunkily within 3d packages like Maya and Max. It's totally a bells-and-whistles thing, but it would make my life easier if rigid body dynamics were a bit more prevalent and bulletproof.

It's funny that a somewhat "low end" (no offense meant) program like Carrara has collision detection built in while modeling; like you said, placing props was quite easy. When I moved to Lightwave and then Cinema4D (though they both have workarounds) I missed that one feature quite a bit. Not going back to Carrara just for that though. :)

Boone
09-16-2005, 09:40 PM
Apart from improvements in rendering time, I can't see CG in animation producing any new real benefits.

Most things have their limitations, and the only real ground gained in recent times is the speed at which CG is being produced. Pretty much all CG humans still look like Neo from Matrix Reloaded... :hmm:

leigh
09-16-2005, 10:12 PM
Of course there are things that we don't know.

I still don't know the full details of things I did one night in April 2000.







... Oh, we're talking about computer graphics? In that case, my first sentence still applies.

mangolass
09-17-2005, 02:44 AM
Pretty much all CG humans still look like Neo from Matrix Reloaded... :hmm:

Man ~ you've got to go out and see more movies!

Or at least listen to Beaker ~ seriously, we get fooled every day by photoreal CG humans that look and act so much better than Keanu that we don't even notice when they are skillfully used in a movie. Since people only notice the bad attempts, some think that's all that exists.

LT

JeroenDStout
09-17-2005, 10:13 AM
I still don't know the full details of things I did one night in April 2000.
Actually, Leigh, neither does the reindeer, the lemonade seller or the newspaper boy. You're just lucky I videotaped it and will one day, should I have the opportunity, use it for the single most embarrassing sequence in a Pixar movie.

And considering that, you're lucky I don't plan to work at Pixar!

*watches it*

Teeheehee! Those rubber 2m shoes really crack me up every time!

lightblitter22
09-17-2005, 01:12 PM
Apart from improvements in rendering time, I can't see CG in animation producing any new real benefits.

So you wouldn't want a smart rig that learns the movement characteristics of a character you are animating over time, and makes it easier to animate once it's learned enough about how the character moves (timing and so forth)? Or technology that lets you pull fully textured 3D environments and objects out of video footage? Or physics that simulates real-life material properties like brittleness, malleability, melting/liquefying at high temperatures or becoming more rigid at cold temperatures? Auto-articulated characters that don't just stand still when you don't animate them by hand? Trees and vegetation so detailed that you could cut a stem or branch and see inside the plant? Procedural surfaces you can zoom into with a microscope? Modeling/texturing/animation without topology restrictions? Audio physics coupled into object dynamics?

There's still loads of stuff coming and loads to be done on all aspects, from making the technology faster to work with and more natural to use to upping what it can do on the modeling, texturing, rigging, animation, dynamics and rendering side of things. :)

Boone
09-17-2005, 02:47 PM
Re: Mangolass.

Oh, of course there are instances where CG human doubles are flawless - but certainly not taking center stage. A good example of this is the shot in The Hulk where two tank drivers fall out as he shakes the turret - as far as the audience is concerned, they'd believe it was two stunt men.

I personally don't see the point in going there (using CG actors in center stage) unless an actor is unable to complete shooting (Bruce Lee, for argument's sake). But that's only considering the amount of hassle involved in such a feat. At the moment it's cheaper and faster to get Tom Cruise himself for the shoot than to produce a CG version. Not only would you have to make vast strides in AI agent tech, but possibly also in motion capture and computer tech.

Another thing to take into account is that a computer isn't a human being, and vice versa. I have noticed that believable results have been achieved by combining a human performance with CG enhancements. An example of this is the shot in AI where a female robot with a mechanized face runs towards the camera and looks (desperately) in different directions. I do believe we will soon reach a point where you can take one actor and replace their face with another's. It's a much more realistic goal than trying to recreate an entire human body.

Re: LightBlitter22.

My friend, if my computer could make my breakfast in the morning, fetch the daily newspaper from down the local shops AND find my slippers - it would be a milestone in the progression of mankind. :bounce:

Just remember that everything has a limit - even computers. It's when we understand the limits of a tool that we become wise in its application... Grasshopper. :wip:

HellBoy
09-17-2005, 03:02 PM
ok, even if someone did know something that hadn't been thought of before, do you think they'd say it in this public forum in front of MWarsame???
they'd try hard to do it themselves, or with their company, to be the first....

excuse me? do you understand what I'm saying :shrug: neither do I...

lightblitter22
09-17-2005, 03:26 PM
Ideas are cheap, Mo. Take the learning rig idea, for example. You rig a character, animate it doing different stuff by hand, and over time it begins to learn how that character moves, so when you have to do something like the character bending forward to pick some keys off a table you don't have to key 15 different bones by hand and adjust f-curves to get the right timing. You'd pose the basic move and the rig would "time itself" in going through the poses, based on what it's learned about the character from previous animation you've hand-keyed or mocapped.

How would you implement it, though? It would probably take some AI-like learning algorithm using neural nets or similar mathematical constructs. Not easy to implement and get to work 'just right' in every situation you might use that rig in. :)

It's not ideas that are difficult to come up with, it's implementing them right. I thought of a renderer that behaves just like the real world ages ago. So have others. But it's Next Limit that have developed Maxwell Render, not me. It's getting from idea -> working implementation that makes all the difference.
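
As a deliberately naive sketch of why even the "cheap" idea takes real machinery: a nearest-neighbour timing predictor is about the simplest possible version of the learning rig, and it already glosses over everything that makes the problem hard (pose representation, context, style). Everything below is hypothetical:

```python
import math

class TimingModel:
    """Toy 'learning rig': predict how many frames a character takes to
    move between two poses, from previously hand-keyed transitions.
    A pose here is just a tuple of joint angles; a real rig would need
    far richer features and, as the post says, probably a neural net.
    """

    def __init__(self):
        self.examples = []  # (pose_distance, frames) pairs seen so far

    @staticmethod
    def distance(pose_a, pose_b):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)))

    def learn(self, pose_a, pose_b, frames):
        self.examples.append((self.distance(pose_a, pose_b), frames))

    def predict(self, pose_a, pose_b):
        d = self.distance(pose_a, pose_b)
        # Nearest-neighbour lookup on how far apart the poses are.
        nearest = min(self.examples, key=lambda ex: abs(ex[0] - d))
        return nearest[1]

model = TimingModel()
model.learn((0.0, 0.0), (1.0, 0.0), frames=12)   # small move, 12 frames
model.learn((0.0, 0.0), (4.0, 3.0), frames=40)   # big move, 40 frames
print(model.predict((0.0, 0.0), (0.0, 1.2)))     # small move -> 12
```

Even this toy needs decisions about distance metrics and training data; making it behave "just right" across an actual rig is exactly the idea-to-implementation gap the post describes.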

Michael5188
09-17-2005, 04:48 PM
I think some of the advances people mention shouldn't be done. Animation, modeling, and 3D in general are a lot of fun; I don't want the computer to do it for me. But then again, I'm not worried about popping out a film cheaply and quickly.

JonyK
09-17-2005, 04:56 PM
I'm not much into the industry and probably don't know much more than the average noob (technically speaking). My question is simply this:

You know how we have motion capture devices, where you can attach points to a live human, capture the moving points, and apply them to a character in Max, etc.

Do we have something sort of like this, but where the points represent vertices (well, not really, because that would be a lot)? What if you had a physical suit made with all these points on it, tight around the model sort of like spandex? You could then take the points and import them into a 3D program and, hence, have your character almost instantly (well, probably not) created.

Is this possible? Realistic? Do we have something like this? If so what is it called?

Thanks in advance,
Jon K

jeremybirn
09-17-2005, 06:22 PM
Oh, of course there are instances where CG human doubles are flawless - but certainly not taking center stage. A good example of this is the shot in The Hulk where two tank drivers fall out as he shakes the turret - as far as the audience is concerned, they'd believe it was two stunt men.

I personally don't see the point in going there (using CG actors in center stage) unless an actor is unable to complete shooting (Bruce Lee, for argument's sake). But that's only considering the amount of hassle involved in such a feat.

I only found out after seeing Lemony Snicket that so many of the shots of the baby were digital, including a lot of "center-stage" close-ups that I hadn't noticed - although when I saw them at SIGGRAPH I realized a baby couldn't have been given detailed direction to turn and look at the camera with a knife in his mouth then crawl the right way and bite the right thing, etc.

Directors who want a lot of control over where the camera is and what shots they get are another big reason for digital doubles in full-screen close-ups, like the close-up of Wesley Snipes running down a hallway and jumping out a window in Blade 2, where the camera stays in close-up while he jumps through the window and dives through the air. You couldn't do that with a stunt man, and certainly couldn't do it while making it look like a close-up of the main actor.

I think the next "stage" in this is using digital doubles for things they theoretically could have shot with actors, only it would have been too slow, expensive, or limiting to their performance. Like zero-gravity situations. Right now most sci-fi tries to avoid putting actors in zero-G, even when they are in space, just because having to set up each shot with harnesses and cables hidden behind them is a slow, limiting way to film, and it doesn't give the director real freedom to stage how the characters could jump around the spaceship, bouncing between things during the shot. This kind of scene, or scenes making someone look younger or older beyond the limits of make-up, things like that, are the cutting edge, because using digital doubles is entirely optional - one of many approaches to evaluate. As digital doubles get better (as well as slightly faster and cheaper) we'll see more and more of these center-stage performances done digitally.

We're going to lose the assumption that digital doubles are always the last choice or the worst choice for every production, a "technique of last resort" only for situations where the actor has died or something. Maybe it'll always be a relatively slow, tricky, and expensive way to go, but not always the last choice in solving a difficult problem.

-jeremy

Headless
09-17-2005, 11:20 PM
Do we have something sort of like this, but where the points represent vertices (well, not really, because that would be a lot)? What if you had a physical suit made with all these points on it, tight around the model sort of like spandex? You could then take the points and import them into a 3D program and, hence, have your character almost instantly (well, probably not) created.

Is this possible? Realistic? Do we have something like this? If so what is it called?
As far as I remember some older facial tracking methods were similar to this, but I don't think it's possible to do with a full body suit (presumably because the camera resolution isn't good enough to pick things up in that much detail). Also, I really don't see why you would want to do this for body motion, as standard motion capture already does a pretty good job of capturing full-body motion.

On the digital doubles thing, I don't think they'll ever replace actors, but I agree that there are a lot of situations where it would be much easier to use a digital double, or even situations which are impossible with real people. In particular, as Jeremy says, certain camera moves or stunts might require doubles.

Personally I think the degree to which Weta Digital uses them is about right, where they seem to only bring them out when they really need to. By contrast I think they were used a bit too much in Matrix Reloaded/Revolutions. For shots such as Neo running across the sea of Agent Smiths on their shoulders, personally I think they'd have been better off getting one of the Chinese guys on wires (or getting Tony Jaa to do it for real) and then doing face replacement, because doubles can tend to look pretty bad if you don't get the animation right.

Back on the topic of what the industry could do better, I think making the tools less technical and more intuitive would be a good step. At the moment, with something like animation for example, an animator still has a lot of technical stuff to worry about, and really it would be nicer if they could just sit down and animate, worrying more about acting than the techy side of things. Programs like ZBrush are definitely a step in the right direction.

CGTalk Moderation
09-17-2005, 11:20 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.