Why do some CG people look dead?


#27

I’m not saying remove it from the shot at all.

I’m saying the context they’re shown in is what determines how detailed they need to be.

Rendering at a lower resolution can hide the uncanny valley.
Rendering at a higher resolution makes the uncanny valley easier to detect.

If something looks noticeably wrong at a certain display resolution, then that asset doesn’t have enough detail and care put into it for that particular situation.

I’m not saying you need to be able to zoom in on an object so far that 1mm of skin fills the entire frame (though you would if that was what the shot called for). Increasing the final output resolution effectively zooms in on assets.

You might not be able to rely on the same exact techniques when movies are shot at 8K. Assets will need to be even higher quality in order to hold up to that much final output resolution.

Some of the current techniques have limitations and will eventually have to be abandoned in order to simulate reality a little more closely when shots are viewed under the higher magnifying glass that future display technology will bring.

I understand the point of this thread was the OP asking what it takes to fix uncanny valley shots.

My answer would be:
it’s a moving target

There is no one answer that holds true as the most important factor for each situation.

It depends on the viewing resolution, the proximity of the asset to the camera, and, if it’s an animation, temporal resolution as well. You’ll only notice something is off if you can see it clearly.


#28

Realism has been achieved visually, but not yet in motion? Not for me; the visual aspect and the motion come together.

Saying that we’ve been able to do some particular thing for a long time is meaningless, for the simple reason that movies are directed to fit specific cases. These digital doubles are limited in one way or many ways, to satisfy a purpose.
So it does not fit the idea of ‘achieved realism’.
Instead I would say, ‘good, satisfactory composition’.

The best thing for this kind of discussion would be to be specific about which movies’ scenes look good or bad. Then we could analyze them as much as possible, and maybe understand why they are not yet realistic.

Everything in a movie is supposed to be directed so that the viewer focuses their eyes wherever the action is. Therefore, a lack of realism can pass unnoticed in many cases, not because one has been fooled, but because one did not observe it in detail.
There are other cases where CG things can pass unnoticed, but most are, in my opinion, vague situations.

‘Uncanny valley’ is popularly used to label something as realistic or not.
It has turned too ambiguous to be used reliably.


#29

It’s the eyes. The eyes are important just to avoid the problems (never mind trying to “excel” with them).

The solutions, until recently, have had more to do with appealing to the perception of depth or feeling in the eyes than any kind of “Medical Replication” of how eyes work.

The usual advice is to know where the speculars have to go, how wet you have to make the eyes look, to pay attention to the shadows that eyelids and eyelashes cast on the eyeball, and to check that the eyeballs reflect enough color from surrounding areas so they don’t appear too white.

When you’ve nailed that, you have to work hard again on the motion.
The more realistic something is, the harder you work, and in that regard stylization is also important.

But basically the tactics I’ve come across that are somewhat effective are more about disarming audience defenses by way of playing with their perception.

It only has to LOOK like a living human being or character after all.

P.S.: And Leigh is right. Final result is the only thing that matters. Our EVE character, for example, featured “contact lenses”, fake specular cards, and multiple rows of switchable eyelashes… it was all about making sure the image was what we wanted to a certain level.

Obsessing with some kind of “true solution” is going to be a waste of time. It’s called Visual Effects, not human genetics.


#30

I can’t speak for everything, but I thought at least some modern shots were done with high-speed cameras rather than relying on CG or vector-motion slow-mo filters:

http://vimeo.com/48571597

I think Leigh and I really are saying the same thing, just focusing on different situations.

You gotta make the asset hold up for the shot it’s going to need to do.

If you have an extreme long duration close up shot, you’ll have to work that much harder to make it hold up and look believable.

You generally improve realism by bringing things closer to actual physical reality, or closer to human perception - which perceives some things in an exaggerated or stylized way and ignores others; that’s how magic works, via distraction. To some extent, you could say extreme color grading and post effects count as this.

I gave some examples of what the next few extreme stages would be to work on for realistic skin shading at future resolutions and/or specific extreme shots.

I never said everyone needs to implement that level of detail for every shot they do - it’s overkill, unnecessary, and too expensive - if even possible right now with today’s hardware.

I just don’t think our current techniques with their limitations will still be used 20 years from now in regards to shading. Technology will keep moving forward and we’ll keep raising the bar until we have no reason to raise it any further.

The bar will have to be raised at least a little bit when 8k arrives on the scene, especially when more extreme shots are called for.


#31

>Why do some CG people look dead?

They work too many hours :stuck_out_tongue:


#32

Post of the month.


#33

You could be right. All of my college friends studying CG say the big deal now is getting realism in the eyes and the immediate area around them.


#34

Eyes are tough. Convincing deformation in the face is pretty darn hard too.
I find hyper-real stills ‘interesting’, but it’s very difficult for hyper-real animation not to look ‘uncanny’.
I also tend to ‘buy it’ more when it’s not exactly human.
Gollum and Davy Jones were an easier sell to my eyes than Benjamin Button, which sometimes fell over for me in certain shots, even though I can hugely appreciate the work involved.


#35

They gave a great lecture on eyes at a big studio I worked at a few years back. They said that they were disappointed with how some of their CG humanoid characters turned out in their latest big blockbuster. They had done extensive research and testing of modeling, texturing, animation, and lighting to make sure the characters looked realistic, but they fell apart when rendered for the big screen. Afterwards they realized that the whole way CG eyes are traditionally modeled, textured, and lit is wrong.

They showed an example of the CG characters in playblast form, with their eyes looking exactly into the eyes of the real-life actor. They then showed the same shot lit and rendered, and the eyelines no longer matched at all. It’s something about the way the light bounces and refracts around in the CG cornea that makes the rendered eyelines look totally different than how they were animated.

At the time they were trying to figure out whether there was a way to animate the eyes differently to compensate, or to write a shader that would more accurately reflect the intended eyeline. I don’t know if they ever completely figured it out.


#36

What a friend of mine who worked on both AVATAR and The Adventures of Tintin has told me, plus some of our own experience, leads me to say that’s actually a misconception.

You don’t go for Realism… you go for “Realism” (note double quotes).

It’s about what people see in photos or images of REAL people… and you try and nail that instead of going after some kind of Biology lesson.

Worked a treat for them… especially on AVATAR.

P.S.: The issue Zac mentioned above also happened on our film. It’s true… the light and shadows in rendering really do appear to change the orientation and geometry of characters and eyelines. It’s something you always have to look out for.

Align that with Leigh’s earlier advice that final image is all that counts and you already know there is (probably) NEVER going to be a “blanket solution”.


#37

Eyes have all sorts of micro refractive light scattering going on.

The pigment in every iris is dark brown. The only reason eyes are colored differently is micro refractive light scattering within the iris.

Maybe it’s off topic, but the vision researchers I work with have done a lot of studies with eye tracking to understand what people actually look at when presented with different images or video. They’ve also put a lot of time into understanding the science behind magic tricks with why people automatically focus on certain motions and ignore others.

Part of what’s been discovered is how vision neurons fire when they focus. It’s like an unsharp mask. To strengthen the signal of a single neuron, the surrounding neuron signals are simultaneously suppressed. This creates a blur within the brain’s perception even though all the eye’s receptors are still picking up sharp input signals.

This impacts how we think we see color as well.
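
If you want to see that “unsharp mask” analogy in code, here’s a tiny Python sketch; the signal, kernel, and gain are made-up illustration, not anything from the actual studies:

```python
import numpy as np

# A lone "stimulus" surrounded by a flat background.
signal = np.array([1, 1, 1, 4, 1, 1, 1], dtype=float)

# Blur it, then add back the difference: the classic unsharp mask.
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(signal, kernel, mode="same")
sharpened = signal + 1.0 * (signal - blurred)

# The peak gets boosted while its immediate neighbours are suppressed,
# loosely like one neuron's signal being strengthened at the expense
# of the surrounding neurons.
print(sharpened)
```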


#38

Yes you can study all that…

Or you can just do step-render testing for yourselves… and then later have someone sit down and “cold watch” rendered footage. I find this approach easier than trying to look for white papers on “Micro-Refractions”.


#39

It’s OK; there is no such thing as “micro refraction” in the first place (everything happens at the discrete scale of individual photons anyway), and it’s incorrect to call it refractive scattering: it’s actually Rayleigh scattering, which doesn’t depend on refraction but on the polarizability of very small particles, and which scatters short wavelengths far more strongly than long ones, hence the shift in perceived colour.

The irony of it is that, because it’s a Rayleigh phenomenon, it’s absolutely unnecessary to simulate it the way physics would have it.

In layman’s terms, simulating the complex model would be utterly pointless, as it would produce no different result whatsoever than colouring things directly the way they would appear when affected at that point (i.e. painting a bloody texture and emitting some minor energy from it).

Unless we’re now proposing we need to simulate in a forward fashion the molecular level wavelength of something to produce believable results, which would be, frankly speaking, absolutely ridiculous.

A model simplified by eliminating reciprocal parts, and/or by replacing long stretches of it that always produce the same pattern with a shortcut, produces results, once sampled discretely, exactly as good as the full model would.

I think Lagrange proved this enough times over, and discussion of such details, at least in the context of reproducing reality, is largely masturbation and e-peening.
Going into these details as if they mattered is akin to saying we should simulate the whole atmosphere molecule by molecule to render a blue sky (hi again, Mr Rayleigh), when painting or shading the right colours produces the same result with an added degree of control and at a vastly lower cost.
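
To make the blue-sky point concrete: Rayleigh scattering intensity goes as 1/λ⁴, so the resulting hue falls out of a one-liner you could just as well bake into a painted colour. A quick Python illustration (the RGB wavelengths are rough textbook values):

```python
# Relative Rayleigh scattering strength, proportional to 1/lambda^4.
wavelengths_nm = {"red": 650.0, "green": 550.0, "blue": 450.0}

ref = wavelengths_nm["red"] ** -4
for name, lam in wavelengths_nm.items():
    print(f"{name}: {lam ** -4 / ref:.2f}x relative to red")
# Blue scatters roughly 4x more strongly than red - hence the blue sky,
# no galaxy-scale simulation required.
```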

The assumption that we’re just mucking around in the VFX industry - as if we didn’t read the medical papers and consult universities and professors at the top of their fields before spending millions on developing models, not to mention the large number of competent physicists employed in various R&D departments - is also a bit offensive :stuck_out_tongue:


#40

I know Rayleigh scattering is tiny, but isn’t that exactly what modern skin shaders are now attempting to simulate, because that’s how skin’s perceived color actually works? Take MR’s SSS2 shader and the existing V-Ray SSS shaders, for instance. I’ve seen people incorrectly call it “color bleed”.

And it’s just like tiny photons in real life: even though the wavelength differences are small, when they happen everywhere across a surface they add up into something we actually do see on a macro level - hence things like the blurry red color bleed around the corner of the nose, or skin in general having a blue “diffuse” color where light hits.

Take that eye close up shot posted earlier for instance:
http://www.youtube.com/watch?v=16HD0QHCT9g

If you watch it in HD, you can see as plain as day that most of the skin on the very left is starting to tint blue, the skin above and below the eye is tinted green/yellow, while the eyelid is tinted blue where it’s bright but red where it’s darker, since the light coming from the left has to travel through the skin to reach that area.

If you pay special attention to the skin in the lower right of the screen, you’ll notice that each skin bulge isn’t simply blue or pink; it’s both, depending on where the light hits as it moves. It’s a dynamic property. Fewer photons hit each skin bulge than the entire face, and thus the light hitting the bulges travels less far than the total light across the whole face.

Now, with a normal SSS shader, you’d probably set the SSS radius to work on a macro scale for the whole face, but not for those individual skin bumps. For this shot, though, you’d ideally need both, because you’re able to see those skin bulges fairly well. For most shots, painting them a static color is probably good enough if there’s not a lot of subtle movement. The more the skin wrinkles and stretches, though, the more you’ll need to see the effect of smaller-scale Rayleigh scattering.

The problem with modern skin shaders is that they all assume light intensity is the same for both macro and micro details, and therefore that light scatters the same amount for both, which is incorrect.
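
For illustration, here’s a minimal Python sketch of the kind of dual-radius blend I’m describing. The exponential falloff is just a stand-in for a real diffusion profile (dipole, BSSRDF, whatever your renderer uses), and all the radii and weights are made-up numbers:

```python
import math

def falloff(r, radius):
    """Toy exponential scattering profile; a stand-in for a real dipole/BSSRDF."""
    return math.exp(-r / radius)

def dual_scale_scatter(r, macro_radius=8.0, micro_radius=0.8, micro_weight=0.3):
    """Blend a broad (whole-face) profile with a narrow (skin-bump) one.

    Distances are in millimetres; a single-radius shader would use only
    the macro term and wash out the small bumps entirely.
    """
    return ((1.0 - micro_weight) * falloff(r, macro_radius)
            + micro_weight * falloff(r, micro_radius))

for r in (0.5, 2.0, 10.0):
    print(f"{r} mm: {dual_scale_scatter(r):.3f}")
```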

It’s only offensive if you’re taking it way too personally.

Everything we simulate with a computer is a fake. That’s a given. Like you said, computers are not mini physical universes.

Anyone who can create a successful shot is good at their job.
Creating good shots is what the CG field is ultimately about, right?

Of course it’s easier, but at the same time we have to be willing to learn things in order to write new shaders and software.

We take energy-conserving shaders for granted now, after a lot of research was done.

Anisotropy is an attribute that might drive someone mad if they didn’t have it built into their shader. What other choice would they have than setting up a really high-res fine bump map, or faking it with an image sequence of the highlight elongating and changing in sync with the camera?

We automatically dial in things like Fresnel and refractive index settings because we’ve had to learn about them and saw a table of values at some point (based on scientific measurements). We’re not born knowing the refractive index of glycerol; I, for one, would have to look it up.
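
For example, the Schlick approximation that most shaders use under the hood boils a measured IOR down to a reflectance curve. A quick Python sketch (glycerol’s published IOR is roughly 1.47):

```python
def schlick_fresnel(cos_theta, ior=1.47):
    """Schlick's approximation of Fresnel reflectance at an air interface."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2  # reflectance at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))  # head-on: just f0 (~0.036 for glycerol)
print(schlick_fresnel(0.1))  # near grazing: reflectance climbs toward 1.0
```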

We can always observe things and just wing it until it looks good enough for our shots, but if we don’t have a good, reliable method to represent something in CG, it won’t shade correctly for every shot and can be a pain to replicate from shot to shot.


#41

I understand what you’re saying and what you’re saying is true.

But I guess you’re more of a “shader technician” than a Director, Producer, or Artist-on-deadline - what you might term a “shader user”.

We shader users do not assume we will automatically use certain effects, because we have to check render times, and check them against shot intent and style. We also assume we are probably going to change shaders every few shots, or between the first and second act, for example.

Because all that matters to us is that it “looks correct” for the audience experience.

So I understand we are quite far in terms of point-of-view on this.

It’s like if you’re a flight engineer and you start proposing that you want to make fighter jets with a new wing shape - pilots will probably think “Why go through all the trouble? I kill baddies with the current wing shape just fine.”

That’s not to say you shouldn’t study these things if you are in fact going to achieve higher shader performance - which shader users will in turn use to produce better pictures.

But you also understand that this unsentimental view - focusing on the final image and not always caring about the underlying molecular principles - is at the heart of how artists use shaders on a definite (and usually severe) deadline.

That said… if you could choose “Eye Shader” the way you choose a diffuse color? That would make life easier I bet.


#42

I’m under deadlines just like anyone else. I’m a generalist, so I have to do a little of everything. Like a lot of people, I’ve had to come up with my own band-aid solutions and workflows when I’ve had time.

Mainly it’s that over the years I’ve always been working with the same anatomy subjects, with the same (but evolving) models and shaders. I don’t ever really get to shelve my work and move on to something entirely different, since human anatomy doesn’t really change.

So over time I’ve just noticed a lot of the shortcomings of modern shaders and techniques that I haven’t always been able to simulate with textures or lighting/distance rigs alone. There are some tricks you can do with really complex shader setups (see my avatar), but I swear it could probably all be condensed into a single new attribute slider on a shader instead of being shoe-horned into an existing attribute.


#43

Ah… well psychologically I revel in “band-aid” solutions… It’s a strange quirk. I’m very old school - so I laugh in glee when people look at a flat matte painting backdrop we did and they say “That’s a fantastic and deep landscape!”

I love things like that because I feel like I’ve engaged people’s imaginations - with something really “low-rent”.

So in a strange way, I’m happy to “band-aid” eyeball shaders and things like that.

That said, of course someone like you coding new shaders for the eyes would be great… we can’t all have “Trickster Mentality”. :stuck_out_tongue:

But that’s probably what I’d be interested in… NEW SHADERS… but their use wouldn’t be guaranteed for the likes of me.


#44

just read the entire thread
really quite interesting

except for sentry 6666

ban pls


#45

I agree 100% with all of this. Good explanation, cookepuss.

