The Science Of CG


#161

Hello guys, just came to say thanks, you have done a great job compiling all this information. Please keep posting and updating this info; technology is evolving continuously. I'll be looking forward to more great info. Thank you in advance.

Maybe this podcast about physics in animation would be interesting for someone: http://www.fxguide.com/fxpodcasts/fxpodcast-siggraph-preview-animation-physics/


#162

First I wanna say it is interesting to think about how light and materials work. Sure.
And there is a lot of information in this thread. I really appreciate it.
But I feel it also shows a problem.

Imagine you start working in 3D. It is dangerous to say you need to understand all the physics to make good quality renderings. And I think 3D software is still too complicated for casual users.
Isn't it fair to say that most users want to create pleasing results and assume that everything should work (physically) correctly out of the box?
Like setting up a scene, enabling sun and sky, placing objects with “real world materials” and getting a (close to) photorealistic result when you render. Just like when you took a picture of said object outside.
And then you continue from there.

Let me compare this to music production.
Say you have a digital synthesizer with a grand piano preset. Maybe you add the characteristics of a concert hall to simulate the reverb and delay, and start playing your music.
And if the synthesizer and effects are good it should sound like a real grand piano played in a concert hall. Out of the box.
Sure, there are people who like to program synthesizers and know and tweak every part in detail.
But you would not expect a musician to be a programmer just to achieve (close to) realistic results.
Some basic knowledge of how the sound is synthesized, okay. More than that becomes a distraction from the creative process for most musicians, imho.

Can we apply this to 3D? Isn't it more important to know about composition, color and basic photography techniques to create a good rendering? So why is it still so difficult to achieve?


#163

This sounds like an extremely naive question. As a musician, would you rather learn how to play the piano or be able to press a button and hear the music? Which of the two sounds more creative?
Granted, there are many technical hurdles to overcome, but to a lot of artists this can be an exciting challenge that pushes one's skill and knowledge ever onward. And it is certainly an intrinsic part of the artistic process; it is, after all, a technology-driven medium. A carpenter can still produce beautiful work with the aid of power tools; his fundamentals are exactly the same, it's only his tool-set that's different. In recent years in this fledgling field huge advances have been made to make our lives easier and cut down on some of the more monotonous tasks like UV unwrapping, but a 'make CG' button? No thanks.


#164

I’m afraid you did not understand me. My English is not perfect. Maybe I did not find the right words.
I don't ask for a button that does everything for me. That's why I wrote that when I want a synthesizer to sound like a piano, the synthesizer manufacturer usually provides a preset which comes close to a real piano. Of course the musician will still have to play the instrument to create music. He still needs to be able to compose, arrange and mix to end up with a song. But he does not have to program the synthesizer to make it sound like a piano. He picks the preset and starts playing.

And I'm just wondering if 3D software and renderers could get a little easier to use, so people can stop caring about photons and concentrate on lighting, composition, color… Like a photographer does.

That does not mean that programs should be limited to realistic results. Some musicians like to program synthesizers to find new sounds, and some CG artists like to program shaders to create new effects. But if your goal is photorealistic results, in my opinion the way to get there is still too complicated.


#165

And I'm just wondering if 3D software and renderers could get a little easier to use, so people can stop caring about photons and concentrate on lighting, composition, color… Like a photographer does.

If you want it to be easy, just do photography. Just work with things that already exist and are already real.

If you want to recreate reality, and be an artist doing artwork, you’re gonna have to do the work. If it were easy to do, it wouldn’t be “work”, and it wouldn’t be worth much.


#166

For regular CG lighting it is a lot of vector calculations. I don't remember the exact specifics, but I watched a Houdini tutorial where he designed a basic point light shader by taking the dot product of the surface normal and the direction from the shading point to the light position. If I remember right, the dot product gives the cosine of the angle between those two vectors, so a surface facing the light gets full intensity and the lighting falls off to zero as it turns away. He explained it way better than I could; I'll have to watch it again sometime for a refresher.
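
For what it's worth, here's a minimal sketch of that idea in Python (not the tutorial's actual code, just the standard N·L / Lambert cosine setup):

```python
import numpy as np

def lambert_point_light(point, normal, light_pos, light_color, albedo):
    """Basic Lambert shading for a single point light.

    The dot product of the surface normal and the (normalized) direction
    from the shading point to the light is the cosine of the angle
    between them: 1 when the surface faces the light, 0 at 90 degrees.
    """
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    L = to_light / dist                   # unit direction toward the light
    N = normal / np.linalg.norm(normal)   # make sure the normal is unit length
    cos_theta = max(np.dot(N, L), 0.0)    # clamp back-facing to zero
    falloff = 1.0 / (dist * dist)         # inverse-square distance falloff
    return albedo * light_color * cos_theta * falloff

# Example: a point directly below a light, facing straight up.
print(lambert_point_light(np.array([0.0, 0.0, 0.0]),
                          np.array([0.0, 1.0, 0.0]),
                          np.array([0.0, 1.0, 0.0]),
                          np.array([1.0, 1.0, 1.0]),
                          np.array([0.8, 0.8, 0.8])))
```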

Of course, for shadows and reflections the basics of raytracing are not hard to grasp: rays are traced through the scene, the renderer detects where they hit, and from those hit points secondary rays are shot for bounces. A shadow ray that reaches the light unblocked means the point is lit, and one that is blocked gives you a raytraced shadow. Depth map shadows work a bit more like projection mapping, which is why they have a resolution rather than a ray count.
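
A rough sketch of the shadow-ray part, with a toy sphere scene just for illustration (no real renderer is literally this simple):

```python
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Distance to the first ray-sphere intersection in front of the origin,
    or None (simplified: ignores the case where the origin is inside the sphere).
    `direction` must be unit length."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None

def in_shadow(point, light_pos, spheres):
    """Trace a shadow ray from the shading point toward the light.
    If any sphere is hit before reaching the light, the point is in shadow."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    direction = to_light / dist
    for center, radius in spheres:
        t = hit_sphere(point, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False

# A sphere sitting between the point and the light blocks it.
spheres = [(np.array([0.0, 1.0, 0.0]), 0.5)]
print(in_shadow(np.array([0.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0]), spheres))  # True
```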

Ambient occlusion basically detects how close surfaces are to each other, and shades them darker where they are closer together and lighter where they are farther apart. I believe this uses randomly distributed rays traced over the hemisphere above each point, but don't quote me on that.
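
A minimal sketch of that idea, assuming a hypothetical `occluded(origin, direction, max_dist)` scene query (not any renderer's actual API):

```python
import math, random

def ambient_occlusion(point, normal, occluded, samples=64, max_dist=1.0):
    """Estimate AO by shooting random rays into the hemisphere above `point`.

    `normal` is assumed to be a unit-length 3-tuple. `occluded(origin,
    direction, max_dist)` is a hypothetical scene query returning True if the
    ray hits geometry within max_dist. The more rays that are blocked by
    nearby surfaces, the darker the point is shaded.
    """
    blocked = 0
    for _ in range(samples):
        # Pick a uniform random direction on the sphere, then flip it into
        # the hemisphere around the normal.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2] < 0.0:
            d = (-d[0], -d[1], -d[2])
        if occluded(point, d, max_dist):
            blocked += 1
    return 1.0 - blocked / samples   # 1.0 = fully open, 0.0 = fully occluded
```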

Image-based lighting uses GI from luminous surfaces, typically an HDR environment image wrapped around the scene.

For GI, it's my understanding that most renderers use one of three methods.

The way I understand it, QMC or quasi-Monte Carlo generates its samples from quasi-random (low-discrepancy) sequences instead of purely random ones, so later samples fill in the gaps left by earlier ones and the result is more evenly distributed; that homogenizing can create repeating patterns. It can also be very noisy without longer render times, I think because every shading point is sampled from scratch rather than interpolated from a cache. V-Ray uses a variant of QMC called deterministic Monte Carlo or DMC, though I'm not sure exactly what they changed.
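
As I understand it, the "quasi" part just means the samples come from a low-discrepancy sequence instead of plain random numbers, so they cover the domain more evenly. A toy comparison (Halton sequence vs pure random, estimating pi/4), not any renderer's actual sampler:

```python
import random

def halton(index, base):
    """Halton low-discrepancy sequence value for a given index and base."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def estimate_quarter_circle(n, sampler):
    """Monte Carlo estimate of pi/4: fraction of 2D samples inside the unit circle."""
    inside = 0
    for i in range(n):
        x, y = sampler(i)
        if x * x + y * y <= 1.0:
            inside += 1
    return inside / n

n = 4096
random.seed(1)
mc  = estimate_quarter_circle(n, lambda i: (random.random(), random.random()))
qmc = estimate_quarter_circle(n, lambda i: (halton(i + 1, 2), halton(i + 1, 3)))
print(mc, qmc, 3.14159265 / 4)   # the quasi (Halton) estimate is usually closer
```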

Stochastic sampling / photon / radiance mapping is a method where "photons" carrying light intensity and color data are shot from the lights around the scene, stored, and then interpolated at render time. This is now mostly legacy compared to irradiance caching, which serves a similar purpose but stores a set of distributed sample points around the scene beforehand to speed up render times. Sometimes it relies on detail detection (a bit like ambient occlusion) to place denser irradiance points in more detailed areas, which can give a smoother, better result overall.
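
The "interpolation" step is roughly a density estimate over the stored photons. A very simplified sketch (ignoring the kd-tree lookup and BRDF weighting a real photon mapper would use):

```python
import math

def irradiance_estimate(point, photons, radius):
    """Rough photon-map density estimate at a surface point.

    `photons` is assumed to be a list of (position, power) tuples deposited on
    surfaces during the photon tracing pass. This simply sums the power of the
    photons landing within `radius` of the point and divides by the disc area.
    """
    total = 0.0
    for pos, power in photons:
        d2 = sum((a - b) ** 2 for a, b in zip(pos, point))
        if d2 <= radius * radius:
            total += power
    return total / (math.pi * radius * radius)

# Three nearby photons count toward the estimate, one far away does not.
photons = [((0.0, 0.0, 0.0), 0.1), ((0.1, 0.0, 0.0), 0.1),
           ((0.0, 0.1, 0.0), 0.1), ((5.0, 0.0, 0.0), 0.1)]
print(irradiance_estimate((0.0, 0.0, 0.0), photons, 0.5))
```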

Light caching or light mapping is very similar, but rather than distributing points relative to the lights, it traces from the camera to figure out what is going to be in frame and what is not. It is a very nice trick for speed optimization.

Again, there are people who are much more expert at this than me… my understanding is limited to what I need to know to make the software work for me. But it is good to know what is going on in the back end instead of just blindly tweaking things by trial and error.


#167

I recently started fiddling with Maxwell, and some parameters are not clear to me. For example, the roughness parameter controls the ratio between the 0-degree (facing the camera) and 90-degree (facing the sides) reflection. At the same time, it controls how reflective the surface is.

I guess it's because as the microfacets align in one direction, they also start to reflect clearer images. I'm so used to the distinction between the diffuse and reflection slots in V-Ray that this seems counterintuitive.
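
As I understand it, that 0-degree/90-degree falloff is basically the Fresnel effect: reflectance is lowest facing the camera and rises toward grazing angles. Here's the generic Schlick approximation as a sketch; I don't know Maxwell's actual implementation, so this is only the textbook version:

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the normal
               (1.0 = facing the camera, 0.0 = grazing / 90 degrees).
    f0:        reflectance at 0 degrees (e.g. ~0.04 for typical dielectrics).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0, 0.04))   # facing the camera: ~4% reflective
print(schlick_fresnel(0.1, 0.04))   # near grazing: much more reflective
```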

If I remember correctly, someone in this thread said that diffuse is not just a rough (as opposed to smooth, or mirror-like) reflection. So even if we took V-Ray's reflection and blurred it to 0.2 glossiness, for example, it's still not quite correct behaviour.

Also, what confused me is that if the surface is 100% rough, it won't reflect the 90-degree color at all. I guess that's because it's an unrealistic scenario in real life.

So what Maxwell does is unite reflection and diffuse into one parameter, so to speak. At least that's how I understood it. It's interesting, as it's closer to how the physics behaves.


#168

Just finished reading the whole thread…

Super helpful information here… thank you to all who contributed.

I have a question.
It was touched on here very briefly, though I still haven't grasped the idea.

so

My original question was: “why are there two roughness attributes in most modern shaders, one for diffuse and one for spec?”

I later learned that the roughness for diffuse is just a “conversion”, blending from the Lambert to the Oren-Nayar model (is that right?)

Now my question is this:

If we still have one roughness value for a material, then wouldn't a 100% rough (0% glossy) spec = 100% diffuse?

I am asking because I've tested it with Arnold shaders (alSurface at least), and it doesn't give the same result.

And I just can't get my head around it, because to me roughness is just the same roughness for the whole material…

I hope I'm just getting some simple thing wrong here… I would appreciate it if anyone could even point me in the right direction, because Google was no help with this…

thank you


#169

Hi, good questions!

A lot of shader parameters are left over from an older era (CGI-timeline-old :)) when everything was as quick and dirty as possible. Most people are also very familiar with the options they've always had, sometimes with a reluctance to change. Game engines have also only very recently moved to the 'physically based rendering' approach, so until recently they were still requiring artists to use separate diffuse and spec roughnesses, etc.

There is also a convenience factor too, I think. Let's say you had to quickly create an old car tire. You may want quite a rough, Oren-Nayar-y diffuse for the dusty rubber, with a fairly glossy spec in certain areas driven by a texture. You do have flexibility there.
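
For reference, the Oren-Nayar diffuse mentioned in the question looks roughly like this in its common simplified form (sigma is the roughness of the diffuse microfacets; sigma = 0 collapses back to plain Lambert). Just an illustrative sketch, not any particular renderer's implementation:

```python
import math

def oren_nayar(albedo, sigma, theta_i, theta_r, phi_i, phi_r):
    """Simplified (qualitative) Oren-Nayar diffuse term.

    sigma:            roughness, the std. deviation of microfacet slopes (radians);
                      sigma = 0 reduces this to plain Lambert.
    theta_i, theta_r: incident / outgoing angles from the normal (radians).
    phi_i, phi_r:     incident / outgoing azimuth angles (radians).
    Returns the diffuse term: (albedo/pi) * cos(theta_i) * roughness correction.
    """
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    correction = A + B * max(0.0, math.cos(phi_i - phi_r)) * math.sin(alpha) * math.tan(beta)
    return (albedo / math.pi) * math.cos(theta_i) * correction

# sigma = 0 behaves like Lambert; a rougher sigma flattens the falloff
# (more light comes back toward the viewer at grazing angles).
print(oren_nayar(0.8, 0.0, math.radians(60), math.radians(60), 0.0, 0.0))
print(oren_nayar(0.8, 0.5, math.radians(60), math.radians(60), 0.0, 0.0))
```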

However, layered/compounded/BRDF-based shaders are definitely the way forward, and even though they can require some re-learning at first, they are super flexible and more intuitive once you get to know them.

Maxwell Render only has one roughness parameter that does exactly what you described, going from perfect mirror reflection to Lambert-y diffuse if you want it. I love it, but it does mean you have to do a bit more planning with your textures for masking which areas have certain levels of roughness etc.

In regards to Arnold not going from mirror to Lambert, I can't give you any sort of authoritative answer, but I know that with Maxwell they had to add a special 'lambert mode' switch, because Lambert is extremely simplified and unrealistic in how diffuse it is, so a full-roughness material never quite went fully Lambert. So roughness 0-99 in Maxwell is realistic roughness, then 100 is lambert mode.

Ok, all of this is just my thoughts on the matter. Make of it what you will! :keenly:


#170

Thank you, Jared.

So at least I'm not going insane, and what I'm thinking is logical.

I had another thread asking this.

solaris6 replied with:

No, it's not the same.
A 100% rough specular still has some concentration of rays around the reflection vector.
Diffuse itself has no dependency on the reflection vector, only on the surface normal.

Think of diffuse like SSS (subsurface scattering) with a very short radius: short enough to keep shadows hard, yet with enough internal scattering (on the order of 10-100 interreflections) that the incoming ray is completely randomized and the outgoing direction has no relation to the incoming one.

The reflection component, on the other hand, describes rays that undergo only 1-3 interreflections over the microgeometry profile, and mostly no interreflections at all, in some cases bouncing between neighbouring microfacets (which is why light sometimes goes back toward the observer). But in all cases the reflected rays, unlike the diffuse rays, depend strongly on the incoming rays: they concentrate around the ideal reflection vector with a distribution dictated by the BRDF rules, depending on roughness and anisotropy.

Yes, it confuses some people, but the diffuse and reflection components have a different nature and do not smoothly blend into each other over some factor (roughness, for example, as you say).
However, some renderers and BRDF models do this; Maxwell Render has this feature, and the GGX BRDF finally transforms into diffuse.

For less experienced people it simplifies the workflow (but actually ends up confusing them).

The terms somewhat confused me. Can you shed some light on this and tell me what you think?

There are two things I would like to differentiate:

Does it really go glossy specular -> rough specular -> full diffuse in real life?

Or is it just simulated like this for CG?

Because as I understood from solaris6's reply, this isn't the case and you can't go smoothly from diffuse to spec (even though, as you mentioned, Maxwell does this)…

So… back to square one for me.


#171

I also thought it was the same (diffuse being a very diffused specular), but those two components represent different light interactions. Diffuse is light going deeper and coming back out from deeper within the material, whereas specular is a more direct surface reflection.
So in Maxwell, even when it's very diffuse, it still has a specular component, but a very diffused one. Still, they are different components. It's just that in older renderers you would use a roughness value on the diffuse shader, which was a fake diffused specular (reflection).
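
A toy way to see that difference in behaviour (an illustration only, not how Maxwell or any specific renderer implements it): a Lambert diffuse BRDF is a constant, independent of the view direction, while a Phong-style specular lobe depends on how close the view direction is to the mirror reflection of the light.

```python
import math

def lambert_brdf(albedo):
    """Lambert diffuse BRDF: constant, no dependence on view or light direction."""
    return albedo / math.pi

def phong_specular_brdf(view_reflect_angle, exponent, ks=1.0):
    """Normalized (modified) Phong specular lobe.

    view_reflect_angle: angle (radians) between the view direction and the
    mirror reflection of the light direction. Small angle = near the highlight.
    """
    norm = (exponent + 2.0) / (2.0 * math.pi)
    return ks * norm * max(0.0, math.cos(view_reflect_angle)) ** exponent

albedo = 0.8
for angle_deg in (0, 30, 60, 85):
    a = math.radians(angle_deg)
    print(angle_deg,
          round(lambert_brdf(albedo), 3),            # identical at every angle
          round(phong_specular_brdf(a, 10.0), 3))    # falls off away from the highlight
```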


#172

It's a bit hard to know whether he's initially talking about a specific implementation (say, in Arnold) or all shaders in general. He's right that 'diffuse' and 'specular' shading models are trying to solve different problems in the most efficient way they can, and so two different models like Phong specularity and Lambert diffuse (just choosing random ones) may have no way to accurately transition from one to the other. But I'm not aware of any reason why a newly designed path-tracing-style shader couldn't do it all in one model.

This is where I reach my limits, I don’t know the math or programming particularly well. :cry:

In terms of real life vs CG, we are definitely just trying to simulate it: accurately, but only up to a point. Nothing but NASA-space-research-type materials reaches anything close to Lambert in terms of perfect diffusion as far as I'm aware, and I imagine the same is true for a perfect mirror. No objects we encounter daily reflect 100% of light or absorb 100% of it, so really most materials we create should fall somewhere along a spectrum, roughly 10-90, in every way.

Having a perfect mirror shader or a perfect lambert shader was just simpler to program and much, much faster to render.


#173

If you're talking about my opinion, then it was discussed in this thread whether diffuse is just a very spread-out specular (as I thought). But it isn't so in physics, as Playmesumch00ns explained. How it's implemented in a particular shader is a different thing. I don't remember how it's done in Maxwell; I think one parameter controls the transition from a diffused value to specular, along with IOR and a metalness value. Still, I guess it doesn't mean specular transitions into diffuse; it's just made that way because Maxwell's diffuse is more correct, still having some reflection component. I think specular is a coating over the diffuse anyway. Sorry to keep arguing. :slight_smile: It's what I remember from this thread.


#174

If it's indeed the case that the result of the “diffuse color slot” is a different light interaction than specular, then I can make my peace with this. Though I'd really like to read about it if you have some scientific resources.

But if they are different light interactions, then I guess Maxwell does that one-slider thing just for simplicity?

I didn't have questions like this before. But now that shaders are going physically based, one wonders whether the sliders and slots also represent physically based interactions, or whether only the result is physically correct.


#175

This is an interesting question, Cur10uS, but I'm not good at physics. You could ask on the Maxwell forums, I think, and reply here. It would be interesting to read. :slight_smile:


#176

I don't know why I've never read this, but good stuff here

https://en.wikipedia.org/wiki/Diffuse_reflection

also

http://computergraphics.stackexchange.com/questions/1513/how-physically-based-is-the-diffuse-and-specular-distinction


#177

No, they are only loosely based on physics or mechanics, mostly older models of calculating light transport. There's no accounting for charge or for the photon as an actual particle like in real physics, but that's fine, because all the compute power in the world would bog down in such a simulation and we'd never get anything rendered. They basically sum over the math loosely and come up with a “mostly right” answer, though. It's right enough to help us make great images and animations.


#178

Hi, no I was talking about the quoted information in Curious’ post. I think you posted while I was still typing mine. :scream:

In post #121 Playmesumch00ns says:

So I think we are still confused. :shrug:

I really can't see how specular and diffuse are any different in real life; they're just names for the degree to which the incoming light was scattered (including how far into the object) before returning to the viewer.

I’ve made a very quick animation of a Maxwell material going from 0 roughness to 100 for what it may be worth:

http://jozvex.com/images/maxwellRoughness.mp4

You can see a pop at the end where it goes from 90 roughness to ‘lambert mode’ roughness which looks quite different.


#179

To chime in, and I'm not really sure I'm getting the question right, but…

Diffuse at 100% should be equal to a totally blurry (glossiness 0) colored reflection. Based on my understanding, that is how things work in real life, as diffuse is really just light scattered in every direction due to micro-surface detail (super tiny bumps).

In C4D's native renderer you can get the same effect going, which means you really only need to work with a reflectance channel and can just leave diffuse be.

Based on some other people's input, the same kind of works for V-Ray. Corona has a clamped glossiness curve though, so you can't fully reproduce this effect (not a bad thing, just the way it is implemented right now).

Naturally it is better to use diffuse too as the calculations are a lot faster and the results don’t differ from using reflections only.

Now, it is worth mentioning that I haven't tested this stuff out; the C4D thing is what Nick (@GSG) showed in one of his presentations, while the other is just forum science.

Specular, on the other hand, is a fake effect as far as I know, as it should be part of the normal reflection behavior, yet it isn't always, in order to save some calculation time.

Take this with a lot of salt guys :slight_smile:


#180

If diffuse and specular results are just a roughness difference, how do polarizer filters split them?