The Science Of CG


#169

Hi, good questions!

A lot of shader parameters are left over from an older era (CGI-timeline old :)) when everything was as quick and dirty as possible. Most people are also very familiar with the options they’ve always had, sometimes with a reluctance to change. Game engines have also only very recently moved to the ‘physically based rendering’ approach, so until recently they too required artists to use separate diffuse and spec roughnesses, etc.

There is also a convenience factor, I think. Let’s say you had to quickly create an old car tire. You may want quite a rough, Oren-Nayar-y diffuse for the dusty rubber, and then a fairly glossy spec in certain areas, masked with a texture. You do have flexibility there.

However, layered/compounded/BRDF’d shaders are definitely the way forward, and even though they can require some re-learning at first, they are super flexible and more intuitive once you get to know them.

Maxwell Render only has one roughness parameter that does exactly as you described, going from mirror-perfect reflection to Lambert-y diffuse if you want it. I love it, but it does mean you have to do a bit more planning with your textures when masking which areas have which levels of roughness, etc.

In regard to Arnold not going from mirror to Lambert, I can’t give you any sort of authoritative answer, but I know that with Maxwell they had to add a special ‘lambert mode’ switch, because Lambert is extremely simplified and unrealistic in how diffuse it is, and a full-roughness material never quite went full Lambert. So roughness 0-99 in Maxwell is realistic roughness, then 100 is lambert mode.
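
If it helps to picture it, here’s a very rough Python sketch of how I imagine a single-roughness lobe behaving, with that hard switch to Lambert right at the top of the range. The jitter model and the names are just my own illustration, not Maxwell’s actual maths:

```python
import numpy as np

def to_world(v, n):
    """Express a local-frame vector v (z axis = normal) in world space."""
    helper = np.array([0.0, 1.0, 0.0]) if abs(n[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    t = np.cross(n, helper)
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return v[0] * t + v[1] * b + v[2] * n

def sample_single_roughness(wi, n, roughness, rng):
    """One lobe for everything: jitter the mirror direction as roughness
    rises, then hard-switch to pure Lambert at roughness 1.0, mimicking
    the 'lambert mode' switch described above."""
    if roughness >= 1.0:
        # Lambert mode: cosine-weighted sample about the normal.
        # The incoming direction wi is ignored entirely.
        u1, u2 = rng.random(2)
        r, phi = np.sqrt(u1), 2.0 * np.pi * u2
        local = np.array([r * np.cos(phi), r * np.sin(phi),
                          np.sqrt(max(0.0, 1.0 - u1))])
        return to_world(local, n)
    # Glossy: stay centered on the reflect vector, spreading with
    # roughness but never losing the memory of wi.
    wo = wi - 2.0 * np.dot(wi, n) * n
    wo = wo + rng.normal(scale=roughness, size=3)
    return wo / np.linalg.norm(wo)

rng = np.random.default_rng(1)
n = np.array([0.0, 0.0, 1.0])
wi = np.array([0.7, 0.0, -0.714]); wi /= np.linalg.norm(wi)
for r in (0.0, 0.5, 0.99, 1.0):
    print(r, sample_single_roughness(wi, n, r, rng))
```

At roughness 0 you get the pure mirror direction back; as roughness rises the samples spread but stay centered on the reflect vector, and only the 1.0 case forgets the incoming direction entirely.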

Ok, all of this is just my thoughts on the matter. Make of it what you will! :keenly:


#170

Thank you, Jared.

So at least I’m not going insane and what I’m thinking is logical.

I had another thread asking this.

solaris6 replied with:

No, it’s not the same.
A 100% rough specular still has some concentration of rays around the reflect vector. Diffuse itself has no dependency on the reflect vector, only on the surface normal.

Think about diffuse like SSS with a very short radius: short enough to leave hard shadows, yet long enough to completely randomize the incoming ray, so that the incoming ray’s direction has no relation at all to the outgoing one (due to 10-100 interreflections).

The reflection component, on the other hand, is about rays with one to three interreflections over the microgeometry profile - and mostly no interreflections at all, or in some cases bounces between neighboring microfacets (which is why a light ray sometimes goes back toward the observer). But in all cases the reflected rays, in contrast to the diffuse rays, depend strongly on the incoming rays: they concentrate around the ideal reflect vector with some distribution dictated by the BRDF’s rules, depending on roughness and anisotropy.

Yes, it confuses some people, but the diffuse and reflection components have different natures and do not smoothly blend into each other over some factor (the roughness you mention, for example).
However, some renderers and BRDF models do it anyway. Maxwell Render has this feature, and its GGX BRDF finally transforms into diffuse.

For some less experienced people it simplifies the workflow (but actually ends up confusing them).
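
For anyone who wants to see the distinction in numbers, here is a minimal Python sketch: a toy GGX microfacet BRDF (Smith shadowing, Fresnel set to 1 - illustrative only, not taken from any actual renderer) against plain Lambert:

```python
import numpy as np

def ggx_D(h, n, alpha):
    """GGX / Trowbridge-Reitz normal distribution term D(h)."""
    c = max(float(np.dot(n, h)), 0.0)
    a2 = alpha * alpha
    d = c * c * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def smith_G1(v, n, alpha):
    """Smith masking/shadowing term for GGX."""
    c = max(float(np.dot(n, v)), 1e-6)
    a2 = alpha * alpha
    return 2.0 * c / (c + np.sqrt(a2 + (1.0 - a2) * c * c))

def ggx_brdf(wi, wo, n, alpha):
    """Microfacet specular BRDF, with Fresnel set to 1 for simplicity."""
    h = wi + wo
    h = h / np.linalg.norm(h)
    G = smith_G1(wi, n, alpha) * smith_G1(wo, n, alpha)
    return ggx_D(h, n, alpha) * G / (4.0 * float(np.dot(n, wi)) * float(np.dot(n, wo)))

def lambert_brdf():
    """Lambert: a constant that ignores both directions entirely."""
    return 1.0 / np.pi

n = np.array([0.0, 0.0, 1.0])
wo = n.copy()                            # camera looking straight down the normal
for elev in (90.0, 45.0, 15.0):          # light elevation above the surface
    t = np.radians(elev)
    wi = np.array([np.cos(t), 0.0, np.sin(t)])
    print(f"light at {elev:2.0f} deg:  GGX(alpha=1) = {ggx_brdf(wi, wo, n, 1.0):.4f}"
          f"   Lambert = {lambert_brdf():.4f}")
```

Even at alpha = 1, i.e. ‘fully rough’, the microfacet value still moves with the light direction, while Lambert sits at 1/π no matter what - exactly the dependence on the incoming ray that solaris6 describes.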

The terms somewhat confused me. Can you shed some light on this and tell me what you think about it?

There are two things I would like to differentiate:

Does it really go glossy specular -> rough specular -> full diffuse in real life?

Or is it just simulated like this for CG?

Because as I understood from solaris6’s reply, this isn’t the case and you can’t go from diffuse to spec (even though, as you mentioned, Maxwell does this)…

So… back to square one for me.


#171

I also thought it was the same (diffuse = a very diffused specular), but those two components represent different light interactions. Diffuse is light going deeper and reflecting from a deeper area of the material, whereas specular is a more direct reflection.
So in Maxwell, when a material is very diffuse, it still has a specular component, but it’s very diffused. Those are still different components. It’s just that in older renderers you would use a roughness value on a diffuse shader, which was a fake diffused specular (reflection).


#172

It’s a bit hard to know whether he’s initially talking about a specific implementation (say, in Arnold) or all shaders in general. He’s right that ‘diffuse’ and ‘specular’ shading models are trying to solve different problems in the most efficient way they can, so two different models like Phong specularity and Lambert diffuse (just choosing random ones) may have no way to accurately transition from one to the other. But I’m not aware of any reason why a newly designed, path-tracing-style shader couldn’t do it all in one model.

This is where I reach my limits; I don’t know the math or programming particularly well. :cry:

In terms of real life vs CG, we are definitely just trying to simulate it. Accurately, but only up to a point. Nothing but NASA-space-research-type materials reach anything close to Lambert in terms of perfect diffusion, as far as I’m aware, and I imagine the same is true for a perfect mirror. No objects we encounter daily reflect 100% of light or absorb 100%, so really most materials we create should fall somewhere along a spectrum from 10 to 90 in every way.

Having a perfect mirror shader or a perfect Lambert shader was just simpler to program and much, much faster to render.


#173

If you’re talking about my opinion: it was discussed in this thread whether diffuse is just a very spread-out specular (as I thought). But it isn’t so in physics, as Playmesumch00ns explained. How it’s implemented in a particular shader is a different thing. I don’t remember how it’s done in Maxwell; I think one parameter controls the transition from a diffused value to specular, along with the IOR and a metalness value. Still, I guess that doesn’t mean specular transitions to diffuse; it’s just done that way because Maxwell’s diffuse is more correct, still having some reflection value. I think specular is a coating over diffuse anyway. Sorry to keep arguing. :slight_smile: It’s what I remember from this thread.


#174

If it’s indeed the case that the result of the “diffuse color slot” is a different light interaction than specular, then I can make my peace with this. Though I’d really like to read about it if you have some scientific resources.

But if they are different light interactions, then I guess Maxwell does that one-slider thing just for simplicity?

I didn’t have questions like this before. But now that shaders are going physically based, one wonders whether the sliders and slots also represent physically based interactions, or whether only the result is physically correct.


#175

This is an interesting question, Cur10uS. But I’m not good at physics. You can ask on the Maxwell forums, I think, and reply here. It would be interesting to read. :slight_smile:


#176

I don’t know why I’ve never read this, but there’s good stuff here:

https://en.wikipedia.org/wiki/Diffuse_reflection

also

http://computergraphics.stackexchange.com/questions/1513/how-physically-based-is-the-diffuse-and-specular-distinction


#177

No, they are only based very loosely on physics or mechanics, mostly old modes of calculating light transmission. There’s no accounting for charge, or for the photon as an actual particle like in real physics - but that’s fine, because all the compute power in the world would bog down in such a simulation and we’d never get anything rendered. They basically sum over the math loosely and come up with a “mostly right” answer, though. It’s right enough to help us make great images and animations.


#178

Hi, no, I was talking about the quoted information in Curious’ post. I think you posted while I was still typing mine. :scream:

In post #121 Playmesumch00ns says:

So I think we are still confused. :shrug:

I really can’t see how specular and diffuse are any different in real life; they’re just names for the degree to which the incoming light was scattered (including how far into the object it travelled) before returning to the viewer.

I’ve made a very quick animation of a Maxwell material going from 0 roughness to 100 for what it may be worth:

http://jozvex.com/images/maxwellRoughness.mp4

You can see a pop at the end where it goes from 90 roughness to ‘lambert mode’ roughness, which looks quite different.


#179

To chime in - I’m not really sure I’m getting the question right, but…

Diffuse at 100% should be equal to a totally blurry (glossiness 0) colored reflection. Based on my understanding, that is how things work in real life, as diffuse is really only light scattered in every direction due to micro-surface detail (super tiny bumps).

In C4D’s native renderer you can get the same effect going, which means you really only need to be operating with a reflectance channel and can just leave diffuse be.

Based on some other people’s input, the same kind of thing works for V-Ray. Corona has a clamped glossiness curve, though, so you can’t fully reproduce this effect (not a bad thing, just the way it is implemented right now).

Naturally it is better to use diffuse anyway, as the calculations are a lot faster and the results don’t differ from using reflections only.

Now, it is worth mentioning that I haven’t tested this stuff out; the C4D thing is what Nick (@GSG) showed in one of his presentations, while the rest is just forum science.

Specular, on the other hand, is a fake effect as far as I know: it should be part of the normal reflection behavior, yet it is often kept separate in order to save some calculation time.

Take this with a lot of salt, guys :slight_smile:
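
One way to actually test the claim, if anyone feels like it: numerically integrate how much energy each lobe returns over the hemisphere (the directional albedo). A rough Python sketch with a toy GGX (Smith shadowing, Fresnel = 1, surface normal fixed at +Z) - nothing from C4D, V-Ray or Corona:

```python
import numpy as np

def ggx_brdf(wi, wo, alpha=1.0):
    """Toy GGX microfacet BRDF (Smith shadowing, Fresnel = 1),
    with the surface normal fixed at +Z. Illustrative only."""
    h = wi + wo
    h = h / np.linalg.norm(h)
    a2 = alpha * alpha
    d = h[2] * h[2] * (a2 - 1.0) + 1.0
    D = a2 / (np.pi * d * d)
    G1 = lambda c: 2.0 * c / (c + np.sqrt(a2 + (1.0 - a2) * c * c))
    return D * G1(wi[2]) * G1(wo[2]) / (4.0 * wi[2] * wo[2])

def directional_albedo(f, wi, n_theta=128, n_phi=128):
    """Integrate f(wi, wo) * cos(theta_o) over the upper hemisphere
    with a simple midpoint rule."""
    dt, dp = (np.pi / 2.0) / n_theta, (2.0 * np.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        st, ct = np.sin(theta), np.cos(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            wo = np.array([st * np.cos(phi), st * np.sin(phi), ct])
            total += f(wi, wo) * ct * st * dt * dp
    return total

wi = np.array([0.0, 0.0, 1.0])   # light arriving straight down the normal
print("Lambert   :", directional_albedo(lambda wi, wo: 1.0 / np.pi, wi))
print("GGX a=1.0 :", directional_albedo(ggx_brdf, wi))
```

If the two lobes were interchangeable, the totals would match. The Lambert integral lands at 1.0 (with albedo 1), while the fully rough microfacet lobe comes out noticeably lower because of masking/shadowing losses - one concrete way the two differ.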


#180

If diffuse and specular results are just a roughness difference, how do polarizer filters split them?


#181

Well, we could consider it just an angle difference - mirror-like versus spread out - but as I understood it, it’s not just that. With diffuse, light bounces around more inside the material, and more wavelengths get absorbed. That’s why diffuse picks up color, whereas reflection doesn’t for dielectrics.
SSS is light getting even deeper.
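
Here’s a tiny Python sketch of that energy split, using Schlick’s Fresnel approximation with an assumed IOR of 1.5. The single (1 - F) × albedo tint is of course a big simplification of the real multiple-bounce absorption:

```python
import numpy as np

def fresnel_schlick(cos_theta, ior=1.5):
    """Schlick's approximation to dielectric Fresnel reflectance.
    With a constant IOR it has no wavelength dependence, which is why
    the surface (specular) reflection of a dielectric stays uncolored."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

albedo = np.array([0.8, 0.2, 0.1])      # reddish pigment under the surface

for cos_theta in (1.0, 0.5, 0.1):
    F = fresnel_schlick(cos_theta)
    specular = F * np.ones(3)           # surface bounce: spectrally flat
    diffuse = (1.0 - F) * albedo        # body bounce: tinted by absorption
    print(f"cos(theta) = {cos_theta:.1f}  specular {specular.round(3)}  diffuse {diffuse.round(3)}")
```

The surface term stays spectrally flat at every angle; only the light that enters the material picks up the pigment’s color.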


#182

Yes, we can mostly blame point/omni lights for that. A light source that is zero in size will never show up as a reflection, so you needed something to fake it. I went through a specular purge for a while and turned it off in every way possible, but newer materials like the mental ray MILA and the Arnold standard material etc. have much better specular highlights that actually match the reflection, and they can be much faster to render cleanly in certain situations.
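
For reference, the classic fake highlight is only a few lines. Here’s a Blinn-Phong sketch with an arbitrary shininess of 64 - illustrative only, not what MILA or the Arnold material actually do:

```python
import numpy as np

def blinn_phong_specular(light_dir, view_dir, normal, shininess=64.0):
    """The classic fake: a zero-size point light can never appear in a
    mirror reflection, so this term paints a highlight in by raising
    (n . h) to a power. A trick, not a real reflection."""
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)
    return max(float(np.dot(normal, h)), 0.0) ** shininess

n = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, 1.0])                 # camera straight above the surface
for lx in (0.0, 0.2, 0.5):                    # slide the point light sideways
    l = np.array([lx, 0.0, 1.0])
    l = l / np.linalg.norm(l)
    print(f"light x-offset {lx}: highlight {blinn_phong_specular(l, v, n):.4f}")
```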

It’s explained a bit here under Helmholtz Reciprocity:
https://forum.nvidia-arc.com/showthread.php?12879-MILA-and-fake-light-specular&p=48513#post48513


#183

Well… if light bounces inside, like a shallow scattering effect… then it IS different from specular, beyond just the roughness of the surface.


#184

This reflection-only system is precisely what I was talking about earlier. While it’s a good approximation (at first) for “physically based shading and lighting”, it doesn’t actually follow the laws of physics. All larger matter and materials recycle and emit light as well, in the form of charge. Charge IS light, though not usually or necessarily in the visible range. Since all matter emits charge differently, we have incoming light PLUS outgoing light at the foundation, not simply incoming light as the equations are currently written. What we see isn’t just a reflection; it’s a reflection augmented or deaugmented by that particular material’s charge profile, which is of course different for different materials.

So the current light algorithms and shaders in CGI are of course useful as approximations, but don’t let anyone tell you that they’re physically based. We’re not even close there. Light is a particle with an inherent wave motion based on its internal, actual, physical spins. The current algorithms don’t even treat light as a particle correctly, with all the degrees of freedom its motion, mass, spin, and chirality actually entail.

We would need variables for different materials, and IOR is again a very weak approximation of this, lacking many degrees of freedom. This wouldn’t really be that hard to incorporate into the current maths, and it could even make the approximations faster to calculate if done properly.

Color theory and light theory are not just 50 years behind, physically, but almost 100. Copenhagen stalled physics out in the 1920s, and it’s never recovered. But it’s not too late.


#185

I guess my question is: does it matter to us visually in the end? Back around the Maxwell 1.0 days there were all these tests in which enclosed scenes (similar to Cornell boxes, etc.) were set up, and when all the measured and/or approximated data were put into Maxwell, you got a render that was for all intents and purposes identical to the photo. So what exactly are we missing out on?

Maxwell has IOR per wavelength, spectral interference, material emission, absorption, scattering, measured BRDF support, the ‘K’ value, etc. If you’re covering all the interactions that have a visual impact, that’s its job done. We had a system in Maxwell at one point many years ago (in the beta team; I’m sure it’s OK to mention) where you could draw spectral response curves yourself for any material you made. We tried all kinds of crazy, advanced things, but all it did was complicate the workflow without achieving any particular benefit.

Some renderers calculate blackbody radiation and other frequencies outside the visible spectrum, but they tend to be only for scientific purposes.

I hear what you’re saying, but unless there’s a real benefit I’m not sure why we’d go that far. Why has no one created an indie renderer based on the ‘real’ physics?


#186

In many cases, no, it wouldn’t matter and would make almost no difference visually. We’ve all seen excellent photorealistic renders by now, I should hope. :slight_smile:

But in SOME cases the current maths are completely wrong, especially about intensity. Albedo is a primary example of this. How can the Moon and Enceladus emit light so far over unity? The standard models have no answer, you see, since theirs is a gravity-only, defunct physics. Of course, this example would only matter if one were trying to render such things with “physical accuracy”, and it’s admittedly a very rare (if not non-existent) situation.

Which leads us to your last questions. To calculate light physically would require either a massive overhaul in render tech and/or vastly faster computing power. Light is a particle field. The wavefunction is simply one motion of this particle system; there is no “duality”, contrary to popular metaphysics of course, but as with all other field waves, the wave is the motion of the thing. The wave is not the thing. This is true of water, sound waves, and the solar wind, but it is intrinsically true of the photon itself. It is the stacked spins of the photon that appear as “waveforms” when we try to observe light, hence the wavelength. We are seeing the wobbles from one side or the other, or even straight on, you see. If you don’t, I can diagram this; but even if all of modern neoclassical theory is wrong, it still presents us with huge challenges from a rendering perspective.

What this means is that to accurately calculate light we would need trillions upon trillions of particles for even the smallest scenes. They would be complex, discrete particles capable of spinning up into the larger particles we know better (electrons, protons, neutrons) and so on, to be accurate. So we’re talking about an absurd particle simulation even by current standards, and as you approach modeling such complexity you get diminishing returns across the board.

I guess what I’d like to see (or develop, heaven forbid) would be such a system with quality variables, so we could see precisely how much or how little compute power it might take. Perhaps 1% accuracy would be enough to produce good results. Perhaps with some simple optimizations (much like current ones) we could have a real, physically accurate rendering engine that would be faster and take less work to get proper results. Who knows.


#187

If you’re curious about or skeptical of such physics, good. You should be. But specifically what I’m talking about is Maxwell’s displacement field, classically. Newton tried it with his corpuscles, but Maxwell got much closer, and had he been around when the electron and photon were discovered, I imagine Copenhagen would have had hell to pay. Currently there’s no such field in light calculation, and Maxwell didn’t know at the time that he was modeling the photon field. But he laid the groundwork.


#188

Hey

I am researching normal and bump maps.

Though I finally understand the technical differences and the normal-calculation differences between them, I’ve become curious why camera space is used for the normal calculation. If a surface “displaces”, one would assume the normal directions are in world space, as the object itself is. Hope someone can help me understand this.
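
For concreteness, here’s how I currently understand the usual tangent-space route, as a minimal Python sketch (the TBN values are made up for a flat patch; please correct me if I have it wrong):

```python
import numpy as np

def decode_normal_map(rgb):
    """Map an 8-bit normal-map texel from [0, 1] back to a unit vector
    in tangent space ([-1, 1] per axis, +Z pointing out of the surface)."""
    n = 2.0 * np.asarray(rgb, dtype=float) - 1.0
    return n / np.linalg.norm(n)

def tangent_to_world(n_ts, T, B, N):
    """Rotate a tangent-space normal into world space using the mesh's
    per-vertex tangent frame. The map stores normals relative to the
    surface itself, so it survives the object moving or deforming."""
    return n_ts[0] * T + n_ts[1] * B + n_ts[2] * N

# A flat patch facing +Y in world space:
T = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, -1.0])
N = np.array([0.0, 1.0, 0.0])
texel = (0.5, 0.5, 1.0)                  # the 'flat blue' of a normal map
print(tangent_to_world(decode_normal_map(texel), T, B, N))   # -> world +Y
```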

Thanks.