Well, we could treat it as just a difference in angle, mirror-like versus spread out, but as I understand it, it's not only that. With diffuse, light bounces around more inside the material, and more wavelengths get absorbed. That's why diffuse picks up color, whereas reflection doesn't for dielectrics.
SSS is light getting even deeper.
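In shader terms the split looks roughly like this; just a toy sketch (the function and parameter names are made up for illustration, this isn't any particular renderer's code):

# Toy Lambert-plus-dielectric-specular split, illustration only.
# Diffuse is tinted by the albedo because those wavelengths survived
# bouncing around inside the material; the dielectric specular bounce
# never enters the material, so it stays uncolored.

def shade(albedo, n_dot_l, n_dot_h, roughness, spec_strength=0.04):
    # Diffuse: light that entered, scattered, lost some wavelengths,
    # and came back out carrying the material's color.
    diffuse = [c * max(n_dot_l, 0.0) for c in albedo]

    # Specular: surface-level mirror bounce, white for dielectrics.
    shininess = max(2.0 / (roughness * roughness + 1e-4) - 2.0, 1.0)
    specular = spec_strength * (max(n_dot_h, 0.0) ** shininess)

    return [d + specular for d in diffuse]

# A reddish dielectric: colored diffuse, uncolored highlight on top.
print(shade(albedo=[0.8, 0.2, 0.1], n_dot_l=0.7, n_dot_h=0.95, roughness=0.3))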
Yes, we can mostly blame point/omni lights for that. A light source that is zero in size won't show up in a reflection, so you needed something to fake it. I went through a specular purge for a while and turned it off in every way possible, but newer materials like the mental ray MILA and the Arnold standard material etc. have much better specular highlights that actually match the reflection, and they can be much faster to render cleanly in certain situations.
It’s explained a bit here under Helmholtz Reciprocity:
https://forum.nvidia-arc.com/showthread.php?12879-MILA-and-fake-light-specular&p=48513#post48513
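To make that concrete, here's a toy sketch (names made up, not MILA or Arnold code): a mirror ray essentially never hits a zero-size light, so without an analytic highlight term standing in for that reflection you'd get nothing at all.

# Why zero-size lights need a "fake" specular term, illustration only.

def mirror_sees_point_light(reflect_dir, to_light_dir):
    # A perfect mirror ray only returns the light if the directions match
    # exactly, which in practice never happens for a zero-area source.
    return 1.0 if reflect_dir == to_light_dir else 0.0

def blinn_phong_highlight(n_dot_h, shininess=64.0):
    # Analytic lobe standing in for the reflection of a small, soft light.
    return max(n_dot_h, 0.0) ** shininess

print(mirror_sees_point_light((0.0, 1.0, 0.0), (0.001, 0.999, 0.0)))  # 0.0
print(blinn_phong_highlight(0.98))  # ~0.27, a visible highlight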
Well… if light bounces inside, like a shallow scattering effect, then it IS different from specular by more than just the roughness of the surface.
This reflection-only system is precisely what I was talking about earlier. While it's a good first approximation for “physically based shading and lighting”, it doesn't actually follow the laws of physics. All larger matter and materials recycle and emit light as well, in the form of charge. Charge IS light, though not usually or necessarily in the visible range. Since all matter emits charge differently, we have incoming light PLUS outgoing light at the foundation, not simply incoming light as the equations are currently written. What we see isn't just a reflection; it's a reflection augmented or de-augmented by that particular material's charge profile, which is of course different for different materials.
So the current light algorithms and shaders in CGI are of course useful as approximations, but don’t let anyone tell you that they’re physically-based. We’re not even close, there. Light is a particle with an inherent wave-motion based on its internal, actual, physical spins. The current algorithms don’t even treat light as a particle correctly, with all the degrees of freedom its motion, mass, spin, and chirality actually entail.
We would need variables for different materials, and IOR is again a very weak approximation of this, lacking many degrees of freedom. It wouldn't really be that hard to incorporate into the current maths, and it could make the approximations much faster to calculate if done properly.
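Just to sketch what I mean in the crudest possible terms; this is purely hypothetical, not an existing shader or established physics, and the names are invented:

# Hypothetical sketch only: the usual reflection result plus a per-material
# outgoing "charge profile" term, as described above. Not any renderer's
# material model and not standard physics code.

def shade_with_charge_profile(reflected_rgb, charge_profile_rgb):
    # Incoming light reflected by the surface, augmented (or de-augmented)
    # per channel by whatever the material itself puts out.
    return [r + e for r, e in zip(reflected_rgb, charge_profile_rgb)]

# Example: a material that slightly boosts red and soaks up a little blue.
print(shade_with_charge_profile([0.5, 0.5, 0.5], [0.05, 0.0, -0.02]))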
Color theory and light theory are not just 50 years behind, physically, but almost 100. Copenhagen stalled physics out in the 1920s, and it's never recovered. But it's not too late.
I guess my question is: does it matter to us visually in the end? Back around the Maxwell 1.0 days there were all these tests in which enclosed scenes (similar to Cornell boxes, etc.) were set up, and when all the measured and/or approximated data were put into Maxwell, you got a render that was, for all intents and purposes, identical to the photo. So what exactly are we missing out on?
Maxwell has IOR per wavelength, spectral interference, material emission, absorption, scattering, measured BRDF support, the ‘K’ value, etc. If you're covering all the interactions that have a visual impact, that's its job done. We had a system in Maxwell at one point many years ago (in the beta team; I'm sure it's OK to mention) where you could draw spectral response curves yourself for any material you made. We tried all kinds of crazy advanced things, but all it did was complicate the workflow without achieving any particular benefit.
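For anyone wondering what IOR per wavelength actually buys you, the textbook example is dispersion via something like Cauchy's equation; a generic sketch, not Maxwell's actual implementation, with the commonly quoted coefficients for BK7 glass:

# Generic dispersion sketch (not Maxwell Render code): index of refraction
# varying with wavelength via Cauchy's equation, n = B + C / lambda^2.

def cauchy_ior(wavelength_um, B=1.5046, C=0.00420):
    # B and C here are the commonly quoted values for BK7 glass.
    return B + C / (wavelength_um ** 2)

for wl in (0.45, 0.55, 0.65):  # blue, green, red, in micrometres
    print(wl, round(cauchy_ior(wl), 4))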
Some renderers calculate blackbody radiation and other frequencies outside the visible spectrum, but those features tend to be only for scientific purposes.
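(For the curious, that's just Planck's law, B(λ, T) = (2hc²/λ⁵) · 1/(e^(hc/λkT) − 1), evaluated over whatever spectral range the renderer cares about.)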
I hear what you're saying, but unless there's a real benefit I'm not sure why we'd go that far. Why has no one created an indie renderer based on the ‘real’ physics?
In many cases, no, it wouldn’t matter and would make almost no difference visually. We’ve all seen excellent photorealistic renders by now, I should hope.
But in SOME cases the current maths are completely wrong, especially about intensity. Albedo is a primary example of this. How can the Moon and Enceladus emit light so far over unity? The standard models have no answer, you see, since theirs is a gravity-only, defunct physics. Of course, this example would only matter if one were trying to render such things with “physical accuracy”, and it's admittedly a very rare (if not non-existent) situation.
Which leads us to your last questions. Calculating light physically would require either a massive overhaul in render tech and/or vastly faster computing power. Light is a particle field. The wavefunction is simply one motion of this particle system; there is no “duality”, contrary to popular metaphysics, of course. As with all other field waves, the wave is the motion of the thing; the wave is not the thing. This is true of water, sound waves, and the solar wind, but it is intrinsically true of the photon itself. It is the stacked spins of the photon that appear as “waveforms” when we try to observe light, hence the wavelength. We are seeing the wobbles from one side or the other, or even head on, you see. If you don't, I can diagram this. But even if all of modern neoclassical theory is wrong, it still presents huge challenges from a rendering perspective.
What this means is that to accurately calculate light we would need trillions upon trillions of particles for even the smallest scenes. They would be complex, discrete particles capable of spinning up into the larger particles we know better (electrons, protons, neutrons) and so on, to be accurate. So we’re talking about an absurd particle simulation even by current standards, and as you approach modeling such complexity you get diminishing returns across the board.
I guess what I’d like to see (or develop, heaven forbid) would be such a system but with quality variables, so we could see precisely how much or little compute power such a system might take. Perhaps 1% accuracy would be enough to produce good results. Perhaps with some simple optimizations (much like current ones) we could have a real physically accurate rendering engine which would be faster and take less work to get proper results. Who knows.
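Even a deliberately silly sketch shows how such a quality dial would scale; nothing like production code, and the numbers and names are arbitrary:

# Crude "quality dial" sketch: fire N random photons at a plane and count
# how many land inside a small disc. The only point is that the cost scales
# linearly with the particle count, i.e. with the quality setting.

import random, time

def simulate(quality):
    photons = int(1_000_000 * quality)   # quality 1.0 -> one million photons
    hits = 0
    for _ in range(photons):
        x = random.uniform(-1.0, 1.0)    # random landing point on the plane
        y = random.uniform(-1.0, 1.0)
        if x * x + y * y < 0.01:         # inside a disc of radius 0.1
            hits += 1
    return photons, hits

for q in (0.01, 0.1, 1.0):
    start = time.perf_counter()
    n, hits = simulate(q)
    print(f"quality {q}: {n} photons, {hits} hits, {time.perf_counter() - start:.2f} s")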
If you’re curious or skeptical of such physics, good. You should be. But specifically what I’m talking about is Maxwell’s displacement field, classically. Newton tried it with his corpuscles, but Maxwell got much closer and had he been around when the electron and photon were discovered, I imagine Copenhagen would have had hell to pay. Currently there’s no such field in light calculation, and Maxwell didn’t then know that he was modeling the photon field at the time. But he laid the groundwork.
Hey
I am researching normal and bump maps.
Though I finally understand the technical differences and the normal-calculation differences between them, I've become curious about why camera space is used for the normal calculation. If a surface “displaces”, one would assume that the normal directions are in world space, as the object itself is. I hope someone can help me understand this.
Thanks.