The Science Of CG


#141

Love this thread! Very informative, useful, and certainly helpful for 3D rendering.

Since, however, it’s labelled “The Science of CG”, I thought I’d toss some further examination everyone’s way. We need to keep in mind that photons do not yet “exist”, nor have they ever been witnessed or observed. What we as examiners and scientists perceive is still quite literally a virtual event: a construct of mathematics to aid us in our understanding of physics.

An excellent essay on the topic:

Does Light Exist Between Events?

A relevant excerpt:

EXAMINING THE CLAIMED EVIDENCE

The challenge may seem incredulous, but until someone can provide evidence I will have to stand by it, lest I stand among the faithful believers. It amounts to this: no one, throughout the history of science, has ever seen or detected a photon in space! To this day, there has never existed a single scientific evidential experiment that has shown the existence of a wave or a particle of light between emission and detection. So on what evidence do the believers rely to justify their alleged photons?

Granted, not all renderers work using “photon GI” of course, and Maxwell and other unbiased renderers are based on Maxwell’s excellent mathematics. But we also tend to regard such scientists as near-gods, putting “faith” in their math without any real reason to necessarily do so.

I’m excited to see progress in this field myself. Perhaps one day we’ll use a simpler, faster, more accurate mechanism to obtain our photorealistic results.


#142

There’s no fundamental difference between the math in rendering GI solutions, regardless of whether they’re biased or not. Pretty much all ray-tracing GI renderers today are solving Kajiya’s rendering equation through methods based on Monte Carlo sampling or extensions of it. None of those renderers implements any of Maxwell’s wave propagation; otherwise we would be able to render things such as Doppler shifts in moving light sources, polarized filters, or the double-slit experiment.
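To make that concrete, here’s a toy Monte Carlo estimate of one bounce of the rendering equation for a Lambertian surface. The sky function, the albedo, and every name in it are made up purely for illustration; this is a sketch of the technique, not any renderer’s code:

```python
import math
import random

# One bounce of Kajiya's rendering equation for a Lambertian surface:
#   L_o = integral over the hemisphere of f_r * L_i * cos(theta) dw
# estimated with the basic Monte Carlo recipe: average f(x) / pdf(x).

ALBEDO = 0.8  # Lambertian reflectance, so the BRDF f_r = ALBEDO / pi

def incoming_radiance(theta, phi):
    """Stand-in for tracing a ray into the scene: a sky brightest overhead."""
    return max(0.0, math.cos(theta))

def outgoing_radiance(num_samples=100_000):
    brdf = ALBEDO / math.pi
    total = 0.0
    for _ in range(num_samples):
        # Uniform hemisphere sampling: pdf(w) = 1 / (2*pi) per solid angle.
        cos_theta = random.random()
        theta = math.acos(cos_theta)
        phi = 2.0 * math.pi * random.random()
        pdf = 1.0 / (2.0 * math.pi)
        total += brdf * incoming_radiance(theta, phi) * cos_theta / pdf
    return total / num_samples

# For this particular sky the integral has a closed form,
# (ALBEDO/pi) * (2*pi/3) = 2*ALBEDO/3 ~ 0.533, which the estimate approaches:
print(outgoing_radiance())
```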


#143

@stew: Good to know, Stew. I wasn’t sure how Maxwell or Arnold (for example) worked internally; I’ve only used the former a few times. I know mental ray pretty well and there’s certainly no magic going on under the hood there either. The essay I posted just kinda got me thinking a bit about the reverse nature of our rendering engines, and contemplating how one would go about reversing this reversal in practice. It’s a cause-and-effect relationship, obviously, and it seems likely that others would have tried to simplify things this way already.

I guess the point would be that photons are no more “real” in real life than they are in photon-GI raytracers, at this point in science. In fact, since one can quantify and test raytracing photons, they are actually more real than their allegedly-real counterparts, from a scientific standpoint.
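For anyone curious, a raytracing “photon” boils down to something like this toy sketch of the photon-map idea - a forward pass, then a gather pass - with a uniform light over a floor and all numbers invented for illustration:

```python
import math
import random

# Toy photon mapping: shoot photons from a light onto a floor plane, then
# estimate irradiance at a point from the density of nearby photon hits.
# The light, the floor, and every number here are made up.

random.seed(1)

LIGHT_POWER = 100.0   # total emitted power (watts, say)
NUM_PHOTONS = 200_000
FLOOR_SIZE = 10.0     # photons land uniformly on a 10 x 10 floor

# Forward pass: record each photon's landing spot and the power it carries.
photons = [(random.uniform(0.0, FLOOR_SIZE),
            random.uniform(0.0, FLOOR_SIZE),
            LIGHT_POWER / NUM_PHOTONS)
           for _ in range(NUM_PHOTONS)]

def irradiance_estimate(px, py, radius=0.5):
    """Gather pass: sum the photon power inside a disc, divide by its area."""
    total = sum(power for (x, y, power) in photons
                if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2)
    return total / (math.pi * radius ** 2)

# Uniform landing means this should approach
# LIGHT_POWER / FLOOR_SIZE**2 = 1.0 watt per square unit:
print(irradiance_estimate(5.0, 5.0))
```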


#144

I think it’s time to update the PDF. :smiley:

But I wonder if any of you have actually downloaded it. :curious: Is the PDF version useful to anyone?


#145

Stew, that would be cool: performing the double-slit experiment in CG. As to photons being undetectable, consider this: Quantum Theory tells us that all particles move at a constant velocity, the speed of light, and that this velocity is divided among the three spatial dimensions and time. So if a particle is at rest, it is moving through time at the speed of light. Since a photon is moving spatially at the speed of light, it is not moving through time at all, and could therefore be presumed to exist at all destinations simultaneously at the moment of its creation. One could presume that even if light does exist in particles called “photons”, there is no reason to believe we could detect them as discrete particles, considering their nature. Not that this makes much difference to the quality of our renders. :smiley:
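For reference, the textbook special-relativity identity behind that “everything moves at c” picture is the constant magnitude of the four-velocity. A sketch, assuming the usual (+, -, -, -) sign convention:

```latex
% The four-velocity u^mu = dx^mu / dtau of any massive particle has
% constant magnitude c:
\[
\eta_{\mu\nu}\,u^{\mu}u^{\nu}
  = \left(c\,\frac{dt}{d\tau}\right)^{2}
  - \left\|\frac{d\mathbf{x}}{d\tau}\right\|^{2}
  = c^{2}
\]
% At rest, d\mathbf{x}/d\tau = 0, so all of the "motion" is through time.
% Along a photon's worldline, d\tau = 0: no proper time elapses between
% emission and absorption, which is the sense in which it "arrives the
% moment it leaves".
```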


#146

Sure I did! Just as a backup.


#147

Stew, that would be cool: performing the double-slit experiment in CG. As to photons being undetectable, consider this: Quantum Theory tells us that all particles move at a constant velocity, the speed of light, and that this velocity is divided among the three spatial dimensions and time. So if a particle is at rest, it is moving through time at the speed of light.

Indeed, but that’s a great many “ifs”. And to be more accurate, it should be “Quantum Hypothesis”, as these concepts have never made it to the “theory” end of the Scientific Method. Time is not a proven dimension; “space-time” is an ad-hoc concept devised to explain the effects of gravity, except that it nullifies itself by using gravity to explain gravity. Once you divide by zero there, all other results are pretty much pointless. There are big differences between hypotheses and theories - not to get too involved in the semantics.

Concepts aside, I’d have no idea how to program or implement an “observer-based” rendering engine at all. Reverse-raytracing seems impossible, but perhaps there’s a better way yet.


#148

I think the big problem with the quantum “hypothesis” is the presumption that quanta are non-dimensional particles - in other words, infinitely small. One can see why so much mathematical manipulation must be done to avoid infinite results. I believe it is flawed at the root, and this is one of the reasons string theory is more attractive. OK, who can write a renderer based on 2-dimensional strings instead of infinitely small particles? ;D This is a pretty fun conversation.

If one does not check the render farm, is the frame complete?


#149

I think the big problem with the quantum “hypothesis” is the presumption that quanta are non-dimensional particles - in other words, infinitely small. One can see why so much mathematical manipulation must be done to avoid infinite results. I believe it is flawed at the root, and this is one of the reasons string theory is more attractive. OK, who can write a renderer based on 2-dimensional strings instead of infinitely small particles? ;D This is a pretty fun conversation.

Precisely, sir! I know it’s just conjecture, but if all the rendering engines are using the same math, and that math is inherently “flawed” or outright wrong, then it would seem possible that another algorithm might do a better, more efficient job. Alas, I suffer from severe boredom when it comes to string theory myself. Kinda leaning more into the plasma-electrical stuff lately.

But I wonder if such a thing would even be possible, with our current PCs’ instruction sets?


#150

Bored by string theory? This isn’t Sheldon Cooper, is it? :wink:


#151

As has been touched on before in this thread, when you start trying to do things more realistically, the scale of your objects and the scene start to become more important.

About this: if your position is towards the end of the pipeline and you have absolutely no control over the scale of things, how would you deal with that? I always have to light characters that are no taller than 5 inches. Though I suppose I can get away with a lot of things since they are so stylized. But I’m still really curious.


#152

If your position is towards the end of the pipeline and you have absolutely no control over the scale of things, how would you deal with that?

Good question! Well, the most important thing is not so much the overall scale, but the scale of the objects compared to each other, I suppose. It depends on the renderer you’re using, and then on the shaders/lights you’re using within that renderer. Some examples:

  • Maxwell Render has two global scale multipliers: one for the overall lighting/camera setup and one for the attenuation of refractive/SSS materials. It starts out based on centimetres with a value of 1, so if your scene is modelled in metres you set the scale multiplier to 100. The main reason you’d need to mess with this in an unbiased engine is the physically based camera: if your scene is way too big or small, it totally throws off the values you need to get the depth of field you’re after, etc. I think the separate attenuation multiplier might be a bit redundant at this point, as the materials themselves can choose their unit of attenuation, such as cm, mm, nm etc. I’m coming from the Maya plugin, though, so it may be unique to that plugin.

  • mental ray is less concerned with scale. Most of its units are arbitrary by design, so as long as your relative scale is consistent it shouldn’t matter too much. The SSS shaders have a scale attribute on them because they rely on scale the most. The MIA (architectural) shaders have a scale value for their attenuation too. It’s basically up to the shader writer to code in scale-related features if the shader requires it. I use the wom_archlight light shader for just about all my lighting these days, as it is very physically based in terms of real-world units (watts, candela etc.) and its author built in plenty of scale control, which is great.

Basically, relative scale is what’s important. If you use IES light profiles, I believe they are often implemented with a scale parameter as well.
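To make the unit bookkeeping concrete, here’s a toy conversion helper. The CM_PER_UNIT table and convert() function are illustrative names only, not any renderer’s or plugin’s actual API:

```python
# Convert distances between scene units and renderer units via centimetres.

CM_PER_UNIT = {"mm": 0.1, "cm": 1.0, "m": 100.0, "in": 2.54}

def convert(distance, from_unit, to_unit):
    """Convert a distance between units, going through centimetres."""
    return distance * CM_PER_UNIT[from_unit] / CM_PER_UNIT[to_unit]

# A scene modelled in metres feeding a renderer that assumes centimetres
# needs an overall scale multiplier of:
print(convert(1.0, "m", "cm"))   # 100.0

# An SSS scatter distance meant to be 5 mm, expressed in scene metres:
print(convert(5.0, "mm", "m"))   # 0.005
```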

Please tell me if I’ve overlooked something… :thumbsup:


#153

It’s also important for lighting: a wrong scale gives you a wrong light falloff.


#154

As long as you scale up/down the intensity of your lights appropriately with the scene, falloff should always be the same. With inverse square (quadratic) falloff, something that is twice as far away is 1/4 as bright. That doesn’t depend on scale to my knowledge! :shrug:
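A quick numeric sanity check of that claim, with made-up numbers:

```python
# With inverse-square falloff, scaling every distance by s and the light's
# intensity by s**2 leaves the brightness at every surface unchanged.

def irradiance(intensity, distance):
    return intensity / distance ** 2

s = 10.0  # scale the whole scene up tenfold
for d in (1.0, 2.0, 4.0):
    original = irradiance(100.0, d)
    rescaled = irradiance(100.0 * s ** 2, d * s)
    print(d, original, rescaled)  # the last two columns always match
```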


#155

But it’s still something to consider for the folks out there who prefer to light with real-world units (like myself), because it allows for a more predictable result.


#156

I don’t know. Are there many studios out there that don’t consider the scale of their models? Mine, for one, doesn’t. :shrug: Sometimes the characters that are sent to me are only a few millimeters in height, lol. I’ve sometimes had to set the SSS distance as low as 0.0001.

Real-scale objects seem to be a problem for Maya: a human model that matches the real world is so huge, and the camera acts all weird when you zoom that far out.


#157

It gets even more fun when you’re sharing Maya assets with other facilities…


#158

Here’s the updated PDF of this thread: http://db.tt/VndVZHXj (right-click, save link as)

I just realized that in the first PDF some of the images were cut off (and I apologize), but that’s fixed now.


#159

Thanks for the .pdf, CGphysics. Read it all. Quite handy, although I found myself reading through the whole thread anyway. And thank you to everyone who shared this precious knowledge and discussed it.


#160

A very interesting article: http://www.cgchannel.com/2012/10/10-tips-for-lighting-and-look-development/