The Science Of CG


#81

Hey Rens, it’s been a long time. :beer: Glad you appeared. I use 3ds max. Yes, I remember the mention of this, but it’s just I can’t write shaders. But I would gladly test your shader.
I would love to see your further contribution, thank you.
I will finally make a PDF of it (so that the pics won’t disappear, as often happens), maybe just a grab of the forum format from another forum; I will host it here perhaps, and will mention you and playmesumch00ns as the main contributors.


#82

Guys, this is an awesome read. Thanks for your time and effort!


#83

It’s in the Conductors paragraph: “The index of refraction controls how distorted reflection will look and for metals brightens the reflection a bit.”

Just lots and lots of reading and experimentation :slight_smile:

Yup that’s what I’m talking about. If you don’t want to use a complex fresnel, then yes using an ior > 20 produces a similar curve (although of course it’s not wavelength dependent).
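For anyone who wants to sanity-check that, here's a tiny Python sketch (function name is mine, just for illustration) showing that the plain dielectric Fresnel normal-incidence formula with a very high IOR gives a metal-like base reflectance:

```python
def fresnel_dielectric_f0(n):
    # Normal-incidence reflectance for a simple (real-valued) IOR.
    return ((n - 1.0) / (n + 1.0)) ** 2

# A glass-like IOR gives the familiar ~4%, while an IOR above 20 gives a
# bright, metal-like base reflectance (though not wavelength-dependent):
print(fresnel_dielectric_f0(1.5))   # ~0.04
print(fresnel_dielectric_f0(20.0))  # ~0.82
```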


#84

Good stuff.

Been thinking the same thing lately, but couldn’t come up with any real examples. Did you happen to come across any?


#85

Linear Dodge = Add
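In code terms (a hypothetical compositing snippet, assuming values in the 0-1 range):

```python
def linear_dodge(base, blend):
    # Photoshop's Linear Dodge is plain addition, clamped at white.
    return min(base + blend, 1.0)

print(linear_dodge(0.2, 0.3))  # 0.5
print(linear_dodge(0.6, 0.7))  # clamps to 1.0
```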


#86

But how can you be sure the total amount of reflected light is 80% in this case? Shouldn’t you set your reflection to 80% of brightness as well?


#87

No, unfortunately not. I had a fairly long discussion with a few guys over at ompf.org here: http://ompf.org/forum/viewtopic.php?f=10&t=1374

To be honest, I’m still just not sure. Frankly I kinda like my explanation because it elegantly explains what we see in the hard-diffuse case (reflections tend to white at glancing angles) as well as the translucent case.

Having good IOR data for some coloured dielectrics might settle the argument, and certainly I haven’t spent enough time trawling the database to see if there’s something there. But even then, if the data are created by measuring reflectance, then we’re back to square one because again there’s no way of telling which explanation is correct since both could create the effect (except we’ve just wrapped up the bulk scattering in IOR data, if you see what I mean).

For CG purposes it doesn’t really matter a huge amount to be honest because it’s perfectly plausible to model both in the same way (diffuse+specular layer), regardless of exactly what’s causing the observed reflectance, but I’d still really like to know just what is going on.
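For what it's worth, the diffuse+specular layer model mentioned above can be sketched like this in Python (an illustrative single-point shading function, not any particular renderer's):

```python
def shade_point(fresnel, specular_reflection, diffuse_albedo, incoming_light):
    # The Fresnel term decides how much light reflects off the surface;
    # whatever isn't reflected enters the material and comes back out as
    # "diffuse", regardless of whether the real cause is bulk scattering
    # or something else wrapped up in the reflectance data.
    surface = fresnel * specular_reflection * incoming_light
    subsurface = (1.0 - fresnel) * diffuse_albedo * incoming_light
    return surface + subsurface

# With albedo and specular <= 1 the result can never exceed the input:
print(shade_point(0.04, 1.0, 0.8, 1.0))  # ~0.808
```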


#88

Fixed some mistakes… There are some interesting points I missed:

But still it doesn’t take into account that neither surface nor subsurface reflection should be close to 1 (except for a silver mirror).
The value of 1 won’t be exceeded, but that still doesn’t limit most reflections to the 5-80% range. I still don’t understand how I should limit my fresnel reflections to be physically correct.
I have 2 inputs:
diffuse
reflection

I usually set my diffuse no higher than 80% of brightness, though now that I think about it, white paper is 80% of brightness, so perhaps I should keep the diffuse value for many materials much darker than I usually do, as I set it just not to exceed the 80% threshold. Which is, perhaps, a bit simplified. But this deals only with subsurface reflection. What about the surface reflection?
There is a control in mental ray which limits the “total” surface reflection. Should I set it to 0.8 for dielectrics then? Because if it mixes with diffuse, it can theoretically reach 1 at the edges, right?

Does that mean our transmission should never be 1, as is often suggested for glass?

“Frosted glass” vs translucency: I’m curious which approach is used in our common renderers. I guess the “frosted glass” approach, as it should definitely be faster to calculate.

About measured BRDFs

So basically it’s a simplified 3d model of a microstructure with the vertical scale (roughness) function?

I’m also curious how reflection is calculated in today’s raytracers as opposed to complex measured BRDFs: does it spread the rays out evenly (which, as was mentioned, never happens in real life I think)?
Do measured BRDFs calculate the subsurface reflection or only the surface one? Or does the lambertian model serve here pretty well?
I’m wondering if Maxwell treats the lambertian model differently, apart from calculating different wavelengths of light.

I’m not sure what you mean by “diffuse color”: do you mean that color comes from the subsurface? Though we said that dielectrics don’t tint their surface reflection, there are some examples like pearl, gasoline and so on. I’m even not sure if satin has a white surface reflection.

“Metals bounce off much more light as their microstructure is very rigid, not allowing light photons to penetrate too much, that’s why metals are so reflective (and hard)” – not sure, is it correct?

“Only dielectrics transmit light”, is that true? Gold leaf does transmit some light, as was mentioned.

Doesn’t the fresnel reflection distribution have something in common with the inverse-square falloff? As you change the angle of a surface, each additional degree seems to almost double the amount of reflection, or close to it, as it seems to me.

There are basically 3 inputs in the mental ray fresnel curve setup: 0 degree reflection strength (facing us), 90 degree reflection strength (edge reflection), and the curve shape, which controls how steep the peak is.
So should the curve strength be reduced so the line becomes straighter, or should the difference between 0 and 90 degrees be made smaller, i.e. increase the 0 degree strength or reduce the 90 degree strength?
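To make the question concrete, here is roughly what I mean by those 3 inputs, written as a generalised Schlick-style curve in Python (my own parametrisation, not necessarily what mental ray does internally):

```python
def fresnel_curve(cos_angle, strength_0deg, strength_90deg=1.0, shape=5.0):
    # strength_0deg:  reflection facing the camera (cos_angle = 1).
    # strength_90deg: reflection at the edges (cos_angle = 0).
    # shape: how steep the rise towards the edge is.
    return strength_0deg + (strength_90deg - strength_0deg) * (1.0 - cos_angle) ** shape

print(fresnel_curve(1.0, 0.04))  # facing: 0.04
print(fresnel_curve(0.0, 0.04))  # edge: 1.0
```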

And the reason for this, I guess, is that the more glossy the surface reflection is (approaching a diffuse distribution), the more spread out the rays become, so the curve becomes “blurred”?

True… I wish we had a shader tree like this, and maybe we will within 10-20 years. Also there should be a “transmission” parameter for the subsurface I guess. :slight_smile: Because a photon can either be reflected from the surface/subsurface, absorbed, or transmitted.

Surface:

  • Absorption (colour).
  • Roughness

Fresnel switch between surface and subsurface.

Subsurface:

  • Absorption (colour).
  • Roughness.
  • Transmission
  • Depth

I’m also wondering if reflection roughness should be a separate parameter from the roughness coming from the opposite side - SSS. Or is the structure the same for a material? It can actually be polished, right?

But doesn’t it contradict the idea that absorption has nothing to do with microgeometry? Or does it?

I’m also wondering, is anisotropic refraction possible?


#89

I’d have to look up what is actually implemented in the standard shaders, but when writing shaders for mental ray you have the option of different scattering modes, like even scattering or a bias towards the normal.
But no, this doesn’t happen in real life. Blocking by other parts of the microgeometry (more at low angles) is one reason.

True
I wish we had a shader tree like this, and maybe we will within 10-20 years. Also there should be a “transmission” parameter for the subsurface I guess.:slight_smile: Because a photon can either be reflected from the surface/subsurface, absorbed, or transmitted.

That would be the Depth, or how much gets absorbed with distance travelled. However, I think you can leave that one out as well, as you just have the absorption, and then the roughness (or scattering). What makes it out the other end is the transmission.

Depth might be a scaling parameter so you can always use the same colour for the absorption, as a couple of renderers have.
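The depth/colour separation can be sketched with Beer-Lambert absorption (a common way renderers implement it; parameter names are mine):

```python
import math

def transmittance(absorption_rgb, distance, depth_scale=1.0):
    # Beer-Lambert: the fraction of light that survives travelling
    # 'distance' through the medium. 'depth_scale' rescales the distance,
    # so the same absorption colour can be reused at different sizes.
    return [math.exp(-a * distance / depth_scale) for a in absorption_rgb]

print(transmittance([0.5, 1.0, 2.0], 0.0))  # zero distance -> [1.0, 1.0, 1.0]
```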

I’m also wondering if reflection roughness should be a separate parameter from the roughness coming from the opposite side - SSS. Or is the structure the same for a material? It can actually be polished, right?

You mean on the way out? If not, then just flip the material around and you have your front surface again. : )
But it’s interesting to look at what happens when light coming through a material hits the other side again; how a rough or polished surface affects that.

But doesn’t it contradict the idea that absorption has nothing to do with microgeometry? Or does it?

No, well, not in a uniform material. There you have light bouncing from particle to particle, absorbing, emitting, absorbing, emitting in another direction, etc… Quantum level.
If you have small parts of a material embedded in another material, like coloured plastic, then you have another surface interaction happening inside the material, where microstructure plays a role. It’s a step or two up in scale from the quantum level.

I’m also wondering, is anisotropic refraction possible?

Polaroid glasses.


#90

Thank you Rens, I will digest what you said and draw conclusions.

I’ve added some definitions and illustrations. :buttrock:

Basically I have almost no questions left except those I asked in the previous post, so when they have all been answered I will finish the pdf, so people can download it to their hard drives (as free hosting tends to lose pictures, which is sad when you’re looking at a thread started some years ago).

I’m wondering if different shading models like phong and blinn affect the actual raytraced reflection behaviour, apart from ward-anisotropic, which stretches the raytraced reflection. There should come a time when we no longer need the specular highlight. Will those models that deal with specular highlights (basically all except ward-anisotropic) become obsolete? Area lights in mental ray already do not produce specular highlights. The Vray version seems to be tied to specular somehow, so when I disable it for reflection, it simply looks wrong.
What I’m thinking is, will those specular models like blinn become obsolete as we move more towards pure raytracing?


#91

Measured BRDFs just measure the reflectivity of a surface for many different light/camera direction orientations using an instrument called a gonioreflectometer or a light stage. They’re not actually calculating anything to do with the surface, just measuring the amount of light that comes back towards the camera for a given light direction.

A BRDF cannot represent subsurface effects by its very nature, since it only deals with the integral at a single point. Subsurface effects rely on the light arriving over the entire surface, which is why we give the scattering functions for them a special name - BSSRDFs.

Light stage capture can represent subsurface effects if an entire object (such as a face) is captured, as seen here: http://gl.ict.usc.edu/Research/afrf/ and you also find things called BTFs, which are captures of a small section of a surface. Quite a lot of research has gone into them from a real-time point of view, particularly from Microsoft.

I’ll come back to how modern renderers calculate the BRDF later.

Yes, real materials are more of a continuum than the conductor/dielectric split I mentioned. But that split works pretty well in most cases, with some notable exceptions, such as satin.

The colours evident in pearls and gasoline are not due to surface reflection, they are due to thin-film interference. This is due to dispersion in thin (not much bigger than the wavelength of visible light) layers of a transmitting material (i.e. a dielectric) covering the surface. The complex patterns and colour shifts depend on the number and thickness of the layers. It’s essentially the same process that causes rainbows, just a bit more complicated.
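A minimal sketch of that, assuming a single film in air at normal incidence and ignoring multiple internal bounces (illustrative numbers only, not measured data):

```python
import math

def thin_film_reflectance(wavelength_nm, film_ior=1.33, thickness_nm=300.0):
    # Two-beam approximation: the waves reflected from the top and bottom
    # of the film interfere, with a phase difference set by the optical
    # path length through the film.
    r_top = (1.0 - film_ior) / (1.0 + film_ior)  # air -> film interface
    r_bottom = -r_top                            # film -> air interface
    delta = 4.0 * math.pi * film_ior * thickness_nm / wavelength_nm
    return r_top ** 2 + r_bottom ** 2 + 2.0 * r_top * r_bottom * math.cos(delta)

# Reflectance oscillates with wavelength, which is what tints the
# reflection differently across the spectrum:
for wl in (450, 550, 650):
    print(wl, thin_film_reflectance(wl))
```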

Nope, has nothing to do with inverse-square falloff. It’s to do with the electromagnetic properties of the medium, whereas inverse-square falloff is purely due to the geometry of the scene.

I just tend to increase the strength of the 0 degree reflectance.

Not really, it’s more to do with interreflections between surface microgeometry, which is not modelled in any analytical BRDF (but would be in a measured BRDF, since it’s just capturing the end result). It’s quite a subtle effect.

I’m not following you here.

No. I think you’re misunderstanding what a “specular model” is actually for. A BRDF (phong, blinn, ward etc. are all BRDFs) just tells you, for a given pair of directions, how much light is scattered from the incoming direction onto the outgoing direction.

Now those directions may be between a lightsource and a camera, or they may be between two surfaces.

A specular highlight is just a reflection of a lightsource. So whether you’re dealing with point-like lightsources (like infinitely small cg point lights -“delta lights” as we call them), or area lights, or an HDRI, or raytraced interreflections between surfaces, the BRDF (whether it be ward, phong, whatever), is calculated in exactly the same way*. What you’re thinking of as a specular highlight is just a specular response to a delta light. So it’s not the specular models that need to be dropped as we move ever closer to physical accuracy, but delta lights themselves.
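A little Python to underline the point (a bare-bones Blinn-Phong lobe; nothing in it knows or cares where the incoming direction came from):

```python
import math

def blinn_phong(light_dir, view_dir, normal, exponent=50.0):
    # Evaluate an (unnormalised) Blinn-Phong lobe for a pair of directions.
    # 'light_dir' could point at a delta light, an area-light sample, an
    # HDRI texel, or another surface hit by a traced ray - same maths.
    half = [l + v for l, v in zip(light_dir, view_dir)]
    length = math.sqrt(sum(h * h for h in half))
    if length == 0.0:
        return 0.0
    n_dot_h = max(0.0, sum(n * h / length for n, h in zip(normal, half)))
    return n_dot_h ** exponent

# Mirror-aligned directions give the peak response:
print(blinn_phong((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
```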

The way a raytracer goes about calculating reflections using a BRDF differs depending on the renderer (or even what mode the renderer is in). There are two different algorithms that are commonly used - path tracing (e.g. Maxwell, VRay/MR PPT shaders) and distribution ray tracing (e.g. regular VRay/MR, PRMan). We’ll examine distribution ray tracing since it’s possibly easier to understand.

In order to calculate how much light ends up at the camera for a given surface, we need to know how much light is scattered onto the viewing direction from the scene. That is to say, for every direction in the hemisphere, L, we need to apply the brdf, f(L,V), to find out how much light is scattered from L onto V, and add it to the total amount of light leaving the surface along direction V towards the camera.

The simplest way to do this (and this is completely correct) is just to generate, say, 256 rays in a uniform distribution over the hemisphere above the shading point. Then we just trace each ray and find out what it hits. If it doesn’t hit anything then we look up the colour of our HDRI at that point, apply the brdf and add it to the result. If it hits a light source we just see how bright the light is, apply the brdf and add it to the result. If it hits another object then we see how bright the object appears (which will require recursively evaluating the brdf and lighting integrals at the hit surface), apply the brdf and add it to the result.
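That naive gather loop looks something like this in Python (a toy sketch; `trace` stands in for whatever returns the radiance seen along a ray - an HDRI lookup, a light, or a recursively shaded surface):

```python
import math, random

def sample_hemisphere(rng):
    # Uniform direction on the hemisphere above +Z (pdf = 1 / 2pi).
    z = rng.random()
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(max(0.0, 1.0 - z * z))
    return (s * math.cos(phi), s * math.sin(phi), z)

def gather(trace, brdf, view_dir, n_rays=256, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        light_dir = sample_hemisphere(rng)
        # Incoming radiance, weighted by the BRDF and the projected solid
        # angle (the cosine term), divided by the uniform pdf:
        total += trace(light_dir) * brdf(light_dir, view_dir) * light_dir[2] * (2.0 * math.pi)
    return total / n_rays

# A constant white environment plus a Lambertian brdf (albedo / pi)
# should gather back roughly the albedo (0.8 here):
result = gather(lambda l: 1.0, lambda l, v: 0.8 / math.pi, (0.0, 0.0, 1.0))
print(result)
```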

So you can see that our brdf affects all the light bouncing around the scene. Now this is a pretty naive way of calculating the lighting. You can do lots of things such as importance sampling to speed up the calculation which I can go into if you’re interested, but that’s a longer discussion :slight_smile:

Different BRDFs have different strengths and weaknesses. Both Maxwell and Fryrender seem to use something similar to Schlick’s fast brdf. Other renderers give you a selection without ever really explaining why you’d want to choose one or the other. We use Ashikhmin & Shirley here since it is the most flexible (can represent the widest range of materials to a good degree of accuracy), is energy-preserving (but not energy-conserving, unfortunately), and is anisotropic, so we can use it for everything. It’s not the fastest thing in the world but it does the job. We’re always investigating others as well.

*not quite exactly the same depending on the implementation, but that’s not important.


#92

What interests me is whether we can claim that simplified BRDFs have a simple uniform distribution. Well, maybe it’s not very important anyway; if we come to a point when measured BRDFs are commonly used, people will know the differences. :wink:

Oh yeah, I realised why it confused me: metals can be very reflective, but it’s a surface effect, and not much light makes it to the subsurface level anyway. Well, nevermind. :blush: You’re right, it will be absorption for the subsurface, or maybe an absorption-transmission slider. And depth, for this transmission.

What do you mean “the same colour”?

Yes, but I’m talking about subsurface light reflecting back and through the material. Can a material have a different structure on the surface and inside due to polishing? If yes, then those would be different parameters… oh wait, wouldn’t the polished part just be a surface reflection? :blush:

Oh, I got it: it bounces, but not from microgeometry, from particles?

So it’s a pretty rare effect. Well, that’s cool; I guess it allows reducing the direct lighting amount. Anyway, as long as it’s so rare, it doesn’t make sense to include it in the physics-for-CG description I guess. :slight_smile:


#93

Thank you Playmesumch00ns for the reply, very thorough explanation. :thumbsup:

I got it… I just thought that “shadowing” and “masking” represent some kind of approximation of microgeometry, but as I’m not a mathematician I could hardly imagine this. So they’re just measuring the amount of light for different angles. Well, anyway I guess it’s not critical, as we don’t have measured BRDFs for now.

I think it’s worth mentioning that satin is an exception. I really have a hard time coming up with other examples though. It’s just an exception to the rule really.

So it’s dispersion. Anyway, we won’t model it with dispersion I guess, just a tinted surface reflection. :slight_smile: I will include this in the dispersion description.

Do you have some kind of a formula, or do you just eyeball the effect? So Maxwell does it automatically; does it have anything to do with the BRDF model it uses? It’s worth mentioning that some renderers do it automatically. Looks like over time we will get more and more correct surface shaders, with less need for users to tweak them to be physically correct.

So it’s just a several percent increase in 0 degree reflectivity?

Rens described a scientific shader, like this:

Surface:

  • Absorption (colour).
  • Roughness

Fresnel switch between surface and subsurface.

Subsurface:

  • Absorption (colour).
  • Roughness.
  • Depth

I thought that we could use a different roughness parameter for the subsurface reflection, but then it came to my mind that what I’m talking about is SSS vs surface reflection. So it’s my mistake. The only thing I think could be added to this universal shader is not just an “absorption” name for the subsurface, but an “absorption-transmission” slider, as that makes more sense to me.

So a BRDF is not dealing just with a specular CG highlight, right? Those delta lights are from the same era as CG highlights. So when you drop one, you drop the other. But yeah, as long as you say a BRDF deals not just with a specular CG highlight, it makes sense.
So a BRDF is applied after the uniform distribution is calculated, right? OK, so blinn and phong raytraced reflections will differ? The problem is that while I understand how those BRDF models (like phong and blinn) behave with a CG fake highlight, there’s no clear explanation for me of how they deal with a raytraced reflection. Not sure if it’s important to understand.


#94

Again I think you’re a little confused over some terminology here - shadowing and masking are the names of two effects that are modelled by microfacet brdfs such as cook-torrance, ward and a&s. They are descriptions of the effects of ‘peaks’ in the surface microstructure shadowing light onto other microfacets and blocking the view of the camera to other microfacets, respectively.

This is just based on theory (but the theory seems to agree fairly well with what we observe in reality), so if it’s correct then measured brdfs will also include these effects, just by virtue of measurement rather than explicit modelling.

Do you have some kind of a formula, or do you just eyeball the effect? So Maxwell does it automatically; does it have anything to do with the BRDF model it uses? It’s worth mentioning that some renderers do it automatically. Looks like over time we will get more and more correct surface shaders, with less need for users to tweak them to be physically correct.

Yup just eyeballing. I haven’t played with maxwell in enough detail to know what it does. In any case, its handling of fresnel is a bit fubar anyway. And this from a ‘physically correct’ renderer…

So a BRDF is not dealing just with a specular CG highlight, right? Those delta lights are from the same era as CG highlights. So when you drop one, you drop the other. But yeah, as long as you say a BRDF deals not just with a specular CG highlight, it makes sense.
So a BRDF is applied after the uniform distribution is calculated, right? OK, so blinn and phong raytraced reflections will differ? The problem is that while I understand how those BRDF models (like phong and blinn) behave with a CG fake highlight, there’s no clear explanation for me of how they deal with a raytraced reflection. Not sure if it’s important to understand.

There isn’t really anything to understand. As I said, a brdf just tells you how much light is reflected from one direction onto another direction. Whether that first direction is a lightsource, an hdri or another surface doesn’t make a blind bit of difference. It’s important to stop thinking of specular highlights and raytraced reflections as fundamentally different things. They’re both simulating the same thing - specular reflection from a surface - they’re just calculated in different ways. So yes, raytraced reflections will look different if you switch between a phong brdf and a blinn.


#95

Thank you a lot for your explanations, I fixed the material and added some new remarks from your thoughts.

Isn’t this the thing I was mentioning, meaning that the shader limits the total diffuse+specular reflection to 1 but doesn’t care whether it’s within the 5-80% reflectance range for most cases (for dielectrics, at least)?

Finally, I organised the best of this thread into an article. You can download it to read offline. :applause: Just drop it into your browser to open.

Here is a link to the compressed article file
http://www.sendspace.com/file/hdvd8f
or
http://rapidshare.com/files/272431979/CGTalk_-The_Science_For_CG__article.zip.html
If you can provide paid hosting, like a paid rapidshare account, please pm me.


#96

I love this thread. Some great contributions.

I’m working on a new shader, and I want to make sure I grasp all this.

So, surface roughness causes the incoming light rays to be diffused, which is what the diffuse component in CGI represents (the roughness parameter of Oren-Nayar increases/decreases the roughness “troughs”). The flatter parts of the surface reflect more directly, which is represented in CGI by the reflection component (slight surface roughness is what causes reflections to be blurred, thus the CGI diffuse component is just an extremely blurred reflection). These are both relative to each other, so if it’s a perfectly flat surface, it’s 100% reflective and 0% diffuse. If it’s half rough, half flat, it’s akin to 50% diffuse, 50% reflective.

Transparency is also relative to reflection, and is scaled relative to the amount of reflection. The reasoning here is, say, a light ray with a value of 1 hits the surface: 0.5 is reflected back and 0.5 is transmitted. Outgoing energy (diffuse/reflection/refraction/subsurface) will never exceed the incoming energy.

Next, Fresnel.

So, for metals (and conductors generally?), the recommendation is to use complex Fresnel, and simple for all dielectrics(?). Is there a table of IORs for different materials? Is that a good example of when to use complex/simple Fresnel calculations?

So, once I’ve calculated my Fresnel component, I multiply my reflection component by the Fresnel, and then, to keep things energy preserving, I multiply the diffuse by the inverse of the Fresnel. Happy days.
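That matches what's been said earlier in the thread. As a sketch (using Schlick's approximation for the Fresnel term, values assumed to be in the 0-1 range):

```python
def schlick_fresnel(cos_angle, f0):
    # Schlick's approximation to the Fresnel reflectance.
    return f0 + (1.0 - f0) * (1.0 - cos_angle) ** 5

def mix_layers(cos_angle, diffuse, specular, f0=0.04):
    f = schlick_fresnel(cos_angle, f0)
    # The specular layer takes its share first; the diffuse layer only
    # gets what the specular layer didn't reflect, so together they never
    # exceed the incoming energy:
    return f * specular + (1.0 - f) * diffuse

print(mix_layers(1.0, 0.5, 1.0))  # facing: mostly diffuse
print(mix_layers(0.0, 0.5, 1.0))  # grazing: all specular, ~1.0
```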

Does that sound like a good start? Or utter rubbish :slight_smile:

Other thoughts:

Possibly writing a method for automatically de-saturating overbright areas.

Shader models:

I’m going to go with Oren-Nayar for diffuse and Cook-Torrance for specular. Any recommendations for anisotropic reflections? I know of Shirley and Ward, not sure what the advantages are between them.

Well, that’s enough for now :slight_smile:


#97

There is subsurface reflection (diffuse in CG, but that doesn’t mean surface reflection can’t be diffuse also) and surface reflection. Metals don’t have subsurface reflection, only dielectrics do. It’s not about surface roughness, it’s about their chemical makeup. The amount of subsurface and surface reflection isn’t tied so much to roughness, I guess.
About Oren-Nayar: “The rougher a surface is, the more the diffuse reflection flattens out. Roughness is generally not talking about BIG roughness, but very fine bumps on a surface.”
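For reference, the qualitative Oren-Nayar model is simple enough to sketch (sigma is the roughness; at sigma = 0 it reduces to plain Lambert):

```python
import math

def oren_nayar_factor(sigma, theta_in, theta_out, delta_phi):
    # Qualitative Oren-Nayar term multiplying the Lambertian response;
    # larger sigma flattens the diffuse falloff.
    s2 = sigma * sigma
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_in, theta_out)
    beta = min(theta_in, theta_out)
    return a + b * max(0.0, math.cos(delta_phi)) * math.sin(alpha) * math.tan(beta)

print(oren_nayar_factor(0.0, 0.5, 0.5, 0.0))  # 1.0 -> pure Lambert
```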

You mean refraction is relative to reflection. Yes, it’s like this, and some energy may also be absorbed, but very little with glass. I’m more curious why the reflective and refractive IOR are equal, but I can’t find an explanation that is understandable to me.

There is a table for some materials, but not for all. It makes sense to use tables only for materials such as diamonds and liquids. Anyway, you won’t find much info on other materials’ IORs. And it makes little visual difference in my opinion.

I’m not qualified to reply to this, as I don’t know what operators are used.

Well, it’s a bit restricting. It’s better to eyeball this. Anyway, if you make textures it’s not that hard to maintain. Just keep that in mind.

Check what the manual says about those particular BRDFs in your renderer.


#98

What might also be a little but interesting thing to mention (although I think it’s quite important for people to use it correctly) is that many CG artists use the term ‘glossy’ in the wrong way. They refer to glossy as if it means more blurry, but in fact more glossy means less blurry and less glossy means more blurry.

So, in short:

more glossy = less blurry
less glossy = more blurry

That’s also why I always try to use the term ‘blurry’ reflections or refractions. That way there is no confusion. :smiley:

Also, perfect (glossiness of 100%) reflections don’t exist in real life.

What I also find interesting is that most renderers (mental ray, vray, brazil, fryrender, etc…) always use lambertian diffuse shading (100% roughness) as the basis for the main colouring of the material.
You have absolutely no control over it (except for the colour).
Which imho is one of the biggest reasons why the lighting in GI renders often looks too ‘diffuse/flat’ and not very contrasty.

You could do it correctly by using a layered material with a base material that has a pure black diffuse colour and very blurry reflections instead (like 20% glossy for example), but then in my tests the GI gets screwed. Probably because GI is calculated using the diffuse colours and doesn’t take the reflections into account (right?). So if I build materials that only use reflections (physically correct) and have all their diffuse colours set to black, I won’t get good GI.
To take reflections into account in your GI solution you’ll need caustics, and those are pretty slooooow and heavy on resources.

Or am I wrong here?

It would be great if these renderers offered the possibility to ‘tighten the diffusing’ in a way that still gives correct GI results but doesn’t need heavily sampled reflection calculations.

This is where Maxwell and Fryrender kind of shine, because their materials use a roughness parameter, so it’s just a matter of setting the roughness to something like 90 or 80% to achieve a less diffusing but still rough-enough-looking material.


#99

I remember that long talk with you and Neil Blevins in that Maxwell thread, and I agree with you. But using “diffuse-glossy-specular” reflection is so common, and I think it doesn’t contradict that “specular” = “completely glossy”. The terminology in CG is vague and a bit chaotic. So in your terminology we should use “diffuse-blurry-glossy”?

True, I think it was mentioned (a mirror is pretty close to 100% glossy (specular)), but you can make it 99 or 98% maybe? Just to feel safer. :smiley:

You do have the roughness parameter, which is a switch between lambert and oren-nayar. At least in 3ds max.

Hmm, but the “diffuse” component, where you usually apply textures, is not a surface reflection, it’s a subsurface reflection. So it cannot reflect the surroundings. It is light that went inside the material and bounced back out. So you can’t use any kind of blinn/phong/ward BRDF for it, and it’s not a direct reflection, once again. So I don’t know how raytracing could be used to enhance the lambertian component.

Yes, of course… and those are too slow for most people, true.

What is “tightening the diffusing”? I’m not sure I understand you.


#100

The roughness value only makes things rougher. It kind of makes things look even flatter/more diffuse (which is what it does).
That’s not what I’m looking for; I want the opposite to happen, to make it less rough than the default.
What I want to achieve is shading like a Maxwell or fryrender material with roughness set at 80-90%.

Although I think it won’t be possible without using slow/heavy techniques.
But there’s nothing wrong with asking :smiley:

There’s a big difference between how it works in reality and how it gets calculated in CGI, and I think you might be mixing them up a bit here, or it might be me who doesn’t fully understand what you mean?

Everything we see is reflected light rays. So even diffuse colours (subsurface or no subsurface) are a consequence of light coming from the surroundings and being bounced off (reflection) until the light rays enter our eyes or the camera.

Now, the rougher the surface of the object, the more the light gets scattered around and thus the more blurry the reflections we get. In CGI we use shaders to mimic how light reacts to these rough surfaces. We use different shaders because not all materials have the same roughness structures, so they react differently to light, and very simple controls (like roughness/glossiness) are added to offer a few more possibilities. In the real world there is no such thing as phong, blinn, oren-nayar, etc. shaders, not even for reflections.
In real life there’s only the molecular structure of an object. But if you want to recreate/model that on a pc… good luck with that, haha!! :wink:
So instead of forcing us to model the molecular structure of an object (and thus its tiny roughness), we get shaders that approximate how light reflects off these little bumps, spikes and pits.

A lambertian shader is an isotropic surface shader, which means it bounces light off uniformly in all directions.
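In BRDF terms that uniformity is the whole definition (toy snippet; the direction arguments are there only to show that they don't matter):

```python
import math

def lambert_brdf(albedo, light_dir, view_dir):
    # Lambert's BRDF is just a constant, albedo / pi: the surface scatters
    # incoming light equally into every outgoing direction, so both
    # direction arguments are ignored.
    return albedo / math.pi

# Same value for any pair of directions:
print(lambert_brdf(0.8, (0, 0, 1), (0, 0, 1)) == lambert_brdf(0.8, (1, 0, 0), (0, 1, 0)))  # True
```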

So I don’t know how raytracing could be used to enhance the lambertian component.

I’m not asking to use raytracing to ‘enhance’ the lambertian component.

I think that using a lambert shader as a base for every material is not a good idea (too much scattered light in GI).
I think that not all materials scatter light in an isotropic way, so I’d like more control over the diffuse colour/pass/component of a material, to make it less isotropic.
So, like I said, you can do this with very blurry reflections (but most of the time you get quite ugly results when you use very low glossiness values), but then it is excluded from the GI calculation, unless you enable caustics, which will take forever to calculate and take tons of memory, even in not-so-complex scenes.

Now, my question is: is there a way this could be achieved more efficiently?