What mister3d is on about is that for many materials, the ‘diffuse’ part of the response comes from subsurface reflection - i.e. light that’s entered the material and scattered around a bit before leaving again. This doesn’t really make a difference regarding the point you are making though, that a constant (lambertian) BRDF is not a great approximation to the shape of real reflectance profiles. The most diffuse materials known to science still vary their reflectance with angle quite significantly.

As you have correctly identified, the reason we still use Lambert for diffuse shading is computational efficiency. Irradiance caching, on which mental ray’s final gather and VRay’s light cache are based, relies on it (extensions exist to handle arbitrary BRDFs with spherical harmonics, known as radiance caching, but I haven’t seen that implemented in a commercial renderer yet).

Incidentally, the convention for reflections is diffuse->glossy->specular. I prefer the term ‘shininess’ rather than ‘glossiness’ for describing how far along that scale you are, but to each their own.

Ultimately, as has already been said, neither perfect diffuse nor perfect specular reflections exist in the real world. The reason brute-force path tracers like maxwell, fry et al are able to handle them better than traditionally biased renderers is mainly due to the fact that they are so bloody slow in the first place - their algorithms don’t make any distinction between different BRDFs, so using a glossy reflection instead of a Lambertian diffuse doesn’t make much difference to rendering time. That said, perfect specular and perfect diffuse surfaces will still allow the image to converge faster in those renderers.

Sorry, short answer: no. As you said, you could use multiple bounces of glossy reflection to do the same thing (and just ignore GI, by which I assume you mean photon mapping).

I’ve been looking at the Ashikhmin & Shirley BRDF model you mentioned, and I have a question.

In the paper (An Anisotropic Phong BRDF Model), they have models for the diffuse & specular components, but how are they then calculating the ray-traced reflections in the example (figure 1)?

Is that covered in the Monte Carlo section? Because that’s where it starts going over my head, unfortunately.

edit:
just read this:

There isn’t really anything to understand. As I said, a BRDF just tells you how much light is reflected from one direction onto another direction. Whether that first direction is a light source, an HDRI or another surface doesn’t make a blind bit of difference. It’s important to stop thinking of specular highlights and raytraced reflections as fundamentally different things. They’re both simulating the same thing - specular reflections from a surface - they’re just calculated in a different way. So yes, raytraced reflections will look different if you switch between a Phong BRDF and a Blinn.

This answers a lot. So I could use the Cook-Torrance/Blinn/whatever specular model to attenuate my ray-traced reflections then. Cool. At work they simply multiply the reflections by the Fresnel component; is it better to use one of the more popular specular models?

From a more technical perspective, how would you calculate the position of the light (K1 in the paper)? If I’m tracing everything, then everything is a light source.

edit:
I’ll assume the light source in the context of a perfect mirror would be the reflection vector, so I’ll try that to start.

Well doing it with a perfect mirror’s not going to give you anything interesting. What you’re trying to calculate is this: for every direction, D, in the hemisphere above P, how much light is scattered onto the viewing direction, V? This is what the BRDF gives you - you plug in pairs of directions and a shininess value and it tells you how much light is reflected from one to the other.
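To make that concrete, here’s roughly what such a function looks like in Python, using a normalised Phong lobe as a stand-in for whatever model you’re actually using (phong_brdf and its helpers are illustrative, not from the paper):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def reflect(d, n):
    # mirror the direction d about the unit normal n
    s = 2.0 * dot(d, n)
    return tuple(s * nc - dc for dc, nc in zip(d, n))

def phong_brdf(L, V, N, shininess):
    # how much light arriving from direction L is scattered towards V:
    # a lobe around the mirror reflection of L, sharpened by 'shininess'
    R = reflect(L, N)
    cos_alpha = max(0.0, dot(R, V))
    norm = (shininess + 2.0) / (2.0 * math.pi)  # keeps the lobe energy-conserving
    return norm * cos_alpha ** shininess
```

Plug in the mirror direction for V and you get the peak of the lobe; rotate V away and the value falls off, faster for higher shininess.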

Now, for direct light sources you already have an explicit list of directions, L, given to you in the illuminance loop, but for raytraced reflections you need to generate those directions yourself.

Now the naive (but perfectly valid) way to do this is just to sample (i.e. trace rays) uniformly over the hemisphere. In RSL you’d do that something like so:

color result = 0, hitcol;
uniform float numberOfSamples = 256;
float shininess = 100;
vector L;
vector V = -normalize(I);   /* view direction - I isn't unit length */
gather( "illuminance", P, N, PI/2, numberOfSamples,
        "distribution", "uniform",
        "ray:direction", L, "surface:Ci", hitcol )
{
    /* weight whatever the ray hit by the BRDF and the cosine term */
    result += hitcol * brdf( L, V, shininess ) * L.N;
}
result /= numberOfSamples;

Now there’s obviously an awful lot of directions in the hemisphere (an infinite number in fact), so we’ll just choose a certain number (say 256) and hope that gives us a decent result - the fewer samples you use, the noisier the result will be. The trouble is 256 samples is going to be hella slow and it’ll still be pretty noisy for fairly blurry BRDFs.

The trouble is that a uniform distribution of rays doesn’t really correspond to our BRDF very well. The image below is a graph of a Cook-Torrance specular “lobe”.

You can clearly see that for most directions the BRDF is zero or close to zero - so all the work we’re doing tracing rays along those directions is essentially wasted, since the result will be multiplied by a very small value in that gather loop above. Maybe less than a dozen directions would fall within the large, or “important”, part of the lobe and hence contribute anything significant to the image.

So wouldn’t it be nice if we could figure out what those important directions are and just trace along those, ignoring the rest of the hemisphere? Yes, it would and fortunately yes, we can. This is where section 3 of the paper comes in. What all those formulae are telling you is how to importance-sample the BRDF, i.e. how to generate a bunch of directions that we know are likely to have a significant contribution to the result. We are able to do this thanks to the magic of monte-carlo integration.

Monte Carlo is a pretty expansive topic, but the gist of it is that we can integrate an arbitrary function in any number of dimensions just by taking a number of samples of that function, adding them together and then dividing by the number of samples we took (scaled by the size of the domain we’re integrating over). If you look back up at the code snippet again, that’s exactly what we’re doing.
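If you want to see that outside a shader, here’s the whole idea in a few lines of Python (the function and interval are arbitrary examples):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    # estimate the integral of f over [a, b]: average n random samples
    # of f, then scale by the length of the interval
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# the integral of x^2 over [0, 1] is exactly 1/3
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100000)
```

More samples tighten the estimate - the error falls off as 1/sqrt(n), which is exactly why the noise problem above exists.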

The good part is that we can improve the quality of the result if the samples that we take roughly match the distribution of the function we’re trying to integrate. All we have to do is account for the fact that we’re doing so, or we end up with a biased result. The way we account for it is by weighting each sample by a measure of the probability of choosing that particular direction, called the probability density function (PDF). This turns our gather loop into something like this:

color result = 0, hitcol;
uniform float numberOfSamples = 256;
float shininess = 100;
vector L;
vector V = -normalize(I);
/* array lengths must be compile-time constants in RSL */
vector directions[256];
generateSampleDirections( directions );   /* fills the array to match the BRDF */
gather( "illuminance", P, N, PI/2, numberOfSamples,
        "distribution", directions,
        "ray:direction", L, "surface:Ci", hitcol )
{
    /* weight each sample by the BRDF divided by its probability density */
    result += hitcol * brdf( L, V, shininess ) / pdf( L, V, shininess ) * L.N;
}
result /= numberOfSamples;

In this example we’ve replaced the uniform distribution by a bunch of directions that we’ve calculated ourselves to match the BRDF, then in the gather loop we’re dividing each sample by the corresponding PDF for that direction (in the first example above we’re implicitly doing this: with uniform sampling every direction is equally likely, so the PDF is the same constant for every sample and just folds into the overall scale).
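In case the PDF-weighting trick seems like voodoo, here’s a tiny one-dimensional Python demonstration (the integrand and pdf are made up for illustration): we integrate a sharply peaked function both ways. Because the sampling density here matches the integrand exactly, the weighted estimator is near-noiseless; in a renderer the pdf only matches the BRDF factor, not the incoming light, so you still get some noise, just much less.

```python
import random

def f(x):
    # a sharply peaked integrand, a bit like a narrow specular lobe;
    # its true integral over [0, 1] is 1/11
    return x ** 10

def pdf(x):
    # density proportional to the integrand: p(x) = 11 * x^10
    return 11.0 * x ** 10

def sample_pdf(u):
    # invert the CDF u = x^11 to draw x distributed according to p
    return u ** (1.0 / 11.0)

def estimate(n, importance, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        if importance:
            x = sample_pdf(u)
            total += f(x) / pdf(x)   # weight each sample by its pdf
        else:
            x = u                    # uniform sampling: pdf is constant
            total += f(x)
    return total / n

uniform_est = estimate(1000, importance=False)
importance_est = estimate(1000, importance=True)
```

With the same 1000 samples, importance_est lands almost exactly on 1/11 while uniform_est wanders noticeably around it.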

So, all we need to do to get this to work is to write the functions generateSampleDirections() and pdf(), which is what’s detailed in section 3 of the paper. Unfortunately they don’t really tell you everything you need to know to do that and it’s beyond the scope of a forum post. If you want to learn this stuff the best thing you can do is buy this book. (I’ve plugged it so many times on here I should be on commission.) In the meantime, you can download the source code here, which has implementations of all the functions you need to get started.

In a feature production environment, do you find this Monte Carlo approach fast enough? Do you even bother with point-based colour bleeding?

Another thing I’ve been reading about is how metallic coloured reflections (e.g. gold) are based on Fresnel. Does anyone have any resources/examples of this? The only method I’ve seen for calculating coloured reflections is just multiplying them by the diffuse colour, and I’m guessing it’s a little more complex than this in the real world.

Yeah, for a full Fresnel calculation of a metal reflection you need the n,k data for each wavelength interval, and you calculate that for each angle (each sample).

You could however speed that up dramatically by multiplying the incoming colour with a 0-90 degree colour gradient or table, i.e. a lookup at a certain degree which would reflect a certain colour.

As for resources, try the Indigo forums and n,k data set.

edit: You could complicate things even further by not only splitting it up into wavelength intervals, but also by its polarisation.
But however you calculate it, you always end up with a multiplier for the incoming light.

We always use point-based colour bleeding for diffuse interreflection. The Monte Carlo integration is just for specular BRDFs. We’ll also use point-based with a narrow cone angle for doing blurry reflections of things that are too heavy to raytrace, which, unfortunately, is quite a lot of stuff.

As Rens said, for the most accurate metals you really want n and k data per wavelength, which you can find tables for on the internet. However, most of the time you don’t want a physically accurate colour (and that’s assuming your spectral->rgb conversion works perfectly, which in a film pipeline it definitely won’t), you just want the colour the texture artist painted, so your approach of using the diffuse colour is the most sensible. The colour shifts in most metals are relatively minor anyway so you can get away with it.

One thing you can do to slightly increase realism is to do the maxwell bodge and use a real fresnel (i.e. not complex - so just prman’s regular fresnel() function) with a very high index of refraction, say 100 - 1000. This gives you a curve that looks roughly like a typical complex fresnel for conductors. There’s a characteristic ‘dip’ in the curve as it approaches glancing angles that gives you a little bit of darkening towards the edge of objects. This definitely increases realism, especially when you have a metal object self-reflecting.
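Here’s a quick Python sketch of that bodge, using the exact real-IOR dielectric Fresnel (eta = 500 is just an example value in the range mentioned). The curve sits high across most angles, dips hard around the Brewster angle (which for such a high IOR lies within a fraction of a degree of grazing), then snaps back to 1 at exactly 90 degrees:

```python
import math

def dielectric_fresnel(cos_i, eta):
    # exact unpolarised Fresnel reflectance for a real (non-complex) IOR,
    # light going from air into a medium with relative IOR eta
    sin2_t = (1.0 - cos_i * cos_i) / (eta * eta)
    if sin2_t >= 1.0:
        return 1.0                  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    r_s = (cos_i - eta * cos_t) / (cos_i + eta * cos_t)
    r_p = (eta * cos_i - cos_t) / (eta * cos_i + cos_t)
    return 0.5 * (r_s * r_s + r_p * r_p)

eta = 500.0                                       # absurdly high IOR: the bodge
f_facing = dielectric_fresnel(1.0, eta)
cos_brewster = 1.0 / math.sqrt(1.0 + eta * eta)   # Brewster angle, ~89.9 degrees
f_dip = dielectric_fresnel(cos_brewster, eta)
f_grazing = dielectric_fresnel(0.0, eta)
```

f_facing comes out around 0.99 and f_dip drops to roughly half - that narrow dip near glancing angles is the edge-darkening described above.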

Here are two pics of what I meant by the lookup thing. It works really well, and you only need to calculate the incidence (camera-to-surface) angle; no Fresnel equations.

First is metal, second is dielectric.
Metal: surface angle as gradient coord input, multiply with full reflection.
Dielectric: surf angle as gradient coord, and use that to drive the amount of full refl or lambert.

I just wrote an RSL Fresnel function based on the following that gave some nice quick results: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch17.html
(Fresnel calc about halfway down; it uses both n & k and seems faster than RenderMan’s built-in function.)

Nice link, thanks. Second one doesn’t work, unfortunately.

I wrote a shader for mental ray which does complex fresnel and some other incidence related stuff. I used this link, which has RSL and VEX examples, really nice page with some good general and BRDF info as well: http://odforce.net/wiki/index.php/ReflectanceFunctions

I just wanted to throw something in, didn’t read the whole thread though.

On the first page playmesumch00ns states that there are hardly any materials in “the real world” that reflect more than 90%.
THIS is definitely not true! There are indeed materials that go over 90% reflectance. Just have a look at this site:

They offer this “white target” which reflects about 99% (+/- 1%) of light over the visible spectrum. And there are indeed many other materials that reflect more than 90%, and these are not only “space-age” special materials absent from everyday life.

Just for those guys who think “oh well, so I’ll always make my materials less than 90% reflective and they’ll be fine” (which I thought a while back): it’s really not that simple, and there might be situations where a 2-3% difference in reflectivity makes a “whole” different image.

EDIT:

I just measured an ordinary sheet of paper with a spectrophotometer (laid on a black object so it wouldn’t pick up any reflection from transmitted light) and it has a luminance value of 91.9% (CIELAB)…
The white of a Lucky Strike cigarette packet has a value of 92.2%…

The book Physically Based Rendering by Matt Pharr and Greg Humphreys explains in great detail the physics and implementation of a raytracer. LuxRender is based on its pbrt renderer.

If you have a computer science background, implementing a basic raytracer as practice also gives you lots of experience of how a renderer and shader work in depth.

Then, mental ray, vray, brazil and maxwell are just raytracers with different names on different buttons (of course, they all have their own uniqueness, but the principle is the same).

RenderMan renderers (i.e. prman, 3delight, AIR, etc.) are totally different beasts, with a different theory and approach than a raytracer.

but, of course, science is only one half of CG. Art is the other half.

Links are dead and this is a very important subject.
Most of the pics in the thread don’t appear in my browser; due to the hosting proxy policy in my country I can’t view them, so could you please upload them again or maybe give a working link?

I just stumbled upon this thread and it is amazing! There’s so much information in here and it’s a must-read for anyone that is even only vaguely interested in light and CG.
Thanks to all the contributors of this thread!!

Is it physically possible for a material to reflect 50% specular and 50% diffuse without the specular reflection being ‘glossy’ (blurry)?

NO - am I right?

If NO, then isn’t glossiness (if given a value between 0 and 1) basically a multiplier for the amount of specular reflection in a material, since specular reflection overrides diffuse reflection (that is, a material with 100% specular reflection has no diffuse reflection)?

IF:
Diffuse Reflect = 1.0
Specular Reflection = 1.0
Glossiness = 1.0
THEN:
The material is 100% specular reflective.

IF:
Diffuse Reflect = 1.0
Specular Reflection = 1.0
Glossiness = 0.0
THEN:
The material is 100% diffuse reflection.

IF:
Diffuse Reflect = 1.0
Specular Reflection = 1.0
Glossiness = 0.5
THEN:
The material is 50% diffuse reflection and 50% specular reflection (but blurry)

Does any of that make sense? or if not could someone please show me where I’m going wrong.
Thanks for all the great help!

Hmmm, then I must be missing something. Could someone explain exactly what glossiness is and how it relates to specular/diffuse reflection?

From what I understood:

GLOSSY = is the state in-between DIFFUSE and SPECULAR.
More glossy (1) = less blurry (specular)
Less glossy (0) = more blurry (diffuse)
The more specular a material is, the less diffuse it is, and vice-versa.

GLOSSY = is the state in-between DIFFUSE and SPECULAR.

You’re right.

Just keep in mind that surfaces in the real world are layered. My Wacom has a coating that reflects light, but some light passes through and hits a rougher surface below that coating, which is diffuse.