View Full Version : The Science For CG (article)
08-28-2009, 04:06 AM
Please don't post in this thread, it's just for grabbing purposes.
The article was originally created at and belongs to cgsociety.org (http://www.cgsociety.org)
It was organised from the original discussion and illustrated by Alexander Alexandrov (http://forums.cgsociety.org/member.php?u=126451).
Special thanks to two people who greatly contributed to creating this article: Rens Heeren (http://www.rensheeren.com/) and playmesumch00ns (http://forums.cgsociety.org/member.php?userid=7492). Without them this article definitely wouldn't exist, as they provided most of the scientific information.
The Science For CG
Using the physically correct route is a sign of an experienced CG specialist.
Doing things in the physically correct way (or as close to it as your understanding of physics and maths allows) means that the results are predictable.
Layering artistic hack after artistic hack into shaders quickly results in setups that are unmanageable and hard to make changes to. Moreover, if you don't light those materials in exactly the right way (as the original shader designer intended) the results can often be bizarre or just plain broken. This is especially important when you need to share shading and lighting setups between multiple artists working on different shots and sequences.
Try to keep physical correctness for as long as possible and only branch off into 'artistic licence' when you absolutely have to. It makes sense to study photography and traditional lighting, so you know how to break the rules without breaking physical plausibility.
The best ways to achieve the worst cg:
Don’t use GI – leave completely black shadows. You can also overexpose the direct light for a blown-out look.
Leave procedural shaders, don’t map anything. Or use simple tiled textures.
For materials, use the extremes: 100% white, 100% black and 100% saturated colors where possible.
Don’t use reflectivity for materials and don’t make maps for it.
If you do use reflections (for some bizarre reason), use mirror-like ones, don’t blur them.
Ignore such things as fresnel falloff.
Ignore bump and displacement, pretending that the simple, perfectly even surface geometry you create exists in the real world.
Turn on the ambient lighting and don’t use the inverse square falloff for lights, except for the sun and the moon. Ignore the scale of the scene.
Use hard shadows always.
Don’t use real world camera options. Ignore exposure, depth of field and never use motion blur for animations.
Following these rules guarantees you quite plain CG.
These are the most common beginner mistakes, and should not be treated as absolute rules: any of them can be broken for aesthetic or technical reasons, but with caution.
In raytracing the calculations are physically based – they mimic reality in a plausible way, though still not physically precise, as the manuals for raytracers themselves state.
You cannot study lighting without concerns about materials and real-world cameras, because in realistic rendering we simulate photography and movie production. We do not simulate what our eyes see.
Geometry and geometry simulation considerations:
The geometry of most CG models cannot reproduce the real-world geometry and microstructure of objects. Modelling every detail makes no sense – it is too hard to rig and map, and too heavy for viewports – so we use displacement or bump instead. Always use bump, or better, displacement for every surface if you want a realistic result. In CG movie production they provide at MINIMUM color, bump (or better, displacement) and specular (reflection) maps for every surface. Note also that they create hand-drawn textures for all of these components, not just flat values. Of course in archviz they don't make buildings dirty and old, as the client doesn't want to see them that way :). But dirt helps "to sell" the picture as realistic.
Dirt is pretty, dirt is detail, dirt is scale. Remember: the smaller dirt details are, the bigger the object should be. Dirt scale tells about the object's scale.
Even a perfect building is never perfect.
Only model what you see (use references).
Model to a pixel accuracy if possible, not a 1/8th of an inch. Use real-world scale when you model, measure what you model if possible.
You should also try to provide high resolution textures for everything and of course unwrap everything.
There is one thing that concerns hard-surface modelling but is related to lighting – fillets. They are added to catch highlights from lights and therefore enhance the feeling of form.(pic)
But think about the actual scale of fillets when adding them. Don't do a 10-meter fillet on a distant building for a fancy highlight; add fillets at a realistic scale.
The law of energy conservation: a surface can never reflect more light than it receives, so a reflection will most probably be a bit dimmer than its source (even a silver mirror reflects only about 99% of incident light), and lighting itself weakens with an inverse-square falloff.
For the same reason, brightness values should be neither 0% nor 100%. Usually you set your diffuse value around 20-80% for dielectrics, but 0% for metals. Your color saturation should also not be 100%, but around 80%, as a surface almost certainly cannot reflect 100% of some wavelengths and none of the others. (pic)
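As a rough illustration of these rules of thumb, here is a tiny Python sketch. The helper name and the ranges are just the guidelines above turned into code, not any renderer's API:

```python
# Hypothetical helper: nudge a dielectric's diffuse value and saturation
# into the plausible ranges suggested above (20-80% diffuse, <=80% saturation).

def plausible_dielectric_diffuse(value, saturation):
    """Return (value, saturation) clamped into energy-conserving ranges."""
    value = min(max(value, 0.20), 0.80)   # never pure black or pure white
    saturation = min(saturation, 0.80)    # never fully saturated
    return value, saturation

print(plausible_dielectric_diffuse(1.0, 1.0))  # -> (0.8, 0.8)
print(plausible_dielectric_diffuse(0.0, 0.5))  # -> (0.2, 0.5)
```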
There is direct and indirect lighting. Direct lighting alone is just a ray that hits a surface and stops there, so there is no light bounce – something that never happens in the real world. (pic)
Of course, for aesthetic reasons you can have completely black shadows, but it goes beyond this discussion.
The inverse-square light falloff: doubling the distance between the light and the subject results in one quarter of the light hitting the subject.
The light perspective: the more distant the light is, the less obvious the inverse-square decay. If you double the distance of the light source from the object, you must make it 4 times brighter to appear at the same intensity, but the falloff range becomes bigger. The sun is so distant and so big that using the inverse-square falloff for it makes no practical difference, so we neglect it – we use no decay for direct sunlight, moonlight and starlight. The difference may be obvious between different planets of the Solar System, but not within a scene on Earth.
And this brings a very important note - the realistic scale of a scene is important, because light decaying in a realistic way is tied to the scale by its strength.(pic)
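The inverse-square law is easy to sketch in code, and the same sketch shows why sunlight gets no decay in practice (the function name is just for illustration):

```python
def irradiance(intensity, distance):
    """Inverse-square falloff: light received scales as 1/d^2."""
    return intensity / distance ** 2

# Doubling the distance quarters the light:
print(irradiance(100.0, 2.0))  # -> 25.0, a quarter of irradiance(100.0, 1.0)

# Why the sun gets "no decay": across a 100 m scene the relative change
# in falloff is negligible, because the sun is ~1.5e11 m away.
sun = 1.496e11
ratio = irradiance(1.0, sun) / irradiance(1.0, sun + 100.0)
print(abs(ratio - 1.0) < 1e-6)  # -> True: effectively no falloff in-scene
```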
The inverse-square specular falloff: if you move the light closer to a specular (not glossy) reflection, the highlight appears bigger, but not brighter. Halving the distance makes the light 4 times brighter on the surface, but its reflection spreads over an area 4 times larger, so the two effects cancel out.
Also, a glossy reflection may appear just as bright as a specular one: the glossy reflection spreads the same energy over a larger area, and although the small concentrated specular spot is actually far brighter, we can't see the difference due to limited dynamic range.
The angle of incidence equals the angle of reflectance. This means that if you want to light a specular object, you can measure the angle visually and know exactly where to position your reflecting lightsource in relation to the camera so it's visible in the reflection. There is a utility in 3ds max called "place highlight", and other 3d programs have similar tools.
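The "angle in equals angle out" rule is the standard mirror-reflection formula r = d - 2(d·n)n; a minimal sketch:

```python
def reflect(d, n):
    """Mirror a direction d about a unit surface normal n: r = d - 2*(d.n)*n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray coming down at 45 degrees onto a floor (normal pointing up)
# leaves at 45 degrees on the other side:
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # -> (1.0, 1.0, 0.0)
```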
Light distribution. In real life light diffuses most of the time; it's really rare to see sharp shadow edges or a sharp cone from a spotlight. But when you create a spotlight in 3d software, it usually has a 1/1 hotspot/falloff ratio, which is wrong. Give it a ratio closer to 1/10 and you get a softer falloff, which is correct. Just compare it with an area light, which is a perfect example of how realistic light behaves.
08-28-2009, 04:06 AM
A shadow may be sharp or diffused depending on the size of the light source. Use soft shadows most of the time and you will make no mistake. Yet in many beginners' works you can see a car on a mirror-like surface casting a harsh shadow.(pic)
There is also aerial perspective: objects dim with distance because air contains small particles of dust and so on. It should be present in realistic outdoor (and often indoor) scenes, as it is physically based.(pic)
Overbright problem: in very bright areas a rendered image might look too saturated, which is not correct and should be corrected in postwork, either in Photoshop or in a compositing program. In photographs shadows are saturated, but the brighter the luminance, the less saturated the colors appear.(pic)
When a photon hits a surface, one of 3 things happens (at least that we typically model in CG):
Reflection (the photon bounces off the surface)
Transmission (transparency and translucency, including subsurface scattering)
Absorption (no light information comes back – looks black)
Objects can also emit light, which is emission.
There are 2 types of material as far as we're concerned:
conductive materials (metals)
dielectrics (everything else).
When light is reflected off a material, or passes through it and comes out, these effects happen:
If light bounces off a diffuse surface, color bleeding happens.
If light bounces off a specular surface, a reflected caustic happens.
If light travels through a refractive surface, a refracted caustic happens.
If light is absorbed by a surface and leaves from the opposite side, subsurface scattering happens.
Caustics are caused by focusing. They can be produced by reflective or refractive surfaces, but they are a specular effect. Subsurface scattering is a diffuse effect: light that exits a subsurface-scattering material on the opposite side is subsurface transmission, and light that exits from the same side it entered is subsurface reflection. (pic)
08-28-2009, 04:07 AM
Depending on the surface quality, reflections are divided into:
Specular reflections (very smooth surfaces) – a surface effect ("specular" comes from the Latin word for mirror)
Glossy reflections (in between diffuse and specular)
Diffuse reflections (rough surfaces) – subsurface effect.
Selective specular reflections are possible only for metals, which results in tinted reflections. Conductors (metals) have no subsurface reflection, only specular reflection.
Selective diffuse reflections are possible for dielectrics, which results in a colored surface quality.
Transmission can be:
direct transmission (completely transparent)
diffuse transmission (translucent materials – milky look)
selective transmission (colored glass – either direct or diffuse transmission). This kind of surface absorbs, reflects and transmits some of the light, scattering rays in many directions as they pass through.
No surface completely absorbs (appearing completely black), completely reflects or completely transmits. Every surface reflects, transmits and absorbs to some extent.
1) Reflection. The photon bounces off the surface. Whether a particular photon is reflected, transmitted or absorbed by a material isn't down to the surface microstructure, it's down to its interaction with the atomic structure. I don't know nearly enough about quantum physics to say any more than that, but I don't think trying to qualify what's happening in terms of structure is helpful. Metals do, for instance, transmit light, just tiny amounts at very short distances (gold leaf clearly transmits light).
Dielectrics always reflect light exactly as it hit them, i.e. their reflections are "white", whereas conductors will colour their reflections. There are some rare exceptions like satin. What colour the reflected light is tinted by a conductor depends on its chemical makeup as well as the angle at which the light hits it. For instance, light hitting a gold surface at a glancing angle is less yellow once reflected than light hitting the surface head-on.
Specular (direct) reflection may be isotropic or anisotropic. An anisotropic specular is stretched out in the direction perpendicular to the grooves in the surface, whereas an isotropic one is evenly distributed. Note that the raytraced reflection usually gets stretched too, not only the CG highlight.
Specular CG highlight
Raytraced anisotropic reflection
The subsurface part is what mostly gives the bounced light its colour – the 'diffuse' part. The surface part depends on how rough the surface is, ranging from mirror-like to diffuse (lambert) reflection at the extremes.
08-28-2009, 04:08 AM
A real surface's bidirectional reflectance distribution function (BRDF) describes how it reflects or absorbs light from different angles.
There are simplified BRDF's (no raytracing), then hybrid (raytraced+simplified) and measured (complex, real-world surface data).
At the time of this article the hybrid models are the most popular; measured models are not yet in common use, though some renderers like Vray already have prototypes to work with them. There are not yet any libraries of measured data available. Raytraced BRDF's provide a more realistic result than simplified ones, and measured ones provide an even more realistic result.
http://www.youtube.com/watch?v=enjPuiA-MOE
Simplified BRDF models are basically diffuse+specular, which means subsurface (diffuse) and surface (specular) reflection.
In the early days of CG there was no raytracing, so a raytraced reflection of the lightsource wasn't an option. That's why they came up with the idea of the specular highlight, which is fake. More and more people tend to use raytraced reflections instead of fake CG highlights, which are gradually becoming obsolete.
This is also reflected in the lights of 3d programs: every modern renderer usually comes with an area light, which shows a real reflection of the lightsource itself (because the renderer can actually render raytraced reflections), whereas the old types of lights have no real reflection – they produce only a specular highlight. They are still useful though.
The most common simplified BRDF’s for diffuse are Lambert and Oren Nayar.
Lambert (simple diffuse)
Lambertian is basically the "diffuse" color, or scientifically, subsurface reflection. Other models deal mostly with specular (blinn, phong), which is usually added on top of the lambertian.
The Lambertian BRDF simulates subsurface reflection as light that first went inside the material and came back out toward the camera lens with an evenly spread-out distribution (since the light had to bounce around inside, it cannot be mirror-like at all). Therefore it does not reflect a direct image of the surroundings.
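Lambert's cosine term can be sketched in a few lines (a toy illustration, not any particular renderer's shader code):

```python
def lambert(n, l, albedo):
    """Lambertian diffuse: reflected light scales with cos(angle) = n.l,
    clamped to zero for light arriving from behind the surface.
    n and l are unit vectors (surface normal and direction to the light)."""
    ndotl = sum(ni * li for ni, li in zip(n, l))
    return albedo * max(ndotl, 0.0)

print(lambert((0, 1, 0), (0, 1, 0), 0.8))  # light overhead -> 0.8
print(lambert((0, 1, 0), (1, 0, 0), 0.8))  # grazing light  -> 0.0
```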
Surface reflection, on the other hand, will never be as uniform as subsurface reflection, so it will always concentrate more light in the area of a bright object like a light source.
Oren Nayar (diffuse for rough surfaces)
There is often a "roughness" parameter for "diffuse" (subsurface reflection). This is the so-called Oren–Nayar Reflectance Model (http://en.wikipedia.org/wiki/Oren–Nayar_Reflectance_Model). Its roughness parameter controls how much light is reflected back in the direction of the lightsource, which is characteristic of rough (or dusty) surfaces. The rougher the surface, the more the diffuse reflection flattens out. Roughness here generally does not mean BIG roughness, but very fine bumps on a surface. Materials like velvet or skin can be considered rough, because at a very fine detail level you have pores and the threads that make up the velvet. Something like plastic is far smoother at the microscopic level, while rubber, rock or rust would have a much higher roughness than skin or velvet. Rough surfaces scatter light in many directions (but never quite evenly in all directions – that is a simplified representation).
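For the curious, the simplified (qualitative) Oren–Nayar model can be written down directly; with roughness set to zero it collapses back to plain Lambert. This is a sketch of the published approximation, not production shader code:

```python
import math

def oren_nayar(theta_i, theta_r, phi_diff, sigma, albedo):
    """Simplified Oren-Nayar diffuse term. theta_i/theta_r are the light
    and view angles from the normal (radians), phi_diff is the azimuthal
    angle between them, sigma is the roughness. sigma = 0 gives Lambert."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    term = A + B * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta)
    return albedo * math.cos(theta_i) * term

# Zero roughness is exactly Lambert (albedo * cos(theta_i)):
print(abs(oren_nayar(0.5, 0.5, 0.0, 0.0, 0.8) - 0.8 * math.cos(0.5)) < 1e-12)
```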
The most common simplified BRDF's for specular are blinn, phong and ward. The base shader is always a lambertian + either a blinn or a phong highlight.
Blinn (highlight, less distortion at glancing angles)
Blinn is a refined version of Phong. The blinn highlight, compared to phong's, is much better at keeping its shape as the incidence angle changes.
Phong usually produces a more stretched highlight at a glancing angle, whereas blinn's stays the same.
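The difference between the two is easiest to see in their formulas: Phong compares the view direction with the mirrored light direction, while Blinn compares the normal with the half vector between light and view. A toy sketch:

```python
import math

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(l, v, n, shininess):
    """Phong highlight: angle between the mirror direction of l and v."""
    r = tuple(2 * dot(n, l) * ni - li for ni, li in zip(n, l))
    return max(dot(r, v), 0.0) ** shininess

def blinn(l, v, n, shininess):
    """Blinn highlight: angle between n and the half vector of l and v."""
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    return max(dot(n, h), 0.0) ** shininess

n = (0.0, 1.0, 0.0)
l = normalize((1.0, 1.0, 0.0))   # light at 45 degrees
v = normalize((-1.0, 1.0, 0.0))  # camera mirroring the light
print(phong(l, v, n, 32), blinn(l, v, n, 32))  # both peak here (about 1.0)
```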
Ward-anisotropic (or simply “ward” – anisotropic highlights)
Ward is a decent, general microfacet specular model (like cook-torrance) that allows you to specify different roughnesses in different directions, hence it's anisotropic. Ward is the name of the researcher who devised this anisotropic BRDF.
Cook-Torrance is a pretty great microfacet model. You can use different distribution functions to get different highlight shapes, including anisotropic ones.
Lafortune is a multi-lobed model (i.e. it's like 3 phong's together), that is parameterised to allow you to "move" the position of each lobe as well as specify its roughness. By mathematically fitting measured BRDF data you can generate a fairly realistic representation of real-world materials.
There are other types, but they're less common. Consult your renderer's manual to find out which particular model it uses.
Be warned: every renderer has its own definition for the above shading types. For example, 3ds max has something called Oren-Nayar-Blinn, which is an oren nayar shader with a blinn highlight. And lambert is usually the same as blinn except with no highlight. So things are very dependent on your 3d app.
It's important to stop thinking of raytraced reflections as fundamentally different things from BRDF highlights like blinn or phong. They're both simulating the same thing – specular reflection from a surface – they're just calculated in a different way. So raytraced reflections will look different if you switch between a phong BRDF and a blinn one.
08-28-2009, 04:09 AM
While using "whatever looks right" is of course true, understanding what different brdf's do and what they're suited for is very important, if for no other reason than it gives you a head start in creating the surface you desire.
As computing power increases, the prevalence of physically correct rendering will increase too. Understanding the principles of shading is important for keeping the accuracy that will make your pictures look real.
In the real world the microstructure of surfaces is very complex, and the currently used reflection models are very simple and thus, unfortunately, not precise. In real life every surface has its own unique microstructure (though some renderers, like Vray, are already adopting the ability to input measured BRDF data).
On a microstructure level in real life: Think of the surface microstructure like a mountain range modelled by a noise pattern (like in bryce or terragen or something). Increasing the specular roughness parameter of a BRDF raises the 'height' of those peaks.
Now imagine that mountain range being modelled out of quadrilaterals that are perfect mirrors. The sun is shining and we're flying in a plane looking down on a single square-mile part of that range with a camera. A specular BRDF model is essentially calculating how many of those mirror facets reflect the sun's light into our camera.
If you think about it, some of the mirrors will be shadowed by other peaks and so will reflect no light. Similarly, other mirrors will be hidden from us by peaks blocking our view. These effects are known as shadowing and masking in the literature. Most analytic BRDF models still do not take interreflection into account (a ray that first hits one facet, then hits another and finally reaches the camera lens).
Shadowing and masking are the names of two effects that are modeled by microfacet brdfs such as cook-torrance, ward and a&s. They are descriptions of the effects of 'peaks' in the surface microstructure shadowing light onto other microfacets and blocking the view of the camera to other microfacets, respectively. Measured brdfs will also include these effects, but just by measurements rather than explicit modeling.
Measured BRDFs just measure the reflectivity of a surface for many different light/camera direction orientations using an instrument called a gonioreflectometer (http://en.wikipedia.org/wiki/Gonioreflectometer) or a light stage. They're not actually calculating anything to do with the surface, just measuring the amount of light that comes back towards the camera for a given light direction.
A BRDF cannot represent subsurface effects by its very nature, since it only deals with the integral at a single point. Subsurface effects rely on the light arriving over the entire surface, which is why we give the scattering functions for them a special name - BSSRDFs.
Light stage capture can represent subsurface effects if an entire object (such as a face) is captured, as seen here: http://gl.ict.usc.edu/Research/afrf/ and you also find things called BTFs which are captures of a small section of a surface. Quite a lot of research has gone into them from a real-time point of view, particularly from Microsoft.
The clever part of these models is that they can calculate how much light ends up in our camera for a particular region given only the pattern used to model the range (called the distribution function), and the roughness parameter, using statistical methods.
The exact distribution functions used depend on the model. They are just simple functions describing how many of those microfacets point in a given direction, or what kind of "shape" the microstructure is. There is some evidence that real microstructures are closer to fractal in nature, but I don't know of any BRDFs that can approximate this.
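As an example of such a distribution function, here is the classic Beckmann distribution, one common choice among several (this is an illustrative sketch, not tied to any particular renderer):

```python
import math

def beckmann(cos_h, m):
    """Beckmann microfacet distribution: relative density of facets whose
    normals make angle theta_h (given as its cosine) with the surface
    normal; m is the RMS slope, i.e. the roughness."""
    c2 = cos_h * cos_h
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / (m * m)) / (math.pi * m * m * c2 * c2)

# A rougher surface spreads facet normals more widely: at 30 degrees off
# the normal, the rough surface has far more facets pointing that way.
off = math.cos(math.radians(30.0))
print(beckmann(off, 0.1) < beckmann(off, 0.5))  # -> True
```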
There is currently no single analytic BRDF model that can accurately represent all real-world materials; the Cook-Torrance model comes fairly close, but it is not very convenient to work with from a sampling point of view. It would be most accurate to use measured BRDF data, but even then, a certain number of approximations and assumptions are typically involved.
Reflection and refraction override diffuse. What this means is that if you have a highly reflective material such as a metal, your diffuse will hardly be seen at all. So 100% reflective = 0% diffuse, and at 100% refractive there is still reflection, but no diffuse at all. Conductors (metals) have no diffuse component, so for metals you set the diffuse to 0% (black).
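The "override" is really just energy bookkeeping; a hypothetical helper (not a real renderer API) makes the point:

```python
def remaining_diffuse(reflectivity, refractivity):
    """Whatever is not reflected or refracted is all that is left for the
    diffuse component (hypothetical bookkeeping helper)."""
    return max(0.0, 1.0 - reflectivity - refractivity)

print(remaining_diffuse(1.0, 0.0))   # mirror metal: no diffuse left -> 0.0
print(remaining_diffuse(0.04, 0.0))  # typical dielectric head-on -> 0.96
```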
All surfaces reflect. The least reflective surface known has a reflectance of about 0.045 (4.5%): http://forums.cgsociety.org/showthread.php?f=21&t=584234 . Most surfaces have a glossy reflection; mirror-like ones are rare.
The most reflective material available is Spectralon, which reflects about 99% of incident light in a roughly lambertian fashion (but definitely NOT perfectly lambertian). A sheet of white paper is about 80% reflective.
Add mapped reflection to all materials without exception if you want a realistic result.
Reflections are perhaps the second most important factor of realism after global illumination (technically speaking – talent and experience still come first).
Here is an illustration I did without reflections, and the same one with reflections added. Look how much richer the second image looks.
This is what archvizers use constantly - fresnel reflections.
Reflections in real world are mostly blurred, not mirror-like.
The same goes for harsh versus soft shadows: as said above, use soft shadows most of the time and you will make no mistake.
The darkest materials you find commonly sit around 3% iirc. You can produce materials which reflect as little as 1% of incident light, but you don't find them anywhere except in a laboratory. The point is, surfaces reflect quite a lot. But reflections for dielectrics must have a fresnel falloff (read further about dielectrics).
Transparency at the microstructure level means that light is not converted into heat (which would make a black or dark material) and that the material transmits the light directly.
If a transparent or translucent material is colored, it absorbs selective wavelengths of light and passes its own color more readily than others. The complementary colors will not transmit at all.
Whether light is transmitted or reflected at the surface depends on the angle at which it hits the surface and the index of refraction of the material. We model this using the fresnel equations. Essentially, light that hits a dielectric head-on is almost certain to be transmitted, while light that hits at a glancing angle is almost certain to be reflected.
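In practice renderers often use Schlick's approximation to the Fresnel equations; a minimal sketch for a dielectric surrounded by air:

```python
def schlick_fresnel(cos_theta, ior):
    """Schlick's approximation: reflectance of a dielectric in air as a
    function of the cosine of the incidence angle."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2  # reflectance at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Glass (ior ~1.5): ~4% reflective head-on, ~100% at a glancing angle.
print(round(schlick_fresnel(1.0, 1.5), 3))  # -> 0.04
print(round(schlick_fresnel(0.0, 1.5), 3))  # -> 1.0
```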
Only one of the above events can occur for any photon/surface interaction, but we're modelling the net result of unimaginable numbers of these interactions, so for modelling a given surface we deal with the percentage of photons that undergo a particular type of interaction. So a metal might reflect 50% of the photons that hit it and absorb the other 50%, or we might model glass by saying that it transmits 90% of the photons that hit it dead-on, reflects 5% and absorbs the rest.
Even metals transmit light, despite being thought of as completely opaque. If you take a really thin sheet of gold, for example, and put a bright white light on the other side, you'll see a little bit of green light coming through the sheet. This is odd, because if you were to look from the side of the light source the gold would look yellowish again. I guess the green light would be the subsurface part, but there isn't much of it around.
Refraction is a case of transmission. Refraction is the bending of rays (a change of their direction) as they are transmitted from one medium into another. How strongly a material bends light is described by its index of refraction, and the bending also depends on the orientation of the surface relative to the light.
A relative IOR of 1 means the transparent object is not visible (no bending of rays happens, since both media are of the same optical density). You can see glass because of the refraction of rays and because of partial absorption. The cause is the difference in media densities: light travels slightly slower when passing through a denser medium. Imagine a stone thrown into water: if it strikes at a right angle it won't change its direction, but at a steep angle the denser water deflects it. That's why, when you look at a glass bottle, you hardly see the front surface itself but clearly see the refracted silhouette.
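The bending itself is given by Snell's law, n1*sin(theta1) = n2*sin(theta2); a small sketch:

```python
import math

def snell(theta_in_deg, n1, n2):
    """Snell's law: returns the refracted angle in degrees for a ray
    passing from a medium with index n1 into a medium with index n2."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    return math.degrees(math.asin(s))

# Air (1.0) into glass (1.5): the ray bends toward the normal.
print(round(snell(45.0, 1.0, 1.5), 1))  # -> 28.1
# Hitting straight on (0 degrees): no bending at all.
print(snell(0.0, 1.0, 1.5))             # -> 0.0
```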
08-28-2009, 04:10 AM
There is also an IOR used for reflection (the fresnel reflection falloff) and one used for refraction, and they are always equal (the reflective value is always the same as the refractive one).
The IOR is in fact related to both refraction and reflection: it governs how rays bend through the material as well as how much light energy is reflected rather than absorbed or transmitted. Hence all materials (not just transparent ones) have an IOR value, which affects specular/reflective/refractive light alike. Therefore, if you want to be scientific about it, the values should be equal for both reflection and refraction. You can find the numbers in tables. Mental ray controls both values with the same fresnel input, whereas Vray uses 2 different inputs.
Whenever light enters a material, absorption occurs. How much depends on the material and on how much light is scattered once it is inside the material. For example, light tends to move through glass in a straight line without being scattered once it's inside. This is why glass appears transparent rather than translucent. Absorption still occurs, just not very much, which is why the images you see through a glass object are slightly tinted.
Translucency is a case of transmission.
Surface microgeometry can cause the light to scatter in multiple directions in a similar way to glossy and diffuse reflections. This causes effects like the transmission seen through frosted glass.
It might help to see transmission as having different values of subsurface roughness, compared to surface roughness which results in specular or diffuse reflections. So a milky glass has a rougher subsurface so to say than a clear glass. Add to that a value to say how deep light can penetrate the surface, as I'm not sure if subsurface roughness alone would cut it.
On a microstructure level: translucency is subsurface effects. I'm not sure if there is a commonly accepted terminology for this, but I tend to use "subsurface reflection" for light that enters a surface, bounces around inside a bit, then exits back the way it came, and "subsurface transmission" for light that does the same but exits on the opposite side of the object.
What's actually going on inside is rather complicated and is generally modelled as a random walk - i.e. the photon travels a short distance inside the material before it interacts with an atom and might be absorbed (or, to think of it another way, has some of its wavelengths absorbed), changes direction and does the same thing again many, many, many times.
What we normally think of as translucency is caused by light bouncing around multiple times inside a material. The frosted glass effect, or diffuse transmission as I would call it, is caused by light being scattered onto a different direction at the surface of a material.
Subsurface scattering is a case of transmission, and occurs when light enters a material (i.e. is transmitted), bounces around a bit inside, and exits at a different location from where it entered. The interactions inside the material cause some of its energy to be absorbed, usually different amounts at different wavelengths, so when the light exits it is dimmer and tinted. Subsurface scattering only occurs for materials that have a dielectric interface and is actually how every non-metallic material gets its colour (remember that dielectric reflections are always white). i.e. every time you see a coloured object that's not a metal, the light has entered the material, bounced around a bit becoming coloured in the process, then left the material again at a different point. Thankfully, most materials are so hard that the entry and exit points are almost identical and we can pretend that they are the same.
Measured BSSRDF's: the Jensen/Donner multilayered BSSRDF does a reasonable job of highly scattering materials (e.g. organic materials like skin). You can't simulate it with a BRDF, because a BRDF by definition assumes that light enters and exits the material at the same point (a reasonable assumption for many surfaces). Path tracers like Maxwell don't use a BSSRDF at all, but simulate the random walk process directly to calculate subsurface scattering.
The difference between the Diffuse and SSS/Translucency shader properties is the (subsurface) spread of the light, or how far from where the light enters the material will it exit the surface again. Light hitting a solid stone wall will exit the subsurface so close to where it entered that as far as your shader is concerned the spread is zero. Light hitting skin will usually exit a very visible distance away from where it entered, so this can't be ignored as skin will look 'dead' then, hence the need for Sub-Surface Scattering shaders.
There is also dispersion, which is the splitting of refracted light. Dispersion is caused by the tendency of materials to refract different wavelengths of light to a different degree, causing rainbow-like colour effects (such as rainbows, for instance :)). It is actually quite a common effect: you can see dispersed caustics through water and diamonds. Newton famously used a common glass prism to split light.
Dispersion produces coloured refraction and, as a result, coloured caustics: http://farm4.static.flickr.com/3181/3098537284_9a619d37d8.jpg?v=0
The colours evident in pearls and gasoline are not due to surface reflection, they are due to thin-film interference. This is due to dispersion in thin (not much bigger than the wavelength of visible light) layers of a transmitting material (i.e. a dielectric) covering the surface. The complex patterns and colour shifts depend on the number and thickness of the layers. It's essentially the same process that causes rainbows, just a bit more complicated.
3) Absorption. The light energy is converted to heat energy and is 'lost'. Of course it isn't really lost, but in rendering we're only concerned with light and not heat. In practice this means that no material should ever be 100% reflective if you want things to look real.
If a surface is white, it reflects all wavelengths; if it's coloured, it reflects mostly the wavelengths of the colour you see. For you this means that some coloured surfaces may not respond to coloured lighting as readily as you would expect.
At the micro level, on a smooth surface a photon that gets reflected off the surface is gone. But on a rough surface the photon may, after surface reflection, have to go through the whole lottery again when it hits the surface another time, and might then be absorbed after all. Absorption, I assume, happens mostly in the subsurface, which must be like a crazy pinball machine with all the light bouncing around and hitting everything: light bounces from particle to particle, being absorbed, re-emitted, absorbed, re-emitted in another direction, and so on. Quantum level.
When light is absorbed by a surface, the surface appears darker (dark clothing always gets hotter than white, by the way). That's why the darkest material yet made is a carbon-fibre structure. So 100% black would mean 0% reflection off the surface, and that never happens in real life: all materials reflect to some extent.
Colour is a result of selective absorption, and selective reflection.
Surfaces appear coloured because they absorb some wavelengths (the colours you don't see) and reflect others (the colour you see). So 0% diffuse (black) won't reflect any light at all, no matter how intense the light hitting it, because you're telling the shader to absorb light completely. That's why you should never set diffuse to 0% (pure black), even for a very dark material. There's also an artistic reason for not using completely black diffuse for dielectrics: it makes the form disappear and look like a hole in the picture.
So we can say the following: light that is not reflected off the surface goes into the material. There, if it is not absorbed (converted to heat, making the material look darker), it either passes through and out the other side, which is transmission, or comes back out of the side it entered, which is subsurface reflection. Colour is always a case of absorption, in transparent and opaque materials alike, meaning selective absorption of wavelengths: black means all wavelengths absorbed, white means all reflected.
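Absorption along a path through a medium follows the Beer-Lambert law: transmitted light falls off exponentially with distance. This sketch shows how selective (per-wavelength) absorption produces colour in a transmitting material; the per-channel coefficients are hypothetical values invented for the example:

```python
import math

def transmittance(absorption_coeff, distance):
    """Beer-Lambert law: the fraction of light surviving `distance`
    through a medium with absorption coefficient sigma (per unit length)."""
    return math.exp(-absorption_coeff * distance)

# A 'red' glass: green and blue are absorbed far more strongly than red.
sigma_rgb = (0.1, 2.0, 2.5)      # hypothetical per-channel coefficients
tint = tuple(transmittance(s, 1.0) for s in sigma_rgb)   # mostly-red result
```

Doubling the distance squares each channel's transmittance, which is why thick coloured glass looks more saturated than thin glass of the same material.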
08-28-2009, 04:10 AM
Metals don’t have a diffuse component, as they don’t subsurface scatter light (or so little that it can be ignored). They produce only specular reflection. And metals tint direct reflection, whereas dielectrics do not, because with metals much more absorption of certain wavelengths happens at the surface reflection itself. That's why you can have coloured specular reflections with metals that you can't have with dielectrics.
Metals don't have a higher index of refraction than non-metals; gold, for instance, has a real IOR of ~0.47. The difference is that they have a large complex part to their index of refraction, which dramatically changes the shape of the fresnel curves. It just so happens that putting a very high value (20-1000) into the real part of the fresnel equations while leaving the complex part at zero gives you a curve similar to proper complex fresnel. This, I guess, is why the Maxwell docs suggest using these values for metals (even claiming it's because metals are 'denser', which as far as I can see is completely bogus).
With metals light either gets absorbed or gets reflected off the surface. The amount of light bouncing around under the surface and coming back out is so little it can be ignored. Also with metals there's much more absorbance of certain wavelengths with the surface reflection. That's why you can have coloured specular reflections with metals that you can't have with dielectrics.
No real subject produces a perfect specular reflection. Polished metal, glass and water come close, but never 100%.
For dielectrics absorption of certain wavelengths (it looks like color) is mostly a subsurface effect. For metals it's mostly a surface effect.
The fresnel reflection for dielectrics vs conductors
The fresnel rule also applies to metals, but make sure you use the full equation, not the simplified one used to speed up calculations for dielectrics. Here it gives you the ratio between reflected and absorbed light. Most shaders don't use the complex fresnel function.
For dielectrics a simpler version is usually used, which takes only the value n (your shader's 'IOR' value) as user input (the incidence angle it gets from the renderer). For metals the full equation must be used, which has two user inputs, n and k (the extinction coefficient), and also uses complex numbers (http://en.wikipedia.org/wiki/Complex_Number). The simple equation basically keeps k at 0, which has the benefit of leaving only one parameter to worry about, and you also won't have to bother with complex numbers. The thing is that k = 0 only works for dielectrics and can't be used for metals, which have varying k values.
Add to that the fact that not only different materials but also different wavelengths (!) of incoming light result in different n and k values, and you can see that it can get very complicated. This doesn't matter much in dielectrics so luckily we can still simulate those pretty accurately with only one value.
However it can be very noticeable in metals, and it's things like this that give metals like copper different reflection colours at different angles (slightly more green at grazing angles).
So ideally for metals you'd need a table with n,k values for the whole visible spectrum range. Which finally explains why the single n or IOR value found next to metals in a lot of shader IOR lists is useless as you'd need at least the k value as well and preferably those two for each wavelength in the visible spectrum.
But as long as most people can’t write a metal shader, setting the IOR value to 20 or higher will give a reflection curve similar to the complex metal fresnel. The best thing would be to have such equations built into our renderers, and they will probably appear with time.
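The full complex Fresnel equations are short enough to sketch directly. This assumes unpolarized light arriving from vacuum/air, and the gold n + ik value is an approximate textbook figure for ~550 nm, not measured spectral data:

```python
import cmath
import math

def fresnel_unpolarized(eta, theta_deg):
    """Exact Fresnel reflectance for unpolarized light hitting a medium
    with (possibly complex) index of refraction eta, from vacuum/air."""
    theta = math.radians(theta_deg)
    cos_i = math.cos(theta)
    sin_t = math.sin(theta) / eta          # Snell's law, done in complex numbers
    cos_t = cmath.sqrt(1.0 - sin_t * sin_t)
    r_s = (cos_i - eta * cos_t) / (cos_i + eta * cos_t)   # perpendicular pol.
    r_p = (eta * cos_i - cos_t) / (eta * cos_i + cos_t)   # parallel pol.
    return (abs(r_s) ** 2 + abs(r_p) ** 2) / 2.0

gold = complex(0.47, 2.4)     # approximate n + ik for gold near 550 nm
glass = complex(1.5, 0.0)     # dielectric: k = 0, the simplified case

R_gold = fresnel_unpolarized(gold, 0.0)    # bright even at normal incidence
R_glass = fresnel_unpolarized(glass, 0.0)  # ((n-1)/(n+1))^2 = 0.04
```

Note how the tiny real IOR of gold still yields a high reflectance at 0 degrees: it's the large k, not a "dense" real IOR, doing the work. Evaluating this per wavelength (with per-wavelength n, k tables) is what gives metals their tinted, angle-dependent reflections.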
Whether light is transmitted or reflected at the surface depends on the angle at which it hits the surface and the index of refraction of the material.
Fresnel reflection falloff is the ratio between surface (specular) and subsurface (diffuse) reflection.
All dielectrics have fresnel reflection so you should always apply a fresnel falloff for their reflection.
The fresnel reflection coefficient controls the proportion between the reflection facing the camera and the reflection facing away from it. The higher the IOR, the smaller the difference.
The fresnel reflection differs from a straight falloff by its curve: it's more gradual at the beginning and very steep at the end.
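You can see that "gradual at the beginning, steep at the end" shape numerically with Schlick's well-known approximation to the dielectric Fresnel curve (a standard shortcut, not the exact equations):

```python
import math

def schlick(cos_theta, ior=1.5):
    """Schlick's approximation to dielectric Fresnel reflectance:
    F = F0 + (1 - F0) * (1 - cos(theta))^5, with F0 from the IOR."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Reflectance at several incidence angles for an IOR-1.5 dielectric.
samples = {deg: schlick(math.cos(math.radians(deg)))
           for deg in (0, 30, 60, 80, 89)}
```

Between 0 and 30 degrees the reflectance barely moves off its ~4% base value, while between 80 and 89 degrees it shoots up towards 1.0: nothing like a straight linear falloff.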
The ratio (!) between the subsurface (diffuse) and surface (specular reflection) parts is determined by the fresnel rule. In the case of a smooth blue plastic in a white environment making the object look more white at low angles and more blue at high angles. As it's a ratio between the two you can see how it can't be larger than one (the law of energy conservation).
Something like a town with white rooftops and blue streets: viewed from a road some distance away you'll see only the white rooftops, while flying over the town you'll see a lot more of the blue streets, but the amount of rooftops and streets stays the same.
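In shader terms, that ratio is just an energy-conserving blend between the two components. A minimal sketch (the colours and fresnel values are made-up inputs; a real shader would get the fresnel value from the equations above per sample):

```python
def shade(diffuse_color, reflection_color, fresnel):
    """Energy-conserving mix: the fraction reflected at the surface
    (fresnel) goes to specular; only the remainder (1 - fresnel) is
    available for subsurface/diffuse reflection. Weights sum to 1."""
    return tuple(fresnel * s + (1.0 - fresnel) * d
                 for d, s in zip(diffuse_color, reflection_color))

blue_plastic = (0.05, 0.1, 0.6)    # assumed diffuse albedo
white_env = (1.0, 1.0, 1.0)        # white surroundings being reflected

facing = shade(blue_plastic, white_env, 0.04)   # head-on: mostly blue
grazing = shade(blue_plastic, white_env, 0.9)   # grazing: mostly white
```

Because the weights are `fresnel` and `1 - fresnel`, the result can never exceed the incoming light: that is the energy conservation the ratio guarantees.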
Specular reflections of dielectrics are never tinted.
With dielectrics part of the light coming in gets absorbed, it's gone as far as CG is concerned.
The second part gets scattered in the subsurface of the material and actually makes it back out. Certain wavelengths get absorbed giving the colour to the material, like blue plastic. It's close enough to being perfect diffuse to get away with treating it as a lambert function.
The third part is the part that gets bounced off the surface of the material, with almost the complete spectrum being bounced back.
The fresnel equations have nothing to do with microgeometry, it’s essentially a statistical averaging of quantum effects i.e. interactions that depend on the atomic structure of the material. It's not useful to think about "shape" of the surface at this level, as how light interacts at this scale is governed purely by the electromagnetic properties of the material.
This is why we have the split between conductors and dielectrics, because at a quantum level they behave very differently. So the electromagnetic properties of a material decide (basically speaking) whether a single photon is reflected, transmitted or absorbed, and at what wavelengths. This is what we model with the fresnel equations. The surface microstructure on the other hand decides the scattered directions of many photons. This is what we model with the BRDF.
Adjusting the fresnel falloff:
First you increase the away-from-camera reflection (the 90-degree reflection), and only when it reaches maximum do you start increasing the facing-the-camera reflection (the 0-degree reflection).
Note also that you can't just use a fresnel function out of the box with rough surfaces, because the fresnel function only really works for perfectly specular surfaces. Maxwell perhaps gets around this by modifying the fresnel result with a user-controlled curve depending on surface roughness. You need to increase the 0-degree reflectance for rough surfaces; just eyeball the effect. As far as I am aware, there is no BRDF that simulates this effect in any meaningful (much less physically accurate) way. It's down to interreflections between surface microgeometry, which is not modelled in any analytical BRDF (but would be in a measured BRDF, since that's just capturing the end result). It's quite a subtle effect.
08-28-2009, 04:11 AM
To sum up, for a fully raytraced, scientifically-based shader you'd have:
- Absorption (colour)
- Fresnel switch between surface and subsurface
- Absorption-transmission slider (colour)
- Depth (of penetrating light)
You should also be familiar with real-world camera controls and effects and how to use them in your 3d package:
- exposure
- motion blur
- depth of field
- white balance
Those are essential. But there are many others, caused by effect filters and so on.
Exposure is how much light the film receives: longer exposure gives brighter images, shorter gives darker. The controls are the f-number (which also controls depth of field), shutter speed and film speed (ISO). This is a crucial element of controlling the contrast and brightness in your renders.
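Those three controls combine into a single exposure value (EV). A small sketch, assuming the common convention of normalising EV to ISO 100 (so the same scene brightness gives the same number whatever the film speed):

```python
import math

def exposure_value(f_number, shutter_seconds, iso=100):
    """Scene exposure value normalised to ISO 100:
    EV = log2(N^2 / t) - log2(ISO / 100).
    Higher EV means the settings admit less light (a brighter scene)."""
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)

ev_sunny = exposure_value(16, 1 / 125)           # ~15: the 'sunny 16' rule
ev_indoor = exposure_value(2.8, 1 / 30, iso=400) # dim interior, much lower EV
```

Each whole EV step is one "stop", a doubling or halving of light; trading f-number against shutter speed at constant EV is exactly how photographers balance depth of field against motion blur.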
Motion blur is caused by the shutter staying open long enough for a moving object to leave a trace on the film. If you render an animation, it must always be used, otherwise the motion looks strobed and steppy; animation without motion blur is a sign of unprofessional work. Usually the shutter is set at half the frame duration (0.5, a 180-degree shutter). For stills it's not as important, unless you want to show movement or hide a lack of detail.
Motion blur never has any kind of falloff; it's always uniform. So the 'trail' of a motion-blurred light, for example, won't be brighter at the beginning and duller at the end. This is important to remember if you are trying to simulate motion blur.
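That uniformity is easy to express: averaging over the shutter interval with equal weights, never a ramp. A toy sketch (the sample count and the linear-motion example are illustrative assumptions):

```python
def motion_blurred_value(position_at, shutter_open, shutter_close, samples=64):
    """Average a value over the shutter interval with UNIFORM weights --
    every instant the shutter is open contributes equally, so the trail
    has no bright or dull end."""
    n = samples
    times = [shutter_open + (shutter_close - shutter_open) * (i + 0.5) / n
             for i in range(n)]
    return sum(position_at(t) for t in times) / n

# An object moving linearly at 10 units/s over a 0.5 s (180-degree) shutter:
# the blurred result is simply the midpoint of its travel.
blurred = motion_blurred_value(lambda t: 10.0 * t, 0.0, 0.5)
```

A weighted (ramped) average would bias the trail towards one end, which is exactly the artefact you see in faked "smear" effects and never in a real photograph.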
Depth of field occurs because a lens focuses light rays coming from a single point in a cone toward the film back. The length of this cone is dependent on the distance of the subject from the camera and the arrangement of the lenses in the camera. If the film back does not lie exactly at the apex of this cone, then the intersection of the cone and the film forms a circle (the circle of confusion). This essentially means that 3d points become circles when projected onto the film and the image becomes blurred.
Depth of field is controlled mostly by the f-number, but also by the focal length of the lens and the size of the film gate. It creates the effect of falling out of focus, which is very important for realistic rendering, since in many shots, such as macro shots, it must be present as a rule.
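The circle-of-confusion description above reduces to a simple thin-lens formula. A sketch, assuming an ideal thin lens (real compound lenses differ in detail):

```python
def circle_of_confusion(focal_mm, f_number, focus_dist_mm, subject_dist_mm):
    """Blur-circle diameter on the film (mm) for a thin-lens model:
    c = A * f * |s - s_f| / (s * (s_f - f)), with aperture A = f / N,
    focus distance s_f and subject distance s (both from the lens)."""
    aperture = focal_mm / f_number
    return (aperture * focal_mm * abs(subject_dist_mm - focus_dist_mm)
            / (subject_dist_mm * (focus_dist_mm - focal_mm)))

# 50 mm lens at f/1.8 focused at 2 m: a subject at 4 m is visibly blurred...
c_wide = circle_of_confusion(50.0, 1.8, 2000.0, 4000.0)
# ...while stopping down to f/16 shrinks the blur circle dramatically.
c_stopped = circle_of_confusion(50.0, 16.0, 2000.0, 4000.0)
```

A subject exactly at the focus distance gives a zero-diameter circle, and higher f-numbers shrink the circle for everything else, which is why stopping down extends the depth of field.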
White balance
Every light has a colour temperature, but our eyes lie about it (and about many other things) because they adapt very quickly, so we see most lights as white. The white balance is the colour temperature that will be taken as white: hotter light will look more bluish, cooler more reddish. http://www.mediacollege.com/lightin...temperature.gif
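The physics behind colour temperature is Planck's blackbody law: a hotter emitter shifts its energy towards shorter (bluer) wavelengths. This sketch compares the red/blue balance of tungsten-ish light against daylight (the two wavelengths standing in for "red" and "blue" are an illustrative simplification, not a full spectral-to-RGB conversion):

```python
import math

def planck_radiance(wavelength_nm, temp_k):
    """Planck's law (relative units): spectral radiance of a blackbody
    at the given wavelength and temperature."""
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23   # Planck, light speed, Boltzmann
    lam = wavelength_nm * 1e-9
    return (2 * h * c ** 2 / lam ** 5) / (math.exp(h * c / (lam * k * temp_k)) - 1)

def red_blue_ratio(temp_k):
    """Ratio of red (650 nm) to blue (450 nm) energy for a blackbody."""
    return planck_radiance(650, temp_k) / planck_radiance(450, temp_k)

tungsten = red_blue_ratio(3200)   # well above 1: warm, reddish light
daylight = red_blue_ratio(6500)   # near or below 1: the 'white' balance point
```

This is why a daylight-balanced shot makes tungsten bulbs glow orange, and a tungsten-balanced shot turns windows blue.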
As for colour temperature, my rule is that there are no rules. There are kelvins and lumens, but if you stick to them rigidly you become a robot, whereas you should work with colour and brightness as an artist. The general rule is that outdoor lights are much brighter than indoor ones, and that light sources have colour. Some artists don't use any colour for light sources, because when they look at a light they see it as white, since their eyes adapt quickly. But if you take a photograph you will see that there is no such thing as white light (unless it is at the white-balance point).
Remember: color creates mood, and in movies they use colored gels, tint light sources and finally color grade shot material, so study movie shots for colour temperature and mood.
And there is also monitor physics: your software must always have proper calibration http://forums.cgsociety.org/showthread.php?f=2&t=188341 , a proper colour temperature (6500K) and gamma correction applied http://forums.cgsociety.org/showthread.php?t=610790 .
All of these effects should be under your control (where, how and when) if you want a physically realistic rendering. The next step is how you achieve them in your renderer, either with a more advanced algorithm like ray tracing or, better, spectral rendering (like Maxwell or Fryrender), or a fake-based one like Reyes (Renderman).
08-25-2010, 12:57 AM
Killer thread guys. Thanks so much for taking the time to explain all of this. It's a lot to process :)
08-25-2010, 04:07 AM
Is it worth dropping in some color space theory here (eg dealing with sRGB monitors), or do you reckon that's outside of the scope of this thread?
08-25-2010, 04:59 AM
Is it worth dropping in some color space theory here (eg dealing with sRGB monitors), or do you reckon that's outside of the scope of this thread?
Sure man, if you can add more info, I'm all for it. :thumbsup:
08-25-2010, 04:59 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.