Tutorial:
Subsurface Scattering (SSS) is the effect of light passing through a translucent surface and being partially absorbed and refracted as it interacts with the inner material. In some cases the light may have a uniform refraction (e.g. stained glass), while in other cases the refraction index may vary from particle to particle, and the light rays then become “scattered” (e.g. wax).
When I finally figured out what was going on inside real-life SSS materials, I realized that two main things are happening:
- In both “waxy” and “glassy” materials, light enters the object and some rays get farther through the material than others. In other words, the farther into an SSS material you go, the more opaque it is (like thicker stained glass vs. thinner stained glass).
and
- In “waxy” materials only, light also gets scattered by the particles inside, creating a blurred refraction. So the closer to the surface the light is, the less it gets scattered; the more particles a ray hits and the farther the light travels, the more it becomes scattered. It gets really crazy in there.
The problem with 3d computer imaging programs is that they aren’t really 3d. They are what we call 2.5d. If a single ray passes through a 2d object (or polygon), that ray will intersect exactly one point. If a ray passes through a real 3d object, that ray will intersect many points (some would say an infinite number, but for practical purposes there are a finite number of atoms/molecules lying along any given ray; plus, science doesn’t really know).
In 2.5d, a ray passes through one side of an object intersecting one point, through the interior where it intersects ZERO points, and out the other side intersecting only one more point. Really, you have to think of “3d” computer-generated objects as an arrangement of solid 2d objects (polygons) in a 3d space.
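To make the 2.5d point concrete, here is a tiny Python sketch (purely illustrative, not Lightwave code): a ray fired through a closed object only “sees” its two surface hits, entry and exit, and the gap between them is exactly the thickness this tutorial ends up faking with render passes. The sphere, the ray, and the function name are all made up for the example.

    import math

    def ray_sphere_hits(origin, direction, center, radius):
        """Return the ray distances (t values) where the ray enters and exits a sphere."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        dx, dy, dz = direction
        a = dx * dx + dy * dy + dz * dz
        b = 2.0 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                          # ray misses the object entirely
        sq = math.sqrt(disc)
        return (-b - sq) / (2 * a), (-b + sq) / (2 * a)

    hits = ray_sphere_hits((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
    if hits:
        t_enter, t_exit = hits                   # only two surface points, nothing in between
        print("thickness along this ray:", t_exit - t_enter)   # 2.0 for a unit sphere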
The reason computers have such difficulty with Subsurface Scattering is that the effect is entirely based on the fact that light may enter an object and interact with many particles before leaving or being absorbed.
Nevertheless, armed with a solid understanding of this phenomenon, Lightwave can be fooled and cajoled into fulfilling our dark purpose(s).
Lightwave doesn’t have a native solution to simulate SSS, and SSS plug-ins are scarce, buggy, and slow. So, lying in bed one college winter holiday break with my computer 200 miles away in my apartment, I became determined to invent a plug-in-less solution to simulate SSS. And I did, that night. I’ve refined it over the years, and when Newtek changed refraction blurring from a post-processing effect to an actual surface effect, the solution was complete.
So, here you go:
1. Make a model.
2. Import the model into your LW scene in the Scene Editor.
3. Finalize all camera movements and the object you are working on.
4. Make a “near” null (for fog) - call it “FogMinNULL.” Move it to the point on the object nearest to the camera.
5. Make a “far” null (for fog) - call it “FogMaxNULL.” Move it to the point on the object farthest from the camera.
5a. If you are making an animation (as opposed to a still image), you may have to animate your nulls so they are always at the near and far points on the object relative to the camera. (Cleverer people than I could write a script for this, I’m sure; a rough sketch of the idea appears after step 19.)
6. Activate the fog volumetric (Linear, black); we will drive its minimum and maximum distances with expressions in the next step. The goal is a black fog that starts (is clear) at the point on the object nearest to the camera and ends (goes totally black) at the point on the object farthest from the camera.
7. In the Graph Editor, go to the Expressions tab at the bottom and hit the “Builder” button.
7a. In the drop-down on the Expression Builder, choose vmag (Distance, Explicit). This builds an expression that returns the straight-line distance between two points.
7b. Choose the X, Y, and Z coordinates of the active camera for inputs A-C, respectively.
7c. Choose the X, Y, and Z coordinates of FogMinNULL for inputs D-F, respectively.
7d. Accept the values, name the expression “FogMin,” and hit “Create Expression.”
7e. Choose the X, Y, and Z coordinates of the active camera for inputs A-C, respectively.
7f. Choose the X, Y, and Z coordinates of FogMaxNULL for inputs D-F, respectively.
7g. Accept the values, name the expression “FogMax,” and hit “Create Expression.”
7h. Apply FogMin to the Global.MinimumFogDistance channel.
7i. Apply FogMax to the Global.MaximumFogDistance channel.
8. Make sure that no lights affect diffuse, and turn off any global illumination, including the default ambient intensity.
9. In the Surface Editor, give the object (and any objects within it) a 100% white, 100% luminous surface. Give the surface any bump and smoothing you feel is necessary.
10. Render this scene at a good resolution, at least as large as the final movie resolution. Save it under a file name containing the word min or minimum (I will refer to it as the min image). The render should show the surface of the object getting darker the farther it is from the camera.
11. Go back to Modeler and flip the normals of the object (hit the f key).
11a. Copy your object and give it a 100% transparent surface with an appropriate refraction index (I find that 1.4 or higher works pretty well for wax) and 100% refraction blurring (in the Environment tab). Refraction blurring is not necessary if you plan to have a surface similar to stained glass; DO use refraction blurring if you plan to use a surface similar to wax.
11b. Place the new transparent object in exactly the same place as the white object.
12. Make sure “Ray Trace Refraction” and “Ray Trace Transparency” are checked in the Render Options and render the scene once again. You should see the “insides” of your object. Save this as a file with the word max or maximum in the name (I will refer to it as the max image).
13. In an image editor or movie compositing program (whichever is appropriate), bring the min image in over the max image and set the layer blending mode to “Difference.” What you will see is a greyscale image of your object in which thicker parts are represented by lighter values. Save this image or movie as “ObjectThickness.” You might start to see where I’m going here. (A small sketch of this composite appears after step 19.)
14. Go back to Lightwave (Scene Editor). You may want to save the old fog scene and the white-surfaced version of the object in case you later change something like a camera movement.
15. Remove the fog from the scene, let your lights once again affect diffuse, and re-enable any global illumination you want to use. Also, flip the normals of your object back to…well…normal.
16. In the Surface Editor, choose the Transparency channel and give your object an image map using the “ObjectThickness” image we just made, with the projection set to “Front.” You can adjust the contrast, brightness, and gamma of the ObjectThickness image to tweak the transparency of the object (the sketch after step 19 includes an example gamma tweak).
17. Give your new surface the same refraction index you gave it in step 11a.
18. Tweak the rest of your surface and lighting properties.
19. Render, tweak, and repeat this step.
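As promised in step 5a, here is a rough sketch of the null-placement idea in plain Python. It is not LScript or the LightWave SDK; camera_pos and vertices are stand-ins for whatever your scene actually gives you, and you would still keyframe FogMinNULL and FogMaxNULL at the resulting positions yourself.

    import math

    def fog_null_positions(camera_pos, vertices):
        """Return (nearest_vertex, farthest_vertex) of the object as measured from the camera."""
        def dist(v):
            return math.sqrt(sum((v[i] - camera_pos[i]) ** 2 for i in range(3)))
        nearest = min(vertices, key=dist)
        farthest = max(vertices, key=dist)
        return nearest, farthest

    # Example values only - replace with your real camera position and object points.
    camera_pos = (0.0, 1.0, -4.0)
    vertices = [(-0.5, 0.2, 0.0), (0.3, 0.9, 1.2), (0.0, 0.5, 2.5)]
    near_pt, far_pt = fog_null_positions(camera_pos, vertices)
    # Keyframe FogMinNULL at near_pt and FogMaxNULL at far_pt for this frame.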
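And for the curious, here is a hypothetical Python/NumPy sketch of what the “Difference” composite in step 13 boils down to: a per-pixel absolute difference of the min and max renders, which is why thicker areas come out lighter. The optional gamma line mirrors the tweak suggested in step 16. The file names and the gamma value of 1.8 are placeholders, not recommendations.

    import numpy as np
    from PIL import Image

    # Load the two renders as greyscale (placeholder file names).
    min_img = np.asarray(Image.open("object_min.png").convert("L"), dtype=np.float32)
    max_img = np.asarray(Image.open("object_max.png").convert("L"), dtype=np.float32)

    thickness = np.abs(min_img - max_img)                     # lighter = thicker
    thickness = 255.0 * (thickness / 255.0) ** (1.0 / 1.8)    # optional gamma tweak (1.8 is just an example)

    Image.fromarray(thickness.astype(np.uint8)).save("ObjectThickness.png")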
If LW radiosity were faster, this is all we’d need to do, and we could simply put a glowing flame object in the candle. Since it is not (unless you have Fprime and a decent computer), you can add the diffused light source effect by doing the following:
20. Parent a null to the light and make sure its X, Y, and Z are at 0,0,0 so it is in the same spot as the light.
21. In the SSS surface’s Luminosity channel, add a gradient layer with blending mode = Additive and Input Parameter = Distance to Object, where the object is the null we just created.
22. Set the gradient alpha to 0 at the bottom and 100 at the top, and tweak. (A rough sketch of this falloff appears below.)
(credit goes to ThriJ for steps 20-22)
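For reference, here is a tiny illustrative sketch (plain Python, not a LightWave gradient) of what steps 20-22 amount to: an extra luminosity term that is strongest right at the light null and fades to nothing with distance. The falloff radius and strength values are arbitrary examples, not values from the tutorial.

    import math

    def luminosity_boost(surface_point, light_null_pos, falloff_radius=1.0, strength=1.0):
        """Extra luminosity (0..strength) added on top of the base surface luminosity."""
        d = math.sqrt(sum((surface_point[i] - light_null_pos[i]) ** 2 for i in range(3)))
        return strength * max(0.0, 1.0 - d / falloff_radius)

    print(luminosity_boost((0.0, 0.1, 0.0), (0.0, 0.0, 0.0)))   # near the light null: ~0.9
    print(luminosity_boost((0.0, 2.0, 0.0), (0.0, 0.0, 0.0)))   # far from it: 0.0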
Remember, the transparency map is dependent on the original camera position, so it won’t work if you move your camera at this point.
Someone should be able to write a plug-in that can do all this relatively easily.
Please let me know if anyone has done this before, and report any errors in my tutorial. And as always, improve to your heart’s content.