Photorealistic Studio Methods


I am an industrial/product designer and have recently gotten interested in making my simple “screenshot” style renderings as photorealistic as possible. The types of renderings I want to do are studio shots of objects, ranging from small products up to things as large as cars.

I have seen some incredible quality from V-Ray, and after following some tutorials I found online regarding lighting setups and render settings, I have been very impressed with the results.

I got a copy of 3D World magazine, and there was an article in there about photorealistic rendering with “Final Gathering.” Generally I find a lot of places talking about FG when renders of this type are in question. As far as I can tell, FG is a method where light is bounced from surfaces onto the rest of the scene.

I saw FG options in mental ray, but none in V-Ray. Is FG simply a way to set up the scene and bounce light around, or is it a specific rendering method? In other words, is it possible to use FG techniques with V-Ray as the render engine?

Does anyone have any other tips generally about renders of this type?


“Final gathering” is a mental ray-specific term; the general term for this type of GI calculation is “irradiance caching”. Many modern renderers support irradiance caching in one form or another. Specifically in V-Ray, its “irradiance map” engine is close to “final gathering” in mental ray.

Best regards,


There is a lot of naming convention confusion when it comes to global illumination. I’ll try to give you a short rundown of what the different things mean:

First off, there’s global illumination. This is a collective name for techniques that simulate the light interaction between diffuse surfaces; basically, light bouncing around the scene globally. Global illumination isn’t limited to any specific technique.

One of the many methods to achieve global illumination is photon mapping. Photon mapping is a simple technique, which is why it’s fairly popular. It is also fast, but it is nearly impossible to achieve high-quality global illumination with it. Getting a low-quality photon map will take seconds; a medium-quality one, minutes; a high-quality one, days. With photon mapping, the GI will essentially be made up of tens of thousands of highly inaccurate “spots” of light. By averaging these spots over a small area, you achieve noiseless results. The downside is that you lose detail, and accuracy is still not very good.

Photon mapping goes like this:

1: Photon map is calculated in a pre-pass. The photon map is smoothed out, as well.
2: Rendering starts, and rays are fired from the camera into the scene.
3: When a ray hits a surface in the scene, the spot is shaded by the regular lights, as usual, plus the data from the photon map.
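The steps above can be sketched in a few lines of Python. This is purely an illustrative toy (a 1-D “floor” instead of a real scene, and a single light of total power 1.0), not how any actual renderer stores its photon map:

```python
import random

def build_photon_map(n_photons=10000, seed=0):
    # Pre-pass: fire photons from the light. In this toy 1-D scene each
    # photon lands at a random point on a "floor" spanning x = 0..10 and
    # carries an equal share of the light's total power (1.0).
    rng = random.Random(seed)
    return [(rng.uniform(0.0, 10.0), 1.0 / n_photons) for _ in range(n_photons)]

def radiance_estimate(photon_map, x, radius=0.5):
    # Shading step: average all photons within a small radius of the
    # shaded point. This smoothing removes noise but also blurs detail,
    # which is exactly the trade-off described above.
    energy = sum(e for px, e in photon_map if abs(px - x) <= radius)
    return energy / (2.0 * radius)  # divide by the size of the gather region

pmap = build_photon_map()
# Uniformly lit floor of length 10 with total power 1.0, so the true
# density is 0.1 everywhere; the estimate should land close to that.
print(round(radiance_estimate(pmap, x=5.0), 3))
```

Note how the quality depends entirely on the photon count and gather radius: too few photons gives splotches, a bigger radius hides them but smears detail.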

The most primitive method of doing global illumination is the brute force method, sometimes called Monte Carlo or unbiased rendering. This means that for every single pixel in the image, you fire a whole damned lot of rays randomly (or semi-randomly) into the scene. This is HIGHLY accurate, but slow as molasses. It is the method used primarily by renderers like Maxwell and FPrime, though most renderers can use it; it’s just generally not the default. It goes like this:

1: Rays are fired from the camera into the scene. (Notice how it doesn’t need a pre-pass!)
2: A ray hits something in the scene.
3: From the spot in step 2, LOTS of rays are fired into the scene.
4: For every ray in step 3 that hits something, additional rays are fired for every light in the scene.
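Here is a toy Python sketch of the brute force idea, using a made-up 2-D scene where exactly three quarters of the hemisphere above a point can see the sky. The point is just to show the accuracy/cost trade-off: error shrinks as ray count grows, but so does the work, at every single pixel:

```python
import math
import random

def sky_visibility(rays, rng):
    # Brute-force GI at a single shading point: fire lots of random rays
    # into the hemisphere and count how many reach the sky. Toy 2-D scene:
    # a wall blocks every direction within 45 degrees of the horizon on
    # one side, so exactly 3/4 of the hemisphere is open (true answer 0.75).
    hits = 0
    for _ in range(rays):
        angle = rng.uniform(0.0, math.pi)  # random direction above the surface
        if angle > math.pi / 4:            # clears the wall, sees the sky
            hits += 1
    return hits / rays

rng = random.Random(1)
# More rays per point means less noise, but the cost grows linearly with
# ray count, and this happens at EVERY pixel -- hence "slow as molasses".
for rays in (16, 256, 4096):
    print(rays, round(sky_visibility(rays, rng), 3))
```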

This means that rendering even a simple image needs a LOT of rays. A scene that renders in a couple of minutes without global illumination, and ten minutes with photon mapping, would take a few hours with brute force global illumination.

Because brute force rendering is so slow, a different technique was created: irradiance caching. Irradiance caching works by taking very accurate global illumination samples, but not at every pixel. On a flat surface, the global illumination is unlikely to change very rapidly, so it is unnecessary to calculate it at every pixel. With irradiance caching, you calculate it here and there, and interpolate the results in between. The disadvantages of this technique are that it requires a pre-pass, just like photon mapping, and that it can lead to splotches and detail loss. With brute force global illumination, all you ever get is noise, and noise usually looks better than splotches.
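A toy sketch of the caching idea, with a stand-in function playing the part of the expensive GI evaluation (pure illustration, not any renderer’s actual data structure):

```python
def expensive_gi(x):
    # Stand-in for an accurate but slow GI evaluation at point x. In a
    # real renderer this would be a full brute-force hemisphere sampling.
    return 0.5 + 0.05 * x

def build_cache(xs, spacing=4):
    # Pre-pass: evaluate GI only at every 'spacing'-th point, not at all of them.
    return {x: expensive_gi(x) for x in xs[::spacing]}

def shade(x, cache):
    # Interpolate between the nearest cached samples instead of calling
    # expensive_gi at every pixel.
    keys = sorted(cache)
    lo = max(k for k in keys if k <= x)
    hi = min(k for k in keys if k >= x)
    if lo == hi:
        return cache[lo]
    t = (x - lo) / (hi - lo)
    return (1 - t) * cache[lo] + t * cache[hi]

cache = build_cache(list(range(17)))  # accurate samples at x = 0, 4, 8, 12, 16 only
# On this smoothly varying "surface" the interpolation is essentially exact;
# where GI changes quickly (sharp contact shadows, corners), the same
# shortcut is what causes the detail loss and splotches described above.
print(shade(6, cache), expensive_gi(6))
```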

This is where final gathering comes in. Because both brute force and irradiance caching GI can be fairly slow methods, and photon mapping has horribly poor quality, someone came up with a rather excellent idea:

Final gathering! It means that you first calculate a photon map, with all the splotches and detail loss and so on. Then you use a slower but more accurate method to calculate another bounce of global illumination. Essentially, it goes like this:

1: Photon map is calculated, as in the above example.
2: Rendering starts, and rays are fired from the camera.
3: A ray hits something in the scene, and regular shadow rays and such are fired, as usual. The photon map is ignored completely, because of the low quality.
4: Rays are fired into the scene from the spot in step 3. When these rays hit surfaces, the photon map is taken into account, but no shadow rays will be traced (the shadows are already calculated in the photon map).

The photon map is used to achieve a basic ambiance in the scene, and then you use the final gathering to smooth it out and bring back lost details. This is slower than just photon mapping, but MUCH higher quality. Also, since you do not have to fire shadow rays from the final gather rays, you save a LOT of time. Photon mapping + final gathering is a very popular technique. Final gathering works with both brute force rendering and irradiance caching.
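To make the smoothing concrete, here is a toy Python sketch. The “photon map” is a noisy stand-in function whose true value is 1.0 everywhere, and the final gather pass averages many lookups to recover a clean value; none of this is real renderer code:

```python
import random

def photon_map_lookup(x):
    # Stand-in for the blotchy pre-computed photon map: the true value is
    # 1.0 everywhere, but each lookup is off by up to +/-30% (the "spots").
    return 1.0 + random.Random(int(x * 1000)).uniform(-0.3, 0.3)

def final_gather(rays, rng):
    # One extra, more accurate bounce: fire many gather rays from the
    # shading point and average the noisy photon map at the points they
    # hit. No shadow rays are traced here (direct light is already baked
    # into the photon map), which is where the time savings come from.
    total = 0.0
    for _ in range(rays):
        hit = rng.uniform(0.0, 10.0)  # surface point the gather ray lands on
        total += photon_map_lookup(hit)
    return total / rays

rng = random.Random(2)
print(round(photon_map_lookup(3.2), 2))  # one raw map lookup: anywhere in 0.7..1.3
print(round(final_gather(256, rng), 2))  # gathered result: close to 1.0
```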

Unfortunately, this is where the confusion starts. When mental ray users talk about final gathering, they’re actually talking about irradiance caching! In many renderers, irradiance caching is called final gathering, regardless of whether you actually use it for final gathering or not.

Okay, this didn’t turn out very short at all. I’m sure I’ve made a bunch of mistakes, as well. Feel free to correct me, if you find something inaccurate. :slight_smile:

And as Vlado says, you’ll be wanting the “irradiance map”. Or possibly just the brute force method, if you want a slower but even better-looking render.


Final Gathering isn’t exclusive to mental ray. Competing renderers like Turtle have it, and it does the same two things: smoothing out the photons in photon-mapped GI and adding a simple single-bounce indirect diffuse transfer of its own.

But Final Gathering is just one of many approaches to Global Illumination. If you have a renderer that is good at rendering with global illumination, then don’t worry if it uses a different set of algorithms than some others.



Thanks for the replies, guys.

I have been reading the V-Ray manual and it has cleared up some confusion.

I guess the bottom line is that all the different GI methods (irradiance caching, irradiance map, QMC, photon mapping, light cache, etc.) actually perform the same calculation, just in different ways. So the difference between them is not in what they fundamentally calculate, but in their METHOD of achieving it. So they all (with good enough settings) will produce pretty much the same result, the only differences being variations in speed vs. quality.

So basically, when the magazines say that they use “final gather” to set up their lighting, this is just their “method” of achieving the illumination in the scene. I could load the same scene in 3ds Max and render it with V-Ray using pure QMC, photon map, light cache, whatever, and I would get the same result; it would only differ in rendering time (and things like noise and splotches if the settings are not good).

Am I right in saying this?


So QMC = Quasi-Monte Carlo?

Yes, you are right that there are many algorithms used for global illumination, all trying to solve the same problems.



Yes, with an awful lot of “buts” and “ifs.” :slight_smile:

If you set the primary light bounce in Vray to QMC, it’ll do it the brute force way.

For studio lighting setups, I’d recommend using irradiance caching whenever possible. If irradiance caching doesn’t give you high enough quality, use area lights instead.


What kind of lighting setups do you guys recommend for these kinds of renders?

What I have been doing is placing the object on a bent plane (so the background looks like it disappears into the distance, with a slight gradation), with maybe two square V-Ray lights placed above the object behind the camera, and then using an HDR as well for lighting.

Some articles talk about using objects as reflectors to bounce indirect light onto the scene. What is the advantage of this? And would I then have to point the lights away from the scene onto the reflector, so that the scene receives only indirect bounced light?

Are there any other ideas behind lighting setups for this type of render?


If I were you, I’d do away with the HDRI completely. Just use bounce cards and area lights instead.

The advantage of luminous planes over area lights is that they, in some cases, render faster (if you use irradiance caching, for instance), and they look much better in sharp (i.e., non-blurred/glossy) reflections. Like car paint!


… exactly none. Use them only for reflections (or at low power, to add some light nuances), because when used to light the scene they return very incoherent illumination that is difficult to smooth out with things like final gather, especially in animations.



So your recommendation is pretty much the same as this.

Thanks for the input.

I came across the Maxwell Render page… holy crap, the renders on there look incredible! This package also looks great for doing studio-type renders. Might look into it.


Varies from renderer to renderer. In modo, for example, it’s often advantageous, speed-wise, to use luminous planes instead of area lights, as long as your meshes do not have large amounts of really high-frequency details. Taking 500 GI samples every n pixels is sometimes faster than taking 50 area light samples every single pixel, depending on the value of n.
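As a back-of-the-envelope illustration of that trade-off (all numbers here are assumptions for the sake of arithmetic, not measured modo behaviour):

```python
# Assumed numbers: a 1-megapixel image, 500 GI samples per cache point
# (one cache point every n-th pixel) vs. 50 area-light samples at every
# single pixel.
PIXELS = 1_000_000

def cached_gi_rays(n, samples_per_point=500):
    # Total rays when sampling GI only at every n-th pixel.
    return (PIXELS // n) * samples_per_point

AREA_LIGHT_RAYS = PIXELS * 50  # total rays for per-pixel area light sampling

for n in (4, 16, 64):
    winner = "cached GI wins" if cached_gi_rays(n) < AREA_LIGHT_RAYS else "area light wins"
    print(n, cached_gi_rays(n), AREA_LIGHT_RAYS, winner)
```

With these made-up numbers, per-pixel area light sampling is cheaper for dense caches (n = 4) and more expensive for sparse ones (n = 16 or 64), which is the “depending on the value of n” point.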

Edit: Of course, I haven’t used Vray, so I don’t know about that…


Great tutorial on 3-point lighting for anyone who is interested


CaptainObvious: I appreciate your feedback!! In the other Maxwell thread as well, it has cleared up a lot of confusion.

I re-read your long post explaining the details about final gathering, irradiance caching, photon mapping, unbiased methods, etc., and I think I now understand this 100%, but I just want to make sure!

I use V-Ray, and it allows me to use a different GI method for the first bounce than for the second and onward bounces.

When I select “Irradiance map” as the method for the first bounce, it will do a couple of passes and my scene will be covered in small spots everywhere. On flat surfaces there are far fewer spots than where the geometry is changing (so this sounds like the irradiance caching you described).

If I then use QMC (which I believe is an unbiased method) for my second through, say, tenth bounces, will it use the results from the irradiance map calculated in the pre-passes and fire a huge number of rays into the scene, but not the same amount in all places? That is, in places with fewer “irradiance map spots” rays will be fired from fewer points, and in locations with more spots, rays will be fired from more points. Is this right?

So if the above is correct, this method makes a lot of sense to use. Pure unbiased rendering (i.e. Maxwell) actually wastes a lot of processing time firing rays in locations where it is not needed to generate a good GI solution. The above method (irradiance map + QMC) is 100% the same as the unbiased (Maxwell) method, except that it doesn’t waste time doing heavy calculations in areas of the scene where they are not really needed.

So the quality of the render will be almost identical, but achieved with fewer calculations (depending on the nature of the geometry in the scene, of course).

So, the question is… am I right? :slight_smile:


You’re entirely right about renderers like Maxwell “wasting” CPU time. That’s why they’re slow. :wink: But you’re not right about that method being unbiased. The definition of unbiased sampling is that you never get artifacts, but you do get noise. Irradiance caching can lead to detail loss and light leaks. This will NEVER* happen in an unbiased renderer. Even if you use irradiance caching for one bounce and QMC for the rest, you can still get artifact problems.

At any rate, since I haven’t actually used Vray, I’m a bit unsure about how stuff works in it, but here’s how I THINK it works:

The “primary bounce” is the first bounce from the camera. So if you set the primary to irradiance caching and the secondary to photon mapping, you get a standard photon mapping & final gathering render, like you can do in mental ray as well. This means that if you set the primary bounce to irradiance caching and the secondary bounce to QMC, the irradiance caching pass will, in a sense, be used as a final gathering pass on a QMC render, rather than a photon map. I’m unsure about this, but it seems to me that using irradiance caching for the primary and QMC for the secondary bounces would be something of a waste. Sure, you save time by calculating the GI only here and there, but you lose the caching on the secondary bounces so you don’t save all that much time. And even then, you still can get irradiance caching artifacts.

Generally, I’d suggest using light caching for the secondary bounces. Light caching is just like photon mapping, except the photons are fired from the camera instead of the lights. It has mostly advantages over photon mapping, in my opinion. Works great. Then use either irradiance caching or QMC for the primary bounce (this would be final gathering), to restore detail.
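A toy sketch of that directional difference (illustrative only, not V-Ray’s actual light cache):

```python
import random

def build_light_cache(n_paths, bounces, rng):
    # Like a photon map, except the paths start at the camera instead of
    # the lights: each camera path deposits one cache record at every
    # bounce vertex it visits (toy 1-D scene, positions on x = 0..10).
    cache = []
    for _ in range(n_paths):
        x = rng.uniform(0.0, 10.0)      # first surface hit seen from the camera
        for _ in range(bounces):
            cache.append(x)
            x = rng.uniform(0.0, 10.0)  # next diffuse bounce position
    return cache

records = build_light_cache(n_paths=1000, bounces=3, rng=random.Random(3))
# Because the paths are camera-driven, the cache density automatically
# concentrates on what the camera actually sees, instead of wherever the
# lights happen to throw the most photons.
print(len(records))
```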

*Well, actually, light leaks can appear even with unbiased rendering. When a ray is traced from a surface, anything that lies closer than a certain distance is ignored. So in some cases, a ray being traced from the floor can miss the wall and fly straight out. This generally doesn’t happen very often, though.


A number of textbook definitions of global illumination include any surface-to-surface interaction, not just diffuse-diffuse light transfer. By the strict definition, raytraced reflection/refraction is also considered global illumination.

Just one more fact to add to the confusion :slight_smile:


Hmm, I suppose that’s true.


This thread is really, really interesting. Some people have probably noticed that Vlado and Jeremy replied in this thread.
Vlado is the developer of one of the best renderers in the world :wink: and Jeremy wrote one of the most interesting books about lighting and rendering! It’s always cool to learn something new on a Sunday afternoon.

