The 'how things work in mental ray(R)' question :)


KV99
07-25-2006, 05:13 PM
Whoa, well, days and days with mental ray can manage to mess one's mind up a bit. I was playing around with shaders and other such things, but then took a step back to see what some settings really mean and how they work. So, a simple question:

What is the difference between the Renderer Trace Depth settings, the Caustics and GI Trace Depth settings, and the Final Gather Trace Depth settings? Do some of them stack, or are they related to each other? It is really confusing, since Max's own reference does not exactly pinpoint the differences either, though there 'are' differences. I can see that with test renders; I just don't know exactly what they are.

Another thing, let me get this samples issue straight to hopefully save my rendering time in the future. Basically, in the rendering settings you have Samples per Pixel. It is a 'mother' setting, yes? Meaning that reducing this basically reduces the sampling of everything else in the render as well, as long as it holds within the 1/64 to 1024 samples-per-pixel range?

Well, light objects, like area lights, have samples for their shadows. Those shadows also have samples attached; are they independent? Or, let me rephrase: do rendering samples affect the sampling of area light shadows at all?

The GI settings have a maximum number of photons per sample. Does this mean that if my rendering settings use the (not recommended) 1 and 1 samples-per-pixel mode, then for each pixel it renders at most that maximum number of photons per sample?

And as lights emit photons, I assume there is a dependency between the maximum photons per sample and the photons emitted by the light? If a light emits 10 thousand photons, the maximum photons per sample is 100, and my 'mother' setting of samples per pixel is 1 to 1, then according to how I see things right now, it would render at most 100 photons for one pixel, regardless of how intense the light is in that pixel?

I may be off BIGTIME here, because I'm quite new to the real base knowledge of how mental ray works (and will buy the book quite soon to help things), but it would really help if some more-or-less expert could tell me if my theories up there are somewhat accurate or way off.

And one last thing, though I've tried to stay away from caustics as well as I can right now, some basic questions. Caustics are the way light spreads off reflection and refraction, if I am correct. Now, when GI is the light diffusion around the scene, based on light rays hitting surfaces and bouncing on from there, all the while losing energy, caustics is basically the same thing, except it only takes into account the reflection and refraction settings of a material and throws samples around the scene based on that? So basically, until a ray actually hits a surface, there is no difference? Is that why there are photon settings for both GI and caustics, to give better control to the user, when at a really basic level they are both two sides of the same thing in a simulated raytraced light environment?

Thank you

Mehran-Moghtadai
07-25-2006, 07:11 PM
Maybe this (http://www.lamrug.org/resources/samplestips.html) could help you get sampling right.

And this (http://www.lamrug.org/presentations/jun2005/LAXSIUG_June_2005.pdf) for final gather.

There's more at lamrug.com

KV99
07-25-2006, 08:44 PM
Wow, that's a great link! Really technical and down to those important details that are required to actually produce something with your own skills, rather than the 'memory effect' that follows a tutorial.

mental ray is one of those subjects where nearly all tutorials out there are of the 'click this to achieve that' type, not explaining anything at all. And while the mental ray reference is really good, it is a bit too script-based and lacks explanations of how things actually work, which do exist in the link you gave, thanks. (And by the way, it is .org and not .com; Google helped.)

I've checked out the Evermotion mental ray CDs. A recommendation: do NOT purchase those! Absolutely terrible. Whoever narrates them has no idea about half the things that go on during the demonstration, and no, I don't believe it's narrated by the author (and if it is, then it's read off a script or some other tutorial). A serious failure in learn-how DVDs. mental ray isn't something that, in my book, fits this tutorial-without-explanation style of tutoring.

Anyway, thanks for the links; I have some worthy bedtime reading material now.

Mehran-Moghtadai
07-26-2006, 01:53 PM
No problem. And sorry about the link.

It's always great to know exactly what you're dealing with. And once you know that, you have to keep on experimenting.

KV99
07-26-2006, 06:37 PM
One thing I didn't find an explanation for during my first look around there, though: what are the differences between the three trace depth settings? The ones in the renderer settings, the ones under caustics and GI, and the one under final gather?

Mehran-Moghtadai
07-26-2006, 07:55 PM
I might be wrong about this, but here we go...

There are three controls: one for raytrace, one for FG, and another for GI.

It's kind of hard to explain, but imagine a box with inverted normals where all the walls are mirrors. In the real world it would reflect infinitely, right? But in MR you have to tell it to what depth you want it to trace the reflections, so basically how many times you want a ray to be reflected. So you set the raytrace depth to 20, but you don't want your GI rays to reflect that many times, because it would take a tremendous amount of time, so you tell them only 5, because the effect is going to be there and you wouldn't notice a difference even if you had 100. The same goes for the FG rays: if you don't want the FG rays to trace as deep as the raytracer does, you set them lower. So basically it just gives you separate control over every single kind of ray.
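To illustrate the idea with a toy sketch (plain Python, not mental ray code; the names and the 20% energy loss per bounce are made up):

def trace(bounce, max_depth, energy=1.0, loss=0.2):
    # Follow one ray between two facing mirrors; each hit keeps (1 - loss).
    if bounce >= max_depth:
        return 0.0                        # trace depth reached: stop, add nothing
    reflected = energy * (1.0 - loss)     # the mirror absorbs a little light
    return reflected + trace(bounce + 1, max_depth, reflected, loss)

for depth in (5, 20, 100):
    print(depth, round(trace(0, depth), 4))

# Prints 5 -> 2.6893, 20 -> 3.9539, 100 -> 4.0. Depths 20 and 100 come out
# nearly identical, which is why a lower GI/FG trace depth is usually invisible.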

I hope it helped a bit...

KV99
07-26-2006, 08:46 PM
I am not so sure about that. Yes, I am aware that trace depth limits how many times a ray can bounce/reflect/refract, no question about that. I have a problem with another thing..

The way I currently see things (though I don't have anything to back this up with) is that in the renderer settings, or raytrace settings of sorts, you determine the trace depth limit and the reflection/refraction limits that both fall under it. And I'm guessing this limits the reflections and refractions directly in the scene.

However, if I am correct, this has nothing to do with actual photons. Photon settings come in from the indirect illumination tab. Caustics and global illumination 'both' share the same trace depth limit, and they are actually two sides of the same thing from what I understand: one for how light bounces and diffuses around the scene, the other for how a photon reflects and refracts around the scene.

Since the raytrace settings seem to be directly about reflections and refractions, and the light settings have settings for shadows, indirect illumination is then the settings for photons and their distribution around the scene.

But the confusion in my book comes from the final gather settings. They have trace depth, bounces, and the like, and while they do affect skylight and other lighting that is not directly photon-based (right?), it confuses me what exactly the trace depth, reflection, and refraction settings limit there. If photons are limited by the caustics/GI depth settings, and reflection/refraction depth by the renderer settings, then what tracing do the final gather settings actually limit?

Mehran-Moghtadai
07-26-2006, 11:44 PM
Final gather uses final gather rays. They're pretty much like GI and caustics photons, but more robust. It's pretty much the exact same thing: it lets the rays refract, reflect, and bounce... They all do the same thing, except each is made for a different type of ray.

Scroll down here (http://www.jozvex.com/tutorials/fg2.html) and you'll see more explanations.

-Vormav-
07-26-2006, 11:51 PM
Here's how FG really works: for each shaded point in the scene, a specified number of rays is cast outward over a hemisphere above that shaded point. When these are averaged together, it effectively gives you a single-bounce average of the indirect illumination at that point.
When these FG rays reach an object, they call up and process that object's material shaders. Before MR 3.4, that's where it stopped. But MR 3.4+ (so, Max 8+) allows additional FG rays to be traced, and that's where the trace depth comes in: each of FG's trace depth settings determines whether or not the evaluated shaders are allowed to cast additional FG rays. It's the same structure as the other trace depth settings: individual settings for reflections, refractions, diffuse bounces, and the sum of those three. FG's trace depth is independent of all the other trace depth settings, because it is separate from both the standard raytracing step and the photon tracing step.
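As a rough sketch of what a single FG point does (hypothetical Python with made-up names, not mental ray's actual API):

import math, random

def sample_hemisphere():
    # Pick a random direction on the hemisphere above the point
    # (cosine-weighted; z is 'up' along the point's normal).
    u, v = random.random(), random.random()
    theta = math.acos(math.sqrt(u))       # angle from the normal
    phi = 2.0 * math.pi * v
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def final_gather_point(shade_along_ray, n_rays=500):
    # Average the light that n_rays rays cast from one shaded point can see.
    total = 0.0
    for _ in range(n_rays):
        total += shade_along_ray(sample_hemisphere())
    return total / n_rays                 # single-bounce indirect estimate

# Toy scene: black everywhere except a bright surface off to one side.
indirect = final_gather_point(lambda d: 1.5 if d[0] > 0.5 else 0.0)
print(indirect)   # the small indirect contribution from the bright surface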

If you don't notice much of a difference when changing these settings, that's probably because the effect is very subtle. If your number of FG photons is set to 500, then that's 500 samples around the scene you're averaging out, after which that average is added to the direct illumination. If you don't have any reflections or refractions in your scene, then the only visible effect could be diffuse bounces (if you've enabled them). And even then, for most scenes this would be very subtle.
The mental ray book actually advises against increasing FG trace depth, though. The claim is that once you start increasing it, FG quickly gets slower than if you had just used a combination of global illumination and single-generation FG (which would likely look better anyway), because the number of traces grows with the number of shading points instead of the number of lights.

KV99
07-26-2006, 11:57 PM
How on earth do you come up with those links? Any other mental ray related ones you could share? (Probably I could not find those because they are Maya-related. But I'm at least a year or two past the state where different 'programs' could halt my understanding of things.)

EDIT: @-Vormav-
Thanks for the explanation! That says quite a lot, though one thing still confuses me: what exactly is that 'shaded point' you mentioned? I'd guess a sample, but if it's 500 FG photons around the entire scene, it doesn't seem like much.

So basically FG has an amount of FG photons that I can set. Then it creates those virtual hemispheres on the shaded points (which, as I said, I don't exactly understand), and depending on the amount of shaded points, it divides the amount of FG photons between them? So with 'ten' of those shaded points, each would get 50 FG photons?

Of course, since that 'shaded point' thing is still a bit confusing and I might be way off with my thinking anyway: why is it that final gather produces HDRI light diffusion around the scene? I bet this has an answer once I know what exactly the shaded point is, but I haven't managed to get the effect of a skylight or HDRI without final gather. Nor that 'glow' effect with high glow values on materials, and things like that.

And I'm planning on getting 'Rendering with mental ray' as well; many seem to use it as a reference, and if I intend to get to the root of things, such a book will help, I'm sure.

Mehran-Moghtadai
07-27-2006, 01:35 AM
The reason you can light a scene with FG and an HDRI is that HDRI is high dynamic range, which means the values go higher than 1, and when you have values higher than one they produce FG rays. A glow, or whatever you want to call it. You don't need an HDRI to do this: in Maya you can just use a surface shader and set the value to >1, and if you use Max you can use the standard shader, add an Output map, and set the value to >1 in the self-illumination box. And this is basically what sets GI and FG apart from each other: GI works with lights only, whereas with FG you can use objects to light your scene.

-Vormav-
07-27-2006, 02:05 AM
KV99 wrote: Thanks for the explanation! That says quite a lot, though one thing still confuses me: what exactly is that 'shaded point' you mentioned? I'd guess a sample, but if it's 500 FG photons around the entire scene, it doesn't seem like much.

It's a sample, yes, but not the render-time samples (i.e. sub-pixels). The radius, as far as I'm aware, determines how far apart these shaded sample points are from each other. MR can also vary the amount of samples relative to their distance from the camera (Max exposes this with the "radii in pixels" option). But anyway, if you had ever wondered why FG is view-dependent, that's why. ;)

KV99 wrote: So basically FG has an amount of FG photons that I can set. Then it creates those virtual hemispheres on the shaded points (which, as I said, I don't exactly understand), and depending on the amount of shaded points, it divides the amount of FG photons between them? So with 'ten' of those shaded points, each would get 50 FG photons?

No, the number of FG photons is per shaded point. So, if you were using 500 FG photons and had 10 shaded points, you'd actually be tracing 5,000 rays. That's part of the reason why multi-bounce FG gets so slow: if every ray from one of those points also cast a ray, you'd be up to 10,000 rays. The number of rays being cast goes up linearly with the number of bounces.
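In other words (toy numbers from the post, plain Python):

rays_per_point = 500      # FG rays cast from each shaded point
shaded_points = 10
bounces = 2               # 1 = classic single-bounce FG

# If each ray spawns one secondary ray per extra bounce, growth is linear:
total = rays_per_point * shaded_points * bounces
print(total)              # 10000 for two bounces, 5000 for one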


KV99 wrote: Of course, since that 'shaded point' thing is still a bit confusing and I might be way off with my thinking anyway: why is it that final gather produces HDRI light diffusion around the scene? I bet this has an answer once I know what exactly the shaded point is, but I haven't managed to get the effect of a skylight or HDRI without final gather. Nor that 'glow' effect with high glow values on materials, and things like that.

FG is still sort of acting like single-bounce GI. The skylight and the glow material both take advantage of the way the many FG samples are averaged out to determine the indirect illumination at a single point, in order to "fake" illumination. It's easiest to explain with the glow material, so I'll start there:
Imagine that you have a scene that's nothing more than a box. It has no lights. Every wall in front of the camera has a standard diffuse shader. The wall behind the camera has a glow material. The glow material is just a material that returns super-bright colors; that is, colors greater than 1, where the standard range of colors is 0-1.
Now, FG finds a sample - one of those shaded points I referred to before - in front of the camera. It then casts rays out over a hemisphere above that sample point (and by above, I mean 'above' in the direction of that point's normal; just imagine that the hemisphere is aligned with the normal) out into the scene, and at all of the intersections the material shaders are called up. All of the walls in front of the camera have standard direct-illumination-based shaders, so seeing how there are no lights in the scene, they return black, or RGB (0,0,0). The rays that intersect with the wall behind the camera are different, though; glow materials are self-illuminating, so although there are no lights in the scene, they still return non-black color values. Maybe super-bright values like (1.5, 0.5, 1.5).
After having traced all of these hundreds of rays for that single FG point, FG averages out everything they returned and stores the result at that point. That's where it starts to act like light: had all of the walls' shaders been simple direct-illumination shaders, all of the FG rays would have returned pure black, and the average would come out to (0,0,0) - no indirect illumination. But the wall behind the camera didn't return black, and so the average comes out as something higher. The points in the center of the wall in front of the camera would have the most FG rays hitting that back wall, and hence would receive more indirect illumination. The points in the corners would have most of their FG rays hitting up against the non-glowing walls, and far fewer hitting that back wall, so they would receive less indirect illumination. And so, when the indirect illumination is added on top of the direct illumination, we'll see something that looks a lot like GI, but really isn't. I guess you could think of it more like color-bleeding light, in a way.
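A toy version of that averaging, in plain Python (the hit counts and names are mine, just to make the idea concrete):

def fg_average(n_rays, hits_on_glow_wall, glow=(1.5, 0.5, 1.5)):
    # Average color at one FG point: most rays see black walls (0,0,0),
    # a few see the super-bright glow wall.
    fraction = hits_on_glow_wall / n_rays
    return tuple(channel * fraction for channel in glow)

# The centre of the facing wall 'sees' the glow wall with many rays,
# a corner sees it with only a few, so it receives less bounced light:
print(fg_average(500, 200))   # (0.6, 0.2, 0.6)  - brighter
print(fg_average(500, 20))    # (0.06, 0.02, 0.06) - dimmer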


KV99 wrote: And I'm planning on getting 'Rendering with mental ray' as well; many seem to use it as a reference, and if I intend to get to the root of things, such a book will help, I'm sure.

If you do get ahold of the book, there's a chapter on photon mapping vs. final gathering that explains all of the above pretty well. It at least comes with a few diagrams that do a better job of explaining it than I can. ;)

KV99
07-27-2006, 10:50 AM
@don_bertone - No, I don't think that is so. FG works even with illumination values under 1.00. FG seems to take into account all illumination in the scene and cast rays according to that, which is different from GI, which requires lights to generate its rays. If it is how I think it is right now, then it is quite a genius idea in mental ray that adds quite a bit of subtle, but great, control over how exactly a scene is lit.


@-Vormav-

Thanks for the reply :)

And yeah, it is interesting how few samples final gather needs to render quite a good result. I did a few test renders with only one sample (per shaded point, then), and though the result is obviously splotchy, there are a lot more splotches than I first thought there would be.

Only one slight question remains a bit confusing right now. Final gather has reflection/refraction settings as well. In raytracing, those had to do with the rays shot from the camera to determine what the camera sees; for GI/caustics photons, they had to do with light-emitted rays travelling through the scene. How do final gather's rays fit into this? I can currently understand FG's bouncing, or the 0-bounce setting where FG rays are not diffused further around the scene. Is the FG reflection/refraction limit really for the situation where the rays are shot out, travel to some object, and then act just like photons emitted from light objects?

Now, come to think of it, I think that's exactly how it works. So correct me if I'm wrong. Anyway, thanks for the help; the replies here, and some research I've managed to do thanks to some of the links, have added a good pile on top of my slim mental ray knowledge. In time I'll probably come up with some even more awkward and strange questions, unless the ordered book finally arrives (cursed be those international orders).

Mehran-Moghtadai
07-27-2006, 12:58 PM
FG does work with illumination values under 1.00, no doubt, but it won't light your scene that way. What I was talking about is a whole scene lit by FG, no lights at all. If you have an HDRI map that has all values under 1, then you won't be able to light anything, and it won't be called high dynamic range anymore; therefore not an HDRI.

KV99
07-27-2006, 04:11 PM
Mehran-Moghtadai wrote: FG does work with illumination values under 1.00, no doubt, but it won't light your scene that way. What I was talking about is a whole scene lit by FG, no lights at all. If you have an HDRI map that has all values under 1, then you won't be able to light anything, and it won't be called high dynamic range anymore; therefore not an HDRI.

No, this is wrong. Light emission is not limited to everything above the visible range of 256 shades per color (0.0-1.0). What HDRI does is carry image data with light information above (and below) that range. You can use a 0.0-1.0 image for a skylight and the scene gets lit as well. The difference only comes in when the HDRI map includes data above that range, which is usually the case with sunlight and the like, but final gather lighting is not limited to those higher values. -Vormav- described that quite well with his example.

Skylight lighting is not limited to HDRI. HDRI is one of the options, but a skylight does not require HDRI to work, nor those higher spectrum values, to light the scene.

You can easily compare it like this: final gather lights the scene based on the illumination within the scene, no matter what it is, whether derived from lights or from global illumination. In the case of a skylight, it is a skydome with illumination based on the map applied to it. If it is a skylight with no high-range imagery, it is the same as creating an inverted-normals dome with the same map in the scene. That is why self-illuminated objects generate effects through final gather, and exactly why final gather produces splotches in global illumination renders and things like that.

Renders, unless rendered to HDR, are never called 'high dynamic range imagery' anyway. Nor do you need an HDRI map for that, because, for example, even a light with a 1.6 multiplier is already above the 256 shades of light representable in the color palettes of most render outputs.

A bit more about high dynamic range imagery. A common picture is 24 bits, 8 bits per color channel. It can also be a 32-bit image, which means a black-to-white alpha channel has been attached to the image as well. What makes HDR 'HDR' is the fact that it stores additional light intensity data, and it does not do so in 8 bits like the other channels (meaning not just 256 shades), but in floating point. Usually that means 16 or even 32 bits of additional data are stored, resulting in quite data-heavy imagery.
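To make the difference concrete, a minimal sketch (plain Python; the function names are mine):

def store_ldr(value):
    # 8 bits per channel: clamp to [0, 1], quantize to 256 steps.
    clamped = min(max(value, 0.0), 1.0)
    return round(clamped * 255)       # anything above 1.0 is lost

def store_hdr(value):
    # Floating point: the over-bright value survives intact.
    return float(value)

sun = 40.0                            # far brighter than 'white'
print(store_ldr(sun), store_hdr(sun))                    # 255 vs 40.0
print(store_ldr(sun) / 255 * 0.5, store_hdr(sun) * 0.5)  # darkened: 0.5 vs 20.0
# Halving the LDR pixel gives mid-grey; the HDR pixel is still overexposed,
# which is why exposure changes on HDR data look natural.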

But HDRI is overrated, and the term is misused so many times it is ridiculous. You can render to HDR data, which means you render in high dynamic range, while you yourself still only see those 256 shades on each of the three colors. Also, people say they lit the scene with HDRI when, a lot of the time, it makes little to no difference, and the same or very similar lighting could be achieved with a 24-bit environment map. That is one of the reasons why a lot of HDR collections include JPEG environments alongside the HDRs: it makes little difference unless the HDR actually captures the depths of certain shadows and the ever-bright sun.

Mehran-Moghtadai
07-27-2006, 04:52 PM
I agree with what you say, but the thing is that you're getting what I'm saying all wrong. It's probably because either I don't have enough experience using your techniques, or you don't know how I use HDRIs. The correct way to use an HDRI is not with a skylight; it's to create an inverted-normal sphere and apply the HDRI map to it. The other thing is that you should have zero lights, no default lights or anything.

An HDRI has an extra floating point value that describes the exponent, or persistence, of light at any given pixel. This overall illumination information is used in the final gather process. A low dynamic range image (LDRI), the kind of image everyone is familiar with, has limitations when it comes to describing the range of colors necessary to represent light values precisely. Think of a dark cathedral with strong light spilling through a stained glass window: the range from dark to bright is too broad for a conventional LDRI. Such an LDRI will have overexposed and very black areas.

Pixels that have a high floating point (exponent) value are not affected very much by a darkening of the overall image. Pixels with a lower persistence of light are affected more by the same darkening operation. Creating your own HDRI involves taking several shots of the same subject with bracketed f-stops and then assembling the images into a floating point TIFF HDRI.
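A very rough sketch of that assembly step for one pixel (plain Python, made-up numbers; real tools also calibrate for the camera's response curve):

# Each bracketed shot gives (LDR pixel value, relative exposure).
shots = [
    (1.000, 1.0),      # normal exposure: this pixel is clipped at white
    (0.500, 0.25),     # two stops down
    (0.125, 0.0625),   # four stops down
]

# Dividing by exposure recovers scene radiance; skip clipped/black pixels.
estimates = [value / exposure for value, exposure in shots
             if 0.01 < value < 0.99]
radiance = sum(estimates) / len(estimates)
print(radiance)        # 2.0: a floating point value above LDR white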

I hope this is clearer.

KV99
07-27-2006, 05:15 PM
Well that is 'exactly' what I posted.

But what you said was 'if you have a HDRI map that has all values under 1 then you won't be able to light anything', and 'the reason why you can light a scene with FG and a HDRI is because HDRI is high dynamic range, which means that the values go higher than 1, and when you have values higher than one they produce FG rays'. Those two things confused me.

The thing is, you 'can' light a scene without the HDRI range, just as I explained in my previous post. And whether you use an inverted sphere (as I also mentioned in my post) or a skylight, they function the same way. A skylight is a virtual spherical environment wrapped around the scene. The only difference is that an actual sphere gives you control over its position and shape, which 90% of the time is not required to light a scene.

I also mentioned that 24-bit images produce FG rays too; they are not limited to HDRI. FG rays are generated no matter what the illumination in the scene is. Even a material with 1/100 self-illumination will light the scene, though you cannot tell the difference. In fact, even 2-bit images can light a scene, and the output can even be HDR as a result, even though no HDRI was used as input.


Anyway, that subject aside, another question for the techies. Glossy reflections (and I suppose refractions as well) are directly dependent on the renderer's sampling, right? I did a test where I used glossy reflections, and the scene was filled with tiny spots that I figured were the spread-out samples of the original mirror-like reflection. That effect disappeared when I boosted the rendering samples extremely high.

http://kristovaher.pri.ee/pub/glossy.jpg

This picture had the samples increased high enough to lose the spots of the glossy reflections on the floor. I'm just wondering if it is directly dependent on the samples, or if increasing samples only 'fakes' the actual, smoother result and the spots depend on another value.

Mehran-Moghtadai
07-27-2006, 05:38 PM
Sorry about the confusion.

Oh god, the DGS material. Yeah, it's pretty much dependent on the samples. But if you want super-precise glossiness, you should play around with the Contrast as well. I once did a scene with glossy DGS and used 0.001 contrast with 4 min / 256 max samples. It was hell to render, though. Actually, if you want more on the DGS material, visit here (http://www.highend3d.com/boards/index.php?showtopic=158994&st=0). DGS glass is pretty cool.

P.S.: You can always be sure I will come up with the right links :D

KV99
07-27-2006, 06:01 PM
Alright, some expert please tell me if my following theory is correct or not..

To calculate raytracing in the scene (and thus glossy reflections and refractions), the camera shoots rays out into the scene.. per pixel, based on the samples value? The depth of those rays is determined by the raytracing depth values, but if it works how I think it does, that is why a render with sample values of 1 to 4 looks like this:

http://kristovaher.pri.ee/pub/glossy_no.jpg

..and the one I posted previously looks a lot smoother, because the amount of rays shot from the camera is increased, which in turn increases the quality of the glossy reflections? Correct or no?


And thanks for the last link. That quite simply turned upside down how I look at transparent reflective materials :)

Mehran-Moghtadai
07-27-2006, 06:04 PM
I'm no expert, but I think what you're saying is right.

KV99
07-27-2006, 06:21 PM
Mehran-Moghtadai wrote: I'm no expert, but I think what you're saying is right.

The link you gave about DGS and lighting in general also claims that it is dependent on the samples; I just hoped someone would know for sure. But yeah, it seems right.

-Vormav-
07-28-2006, 05:29 AM
KV99 wrote: ..and the one I posted previously looks a lot smoother, because the amount of rays shot from the camera is increased, which in turn increases the quality of the glossy reflections? Correct or no?
For the most part, yeah. Keep in mind, though, that glossy shaders also perform their own supersampling, sending out rays from the sampling point in pseudo-random directions. The number of rays sent out at that stage correlates more directly with the "accuracy" of the glossy shader; the number of rays sent out from the camera more directly affects its "smoothness". But they're both kind of the same thing.
Also, one other thing to think about: since glossy shaders perform their own supersampling, increasing either the number of samples on the glossy shader or the number of rays sent out from the camera (i.e. samples per pixel) will multiply the overall number of rays being cast (and material shaders being processed), which can certainly suck. Depending on the scene, though, you may actually get better performance by increasing the sampling on the shader itself instead of per pixel; for instance, in cases where the glossy reflection only shows up in a small portion of the screen.
Even better, if you have a decent compositor, you can get better performance and results by blurring a reflection/refraction/transparency pass separately.
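A back-of-envelope version of that cost, in plain Python (all numbers invented):

pixels = 640 * 480
eye_rays_per_pixel = 4        # renderer samples per pixel
glossy_rays_per_hit = 16      # the shader's own supersampling
glossy_coverage = 0.25        # fraction of the frame hitting the glossy surface

# Every eye ray that hits the glossy surface spawns its own batch of rays,
# so the two sample counts multiply:
total_glossy_rays = pixels * eye_rays_per_pixel * glossy_coverage * glossy_rays_per_hit
print(int(total_glossy_rays))  # 4915200 secondary rays for this frame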

Mehran-Moghtadai
07-28-2006, 01:42 PM
Is there a way of controlling the supersampling of a DGS material? Because I know the Metal (lume) shader has blurred reflections and lets you control the samples, but DGS doesn't have such a thing.

-Vormav-
07-29-2006, 12:29 AM
I probably should have said "most" glossy shaders. I think DGS is an exception to the rule and relies purely on samples per pixel, which is unfortunate.

Mehran-Moghtadai
07-29-2006, 02:29 AM
The DGS material is probably the most inefficient material. Hopefully the next version of mental ray will set some issues right. Unfortunately, DGS is the only solution for HDRR, because by controlling the Diffuse channel you can also control the luminance of your scene.

KV99
08-01-2006, 06:54 PM
This might just be the stupidest question I've ever asked on these forums, because it seems like such a simple thing, but.. How can I make the light actually reflect off the surface and get cast onto that wall in the illustration? I tried reflective materials, but still no success. I am quite sure I'm missing something..

http://kristovaher.pri.ee/how.jpg

The goal is to test out this laser system where a beam is reflected between mirrors to create a network, but I cannot get even the slightest light test to work.

KV99
08-01-2006, 09:34 PM
Well, I got the one above to work with photons and caustics and learned a bit as a result, but..

I tried using the Parti Volume shader on it to make the trajectory of the beams visible. It did not work (basically the beam goes 'through' the first mirror, while the photons themselves travel properly to the end). Any suggestions?

Bao2
08-02-2006, 12:25 AM
KV99 wrote: I tried using the Parti Volume shader on it to make the trajectory of the beams visible. Any suggestions?

Look at my post 2509 (http://forums.cgsociety.org/showthread.php?t=104578&page=168) in the Jeff Patton mental ray shaders thread.

KV99
08-02-2006, 09:35 AM
Thanks, but the volume does not seem to be reflected off that surface there.. not in frame 0 nor frame 25. I didn't change anything; I opened it up and rendered to test, and nothing.


I've got another question, though. I was playing around with the Shellac material, which combines two mental ray materials with photon shaders. The problem is, only 'one' of those photon shaders is used. So if I have one material with yellow photons and another with red, during rendering it only renders through one of the photon shaders.

I imagine this is because photons are calculated through the material only once, and only through one photon shader per material, but is it possible to make the other photon shader get taken into account as well?

Bao2
08-03-2006, 01:27 AM
KV99 wrote: Thanks, but the volume does not seem to be reflected off that surface there.. not in frame 0 nor frame 25. I didn't change anything; I opened it up and rendered to test, and nothing.


Yes, true, it seems something changed in Max 8 and now my scene doesn't work anymore. :shrug:
The problem seems to be in the shader used for photons in the mirror material. I was trying, with no success, to fix the scene. :sad:

Mehran-Moghtadai
08-03-2006, 01:35 AM
Sorry to jump in here, but I think if you add the volume light option in the effects and atmospherics parameters inside the light's parameters, you could maybe get the light traced.

KV99
08-03-2006, 02:52 AM
Mehran-Moghtadai wrote: Sorry to jump in here, but I think if you add the volume light option in the effects and atmospherics parameters inside the light's parameters, you could maybe get the light traced.

No, from what I know, 3ds Max atmospheric effects do not work with mental ray (of course I cannot state that as a fact; it's a gut feeling). That's why mental ray has its own separate set of camera shaders to produce all of those effects, and more. It just doesn't seem to be working with what it used to work with before. I'll look into it more when I have time.
