Looking for tips to avoid the bleaching atomic spot with physical light falloff.


#3

Thanks for the links, Wizlon, but I have already read them as carefully as I could, along with a lot of other sources related to this.

In the mia_material section of Maya’s Help there are a few paragraphs about gamma and tone mapping. The information there reads to me as an attempt to explain a problem faced by the programmers that is then passed on to the users, who somehow have to find their own way out of it. There are no practical solutions, examples, or sample scenes included, which would have been far more helpful than an explanation of the problems of writing renderers.

The first link you posted is a 10-page thread that is basically a diluted version of what was already mentioned in the Help, with a lot of off-topic noise. Maybe I didn’t manage to keep my concentration through such a long thread, but I failed to find a practical solution or example that addresses the specific issue raised in my message. I would be very grateful if you or anyone could post a link to a post number in that or any other thread that actually gives a Maya example I can follow to remove the atomic blotch in the image I posted. The best help would actually be if some kind person who knows how to fix this took a few minutes to create a scene with a cube and a light near a wall and showed a nice result that people like me can reproduce. That would be greatly appreciated.

  Thank you.

#4

The solution is in the thread I already posted.

Mia_exposure_simple - gamma 2.2 for PC, 1.8 for Mac.

Gamma correct your textures (gammaCorrect node) and colour swatches (multiply each RGB float value by itself), i.e. R 0.338 becomes 0.338 * 0.338 = 0.114244, etc.

The light should have quadratic falloff (inverse square).
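
A minimal sketch of that workflow in script form, assuming Maya with the Mayatomr plug-in loaded ('file1', 'texturedMia' and 'pointLightShape1' are placeholders for your own nodes):

    import maya.cmds as cmds

    # de-gamma a file texture by routing it through a gammaCorrect node
    g = cmds.shadingNode('gammaCorrect', asUtility=True)
    cmds.setAttr(g + '.gammaX', 1.0 / 2.2)
    cmds.setAttr(g + '.gammaY', 1.0 / 2.2)
    cmds.setAttr(g + '.gammaZ', 1.0 / 2.2)
    cmds.connectAttr('file1.outColor', g + '.value', force=True)
    cmds.connectAttr(g + '.outValue', 'texturedMia.diffuse', force=True)

    # de-gamma a plain colour swatch by squaring each channel (the rough trick
    # above) and type the results back into the swatch by hand:
    # R 0.338 -> 0.338 * 0.338 = 0.114244
    swatch = [0.338, 0.42, 0.5]
    print([round(c * c, 6) for c in swatch])   # [0.114244, 0.1764, 0.25]

    # quadratic (inverse-square) decay on the light
    cmds.setAttr('pointLightShape1.decayRate', 2)   # 2 = quadratic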

Send me your scene if you want.


#5

Could you explain those two steps in detail?


#6

Yes, as Wizlon wrote, it’s simply because you are rendering the light in a linear color space. See Zap’s post on page 3 of that first link; he lists the problems of rendering in the wrong color space, including “burned out lighting near lights w. decay” (physical light).

But today’s Macs also have a gamma of 2.2, just like PCs; Apple changed that a little while ago.


#7

The problem image I posted was already rendered with the default PC 2.2 gamma of a mia_exposure_simple connected to my camera. There are no textures to worry about de-gamming, and I like the color and the falloff in the rendering, except for the atomic spot.

I didn’t post the scene because I thought it would take less time to recreate it than to download and open it, but anyway, here it is. I would be very happy if you or anyone else could fix the blotch without changing the rest of the falloff.

 Thank you for your help

#8

You could simply use my mia_material_rg phenomenon as your shader (it’s on page 9 of that first thread; it uses the physically correct mia_material and automatically gamma corrects your colors and textures) and then plug the mia_exposure_simple tonemapper into the camera’s mental ray lens shader.
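
For the lens shader part, a minimal sketch (assuming Maya with Mayatomr loaded; 'perspShape' stands in for your render camera, and the attribute name is as I remember it):

    import maya.cmds as cmds

    tonemap = cmds.createNode('mia_exposure_simple')
    # attach it to the camera's mental ray lens shader slot
    cmds.connectAttr(tonemap + '.message', 'perspShape.miLensShader', force=True)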


#9

Sorry Emil3d, I misunderstood. I think a combination of things is at work here: the scale (a 730 cm, roughly 24 foot, wide room), the brightness of the light (400 000!) and its closeness to the wall. I understand that you can see clearly (with your eyes) in a similar room with four 100 watt bulbs (next to the wall, I imagine), but could you take a photo of the exposed light bulbs (no diffuser) next to the wall in that room with no other external light source at all? I would recommend lowering the light intensity significantly, and if you want to keep the scale and placement, try playing with your camera’s exposure.

For example, if you want to keep everything the same in your example scene and just get rid of the bright spot, try this: change the light intensity all the way down to 10 000, and in mia_exposure_simple set Gain = 10, Knee = 0.1, Compression = 5.
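
In script form, roughly (node names are placeholders, and the mia_exposure_simple attribute names are assumed to be the lowercase ones shown in the Attribute Editor):

    import maya.cmds as cmds

    cmds.setAttr('areaLightShape1.intensity', 10000)     # down from 400 000
    cmds.setAttr('mia_exposure_simple1.gain', 10.0)
    cmds.setAttr('mia_exposure_simple1.knee', 0.1)
    cmds.setAttr('mia_exposure_simple1.compression', 5.0)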


#10

If you had a real light bulb, evenly distributing light in every direction, in a room that size and only 20 cm away from the wall, the same thing would happen. Of course not to our eyes - they can handle insane contrasts (our eyes are basically HDR cameras) - but take a picture of it and it will show a similar burn-out effect as your rendering. There are no devices that can display a scene the way our eyes see it, apart from a few ultra-expensive HDR screens…

So you have to hack it. A few possible solutions:
- tone map even more aggressively (raise the compression in mia_exposure_simple to 5 or higher),
- use an IES profile to control the light distribution (see the sketch below),
- or render to float and tone map in post.
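
A minimal sketch of the IES route, assuming Maya with Mayatomr loaded (node type and attribute names as I remember them - double-check them in your version; the .ies path and light name are placeholders):

    import maya.cmds as cmds

    profile = cmds.createNode('mentalrayLightProfile')
    cmds.setAttr(profile + '.fileName', '/path/to/bulb.ies', type='string')
    # hook the profile up to the light shape's mental ray light profile slot
    cmds.connectAttr(profile + '.message', 'pointLightShape1.miLightProfile', force=True)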

This is not really a ‘problem faced by programmers’ or a gamma issue, but a result of trying to get light in renders to behave in a physically correct way…

EDIT: oops - I didn’t read Sphere’s post before writing this, but anyway ;-)


#11

A gain of 10 would actually increase the gap between low and high values; it acts like a multiplier on the range of colors. So to bring the range down to 0-1, the gain has to be lower than 1.
Using GI would also help, or at least increasing the number of FG secondary bounces. It would help you balance the ratio between direct and indirect illumination.
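
A quick numeric illustration of the multiplier point (plain Python, ignoring the knee/compression/gamma stages for simplicity):

    raw = [0.05, 0.5, 5.0]            # dark wall, mid-tone, hot spot near the light
    print([v * 10.0 for v in raw])    # gain 10  -> [0.5, 5.0, 50.0]  (gap widens)
    print([v * 0.2 for v in raw])     # gain 0.2 -> [0.01, 0.1, 1.0]  (fits into 0-1)
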
Maybe a picture of this room you’re looking at would help as a good reference.
As Sphere and sixbysixx already said, a real picture would probably reveal the same issue.

Kako.


#12

I can see the importance of the mia_exposure node, but I think it is limited in certain situations. What I expect from a tonemapper (especially in indoor scenes) is to fix the burned areas and lighten the surfaces far from those hot zones, just like in real life.
Yes, mia_exposure can fix the burned areas, but the far-away surfaces are darkened as a result (even if I tune the gain), so I don’t see how it is supposed to help achieve balanced interior lighting.

I came up with another solution: I use the mia_material with a lowered diffuse weight and diffuse value (0.01), and I increase the FG indirect multiplier so the hot zones are not burned and the far-away surfaces receive more light.

Here is the result, with the same light settings:

Now, I know this is not the ideal way, but it’s the only solution I have found to fix the relatively burned areas close to or directly exposed to the light source, and to lighten the dark areas far from it.

This thread leaves me with some questions:

- What is the importance of the gamma value in the render globals? I can see that a value of 2.2 just darkens the render overall compared to a value of 1, so why is it so important?

- I am really concerned about the “de-gamming textures” issue; I understood the XSI example posted earlier in this thread. Mia_exposure seems to wash out textures a little bit - how can I keep the texture info “objective” and avoid that washed-out look?


#13

Yeah, I’ve been wondering about that as well. It can’t be solved with gamma, because it’s more about expanding the range, right?
But then, if I understand this right, this only applies to textures that are not being shaded - something like a texture applied to a surface shader, or an image plane attached to the camera, no?

All other textures will automatically ‘expand their dynamic range’, because they are lit in linear float and can therefore be tonemapped - or rather need to be tonemapped - afterwards in order to look ‘good’. Sorry, do I make sense? I don’t know how else to describe what I mean.

But for a camera image plane, maybe what we are looking for is something like a reverse mia_exposure_simple node: a node that automatically reverses the tonemapping applied to the output and applies it to the input?
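
Just to make the idea concrete, a toy Python sketch of that ‘reverse’ node, using only the gain and gamma parts of the simple exposure model (pedestal/knee/compression left out):

    def tonemap(x, gain=1.0, gamma=2.2):
        return (x * gain) ** (1.0 / gamma)

    def inverse_tonemap(y, gain=1.0, gamma=2.2):
        # undo the gamma, then the gain, so a display-ready image plane
        # ends up back in the same linear space as the rendered scene
        return (y ** gamma) / gain

    print(inverse_tonemap(tonemap(0.5)))   # ~0.5 again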

Is Zap here? Maybe he could help us out on this one?


#14

But he had increased the light intensity to 400 000 to put more light in the scene, instead of doing what you would actually do in real life: increase the camera’s exposure. Lowering the light intensity to a more “normal” 10 000 and then increasing the gain is equivalent to increasing the camera exposure (no?); it doesn’t make the scene itself brighter and it makes more sense to me. This is where physically correct renderers like Maxwell and Fry make things so easy… once you’ve understood the principles of photography.


#15

I think I found the right way of doing the tone mapping, but outside Maya, in Photoshop.

Step by step, continuing with the file I posted earlier:

  1. Disconnect mia_exposure_simple from the camera.

  2. Choose RGBA [Float] 4x32 Bit in the primary framebuffer.

  3. Batch render a single image in the OpenEXR file format.

  4. Open the file in Photoshop and apply Image > Adjustments > Exposure:

[img]http://i111.photobucket.com/albums/n129/Emil3d/fallofexposurecc.jpg[/img]

  5. Apply Image > Mode > 16 Bits/Channel.

  6. Apply a Curves color correction as shown, or the way you like it:

[img]http://i111.photobucket.com/albums/n129/Emil3d/fallofCurvescc.jpg[/img]

  7. Apply a Levels color correction to add additional contrast.

  That’s a result I like.
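
For reference, the Maya side of steps 1-3 can be scripted roughly like this (a sketch only: node and attribute names are as exposed by mental ray for Maya, and the framebuffer enum index is an assumption that may differ between versions, so verify it against the Render Settings UI):

    import maya.cmds as cmds

    # 1. disconnect the tonemapper from the camera's mental ray lens shader slot
    cmds.disconnectAttr('mia_exposure_simple1.message', 'perspShape.miLensShader')

    # 2. primary framebuffer: RGBA (Float) 4x32 bit
    cmds.setAttr('miDefaultFramebuffer.datatype', 5)   # assumed enum index - check it

    # 3. pick OpenEXR as the output format in Render Settings, then batch render,
    #    e.g. from a shell:  Render -r mr -rd ./images yourScene.mb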

Now it becomes obvious that my color temperature choice in the mib_cie_d node could have been better and more in line with this chart. I chose the color temperature based on my inability, or mia_exposure_simple’s inability, to produce a tone mapping preview in Maya that more closely matches the result in Photoshop.

Now the question and the challenge is how to match the Photoshop result as closely as possible with mia_exposure_simple in Maya for previewing purposes.

   Thanks again for all the help.

P.S. Regarding photo references: in my opinion photos are a poor approximation of reality and I never rely on them as a reality check :). This is one reason I’m using 3D, and a major reason (inspiration) for recreating reality as I see it, not as my camera sees it :). And I think that’s one of the purposes of tools like 3D rendering, isn’t it?


#16

Hi Sphere. Now I see what you mean. When I read your post I didn’t stop to think how low 10000 was.
Actually, I asked myself when I would ever use a gain higher than 1. That would probably be when the scene is underlit - but in that case, wouldn’t it be better to increase the light intensity? For example, instead of using an intensity of 10000 and a gain of 10, I would use an intensity of 100000 and a gain of 1.
I don’t know if increasing the gain is equivalent to increasing the camera exposure. It’s a good point, and the approach you suggested is interesting for tuning the lighting. If the light intensity and the gain in mia_exposure_simple are inversely proportional (or at least close to it), then we could choose a low intensity for the light and tune only the gain until we get the desired result. That would be an advantage, because tweaking mia_exposure_simple with use_preview should be very fast.
Anyway, if that approach works, wouldn’t we have to change those proportional values so that the gain doesn’t exceed 1? I mean, taking a low range and multiplying it by a number higher than 1 - wouldn’t that be something like taking a low-res bitmap and converting it to high-res? (Forgive me if that’s a stupid analogy. I know that image resolution and tonemapping are different things! I just didn’t know how to explain the problem. :wink: )
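
A quick check of that inverse relationship (plain Python, assuming the tonemapper is just a linear multiply before the knee and gamma kick in):

    intensity_a, gain_a = 10000.0, 10.0
    intensity_b, gain_b = 100000.0, 1.0
    # before any compression/gamma, a pixel effectively sees intensity * gain,
    # so both combinations feed the tonemapper the same values
    print(intensity_a * gain_a == intensity_b * gain_b)   # True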

Kako.


#17

Haha - yeah, sure. Being a photographer I’m actually quite fond of what photography can do to reality, but I know what you mean ;-)
I think my point was also that, no matter whether you render a scene or take a picture of it, at the moment the resulting image still has to be displayed on the same devices that can’t handle the high dynamic range of the real world, so the challenges photography and rendering face in dealing with this dynamic range are quite similar.

I like what you did to the image in PS.
Just a little note:
What you did with Levels at the end you could also just as easily have done in the Curves layer before: moving the shadow slider in Levels to the right is exactly the same as moving the 0/0 anchor point in Curves to the right - same for the highlights. (You probably knew this anyway…)


#18

If light were calculated internally using only 24-bit RGB then yes, that would certainly be a problem, but I think light is calculated internally in floating point, so it shouldn’t be. But about these numbers - light intensity and so on - I wish they had some real-world meaning. Again, if you take for example Maxwell’s watts and efficiency for lights, and f-stop and shutter speed for its cameras, you can at least relate them to real-world settings. With mental ray it seems I’m always guessing.


#19

You are experiencing exactly the same issues I had in the “Sun/Sky Rendering Bad” thread. Sphere explained what was going on very well, but I’m still unsure about using this workflow.
To me, the end result is nice, but it’s disjointed in that it’s hard to get the colours right inside Maya, because after the corrections in Photoshop the colours change quite a bit, as you’ve demonstrated (you started the thread with a warm yellow interior, but after the adjustments in Photoshop the room is cold and white).

But then again, maybe using a lower intensity for the physical light and using GI instead to brighten the room is a better solution for interiors, instead of using a higher-intensity light and trying to adjust the gamma?


#20

I did some more tests and realized that the light intensity doesn’t seem to matter much. For example, I rendered two files - one with a light intensity of 5 000 000 and the other with only 10. One rendering was completely white and the other completely black, but with the Exposure control in Photoshop, which is essentially equivalent to the Gain in mia_exposure_simple, I quickly managed to match the two renderings. :slight_smile: They ended up identical to the last image I posted in this thread.

It also appears that the gamma control in both Photoshop and mia_exposure_simple can be pushed freely to any extreme until the desired look is achieved. For example, the image with the Photoshop exposure settings that I posted earlier, correcting the rendering with a light intensity of 400 000, can be reproduced with mia_exposure_simple by setting the Gain to about 0.01 and the Gamma to about 9. Unfortunately there is nothing in Maya that can match editing controls like Photoshop’s Curves and Levels. I don’t expect a 3D program to include sophisticated post-editing features, but for previewing purposes we need some specialized test-render utility with a little more than the Pedestal control, which is kind of pathetic. At the very least, a time-saving workflow, like a test rendering that automatically opens in Photoshop or another post editor, would help.

Just for the record, the claim in my first post that the physical light falloff burns out details that cannot be pulled back by any tone mapping was based on the fact that I was using the OpenEXR file that Render View creates in the \images\tmp\ folder of the current project. I thought that this file was equivalent (in dynamic range) to the file created by a batch render. It turns out it is not: both files are 32-bit, but the file in the temporary folder has a clipped range and nothing can be done about it in Photoshop. As far as I understand, this file was supposed to be the true rendering coming from mental ray, which is then displayed - but destroyed - by the Render View, which is not capable of handling 32 bit. Because of this, getting feedback on what mental ray renders becomes even more complicated, since it requires launching the batch renderer every time a test render is needed. :sad:


#21

Turn on ‘Preview Convert Tiles’ and turn off ‘Preview Tonemap Tiles’ in the Preview tab of the render globals. The Render View will look like a funky acid trip, but the temp file will be alright.

I hope MasterZap will kick yer butt for calling the mia_exposure_simple pathetic. As he said, it’s called simple for a reason.


#22

Btw, try these settings with mia_exposure_simple: Pedestal 0.0, Gain 1.0, Knee 0.0, Compression 6.0 or higher, Gamma 2.2 (or 1.0 if you set the gamma in the primary framebuffer). This will compress the whole image into a visible range - basically the same thing you did in Photoshop.
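
For what it’s worth, here is a rough Python sketch of what mia_exposure_simple seems to do with those settings, based on my reading of the docs (the real shader may differ in detail):

    def exposure_simple(x, pedestal=0.0, gain=1.0, knee=0.0, compression=6.0, gamma=2.2):
        x = pedestal + x * gain
        if x > knee:                   # everything above the knee gets compressed
            x = knee + (x - knee) / (1.0 + compression * (x - knee))
        return max(x, 0.0) ** (1.0 / gamma)

    # a hot value near the light and a dim wall value, both pulled into 0-1
    print(exposure_simple(50.0), exposure_simple(0.05))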

Keep in mind that burnt spots are a general problem in photography as well. If you want it to behave more physiologically, i.e. more like the human eye, you won’t get around local tonemappers.