'Energy Conservation' in V-Ray (and others)


#1

Who’s ready to riot against this abomination with me?!

Who else had to apply hacks just to get a white object to look like it’s not dark grey!

Who else has never gotten a single benefit from this ‘feature’, only extra work, extensive node networks and complicated hacks to get around?!

I've set diffuse and / or specular values to 1,000,000.00 (1 million times brighter than 100% white) and still gotten dark grey. While the rest of the scene is nearly blown out. I have to separate the white object, and triple the light values, to get what I need, just to get a… realistic look!!

"White plastic with a clearcoat on it? Sorry, here's dark grey; when you turned up your diffuse, I turned down your reflection for you… you're welcome. No, I won't stop doing that."

3D rendering is not a simulator, those take days to render one small image. To arbitrarily make 1 thing “realistic”, with no OFF switch, makes no sense, and has forced me to march through mud just to get back to square 1, for the last 8yrs. And without a single net-benefit.

When a director says “make the whites a little whiter” I cannot just sit there and say “sorry, no can do, that’s unrealistic.”

For you arch-viz guys, imagine that you were forced to make interiors really dark when viewed from the exterior on a sunny day - the way it really is in real life.

The only people I can imagine benefitting from this are people who can't tell the difference between a good and a bad image, and rely on what they hear is "physically accurate" to decide if it looks good or not.

I sure hope I’m not alone on this… !


#2

I've no idea how V-Ray handles energy conservation. But render engines these days are trying to simulate light in a more or less physically correct way, and the problem here is that light behaves more predictably with energy-conserving shaders. If you have, e.g., a Maya Phong shader and you turn on reflections, then the reflections are simply added to the diffuse layer, which results in a shader that is too bright.
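Very roughly, the idea looks something like this (just my own toy sketch of the principle, not how V-Ray or Maya actually implement it):

```python
# Toy sketch of additive vs. energy-conserving layering.
# Not V-Ray's or Maya's actual code -- just the principle.

def shade_additive(diffuse, reflection, incoming=1.0):
    # Old Phong-style behaviour: reflection is simply added on top of diffuse.
    return incoming * (diffuse + reflection)

def shade_conserving(diffuse, reflection, incoming=1.0):
    # Energy-conserving behaviour: whatever the reflection layer takes,
    # the diffuse layer has to give up, so the surface never returns
    # more light than it received.
    return incoming * (diffuse * (1.0 - reflection) + reflection)

d, r = 1.0, 0.35   # white diffuse with a fairly strong clearcoat
print(shade_additive(d, r))    # 1.35 -> brighter than the incoming light
print(shade_conserving(d, r))  # ~1.0 in total, but the diffuse part is only ~0.65
```

That ~0.65 on the diffuse side is exactly the "white turns grey" effect being complained about here.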

But you are right, the user should be able to break these rules at any time.


#3

"I've no idea how V-Ray handles energy conservation"
You and me both, brudda. Why a shader reflecting at anywhere from 100% to 1,000,000% gets clamped at 35% - but not if it's a saturated color, only if it's white - makes no goddamn sense at all. I don't even want to know the excuse at this point. It's probably something like "well, to be really accurate you have to make the lighting 3 times brighter (as I've had to do) and re-adjust every other shader in the scene to match". It's no different than saying "here's a shader that doesn't work with the lights you have, or with any other shaders you might use", or "you can't make this light invisible because that's not realistic".

Don’t give us a solution that’s half “physically accurate” and half inaccurate, and expect it to all just work out. The only lights that are physically accurate at all are the IES lights; so don’t give us shaders that require them, without at least telling us in advance.

" then the reflections are simply added to the diffuse layer what results in a too bright shader. "
Yea, I know - too bright, like, something that actually looks white. And 'cuz we’re too stupid to just turn it down if it’s brighter than that. White plastic doesn’t actually exist in the ‘real world’. Hell, not even light grey.

Pardon me, just venting. Gotta get the stupid out. :banghead:


#4

If you have a shader's diffuse color set to white and it turns out grey, that either means you don't have enough light to make it show up as white, or (don't bite my head off for this) you aren't using a proper linear workflow.

haggi is right. Renderers like mental ray and V-Ray can only give us physically accurate lighting if they adhere to the laws of physics as they pertain to light. It's not Mental Images or Chaos Group trying to control what you do; it's that you can't have a renderer that renders light in a physically accurate way and then ask it to make exceptions. So, while "make the object white" sounds like a very simple request, it's important to understand that under the hood NOTHING is simple when it comes to physically accurate renderers.

If you want an object to be brighter, more saturated, darker, etc. and can't make that happen without affecting the rest of the objects in your scene, you can always render an object ID pass and fix it in your compositing program of choice instead of fighting with Maya/V-Ray and getting frustrated. And, if you are using a linear image format (.exr, .hdr), you have very fine control over this.
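A rough sketch of the idea (plain numpy with placeholder arrays; in a real job you'd read the beauty and ID/matte passes from your rendered .exr files, and your comp package gives you the same thing with nicer tools):

```python
import numpy as np

# Placeholder linear-space images; in practice these would be loaded from
# the rendered .exr beauty pass and an object-ID / matte pass.
beauty = np.random.rand(1080, 1920, 3).astype(np.float32)      # linear RGB
matte = (np.random.rand(1080, 1920) > 0.5).astype(np.float32)  # 1.0 = our object

# Brighten only the matted object by one stop, in linear space,
# without touching anything else in the frame.
gain = 2.0
graded = beauty * (1.0 + matte[..., None] * (gain - 1.0))
```

Because the data is linear, a gain like this behaves like an exposure adjustment rather than a washed-out screen-space tweak.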

Or, you can use the Maya software renderer instead, if you don't mind getting physically incorrect renders.


#5

Yes, if you re-read my post you’ll see I’ve been through all of that.

Beware of Greeks bearing gifts. If you have to take your lighting from 1.0 to 3.0 just to get a light, desaturated color, just because someone says "physically accurate", then something is obviously wrong. You cannot have half the renderer be "accurate" while the other half is inaccurate (raytrace shadows, bump/normal maps, infinitely small & invisible lights, changes in falloff, irradiance cache, light cache, photons, limited bounces, motion blur cheats, etc. etc.) and expect it to work. 3D rendering is not simulation; it's all cheated, to render in a decent amount of time.

If all other colors and textures work, but white does NOT, how is that "physically accurate", in any way that doesn't necessitate a completely different way to shade and light a scene? A way that none of us really knows, much less have we been informed that it will even be necessary. Should we start every scene by going through every default setting, tripling the lighting, and setting a maximum of 0.33 in every shading channel? Except when we actually want white, of course.

And let’s not even get into your diffuse or reflection going up and down completely under the hood, without your knowledge of it in the slightest. You can’t even determine what your diffuse or reflection values are, if they’re being constantly re-balanced under the hood.

Be careful that your response is not merely “it’s physically accurate, therefore, there can’t possibly be a problem with it except for the user”, that’s a naive position to take.


#6

You understand that the intensity multiplier on V-Ray lights corresponds to real-world units, right? If you leave the default value and change the units to lumens or watts, you can see that it becomes an extremely low value. Of course you won't get white that way, and it's a wonder you'd get other colors to come out properly as well. Changing from 1 to 3 is not that big of a jump, and is still extremely dim.
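To put some rough numbers on it (plain back-of-the-envelope photometry, not V-Ray's internal conversion):

```python
import math

# A typical household bulb puts out very roughly 1000 lumens.
flux_lm = 1000.0

# Treated as a bare point source, the illuminance it produces at
# distance r falls off as flux / (4 * pi * r^2).
for r_m in (1.0, 2.0, 4.0):
    lux = flux_lm / (4.0 * math.pi * r_m ** 2)
    print(f"{r_m:.0f} m: {lux:.1f} lux")

# Roughly 80 / 20 / 5 lux -- compared with on the order of 100,000 lux
# for direct sunlight. Interior light levels really are that dim.
```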

and as far as it not being a simulator it seems pretty damn close to me. If I source an HDR environment for an IBL in a composited shot, match the camera settings with v-ray physical camera, and correctly gamma correct the colors on my shaders, I get a result that matches almost perfectly with only a little post work to be done. Whites and all.

All that aside, I DO see your point. Yes, it’s possible to be in a scenario where you want to get a particular thing to look a certain way, but can’t reconcile that with the rest of the elements of the image. However, I don’t seem to struggle with that very often at all.


#7

1.0 is the default for the dome light - no lumens or watts option - and changing it means changing the HDR intensity. I set up a car, then the driver, with no problems - then I got to the hair on the driver. The ‘blonde’ preset came out dark brown. Nothing I did would brighten it (which means there’s a clamp on it, as well as the see-saw) - until I tripled the dome light. His teeth are also completely grey. But sure, realism is more important than aesthetics, even when your client won’t accept the work you’ve done.

and as far as it not being a simulator it seems pretty damn close to me.

"Pretty damn close" is not physically accurate. Oren-Nayar diffuse with a Ward specular is "pretty damn close" without energy conservation too. That's beside my point, which is that adding something "realistic" as a restriction, without integrating it with the rest of the renderer, makes no sense.

How often have you had an exterior HDR with the sun's value at 65,000? Or restricted yourself to IES lights, where the falloff cannot be changed without affecting the intensity? Do you use SSS for any wood surfaces? That's "physically accurate". Anything else is not the least bit simulator-worthy. Even having separate diffuse and reflection BRDFs is a cheat. How about if the default shader suddenly required Brute Force for both direct and indirect light, and all secondary bounces - but no one said anything until you were halfway done setting up shaders? That's what I'm trying to get at.

and correctly gamma correct the colors on my shaders

I set up the colorspace settings for my show. I know what the right or wrong colorspace looks like, and what to do with it.

I don’t seem to struggle with that very often at all.

And how often has it actually helped you? When they introduced this see-saw of diffuse / reflection that you can’t even see, much less do anything about, did you say “oh that’s a relief”? Or “wow that looks so much better”? If not, why is it worth one minute of grief or extra work, even if it’s not ‘very often’?


#8

What I meant was that the light values, colors, and tones end up matching reality very closely. No, it isn’t a complete physical light simulator.

As far as the dome light goes, I never leave the value at 1, but I also never use it by itself. I'm always using a .exr environment map. I tend to get the light values in the HDR to be a fraction of what they are in reality, and then use the multiplier to get them up to values like 65k. That, combined with the lens shader, always gives me correct lighting.

I guess I can put it this way. No, I’ve never jumped for joy at under the hood adjustments that occur, but the only time they’ve been a problem is when there was really a problem with my lighting set up (weird, just like photographing something in real life), or linear workflow setup. As far as it helping me, when I plug in predictable values based on real life and get a realistic looking result, I consider that a great help. I felt like I struggled much more with mental ray in this regard.

but whatever, if you really think it's broken and nothing can be improved on your end, maybe Chaos Group should be sending out damages cheques to all of its users.


#9

??
You grossly under-expose your HDRs?! None of the studios I've ever heard of shoots or uses anything but calibrated HDRs. So once you touch the default 1.0, you're breaking it - full-on cheating. And nobody uses a 65K-intensity 'anything'; that makes indirect lighting almost impossible to resolve, and slows down the render - so I don't know what you're trying to pull here.

" predictable values based on real life "
I've already told you, and shown you, several times, that 3D rendering is NOT real life. And you don't even know that energy conservation is what's providing you with realistic results - V-Ray does a thousand things differently than MR.

So spare me the juvenile, unsupported, thinly veiled "it works for me, you must be doing something wrong" or "it's physically accurate, so it must be good" drivel.

And for god’s sake I NEVER said it was “broken”. It’s no wonder this conversation has gotten stupid, you’re reading words that aren’t even there. No, my scene isn’t perfect, but neither are any of yours, and neither is anyone’s who wants to spend less than a month on one frame of a 10,000 frame animation.

If you can't tell me why it's a good thing to tie users' hands when a render artifact appears, or to require 10x more work for the same image in the name of the "physically accurate" idol that you apparently pray to, then you have nothing to add here.


#10

ok man I wasn’t trying to start a fight here. so let’s just say you win and it’s a dumb feature. have fun being mad about it.


#11

Being an arch viz pixel pusher with Maya and V-Ray, I can't say I've ever had this problem with the V-Ray Physical Camera and an HDRI when it comes to achieving very white materials.

I really like using the tonemapper to achieve near white whites and then being able to control it in something like Photoshop or Lightroom and smash the highlights and whites in post.

White fur though…that is a major pain!

What was it specifically that you were having trouble with? Perhaps if you post a simple scene or picture we could help?


#12

No - you were just trying to be elitist ‘without’ starting a fight. I even warned you that’s what you were doing. Arrogance in the absence of knowledge is what makes me mad, not cg that just needs some improvement.

Thanks burgerman, I know the workarounds, I’m just tired of taking the extra time for no good reason. In the last case, it was hair. There are a thousand things to wrestle with to get nice cg, so when anyone adds yet another thing to trip over, making things slower instead of faster, I try to get it removed.

The see-sawing, and clamping, of diffuse and reflection is imho no different than eliminating all the spec components (Phong/Blinn/Ward), and saying “too bad, they’re not realistic”. Even Lambert does not exist in the real world, since it has no roughness, and its reflectivity was set long before indirect bounces were ever calculated. Everything should be raytrace reflection, with a different gloss, scattering and IOR, if you want to force realism. Until you do that, leave us options, that’s all!!
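To be clear about what I mean by IOR-driven reflection, something in the spirit of Schlick's approximation (a generic sketch, not any particular renderer's implementation):

```python
def fresnel_schlick(cos_theta, ior=1.5):
    # Schlick's approximation for a dielectric: reflectivity at normal
    # incidence comes straight from the IOR, and rises towards 1.0 at
    # grazing angles -- instead of a fixed, hand-dialled spec value.
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(fresnel_schlick(1.0))   # ~0.04 -- looking straight at the surface
print(fresnel_schlick(0.1))   # ~0.61 -- near grazing, far more reflective
```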


#13

The idea was not a bad one. If you can find the time, make a simple scene where your problem occurs and share it, or post the images. Then we'll all know exactly what you are talking about.


#14

Thanks but I applied the workaround a long time ago. I'm just sick of doing workarounds for a feature that takes control away from the artist, and locks it. It doesn't make one speck of sense to me. Would everyone be so passive if depth of field was locked on, and locked to the lens settings of the camera - because that's realistic? Or if IES lights with brute force GI were the only lighting options?


#15

Well, after working for some time with more or less energy-conserving shading, we do not encounter many problems. Au contraire, our lighting artists are now really happy that they no longer need to adapt the shading to the lighting in the scene, because the shaders now behave as expected. We have a much smoother workflow now. But this may be quite different from your requirements.


#16

You've GOT to be kidding me. In over 15 yrs of 3D rendering I've never had - nor even heard of anyone, anywhere, on any shot, having - any problem with diffuse/reflection balance. That includes international TV commercials and major motion pictures.

What I 'have' heard of, constantly, is notes from supervisors that want more of this or less of that - and they're not asking for a see-saw. They may ask for more reflection, but that doesn't mean they also want it to be darker. And vice-versa. Granted, I hadn't had this problem in V-Ray until trying to get blonde hair, and you may have supes that say "ok" if you tell them their request is not realistic, but the idea of locking it that way is no less ridiculous, imho.


#17

I find it interesting that you’d bemoan a methodology that approaches physical accuracy (what our eyes see) much better than the outright guesswork we all dealt with before. Since you still have control either way, and don’t have to keep things energy conserving or physically accurate, why are you so upset about it?

Nobody’s making you render realism easier and faster, but the option is there if you want to use it.

Your “Everything should be raytrace reflection, with a different gloss, scattering and IOR, if you want to force realism.” is also baffling, to me. Why should everything be the opposite of how reality works? That seems unintuitive and backwards, to me.


#18

I don’t think you’ve read many of my posts - if you go back and read them you’ll understand that maximum physical accuracy is not the least bit practical; that’s why we have ‘interpolated’ GI, limited diffuse & reflection bounces, lights without falloff, and specular, for example. None of those things are realistic. Even geometry is unrealistic. Who renders glass using volumetrics? Real glass isn’t hollow with an infinitely thin skin.

Those inaccuracies give rise to other inaccuracies. And if you're shackled into doing everything else "realistically", you can get stuck with artifacts, or with single frames that take days to render, or have to shoot all your HDRs over again, then redo all your lookdev, or other drastic measures.

And it's not an option with V-Ray, that I know of. If you know of a switch somewhere to turn it off, you would win the internet today.

Why is raytraced reflection the opposite of how reality works? It's more accurate than cg lighting/shading + interpolated GI. All lighting is either emission, or reflection of light; none of it is a cg light emanating from a single point floating in space. All lights should have an emissive filament and a properly focused reflector, if you want the most "realism". Imagine how happy people would be if they were forced to render 100-bounce brute force + reflective caustics, just to make a light work :arteest:


#19

I don’t think you’ve read many of my posts - if you go back and read them you’ll understand that maximum physical accuracy is not the least bit practical;

I understand you directed this at Infernal, but I too am puzzled as to why you think this, since every new and old renderer these days is pretty much moving towards maximum physical accuracy. Even ILM, on full-CG projects, is now simulating real cameras, light values, etc.

I think it is easier to be grounded in reality; it is certainly more predictable how light and materials will behave before you run render tests. We know about f-stop and lux values… although CG will always be a cheat (as you have mentioned), I think in the bigger picture it's all about consistency, reliability and faster turnaround, because we know how things will behave.

It makes no sense to have a physics-based system in CG that is not related to real-world physics at all… why should light and materials be any different? Why do we create scenes in metres and centimetres and not potato units? It just makes sense to have some system to work with.

I mean no disrespect and I'm not saying you are wrong (because there is no right or wrong in this) - I'm curious as to why you think this, and what it should be. It is an interesting discussion.

Where do you draw the line? Is global illumination too realistic? Do you go back to the days of non-linear rendering, creating spotlights that shoot through floors to simulate light bounces, with an array of lights to simulate the sky?

I guess, luckily, even with V-Ray it is still possible to work the old way. V-Ray supports Phong and Lambert, where diffuse can go past 1.0, a non-linear workflow, GI off, etc… if that helps.


#20

No disrespect taken - I just wish I could understand why my examples aren’t sufficient - the line I draw is where it becomes utterly impractical to create the perfect environment that the render engine will accept, and perform in a reasonable amount of time. Do you set raytrace depth to upwards of 100? It’s the exact same thing. ILM has tens of thousands of cores to render on, 50hrs per frame is no big deal to them, how about you?

You simply cannot justify something ‘just’ by saying it’s realistic. If that made sense, then there would be no cg lights, no interpolated GI, because they’re not realistic. Even HDR’s have little basis in reality, because their light doesn’t change as an object moves around. We’re not scientists, we need good looking images more than we need accuracy. And we need to be able to make changes according to artistic direction, which usually has 0 basis in reality. For example, I was told to make the G.I. Joe ‘Cobra Stealth fighter’ visible - at night. Our audience is people, not data sheets.

"Those inaccuracies give rise to other inaccuracies. And if you're shackled into doing everything else "realistically", you can get stuck with artifacts, or with single frames that take days to render, or have to shoot all your HDRs over again, then redo all your lookdev, or other drastic measures."

Does that really not make sense? Of course I want things more realistic. But I have to be able to render them for that to be any advantage at all. Until all these things are done realistically, we NEED the flexibility to be able to compensate for them. Most of 3D rendering is still a HACK. You can't give us a hack, tie our hands, and still expect things to come out photoreal.

Energy conservation is not an advancement like GI. It’s nothing that a good TD couldn’t set up on his own, with scripts. Or an artist with a good eye couldn’t do manually. It’s being locked to a see-saw, and clamped, that I object to.
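For instance, something like this (a generic, hypothetical opt-in script; the material dicts just stand in for whatever your DCC's API actually hands you):

```python
# Hypothetical opt-in version of the same rule: a pipeline script that walks
# the scene's materials and rebalances them only when asked to.

def conserve(mat, enabled=True):
    """Give the diffuse layer only what the reflection layer leaves over
    (the usual see-saw), but only if the artist opts in."""
    if enabled and mat["diffuse"] + mat["reflection"] > 1.0:
        mat["diffuse"] *= (1.0 - mat["reflection"])
    return mat

white_plastic = {"diffuse": 1.0, "reflection": 0.35}
print(conserve(dict(white_plastic), enabled=True))   # diffuse drops to ~0.65
print(conserve(dict(white_plastic), enabled=False))  # left alone -- artist's call
```

The whole point is the enabled flag: the rule is trivial to apply when you want it, and trivial to skip when a note from a supervisor says otherwise.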

Unless you're working in a scientific or legal field, aesthetics must come before physical accuracy. Every one of the most incredible images you see today is touched up, tweaked, and color-corrected. None of them are 100% real. If accuracy / realism / perfection doesn't serve aesthetics, it's discarded. If you're working on LOTR and 'realism' turns out looking ugly, do you keep it that way? Not if you want to keep your job, you don't.

Does everyone here think that realism = aesthetics?? If that were the case, then the Mona Lisa, and all the old masters' works, would have become obsolete, and "ugly", with the invention of the camera. I wondered which was more important for a little while too, but it soon became obvious. Aesthetics take priority.