'Energy Conservation' in Vray (and others)


#21

I think we are actually on the same page here, although I'm talking more about the process than about what the end result should be. The workflow makes more sense with units. I'm not saying it is realistic or produces realistic renders as a result - renders rarely look realistic without post, as you say. What I'm saying is that it's nice to have a reliable method of setting up lights and materials, with far less guesswork than we had 10+ years ago.

Like my comparison earlier with physics in CG: it is much easier to have gravity set to 9.8 and have typical real-world weights in order to get predictable results. Otherwise we'd be using numbers which have no meaning at all. The physics in Maya, again, will never be 100% scientific - ever - but it is great to be in the same ballpark when doing that sort of stuff. The same logic applies to rendering and lighting in my eyes.

After all that, we can apply artistic flair on top of the real-world settings.

The line I draw is where it becomes utterly impractical to create the perfect environment that the render engine will accept, and still perform in a reasonable amount of time.

I agree!

You simply cannot justify something 'just' by saying it's realistic. If that made sense, then there would be no CG lights and no interpolated GI, because they're not realistic. Even HDRs have little basis in reality, because their light doesn't change as an object moves around. We're not scientists; we need good-looking images more than we need accuracy. And we need to be able to make changes according to artistic direction, which usually has zero basis in reality.

I agree with what you say here as well. I don't think any pro has submitted raw renders to clients - as we all know, they usually get smashed in post. But as I said earlier, it is nice in rendering to have an idea of what the scene will render like initially.

"Those inaccuracies give rise to other inaccuracies. And if you're shackled into doing everything else 'realistically', you can get stuck with artifacts, or with single frames that take days to render, or have to shoot all your HDRs over again, then redo all your lookdev, or other drastic measures."

Does that really not make sense? Of course I want things more realistic. But I have to be able to render them for that to be any advantage at all. Until all these things are done realistically, we NEED the flexibility to compensate for them. Most of 3D rendering is still a HACK. You can't give us a hack, tie our hands, and still expect things to come out photoreal.

Yes, and inaccuracies cost increased render time too! Moore's law in action: even with faster hardware, render times stay the same over the years! I think we do have nice flexibility now with 32-bit float EXR passes.

It’s being locked to a see-saw, and clamped, that I object to.

What do you mean here specifically by 'clamped'? We still have the ability to use Maya shaders in most third-party renderers, which are not realistic (they can go past a diffuse value of 1.0, for instance). We can also choose to revert to the past and use linear lights with no relation to the real world.

Unless you're working in the scientific or legal field, aesthetics must come before physical accuracy. Every one of the most incredible images you see today is touched up, tweaked, and color-corrected. None of them is 100% real. If accuracy / realism / perfection doesn't serve aesthetics, it's discarded. If you're working on LOTR, and 'realism' turns out looking ugly, do you keep it that way? Not if you want to keep your job, you don't.

Agree 100%.

On the side I work as an architectural photographer - the amount of cheating that takes place, and the tonne of time spent in post, certainly justify your logic of aesthetics > physical accuracy. Could not agree more.

I guess my main point, earlier and throughout this post, is that it's just easier to work with a base set of measurements and rules that are very predictable in their outcome. It is not scientific or accurate, but it is reliable. I think reliability is the key - even Vlado these days says he uses Universal Settings in rendering because they're reliable. It may take more machine hours to render, but reliability and predictability mean more time for the artist - more time spent on the image and on post, with fewer tweaks and fewer settings.

Does everyone here think that realism = aesthetics??

No way! Not at all.

If that were the case, then the Mona Lisa, and all the masters' works, would have become obsolete, and 'ugly', with the invention of the camera. I wondered which was more important for a little while too, but it soon became obvious: aesthetics take priority.

It is an interesting point. Back to my ILM reference: Rango is a very non-realistic movie in its look! But their workflow on that particular job was set up to be physically plausible - not 100% scientifically based or anything - because working that way gives predictable results in lighting and camera. Cinematography in CG is easier now that we can use the virtual camera in similar ways to a real-world one. On top of the base of physically plausible settings, artistic flair is added.

This is my point - I hope I explained it properly. Again, I do think we are on the same page. But perhaps you might disagree with the workflow - that's cool if you do. Thanks for the interesting discussion.


#22

Ok, I think I misunderstood this, because… it's so hard to imagine. Are you saying that you or your co-workers have actually had trouble with the 'guesswork' of just setting up diffuse and reflection levels? It seems to me that prior to energy conservation, you simply set whatever levels looked good. With energy conservation, you 'try' to set what looks good, and the diffuse level changes while you're adjusting the reflection, and vice versa, giving you an unpredictable result each time and making things like white chalk much tougher. It's hard to imagine the latter being easier than the former.

I’m glad you do understand what I’m trying to say.

What do you mean here specifically by 'clamped'? We still have the ability to use Maya shaders in most third-party renderers, which are not realistic (they can go past a diffuse value of 1.0, for instance). We can also choose to revert to the past and use linear lights with no relation to the real world.

The most recent problem I had was with the vray hair shader (I believe it's always shader-based anyway); I could not even get dark blonde, much less the platinum blonde that I needed, even by setting the diffuse or the reflection level to 1,000,000.0 (I go to extremes when testing things). You can't really have actual energy conservation if the surface is reflecting more light than is hitting it anyway, so it's pretty much clamped at 1.0. I don't like to go above 1.0, but it beats making a new layer to render one thing by itself with the lighting tripled.
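Just to illustrate the clamping (with made-up code - this is not V-Ray's actual shader internals), here's why cranking a value to 1,000,000.0 can do literally nothing:

```python
# Toy sketch of an energy-conserving clamp, NOT V-Ray's actual code.
def shade_clamped(diffuse_level, incoming_light):
    # Energy conservation: a surface can't reflect more light than it
    # receives, so the level is clamped to 1.0 before it's ever used.
    albedo = min(diffuse_level, 1.0)
    return albedo * incoming_light

print(shade_clamped(1.0, 0.3))          # 0.3
print(shade_clamped(1_000_000.0, 0.3))  # still 0.3 - the extreme value is ignored
```

Once that clamp is in there, no slider value on my end can brighten the hair past what the lighting provides.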

Back to my ILM reference: Rango is a very non-realistic movie in its look! But their workflow on that particular job was set up to be physically plausible - not 100% scientifically based or anything - because working that way gives predictable results in lighting and camera. Cinematography in CG is easier now that we can use the virtual camera in similar ways to a real-world one. On top of the base of physically plausible settings, artistic flair is added.

This is my point - I hope I explained it properly.

I think so, but ILM really isn’t a good thing to go by, it’s kinda like saying what’s good for an F1 car is good for the family station wagon. Their pipeline is set up by geniuses (I’m not exaggerating) and refined for decades by other geniuses - and all the individual work is overseen by other uber-smart people, with tons of resources and time. They could spend an entire day on a problem and it wouldn’t affect delivery. Not so for many of us. They’ll also go to extremes for very minor improvements.

What you describe sounds a bit like using presets. It's absolutely a time-saver to have consistent standards, measures, and values similar to the real world. But they have to be adjustable, that's all.


#23

I very much understand where you’re coming from, and agree with you on most points. But I don’t understand why you’re upset or worked up about the topic - as Hamburger said, you can always use non-realistic shaders if you choose.

And also in line with Hamburger's post, I'm in arch/viz. I need consistency AND physical near-accuracy. I need lights to behave predictably, and shaders to do the same. I don't have time to toss in my artistic talents (however paltry) on my work, because of insane deadlines and tons of pressure. Energy-conserving shaders and predictable lighting really, really help speed things up. And render time? Well, 28 cores tend to chomp that down to a manageable level, too.

As for physical accuracy, and my comment about ray tracing being the reverse of reality, it gets a little nerdy here. Ray tracing is, of course, the computer's method of reverse illumination, where the "camera" shoots out the rays instead of receiving the light, as a real camera or our eyes do. This is where poor physics comes into play, and where my take on things gets a little dicey and controversial. Since the 1920s, when the Copenhagen Interpretation forbade physics from remaining physical, mechanical, and "real", there's been almost no progress in physics theory. The mainstream doesn't even admit that photons exist. They refuse to witness physical reality as it IS, and instead prefer to hide in their tensors and math-art, which have little to no bearing on actual, physical reality.

Neoclassical physics, however, fixes this. It's not mainstream, it's highly controversial, and it's scorned by academics because its simplicity threatens everything they believe they know. It overturns almost a hundred years of esoteric garbage from the physics world, simply by allowing the photon to be a real particle, with mass, spin, and chirality.

How would this affect 3D rendering? Well, instead of shooting rays from the camera, each material would have photon emission as it does in real life. Each light would emit photons just like real lights. Each material would recycle the charge field just like in real life. The "virtual camera" would then become a receptor instead of a caster of rays.
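Purely as a toy illustration of that direction reversal (hypothetical Python, not any renderer's actual implementation - the one-dimensional film strip and uniform emitter are made up for the example):

```python
import random

FILM_PIXELS = 8

def backward_render():
    # conventional ray tracing: each film pixel asks "how much light
    # reaches me?" - with a uniform emitter, every pixel's expected
    # energy can be computed directly
    return [1.0 / FILM_PIXELS] * FILM_PIXELS

def forward_render(num_photons):
    # forward transport: photons leave the emitter, and the film is a
    # passive receptor that accumulates whatever happens to land on it
    film = [0.0] * FILM_PIXELS
    for _ in range(num_photons):
        landing = random.random()  # uniform random landing spot on the film
        film[int(landing * FILM_PIXELS)] += 1.0 / num_photons
    return film

print(backward_render())        # exact: [0.125, 0.125, ...]
print(forward_render(100_000))  # noisy, converging to the same values
```

The forward version converges to the same picture, but noisily and expensively - which is the cost problem I mention below.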

Granted, this is all hypothetical, and the sheer amount of math to run such a simulation would crush even the strongest and fastest systems. But that's what real, neoclassical physics will bring to the table. It's a long way off, but it is part of my life's work.


#24

I really like energy conservation, but I've noticed with MR's MIA material that you can sometimes get slightly unexpected results when you try to isolate a parameter.

For example, if you want to isolate the reflections, you'd turn your diffuse weight all the way off or black out the diffuse color. You then get your reflections dialed in, but when you add your diffuse back, suddenly your reflections are a little different - often not strong enough anymore. It's annoying not being able to perfectly isolate the attributes from each other.
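To make the trap concrete, here's a deliberately simplified conservation scheme in Python - a plain sum-to-one normalization, not mia_material's actual code - that reproduces exactly this soloing problem:

```python
def conserve(diffuse, reflection):
    """If the channels together would reflect more energy than arrives,
    scale both down so they sum to at most 1.0 (simplified illustration)."""
    total = diffuse + reflection
    if total > 1.0:
        diffuse, reflection = diffuse / total, reflection / total
    return diffuse, reflection

# Step 1: solo the reflections by blacking out the diffuse.
print(conserve(0.0, 0.9))  # -> (0.0, 0.9)    reflections look dialed in
# Step 2: add the diffuse back in.
print(conserve(0.8, 0.9))  # -> (0.47, 0.53)  reflections are suddenly weaker
```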

My other main beef with energy conservation is that it really needs to include all shading parameters, such as SSS and ambient, to be effective. I know companies are working on new shaders that try to do this, but in the meantime it's still troublesome to have just a few parameters be energy-conserving while you're adding other non-energy-conserving parameters or additional shaders to your overall shader that aren't included in the conservation.


#25

The topic didn't get me upset in the slightest. Being told "it's physically accurate and therefore you're doing it wrong" by someone (neither you nor Hamburger) who doesn't know what they're talking about, did. I had even explained why it's not that simple, and why that position is arrogant, before it was told to me more than once.

It's the vray hair shader that gave me a hard time, and Maya's reflection doesn't work in Vray at all. Using shaders from different renderers can cost you render time and create bizarre artifacts too; it's just a good thing to avoid. It would also give you the opposite of the quick, predictable results you wanted.

Like I was saying, I'm a little confused by the idea that there's difficulty in setting up something that's not locked to a diffuse/reflection balance - to me, that's like being forced to use IES lights for everything. I know archviz uses lots of presets, and that makes sense, but if you have good-looking preset shaders, why would you need energy conservation?

Do you guys render animations? I have 4 minutes of video to render - at 24 fps that's 5,760 frames, so if each frame took 15 min to render, that would be 60 straight days of rendering. So I'm kinda screwed if I have to use a realistic feature I don't need. That's why I'm using vray in the first place: it absolutely screams, with some final-quality shots taking less than 2 min per frame(!)

Maybe it's because you guys' lighting environments are pretty much always sunny days? In other fields you have to light scenes in ancient caves, around lava, on Mars, in outer space, with exotic materials never seen before, alien light sources, volumetrics, etc., with different HDRs shot and calibrated by different people and lighting pipelines on every show. Most shows are an attempt to do things that no one has ever seen before. That's probably why I get less predictable results, and why rigidity is such an enemy.

But every one of us uses the unrealistic flexibility, so I guess I was anticipating a little more agreement on the issue.


#26

Exactly!

My other main beef with energy conservation is that it really needs to include all shading parameters, such as SSS and ambient, to be effective. I know companies are working on new shaders that try to do this, but in the meantime it's still troublesome to have just a few parameters be energy-conserving while you're adding other non-energy-conserving parameters or additional shaders to your overall shader that aren't included in the conservation.

That’s an even better point.

But I imagine having all those parameters tied together could be even worse :scream:

I bet what you would want, for people who really want energy conservation, is a slider that goes from 100% diffuse to 100% reflection. And if the other channels you mention are included, the multiple sliders would move up and down together, actually showing you how much each one is changing.
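Something like this hypothetical sketch, just to make the idea concrete: the artist sets raw weights per channel, and the UI echoes back the effective, energy-conserved values so nothing moves invisibly:

```python
def effective_weights(**raw):
    # hypothetical UI back-end: scale all channels down together when the
    # raw weights would total more than 1.0, and report the results
    total = sum(raw.values())
    scale = 1.0 / total if total > 1.0 else 1.0
    return {name: value * scale for name, value in raw.items()}

for name, value in effective_weights(diffuse=0.7, reflection=0.6, sss=0.4).items():
    print(f"{name}: {value:.2f}")  # shows exactly how far each slider was pulled down
```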


#27

Yeah, I think the new MR layered shader lets you set how much contribution each "layer" (i.e. attribute) has in the overall shader. If I'm not mistaken, it'll also let you solo an individual layer as-is, but I don't remember. I still haven't played with it yet - I've only seen the videos and write-ups - so I may have forgotten some of the details.


#28

So, just to jump in (without having read 100% of the discussion, and referring to the initial post): you are trying to get a white material.
Ever thought about how a pure white, glossy material would look? White with white reflections? We laugh at customers who want the white even whiter. We started creating our own white-balance cards to "calibrate" our setup and to understand where you can set something and where else you can change that previous setting.
Not sure if it's only in Maya, but there are settings which you can enable in one spot in the menus but overwrite or disable somewhere else. We have come across stupid things that ruined our projects just because you were able to do that.
You never want 100% white or 100% black set as the diffuse color in any shader. Even in unbiased render engines like Maxwell, they advise you not to use 100% settings.


#29

I think the main issue overall is that there is no one standard by which all these parameters are calibrated.

It's not quite like a bars-and-tones NTSC (never the same color, tee hee) baseline.

Sure, we all calibrate HDRs to color-checker cards and have different profiles for each show.

Yet which standard do the different renderers go by?
Which standard do the different applications go by?

I remember trying to explain linear-to-sRGB between roughly 2005 and 2008 to befuddled glances. Then you begin to understand that even though you can get a render engine to display how you like, you then have to make the application display the shaders, and even the color pickers, accordingly. I spent 5-plus years at Digital Domain. We even had a custom right-click sRGB conversion on the color picker, so that the color you picked would be put through the sRGB curve and end up the exact color you picked. We often had interns get upset, saying that the color ended up darker after they picked it in the display. Reading your posts, I understand you already understand this.
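For reference, the curve in question is the standard sRGB transfer function; here's a minimal Python sketch of the round trip (the idea behind it, not our actual tool):

```python
def srgb_to_linear(c):
    # standard sRGB electro-optical transfer function
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # inverse transform, applied by the display pipeline
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

picked = 0.5                     # mid-grey chosen in the display
stored = srgb_to_linear(picked)  # ~0.214 - the "darker" value the interns saw
print(stored, linear_to_srgb(stored))  # round-trips back to 0.5 on screen
```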

What I am attempting to add to this thread is that I have seen no overall standard that all the apps/renderers have aimed to conform to. I mean, yes, we have linear/sRGB. But beyond that, consider what happens once an HDRI is created as, say, a calibration standard on a show, as we had at Digital Domain.

It is only, and can only be, an internal standard that we use. We tested the render output by displaying it in Nuke and rolling the mouse over the final values, to see how hot the whites got above 1.0 and to check that the white point actually looked white at 1.0. It was quite a bit of labor between the different apps as well, since we had Houdini, Maya, Max, and Vray, RenderMan, Mental Ray, Mantra, etc.

There are times when white is actually white and it's being colored by the green tint of glass, the bounced value of a beige rug, the yellowing tint of a setting sun, and so on. Stuff I know you're aware of - and yet we had clients ask us to "color correct" certain things, because CGI sometimes has to give a certain look vs. an accurate look.

So somewhere in that abstraction is a possible world where you could get all the accuracy you'd want, but I am just not sure what each engine conforms to.

Is it…

Photography and color-checker boards set to a certain white point, creating a lighting HDRI that correlates to a value of 1.0 in Nuke only after being rendered to a linear EXR, AND appears perfectly white when displayed in an sRGB space?

Or is there some numerical, mathematical set of values that could be a standard all the internal values of each render engine point to, regardless of the subdivision math they use to anti-alias and smooth out shadows? Lumens? Pantones?

So far, all I have seen is each company having its own internal color-calibration methods, dealing with the fact that the render engines - and their app integration and display of images, shaders, and color pickers - exist in their own bubbles, rather than any render engine conforming to one standard.

A lot of time and money was spent on the search for, and creation of, such a standard at DD - and even then, it exists in the DD bubble too, in spite of render engine particulars or the lack thereof.


#30

'Trying to get a white material' and trying to get "pure white" are two different things. And you do get pure white if you have a piece of chalk sitting out in the sun. But if it's overcast, and the chalk looks like concrete relative to its surroundings, then it's grey, not white.

But no, what you missed is that I needed blonde hair, and it came out brown, despite dozens of other shaders looking perfect. But I'm also not looking for a workaround; I'm just tired of doing workarounds for things that don't help me in the first place.

I don't think Maya can disable energy conservation in Vray; I believe it's done in the shader itself. I'd be glad to be wrong, though.

we all calibrate HDRs to color-checker cards and have different profiles for each show.

Yet which standard do the different renderers go by?

I do believe they all treat 18% grey (linear) the same, but yeah, if they were all identical, the renders wouldn't look any different. That is one reason we need flexibility.

The other reason is that HDRs come out different all the time - no two have the same sun value, for example, because it's ridiculously impractical to render with the sun's full brightness - so it depends on the whim of whoever assembles it. Values of 10.0 can cause hotspots that are highly render-intensive to get rid of, much less 65,000.0, so some places will clamp them even lower. There are just too many variables.
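A minimal sketch of that kind of clamp (illustrative only - the ceiling of 10.0 is arbitrary, and every shop picks its own):

```python
import numpy as np

def clamp_hdr(pixels: np.ndarray, ceiling: float = 10.0) -> np.ndarray:
    """pixels: float array of linear radiance values; knock the sun's
    super-bright samples down to a ceiling to tame render hotspots."""
    return np.minimum(pixels, ceiling)

hdr_row = np.array([0.18, 1.0, 9.5, 65000.0], dtype=np.float32)  # toy pixels
print(clamp_hdr(hdr_row))  # [0.18  1.0  9.5  10.0] - the sun is now 10.0
```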

Yup, money that smaller studios don't have - reason 3 we need flexibility.


#31

The chalk example is very good. Let's say you create a chalk material in a scene you have set up. You share this shader. Someone picks it up and puts it in his scene. He does not know what your lighting was or how it was set up. He probably did not use a linear workflow, or maybe you did not. How is your chalk shader supposed to work in his scene? It simply can't, by just drag & drop.
In all my time working as a 3D artist, no preset or shader has ever worked properly without being adjusted to my scene. It is a balance between the lighting and the shader.
You say you used a blonde preset. What kind of preset was that? Which shader is it? Ever tried the FastSSS shader presets in a scene? The different skin types, for instance? It all comes down to your lighting and adjustments.


#32

… and it's those adjustments that energy conservation gets in the way of - that's the whole point of my message. Obviously a shader isn't going to work going from a linear workflow to an sRGB one; I'm not stupid. You should read the whole thread before assuming anything else and making me repeat everything I've typed.


#33

To be honest, the linear "workflow" is a piece of crap in Maya, and Vray does not do it any better. We found out that color swatches need to be gamma-corrected - the linear-workflow checkbox does not affect them. The same goes for light shaders and surface shaders (e.g. with bitmap textures piped into their color slot).
Furthermore, you can't even load an Adobe RGB color profile into Maya. That means you have to use poor sRGB, which has the smaller color space.
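The usual workaround of that era is to wire a gammaCorrect utility node in front of the color slot by hand. A minimal maya.cmds sketch, assuming a shader named myBlinn already exists in the scene:

```python
import maya.cmds as cmds

# Maya's gammaCorrect node computes outValue = value ** (1/gamma), so a
# gamma of 0.454 applies a ~2.2 power, taking an sRGB swatch to linear.
gc = cmds.shadingNode('gammaCorrect', asUtility=True)
cmds.setAttr(gc + '.gamma', 0.454, 0.454, 0.454, type='double3')
cmds.setAttr(gc + '.value', 0.5, 0.5, 0.5, type='double3')  # the picked color
cmds.connectAttr(gc + '.outValue', 'myBlinn.color', force=True)
```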

Back to your hair: as far as I understand it, you are using an HDRI piped into a dome light. Well, how can you expect a representative blonde color on your hair with a frozen lighting situation within an HDRI? Is your camera inside or outside the car? If it's inside, you might consider portal lights on your windows.


#34

You know what kills me? Maya's procedural textures are mixed in sRGB color space, with no option for linear.

The ramp texture is an obvious giveaway.

What the freaking hell? How’s that for an oversight?

Why don’t all procedural textures offer a checkbox option to calculate in linear space?
It’s been bothering me for years, but the developers have only offered gamma correction for file textures.
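To make it concrete, here's a toy demo of the difference being described - interpolating the display (sRGB) values directly versus mixing in linear and converting afterward (standard sRGB math; the 50% black-to-white blend is just an example):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
srgb_mix = 0.5 * black + 0.5 * white  # mixing the display values directly
linear_mix = 0.5 * srgb_to_linear(black) + 0.5 * srgb_to_linear(white)

print(srgb_to_linear(srgb_mix))    # ~0.214 - the sRGB-space midpoint is dark in linear
print(linear_to_srgb(linear_mix))  # ~0.735 - the linear-space midpoint looks brighter on screen
```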


#35

From what I can tell, the colors in linear mode are mixed linearly - at least that's what I can see in the source code. There is no color conversion happening in the ramp node. But the sources I have are quite old…


#36

Wait about two weeks - it seems to be fixed… or at least heading in the right direction.

https://www.youtube.com/watch?v=XIP3yF_KteI


#37

Yeah, I hope they fix the swatch and how ramps appear in the Attribute Editor, because right now you're making color decisions based on wrong display feedback.


#38

Uh, maybe because every single other shader works as expected?

And again - this is not about a particular instance, it’s about the concept of energy conservation and how it’s implemented.


#39

If I need to deviate from energy conservation for whatever reason (usually artistic), I just use the old legacy shaders… they render just fine too (in mr/vray), produce AOVs, and you can push the values to whatever you like.


#40

Thanks, but do you mean use the Maya shaders? Tried that, and the glossy reflection results were all over the place, and I’d hesitate to waste yet another hour gambling on whether I can get a legacy hair shader to work; that’s where it bit me on the ass. The point of this post is that I’ve wasted too much time already figuring out what work-arounds to use for this locked energy conservation “feature” garbage.