So what did you do in the end to get past the energy conservation limitations? Or are you still dealing with the issues mentioned earlier? I'm confused about whether you already figured it out another way or not.
'Energy Conservation' in Vray (and others)
I tripled the light level for the hair pass - it's a good thing I was already rendering it separately. It was the only way to get blonde hair; even setting the hair color to 1,000,000.00 above white resulted in brown, and that was using the "blonde" preset.
Maybe the fact that all the other shaders were also VRay, and responded more logically, is an indication that there's something wrong with the hair shader.
An entire thread spewing misinformation over a bad hair shader. Use a VRay material on the hair, or use VRay’s hair material if you’re shading curves.
Physical shading and energy conservation are literally the gold standard in 3D rendering. Without them, nothing in your scene sits together, nothing looks right, and you have zero hope of achieving anything approaching photorealism.
How do you possibly expect to get correct results out of a renderer when you're not even using its supplied and tested surface models?
This would be like buying a nice sports car and complaining that it drives poorly when you put grapefruit juice in the gas tank.
So, I’m just gonna post this out here after alternately lol’ing and facepalming my way through this thread.
HDR, sane linear render settings, the "blonde" vraymtlhair3 preset and the "white matted" vraymtlhair3 preset, altered to a diffuse of 1/"white" (otherwise all your contribution comes through refraction as a back-light translucency fake), overall color mult at 1.0 (the preset clamps it at 0.8).
White sphere, black sphere, gray sphere (gamma-corrected swatch for mid gray), furry balls, one HDR with a reasonable range, at "1".
No magic; note the color mapping settings in the screen cap.
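(Side note, since "gamma corrected swatch for mid gray" trips people up: in a linear workflow, the mid gray you see on screen is not 0.5 in linear light. Here's a minimal Python sketch of the standard sRGB transfer functions; this is generic color math, not VRay's own color-mapping code.)

```python
# Minimal sketch of sRGB <-> linear conversion (the standard IEC 61966-2-1 curves).
# Illustrative only; this is not VRay's color-mapping implementation.

def srgb_to_linear(c):
    """Map a display-referred sRGB value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Map a linear-light value in [0, 1] back to display-referred sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A "mid gray" swatch of 0.5 on screen is only ~0.214 in linear light, which is
# why an uncorrected 0.5 diffuse swatch renders brighter than people expect.
print(srgb_to_linear(0.5))    # ~0.2140
print(linear_to_srgb(0.214))  # ~0.5000
```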

Not sure I understand here, Trey. You're telling me that when you applied physically correct values to physically based shading models, you somehow got a physically plausible result from a physically based render package?
Don’t worry guys, when Xdugef comes out we won’t have to worry about any of this. Until then…
@Kweechy: that’s the expectation, yes.
The thing is that if one expects professional-ish looking results out of this stuff, one still needs to do their homework and understand what’s going on under the hood, as it were, to use another car analogy. Yours was better (story of my life). My gas tank is full of grain alcohol and rainwater, for to protect my precious bodily fluids. Anyhow…
http://en.wikipedia.org/wiki/Conservation_of_energy
Energy conservation… If not the math itself, then the principle? Inverse square falloff for lights?
Energy conservation dictates, in pretty much its entirety, that you can't put out more energy than you receive. That's it, more or less. Same as what they teach as the "no free lunch" rule in thermodynamics, even in the Bible Belt public schools that I'm a product of, which are still arguing, in part, that Jesus was riding a velociraptor around 1900 years ago.
http://en.wikipedia.org/wiki/There_ain’t_no_such_thing_as_a_free_lunch
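(To put the "no free lunch" rule in concrete terms, here's a toy Python sketch, not any renderer's actual code: a point light's irradiance falls off with the square of the distance, and an energy-conserving surface can never send back more than it received.)

```python
import math

# Toy illustration of the two ideas above; not any renderer's actual code.

def irradiance(power_watts, distance_m):
    """Inverse-square falloff: a point light's power spreads over a sphere of area 4*pi*r^2."""
    return power_watts / (4.0 * math.pi * distance_m ** 2)

def reflected(irradiance_in, diffuse_albedo, specular_albedo):
    """Energy conservation: total reflected energy can't exceed what arrived."""
    total_albedo = diffuse_albedo + specular_albedo
    if total_albedo > 1.0:
        raise ValueError("diffuse + specular > 1: surface would emit more than it receives")
    return irradiance_in * total_albedo

e = irradiance(100.0, 2.0)        # doubling the distance quarters the irradiance
print(reflected(e, 0.7, 0.25))    # fine: 95% of the incoming energy bounces back
try:
    print(reflected(e, 0.8, 0.4)) # this "material" would break conservation
except ValueError as err:
    print(err)
```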
So, to sum up. It's cool if you want to use 3,000 individual spotlights at an intensity of 4.5 billion to make something "white", if that works for you. It shouldn't be necessary, and I'm pretty certain my hunch that the color spaces here are utterly wrong is on the mark, but delivering and getting paid is half of what we do, right? "If it looks right, it's right" is pretty much a guiding philosophy in the VFX world, which relies on approximations.
I'd strongly suggest folks understand the theory (if not the rough outline of the math) behind some of these things before posting super ill-informed/wrong/laughable expert opinions and/or manifestos. Just because you don't understand something doesn't make it wrong.
Arrogance really is proportional to stupidity, isn’t it.
“Without it, nothing in your scene sits together, nothing looks right, and you have zero hope of achieving anything approaching photorealism”
That's one of the stupidest 'noob' things I've ever read. Energy Conservation simply reduces diffuse when reflection is turned up, and vice versa. Are you telling me it's impossible to do that without Energy Conservation? Maybe YOU are incapable of seeing when something looks fake and adjusting your shaders appropriately, but that doesn't mean that no one else can.
“How do you possibly expect to get correct results out of a renderer when you’re not even using their supplied and tested surface models?”
Who the fuck told you that I wasn’t using all vray default presets?
Would you care to tell me what two methods are used to reflect light in the real world? Obviously there's only one: reflection, absorbed or diffused in various amounts. So why does 3D rendering use AT LEAST TWO methods? Is it "physically accurate" to use one render engine for diffuse and a completely different one for reflection? How many 'samples' of Brute force GI do you use? Don't use Irradiance cache, much less Light cache; those are even less accurate. Do all your HDRs have the sun's value at 65,000× brighter than white? Then just stfu, you smartass peon. Your garbage isn't any more 'realistic'.
If you actually had read the entire thread, you would have seen all the examples of 3D rendering's INACCURACY that I mentioned, and where I mentioned that 3D rendering is NOT simulation, it's a HACK, because it's utterly impractical to use a simulator for media production. And all the situations where the rendering is expected to NOT be realistic.
And the fact that you used a CG light? And that your hair is far more opaque at glancing angles than it is at the perpendicular? Yea, ‘so’ realistic.
I would just love to see you run around trying to shoot brand new HDRs when you realize you didn't shoot them to within 0.0001% of what the renderer is expecting. For the scene you have that's set on an alien planet.
"And that your hair is far more opaque at glancing angles than it is at the perpendicular? Yea, 'so' realistic."
Well, to me this seems quite realistic, simply because you have more overlapping hairs at grazing angles. I see it every morning looking in the mirror.
Fun fact… I happen to do precisely that. For a living.
Bluntly: the grownups are talking right now, and under the assumption that people are here to learn something. Keep it civil, please.
That was quite literally a five-minute exercise to demonstrate something pretty basic: how to render something "white" and "gray" under a proper linear setup. Reading the thread, one would think it was impossible. It's not.
Amazing-looking white fur will take a bit more time, what with it being one of the most notoriously difficult problems in all of CG. I'd probably start with blending in a bit of true scattering to take the edge off the self-shadowing, if I were to go down that road.
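(Purely as an illustration of "blending in a bit of scattering": a hypothetical per-strand mix, where hair_bsdf_rgb and scatter_approx_rgb stand in for whatever your hair shader and your scattering approximation actually return. This is not VRay's hair model.)

```python
# Hypothetical sketch of "blending in a bit of scattering to take the edge off the
# self-shadowing": mix the raw hair shading with a flatter, scatter-like term so that
# deeply self-shadowed strands don't go black. Both input terms are placeholders.

def shade_strand(hair_bsdf_rgb, scatter_approx_rgb, scatter_blend=0.2):
    """Lerp per channel between the primary hair shading and a soft scattering term."""
    k = min(max(scatter_blend, 0.0), 1.0)
    return tuple((1.0 - k) * h + k * s
                 for h, s in zip(hair_bsdf_rgb, scatter_approx_rgb))

# A deeply shadowed strand: the primary term is nearly black, while the scatter term
# keeps it reading as "white fur" instead of dirty gray.
print(shade_strand((0.02, 0.02, 0.02), (0.6, 0.6, 0.6)))
```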
You might want to IMDb the people you talk to before you call industry professionals noobs.
Just sayin’
It's not impossible; the equations and node setups to replicate an energy-conserving model are pretty simple (assuming you're not worried about doing a proper microfacet model or anything like that).
The reason I don't is that I would just end up setting it up identically to how Arnold, VRay, Mantra (you know, any renderer on Earth) already do it internally.
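(For anyone curious what that node setup boils down to, here's a bare-bones sketch: scale the diffuse by whatever energy the reflection layer hasn't already claimed. This is the generic idea only, not how VRay, Arnold, or Mantra implement it internally.)

```python
# Bare-bones "manual" energy conservation, the kind of thing you could wire up with
# color-math nodes: diffuse only gets whatever energy the reflection layer left over.
# Generic illustration; real renderers layer Fresnel/microfacet terms on top of this.

def energy_conserving_weights(diffuse_color, reflection_amount):
    """Return (diffuse_weight, reflection_weight) that sum to at most 1 per channel."""
    reflection_amount = min(max(reflection_amount, 0.0), 1.0)
    diffuse_weight = tuple(c * (1.0 - reflection_amount) for c in diffuse_color)
    return diffuse_weight, reflection_amount

# A bright red, plastic-ish material with 30% reflectivity:
diff, refl = energy_conserving_weights((0.9, 0.1, 0.1), 0.3)
print(diff, refl)  # diffuse is scaled to (0.63, 0.07, 0.07) so the total never exceeds 1
```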
Diffuse is an approximation for high-roughness reflection rays and for high-density subsurface scatter rays, but just because we break the calculations into two separate approximations doesn't make it somehow "arbitrary". All of these models are based on scientifically captured and analyzed scan data of real-world materials. If the CG shading models were as physically inaccurate as you seem to "feel", then we could never hope to fit these models to measured data.
http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf
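(A quick way to see what "energy conserving" means for a BRDF is a white furnace test: add up the outgoing energy over the hemisphere and check that it never exceeds what came in. Here's a Monte Carlo sketch for a plain Lambert BRDF; generic math for illustration, not the Disney model from that paper.)

```python
import math, random

# White furnace test sketch: under uniform unit illumination from every direction,
# a Lambert BRDF with albedo rho should reflect exactly rho of the incoming energy.

def furnace_test_lambert(albedo, samples=200_000):
    total = 0.0
    for _ in range(samples):
        cos_theta = random.random()      # uniform hemisphere sampling: cos(theta) ~ U[0, 1)
        brdf = albedo / math.pi          # Lambert BRDF value
        pdf = 1.0 / (2.0 * math.pi)      # pdf of uniform hemisphere sampling (per solid angle)
        total += brdf * cos_theta / pdf  # standard Monte Carlo estimator
    return total / samples

print(furnace_test_lambert(0.8))  # ~0.8: reflects 80% of what it receives
print(furnace_test_lambert(1.0))  # ~1.0: at best it breaks even, never more
```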
I did read the entire thread, and all I see is someone with a tenuous grasp on the inner workings of 3D rendering.
Every single studio on the planet that is remotely reputable is using physically based shading models with energy conservation, and lately full microfacet shading models like the ones featured in that PDF.
So who are we going to believe here when it comes to getting realistic results? Some guy on the internet who seems to be struggling to set up a proper scene, or Weta, ILM, Digital Domain, MPC, The Mill, Framestore, or even Pixar… yes, Pixar, the studio that makes CGI cartoons, uses microfacet shading models instead of "feelzy" shading approaches, because they look more realistic and behave properly with lighting.
At this point, I need two things from you:
- A real-world material that can return more energy than it receives (one that isn't emitting light of its own). While you're at it, send that example along to NASA, MIT, and Stanford as well; I'm sure they'd be interested.
- Some scene files, so we can see what you're doing with materials and lights and where you're going wrong.
If we go back to the first post of this thread, we can see that guccione isn’t asking for help. He opened this thread to complain. Nobody joined in on the “riot”, and we’re just feeding the flames by doing anything other than agreeing. He’s going to respond to this post with some more vitriol, which is fine, but I wouldn’t waste any more time on this thread.
Oh is ‘that’ where his arrogance comes from?
And I guess he’s the only one who has a page on IMDB.
Saying “Who’s with me?” is not a “complaint”.
And not one of you has stepped up to show me how LOCKING this feature ‘on’ has helped a single artist, ever.
Jumping on the smarmy, arrogant “you just don’t know what you’re doing” nerd-talk bandwagon is obviously not simply disagreeing.
Oh, I'm so glad it's not impossible to turn down your own diffuse. (?) Where do you get these bizarre assumptions of what I'm saying? Are you honestly incapable of keeping your diffuse and reflection values from totaling more than 100%?? "Omg this looks so fake!! Why? Oh! Diffuse and reflection total 105%!! That's as fake as a point light (which is still acceptable for some reason). How disastrous!!"
whoosh right over your head.
Do you honestly think that Phong is completely "accurate"? And Lambert?? Shadow maps?? Interpolated GI? Why are we still allowed to use those, yet we're forced to use energy conservation, with the excuse that you cling to like a marsupial: "It's physically accurate!!"
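(For what it's worth, classic Phong is a good example of both sides of this argument: treated as a BRDF it is not energy conserving, but a one-line normalization factor fixes it. A toy numeric check, generic math rather than any particular renderer's shader:)

```python
import math

# Toy numeric check of the Phong specular lobe as a BRDF, at normal incidence
# (so the lobe is centred on the surface normal).
#   un-normalized:  f = ks * cos(a)^n                 -> reflectance swings wildly with n
#   normalized:     f = ks * (n+2)/(2*pi) * cos(a)^n  -> reflectance is exactly ks

def phong_reflectance(ks, exponent, normalized, steps=100_000):
    total = 0.0
    d_theta = (math.pi / 2) / steps
    for i in range(steps):
        theta = (i + 0.5) * d_theta
        f = ks * math.cos(theta) ** exponent
        if normalized:
            f *= (exponent + 2) / (2.0 * math.pi)
        # integrate f * cos(theta) over the hemisphere: dw = sin(theta) dtheta dphi
        total += f * math.cos(theta) * math.sin(theta) * d_theta
    return 2.0 * math.pi * total

print(phong_reflectance(0.5, 1,  normalized=False))  # ~1.05: reflects MORE than it receives
print(phong_reflectance(0.5, 50, normalized=False))  # ~0.06: same ks, almost no energy
print(phong_reflectance(0.5, 1,  normalized=True))   # ~0.5
print(phong_reflectance(0.5, 50, normalized=True))   # ~0.5: consistent and energy conserving
```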
If you honestly did, then either your comprehension is absolute garbage, or you simply ignored anything that got in the way of the arrogant tirade that you're currently surfing. Like:
Yes, and since nobody does any 3D outside of those big studios, and they all have their own multi-million-dollar pipelines, it should be fine to restrict everyone to the same standards, including renders that take more than a day per frame.
I dunno, how about believing your eyes? If you think Pixar movies look realistic, there’s not much more to say. It does explain a lot. Renderman is far more of a hack than the other renderers, except when fully raytraced, which they’ve avoided like cancer until very recently.
If you seriously believe that I’m saying energy conservation is unrealistic, which is exactly what you just implied, you’re incredibly naive. Or, you say it because it supports your conceited, self-glorifying tirade.
Sure, as soon as you show me a real-world surface with the same properties as Phong, a diffuse material with zero roughness (Lambert), or a real scene that's visible despite using a negligible fraction of the rays of light.
I had the 'audacity' to use the only HDR at my disposal, shot by someone with little experience, in a place that can't be replicated again, to set up dozens of shaders with no problems; then I got to the hair shader, which responded to a third of the light that all the other shaders did. My fault for not forcing my vfx to hire a trained, dedicated HDR photographer, and for not having the budget to rebuild sets, right?
Really guccione, I implore you, check out Xdugef. You'll love it. It's what you've been waiting for. It has real lights, like the ones on the fans in your computer and everything. It will make you less mad at VRay. It will make your brute forces more brute. Your whites will be even whiter than the ones on the planes in Microsoft Flight Simulator 98. Xdugef is the software you seek. It has the best Phong ever too. Better than Mental Ray's, better than Maya Software Render or even Maya Hardware 2.0! You will be able to Phong so hard you will think you are an Italian god of some kind… like Saturn… 'cause lead is AWESOME!
Uh, so, reading the thread, you assumed I said it's 'impossible' for energy conservation to work, ever? I clearly stated, more than once, that it simply should not be locked on, just like the vast majority of 3D features aren't. Is there some reason that you completely ignored that part?
Should every CG lighter in the world quit their job until they've completed a full semester of photography class? Should they buy all the necessary equipment too, or forget about doing their job??
"Learn something"… funny, there isn't a single thing I've learned from the peanut gallery, except that you'll all ignore key points of mine in your campaign to look and sound "grownup". And that you won't actually say that 3D rendering is as accurate as a simulation; you'll just continually imply that it is.
"Amazing-looking white fur will take a bit more time, what with it being one of the most notoriously difficult problems in all of CG."
Yet, if I have problems with it, I should be belittled, mocked, and vilified, for suggesting that the shader should be a little more open. How very “grownup” of you.
And ‘physically correct’ is different from “amazing”, I guess? Do you even understand what I’m getting at?