The Science Of CG


#21

Thanks for that book reference!

Photography is also a subject that interests me.

I’m not really a practicing photographer but I do take some pictures sometimes for reference or texture purposes.
Light itself is something that fascinates me for some reason.
Especially the possibility of merging real-world lighting into CG lighting.

When I’m working with CG cameras I wonder what else could theoretically be possible with CG-generated images.
For example, would it be possible to add additional channels to an image that comes out of a render engine, beyond just an alpha or depth channel?
Maybe something like an IR channel that you could generate from inside the CG camera’s attributes.
Imagine a camera that captures the image in the visible spectrum, and also in the IR spectrum, which you could then edit to influence the visible spectrum of the final image.
You could render that to a separate pass too.
It would not only allow for light control; it could also be used to more accurately simulate colors for explosion FX, from simple bonfires to jet engines, and even the formation of stars and space FX.
They use a similar technique to color the images that come from the Hubble telescope.
What I’m talking about is, in a way, a loose reversal of that.
Instead of colorizing captured data, you would generate a grayscale image of the heat information to simulate explosions and heat FX, or represent it as a multi-colored image to control the temperature of the illumination.
And maybe even a third, ultraviolet channel that could be used to control gamma dynamically.
That would make simulating sunlight much easier and more realistic.
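The idea of driving colour from a heat channel maps fairly directly onto blackbody radiation: a temperature determines a spectrum, and the spectrum determines a colour. Here’s a minimal Python sketch of that mapping (the function names and the three sample wavelengths are my own illustrative choices, not anything from a particular renderer, and the result is a crude normalised tint rather than a colorimetrically correct conversion):

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant
C = 2.998e8     # speed of light
K = 1.381e-23   # Boltzmann constant

def planck(wavelength_m, temp_k):
    """Unnormalised spectral radiance of a blackbody (Planck's law)."""
    a = (2.0 * H * C**2) / wavelength_m**5
    b = math.exp((H * C) / (wavelength_m * K * temp_k)) - 1.0
    return a / b

def temperature_to_rgb(temp_k):
    """Crudely map a temperature (a 'heat channel' value) to a
    normalised RGB tint by sampling the blackbody curve at three
    representative wavelengths. Purely illustrative."""
    r = planck(620e-9, temp_k)  # red-ish wavelength
    g = planck(530e-9, temp_k)  # green-ish
    b = planck(450e-9, temp_k)  # blue-ish
    m = max(r, g, b)
    return (r / m, g / m, b / m)
```

With this kind of lookup, a bonfire-temperature value around 1500 K comes out strongly red-orange, while a very hot source around 12000 K comes out blue-white, which is exactly the fire-to-jet-engine-to-star range described above.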

I don’t know what’s actually possible with advanced render engines like Maxwell or Fry; I don’t even have V-Ray yet.
I use Cinema4D 10 with the built in Advanced Render module.
And it’s slow when doing GI, SSS and caustics.
So there’s no chance of lighting a scene with physical accuracy in mind.
I do what I can.
But it’s real easy to blow out the image in the long run.


#22

+1.
Awesome thread.


#23

I didn’t want to start a new thread about this, so I’ll add my question here instead… if that’s OK.

When you guys create your shaders, which approach do you use? Do you take the physically accurate route, meaning that you try to find measured data that matches the material you’re recreating and use it in the creation of your shader?

Or do you search for lots of reference materials (images and text) about the material you’re recreating and then simply eyeball the fine details from those reference pics to recreate the look and feel on your own? Let’s call it the shoot-from-the-hip approach. :slight_smile:

Which is your favorite method? Or do you use a completely different approach?

/ Magnus


#24

When creating shaders, I try to eyeball the characteristics of materials that are visible in the real world, while at the same time trying to make them visually pleasing, even if that’s not always physically correct. No one cares about IOR, BRDF and the like; they’re only there to help you make things look good.


#25

I agree. I don’t really pay attention to a physically correct approach; it just gives me a rough outline. Afterwards I tweak the settings of the shader to get the look I want, and that’s not always a photorealistic look. The material has to work for the scene you want to use it in.


#26

That’s interesting to hear that you guys pretty much shoot from the hip when creating materials instead of relying on a more physically correct approach. Mind if I ask which render engines you use?

I’m using both Maxwell and Fry for my work, and both engines take the more physically accurate approach to shader creation. To be honest, it feels better when working, as you know what you’ll get when you modify values and apply textures to them. I don’t see the same simple logic when working with V-Ray or finalRender or similar engines.

/ Magnus


#27

In my case I’ve come to love MODO and its render engine. It’s easy to set up and gives you pretty fast results. Since I also have a Maya license running here at work, I do some of my work in Maya, rendering with mental ray.


#28

I mostly work on feature movies, using XSI / mental ray. To be more precise about my opinion: it’s nice to have physically correct rendering, and I’m always trying to stay as close to it as possible. But the reality is, when the director tells you that over there is too dark, or that thing over there should be more reflective, you never tell him you won’t do it because this way it’s physically correct. :slight_smile: And many times I have experienced him telling us, “Guys, I know that in reality it wouldn’t be like that, but I need it to catch attention, or I need it for better composition.” And he is right. Because what we do is art (at least from some point of view), and we should go for believable AND visually attractive results rather than only for physical accuracy. Reality isn’t really attractive in many cases, and without a little art it would look… you know, boring.


#29

Using PRMan for visual effects production, I try to keep physical correctness for as long as possible and only branch off into ‘artistic licence’ when I absolutely have to.

The reason for this is that, in my experience, doing things the right way (or as close to it as my limited understanding of physics and maths allows) means that the results are predictable.

Layering artistic hack after artistic hack into shaders quickly results in setups that are unmanageable and hard to make changes to. Moreover, if you don’t light those materials in exactly the right way (as the original shader designer intended), the results can often be bizarre or just plain broken. This is especially important when you need to share shading and lighting setups between multiple artists working on different shots and sequences.


#30

Firstly I just wanted to say thank you to all those who have contributed to this thread; it’s been highly educational and has certainly made me rethink some of my workflow.

Secondly, to Dtox: I think what you are asking for is called spectral rendering, that is, rendering using a much larger energy range than the visible light squeezed into the confines of RGB channels. mental ray supports it, but I have never had any actual experience of using it. I heard about it in a conversation with an employee of Mental Images, who explained that he was helping a client create spectral shaders so that they could render images in 18 colour channels, describing the appearance of a surface as it would appear under different ‘lights’: for instance, how ultraviolet light would affect it, or what it would look like in infrared. Hope that is of some use to your future research.


#31

Agreed. Fascinating thread. Thanks.


#32

Well, “spectral rendering” normally refers to sampling the visible spectrum (usually the range 380-780nm, if I remember correctly) much more finely than your typical RGB colour representation. In other words, instead of having 3 colour components (R, G and B) you might have 8, or 10, or 30.

This allows you to simulate effects such as dispersion: light “breaking up” into its component colours when refracted. Of course there’s no reason why you can’t represent “colours” outside of the visible spectrum as well, but there’s very little reason you’d want to for entertainment purposes.
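To make that concrete, here’s a toy Python sketch that stores a “colour” as a handful of spectral bins over 380-780nm and collapses it to RGB. The three wavelength bands are crude boxes I made up for illustration; a real spectral renderer would integrate against proper colour-matching functions instead:

```python
N_BINS = 8
LO, HI = 380.0, 780.0  # visible range in nanometres
BIN_W = (HI - LO) / N_BINS

def bin_center(i):
    """Centre wavelength (nm) of spectral bin i."""
    return LO + (i + 0.5) * BIN_W

def spectrum_to_rgb(bins):
    """Collapse an N-bin spectrum to RGB using crude box 'sensitivities'.
    Real renderers integrate against CIE colour-matching functions;
    these bands are an illustrative assumption."""
    r = g = b = 0.0
    for i, energy in enumerate(bins):
        wl = bin_center(i)
        if 580 <= wl < 780:    # red-ish band
            r += energy
        elif 490 <= wl < 580:  # green-ish band
            g += energy
        else:                  # 380-490nm, blue-ish band
            b += energy
    return (r, g, b)
```

So a spectrum with energy only in the long-wavelength bins collapses to pure red, while per-bin transport (refracting each bin slightly differently, say) is what lets the renderer produce dispersion before this final collapse.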


#33

This allows you to simulate effects such as dispersion: light “breaking up” into its component colours when refracted.

Is that what dispersion actually does?
I’m talking about the dispersion attribute of a basic reflective and/or transparent material.


#34

In what shader, in what renderer? Dispersion is caused by the tendency of materials to refract different wavelengths of light to different degrees, causing rainbow-like colour effects (such as rainbows, for instance :)).

There’s a rather more in-depth explanation on Wikipedia: http://en.wikipedia.org/wiki/Dispersion_(optics)


#35

In a basic cinema4d shader there’s an option called dispersion in the reflection and transparency channels that’s controlled by percentage.

I always thought it dispersed the actual reflection making it blurry.
That doesn’t account for transparency though, so it’s always kind of been a mystery to me.
The effect it gives isn’t very dramatic, so, not knowing exactly what it did, I always avoided using it, since it never seemed necessary.

Also, when you see the term “additive” in regard to cg lighting, what exactly does that mean?
Does it have the same meaning as when you see it as a blending method within a shader structure?
Does it refer to “additive color theory”?


#36

Yes, basically. It usually refers to the assumption made that light combines in a linear, additive fashion, i.e. rendering an image using several lights individually then adding them together (as in A+B+C) will give you the same image as if you’d rendered an image with all the lights turned on.

There are certain situations in which this isn’t true (such as capturing images on film), but it’s close enough and is such a dramatically simplifying assumption that it’s good to use.
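That linearity is easy to demonstrate in code. Here’s a toy Lambertian shader for a single point (everything here is illustrative, not any renderer’s actual API): rendering each light as its own “pass” and adding the passes gives exactly the same value as shading with all the lights on at once.

```python
def shade(normal, lights):
    """Toy Lambertian shading of one surface point: the sum over all
    lights of max(N.L, 0) * intensity. Purely illustrative."""
    total = 0.0
    for direction, intensity in lights:
        n_dot_l = sum(n * d for n, d in zip(normal, direction))
        total += max(n_dot_l, 0.0) * intensity
    return total

lights = [((0.0, 0.0, 1.0), 0.8),
          ((0.0, 1.0, 0.0), 0.5),
          ((1.0, 0.0, 0.0), 0.3)]
normal = (0.0, 0.6, 0.8)

# Render each light as its own 'pass'...
passes = [shade(normal, [lt]) for lt in lights]
# ...and their sum matches rendering with all lights on at once.
combined = shade(normal, lights)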

I’m not familiar with cinema4d, so I don’t really know what that control is doing. Sounds like it’s a blurry reflection control that’s just poorly named (it might be better called ‘divergence’).


#37

Yes, basically. It usually refers to the assumption made that light combines in a linear, additive fashion, i.e. rendering an image using several lights individually then adding them together (as in A+B+C) will give you the same image as if you’d rendered an image with all the lights turned on.

Are you referring to compositing a multi-layer image together where the lights are done as separate passes?
It also allows you further control, no?

Is there a specific blending mode that’s required/recommended for the math to properly come together in a situation like that?
Such as “add”, which would also refer to the “additive” function I asked about earlier?

This is sort of an off the cuff question.
Would it benefit me to use real values in my light attributes even if I’m not using a renderer like Maxwell that’s built that way?
For example, assume I’m lighting a pretty standard interior scene with a few normal area lights to simulate real halogen lights.
Is there any benefit to using the same color temperature in a CG area light that a physical halogen light uses even if I’m not using something like Maxwell or Vray?

I’m not familiar with cinema4d, so I don’t really know what that control is doing. Sounds like it’s a blurry reflection control that’s just poorly named (it might be better called ‘divergence’).

That’s exactly what it was.
A poorly named option for blur.
In the next version of cinema 4d it’s just called “blurriness”.
Why the hell they would even call it dispersion is beyond me.

C4D does that a lot, it seems.
Mostly in material parameters.


#38

Yes, just a plain ‘add’. I can never remember which of Photoshop’s blend modes actually does that, but it’s just an Add node in Shake. It’s the assumption of the additive combination of light that allows us to do things like render lights out as separate passes and then add them together again.

This is sort of an off the cuff question.
Would it benefit me to use real values in my light attributes even if I’m not using a renderer like Maxwell that’s built that way?
For example, assume I’m lighting a pretty standard interior scene with a few normal area lights to simulate real halogen lights.
Is there any benefit to using the same color temperature in a CG area light that a physical halogen light uses even if I’m not using something like Maxwell or Vray?

It depends on what you’re doing, really. If you’re creating entirely CG images, then if it helps you choose values for lights that make sense with each other, it’s probably worthwhile, since it saves you the hassle of manually choosing colours according to temperature. If you’re trying to match a plate, then it’s pretty useless, since the exact colours recorded on a piece of film bear very little resemblance to any kind of ‘real’ colours.

The ‘real’ values in Maxwell are just a convenience, since the image is tonemapped before output anyway, so the exact numbers are fairly meaningless.
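To illustrate what a tonemap does to those ‘real’ values, here’s the classic global Reinhard operator, x/(1+x). This is just the textbook operator, not necessarily what Maxwell’s pipeline actually uses:

```python
def reinhard(x):
    """Simple global Reinhard tone-mapping operator: compresses
    unbounded linear radiance into [0, 1). A generic textbook
    sketch, not any specific renderer's pipeline."""
    return x / (1.0 + x)
```

A pixel at radiance 1.0 lands at 0.5, and a pixel a hundred times hotter still fits under 1.0; only the relative relationships between values survive into the display image, which is why the absolute ‘real-world’ numbers stop mattering after output.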


#39

The ‘real’ values in Maxwell are just a convenience, since the image is tonemapped before output anyway, so the exact numbers are fairly meaningless.

So Maxwell tone maps its output.
Damn, I never knew that.
Is that a standard method in 3rd party render engines?
Or is it more of an exclusive thing with Maxwell?

I learned a bit about tone mapping at a photography board last year.
On a slightly different subject, there might be some benefit to rendering to RAW files.
A renderer could theoretically do it flawlessly, because the image doesn’t come from an optical device.
You could match rendered footage in the RAW workflow to footage from a camera like the Red1, and stills in something like Adobe Camera Raw.
Then you’d have an even greater amount of control over the image’s attributes before it’s actually converted into a bitmap image.
You’d also have the RAW file for archiving.

Anyone who’s used the Adobe Camera Raw workflow for digital images knows its benefits.
Now with the Red1 there’s a RAW workflow for digital video that outputs at 2K and 4K.
With RAW you’re working directly with the captured data in a pre-bitmap form.
Working with them in ACR is great.
You can easily adjust the white balance, the color temperature, the light type, shadows/highlights, standard and midtone contrast, sharpening.
All before the image even gets into photoshop.

And in many cases, no further editing is necessary.


#40

Why not just render to OpenEXR? Then you’ve got extra bits to play with. One day all film will be shot digitally with direct output to linear EXR. Or at least I hope so.
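“Linear” here means pixel values proportional to actual light, as opposed to the display-encoded values in a typical 8-bit image. The difference is just a transfer function; the sketch below uses the standard sRGB curves (formulas from IEC 61966-2-1) to show what an EXR skips:

```python
def srgb_encode(linear):
    """Linear light -> sRGB display encoding (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def srgb_decode(encoded):
    """sRGB display encoding -> linear light (inverse of the above)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

Adding two light contributions is only physically meaningful on the linear values, so compositing display-encoded images means decoding first. A linear EXR stores the light values directly, which is why it’s the natural target for renders and, hopefully, for digital cameras.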