Maxwell Demonstrates Realtime Interactive Lighting


#21

@mdme_sadie

You haven’t read the thread, have you? :stuck_out_tongue: :smiley: :wink:

Part of the official announcement:

edit: … haha … Mike, you did it again. :wink:

take care
Oleg


#22

Yes, you’re entirely missing it, actually. The first step had nothing to do with the second. Do you guys even bother to read posts or do you just start typing? He’s not loading 5 files to do the lighting thing. That’s something entirely separate. Read my last post.

_Mike


#23

um, i think i must be dumb, because i did read what you posted, but it just looks like what i said. the composite is part one, as you said. the second part looks like he’s just manipulating the intensity of the composite layers (nice that it’s allowing you to do it while the render is going, but nothing astounding in itself, any engine can render multipass in a single sweep), and there’s nothing showing the lights moving around, just what appears to be the intensity of the light layers changing. so i must really be missing something here.


#24

You guys should have split that video in two. A lot of this confusion would have been avoided. Now you’re spending a lot of time just clarifying…


#25

I guess you are right … :rolleyes: :wink:

take care
Oleg


#26

Hm, looks like it’s simply compositing what’s already there, and you aren’t so much able to ‘edit’ the effects of a light completely as you are able to adjust their intensity and color, something rather doable with compositing and layers. Not exactly new or innovative. It doesn’t look like it’s going to magically generate new caustics.


#27

It may look like it from the video, but that is purely coincidental. There could be 3 mxi files merged into one (they didn’t have to be 5) and there could be 58 emitters in the scene. Merging was not a prerequisite for the interactive emitter adjustment.

He could have opened just one of the mxi files (the grainiest one) and adjusted the emitters of that single grainy file.

Whoever did that video was a little lazy and jammed both features into a single video. So now many people think that one feature was a prerequisite step for the other.


#28

well, the merging caches part certainly seems a good idea, though can this be used as part of a distributed rendering solution? i.e. you have 5 machines all set up rendering, then on a server machine you get an image updating as and when with the composite of the 5 nodes? now that would be very useful with as intensive an engine as maxwell (and if so, then well done for an elegant solution). or is it only a post-treatment in this scenario?
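A rough sketch of the maths that “merging caches” implies, under assumptions: this is not Next Limit’s code, and the MXI internals are not spelled out in the video. If each node renders the same frame, unbiased, with a different random seed, a server can simply average the results weighted by how many samples each node contributed, and re-run that average whenever a node checks in — which would give exactly the “image updating as and when” behaviour asked about above.

```python
import numpy as np

def merge_renders(images, sample_counts):
    """Combine independent renders of the same scene into one less-noisy image.

    Each render is an unbiased estimate of the same frame, produced with a
    different random seed (e.g. one per node), so a weighted average by
    sample count is itself an unbiased estimate with lower variance.

    images        -- list of float HDR arrays, shape (H, W, 3)
    sample_counts -- samples contributed by each node
    """
    images = [np.asarray(img, dtype=np.float64) for img in images]
    weights = np.asarray(sample_counts, dtype=np.float64)
    weights /= weights.sum()
    merged = sum(w * img for w, img in zip(weights, images))
    return merged.astype(np.float32)

# Example: five nodes rendered the same frame with different seeds.
h, w = 4, 6
rng = np.random.default_rng(0)
node_images = [rng.random((h, w, 3)).astype(np.float32) for _ in range(5)]
node_samples = [256, 256, 512, 128, 384]   # per-node sample counts
combined = merge_renders(node_images, node_samples)
print(combined.shape)  # (4, 6, 3)
```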


#29

Yes, you are still utterly not getting it.

I don’t have any puppets; I’m not sure how to make this any simpler.

But I’ll try: Forget the compositing; that’s a different feature and UNRELATED TO THE LIGHTING DEMONSTRATION. It’s not related to the lighting demonstration. The assembly of multiple images is not related to the lighting demonstration.

The lighting tool does not require compositing. The lighting demonstration does not require compositing. It is not compositing passes and controlling the “blend.” You are not seeing pre-rendered passes being blended.

To clarify: realtime adjustment of lights in your scene does not require rendering passes of any kind. You don’t have to render passes. You don’t pre-render passes, you don’t render caches, you don’t composite, you don’t pre-composite, you don’t assemble images. You just render, only now you can fully adjust your light intensities at any time. WITHOUT PASSES.

Did I mention this does not require compositing of any kind? This also does not require rendering passes. There is no compositing, nor blending of separate passes being employed.

Passes are not necessary. Assemblage of “multiple passes” is not necessary. No pre-rendering of any kind is necessary. You don’t actually have to pre-render anything! And what’s more: you don’t have to pre-render any passes! Furthermore, no pre-rendering of passes is required, and no compositing takes place.


#30

@Mike

Well, … I’m almost getting it … ;o) SCNR

take care
Oleg


#31

mverta, thanks for clearing that up.

I do have a question though. How exactly do you see this saving you a “billion” hours of tweaking? You will still have to set up your lights, start a render, kill the render, add/remove/reposition lights, start another render, kill, tweak shaders, start a render, kill… until you have a good light/shader setup, and you will still have to manually tweak the light parameters while the image renders or after. Consider also that while you are doing this, Maxwell takes considerably longer to render anything worthwhile than the average renderer. It might be nice to be able to tweak the lights while you are looking at the render in progress, but the manual effort you put into lighting the shot is the same, and the total time spent on the processor will likely be considerably longer than with any other renderer/method to achieve the same or better results.


#32

well, as far as i can see it’s what i’d call a pass, doesn’t matter if it’s internal to a file or a visible layer. i think we’re facing a difference of language here. a pass doesn’t have to be rendered at a separate time, it can be rendered simultaneously. most engines can render all lights in one go but give you access to each individual light’s separate light, shadow, specular etc… through the resulting multilayered file (be it tiff or psd or proprietary, or whatever). the fact that you don’t see it as what you’d consider a pass (i.e. a separate render that happens on its own) doesn’t mean that it isn’t a pass (as far as i’m concerned it’s a pass, even if it’s done per fragment/sample rather than a whole image at a go).


#33

:rolleyes: OK… i think we got it, Mike. People showing genuine interest in your product need explanation, not patronising.

As for these features, they both look fantastic, certainly. It’s about time coop rendering worked, and this will surely bring back a lot of users who abandoned Maxwell. Speed is one of the main complaints of most MW users, so now that coop rendering is working, this will open up the possibilities for proper full-res commercial renders.

The interactive lighting also looks great and will surely be of use. I can imagine that some artists might choose to render scenes with loads of lights and then use this tool to turn some of them on and off, effectively sculpting their lighting rig. When they’re happy they can then go back and remove those emitters to keep the scene tidy and the renders optimised. This has great potential.

It’s good to see NL attempting to fix old problems (coop) and introducing new features at the same time. In short, it’s good we’re finally seeing a balance at NL.


#34

I’m showing this to my boss tomorrow. We’re on VRay right now. I’d love to start experimenting with Maxwell a few hours a day.

I think I can clarify the two sides of the argument, too: a renderer like C4D can spit out your image with individual elements (including each light) each on their own layer. Tweak opacity and blending in Photoshop, and you’ve got a compositing process that seems similar to what we see in the video.

But Maxwell’s solution is so much more. We seem to be talking about real-time interactive global illumination and caustics here. Adjust a light, and the scene reacts realistically. Obviously this is way beyond screwing with opacity levels on specular passes. I can totally change the look and feel of a scene in a snap.

As artists, we’re still going to need to have a vision for our scenes (7:00 AM summer morning, clear dawn sky, dew on the grass, etc.). This tool, I think, is basically a sophisticated update of the ‘render to PSD’ workflow. I like.


#35

That’s the problem I had. He loads these five MXI files then starts flipping through the lights, making it appear like there are 5 passes in one file. What I’m guessing now, based on the information that has been given, is that the MXIs are actually strips or parts of images similar to buckets, and the combiner is just stitching them together… Which I hope is not correct, because it’d be really silly to have to manually assemble every image you render on multiple machines, whether they are buckets, strips, caches, or layers ;p I kind of liked my original perception that it was some sort of cache you could progressively add data rendered on other machines to for refinement. I think they should abandon this current tool and work on that.

I’m sure Mike will be along in 10 minutes or so telling me how wrong I am :slight_smile: Maybe he’ll draw some pictures and post some charts and graphs or something.


#36

My suspicion is that people will become spoiled by this feature almost instantly. But Maxwell is surrounded by a ton of myth and backed by very little field experience. The vast majority of naysayers have never even used it, let alone learned to master its unique approach.

What I can tell you is that Maxwell’s material/lighting system behaves so predictably, thanks to its strict real-world model, that after a while, you barely need to run render tests to see how a material is shaping up. I can get to the 90% point on a scene now, with textures and lights the way I want them, without ever having to render. And that’s because in a given CG room, for example, modeled in real-world scale, I know approximately what size light(s) I would actually use to light it with, so I just plug those in. Real-world wattage/efficiency values for lights translate, as do camera ISO settings and f-stop. They actually mean things in Maxwell, and are calibrated so as to very closely replicate real-world behaviour. Add the extremely predictable material reflectance model and we’re talking about a fraction of the time in setup. To say nothing of the fact that GI calculates with physical accuracy, flicker-free, in both network and cooperative mode, with the best AA I’ve ever seen. Add this realtime light adjustment on top and you’ve saved another huge chunk of time. And you’re going to need it, because Maxwell’s speed is the price for all this. And even that is getting better; the engine hasn’t been optimized at all yet.
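On the camera side of that, the ISO/f-stop behaviour being described follows the standard photographic exposure relationship. The sketch below is just that textbook math, not Maxwell’s internals; the 1.2 calibration constant is a common convention for mapping EV100 to a saturation luminance.

```python
import math

def ev100(f_number, shutter_time_s, iso):
    """Exposure value normalised to ISO 100 from camera settings.

    EV100 = log2(N^2 / t) - log2(S / 100)
    Higher EV100 means the camera admits less light (darker image).
    """
    return math.log2(f_number ** 2 / shutter_time_s) - math.log2(iso / 100.0)

def exposure_multiplier(f_number, shutter_time_s, iso):
    """Scale factor applied to scene luminance (cd/m^2) before tone mapping.

    The 1.2 factor is a commonly used constant relating EV100 to the maximum
    luminance the virtual sensor can capture without clipping.
    """
    return 1.0 / (1.2 * 2.0 ** ev100(f_number, shutter_time_s, iso))

# A sunny-day setup: f/16, 1/100 s, ISO 100 (the 'sunny 16' rule)
print(round(ev100(16, 1 / 100, 100), 2))   # ~14.64
# An interior shot needs a wider aperture, slower shutter and higher ISO
print(round(ev100(2.8, 1 / 30, 800), 2))   # ~4.88
```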

It’ll be interesting to see how things shape up.

_Mike


#37

Oh dear, I can’t believe how badly this announcement turned out…

I want to point out, in case people don’t know, that Mike is a lead (probably the leadest?) A-Team tester on Maxwell. He’s not interpreting the video at all; he works directly with the latest features and the developers. I’m an A-Team tester too, but I’m not quite so super high up yet as to use that specific new tool hehe.

It really isn’t a basic compositing system at all.

Example: I create a 3D house. I put 50 lights in it, and have a sun and sky. All the lights are turned off except for the sun and sky. I render the image once and get a sunlit house. Now, from here until eternity, I can turn on/off/adjust any of those 50 lights (and sun/sky), creating entirely new lighting with unbiased GI and caustics and reflections etc. in realtime.

My boss says: “You know we could use a nightshot with light through the bedroom window instead.”

I say: “Sure.” And in realtime, turn off the sun, switch on the bedroom light, and adjust its brightness to get the right effect. All from the one original image I rendered. New caustics, new GI, new reflections of different parts of the house, etc.
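A minimal sketch of how such a workflow could work, assuming (as the announcement’s “energy based” / high dynamic range wording suggests) that the render keeps each emitter’s contribution as its own HDR buffer; the buffer layout and names below are made up and this is not the MXI format. Because light transport is linear in emitter power, re-weighting and summing those stored contributions reproduces what a fresh render with the new emitter powers would give — GI, caustics and reflections included — without re-rendering.

```python
import numpy as np

def relight(contributions, power_scales):
    """Recombine per-emitter HDR contributions into a new beauty image.

    contributions -- dict: emitter name -> float HDR buffer (H, W, 3), each
                     rendered as that emitter's contribution at its original
                     power (every bounce, caustic and reflection caused by
                     that emitter is already baked into its buffer)
    power_scales  -- dict: emitter name -> multiplier on the emitter's power
                     (0.0 = light switched off, 1.0 = as rendered)

    Light transport is linear in emitter power, so scaling and summing the
    stored contributions is equivalent to re-rendering with new powers.
    """
    names = list(contributions)
    beauty = np.zeros_like(contributions[names[0]], dtype=np.float64)
    for name in names:
        beauty += power_scales.get(name, 1.0) * contributions[name].astype(np.float64)
    return beauty.astype(np.float32)

# Hypothetical buffers for the house example: sun/sky plus interior lights.
h, w = 4, 6
rng = np.random.default_rng(1)
buffers = {name: rng.random((h, w, 3)).astype(np.float32)
           for name in ("sun_sky", "bedroom_lamp", "porch_light")}

day   = relight(buffers, {"sun_sky": 1.0, "bedroom_lamp": 0.0, "porch_light": 0.0})
night = relight(buffers, {"sun_sky": 0.0, "bedroom_lamp": 2.5, "porch_light": 0.3})
```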


#38

… I suspect my bias was evident :stuck_out_tongue: But people may not realize that I support Maxwell because of the results I get, not because I work for Next Limit; I don’t. I just get to play with all the new toys sooner. :smiley:

_Mike


#39

So you’re saying C4D can render a GI scene with each light’s influence on the GI calculation separately, in layers, and when you turn on/off one of these layers, the GI calculation in all the other layers magically updates as well? The entire scene lighting updates? For example, one of the caustics that this light is creating changes intensity depending on how you adjust the opacity of that light’s layer in Photoshop? Or a shadow cast by this light (which is most likely in a shadow layer) changes when you change the opacity of this light’s layer? Or you can turn a GI daylight scene into a GI night scene just by turning a bunch of layers off or on?


#40

“This is not a color mixing process, the intensity variations are energy based and are calculated in high dynamical range. Power emission can be scaled naturally.”

The thing that sets this apart, surely, is the fact that each light, and indeed the whole scene, is rendered in MXI format (pretty similar to HDRI). Doesn’t this mean that, unlike a normal multipass render, where tweaking the intensity of different lighting passes results in a loss of detail due to the limited dynamic range the scene is rendered in, Maxwell will allow you to adjust the light in your scene without the worry that any serious adjustment to one of your light sources will lose all the contrast?
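To make that dynamic-range point concrete, here is a tiny generic sketch, nothing Maxwell-specific: scale a light layer down in clamped, display-range data and the highlight detail is already gone; do the same on float HDR data and the relative brightness survives.

```python
import numpy as np

# A per-light contribution with a hot highlight well above display white.
hdr_layer = np.array([0.05, 0.4, 1.0, 6.0, 14.0], dtype=np.float32)

# What a clamped LDR pipeline keeps of that layer: everything clipped at 1.0.
ldr_layer = np.clip(hdr_layer, 0.0, 1.0)

# Dial the light down to 10% of its power in each representation.
scaled_hdr = 0.1 * hdr_layer   # [0.005, 0.04, 0.1, 0.6, 1.4] - highlight shape survives
scaled_ldr = 0.1 * ldr_layer   # [0.005, 0.04, 0.1, 0.1, 0.1] - highlights collapse to one value

print(scaled_hdr)
print(scaled_ldr)
```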

And just as a quick afterthought: yes, you still have to position all your lights, but doesn’t this feature mean that you don’t have to spend any time messing around with their intensity or doing test renders to determine how strong the GI calculation should be? Sounds good to me.