Maxwell Demonstrates Realtime Interactive Lighting


#1

http://www.maxwellrender.com/files/cooperative_and_edition.avi

In this video, two separate features are shown: 1) the assembly of cooperatively rendered images into a single image, and 2) realtime interactive lighting. The lighting does not require multiple passes - you don’t need a separate file for each light. Even without cooperative rendering, in a standard single-node render, you automatically have control over the intensity of any and all lights in the scene, both during the render and after it’s finished.

This is just a preview of the feature; interface refinements are pending, but the advantage of the system is pretty obvious. For all intents and purposes, you can set all your lights to some default value and make adjustments as the render progresses, or after it’s done, as many times as you want. Caustics, indirect lighting, reflections - everything is controllable in this fashion, not just light intensity.
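If you’re wondering how that’s possible without re-rendering: as I understand it, light transport is linear in emitter power, so an engine that keeps one HDR contribution buffer per emitter can rebuild the displayed frame as a weighted sum of those buffers at any time. Here’s a rough sketch of the principle (my own illustration - the buffer layout and function name are assumptions, not Next Limit’s actual code):

```python
# Rough sketch of per-light relighting: because light transport is linear in
# emitter power, a renderer that keeps one HDR buffer per emitter can rebuild
# the frame as a weighted sum, during or after the render, without tracing
# new rays. Illustration only, not Next Limit's actual code.
import numpy as np

def relight(light_buffers, intensities):
    """Combine per-light HDR contribution buffers with user-chosen weights.

    light_buffers: one (H, W, 3) float array per emitter, holding that
                   emitter's contribution at unit intensity.
    intensities:   one slider value (scalar multiplier) per emitter.
    """
    frame = np.zeros_like(light_buffers[0])
    for buf, gain in zip(light_buffers, intensities):
        frame += gain * buf   # rescaling an emitter just rescales its buffer
    return frame

# Hypothetical usage: five emitters; dim the third one and rebuild instantly.
buffers = [np.random.rand(270, 480, 3).astype(np.float32) for _ in range(5)]
image = relight(buffers, [1.0, 1.0, 0.25, 1.0, 1.0])
```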

_Mike


#2

Holy crap! It may be that I’m just new to all this, but that sounds really cool. Especially since I was under the impression that Maxwell was somehow substandard compared to renderers like MR and Vray. I like.


#3

Not substandard! Just a different creature altogether.


#4

Maxwell’s still in development, and its parent company, Next Limit, has taken a serious beating, PR-wise, for a host of reasons - some legit and some not. But at the engine’s core is some serious potential, and this latest feature, I think, is a good example. Even traditional R-G-B channel lighting for post-process control requires multiple passes, and doesn’t carry things like caustic control or reflection with it without additional passes. This is the best implementation I’ve seen to date.

_Mike


#5

Mike:
Your explanation is pretty clear. Maybe they should have posted your explanation with the video so that there was no confusion, but anyway, thanks for the explanation.


#6

It’s interesting, and hopefully will be useful, but I would love to see how it compares to lpics in terms of features and performance.


#7

lpics, I think, is designed more for making assorted lighting decisions on test renders before rerendering, hence the frequent comparisons to offline rendering. It’s not really something that includes refraction, caustics or any particularly advanced lighting effects… or even soft shadows by the looks of it.

I do have a question: do extra lights still incur a time penalty in Maxwell? I wonder because it would be spectacularly nifty to be able to light a scene with all the different lighting conditions at once and just switch them on and off.


#8

According to what they are saying in the NL forum, it does not slow it down or hinder it.


#9

So this is really not much of a feature at this point. I mean, all you can do is adjust the intensity of the various lights individually, and maybe eventually adjust some other parameters of the lights. In a system like LPICS or IPR caching, you can adjust virtually every shading parameter of every object and light in the scene, including swapping out shaders entirely; the only thing you can’t do is move objects or the camera. You can move the lights, and add/remove objects and lights that exist in the cache, though. Your IPR image can be full res, and at final-frame quality (yes, caustics and shadows, etc., can be included, but you will be limited as to how lights can be moved while using them if they are not cached on disk). While not “realtime”, the frame is rerendered in seconds or fractions of a second, and you can work with sequences of animation or different takes, so you could, say, light the same scene for different times of day/night in one go.

I’m not trying to bash Maxwell, since it is a work in progress; I’m just pointing out that if you think this is the best implementation you’ve seen, you’ve probably not seen much.


#10

Doesn’t LPICS use a renderfarm of really good computers to do that though? I thought you needed a farm all helping at once.

In Maxwell you just render and then tweak all the unbiased GIness in realtime on one PC.


#11

In LPICS (which I should say I haven’t used) and IPR caching (which I do use), what they do is generate a cache, which could take a renderfarm depending on what you are putting into it. Once you have the cache, you can tweak all you want and render in fractions of a second on a single workstation. Basically, they precompute as much of the scene as they are able while still allowing you to change important parameters for lighting and shading. If you watch the Maxwell video, it sure seems like the Maxwell guys are trying to do exactly the same thing, but they aren’t achieving the same results. They have those “Cooperative MXI files” which they merge and then load so they can relight the scene. With LPICS or IPR caching you just have a single cache file full of scene information. Those Cooperative MXI files seem to be image caches; I’m sure they took a while, probably a very long time, to generate, given everything I’ve heard about Maxwell. But the fact that they seem to generate one for each light in the scene makes it look like they aren’t really caches at all but simpler image formats, which would limit what you can adjust in the scene. The demo shows exactly what can be done with any half-decent compositing app by rendering a layer for each light.
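For anyone wondering what these caches actually contain: roughly, a deep framebuffer - per-pixel position, normal, and surface color saved from one expensive render pass - so that only the cheap shading step has to be re-run when a light changes. Here’s a heavily simplified sketch of that idea (my own toy version with made-up names and a Lambertian-only shading model, not what LPICS or any particular IPR cache actually does):

```python
# Simplified deep-framebuffer relighting sketch. The cache stores per-pixel
# geometry (position, normal, albedo) from one expensive render pass; moving
# or recoloring a light only re-runs the cheap shading loop below. Names and
# the diffuse-only shading model are my own simplifications.
import numpy as np

def relight_from_cache(cache, lights):
    """cache:  dict with 'position', 'normal', 'albedo' arrays of shape (H, W, 3).
    lights: list of (light_position, light_color) tuples, freely editable."""
    out = np.zeros_like(cache["albedo"])
    for light_pos, light_color in lights:
        to_light = light_pos - cache["position"]                 # per-pixel vector to the light
        dist2 = np.sum(to_light**2, axis=-1, keepdims=True)      # squared distance for falloff
        wi = to_light / np.sqrt(dist2)                           # unit direction to the light
        ndotl = np.clip(np.sum(cache["normal"] * wi, axis=-1, keepdims=True), 0, None)
        out += cache["albedo"] * light_color * ndotl / dist2     # Lambertian term with falloff
    return out

# Hypothetical usage: move or recolor a light and re-shade without rerendering.
H, W = 270, 480
cache = {
    "position": np.random.rand(H, W, 3),
    "normal":   np.random.rand(H, W, 3) - 0.5,
    "albedo":   np.random.rand(H, W, 3),
}
cache["normal"] /= np.linalg.norm(cache["normal"], axis=-1, keepdims=True)
frame = relight_from_cache(cache, [(np.array([2.0, 5.0, 1.0]), np.array([1.0, 0.9, 0.8]))])
```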


#12

Wow, this is awesome! Gee, next Maxwell will be able to adjust depth of field in realtime!

It does seem pretty limited at the moment. You obviously can’t add lights, but it’s a step in the right direction.

Good work, Next Limit. Where’s my XSI plugin?


#13

Cronholio
According to the explanation here and in the Maxwell forum, you don’t have to render layers. All you have to do is render one scene. You do not need to do cooperative rendering (that is supposed to be another feature). Also, I think it is a good addition to rendering software. I am wondering whether the software you mention is itself a renderer, or whether it sits on top of the modeling/rendering software?


#14


So, you just render your image and tweak the light(s) as you wish while rendering and/or at the end … :stuck_out_tongue: :smiley: :wink:

btw. lpics is a system to light your scene and tweak your shaders; it is NOT a renderer.

nb: I love the new feature of maxwell! :D :D :D



take care
Oleg

#15

Yes, tikal26 is right - the multiple MXIs are for the coop render, but completely separate from the relighting (which works with the daylight stuff too, I think).

The coop is like distributed rendering in that multiple computers work on the same render, except that because M~W isn’t a bucket renderer, it doesn’t do DR in the usual way. As far as I know, a relative in Spain (for example) could contribute time to rendering the scene, send you the MXI, and then it improves the quality of your current image that you yourself are rendering.
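If that sounds odd, the math behind merging independent renders is (as far as I can guess - this is my own reconstruction, not how the MXI format actually works) just a sample-count-weighted average of two unbiased estimates of the same frame:

```python
# Rough sketch of merging two independent, unbiased renders of the same scene
# into one cleaner image: weight each by its sample count. A guess at the
# principle, not the actual MXI merge code.
import numpy as np

def merge_coop(image_a, samples_a, image_b, samples_b):
    """Sample-weighted average of two Monte Carlo estimates of the same frame."""
    total = samples_a + samples_b
    return (samples_a * image_a + samples_b * image_b) / total

# Hypothetical usage: your local in-progress render plus the MXI a relative sent.
H, W = 270, 480
local  = np.random.rand(H, W, 3)   # stand-in for your own render
remote = np.random.rand(H, W, 3)   # stand-in for the contributed render
combined = merge_coop(local, 500, remote, 1500)
```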


#16

Hey guys!
I just visited their homepage to check out the promo-video, and left with a couple of question marks…
Am I the only one who thinks the car in the video (where the camera is panning) is moving sideways? It almost seems to float away from the camera!

Makes you wonder why they chose that clip for the promo video…

hmm…


#17

The reason I assumed the MXI files were actually light layers is that he has five of them and five lights in the scene, but I think I understand now…

OK, so the MXI is a precomputed part of an image, right? So you do have to render something before you can use this relighting tool, the same as if you used LPICS or IPR caching, or just used a compositor to relight a shot. In this Maxwell scenario you have to render at least a rough version of the image, then keep adding MXIs to it to refine it. Looking at the video, he’s not relighting a render in progress, which is what some people here seem to be suggesting. He has an image and adds the precomputed MXI files. Adding the precomputed MXIs improves the granularity of the resulting image; however, it’s still a long, long way from being a final frame, and I’m sure they took considerable cycles to generate the MXI files. I don’t really see how this can be considered an improvement over LPICS or IPR caching, or even rendering multiple layers and relighting in a compositor. In all those scenarios you can move, add, and remove lights, in addition to manipulating their parameters and the parameters of the surface shaders. It’s interesting in that you can apparently keep the image caches you generate and add to them for progressive refinement and the final output, but you can already do that to some extent with the other methods people are using.

If you can actually move the lights in the shot and change shaders while using the existing image and MXI files I’ll be impressed, but I’m inclined to believe that’s not possible because of the nature of Maxwell as a renderer, and the fact that he’s relighting a full image, not a cache; unless someone can demonstrate otherwise. A relighting tool isn’t worth a whole lot to a lighter if they are limited to just manipulating existing light parameters in their current positions. That’s kind of the whole point of these other relighting systems, the ability to make sweeping changes to the light setup and parameters of the lights and shaders. If you can’t move the lights in Maxwell’s “relighting” system, you are going to be stuck in the usual cycle of setting up lights, test render, revise setup, test render, on and on until you have a setup that works. Then you can move on to the relighting tool and generate additional MXI files. If that’s the case, it probably won’t save you much time if any.


#18

No, you’re wrong.

.MXI is just Maxwell’s native HDR file format, not a special image cache or anything like that. The video shows two SEPARATE, UNRELATED processes. One process is the recombining of cooperatively rendered frames into a single one. The entirely separate, unrelated feature being shown is Maxwell’s realtime interactive lighting function. You render; just render like any other time. You render your final image; no special settings, no accommodations, except that during or after the render you can completely change the contributions of any and all lights in the scene, including their caustics, etc.

So for a typical workflow, you could set up a bunch of key and fill lights with arbitrary values, and just pick your lighting as the render goes, or afterward, as many times as you like.

You could render a house in the daytime, for example, and just by moving a couple of sliders, turn off the sun, turn on the interior house lights, and have an entirely new image without re-rendering. You can do this during a render or afterwards. There is no impact on rendering time or workflow, beyond the fact that it just saved you about a billion hours of tweaking.

_Mike


#19

@Cronholio

Hmm, … I have to disagree; it is mindblowing to have the possibility to change the light settings. I can change the mood inside my rendering as I wish …

I can change from daylight to night, turn lamps on and off … it is amazing, I don’t have to rerender the picture! It saves me a lot of time, this is a fact.

btw. another fact is that you can change all this while rendering too. :wink:

edit: … Mike beat me to it. ;o)

take care
Oleg


#20

um… sorry, but am i missing something? to me it looked like he loaded in 5 files to generate a new composite, then just changed the layer opacity/exposure… that’s not relighting, that’s just changing layer opacity. you can do that with any apps output and photoshop. the only difference is here it’s seperate gi emitters for layers rather than traditional lights (in most other apps), which is nice for sure, but still not relighting, there’s no ability shown to change a lights position in realtime, it’s size, falloff charactaristics etc.