HOW TO IMPROVE LIGHTWAVE: Rendering


#1

OK, in order to clear the air, I have decided to post a series of threads
where ideas on how to improve Lightwave can be posted in an organized fashion.
The idea is to provide a one-stop place where developers (from NewTek or independent) can come in and get ideas.

This thread along with its sister threads:

HOW TO IMPROVE LIGHTWAVE: Character Animation
HOW TO IMPROVE LIGHTWAVE: Dynamics
HOW TO IMPROVE LIGHTWAVE: Effects
HOW TO IMPROVE LIGHTWAVE: Layout
HOW TO IMPROVE LIGHTWAVE: Modeling
HOW TO IMPROVE LIGHTWAVE: Rendering
HOW TO IMPROVE LIGHTWAVE: SDK and plug-in development
HOW TO IMPROVE LIGHTWAVE: Interoperability with other apps
HOW TO IMPROVE LIGHTWAVE: Workflow
HOW TO IMPROVE LIGHTWAVE: Expressions

…will be used to provide input on how to move Lightwave back into the forefront of CG.

But in order to keep things positive (glass half full), and to make this something
worthwhile, there are some rules for this thread:

1) STAY ON TOPIC:
Please don't post rants; just provide your ideas.
2) NO BASHING:
No bashing Lightwave or any other app. I mean it.
3) LINKS:
If new research is pointed out, please provide links to it.
4) PROVIDE EXAMPLES:
Show us real-life examples of how your idea would help you.
5) WORKFLOW:
Along with your comment, please provide suggestions for how to
implement the concept or idea in the workflow. Workflow is one of the
things that makes Lightwave what it is.
6) KEEP IT POSITIVE:
The can-do attitude is what once made this community great; cynicism is
killing it. Someone once told me “Optimism is an act of defiance”.
Thanks,
-Roberto
PS: I'll post the other threads later today.


#2

The list could be huge, but here are some examples:

  • More robust texture filtering to avoid flickering during animations: elliptical filtering, more robust mip-mapping, etc.
  • Subpixel displacement maps.
  • Motion blur assigned individually to objects, lights, and cameras.
  • A more robust and faster GI solution: implementation of true photon mapping, fast one-bounce Monte Carlo/interpolated radiosity (like Final Gathering in Mental Ray), individual sample settings for first and secondary bounces, a view-independent GI solution, noiseless dither patterns to avoid animation flickering, an individual pass for the GI solution, etc.
  • True volumetric instancing.
  • More efficient area lights, perhaps using depth maps.
  • A complete rewrite of the antialiasing algorithms; perhaps the solution is to go to a pure ray tracer instead of the hybrid z-buffer/raytracer present today. First-intersection-pass acceleration to provide a high sample rate in scanline rendering. AA passes assigned individually to objects/materials.
  • A new way to graphically adjust texture settings like position, rotation, and size.

#3

Improve Hypervoxels to give better results in creating the look of fluids
Instancing
Less noisy shadows when using Area lights


#4

I agree with Juan :slight_smile:
Hello, my friend :thumbsup:

I could write 4-5 pages of requests for the rendering engine.

I had a lot of dreams before the announcement of LW8.0 features.
Since then, I put all my bets on Worley’s FPrime.


#5
  1. Give an alternative to the existing geometry/pass AA system.
    I'm not sure what AA system MAX uses, but it gives a very wide range of results, has a lot of user control, and more importantly gives better results (fewer jaggies in bright areas) than LW in a fraction of the time.
    Remember when LW'ers used to laugh at MAX's render engine? Well, that changed a few years ago, big time.

  2. GI sample blending.
    Even though you have some gross control over how many samples LW generates, there is no control over sample blending. The Noise Reduction feature in LW is all but useless: there's no control over it, and it often has more of an effect flattening out bump maps. Other systems (XSI, Vray) allow for a controlled method of blending GI samples, which means far fewer samples are needed to produce a non-splotchy render in a small fraction of the time (see the first sketch after this list). OK, it may only be 90%, 80% or 70% accurate (depending on what you set it to), but I would rather have an 80% accurate, clean, nice-looking render today than a 100% accurate splotchy render in a few days' time.

  3. A MAJOR overhaul of the texture/light baking feature.
    In other apps I get a small render hit for using this feature; in LW I get an exponential render hit that often takes days per frame.

  4. Storing and incrementally adding to GI solutions.
    Basically LW starts from scratch for each frame of an animation. Vray, by contrast, will not only let you save the solution to hard disk for later use, it will also let you add to the solution if the camera moves and now faces an area that wasn't calculated before, but JUST the area it couldn't see before; any areas in the new frame that also appeared in the previous frame are NOT recalculated. This means that render times get faster instead of slower. It also lets you render every tenth frame (calculating and adding to the solution as you go), then render the entire animation JUST from the stored GI solution with no new calculation, which results in an animation rendered with GI in a little over a tenth of the time, with no flickering (see the second sketch after this list).
    Obviously this only works for camera movement.

  5. Layered renders
    Imagine a render panel that looks like the Layers panel in Photoshop. You could set each layer to a different type of buffer (depth, specular, reflection, etc.), or decide which object is on each layer, copy layers and change settings on the copy, or decide which lights are active in a given layer. The possibilities are endless.

  6. Front projection improvements.
    For those times you don't have time to break a render down into layers and comp later, or there's a reason you want it all in one shot. Why, after all this time, do we still have front projection mapping controlled by a balance of diffuse and luminosity (which is never 100% right)?

  7. Node-based surfacing.
    This would allow for the instancing of surface parameters. Changes to your fractal procedural in diffuse would automatically flow into your specular channel, even if you were to add a levels adjuster node between the two.

  8. Surface Baker on n-gons.
    I still don't know why I have to subdivide all my floorplan polygons, with their many sides, just to be able to use Surface Baker.
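First sketch (for point 2): a minimal, hypothetical illustration of what controlled GI sample blending could look like, assuming a simple distance-weighted blend of cached irradiance samples. The names and structures here are made up for illustration; this is not LightWave's or Vray's actual API.

```python
# Hypothetical sketch of distance-weighted GI sample blending.
import math

class GISample:
    def __init__(self, position, irradiance):
        self.position = position      # (x, y, z) where the sample was taken
        self.irradiance = irradiance  # (r, g, b) irradiance computed there

def blend_irradiance(point, samples, radius, accuracy=0.8):
    """Blend cached samples within reach of `point`.

    Lowering `accuracy` widens the effective radius, so fewer samples are
    needed for a smooth (if less exact) result -- the 70-90% trade-off
    described above.
    """
    effective_radius = radius / max(accuracy, 1e-3)
    total_weight = 0.0
    blended = [0.0, 0.0, 0.0]
    for s in samples:
        dist = math.dist(point, s.position)
        if dist > effective_radius:
            continue
        weight = 1.0 - dist / effective_radius   # simple linear falloff
        total_weight += weight
        for i in range(3):
            blended[i] += weight * s.irradiance[i]
    if total_weight == 0.0:
        return None  # no usable samples: the renderer would shoot new rays here
    return tuple(c / total_weight for c in blended)

# Example: two nearby cached samples smooth out to an intermediate value.
cache = [GISample((0, 0, 0), (1.0, 0.9, 0.8)), GISample((0.5, 0, 0), (0.6, 0.6, 0.6))]
print(blend_irradiance((0.25, 0, 0), cache, radius=1.0, accuracy=0.8))
```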
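Second sketch (for point 4): a minimal, hypothetical model of a GI solution that is stored on disk and only grows when the camera reveals new areas. Vray's actual irradiance map is far more involved; every name below is an assumption for illustration only.

```python
# Hypothetical sketch of an incremental, saveable GI cache across frames.
import pickle

class GICache:
    def __init__(self):
        self.samples = {}  # key: quantized world position -> irradiance

    def has(self, key):
        return key in self.samples

    def add(self, key, irradiance):
        self.samples[key] = irradiance

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump(self.samples, f)

    def load(self, path):
        with open(path, "rb") as f:
            self.samples = pickle.load(f)

def render_frame(visible_points, cache, compute_irradiance):
    """Only points the camera has never seen trigger new GI computation."""
    new_points = 0
    for key in visible_points:
        if not cache.has(key):
            cache.add(key, compute_irradiance(key))
            new_points += 1
    return new_points

# Example: frame 2 reuses everything frame 1 already solved.
cache = GICache()
fake_gi = lambda key: (0.5, 0.5, 0.5)                  # stand-in for a real GI solver
print(render_frame({(0, 0), (1, 0)}, cache, fake_gi))  # 2 new samples
print(render_frame({(1, 0), (2, 0)}, cache, fake_gi))  # only 1 new sample
cache.save("gi_solution.dat")                          # reusable for the final pass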


#6

The number one thing I would like to see is better motion blur, more like actual photos, where bright spots don't get washed out but instead have a streaking effect, like the motion blur example in HDR Shop. If I had a choice in the camera panel to make the motion blur passes additive, I would be a little happier. The image would get brighter with each pass, but that could be easily corrected for with some full-precision exposure control.
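A tiny numeric sketch of the idea, with made-up values (this is not LightWave's actual pipeline): a bright highlight sweeping across a pixel only covers it in a couple of the blur passes, so clipping each pass before averaging dulls it, while keeping full-precision values and applying exposure afterwards preserves the streak.

```python
# Hypothetical per-pass HDR values for one pixel; the highlight hits 2 of 8 passes.
import math

passes = [0.05, 0.05, 12.0, 12.0, 0.05, 0.05, 0.05, 0.05]

# Clipping each pass to display range before averaging kills the streak:
clipped_avg = sum(min(p, 1.0) for p in passes) / len(passes)   # ~0.29, dull grey

# Keeping full-precision values preserves the energy (~3.0), which a simple
# exposure curve can then roll off into a visibly bright streak:
hdr_avg = sum(passes) / len(passes)
exposed = 1.0 - math.exp(-hdr_avg)

print(round(clipped_avg, 3), round(hdr_avg, 3), round(exposed, 3))
```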

If I am trying to make a photo-realistic animation, this is the part that always kills it for me. It’s a very subtle thing but the details are what I am looking for.


#7

You have a very valid point here.

The nature of LW's motion blur tends to absorb the specular channel.
(As I understand it, the samples LW uses are composited over black because there is nothing to begin with, and the dithered motion blur creates chess-like patterns for the sake of speed. Averaging the samples weakens the entire image.)
I am afraid an additive mode wouldn't work, because the other channels cannot become victims of this weakness.
Special treatment of the specular channel is needed. You can export it as a buffer and "Add" it as a post-process.
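A minimal sketch of that workaround, assuming you have the beauty pass and the exported specular buffer as float images; the function and file layout here are hypothetical, and any compositing package's "Add" mode does the same thing.

```python
# Hypothetical per-pixel additive composite: specular buffer back over beauty.
def add_buffers(beauty, specular, strength=1.0):
    """Add two same-sized float RGB images, pixel by pixel."""
    return [[
        tuple(b + strength * s for b, s in zip(bp, sp))
        for bp, sp in zip(brow, srow)
    ] for brow, srow in zip(beauty, specular)]

# Two 1x2 example "images" (RGB floats): the second pixel gets its highlight back.
beauty   = [[(0.2, 0.2, 0.2), (0.4, 0.3, 0.2)]]
specular = [[(0.0, 0.0, 0.0), (0.9, 0.9, 0.9)]]
print(add_buffers(beauty, specular))
```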

Generally speaking, to solve such core problems you have to start digging very deeply into LW's algorithms, which is something scary for the time being.


#8

The LW rendering engine has great final output quality. But so do other rendering engines, like Mental Ray and Vray, for example.

So here are some suggestions to make it better:

  1. Make it waaaay faster: compared to other rendering engines, LW is slow. Given that it also has no subpixel displacement, big problems arise when you have to do any kind of work with displacement maps and normal maps. However, if it's faster, the everyday jobs which are less complicated will benefit from it too, and so will the workflow in general.

  2. A better AA algorithm. LW requires too many passes to obtain a good result, and this slows the workflow as well. This is obviously particularly noticeable in work for print and high resolution; not so much in TV work.

  3. Improve ScreamerNet in a good way. This is really important. For pro work you currently have to use something like Butterfly or other plug-ins to make ScreamerNet work properly. It should just let you do a good job easily.

  4. A good, fixed PSD exporter.

  5. Instances. They are great and a real need, and they allow things to render faster :slight_smile:

  6. Finally, it's obvious, but honestly I would like FPrime to render all things LW (shaders, etc.) and take the place of the current engine.
    It's fast and has almost the same quality as LW's current engine, plus a cool preview window and the ability to resume rendering. Just great.


#9

I would have to second a more robust, easy-to-use network rendering system.


#10

I'd like to see a lot of the above, and would also love to see something like FPrime built in too.

Ultimately, I would love to see LW facilitate full RIB compliance. Then they wouldn't have to worry about a rendering engine: there are free/open-source ones out there for the entry/mid-level folks, and PRMan for those that have the funds to go that route.


#11
  1. Faster AA.
  2. Faster and better MotionBlur.
  3. Better DOF. (More like X-DOF 2)
  4. Faster and less noisy blurred reflections, and maybe a texture input on the blurred reflection.
  5. SSS
  6. Faster and better Caustics.
  7. Faster GI
  8. Better noise-reduction filters
  9. Better and faster area lights
  10. Faster and improved Hypervoxels

Just a few. :smiley:


#12
  • Node-based surfacing, or the ability to create groups of layers and apply a different effect to each group.
  • Better and faster Hypervoxels. Not just one ball, but the ability to choose the number of balls, constrain them to the movement of particles or mesh vertices, and apply motion modifiers to them.
  1. Layered renders
    Imagine a render panel that looks like the Layers panel in Photoshop. You could set each layer to a different type of buffer (depth, specular, reflection, etc.), or decide which object is on each layer, copy layers and change settings on the copy, or decide which lights are active in a given layer. The possibilities are endless.

Yog: it's a great request!


#13

-Faster Hypervoxels
-Micro-polygon displacement (this is a really needed thing; it'd make a lot more possible without insane amounts of geometry)
-SSS
-Faster caustics (I've absolutely never used caustics; they're too slow)


#14

Had to drop by to majorly echo what uncon said about the motion blur; that really is the extra kicker that makes something read right, and especially with everything else being Full Precision, it's due for inclusion or working out. True lens artifacts in general would be nice: beyond Corona, more into streaks, aberrations, realistic flares, etc. I've seen OpenGL demos that do that stuff in realtime, and even though they're just using spheres, there should be a way to bring that method on board for non-realtime use. Ever since I saw the example image of highlight streaking on Splutterfish's site I've been jealous.

Aside from that, the usual stuff that has been mentioned before, and again in this thread: displacement… SSS… faster, more intense, etc.; "more like *Ray". More things built in would be great, but preferably also open enough that other people can add to it without resorting to programming tricks that come with either huge speed hits (most of the SSS plugins) or other major handicaps (FPrime, while awesome, is still severely hamstrung by not being able to "get to" more things inside of LW). I think that's the main factor that should be addressed; it would make a lot more possible.


#15

In addition to the good points above…

  1. Must be able to input 16bpc (bits-per-channel) image formats, like 16bpc Photoshop files, etc.
  2. Must be able to output more 16bpc image formats from the render menu (rather than from a plug-in that can't be accessed by other plug-ins).

These 16bpc workflow enhancements are essential; without them you can eliminate Lightwave from the workflow of a lot of film/television production houses (which is what is happening).

  3. Lightwave needs to be able to seamlessly input and output After Effects camera motion data (like C4D and Maya can). The current work-arounds are no good. Despite Newtek snuggling up with Eyeon/DFX+, After Effects is still the most popular compositing application on the planet, and LW can't integrate well with it.

#16

Lighting:

Soft shadows for all light types
Volumetrics for all light types
Visible lights within renders

Volumetrics:
Volumetric primitives
Light scattering in volumetrics
More falloff options for hypervoxels
More gradient options in hypervoxel textures
Multiple hypertextures in hypervoxels
Ability to apply hypertextures as a texture to objects
Particle lights without GI
Particle instancing of lights
Lensflares for particles and other types of special effects

Surfacing:
More gradient options in general
Nested texture channels, i.e. instead of a color swatch for each layer, have a texture swatch, similar to what DarkTree textures have
Colors in all channels, not just the Color channel


#17
  • Let Hypervoxels use the particle size from an FX HV Emitter!
    What use is the particle size after calculating a bounce or similar interaction when Hypervoxels can't access that size info for the HV size? The rendered HVs never match the proper size of the simulated particle animation.

  • Multipass rendering control.


#18

What about some kind of "node-based rendering output editor"? Just an idea. :love:


#19

Hmm, there are a lot of good reasons to do something like that!

How would you show multiple instances of the same tree?
That is, your cool example shows making a postage-stamp .tif file, and then processing a field-rendered second render with many layers.

Now how would I repeat that for 3 cameras? If the nodes were copied and pasted, how could I change all 3 cameras at once? It seems like the branching can go two ways: not just "starting with this camera, split and do both of these things…" but also "for all of these cameras, do this process, which involves splitting into these two things…"
Wow, hard to diagram.
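One possible way to handle it, sketched with entirely hypothetical names: define the output chain once as a template and instance it per camera, so editing the template changes all three cameras at once. This is just an illustration of the idea, not how any particular renderer does it.

```python
# Hypothetical template/instance scheme for a node-based output editor.
def output_chain(render, stamp_size=(148, 105)):
    """Template node chain: one render feeds both output branches."""
    return {
        "stamp_tif":      {"source": render, "resize": stamp_size, "format": "tif"},
        "field_rendered": {"source": render, "fields": True, "layers": "all"},
    }

cameras = ["CameraA", "CameraB", "CameraC"]
graph = {cam: output_chain(f"{cam}_render") for cam in cameras}

# Changing the template (e.g. the stamp size) propagates to every camera instance:
graph = {cam: output_chain(f"{cam}_render", stamp_size=(296, 210)) for cam in cameras}
print(graph["CameraB"]["stamp_tif"]["resize"])
```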

Anyone have a screenshot of how other renderers let you set up advanced output methods like this? I wonder how Maya, C4D, etc., handle it.


#20

Lots of good ideas in here; a nice way to get people constructively discussing this stuff. Great idea, Roberto!