Yo Raf,
My post was not addressing any of your comments in this thread (I didn't even read them). I was merely replying to people on the previous page.
Nothing in the article discusses shading, shading languages, deployment or development issues or anything similar.
I wasn't replying to the article but to what people had to say about speed of ray-tracers vs. "other" renderers on the last page.
I didn't even read the article. I expect it to be mostly exaggerated marketing babble. We all know the difference between the dandy wording of a Cinefex article and the reality of the show we worked on that's being discussed in that very article. I don't expect this thing to be any different. :)
[…] (an embarrassing raytracing implementation pretty much devoid of any intelligent acceleration structure).
So what is Pixar's irradiance atlas? I'd say it is a pretty good acceleration structure for anything that is ray-traced and 'fuzzy'. PRMan's ray-tracing speed with sharp things (sharp reflections/refractions, e.g.) still isn't great, and a lot of that comes down to the lame ray-tracing core of PRMan. But Pixar has come a long way too in the last three years. And let's face it: the things that are most expensive to do in ray-tracing are glossy reflections and refractions. And these can be done much faster using point-based techniques (at the cost of lacking secondary speculars/reflections, i.e. accuracy).
Also, ever tried getting a fast ambocc pass out of mental delay et al. when using micropolygon-level displacement? A production shading rate demands 4 micropolys per pixel. Good luck with that. :)
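To put a number on that (back-of-the-envelope; the 2K frame size and the depth complexity are assumptions, not measurements):

[code]
# Rough micropolygon count at a production shading rate.
# RenderMan's ShadingRate is the target micropolygon area in pixels,
# so 4 micropolys per pixel corresponds to ShadingRate 0.25.
width, height = 2048, 1556   # 2K full-aperture frame (assumed)
shading_rate = 0.25          # pixel area per micropolygon
depth_complexity = 3         # average diced layers per pixel (assumed)

micropolys = width * height / shading_rate * depth_complexity
print(f"~{micropolys / 1e6:.0f} million micropolygons")
# -> ~38 million, every one of which becomes potential ray-trace
#    geometry once displacement forces you to trace the diced surface
[/code]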
And whatever acceleration structure you have: part of it will fail once shaders start passing arbitrary information around. I.e., it doesn't change the fact that shading is the major spanner in the works. See my comments about Arnold & OSL vs. C++ shaders.
The same goes for the claims of comparable accuracy, feedback and lack of artifacts. It might hold true when they compare internally, but when you bring in other vendors the artifacts and flickering have been gone for quite a while now.
With static geometry this isn't hard. If you have static geo and you don't bake your lighting, you are doing something wrong.
Flickering, in a ray-tracer, is mostly a function of the number of samples and, if the renderer has ray differentials, of shader (anti-)aliasing.
If you can shoot enough samples, flickering becomes a non-issue. In turn, if your shaders are fast enough, shooting many samples is a non-issue. Which brings us back to shading being the bottleneck in PRMan & co. that prevents people from using many samples.
Which in turn led Pixar to come up with a plan B which, for the time being, is point-based techniques.
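For illustration, nothing renderer-specific, just the 1/sqrt(N) behaviour of any Monte Carlo estimator (a toy in Python; the 'radiance' being integrated is a made-up stand-in):

[code]
# Frame-to-frame noise of a toy Monte Carlo pixel estimator:
# quadruple the sample count and the noise halves.
import random, statistics

def render_pixel(num_samples):
    # stand-in for integrating incoming radiance over the hemisphere
    return sum(random.random() for _ in range(num_samples)) / num_samples

for n in (16, 64, 256, 1024):
    estimates = [render_pixel(n) for _ in range(200)]
    print(f"{n:5d} samples -> noise {statistics.stdev(estimates):.4f}")
[/code]

If each sample is cheap to shade, the row you need is just a render setting away. If shading dominates, it isn't.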
Oftentimes even PRMan heavyweights ended up running their ambocc bakes into a raytracer for how unwieldy, slow and memory-hungry point caches were, not to mention the ridiculous network and wrangling impact when you compare the results to what a decent RT can do in a fraction of the time recomputing.
Sorry man, this is plain bollocks. Even in a ray-tracer, you'd be a tard not to bake any property which doesn't change from frame to frame, store it on disk, and share it between boxes rendering the same geo.
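The pattern, sketched in Python (the shared path and the compute callback are made up; any real pipeline does the equivalent with ptcs or brickmaps):

[code]
# Bake-or-reuse: compute a frame-invariant property once, publish it to
# shared storage, and let every box rendering the same geo pick it up.
import os, pickle

CACHE_DIR = "/net/bakes"  # shared filesystem (assumed)

def baked(key, compute):
    path = os.path.join(CACHE_DIR, key + ".bake")
    if os.path.exists(path):              # another box already baked it
        with open(path, "rb") as f:
            return pickle.load(f)
    data = compute()                      # the expensive part
    tmp = "%s.tmp.%d" % (path, os.getpid())
    with open(tmp, "wb") as f:            # write-then-rename so nobody
        pickle.dump(data, f)              # ever reads a half-written bake
    os.replace(tmp, path)
    return data
[/code]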
What is a decent ray-tracer? Define "decent". I'd be curious. ;)
On that note: ambocc is physically incorrect and going out of fashion, from what I've seen lately.
Comparable accuracy, on large, complex sets of highly variable density that aren't LODded crazily, is absolutely not there,
Huh? You must be talking about ray-tracing. With point-based, this is a non-issue. Even if you do something utterly stupid like baking your entire scene into a single cloud. Since Pixar followed DNA and now stores its ptcs in a spatial data structure, even such mistakes are absorbed rather gracefully by 3Delight & PRMan.
raytracing will run circles around pointbased with its splotches and missing areas that require multiframe all the time.
Run circles? Man, you need to switch back to home-grown. ;)
Splotches are the ptc equivalent of ray-tracing noise. It plainly means you need more density in your ptc, just as you need more samples if your ray-traced property gets too noisy.
Multi-frame? Who cares? Can you explain the conceptual difference between storing data, implicitly, in an in-memory irradiance cache and in an in-memory point cloud? There is none.
3Delight has used automatic in-memory point clouds for its SSS since 2003. This is fine and dandy as long as you don't want to reuse data (deforming creature, changing lights). But as soon as you have static geo & lights (Greek marble temple, sun), it is stupid not to give the user a way to create the ptc on disk and re-use it.
As far as multi-frame goes: you can run a wrapper script that renders the same RIB n times and preps it for the pre-passes using Ri filters, or even environment variables that make it reference different archives containing the data needed for each pass (see the sketch below).
The latter works well for all-in-memory rendering too (no wrapper script; the user doesn't even know about 'multi-frame').
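For the wrapper variant, a minimal sketch (the pass names, the environment variable and the renderer command line are assumptions; the real thing would be pipeline-specific):

[code]
# Render the same RIB once per pass; an Ri filter or the RIB itself
# reads $RENDER_PASS to pull in the archive that pass needs.
import os, subprocess, sys

rib = sys.argv[1]
passes = ["bake_ptc", "bake_sss", "beauty"]

for p in passes:
    env = dict(os.environ, RENDER_PASS=p)
    subprocess.check_call(["renderdl", rib], env=env)
[/code]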
Any ray-tracer fills its spatial data structures before it can ray-trace. What do you call this? A pre-pass? Multi-frame? This is just using different terms for the same thing. Any clever algorithm known to us at this time uses some sort of caching.
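Boiled down to a toy, with a flat uniform grid standing in for the real BVH/kd-tree and points assumed to lie in unit space:

[code]
# Phase 1 is the "pre-pass" every ray-tracer runs; phase 2, the actual
# tracing, only ever touches the structure that phase 1 built.
from collections import defaultdict

def build_grid(points, res=16):
    grid = defaultdict(list)              # phase 1: fill the structure
    for p in points:
        cell = tuple(min(int(c * res), res - 1) for c in p)
        grid[cell].append(p)
    return grid

def lookup(grid, p, res=16):              # phase 2: a ray marching the
    cell = tuple(min(int(c * res), res - 1) for c in p)   # grid queries
    return grid[cell]                                     # cells like this
[/code]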
To cache data, you need to first calculate it. Whether this happens continuously or all at once, beforehand, doesn't really matter. I guess I'm saying I have no idea what you are talking about.
The difference between a renderer that caches shit unasked and one that does it after you tell it to is, from the artist's perspective, nil. But from the TD's, the latter offers a much greater degree of flexibility (or rather: the former doesn't offer any).
The great thing about point-based is that it addresses exactly the issues you mention: lazy loading of baked properties. Even if your scene is not loaded lazily, this is much less of an issue with ptc-based techniques than in a ray-tracer as complexity does not matter.
I.e., try ray-traced micropoly displacements in any of your "decent" ray-tracers out there. Good luck with that & the "running circles" thing. ;)
Last but not least, the claims of point based techniques ushering in a new era of things previously impossible in film productions are pure, unadulterated BS, since several flavours of raytracing have been used for years, and remain in use today even after pointclouds have been massively adopted, to obtain stellar ambocc, irradiance caching and SSS,
Yeah, remind me again: which full-blown VFX show lately was done using all ray-tracing, with no pipeline effort (the renderer did it all "automatically") and none of the annoying "multi-frame" setups you seem to be so hooked on? :)
You can certainly ray-trace scenes of "Avatar" complexity, but you will end up doing a lot of tessellation & shading numerous times, because when you run out of RAM (which you will, rapidly and all the time, during rendering), you need to either cache to disk (not allowed, according to you, since that would be, er, "point-based") or throw the data out and re-do the computations when a bloody stray ray happens to hit that section of your scene (again).
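What "throw the data out and re-do it" amounts to, sketched as a bounded cache (the tessellate callback stands in for the expensive dice-and-displace step):

[code]
# Once the working set exceeds the budget, a stray ray hitting an
# evicted patch pays the full tessellation cost all over again.
from collections import OrderedDict

class GeoCache:
    def __init__(self, max_items, tessellate):
        self.cache = OrderedDict()
        self.max_items = max_items
        self.tessellate = tessellate      # expensive: dice + displace
        self.recomputes = 0

    def get(self, patch_id):
        if patch_id in self.cache:
            self.cache.move_to_end(patch_id)     # LRU bookkeeping
            return self.cache[patch_id]
        self.recomputes += 1                     # miss: pay again
        geo = self.tessellate(patch_id)
        self.cache[patch_id] = geo
        if len(self.cache) > self.max_items:
            self.cache.popitem(last=False)       # evict oldest
        return geo
[/code]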
You may also not realize that irradiance caching is, fundamentally, a point-based technique.
It doesn't matter how the irradiance was acquired. See my comment above about doing something "clever". Regardless of what algorithm was used to calculate a property: if it gets re-used, not caching it [i]is[/i] stupid. How you cache it doesn't matter. "Cache" and "point cloud" are two names for the same thing, conceptually.
and even now that pointclouds are available and accessible, it’s not uncommon to trace them from something when other building methods aren’t viable.
Point clouds are another tool in the box, nothing more, nothing less. I think their use will go from storing data that is used to calculate properties to just being used to store data, as ray-tracing becomes more of a viable option.
It currently isn't, therefore point-based techniques are great. What they aren't is the holy grail of rendering techniques (of this era, even). I think we can agree on that. :P
.mm
) and local shading rate values it was possible to render it with 6-7 GB of RAM. With mr, it renders one bucket and then runs out of 8 GB of RAM and takes ages to render the model. It's also possible with mr and all the displacement, but I would guess it takes 50% more RAM to render without caching to disk. Render time isn't much different for 3delight and mr, except for the motion blur, which in mr is a pain in the ass compared to 3delight (without traced dispmaps and motion blur, of course). But the different RAM usage is a big point here.