PIXAR :: Technology Focus


#61

Yo Raf,

 My post was not addressing any of your comments in this thread (I didn't even read them). I was merely replying to people on the previous page.

Nothing in the article discusses shading, shading languages, deployment or development issues or anything similar.

 I wasn't replying to the article but to what people had to say about the speed of ray-tracers vs. "other" renderers on the last page.
 I didn't even read the article. I expect it to be mostly exaggerated marketing babble. We all know the difference between the dandy wording of a Cinefex article and the reality of the show we worked on that is being discussed in that very article. I don't expect this thing to be any different. :)

[…] (an embarrassing raytracing implementation pretty much devoid of any intelligent acceleration structure).

 So what is Pixar's irradiance atlas? I'd say it is a pretty good acceleration structure for anything that is ray-traced and 'fuzzy'. PRMan's ray-tracing speed with sharp things (sharp reflections/refractions, e.g.) still isn't great, and a lot of that comes down to the lame ray-tracing core of PRMan. But Pixar has come a long way too in the last three years. And let's face it: the things that are most expensive to do in ray-tracing are glossy reflections and refractions. And these can be done much faster using point-based techniques (at the cost of lacking secondary speculars/reflections, i.e. accuracy).
 Also, ever tried getting a fast ambocc pass out of mental ray etc. when using micropolygon-level displacement? A production shading rate demands four micropolygons per pixel. Good luck with that. :)
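 For reference, a minimal RIB sketch of such a dicing setup (shader and archive names are made up; a ShadingRate of 0.25 yields roughly four micropolygons per pixel):

```
# illustrative production-style dicing setup; values are not from the article
ShadingRate 0.25    # ~4 micropolygons per pixel
AttributeBegin
    # the bound must cover the maximum displacement amplitude
    Attribute "displacementbound" "float sphere" [0.1]
        "string coordinatesystem" ["shader"]
    Displacement "myDisplacement" "float Km" [0.05]   # hypothetical shader
    ReadArchive "hero_geometry.rib"                   # hypothetical archive
AttributeEnd
```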
 
 And whatever acceleration structure you have: part of it will fail once shaders start passing arbitrary information around. I.e. it doesn't change the fact that shading is the major spanner in the works. See my comments about Arnold & OSL vs. C++ shaders.

The same goes for the claims of comparable accuracy, feedback and lack of artifacts. It might hold true when they compare internally, but when you bring in other vendors the artifacts and flickering have been gone for quite a while now.

 With static geometry this isn't hard. If you have static geo and you don't bake your lighting, you are doing sth. wrong.
 Flickering, in a ray-tracer, is mostly a function of the number of samples and, if the renderer has ray differentials, of shader (anti-)aliasing.
 If you can shoot enough samples, flickering becomes a non-issue. In turn, if your shaders are fast enough, shooting many samples is a non-issue. Which brings us back to shading being the bottleneck in PRMan & co. that prevents people from using many samples.
 Which in turn led Pixar to come up with a plan B which, for the time being, is point-based techniques.
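 To make the samples-vs.-flicker point concrete, a minimal RSL sketch (shader name and defaults are made up) where frame-to-frame flicker is controlled purely by the sample count:

```
/* illustrative only: more samples -> less noise -> less flicker */
surface simpleOcclusion(float samples = 256; float maxdist = 100)
{
    normal Nn = normalize(N);
    float occ = occlusion(P, Nn, samples, "maxdist", maxdist);
    Ci = (1 - occ) * Cs;
    Oi = 1;
}
```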

Oftentimes even PRMan heavyweights ended up running their ambocc bakes through a raytracer because of how unwieldy, slow and memory hungry point caches were, not to mention the ridiculous network and wrangling impact when you compare the results to what a decent RT can do in a fraction of the time by recomputing.

 Sorry man, this is plain bollocks. Even in a ray-tracer, you'd be foolish not to bake any property which doesn't change from frame to frame, store it on disk, and share it between boxes rendering the same geo.
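 A minimal RSL sketch of that bake-and-reuse pattern (shader, channel and file names are made up):

```
/* pass 1 bakes the frame-invariant property to disk with bake3d();
   subsequent frames re-read it with texture3d() */
surface bakeStatic(string ptcfile = "static_irr.ptc"; float bakepass = 1)
{
    normal Nn = normalize(N);
    color irr = 0;
    if (bakepass != 0) {
        irr = diffuse(Nn);                            /* expensive, once */
        bake3d(ptcfile, "", P, Nn, "irradiance", irr);
    } else {
        texture3d(ptcfile, P, Nn, "irradiance", irr); /* cheap re-use */
    }
    Ci = irr * Cs;
    Oi = 1;
}
```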
 What is a decent ray-tracer? Define "decent". I'd be curious. ;)
 
 On that note: ambocc is physically incorrect and going out of fashion, from what I've seen lately.

Comparable accuracy on large, complex sets with highly variable density that aren't LODded like crazy is absolutely not there,

 Huh? You must be talking about ray-tracing. With point-based, this is a non-issue. Even if you do something utterly stupid like baking your entire scene into a single cloud. Since Pixar followed DNA in storing their ptcs in a spatial data structure, even such mistakes are absorbed rather gracefully by 3Delight & PRMan.

raytracing will run circles around point-based, with its splotches and missing areas that require multi-frame all the time.

 Run circles? Man, you need to switch back to home-grown. ;)
 Splotches are the ptc equivalent of ray-tracing noise. It simply means you need more density in your ptc. Just as you need more samples if your ray-traced property gets too noisy.
 Multi-frame? Who cares? Can you explain the conceptual difference between storing data, implicitly, in an in-memory irradiance cache and in an in-memory point cloud? There is none.
 3Delight has used automatic in-memory point clouds for its SSS since 2003. This is fine and dandy as long as you don't want to reuse data (deforming creature, changing lights). But as soon as you have static geo & lights (Greek marble temple, sun), it is stupid not to give the user a way to create the ptc on disk and re-use it.
 As far as multi-frame goes: you can run a wrapper script that renders the same RIB n times and preps it for the pre-passes using Ri filters or even environment variables that make it reference different archives containing the data needed for each pass.
 The latter works well for all in-memory rendering too (no wrapper script; the user doesn't even know about 'multi-frame').
 Any ray-tracer fills its spatial data structures before it can ray-trace. What do you call this? A pre-pass? Multi-frame? This is just using different terms for the same thing. Any clever algorithm known to us at this time uses some sort of caching.
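 A sketch of the kind of wrapper script described above (renderer CLI, variable and file names are illustrative; it assumes the RIB references its per-pass archive through the environment variable):

```
#!/bin/sh
# render the same RIB once per pass; an environment variable selects
# a different archive containing the data needed for each pass
for PASS in ptc_bake beauty; do
    PASS_ARCHIVE="settings_${PASS}.rib" renderdl scene.rib
done
```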

To cache data, you need to first calculate it. Whether this happens continuously or all at once, beforehand, doesn't really matter. I guess I'm saying I have no idea what you are talking about. :slight_smile:

 The difference between a renderer that caches data unasked and one that does it after you tell it to, from the artist's perspective, is nil. But from the TD's perspective, the latter offers a much greater degree of flexibility (or rather: the former doesn't offer any).
 
 The great thing about point-based is that it addresses exactly the issues you mention: lazy loading of baked properties. Even if your scene is not loaded lazily, this is much less of an issue with ptc-based techniques than in a ray-tracer, as complexity does not matter.
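 (In PRMan the usual route to lazily loaded baked data is converting the point cloud into a brick map, which the renderer then pages in on demand; file names below are made up:)

```
# convert a baked point cloud into a brick map for on-demand loading
brickmake static_irr.ptc static_irr.bkm
```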
 I.e., try ray-traced micropoly displacements in any of your "decent" ray-tracers out there. Good luck with that & the "running circles" thing. ;)

Last but not least, the claim that point-based techniques usher in a new era of things previously impossible in film productions is pure, unadulterated BS, since several flavours of raytracing have been used for years, and remain in use today even after point clouds have been massively adopted, to obtain stellar ambocc, irradiance caching and SSS,

 Yeah, remind me again: which full-blown VFX show lately was done using all ray-tracing, with no pipeline effort (the renderer did it all "automatically") and none of the annoying "multi-frame" setups you seem to be so hooked on? :)

 You can certainly ray-trace scenes of "Avatar" complexity, but you will end up doing a lot of tessellation & shading numerous times: when you run out of RAM (which you will, rapidly and all the time, during rendering), you either need to cache to disk (not allowed, according to you, since that would be, er, "point-based") or you need to throw the data out and re-do the computations when a bloody stray ray happens to hit that section of your scene (again).
 
 You may also not realize that irradiance caching is, fundamentally, a point-based technique.
 It doesn't matter how the irradiance was acquired. See my comment above about doing something "clever". Regardless of what algorithm was used to calculate a property: if it gets re-used, not caching it [i]is[/i] stupid. How you cache it doesn't matter. "Cache" and "point cloud" are two names for the same thing, conceptually.

and even now that point clouds are available and accessible, it's not uncommon to trace them from something when other building methods aren't viable.

 Point clouds are another tool in the box, nothing more, nothing less. I think their use will go from storing data that is used to calculate properties to just being used to store data, as ray-tracing becomes more of a viable option.
 It currently isn't, therefore point-based techniques are great. What they aren't is the holy grail of rendering techniques (of this era, even). I think we can agree on that. :P
 
 .mm

#62

Just make sure the camera used to bake the ptc sees all the objects that need to appear in your ptc. Or use an orthographic projection showing the entire scene, with your original camera as a dicing camera.

There are many ways to do this, and they can be set up so they happen fully automatically.
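A hedged RIB sketch of the second variant (names and numbers are made up; the dicing-camera attribute shown is a PRMan extension):

```
# declare the original shot camera so it can drive dicing
TransformBegin
    Transform [1 0 0 0  0 1 0 0  0 0 1 0  0 0 -10 1]  # illustrative
    Camera "shotcam"
TransformEnd

# bake camera: orthographic projection framing the whole set
Projection "orthographic"
ScreenWindow -500 500 -500 500   # wide enough to contain every object

# dice as seen from the original camera
Attribute "dice" "string referencecamera" ["shotcam"]
```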

.mm


#63

That's the point, you are running out of memory. My last model has 3.5 million polys, and most objects have a dispmap on top. With 3Delight it renders without any problem, with subdivs on all objects and a dispmap on top of most. With optimized raytrace settings (optimized as far as is possible for me :wink: ) and local shading-rate values it was possible to render it with 6-7 GB of RAM. With mr, it renders one bucket, then runs out of 8 GB of RAM and takes ages to render the model. It is also possible with mr and all the displacement, but I would guess it takes 50% more RAM to render without a cache to disk. Rendertime isn't much different between 3Delight and mr, except for motion blur, which in mr is a pain in the ass compared to 3Delight (without traced dispmaps and motion blur, of course). But the difference in RAM usage is a big point here.


#64

Interesting article, though this is not exactly new technology. In fact I've used it with RMS for a couple of years; I wonder why it's being showcased now?
And yes, it does have some downsides. Bloody huge files! :slight_smile:


#65

Interesting article, though this is not exactly new technology. In fact I've used it with RMS for a couple of years; I wonder why it's being showcased now? And yes, it does have some downsides. Bloody huge files!
Hehe, thought that too. It is nothing new. It has been used in production for years and is more a standard feature than a special feature. Even funnier are these surprised comments:
"Wow, never heard about it, I want that too for MR, such a cool technique… "
What??? Get real, or go back to school!!!


#66

I think nobody is at fault for not understanding a technology they have never worked with. :slight_smile:


#67

At the end of the day… you've just gotta bloody render it! lol


#68

Does that mean cinema tickets become significantly cheaper now for movies that were rendered with PTC technology, or who will profit from it?

:wink:


#69

That's a selection bias, and I guess that's okay in film production, where you probably want that level of control anyway, but it's not okay in many other cases.

And even an orthographic projection doesn't help you solve the uneven-coverage problem. So you naturally bias the solution heavily towards certain geometry arrangements.

I think neither the article nor the technical memo mentions these shortcomings, and that is a bit lame, especially by Per Christensen's standards (I guess that's why it is just a technical memo and not a journal or conference article).


#70

I really wish that sampling could be set with a “samples per unit area” (even if it’s approximate) rather than using the resolution and shading rate. You might even want different sampling depending on what you are baking.

The current way of setting this is very unintuitive. I have requested this several times now - so I don’t see it coming any time soon.
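(For context, a sketch of the current, resolution-plus-shading-rate way of controlling bake density; the numbers are purely illustrative:)

```
# in a bake pass, point density falls out of micropolygon density,
# which is driven by the bake camera's resolution and the shading rate
Format 1024 1024 1.0   # higher resolution -> denser point cloud
ShadingRate 4          # coarser shading rate -> sparser point cloud
```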

It might be nice to have the option of setting up baking by grouping attributes and have no camera at all too. You could then do multiple bakes at different sampling, from the same scene. Obviously camera based baking is useful - but generally makes it less reusable.

The main problem with point clouds is handling all the data, and making all the shaders point cloud aware, but both of these aren’t insurmountable issues.

I think there is still quite a lot of scope for improving raytracing in prman (and RenderMan renderers in general). For example there could be separate attributes, shading rates or even shaders for different ray types/depths, e.g. by extending Ri conditionals.
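Part of that can already be approximated at the shader level today; a hedged RSL sketch using rayinfo() (shader name and the cheap/expensive split are made up):

```
/* run full-quality shading for camera rays only and a cheap
   stand-in when the shader executes at a secondary ray hit */
surface rayTypeAware(float samples = 64)
{
    uniform string raytype = "unknown";
    rayinfo("type", raytype);

    normal Nn = normalize(N);
    if (raytype == "camera")
        Ci = Cs * (1 - occlusion(P, Nn, samples));  /* expensive path */
    else
        Ci = Cs * diffuse(Nn);                      /* cheap path */
    Oi = Os;
    Ci *= Oi;
}
```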

The ray bundling that Arnold/OSL uses is very interesting too. This could allow SIMD shading for ray hits rather than the current point sampling. Just hitting geometry is quite fast in prman - it is running shaders which really slows things down.

Simon


#71

Great article, quite interesting :slight_smile:


#72

I learn a lot when very smart developers don't agree. Thanks for your insight, guys; your comments are far more informative than the article itself.

I really enjoy the 3Delight and MR combo in Softimage. The implementation is perfect, and one renderer's weakness is the other's strength.


#73

Interesting article for sure. I too always find it peculiar that things are compared only to non-optimized algorithms, which is a pet peeve of mine with many technical papers, as they fail to show what 'real' optimization they have achieved. Instead they show the theoretical optimization they have achieved against the lowest common denominator. This happens all the time in technical papers. As a person doing some 3D technology development, I would very often like a more balanced comparison in papers/reports, showing how well a method works against at least the most commonly used production models for whatever the paper is trying to show. Preferably even the most recent, open models. All that aside, I think it's always great when people try to show us more of the thinking behind things like this.


#74

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.


#75

Thread re-opened for comment as Vault article.


#76

Why is this on the front page?


#77

Hi Michael,

This article is not about Toy Story 3; it is a Vault article about the technology of point-based lighting within RenderMan. It was a highly rated story when it was posted, and it is still relevant and great reading.


#78

Thanks for reposting the article on Pixar and point clouds. It was very interesting, as were the comments posted about it! I was able to read the article as presented (side-by-side paragraphs) but appreciated the comments about it being awkwardly formatted.

The first example from "Up" has the same caption as a previous example, i.e. "the light on the wall is reflected off the sneaker."


#79

hi guys,

For me, point-based approaches were quite a lifesaver, because as a student resources are pretty limited and I had (and still have) to do a lot of rendering on a single notebook. prman gives me the tools I need to scale my renders so I can meet deadlines. Renderers like mr and vray usually don't give you the same scalability, since a lot of rendertime optimization happens by changing sample counts, which at a certain level introduces noise.

What I realize once again when reading the comments here is that a lot of people don't know how a RenderMan-compliant renderer actually works. Compared to mr and vray, prman is very complex and rather tricky, but (and that is why I think it is still the world's leading renderer) it is flexible in EVERY way. At every step in the rendering pipeline you get the necessary API to customize and reimplement. Like this you really can render anything you want.

The biggest drawback of prman is simply that there isn't much information about it. I often see people from the art side work with RMS/RfM and get pretty good results, but to get to the next level some rather demanding technical knowledge is needed. When a studio has the right ratio of designers and capable technicians, there's nothing you can't do with prman.

By the way, prman has one of the fastest raytracers on the market! What makes people think it is slower than others is the way shader execution is handled. Techniques that avoid shader execution at ray hits show how powerful the raytracer actually is. Also keep in mind how much control prman's raytracer gives you; I don't know any renderer which gives you such control over raytracing.
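(One common way to avoid shader execution at ray hits is declaring geometry opaque to transmission rays, so shadow rays never run its surface shader; a hedged RIB sketch with the older-style attribute, archive name made up:)

```
AttributeBegin
    # transmission rays treat this geometry as simply opaque,
    # so no surface shader runs at shadow-ray hits
    Attribute "visibility" "string transmission" ["opaque"]
    ReadArchive "hero_geometry.rib"
AttributeEnd
```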

Greets,
Patrik


#80

Can you explain this more? I have read in many places that RenderMan's raytracer is very slow. Thanks