PIXAR :: Technology Focus


#41

This paper describes the implementation of the technique and also explains how it differs from irradiance caching and other GI techniques.

http://graphics.pixar.com/library/PointBasedColorBleeding/

The book Point-Based Graphics also explains in great detail an early implementation of this technique, along with other point-based techniques.


#42

I’ve had quite the opposite experience with raytracers.
RPS has been the tool that has allowed us not to hold back in production, because it has been capable of delivering frames from very complex scenes with upwards of 100 million polys (or the equivalent in displacement maps) with SSS, DOF, motion blur, refraction and reflections, color bleed and deep shadows, at 1080p resolution.
The pass system is extremely stable and predictable, and the overall reliability is incredible.

I’m sure you guys know what you’re talking about, but our experience with this software has been nothing short of a godsend after depending on MR for years with little success.

I wouldn’t say RPS is going to “roll over and die” any time soon. Quite the opposite.


#43

Unfortunately the article oversimplifies things and therefore does not really focus on the shortcomings of the technique, and there are a lot of them. For example, in really complicated scenes with lots of occlusion the technique doesn’t work at all, because it misses contributing scene elements that are not directly visible or are obscured by other objects. The technique also only handles LDD light paths; LSD or LSDS paths (i.e. paths with a specular bounce, such as caustics) are not possible with it.


#44

I’m not sure what renderer you are using, but this doesn’t happen with RPS.
All you need to do is generate the point cloud with a camera that has all the elements to be rendered in its frustum. No element should be left behind.
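
For anyone who hasn’t set up the bake pass before, here is a rough sketch of what that first pass can look like in RSL, loosely following Pixar’s point-based color bleeding application note. Treat it as an illustration only: the shader name, parameters and the _area/_radiosity channel names are just the conventions from that note, not something lifted from a production setup.

```
/* Bake pass: run this surface shader through a camera whose frustum
   frames every object that should contribute bounce light.
   The RIB needs matching declarations, roughly:
     DisplayChannel "float _area"
     DisplayChannel "color _radiosity"
   Names and defaults below are illustrative. */
surface bake_radiosity(string ptcname = "direct.ptc")
{
    normal Nn = faceforward(normalize(N), I);

    /* direct illumination only - this is what gets "bounced" later */
    color direct = 0;
    illuminance(P, Nn, PI/2) {
        direct += Cl * (normalize(L) . Nn);
    }
    Ci = Cs * direct;
    Oi = Os;

    /* store one point per micropolygon: its area and its outgoing radiosity */
    if (ptcname != "")
        bake3d(ptcname, "_area,_radiosity", P, Nn,
               "interpolate", 1,
               "_area", area(P),
               "_radiosity", Ci);
}
```

The point cloud baked this way from a wide “coverage” camera is then what the beauty pass reads back from.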


#45

Well, they talk about how effective it is because they are using the REYES system. I’m only familiar with this from “Digital Lighting and Rendering, Second Edition” (I only use mental ray). REYES = “render everything your eyes see” would suggest that it does not compute anything you don’t see in your render. If you are rendering without this system then, yeah, a point cloud would be generated for objects you don’t see… but I’m not sure it would work that way when tied in with REYES.


#46

Sorry to correct you, but REYES = Renders Everything You Ever Saw.
And RenderMan only renders and dices what is inside the camera frustum; this goes for everything it does.


#47

Good catch. :smiley:

So RenderMan/REYES will take into account things you can’t see? Like, if I had a neon green ball behind a big box, would it still GI the green ball, even though the camera can’t see it?

Again sorry, I’m not too familiar with renderman.


#48

This seems to have converged to yet another thread where people compare apples to oranges for the sake of bashing a product.

   Firstly, if VRay (or mental delay) does what you need, why even bother comparing it to another renderer, like e.g. PRMan? 
   
   Studios that use PRMan, 3Delight or AIR etc. usually depend on being able to write their own shaders. This is true for 9 out of 10 places I worked at that use these renderers.
   Programmable shading + shader message passing + arbitrary information attached to rays fired across the scene + arbitrary control flow inside the shader + multiple shaders running in parallel, using both normal calls and callbacks to each other, makes one thing more or less impossible (or let's say: exponentially harder) for the developers of such renderers: optimizations that cover even most cases (let alone all of them). A tiny message-passing sketch is below.
   On top of that, these renderers use an array of rendering algorithms that can be mixed and matched in a single render. Last time I tried to use mental delay's scanline frontend, half of the features that worked before exploded in my face or didn't behave the way they did when only ray tracing was used.
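
To make the message-passing point concrete, here is roughly what it looks like in RSL; a minimal sketch, assuming a light shader that exposes the conventional __nondiffuse output plus a made-up __tint output parameter as a per-light art-direction hook:

```
/* Illustrative only: a surface shader querying output parameters of
   whatever light shader is currently lighting it (RSL message passing). */
surface msg_demo()
{
    normal Nn = faceforward(normalize(N), I);
    color diff = 0;

    illuminance(P, Nn, PI/2) {
        float nondiff = 0;   /* set by lights that want to be excluded from diffuse */
        color tint = 1;      /* hypothetical per-light surface tint */
        lightsource("__nondiffuse", nondiff);
        lightsource("__tint", tint);
        diff += (1 - nondiff) * tint * Cl * (normalize(L) . Nn);
    }

    Ci = Os * Cs * diff;
    Oi = Os;
}
```

The matching light shader would simply declare `output float __nondiffuse = 0;` (and the hypothetical `__tint`), and the renderer has no realistic way to predict, and therefore aggressively optimize, what every surface will do with those values.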
   
   3Delight e.g. offers a pure ray-tracing mode, or a REYES + ray-tracing hybrid. But at the same time, you can mix in point-based lookups, photon mapping, deep shadows, ray-traced secondary but point-based tertiary reflections, etc.
   PRMan is more or less the same. Check how many algorithms your super-fast 'other' renderer offers, how well they actually work, and whether they work at all when combined arbitrarily in a single image.
   And all this is not even considering the shading subsystem.
   
   I would guess that Arnold, running Open Shading Language (OSL) shaders, would be roughly 10-30 times slower than when it uses its built-in C++ shaders.
   Even then, OSL radiance closures are currently hard-wired and written in C++, so we're not even talking vanilla OSL here.
   This is a guess, so maybe it's only 10 times slower. But I'd be surprised if it wasn't at least [i]one[/i] order of magnitude slower. In any case: it sure will be [i]a lot[/i] slower, and I believe the guys in Spain and the OSL team at Sony both know their shite quite well.
   
   People seem to completely miss how modern shading pipelines in big VFX facilities work. One thing that people in feature film VFX are more interested in than photorealism alone is photorealism paired with artistic control.
   The latter always means hacking reality. Look at any Renaissance painting. Do you think all light in that painting has square falloff? Do you think the shadow directions match the positions of the light sources? Do you think reflections or specular highlights do? Did you ever work in a professional photo studio? Do you know how much people digitally 'hack' and 'cheat' lighting even there?
   It is not that hard to write a reasonably fast unbiased path tracer that renders very photoreal-looking images. Check out what students of computer science with an emphasis on image synthesis produce these days. Quite stunning.

But a rendering system with production-level flexibility? The more flexible you make a renderer, the harder it becomes to optimize to ensure it keeps producing images “fast” (whatever that actually means).

   Writing shaders in C++ is neither fun nor half as fast as writing them in a specialized high-level language like RSL. If that weren't true, SPI wouldn't bother having a team of developers, in parallel with the main Arnold team, developing OSL. They'd just keep using the hard-wired C++ shaders (as they had to in the movies already produced with that renderer).
   It all boils down to how much it costs to develop a shading pipeline in C++ vs e.g. RSL (or OSL). Which is why studios that need this flexibility prefer using renderers like PRMan or 3Delight (and will be interested in Arnold once it has OSL support).
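
As a trivial illustration of both points (the brevity, and the "hacking reality" bit from above), this is the kind of thing a shader TD knocks out in a few lines of RSL: a wrapped, reshaped diffuse term that isn't physical at all but is trivially art-directable. All names and defaults here are made up; the C++ equivalent inside a renderer's shading API is a lot more ceremony for the same effect.

```
/* Non-physical, art-directable diffuse: a sketch, not a production shader. */
surface cheat_diffuse(
    float wrap = 0.2;    /* how far light wraps past the terminator */
    float shape = 1.5)   /* gamma on the falloff, for contrast control */
{
    normal Nn = faceforward(normalize(N), I);
    color diff = 0;

    /* wide cone so lights slightly behind the terminator still contribute */
    illuminance(P, Nn, PI) {
        float ldotn = normalize(L) . Nn;
        float term = pow(clamp((ldotn + wrap) / (1 + wrap), 0, 1), shape);
        diff += Cl * term;
    }

    Ci = Os * Cs * diff;
    Oi = Os;
}
```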
   
   Because for what it costs to hire another senior shader-writing TD for a year, you can add 40 blades + render licenses to your farm (this assumes you use 3Delight; with PRMan you can probably only add 25, with AIR probably 65+).
   These boxes can make up for the render speed you lose by using a high-level, specialized shader VM. And you can add another bunch next year, and the year after.
   So costs are equal, but the flexibility of your shading system by far surpasses what VRay, Arnold or mental delay offer with their current, canned C++ shaders.
   In other words: in the 1st world, machines are always cheaper than experts.
   So this is a valid path, and the reason why these places don't care that PRMan etc. are sometimes slower than VRay & co.: for the cases where it matters, you simply can't use the latter, as their shading systems are either too inflexible or too hard (aka: expensive) to access/extend.
   
   That being said, if you don't need the flexibility PRMan & co offer, just use one of those other renderers.
   But please don't blabber about the latter being "faster" than the REYES/hybrid ones until you truly understand what is going on under the hood.
   Yes, Arnold is bloody fast from all I've seen. But: it doesn't (yet) use OSL.
   
   It all depends on what degree of flexibility you need. In conclusion: the reason one is slower than the other has very little (and sometimes nothing) to do with the underlying algorithm and everything to do with the shading subsystem.
   On a typical frame of any production I worked on in the last 15 years, shading was 90% of the render time. Go figure.
   
   
   Beers,
   
   Moritz

#49

Normally I wouldn’t point out something like this, but I’m glad I’m not the only one. I haven’t read all the posts here yet, but a perfect example of not knowing which paragraph to read next is on page two :slight_smile:

nice article though :slight_smile:


#50

Unfortunately the CGSociety article has a horrific layout and is really incomplete.

Here is a link to the presentation which some of you will find interesting:

http://graphics.pixar.com/library/PointBasedColorBleeding/SlidesFromAnnecy09.pdf

Enjoy


#51

This is very true. And you can basically write an advanced raytracer by yourself just by following Matt and Greg’s book: http://www.pbrt.org/.


#52

Hey Moritz,
We can argue about why PRMan is or isn’t chosen in studios until the cows, and several other farm animals, come home, and we can do the same about shading languages and deployment. But all I was addressing, and the point still stands, is that the article and Pixar’s claims are ridiculous because of the biased context.

Nothing in the article discusses shading, shading languages, deployment or development issues or anything similar.
All it does is pimp point clouds for their own sake and for their use in look-up-heavy processes like bleeding and ambocc.
Limiting the discussion to those subjects, their time comparisons are ludicrous and inaccurate, since they fail to mention that the slowness and the differences appear when they compare the times against their own freaking raytracer (an embarrassing implementation pretty much devoid of any intelligent acceleration structure).

Bring into the arena decent (or even crappy) raytracers that don’t suck, even the ones riding on chiefly-REYES products like 3Delight, and you know those numbers are simply wrong.

The same goes for the claims of comparable accuracy, feedback and lack of artifacts. It might hold true when they compare internally, but when you bring in other vendors, the artifacts and flickering have been gone for quite a while now.
Oftentimes even PRMan heavyweights ended up running their ambocc bakes through a raytracer because of how unwieldy, slow and memory-hungry point caches were, not to mention the ridiculous network and wrangling impact, when you compare the results to what a decent raytracer can do in a fraction of the time by simply recomputing.
Comparable accuracy on large, complex sets with highly variable density that aren’t LODed like crazy is absolutely not there; raytracing will run circles around point-based, with its splotches and missing areas that require multi-frame fix-ups all the time.

Last but not least, the claim that point-based techniques usher in a new era of things previously impossible in film production is pure, unadulterated BS. Several flavours of raytracing have been used for years to obtain stellar ambocc, irradiance caching and SSS, and they remain in use today even after point clouds have been massively adopted; and even now that point clouds are available and accessible, it’s not uncommon to trace them from something when other building methods aren’t viable.

So while all the points you bring up are valid in certain contexts, in the context of the article they are really irrelevant, and Pixar’s claims remain highly artificial and conveniently forgetful about their competition having far exceeded their technologies and results for quite a few years now.

And yes, AL (and consequently Guardians and all the movies we’ve worked on and the one we’re working on) is a PRMan bastion for many reasons, most good, some legacy-ish, like everywhere. It doesn’t mean that everybody using it feels great about it though, or about having to write several GBs worth of maps to disk and having to optimize the hell out of a step in a multi-staged process just to get a freaking ambocc pass out that would render in minutes on a less obsolescent engine (and I’m not talking about AL specifically here, because I can’t; this is what you’ll hear from a large number of TDs in many different, large, PRMan-friendly shops).

But then, I don’t render much here, and my opinions do not represent those of my employer, so don’t conflate my personal rants, written for fun, with my employer’s choices; you’d find quite some divergence :slight_smile:

AL’s view most likely differs; mine is that PRMan and point-based techniques aren’t always the holy grail. They are often a blunted screwdriver handed to you when what you actually have to deal with is a nail in the wall. They remain great when you’re dealing with screws, but we don’t deal with just screws, whatever the REYES fedayeen would like me to buy into :slight_smile:


#53

In the article about Avatar in Cinefex #120, they talked about a new technique in RenderMan, called “Spherical Harmonics”, to replace the known occlusion and color bleeding techniques. Does it have anything to do with point rendering? What’s the difference?


#54

The paper http://graphics.pixar.com/library/PointBasedColorBleeding/ explains how spherical harmonics are used in this point-based technique. SH are also used to optimize IBL, among other areas of 3D CG.
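
For anyone wondering what that means in practice: spherical harmonics $Y_l^m$ are just an orthonormal set of basis functions over directions, so a direction-dependent quantity such as incoming radiance (or visibility) can be projected onto the first few bands and stored as a small set of coefficients per point:

$$L(\omega) \;\approx\; \sum_{l=0}^{n}\sum_{m=-l}^{l} c_l^m \, Y_l^m(\omega), \qquad c_l^m = \int_{S^2} L(\omega)\, Y_l^m(\omega)\, \mathrm{d}\omega$$

As far as I remember, that is how the point-based color bleeding approach stores the aggregate radiance of point clusters in its octree, which is what keeps the gather at each shading point cheap.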


#55

OK, so spherical harmonics are not really a rendering technique, but rather a set of mathematical basis functions used by that rendering technique, which also uses point clouds.


#56

That is the problem: there are many scene arrangements where it is impossible to have all scene elements in one single camera viewpoint, or where you get a very uneven distribution of coverage. The technical memo doesn’t mention whether it is possible to mix point clouds from different viewpoints to get better coverage. Technically I think it should be possible, but each additional viewpoint will increase render time and storage space substantially, and it also makes the pre-processing more difficult. This is the major drawback, since this obviously doesn’t happen automatically; the user has to take care of it.


#57

When a scene is too large to compute a single point cloud from, I’ve found that it is usually best to just render the point cloud every frame from the shot camera. This of course depends on scene complexity and will likely require a workaround for combining point cloud maps.
I’m not sure you can blame the renderer for this, because every renderer has its pitfalls.


#58

Great post Moritz.
I have to add that using RMS + RPS has been more than enough for us. We are not rendering a major motion picture, but we have rendered very complex scenes with mostly out-of-the-box tools.
Which just goes to show that you don’t have to be a coding genius to take advantage of REYES.


#59

I wish I could agree, and trust me, I’m as picky as you can get about rendering efficiency, but our experience with the software for over two years now has been quite the opposite of yours.
One of the main differences with REYES for us has been reliability, and that has saved us massive headaches.


#60

The fact is, while raytracers are great, point clouds scale better than raytracing. If you have a scene with blurry reflections, motion blur, color bounce and displacements, nothing will match the speed of micropolygons and point clouds.
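
For context on why the speed claim holds: in the beauty pass the expensive gathering is replaced by lookups into a pre-baked point cloud. In PRMan’s RSL that read-back looks roughly like the sketch below (parameter names follow Pixar’s application note; the shader name and defaults are illustrative, not tuned):

```
/* Beauty/read pass: approximate colour bleeding from a baked point cloud
   instead of firing gather rays. Sketch only. */
surface read_bounce(
    string ptcname = "direct.ptc";
    float maxsolidangle = 0.05)   /* main speed/quality knob for point-based lookups */
{
    normal Nn = faceforward(normalize(N), I);

    /* direct lighting as usual */
    color direct = 0;
    illuminance(P, Nn, PI/2) {
        direct += Cl * (normalize(L) . Nn);
    }

    /* one bounce of indirect diffuse, resolved from the point cloud */
    color bounce = 0;
    if (ptcname != "")
        bounce = indirectdiffuse(P, Nn, 0,
                                 "pointbased", 1,
                                 "filename", ptcname,
                                 "maxsolidangle", maxsolidangle);

    Ci = Os * Cs * (direct + bounce);
    Oi = Os;
}
```

Nothing is ray-traced for the bounce, so blurry reflections, motion blur and heavy displacement in the scene don’t make the lookup any more expensive; the cost moved to the bake pass.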