Really cool article… but now I want that in MR to render my scenes 
PIXAR :: Technology Focus
Nice article. The paragraphs are all over the show though, you may want to look into that ;).
The displaced mesh is itself baked into the point cloud. The baked points/disks are then used to calculate occlusion using a point-based occlusion algorithm. So you do get occlusion from even finer details of your displaced mesh. Since you’re not calculating displacement again, it just doesn’t incur any additional cost.
Or is the resolution of this cloud much higher than I imagine now?
The resolution of the point cloud depends on the scene’s Shading Rate. Lower shading rates (i.e. better quality) give a denser point cloud, and higher values give a sparser one.
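The gathering step described above can be sketched in a few lines: every baked point carries a position, a normal and a disk area, and occlusion at a shading point is accumulated from the solid angle each disk subtends. This is a minimal, brute-force sketch (no clustering hierarchy, no double-shadowing correction, and the form factor is only an assumed Bunnell-style approximation, not PRMan’s actual implementation):

```python
import math
import numpy as np

def occlusion_at(p, n, points, normals, areas, max_dist=10.0):
    """Approximate ambient occlusion at surface point p (normal n) by
    summing the solid angle subtended by every baked disk in the cloud."""
    occ = 0.0
    for q, nq, a in zip(points, normals, areas):
        v = q - p
        d2 = float(v @ v)
        if d2 < 1e-8 or d2 > max_dist ** 2:
            continue  # skip the point itself and far-away disks
        v_hat = v / math.sqrt(d2)
        cos_r = max(0.0, float(n @ v_hat))    # receiver faces the disk
        cos_e = max(0.0, float(-nq @ v_hat))  # disk faces the receiver
        # disk-to-point form factor approximation (assumed, Bunnell-style)
        occ += (a * cos_r * cos_e) / (math.pi * d2 + a)
    return min(1.0, occ)
```

Since the disks are baked from the already-displaced micropolygons, fine displacement detail shows up in the result without re-running displacement, which is the point Sachin is making above.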
-Sachin
Awesome article, thank you. I wish they’d put this render method into 3ds max. Are you able to use it in Renderman for Maya?
3Delight can do it.
And Houdini’s Mantra also implemented point clouds, in a very similar way to PRMan, about two years ago, with the plus that you can access, modify and create the points directly from Houdini and then use them in Mantra; much easier and more flexible than PRMan. And it is really fast indeed.
Anyway, I think this technique is more or less a standard in most film studios nowadays.
Thanks for the article.
Wish mental ray had some sort of point cloud based GI/SSS 
and I am sick of grainy occlusion from raytraced renderers…
Games these days can render occlusion in real time… but doing the same takes so much time in 3D apps…?
We just need a Max-to-RIB converter. :banghead:
Even XSI got 3delight.
Or, fbx or mdd to Maya.
Not that I don’t appreciate what point-based rendering has been used for, but the way it’s presented here, it sounds like the one true way to results that were otherwise unobtainable and never achieved previously:
“While ray tracing and radiosity can create stunning results, the memory consumption and computational overhead when used in actual production has been extremely high. For production-sized datasets these methods were simply impractical to deploy, if not impossible.”
I guess all of Sony’s CG movies using Arnold, large amounts of work by ILM, most of BUF’s work, quite a few passes in Avatar, and in general the thousands of shots rendered in the last 5 years with Arnold, mental ray and V-Ray that found their way into films never happened then…
Point-based rendering is brilliant because it enables an aging and many-times-patched-over architecture like Pixar’s to do something raytracers have been doing effortlessly for years (the ridiculous cost of rays they refer to is only valid if you stick to PRMan’s less-than-mediocre raytracing). But it’s far from being the only way to approach this problem, or something mutually exclusive with what raytracers with cheap rays (see Arnold) can do pretty damn well, in a fraction of the time, and without all the pains in the ass of point clouds being lobbed around.
Bravo to all involved for having given PRMan its Nth swan song when everybody’s been waiting for it to roll over and die for several years now, but the general tone of the article makes it sound like PRMan’s shortcomings are common to all other engines and technologies available, when they actually aren’t 
Blue Sky’s in-house renderer is also a raytracer. However, the fundamental algorithm of raytraced global illumination, whether Monte Carlo, photon mapping, bidirectional, etc., is slow but physically accurate.
The point-based technique presented here and implemented in RenderMan is a hack, but a very clever hack for doing GI. This technique can be real-time and cheap, as proven by Michael’s GPU Gems 2 implementation, which is something raytracing finds very difficult to accomplish on current hardware.
I completely agree with Raffael.
Having used point-based rendering for a while now, I can tell you this is not an El Dorado and it will not solve your problems.
They claim that point-based rendering is a lot faster than raytracing (typical marketing bullshit). They should mention that this is only true in PRMan, because its raytracing algorithms are very, very slow compared to mental ray or V-Ray, for instance. So they need to rely on a technique like point-based to get things rendered in a decent time (and compared to MR or V-Ray it is still slower!).
The result you get is only an approximation of what raytracing can produce; sometimes it’s not as accurate, and you can really see it.
I didn’t find it very scalable either: computing occlusion on a big set, you end up with millions of points in your point cloud, and that slows down your render times drastically.
And this is a two-pass approach: first write the point cloud representing your scene, then process the point cloud, then read it back into your final render. The setup isn’t that easy (even though nice implementations such as 3delight in XSI make it transparent) and you need to store quite large files on disk. That also causes some network issues, and sometimes switching back to raytracing, even in PRMan, gets you a faster render.
So yes, it’s very disappointing! Especially when the same complex scene renders in less than a minute in MR, while you need to wait at least 2 minutes just to see the first bucket start to render!
I have been using MR and PRMan side by side on many movies now (e.g. Harry Potter, The Dark Knight, Clash, Where the Wild Things Are). You will always have issues depending on what you do, but for a couple of years now MR has been getting easier and easier to use (and could be even easier, I agree). I will not bother to use PRMan anymore, and I know quite a lot of people think like me.
So it’s compensated for by having large render farms, eh? That’s amazing; I believed this was the holy grail, but you have to learn RenderMan. What a disappointment to hear…
Slow, faked and sometimes unrealistic GI, just like the two pros said?
I agree with Raf and Saturn. The point-based method has its upsides (flicker-free GI and bounce, relatively quick bake time, lower memory consumption) but, being an approximation, it definitely doesn’t have the same fidelity as proper raytraced AO or light bounce. It can also look messy on dense, highly detailed surfaces with interpenetrating geometry (as in, the AO starts looking too black or dirty/smudgy and you see more light leaks from bounce, so more tweaking is necessary). Definitely not the most user-friendly way to light.
On the other hand, it’s the only way I’ve been able to do screen-filling renders with displacement and motion blur, something that would always kill V-Ray or MR unless the geo was simple. Correct me if things have changed, though; it’s been a while since I used either.
Wiro
I can only speak for V-Ray, but it is extremely capable of handling hugely complex geometry. Admittedly the majority of my experience is with raw (rather than displaced) meshes, so it may be completely irrelevant to what you’re talking about doing in PRMan…
Hopefully someone more knowledgeable than me can throw in their two cents either way, but even with a very modest setup it consistently knocks my socks off with what it can handle.
[http://vimeo.com/6312200](http://vimeo.com/6312200)
That was true maybe 5 years ago. It’s not true anymore, especially since we switched to 64-bit, even for hair. For instance, on my last job (compare the meerkat: http://www.vimeo.com/10676495) we had crowds of furry animals, final gathering, SSS, raytracing, displacement and 3D motion blur, and the average time for 1920x1080 was around 10-12 min a frame. Not sure if that’s slow or not for you, but for me it’s pretty quick (even if I would like it even quicker).
“Faster computation times - 4x to 10x speedup versus ray tracing large scenes”
quite a lot of variation in the speed of raytracers out there
Having so many guys from Animal Logic around: first off, you’re doing a GRAND job! AL has always been a truly inspirational company! Secondly, reading your posts made me wonder which render engine you used primarily on “Legend of the Guardians”?
I always assumed you guys used a RenderMan-compliant renderer on these projects, probably because of MayaMan, but it seems you rather like raytracers 
Cheers