PIXAR :: Technology Focus


#92

“Because of the extreme complexity of certain shots these point clouds can still be over a whopping 300 gigabytes, and generating the same effect on these types of datasets is just not possible with ray tracing or radiosity methods, because of the higher memory cost.”

Wow, I didn’t realize that what was done to create Speed Racer, Cloudy with a Chance of Meatballs, Tron Legacy, Real Steel and Green Lantern was “impossible”, much less “not viable”. These all had full-CG and/or photoreal, feature-quality rendering with radiosity and no point clouds. They also didn’t need terabytes of extra drive space, hundreds of man-hours debugging point clouds, or more, higher-priced labor, all for a lower-quality result. Point clouds and ambient occlusion cannot match raytracing’s accuracy, and the difference would be visible to anyone shown side-by-side examples.

You may think it’s impossible after listening to RenderMan people, and that’s because in one context it is true: using raytracing for everything in RenderMan is not the least bit viable. The last time I tested it, rendering the exact same Maya scene in Mental Ray and RenderMan, the score was Mental Ray 4 seconds, RenderMan 17 … wait for it … minutes. A plain Lambert-shaded character with 12 raytraced shadow lights, no soft shadows, low quality settings, 1024x768. Apparently the latest version of RenderMan is 4 times faster at raytracing, but 4 times faster than 17 x 60 seconds is still about 255 seconds against 4 seconds in Mental Ray, so it has a lot further to go.
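Just to spell out the arithmetic behind that last sentence, here’s a quick back-of-the-envelope check in Python. The only inputs are my own measured times above and the quoted “4x faster” figure, so treat it as a rough sanity check, not a benchmark:

```python
# Rough sanity check of the speedup math above. Inputs are my measured times
# (4 s in Mental Ray, 17 min in RenderMan) plus the quoted "4x faster" claim
# for the newer RenderMan release; nothing here is an official benchmark.
mental_ray_s = 4                 # measured render time, seconds
renderman_s = 17 * 60            # measured render time, 17 minutes in seconds
claimed_speedup = 4              # "4 times faster at raytracing"

new_renderman_s = renderman_s / claimed_speedup
print(f"Estimated new RenderMan time: {new_renderman_s:.0f} s")                 # ~255 s
print(f"Still ~{new_renderman_s / mental_ray_s:.0f}x slower than Mental Ray")   # ~64x
```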

RenderMan definitely has its strengths; its handling of tons of geo and textures, displacement and motion blur is hard to beat. But the photorealism is only ‘good enough’ to fool the average movie-goer, some of whom thought Beowulf had live actors. Just look at stills from the latest Pixar movie: do they really look like real miniatures? Of course not, and Pixar isn’t worried about that. They’re more interested in the overall look, and their films do look great, and maybe that’s why they’ve put reasonably fast raytracing on the back burner.


#93

Sorry, but a Lambert-shaded character has nothing to do with a production-like render benchmark.


#94

Pixar Pixar Pixar Pixar Maya Maya Maya Renderman Renderman Renderman Pixar Pixar Pixar Pixar Maya Maya Maya Renderman Renderman Renderman Pixar Pixar Pixar Pixar Maya Maya Maya Renderman Renderman Renderman…

Pixar is awesome
Pixar is amazing
Pixar is sublime
Pixar is God
Pixar is Goddess
Pixar is everything

Without Pixar the world would split in two. Without Pixar hungry people couldn’t find anything to eat. Without Pixar there wouldn’t be peace in the world. Without Pixar there wouldn’t be time & space & quantum physics.

And remember, ALL the production articles on the internet say the same thing over and over: “that sort of huge animation production is not for the faint of heart”.

LOL


#95

Weta is using PantaRay with RenderMan, with PantaRay handling the raytracing.
http://www.fxguide.com/featured/tintin-weta-goes-animated/

Would it have been possible to render Avatar, Tintin and Rango with a 100% raytracer for the same time and money?


#96

You can’t get mocap equipment without Andy Serkis packed in with it. And remember, VFX is not for the faint of heart!


#97

To answer that, you’d have to know which parts of the Rango shots, for example, were raytraced and how much of the scene was visible to the rays. Also, how complex was the REYES-generated geometry (and the shading effects), so you know how much you actually have to trace. I’m not sure anyone will answer, though; you’d probably get the same answer as for Avatar, that it would not have been possible to trace all the stuff REYES needed to render. Would be great to get some more in-depth information here. :wink:


#98

I was testing raytraced shadow speed, not overall speed, so I didn’t use textures. Personally I think comparing overall speeds when one renderer traces and the other does not is pointless, unless the results look identical. Speaking of which, the time it takes to render shadow maps and point clouds (and re-render them if anything changes or is broken) is almost never included in these comparisons. It was a feature-film production model with half a million polygons, bigger and more complex than a human, if that makes any difference. I also noted that in a (production) scene with several spotlights, switching just one of them from shadow map to raytraced shadows would double the render time.

Nobody thinks that RenderMan (and by that I mean PRMan; RenderMan itself is not software, only a specification) raytraces as fast as the others. It’s not designed from the ground up to trace as the others are, and as far as I can tell they are so biased against tracing that there’s no way it’s a priority for them.

Pixar has created some amazing technology: motion blur that renders about as fast as no motion blur, and point clouds that got them an Emmy. Why they throw out the baby with the bathwater when it comes to raytracing is beyond me, but maybe it really is because they don’t have the need for, or the experience with, live-action rendering, and don’t care whether their work is photoreal.

The reason motion blur is so fast in PRMan is that anything that will be blurry is dropped in quality at render time. If you’re willing to lose a little more realism, it can even render faster with motion blur than without. So why on earth do they just say “raytracing takes too long to evaluate shaders multiple times and needs too much memory” when they could do the same thing for traced rays - use low-res geo and textures, no displacement or subdivision, and ignore everything in the shaders except diffuse and maybe spec/reflection - and end up with a better-looking render?
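To make that concrete, here’s a rough sketch (in Python, purely for illustration) of the kind of render-time LOD choice I mean. Every function name and threshold here is hypothetical, not something PRMan or any shipping raytracer actually exposes:

```python
# Hypothetical sketch of render-time LOD for traced/blurred objects: the same
# "drop quality where nobody will see it" trade PRMan makes for motion blur,
# applied to raytracing. All names and thresholds are made up for illustration.

def choose_lod(shutter_blur_pixels: float, is_secondary_ray: bool):
    """Pick (geometry level, shader channels) for one object in one context."""
    if is_secondary_ray or shutter_blur_pixels > 8.0:
        # Heavily blurred, or only seen via reflection/GI rays:
        # low-res proxy geo, no displacement or subdivision, diffuse + spec only.
        return "proxy_mesh", ["diffuse", "specular"]
    if shutter_blur_pixels > 2.0:
        # Moderately blurred: keep subdivision, drop displacement and bump.
        return "subdiv_no_displacement", ["diffuse", "specular", "reflection"]
    # Sharp and directly visible: full geometry and the full shader.
    return "full_displaced", ["diffuse", "specular", "reflection", "bump"]

# Example: an object smeared across 12 pixels of blur gets the cheap path.
print(choose_lod(12.0, is_secondary_ray=False))   # ('proxy_mesh', ['diffuse', 'specular'])
```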


#99

You can’t just ignore stuff when using raytracing… You can’t just ‘drop’ displacement, for example. The whole point of using raytracing is that it’s more detailed.


#100

They don’t use PRMan afaik; I know ILM does not, they use their own implementation of the RenderMan spec and their own raytrace code. When you consider the downsides of ambient occlusion and point clouds, yes, it’s quite possible it could be done as fast in a raytracer as in PRMan. But fast & cheap are not the greatest things to judge renders by :slight_smile: Raytraced images would look far more real. These look like paintings by comparison:
http://images3.wikia.nocookie.net/jamescameronsavatar/images/5/5f/Valkyrie.jpg
http://www.zastavki.com/pictures/1280x800/2009/Movies_Movies_A_Avatar_ship_019167_.jpg

If the dozens of brilliant people at Weta or ILM decided to use a raytracer, there’s no way they’d fail at it. RenderMan became the established tool because machines used to be slow and low on RAM, but that’s all changed. When computers double in speed and RAM yet again, will they still be saying raytracing is too slow?


#101

Actually the point of raytracing is that it’s more accurate. It also automates a lot of things.

Of course you can ignore stuff. When a ray hits a surface, there are many channels to evaluate, right? Diffuse, spec/reflection, textures, bump, etc. There’s no law of physics that forces shaders to evaluate all of them all the time. One show I worked on had a reflection shader that explicitly ignored bump; it saved lots of render time, and the difference in look was indistinguishable to the human eye. Another show had a shader that used different HDRs depending on whether or not the surface was in shadow. There’s even off-the-shelf software that comes with enough tools to do a few things like that.

Don’t forget that this is software, it can be written to do whatever you want.
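Here’s a minimal sketch of what I mean, assuming a made-up shading callback rather than any real renderer’s API; the point is just that a shader can branch on what kind of ray arrived:

```python
# Toy example of ray-type-aware shading: evaluate fewer channels when the
# surface is only seen indirectly. The Hit fields and ray types are invented
# placeholders, not any real renderer's interface.
from dataclasses import dataclass

@dataclass
class Hit:
    diffuse: float
    specular: float
    bump_detail: float   # the "expensive" channel we sometimes skip

def shade(hit: Hit, ray_type: str) -> float:
    value = hit.diffuse                             # diffuse is always evaluated
    if ray_type == "camera":
        value += hit.specular + hit.bump_detail     # full quality, seen directly
    elif ray_type == "reflection":
        value += hit.specular                       # ignore bump, like the show I mentioned
    # shadow rays fall through: only the cheap diffuse/opacity term is used
    return value

print(shade(Hit(0.5, 0.3, 0.1), "camera"))      # 0.9
print(shade(Hit(0.5, 0.3, 0.1), "reflection"))  # 0.8
```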


#102

Yes, sorry, that’s what I meant.


#103

When I say Avatar, I mean 20-50 detailed Na’vi characters standing in an ultra-detailed forest, with fog and smoke and fire and DOF and motion blur.
When I say Rango, I’m talking about 20 furry characters with detailed fur and feathers in close-ups, with motion blur and DOF.
There is no way a raytracer could render this in two years. I read the paper from Alice in Wonderland where they had problems rendering normal furry characters, and I watched Green Lantern and saw how detailed alien skin looks in a raytracer. Was that displacement or a normal map from the ’90s?


#104

They used displacement for Green Lantern (and to say you can’t render displacement in a raytracer like in REYES is not true; you need more RAM, but the quality is the same).
Alice, on the other hand, shows what’s possible with a raytracer and how this tech can help simplify certain things, and hair renders great with the renderer they used.
It would be great to know what was raytraced in Rango; I think they used it a lot. The reflections look awesome in the movie, same for all the refractive stuff. The question is, did they use RenderMan for this or Mental Ray, for example? The driller from Transformers 3 was rendered with Mental Ray, so I think they could have used the same mixture for Rango?


#105

Again, those were not done in PRMan (and even if they had been, it would have been very heavily customized) but in software written by their own people using the RenderMan spec, so it’s not really fair to compare them to off-the-shelf raytracers. Also, all those elements were most likely not rendered at once, except for crowds, and raytracers are capable of rendering billions of polygons anyway. I’m actually opposed to raytracing hair; I don’t see much improvement and it does take a great deal longer to render - but hair is almost always rendered separately too. So at worst, raytracing would require you to render a few more things separately, and there’s nothing wrong with doing things a little differently to get a better look. It certainly doesn’t mean that it’s “impossible” or “not viable”!

Speaking of rendering separate passes, I hear from one former lighter at Pixar that they don’t! Apparently rendering everything in-camera is the way they’ve always done it, and they’ve stuck to their old ways. If you get a note that one light is too bright, re-render the whole shot :eek: If that ain’t true I’d love to be corrected! But it might further explain their bias against raytracing.

I’m pretty sure that Rango was all RenderMan; it has the same flat & lifted look, and mixing two different renderers is really problematic. All the glass looked extremely good, but I can tell you that is because of ILM’s brilliant people with decades of experience, and I’m sure compositing wizardry was involved too.


#106

What’s a ‘driller’? Can this be viewed on the web? They do occasionally use MR there, but I’d be surprised if they used it in the feature.


#107

That’s what I was told at a recent Pixar masterclass. It’s not surprising given the amount of control they have over lighting, and they mentioned that render time was never much of a problem despite having a fairly smallish renderfarm (compared to other big studios).


#108

ILM has been using MR for a long time, especially for the Transformers movies.
The driller is the most complex asset, the tentacle robot thing. I don’t know how much is prerendered here, like point-based GI (or baked self-occlusion), and I’m sure they used some optimizations like environment sampling, max ray length and things like that (see the sketch after the images below).

http://1.bp.blogspot.com/-J-_7MSiudp8/TrL_YorhjeI/AAAAAAAALh4/ie5UXxHzkzM/s1600/Dotm-driller-film-hudsontower-1.jpg
http://images1.wikia.nocookie.net/__cb20110713125516/transformers/images/thumb/b/b3/Dotm-driller-film-hudsontower-2.png/640px-Dotm-driller-film-hudsontower-2.png
http://fc06.deviantart.net/fs70/f/2011/120/5/d/driller_poster_tf3_by_starkilleroflegion-d3f9ylf.jpg
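
For anyone unfamiliar with the “max ray length” trick mentioned above, here’s a toy occlusion tracer in Python that clamps rays to a short distance so far-away geometry never matters. The scene lookup is a placeholder callback and none of this reflects ILM’s actual setup:

```python
# Toy illustration of the "max ray length" optimization: occlusion/GI rays are
# clamped to a short distance, so distant geometry never has to be intersected
# (or even loaded). scene_hit_fn is a placeholder for a real intersection test.
import math, random

def random_hemisphere_direction(normal):
    """Direction in the hemisphere around `normal`, via rejection sampling."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in v))
        if 1e-6 < length <= 1.0:
            break
    v = [x / length for x in v]
    if sum(a * b for a, b in zip(v, normal)) < 0.0:   # flip into the hemisphere
        v = [-x for x in v]
    return v

def occlusion(point, normal, scene_hit_fn, max_ray_length=2.0, samples=64):
    """Fraction of hemisphere rays that hit something closer than max_ray_length."""
    hits = 0
    for _ in range(samples):
        direction = random_hemisphere_direction(normal)
        t = scene_hit_fn(point, direction)    # distance to first hit, or None
        if t is not None and t < max_ray_length:
            hits += 1
    return hits / samples
```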

guccione, have you ever used a raytracer? It sounds like you have no experience with this kind of renderer.

As for Pixar’s render times, if what you hear around the net is true, then they are high, but they have a big renderfarm, so that’s not a problem at all.


#109

At the Annecy Festival this year they showed a chart of average render times from their movies.
If I remember correctly, the lowest was The Incredibles, clocking in at around 7 hours (or 5, it was a while ago :D), and the highest was Cars (this was before Cars 2), with an average render time of 15 hours a frame.


#110

That’s a shame; by making post-processing more difficult or impossible, you waste render time. The render times may be low, but as I was saying before, they put a huge amount of time and effort into reducing them (fast render times are only worth so much effort; at a certain point it becomes self-defeating), and the figures usually don’t include the time it takes to bake. They also don’t need to worry about realism, since they’re not trying to match live action.


#111

I was under the impression that only Kim Libreri’s projects at ILM used MR, like Poseidon. I’m sure I would have heard if Transformers used MR! Do you have any links? I’d love to know more, like how they deal with motion blur.
Heh, I have a lot more experience with raytracers; they’re what I learned on. What made you think that? I’m the one saying raytracing is not as slow as RenderMan people will tell you, and that it definitely looks more real! :slight_smile: