PIXAR :: Technology Focus


#100

They don’t use PRMan afaik; I know ILM does not. They use their own implementation of the RenderMan spec and their own raytrace code. When you consider the downsides of ambient occlusion and point clouds, yes, it’s quite possible it could be done as fast in a raytracer as in PRMan. But fast & cheap are not the greatest things to judge renders by :slight_smile: Raytraced images would look far more real. These look like paintings by comparison:
http://images3.wikia.nocookie.net/jamescameronsavatar/images/5/5f/Valkyrie.jpg
http://www.zastavki.com/pictures/1280x800/2009/Movies_Movies_A_Avatar_ship_019167_.jpg

If the dozens of brilliant people at Weta or ILM decided to use a raytracer, there’s no way they’d fail at it. RenderMan became the established tool because machines used to be slow and low on RAM, but that’s all changed. When computers double in speed and RAM yet again, will they still be saying raytracing is too slow?


#101

Actually the point of raytracing is that it’s more accurate. It also automates a lot of things.

Of course you can ignore stuff. When a ray hits a surface, there are many channels to evaluate, right? Diffuse, spec/reflection, textures, bump, etc. There’s no law of physics that forces shaders to evaluate all of them all the time. One show I worked on had a reflection shader that explicitly ignored bump; it saved lots of render time, and the difference in look was indistinguishable to the human eye. Another show had a shader that used different HDRs depending on whether or not it was in shadow. There’s even off-the-shelf software that ships with enough tools to do a few things like that.
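To illustrate the idea, here’s a toy sketch of ray-type-dependent shading. This is not any real renderer’s or show’s code; every function and surface attribute here is made up for illustration:

```python
def apply_bump(surface, color):
    # Stand-in for an expensive bump/normal perturbation step.
    return [c * surface.get("bump_scale", 1.0) for c in color]

def add_specular(surface, color):
    # Stand-in for a specular/reflection lobe evaluation.
    spec = surface.get("spec", 0.0)
    return [min(1.0, c + spec) for c in color]

def shade(surface, ray_type):
    """Evaluate only the channels this ray type actually needs."""
    color = list(surface["diffuse"])
    if ray_type == "camera":
        # Primary (camera) rays get the full treatment, bump included.
        color = apply_bump(surface, color)
    # Reflection rays skip bump entirely -- the difference is rarely
    # visible in a reflection, and skipping it saves shading time.
    color = add_specular(surface, color)
    return color
```

The point is just that the decision of what to evaluate lives in the shader, not in the physics: a branch on the incoming ray type is all it takes to drop a channel.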

Don’t forget that this is software, it can be written to do whatever you want.


#102

Yes, sorry, that’s what I meant.


#103

When I write Avatar, I mean 20–50 detailed Na’vi characters standing in an ultra-detailed forest, with fog and smoke and fire and DOF and motion blur.
When I write Rango, I am talking about 20 furry characters with detailed fur and feathers in close-ups, with motion blur and DOF.
There is no way a raytracer could render this in 2 years. I read the paper from Alice in Wonderland where they had problems rendering normal furry characters, and I watched Green Lantern and saw how detailed alien skin looks in a raytracer. Was this displacement or a normal map from the ’90s?


#104

They used displacements for Green Lantern (and to say you can’t render displacements in a raytracer like in REYES is not true; you need more RAM, but the quality is the same).
Alice, on the other hand, shows what’s possible with a raytracer and how this tech can help simplify certain things. And hair renders great with the renderer they used.
It would be great to know what’s raytraced in Rango; I think they used it a lot. The reflections look awesome in the movie, same for all the refractive stuff. The question is, did they use RenderMan for this, or Mental Ray for example? The driller from Transformers 3 is rendered with Mental Ray, so I think they could have used the same mixture for Rango?


#105

Again, those were not done in PRMan (even if they had been, it would be very highly customized) but in software written by their own people to the RenderMan spec, so it’s not really fair to compare them to off-the-shelf raytracers. Also, all those elements were most likely not rendered at once (except for crowds), and raytracers are capable of rendering billions of polygons anyway. I’m actually opposed to raytracing hair; I don’t see much improvement, and it does take a great deal longer to render - but hair is almost always rendered separately too. So at worst, raytracing would require you to render a few more things separately, and there’s nothing wrong with doing things a little differently to get a better look. It certainly doesn’t mean that it’s “impossible” or “not viable”!

Speaking of rendering separate passes - I hear from one former lighter at Pixar that they don’t! Apparently rendering everything in-camera is the way they’ve always done it, and they’ve stuck to their old ways. If you get a note that one light is too bright, re-render the whole shot :eek: If that ain’t true I’d love to be corrected! But it might further explain their bias against raytracing.

I’m pretty sure that Rango was all RenderMan; it has the same flat & lifted look, and mixing 2 different renderers is really problematic. All the glass looked extremely good, but I can tell you that is because of ILM’s brilliant people with decades of experience, and I’m sure compositing wizardry is involved too.


#106

What’s a ‘driller’? Can this be viewed on the web? They do occasionally use MR there, but I’d be surprised if they used it in the feature.


#107

That’s what I’ve been told at a recent Pixar masterclass. Not surprising given the amount of control they have over lighting, and they mentioned that render time was never much of a problem despite having a fairly smallish renderfarm (compared to other big studios).


#108

ILM has used MR for a long time, especially for the Transformers movies.
The driller is the most complex asset, the tentacle robot thing. I don’t know how much is prerendered here, like point-based GI (or baked self-occlusion), and I am sure they used some optimizations like env sampling, max ray length and stuff like this.
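A max ray length is one of the simplest of those optimizations to show. Here’s a toy Monte Carlo ambient occlusion sketch (purely illustrative - a made-up scene of spheres, not anyone’s production code) where occluders beyond the cutoff distance are simply ignored, bounding the cost of each occlusion ray:

```python
import math
import random

def sample_hemisphere(n):
    # Uniform direction on the hemisphere around normal n, by
    # rejection-sampling the unit ball and flipping to n's side.
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in v))
        if 0.0 < length <= 1.0:
            v = [x / length for x in v]
            if sum(v[i] * n[i] for i in range(3)) < 0.0:
                v = [-x for x in v]
            return v

def ray_hits_sphere(origin, d, center, radius, t_max):
    # Solve |origin + t*d - center|^2 = radius^2 for normalized d;
    # accept a hit only if it lies within the max ray length t_max.
    oc = [origin[i] - center[i] for i in range(3)]
    b = sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - math.sqrt(disc)
    return 0.0 < t < t_max

def ambient_occlusion(p, n, occluders, max_ray_length=2.0, samples=256):
    """Fraction of the hemisphere at p that is unoccluded; geometry
    farther than max_ray_length never registers as an occluder."""
    random.seed(0)  # deterministic for the example
    hits = 0
    for _ in range(samples):
        d = sample_hemisphere(n)
        if any(ray_hits_sphere(p, d, c, r, max_ray_length)
               for c, r in occluders):
            hits += 1
    return 1.0 - hits / samples
```

A sphere 10 units overhead contributes nothing with a 2-unit max ray length, while the same sphere 1 unit away darkens the point - which is exactly why clamping ray length is such a cheap win for big scenes.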

http://1.bp.blogspot.com/-J-_7MSiudp8/TrL_YorhjeI/AAAAAAAALh4/ie5UXxHzkzM/s1600/Dotm-driller-film-hudsontower-1.jpg
http://images1.wikia.nocookie.net/__cb20110713125516/transformers/images/thumb/b/b3/Dotm-driller-film-hudsontower-2.png/640px-Dotm-driller-film-hudsontower-2.png
http://fc06.deviantart.net/fs70/f/2011/120/5/d/driller_poster_tf3_by_starkilleroflegion-d3f9ylf.jpg

guccione, have you ever used a raytracer? It sounds like you have no experience with this kind of renderer.

As for Pixar render times: if what you hear around the net is true, they are high, but they have a big renderfarm, so that’s not a problem at all.


#109

At the Annecy Festival this year they showed a chart of average render times from their movies.
If I remember correctly, the lowest was The Incredibles, clocking in at around 7 hours (or 5, it’s a long time ago :D), and the highest was Cars (this was before Cars 2), with an average render time of 15 hours a frame.


#110

That’s a shame: by making post-processing more difficult or impossible, you waste render time. The render times may be low, but as I was saying before, they put a huge amount of time and effort into reducing them (fast render times are only worth so much effort; at a certain point it becomes self-defeating), and those numbers usually don’t include the time it takes to bake. They also don’t need to worry about realism, since they’re not trying to match live-action.


#111

I was under the impression that only Kim Libreri’s projects at ILM used MR, like Poseidon. I’m sure I would have heard if Transformers used MR! Do you have any links? I’d love to know more - like how they deal with motion blur.

Heh, I have a lot more experience with raytracers; it’s what I learned on. What made you think that? I’m the one saying raytracing is not as slow as RenderMan people will tell you, and that it definitely looks more real! :slight_smile:


#112

A discussion from 2007:

"You’re correct Bonedaddy, they mixed Mental Ray and PRMan.

The TD on Transformers, Hilmar Koch, held a talk about the VFX at the ‘eDIT 10. Filmmaker’s Festival’ in Frankfurt/Germany this month.

On one slide they showed test renderings comparing the render times between MR and PRMan. They showed GI, area lights and reflection times. For GI the time for MR was a lot lower than PRMan (I think a third or a quarter); on the rest they were even. He mentioned there were issues getting the renders from both to match, especially with displacement. In the end they were using MR for some passes while doing the main work with PRMan."

I stand corrected! Also about PRMan, if this info is accurate - I’d heard from someone who worked on Pirates 2 that it was not. Who knows anymore.


#113

They rendered motion blur for Transformers 3 with MR; it’s raytraced motion blur.
I can’t speak for Transformers 1 in terms of motion blur (the big optimizations came in MR 3.9), but I heard they mixed RenderMan and MR - MR for the reflection stuff.
They also used MR for Episode 2, and Hulk was rendered (partially?) with MR, so they use it a lot more than you think (the soft shadows of Dobby from Harry Potter 2 are another good example).

I ask because it looks like you think a lot of stuff is not possible with a raytracer. Look, for example, at how long BUF has been using MR for their amazing work, or at the Matrix movies, or all the Blue Sky movies (not to mention the first ambient environments rendering, where AO was raytraced before point-based stuff was used) - it has all been raytracing for a while. And I don’t just mean the last 3 years or so, in which raytracing has been used more and more (the whole importance sampling stuff makes a lot possible).
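To give a feel for why importance sampling matters, here is a tiny toy comparison (my own illustration, not from any renderer): estimating the irradiance from a constant sky of radiance 1, whose exact value is pi. Uniform hemisphere sampling converges slowly; cosine-weighted sampling cancels the cos(theta) in the integrand, and for a constant sky every sample lands exactly on the answer:

```python
import math
import random

def uniform_estimate(n, seed=1):
    """Uniform hemisphere sampling: pdf = 1/(2*pi), so each sample
    contributes cos(theta) * 2*pi. For a uniform hemisphere direction,
    cos(theta) is itself uniform in [0, 1)."""
    random.seed(seed)
    return sum(random.random() * 2.0 * math.pi for _ in range(n)) / n

def cosine_estimate(n):
    """Cosine-weighted sampling: pdf = cos(theta)/pi, so the cos(theta)
    in the integrand cancels and every sample contributes exactly pi.
    For a constant sky this estimator has zero variance."""
    return sum(math.pi for _ in range(n)) / n
```

Real skies aren’t constant, of course, but the same principle (putting samples where the integrand is big) is what makes raytraced area lights and environment lighting affordable today.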


#114

Well, you should re-read my posts; I’ve been arguing the exact opposite. The article this thread is from says that a lot is not possible with raytracing. I’ve been arguing the whole time that that’s false.


#115

Sorry, I misunderstood you here. :wink:


#116

An article worth thinking about deeply.


#117

I feel like this is getting off topic, but different studios worked on TF3. The studio I worked at that did some TF3 shots used FumeFX, Krakatoa, and V-Ray.


#118

You can really take that for granted these days!


#119

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.