You can’t have mocap equipment without Andy Serkis packed in with it. And remember, VFX is not for the faint of heart!
PIXAR :: Technology Focus
To answer this you’d have to know which parts of the shots in Rango, for example, are raytraced, and how much of the scene was visible to the rays. Also, how complex was the REYES-generated geometry (and shading effects), so you know how much you’d have to trace. But I’m not sure anyone will answer; perhaps you’d get the same answer as for Avatar, and in that case it would not be possible to trace all the stuff that REYES needs to render. Would be great to get some more in-depth information here.
I was testing raytraced shadow speed, not overall speed, so I didn’t use textures. Personally I think that comparing overall speeds, when one traces and the other does not, is pointless unless they look identical. Speaking of which, the time it takes to render shadow maps and point clouds (and re-render them if anything changes or is broken) is almost never included in the comparisons. It was a feature-film production model with half a million polygons, bigger and more complex than a human, if that makes any difference. I also noted that in a (production) scene with several spotlights, switching just one of them from shadow map to raytrace would double the rendertime.
Nobody thinks that RenderMan (and by that I mean PRMan; RenderMan itself is not software, only a specification) raytraces as fast as others. It’s not designed from the ground up to trace as the others are, and as far as I can tell they are so biased against tracing that there’s no way it’s a priority for them.
Pixar has created some amazing technology, like motion blur that renders about as fast as no motion blur, and point clouds got them an Emmy. Why they throw out the baby with the bathwater when it comes to raytracing is beyond me, but maybe it really is because they don’t have the need for, or experience with, live-action rendering, and don’t care if their work is photoreal.
The reason motion blur is so fast in PRMan is that anything that will be blurry is dropped in quality at rendertime. If you’re willing to lose a little more realism, it can even render faster with motion blur than without. So why on earth do they say “raytracing takes too long to evaluate shaders multiple times and needs too much memory” when they could simply do the same thing - use low-res geo and textures, no displacement or subdivision, and ignore everything in the shaders except diffuse and maybe spec/reflection - and end up with a better-looking render?
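The “drop quality where it’s blurry” idea fits in a few lines. This is a toy cost model of my own invention - the function names and the 1/(1+motion) falloff are made up for illustration, not PRMan’s actual dicing logic:

```python
# Toy model of "blurry things get shaded coarser": the dicing rate
# (shading samples per pixel of surface) shrinks as screen-space motion
# grows, so a fast-moving object costs far fewer shader evaluations.
# The falloff formula is a made-up heuristic, not any renderer's.

def dice_rate(base_rate, motion_pixels):
    # More motion during the shutter -> coarser dicing.
    return base_rate / (1.0 + motion_pixels)

def shading_samples(area_pixels, rate):
    # Total shader calls for a surface covering area_pixels of screen.
    return max(1, int(area_pixels * rate))

static_cost = shading_samples(10000, dice_rate(4.0, 0.0))   # 40000 shader calls
blurred_cost = shading_samples(10000, dice_rate(4.0, 7.0))  # 5000 shader calls
```

The same trade could be made at ray hits: the renderer already knows a secondary ray needs less fidelity than a camera ray, just as it knows a blurred micropolygon does.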
You can’t just ignore stuff when using raytracing… You can’t just ‘drop’ displacement, for example. The whole point of using raytracing is that it’s more detailed.
They don’t use PRMan afaik; I know ILM does not. They use their own implementation of the RenderMan spec and their own raytrace code. When you consider the downsides to ambient occlusion and point clouds, yes, it’s quite possible it could be done as fast in a raytracer as in PRMan. But fast & cheap are not the greatest things to judge renders by.
Raytraced images would look far more real. These look like paintings by comparison:
http://images3.wikia.nocookie.net/jamescameronsavatar/images/5/5f/Valkyrie.jpg
http://www.zastavki.com/pictures/1280x800/2009/Movies_Movies_A_Avatar_ship_019167_.jpg
If the dozens of brilliant people at Weta or ILM decided to use a raytracer, there’s no way they’d fail at it. RenderMan became the established tool because machines used to be slow and low on RAM, but that’s all changed. When computers double in speed and RAM yet again, will they still be saying raytracing is too slow?
Actually the point of raytracing is that it’s more accurate. It also automates a lot of things.
Of course you can ignore stuff. When a ray hits a surface, there are many channels to evaluate, right? Diff, spec/reflection, textures, bump, etc. There’s no law of physics that forces shaders to evaluate all of them all the time. One show that I worked on had a reflection shader that explicitly ignored bump; it saved lots of render time, and the difference in look is indistinguishable to the human eye. Another show had a shader that used different HDRs depending on whether or not it was in shadow. There’s even off-the-shelf software that comes with enough tools to do a few things like that.
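A per-ray-type shader like the ones described above might look something like this sketch. The channel names and values are hypothetical placeholders (chosen as exact binary fractions so the sums are exact), not any production shader:

```python
# Hypothetical shader that evaluates channels selectively per ray type:
# reflection rays skip bump, and only camera rays pay for the spec pass.

def shade(hit, ray_type):
    color = hit["diffuse"]
    if ray_type != "reflection":
        # Bump is invisible in most reflections, so skip it there.
        color += hit["bump_detail"]
    if ray_type == "camera":
        # Only primary rays get the full specular evaluation.
        color += hit["specular"]
    return color

hit = {"diffuse": 0.5, "bump_detail": 0.125, "specular": 0.25}
shade(hit, "camera")      # 0.875 - full quality
shade(hit, "reflection")  # 0.5   - cheapest: bump and spec skipped
shade(hit, "shadow")      # 0.625 - bump kept, spec skipped
```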
Don’t forget that this is software, it can be written to do whatever you want.
When I write Avatar, I mean 20-50 detailed Na’vi characters standing in an ultra-detailed forest, with fog and smoke and fire and DOF and motion blur.
When I write Rango, I am talking about 20 furry characters with detailed fur and feathers in close-ups, with motion blur and DOF.
There is no way that a raytracer could render this in 2 years. I read the paper on Alice in Wonderland, where they had problems rendering normal furry characters. And I watched Green Lantern and saw how detailed alien skin looks in a raytracer. Was that displacement, or a normal map from the '90s?
They used displacements for Green Lantern (and saying you can’t render displacements in a raytracer like in REYES is not true; you need more RAM, but the quality is the same).
Alice, on the other hand, shows what’s possible with a raytracer and how this tech can help simplify certain things. And hair renders great with the renderer they used.
It would be great to know what’s raytraced in Rango; I think they used it a lot. The reflections look awesome in the movie, same for all the refractive stuff. The question is, did they use RenderMan for this, or Mental Ray for example? The Driller from Transformers 3 is rendered with Mental Ray, so I think they could have used the same mixture for Rango?
Again, those were not done in PRMan (even if they had been, it would be very highly customized) but in software written by their own people using the RenderMan spec, so it’s not really fair to compare them to off-the-shelf raytracers. Also, all those elements were most likely not rendered at once, except for crowds, and raytracers are capable of rendering billions of polygons anyway. I’m actually opposed to raytracing hair; I don’t see much improvement and it does take a great deal longer to render - but hair is almost always rendered separately too. So at worst, raytracing would require you to render a few more things separately, but there’s nothing wrong with doing things a little differently to get a better look. And it certainly doesn’t mean that it’s “impossible” or “not viable”!
Speaking of rendering separate passes - I hear from one former lighter at Pixar that they don’t! Apparently rendering everything in-camera is the way they’ve always done it, and they’ve stuck to their old ways. If you get a note that one light is too bright, re-render the whole shot :eek: If that ain’t true I’d love to be corrected! But it might further explain their bias against raytracing.
I’m pretty sure that Rango was all RenderMan; it has the same flat & lifted look, and mixing two different renderers is really problematic. All the glass looked extremely good, but I can tell you that is because of ILM’s brilliant people with decades of experience, and I’m sure that compositing wizardry is involved too.
What’s a ‘Driller’? Can this be viewed on the web? They do occasionally use MR there, but I’d be surprised if they used it in the feature.
That’s what I was told at a recent Pixar masterclass. Not surprising given the amount of control they have over lighting, and they mentioned that render time was never much of a problem despite having a fairly smallish renderfarm (compared to other big studios).
ILM has been using MR for a long time, especially for the Transformers movies.
The Driller is the most complex asset, the tentacle robot thing. I don’t know how much is prerendered here, like point-based GI (or baked self-occlusion), and I’m sure they used some optimizations like environment sampling, max ray length, and stuff like that.
http://1.bp.blogspot.com/-J-_7MSiudp8/TrL_YorhjeI/AAAAAAAALh4/ie5UXxHzkzM/s1600/Dotm-driller-film-hudsontower-1.jpg
http://images1.wikia.nocookie.net/__cb20110713125516/transformers/images/thumb/b/b3/Dotm-driller-film-hudsontower-2.png/640px-Dotm-driller-film-hudsontower-2.png
http://fc06.deviantart.net/fs70/f/2011/120/5/d/driller_poster_tf3_by_starkilleroflegion-d3f9ylf.jpg
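The “max ray length” optimization mentioned a couple of posts up is easy to sketch: occlusion rays are clamped, so geometry beyond the clamp never darkens the point and rays can terminate early. Everything here (the scene, the sphere test, the sample count) is invented for illustration:

```python
import math
import random

# Toy ambient-occlusion estimator with a max-ray-length clamp:
# hits farther than max_dist don't count. The scene is made up.

def occluded(origin, direction, spheres, max_dist):
    for center, radius in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc >= 0.0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-4 < t < max_dist:   # the clamp: ignore distant hits
                return True
    return False

def ambient_occlusion(point, spheres, max_dist, samples=256, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        # Rejection-sample a uniform direction on the upper hemisphere.
        while True:
            d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
            n = math.sqrt(sum(x * x for x in d))
            if 0.0 < n <= 1.0:
                d = [x / n for x in d]
                d[2] = abs(d[2])
                break
        if occluded(point, d, spheres, max_dist):
            hits += 1
    return 1.0 - hits / samples   # 1.0 = fully open sky

scene = [((0.0, 0.0, 2.0), 1.5)]  # a fat sphere hanging overhead
open_sky = ambient_occlusion((0.0, 0.0, 0.0), scene, max_dist=0.4)  # 1.0: sphere is out of reach
darkened = ambient_occlusion((0.0, 0.0, 0.0), scene, max_dist=5.0)  # < 1.0: sphere now counts
```

The nearest point of the sphere is 0.5 units away, so with `max_dist=0.4` no ray can ever hit it, which is exactly why the clamp saves time: those rays never touch the distant geometry at all.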
guccione, have you ever used a raytracer? It sounds like you have no experience with this kind of renderer.
As for Pixar rendertimes: if what you hear around the net is true, then they are high, but they have a big renderfarm, so that’s not a problem at all.
At Annecy Festival this year they showed a chart of average rendertimes from their movies.
If I remember correctly, the lowest was The Incredibles, clocking in at around 7 hours (or 5, it’s a long time ago :D), and the highest was Cars (this was before Cars 2), with an average rendertime of 15 hours a frame.
That’s a shame; by making post-processing more difficult or impossible, you waste render time. The render times may be low, but as I was saying before, they put a huge amount of time and effort into reducing them (fast render times are only worth so much effort; at a certain point it becomes self-defeating), and the figures usually don’t include the time it takes to bake. They also don’t need to worry about realism, since they’re not trying to match live action.
I was under the impression that only Kim Libreri’s projects at ILM used MR, like Poseidon. I’m sure I would have heard if Transformers used MR! Do you have any links? I’d love to know more - like how do they deal with motion blur.
Heh, I have a lot more experience with raytracers, and it’s what I learned on. What made you think that? I’m the one that’s saying raytracing is not as slow as RenderMan people will tell you, and definitely looks more real!
Discussion from 2007:
"You’re correct Bonedaddy, they mixed Mental Ray and PRMan.
The TD on Transformers, Hilmar Koch, held a talk about the VFX at the ‘eDIT 10. Filmmaker’s Festival’ in Frankfurt/Germany this month.
On one slide they showed test renderings comparing the render times between MR and PRMan. They showed GI, area light, and reflection times. For GI the time for MR was a lot lower than PRMan (I think a third or a quarter); on the rest they were even. He mentioned there were issues in getting the renders from both to match, especially with displacement. In the end they were using MR for some passes while doing the main work with PRMan."
I stand corrected! Also about PRMan, if this info is accurate. I’d heard from someone who worked on Pirates 2 that it was not. Who knows anymore.
They rendered motion blur for Transformers 3 with MR; it’s raytraced MB.
Can’t speak for Transformers 1 in terms of MB (the big optimizations came in MR 3.9), but I heard they mixed RenderMan and MR - MR for reflection stuff.
They also used MR for Episode 2, and Hulk was rendered (partially?) with MR, so they use it a lot more than you think (the soft shadows of Dobby from Harry Potter 2 are another good example).
I asked because it looks like you think a lot of stuff is not possible with a raytracer. Look, for example, at how long BUF has been using MR for their amazing work, or at the Matrix movies, or all the Blue Sky movies (not to mention the first Ambient Environments rendering, where AO was raytraced before the point-based stuff was used) - it has all been raytracing for a while. And I’m not even talking about the last 3 years or so, in which raytracing is used more and more (the whole importance-sampling stuff makes a lot possible).
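To make the importance-sampling remark concrete, here is a toy comparison for estimating irradiance E = ∫ cos(θ) dω over the hemisphere (exact answer π, assuming a constant incoming radiance of 1). This is a textbook illustration, not any particular renderer’s code:

```python
import math
import random

# Uniform hemisphere sampling: pdf = 1/(2*pi), and cos(theta) of a
# uniformly chosen direction is itself uniform on [0, 1], so each
# sample contributes cos(theta) / pdf = u * 2*pi.
def uniform_estimate(n, rng):
    return sum(rng.random() * 2.0 * math.pi for _ in range(n)) / n

# Cosine-weighted sampling: pdf = cos(theta)/pi, so the weight cancels
# the integrand and every sample contributes exactly pi - zero variance
# for this (constant-light) integrand.
def cosine_estimate(n, rng):
    return sum(math.pi for _ in range(n)) / n

rng = random.Random(0)
noisy = uniform_estimate(1024, rng)   # hovers around pi, with noise
clean = cosine_estimate(1024, rng)    # pi to machine precision
```

With real lighting the variance doesn’t drop to zero, but the same principle - spend rays where the integrand is big - is what makes raytraced GI increasingly tractable.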
Well, you should re-read my posts; I’ve been arguing the exact opposite. The article that this thread is from is saying that a lot is not possible with raytracing. I’ve been arguing the whole time that that’s false.