If the dozens of brilliant people at Weta or ILM decided to use a raytracer, there's no way they'd fail at it. RenderMan became the established tool because machines used to be slow and low on RAM, but that's all changed. When computers double in speed and RAM yet again, will they still be saying raytracing is too slow?
Actually the point of raytracing is that it's more accurate. It also automates a lot of things.
Of course you can ignore stuff. When a ray hits a surface, there are many channels to evaluate, right? Diffuse, spec/reflection, textures, bump, etc. There's no law of physics that forces shaders to evaluate all of them all the time. One show I worked on had a reflection shader that explicitly ignored bump; it saved lots of render time, and the difference in look was indistinguishable to the human eye. Another show had a shader that used different HDRs depending on whether or not it was in shadow. There's even off-the-shelf software that comes with enough tools to do a few things like that.
Don't forget that this is software; it can be written to do whatever you want.
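To make that concrete, here's a rough Python sketch of the kind of shortcuts I'm talking about. It's not from any real production shader; the ray types, channel lists and HDR file names are all made up for illustration.

```python
# Toy sketch of per-ray-type shortcuts; everything here is invented.

def sample_environment(hdr_name, direction):
    # Stand-in for an HDR environment lookup.
    return f"colour from {hdr_name} along {direction}"

def shade(ray_type, in_shadow, direction):
    """Evaluate only the channels a given ray actually needs."""
    result = {}

    # Camera rays get the full treatment: diffuse, specular, bump, textures.
    if ray_type == "camera":
        result["channels"] = ["diffuse", "specular", "bump", "textures"]
    elif ray_type == "reflection":
        # Reflection rays skip bump entirely -- the visual difference is
        # negligible and the extra lookups are saved.
        result["channels"] = ["diffuse", "specular"]

    # Swap environment HDRs depending on whether the point is shadowed,
    # rather than lighting everything from one map.
    hdr = "shadow_env.hdr" if in_shadow else "sunlit_env.hdr"
    result["environment"] = sample_environment(hdr, direction)
    return result

print(shade("reflection", in_shadow=True, direction=(0.0, 1.0, 0.0)))
```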
When I write "Avatar" I mean 20-50 detailed Na'vi characters standing in an ultra-detailed forest, with fog and smoke and fire and DOF and motion blur.
When I write "Rango" I'm talking about 20 furry characters with detailed fur and feathers in close-ups, with motion blur and DOF.
There is no way a raytracer could render this in two years. I read the paper from Alice in Wonderland, where they had problems rendering normal furry characters. And I watched Green Lantern and saw how detailed alien skin looks in a raytracer. Was that displacement, or a normal map from the '90s?
They used displacement for Green Lantern (and saying you can't render displacement in a raytracer like in REYES isn't true; you need more RAM, but the quality is the same).
Alice, on the other hand, shows what's possible with a raytracer and how this tech can help simplify certain things. And hair renders great with the renderer they used.
It would be great to know what's raytraced in Rango; I think they used it a lot. The reflections look awesome in the movie, same for all the refractive stuff. The question is, did they use RenderMan for this or, for example, Mental Ray? The Driller from Transformers 3 was rendered with Mental Ray, so could they have used the same mixture for Rango?
Again, those were not done in PRMan (even if they had been, it would be very highly customized) but in software written by their own people using the RenderMan spec, so it's not really fair to compare them to off-the-shelf raytracers. Also, all those elements were most likely not rendered at once, except for crowds, and raytracers are capable of rendering billions of polygons anyway. I'm actually opposed to raytracing hair; I don't see much improvement and it does take a great deal longer to render - but hair is almost always rendered separately too. So at worst, raytracing would require you to render a few more things separately, but there's nothing wrong with doing things a little differently to get a better look. And it certainly doesn't mean that it's "impossible" or "not viable"!
Speaking of rendering separate passes - I heard from one former lighter at Pixar that they don't! Apparently rendering everything in-camera is the way they've always done it, and they've stuck to their old ways. If you get a note that one light is too bright, re-render the whole shot :eek: If that ain't true I'd love to be corrected! But it might further explain their bias against raytracing.
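For anyone wondering why separate passes matter so much, here's a toy Python sketch of the idea (the light names and pixel values are invented, this isn't anyone's actual pipeline): if every light goes into its own pass, a "too bright" note becomes a comp tweak instead of a full re-render.

```python
# Pretend these are the rendered contributions of each light for one pixel.
light_passes = {
    "key": (0.6, 0.5, 0.4),
    "fill": (0.1, 0.1, 0.15),
    "rim": (0.2, 0.2, 0.2),
}

def composite(passes, gains):
    """Sum the per-light passes, scaling each one by its gain."""
    r = sum(gains.get(name, 1.0) * c[0] for name, c in passes.items())
    g = sum(gains.get(name, 1.0) * c[1] for name, c in passes.items())
    b = sum(gains.get(name, 1.0) * c[2] for name, c in passes.items())
    return (r, g, b)

# Note comes back: "key is too hot". Dial it down in comp, no re-render needed.
print(composite(light_passes, {"key": 0.7}))
```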
I'm pretty sure that Rango was all RenderMan; it has the same flat & lifted look, and mixing two different renderers is really problematic. All the glass looked extremely good, but I can tell you that is because of ILM's brilliant people with decades of experience, and I'm sure compositing wizardry is involved too.
That's what I've been told at a recent Pixar masterclass. Not surprising given the amount of control they have over lighting, and they mentioned that render time was never much of a problem despite having a fairly smallish renderfarm (compared to other big studios).
ILM has been using MR for a long time, especially for the Transformers movies.
The Driller is the most complex asset, the tentacle robot thing. I don't know how much is precomputed here, like point-based GI (or baked self-occlusion), and I'm sure they used some optimizations like environment sampling, max ray length and things like that.
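Just to show what a "max ray length" type optimization buys you, here's a tiny Python sketch. It's purely illustrative; the hit distances and clamp value are made up, not anything from an actual show.

```python
# Occlusion rays are clamped to a maximum distance, so distant geometry is
# ignored and rays can terminate early instead of traversing the whole scene.

def trace_occlusion(hit_distance, max_ray_length):
    """Return 1.0 if the ray is blocked within max_ray_length, else 0.0."""
    if hit_distance is not None and hit_distance <= max_ray_length:
        return 1.0
    return 0.0

def ambient_occlusion(sample_hits, max_ray_length=5.0):
    """Average occlusion over a set of hemisphere samples.

    sample_hits is a list of hit distances (None = the ray escaped)."""
    occluded = sum(trace_occlusion(d, max_ray_length) for d in sample_hits)
    return 1.0 - occluded / len(sample_hits)

# Fake hemisphere samples: some rays hit nearby geometry, some hit far away,
# some escape. With the clamp, only the nearby hits count as occlusion.
hits = [0.8, 2.5, None, 12.0, 40.0, None, 1.1, None]
print(ambient_occlusion(hits, max_ray_length=5.0))
```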
At the Annecy Festival this year they showed a chart of average render times from their movies.
If I remember correctly the lowest was The Incredibles, clocking in at around 7 hours (or 5, it's a long time ago :D), and the highest was Cars (this was before Cars 2) with an average render time of 15 hours a frame.
That's a shame: by making post-processing more difficult or impossible, you waste render time. The render times may be low, but as I was saying before, they put a huge amount of time and effort into reducing them (fast render times are only worth so much effort; at a certain point it becomes self-defeating), and the figures usually don't include the time it takes to bake. They also don't need to worry about realism, since they're not trying to match live-action.
I was under the impression that only Kim Libreri's projects at ILM used MR, like Poseidon. I'm sure I would have heard if Transformers used MR! Do you have any links? I'd love to know more - like how they deal with motion blur.
Heh, I have a lot more experience with raytracers, and it's what I learned on. What made you think that? I'm the one who's saying raytracing is not as slow as RenderMan people will tell you, and that it definitely looks more real!
"Youāre correct Bonedaddy, they mixed Mental Ray and PRMan.
The TD on Transformers Hilmar Koch held a talk about the VFX at the āeDIT 10. Filmmakerās Festivalā in Frankfurt/Germany this month.
On one slide they showed test renderings comparing the render times between MR and PRMan. They showed GI, Area Lights and Reflection times. For GI the time for MR was alot lower than PRMan (I think a third or quarter) on the rest they were even. He mentioned there were issues in getting to match the renders from both, especially with displacement. In the end they were using MR for some passed while doing the main work with PRMan."
I stand corrected! About PRMan too, if this info is accurate; I'd heard from someone who worked on Pirates 2 that it wasn't used. Who knows anymore.
They rendered motion blur for Transformers 3 with MR; it's raytraced motion blur.
I can't speak for Transformers 1 in terms of motion blur (the whole set of optimizations came in MR 3.9), but I heard they mixed RenderMan and MR, with MR for the reflection stuff.
They also used MR for Episode 2, and Hulk was rendered (partially?) with MR, so they use it a lot more than you think (the soft shadows on Dobby in Harry Potter 2 are another good example).
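Since raytraced motion blur came up above, here's a bare-bones Python sketch of the general idea as I understand it (distribution ray tracing with time-jittered samples). It's not how MR actually implements it, just the concept, and the moving "object" is only a 1D position to keep it short.

```python
# Each camera-ray sample gets a random time inside the shutter interval and
# sees the geometry at that time; averaging the samples produces the blur.

import random

def object_position(t):
    # Object slides from x=0 at shutter open to x=2 at shutter close.
    return 2.0 * t

def render_pixel(num_samples=16, shutter_open=0.0, shutter_close=1.0):
    """Average many time-jittered samples to get a motion-blurred result."""
    total = 0.0
    for _ in range(num_samples):
        t = random.uniform(shutter_open, shutter_close)
        # Shade based on where the object is at this sample's time; here the
        # "shading" is just the object's position, to keep the effect visible.
        total += object_position(t)
    return total / num_samples

random.seed(1)
print(render_pixel())  # roughly 1.0: the positions get smeared over the shutter
```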
I ask because it seems like you think a lot of stuff is not possible with a raytracer. Look, for example, at how long BUF has been using MR for their amazing work, or at the Matrix movies, or all the Blue Sky movies (not to mention the first ambient-environments rendering, where AO was raytraced before the point-based stuff was used); it has all been raytracing for a while now. And I'm not even talking about the last three years or so, in which raytracing is used more and more (the whole importance-sampling business makes a lot possible).
Well, you should re-read my posts; I've been arguing the exact opposite. The article this thread is about says that a lot is not possible with raytracing. I've been arguing the whole time that that's false.
I feel like this is getting off topic, but different studios worked on TF3. The studio I worked at that did some TF3 shots used FumeFX/Krakatoa, and VRay.