RenderMan is dead!! Or at least on its way out!!


#61

Usually not underbidding, ensuring reliable delivery, doing good work, and getting paid fairly for that work helps.

In fact, I personally feel that a well-thought-out RenderMan pipeline in a small studio can offer some pretty interesting options AND be cost effective if you’re not afraid to get your TD hands a little dirty. It is one of the more memory-efficient renderers on the market, so if you’re cramped for space and/or power for machines, it’s actually not that crazy an option for maximizing the limited boxes you have.

If I had to set up something small, I would look at a mix of Arnold and PRMan, approaching it with the idea of striking a balance: use a renderer that’s artist-friendly, yet leave yourself some room to tackle some of our industry’s more difficult problems.

If paying $2000/node meant getting a job done vs. not getting it done, and I still turned a profit, I’d sign that paper any day of the week.

-Lu


#62

You talk about PRMan, but if you look at 3Delight, for example, the price is much lower.
And 3Delight for XSI is easy to use; the integration is really simple and user-friendly. I think it’s easier than the XSI mental ray integration. So, as always, integration is the key for smaller shops.


#63

It is funny that you singled out three movies with some of the biggest amounts of work ever put into cramming raytracing into the pipe through alternative means, complex shaders, and intensive pre-processing/bakes, to advocate rasterisation (because if you imply a raytracer wasn’t involved or necessary one has to assume you are advocating a rasterisation only REYES engine).

You are aware of the efforts Pixar is putting into massively extending PRMan’s raytracing abilities, right? :wink:

This thread gets funnier and funnier, especially when people pull out the avatar card without having worked on it or knowing much of the pipe those pixels went through.


#64

We seem to be getting more and more these days. That’s the internet for you: there’s a lot of opinion, but not much wisdom :slight_smile:


#65

I think it’s related to “everyone knows someone who knows someone who worked on Avatar”. It’s just such a huge project, with so many people involved, that you can sound credible by talking about it, without actually knowing anything about it. I think we should start promoting the idea of “The Avatar Effect” :smiley:


#66

I was obviously talking about a 100% raytracer like Arnold, and my question was whether a raytracer could render out those movies in the same time, or whether the studio would have to spend more money.


#67

[QUOTE]I was obviously talking about a 100% raytracer like Arnold, and my question was whether a raytracer could render out those movies in the same time, or whether the studio would have to spend more money.[/QUOTE]

Well, consider the vast number of artists working on such a show… if your renderer makes it easier to set up/finish the shots, you would be able to reduce the staff, so you could buy/rent a hell of a renderfarm just from those savings.

That’s basically the philosophy of Arnold… CPUs are much cheaper than artists. I think this just didn’t work before because of memory limits and hardware prices. (You could have paid quite a few artists, for the whole year, just for one SGI Challenge server back in the day.)
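The “CPUs are cheaper than artists” trade-off above can be sketched with some back-of-the-envelope numbers. Everything here is hypothetical and purely illustrative except the $2000/node license price quoted earlier in the thread:

```python
# Hypothetical figures, for illustration only: neither the artist salary
# nor the hardware price comes from any studio mentioned in this thread.
artist_cost_per_year = 60_000   # assumed fully-loaded annual cost of one artist
license_per_node = 2_000        # per-node price quoted earlier in the thread
hardware_per_node = 3_000       # assumed price of one commodity render box

# How many fully-licensed render nodes one artist-year could fund instead:
nodes_per_artist = artist_cost_per_year // (license_per_node + hardware_per_node)
print(nodes_per_artist)  # 12
```

Under those made-up numbers, trimming a single artist position funds a dozen render nodes for a year, which is the kind of math the Arnold philosophy leans on.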


#68

Yes! I mean, I frequent the LW forum, and Rob hangs out there sometimes, and he worked on Avatar!!! :applause:

On the topic, I can’t say much. Being a hobbyist I don’t have access to it.

EDIT: Oh wait, thanks to this topic, I found out 3Delight has a free 2-core license. I’ll give it a try. And the free license can be used for commercial work. Whoa!


#69

It’s a tough debate right now, IMO. I mean, look at Pixar versus Blue Sky. Pixar is using a REYES renderer with a lot of raytracing capabilities, while Blue Sky is using a brute-force raytracer, now with a huge focus on voxelization. Having attended several lectures from both studios over the years, and especially a recent Rio one: Blue Sky’s average frame render time is substantially greater than Pixar’s, almost double on Rio, which is itself higher than previous Blue Sky films. They argue that they don’t spend nearly the same time on point caching, brick maps, etc., but they also talked about how they were constantly killing their SSD-based systems with the giant amounts of caching that the raytracer, and in particular the voxel system, was generating.

So you have the typical argument of artist time versus rendering time, but IMO good pipeline structure and good artists can always speed up the PRMan pipeline, while the only things that can speed up the raytracers are better computers and better algorithms.

I am still excited by Arnold, as well as its future with OSL and Alembic, but right now I still think a good team with PRMan will be more flexible. Being a guy focused more on cartoons than photorealistic visual FX, and listening to all the challenges faced on both Meatballs with Arnold and Rio with Blue Sky’s renderer (which has mainly been used on cartoony animations), keeping the raytracer stylized seems to require a lot of extra development work currently, while PRMan is far more flexible because you can easily throw photoreal out the window if you’d like.


#70

[QUOTE]We seem to be getting more and more these days. That’s the internet for you: there’s a lot of opinion, but not much wisdom :)[/QUOTE]

i wrote most of that render pipe, and i can tell you now there are very few people on this planet who know how any of it worked.

but i can say we did use renderman. and we did use a lot of raytracing.

in terms of this thread the issue is people relate software packages directly to the final output they see. they say “renderman did this” or “maya did that” when in truth all of these software tools are just pieces of the overall tech that makes these films look the way they do. there’s also a huge amount of human effort and decisions that influence what you see in that final result. so much so that it gets tricky to really attribute any of the film to any one renderer, or technique.

the reason there’s very little wisdom in these threads is anyone that knows how something was actually done is either legally unable to tell anyone, or is just far too busy on the next project to spend time hanging out on forums every day.

the truth is that we don’t use anything in isolation, and we don’t use anything in its out-of-the-box form. the problems we are trying to solve are generally beyond the scope of any package that exists, so we have to adapt and improvise. it just happens that renderman is very extensible, and very adaptable. i’m sure we’re using it in ways the devs have never even considered. the result of this is that pipelines can become very messy and very frustrating things to use and maintain. and maybe this is where the OP’s frustration is stemming from.

at the end of the day renderman helps us solve problems, in more ways than just rendering. and i don’t see these problems going away anytime soon.


#71

Sounds good. But if you are a freelancer or small shop, you probably won’t get jobs that difficult.

We are a small team in a big company. We do all of the above: not underbidding, ensuring reliable delivery, doing good work, and getting paid fairly for that work. But we cannot afford $2000/node plus $500/node annual maintenance.


#72

[QUOTE]Being a guy focused more on cartoons than photorealistic visual FX, and listening to all the challenges faced on both Meatballs with Arnold and Rio with their renderer (which has mainly been used on cartoony animations), keeping the raytracer stylized seems to require a lot of extra development work currently, while PRMan is far more flexible because you can easily throw photoreal out the window if you’d like.[/QUOTE]

Well, I don’t think this is a valid point, because you could always disable GI and get similar non-photoreal results, but without having to mess with shadow-map bias settings and environment maps (just to name a few).

When it comes to flexibility, as long as the renderer supports DSOs, riReads, and custom shaders, you shouldn’t be that limited. Of course, a company like Weta/ILM/DD could have different requirements I’m not aware of.

(I can’t comment on all aspects, because the last time I used PRMan in production was 2003, and back then it didn’t have raytracing at all.) But I still remember the feeling I had after I switched back to mental ray and didn’t have to fake everything to get something simple done,
especially when working on commercials.


#73

I hope you’re not complaining about this. :slight_smile:
The internet is the most amazing thing that has happened to humanity in the last decade or so.
The lack of wisdom is present whether the internet exists or not; it’s just that you and/or I wouldn’t be able to come in contact with it otherwise. The fact that we, the people, can express our opinions (no matter how ridiculous) cannot be matched by all the gold and platinum in the universe. To be more succinct: it’s priceless.


#74

this link is currently floating around on the mailinglists:

http://www.hollywoodreporter.com/news/transformers-dark-moons-powerful-visual-208967

  1. Massive computing power was needed so that the Driller could destroy the skyscraper: Rendering is the process of calculating the information in a CG file for final video output – essentially by turning numbers into images. It took a staggering 288 hours per frame to render the Driller along with the photoreal CG building that includes all those reflections in its glass.

:curious:


#75

@colinbear - thanks for that post


#76

288 HOURS!? This is on a farm, so not in real time, right?

Wow, 22.8 years. That’s pretty insane. … Taking an hour to load a scene that large? I’d like to see the specs on those workstations. I’m sure they’re pretty high; just curious.


#77

Well, you should be able to distribute a single frame’s buckets around the farm, but I think it decreases efficiency (so a 288-hour render split into 288 buckets is still not going to finish in a single hour). It’s monstrous though… just as it must have been to model, texture, and animate it…
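The sub-linear scaling above can be sketched numerically. The 0.7 efficiency factor below is a made-up assumption for illustration, not a measured number from any production:

```python
# Toy model: spreading one frame's buckets over N machines scales the
# render sub-linearly. The efficiency factor is purely an assumption.
def distributed_render_hours(single_machine_hours, machines, efficiency=0.7):
    """Wall-clock hours when a render is split across `machines` boxes,
    with some throughput lost to distribution overhead."""
    return single_machine_hours / (machines * efficiency)

# A 288-hour frame split 288 ways still takes well over an hour:
print(distributed_render_hours(288, 288))  # ~1.43, not the naive 1.0
```

The exact overhead varies per renderer and scene, of course; the point is only that 288 machines do not buy a 288x speedup.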


#78

None of these renders can be done in a single pass. Never ever.

It is broken down into a number of different passes handled by different TDs. Otherwise, pressing the render button would be like Russian roulette: no guarantee it won’t die at some point. And not delivering is not an option.


#79

That’s probably 288 core-hours in the first place. Throw 1,000 cores at it and suddenly it doesn’t seem so impossible anymore. It’s big, but not impossibly big.


#80

More than 200,000 rendering hours per day, wow. So that’s over 8,333 cores, if I did the math right. I can only imagine what ILM’s electric bill must be like!
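The arithmetic in the last two posts checks out. The only input is the 200,000 render-hours-per-day figure quoted above; as a side note, the same total also appears to be where the “22.8 years” number upthread comes from, though that is my assumption:

```python
# Checking the farm math: one core contributes 24 core-hours per day.
hours_per_day = 200_000          # figure quoted for the farm's daily output
cores = hours_per_day / 24
print(round(cores))              # 8333 -> matches the estimate above

# The same 200,000 hours, expressed as one machine running nonstop:
years = hours_per_day / 24 / 365.25
print(round(years, 1))           # 22.8 -> matches the figure quoted upthread
```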