RenderMan vs. V-Ray vs. Arnold


#21

On top of the GPU renderer, V-Ray, unlike RIS and Arnold, offers two shading languages. You can build OSL shaders, and, this is I think a unique feature, you can also render GLSL shaders in V-Ray.
V-Ray RT, texture baking, and the GLSL translator are further proof of the tool's great versatility.

http://docs.chaosgroup.com/display/VRAY3/GLSL+support+in+V-Ray


#22

There’s a lot of misinformation in this thread, starting with that video and continuing through the many comments about RenderMan being dead, when ironically the opposite is true.
As a user, let me clear things up a bit.

-RenderMan now has a new path-tracing renderer called RIS, which is taking all of Pixar’s development focus.
-There is a lot of Hyperion (Disney renderer) technology in RIS, including the new denoiser used in Big Hero 6 and Disney shaders. The development teams share progress.
-Yes, RenderMan’s REYES is very much being deprecated, as is the RSL language. RIS is just being born and will be used in all their future films, starting with Finding Dory.
-Katana comes bundled with RenderMan RIS.
-It has many integrators, including the first commercial bidirectional pathtracer (rays from camera and lights).
-Web-based Tractor render manager included.
-Shading and lighting are C++-based, with OSL support.
-Houdini 15 native plugin.
-Blender native plugin.
-C4D support is being developed.
-Google Zync support.
-Free for non-commercial use, so you can give it a try instead of reading misinformation online…

I hope that helped clear things up, at least on the RenderMan side.

If you need to learn RenderMan, the new RenderMan community is a great place to start, as is RenderMan University.


#23

  There’s a lot of misinformation on this thread. Starting from that video, to many comments on RenderMan being dead, which is ironically the opposite.

I disagree. You cannot even render a decent volume using RIS after 7 hours, there is not one shipping example, nor can support provide an out-of-the-box setup to use with Katana. As a CG supervisor, PRMan is the last choice in rendering for me; it’s a combination of Arnold and Mantra on any show I set up.


#24

There have been quite a few improvements in volume rendering with the last two updates, in the last month or so. I’m happy with the speed at which Pixar is developing RenderMan RIS; I think it augurs well for the future.


#25

The developments in volumes and fluids in version 20 are great, and VDB support has been solid for over a year now. It is very fast, especially with the denoiser. I’m not sure why you’d need to render a frame for 7 hours… might be user error?
Of course any renderer has its shortcomings, but development has been incredibly fast, including a restructuring of the development team and tight collaboration with Disney’s R&D department, so RenderMan is more alive than ever.
The fact that you choose a different renderer doesn’t mean the ones you haven’t chosen will somehow disappear.


#26

Does the denoiser interpolate? Does it produce flicker and/or remove small details? Exactly how does it work?


#27

Here, I think there’s some interesting info in this section:
http://renderman.pixar.com/resources/current/RenderMan/risDenoise.html
Hope that clears some doubts.
Cheers.


#28

A basic understanding of raytracing internals and color pipelines will give you enough skill to produce good results with any engine and to adapt it to your needs. Whether and how raytracing engines give you access to that kind of information is another matter.


#29

I have worked with all three renderers in past years (PRMan only before RIS), and at the moment I’m evaluating Arnold, V-Ray, and PRMan 20 for a personal project and for teaching purposes.

All I can say right now: RenderMan 20 is very strong! Very strong! And I really used to hate using it, and I thought the same as “mister3d” when they gave it away for free.
But it is pretty much like another Arnold now, if not better. Anyway, when I reach my conclusion I will post again. Maybe it will be of help.


#30

That denoiser seems great, but I’m skeptical about how it works when there is detailed geometry in the background. Can anyone confirm how good it is exactly? Pros and cons? Also, why don’t other renderers implement something similar?


#31

-It’s not a simple post process. It essentially writes a bunch of AOVs as part of your EXR beauty pass, including albedo, z, vector, normal, variance, etc… and then, using proprietary code (developed for Big Hero 6), it denoises your image.
-It has no flicker, because it uses cross-frame interpolation.
-Iray has also developed a denoiser, and I suspect other renderers will follow suit.
I’ve used it successfully in production, and if you’re skeptical, watch Big Hero 6… or try it on your own, since RenderMan is now free for non-commercial use.
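Pixar’s actual denoising code is proprietary, but the general idea of feature-guided filtering described above can be sketched as a toy joint bilateral filter in NumPy: a noisy beauty channel is averaged with its neighbors using edge-stopping weights taken from an auxiliary guide image (standing in for an albedo or normal AOV). The function name and parameters below are illustrative assumptions, not anything from RenderMan:

```python
import numpy as np

def joint_bilateral_denoise(noisy, guide, radius=2, sigma_s=2.0, sigma_g=0.1):
    """Toy feature-guided denoiser (illustrative, NOT Pixar's algorithm).

    Averages each pixel of `noisy` with its neighbors, weighting by
    spatial distance and by similarity in `guide` (e.g. an albedo AOV),
    so edges visible in the guide survive the smoothing.
    """
    h, w = noisy.shape
    out = np.zeros_like(noisy, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Gaussian falloff in screen space...
            spatial = ((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2)
            # ...and in the guide AOV: big guide differences kill the weight
            feature = (guide[y0:y1, x0:x1] - guide[y, x]) ** 2 / (2.0 * sigma_g ** 2)
            weights = np.exp(-(spatial + feature))
            out[y, x] = np.sum(weights * noisy[y0:y1, x0:x1]) / np.sum(weights)
    return out

# Noisy image with a hard edge that the guide AOV knows about:
rng = np.random.default_rng(0)
guide = np.zeros((16, 16))
guide[:, 8:] = 1.0  # clean edge, as an albedo AOV would record it
noisy = guide + rng.normal(0.0, 0.2, guide.shape)
denoised = joint_bilateral_denoise(noisy, guide)
```

A production denoiser does far more (it combines many AOVs such as z, normals, and per-pixel variance, and filters across neighboring frames to avoid flicker, as described above), but the edge-stopping weights are the basic reason detailed geometry can survive the smoothing.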


#32

It’s not that simple for production. These are some simple examples I can think of…
-If a renderer doesn’t support VDB, it might break a Maya/Houdini pipe
-if it doesn’t have a denoiser, you might be looking at very high render times.
-not having deep data output might break a stereoscopic pipe or compositing pipe.
-if a renderer is not stable, you will have issues meeting deadlines and staying within budget. This is a huge point, because you’re getting the same development budget used for film production at Pixar and Disney. They can absorb massive costs because they make it, test it, and use it themselves, and then it trickles down to the user in a very polished manner. The same goes for the Tractor render manager, which you can get for about $100…
-if a renderer is not extensible, it might not fit a big pipeline.

Not to mention every renderer has workflow differences, which in itself is a huge point to consider.


#33

I see, thanks. I also found some more information on it, and it seems nice. What I still wonder is: do you have to use .exr for it to work? I’m usually stuck rendering to JPEGs. Also, my work uses Max, so I’m just curious… But thanks for the information nonetheless.


#34

Yes, you need to use EXR.
As far as I know, there is no development planned for 3ds Max “yet”.


#35

I know, although there is a guy doing something of a conversion, but I’m not technical enough to know whether or not he’ll be able to convert it all properly without re-inventing stuff.

https://vimeo.com/133612957


#36

Yep, saw that a couple of weeks ago, it looks like a really great job so far.

As far as the volume questions go… this is an HD frame that cooked for 1 hour.
-All VDB files are from the OpenVDB site, including the dragon.
-Multiscatter volume shader.
-Emissive volume.
-Bidirectional pathtracing.

This is on a very old dual Xeon X5550 with 24 GB of RAM. On a modern workstation, this should be fully converged in under 30 minutes.

FULL RES IMAGE


#37

A new article just went online from fxguide. It’s about day 3 at SIGGRAPH and has a huge focus on rendering. A really great read.

http://www.fxguide.com/featured/siggraph-day-3-and-out/


#38

In my opinion, denoising filters kill detail in areas that receive only indirect or little lighting, which is very bad, since those areas are expected to show more diffuse detail and richer diffuse gradients. Personally, I would never use any denoising filter in post production. A good biased interpolation algorithm is better than any denoiser. I see denoising filters used quite often in the Blender community with Cycles, and the results are catastrophic, in my opinion.

  Every raytracing engine and raytracing algorithm works more or less the same under the hood.

#39

The details of the technology are there for you to read in the RenderMan docs, and I believe Pixar released the papers.
Comparing it to the Blender solution is very strange, since it’s not the same solution at all. It has been used successfully in production at Disney for years now, and you’re getting the same tech used in Big Hero 6. I didn’t see any “catastrophic” results there…
It’s also free to use with Blender.

Since the main algorithms are similar in all engines, the differences are in the details, especially development focus and workflows, which makes the decision of choosing one even more important.


#40

Thanks for all the info, Leif! Just to be picky, there is one point that is not 100% true :slight_smile:

The first commercial BDPT was Maxwell, and it is the only method it knows. If Pixar uses the VCM method for their BDPT, it first computes a photon map before path sampling. Maxwell in this regard is more brute force, and I guess a little more accurate. But I have no idea what their exact method is.

My biggest complaint about Pixar is that they ditched RSL, because you end up with a new render engine that is sold as Pixar RenderMan but has nothing to do with RenderMan. RIS is Pixar’s answer to Arnold, and it’s a beautiful answer, no doubt about that. But I would have loved to see a binding of their new functions into RSL, in order to unify rather than isolate the REYES and RIS worlds.

Cheers

E