Ah well like I said, maybe he’s improving multi-threading for 3.0. I’m using 2.4.
I bet it also depends a lot on how much pre-processing it needs to do. If you have no SSS, hair, deformation, or irradiance/light caches, just raw geometry and brute-force GI, V-Ray probably goes fully multi-threaded almost immediately. I’ve been meaning to test a scene like that. My test from above had hair, motion blur, deformation and light cache.
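If anyone wants to set up that test, here’s a minimal sketch in Python (maya.cmds). It assumes V-Ray for Maya is loaded and that your build exposes the usual vraySettings attributes (giOn plus the primaryEngine/secondaryEngine enums, where 2 = brute force); double-check the enum values against your version:

```python
import maya.cmds as cmds

# Switch to V-Ray; the vraySettings node is normally created when V-Ray
# becomes the current renderer.
cmds.setAttr("defaultRenderGlobals.currentRenderer", "vray", type="string")
if not cmds.objExists("vraySettings"):
    cmds.createNode("VRaySettingsNode", name="vraySettings")

cmds.setAttr("vraySettings.giOn", 1)             # enable GI
cmds.setAttr("vraySettings.primaryEngine", 2)    # 2 = brute force (assumed enum value)
cmds.setAttr("vraySettings.secondaryEngine", 2)  # brute force bounces too, no light cache
```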
About your plugin - I ran it, no errors, but I can’t find anything that it did. Does it create a menu or button somewhere? It didn’t seem to change any V-Ray settings.
With it in your scripts (not plug-ins) folder, just type vraytuner; and it will load a new V-Ray Tuner panel at the left of your Maya UI.
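If you prefer launching it from Python rather than the MEL command line, something like this should work too (assuming the file is named vraytuner.mel, so Maya auto-sources it from the scripts directory on the first call):

```python
import maya.mel as mel

# Sources vraytuner.mel from the scripts path and runs the global proc.
mel.eval("vraytuner;")
```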
It doesn’t change anything by default. It’s not one of those “1-click fix” type scripts. It also doesn’t create any dependencies like a plug-in can. I suggest you read the readme and look at the Vimeo links in there to understand the features.
Ah, got it. I was confused when you said “read the readme and Vimeo links in there” because you only linked to your .mel, not the .zip, which I’ve since found on your site.
Very nice!! I don’t even know enough about V-Ray to use all the features you have for it, and I’m no amateur. Are you already aware that docking the render view makes the main window resize whenever you hit ‘render’? I’m betting that you’re tackling that annoying bug that keeps V-Ray from closing the render view; bravo, it’s such a pain.
You need to disable Auto Resize in the render window to fix that. But V-Ray 3 automatically bypasses the Maya render window, so you won’t need that anymore.
Bit of a misleading statement.
Render engines with inefficient threading aren’t the only reason, or even the chief reason, that’s done.
Capping peak memory and keeping as many nodes dark as possible is a much bigger factor in that choice.
It’s also a bit of old news, mostly dating from when everybody and their dog used RenderMan, which has always had, and still seems to have, the poorest and most wasteful multithreading possible.
Studios that moved to more modern and better scaling models and engines don’t do it nearly as frequently.
The scaling outlined in the original post is actually quite poor, so either it’s a particularly tricky scene (by its nature, or possibly a poor choice of settings), or there’s a bottleneck somewhere (which might be anything from memory thrashing to being storage-bound), or V-Ray has work to do (and apparently they have done a lot of work to scale better in recent times).
Let’s not keep propagating relatively dated knowledge as gospel; the most modern models and engines don’t have “a lot” of single-threaded overhead any longer, or at least not very often in the context of rendering most frames of a movie.
Yeah, I’d say, based on my render tests, that’s a pretty accurate statement. I won’t be adding this to V-Ray Tuner since, in my experience, I don’t see any scenarios where you spend a major amount of CPU time in poorly-threaded tasks.
Yeah, I read a Pixar article about making Monsters University, and they were saying their main rendering goal was to keep each frame under 20 GB of memory so they could pack 4 parallel renders per machine on their 96 GB RAM render nodes. I guess I assumed that if Pixar is doing that with all the new raytracing GI stuff they have going on, most other major studios using RenderMan probably did the same.
Then later I read an article and watched a video about the Arnold renderer where they made a huge deal about how efficient their engine is, and how other engines end up making people render multiple jobs in parallel to try to recoup the efficiency lost to single-threading.
These articles weren’t very old, so I assumed they were reasonably accurate.
It’s not dated to the point of being nonsense, and anything you read about rendering engines that are also commercial products, or nominated for sci-tech awards, is to be taken with a shovelful of salt. But rendering in general is possibly the discipline I’ve seen advancing the most, and in the most revolutionary fashion, over the last five years, and it keeps doing so.
Also, nobody really says much publicly about the management and IT side of things, because it’s highly local and not really good marketing, but when you reach the thousands-of-nodes scale it has a considerable impact on a facility’s choice, even if that choice might have to be presented otherwise to the public buying, or awarding prizes to, their products.
Raytracing is one of the few problems in CG that can scale to truly impressive widths, but for some engines that scalability requires a pretty radical model change (see PRMan 19, for example, and we’ll see if that will have been enough; the fact that Disney is going proprietary for DA instead of putting its chips on 19 isn’t terribly auspicious, but it might just as well mean nothing, or even be a political choice).
I’d say that of all disciplines, and of all times, this is the one to keep a constantly watchful eye on, and to assume knowledge about rendering can obsolesce fast. Between major paradigm shifts in representation, in programming, and in hardware, it’s pretty varied and easy to lose track of. Exciting times and all, I guess.
Well, I still run into this issue with MR - especially with heavy geometry that has render-time polysmooth nodes active in order to keep scene file sizes and network traffic manageable. I’ve generally seen a 15-20% render performance boost from rendering files in parallel.
It’s especially a problem in MR with the “fast” SSS shaders that use lightmaps, which becomes more of an issue as output resolutions increase to 4K or higher. I know there are new SSS shaders for MR now, and you can override the lightmap resolution for the legacy shaders, but I still have to wonder how often people will encounter single-threaded bottlenecks, especially when dealing with older files on long-running projects.
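For reference, one common way to get the render-time smoothing I mentioned above (possibly different from your polysmooth setup) is Maya’s smooth mesh preview pushed to render time; a rough sketch in Python, where the attribute names are the standard Maya mesh ones, so verify them against your version:

```python
import maya.cmds as cmds

# Keep the saved file light: leave the base cage in the scene and let the
# subdivision happen only at render time.
for shape in cmds.ls(type="mesh", noIntermediate=True):
    cmds.setAttr(shape + ".displaySmoothMesh", 2)          # smooth preview in the viewport
    cmds.setAttr(shape + ".useSmoothPreviewForRender", 1)  # honor the preview at render time
    cmds.setAttr(shape + ".renderSmoothLevel", 2)          # subdivision level used for rendering
```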
Mental Ray is anything but a modern engine. Iray might have stood, or might still stand, a chance, I don’t know, but the old engine is like the old PRMan: I don’t think it can be redeemed by anything short of a complete reboot that keeps only the name around.
In that case, and with some shaders that rely on accumulating a crapton of samples before actually doing anything with the shading itself, yes, you could still be bottlenecked down the line.
You’ve got to be one of only three or four people I’ve heard even bother to mention MRay in the last year, though.
Yeah, MR doesn’t have the most bleeding-edge features at the moment, but it’s still capable of good work. We’ve all heard that said about plenty of software over the years, and it’s always a temporary thing for the major players.
In some small studios/industries we have to ride things out and can’t justify switching software every time there’s a new kid on the block. It would be nice if the time/manpower/money were available to do so, but there’s too much work to do, and often there’s enough artistic freedom in shading that we don’t have to adhere to the rendering requirements of matching lighting from footage, even though I still lust after many of the rendering features other engines have right now.
Given MR’s integration with most of the major software, the backing from Nvidia, and the progress they’ve been making with MILA and the recently improved Maya integration, IMO they’ll pull through in the end. Every render engine’s goals are pretty much the same anyway. They’ll all get there.
I’m not sure I would call V-Ray, Arnold and PRMan new kids on the block either.
Chaos Group is a triple-digit-employee company and has been for a while, and at this point it’s probably more used and better backed than MRay.
MRay is still around uniquely and solely because it comes bundled. Had it been an option it would have sold nothing, and if you could save even just $200 on the cost of a seat by giving it up, probably a good half or more of AD’s clients would instantly take the $200 and renounce it, spending the money on something that actually works. It’s not like that poll didn’t crop up in a couple of places and come up with a 90% “screw MRay” result.
As it is, it’s already disappeared from entire creative fields.
An engine that slow and unpredictable, even “for free”, is simply a false economy. When another engine renders the same stuff 10 to 50 times faster and gets the results iterated to quality in a fraction of the time, you’re saving a penny to spend a pound by sticking with the old one.
Honestly, get in touch with Chaos Group or Solid Angle and get a trial for one project. Let’s see how much more stuff you can render, and how much quicker, when you push it through a decent product. I’m not having a go, Sentry, I am most sincerely recommending you do.
Yeah, I fully agree, but then again every renderer is heading towards massively parallel computing, and ultimately the GPU. CPUs aren’t going to own rendering forever.
In the meantime, at least MR offers rendering compatibility with all our legacy work, versus ditching a lot of it en masse to switch renderers. I certainly can’t defend MR as being the most modern or best integrated right now, but for our needs, other renderers are going to have to offer something that’s 20x better, not 2-3x better, to sway me over.
It’s similar to asking you to ditch Maya and switch to Houdini. It’s easier said than done when you have 15 years of Maya files that you continuously work on for clients.
Thanks. I plan on evaluating V-Ray closely this year. I want to fully explore MR's MILA first to see what I think of it. We don't have a lot of staff, so most of my time is spent actually working on productions. The budget isn't the issue; the R&D time spent recreating a bunch of shaders that we rely on, and then learning the nuances of a new renderer, is.
I mean, I can put it this way: to some extent it's almost arguably more worthwhile for us to drop $30k on additional rendering computers so we can focus on production than it is to drop $10k on V-Ray and then invest a lot of R&D time into switching to it in order to get the gains from it. And given how much is currently going on with MR, I have to wonder whether, by the time we switched renderers, MR would have narrowed the gap anyway.
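Just to make that trade-off concrete, here's a back-of-the-envelope sketch in Python (the $30k and $10k are the figures above; the R&D days and day rate are completely made up for illustration):

```python
# Hypothetical numbers except the two quoted above.
hardware_only = 30_000                 # just buy more render nodes

vray_licenses = 10_000
rnd_days      = 40                     # assumed: shader rebuilds + learning curve
day_rate      = 500                    # assumed: loaded cost of one artist-day
switch_cost   = vray_licenses + rnd_days * day_rate

print(f"hardware route: ${hardware_only:,}")   # $30,000
print(f"switch route:   ${switch_cost:,}")     # $30,000 with these guesses
# With these guesses the up-front costs come out even, so switching only
# wins if the claimed 2-20x speedups keep paying off on future projects.
```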
Then again, as our (small) render farm expands, MR's license costs make less and less sense, and V-Ray's cost looks better and better.
And right now V-Ray is the renderer I'm leaning towards the most, because I feel like it has the most industry support, tutorials, etc., and the price is right.
What I'd like to find out at some point is whether V-Ray can render single 20+ million polygon objects. Last I tried with MR in Maya 2013, it can't, and it starts omitting various triangles as if they were deleted. I know that sounds like a silly thing, but that limitation has created headaches on a few occasions with things like large connected artery trees that have lots of specific curves and branches.
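If you want a quick, repeatable test for that, here's a small sketch that builds a single roughly 20-million-triangle object in Maya (the subdivision counts are just a convenient way to hit the target; adjust to taste):

```python
import maya.cmds as cmds

# One dense sphere: triangle count is roughly 2 * subdivisionsX * subdivisionsY.
xform, history_node = cmds.polySphere(subdivisionsX=4000, subdivisionsY=2500,
                                      name="denseTestSphere")
tris = cmds.polyEvaluate(xform, triangle=True)
print(f"{tris:,} triangles")  # ~20 million; render it and check for dropped faces
```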
Other renderers DO offer something 20x better, very often. That’s what I’m saying.
MRay just doesn’t render the same amount of stuff in the same amount of time (let alone anywhere near as reliably) as Arnold, in almost any and every scenario.
Why do you think MRay has always been patchy in film, adoption-wise, while Arnold swept in and picked up every other major house before it was even publicly available?
Try something else, man, you have no idea how much of a difference there is. We’re talking double-digit multipliers, yes.
Yeah, I should probably make evaluating new renderers a priority. It’s just such a PITA to migrate over, but that’s probably what it’ll take to move forward in both output and quality.
I second every word ThE_JaCo has said about the subject. Not that Jaco needs me to back up his words at all, but I felt I had to comment on this as I’m right now stuck on an mray show. Even worse, I had to go back to mray after working with Arnold for almost 2 years.
I used to work on mray only, and I kept struggling against changing, and then I had to. It was such a breath of fresh air. The worst part of mray, other than it being slow, unpredictable and unintuitive, is how every single thing in it is a hack. And the fact that some mray nodes aren’t compatible with other, newer mray nodes drives me completely crazy. Even the simplest tasks are somehow intricate and silly; you end up connecting 4 nodes just to make a map work as a gobo.
And I didn’t even mention how much more capable Arnold and V-Ray are at churning out nice results with hair, displacement, and everything else, without having to rewrite shaders or connect very complex networks of hacks, or it won’t even render at all.
So yeah, making the change is a pain, but once you’re settled, you’ll never look back. And maybe some of the lost hair will grow back.
Yeah, and I have to admit some of my resistance is just stubbornness from dealing with MR’s intricacies all these years, and bitterness about having learned all that stuff just to ditch it out of frustration.
Things like FGshooter being a solution for final gather is where I started saying “are you kidding me?” - great that there’s a solution, but it’s yet another band-aid for something that seems like it should just work right in the first place.
I understand completely, I’ve been in your shoes for longer than I care to admit. I know ditching years of tricks and knowledge makes you feel bitter now, but imagine all that brain space emptied out and put to more creative and pleasant activities. You’ll be trading bitter knowledge for relief and more creative output. Nothing to lose here.
Take it from someone who is now back and stuck with mray: I can’t wait to ditch it again.
Cheers.