Maya performance in huge scenes



Our VFX studio has recently switched to Maya. I am a long-time Maya user, but I have always been very disappointed with Maya’s performance in huge scenes (100,000,000+ polys):

  • render layers take forever to create and a long time to switch between

  • the viewport freezes constantly, sometimes for minutes at a time

  • etc

I know most of the optimizations (references, proxies, etc.) that can be done to a scene, but I wonder how so many VFX studios manage to work in Maya with huge geometry sets and find its performance acceptable.

Any thoughts on this topic?


This is slightly off topic, but you mentioned you recently switched to Maya from something else. I was wondering which software you switched from that did support 100-million-polygon scenes? It sounds like you’re comparing Maya to something else, and I’m just curious to know what that was.

Also, what are your machine specs: amount of RAM, GPU, etc.?


Is there a special reason you need to see all objects in the viewport…?

You could work with render proxy objects or stand-ins…
and use Scene Assembly to swap in the real objects at render time…
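For what it’s worth, a minimal Scene Assembly sketch looks something like this (names and paths are made up, and I’m going from memory on the flags, so double-check against the docs):

```mel
// Create an assembly definition and give it a lightweight GPU-cache
// representation alongside the full scene representation.
string $asm = `assembly -name "cityBlock_AD" -type "assemblyDefinition"`;
assembly -edit -createRepresentation "Cache" -input "/path/to/cityBlock.abc" $asm;
assembly -edit -createRepresentation "Scene" -input "/path/to/cityBlock_full.ma" $asm;

// Query the generated representation names, then activate the cheap one;
// switch to the full scene representation only when you actually need it.
string $reps[] = `assembly -query -listRepresentations $asm`;
assembly -edit -active $reps[0] $asm;
```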


I’ve found Maya’s viewport scalability to be poor. Things like proxies are perfectly fine when a scene is legitimately too large to handle, but Maya falls over long before that. Some examples:

  • If I have smooth mesh on, alt-dragging in the viewport with tumble on object turned on hitches badly. It’s doing a hit test to see what you clicked on, and for some reason it does it against the subdivided geometry. Once the hit test finishes, it rotates fine. This means that even if Maya can render the subdivided objects at 100 FPS, the user experience still sucks. I don’t know why they don’t do tumble hit testing against the base geometry.
  • If I have high-poly objects like mesh trees and bushes in the scene that I want to see, but don’t want to interact with, I’ll put them in a display layer and set them to reference. Yet, it still causes rotating the viewport to hitch before it begins to rotate. Apparently it’s still doing tumble on object hit detection against everything in the scene, even if they’re set to reference and it’ll ignore the hit anyway.
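For reference, the reference-layer setup in the second bullet is just a few commands (a sketch from memory; the layer name is made up):

```mel
// Put heavy set-dressing meshes on a display layer set to Reference,
// so they still draw but can't be selected or edited.
string $sel[] = `ls -selection`;
string $lyr = `createDisplayLayer -name "trees_lyr" -empty`;
editDisplayLayerMembers -noRecurse $lyr $sel;
setAttr ($lyr + ".displayType") 2;  // 0 = normal, 1 = template, 2 = reference
```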

Just some recent examples I’ve found where I had to lower viewport detail even though the viewport renderer had no problem rendering it. It’s fine to need to reduce the viewport when you reach GPU or memory limits (that’s what proxies and so on should be for), but in these cases I’m having to cripple the viewport long before that. With a scene with about 1M polys that easily renders at 100 FPS or more (when navigating the view, not necessarily on playback), I still have to turn stuff off because rotating the viewport freezes for a quarter second every time.

(I can’t speak for 100M polys. That’s well beyond anything I’ve tried to do, and with the problems I’ve had at 1/100th that I wouldn’t even try it. Try disabling tumble on object in the tumble options; it makes viewport navigation awful and I hate turning it off, but for dense scenes it makes a big difference.)
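If you want to toggle it from script instead of the options box, it’s something like this (I believe the default tumble context is named tumbleContext, but verify that in your setup):

```mel
// Disable "tumble on object" on the default tumble context,
// so viewport rotation skips the per-click hit test.
tumbleCtx -edit -objectTumble 0 tumbleContext;
```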


Hehe, I’ve been reporting viewport lag and selection lag for four years and they still haven’t fixed it. :smiley:

But I think the OP means something different… I guess it’s because Maya has to make all the connections (like connecting each shape/transform to a render layer, etc.) in the background, which is stupid in my opinion, but it’s what every TD praises.


We are coming from Softimage. Its pass system was less resource-intensive and better built, I was told. We have recreated a similar one for Maya, but changing and duplicating render layers and adding objects to them is still just as slow. We have top-notch CPU systems. The artists here also consider Softimage better in terms of performance. I would say it’s probably 50% the user and 50% Maya’s performance flaws, IMHO, but I know from experience that Maya can very often become unresponsive and slow when manipulating huge scenes.

In our environment department, we inherit complex scenes at the end of the CG pipeline. We need to create new passes/render layers, mattes, and overrides, but we don’t always know how the scene was built by the other departments. So at some point, we need a “visual representation” of what is what to decide where to apply our overrides. Of course we use stand-ins, but even those can be slow to add to render layers and to select.

I have never used Scene Assembly; maybe it could help us…

I think you’ve pointed out the two major problems in Maya: viewport performance and the total lack of multithreading support. We had a demo with Autodesk and said that the new features were great, but that the viewport and the fact that all operations in Maya are single-threaded sometimes made our work very difficult. I said the same things as you: “I have been submitting tickets for years about that!” and they replied, “Oh yes? But where?? Ahhhhhh, okaaaayy, you posted that on the idea forums under small annoying things; you never emailed your AD representative!” We kind of had a good laugh…

I love working in Maya, but we are starting to look around to see if another package could serve us better. Yes, we have huge environment scenes to handle, entire cities or several blocks of streets, and the artists here now take three days to do what took them a few hours in Softimage. :S


Ah, Softimage… sniff. A lot of work went into its performance with very high-res singular geometries, so that may explain why it did better with certain types of scenes/geometries.

Most of the productions I’ve worked on since (with Maya et al.) sooner or later (i.e. after animation is approved) switch everything to point-cache data (Alembic, etc.) for those later stages. There is a hell of a lot of overhead with live rigs, shape animations, etc., which severely impacts the performance footprint.
Point-cached scenes strip all that junk out.
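The bake step itself is basically a one-liner; a typical AbcExport job looks like this (frame range, root path, and file path are placeholders for your own scene):

```mel
// Bake the approved animation to an Alembic point cache so downstream
// departments load baked points instead of evaluating live rigs.
AbcExport -j "-frameRange 1 240 -uvWrite -worldSpace -root |charA_geoGrp -file /caches/shot010/charA.abc";
```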

Scene Assemblies are likely worth checking out too. I am far from an expert, but all I know is that the pipelines I’ve worked in use them, so they must be beneficial. They can also be set up to be quite transparent to the users. Usually a pipeline TD will be in charge of setting it all up to play nicely.


I’ve gotten into the habit of using bounding box mode whenever possible with heavy scenes. I wish you could mix shaded and bounding box modes together.
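If I remember right, display layers can get you part of the way there: each layer has a levelOfDetail attribute, so heavy layers can draw as boxes while the rest of the scene stays shaded. A quick sketch (layer name is made up):

```mel
// Per-layer level of detail: 0 = full geometry, 1 = bounding box.
string $sel[] = `ls -selection`;
string $lyr = `createDisplayLayer -name "bg_heavy_lyr" -empty`;
editDisplayLayerMembers -noRecurse $lyr $sel;
setAttr ($lyr + ".levelOfDetail") 1;  // only this layer draws as boxes
```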

I’m also not happy about how Maya displays wireframes on dense objects. With really dense objects you can’t even see the shader color while you’re adjusting it. I think C4D has the right idea: it highlights selected objects with an outline instead of lighting up the wireframe. I’d much rather see Maya’s wireframe handled more like ZBrush displays it, as a subtle thing.


Undo. Shape animation. Ugh. Oh, just quit and start again… :banghead:
I know it’s not easy, but I think Softimage was smarter about this too.


Have you tried the new shape authoring tools in Ext 2…?
It’s a pimped version of the CamD tools…


I have hopes! :keenly:
Evaluating 2016 at the mo, but the pipeline is still on 2015.
I suspect I’ll inevitably get it for the next show.


Does this relate to switching from Softimage? Softimage is single-threaded (outside of ICE), as far as I recall doesn’t implement GPU instancing, and struggles with large numbers of objects, so it shouldn’t fare better out of the box for larger scenes. Are you comparing performance on similar scene setups, or is the work you’re doing now in Maya very different? How are the scenes made up: is it a few objects with large numbers of polys and materials? (And what GPU?)
As noted above, it’s worth looking at the Scene Assembly videos out there; the feature set is a bit spartan, but with it you can use a “baked”, GPU-optimized cache of the geometry so you don’t have to load the high-res model at all. Classic render layers are also on the way out, with the new Render Setup replacing them in Extension 2.


I am one of the few in the studio not coming from Softimage; I’ve been a Maya user for a long time. I am not sure how Softimage compared to Maya in terms of performance, so I can only relate what my colleagues say about it.

I tend to believe what they say, though, because I have compared similar operations in Maya and Houdini (manipulating and deleting faces on high-res LIDAR scans) and Maya was without a doubt slower. I’ve worked in 3ds Max too, and it didn’t feel that slow.

Yes, it’s true that I have worked on larger scenes in Maya than in other packages, but in those other packages I don’t remember waiting a minute when selecting a large hierarchy, or watching the viewport refresh for several minutes after changing my selection.

But like I said, I have never worked with such huge scenes in other packages: heavy scenes, yes, but not 100,000,000+ polygons.


Which OS, version of Maya, and hardware are you on? You could switch the viewport to either Core Profile (Strict) or DirectX 11 to see if you’re being hit by the cost of the older OpenGL selection implementation.
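You can also flip the engine from script (it takes effect after a restart; I believe these are the optionVar values, but check the docs, since the Core Profile variants have slightly different names):

```mel
// Switch the Viewport 2.0 backend; common values are "OpenGL",
// "OpenGLCoreProfile", and "DirectX11". Restart Maya afterwards.
optionVar -stringValue "vp2RenderingEngine" "DirectX11";
```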


I believe them too. I once tested some modeling operations myself on a piece of geo of 500k polys, like detaching shells or cutting: Maya was the slowest of all the software I tested, even after I disabled the undo queue and deleted history.


Just take a look at this video.


This is quite true.


Painful. But nothing new.


Yeah, it’s so frustrating that they keep adding new features without improving the basics. :S


Maya handles high-poly objects quite well as a whole, but at the component level things get slower.

I don’t know what they did to the latest 3ds Max, but its viewport is crazy fast with high poly counts, even at the component level. We need this in Maya.