.: Q&D :: Future Trend of 3d Software :.


Hey guys, I just wanted to know what you all think, and I was hoping to get some discussion going on this. It seems that the current trend in 3D software is that we have the four giants, namely 3D Studio Max, Lightwave, Maya, and XSI, which are used for everything from modeling to UVW mapping, to animating, to simulation, to some post work.

I’m starting to see other software out there, however, that is far more specialized and often better at its one job. My question to you is: do you think this trend will continue? Will these software giants be replaced by smaller pieces of software, each specialized in a single task?

This is almost like the industrial revolution and assembly line production, which we know is more efficient.

For modeling, Luxology is about to come out with Modo, an extremely well-developed sub-D modeling tool. For UVW mapping, I know lots of people use Deep UV (though it’s not entirely mainstream). For high-poly detailing, I know of nothing that beats Pixologic’s ZBrush. As for animation, rigging, and rendering, it seems nothing holds a candle to the “giants” as of yet (correct me if I’m wrong).

Do you guys think that we will see (in the near future) a pipeline that consists of multiple tools developed by multiple companies… or at least just multiple tools?

Modo to ZBrush to (some rigging/animation suite) to (some uber-awesome rendering suite)


For those of you that don’t know:

Pixologic: http://pixologic.com/home/home.shtml
Luxology: http://www.luxology.com

Also, for animation there are tools called Motion Builder and Animation Master.


I seriously doubt that specialized programs will replace the 3D packages that are out there. If anything, people have been using programs like ZBrush, Silo, Wings, and Clay (beta testers, at least) as tools to go along with their rendering package of choice. You can own a separate modeler, displacement creator (can’t think of the official term for ZBrush :wink: ), and animation program, and at the end of the day still wind up in Maya, XSI, Lightwave, Houdini, Blender…etc.


This is already very common. Mirai was used for modeling on LOTR, Maya for animation, and Renderman for rendering, along with all the custom tools and plugins developed by WETA. Despite all the stupid bickering and questions about what app was used on a movie, most companies use multiple pieces of software to accomplish tasks, either because one app cannot do everything or because different artists are more proficient/comfortable in different apps.

I suppose what you are asking is whether the specialized apps will constitute the entire pipeline. Dunno. But it wouldn’t be unreasonable, particularly if a common interchange format is developed that makes this easy. However, I’m going to assert that this will be more fully implemented at large studios, simply because the divisions of labor lean in that direction. A small studio or a freelancer would have difficulty maintaining multiple licenses and keeping up with many different apps as they develop.

Certainly modeling tools are easily split off from the main app (e.g. Silo, Wings3D, and ZBrush), but certain things need to live in an integrated main application. For example, animation can be separated from dynamics, particles, rendering, and lighting, and modeling can be separated from texturing and UV mapping. But having to hop from one application to another to make a tweak is irritating, and when the multiple areas overlap and interact it makes the entire process a pain in the @ss, particularly for the individual user. Plus, while standard modeling formats like .obj abound, a format that supports the entire range of CG needs is not available, and probably never will be, simply because the development of each separate application would constantly add features that are not supported by the format or by the other applications.

So while having many peripheral applications will enhance workflows and pipelines, I maintain that there still needs to be a main application which ties all of this together.


I totally see the need for a single unified format covering all elements from modeling to animation as necessary if this current trend continues. I guess what I’m amazed by is that the 3D CG software market still has so much room for new ideas and better workflows.

Take Microsoft, for example. They have made a do-it-all office client, and the barrier to entry for that market is larger than mountains. That is not the case with 3D apps, however. It seems that due to the complex nature of CG, there is much more room to wiggle.

Goon: You’re probably right that a complete takeover by smaller 3D apps is unlikely, though.


If I were a large-scale production, I would think a core package with the options to pick and choose (ala plugins?) would be the best option.

Well-written plugins tend to do a much better job and can be custom tailored to the task at hand. The winner of the 3D app war would be the company that can best integrate the specific needs of a production.

However, for the smaller homegrown shops, the current selection of off-the-shelf 3D apps offerings are great. It’s good times for CG.




I think that both models will continue to be used into the future. With a single monolithic app, you have better integration of all the parts, and only one interface to learn. For the single user, this is very appealing because trying to keep multiple interfaces in your brain at once is a huge pain. For larger shops, especially where division of labor is practiced, different departments could supplement or replace their part of the pipeline with an individual app, with the monolithic app still serving as the backbone of the pipeline. Production studios that freely intermingle different apps have to create their own backbone of the pipeline that can store and translate data among the apps.

Building an entire pipeline from individual apps is probably only now becoming practical. Actually, it isn’t quite there yet. You have stand-alone apps that do modeling, UV unwrapping, and texture map generation well. You also have several choices in great stand-alone renderers, some of which are built off of the Renderman specification. But in the middle where you have layout, rigging, animation, dynamics, and lighting, there isn’t really much in the way of good stand-alone applications. I haven’t used Motion Builder before, but it might serve as a stand-alone rigging and animation app. But what would you light in? The lighting app has to be tied to the renderer to some degree. Shader creation in the texturing department is also tied in closely to the renderer. Particle systems tend to have to keep the renderer in mind as well.

Now that the modeling and rendering areas have been covered by several good stand-alone choices, perhaps the other parts of the pipeline are next. An animation package that took in OBJs or LWOs and spit out deformed meshes snapshotted per frame would certainly be possible. Likewise, a lighting package could be written that reads in these object sequences and has light rigs that drive a Renderman renderer.
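To make the “deformed meshes snapshotted per frame” idea concrete, here is a minimal sketch. The `.vsnap` file naming and layout are made up for illustration; no app or standard actually uses them. Only the `v x y z` lines of the OBJ spec are read here:

```python
# Sketch of per-frame vertex snapshots: read rest-pose vertices from an
# OBJ, run a deformer over them, and write one snapshot file per frame.
# The .vsnap layout below is hypothetical, invented for this example.
import math

def read_obj_vertices(path):
    """Return the vertex positions from a Wavefront OBJ file."""
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):  # only plain vertex records
                _, x, y, z = line.split()[:4]
                verts.append((float(x), float(y), float(z)))
    return verts

def write_snapshots(verts, deformer, frames, basename):
    """Write one plain-text snapshot of deformed positions per frame."""
    for frame in range(frames):
        with open("%s.%04d.vsnap" % (basename, frame), "w") as out:
            for v in verts:
                x, y, z = deformer(v, frame)
                out.write("%f %f %f\n" % (x, y, z))

# Example deformer: a sine-wave bob on the Y axis.
def bob(v, frame):
    x, y, z = v
    return (x, y + 0.1 * math.sin(frame * 0.5), z)
```

A lighting app downstream would only need to read these snapshot sequences back in; it never has to know how the deformation was produced, which is exactly the decoupling argued for above.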

The key to establishing a pipeline that incorporates multiple stand-alone apps is common data formats that they all can read. OBJ is a bit limited in the data it can store. LWOs are better, but their texture info is very Lightwave specific. DXF and 3DS are not really good choices. Houdini’s BGEO format is a really good step in the right direction for a common geometry format, but Side Effects hasn’t published or documented it well enough. There aren’t any standard formats I’m aware of that store a sequence of snapshotted vertex positions (Lightwave’s MDD format is the closest I know of, but it’s undocumented). There also isn’t a good standard way to attach shaders to geometry that would work across multiple renderers (shaders themselves have to be renderer specific of course).

Oh, one more thing. For a multi-app pipeline it is probably a good idea not to have the geometry stored within the scene file. You have files that represent your geometry, like LWO or OBJ, and separate files that define the texture, like Renderman SL and SLOs. The geometry file simply specifies which polygons get which shaders, and the shaders can access all the painted weight maps and UV coordinates stored within the geometry file. The exporter that submits the scene to the renderer, such as the RIB generator, ties these together. The scene file should access all the various pieces of geometry via a path to the file where the geometry is stored. This way geometry can be edited independently of the scene file, and the modeling app doesn’t have to know anything about the scene specification.

Lightwave’s LWS scene files work like this, though the LWS specification itself is pretty limited. Maya and other apps can work like this to some degree if you use a lot of references, though given the way Maya scene files are built, such an approach would get unwieldy really quickly. RIB files aren’t designed to be read back in well, so they wouldn’t work as a scene format. But the way RIBs use Read Archives to break geometry out into individual files separate from the main scene file is along these same lines.
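The “scene file holds paths, not geometry” idea can be sketched in a few lines. The one-record-per-line format and the `geo=`/`shader=` field names here are invented for this example (they don’t match LWS, Maya, or RIB), and the toy parser assumes paths contain no spaces:

```python
# Toy scene file that references geometry by path instead of embedding
# it.  Format and field names are hypothetical, purely illustrative.

def write_scene(path, items):
    """items: list of (geometry_file_path, shader_name) pairs."""
    with open(path, "w") as f:
        for geo_path, shader in items:
            f.write("object geo=%s shader=%s\n" % (geo_path, shader))

def read_scene(path):
    """Return (geometry_path, shader) pairs; geometry stays external."""
    items = []
    with open(path) as f:
        for line in f:
            if line.startswith("object "):
                fields = dict(p.split("=", 1) for p in line.split()[1:])
                items.append((fields["geo"], fields["shader"]))
    return items
```

Because the scene stores only a path and a shader name, a modeler can keep re-saving the geometry file, and a texture artist can swap the shader, without either one ever opening or understanding the scene file.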

Perhaps an a-la-carte pipeline will be possible in a few years, but more standard formats and ways of passing data around need to be developed first. Should prove interesting to watch though.

Michael Duffy
Guy Who Spends Too Much Time Thinking Of CG Pipelines


Perhaps an a-la-carte pipeline will be possible in a few years, but more standard formats and ways of passing data around need to be developed first. Should prove interesting to watch though.

Mike, you rock. You have great insight into this topic. Do you know of any movement to have a standardized format or formats?

If I were a large-scale production, I would think a core package with the options to pick and choose (ala plugins?) would be the best option.

Well-written plugins tend to do a much better job and can be custom tailored to the task at hand. The winner of the 3D app war would be the company that can best integrate the specific needs of a production.

You make an excellent point, but the problem with the plugin system is that it does not offer workflow changes, at least not in the plugins I have seen. Take ZBrush’s popularity, for example. The software is absolutely amazing. You can generate super-detailed models to use for normal and/or displacement map generation that can then be rendered in another app. ZBrush’s main difference is the fact that you can simply draw directly on the model, a very different workflow from soft-selecting vertices and pulling/pushing them.

I think in the end, the winner of a 3D app “war” will be the one who incorporates all these new workflow changes into their monolithic app.



I do not think we’re that far away from having a ZBrush-like workflow incorporated into a monolithic app. Think of Maya’s super-crappy Artisan tool. It allows you to push and pull verts, determine falloff, and interactively manipulate a mesh in a way that is very similar conceptually to ZBrush. However, Maya does not display the hi-res interpolation that ZBrush does, and Artisan is not efficient by any means. Basically, what it boils down to is that the architectures of these monolithic 3D apps are not flexible enough to efficiently handle new workflows. A lot of plug-ins are written using what already exists because they are forced to abide by the architecture. And considering that some of the current applications still have a lot of legacy code residing in them, we’re basically stuck trying to teach an old dog a new trick. This is a huge limiting factor for pushing new workflows into an app. However, if these barriers can be broken down, then why shouldn’t we have ZBrush tightly integrated into a modeling workflow?

We’re currently working with a multi-app pipeline, but it’s biting us in the ass over and over again. Mainly it is because of MDuffy’s point about a universally compatible format for transferring from one app to another: it doesn’t exist, therefore we have problems. This is the main reason some studios opt for their own proprietary software, to circumvent the possibility of leaving artifacts along the pipeline and dirtying up the information pool. However, having the luxury of your own dedicated pipeline requires talented programmers (and they’re not cheap) and a good bit of R&D to streamline the system. Small start-up studios usually do not have the financial power or the time to do this. Large studios that do are often “committed” to their investment of time and money for quite some time, and risk the possibility of being left behind by the competition.

I’m expecting some companies to try to homogenize the myriad of 3D applications. But there will always be a mad genius who thinks their way is better than the other mad genius’s. Therefore, having every app speak the same language seems like a bleak possibility.




I don’t see there ever being a single “standard” format…
Taking a look at the CAD industry, you’ll see that the largest player (Autodesk) purposely changed their format to keep the competition from being 100% compatible. Also, the number of features and technologies differs enough that a single file format seems unlikely. About the closest thing I can see is a more advanced DXF-type format that most apps can read and work with from there.
3DS has been around long enough that it’s almost the equivalent of a “standard”.


I think this is where open source apps can step in. With open source apps you can use specialized software builds that specifically suit your workflow needs. Or, if you have a connection to a clever programmer or two or three, you can request that certain source code be added to your app, or even coded into your app. The problem is that most big software companies don’t favor open source product offerings. This reluctance leaves room for individuals to take the matter into their own hands and create their own open source apps. We can look here on CGtalk and see that some 3D and 2D artists and coders have presented us with such open source software choices. These apps are very stable, even for the most demanding workflows.

If you are a new media developer who keeps up with open source projects, you will have seen that some of these app development efforts are joining forces to create a more compatible environment for working between the different software structures. I don’t think that 3D and 2D media companies of the future will need to maintain a purely proprietary base of applications. With open source apps they can extend their apps’ features by using freely available source code from the media software development communities.

They can even develop source code that gets passed around to other developers in these communities and comes back to their studio with even more features added to the original sources. I see this practice happening all the time. I have even made use of this community method of file and source sharing to further my own efforts as a 3D artist. A project that would have taken me weeks to master only takes me days to prepare as a working, functional application, thanks to community brainstorming.

Maybe the old way of keeping to yourself as a company and pure in-house problem solving will give way to more media software development community involvement if you make use of open source software.

-Often that mad genius is a whiz kid who is just playing with a software feature in an open source development. Maybe it’s a feature they read about in a Siggraph research paper or on a website. Or maybe it’s a feature that was left for dead by a computer scientist on an outdated website.

Oh yeah! I love to see these whiz kids get into action and start writing great source code that can later be tweaked and refined into powerful software features. Hey, these mad genius types keep me smiling as I thankfully make use of their features in my daily downloads of open source software builds.

Have fun!


I think this is where open source apps can step in. With open source apps you can use specialized software builds that specifically suit your workflow needs. Or, if you have a connection to a clever programmer or two or three, you can request that certain source code be added to your app, or even coded into your app.

This would be a great thing, but I’m sure you’ll agree with me that the Open Source movement has to evolve a bit more for it to become a viable option. There is quite a bit of overhead involved in coding 3D apps due to their complex nature. Let’s face it, the current open source modeling programs aren’t that great. I would love it if it all were open source, but it seems a bit far-fetched for now.


It’s all about the plug-ins.


Common standards can be implemented in either Open Source or commercial applications. With Open Source you just add support to the main code base, and with closed source apps you write plugins to do the work. Open Source probably gives you a little more flexibility because you can adjust the way the app stores data in order to hold more generalized information (such as adding the ability to store and retrieve multiple values per vertex, etc.)

I don’t think most of the Open Source apps are quite production-worthy yet (though on the rendering side several are close). But the Open Source apps would probably benefit most from implementing open standards, because individual authors could each write a piece of the pipeline, and several apps could be used together to pull off everything that’s needed in production. Monolithic app developers have less incentive to support open standards because they would rather you stay entirely within their app. But even so, individuals could write and release plugins that implement these standards in the monolithic apps so that they could be used for part of the pipeline as well.

Writing a 3D app is a big undertaking, but perhaps not as big as one might think once you know how. With all the books and info out about how to write game engines, the knowledge needed to write a 3D app is pretty well available. Also if you are just trying to implement part of the pipeline, the task is a lot easier. If you are writing a lighting app, you don’t have to worry about all the deformers and constraints that an animation package uses. If you are writing an animation package you don’t have to worry about all the topology modification tools that a modeling app needs. And if the source code is released for reading and writing to the common formats, then it will be easier to code up support for those formats for all the desired apps out there. Also if you narrow your focus on what is supported (for example, polys that can be smoothed to Sub-Ds) and don’t try to support everything you have ever seen in another app (NURBS, Bezier surfaces, quadratic surfaces, CSG, hierarchical Sub-D’s, etc.) then the individual apps become much easier and faster to write, and the common formats also are much, much easier to deal with.

So then the question becomes, how do you create and popularize the standard formats? Do you take existing formats that are partially supported by other apps and try to make them work? Do you create a completely new format and write plugins yourself or work with the developers of existing apps to try and get support for your new format added? Do you work with existing formats first, and then phase in new formats later as things mature (for example, creating a new scene format but using existing geometry formats)? And how do you get people to actually use, test, and develop for these new common formats? Will it take one of the big apps (Open Source or commercial) to release a format specification and the tools to work with it before this could happen? Who decides what is part of the standard?

Michael Duffy


Hehehe… well, I implemented the majority of the translation pipeline used on the “The Adventures of Jimmy Neutron” TV series, and I’m working on implementing the pipeline for our next feature film right now, so I’m painfully aware of many of the pitfalls of tying multiple apps together.

There are a few groups working on standard formats, but these formats aren’t very open. Kaydara has the Filmbox format (FBX), which is designed to be a universal exchange format among 3D apps; however, it is closed source and licensed out by Kaydara. SoftImage has a bunch of translators to get various other formats into the XSI format, but I don’t think they’ve implemented a way to get back out. There are several geometry formats that have kind of become standard, but most of them are old and their designs are a bit limiting.

On the rendering side of things there is the Renderman specification, and that is fairly universal now. Pixar decides what is in the format and I think some elements of the company wish now that they could close the format, but for now it is a pretty workable standard. The only restriction with using Renderman as your rendering standard is that you won’t be able to use non-Renderman renderers, like Lightwave, Mental Ray, Brazil, etc. And the complexities of shading language would probably make it extremely difficult if not impossible to translate from RIB into something another renderer could understand. But on the other hand, there are several Renderman compatible renderers out there to choose from, and the shading language of the Renderman spec gives a lot of flexibility.

If you limited your textures/shaders to a common illumination model and only used texture maps instead of procedural textures to drive your shaders, then you actually could translate textured geometry pretty readily from one rendering app to another. And you would want to design your other standard data formats (geometry and scene) such that texture/surface association and animation was handled in a generic way, so that you could use the rest of the pipeline with Brazil or Mental Ray or Lightwave, and you would just make sure that you textured and surfaced objects with a specific renderer in mind. All the other steps in the pipeline though should work fine regardless of what rendering and texturing solution one chose.

Who knows… maybe I should start my own movement to establish standardized formats for data exchange among apps. (^_^)

Michael Duffy


Who knows… maybe I should start my own movement to establish standardized formats for data exchange among apps. (^_^)

I would totally be with you on that one.


I’m surprised that no one mentioned Messiah as a contender for killer character animation tools.


Messiah could cover the rigging and animation part of the pipeline. We used the plugin version of it on the Jimmy Neutron movie, and I wrote several tools to make working with Messiah a little faster and easier. The Lightwave Plugin version of Messiah simply didn’t scale well because it was sitting on top of Lightwave’s panels plugin architecture, and that wasn’t designed to handle such a large project as Messiah. I haven’t worked with the stand-alone version of Messiah yet because it wasn’t ready in time for our project after the Jimmy Neutron movie.

Michael Duffy


One or two of the four giants you mentioned will be replaced by Cinema 4D in the near future, I am sure.:slight_smile:


Hi Michael,
messiah:animate 5 is miles ahead of project:messiah 1.5.7 that you used on Jimmy. All too often users of the old LW plugin make the mistake of judging the new version based on that old plug. Even the leap from m:a4 to m:a5 was quite significant. If you haven’t used m:a5, then it’s time for you to take another look, no matter what you’re using now:)

Actually, it’s not so surprising. We’re not exactly the largest company out there!:smiley: It’s quite easy for our software to get overlooked in the shuffle with all the fanfare that the huge companies can generate for their apps. We’ve often been called “Hollywood’s best kept secret”… a tag that I often resisted, but it has grown on me over the years.

The undeniable fact is, thanx to our users, messiah:animate 5 is the best combination of ease of use, flexibility, & power that you’re going to find in a character animation & setup tool. And because it is designed to be a perfect companion to other apps, all of our users use m:a5 in conjunction with nearly everything out there.

The real task is in making artists aware that m:a5 is a powerful character animation package that can be added to their existing toolset. As a first step, we’ll be creating a new m:a5 demo soon so that others can discover our tools. We’ll post more info when it’s available.


ps: didn’t mean to hijack this thread, but I just thought it important not to overlook our tools just because we may not be as visible.