Which graphics mode is best?


Software, OpenGL, Direct3D…

They all work fine…but they must have advantages/disadvantages, and I’d like to know what they are so I can use the optimal one for my situation.


Depends on your card, I guess. I have a crap card and I use Software all the time. But it doesn't matter for the render, as that's CPU power.


I have a GeForce 4 Ti 4200 64MB.

How is that important? Do some of those options not use the card?

edit – I have an idea that Direct3D is closely related to DirectX, and that they are both graphics APIs that do about the same thing, but that DirectX is made by Microsoft… I'm guessing that Software means Max has its own graphics renderer…

My question is: will any of these change the capabilities of what is shown in Max's viewports, and will any of them do it faster or slower?


OpenGL uses the card. Direct3D uses DirectX. Direct3D can be the fastest, but it crashes my machine. I have a dual-CPU box and a chunk of RAM, so Software is fine for me.

If your card CAN handle it, I think OpenGL gives the best-looking results in the viewport.


OpenGL is generally supposed to be the better of the two, but DirectX has always been faster and looked better for me. I just reinstalled my video card drivers, so I don't know yet whether OpenGL will show any improvement, but generally I've seen DirectX be faster. The only time I switch to OpenGL is if something in DirectX is being weird, which is rare. OpenGL also has a problem with transparency for me, so it gets annoying fast. I don't know if that will happen to you.

Anyway, I'd say experiment between OpenGL and DirectX to see which gives the faster results or the best-looking ones, and decide from there which is right for your computer. Start by making a standard sphere, convert it to an editable poly with 2 or 3 MeshSmooth iterations, and see which mode is faster when moving around the viewport. Also try scenes that have textures in them, to see which looks better or runs faster. DirectX also has the ability (with a DirectX 9 card, I believe, which a Ti 4200 is not) to display DirectX shaders in the viewport. I think it might work with that card, but maybe not be fully supported. Either way: experiment!


I would recommend Software only for cards that don't support 3D acceleration (that's right, prehistoric). As you have a GeForce 4 Ti, you should choose between Direct3D and OpenGL (both much faster than Software).

I usually use OpenGL for stability and because I'm more used to it.

Direct3D is a little bit less stable, but if you're working in the gaming industry it's about the only way to go, since Max has some DirectX materials. This way you can see your objects as they will look in your game engine, because you can use different shaders (normal mapping, specular, etc.).


DirectX isn't a graphics API; it's more of a general API which contains several specialized parts, one of them being Direct3D (others are, for example, for sound). And of course it's made by Microsoft.


My ATI FireGL X1 works best in OpenGL, because ATI writes custom drivers for 3ds Max. It is not good with textured views, but in wireframe it totally flies, seemingly regardless of how many polys you throw at it. I've had some seriously large files and been able to navigate around easily this way.

I sometimes find it difficult to get my head around the marketing of graphics cards. It seems like they are attempting to blind you with 'insane' FPS scores in Halo and X million triangles per second. Maybe someone can explain the relevance of this in terms of 3D work, because I can't, certainly not from the 'gaming' cards I've tried over the years. For this reason I'd tend to trust something like the ATI, because you know their engineers built it with a professional purpose in mind.

my ten cents.


Well, gaming cards and cards built for CG work like Max or Maya are two totally separate cards built for two separate things. Although you would think that a great gaming card would also be good for modelling, such is not the case.

Modelling cards: I don't really know AS much about them, but they are built for stuff like raytracing, and I think they also help with the rendering (splitting some of the load with the CPU). So they have drivers tuned for maximum efficiency on calculation-intensive work: lighting, weird materials, lots of polys, speeding up viewports. For games, which mainly just need poly-chugging over and over, they aren't built very well; they are for higher-end calculations. It's more like there are multiple processors in the card, each built for certain things, instead of just one for the chugging.

Gaming cards: built for speed. Most of the calculations are "simpler" (no need for much raytracing or complicated stuff); they are intended to draw the image, wipe it off fast and dirty, and repeat, and the drivers are tuned to just chug through the polys. So they have lots of memory and a processor that chugs, and not much else.

That's why the benchmark for gaming cards is how well they chug through games: higher FPS (frames per second) = faster chugging = better gaming experience.
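To put those marketing numbers in rough perspective, here is a back-of-the-envelope conversion from a triangles-per-second rating to a per-frame polygon budget. The 100-million rating and 60 fps target below are made-up illustrative figures, not any particular card's spec:

```python
# Hedged sketch: converting a marketing "triangles per second" rating into a
# rough per-frame polygon budget. Both numbers are illustrative assumptions.

def per_frame_budget(triangles_per_second: float, fps: float) -> float:
    """Theoretical maximum triangles the card could draw in one frame."""
    return triangles_per_second / fps

budget = per_frame_budget(100_000_000, 60)
print(f"~{budget / 1_000_000:.2f} million triangles per frame")  # ~1.67 million
```

In practice real scenes hit memory bandwidth, state changes, and driver overhead long before that theoretical peak, which is part of why the rating rarely matches how a viewport actually feels.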

They are different cards built for different purposes: one for simple chugging and one for more complicated chugging.


They are different cards built for different purposes: one for simple chugging and one for more complicated chugging.

I knew there was a scientific explanation for that! :wink:


lol, oh I'm sure there is, but that's the general idea.

Shush, you.


Well, you'll find most professional cards are based on gaming cards anyhow (particularly the nVidia and ATI models); the difference is more to do with how they cope with wireframes than anything else. Render cards are different again.

As it stands, Direct3D is the best-supported mode for Max; you can set your viewports up so they show quite a decent representation of your material in real time with the Direct3D drivers (more than just a diffuse texture: bumps, specularity, glossiness, transparency).

It also seems that Direct3D can show textures much more clearly, which is useful for mapping blueprints to planes for reference.

There are some funny 'features' in Direct3D mode; for instance, it doesn't seem to be able to show two-sided polygons.


Still want a Matrox Parhelia :drool:


Largo, I think you are a little confused… graphics cards may be used for viewport display, but when it comes to rendering, that's all software. We are only just starting to see hardware to assist with rendering, such as the PURE hardware raytracer, which starts at $7000 a card…


No, I know that MOST cards help only with viewport display. I was wondering if the workstation cards (like the FireGL) help with rendering, since I KNOW they are specialized for the more complicated calculations (they don't do games very well at all; they're not built for that), so I wasn't sure whether or not those calculations included helping with the rendering.

And you're right, the workstation cards are bloody expensive (they can get up into the thousands).

And, um, rendering uses the CPU (i.e. hardware). The software is, you know, Max, Windows, the environment, what sets up the calculations, but the hardware (the CPU) is what actually has to do them…


FireGL’s are essentially Radeons, just as Quadros are essentially Geforces.


And they play games perfectly well, thank you very much :slight_smile:


And, um, rendering uses the CPU (i.e. hardware). The software is, you know, Max, Windows, the environment, what sets up the calculations, but the hardware (the CPU) is what actually has to do them…

:slight_smile: Yes, I know what the difference between software and hardware is. The point is that you don't need ANY software at all; you could have 100% hardware, and things would run a lot faster, but people wouldn't like having to install a new card for every program they bought (actually, I wouldn't mind having a Maya chip :P). You could write software to do everything that your graphics card does; it would just be a lot slower, and it would use a lot of RAM.

The purpose of the graphics card is to hard-wire a bunch of functions used for rapid rendering and give them some dedicated memory, so that these things don't slow down regular computer operations so much. But the functions that are built in are completely different from the functions you would want for high-quality raytraced renders, so the card does not help final rendering at all, only viewport rendering.
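The point that a GPU's built-in functions could all be done in software, just far more slowly, can be made concrete with a toy example. Below is a minimal software triangle rasterizer sketch; every name in it is illustrative, and a real software display driver does the same kind of per-pixel edge testing, only heavily optimized:

```python
# Toy software rasterizer: fills a triangle on a pixel grid using signed-area
# edge tests. Illustrative only; real drivers do this far more efficiently.

def edge(ax, ay, bx, by, px, py):
    # Signed area: which side of the directed edge A->B the point P lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(width, height, v0, v1, v2):
    """Return the set of (x, y) pixels whose centers fall inside the triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all edge tests agree in sign (either winding order).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

# A triangle covering the lower-left half of an 8x8 grid.
pixels = rasterize_triangle(8, 8, (0, 0), (8, 0), (0, 8))
```

Running those two nested Python loops for every pixel, every frame, is exactly the "a lot slower and using a lot of RAM" trade-off described above; the card hard-wires this loop instead.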


I was wondering if the workstation cards (like the FireGL) help with rendering

I'm running the FireGL X2 256 (AGP). It has four geometry engines and eight parallel rendering pipelines (the newer PCI models have 8 and 8 = 16). While it definitely offers unbelievable performance onscreen, the rendering capabilities are where this puppy really makes a difference. I'm not even running it in a workstation, just a Dell Optiplex GX260 with a P4 2.5 and 2GB of standard RAM. Even with those system specs I get pretty decent rendering times (consider also that I'm not doing gigantic scenes and/or high-poly modeling…yet!).

The FireGL has hardware-accelerated rendering using OpenGL shading and DirectX 9 HLSL for the following:

- Anti-aliased points and lines or full-scene anti-aliasing (2X, 4X, 6X)
- 3D lines and triangles
- Stipple points
- Two-sided lighting
- Up to 8 light sources
- Directional and local lighting
- OpenGL overlay planes
- Occlusion culling
- 6 user-defined clip planes
- OpenGL polymode functions
- 32-bit (24 + 8-bit stencil) Z-buffer
- Fast Z and color clears
- Full DX9 vertex shader support with 4 vertex units
- Multiple render target support
- Shadow volume rendering acceleration

I've also used a Wildcat VP series and an Nvidia Quadro FX, both of which I sent back to the manufacturers in favor of the FireGL. I had quite a bit of performance and tech trouble with both cards. The FireGL, on the other hand, has been a real pleasure to work with, essentially trouble-free, which should be expected of any workstation card.

As a side note, on the new system I've just built I'm running the ATI X700 Pro 256 PCI-E, which has been great so far, but I've yet to render or model very much on it. The system is currently connected to my 52" Samsung DLP HD monitor in the living room as an entertainment monster: 1920x1080 @ 85Hz! Kind of hard to model from 10 feet away. *grin* While this pic isn't related to the thread, I'm really proud of it, so here's a little eye-candy:

Hope the FireGL info helps! :buttrock:


When you say render, you mean viewport rendering, right? Because your graphics card contributes nothing to your render speeds; that's all CPU & RAM.

  • R


Careful with burn-in!

Also, what about just flashing these cards over to FireGL? You know, the softmod hacks. I've heard they perform just as well.