Hello there!
I just wanted to know why professional cards get bad results when used in games; it seems so odd to me! I can understand that gaming video cards, such as the ATI Radeon family or the Nvidia GeForce series, behave badly in 3D software, but this question puzzles me! Please help.
Hello there!
Because they're built for stability and speed, they don't have all the fancy add-on bits that make games look pretty; that is why they perform badly in games. Having said that, mine performs OK.
At least Quadros perform fine in games; most of them are up to the task like their gaming counterparts, only missing 2%-5% in fps. That doesn't hold for FireGLs and Wildcats. I don't know how the latest Wildcats perform in games, but at least the 7210 is capable of running DX7, and the Wildcat Realizm is fully compatible with DX9b or c. OpenGL should be no question (Quake, Doom 3).
That pretty much gives you all the answers you need.
That PDF doesn't give any answers. It claims only the Quadros do overlays; wrong, the GeForce cards also do multiple overlays. It then claims the GF cards have fewer clip regions; wrong again. Then it shows what antialiased lines look like… the problem is, I can enable full-screen 8x8 AA on my GF card to get even better quality, and it has less of a performance drop than the AA-lines method does. Double-sided polygons? I have them too.
So just what is it I'm still missing?
That one always bugged me. What exactly is the difference between AA lines and FSAA? I would think FSAA would be better, because then your solid viewports have AA as well as your wireframe ones.
As for the rest, I'm not sure. You generally don't end up with many more features at the hardware level; it is mainly the software implementing those features a little differently to gear the card toward pushing as many polys as fast as possible.
FSAA used to be slow: 4x4, for example, would be 16 times slower, so the special method of antialiasing only the lines was much faster. But this is no longer the case. With modern graphics cards you can have 8x8 AA with only a 10-20% performance drop, or you can match the old AA-line quality with 4x4 and suffer only a 5-10% speed hit.
Hardware line AA is now null and void.
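To see where that "16 times slower" figure comes from: brute-force supersampling renders the scene at N x M times the output pixel count and then downsamples, so fill-rate cost scales with the sample count. A minimal back-of-the-envelope sketch (the function name and the patterns chosen are just illustrative, not anything from a real driver API):

```python
# Rough cost model for brute-force supersampled FSAA (illustrative only).
# An n x m sample pattern shades n*m sub-pixels per output pixel before
# downsampling, so the fill-rate cost grows with n*m.

def supersample_cost_factor(n: int, m: int) -> int:
    """Sub-pixels shaded per output pixel for an n x m supersample pattern."""
    return n * m

# The old worst case mentioned above: 4x4 FSAA touches 16x the pixels.
print(supersample_cost_factor(4, 4))  # 16

# 8x8 would be 64x the fill work if done by brute force, which is why
# the cards of this era lean on multisampling tricks instead of true
# supersampling to keep the real-world hit down to 10-20%.
print(supersample_cost_factor(8, 8))  # 64
```

That gap between the theoretical 16x/64x cost and the observed 5-20% hit is exactly why dedicated hardware line AA stopped mattering.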
I see, I kind of thought it was something like that: an old method implemented to squeeze all of the AA performance you could out of older cards.
I'll only speak about the Quadro FX 3000 vs a GeForce card (because I use both).
Even with the "stability", programs will still crash.
The speed isn't THAT much more impressive to be worth the cash difference.
Quadros perform great in games, and look awesome.
GeForce cards are actually very good in all the 3D apps; they're fine in Houdini, Mirai, Clay, ZBrush, XSI, Max, LW, etc.
If you've got cash to burn, go ahead.
If you want to save cash and get a top-of-the-line gaming card, you should be more than happy in the end.
I wonder how that picture changed so dramatically; two years ago the argument wouldn't have leaned so clearly one way. For example, in the Quadro 2 Pro days there were quite some differences in performance and stability across a wide range of programs. But today gaming cards are so powerful that the features which once justified the high price of a professional card don't have that much of an impact anymore. Even software vendors are facing the fact that there are people who like to run their software on average mainstream graphics cards; they won't certify drivers or test anything with them, but at least today, with the Nvidia cards, you have a smooth ride in almost every software package. And there are no guarantees that your software will crash less with a professional video card. My experience is that there are so many other influences that can crash your software, especially when you're working in a network environment, that you won't notice the difference. And take a look at the feature list of a Quadro 2 Pro: it's not that much different from that of a Quadro FX 3000, besides PS and VS. Just my thoughts…
The reason for the dramatic change over the past couple of years is that games have gotten a lot more complex than they were. Games are throwing so many polygons at the cards and doing so many fancy operations that gaming cards have to be fast, and they have to have good drivers that support all of those features at good speeds without bombing out. As a result, what used to be obscure features used only in CG apps are now mainstream and used in games, which means the gaming drivers must support them without a hitch.
I think it is almost to the point that games are pulling ahead of the CG apps in terms of "viewport" features.