It's here: the 2013 pro GPU roundup at CGCHANNEL.COM


#8

I appreciate the mature reply.
No offense taken, and none intended, to you or to the author. I hope the article will be corrected and completed, because hardware info like that is surprisingly rare and it’s a shame to see a solid effort framed in a somewhat poor context (purely in terms of source checking and accessory info). That’s about it.

I’m not questioning or discrediting the numbers (I would have no way to do so, not having access to 80% of those cards, even if I had the inclination) and, believe it or not, I have no bias and more than a little hope that OpenCL will mature and AMD will do well, more so on the CPU and next-gen hybrid cards than on GPUs, for the sake of competition and my interests as a consumer.


#9

Websites always seem to benchmark pro cards against other pro cards, or consumer gaming cards against consumer gaming cards. I think it’s a gross oversight not to include at least one or two gaming cards among a group of professional cards, so people can understand the pros and cons of what’s out there.

For a lot of people, it’s getting harder to justify a pro card, especially when 3D software and rendering companies seem to be going out of their way to make their software run well on gaming cards. Not to mention that gaming cards have the bleeding-edge fastest hardware, which runs better in other apps like Adobe software or game engines.

Features like 10 bpc color are great in theory, except for the part where hardly any software takes advantage of those display color depths… and for a lot of people, the final destination is going to be 8 bpc anyway. IMO it’s better to see banding issues up front than to discover them after the “final” was delivered because you didn’t see the banding on your 10 bpc card.
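
If you want to see where banding will bite before delivery, a quick test is to generate a smooth gradient and quantize it down to 8 bpc. Here’s a minimal Python sketch (assuming NumPy and Pillow are installed; the output filename is arbitrary):

```python
import numpy as np
from PIL import Image

width, height = 1920, 256

# A linear ramp with far more precision than 8 bpc can represent.
ramp = np.linspace(0.0, 1.0, width)

# Quantize to 8 bpc: 1920 columns mapped onto 256 levels leaves
# visible steps of roughly 7-8 pixels each.
ramp_8bit = np.round(ramp * 255).astype(np.uint8)

# Tile the row vertically and replicate across RGB channels.
img = np.tile(ramp_8bit, (height, 1))
img = np.stack([img, img, img], axis=-1)

Image.fromarray(img, mode="RGB").save("banding_test_8bpc.png")
```

View the result full-screen on your delivery display and the 8 bpc steps are exactly the banding you’d otherwise discover after the fact.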

Regardless of what card a person chooses, there are going to be pros and cons. It’s just always a shame when articles don’t bother to include the full range of what’s on the market that many pros are actually considering. Tom’s Hardware has had some nice video card articles lately with a good mixture of cards.


#11

I agree that it will be interesting to see a few consumer cards tested, and I think the author said he’s working on this.

My guess is that for most apps (esp. the ones that use OpenGL), gaming cards will be slower in general than their pro graphics counterparts due to the application-specific performance tuning that happens in pro gfx drivers. Of course, looking at price-performance, gaming cards will often score really well.

One reason to be wary of making direct comparisons is the misconception that performance (or price-performance) is the most important consideration for most pro graphics users. It’s not.

Productivity is. -And reliability seems to be key to delivering that. All the ongoing, exhaustive testing, driver tuning, bug fixing and certification work carried out between the ISVs and the hardware makers is expensive. It’s valuable. It’s the primary reason pro cards cost more.

There seems to be less perceived importance placed on these issues here in the M&E market, probably for a lot of different reasons. Nobody’s saying that gaming cards can’t work for some 3D artists/pipelines. They absolutely might; some really will. -It depends on the specific app, 3D card, driver version, proper settings, etc.

In the CAD/CAM/CAE world (i.e. 80% of the pro graphics market), certification and support (i.e. quick bug fixes, performance optimization, ISV support) continue to be much more of a concern for users.

In any case, forums like this are a great resource for folks looking to research a bit about their apps and needs so they can make smarter buying decisions for themselves.



#12

Isn’t the W9000 starting to get old? I mean, isn’t it last year’s tech?


#13

Apple are going for OpenCL performance. The W9000 is still the king of OpenCL/compute in the pro gfx space.

https://compubench.com/result.jsp?test=CLB10101


#14

Most reviews use benchmarks as the core of the review, with commentary along the lines of “yup, looks like the Quadro did well in this test”. The rest of the review is usually typical fluff like how many video ports the card has or how its fit and finish is. IMO that’s seriously lazy. Any computer geek off the street can click the ‘run benchmark’ button and record the time.

What people want is an actual professional who knows the basics of the different apps to sit down and actually try to work on a real-world heavy scene file. Take 10 minutes to load up a 3ds Max scene and start selecting objects, moving vertices/curve points, changing some materials around, trying out the sculpt tools, getting some particles going. Take note of whether certain cards take a long time to select a heavy poly model or don’t draw something on the screen correctly. Then do the same with Maya, Cinema 4D, etc. People are interested in which cards struggle with these basic functions.
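
That kind of interaction test can even be scripted so it’s repeatable between cards. Here’s a minimal sketch for Maya’s script editor (the heavy test mesh and the operations timed are just placeholders; a real test would load an actual production scene):

```python
import maya.cmds as cmds

# Build a deliberately heavy mesh (~1M faces) to stress the card.
# A real test would use an actual production-heavy scene instead.
cmds.polySphere(name="heavyTestSphere",
                subdivisionsX=1000, subdivisionsY=1000)

# Time a full-object selection on the heavy mesh.
start = cmds.timerX()
cmds.select("heavyTestSphere")
print("Object selection: %.3f s" % cmds.timerX(startTime=start))

# Time a component-level selection (every vertex), which is usually
# where weaker cards/drivers start to stall.
start = cmds.timerX()
cmds.select("heavyTestSphere.vtx[*]")
print("Vertex selection: %.3f s" % cmds.timerX(startTime=start))

# Force and time a viewport refresh.
start = cmds.timerX()
cmds.refresh(force=True)
print("Viewport refresh: %.3f s" % cmds.timerX(startTime=start))
```

Run the same snippet on each card/driver combination and the numbers line up directly, instead of relying on how the interaction “felt”.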

Talking about pro features: how is the stereo 3D on the pro cards? How much faster do Quadros switch to stereo mode than GeForces, and how do AMD and Nvidia compare on the matter? How seamless is the experience? I can tell you it kinda sucks on GeForces, where your screens will flicker and blank out for a few seconds before coming back on in stereo mode. Then it happens all over again when you go back to 2D mode or switch from windowed to fullscreen. Stuff like that, IMO, is what pros who might pay extra for a pro card want to know.

It bothers me when a review raves about a certain card that performed well in benchmarks, you buy it, and within 3 minutes you know it’s not going to work for you. Aside from fiddling with control panel settings, your only options are to try older or certified drivers, or to upgrade your 3D software if you’re using an older version.


#15

I know… But it doesn’t change the fact that the W9000 is “old news” by now, tech-wise. As for OpenCL performance, it will be great when my application is updated to use it :wink:


#16

I agree with Sentry66 100%.


#18

So true that DX viewports (Nitrous/VP2, etc.) really help level the playing field between game cards and pro cards in terms of reliability/performance in ADSK apps, for sure. *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.


#19

Thanks for the comment on the W9000. I don’t think I could afford two of those GPUs, even with what I’m sure is a lower BTO cost for Apple’s machines, but I’m glad I won’t be missing much when I opt for their mid-range option, which is based on the W8000. I don’t need compute power anyway. For anyone interested, the custom naming of the Mac Pro’s GPUs is outlined here:

http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/


#21

With what cards was that?

I have friends who have been doing a preposterous amount of previz rendering with those, a lot of it on laptops, and I’ve played with it myself, and never had any issues.
If anything, DX tends to be incredibly forgiving of gaming cards and run-of-the-mill WHQL drivers, in my experience.


#22

I’m looking forward to the consumer/pro comparison review… While I think nearly all reviews are incomplete, this was one of the more complete ones I’ve read of late. Are there things that would make it more complete? Sure… but I find it gives me a good baseline for what to look at.

I just really wish (as we all know this to be true) that drivers could (and should) be provided for both professional and gaming purposes… variants optimized for either case. There is fundamentally no real reason why the market is fragmented like it is, other than mistaken business strategies that believe in boxing-in and wall-building.

There, I’m off my soapbox.


#23

The GPU wants to invade the compute space, yet all gaming cards are intentionally handicapped via drivers for OpenCL/CUDA. And the unrestricted pro versions of those cards are about 5-10 times more expensive. This is BS on all fronts.


#24

On AMD consumer GPUs, compute is only handicapped for double-precision (64-bit float) operations. I believe the same is true of NV cards. Single-precision FP ops are supported at the full capacity of the cards.

This should make consumer cards useful for many non-scientific/engineering workloads. Most OCL/DirectCompute/CUDA functions in DCC/M&E apps (such as rendering, video/FX/codec processing, physics, etc.) typically make use of SP FP only.
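
If you want to check what your own card advertises, you can query the device extensions through OpenCL. A minimal sketch using pyopencl (assuming the package is installed; note this only shows whether double precision is exposed at all, not how fast it runs):

```python
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        # Double precision is advertised via the cl_khr_fp64 extension
        # (some AMD cards use the vendor cl_amd_fp64 extension instead).
        exts = device.extensions
        has_fp64 = "cl_khr_fp64" in exts or "cl_amd_fp64" in exts
        print("%s | %s | FP64 exposed: %s"
              % (platform.name, device.name, has_fp64))
```

Whether the exposed DP runs at full rate or at a throttled fraction of SP is exactly the consumer/pro distinction being discussed, and that part only shows up in an actual benchmark.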


#25

Is that a software limitation or a hardware limitation?


#26

Software (drivers), I would think, but I’m not 100% positive, Dave.

Does it matter? Either way, let’s look at this for a second.

Lots of companies disable features/capabilities in some of their lower-tier products (cars, phones, etc.). They do so for valid reasons, I believe.

I can understand the impulse some feel to lash out against “unfair” or “deceptive” business practices. But consider this from the perspective of the vendor:

-Many customers who want the best DP compute performance also expect and need the highest levels of support/service. Institutional and corporate users in this space are relatively expensive to support. Selling RADEONs (at RADEON prices) and then supporting them at this level probably doesn’t make good business sense.

-Pro graphics cards for compute also typically use ECC (error-correcting) components in the memory and memory controller. This costs much more than the typical GDDR RAM found on gaming cards. They are also often equipped with MORE memory than their consumer counterparts. Take this product for instance, which was just announced a few minutes ago. http://www.hpcwire.com/off-the-wire/amd-introduces-new-firepro-s10000/

-There is value in performance, reliability and flexibility. If one really demands the very highest levels of performance (i.e. productivity) for professional work, then free market dynamics dictate that they will pay more. A lot more. But heck, it’s really a small price to pay for the profits this can help create. This is true of virtually all other commercial “tools” or technologies, including hammers, clothing, cameras, automobiles, etc. Why would it be any different for graphics technology?


#27

Hello everyone, I am the author of the Pro Graphics review up on CGChannel.com right now. I just wanted to say thanks to everyone who took the time to read it; I am continually surprised by how quickly the graphics card reviews spread over the internet. First off, I appreciate people’s honest opinions and feedback. I admit up front that I am not a hardware or software engineer, nor do I understand many of the more technical features of a lot of the hardware I test. I am a CG artist by profession, so I try to approach my reviews from a non-technical point of view, for those who just want to plug the card in and know how fast it is going to run scenes with X amount of polygons / X amount of textures in the most common off-the-shelf applications. I tend to leave the more technical nitty-gritty to sites like Tom’s Hardware and AnandTech. My reviews are from the perspective of an average run-of-the-mill CG artist, conveying test results in the fashion most artists want to see; at least, that is my intention.

To JaCo: I found your post to be very helpful, thank you. I am always looking for feedback about what people like and don’t like about my reviews; it helps me sort through the facts and decide which ones are important and which ones people want to see. I encourage honest opinions. I don’t get a whole lot of comments on the actual article, which makes it tough to know what people do and do not want as far as review content goes.

I would like to clarify a few things to try to alleviate some confusion, if you all would bear with me for a moment.

First off, yes, there is no W9000 or K6000 included in this review, simply because Nvidia and AMD did not provide those cards for review. They feel that those are geared more towards the scientific/engineering professions and that their benchmark numbers are not relevant to the DCC/entertainment fields. Whether or not this is true is debatable, but those are the reasons they gave me.

The big topic that people keep hitting on is the inclusion of consumer-level cards. I am currently working on a consumer-level graphics review at the moment. I have not included them in the past simply because I had not been able to get my hands on any GeForce or Radeon cards until only recently, and as I am a professional artist and part-time writer, and I live in California, I am broke 90% of the time, so I can’t afford to buy those cards myself to test :slight_smile: So yes, I will have a consumer card review for you all in the near future. It won’t directly compare consumer cards to pro cards, but you will be able to reference the pro review against the consumer review, as they will mostly use the same benchmarks. Also, both Nvidia and AMD prefer that pro cards and consumer cards be kept to their own reviews and not put into the same review together. (Now, before all the conspiracy theorists start screaming PRO CARDS ARE A SCAM, I think it is more along the lines of the fact that they are two completely different markets, and when you start mixing products aimed at different markets into a single lumped review, things can get confusing. Ask any marketing executive for ANY product line, not just computer hardware, and they will tell you that. It is not much of an issue anyway; as I mentioned before, the consumer card review will use mostly the same benchmarks as the pro card review, so you can go back and compare between the reviews once the consumer review goes live.) I am only at the beginning of the benchmarking portion of the GeForce/Radeon review, but the early results so far are that the pro cards are faster in these applications, and I am also seeing several weird UI glitches in both 3ds Max and Maya 2013/2014. But we will get into that more with the actual consumer review.

Next, there are some technical aspects of the pro cards that I have not mentioned, like the 30-bit color output (a reader actually posted this one in the article comments), and it is a great point that I neglected to touch upon, as I don’t have a 30-bit display to actually test the feature with (I am in the process of fixing that as we speak). Some of the more specific features that don’t directly relate to 3D performance in DCC applications I do not have a lot of experience with, so I choose not to risk giving inaccurate information by diving too deeply into subject matter that I am not totally familiar with. But again, from what I am seeing, it seems that these are things people do want to know more about, so I will be looking to include more of them in the future.

As for some quotes that people feel are inaccurate: “Second, the GPU chips on pro cards are usually hand-picked from the highest-quality parts of a production run.” That came directly out of the mouths of several Nvidia and AMD reps. Granted, it was a few years ago, but I assume it is still the way they do things. I am not that familiar with either AMD’s or Nvidia’s quality control practices, so I can only take their comments at face value. If many of you feel that there is not enough factual information to back this claim, I would be happy to remove it from future articles. Opinions?

Secondly: “The first of these is that pro cards typically carry much more RAM than their consumer equivalents: important for displaying large datasets in 3D applications, and even more so for GPU computing.” For this one, I think my wording is what is confusing people, as I am comparing pro cards vs. consumer cards based on their model type, not from a cost standpoint. For example, a Quadro K5000 is the pro equivalent of a GTX 680; the K5000 carries 4 GB of RAM, the GTX 680 carries 2 GB (I think there are a couple of vendors that offer 4 GB 680s, but they are not Nvidia reference designs). Likewise, the K6000 is the pro version of the GTX Titan; the K6000 carries 12 GB of RAM, the Titan carries 6 GB, etc. So again, I am comparing them based on their specific models, not on a cost basis. Perhaps the wording is coming across differently; maybe I should be comparing based on cost? Again, your opinions?

As for the SLI section: I am sure there have been several tests of DCC applications and SLI already, but until now I have not had a pair of same-model Quadro cards to actually test it myself, and I prefer to only include topics and tests that I have first-hand experience with. Maybe this is a redundant test given that SLI and CrossFire tech has been around for so long now, but I have had readers ask for these benchmarks in the past, so I thought it would be an interesting addition, and it is just further confirmation of those facts.

As for testing methodology, you have another great point; I will be adding a section on my testing methods as well as driver versions. As for my methods: results (both frame rates and render times) are averaged over 5-10 testing sessions, and I can tell you that the values do not vary over a very wide range. FPS values are typically within 3-6 FPS between sessions, and render times usually fall within 1-2 tenths of a second between sessions. It seems this is very relevant information, so I will be adding it to the article. Thanks for bringing this one to my attention.
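
For what it’s worth, here is a trivial Python sketch of that averaging, with made-up sample numbers purely for illustration of the kind of spread described above:

```python
# Made-up FPS samples from five hypothetical benchmark sessions.
fps_sessions = [92.0, 95.5, 90.8, 94.2, 93.1]

mean_fps = sum(fps_sessions) / len(fps_sessions)
spread = max(fps_sessions) - min(fps_sessions)

print("Mean FPS over %d sessions: %.1f" % (len(fps_sessions), mean_fps))
print("Session-to-session spread: %.1f FPS" % spread)
```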

So I hope this clarifies some things for you all. Again, I do appreciate any feedback you have; it only helps my reviews get better over time. It seems like the general consensus is that you all like the benchmarks performed with actual 3D scenes in off-the-shelf DCC applications, and what you want in addition to those is test results from consumer-level cards, plus more in-depth sections on deeper hardware features and testing methodologies. Does that sound correct?

So thanks again, everyone, for taking the time to check out the review. If you have any additional comments or questions, please feel free to comment in the article’s comments section (I get notifications when you do, so it is easier to collect information from there), or feel free to email me directly. Again, all your concerns and feedback help to make my articles better. Thanks again!

Jason