It's here: the 2013 pro GPU roundup at CGChannel.com


#16

I agree with Sentry66 100%.


#18

So true that DX viewports (Nitrous/VP2, etc.) really help level the playing field between gaming cards and pro cards in terms of reliability/performance in ADSK apps, for sure. *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.


#19

Thanks for the comment on the W9000. I don't think I could afford two of those GPUs, even with what I'm sure is a lower BTO cost for Apple's machines, but I'm glad I won't be missing much when I opt for their mid-range option, which is based on the W8000. I don't need compute power anyway. For anyone interested, the custom naming of the Mac Pro's GPUs is outlined here:

http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/


#21

With what cards was that?

I have friends who have been doing a preposterous amount of previz rendering with those, a lot of it on laptops, and I've played with it myself without ever hitting any issues.
If anything, DX tends to be incredibly forgiving of gaming cards and run-of-the-mill WHQL drivers in my experience.


#22

I'm looking forward to the consumer/pro comparison review… while I think nearly all reviews are incomplete, this was one of the more complete ones I've read as of late. Are there things that could make it more complete? Sure… but I find it gives me a good baseline for what to look at.

I just really wish (as we all know this to be true) that drivers could (and should) be provided for both professional and gaming purposes… variants optimized for either case. There is fundamentally no real reason why the market is fragmented the way it is other than misguided business strategies built on boxing customers in and building walls.

There, I'm off my soapbox.


#23

The GPU wants to invade the compute space. Yet all gaming cards are intentionally handicapped via drivers for OpenCL/CUDA. And the unrestricted pro versions of those cards are about 5-10 times more expensive. This is BS on all fronts.


#24

On AMD consumer GPUs, compute is only handicapped for double-precision (64-bit float) operations. I believe the same is true of NV cards. Single-precision FP ops run at the full capacity of the cards.

This should make consumer cards useful for many non-scientific/engineering workloads. Most OpenCL/DirectCompute/CUDA functions in DCC/M&E apps (such as rendering, video/FX/codec processing, physics, etc.) typically only make use of SP FP.
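
If you want to see what a particular card actually exposes, the quickest sanity check is to query the device from OpenCL itself. A minimal sketch, assuming the pyopencl package and a working OpenCL driver (neither is mentioned above, so treat this purely as an illustration):

```python
import pyopencl as cl

# List every OpenCL device and whether its driver advertises 64-bit floats.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        has_fp64 = "cl_khr_fp64" in dev.extensions
        print("%s / %s: DP %s, %d MB global memory"
              % (platform.name, dev.name,
                 "exposed" if has_fp64 else "not exposed",
                 dev.global_mem_size // (1024 * 1024)))
```

Whether DP is merely exposed says nothing about how fast it runs, of course; that is where the driver-level throttling discussed here comes in.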


#25

Is that a software limitation or a hardware limitation?


#26

Software (drivers), I would think - but I'm not 100% positive, Dave.

Does it matter? Either way, let’s look at this for a second.

Lots of companies disable features/capabilities in some of their lower-tier products (cars, phones, etc.). They do so for valid reasons, I believe.

I can understand the impulse some feel to lash out against “unfair” or “deceptive” business practices. But consider this from the perspective of the vendor:

-Many customers who want the best DP compute performance also expect and need the highest levels of support/service. Institutional and corporate users in this space are relatively expensive to support. Selling Radeons (at Radeon prices) and then supporting them at this level probably doesn't make good business sense.

-Pro graphics cards for compute also typically use ECC (error-correcting) components in the memory and memory controller, which cost much more than the typical GDDR RAM found on gaming cards. They are also often equipped with MORE memory than their consumer counterparts. Take this product, for instance, which was just announced a few minutes ago: http://www.hpcwire.com/off-the-wire/amd-introduces-new-firepro-s10000/

-There is value in performance, reliability and flexibility. If someone really demands the very highest levels of performance (i.e. productivity) for professional work, then free-market dynamics dictate that they will pay more. A lot more. But heck, it's really a small price to pay for the profits this can help create. This is true of virtually all other commercial "tools" or technologies, including hammers, clothing, cameras, automobiles, etc. Why would it be any different for graphics technology?


#27

Hello everyone, I am the author of the pro graphics review up on CG Channel right now. I just wanted to say thanks to everyone who took the time to read it; I am continually surprised by how quickly the graphics card reviews spread across the internet.

First off, I appreciate people's honest opinions and feedback. I admit up front that I am not a hardware or software engineer, nor do I understand many of the more technical features of a lot of the hardware I test. I am a CG artist by profession, so I try to approach my reviews from a non-technical point of view, for those who just want to plug the card in and know how fast it is going to run scenes with X amount of polygons / X amount of textures in the most common off-the-shelf applications. I tend to leave the more technical nitty-gritty to sites like Tom's Hardware and AnandTech. My reviews are written from the perspective of an average, run-of-the-mill CG artist, conveying test results in the way most artists want to see them - at least, that is my intention.

To JaCo: I found your post very helpful, thank you. I am always looking for feedback about what people do and don't like about my reviews; it helps me sort through the facts and decide which ones are important and which ones people want to see. I encourage honest opinions. I don't get a whole lot of comments on the actual article, which makes it tough to know what people do and do not want as far as review content goes.

I would like to clarify a few things to try to alleviate some confusion, if you all would bear with me for a moment.

First off, yes, there is no W9000 or K6000 included in this review, simply because Nvidia and AMD did not provide those cards for reviewing. They feel those are geared more towards the scientific/engineering professions and that their benchmark numbers are not relevant to the DCC/entertainment fields. Whether or not this is true is debatable, but those are the reasons they gave me.

The big topic that people keep hitting on is the inclusion of consumer-level cards. I am currently working on a consumer-level graphics review. I have not included them in the past simply because I have not been able to get my hands on any GeForce or Radeon cards until recently, and as a professional artist and part-time writer living in California, I am broke 90% of the time, so I can't afford to buy those cards myself to test :slight_smile: So yes, I will have a consumer card review for you all in the near future. It won't directly compare consumer cards to pro cards, but you will be able to reference the pro review against the consumer review, as they will mostly use the same benchmarks.

Also, both Nvidia and AMD prefer that pro cards and consumer cards be kept to their own reviews and not put into the same review together. (Now, before all the conspiracy theorists start screaming PRO CARDS ARE A SCAM, I think it is more along the lines of the fact that they are two completely different markets, and when you start mixing products aimed at different markets into a single lumped review, things can get confusing. Ask any marketing executive for ANY product line, not just computer hardware, and they will tell you that. It is not much of an issue anyway: as I mentioned, the consumer card review will use mostly the same benchmarks as the pro card review, so you can go back and compare between the reviews once the consumer review goes live.) I am only at the beginning of the benchmarking portion of the GeForce / Radeon review, but early results so far are that the pro cards are faster in these applications, and I am also seeing several weird UI glitches in both 3ds Max and Maya 2013 / 2014, but we will get into that more in the actual consumer review.

Next, there are some technical aspects of the pro cards that I have not mentioned, such as 30-bit color output (a reader actually posted this one in the article comments), and it is a great point that I neglected to touch upon, as I don't have a 30-bit display to actually test the feature with (I am in the process of fixing that as we speak). Some of the more specific features that don't directly relate to 3D performance in DCC applications I don't have a lot of experience with, so I chose not to risk giving inaccurate information by diving too deeply into subject matter I am not totally familiar with. But again, from what I am seeing, these are things people do want to know more about, so I will be looking to include more of them in the future.

As for some quotes that people feel are inaccurate: "Second, the GPU chips on pro cards are usually hand-picked from the highest-quality parts of a production run." That came directly out of the mouths of several Nvidia and AMD reps. Granted, it was a few years ago, but I assume it is still the way they do things; I am not that familiar with either AMD's or Nvidia's quality-control practices, so I can only take their comments at face value. If many of you feel there is not enough factual information to back this claim, I would be happy to remove it from future articles. Opinions?

Secondly: "The first of these is that pro cards typically carry much more RAM than their consumer equivalents: important for displaying large datasets in 3D applications, and even more so for GPU computing." For this one, I think my wording is what is confusing people. I am comparing pro cards vs. consumer cards based on their model type, not from a cost standpoint. For example, the Quadro K5000 is the pro equivalent of the GTX 680: the K5000 carries 4GB of RAM, the GTX 680 carries 2GB (I think there are a couple of vendors that offer 4GB 680s, but they are not Nvidia reference designs). Likewise, the K6000 is the pro version of the GTX Titan: the K6000 carries 12GB of RAM, the Titan carries 6GB, and so on. So again, I am comparing them based on their specific models, not on a cost basis. Perhaps the wording is coming across differently - maybe I should be comparing based on cost? Again, your opinions?

As for the SLI section, I am sure there have already been several tests of DCC applications with SLI, but until now I have not had a pair of same-model Quadro cards to test it myself, and I prefer to only include topics and tests that I have first-hand experience with. Maybe this is a redundant test given that SLI and CrossFire tech has been around for so long now, but I have had readers ask for these benchmarks in the past, so I thought it would be an interesting addition, and it is just further confirmation of the existing findings.

As for testing methodology, you have another great point: I will be adding a section on my testing methods as well as driver versions. As for the methods themselves, results (both frame rates and render times) are averaged over 5-10 testing sessions, and I can tell you the values do not span a very wide range: FPS values are typically within 3-6 FPS between sessions, and render times usually fall within 1-2 tenths of a second between sessions. It seems this is very relevant information, so I will be adding it to the article; thanks for bringing this one to my attention.
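
(For what it's worth, the aggregation described above boils down to something like the toy sketch below - the numbers are made up purely to illustrate the averaging and the session-to-session spread:)

```python
# Hypothetical per-session FPS results for one card/scene combination.
fps_runs = [118.2, 121.5, 116.9, 120.0, 119.3]

average = sum(fps_runs) / len(fps_runs)
spread = max(fps_runs) - min(fps_runs)
print("average FPS: %.1f, session-to-session spread: %.1f" % (average, spread))
```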

So I hope this clarifies some things for you all. Again, I do appreciate any feedback you have; it only helps my reviews get better over time. It seems like the general consensus is that you like the benchmarks performed with actual 3D scenes in off-the-shelf DCC applications, and that what you want in addition is test results from consumer-level cards, plus more in-depth sections on deeper hardware features and testing methodologies. Does that sound correct?

So thanks again, everyone, for taking the time to check out the review. If you have any additional comments or questions, please feel free to post them in the article's comments section (I get notifications when you do, so it is easier to collect information from there), or email me directly. Again, all your concerns and feedback help make my articles better. Thanks again!

Jason


#28

You might be exaggerating a bit.
As Adam pointed out, the restrictions are rather insignificant in many regards (on the OCL/CUDA front).
DP floats are rarely used to begin with, and when they are, it's usually within restricted domains in terms of both application and field, and that's the only part that is artificially crippled. Even then, some gaming cards get a much less stringent constraint than others (e.g. the nVIDIA Titan has a very minor unit fetch restriction compared to the 780; I'm not sure what the numbers are for AMD, so I can't speak for that).

And 5-10 times more expensive? Nah, sorry, that's over-dramatizing it. Usually it's 1.5 to 3x. A Titan is about a grand, and a K5000 is definitely not 5 grand (it can often be found for under 2k), let alone 10.

The single most expensive card on the market is the K6000, which retails in large shops for 4.7-4.8k and can be had for 4.4-4.5k from many resellers, but that's largely because of the top-of-the-line premium, plus the 12GB of RAM coupled with relatively low power draw, which is ridiculously expensive to manufacture at this point. There's no gaming equivalent to it, but even if you wanted to compare it to a 780 (which is three steps down) you wouldn't make it to 10x.

I'm playing devil's advocate a bit here: I'm not a huge fan of the artificial distinction between pro and consumer cards myself, and I also find the gap excessive, but it has both come down a metric ton from just a few years ago (and you could soft-mod back then), and it's nowhere near the ratios you mention.

@Adam: It's 100% the ID tag. It was proven recently by resoldering a GTX into a Quadro 5000 (Fermi). Some people still throw around the idea that the chips are also first pick, much like Intel bins and brands its CPUs, but I have yet to see or hear any evidence of this. It's always passing mentions, individuals, comments, but I have never found a line from nVIDIA confirming it, and if they could pile more reasons to buy a Quadro onto the flimsy ones they have, I believe they would.


#29

I ask because Apple doesn't have separate pro and consumer drivers, so these types of software limitations don't appear on the OS X side.

Looking around, it seems that Apple’s mid-range Mac Pro 2013 GPU is in between the W8000 and W9000, which is promising. You get the bandwidth of the W9000 with 3GB on each GPU and the compute cores of the W8000. That’s a sweet spot for me.


#30

You don't need multiple drivers to implement the split from a system vendor's side of things. The difference is how the card IDs itself; you can then act accordingly on the driver side, either by refusing deployment and splitting the drivers (the Windows strategy) or by using unified drivers that drop or cripple some features (the OS X and Linux strategy).
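
To make that concrete: the strings below are the kind of identity information an application (or the driver itself) sees, and - as argued above - this ID is what the pro/consumer split keys off rather than different silicon. A rough sketch, assuming the pyopencl package (my choice of tool, purely for illustration):

```python
import pyopencl as cl

# Print the identity info the driver reports for each device.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print("vendor: ", dev.vendor, hex(dev.vendor_id))
        print("device: ", dev.name)            # e.g. "Quadro K5000" vs "GeForce GTX 680"
        print("driver: ", dev.driver_version)
        print()
```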

Windows is such a huge gaming platform that providing different drivers, and stuff like Experience, with more differences than just how many units are recruited for DP, probably makes sense to them. OS X and Linux users tend to run a different set of apps, have no DX requirements, and have a different mentality in general towards drivers and OS-level software; in that context unified drivers made more sense, I guess, but DP will still be crippled (and while I can't quite comment first-hand on the Mac side of things, I have done plenty of work on both Windows and Linux with DP FFTs in CUDA on both my old 680 and my current Titan, and I can confirm the bottlenecking at fetch time is identical).


#31

Ya, I agree that, because of the optimizations for games in Windows, it makes sense to split the driver, but OS X only needs the one since it's not a gaming platform. So Apple doesn't gimp the driver for gaming cards, and they (and Autodesk) offer full support for gaming cards in Maya, Mudbox, etc. Sure, there's wiggle room, but since you can't force people onto FirePro or Quadro on an iMac, a MacBook Pro or the BTO options for existing Mac Pros, Apple's not likely to offer a handicapped compute setup just to match a Windows gaming driver - and they write a major part of OpenGL and OpenCL for OS X, which AMD and Nvidia plug into. I don't see anything about double-precision floats in OpenCL on OS X being limited by the drivers:

https://developer.apple.com/library/mac/documentation/Performance/Conceptual/OpenCL_MacProgGuide/OpenCL_MacProgGuide.pdf


#32

As far as I know the limitation is present on all platforms. Apple doesn't write the drivers from that far down :slight_smile: they don't get to make the call themselves on narrower DP. Just try some OCL tests that implement DP if you want to confirm. Same for hardware stereo and colour depth per buffer, AFAIK; e.g. if you want hardware stereo in Nuke it's got to be a Quadro.
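
Something along these lines would do as a quick DP test - a rough sketch only, assuming pyopencl and numpy are installed and the driver exposes cl_khr_fp64; the kernel names and loop counts are made up for illustration. The DP/SP time ratio gives a feel for how many units the driver actually recruits for doubles on a given card:

```python
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

SRC = """
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
__kernel void loop_sp(__global float *x) {
    int i = get_global_id(0);
    float v = x[i];
    for (int k = 0; k < 512; ++k) v = v * 1.000001f + 0.000001f;
    x[i] = v;
}
__kernel void loop_dp(__global double *x) {
    int i = get_global_id(0);
    double v = x[i];
    for (int k = 0; k < 512; ++k) v = v * 1.000001 + 0.000001;
    x[i] = v;
}
"""
prg = cl.Program(ctx, SRC).build()

def timed(kernel, dtype, n=1 << 22):
    host = np.ones(n, dtype=dtype)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=host)
    kernel(queue, (n,), None, buf).wait()   # warm-up / JIT compile
    t0 = time.time()
    kernel(queue, (n,), None, buf).wait()
    return time.time() - t0

sp = timed(prg.loop_sp, np.float32)
dp = timed(prg.loop_dp, np.float64)
print("SP %.4fs  DP %.4fs  DP/SP ratio %.1fx" % (sp, dp, dp / sp))
```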

The gamma settings you mention I'm not sure about - as in, I don't actually know what you're referring to. As you know, my hands-on time with Macs is close to nothing, and I don't really deal with that kind of colour management in those fields (in film we take completely different routes and toolsets anyway).


#33

Well, the hardware stereo thing is a hardware limitation. The only reason you can't do 30-bit output on OS X is that Apple has yet to sort it out, on any hardware. It's annoying and I've griped about it for years. But when it comes, I'm willing to bet cash money that it will work on all hardware capable of it, because they won't gimp the drivers.

What are you referring to about the gamma settings?


#34

It's not up to Apple though, it's up to nVIDIA (or AMD). If they decide that the bottom-most layers won't broadcast 10 bits per channel to individual buffers, then they won't.
On Windows, for example, DX can do it because there was no reason to cripple it (DX isn't used much in colour-sensitive apps) and games might exploit/require the feature, but no amount of circumvention will get you the same in OGL unless you buy a Quadro.
Same for certain HW stereo features and output device support (which is why you will usually be hard-pressed to find a comp workstation around any film shop with anything less than a relatively recent Quadro, for a mix of NukeX node acceleration and stereo support).
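
If anyone wants to see what their own driver actually grants, a quick probe is to request a 10-bit-per-channel default framebuffer and read back the depths. A sketch, assuming the GLFW and PyOpenGL Python bindings (my choice of tools, not anything mentioned above) and a legacy/compatibility GL context; on most consumer cards the OpenGL context typically comes back as 8/8/8:

```python
import glfw
from OpenGL import GL

# Ask for a 30-bit (10/10/10/2) default framebuffer.
glfw.init()
glfw.window_hint(glfw.RED_BITS, 10)
glfw.window_hint(glfw.GREEN_BITS, 10)
glfw.window_hint(glfw.BLUE_BITS, 10)
glfw.window_hint(glfw.ALPHA_BITS, 2)
win = glfw.create_window(320, 240, "30-bit probe", None, None)
glfw.make_context_current(win)

# Legacy queries, so this assumes a compatibility (non-core) context.
print("granted bits per channel:",
      GL.glGetIntegerv(GL.GL_RED_BITS),
      GL.glGetIntegerv(GL.GL_GREEN_BITS),
      GL.glGetIntegerv(GL.GL_BLUE_BITS))

glfw.terminate()
```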

Apple does contribute to the drivers for their platform, for sure, but make no mistake, they don't write them from scratch and don't get to decide willy-nilly what's exposed and how across the board. That goes particularly for the OCL/CUDA implementations, where, AFAIK, they don't implement that layer at all and leave it entirely to the mothership.
Of course Apple can strong-arm their hardware providers into offering certain feature sets on certain cards which differ from the stand-alone sale offering, but with such a gaping chasm between all their hardware lines and their new pros I doubt they would even bother.
I know of no workstation-class Mac coming out with anything less than a FirePro, and everything else, with one or two exceptions, seems to be going the way of integrated graphics, so I don't think they will even have to contend with these features at all.

What are you referring to about the gamma settings?

I must have been hallucinating or something. I re-read posts at least a couple of times before replying; I must have jumbled "gaming" with gamma repeatedly and convinced myself I was responding to God knows what… Ignore that :stuck_out_tongue:


#35

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.