It's here. The 2013 Pro gpu roundup at CGCHANNEL.COM

  11 November 2013
I agree with Sentry66 100%.
  11 November 2013
Originally Posted by aglick: My guess is that for most apps (esp. the ones that use OpenGL), gaming cards will be slower in general than their pro graphics counterparts due to the application-specific performance tuning that happens in pro gfx drivers. Of course, looking at price-performance, gaming cards will often score really well.

Most of that tuning can be enabled on consumer cards with a few small tweaks, but yes, GeForce/Radeon cards tend to perform much better in their native DirectX 11 environments with their designated consumer drivers.

Originally Posted by aglick: Productivity is. -And *reliability* seems to be a key for delivering that. On-going and exhaustive testing, driver tuning, bug fixes and certification efforts carried out between the ISVs and the hardware makers. This testing is expensive. It's valuable. It's the primary reason pro cards cost more.

I agree, but stability is just as important at the consumer level: even gamers place extremely high value on a card's ability to hold up under intense load. There are certain markets where zero tolerance is key, and there the added benefits of pro-level cards come into play, but stability should be an all-around thing. Users should be able to buy a consumer-level card with confidence and use it with their DCC of choice, be it Max, Maya, Blender, ZBrush, Modo or whatever. This is especially important now that many DCCs support DirectX 11, where people will likely invest in consumer cards.

Originally Posted by aglick: There seems to be less perceived importance on these issues here in the M&E market -probably for a lot of different reasons. Nobody's saying that gaming cards can't work for some 3D artists/pipelines. They absolutely might. In fact, some really will. -Depends on the specific app and 3D card and driver version and proper settings, etc etc.

I think the issues are still relevant; it's just that they need more justification when you're talking about the difference between $300 and $3000 in this demographic. Coming from a games background, we've always used consumer-level cards because that's what our users have. It lets us replicate the target hardware we'll ship on (or approximate it, if you're working on console), and they've generally run quite well in the DCCs.
My opinions do not represent those of my employer.
  11 November 2013
So true that DX viewports (Nitrous/VP2, etc.) really help level the playing field between gaming cards and pro cards in terms of reliability/performance in ADSK apps, for sure. *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.

Last edited by aglick : 11 November 2013 at 07:34 PM.
  11 November 2013
Thanks for the comment on the W9000. I don't think I could afford two of those GPUs, even with what I'm sure is a lower BTO cost for Apple's machines, but I'm glad I won't be missing much when I opt for their mid-range option, which is based on the W8000. I don't need compute power anyway. For anyone interested, the custom naming of the Mac Pro's GPU is outlined here:
  11 November 2013
Originally Posted by aglick: *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.

PM me any issues you run into and I'll investigate. We've got a bunch of good test scenes with displacement and all the screen-space effects that I can crank up nicely on my card.
My opinions do not represent those of my employer.
  11 November 2013
Originally Posted by aglick: So true that DX viewports (Nitrous/VP2, etc.) really help level the playing field between gaming cards and pro cards in terms of reliability/performance in ADSK apps, for sure. *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.

With what cards was that?

I have friends who have been doing a preposterous amount of previz rendering with those, a lot of it on laptops, and I've played with it myself, and never had any issues.
If anything, DX tends to be incredibly forgiving of gaming cards and run-of-the-mill WHQL drivers in my experience.
Come, Join the Cult - Rigging from First Principles
  11 November 2013
I'm looking forward to the consumer/pro comparison review. While I think nearly all reviews are incomplete, this was one of the more thorough ones I've read of late. Are there things that would make it more complete? Sure. But I find it gives me a good baseline for what to look at.

I just really wish (as we all know this to be true) that drivers could, and should, be provided for both professional and gaming purposes: variants optimized for either case. There is fundamentally no real reason the market is fragmented like it is, other than mistaken business strategies that believe in boxing customers in and building walls.

There, I'm off my soapbox.
-- LinkedIn Profile --
-- Blog --
-- Portfolio --
  11 November 2013
The GPU wants to invade the compute space. Yet all gaming cards are intentionally handicapped via drivers for OpenCL/CUDA, and the unrestricted pro versions of those cards are about 5-10 times more expensive. This is BS on all fronts.
"Any intelligent fool can make things bigger, more complex & more violent..." Einstein
  11 November 2013
On AMD consumer GPUs, compute is only handicapped for double-precision (FP64) operations. I believe the same is true of NV cards. Single-precision FP ops are supported at the full capacity of the cards.

This should make consumer cards useful for many non-scientific/engineering workloads. Most OpenCL/DirectCompute/CUDA functions in DCC/M&E apps (such as rendering, video/FX/codec processing, physics, etc.) typically only make use of single-precision FP.
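To put rough numbers on that SP/DP split, here's a small illustrative sketch (my own, not from the thread). The core counts and clocks are approximate public spec-sheet values, and the 1/24 vs 1/3 DP:SP ratios are the commonly cited driver caps for these particular Kepler cards; treat all of them as assumptions to double-check, not measurements.

```python
# Illustrative only: theoretical peak throughput from shader count, clock,
# and the driver-imposed DP:SP ratio. Spec values are approximate.
def peak_gflops(cores, clock_ghz, flops_per_core=2):
    # 2 FLOPs per core per cycle (fused multiply-add), the usual convention
    return cores * clock_ghz * flops_per_core

cards = {
    # name: (shader cores, clock in GHz, DP rate as a fraction of SP rate)
    "GTX 780":   (2304, 0.90, 1 / 24),  # consumer: DP heavily capped
    "GTX Titan": (2688, 0.88, 1 / 3),   # consumer, but full DP can be enabled
}
for name, (cores, clock, dp_rate) in cards.items():
    sp = peak_gflops(cores, clock)
    print(f"{name}: ~{sp / 1000:.1f} TFLOPS SP, ~{sp * dp_rate / 1000:.2f} TFLOPS DP")
```

The point of the arithmetic is just that the SP column barely differs between the two cards, while the DP column differs by nearly an order of magnitude purely because of the capped ratio.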
  11 November 2013
is that a software limitation or a hardware limitation?
  11 November 2013
Software (drivers), I would think -but I'm not 100% positive, Dave.

Does it matter? Either way, let's look at this for a second.

Lots of companies disable features/capabilities in some of their lower-tier products (cars, phones, etc.). They do so for valid reasons, I believe.

I can understand the impulse some feel to lash out against "unfair" or "deceptive" business practices. But consider this from the perspective of the vendor:

-many customers who want the best DP compute performance also expect and need the highest levels of support/service. Institutional and corporate users in this space are relatively expensive to support. Selling RADEONs (at RADEON prices) and then supporting them at this level probably doesn't make good business sense.

-pro graphics cards for compute also typically use ECC (error-correcting) components in the memory and memory controller. This costs much more than the typical GDDR RAM found on gaming cards. They are also often equipped with MORE memory than their consumer counterparts. Take this product, for instance, which was just announced a few minutes ago.

-There is value in performance, reliability and flexibility. If one really demands the very highest levels of performance (i.e. productivity) for professional work, then free-market dynamics dictate that they will pay more. A lot more. But heck, it's really a small price to pay for the profits this can help create. This is true of virtually all other commercial "tools" or technologies, including hammers, clothing, cameras, automobiles, etc. Why would it be any different for graphics technology?
  11 November 2013
Hello everyone, I am the author of the Pro Graphics review up on CG right now. I just wanted to say thanks to everyone who took the time to read it; I am continually surprised by how quickly the graphics card reviews spread over the internet. First off, I appreciate people's honest opinions and feedback. I admit up front that I am not a hardware or software engineer, nor do I understand many of the more technical features of a lot of the hardware I test. I am a CG artist by profession, so I try to approach my reviews from a non-technical point of view, for those who just want to plug the card in and know how fast it is going to run scenes with X amount of polygons / X amount of textures in the most common off-the-shelf applications. I tend to leave the more technical nitty-gritty to sites like Tom's Hardware and AnandTech. My reviews are from the perspective of an average, run-of-the-mill CG artist, conveying test results in the fashion most artists want to see; at least, that is my intention.

To JaCo: I found your post very helpful, thank you. I am always looking for feedback about what people like and don't like about my reviews; it helps me sort through the facts and decide which ones are important and which ones people want to see. I encourage honest opinions. I don't get a whole lot of comments on the actual article, which makes it tough to know what people do and do not want as far as review content goes.

I would like to clarify a few things to try to alleviate some confusion, if you all would bear with me for a moment.

First off, yes, there is no W9000 or K6000 included in this review, simply because Nvidia and AMD did not provide those cards for review. They feel those cards are geared more towards the scientific/engineering professions and that their benchmark numbers are not relevant to the DCC/entertainment fields. Whether or not this is true is debatable, but those are the reasons they gave me.

The big topic people keep hitting on is the inclusion of consumer-level cards. I am currently working on a consumer-level graphics review. I have not included consumer cards in the past simply because I had not been able to get my hands on any GeForce or Radeon cards until recently, and as a professional artist and part-time writer living in California, I am broke 90% of the time, so I can't afford to buy those cards myself to test. So yes, I will have a consumer card review for you all in the near future. It won't directly compare consumer cards to pro cards, but you will be able to reference the pro review against the consumer review, as they will mostly use the same benchmarks.

Also, both Nvidia and AMD prefer that pro cards and consumer cards be kept to their own reviews and not put into the same review together. Now, before all the conspiracy theorists start screaming PRO CARDS ARE A SCAM, I think it is more that these are two completely different markets, and when you start mixing products aimed at different markets into a single lumped review, things can get confusing. Ask any marketing executive for ANY product line, not just computer hardware, and they will tell you that. It is not much of an issue anyway: as I mentioned, the consumer card review will use mostly the same benchmarks as the pro card review, so you can compare between the reviews once the consumer review goes live.

I am only at the beginning of the benchmarking portion of the GeForce/Radeon review, but the early results so far are that the pro cards are faster in these applications, and I am also seeing several weird UI glitches in both 3ds Max and Maya 2013/2014. We will get into that more in the actual consumer review.

Next, there are some technical aspects of the pro cards that I have not mentioned, such as 30-bit color output (a reader actually posted this one in the article comments). It is a great point that I neglected to touch upon, as I don't have a 30-bit display to actually test the feature with (I am in the process of fixing that as we speak). Some of the more specific features that don't directly relate to 3D performance in DCC applications I don't have a lot of experience with, so I chose not to risk inaccurate information by diving too deeply into subject matter I am not totally familiar with. But again, from what I am seeing, these are things people want to know more about, so I will look to include more of them in the future.

As for some quotes that people feel are inaccurate: "Second, the GPU chips on pro cards are usually hand-picked from the highest-quality parts of a production run." That came directly from the mouths of several Nvidia and AMD reps. Granted, it was a few years ago, but I assume it is still the way they do things. I am not that familiar with either AMD's or Nvidia's quality control practices, so I can only take their comments at face value. If many of you feel there is not enough factual information to back this claim, I would be happy to remove it from future articles. Opinions?

Secondly: "The first of these is that pro cards typically carry much more RAM than their consumer equivalents: important for displaying large datasets in 3D applications, and even more so for GPU computing." For this one, I think my wording is what is confusing people. I am comparing pro cards to consumer cards based on their model type, not from a cost standpoint. For example, a Quadro K5000 is the pro equivalent of a GTX 680; the K5000 carries 4GB of RAM, the GTX 680 carries 2GB (I think a couple of vendors offer 4GB 680s, but they are not Nvidia reference designs). Likewise, the K6000 is the pro version of the GTX Titan; the K6000 carries 12GB of RAM, the Titan carries 6GB, etc. So again, I am comparing them based on their specific models, not on a cost basis. Perhaps the wording is coming across differently; maybe I should be comparing based on cost? Again, your opinions?

As for the SLI section, I am sure there have already been several tests of DCC applications with SLI, but until now I have not had a pair of same-model Quadro cards to test it myself, and I prefer to only include topics and tests I have first-hand experience with. Maybe this is a redundant test given how long SLI and CrossFire tech has been around, but readers have asked for these benchmarks in the past, so I thought it would be an interesting addition, and it further confirms those findings.

As for testing methodology, you have another great point: I will be adding a section on my testing methods as well as driver versions. As for my methods, results (both frame rates and render times) are averaged over 5-10 testing sessions, and I can tell you the values do not span a very wide range. FPS values are typically within 3-6 FPS between sessions, and render times usually fall within one to two tenths of a second between sessions. This seems to be very relevant information, so I will be adding it to the article. Thanks for bringing this one to my attention.
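As a minimal sketch of that kind of aggregation (my own illustration, not the reviewer's actual harness; the per-session FPS figures below are made up), averaging the sessions and reporting their spread might look like:

```python
import statistics

def summarize_runs(samples):
    """Return the mean of benchmark samples and their spread (max - min)."""
    return statistics.mean(samples), max(samples) - min(samples)

# Hypothetical per-session viewport FPS readings for one card/scene combo
fps_runs = [62.1, 60.4, 63.0, 61.8, 60.9]
mean_fps, spread = summarize_runs(fps_runs)
print(f"mean {mean_fps:.1f} FPS, spread {spread:.1f} FPS over {len(fps_runs)} runs")
```

Reporting the spread alongside the mean is what makes the "values are within 3-6 FPS of each session" claim verifiable by readers.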

So I hope this clarifies some things for you all. Again, I appreciate any feedback you have; it only helps my reviews get better over time. It seems the general consensus is that you like the benchmarks performed with actual 3D scenes in off-the-shelf DCC applications, and that what you want in addition is test results from consumer-level cards, plus more in-depth sections on deeper hardware features and testing methodologies. Does that sound correct?

So thanks again, everyone, for taking the time to check out the review. If you have any additional comments or questions, please feel free to post them in the article's comments section (I get notifications when you do, so it is easier to collect information from there), or email me directly. All your concerns and feedback help make my articles better. Thanks again!

  11 November 2013
Originally Posted by mustique: The GPU wants to invade the compute space. Yet all gaming cards are intentionally handicapped via drivers for OpenCL/CUDA, and the unrestricted pro versions of those cards are about 5-10 times more expensive. This is BS on all fronts.

You might be exaggerating a bit.
As Adam pointed out the restrictions are rather insignificant in many regards (on the OCL/CUDA front).
DP floats are rarely used to begin with, and when they are, it's usually within restricted domains in terms of both application and field, and that's the only part that is artificially crippled. Even then, some gaming cards get a much less stringent constraint than others (e.g. the nVIDIA Titan has a very minor unit-fetch restriction compared to the 780; I'm not sure what the numbers are for AMD, so I can't speak for that).

And 5-10 times more expensive? Nah, sorry, that's over-dramatizing it. Usually it's 1.5 to 3. A Titan is about a grand, and a K5000 is definitely not 5 grand (it can often be found below 2k), let alone 10.

The single most expensive card on the market is the K6000, which retails in large shops for 4.7-4.8k and can be had for 4.4-4.5 from many resellers. But that's largely because of the top-of-the-line premium, plus the 12GB of RAM with relatively low power draw, which is ridiculously expensive to manufacture at this point. There's no gaming equivalent to it, but even if you compared it to a 780 (which is three steps down) you wouldn't reach 10x.

I'm playing devil's advocate a bit here; I'm not a huge fan of the artificial distinction between pro and consumer cards myself, and I also find the gap excessive. But it has come down a metric ton from just a few years ago (and you could soft-mod back then), and it's nowhere near the ratios you mention.

@Adam: It's 100% the ID tag. It was proven recently by resoldering a GTX into a Quadro 5000 (Fermi). Some people still throw around the idea that the chips are also first pick, much like Intel bins and brands its CPUs, but I have yet to see or hear any evidence of this. It's always passing mentions and individual comments; I have never found a line from nVIDIA confirming it, and if they could pile more reasons to buy a Quadro on top of the flimsy ones they have, I believe they would.
Come, Join the Cult - Rigging from First Principles

Last edited by ThE_JacO : 11 November 2013 at 01:54 AM.
  11 November 2013
Originally Posted by aglick: Software (drivers), I would think -but I'm not 100% positive, Dave.

Does it matter? Either way, let's look at this for a second.

I ask because Apple doesn't have separate pro and consumer drivers, so these types of software limitations don't appear on the OS X side.

Looking around, it seems that Apple's mid-range Mac Pro 2013 GPU is in between the W8000 and W9000, which is promising. You get the bandwidth of the W9000 with 3GB on each GPU and the compute cores of the W8000. That's a sweet spot for me.
  11 November 2013
Originally Posted by cgbeige: I ask because Apple doesn't have a pro driver and a consumer driver so these types of software limitations don't appear on the OS X side.

You don't need multiple drivers to implement the split from the systems-vendor side of things. The difference is in how the card IDs itself; you can then act accordingly on the driver side, either by refusing deployment and splitting the drivers (the Windows strategy) or by using unified drivers that drop or cripple some features (the OS X and Linux strategy).

Windows is such a huge gaming platform that providing different drivers, and things like GeForce Experience, with more differences than just how many units are recruited for DP, probably makes sense to them. OS X and Linux users tend to use a different set of apps and have no DX requirements, not to mention a different mentality in general towards drivers and OS-level software; in that context, unified drivers made more sense, I guess. But DP will still be crippled. (While I can't quite comment first-hand on the Mac side of things, I have done plenty of work on both Windows and Linux with DP FFTs in CUDA, on both my old 680 and my current Titan, and I can confirm the bottlenecking at fetch time is identical.)
Come, Join the Cult - Rigging from First Principles