It's here. The 2013 pro GPU roundup at CGCHANNEL.COM


#6

Huh? Why would I? I’m very far from being a fan of nVIDIA. I use CUDA over OCL for convenience and context, but I hardly prefer either company to the other, and have a mild dislike for both in terms of corporate and marketing policies.
I’ve also been a long-time non-fan of Quadros. So what I believe you’re trying to imply there (I imagine that’d be that I’m some nVIDIA fanboy and therefore resent some numbers on a website) has no foundation whatsoever.

Jason knows what he’s doing.

Nowhere did I say he doesn’t, but the article doesn’t show that at all; see my notes, which are all pretty factual and easily verifiable.
I deal with facts, and I take measure of demonstrated competence (which is different from making an assumption about the person himself) by those. The article doesn’t show much, regardless of the data presented (which again as mentioned I’m grateful to see available).

His methodology seems sound enough. His results back up what I’ve seen in my own testing and in others’. I think that for the very most part, his observations and analysis, where it matters, are pretty much right on. There may be a few editorial details of the article which may be hard to substantiate (hand-picking ASICs, memory capacity comparisons, etc.), but I’d be interested to know what you found in his work that materially missed the mark or drew misleading conclusions from the data recorded.

Again I’m not entirely sure what you’re getting at.
Where in my post did I say I thought the numbers inaccurate?
I said the journalism is sloppy, important cards are missing, the “facts” are unverified and in some cases plain wrong, and there is VERY important information missing (such as why you would actually HAVE TO pick a quadro/firepro in place of a gtx/radeon before numbers even come into play).

As for the methodology, how can you say it’s sound when there’s no indication whatsoever of what that methodology actually is?
No indication of drivers, number of iterations, or type of average used (mean vs median vs weighted), and so on.
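For what it’s worth, the choice of statistic isn’t a nitpick. A rough Python sketch with made-up frame times shows how much it can move the headline number:

```python
import statistics

# Hypothetical per-iteration frame times (ms) from repeated runs of the
# same benchmark; the 42.7 outlier stands in for a one-off hiccup
# (shader recompile, background task, first-run driver warm-up, etc.).
timings = [16.1, 15.9, 16.3, 16.0, 42.7]

mean_ms = statistics.mean(timings)      # dragged up to ~21.4 by the outlier
median_ms = statistics.median(timings)  # robust to it

print(median_ms)  # 16.1
```

Same raw data, very different conclusion depending on which average a review reports, which is exactly why it should say so.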

If you can make gaming cards work for you and your pipeline, great! But your experience doesn’t/can’t de-legitimize the market for pro-class graphics hardware.

Huh? Man, do you have an axe to grind or something? I didn’t say that anywhere either. You seem to be having a go at me by putting a slew of words I didn’t say in my mouth.

Your assertion that AMD FP cards can’t competently provide stereo, 10-bit color, etc is also not well-supported by the facts. There are many high-end visual simulation and digital media solutions leveraging these features on FirePro cards with perfectly acceptable results.

OK, this is veering completely into the absurd.
I said the GTX cards can’t do that compared to Quadros, not that AMD cards can’t.
I mostly mention nVIDIA cards in my post because that’s what I have applicable experience with; I have little with AMD, so I held off commenting on that front.

Come on, man. :slight_smile: Why not give credit where credit is due? AMD seem to have been doing their homework. Looking at straight up performance and price-performance it looks to me like FirePro has the strongest pro gfx offerings for ADSK 3D and a few other toolsets.

But individual mileage may vary, right?

You completely misinterpreted (I assume unintentionally) my post and twisted everything I said entirely out of context and meaning.
In not ONE sentence did I draw ANY comparison of any sort between AMD and nVIDIA, and re-reading my post there is nothing that might even remotely suggest one, unless you are somehow projecting a bias I don’t have onto it. As I said, if I mention nVIDIA more prominently it’s because I have considerable experience with their brand and cards, and nowhere near as much with AMD’s, so I didn’t feel qualified to comment there with objectivity and factual accuracy.

Again you sound like you have a vested interest. Can I ask what your current employer is? I seem to sense a bias.
You are very close to an ad hominem in your post.
If I had to be as assuming and aggressive as you are I would probably imagine you work for AMD or for some associate of it and are particularly defensive of this article because it paints them in a somewhat favourable light. Would that be an accurate assumption?

Edit:
Oops, apparently it would be accurate:
2013-10-1
AMD Selects SAPPHIRE as Exclusive Global Distribution Partner for AMD FirePro™
Aren’t you an evangelist for them?


#7

OK, fair enough. Maybe I was interpreting your words with a defensive ear.

I work for Sapphire. And I am biased. I’m an AMD fan. I also have a kneejerk reaction against kneejerk reactions.

It sounded to me like you were discrediting the article, the author and the veracity of his conclusions. If you were not, then I apologize for being overly sensitive.

Much Respect,

-Adam


#8

I appreciate the mature reply.
No offense taken, none intended, to you or to the author. I hope the article will be corrected and completed because hardware info like that is surprisingly rare and it’s a shame to see a solid effort framed in a somewhat poor context (purely in terms of source checking and accessory info), that’s about it.

I’m not questioning or discrediting the numbers (I would have no way to do so not having access to 80% of those cards, if I had the inclination to begin with) and, believe it or not, have no bias and more than a little hope that OCL will mature and AMD will do well, more so on the CPU and next gen hybrid cards than on GPUs, for the sake of competition and my interests as a consumer.


#9

Websites always seem to benchmark pro cards against other pro cards or consumer gamer cards against consumer gamer cards. I think it’s a gross oversight to not include at least one or two gaming cards among the group of professional cards so people can understand the pros and cons of what’s out there.

For a lot of people, it’s getting harder to justify a pro card, especially when 3D software and rendering companies seem to be going out of their way to make their software run well on gaming cards. Not to mention gaming cards have the bleeding-edge fastest hardware, which runs better in other apps - like Adobe software or game engines.

Features like 10 bpc color are great in theory, except the part where hardly any software takes advantage of those display color depths…and for a lot of people, the final destination is going to be 8bpc anyway. IMO it’s better to see banding issues up front than to discover them after the “final” was delivered because you didn’t see the banding on your 10bpc color card.
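A quick back-of-envelope on what those extra bits actually buy: quantize a dense gray ramp to each bit depth and count the distinct steps that survive (pure Python, just to illustrate the 8bpc vs 10bpc gap).

```python
# Quantize a dense linear ramp (4096 samples from 0.0 to 1.0) to an
# n-bit-per-channel display and count the distinct levels that remain.
def levels(ramp_samples, bits):
    top = (1 << bits) - 1  # 255 for 8bpc, 1023 for 10bpc
    return len({round(x / (ramp_samples - 1) * top) for x in range(ramp_samples)})

print(levels(4096, 8))   # 256 gray steps -> visible banding on wide gradients
print(levels(4096, 10))  # 1024 steps -> bands four times narrower
```

So 10bpc gives you 4x the steps on the same gradient - which is exactly why banding that’s invisible on a 10bpc setup can reappear in an 8bpc delivery.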

Regardless of what card a person chooses, there are going to be pros and cons. It’s just always a shame when articles don’t bother to include the full range of what’s on the market that many pros are actually considering. Tom’s Hardware has had some nice video card articles lately with a good mixture of cards.


#11

I agree that it would be interesting to see a few consumer cards tested, and I think the author said he’s working on this.

My guess is that for most apps (esp. the ones that use OpenGL), gaming cards will be slower in general than their pro graphics counterparts due to the application-specific performance tuning that happens in pro gfx drivers. Of course, looking at price-performance, gaming cards will often score really well.

One reason to be wary of making direct comparisons is that there is a misconception/misperception that performance (or price-performance) is the most important consideration for most pro graphics users. It’s not.

Productivity is. -And reliability seems to be key for delivering that. The ongoing and exhaustive testing, driver tuning, bug fixing and certification work carried out between the ISVs and the hardware makers is expensive. It’s valuable. It’s the primary reason pro cards cost more.

There seems to be less perceived importance on these issues here in the M&E market -probably for a lot of different reasons. Nobody’s saying that gaming cards can’t work for some 3D artists/pipelines. They absolutely might. In fact, some really will. -Depends on the specific app and 3D card and driver version and proper settings, etc etc.

In the CAD/CAM/CAE world (i.e. 80% of the pro graphics market), certification and support (i.e. quick bug fixes, performance optimization, ISV support) seem to continue to be much more of a concern for users.

In any case, forums like this are a great resource for folks looking to research a bit about their apps and needs so they can make smarter buying decisions for themselves.

(edited)


#12

Isn’t the W9000 starting to get old? I mean, isn’t it last year’s tech?


#13

Apple are going for OpenCL performance. W9000 is still the king of OpenCL/compute in the pro gfx space.

https://compubench.com/result.jsp?test=CLB10101


#14

Most reviews use benchmarks as the primary core of the review, with commentary along the lines of “yup, looks like the Quadro did well in this test”. The rest of the review is usually typical fluff like how many video ports the card has or its fit and finish. IMO that’s seriously lazy. Any computer geek off the street can click the ‘run benchmark’ button and record the time.

What people want is an actual professional who knows enough of the basics of the different apps to sit down and actually try to work on a real-world heavy scene file. Take 10 minutes to load up a 3ds Max scene and start selecting objects, moving vertices/curve points, changing some materials around, trying out the sculpt tools, getting some particles going. Take note of whether certain cards take a long time to select a heavy poly model or don’t draw something on the screen correctly. Then do the same with Maya, Cinema 4D, etc. People are interested in which cards struggle with these basic functions.

Talking about pro features, how is the stereo 3D on the pro cards? How much faster do Quadros switch to stereo mode than GeForces, or AMD vs Nvidia on the matter? How seamless is the experience? I can tell you it kinda sucks on GeForces: your screens will flicker and blank out for a few seconds before coming back on in stereo mode, then it happens all over again when you go back to 2D mode or switch from windowed to fullscreen. Stuff like that, IMO, is what pros who might pay extra for a pro card want to know.

It bothers me when a review raves about a certain card that performed benchmarks well, you buy it, and within 3 minutes you know it’s not going to work for you. Aside from fiddling with control panel settings, your only option is to try older or certified drivers, or upgrade your 3D software if you’re using an older version.


#15

I know… but it doesn’t change the fact that the W9000 is “old news” by now, tech-wise. As for OpenCL performance, that will be great when my application is updated to use it :wink:


#16

I agree with Sentry66 100%.


#18

So true that DX viewports (Nitrous/VP2, etc.) really help level the playing field between game cards and pro cards in terms of reliability/performance in ADSK apps. *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.


#19

Thanks for the comment on the W9000. I don’t think I could afford two of those GPUs, even with what I’m sure is a lower BTO cost for Apple’s machines, but glad I won’t be missing much when I opt for their mid-range option, which is based on the W8000. I don’t need compute power anyway. For anyone interested, the custom naming of the Mac Pro’s GPUs is outlined here:

http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/


#21

With what cards was that?

I have friends who have been doing a preposterous amount of previz rendering with those, a lot of it on laptops, and I’ve played with it myself and never had any issues.
If anything, DX tends to be incredibly forgiving of gaming cards and run-of-the-mill WHQL drivers, in my experience.


#22

I’m looking forward to the consumer/pro comparison review… while I think nearly all reviews are incomplete, this was one of the more complete ones I’ve read of late. Are there things that would make it more complete? Sure… but I find it gives me a good baseline for what to look at.

I just really wish (as we all know this to be true) that drivers could (and should) be provided for both professional and gamer purposes… variants optimized for either case. There is fundamentally no real reason why the market is fragmented like it is, other than mistaken business strategies that believe in boxing customers in and building walls.

There, I’m off my soapbox.


#23

The GPU wants to invade the compute space, yet all gaming cards are intentionally handicapped via drivers for OpenCL/CUDA, and the unrestricted pro versions of those cards are about 5-10 times more expensive. This is BS on all fronts.


#24

On AMD consumer GPUs, compute is only handicapped for double-precision (FP64) operations. I believe the same is true of NV cards. Single-precision FP ops are supported at the cards’ full capacity.

This should make consumer cards useful for many non-scientific/engineering workloads. Most OpenCL/DirectCompute/CUDA functions in DCC/M&E apps (such as rendering, video/FX/codec processing, physics, etc.) typically only make use of single-precision FP.
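To make the SP-vs-DP distinction concrete, here’s a CPU-side sketch (pure Python, using `struct` to emulate the two binary formats; nothing GPU-specific) of what single precision throws away and double precision keeps:

```python
import struct

def roundtrip(value, fmt):
    # Pack a Python double into the given binary layout and back:
    # "f" = 32-bit single precision (FP32), "d" = 64-bit double (FP64).
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 1.0 + 1e-10  # a tiny increment, well below FP32's ~1.2e-7 resolution

print(roundtrip(x, "f"))        # 1.0 -- the 1e-10 vanishes at FP32
print(roundtrip(x, "d") > 1.0)  # True -- FP64 keeps it
```

For shading, rendering and video work, errors on the order of 1e-7 are far below anything visible in a final frame, which is why DCC workloads get by on the uncrippled FP32 path; it’s simulation/engineering codes accumulating tiny terms that actually need FP64.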


#25

Is that a software limitation or a hardware limitation?