It's here. The 2013 pro GPU roundup at CGCHANNEL.COM


#7

OK, fair enough. Maybe I was interpreting your words with a defensive ear.

I work for Sapphire. And I am biased. I’m an AMD fan. I also have a kneejerk reaction against kneejerk reactions.

It sounded to me like you were discrediting the article, the author and the veracity of his conclusions. If you were not, then I apologize for being overly sensitive.

Much Respect,

-Adam


#8

I appreciate the mature reply.
No offense taken, none intended, to you or to the author. I hope the article will be corrected and completed, because hardware info like that is surprisingly rare and it’s a shame to see a solid effort framed in a somewhat poor context (purely in terms of source checking and accessory info). That’s about it.

I’m not questioning or discrediting the numbers (I would have no way to do so without access to 80% of those cards, even if I had the inclination), and, believe it or not, I have no bias and more than a little hope that OpenCL will mature and AMD will do well, more so on the CPU and next-gen hybrid cards than on GPUs, for the sake of competition and my interests as a consumer.


#9

Websites always seem to benchmark pro cards against other pro cards or consumer gamer cards against consumer gamer cards. I think it’s a gross oversight to not include at least one or two gaming cards among the group of professional cards so people can understand the pros and cons of what’s out there.

For a lot of people, it’s getting harder to justify a pro card, especially when 3D software and rendering companies seem to be going out of their way to make their software run well on gaming cards. Not to mention that gaming cards get the bleeding-edge, fastest hardware, which also runs better in other apps like Adobe software or game engines.

Features like 10 bpc color are great in theory, except that hardly any software takes advantage of those display color depths… and for a lot of people, the final destination is going to be 8 bpc anyway. IMO it’s better to see banding issues up front than to discover them after the “final” has been delivered because you didn’t see the banding on your 10 bpc color card.

Regardless of what card a person chooses, there are going to be pros and cons. It’s just always a shame when articles don’t bother to include the full range of cards on the market that many pros are actually considering. Tom’s Hardware has had some nice video card articles lately with a good mixture of cards.


#11

I agree that it will be interesting to see a few consumer cards tested, and I think the author said he’s working on this.

My guess is that for most apps (esp. the ones that use OpenGL), gaming cards will be slower in general than their pro graphics counterparts due to the application-specific performance tuning that happens in pro gfx drivers. Of course, looking at price-performance, gaming cards will often score really well.

One reason to be wary of making direct comparisons is the misconception that performance (or price-performance) is the most important consideration for most pro graphics users. It’s not.

Productivity is, and reliability seems to be key to delivering that. Ongoing, exhaustive testing, driver tuning, bug fixes and certification efforts carried out between the ISVs and the hardware makers are expensive. They’re valuable. They’re the primary reason pro cards cost more.

These issues seem to carry less perceived importance here in the M&E market, probably for a lot of different reasons. Nobody’s saying that gaming cards can’t work for some 3D artists/pipelines. They absolutely might; some really will. It depends on the specific app, 3D card, driver version, proper settings, etc.

In the CAD/CAM/CAE world (i.e. 80% of the pro graphics market), certification and support (i.e. quick bug fixes, performance optimization, ISV support) seem to continue to be much more of a concern for users.

In any case, forums like this are a great resource for folks looking to research a bit about their apps and needs so they can make smarter buying decisions for themselves.


#12

Isn’t the W9000 starting to get old? I mean, isn’t it last year’s tech?


#13

Apple are going for OpenCL performance. W9000 is still the king of OpenCL/compute in the pro gfx space.

https://compubench.com/result.jsp?test=CLB10101


#14

Most reviews use benchmarks as the core of the review, with commentary along the lines of “yup, looks like the Quadro did well in this test”. The rest of the review is usually typical fluff like how many video ports the card has or how its fit and finish is. IMO that’s seriously lazy. Any computer geek off the street can click the ‘run benchmark’ button and record the time.

What people want is an actual professional who knows the basics of the different apps well enough to sit down and actually try to work on a real-world heavy scene file. Take 10 minutes to load up a 3ds Max scene and start selecting objects, moving vertices/curve points, changing some materials around; try out the sculpt tools, get some particles going. Take note of whether certain cards take a long time to select a heavy poly model or don’t draw something on the screen correctly. Then do the same with Maya, Cinema 4D, etc. People are interested in which cards struggle with these basic functions.

Talking about pro features, how is the stereo 3D on the pro cards? How much faster do Quadros switch to stereo mode than GeForces, and how do AMD and Nvidia compare on the matter? How seamless is the experience? I can tell you it kinda sucks on GeForces, where your screens will flicker and blank out for a few seconds before coming back on in stereo mode. Then it happens all over again when you go back to 2D mode or switch from windowed to fullscreen. Stuff like that, IMO, is what pros who might pay extra for a pro card want to know.
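
For context on what the “pro stereo” path actually is under the hood: it’s quad-buffered OpenGL stereo, which drivers generally only expose on Quadro/FirePro cards. Here’s a rough sketch of how to check whether your driver will even hand you a stereo framebuffer (using GLFW purely for brevity; this isn’t from the article, just my own illustration):

```c
/* Rough sketch: ask the driver for a quad-buffered stereo framebuffer.
 * GeForce/Radeon drivers typically refuse this; Quadro/FirePro drivers
 * grant it when stereo is enabled in the control panel.
 * Build (Linux): cc stereo_probe.c -lglfw
 */
#include <stdio.h>
#include <GLFW/glfw3.h>

int main(void) {
    GLFWwindow *win;

    if (!glfwInit()) {
        fprintf(stderr, "GLFW init failed\n");
        return 1;
    }

    /* GLFW_STEREO is a hard constraint: window creation fails outright
     * if the driver offers no quad-buffered stereo pixel format. */
    glfwWindowHint(GLFW_STEREO, GLFW_TRUE);
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);

    win = glfwCreateWindow(640, 480, "stereo probe", NULL, NULL);
    printf("quad-buffered stereo: %s\n",
           win ? "available" : "not exposed by this driver");

    glfwTerminate();
    return 0;
}
```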

It bothers me when a review raves about a certain card that performed well in benchmarks, you buy it, and within 3 minutes you know it’s not going to work for you. Aside from fiddling with control panel settings, your only options are to try older or certified drivers, or to upgrade your 3D software if you’re using an older version.


#15

I know… But it doesn’t change the fact that the W9000 is “old news” by now, tech-wise. As for OpenCL performance, that will be great once my application is updated to use it :wink:


#16

I agree with Sentry66 100%.


#18

So true that DX viewports (Nitrous/VP2, etc.) really help level the playing field between gaming cards and pro cards in terms of reliability/performance in ADSK apps. *Displacement, AO and other advanced modes can be problematic in DX for VP2, though.


#19

Thanks for the comment on the W9000. I don’t think I could afford two of those GPUs, even with what I’m sure is a lower BTO cost for Apple’s machines, but I’m glad I won’t be missing much when I opt for their mid-range option, which is based on the W8000. I don’t need compute power anyway. For anyone interested, the custom naming of the Mac Pro’s GPUs is explained here:

http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/


#21

With what cards was that?

I have friends who have been doing a preposterous amount of previz rendering with those, a lot of it on laptops, and I’ve played with it myself and never had any issues.
If anything, DX tends to be incredibly forgiving of gaming cards and run-of-the-mill WHQL drivers, in my experience.


#22

I’m looking forward to the consumer/pro comparison review… While I think nearly all reviews are incomplete, this was one of the more complete ones I’ve read of late. Are there things that would make it more complete? Sure. But I find it gives me a good baseline for what to look at.

I just really wish (as we all know this to be true) that drivers could, and should, be provided for both professional and gaming purposes: variants optimized for either case. There is fundamentally no real reason why the market is fragmented the way it is other than mistaken business strategies that believe in boxing customers in and building walls.

There, I’m off my soapbox.


#23

The GPU wants to invade the compute space. Yet all gaming cards are intentionally handicapped via drivers for OpenCL/CUDA. And the unrestricted pro versions of those cards are about 5–10 times more expensive. This is BS on all fronts.


#24

On AMD consumer GPUs, compute is only handicapped for double-precision (64-bit float) operations. I believe the same is true of NV cards. Single-precision FP ops are supported at the full capacity of the cards.

This should make consumer cards useful for many non-scientific/engineering workloads. Most OCL/DirectCompute/CUDA functions in DCC/M&E apps (such as rendering, video/fx/codec processing, physics, etc.) typically only make use of SP FP.
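
If anyone wants to verify this on their own card, the driver reports what it exposes. A minimal OpenCL sketch (the queries are standard OpenCL; the build line is just an example for a typical Linux setup):

```c
/* Minimal sketch: check whether a GPU's driver exposes double-precision
 * (DP) compute at all. Single precision always runs at the card's full
 * rate; DP is what gets limited or hidden on consumer parts.
 * Build: cc dp_check.c -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char name[256];
    cl_device_fp_config dp = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL)
            != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    /* Returns 0 (no capability bits set) if the driver exposes no DP;
     * a nonzero value only says DP exists, not how fast it runs. */
    clGetDeviceInfo(device, CL_DEVICE_DOUBLE_FP_CONFIG, sizeof(dp), &dp, NULL);

    printf("%s: double precision %s\n", name,
           dp ? "exposed" : "not exposed by this driver");
    return 0;
}
```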


#25

Is that a software limitation or a hardware limitation?


#26

Software (drivers), I would think, but I’m not 100% positive, Dave.

Does it matter? Either way, let’s look at this for a second.

Lots of companies disable features/capabilities in some of their lower-tier products (cars, phones, etc.). They do so for valid reasons, I believe.

I can understand the impulse some feel to lash out against “unfair” or “deceptive” business practices. But consider this from the perspective of the vendor:

-Many customers who want the best DP compute performance also expect and need the highest levels of support/service. Institutional and corporate users in this space are relatively expensive to support. Selling RADEONs (at RADEON prices) and then supporting them at this level probably doesn’t make good business sense.

-Pro graphics cards for compute also typically use ECC (error-correcting) components in the memory and memory controller. This costs much more than the typical GDDR RAM found on gaming cards. They are also often equipped with MORE memory than their consumer counterparts. Take this product, for instance, which was just announced a few minutes ago: http://www.hpcwire.com/off-the-wire/amd-introduces-new-firepro-s10000/

-There is value in performance, reliability and flexibility. If one really demands the very highest levels of performance (i.e. productivity) for professional work, then free market dynamics dictate that they will pay more. A lot more. But heck, it’s really a small price to pay for the profits this can help create. This is true of virtually all other commercial “tools” or technologies, including hammers, clothing, cameras, automobiles, etc. Why would it be any different for graphics technology?