OpenCL vs CUDA - building a new workstation


Hello all,

I’m about to build a new computer. I’m going to build it around a Core i7-3770K with 32 GB of RAM, and I’ll be pushing the system up to 4.2GHz, which, on this benchmark chart, will put it at about 11500, or on par with a Xeon E5-2650.

I’m having trouble figuring out what I want to do with the graphics cards. I have all of the Autodesk student software, but I’m primarily a Blender 3D user. Blender 3D’s new internal renderer doesn’t support OpenCL rendering on AMD cards, but AMD gaming cards are far superior to the latest generation of Nvidia gaming cards for GPGPU work.

Looking at video cards, I have the choice between grabbing two GTX 580s off of eBay, each with 1536MB of RAM, or one Radeon 7970 now and another one down the road. Each 7970 has 3 GB of RAM on it, which means it can handle more complex scenes while rendering. However, I wouldn’t get acceleration with Blender 3D’s internal renderer.

This isn’t too bothersome to me - I’m more worried that programs outside of Blender 3D will not support acceleration with the AMD cards.

How widespread is OpenCL adoption in other programs? Would I be getting more than I lost if I went with AMD cards?



Well… you’re kinda comparing apples to oranges growing on a banana tree. The differences between OpenCL and CUDA as parallel processing frameworks are significant, but honestly play little to no part in actual GPU performance.

Matter of fact, CUDA isn’t even used except in simulation calculations like for particles, etc… probably likewise for OpenCL.

I think what you should really be looking at is the OpenGL performance versus DirectX on the cards… and in terms of Maya, just OpenGL.

That said, attempting to put together an SLI or Crossfire configuration is a waste of money; I do not know of any 3D programs that utilize it. Vray and iRay perhaps, but I have neither and can’t say for sure there.

So, unless you’re doing some serious scientific number crunching… or visualizations, CUDA vs. OpenCL is a moot point IMO.


That’s not true. A lot of modern render engines use OpenCL or CUDA to render (Arion, iRay, Octane, VRay RT, Cycles in Blender, …).

Adobe also uses a lot of CUDA in the latest releases of Premiere and AE, and Blender now does some of its compositing calculations in OpenCL.
In the compositing sector, Nuke and Fusion now also run some calculations on the GPU via OpenCL or CUDA.

Right now CUDA is more widely supported in apps than OpenCL.


Perhaps a clarification is needed on what exactly he’s doing… rendering or visualization. There’s a huge difference in how the final image is calculated.

If you are doing viz work… then yes, a high-performance parallel computing setup is a major benefit; CUDA is widely adopted while OpenCL is catching up and looking good… but they are still totally different.

… and check this out and tell me that consumer GTX cards can’t be used for it:

Still, the GPU is the main workhorse; the CUDA cores are more or less offloaded virtual cores that allow significant parallelization of the tasks executed by the GPU.


The issue is that in GPGPU tests where we’re doing an apples-to-apples comparison, AMD cards are immensely faster than Nvidia cards.

They also come with more RAM per dollar spent, and AMD gaming cards should have better OpenGL performance than Nvidia gaming cards, dollar for dollar, which means I’d be able to work on more complicated meshes. To my knowledge, Nvidia is deliberately crippling OpenGL performance in their consumer lines, just as they’re crippling GPGPU performance, because they want to sell their Quadro and Tesla parts respectively.

AMD does have the same reason to limit OpenGL performance on their consumer parts, but because they aren’t trying to sell a part made exclusively for GPGPU work, they haven’t eliminated the GPGPU performance of their consumer cards.

OpenCL and CUDA are both APIs for accessing the parallel processing capabilities of a GPU. OpenCL works on all platforms, but CUDA is Nvidia-only. A CUDA core is roughly equivalent to an AMD stream processor. I believe there are articles out there that go into the actual GPU architectures themselves.
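To make the "both APIs for the same thing" point concrete, here is the same trivial vector-add kernel written in both dialects (an illustrative sketch, not taken from any app mentioned in this thread). The device-side code is nearly line-for-line identical; the practical differences lie in the host APIs, drivers, and tooling.

```python
# The same vector-add kernel in OpenCL C and CUDA C, held as source strings
# for side-by-side comparison. Both compute one output element per work-item
# (OpenCL) / thread (CUDA); only the index lookup and qualifiers differ.

opencl_kernel = """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    int i = get_global_id(0);   /* flat global work-item index */
    out[i] = a[i] + b[i];
}
"""

cuda_kernel = """
__global__ void vec_add(const float *a,
                        const float *b,
                        float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* flat thread index */
    out[i] = a[i] + b[i];
}
"""

if __name__ == "__main__":
    print("OpenCL version:", opencl_kernel)
    print("CUDA version:", cuda_kernel)
```

The kernels themselves are the easy part; porting between the two is mostly a matter of rewriting the host-side setup and launch code.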

For GPGPU acceleration of tasks, an increasing number of programs are supporting multiple graphics cards. Since I’m a computer science major and a gamer with an art hobby, 3D work is third on my totem pole of things my system needs to do. However, in the end I want to buy whatever’s going to save me the most time. (I.e., if I can’t use Cycles, I’ll use Luxrender, but if the time I lose fidgeting with Luxrender on an AMD GPU offsets the time I’d save if I just used an Nvidia GPU and Cycles, then I’d have to take that into consideration.)

I’m actually not ordering the graphics cards for this build yet, so I have time to wait and see if OpenCL starts getting more market traction. But assuming I go AMD - what am I going to miss out on?


Unfortunately, CUDA works only on Nvidia cards, and they intentionally cripple most of those cards to force users to buy more expensive products. In other words, they’re sheisty as hell. On the other hand, OpenCL is an open standard supported by many manufacturers and developers. Seems like an easy decision in my opinion. :shrug:



OK, so it took a while, but I found an interesting article that seemed to be objective about comparisons…

Since you’re going to be a CS major… this may be a good way to evaluate the benefits.


Nvidia deliberately cripples the GPGPU performance of their consumer parts. The GTX 400 series had absolutely phenomenal performance, the 500 series didn’t really improve on it, and the 600 series backslid.

AMD’s actually gotten the jump on them - Nvidia cards are no longer the best for GPGPU. Even the Tesla cards are beaten by consumer Radeon cards nowadays. Their only advantage is CUDA. Nvidia’s latest GPU, GK110 (which isn’t available to consumers yet), may be better suited to GPGPU work, but they’ve made it clear they’re not going to give consumers good GPGPU parts.

(GK110, which the latest Tesla cards use, should deliver roughly 3.5-4.5 teraflops at single precision and over 1 teraflop of double precision performance. The latest Radeon flagship card, the 7970, gives ~1 TFLOPS of double precision compute performance.)

There’s no question that AMD completely smears Nvidia here. We’re talking ice in a blast furnace - all that’s left is water vapor. In raw GPGPU compute performance, Nvidia is so far behind in this round (Radeon 7xxx vs Nvidia’s everything) that I’d not even consider them if Blender Cycles worked on AMD cards.

However, Nvidia has wider application support for CUDA. Far fewer applications support OpenCL than CUDA - to my knowledge.

So yeah, I think I’ve answered my own question, sort of. I need to hold out on a GPU upgrade for a while.


I figured I would make this thread informative since I’m doing all of this research! Sometimes I wish I had money to burn so I could experiment, but the pressure’s on me to get things right! :smiley:

Nvidia’s releasing a new card codenamed ‘Titan’; it’s going to be a GK110 card released at around $900.

It’s going to use the same GPU as the K20X Tesla card, and that Tesla card’s theoretical floating point performance is 3.95 TFLOPS single precision, 1.31 TFLOPS double precision.

The pro is that it’ll have 8 gigs of RAM on the GPU. The con is that, card for card, it’s not that much faster than a 7970 in theoretical compute performance (4.3 TFLOPS single precision, 1.01 TFLOPS double precision) and actually loses in single precision, so it might actually be slower in certain applications. However, it’ll be able to handle more complex scenes than other cards.
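As a sanity check on where these theoretical numbers come from, here’s a back-of-envelope sketch: shader cores × clock × 2 (vendors count one fused multiply-add as two FLOPs), with double precision running at a fixed fraction of single precision (1/3 on GK110 Tesla, 1/4 on the 7970’s Tahiti). Core counts and clocks below are the commonly quoted specs; quoted figures vary slightly depending on which clock (base vs. boost) is assumed.

```python
# Theoretical peak FLOPS: cores x clock (GHz) x 2 FLOPs per cycle (one FMA),
# with double precision derated by an architectural ratio.

def peak_tflops(cores, clock_ghz, dp_ratio):
    sp = cores * clock_ghz * 2 / 1000.0  # single precision, in TFLOPS
    return sp, sp * dp_ratio             # (single, double)

# Tesla K20X: 2688 CUDA cores @ 732 MHz, DP at 1/3 rate
k20x_sp, k20x_dp = peak_tflops(2688, 0.732, 1 / 3)

# Radeon 7970 GHz Edition: 2048 stream processors @ 1050 MHz, DP at 1/4 rate
t7970_sp, t7970_dp = peak_tflops(2048, 1.050, 1 / 4)

print(f"K20X: {k20x_sp:.2f} SP / {k20x_dp:.2f} DP TFLOPS")
print(f"7970: {t7970_sp:.2f} SP / {t7970_dp:.2f} DP TFLOPS")
```

This reproduces the ~3.95/1.31 TFLOPS quoted for the K20X and ~4.3 TFLOPS single precision for the 7970 GHz Edition, which is why the Titan can lose to the 7970 on single precision while winning comfortably on double.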

AMD hasn’t officially announced a real successor to their 7xxx series of cards. There’s rebranding going on - they’re taking 7xxx cards and calling them 8xxx cards - but nothing really new yet. My verdict: wait for the new AMD architecture and grab the x970 card, unless you need something NOW, in which case go for the 7970. 6GB versions exist.

Edit: I’m finding benchmarks contrary to the earlier numbers - Luxrender’s real-world benchmarks show Nvidia coming out ahead.


You can compare theoretical performance all day, and think about picking up SLI setups for cheaper than a single bigger card, also all day, but you would still be failing to consider three things:

  1. What apps do you use (other than Blender), and what do they use? OpenCL support is still few and far between; CUDA, on the other hand, is a mature and well-adopted framework with some actual inroads in our field

  2. What will the drivers for your chosen platform(s) be like? I occasionally give AMD/ATI cards a shot, even though I’ve been on nVIDIA forever now, and at the end of the day, while much better than before, I still find their Linux drivers abysmal (while the nVIDIA ones are just plain bad), but that might be changing

  3. SLI is still largely pointless in DCC in nine out of ten cases or worse

I’d like to see AMD doing better in our markets, because nVIDIA has been dropping many balls, but sadly it doesn’t seem to be quite that time yet. And while you do seem to have a hardware fetish to satisfy (there’s nothing wrong with that), I seek comfort and QoL first, and nVIDIA still wins on that front.


In a perfect world, everybody would develop for OpenCL when it comes to GPU computing and we would have healthy competition among GPU vendors.

Unfortunately, OpenCL requires fairly low-level coding, while CUDA has Visual Basic-like high-level development tools which make it much easier to develop for. That’s why we see Adobe and the like support CUDA rather than OpenCL. Hopefully OpenCL will catch up with better dev tools and things will change for the better.
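To illustrate the "low level" complaint, here is a rough sketch comparing the host-side call sequence each framework needs just to run one kernel. The function names are real API entry points, but this only lists the steps for comparison rather than executing anything on a GPU.

```python
# Host-side steps to run a single kernel: OpenCL 1.x C API vs. the CUDA
# runtime API. OpenCL compiles kernels from source at runtime and requires
# explicit platform/device/context/queue management; CUDA's offline compiler
# (nvcc) and runtime API hide most of that.

opencl_steps = [
    "clGetPlatformIDs", "clGetDeviceIDs", "clCreateContext",
    "clCreateCommandQueue", "clCreateProgramWithSource", "clBuildProgram",
    "clCreateKernel", "clCreateBuffer", "clEnqueueWriteBuffer",
    "clSetKernelArg", "clEnqueueNDRangeKernel", "clEnqueueReadBuffer",
]

cuda_steps = [
    "cudaMalloc", "cudaMemcpy (host -> device)",
    "kernel<<<blocks, threads>>>(...)",  # compiled ahead of time by nvcc
    "cudaMemcpy (device -> host)",
]

print(f"OpenCL host steps: {len(opencl_steps)}")
print(f"CUDA runtime steps: {len(cuda_steps)}")
```

Neither list is exhaustive (error checking and cleanup are omitted on both sides), but the gap in ceremony is roughly this big in practice, and it’s a large part of why CUDA adoption came easier for application developers.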

On a side note, Tesla and Quadro cards are known to outperform AMD cards in OpenCL calculations as well, since they have more mature drivers. So don’t get caught up in theoretical TFLOPS numbers; look for reviews with real-world performance tests for your application.

If I needed GPU acceleration today I’d get a Tesla or Quadro (or a cheap GeForce GTX 580). A rumoured GK110-based “GeForce Titan” card (based on Tesla K20 silicon) is on the horizon too. Let’s keep our fingers crossed that it’s not as crippled as other GeForce cards.


I use whatever application is free at the moment, and I’m fine working on either Windows or Linux, though I have a preference for the former. If I need to render something huge (and I haven’t for a while, shame on me) I jump over to Linux, because Blender 3D’s internal renderer had a 33% faster render time on Linux than on Windows. I don’t know about Cycles.

I’ve been digging through benchmarks trying to compare Nvidia and AMD. Most of what I do is gaming, and gaming benchmarks are rather straightforward, common, and thorough; unfortunately, there doesn’t seem to be a great source of unbiased benchmarks for the various tools used in this industry.

Finding benchmarks for the Tesla and Quadro cards is really difficult. I don’t like using theoretical numbers because, as the Luxrender links I posted earlier show, there are ways for companies to significantly skew them, but I’m coming up empty.


Currently, for Blender Cycles GPU rendering:

(1) OpenCL support development is on hold.

(2) Therefore it does not work well (or simply doesn’t work) with ATI cards.

(3) A mid/low-end nVidia card gives better performance than a high-end CPU.

(4) nVidia 6xx CUDA is crippled to promote Quadro cards, so 5xx series is better for Cycles.


Actually, the Tesla cards available on Ebay look like they might be worthwhile…

