CUDA vs OpenCL


#3

There was a time when ATI drivers were unreliable, but that's no longer the case. Things have steadily improved ever since they were bought out by AMD (which, for the record, was eight years ago). Those who would disagree probably haven't used an AMD card in years.

For applications that use OpenCL there's no comparison: the low-end AMD cards are faster than the high-end Nvidia cards, and nothing else comes close to the high-end AMD cards. The GeForce Titan starts at $1,000 (if you can even find one in stock) and is roughly half as fast as the Radeon R9 290X, which is $550 (speaking in terms of OpenCL performance).

The only reason I see for going with an Nvidia card is for applications that require CUDA to function (and there aren't many of those these days). Mari used to be another reason to buy an Nvidia card because the early versions supported only Nvidia cards, but now they support AMD cards as well.


#4

So, going through my applications:

I'm assuming ZBrush doesn't make use of the GPU at all, so no difference between Nvidia and AMD there?

Mari now supports AMD, but does that necessarily mean OpenCL would greatly benefit performance over CUDA?

I hear that Maya works best with AMD cards, but maybe that only applies to the higher-end workstation cards?

A few of Nuke's nodes can be GPU-accelerated if you have a CUDA-capable card. I don't believe there is any OpenCL support yet, but I haven't heard much about future plans.

So are any of these OpenCL applications that would benefit more from an AMD card than from an Nvidia card?


#5

Every time I do a big upgrade, I get tempted by a Radeon card again because of the price/performance. The past three times (most recently a 7870) I've been bitten: graphics glitches, instability, and poor multi-monitor behavior (losing a window forever in an area of the screen that doesn't exist, crashing if a 3D window clips another screen while resizing). I end up eBaying them and buying the GeForce equivalent.

Truth be told I’ll probably try it again in a few months with my next upgrade, but I’m expecting to do the same again.


#6

I've found GeForce drivers have been crap this past year and a bit with very high poly-count scenes. It only shows when you're really pushing them, but still, they are not as reliable as they once were. Obviously this is just my anecdotal experience, but it is across a few different machines. Of course they offer CUDA too, but I'd rather have better viewport stability.
I’d certainly be tempted to try a Radeon right now if I wasn’t much interested in CUDA.
I’ve had some good & bad experiences with both cards in the past but never a real ‘nightmare’ with either type.


#7

Correct, the GPU doesn’t matter at all for ZBrush.

Mari doesn't use OpenCL or CUDA that I know of. It uses OpenGL, so I'm not sure why they supported only Nvidia cards in early versions. Maybe a feature was missing, or maybe it was something about the version of OpenGL supported.
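For what it's worth, the sort of thing an application can actually check is what the installed driver reports for the GL vendor, renderer, and version strings. A minimal sketch of that probe (purely illustrative; it assumes GLFW is available for context creation, and Mari's actual checks aren't public):

```c
/* Minimal sketch (hypothetical probe, assumes GLFW is installed): ask the
 * installed driver what OpenGL vendor, renderer, and version it exposes,
 * which is the kind of thing an app can gate features on. */
#include <stdio.h>
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            /* off-screen context */
    GLFWwindow *win = glfwCreateWindow(64, 64, "gl-probe", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```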

The current high-end AMD workstation cards perform very well in Maya. I wouldn't say one or the other is “best” because each will be faster at some tasks and slower at others.

I don’t know about The Foundry’s intentions but I know other developers have chosen to use OpenCL moving forward. For example Adobe used CUDA originally in some of the Creative Suite products and has since started porting features to OpenCL.

http://blogs.adobe.com/premierepro/2012/05/opencl-and-premiere-pro-cs6.html

I'm hoping developers see CUDA as a stepping stone, or a stopgap, not as the be-all and end-all, because as a paying customer I don't want to be tied to a single hardware vendor just because one feature in one application requires it. Do not want.

If an application uses OpenCL then the AMD cards will be faster, by a lot. Whether that matters for your workflow, and how much, is up to you.


#8

Ya, Mari uses OpenCL for its persistent tile evaluation. But you only want to use it if you have a dual-GPU config like the 2013 Mac Pro, since using it on your display GPU will introduce jitters while navigating your scene. The labeling isn't really clear, but the dedicated GPU in the Mac Pro is the first one in Mari's OpenCL GPU list.
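For context, that "OpenCL GPU list" is just the set of GPU devices the OpenCL runtime enumerates, in whatever order the driver reports them. A minimal sketch of that enumeration with the plain OpenCL C API (nothing Mari-specific; names and ordering depend entirely on the installed driver):

```c
/* Minimal sketch: list the GPU devices each OpenCL platform reports, in the
 * order the runtime enumerates them. Device names and ordering depend on the
 * installed driver. Link with -lOpenCL. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256] = "";
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;                           /* platform with no GPU devices */
        if (num_devices > 8) num_devices = 8;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256] = "";
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("%s : GPU %u : %s\n", pname, d, dname);
        }
    }
    return 0;
}
```

On a dual-GPU box both cards should show up as separate entries here, which is presumably what lets Mari target the dedicated one and leave the display GPU alone.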


#9

I just get tired of Radeon cards always running into the problem where the Catalyst Control Center doesn't launch on Windows, and you have to gut the entire driver, hunt down some random Microsoft patch, and then reinstall the driver and pray.

It's completely absurd that something like this even happens occasionally.

That, and at least in Maya the Radeon cards often don't draw wireframe-on-shaded correctly. The wireframe is drawn all the way through the object, overlaid on top of the shaded view. Not a deal-breaker, but lame.


#10

Ya, that's annoying, but the AMD driver doesn't do that in OS X or Linux.


#11

It doesn't really seem like there's any reason to go AMD then. I'm assuming that even if applications decide to support OpenCL, it's not like they would just drop support for CUDA, right?

Is there CUDA support in Maya for things like particle sims?


#12

Well, last I checked, AMD cards were faster for OpenCL so there’s that reason:

http://www.tomshardware.com/reviews/firepro-w8000-w9000-benchmark,3265-20.html

That’s last gen but I doubt much has changed in the newer revs.

Despite Nvidia literally paying companies to use CUDA, people are moving away from it, not towards it. Autodesk will never use CUDA for its compute stuff since it's vendor-specific. The only exception is iRay, which is owned by Nvidia, so it's obviously CUDA-only, but it isn't supported in Maya anyway without a third-party option.


#13

Do you have a source for that? Because I'm certainly not seeing that trend.
While some stuff has inched towards OGL 4.x recently, support for OCL in CGI hasn't exactly soared, and the few companies that supported it, or planned to, have either regretted it, never shipped the OCL product in the end, or are abandoning it.

VRay RT is the only commercial GPU engine I know of with OCL support, and they regretted it. Fabric, which was supposed to support AMD's hybrid architecture, in the end went with CUDA, probably tired of non-deliveries; all the other engines went with CUDA. What (considerable) support The Foundry has for GPUs is split between CUDA (NukeX) and OGL.

The scientific computation world is firmly in CUDA territory and is therefore advancing it a lot faster than OCL can hope to advance. We're talking over 95% penetration, and we (CG) capitalize a lot on what they do.

All major VFX shops and software vendors are partnered with, or at least use, nVIDIA hardware, not AMD.

Do you have any examples of vendors moving from CUDA to OCL? I can only think of literally a couple examples of libraries/functionalities being ported to OGL (not OCL) that used to be CUDA exclusive, that’s it.

Not that I'm happy with proprietary standards, not at all in fact, but OCL is far from doing well, and Mantle, which is frankly a bit of a joke, is clearly not swinging our way, as it's trying hard to leverage games and has taken away from AMD's OCL commitment.

Sorry, I’m just not seeing this trend of OCL making major strides in VFX/CGI. At all. I might need to be pointed to some examples if there are any.
OGL 4.x is the only thing that stands some chance of moving certain things back into the open domain, but it's certainly not a complete replacement for what GPGPU frameworks like CUDA or OCL do.


#14

Indigo and LuxRender support OCL as well. Also, the upcoming Arnold on GPU is OCL-only AFAIK, which is a strong indicator that things are slowly changing.
CUDA may have the initial advantage, but in the long run a closed standard can't win, and this is very good for end users.


#15

Bear with me here, but Lux isn't a commercial solution (they're expected to go open standard), Arnold GPU isn't out, and the Indigo GPU news is three weeks old (no idea if it's publicly available; if it is, I stand corrected on that one).

I do hope you're right. I'd much rather work to an open standard and see it mature to usable levels, which it isn't right now compared to CUDA 6.x. But I don't believe one announced product and one in development represent a trend inversion compared to the onslaught of solutions still coming out for CUDA, or abandoning OCL for it, let alone when you look at VFX shop partnerships.

The hint of one potentially happening in the future? Maybe. I very much hope so :slight_smile:


#16

If OpenCL were to gain adoption, even at half the usage proportion that CUDA is currently at, how long would you anticipate that taking? Is it on the order of months? A few years? Many years?

Ideally I'd like my new build to last me a while, but I wouldn't want to throw money at a CUDA card now just to see OCL gain traction within the next three years, for example.


#17

If OCL does, at any point, get significant traction, nVIDIA will simply stop writing artificially crippled drivers intended to make CUDA look good.
Besides, it's not like OCL doesn't work on nVIDIA cards; it does.
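To make that concrete, here's a minimal OpenCL host-code sketch (error handling omitted, purely illustrative): the same source builds and runs against whichever GPU the installed driver exposes, Nvidia or AMD, because the API itself is vendor-neutral.

```c
/* Minimal sketch, error handling omitted: the same OpenCL host code and kernel
 * source run on whichever vendor's GPU the installed driver exposes. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

static const char *kernel_src =
    "__kernel void add(__global const float *a, __global const float *b,\n"
    "                  __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * (float)i; }

    /* Pick the first platform and its first GPU, whatever the vendor. */
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Copy inputs to the device, allocate the output buffer. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    /* Build the kernel from source at runtime, against this device. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "add", NULL);

    clSetKernelArg(kern, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kern, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kern, 2, sizeof(cl_mem), &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %.1f (expect 30.0)\n", c[10]);
    return 0;
}
```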
As for it taking over to the point of marginalising CUDA, certainly not this year or the next. It would be lucky to be as well supported within a couple of years.
Buying now you are buying old anyway; the big architecture jump for GPGPU computing will be one or two generations from now.


#18

Nem2k, the point is not whether it will take a few months or a decade; the point is whether you need CUDA/OCL now or in the next few years. For example, I'm doing mostly archviz and none of my software (Cinema 4D, V-Ray, AutoCAD, FormZ, Marvelous Designer, Photoshop, Nuke, After Effects, and many more) strictly depends on or uses any form of GPU computing. I've also considered GPU renderers, but they are a joke at the moment: good for small product visualization but unusable for big/complex scenes.
If your software doesn't need GPGPU (and apart from Mari it mostly doesn't), you will be fine with either Nvidia or AMD cards.


#19

Adobe, for one (already linked to it earlier). They said the reason was popular demand. Some software uses OpenCL and never used CUDA to begin with, like Houdini, Final Cut Pro X, RealFlow, Fusion, and x264.

http://www.sidefx.com/index.php?option=com_content&task=view&id=2358&Itemid=380
http://www.apple.com/final-cut-pro/all-features/
http://support.nextlimit.com/display/rf2013docs/HyFLIP+-+GPU+Simulations
http://manual.eyeonline.com/eyeonmanual/fusion/user-manual/appendices/preferences/opencl
http://git.videolan.org/?p=x264.git;a=commit;h=3a5f6c0aeacfcb21e7853ab4879f23ec8ae5e042


#20

Basically others have already written my response for me


#21

Yeah, that's why I phrased the question the way I did (dropping CUDA in favor of OCL), addressing a comment that was specifically phrased as a trend of dropping CUDA.

So far the entire list is made up of one entry: Adobe, due to popular demand that largely amounted to Apple pitching a tall tent in the AMD camp with the ashtray Mac Pro.

Other than that, everybody is doing what they always did before, which is to stay in the OGL realm for as long as humanly possible and, when GPGPU is necessary, go for whichever implementation is more convenient for the installed base they serve.

There is literally no trend in either direction as far as I can see.
CUDA has a crushing majority of the scientific market and the totality of prestige VFX vendors, as it always did; OCL has about three quarters of economics/encryption/encoding, as it always did; and commercial applications are few and far between, and either CUDA or dual-platform, with one exception out and one possibly coming (out of about ten) that is OCL-only.

Nothing new just yet :slight_smile:
We'll see what it's like post-Volta, and whether or not nVIDIA will be pressed to offer better OCL support (because make no mistake, the fact that AMD performs better in it right now is entirely artificial, as nVIDIA has every interest in making CUDA look like the better solution).
At present, though, nVIDIA offers OCL and CUDA and superior OGL 4.x support, while AMD offers OCL and Mantle and partial OGL 4.x support, but AMD might turn that around with the new DX-compatible shading pipe now making its way into OGL.


#22

My guess is that the Adobe switch has more to do with Nvidia giving them the CUDA tech for free at first, and then they realized: oh look, our customers don't all run Nvidia…