CUDA vs OpenCL


#21

Yeah, that’s why I phrased the question the way I did, as dropping CUDA in favor of OCL, to address a comment that was specifically phrased as a trend of dropping CUDA.

So far the entire list consists of one entry: Adobe, due to popular demand that largely amounted to Apple pitching a tall tent in the AMD camp with the ashtray Mac Pro.

Other than that, everybody is doing what they always did before: stay in the OGL realm for as long as humanly possible, and when GPGPU is necessary, go for whichever implementation is more convenient for the installed base they serve.

There is literally no trend in either direction as far as I can see.
CUDA has a crushing majority of the scientific market and the totality of prestige VFX vendors, as it always did; OCL has about three quarters of economics/encryption/encoding, as it always did. Commercial applications are few and far between, and either CUDA-only or dual-platform, with one exception out and one possibly coming, out of about 10, that is OCL-only.

Nothing new just yet :slight_smile:
We’ll see what it’ll be like post-Volta, and whether nVIDIA will be pressed to offer better OCL support or not (because make no mistake, the fact that AMD performs better in it right now is entirely artificial, as nVIDIA has all the interest in the world in making CUDA look like the better solution).
At present, though, nVIDIA offers OCL, CUDA, and superior OGL 4.x support, while AMD offers OCL, Mantle, and partial OGL 4.x support, but might turn that around with the new DX-compatible shading pipe now making its way into OGL.


#22

My guess is that the Adobe switch has more to do with Nvidia giving them the tech for free in CUDA at first, and then they realized: oh look, not all our customers run Nvidia…


#23

Haven’t heard this before. Who are they paying to use CUDA?


#24

Nobody, ever, that I’ve heard of, and I’ve been embedded in the GPGPU bog for close to four years now.

That said, nVIDIA aggressively pursues partnerships and deals through sample hardware and its developer network. Test cards and indefinite-term loaners of new models aren’t unheard of at all, but compared to the cost of choosing the wrong platform, that’s stuff with practically no bearing. AMD has often lagged behind on that, not reaching potential markets at all, or doing so late or with underwhelming offers.

I guess if you really wanted to stretch it you could say it’s a pay-in, in a way, but more realistically they are just aggressive and very good at partnering with clients of all sizes. Conversely, AMD has done well in the last four years with massive deals, grabbing platform investors, making it into all the consoles and a generous number of Steam boxes and so on, but rarely approaches developers and smaller CTOs in the field.


#25

People always prefer open tech, but the biggest platform in the history of computing was Intel’s proprietary x86 architecture (with antitrust lawsuits preserving pseudo-competition like AMD, which never exceeded 20% market share).

I’ll be glad to get rid of nvidia cards as soon as redshift and octane go OCL.


#26

It’d be cool if it happened, but they really need to take their head out of their arse with single-board HSA, ASAP.

It was the thing that could have put them at the forefront again: they were at least a year, if not two, ahead of nVIDIA in image and perception, if not in actual development, and it’s again an open consortium rather than a closed thing like Maxwell and Volta will be. But then, yet again, it’s years late, one of the more influential members of the consortium, ARM, has already got into bed with nVIDIA, and another, Qualcomm, wishes it had.

To compound that, while HSA with short memory distance is barely a test-board reality, with nothing accessible to the public, nVIDIA has already very successfully virtualized the entire access layer in CUDA 6, and has now provided full ARM compatibility in CUDA 6.5, with smart and complete disregard for Tegra exclusivity. So in a couple of quarters CUDA apps will simply work on their proprietary HSA-like platform, while the OCL and HSA specs remain mysterious and remote to developers.
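For a sense of what that virtualized access layer looks like from the developer side, here’s a minimal sketch using CUDA 6 unified (“managed”) memory: one pointer is valid on both host and GPU, with no explicit `cudaMemcpy`. The kernel name and sizes here are just illustrative placeholders, not from any particular app:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: scale every element of an array in place.
__global__ void scale(float *data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1024;
    float *data;

    // One allocation visible to both CPU and GPU; the runtime
    // migrates pages behind the scenes instead of the programmer
    // juggling separate host/device buffers.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // host write, no copy

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();                     // wait before host read

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

That’s the whole trick: from the app’s point of view it already behaves like an HSA-style shared address space, regardless of what the silicon underneath actually does.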

As open standards and consortiums, Khronos and HSA are piling up blunders from a developer’s point of view, and developer-facing markets are what decide the fate of technologies such as these.

I badly want to drop CUDA in favor of something equivalent but open, but as someone doing GPGPU work in VFX, they are making it extremely hard for me to even consider it in the mid term, and it’ll be considerably harder again next year between CUDA 7, Fabric, and OptiX.