That’s fundamentally incorrect.
There are currently more CUDA-only or dual OCL + CUDA implementations available than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn’t putting much stock in OCL and optimizing accordingly (unsurprisingly), its cards can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only, for now.
That locks you out of a lot of products, such as Octane (CUDA only), RedShift (CUDA only), and some pretty decent CUDA extensions for other software.
At this point in time CUDA is quite simply more adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable. Not unless you are putting together a folding rig, a bitcoin miner, or the like. In DCC, OCL largely doesn’t yet have the relevance we’d all hope for.
I guess you’re correct in saying that CUDA is used a lot more in software, but the most-used renderers are not limited to CUDA. AMD, even though limited to OpenCL, outperforms Nvidia in OpenCL. And benchmarks have shown that Nvidia’s CUDA speeds are almost, if not exactly, equivalent to its OpenCL speeds. From which I can conclude that if AMD were to support CUDA, Nvidia’s gonna get their ass whipped. It’s a pity that all these renderers only support CUDA; I guess its SDK is more user-friendly. But as for VRay RT, where OpenCL and CUDA are on par, AMD beats Nvidia, WITH THE EXCEPTION OF THE TITAN.
Incorrect at best.
FLOPS are floating point operations per second, NOT double precision ops. Big difference. DP FLOPs aren’t an atomic operation; you don’t measure by them.
Double precision involves a lot more to be taken into consideration, not least that many videocards are artificially crippled in their DP for market phasing (IE: GTX 6 and 7 series cards, but not the 5s, Quadros, or Titans).
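To make the market-phasing point concrete, here’s a rough sketch of theoretical peak throughput for a few cards. The core counts, clocks, and SP:DP ratios below are quoted from memory and should be treated as ballpark figures, not datasheet values.

```python
# Theoretical peak GFLOPS = shader cores * clock (GHz) * 2 (fused multiply-add).
# The SP:DP ratios are the (approximate) caps nVIDIA ships per product tier.
def peak_gflops(cores, clock_ghz, fma_ops=2):
    return cores * clock_ghz * fma_ops

cards = {
    # name: (shader cores, shader clock in GHz, SP:DP ratio) -- approximate
    "GTX 580":   (512,  1.544, 8),   # Fermi: DP at 1/8 of SP
    "GTX 680":   (1536, 1.006, 24),  # Kepler gaming card: DP capped at 1/24
    "GTX Titan": (2688, 0.837, 3),   # Kepler: DP unlocked to 1/3
}

for name, (cores, clock, ratio) in cards.items():
    sp = peak_gflops(cores, clock)
    dp = sp / ratio
    print(f"{name}: ~{sp:.0f} GFLOPS SP, ~{dp:.0f} GFLOPS DP (1/{ratio} of SP)")
```

The point being: two cards with nearly identical single-precision peaks can differ by nearly an order of magnitude in double precision purely because of the tier cap, which is why a single FLOPS number tells you very little on its own.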
I wasn’t going into depth; that’s just a more or less introductory explanation.
I’m not entirely sure where you are going with this.
It’s a 386 to Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors will also come into play (IE: whether the compiler was set to, or even able to, take advantage of some features).
But I can’t be proved wrong here. Indeed, CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS while the Radeon HD7970 does about 4.2 TFLOPS?
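Most of that gap falls out of the same peak-throughput formula: cores × clock × FLOPs per core per cycle. A rough sanity check, with the caveat that the clocks and per-cycle figures below are quoted from memory and depend on whether you count base or turbo clocks and single or double precision:

```python
# Theoretical peak GFLOPS = cores * clock (GHz) * FLOPs per core per cycle.
# All hardware numbers below are approximate, not datasheet values.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

# i7-2600K (Sandy Bridge): 4 cores at ~3.4 GHz; AVX can retire one 8-wide
# SP add and one 8-wide SP mul per cycle = 16 SP FLOPs/cycle (8 for DP).
cpu_sp = peak_gflops(4, 3.4, 16)     # ~218 GFLOPS single precision
cpu_dp = peak_gflops(4, 3.4, 8)      # ~109 GFLOPS double precision

# HD 7970 (GHz Edition): 2048 stream processors at ~1.05 GHz, 2 FLOPs/cycle (FMA).
gpu_sp = peak_gflops(2048, 1.05, 2)  # ~4300 GFLOPS single precision

print(f"CPU: ~{cpu_sp:.0f} GFLOPS SP, ~{cpu_dp:.0f} GFLOPS DP")
print(f"GPU: ~{gpu_sp:.0f} GFLOPS SP, roughly {gpu_sp / cpu_sp:.0f}x the CPU's SP peak")
```

The GPU wins on paper simply by having ~500x the execution units at ~1/3 the clock, but these are theoretical peaks: neither chip sustains them on real workloads, which is exactly why the raw FLOPS comparison above doesn’t settle the argument.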
You keep mentioning OCL as if it’s the premier, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such an overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It’s easier to find something CUDA only than it is to find something OCL only among products of any relevance.

Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that, the 79xx has considerable synchronicity issues when it really gets taxed (IE: offline rendering on a GPU).
You’re wrong there: Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.
I still do not think a GPU upgrade is necessary. Invest in Xeons.

