We’ve all seen how many resources GPU makers have been pouring into GPU development recently, specifically for AI and crypto mining. It’s becoming apparent that they now want to sell specialty rackmount gear with specialty chips and push traditional GPUs to the background. Nvidia stated during the Q&A of a recent announcement that they aren’t releasing a new gaming GPU for a “long while”. Obviously AMD isn’t giving them enough competition in the gaming space.
On the CPU side, however, things are heating up between Intel and AMD… and ARM. ARM CPUs are now competing in the server market as a lower-cost, lower-power alternative to x86, with the potential for better scaling in the future through higher core counts. Meanwhile, Nvidia and AMD are fragmenting their chip architectures into specialty processors for AI and crypto mining, putting gaming cards on the back burner. Microsoft has made a version of Windows that runs on ARM CPUs (with emulation for legacy x86 code), and supercomputers and big-data companies like Google are starting to utilize ARM CPUs. Even Apple is now investing in ARM-based architecture for its upcoming laptops. We’ve seen this happen before, when x86 started to overtake SGI’s MIPS and DEC’s Alpha CPUs in the mainstream CG market.
It struck me the other day, when I had just configured our render farm for network rendering in Blackmagic Design Fusion and for mass photogrammetry processing, and a team member asked whether they could use the farm for some machine-learning tasks tapping into Google’s text-to-speech library. Because our farm is CPU-based, it supports just about anything we’d ever throw at it. If our hardware were GPU-based, however, we would mostly be limited to a small handful of CG render engines.
I started thinking about how cheap and popular Raspberry Pis have become as an easy way to write code and test mass-scale processing. Meanwhile, writing GPU code forces developers to rewrite entire code bases. If it were easy, everything would be using the GPU already, right? ARM already runs native Linux distros, so porting Linux software to ARM is often little more than a recompile, which is far simpler than restructuring algorithms around CUDA’s or OpenCL’s restrictions.
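To make that porting gap concrete, here’s a minimal sketch (the function names and the brightness-adjust task are hypothetical, just for illustration): the CPU version is plain C that recompiles unchanged for x86 or ARM, while the GPU version has to be restructured as a CUDA kernel with explicit device memory management and transfers.

```
#include <cuda_runtime.h>

// CPU version: plain C. The same source recompiles unchanged
// for x86 or ARM with any ordinary C compiler.
void brighten_cpu(float *pixels, int n, float gain) {
    for (int i = 0; i < n; i++)
        pixels[i] *= gain;
}

// GPU version: the loop must be rewritten as a CUDA kernel,
// where each thread handles one pixel of the grid.
__global__ void brighten_kernel(float *pixels, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pixels[i] *= gain;
}

// The caller also has to manage device memory explicitly:
// allocate on the GPU, copy in, launch, copy back, free.
void brighten_gpu(float *pixels, int n, float gain) {
    float *d_pixels;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&d_pixels, bytes);
    cudaMemcpy(d_pixels, pixels, bytes, cudaMemcpyHostToDevice);
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    brighten_kernel<<<blocks, threads>>>(d_pixels, n, gain);
    cudaMemcpy(pixels, d_pixels, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_pixels);
}
```

The CPU function is one portable loop; the GPU path triples the code before you even tune block sizes or handle errors, and it only runs on hardware with the right vendor stack.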
I know most CG artists have been saying for years that GPUs are the future, and we do have GPU render engines, but that has always been contingent on GPU makers pushing the technology forward and keeping it affordable. If GPUs start regularly having a 1.5–2 year life cycle like the GTX 1080 is having, can we really still predict that GPUs are definitely the future of CG rendering? Or will they remain mainly an augmentation for rendering and for a few specialty render engines?