Originally Posted by brasco
Or they could design the hardware to work with x86 like the Xeon Phi. Not sure which is more efficient though, I imagine the GPU industry has the better suited kit, but I like the idea of just dropping in a PCIe accelerator card for any CPU grunt work.
You can create all kinds of hardware to support any kind of calculation. There are just two questions to answer:
1. Will enough people buy it to keep the price down? (For graphic cards, that means "consumer segment".)
2. How different is the instruction set from a normal CPU, and what is the cost of supporting it?
If you need a second full CPU because your software targets the full CPU instruction set, then that CPU will be just as expensive as your main one. If you reduce the functionality, you (or rather, the software vendor) will need to compile the code differently (or maybe even adapt the source) to support this special instruction set. And if many different hardware platforms exist that each require such adaptation, it becomes unlikely that every piece of software supports them all.
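To make the "compile the code differently" point concrete, here is a toy sketch (my own illustration, not from any particular renderer): the same dot product written twice, once with SSE2 intrinsics for CPUs that have that instruction set and once as a plain scalar fallback. The preprocessor picks a path at compile time, so shipping for a platform without SSE2 means building a different binary.

```c
/* Same computation, two instruction-set targets.
 * The __SSE2__ macro is defined by the compiler when
 * SSE2 code generation is enabled (e.g. gcc -msse2). */
#include <stddef.h>
#ifdef __SSE2__
#include <emmintrin.h>
#endif

float dot(const float *a, const float *b, size_t n) {
#ifdef __SSE2__
    /* Vector path: process four floats per instruction. */
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc,
              _mm_mul_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    float tmp[4];
    _mm_storeu_ps(tmp, acc);
    float sum = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    /* Scalar tail for the leftover elements. */
    for (; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
#else
    /* Fallback path: works on any CPU, one float at a time. */
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
#endif
}
```

Multiply this by every vendor-specific accelerator ISA out there and you can see why software support, not raw silicon, is usually the bottleneck.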
While the thought is nice ("just do X and you can render cheaper and faster"), it is a fallacy. Rendering power doesn't fall from the sky, not even for "cloud" rendering.