Not very fast I’ll wager, but if it allows me to do look dev on my couch I’ll be ordering an M1 MacBook Air.
Macs could soon take performance leadership for 3d work
I agree with everything you’ve said.
People are getting excited without knowing what price will have to be paid for the privilege of owning an ARM based Mac Pro.
AMD construct their whole product stack from EPYCs down to the cheapest Ryzen with chiplets which means there’s virtually zero waste in silicon. AMD sells millions upon millions of these chiplets in their product stack so they have massive efficiencies of scale. Same with GPUs, nVidia and AMD sell GPUs in the millions.
How does Apple compete on those terms with their monolithic CPUs and their own discrete GPUs (discrete does not necessarily mean upgradeable), which they'll make in far smaller quantities than either AMD or nVidia? With such small quantities, how will Apple offer 8, 16, 24, or 32 core options? Maybe they'll use multiple SOCs? Whatever they do, the Mac market is still well under 10% of the enormous PC market, which Intel, AMD and nVidia are shipping millions of units into. So how does Apple compete on price?
Apple’s Afterburner card, a simple FPGA board for ProRes decompression, is priced at $2,000. How much will discrete GPUs cost if that’s the benchmark for Apple custom silicon?
By the time Pro-level ARM-based Macs ship, AMD and nVidia will be on 5nm too. We could see a 24-core Ryzen for $800, PCIe 5 GPUs for $1,000 MSRP, 64-core Threadrippers for $2.5k, and a new 96 or 128 core high-end version. I don’t think Apple will have any sort of advantage when you factor in price/performance.
There is one huge advantage to buying a PC, and that is knowing that you’re not lining the pockets of the particularly oleaginous Tim Cook.
I love using Macs, but I finally sold my Mac a while back and went PC again. It seems like I’m always waiting for the next thing to come to Apple. So if they do get these great chips and GPUs, how long until something competitive with the crazy RTX line arrives for 3D work?
But it feels like it’s a long way off. I can comfortably wait it out on the pc and see how the wind blows.
My plan?
I’ve been waiting to buy the new AMD 5950x and a pair of 3080 cards for my 3rd PC build. Just can’t find any inventory.
I’m overdue for a system upgrade and won’t wait on an Apple Silicon MacPro.
Am glad Apple is pushing industry forward w/innovation. And hate the fact that my new PC will likely be drawing over 1,000 watts when rendering! The future carbon tribunal will find me guilty as charged.
“I’ve been waiting to buy”…
Just get a Lenovo P620, off the shelf.
Immediately available, solid builds, “no LEDs or stupid crap”, and not bad prices. Available right now, no pissing around. “All you can eat.”
Put it to work, and move along.
Paul I appreciate the no-fuss of such a proposition, truly, but from what I’ve seen such a system would be much more expensive AND involve compromises.
-I want dual 3080s or 3090s, not Quadro cards as in that Lenovo system.
-I want the latest 5950X, as it features fast single-core performance plus 32 threads.
-I want control of the power supply and NVMe. I’ll likely be adding a third GPU to drive monitors, so I need a massive motherboard and a 1400-watt PSU, and may still need risers to connect all the cards.
-Quite possibly need to go open case
And…I don’t mind doing the assembly. Came into some family money this month, but want to spend thoughtfully.
Apple makes more CPUs (well, SOCs) annually than either Intel or AMD do in the PC arena, so let’s stop acting like they are some little upstart. (Apple makes, via TSMC, the SOCs for roughly 200+ million iPhones, iPads, and now M1 Macs. The entire PC market is around 260 million units annually; AMD has about a third of that share, and Intel has the rest.)
The M1 has a relatively small die size, so there is plenty of room for Apple to move on up to 8, 16, or more performance cores.
And the GPU is pretty interesting: much better performance than typical integrated graphics, roughly on par with an Nvidia GTX 1050. Again, the GPU cores are easily replicated on a larger die.
What is really interesting is the “terrible idea” of shared memory. It’s not optimal in conventional PC designs, but it confers huge advantages if you can get everything (good CPU, good GPU) on the same package. A massive amount of GPU render inefficiency comes from shuttling data from storage or RAM to VRAM. That completely goes away in a high-performance shared-memory design. James Orbach from Octane has already commented on those efficiencies, and indicates that Octane performance on M1 is really good…
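To make the copy-vs.-share distinction concrete, here is a toy sketch in ordinary Python (no GPU involved): a `bytes` copy stands in for the explicit RAM-to-VRAM transfer a discrete GPU needs, while a `memoryview` stands in for a unified-memory design where CPU and GPU address the same physical bytes.

```python
scene = bytearray(16 * 1024 * 1024)  # stand-in for scene data in system RAM

# Discrete GPU: an explicit copy into a separate "VRAM" buffer.
# In real life this crosses the PCIe bus and costs time and bandwidth.
vram = bytes(scene)                  # a second, independent allocation

# Unified memory: the "GPU view" is just another window onto the same bytes.
gpu_view = memoryview(scene)         # zero-copy; no second allocation

scene[0] = 42
assert gpu_view[0] == 42             # the shared view sees the CPU's write
assert vram[0] == 0                  # the copied buffer is already stale
```

The last two lines are the whole argument: with a copy, every CPU-side update has to be re-uploaded before the GPU sees it; with shared memory, there is nothing to upload.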
For a moment there I thought you were trying to disingenuously use false equivalence and strawman arguments. Oh wait, you did!
Shipping millions of phone-sized SOCs is no measure of how Apple will fare when shipping workstation-grade SOCs. I saw Apple’s leading shill suggest 128-core ARM chips may be planned; that won’t be small, and it will likely be a monolithic design, such that an error in the wrong place could write off the whole chip, not just a core as in AMD’s efficient chiplet design. AMD, nVidia and Intel are going down the chiplet route for CPUs and GPUs; there’s a reason for that.
To my knowledge no one here said shared memory was a ‘terrible idea’, but how does that work with the discrete GPUs that Apple have said they’re developing? Surely dGPUs which are not part of the SOC will not have shared memory. If I’m wrong and they do, then it’ll rule out PCIe-based discrete GPUs. Trashcan 2.0 incoming?
Who is James Orbach? Is that Jules’ less well known brother? I’d take anything OTOY says with a massive pinch of salt. Didn’t he say Octane on an iPhone was amazing too?
The 2019 Mac Pro looks like an anachronism, yet it is priced at 2-3 times the price of a well-engineered PC workstation which is objectively much faster. If Apple thinks it can rock up and pull the same silly pricing stunt with ARM chips and Apple GPUs, managing the odd benchmark win, it’ll end up being just as big a flop as the 2019 Mac Pro. The fanboys on forums might declare victory (none of them actually buy this stuff anyway), but everyone with sense will still go for the bang-for-buck option, and VFX shops will still buy PCs from suppliers that offer on-site support. More and more indie shops will continue their switch to the PC because that’s where the value is.
Infograph, I wasn’t responding to you, FWIW.
Workstation sized CPUs and SOCs tend to be very expensive, yes, often because their large die size leads to low yields. But that is true throughout the industry. And Apple will happily charge very high prices for their premium products, as I’m sure we can all agree.
And quite frankly, no one knows how Apple will deal with building their upcoming processors. Will they use chiplets like AMD? 3D packaging? Or simply leverage the expertise they have developed over the past decade?
Apple’s phone SOCs may have started out small, but they are hardly so now. And they seem to have transitioned from the 88 mm² A14 to the 119 mm² M1 fairly handily.
Maybe this effort will fail. Who knows. But I’m looking forward to seeing what happens.
[And my apologies to Jules Orbach for getting his first name wrong].
Good post, NWoolridge. You are correct in noting Apple’s position. No vendor has anywhere close to the experience Apple does with modern chip design.
x86 is not modern and it’s on the way out. We’ll still see them selling en masse for 3-5 years, but the writing is on the wall. AMD made a big splashy announcement this week: “Hey we’ve got an ARM-based processor! It’s ‘almost ready.’” Meanwhile Nvidia goes out and BUYS Arm!
ARM can simply do far more with less juice. And it can do more computations per cycle. Apple is way ahead right now with this chip architecture and their integrated business model is a massive advantage.
Yes Orbach from Octane has praised Apple’s chip architecture, saying it’s the performance-per-watt champ. No one who knows anything argues the point. Can they scale that? Wouldn’t bet against them.
All that said, I’m buying an x86 PC for my next computer. It’ll be my last x86 purchase.
I’ve been singing the praises of Apple’s SOCs on other forums. When I said Apple would ditch AMD GPUs and go their own way with their own in-house designs, I was descended upon by Apple fanboys saying I hadn’t got a clue what I was talking about, despite Apple making it very clear in their own developer documentation.
I’ve said many times that Apple’s SOCs are extremely interesting and will not be lacking in performance. I’ve had to explain to Apple fanboys on Blender Artists that an integrated GPU does not mean slow, using the example of the next-gen consoles.
When someone writes ‘No vendor has anywhere close to the experience Apple does with modern chip design’ you know they’re a clueless imbecile. This is not to trash Apple, but it’s simply an untrue statement. Apple license designs from ARM, and their GPU tech was acquired because they destroyed a British company to get it. Yay Apple, you’re so wonderful; let’s not let the truth about this vile company get in the way of the virtuous image it likes to project.
To say x86 is not modern, as if ARM is vastly more modern, is again utterly clueless; both architectures are rooted in the 1980s, and ARM is marginally younger by about five years.
Let’s bring some realism to the conversation and explore how Apple can scale up their designs from impressive phones and laptops to workstations. Can an SOC be made out of chiplets? It doesn’t look like it, as it is a system on chip which includes the system memory. Apple’s workstation SOCs may be very different from their portable SOCs and dispense with certain parts. My guess is they’ll use multiple SOCs connected via some high-speed bus to scale up, but that’s just a guess. I struggle to see how massively high core count SOCs can be fabricated affordably in such low volumes; the iMac Pro and Mac Pro are rounding errors in Apple’s computer sales, let alone the wider PC market.
Maybe Apple’s wafer costs will be spread over all their 5nm products, driving costs down thanks to huge volumes at TSMC, but how do they overcome yield issues as the SOCs grow in size? The only thing that makes sense is multiple SOCs, unless they are not going to be bound by price and the sky is the limit for the Pro Mac models, though any performance victory is a Pyrrhic victory if it comes at an absurd cost to the end user.
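The yield point raised here can be made concrete with the simple Poisson defect model commonly used for back-of-envelope die-yield estimates. The defect density and die areas below are made-up numbers purely for illustration, not real TSMC figures:

```python
import math

def die_yield(area_mm2, defects_per_mm2=0.001):
    # Simple Poisson defect model: probability a die has zero fatal defects.
    return math.exp(-defects_per_mm2 * area_mm2)

# Hypothetical comparison: one big monolithic workstation die vs. the same
# total silicon built as eight small chiplets.
big, small = 600, 75  # die areas in mm^2

# Fraction of silicon thrown away: a defect scraps the whole monolithic die,
# but only the one small chiplet it landed in.
waste_monolithic = 1 - die_yield(big)    # ~45%
waste_chiplet = 1 - die_yield(small)     # ~7%
print(f"monolithic waste: {waste_monolithic:.1%}")
print(f"chiplet waste:    {waste_chiplet:.1%}")
```

Because the defect rate compounds exponentially with area, a large monolithic SOC wastes several times more wafer area than a chiplet design of the same total size, which is exactly why pricing gets hard at low volumes.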
I’m interested to see how this pans out not because I want to go back to the Mac but because I just don’t understand the economics of it.
On the other hand, Nvidia is also making SOCs, certainly knows a thing or two about general computing, parallelism, graphics and AI accelerators, and now owns ARM.
Yes. Nvidia has made a very interesting pivot.
-They’ve been lead GPU player
-They are effectively pushing AI
-They’ve learned to tie in powerful software layers like CUDA, RT, GRID
-Now they own ARM
I’d put them ahead of Intel and AMD as the company most likely to dominate independent chip design as we move into the new era.
AMD is smoking it w/CPUs now, and will kick ass for the next year or two at least. But will hit the wall hard if they can’t transition effectively away from x86.
It’s hard to see a path for Intel unless they do something crazy with Graphene or some new unforeseen technology.
-Five years ago I was here yelling that 3d CPU rendering was doomed. It was.
-Four years ago told you Nvidia would grow bigger than Intel. It did. Intel was 4x bigger then. Investors now say Nvidia could be next trillion dollar company.
-Now here to say that x86 is in countdown mode. It is.
You want me on that wall. You need me on that wall. 
The future battle?
-Apple Silicon
-Nvidia’s Progress with ARM and integration
-Amazon’s ARM-based Graviton2 (server market)
-Open Source Risc-V
-Qualcomm
-Ampere
-AMD if they haul ass in transition from x86
-Fujitsu’s ARM (server market)
-Microsoft gets serious in custom chip market?
-Samsung mimics Apple again?
-Perhaps others in RISC…it’s a strange new world.
Someone is trying to reinvent themselves as some sort of techno prophet and appears desperate for attention.
Reading tech press and regurgitating company PR a Prophet does not make.
CPU rendering doomed? LOL, what an arse you’re making of yourself; the CPU is still the most used device for 3D rendering by a LONG way. The future is cloud-based hybrid rendering; any renderer limited to the GPU is walking dead long term.
So you’re the guy telling us x86 is in countdown mode. And? What do you want, a fucking medal? 
Apple license designs from ARM and their GPU tech has been acquired because they destroyed a British company to get it.
The first clause is false, and the second is tendentious.
Firstly, it is important to know that ARM has two major sets of IP: its instruction set architecture (ISA), and its reference CPU designs (microarchitectures). Apple does not license ARM CPU designs. Apple has an ARM architecture license (the most expensive and all-encompassing one), which allows them to implement the ISA on their own microarchitecture designs. Since 2012 they have been designing and improving their own microarchitecture.
This is why Apple was not interested in buying ARM, and why they couldn’t care less who bought ARM: they already have everything they need since they have an in-perpetuity right to use the ARM ISA in their own designs.
Apple’s microarchitecture is completely independent of ARM’s reference designs, and diverges specifically in the depth of its out-of-order execution queues and the width of its decoder blocks. Of course, Apple has also been able to add many special-purpose blocks to the processor, like ML acceleration units, image processing, and other capabilities integrated with their software stack.
Contrast with, say, Qualcomm, who, despite having an architecture license, just released the Snapdragon 888, which simply packages ARM reference designs (one X1 core, three A78 cores, and four A55 cores). They haven’t made an equivalent investment in new design work.
The tendentious part involves, I assume, Imagination Technologies. Apple licenses some of their IP, which I assume is somehow destroying them…?
I’m not going to play semantics with you; the ARM ISA is still a design that is licensed! FFS. Apple did not design this; it’s not their own work, is it?
If Apple was as set up for the future as you like to present, then they wouldn’t need to keep relicensing from ARM. Nor would Apple have been as unconcerned about the nVidia takeover of ARM as you like to present, given the history between nVidia and Apple. nVidia have said licensing terms will not change, but once the merger has been approved by regulators, watch things change.
Before throwing accusations of biased viewpoints around you should really do your homework better: Apple destroyed Imagination Technologies when it attempted an aggressive takeover. Imagination Technologies is no longer a British company; it is now owned by Chinese venture capital and is a shell of its former self, thanks to the sanctimonious corporation you’re such a fan of.
Despite failing in their takeover bid and destroying Imagination’s value, Apple still have to come cap in hand to license certain IP, because they couldn’t get hold of the patent portfolio at a knock-down price.
I’m not going to play semantics with you, the ARM ISA is still a design that is licensed!
Well, semantics are important, and in the context of a semiconductor discussion, it would surprise me if anyone knowledgeable about the area would read “Apple license designs from ARM” and think that the object of that sentence is the ISA.
Apple did not design this, it’s not their own work is it?
Well, since Arm Ltd. was founded as a joint venture between Apple and Acorn Computer, and Apple initially licensed the ISA when they had a huge financial stake in the company, in a sense it was.
Are you suggesting the ARM ISA was not designed and therefore not a design licensed by ARM?
A financial stake is not a design; it’s just a financial stake. Maybe you think Apple was preparing for this moment 30-odd years ago? Prescient, right?
And now nVidia will likely own ARM and take it down a path that benefits nVidia first and foremost. nVidia likes to sell silicon, and my strong bet is that licensing terms will change to reflect that. nVidia will want nVidia CPUs and GPUs in mobile tech, not licensed IP. nVidia is not ARM, and they are big enough to do to Apple what Apple does to others: our way or the highway.
Are you suggesting the ARM ISA was not designed and therefore not a design licensed by ARM?
Of course not. I would have thought that my meaning was clear, but I apologize if I was not specific enough. In the context of a discussion of CPUs and SOCs, where the whole focus of the discourse has been around how specific CPU and SOC designs have been derived and implemented, when someone makes reference to licensed “designs” (note the plural) the natural inference is that the speaker was referring to set of specific microarchitecture designs. While an ISA is a designed spec, I would never refer to it as “a design” or “designs”. To do so would be sloppy, and I was giving you more credit than that.
And, as much as they might like to, NVidia cannot retroactively alter the terms of an agreement their subsidiary entered into in good faith. Apple has the right to use the ARM ISA for as long as they see fit. If NVidia adds to or changes the ISA, Apple has no need to implement those changes unless, once again, they feel the need to. The fact that they use their own microarchitecture means that they are more or less completely independent of whatever NVidia does.