Link to the Tom’s Hardware page…
I have to say that I expected more. The MP speed is OK, more or less what was to be expected. The single-threaded speed, however, is pretty bad — something that will be noticeable in many user-interaction scenarios.
That’s 17.63 for a single 12 core CPU.
I don’t think this is due until next year.
I’d welcome the extra cores, but the single core performance is worse than what I get now.
Hopefully there’s a higher end offering at some point in the near future.
I wonder if they expect the second GPU (which isn’t used for display) to pick up the processing slack. It seems like a machine designed for software that isn’t out yet. The fact that, as of right now, it’s an AMD-only GPU doesn’t bode well for other, more common GPU-accelerated programs, either.
I bought a machine like that before - the Power Mac G5. I kept being told that software would be more and more optimized for that processor and I never really saw a significant gain over my older machine before they switched to Intel and made that machine obsolete.
I’m more hesitant than ever with Apple’s new “pro” line — I’m going to wait and see how it works with the software I use every day.
Well, at least you had a whole-house heater with the G5. Plus, it made neat airplane sounds 24/7.
Well, the CB score might not be as high as you’d like - though it is faster than anything else I see on that chart - but that’s hardly Apple’s fault. They don’t design or manufacture desktop processors (yet), so they are beholden to whatever Intel comes up with. AFAIK, AMD isn’t doing much better in this space.
Regardless of expectations, the bottom line is that the new machine is most certainly a significant improvement over the outgoing Mac Pros with respect to speed, especially when you factor in the other parts of the machine — from faster RAM and PCI-E flash-based storage (which absolutely trounces SATA), to the secondary GFX card which, as mentioned earlier, is not used for driving displays at all. For those of us who use Macs in our professional workflows, this new generation of machines is a welcome addition.
So this is only a test of a prerelease version of the Xeon chip — not in the actual new Mac Pro, nor running OS X Mavericks. That means we have to take these scores with a very large grain of salt.
Ugh that single core is disappointing.
I think Maxon is really going to have to start focusing on multithreading the core, because as these CPUs continue to go for more cores and less power consumption, we will continue to see single-core performance stunted. This is why we’re seeing so many overclocked 3930Ks, which have around the same multicore performance as current 12-core systems while having far better single-core speed.
It was you who made me think twice about my 2.7GHz 8-core CPU. But when I parted ways with the two of them in favor of the 3.1GHz version, I barely noticed a difference. My overall benchmark went from about 23.4 to 24.6, and I was out of pocket by about $800. My point — minus the blame part — is that while it would be nice to overclock all of those Xeon cores up to 4.0GHz, the overall performance dip isn’t that dramatic when we’re talking a few hundred MHz.
Sadly, the past is a pretty good indicator that the OS, the mainboard, etc. do not have an influence beyond a few single-digit percentage points. If this is the processor the new Mac Pro will get, this is the performance you will have to live with.
But a month ago there was another Geekbench score for the new Mac Pro, and that one was very disappointing. This new test is already much better. I’m keeping my hopes up for the real thing when it’s officially released. But OK, as you write, these could also be the best scores it can produce…
I wouldn’t be surprised if it hasn’t been decided yet which exact chip will be used; chances are there will be a range of CPUs to choose from anyway. Where the CPU that was tested here fits into that range, only Apple knows.
2.7 to 3.1 — sure, not a world of difference there. But if you’re overclocking, then you’re probably going from 2.7 to over 4GHz, and then you really notice it.
Yeah, I don’t mean to be anti-Apple (as I type this on my MacBook Pro), but that is very disappointing performance — especially the single-threaded.
The 3930k system I built quite some time ago is very very close to the same multi processor score with half the processors, and sizably faster in the single threaded. It’s honestly been a fantastic machine.
What IS interesting to me is the fact that you can put two of those new Xeons in a system. Then your Cinebench is like 34 before overclocking. THAT is interesting.
Yeah, but you’re arguing just the multicore processing point. I’m arguing speed within C4D itself. Rendering is important, but one can always choose other options in a crunch, like render farms or, soon, the new Team Render. For render boxes, sure, go for the best multicore. But for a personal workstation, where so many tasks in C4D and other apps are single-core, a good graphics card and good single-core speed are key. Every time you run a simulation, a complex set of deformers, or playback of a complex scene, you could have more cores, but they aren’t helping you as much. And as CPUs continue to go the way of more cores in the 2-3GHz range instead of 4GHz, that seriously sucks.
As Mash said, overclocking by 0.2 to 0.4GHz may not be much, but most overclocked systems can easily reach the 1GHz range. The low-end overclock on the 3930K (stock 3.2GHz) is 4GHz, with most average ones hitting 4.2 (a full 1GHz OC) to 4.4, and the typical lucky ones reaching 4.5 or 4.6. Being a six-core CPU, it’s the best bang-for-your-buck reliable overclocker too, as you not only get the extra 1GHz of single-core speed, but you achieve it across 6 cores — typically raking in an extra 6GHz of render power and around 2.5-3 points on your CB score. Most notable motherboards now have an overclocking button that automates the process and favors reliability over hardcore speeds; the one I use typically gets people to 4.2-4.3 with the click of a button and no knowledge of overclocking.
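To make the math above concrete, here is a quick back-of-the-envelope sketch of the “extra 6GHz of render power” claim. It assumes render throughput scales roughly linearly with aggregate clock (cores × GHz), which is only an approximation — real Cinebench gains depend on memory, thermals, and scene content.

```python
# Rough arithmetic behind the 3930K overclock described above.
# Assumption (not from the original post): render throughput scales
# roughly linearly with aggregate clock across all cores.

CORES = 6
STOCK_GHZ = 3.2   # 3930K stock base clock
OC_GHZ = 4.2      # the "full 1GHz" overclock mentioned above

extra_per_core = OC_GHZ - STOCK_GHZ
extra_aggregate = extra_per_core * CORES  # extra "render power" in GHz

print(f"Extra per-core clock: {extra_per_core:.1f} GHz")
print(f"Extra aggregate clock across {CORES} cores: {extra_aggregate:.1f} GHz")
```

Under that linear-scaling assumption, a 1GHz-per-core bump on six cores gives roughly 6GHz of additional aggregate clock, which matches the 2.5-3 point Cinebench gains the post describes.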
This is the approach I went with: bought a day-one 3930K and OC’d it. Unfortunately it’s not the best chip (it requires a lot of Vcore), but it happily does 4.5GHz 24/7 under water. Phase change is my next stop, since Intel has no competition and seems to only be pushing more cores :rolleyes:
Hopefully there will be a crossover soon when older software updates to more modern/alternative algorithms that utilise the newer hardware.
I bought the SR-X motherboard for its massive array of features in the overclocking department, only to discover that the only Xeons that fit the socket have locked multipliers and cannot be overclocked manually. However, the E5-2687W does have Turbo Boost, which will clock a single core up to 3.8GHz. If there are performance differences between this and my 6-core gaming machine, I don’t notice them.
I test renders constantly throughout the production process, obviously this extra processing power allows me to test material and lighting changes with much less waiting time. It has, as I frequently over explain to my wife, made a world of difference. If the socket stays the same for this next generation of Xeons, I’ll be upgrading at the earliest convenience. But also keeping an eye on per core performance, which I’m sure will improve.
I stopped geeking out over hardware a while ago, but I doubt we will see much progress on actual core speeds. It seems like that department has pretty much flatlined. I mean, the average chip is still 2.0-2.7GHz, just with more cores, and it seems to have been that way forever in tech time. It seems like they are only making headway by shrinking the die, which means the same speed but more space for more cores OR less power consumption.
I somewhat made that mistake too. Although they’re still i7s, and although the clock speeds remain much the same, each generation has been given a 10-15% speed bump. My new 2.7GHz laptop renders at the same speed as my 5-year-old 3.8GHz desktop, for example.
I found the 4770K @ 4GHz to be surprisingly fast in single-processor speed. It’s the fastest editor feedback I’ve ever encountered in C4D; it might even be faster if I weren’t using a three-year-old GPU.