First AMD Threadripper Cinebench R15 results published

  07 July 2017
Originally Posted by sirio: Be aware that only a small part of CPU-monkey's Cinebench scores actually come from real CB tests. Especially for new CPUs they use their in-house system to estimate the score, but it's just hypothetical and can be wrong by a significant margin.

Well, that's good to know. :(


Quote: With OpenCL render engines such as AMD ProRender, BOTH the GPU(s) and CPU(s) can be utilized simultaneously. Best of both worlds


Agreed! But it's still going to take some time before these options are up to par with the competition in both speed and features.
 
  07 July 2017
Originally Posted by sirio: Be aware that only a small part of CPU-monkey's Cinebench scores actually come from real CB tests. Especially for new CPUs they use their in-house system to estimate the score, but it's just hypothetical and can be wrong by a significant margin.


Um - if that is true, I would call it fraud. If someone writes "Cinebench" on their tests, I fully expect them to run a real Cinebench and not somehow guess at a number.

I wonder what Maxon says about that? It's their trademark after all.
 
  07 July 2017
I am not aware of any misconduct with regard to Cinebench. Where Sirio got that information from is a mystery to me. Sirio: please don't hesitate to contact me privately on this.
It happens that CPU manufacturers use CB on alpha or beta systems that do not find their way into production, but in all the years I have followed this there has never been an instance of actual misuse. There was a bit of trouble years ago when GPU manufacturers tried to outwit the OpenGL benchmark, but that hole was quickly plugged.
__________________
- www.bonkers.de -
The views expressed on this post are my personal opinions and do not represent the views of my employer.
 
  07 July 2017
@Srek
I read it somewhere on their site. As a matter of fact, they had Threadripper (and Epyc) results long before the chips were officially released, and they still report above 3400 points while the average result is "only" slightly above 3000. From what I've seen, a 2x Epyc system will reach nearly 6900 points (~20% more than a 2x 2699 v4); I've also seen much lower scores from preproduction samples, BTW.

I believe that with ever more powerful CPUs, Cinebench needs a longer/more complex test, because the run time has become too short for the new many-core CPUs. If you run the test three times you can get quite different results; e.g. with 2x 2699 you can easily get anywhere from 5400 to 5800 on the same machine. A longer test would probably give you a more precise idea. Other rendering benchmarks, such as V-Ray's, give comparable results (~20% more than a 2x 2699 v4) for an Epyc system: https://benchmark.chaosgroup.com/cpu
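
To put that run-to-run variation into numbers, here is a minimal sketch using only the 5400-5800 range quoted above (everything else is illustrative):

```python
# Run-to-run spread of the dual E5-2699 v4 scores quoted above.
low, high = 5400, 5800                      # observed Cinebench R15 score range
spread = (high - low) / ((low + high) / 2)  # range relative to the midpoint
print(f"run-to-run spread: {spread:.1%}")   # ~7.1% on the same machine
```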
__________________
www.3drenderandbeyond.com
www.3dtutorialandbeyond.com
www.facebook.com/3drenderandbeyond

Last edited by sirio : 07 July 2017 at 04:30 PM.
 
  07 July 2017
Cinebench results can vary; I would recommend at least 5-10 runs if you are after reliable numbers.
That said, serious deviations should only occur if things like antivirus scans, app updates or similar background activity happen during the test.
Even the short time it takes modern multi-core processors to finish the MP test is sufficient for an accurate result.
Still, I would run it multiple times to rule out thermal problems etc.
As for pre-production CPUs, I tested quite a few from Intel over time and refrained from posting results on them, since the shipping product almost always performs differently. Intel alpha and beta processors were usually easier to overclock and often ran at higher clock rates than the finished product would. To this day, pre-production Xeons are sought after for overclocking purposes.
It might be that pre-production AMD CPUs are the same; if so, it is simply unprofessional of anyone to publish such results without referencing the exact specs and conditions of the CPU and the test environment.
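
For anyone who wants to follow that advice systematically, here is a minimal sketch of a multi-run wrapper. The binary name and flag are placeholders (Cinebench's exact command-line invocation varies by version and platform), and it times each pass rather than parsing the score, since score output is version-specific:

```python
import statistics
import subprocess
import time

BENCH_CMD = ["./cinebench", "-cb_cpux"]  # placeholder binary and flag
RUNS = 10                                # 5-10 runs, per the advice above

timings = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(BENCH_CMD, check=True)        # one full benchmark pass
    timings.append(time.perf_counter() - start)
    time.sleep(60)  # cool-down, so thermal issues show up between runs

print(f"mean:   {statistics.mean(timings):.1f} s")
print(f"stdev:  {statistics.stdev(timings):.1f} s")
print(f"spread: {(max(timings) - min(timings)) / statistics.mean(timings):.1%}")
```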
__________________
- www.bonkers.de -
The views expressed on this post are my personal opinions and do not represent the views of my employer.
 
  07 July 2017
They do report that some CPUs are preproduction samples, indeed. Anyway, the bottom line is that, realistic performance expectations aside, both Threadripper and Epyc are very good CPUs, and when you factor in the price there is no competition from Intel right now. I'm planning to add at least two more Threadripper slaves in about a month; as soon as they are ready I'll report my specs and CB results.
__________________
www.3drenderandbeyond.com
www.3dtutorialandbeyond.com
www.facebook.com/3drenderandbeyond
 
  07 July 2017
Originally Posted by tapaul: I am looking forward to this as well.
I got the dates wrong - Alienware is announcing the systems on July 27 for preorder. I'm hearing Threadripper will appear elsewhere in early August. Of course, at that point they will be hard to find.

I wish Alienware/Dell had Ryzen systems that support two GPUs, but I guess I might as well build one.
 
  07 July 2017
Originally Posted by hvanderwegen: With OpenCL render engines such as AMD ProRender, BOTH the GPU(s) and CPU(s) can be utilized simultaneously. Best of both worlds - smart move by Maxon: everyone wins. Imagine having two fast GPUs and two Epyc CPUs working together... :-)

According to leaked SiSoft Sandra tests that are plastered all over the internet, a 2x AMD EPYC system reaches only 1,242 GFLOPS total.

Compare that to the 11,500 FP32 GFLOPS achieved by a single GTX 1080 Ti - nine times the power of a dual AMD EPYC system, in a single graphics card.

I tried to explain this in some earlier postings on this forum - GPUs are far more powerful for parallel processing math ops than even the beefiest CPUs.

When I switched my video processing work - which is all just highly multithreadable floating point math operations - from a Core i7 CPU to an entry level Nvidia GPU on the same system, my algorithms really started to fly.

When you compare the relative power of computing hardware, always look at the 16-bit and 32-bit precision GFLOPS numbers. They are on tech sites, Google, Wikipedia and elsewhere.

GFLOPS is not a perfect measure of how a system will perform under heavy load, and the efficiency of parallelizing the algorithms across many cores will have an impact.

But a 1,000 GFLOPS CPU, for example, has no hope at all of getting anywhere near the computing performance of a 10,000 GFLOPS GPU if what you are doing is essentially parallelized math calculations.
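
As a back-of-the-envelope check on those figures: theoretical peak FP32 throughput is roughly cores x clock x FLOPs per core per cycle (an FMA counts as two FLOPs). A minimal sketch, using the GTX 1080 Ti's published specs and the leaked dual-EPYC figure quoted above (the EPYC number is a reported benchmark result, not a spec-sheet peak):

```python
# Peak FP32 ~= shader cores x boost clock (GHz) x 2 FLOPs per FMA per cycle
gtx_1080ti = 3584 * 1.582 * 2   # ~11,340 GFLOPS from the published specs
dual_epyc = 1242                # leaked SiSoft Sandra result quoted above

print(f"GTX 1080 Ti peak: {gtx_1080ti:,.0f} GFLOPS")
print(f"GPU-to-CPU ratio: {gtx_1080ti / dual_epyc:.1f}x")  # ~9x, as claimed
```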

EPYC is significant for server and workstation applications that are stuck in CPU land - either the tens of millions of lines of software code running on those systems are CPU-only and infeasible to port to GPU without spending millions of dollars and years of work on the porting. For example, porting Windows Server to run on a GPU-only system with no CPU would be a gargantuan task.

Or the computing tasks are of a nature not very suitable for current GPU architectures. If you are running a data-center server, a dual AMD EPYC might smoke a Titan Xp, simply because GPUs are not currently designed to handle that kind of task as efficiently as a server CPU.

But the GFLOPS numbers do matter. A 1-teraflop CPU cannot smoke a 10-teraflop GPU in a computer graphics task like real-time video processing or 3D rendering.

It may smoke a 10-teraflop GPU in an internet server or database task that requires 128 GB of RAM, relies on other things CPUs are good at, and is in general well suited to a CPU architecture and not so much to a GPU architecture.

The next 32-core Xeon coming from Intel is rumored to have FPGA capabilities built in to compensate for some of this. There you may see a server CPU that - because of its programmable FPGA core or cores - may be able to challenge some mid-range GPUs in things like rendering. But you'd need to rewrite the rendering code to take advantage of the FPGA cores.

I think Intel may have acquired FPGA maker Altera for $16.7 billion for one chief reason: GPUs are getting so fast at some tasks that CPU architectures simply cannot scale up to meet the challenge.

So Intel is getting around the problem by designing Xeons that can offload some very intensive computational tasks to programmable FPGA cores.

An FPGA can be programmed to behave like a custom chip (ASIC) that is then very efficient at specific tasks - like mining Bitcoin, or monitoring a stock price.

So in 3D rendering and related areas, we may eventually see a shootout between plain vanilla CPUs, new CPUs with FPGA capabilities built in, GPUs as we know them today, and new GPUs with CPU-architecture-like or even FPGA-like capabilities.
 
  07 July 2017
Alienware Area-51 Threadripper edition is now available to configure. They only have the 16-core option:

http://www.dell.com/en-us/shop/prod...rea51-r3?~ck=bt

Starts at $3000. I was also looking at an iMac, so that pricing isn't too insane. But I would have settled for the 12-core and a lower price.

Alienware/Dell is getting Threadripper exclusively. If you want other options, it's build it yourself or use a custom builder like Maingear or Digital Storm.

EDIT: Maingear is up, too. A little better pricing, I think. Still around $3000 when adding things like a decent video card, 32 GB of RAM, and liquid cooling.
https://www.maingear.com/promotions...hreadripper.php

Last edited by BubblegumDimension : 07 July 2017 at 02:41 PM.
 
  07 July 2017
Originally Posted by BubblegumDimension: If you want other options, it's build it yourself
https://www.maingear.com/promotions...hreadripper.php
You know Chris, the difficulty level of assembling a computer is about as high as that of making a western omelette.
__________________
Wut?
 
  07 July 2017
Originally Posted by laurent: You know Chris, the difficulty level of assembling a computer is about as high as that of making a western omelette.
Yeah, I built my last one. Stuff went wrong with it on first startup. It wasn't much fun troubleshooting that as a newbie switching from a Mac after 10+ years!
 
  07 July 2017
Originally Posted by BubblegumDimension: Alienware Area-51 Threadripper edition is now available to configure. They only have the 16-core option:

http://www.dell.com/en-us/shop/prod...rea51-r3?~ck=bt


I just noticed that the Alienware system's weight is listed as "4. Starting weight: 61.73lbs (28kg)*"

Don't know if that's a mistake on their webpage or not, but that is one heavy workstation case if true. =)

Put that on an IKEA plywood desk and it'll break, ha ha. =)
 
  07 July 2017
Originally Posted by skeebertus: I just noticed that the Alienware system's weight is listed as "4. Starting weight: 61.73lbs (28kg)*"

Don't know if that's a mistake on their webpage or not, but that is one heavy workstation case if true. =)

Put that on an IKEA plywood desk and it'll break, ha ha. =)
There are super-heavy beasts out there, for sure.

I'm a bit disappointed by Alienware's offering. It's big and heavy, starts at $3000 with a mid-range GPU, and only has 4 RAM slots. This machine quickly shoots past $4000. Once parts become available in August, I guess we will know how it compares in price to a self-built machine.
 
  07 July 2017
Originally Posted by BubblegumDimension: There are super-heavy beasts out there, for sure.

I'm a bit disappointed by Alienware's offering. It's big and heavy, starts at $3000 with a mid-range GPU, and only has 4 RAM slots. This machine quickly shoots past $4000. Once parts become available in August, I guess we will know how it compares in price to a self-built machine.
When I was responsible for Maxon IT, I made the mistake of ordering an Alienware system exactly once. They do what you want, but they are clearly designed for your average 15-year-old, or for what the Dell marketing department thinks a 15-year-old wants. For everyone else they are huge, ugly, impractical lumps of metal and plastic.
Before that and ever since, all high-performance single-CPU systems here have been self-built.
__________________
- www.bonkers.de -
The views expressed on this post are my personal opinions and do not represent the views of my employer.
 
  07 July 2017
Originally Posted by Srek: When I was responsible for Maxon IT, I made the mistake of ordering an Alienware system exactly once. They do what you want, but they are clearly designed for your average 15-year-old, or for what the Dell marketing department thinks a 15-year-old wants. For everyone else they are huge, ugly, impractical lumps of metal and plastic.
Before that and ever since, all high-performance single-CPU systems here have been self-built.
Thanks for the heads-up. I'm annoyed that Alienware/Dell is the exclusive OEM partner for Threadripper. Build it myself it is, if I can get the parts. God knows Intel doesn't have enough of their current processors to go around.

Last edited by BubblegumDimension : 07 July 2017 at 08:17 PM.
 