I don’t either. Just needed some fluff for my opening post.
The Xeon W-3175X is definitely the best chip on the market right now in terms of core count.
But it’s so f’in typical of Intel to make their best chip a single socket beast.
The alternative chip is the Xeon Gold, for a max of 48 cores at the moment.
Definitely better on the CPU front to have 48, but core for core they don’t compare to the W-3175X.
Intel plays these games because they know if you want more cores, you have to pay more.
You could go dual 56 core Platinum and max out at 112 cores, if you have 24k for processors alone.
Plus working at an abysmal 2.6GHz (3.6GHz max turbo). That would feel like Christmas for rendering, but wouldn’t be the best bet for everyday shit.
Truly, if I had the money, I’d get a GPU beast with a single W-3175X, and a dual Platinum rig for V-Ray/Corona.
Definitely nice to see AMD back in the game. I just bought about 10k’s worth of stock, because I think it’s a sure bet that they’ll win back a big share of the market in the next 3-4 years. I just don’t think it’ll happen overnight.
I was telling friends back when AMD stock was $12…BUYBUYBUY.
AMD’s 7nm translates to great clock speeds. I just want to see that come to Threadripper.
I think we’ll see 32-core/64-thread still under $1,800…maybe 48/96 around $2,400.
I don’t want to invest over 2k in a CPU when I know I’ll be GPU rendering.
…not relevant post…
Wrong thread. NoseMan knows better.
Cited as having “… seen 20% faster GPU render performance in Cinema 4D than on a Windows workstation maxed out with three of the latest Nvidia Quadro cards.”
I have little doubt that’s with ProRender.
I won’t be building a system in the next 4 months at least, so I’ll keep an open mind. But color me skeptical…both about the performance and about a price of under $12k for a suitable config.
Again I don’t want to get into platform war discussions. This thread is for PC folks.
@Noseman: How is that helping for the Windows users?
Trying to stay on the bleeding edge is always going to bleed your wallet dry. I personally always try to stick to the “next best” option, which is usually much cheaper and does 80% of the same stuff.
I’m in the market for a new workstation myself, probably during the summer. You can build pretty decent stuff from €2,000, and I don’t think anything past €4,000 is really worth it for most people (VAT and monitor not included): 24 cores, 64GB of RAM, dual RTX 2080, 1TB SSD + 8TB storage…
What percentage of people here need more than that on a day to day basis?
Who really needs (not wants!) a $10,000 machine or more?
I mean if I REALLY need some extra render power, I just outsource stuff to a render farm anyway. There’s no way I will ever beat 3000 CPUs.
didn’t read the OT carefully. Apologies for that…
“This thread is for PC folks”
How does that work? PC folks (including you) can post in Mac topics, quite often just to troll (not referring to you), but a Mac user can’t post in a PC topic? At least show some “tolerance”.
I own both, so I guess I’m allowed to post here as well.
Last time I looked my Mac was a personal computer.
AMD is the only CPU maker currently offering PCIe Gen 4. Intel doesn’t have it.
This will bring storage speed that screams so loud you can hear it out in space. Again…this is only for the few who own a new AMD motherboard.
Sequential read and write speeds of 4,950MB/s and 4,250MB/s, respectively. That’s ten times the performance of many SATA SSDs, and fifty times faster than some hard disk drives.
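As a rough sanity check on those ratios (a quick sketch; the ~550MB/s SATA SSD and ~100MB/s hard drive figures are my own ballpark assumptions, not from any spec sheet):

```python
# PCIe Gen 4 NVMe sequential read from the quote above, in MB/s.
nvme_read = 4950
# Assumed ballpark figures for the comparison drives (not measured).
sata_ssd = 550   # typical SATA III SSD
hdd = 100        # typical 7200rpm hard drive

print(f"vs SATA SSD: {nvme_read / sata_ssd:.0f}x")  # roughly 9x
print(f"vs HDD:      {nvme_read / hdd:.0f}x")       # roughly 50x
```

So “ten times” a SATA SSD is only slightly generous, and “fifty times” a hard drive checks out against these assumed baselines.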
RAM speed on the Intel Xeon is capped.
News flash: the upcoming Ryzen 3000-series CPUs may support memory up to DDR4-5000 (5,000MHz).
For comparison: the RAM speed on the Intel Xeon W, featured in a certain computer announced recently:
Redshift V3 is available now and it features support for Nvidia RT.
The 2080 Ti was already 30% faster than the 1080 Ti, but now this Redshift RT support will take that speed and bump it up another 15-100% (depending on scene).
And even as an Octane guy I’ve got to concede: Redshift is FAAAAST.
Hmmm…how many 2080 Tis can I afford? Or should I wait for the 3080 generation?
I would say it also depends on what kind of projects you do. Going from 30 sec to 15 sec is no big deal; going from 1 hour to 30 minutes is.
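The relative speedup is the same in both cases; it’s the absolute time saved that you feel. A toy calculation (the frame times are just the examples above, nothing project-specific):

```python
def time_saved(seconds_per_frame, frames, speedup=2.0):
    """Seconds saved over a whole job at a given speedup factor."""
    return seconds_per_frame * frames * (1 - 1 / speedup)

# A 30 s frame dropping to 15 s saves 15 seconds...
print(time_saved(30, 1))      # 15.0 seconds
# ...while a 1 h frame dropping to 30 min saves half an hour.
print(time_saved(3600, 1) / 60)  # 30.0 minutes
```

Multiply by a few thousand frames and the second case is the difference between delivering today and delivering next week.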
You win!!! Only at the pissing contest though. Funny, you opened this thread to avoid that…
If you want ECC RAM, that’s what you get, no matter whether you use Xeon, Threadripper, or Epyc.
Faster RAM isn’t that important for most tasks (please name one task in your pipeline that benefits significantly from it, benchmarks included). Gamers will love it though, especially if it comes with shiny RGB LED lighting.
Threadripper specifically needs fast RAM to excel because of an architecture shortcoming (the bus between the chiplets runs at RAM speed); Intel uses monolithic dies, so it doesn’t have this issue.
Ryzen has Infinity Fabric, which is impacted by RAM speed.
AMD and Intel CPUs?
If a person wants to know who is winning the PC performance battles…it’s usually a good idea to track what the enthusiasts are buying.
Overwhelmingly they are buying Nvidia for GPU
Overwhelmingly they are buying AMD for CPU
ECC RAM for what? I’ve never had memory corruption problems, and if my PC crashes, no lives or fortunes will be lost.
This “Shortcoming” is actually the strength of Ryzen.
Intel has acknowledged that its future designs are based on interconnected chiplets.
The best bet may be to keep the powder dry till next year, when Nvidia will release 7nm GPUs. There will certainly also be a bump in CUDA cores and RT cores. The Ampere-family 3080 Ti should be one hell of a card.
Of course that release time could drag out so there is risk in waiting…
Meanwhile there are rumors AMD could launch a 64 core/128 thread Threadripper in Q4.
I’d likely look at the 32-core version of that chip. If that product isn’t 7nm, I’ll likely go for faster clock speed, save money, and opt for the 16-core 3950X.
Was looking at benchmarks for the new Ryzen 3900X. This product is a 12-core, 3.8GHz (4.6GHz max boost) part. Price: $499.
Cinebench R15 score for this little guy is 3104. That’s with the 3950X still a few months away and Threadripper coming later this year. The 3950X, with 33% more cores, should land over 4000 in Cinebench.
For comparison my current 8 core/16 thread PC lands around 1250 CB score. My 2009 dual Intel was around 1000.
Current: 1250 CB
3950x: >4000 CB
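For what it’s worth, that >4000 figure is roughly what naive core scaling predicts from the 3900X score. This is a back-of-the-envelope sketch only; real multi-core scaling is sub-linear and boost clocks differ, so treat it as an upper bound:

```python
# Naive core-count scaling from the measured 12-core 3900X score.
r15_3900x = 3104
cores_3900x, cores_3950x = 12, 16

estimate_3950x = r15_3900x * cores_3950x / cores_3900x
print(round(estimate_3950x))  # about 4139, consistent with "over 4000"
```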