New machine, is this worth it?


Hi there,

I’m considering building a new box. I currently have two Xeon E5-2620 v2 CPUs (6 cores, 2.10 GHz each) and I’m thinking of buying an Intel Core i9-7920X X-Series at 2.9 GHz.
The other components are the usual: an SSD, 32 GB of RAM, and I’ll upgrade from my current Quadro (stable, but slow) to a GTX 1080 Ti.

Do you think this is worth it?
My applications are C4D, Arnold, Adobe CC, Affinity, Blender 3D.


Yup, you will notice a huge difference. That i9 has a single-core Turbo Boost to 4.0 GHz, I think, which will basically double your viewport performance.

The 1080 Ti won’t do much for you in Arnold. If you aren’t planning on serious 4K gaming or using GPU renderers, it’s overkill; a 1070 will be more than enough. But if you use Redshift, Thea, or Octane, that 1080 Ti will scream :))


Thank you for your reply, Aleksey.
The 1080 is because I’m seeing a lot of software going in that direction, and I think at some point even Arnold will take advantage of the GPU. I remember they already posted a render a while back showing their GPU sample.
So, GPU aside it seems it’s an update I could benefit from. Good to know!


Have you looked at a Threadripper 1950X? Higher total Cinebench score and lower cost, though probably lower single-thread speed.


Thank you for chiming in, Luke.
I actually considered AMD, although I still think I’m going with Intel; maybe it’s a matter of habit :slight_smile:


Hey, believe me, I do love the TR 1950X; really good performance for a nice price. I rendered my last project on it (car stills) and it was really fast.


I would go for the Threadripper too; the 7920X costs more and renders slower. Do the math.


I am waiting for new iMac Pro…!


I would personally stay with Xeon. The v4s are definitely much quicker chips than the v2s, and you can build a powerful 20-core machine for under $5k.


I knew this was going to get confusing :slight_smile:
Some of you guys are happy with the AMD. Where can I find the Cinebench scores online? The few websites I have found are not updated with the i9-7920X, so I can’t really compare its results with the Threadripper’s.
That said, on CPU Benchmarks, the winner is the i9-7920X with a score of 23,556 vs. the Threadripper, which scores 21,977 (five positions below). By the way, the i9 scores a tad higher than the v4, which costs much more.

Also, I think another important aspect to take into account is how many applications take advantage of multiple cores. That is something I don’t know and I would be curious to learn about.


"That said, on CPU Benchmarks, the winner is the i9-7920X with a score of 23,556 vs. the Threadripper which scores 21,977"

The top-end Threadripper costs less than the i9-7920X, and there’s no way a 7920X can beat a 1950X in a well-multithreaded application; even the slower 1920X can match the 7920X’s score for rendering, and it costs barely half the price.

You should not look at generic benchmarks; check rendering benchmarks from Cinebench R15, Corona, V-Ray, etc., and then you will have a realistic expectation of what you will get. Cinebench in particular also gives you the single-threaded score, which is the most important number for single- or badly-threaded apps (in fact, 90% of a typical workflow uses barely a couple of cores when not rendering or simulating). In this regard, both the Threadripper and the i9 will easily outperform any Xeon out there (usually by a significant margin); to get both high clock speeds and a large core count on Xeons, you will pay a lot more than for prosumer CPUs.


It depends on your priorities. Looking at performance, the 12-core i9-7920X gets 190 in Cinebench single-core and 2450 multithreaded, whilst the Threadripper 1950X gets a somewhat slower 165 single-core, but more than makes up for it with 3000 multithreaded when rendering. So if it’s all about rendering, the TR is the easy choice to make, but everything else you do will be about 20% slower due to the lower single-core clocks.

If you plan to overclock, then you can bump the i9 up to about 3000 CB and the TR up to about 3400 CB.

At $1200 for the i9 and $800 for the TR, plus the extra difference in motherboard costs, you’ll be paying about an extra $500 overall for the Intel system. The only real question is how much you personally weigh render speed vs. everything else.
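To make that trade-off concrete, here is a quick back-of-the-envelope sketch using the Cinebench figures and the (approximate, US launch) prices quoted above:

```python
# Rough price/performance comparison from the numbers in this thread.
# Scores are Cinebench R15 multithreaded; prices are approximate USD.
chips = {
    "i9-7920X": {"multi": 2450, "price": 1200},
    "TR 1950X": {"multi": 3000, "price": 800},
}

for name, c in chips.items():
    ratio = c["multi"] / c["price"]
    print(f"{name}: {ratio:.2f} CB points per dollar")
# i9-7920X: 2.04 CB points per dollar
# TR 1950X: 3.75 CB points per dollar
```

So purely on rendering throughput per dollar, the TR comes out well ahead; the i9’s case rests entirely on its single-core advantage.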


Thanks for the info, Matthew.
I’m not sure about overclocking it, especially the Threadripper, as it already generates more heat than the Intel (and also needs more power).

As for the main use, besides the apps I already mentioned, yes, I’ll use it for renders, mainly Arnold. But there are other things I’m considering, like X-Particles, which I know runs better on i7/i9 CPUs than on Xeons.
And then possibly DaVinci Resolve (if I’m not mistaken, another GPU-optimized piece of software) and maybe Fusion (trying to ditch Adobe altogether).


If you’re using Arnold, you’ll definitely want the higher Cinebench score.

As for X-Particles, I have dual 3.1 GHz Xeons, and I’m very content with XP speed.


I know you know your stuff on hardware but :
165 vs. 190 single-core: that makes the Threadripper just 14% behind on single-core, right?
And 3000 vs. 2450 leaves the Intel 18% behind on multithreaded.
I’m not sure the Intel makes a convincing case even at the same price, let alone at 50% extra.
Still, that’s the chip only. The supporting mobo & RAM are likely similar in price for both, and I guess that dilutes the importance of the price difference on the chip.
Even so, I just can’t see past the 16-core Threadripper for CG work on price/ performance right now.
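Spelling out the arithmetic behind those percentages (a quick sketch, using the Cinebench figures quoted earlier in the thread):

```python
# Cinebench R15 scores quoted in this thread.
tr_single, i9_single = 165, 190
tr_multi, i9_multi = 3000, 2450

# Deficit of the slower chip, relative to the faster one in each test.
single_gap = (i9_single - tr_single) / i9_single * 100  # TR behind on single-core
multi_gap = (tr_multi - i9_multi) / tr_multi * 100      # i9 behind on multithreaded
print(f"TR single-core deficit: {single_gap:.1f}%")  # 13.2%
print(f"i9 multithreaded deficit: {multi_gap:.1f}%")  # 18.3%
```

So the exact figures are roughly 13% and 18%, depending on how you round and which chip you take as the baseline.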


Thanks, Luke.
I assume when you refer to the CB and Arnold you are referring to the multi-core score, not the single-core score, right?

@Decade: I’m a little confused by your post. It seems you are saying that Intel is no longer a clear winner, but isn’t that what Matthew implied as well?

Anyway, after this thread, I’m not so sure about Intel at this point. Thanks to the insights you guys shared with me, I kept searching and reading more about the AMD and, well, I must admit I’m impressed. I might actually end up going with the TR!


And speaking of CB CPU being pushed, here’s an interesting score.
This one, on the other hand, caught my attention because of the cooling system!


I was just pointing out that the single-core is 14% slower, not 20% slower, unless my maths is off.
But yes, I basically agree with him; I’m just leaning a bit more towards the AMD than he is.
Anyway, putting my money where my mouth is, I ordered a 16-core Threadripper, motherboard & RAM today, to replace my five-year-old 6-core Intel.


Oh, that’s interesting!
Would you mind sharing your experience after you receive it?
Like I said, at this point I’m really considering the TR. The only thing I’m a little confused about is the configuration of the components, as it seems it needs a good cooling system (I think the TR already comes with an “adapter”?), and I also need to figure out what type of RAM to use.
I would appreciate that!



Yeah, googling around, it seems like Threadripper gets you better bang for your buck. I vaguely remember there being some limit on RAM amount or PCIe lanes, but I can’t seem to find any information to back this up; maybe it was an earlier Threadripper?

I guess it’s been out for a while now, so most software vendors have adapted, but I remember Cinema 4D wouldn’t work with Threadripper when it first launched. They fixed it pretty quickly, I think within a month or so, but more niche programs could potentially have compatibility issues.

Maybe something unforeseen like encoding video? Or even the Embree tech that is built into Cinema’s Physical renderer. I don’t know what Arnold uses or takes advantage of.

I would definitely get it as a render node, but I’m not sure I’d use it as my primary machine.

[edit]: Never trust anyone who suggests you get a Xeon; they don’t know what they are talking about. The only benefit of Xeons and ECC memory is in fields that demand extreme accuracy, like weather forecasting, financial forecasting and the like, where a millionth of a decimal can have serious consequences.

[edit2]: clarification 3 posts down.


It depends on your situation. Before Threadripper made its appearance this year, if you needed a smaller number of nodes** but a high Cinebench score per node on a render farm, Xeons were your best option.

** For example, Arnold costs $600/node/year; a farm of many small nodes would eat up a lot of money over time just in subscriptions. Houdini Indie is limited to six nodes total. There’s also the physical space in your office and power consumption to consider, Windows Pro licenses (for remote access), network I/O…
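To see how fast per-node subscriptions add up, here is a minimal sketch; the $600/node/year Arnold figure is from above, but the node counts are hypothetical examples, not a recommendation:

```python
# License cost of a render farm over time, per the $600/node/year
# Arnold figure quoted above. Node counts below are made-up examples.
ARNOLD_PER_NODE_PER_YEAR = 600  # USD

def license_cost(nodes: int, years: int) -> int:
    """Total Arnold subscription cost for a farm of `nodes` over `years`."""
    return nodes * years * ARNOLD_PER_NODE_PER_YEAR

# Many small nodes vs. a few big ones, same hypothetical 3-year window:
print(license_cost(12, 3))  # 21600
print(license_cost(3, 3))   # 5400
```

Same total Cinebench throughput or not, the smaller-node farm pays four times the subscription bill, which is why fewer, beefier nodes made sense in the Xeon era.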

There’s also the reality that if your primary workstation goes down (which happened to me last year due to a power surge that went right through my “high-end” CyberPower UPS), you’re simply out of luck until you can diagnose the problem and get it fixed. With a Xeon machine on hand as a node, I had a full backup ready and didn’t have any significant delays in my (very busy) workload at the time.