Best for Maya: AMD or Intel CPU?


#1

I"m looking into building a newer system to get as much speed in Maya rendering as I can buy, but I’d like some expert opinions of which CPU may have the edge for Maya 2014-2015. There’s much talk about graphics cards, but is there any major advantage in AMD or Intel cpu’s?

(I do run some other programs (Terragen, Photoshop, etc.)…but it's the rendering speed for 3D animations that I'm really trying to improve. I'm currently using an Intel Core i7 920 on one machine and an older, much slower AMD CPU on the other, so that's the one I'll rebuild, but any suggestions or specs will be appreciated.)


#2

You can use Cinebench to compare CPUs.
It offers single-threaded and highly multithreaded tests that will give you a good idea of what to expect from a CPU regarding simulation and rendering.
The Benchmark: http://www.maxon.net/en/products/cinebench.html
Results for comparison: http://www.cbscores.com/
Make sure to enable the advanced tests and turn on the single-threaded test; it is off by default.

Cheers
Björn


#3

Ever since the i7 920 there's really only been one high-end option for 3D: Intel. Unfortunately AMD just haven't been keeping up. Their 4-core CPUs are much slower than Intel's, and their 8-core CPU is practically fraudulent, as it doesn't contain eight full cores, just four and a few spare scraps for a speed boost.


#4

If you're going to bad-mouth AMD you should at least get it right. Every set of two processor cores shares one floating-point unit. So the eight-core processor models have eight fully functional processor cores for everything except floating-point calculations (which use the four shared floating-point units).

It's not an exact cutoff, but for machines roughly under $1,000 I'd recommend AMD, like the FX-8350 processor. The rendering bang for the buck is pretty good in that price range. For builds roughly over $1,000 it makes more sense to go with Intel, like the i7-4770K or i7-4790K processor.


#5

Since the original Pentium, all x86 CPUs have come with integrated FPUs. AMD changed the long-standing definition of a CPU core by separating the two again. If they had made communicating this change to customers a focus of their marketing, all would be fine, but they went the other way: they simply redefined the term "core" and use the higher number of "cores" for their marketing.


#6

I took Bjorn's suggestion and studied the benchmarks on the cbscores website, and it's very eye-opening. I can't claim to be anywhere near as knowledgeable as all of you who answered here, so my first inclination was to go to the site, sort by rendering speed, and see how things looked.

I was startled, to say the least, at the way the list sorted out. The top half was totally Intel. The AMD 8350 (presumably the FX 8350), overclocked, and with 8 cores and 8 threads, first appeared about halfway down the list, but was still beaten by the i7 4770K not overclocked and with only 4 cores and 8 threads, as well as any number of other i7s with 4 or more cores/threads. The other numbers, single-CPU score, etc., seem to vary greatly, so I tried to treat render speed as the main marker, but I did not see a single 4770K that was lower in the list than the first appearance of AMD.

To my eye, the difference in the render speed scores is not immense, especially with other variations in play, and from a practical standpoint may be negligible. Still, based on the above site and presuming the results are truthful and accurate, I could conclude that Intel has a real edge in processor tech and that, if it were between the FX 8350 and the i7 4770K, the i7 would be the choice, overclocked or not.

However…when you add in bang for the buck, a quick check of eBay prices (lowest overall pricing) for the two CPUs shows that there can be as much as a $100+ difference between the FX 8350 and the 4770K.
Lowest today:
i7 4770K ~ $260.00 USD
FX 8350 ~ $165.00 USD

That's significant to most people, I think, since the difference could well cover most of a motherboard or a huge chunk of RAM. I'd be interested in hearing expert opinions on whether I've made any mistakes in my suppositions, and especially on whether the speed difference between these two CPUs is, in fact, negligible by most accounts.
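For what it's worth, here's the rough bang-for-buck arithmetic I'm doing, just as a sketch: the render scores below are made-up placeholders to show the math, not numbers copied from cbscores; the prices are the approximate eBay lows above.

```python
# Rough bang-for-buck sketch. The "score" values are placeholders for
# whatever multi-core rendering score you read off cbscores; the prices
# are the approximate eBay lows quoted above.
cpus = {
    "FX 8350":  {"price": 165.0, "score": 640.0},   # placeholder score
    "i7 4770K": {"price": 260.0, "score": 780.0},   # placeholder score
}

for name, c in cpus.items():
    per_dollar = c["score"] / c["price"]
    print(f"{name}: {c['score']:.0f} points for ${c['price']:.0f} "
          f"-> {per_dollar:.2f} points per dollar")
```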
Thanks to all…


#7

It comes down to the budget. What are you looking to spend, and do you have anything on hand already, like a monitor or a Windows license (assuming you're going to use Windows)?


#8

You are right; these days I'm always crunching the numbers to see how frugal I can be, spending the most on what's going to do the most. Fortunately, I have practically all the peripheral parts and gear I'll need, so I'm focusing on the CPU & mobo and can put it in an existing case, but RAM is on the table, and after that I'm pretty sure I'll be upgrading my existing graphics card. So far, it looks like for around $400 a person might (might…) swing a fairly fast-rendering selection of CPU/mobo/RAM. But everything does come down to money in the end.

(Go Dallas…I’m just down the road.)


#9

You are looking at numbers comparing an overclocked 8350 with a non-OCed (yet unlocked) 4770K. That doesn't make a lot of sense.
If you look at both OCed, or both non-OCed, the difference is more considerable.

Non-OCed, the more optimistic results put an 8350 below an i5 2500K, which is a $100 i5 from two generations ago (and not OCed when it could be, at that).

Even on a budget I would have serious trouble justifying an AMD CPU these days; it's just something AMD has hardly cared about in the last couple of years, outside of milking what little they could out of mobile OEM tray sales.

If you are looking at saving 50-100 bucks per box for rendering work you are treading dangerous ground, of the save-a-penny-to-spend-a-pound kind.
With OCing involved, the 8350 would barely compare to the lower end of the i5s from the past generation, not to the chip you picked for the comparison, which was the top-of-the-line Haswell until only a few months ago.


#10

Thanks, Raffaele…

I know of you and have seen some of what you've done (via IMDb and elsewhere), so for my part I'll consider your opinion as one that settles the matter. I guess I needed something to push me one way or the other, and you've definitely got practical experience in areas I'm also looking toward (especially film work). So, I'd rather spend the pound now and have the peace of mind…I'll go with Intel.
Thanks very much.
JT


#11

You're welcome, but to be honest I don't think Mash's or Olson's or Bjorn's contributions to this forum are any less important or well informed than mine.
When it comes to AMD CPUs, in fact, I would listen to Olson before I'd listen to myself; he has more hands-on experience with them than any other regular here.
BTW I’m not prone to false modesty, nor do I have a problem speaking tersely; it’s a rather simple truth that a few film credits don’t make me more authoritative about bang for buck in CPUs than other HW nuts who participate just as much as I do.

I wasn't trying to settle the thing at all, in fact, just pointing out that you picked two reference points that weren't quite appropriate, given that you didn't consider the clocks and OC potential. That doesn't suddenly invalidate every other consideration, yours or other people's :slight_smile:


#12

Thanks again…and you're right and polite to point out that others' opinions do have merit. I knew as I was writing to you that film credits don't necessarily mean you know all about CPUs, but an industry and its requirements can push the technology, or at least the best tends to become popular among those who know. Things change so quickly these days that it's hard to keep up when I'm not building a new system every six months (the good old days), so others' opinions matter greatly. I'm impressed that every one of them here has been worthwhile and helpful.

I was an early AMD user (40 MHz…a screamer at the time, dust now), and have an AMD system on my left and an Intel on my right, both rendering…10 days now, my life slipping by. So I'm not pro or con as far as they go; I only need to squeeze every second of rendering speed I can out of every dollar. It seems that Intel has the edge in CPUs, though if I've read correctly AMD may lead as far as graphics cards/GPUs go, though even that gets into CUDA or OpenCL (GL?). It gets confusing, but at least I don't think I'll make a bad decision after all the opinions are weighed.
-JT


#13

AMD had plenty of glorious moments, from the 40 MHz 386s you mentioned, where they offered unprecedented bang for buck (while Intel was trying the whole SX vs DX 486 thing), to absolutely obliterating Intel in the Thunderbird and Palomino days, to the point that the only CPUs worth buying back then that weren't AMD were the OC-friendly Celerons.

They dropped the ball seriously during the multicore race when it comes to performance, but they still had bang for buck going for them for a while. These days, with Intel offering Haswell CPUs under $200, and Devil's Canyon coming in at $300-350, everything except the bottom end of performers seems to be Intel's playground.

The GPU thing is a different matter altogether, just out of its infancy but not yet done with puberty.

Currently, for VFX and scientific work, nVIDIA is unarguably ahead (for games and home computing with hashing-heavy apps, not so much; Radeons will crack passwords and mine bitcoins faster), but we'll see if that holds when the next generation, with CPU-like hosts on board the video cards, changes things.
AMD seemed to have a better horse in that race until a year ago, but right now nVIDIA has a mature system that pre-emptively supports it virtualized with CUDA 6, and nVIDIA's Maxwell, while not mind-blowing, is due soon and looks very concrete, while AMD is still living off rumors and alpha test boards. It's probably a couple of years before that becomes clear, but for now, again in a VFX context, the overwhelming dominance of CUDA over OpenCL has nVIDIA in the lead for the immediate future.

I can't say I'm a fan of Intel. Not a hater either, but I'm sort of lukewarm about them these days. I used to prefer AMD as a company, I guess, but they let me down repeatedly for too long compared to Intel and nVIDIA (they only let me down three times out of four :stuck_out_tongue: ), so I'm fairly unbiased.
I hope AMD seriously steps it up; they've always been an important disruptor, and it's glaringly obvious Intel hasn't felt the right amount of pressure in years. But right now, for my financial means and needs, I won't be buying any of their products in the immediate future.


#14

Single-core speed can have a lot of influence on how fast you can interact with your application. Rendering is usually well multithreaded, but almost anything else is less so. Effectively, this can determine how long you will have to sit in front of your computer to finish a task.
For personal use I opt for higher single-core speed, since render times don't concern me as much as interactivity and simulation speed.
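A rough way to picture it (just a sketch with made-up fractions, core counts, and a hypothetical 30% clock advantage, not measured numbers): only the parallel part of a task gets faster with more cores, while a faster single core speeds up everything.

```python
# Amdahl's-law style sketch: only the parallel fraction of a job scales
# with core count, while higher single-core speed shortens the whole job.
# The fractions, core counts and 1.3x clock advantage are made up.

def task_minutes(base, parallel_fraction, cores, single_core_speedup=1.0):
    serial = base * (1.0 - parallel_fraction)
    parallel = base * parallel_fraction / cores
    return (serial + parallel) / single_core_speedup

for name, frac in [("rendering (~95% parallel)", 0.95),
                   ("viewport/sim work (~30% parallel)", 0.30)]:
    print(name,
          f"4 cores: {task_minutes(60, frac, 4):.1f} min,",
          f"8 cores: {task_minutes(60, frac, 8):.1f} min,",
          f"4 faster cores: {task_minutes(60, frac, 4, 1.3):.1f} min")
```

With numbers like these, doubling the core count helps the render a lot but barely touches the interactive work, whereas the faster cores shorten both.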
I was a big AMD fan back in the Athlon days, but things move on and you have to take a step back from what marketing departments want you to think and double-check the numbers and facts. I try to be as manufacturer-agnostic as possible, and currently Intel has the parts that suit my needs better. Maybe in a year or two the roles will be reversed; I'll happily buy AMD then.


#15

For what we do, the FPU is the absolute heart of a CPU core; without it, the extra 4 cores do an incredibly bad job with practically everything you might throw at them. I'm not digging on AMD because I don't like them; I'm digging on them because they've screwed up. I haven't bought a CPU that didn't include an FPU since my Amiga '030 card 20 years ago.

Even at the budget end it's hard to suggest AMD; you can grab an i5 chip that goes toe to toe with the equivalent AMD and kills it rather harshly on single-threaded tasks.

One thing I would like to say to the OP: when comparing system speeds and prices, do not make the mistake of comparing only the price and performance of the CPU chip. You must consider the entire system price. If you look at just the CPU chip alone, then you might say "hey, the Intel gives me 25% more speed but costs 50% extra!" because you're comparing a $100 CPU against a $150 CPU. But if you look at the system as a whole, $1000 vs $1050, then it's 25% more speed for 5% extra cost.
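To put the same hypothetical numbers into a quick sketch (these are the illustrative figures above, not real quotes):

```python
# Quick sketch of the point above, using the same hypothetical numbers:
# a $100 CPU vs a $150 CPU that renders 25% faster, in a build where the
# rest of the parts cost $900 either way.
cheap_cpu, fast_cpu, rest = 100.0, 150.0, 900.0
speed_gain = 0.25

cpu_cost_gain = fast_cpu / cheap_cpu - 1.0                       # 50% more, CPU vs CPU
system_cost_gain = (rest + fast_cpu) / (rest + cheap_cpu) - 1.0  # ~5% more, whole build

print(f"CPU-only view:    {speed_gain:.0%} more speed for {cpu_cost_gain:.0%} more cost")
print(f"Whole-build view: {speed_gain:.0%} more speed for {system_cost_gain:.0%} more cost")
```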


#16

Like I said before, it comes down to budget. For roughly 80% of the rendering performance there's a considerable difference in platform cost (CPU and motherboard). For comparison, here's a lower-end ATX-size Asus board with each; one combination is $260 and the other is $450.

FX-8350
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131872
http://www.newegg.com/Product/Product.aspx?Item=N82E16819113284

i7-4770K
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131985
http://www.newegg.com/Product/Product.aspx?Item=N82E16819116901

I'm not saying AMD is better; I'm saying AMD is a good option depending on the budget. Obviously the current Intel offerings are faster, but that's a moot point if the budget for the entire build is $700. Since the original poster hasn't shared budget information, they can figure it out.


#17

I don't see that as a fair comparison at all.

For rendering speed, the Intel equivalent is an i5 4670 (3.5 GHz stock speed), which retails for $219 instead of $339 for the i7. But even then it's not a particularly fair comparison, because the i5 is 60% (!!!) faster on single-threaded work, which is a pretty significant difference.

The same applies to the motherboard; it's quite easy to match the AMD-socket motherboard's features with something at the same price. This brings the difference down to a measly $40.

AMD systems will be cheaper, but for the price difference and speed loss it just doesn't make any sort of economic sense for a 3D user.


#18

First, to answer an earlier question, I haven't had a specific budget in mind, because rendering speed was what I wanted to look at first, then weigh my options. The CPU and motherboard seemed a logical place to begin, which is why I was looking at them to determine a) what others in the know are using and/or recommend, and b) knowing it's not going to be the latest Xeon released yesterday, what's the best a working guy might be happy with?

Imashination made an excellent point I hadn't even considered when he suggested that the cost difference between two CPUs be weighed as its fraction of the total end cost of all parts. Looked at that way, the speed gained for essentially 5% more expense is huge. I'm definitely going to end up paying for memory, a cooler, and a graphics card…and who knows what else…so it's a very practical way to look at this.

As of today, for the money, I'm leaning toward the Intel i7 4770K. But I have some what-if considerations about a motherboard. The Z87s have now given way to Z97s for about the same cost, but is there a motherboard that might allow me to upgrade the processor in a couple of years (e.g., one that might accept a 6-core, or an 8-core if Intel makes a Socket 1150 version)? Or is it impractical to think that way, since in 2-3 years I'd be better off doing this same kind of search all over again?
Thanks,
JT


#19

In two or three years Intel will have a completely new platform with DDR4 memory and a new processor socket. Get what you need for now and then do it all over again in a few years.


#20

Good advice at any time; otherwise you will be postponing the purchase indefinitely.
Let me add that you should not plan too much for possible additions. Plan for more memory; that's about it. Stuff is getting cheaper, more powerful, smaller, and less power-hungry all the time.