Concerning lifetime and care:
I’d imagine an overclocked i7 would need more care: you’ll have to make sure it stays free of dust, monitor its temperature more often, and top up its coolant to keep parts from overheating. I’m sure owners of overclocked machines have plenty to say about that.
I’ve had an overclocked machine with water pipes inside it that made people suspicious, as if I were growing narcotics in my computer. It stopped holding its overclock after just a few days because the cooling system couldn’t sustain it. After that I returned the whole thing and got a stable dual Xeon that would last me a long time: I could max out all threads to 100% when rendering without worrying about a single crash or a file failing to open.
Yeah, for sure overclocked systems are more maintenance if you’re setting them up yourself. If you’re buying a pre-overclocked system with a warranty, then it’s business as usual, since the manufacturer did all the legwork.
I personally don’t rely on water cooling for overclocked systems. IMO a high-quality air cooler is the way to go for CG work: less maintenance, fewer parts to break, and more reliability, since nothing depends on a water pump functioning. Just don’t toss the machine around, so the large heatsink doesn’t damage the motherboard. Honestly, that risk of motherboard damage during shipping is the main reason manufacturers avoid heavy-duty air cooling. That, and they’re all capitalizing on the liquid-cooling frenzy and the myth that it’s quieter than air cooling, even though closed loops still have fans on a radiator, plus a potentially noisy water pump.
The closed-loop water coolers are meant for gaming systems that run hard for hours at a time, not months. Their pump-monitoring software is often Windows-only and intended for people sitting at their computer, not away while the machine crunches numbers unattended. Closed loops can also eventually develop pinhole leaks and drip coolant onto your graphics card, ruining it.
The more extreme open-loop water-cooling systems (the fanless kind that need periodic coolant changes) typically cool better than closed loops, but IMO they’re not at all suited to CG production work or render farms.
I think closed-loop coolers are perfectly fine for normal systems, overclocked gaming systems, or servers (whose CPUs don’t get as hot even under 100% load). I just think there’s too much potential for problems in an overclocked CG production system or rendering computer.
On another side note, I’ve noticed large air coolers cool the CPU better when the case is lying on its side, with the heatsink sitting on top of the motherboard instead of sticking out sideways from it. I’ve seen a consistent 3°C drop compared to the case standing upright. I suspect the heatsink’s weight is more evenly distributed across the CPU.
The major difference is that the desktop is more susceptible to crashing, while the workstation is specifically built not to crash.
Sorry, my bullshit-o-meter just started beeping. There is no difference between a desktop computer and a workstation other than the name and how much the manufacturer thinks they can gouge from you.
Between an i7 and a Xeon, neither is inherently more stable than the other (outside of registered memory); they’re practically the same chip. If you think a Xeon is going to run any more stably than a similar-quality i7 build, you’re simply deluding yourself into believing you’ve paid for something of higher quality.
As others have mentioned, it’s perfectly normal, safe and stable to run most i7 chips at 4-5GHz. Since most things outside final rendering and video encoding are still poorly threaded, single-core speed usually contributes far more to the overall speed of a computer than the combined speed of all cores. For modelling, texturing, scene setup, physics simulations and even many parts of final rendering, a faster i7 will trounce a slower 2GHz Xeon. The only place the Xeon has a hope in hell of putting in a good performance is final rendering; but frankly I’d rather rent 1000 Xeons on a render farm for 10 minutes than leave my computer churning away all week.
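To put a rough number on that, here’s a minimal Amdahl’s-law sketch; the clocks and core counts are illustrative stand-ins for an overclocked 6-core i7 versus a dual 2GHz hex-core Xeon, and perfect scaling within the parallel portion is assumed:

```python
# Amdahl's law: overall speed is limited by the fraction of work that
# can't be spread across cores. Figures below are illustrative only.

def effective_speed(clock_ghz, cores, parallel_fraction):
    """Throughput relative to a 1GHz single core, per Amdahl's law."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

for p in (0.2, 0.5, 0.9):  # how well the workload threads
    i7 = effective_speed(4.5, 6, p)      # 6-core i7 overclocked to 4.5GHz
    xeon = effective_speed(2.0, 12, p)   # dual 6-core Xeon at 2.0GHz
    print(f"{p:.0%} parallel: i7 ~{i7:.1f}x vs dual Xeon ~{xeon:.1f}x")

# Even at 90% parallel the i7 comes out ahead (~18x vs ~11x); only
# near-perfectly threaded jobs like final rendering close the gap.
```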
The only reason I’m even replying here is that it utterly pains me every time I go to a studio or a freelancer’s apartment and see thousands of pounds’ worth of “workstation” sat there, when I know full well they could have had a machine that performs twice as fast in day-to-day tasks for half the price. I swear, if I see one more Xeon workstation with a single CPU and a Quadro 500…
If you need to spend $500 on a heatsink, well, then there’s no hope for you :rolleyes:
Now, you didn’t mention anything about your “workstation grade” pro video card and how much more stable and powerful it is versus a lowly gamer card…
There is somewhat of a case to be made for stability with Xeons vs i7s, but only with regard to overclocking and the Xeon platform’s use of ECC memory. Non-ECC memory does introduce a minor level of potential instability.
As for the CPUs themselves, the Sandy Bridge-E chips started life as Xeons but didn’t make the cut. They were all 8-core dies designed to run at 100% load at a certain Vcore and temperature.
When they mass-produce them, not all chips come out equal. Some leak voltage, and thus pull more voltage to run at 100% load, which makes them run hot. Intel takes these chips, disables 2 cores and some L3 cache, and repackages them as i7s or lower-end Xeons, sometimes clocked higher than originally intended. Some chips are way off the mark; others miss it just barely.
Either way, disabling cores and bumping the clock speed up slightly ensures the chips can maintain stability. The ones that only just missed the Xeon cut are champion overclocking CPUs, and they will run more stably than the chips that landed far outside Intel’s criteria.
We can’t re-enable the disabled cores, but we can overclock the remaining ones. If you could take the highest-end Xeons and do whatever you wanted with them in terms of disabling cores and overclocking, they would perform faster, run cooler, use less voltage, and be more stable than i7s or lower-end Xeons configured the same way.
All that said, in the real world an overclocked i7 can be made stable by adjusting settings in the BIOS and adding better cooling. You can reach 100% stability, with the sole caveat of non-ECC memory.
I don’t know where you’re getting that info, but the idea that a Xeon is engineered not to crash while an i7 is somehow less fault-tolerant is a bit preposterous.
First of all, ECC is a joke for DCC work. ECC reduces the memory errors you -might- actually get a crash from by exactly zero.
Crashes due to memory handling are, in their absolute entirety, a software fault, and no amount of automatic error correction inside the RAM will change that by an iota. If a pointer to an invalid object is fetched and used, your app will crash, ECC or not.
ECC is mostly meant to protect you from, hang in there, cosmic rays. Yes, you read that right.
While the beefier cache is nice, the fact is most Xeons at accessible prices have laughable clocks. We’re not even talking overclocking here: any i7 EX will absolutely smoke a Xeon costing half again as much at any single-threaded or poorly threaded task, and outside rendering there are many of those. Sadly that holds even in simulation, particularly in Maya’s very archaic toolset.
If you were to overclock, the cost would be about $80 for an all-in-one, out-of-the-box, dummy-friendly Corsair liquid-cooling setup. At $500 worth of cooling you’re talking overclocking as a hobby, things like Peltier cells and evaporative towers in the loop: something you do for fun, not for results per buck.
Given a choice between a 3.6GHz i7 with 32GB of quality RAM and the equivalent money in Xeon + ECC RAM (two tiers from the top and 16GB if you’re lucky), I’d pick the i7 pretty much any day of the year for anything except racked/rendering purposes.
I use both, extensively, and I, meaning no offense, believe you got swept up in some serious hype.
For non-threading-friendly tasks it’s not even worth the discussion. With an old architecture, overpriced RAM and a clock that barely matches low-power laptop parts, the added cache isn’t worth it; you’ll be crawling your way to the finish line while a top-tier i7 EX has done several laps. Practically all of modelling, rigging/animation and sculpting, most of simulation, and quite a bit even of rendering falls into this domain.
Yeah, ECC memory just protects against cosmic radiation or stray particles that happen to pass through a memory bank, or the case where a memory bank goes bad. ECC is able to correct those errors at roughly a 10% memory-performance penalty. Meanwhile, normal memory can always be underclocked, or have its CAS latency loosened a notch, to eliminate any errors the memory might have.
ECC memory is absolutely critical for servers, like the ones that run your bank account, that need 100% uptime, run for months or even years at a time, and where every single digit of info on the machine is critical. It matters especially for servers because every piece of data is loaded into memory before it’s sent out to a client computer. It’s not as critical for a render node that just renders out a frame, dumps the file from memory, and loads up another file to render.
The OS only takes up so much memory, so the majority of errors from cosmic radiation would likely land in the render data. Chances are that if radiation hit, a pixel or two would come out with different values than they should have. Worse would be the render crashing and then restarting. The absolute worst-case (and rarest) scenario is a critical piece of the OS loaded in memory getting hit and the system crashing. You’re probably about as likely to win the lottery, though.
I’ve read that if a computer runs all year, its memory banks will routinely get hit about 50 times a year, flipping 1s to 0s and vice versa. If memory is constantly being flushed and the info isn’t written to disk, those errors aren’t permanent.
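For a back-of-envelope sense of how rarely those flips would touch the OS (the RAM sizes here are assumptions for illustration, not measurements):

```python
# If bit flips land uniformly at random across RAM, the share that hits
# OS-resident memory is just the OS's share of total RAM. Sizes assumed.
flips_per_year = 50      # figure quoted above
total_ram_gb = 32.0      # assumed workstation RAM
os_resident_gb = 2.0     # assumed OS footprint; only a sliver is critical

os_hits = flips_per_year * (os_resident_gb / total_ram_gb)
print(f"~{os_hits:.1f} flips/year land in OS memory")  # ~3.1

# Most of those hit pages that are never read back in a harmful way,
# so an actual OS crash from a single flip is rarer still.
```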
The good news, at least, is that ECC memory doesn’t carry quite the premium it once did; now it’s just a little more expensive than normal memory. It’s nice to have a platform capable of 512GB of memory, though that’s a pointless luxury for most CG work. It’s also nice to have a platform where you can buy 2400MHz RAM, which improves overall performance by 2-5% over 1333 or 1600MHz ECC RAM, and in certain memory-intensive operations by 10% or more.
Two tiers from the top?!!! We’re talking dual-socket, right? Aren’t you also paying the electric company an extra $100 a month to run that Xeon reject?? (had to go there)
I’ll take your word for it; the i7 series seems to have changed the laws of nature.
To some extent it has, thanks to their ability to overclock around 50% above stock speed.
Like back when you recommended a dual 6-core 2GHz E5-2620: that’s 24GHz of total performance with a 2.5GHz single-threaded speed, while a basic 4.5GHz 6-core i7 gives 27GHz of total performance and 4.5GHz single-threaded speed, for less money.
You lose the ability to use ECC memory (actually you can use it, just not in ECC mode) and high amounts of RAM, but you save money and gain the ability to run 1866-2400MHz RAM, with a chance of running the setup at 4.7-4.9GHz if your CPU is up to the task.
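As a sanity check on those figures, here’s the arithmetic spelled out; the “aggregate GHz” metric assumes perfect scaling across cores, which only embarrassingly parallel work like final rendering approaches:

```python
# Aggregate throughput as cores x clock -- a crude proxy that only holds
# for workloads that scale across every core.
dual_xeon_aggregate = 2 * 6 * 2.0   # two 6-core E5-2620s at 2.0GHz -> 24.0
i7_aggregate        = 1 * 6 * 4.5   # one 6-core i7 at 4.5GHz       -> 27.0

single_thread_gain = 4.5 / 2.5      # i7 vs the Xeon's 2.5GHz turbo -> 1.8x
print(f"dual Xeon: {dual_xeon_aggregate} GHz aggregate")
print(f"i7:        {i7_aggregate} GHz aggregate, "
      f"{single_thread_gain:.0%} of the Xeon's single-thread speed")
```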
For small offices and freelancers, dual-Xeon mobos and fancy ECC memory are just a WASTE OF MONEY!
The BEST money/performance ratio is the following:
i7-2600K
i7-3930K
FX-8350
But the 3930K is the best buy, as it gives you a powerful single workstation.
The i7-2600K and FX-8350 should be used for the slaves on a render farm.
…
The same goes for graphics cards: the GTX 660 2GB is still the best buy, unless you want to go to a Tesla/Quadro system.
For big studios with expensive RenderMan etc. licensing, it’s OK to go with extreme-power single systems (due to the price of the licenses).
Talking the same money, a top-tier i7 will cost you less than a Xeon a couple of tiers down the Xeon line; that’s the “two tiers” comment.
The electric company part I plain don’t get; sorry you went all the way there, wherever that was, for nothing.
As for the reject comment, I have to assume you think i7s are batches that didn’t make it to Xeon and got demoted. If that’s what you think, you’re about as mistaken as you were in thinking ECC RAM makes any difference to software stability. If anything, the hand-picked batches are the i7 EX.
When CPUs have different amounts of L1 cache they simply don’t come from the same batch, by the way. So even ignoring the current process (where an overabundance of CPUs testing at the top tiers have to be branded below), Xeons would be rejects of Xeons and i7s rejects of i7s.
If it was something else, apologies; I must be particularly dense today, because I’m not getting that part either.
As for the rest, sentry made a decent enough point that I don’t feel I need to reiterate it, other than to say it stands up even WITHOUT overclocking anything.
You started from a fairly silly base, arguing that Xeons and ECC make any difference whatsoever to stability. I have no doubt you meant well and actually believe it to be true, but I’m not sure what you’re getting at now. You were wrong, though, and still are, and it’s got nothing to do with the laws of nature being subverted.
Sandy Bridge-E came out about 6 months before the Sandy Bridge Xeons. Intel’s original plan was to release the Sandy Bridge Xeons immediately after the Sandy Bridge-E i7s hit the market, but due to the lack of competition from AMD and the trouble they had with the original Sandy Bridge chipset, they put it off.
They were supposed to have Ivy Bridge Xeons out early this year, but since they let their server line slide 6 months behind schedule, they’re now going to skip Ivy Bridge Xeons and go straight to Haswell for the Xeon line.
Apple got screwed by this decision with their Mac Pro line and now has to wait for the Haswell chips, since it’s too late to bother with Sandy Bridge Xeon Mac Pros.
Sandy Bridge-E i7s do have 2 disabled cores on their 6-core models: you can see the 2 disabled cores in the die shot, their connections severed by a laser.
I’ve had conversations with actual Intel engineers, and this is something they’ve commonly done as far back as the original Pentium, which was actually a disabled Pentium Pro, because it costs less to manufacture one chip than to maintain tooling for two different chips.
I would put down money that the Xeon 2687W and 2690 are the exact same chip, just clocked differently. The 2687W is rated at 150 watts and runs hotter at 3.1GHz, while the 2690 is 130 watts at 2.9GHz.
From what I understand, the 2690 is actually the technically superior chip; it uses less voltage and runs cooler than the 2687W. The 2687W is clocked slightly higher despite being technically inferior, because Intel decided to label it a workstation-only CPU, where cooling is better than what a smaller rackmount case can offer.
The fact that they’re rated at different wattages just means Intel measured the draw and labeled them accordingly. They both turbo-boost to the same 3.8GHz.
Same with the 3930K, 3960X, and 3970X, except that the 3930K is handicapped slightly with 3MB of disabled L3 cache and they’re all clocked slightly differently. They certainly seem to perform roughly the same when each is clocked to the same GHz, and they draw about the same amount of power, with a slight difference from the 3930K’s smaller L3 cache. Then again, the 3930K runs cooler and uses less power at the same clock speed as the other two.
Do you actually believe that a single CPU can draw $100 a month of electricity? Even running 24/7? Let alone the marginal power-consumption discrepancy between two different models of chip? Your tendency to exaggerate everything only makes you sound silly, if anything…
Seriously, stop. An entire i7 system overclocked to 4.5GHz will draw between 150 and 250 watts, depending on the generation of i7 and whether it’s a 4- or 6-core part. The dual Xeons will be drawing 150 watts for the CPU chips alone, before powering the rest of the machine.
Even if the single i7 sucked down an extra 100 watts, which it won’t, the increased monthly bill at an average electricity rate would be about £7 a month if the system ran at 100% CPU load around the clock. But it won’t; it will likely only be rammed to full a quarter of the time, so your new bill is an extra £1.75 or so a month. And since the i7 won’t actually chew through an extra 100 watts in the real world, the extra monthly cost is, at most, pennies.
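The sums behind those figures, for anyone who wants to check them; the £0.10/kWh tariff is my assumption of an average rate:

```python
# Cost of an extra 100W of draw, at an assumed average tariff.
extra_watts = 100
hours_per_month = 24 * 30
tariff_gbp_per_kwh = 0.10   # assumed average rate

extra_kwh = extra_watts / 1000 * hours_per_month   # 72 kWh/month
full_load = extra_kwh * tariff_gbp_per_kwh         # ~£7.20 at 100% load
quarter = full_load / 4                            # ~£1.80 at 25% duty cycle
print(f"£{full_load:.2f}/month at full load, £{quarter:.2f} at quarter load")
```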
Honestly, you just come off as someone trying to justify their own purchase to themselves. Now stop spreading misinformation; it’s tiresome.
It peaks at 275 watts when both the CPU and GPU are under full load, 130 of which are drawn by the Radeon 7870 graphics card.
That’s it; that’s the absolute limit the machine will draw. Firing up a copy of Maya won’t make it magically guzzle any more electricity than what you see in the video.
What’s also not being considered is how much power the motherboard draws to keep the memory controller, chipset, and voltage regulators fed while under load. Who cares what the CPU draws by itself? It’s the motherboard plus CPU that matters.
This is a ballpark estimate, since the two systems had different hard drives and graphics cards, which draw different amounts of power while idle during CPU stress tests.
If both systems were run at full load for a month, the 3930K system would cost about $10 a month more, according to this electricity cost estimator: http://michaelbluejay.com/electricity/cost.html
…however, consider that the dual-Xeon system will likely cost you $4000 more up front than the i7 machine.
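Putting those two numbers together gives the break-even point; a quick sketch using the figures above:

```python
# How long the dual Xeon's lower power bill would take to repay its
# up-front premium, using the figures quoted above.
xeon_premium_usd = 4000          # extra purchase price of the dual-Xeon box
i7_extra_per_month_usd = 10      # i7's extra electricity at full load

months = xeon_premium_usd / i7_extra_per_month_usd
print(f"break-even after {months:.0f} months (~{months / 12:.0f} years)")
# ~400 months of 24/7 full-load rendering before the Xeon pays for itself.
```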