Your Advice about my System?

Old 04 April 2013   #16
Originally Posted by ThE_JacO: There are actually many cases where the super-pumped i7 EX will outpace even the 30 MB, 10-core Xeons, because tons of stuff just doesn't thread that well...


Yeah, I suppose the performance shouldn't be much different, especially if you're into unlocking and overclocking, but that's still a gaming thing IMO.
And with ECC memory and dual-socket multiprocessing I don't have to worry much when doing heavy 3D work, especially with software that can utilize the workstation. I'll stick with Xeon.
__________________
"If you have wisdom, spare some now."
 
Old 04 April 2013   #17
Originally Posted by RoundRobbin: Yeah, I suppose the performance shouldn't be much different, especially if you're into unlocking and overclocking, but that's still a gaming thing IMO.
And with ECC memory and dual-socket multiprocessing I don't have to worry much when doing heavy 3D work, especially with software that can utilize the workstation. I'll stick with Xeon.


Well, if overclocking is good for gaming, it should be good for everything else, don't you think? If you can get a 25-30% free speed increase per core, I think that's a pretty substantial benefit for anything you might be doing with your computer.

Those Xeons you mentioned earlier run at 2.0 GHz, which, even at 2.5 GHz on turbo, is pretty lame by all accounts. So let's see: a pair of those would add up to around $850, plus $450 for a dual-socket mobo, plus the extra cost of an E-ATX tower case. That would give you around 30 GHz of aggregate clock (12 cores at 2.5 GHz turbo) for around $1400. Conversely, a 3930K with a decent mobo will cost around $900 and give you 27 GHz when OC'ed to 4.5 GHz (which has been proven a stable clock for a while now).

So the single-socket build gives up a couple of GHz of multithreaded power versus the dual-socket one, but it's roughly twice as fast for single-threaded applications, which still represent the vast majority of computing tasks. And it's about $500 cheaper.
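
To make that back-of-the-envelope math explicit, here's a rough Python sketch using the figures quoted in this post; prices and clocks are thread estimates rather than benchmarks, and "aggregate GHz" is just cores times clock, which ignores IPC and scaling.

Code:
# Back-of-the-envelope sketch of the comparison above.
# Prices and clocks are the rough estimates from this post, not benchmarks.

def summarize(name, sockets, cores_per_cpu, clock_ghz, cost_usd):
    aggregate_ghz = sockets * cores_per_cpu * clock_ghz
    return (name, aggregate_ghz, clock_ghz, cost_usd / aggregate_ghz)

builds = [
    # dual E5-2620: 2 x 6 cores at ~2.5 GHz turbo, ~$850 CPUs + $450 mobo (case not counted)
    summarize("dual E5-2620", 2, 6, 2.5, 850 + 450),
    # i7-3930K OC'ed to 4.5 GHz, ~$900 for CPU + decent mobo
    summarize("OC'ed i7-3930K", 1, 6, 4.5, 900),
]

for name, agg, single, per_ghz in builds:
    print(f"{name}: ~{agg:.0f} GHz aggregate, {single} GHz per core, ~${per_ghz:.0f} per aggregate GHz")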

I work with an OC'ed 3930K now, which replaced a dual 2.3 GHz Xeon workstation (now used as a render slave). I personally think it's much more efficient to have a higher-clocked workstation and add cheap self-built render nodes when you feel the need for more rendering power than to work with a slower-clocked system, whatever number of cores it might have.

Last edited by vlad : 04 April 2013 at 04:14 AM.
 
Old 04 April 2013   #18

Originally Posted by vlad:
Well, if overclocking is good for gaming, it should be good for everything else, don't you think?



The major difference is that the desktop is more susceptible to crashing, while the workstation is specifically built not to crash, especially for simulation and heavy particle counts, with the CPU caching of the Xeons and memory scaling sky-high with registered DIMMs. Although I could see an overclocked desktop running rather well for rendering.
If I had to choose between a desktop and a workstation (and I could afford both), I don't think I would lose any sleep having picked the workstation.

edit: You failed to mention the cost of cooling your souped-up desktop. I'll have to add that $500 now.
__________________
"If you have wisdom, spare some now."

Last edited by RoundRobbin : 04 April 2013 at 05:10 AM.
 
Old 04 April 2013   #19
Originally Posted by RoundRobbin:
- for power and cost, look at the Xeon® E5-2620; it's affordable and can be run in a dual-socket setup.



IMO the E5-2620 isn't a good choice of processor for CG work as a main production computer for most people.

A stock i7-3930K will be faster in both rendering and single-threaded work. Sure, you can get a dual 2620, but you can also overclock a 3930K.

If you're going to go Xeon, go big and get a dual 2690 or dual 2687; otherwise I'd run an overclocked 3930K, unless you have a very specific reason you'd rather have a low-end Xeon.

Xeons work well for render farms if you're buying rackmounts and need a bunch of machines in a small area. Overclocked 3930Ks can sometimes crash during long renders. The flipside is you can buy a lot more of them, but depending on your rendering software, more machines can get expensive from a licensing standpoint.

The cost of cooling an overclocked 3930K is around $100 for a decent heatsink and some fans. The overclocking process can take some time to dial in. Each CPU is a little different, and if you're setting up a lot of overclocked 3930Ks it's immediately obvious which CPUs are the good ones, since their voltage and temperatures are so much lower at the exact same settings. I've found that motherboards can have similar variances.

After I dialed in the overclock settings for our render farm, I chose one of the better machines to be my production computer (4.9 GHz at 1.39 Vcore), and it's among the most rock solid of the bunch. Out of 10 machines, only 2 systems were able to run like that. The other render machines typically range from 4.5-4.7 GHz. I'm still fine-tuning some settings on them. There are a couple of render nodes that lock up or reboot in the middle of a night of rendering, but they'll get ironed out. I could always have just set them all up at 4.4-4.5 GHz and called it a day if I didn't want to spend 2 minutes fine-tuning settings every morning after seeing which ones crashed.


For those of us who aren't loaded with infinite cash, we have to make a compromise and choose what we think is the lesser evil for our particular situation.

Last edited by sentry66 : 04 April 2013 at 06:00 AM.
 
Old 04 April 2013   #20
Originally Posted by sentry66:
For those of us who aren't loaded with infinite cash, we have to make a compromise and choose what we think is the lesser evil for our particular situation.


Good points

Concerning lifetime and care:
I'd imagine an overclocked i7 would need more care: you'll have to make sure it's free of too much dust, monitor its temperature more often, and refill its coolant to keep parts from melting. I'm sure the overclocked owners have a lot to say about that, or not.
I've had an overclocked machine with water pipes inside it, with people eyeing me suspiciously as if I were growing narcotics in my computer. Well, it stopped overclocking after just a few days because the cooling system couldn't sustain it. After that I returned the entire thing and got a stable dual Xeon that would last me a long time without having to worry about a single crash or a file failing to open, and I could max out all threads to 100% when rendering.
__________________
"If you have wisdom, spare some now."
 
Old 04 April 2013   #21
Yeah, for sure overclocked systems are more maintenance if you're setting them up yourself. If you're buying a pre-overclocked system with a warranty, then it's just business as usual, since the manufacturer did all the legwork.

I personally don't rely on water cooling for overclocked systems. IMO a high-quality air cooler is the way to go for CG work: less maintenance, fewer parts to break, and more reliability, since everything isn't depending on the water pump functioning. Just don't toss the machine around, so the large heatsink doesn't damage the motherboard. Honestly, the only reason manufacturers avoid heavy-duty air cooling is the risk of motherboard damage during shipping. That, and they're all capitalizing on the whole liquid-cooling frenzy, and on the myth that it's quieter than air coolers, even though they still have fans to cool a radiator on top of a potentially noisy water pump.

The closed-loop water coolers are meant for gaming systems that run hard for hours at a time, not months. Their pump-monitoring software is often Windows-only and intended for people who are present at their computer, not away while the machine is doing calculations unattended. Closed-loop coolers can also eventually develop pin-hole leaks and drip coolant onto your graphics card, ruining it.

The more extreme open-loop water-cooling systems, which may not use fans at all but do require coolant changes, typically have better cooling capability than closed-loop units, but IMO they're not at all suited for CG production work or render farms.

I think closed-loop coolers are perfectly fine for normal systems, overclocked gaming systems, or servers (whose CPUs don't get as hot even under 100% load). I just think there's too much potential for problems in an overclocked CG production system or rendering computer.


On another side note, I've noticed large air coolers cool the CPU better when the case is lying on its side, with the heatsink sitting on top of the motherboard instead of sticking out sideways from it. I've consistently seen temperatures about 3°C lower than when the case is upright. I suspect the heatsink's weight is more evenly distributed on the CPU.

Last edited by sentry66 : 04 April 2013 at 07:42 AM.
 
Old 04 April 2013   #22
Originally Posted by RoundRobbin: The major difference is that the desktop is more susceptible to crashing, while the workstation is specifically built not to crash.


Sorry, my bullshit-o-meter just started beeping. There is no difference between a desktop computer and a workstation other than the name and how much a manufacturer thinks he can gouge from you.

With an i7 or a Xeon, neither is inherently more stable than the other (outside of registered memory); they're practically the same chip. If you think a Xeon is going to run any more stably than a similar-quality i7 build, then you are simply deluding yourself into thinking you've paid for something of higher quality.

As others have mentioned, it's perfectly normal, safe and stable to run most i7 chips at 4-5 GHz. Seeing as most things outside of final rendering and video encoding are still, on the whole, poorly threaded, single-core speed will usually contribute far more to the speed of a computer than the combined speed of all cores. For modelling, texturing, setting up scenes, physics simulations and even many parts of the final rendering, a faster i7 will trounce the slower 2 GHz Xeon. The only place the Xeon has a hope in hell of putting in a good performance is the final rendering; but frankly, I'd rather rent 1000 Xeons on a render farm for 10 minutes than leave my computer churning away all week.
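
To put a rough number on that, here's a hedged Amdahl's-law style sketch in Python; the 30% parallel fraction is purely an illustrative assumption, not a measurement of any particular DCC application.

Code:
# Amdahl's-law style illustration of why single-core speed dominates for
# poorly threaded work. The 30% parallel fraction is an assumed value.

def effective_speedup(clock_ghz, cores, parallel_fraction, baseline_ghz=2.0):
    per_core = clock_ghz / baseline_ghz                 # single-core speed vs the baseline
    serial_time = (1 - parallel_fraction) / per_core    # serial portion of the job
    parallel_time = parallel_fraction / (per_core * cores)
    return 1.0 / (serial_time + parallel_time)

for name, clock, cores in [("dual 2.0 GHz Xeon, 12 cores", 2.0, 12),
                           ("4.5 GHz i7, 6 cores", 4.5, 6)]:
    s = effective_speedup(clock, cores, parallel_fraction=0.3)
    print(f"{name}: ~{s:.1f}x a 2.0 GHz single core on a 30%-parallel task")

With those assumptions the 4.5 GHz i7 comes out roughly twice as fast overall despite having half the cores; and even on a near-perfectly parallel render the i7 still edges ahead, since 6 x 4.5 GHz beats 12 x 2.0 GHz in aggregate.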

The only reason I'm even replying here is because it utterly pains me every time I go to a studio or a freelancer's apartment and see thousands of pounds' worth of "workstation" sat there, when I know full well they could have had a machine that performs twice as fast in day-to-day tasks for half the price. I swear, if I see one more Xeon workstation with a single CPU and a Quadro 500...
__________________
Matthew O'Neill
www.3dfluff.com
 
Old 04 April 2013   #23
Originally Posted by RoundRobbin: The major difference is that the desktop is more susceptible to crashing, while the workstation is specifically built not to crash, especially for simulation and heavy particle counts, with the CPU caching of the Xeons and memory scaling sky-high with registered DIMMs. Although I could see an overclocked desktop running rather well for rendering.
If I had to choose between a desktop and a workstation (and I could afford both), I don't think I would lose any sleep having picked the workstation.

edit: You failed to mention the cost of cooling your souped-up desktop. I'll have to add that $500 now.


If you need to spend $500 on a heatsink, well, then there's no hope for you.
Now, you didn't mention anything about your "workstation grade" pro video card and how much more stable and powerful it is versus a lowly gamer card...
 
Old 04 April 2013   #24
I'm guessing this is all going to be a pretty useless debate once CUDA and Teslas take over.
__________________
"If you have wisdom, spare some now."
 
Old 04 April 2013   #25
There is somewhat of a case to be made for stability with Xeons vs i7s, but only in regard to overclocking and the Xeon platform using ECC memory. Non-ECC memory does introduce a minor level of possible instability.

As far as the CPUs themselves go, though, the Sandy Bridge-E chips started out as Xeons but didn't make the cut. They were all 8-core dies designed to run at 100% load at a certain Vcore and temperature.

When they're mass-produced, not all chips come out equal. Some leak voltage and will thus pull in more voltage to run at 100% load, which makes them run hot. Intel takes these chips, disables 2 cores and some L3 cache, and then repackages them as i7s or lower-end Xeons, sometimes clocking them higher than they were originally going to be. Some chips are way off the mark and others only just miss it.

Either way, disabling cores and bumping the clock speed up slightly ensures that the chips can maintain stability. The ones that only just missed the Xeon cut are champion overclocking CPUs and will run more stably than the chips that were far from Intel's criteria.

We can't enable the disabled cores, but we can overclock the existing ones. If you could take the highest-end Xeons and do whatever you wanted with them in terms of disabling cores and overclocking, they would perform faster, run cooler, use less voltage, and be more stable than the i7 or lower-end Xeon CPUs configured the same way.

All that said, in the real world an overclocked i7 can be made stable by adjusting settings in the BIOS and adding better cooling. You can reach effectively 100% stability, with the exception of the non-ECC memory.

Last edited by sentry66 : 04 April 2013 at 06:44 PM.
 
Old 04 April 2013   #26
Originally Posted by RoundRobbin: The major difference is that the desktop is more susceptible to crashing, while the workstation is specifically built not to crash, especially for simulation and heavy particle counts, with the CPU caching of the Xeons and memory scaling sky-high with registered DIMMs. Although I could see an overclocked desktop running rather well for rendering.
If I had to choose between a desktop and a workstation (and I could afford both), I don't think I would lose any sleep having picked the workstation.

edit: You failed to mention the cost of cooling your souped-up desktop. I'll have to add that $500 now.

I don't know where you're getting the info from, but the idea that a Xeon is engineered not to crash while an i7 might be less fault-tolerant is a bit preposterous.

In the first place, ECC is a joke for DCC work. ECC reduces the memory errors you -might- actually get a crash from by exactly zero.
Crashes due to memory handling are, in their absolute entirety, a software fault, and no amount of automatic error correction within the RAM will change that by an iota. If a pointer to an invalid object is fetched and used, your app will crash, ECC or not.

ECC is mostly meant to protect you from, wait for it, cosmic rays. Yes, you read that right.

While the beefier cache is nice, the fact is most Xeons at an accessible price have laughable clocks. We're not even talking overclocking here: any Extreme i7 will absolutely smoke a Xeon costing one and a half times the price at any single-threaded or poorly threaded task, and of those, rendering excepted, there are many. Sadly that includes even simulation, particularly so with Maya's very archaic toolset.

If you were to overclock, the cost would be about $80 for an all-in-one, out-of-the-box, hardware-dummy-friendly Corsair liquid-cooling setup. At $500 worth of cooling you're talking overclocking as a hobby, things like Peltier cells and evaporative towers in the circuit: something you do for fun, not for bang for the buck.

If given a choice between a 3.6 GHz i7 with 32 GB of quality RAM and the equivalent money in Xeon + ECC RAM (two tiers from the top and 16 GB if you're lucky), I'd pick the i7 pretty much any day of the year for anything except racked/rendering purposes.

I use both, extensively, and I, meaning no offense, believe you got swept up in some serious hype.
For non-threading-friendly tasks it's not even worth the discussion. With an old architecture, overpriced RAM and a clock that barely matches low-power laptop solutions, the added cache is not worth it; you will be crawling your way to the finish line while a top-tier i7 EX will have done several laps, and practically all of modelling, rigging/animation, sculpting, most of simulation, and quite a bit even of the rendering field falls into this domain.
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles
 
Old 04 April 2013   #27
Yeah, ECC memory just protects against cosmic radiation or stray neutrinos that happen to travel through a memory bank, or against the case where a memory bank goes bad. ECC is able to correct those errors at roughly a 10% memory performance penalty. Meanwhile, normal memory can always be underclocked or have its CAS latency relaxed (raised) a notch as a way to eliminate any errors the memory might have.

ECC memory is absolutely critical for servers, like the ones that run your bank account: machines that need 100% uptime, run for months or even years at a time, and where every single digit of info is critical. It's especially important for servers because every piece of data is first loaded into memory before it's sent out to a client computer. It's not as critical for a render node that's just rendering out a frame, then dumping the file from memory and loading up another file to render.

The OS only takes up so much memory, so the majority of errors from cosmic radiation would likely land in the rendering data. Chances are that if radiation hit, a pixel or two would end up with different values than they should have. Worse would be the render crashing and then restarting. The absolute worst-case (and rarest) scenario is that a critical piece of the OS that's loaded in memory gets hit by the radiation and the system crashes. You're probably just as likely to win the lottery, though.

I've read that if a computer is running all year, its memory banks will routinely get hit about 50 times a year, changing 1s to 0s and vice versa. If memory is constantly being flushed and that info isn't written to disk, those errors aren't permanent.
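
Taking that ~50 hits per year figure at face value (it's a ballpark I've read, not a measured rate for any specific machine), the expected number of flips during a single render job is easy to estimate:

Code:
# Expected bit flips during one render job, assuming roughly 50 upsets per
# machine per year (the ballpark figure mentioned above, not a measured rate).

HITS_PER_YEAR = 50
HOURS_PER_YEAR = 365 * 24

def expected_flips(render_hours):
    return HITS_PER_YEAR * render_hours / HOURS_PER_YEAR

for hours in (1, 10, 100):
    print(f"{hours:>3} h render: ~{expected_flips(hours):.3f} expected flips")

So even a 100-hour render would see less than one expected flip on average, and per the reasoning above, most of those would land in pixel data rather than anything critical.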

The good news, at least, is that ECC memory doesn't carry quite the premium it once had; now it's just a little more expensive than normal memory. It would be nice to have a platform capable of 512 GB of memory, but that's just a pointless luxury for most CG work. It's also nice to have a platform where you can buy 2400 MHz RAM, which can improve overall performance by 2-5% over 1333 or 1600 MHz ECC RAM, and in certain memory-intensive operations be up to 10% faster or more.

Last edited by sentry66 : 04 April 2013 at 06:40 AM.
 
Old 04 April 2013   #28
Originally Posted by ThE_JacO: If given a choice between a 3.6 GHz i7 with 32 GB of quality RAM and the equivalent money in Xeon + ECC RAM (two tiers from the top and 16 GB if you're lucky), I'd pick the i7 pretty much any day of the year for anything except racked/rendering purposes.


Two tiers from the top?!!! We're talking dual-socket, right? Aren't you also renting out your Xeon reject to the electric company for an extra $100 a month?? (had to go there)


I'll take your word for it;
the i7 series seems to have changed the laws of nature.
__________________
"If you have wisdom, spare some now."
 
Old 04 April 2013   #29
Originally Posted by RoundRobbin: the i7 series seems to have changed the laws of nature.


To some extent it has, because of their ability to overclock to around 50% above their stock speed.

Like back when you recommended a dual 6-core 2 GHz E5-2620: that's 24 GHz of total performance with a 2.5 GHz single-threaded (turbo) speed.

Meanwhile, a basic 4.5 GHz 6-core i7 is 27 GHz of total performance and 4.5 GHz of single-threaded speed, for less money.

You lose the ability to use ECC memory (actually you can use it, just not in ECC mode) and very high amounts of RAM, but you save money and gain the ability to run 1866-2400 MHz RAM, plus a chance of running the setup at 4.7-4.9 GHz if your CPU is up to the task.
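
Expressed as ratios, using the same quoted clocks and treating aggregate GHz as a crude stand-in for fully threaded rendering throughput, the trade-off looks like this:

Code:
# Ratio view of the two options above, using the quoted clocks. Aggregate GHz
# is a crude proxy for fully threaded throughput; real scaling varies per app.

xeon_total, xeon_single = 12 * 2.0, 2.5   # dual E5-2620: 12 cores at 2.0 GHz base, 2.5 GHz turbo
i7_total, i7_single = 6 * 4.5, 4.5        # OC'ed i7-3930K: 6 cores at 4.5 GHz

print(f"fully threaded : i7 is {i7_total / xeon_total:.2f}x the dual Xeon")
print(f"single-threaded: i7 is {i7_single / xeon_single:.2f}x the dual Xeon")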
 
Old 04 April 2013   #30
For small offices and freelancers, dual-Xeon mobos and fancy ECC memory are just a WASTE OF MONEY!
The BEST money/performance ratio is the following:

1. i7-2600K
2. i7-3930K
3. FX-8350

But the 3930K is the best buy, as it gives you a powerful single workstation.
The i7-2600K and FX-8350 should be used for the slaves on a render farm.
....
The same goes for the graphics card: a GTX 660 2GB is still the best buy, unless you want to go to a Tesla/Quadro system.
For big studios with expensive RenderMan etc. licensing, it's OK to go with extremely powerful single systems, due to the price of the licences.

Best R
__________________
http://trideval.blogspot.com/
 