Thanks ThE_JacO! I get the point of the rule, I just misunderstood how it worked! Also, since everyone is advising me that way, I’m definitely buying a desktop.
The usual practice is to install the operating system and your major applications on the SSD, and everything else on the HDD. But I suggest being careful when you set up your cache folders for things like RealFlow, so the SSD doesn’t get written to constantly.
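If it helps, here’s a minimal sketch of one way to do that, assuming your sim app simply writes its caches into a folder inside the project: move that folder onto the HDD and leave a symlink in its place, so the app keeps writing to the same path. The paths below are made-up examples, not RealFlow defaults.

[code]
# Minimal sketch: keep simulation caches off the SSD by relocating the cache
# folder to the HDD and leaving a symlink behind. Paths are hypothetical.
import shutil
from pathlib import Path

ssd_cache = Path("C:/Projects/MyShot/cache")    # where the app writes by default (SSD)
hdd_cache = Path("D:/SimCaches/MyShot/cache")   # bulk storage on the spinning disk

hdd_cache.parent.mkdir(parents=True, exist_ok=True)

if ssd_cache.exists() and not ssd_cache.is_symlink() and not hdd_cache.exists():
    shutil.move(str(ssd_cache), str(hdd_cache))  # relocate existing cache data off the SSD
else:
    hdd_cache.mkdir(parents=True, exist_ok=True)

if not ssd_cache.exists():
    # on Windows this needs admin rights or Developer Mode enabled
    ssd_cache.symlink_to(hdd_cache, target_is_directory=True)
[/code]

The app never knows the difference; the heavy cache writes just land on the spinning disk instead of the SSD.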
the new Mac Pro and other Ivy Bridge Xeons will be out soon if you can wait. They will be beasts for rendering with one 12-core CPU. But maybe that’s overkill for your needs
And I have two of those Dell U2713HM screens. really nice
I don’t know why you keep bringing up the new Ivy-E Xeons. Possibly pre-emptive Apple love?
They are 6-core (12 is the HT virtual core count) for the decently clocked ones, and the 2643, the one hexacore with a semi-decent frequency, is $1,550 when you buy tray.
They might be beasts for preview-style rendering, but only if you need cycles per die; cycles per buck they are horrendous, and close to the absolute worst possible CPU you could slap into a case.
Given OP’s budget of 3 to 3.5k, I’d be more than hesitant to recommend spending more than half of it on a CPU more than three times as expensive as a top-of-the-line i7 for maybe a 20-30% performance squeeze, if that.
On top of that, if you want a single CPU and overclocking is an option, they are simply not that good for rendering, and they get absolutely smoked at single-threaded work by even a lowly i5 K clocked respectably.
(unless the 2643 will be unlocked, in which case I’ll retract that last paragraph, but I haven’t found any indication either way).
Unless you mean the 2695, which is an actual 12 physical cores, but it has a laughable base frequency and a 2.4k tray cost (the street price is practically OP’s entire budget floor). They are totally NOT meant to be good rendering CPUs; those are web-farm/computational-centre CPUs for people with no licence bounds and with footprint and heat priorities.
They would be the only thing worse than the above-mentioned 2643 in bang for buck for a workstation, so I sincerely hope you weren’t suggesting that.
“It’s looking like 3D modelers are going to seriously benefit from the potential that Ivy Bridge-EP offers to Apple’s Mac Pro, even in a single-socket configuration.”
I would at least wait to see the reviews. I’ll be writing one of the new Mac Pro and comparing it to the dual socket versions of the current HP Z820 and Dell T5600 that I reviewed for Ars Technica. But I’m not saying “buy a Mac”, I’m saying “wait to see what the CPU landscape looks like once these chips hit the market.” But Apple will be the first out the door with these machines, likely within the next month since OS X Mavericks just went golden master and these chips were slated for mass production in September.
I’m not saying the CPUs won’t be good, they will be; there are quite a few very good ideas in them, actually.
I’m saying the 12-core ones are flat-out not meant for a workstation in the first place, and that at their street price they wouldn’t be even remotely interesting anyway. The only decently clocked one, at 3.5GHz, is still unlikely, no matter what else is thrown in there, to beat an Ivy or Haswell clocked at 4.7 in single-threaded work, and for rendering, for the price of that one CPU you can literally build two full render donkeys.
They are simply not worth waiting for when your budget is three grand or less; they’d be a waste of money.
The CPUs themselves are pretty decent; their bang per buck simply isn’t aimed at this thread, it lies elsewhere.
[QUOTE=cgbeige]from Tom’s Hardware:
“It’s looking like 3D modelers are going to seriously benefit from the potential that Ivy Bridge-EP offers to Apple’s Mac Pro, even in a single-socket configuration.”
Oh yeah, TH, because they surely have a clue about bang for buck while rolling in sponsor and bribe money, and that’s right below a rendering test.
You want it in a render client, you really don’t want it in a workstation that needs decent interface/viewport speed.
ECC is most important for 24/7 machines; a fast SSD can be used in any kind of machine these days.
I was talking about PCI-based SSDs, since SATA3 is no longer adequate for the speeds that SSDs are reaching (they’re pushing past its 6Gb/s ceiling), but this isn’t exclusive to Xeons either, now that I think about it. It’s just built into the new Mac Pro and other Macs. Anyway, I’ll stop there. I just think the whole pro workstation scene is about to change, so it would be good to see what that landscape looks like in a month.
But ECC isn’t just important for servers. I’ve already had to send back a RAM stick for my gaming machine/render helper, and RAM problems can be hard to diagnose since your system just behaves erratically. ECC at least maps out that defective area of the RAM and lets you keep using it without interrupting your work.
You can get PCIe-based SSDs. Yes, ECC can be helpful, but from a statistical point of view it is most useful on machines that are working 100% of the time. Yes, you can lose work due to RAM errors, but ECC only helps with small errors like those from cosmic rays; defective memory sticks are often beyond ECC’s ability to correct.
Cheers
Björn
They aren’t, because they are targeted at computational centres.
They feature a squillion low-temperature, low-interference, low-clock cores.
24 virtual cores at 2.2 or 2.4GHz for a 2.5 grand tray price is horrible bang for buck for a workstation.
Core parallelism is far from scaling linearly, and in an interactive scenario, which is the focus of a workstation, there is far too much poorly threadable, if not outright single-threaded, bottlenecking for them to be good.
Animation, rigging, bakes and a lot of I/O-bound operations on caches are all strictly single-threaded, and many more CPU-bound operations scale horribly past the three or four thread range.
Even for rendering they might prove bad enough, depending on the engine and on how much per-thread memory duplication is necessary; in some cases, even with a generous cache, enough duplication will make you cache-starve and thrash so frequently that they perform very, very poorly. And lastly, if your engine is licensed per core rather than per node, you are paying through the nose just to feed them.
The 3.5GHz hexacore CPUs are the ones intended for workstation use.
While the 24.5 to 28 theoretical GHz summed across the cores might seem attractive, it will almost always (close to 100% of the time in a workstation scenario) end up vastly inferior to the 21 theoretical GHz spread across only six. Only for massive thread pools and many-VM scenarios, with low heat per VM and high yield per watt, do the 12-core CPUs have some advantage.
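To put rough numbers on that (illustrative clocks and an assumed 80% parallel fraction, not benchmarks), an Amdahl-style back-of-envelope looks like this:

[code]
def effective_speed(cores, clock_ghz, parallel_fraction):
    """Amdahl-style effective throughput, in 'single-core GHz equivalents'."""
    serial = 1.0 - parallel_fraction
    speedup = 1.0 / (serial + parallel_fraction / cores)
    return clock_ghz * speedup

hexacore = effective_speed(cores=6,  clock_ghz=3.5, parallel_fraction=0.8)  # ~10.5
twelve   = effective_speed(cores=12, clock_ghz=2.4, parallel_fraction=0.8)  # ~9.0

print(f"6 cores  @ 3.5GHz: {hexacore:.1f} effective GHz")
print(f"12 cores @ 2.4GHz: {twelve:.1f} effective GHz")
[/code]

Once the serial chunk of the workload dominates, the fatter clock on fewer cores comes out ahead, which is exactly the interactive workstation case.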
As for PCI-E SSDs, that’s far from a Xeon exclusive; the bonus with the more recent Ivy Xeons is when you have ridiculous amounts of parallel SSD storage virtualized into multiple pools, which is again a farming scenario, not a workstation scenario (where for years now you have had PCI-E storage, à la FX I/O, available).
ECC is 100% pointless here; its error prevention is purely for mission-critical scenarios, and the incidence of ECC recovering or preventing anything is several decimal places away from significant, unless you work outside the ionosphere during a solar storm.
Edit: I did sort of forget to include the E5-2697 v2, which is an actual 12-core, respectably clocked Ivy. But with a tray price in excess of 2.6k, a street price likely to be well over four grand before GST, and a locked multiplier, it’s unlikely anyone will care much, unless your productivity is so valuable to you that you’re OK forking out 10k, 80% of it on CPUs, for a dual-CPU workstation to see faster previews.
If you meant those, then yeah, they are nice, but priced ridiculously and most likely targeted at crossroads nodes where you need as much short-proximity oomph as possible with no regard for component cost.
Edit 2: Apparently the first 2697s on the street are retailing for 5 to 5.8k. I see now that 4k was very optimistic.
I think you misunderstand what ECC is for and how it works.
ECC doesn’t map out defective areas of memory; it adds parity and checksum bits, and procedures for the case where a bit gets inadvertently flipped by external causes.
That’s radiation (cosmic rays, local radioactivity, enormous local interference etc.).
None of it applies to your workstation, except cosmic rays, which, to hit a cell squarely and flip it from a 0 to a 1 exactly while it’s 0, just after it’s been written and right before it’s read, need to roll a bazillion-faced die and land exactly on a bazillion.
Stability is unaffected; ECC is accident prevention for mission-critical deployments.
When Maya or your rendering engine crashes for a memory related issue, that’s usually a bad piece of hardware, a glitch in the OS, or plain bad programming/programming mistakes.
ECC can’t magically figure out why an entry in a long binary number makes an app crash; it only ensures it doesn’t get changed by external factors. External factors that, under your desk, even with a 24/7 machine, occur about once every forty years.
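For what it’s worth, here’s a toy sketch of the idea behind ECC: redundant check bits let you detect and flip back a single upset bit. Real ECC DIMMs do SECDED over 64-bit words in hardware, not this little Hamming(7,4) code, so treat it purely as an illustration of the concept.

[code]
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parities, locate a single flipped bit, and repair it in place."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # checks positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3  # syndrome gives the 1-based bit position
    if error_pos:
        c[error_pos - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a single "cosmic ray" bit flip
print(hamming74_correct(word) == hamming74_encode([1, 0, 1, 1]))  # True
[/code]

Note what it does: it fixes one flipped bit. It doesn’t diagnose or remap a dying module, and it can’t save you from bad code.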
I don’t really care much about ECC in a workstation. IMO it matters more in a server that first caches files into memory before sending them across a network as a client requests them. They’ll then sit loaded into memory forever until they go stale when other requests eventually bump that data out of memory.
The xeon platform at least offers the capability to run a lot more memory. Compositors would love to run 256 or 512 gigs of ram, especially with 4k footage. Having more cores in an app like NUKE or AE will let you render more frames at the same time and eat up all that memory.
3D render nodes can run multiple simultaneous render jobs with all those cores if you have enough memory. That’ll help eliminate diminishing returns with so many cores instead of having all cores on a single render job.
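A rough back-of-envelope of that point (the 85% parallel fraction and core count are assumptions for illustration, not measurements):

[code]
def amdahl_speedup(cores, parallel_fraction=0.85):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

total_cores = 24  # e.g. a dual hexacore box counting HT threads (illustrative)

one_big_job     = amdahl_speedup(total_cores)           # all cores on one frame
four_small_jobs = 4 * amdahl_speedup(total_cores // 4)  # 4 frames at once, 6 cores each

print(f"1 job  x 24 cores: {one_big_job:.1f}x aggregate throughput")      # ~5.4x
print(f"4 jobs x  6 cores: {four_small_jobs:.1f}x aggregate throughput")  # ~13.7x
[/code]

The split wins on aggregate throughput as long as each job’s scene still fits in its share of the RAM, which is where the big memory ceiling pays off.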
How much memory do those new Macs max out at? 4 RAM slots, right? Maybe they’ll support expensive 16 gig modules for a grand total of 64 gigs? That’s not an improvement over the socket 2011 i7 platform from a year and a half ago, and it’s a step backwards compared to modern xeon platforms that have up to 32 RAM slots.
It’s a shame the individual xeon core speeds, even with turbo mode are so low, but to me, big ram is one of the primary reasons for wanting a xeon platform as a workstation - aside from dual sockets.
ya, I would think that they max out at 64GB. I personally never need more than 24GB, even with Nuke, Photoshop and Maya open, but I don’t comp 4K footage. I monitor my memory usage quite closely, so I know I’m not paging more than I need to, even with a 16GB machine. I would like to see actual resource usage for those compositors. After Effects is absolute shit for memory, though; it seems to eat as much as you feed it and doesn’t flush its RAM properly. Nuke can take a lot, but it’s not broken like AE is.
I don’t know of many places rigging comp workstations with half a tera, not even for 4k stereo.
Normally, if you’re willing to shell out the ridiculous amount of money that would set you back (because you need the densest modules), you’re much better off having 64 or 128GB of RAM and instead investing in PCI-E storage for tier-0 caching.
There are options that comfortably page back and forth to RAM at cache-ahead speed for 4k playback, so the RAM you actually NEED is just enough for the graph being rendered and for the I/O ops pool to stay lively, which is far, far less than 512, or 256, or in fact even 128.
In short, no, it’s not an attractive option in actual practice. Most dual-xeon mobos supporting v2 come with 16 slots. To deck out 512GB in one you have to buy 32GB reg ECC modules, and you’d better buy 1866, which sets you back some twelve grand in memory alone. You’d be much better off spending a couple of grand on memory and the rest on a good I/O card. MUCH better.
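Rough arithmetic on that, with assumed 2013-ish street prices purely to show the shape of it:

[code]
slots = 16
price_32gb_1866_reg_ecc = 750   # USD per module, assumed street price
price_16gb_1600_reg_ecc = 160   # USD per module, assumed street price

full_512   = slots * price_32gb_1866_reg_ecc  # 16 x 32GB = 512GB
modest_128 = 8 * price_16gb_1600_reg_ecc      # 8 x 16GB  = 128GB

print(f"512GB @ 32GB/1866: ${full_512:,}")    # $12,000
print(f"128GB @ 16GB/1600: ${modest_128:,}")  # $1,280, leaving budget for a PCI-E I/O card
[/code]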
128GB is more or less where the bang for buck curve (for an expensive configuration) has already plateaued. Past that you are in ridiculous territory and again squarely in crossroads serving.
We’ve been comping 4k stereo at 60fps for the past 6 months, and 64 gigs of RAM is barely enough memory to cache 5-10 seconds’ worth of footage, depending on the complexity of the comp. I’ve had one extreme case, an 8k x 8k single-image comp with something like 90 elements, that required 60 gigs of memory to comp that one frame at full resolution. Granted, there are ways to prerender things out so it doesn’t use so much memory, but that takes extra time and is a drag. No doubt 256 gigs of RAM is a lot of memory, but I don’t think it’s all that extreme if it helps you work better.
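Quick sanity check on those numbers, assuming full-float RGBA at DCI 4K (half-float roughly doubles the cacheable length):

[code]
width, height = 4096, 2160                 # DCI 4K (assumed)
channels, bytes_per_channel = 4, 4         # RGBA, 32-bit float (assumed)
eyes, fps = 2, 60

frame_bytes = width * height * channels * bytes_per_channel * eyes
per_second  = frame_bytes * fps
ram_bytes   = 64 * 1024**3

print(f"one stereo frame  : {frame_bytes / 1024**2:.0f} MB")   # ~270 MB
print(f"one second @ 60fps: {per_second / 1024**3:.1f} GB")    # ~15.8 GB
print(f"64GB caches about : {ram_bytes / per_second:.1f} s")   # ~4 s at full float
[/code]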
IMO mentioning the xeon platform and ‘good bang-for-the-buck’ in the same sentence is ironic. If you have the money for dual high-end xeons, you can probably afford the extra $1300 to go from 128 to 256 gigs ram if you do heavy comping. 1866mhz ram isn’t a requirement for the new xeons. IMO it’s not worth the premium. I’d personally much rather have 256 gigs 1600mhz than 128 gigs 1866mhz. Does 1866mhz ECC ram even exist yet? Don’t you need ECC ram if you run more than 64 gigs ram anyway?
A large PCI-E SSD does sound great for comp work, but most of the ones I’ve looked at either have sketchy reviews/feedback (like the OCZ drives) or they’re slower enterprise-grade $5000+ drives. Granted, they cost a lot less than using 32 gig RAM modules, which I agree are outrageously expensive, but I think I’d still opt to just use a large regular SSD or put a couple in a RAID0.