I'm not saying the CPUs won't be good; they will be, and there are actually quite a few very good ideas in them.
I'm saying the 12-core ones are flat-out not meant for a workstation in the first place, and at their street price they wouldn't be remotely interesting anyway. The only decently clocked one, at 3.5 GHz, is still unlikely to beat an Ivy or Haswell clocked at 4.7 GHz in single-threaded work, no matter what else is thrown in there, and for rendering, the price of that one CPU will literally build you two full render donkeys.
They are simply not worth waiting for when your budget is three grand or less; they'd be a waste of money.
The CPUs themselves are pretty decent, but their bang per buck simply isn't aimed at this thread; it lies elsewhere.
[QUOTE=cgbeige]from Tom’s Hardware:
“It's looking like 3D modelers are going to seriously benefit from the potential that Ivy Bridge-EP offers to Apple's Mac Pro, even in a single-socket configuration.”
Oh yeah, TH, because they surely have a clue about bang for buck while rolling in sponsor and bribe money, and that quote sits right below a rendering test.
You want it in a render client, you really don’t want it in a workstation that needs decent interface/viewport speed.
ECC is most important for 24/7 machines, a fast SSD can be used in any kind of machines these days.
I was talking about the PCIe-based SSD, since SATA3 (6 Gb/s, roughly 550 MB/s in practice) is no longer adequate for the speeds SSDs are reaching, but this isn't exclusive to Xeons either, now that I think about it. It's just built into the new Mac Pro and other Macs. Anyway, I'll stop there. I just think the whole pro workstation scene is about to change, so it would be good to see what that landscape looks like in a month.
But ECC isn't just important for servers. I've already had to send back a RAM chip for my gaming machine/render helper, and RAM problems can be hard to diagnose since your system just behaves erratically. ECC at least maps out that defective area of the RAM and lets you keep working without interruption.
You can get PCIe-based SSDs. Yes, ECC can be helpful, but from a statistical pov it is most useful on machines that are working 100% of the time. Yes, you can lose work due to RAM errors, but ECC only helps with small errors like the ones you get from cosmic rays; defective memory sticks are often beyond ECC's ability to correct.
They aren’t because they are targeted at computational centres.
They feature a squillion low temperature, low interference, low clock cores.
24 virtual cores at 2.2 or 2.4 GHz for a 2.5-grand tray price is horrible bang for buck for a workstation.
Core parallelism is far from scaling linearly, and in an interactive scenario, which is the focus of a workstation, there is far too much poorly threadable, if not outright single-threaded, bottlenecking for them to be good.
Animation, rigging, bakes, and a lot of I/O-bound operations on caches are all strictly single-threaded, and many more CPU-bound operations scale horribly past the three or four thread range.
Even for rendering they might prove bad enough, depending on the engine and on how much per-thread memory duplication is necessary; in some cases, even with a generous cache, enough duplication will make you cache-starve and thrash so frequently that they perform very, very poorly. And lastly, if your engine is licensed per core rather than per node, you are paying through the nose just to serve them.
The 3.5 hexacore CPUs are the ones intended for workstation use.
While the 24.5 to 28 theoretical GHz spread across those cores might seem attractive, they will almost always (close to 100% of the time in a workstation scenario) end up vastly inferior to the 21 theoretical GHz spread across only 6. Only in massive thread-pool and many-VM scenarios, with low heat per VM and high yield per watt, do the 12-core CPUs have some advantage.
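To put numbers on that, here is a quick Amdahl's-law sketch; this is my own back-of-envelope, using the clocks quoted above, and the 50% parallel fraction for interactive work is an illustrative guess, not a measurement:

```python
def effective_ghz(cores, clock_ghz, parallel_fraction):
    """Clock-equivalent throughput when only a fraction of the workload
    parallelizes: the serial part runs on one core, the parallel part
    scales across all of them (Amdahl's law)."""
    p = parallel_fraction
    speedup = 1.0 / ((1.0 - p) + p / cores)
    return clock_ghz * speedup

# Fully parallel, the 12-core looks great on paper:
twelve_ideal = effective_ghz(12, 2.2, 1.0)   # ~26.4 "theoretical GHz"
six_ideal = effective_ghz(6, 3.5, 1.0)       # ~21.0

# Interactive work that's only ~50% threadable flips the result:
twelve_real = effective_ghz(12, 2.2, 0.5)    # ~4.1 GHz equivalent
six_real = effective_ghz(6, 3.5, 0.5)        # ~6.0 GHz equivalent
```

Under even a generous 50% parallel fraction, the fewer, faster cores come out well ahead.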
As for PCIe SSDs, that's far from a Xeon exclusive; the bonus with the more recent Ivy Xeons comes when you have absurd amounts of parallel SSD storage virtualized into multiple pools, which is again a farming scenario, not a workstation scenario (where for years now you have had PCIe storage a la the Fusion-io ioFX available).
ECC is 100% pointless here; its error prevention is purely for mission-critical scenarios, and the incidence of ECC recovering or preventing something is several decimal places away from significant. Unless you work outside the ionosphere during a solar storm.
Edit: I did sort of forget to include the E5-2697 v2, which is an actual 12-core, respectably clocked Ivy, but with a tray price in excess of 2.6k, a street price that's likely to be well over four grand before GST, and a locked multiplier, it's unlikely anyone will care much, unless your productivity is so valuable to you that you're OK forking out 10k, 80% of which is CPU, for a dual-CPU workstation just to see faster previews.
If you meant those, then yeah, they are nice, but priced ridiculously and most likely targeted at crossroads nodes where you need as much short-proximity oomph as possible with no regard for component cost.
Edit 2: Apparently the first 2697s on the street are retailing for 5 to 5.8k. I see now that 4k was very optimistic.
You misunderstand what ECC is for and how it works, I think.
ECC doesn't map out defective areas of memory; it adds parity and checksum bits, plus the procedures to use them, in case a transistor gets inadvertently switched by external causes.
That means radiation (cosmic rays, local radioactivity, enormous local interference, etc.).
None of it applies to your workstation except cosmic rays, and for one of those to hit a transistor squarely and switch it from a 0 to a 1, exactly while it's 0, just after it's been written and right before it's read, it needs to roll a bazillion-sided die and land exactly on a bazillion.
Stability is unaffected, ECC is accident preventing for mission critical deployments.
When Maya or your rendering engine crashes for a memory related issue, that’s usually a bad piece of hardware, a glitch in the OS, or plain bad programming/programming mistakes.
ECC can’t magically figure out why an entry in a long binary number makes an app crash, it only ensures it doesn’t get changed by external factors. External factors that under your desk, even with a 24/7 machine, occur about once in every forty years.
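For what it's worth, the parity mechanism is easy to demo. The toy Hamming(7,4) code below shows the kind of single-bit-flip correction being described; real ECC DIMMs use a wider SECDED code (72 bits per 64-bit word), but the principle is the same:

```python
def hamming_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # bit positions 1..7

def hamming_correct(c):
    """Return (corrected codeword, error position or 0 if clean)."""
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
    if s:                      # non-zero syndrome points at the flipped bit
        c = c[:]
        c[s - 1] ^= 1
    return c, s

word = hamming_encode([1, 0, 1, 1])
hit = word[:]
hit[4] ^= 1                    # one bit flipped "by a cosmic ray"
fixed, pos = hamming_correct(hit)
# The flip is found at position 5 and repaired: fixed == word again.
```

Note what it does: it repairs one externally flipped bit. It says nothing about crashes from bad code, OS glitches, or a failing module, which is exactly the point above.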
I don’t really care much about ECC in a workstation. IMO it matters more in a server that first caches files into memory before sending them across a network as a client requests them. They’ll then sit loaded into memory forever until they go stale when other requests eventually bump that data out of memory.
The xeon platform at least offers the capability to run a lot more memory. Compositors would love to run 256 or 512 gigs of ram, especially with 4k footage. Having more cores in an app like NUKE or AE will let you render more frames at the same time and eat up all that memory.
3D render nodes can run multiple simultaneous render jobs with all those cores if you have enough memory. That’ll help eliminate diminishing returns with so many cores instead of having all cores on a single render job.
How much memory do those new macs max out at? 4 ram slots right? Maybe they’ll support expensive 16 gig chips for a grand total of 64 gigs? That’s not an improvement over the 2011 socket i7 platform from a year and a half ago and a step backwards in terms of modern xeon platforms that have up to 32 ram slots.
It’s a shame the individual xeon core speeds, even with turbo mode are so low, but to me, big ram is one of the primary reasons for wanting a xeon platform as a workstation - aside from dual sockets.
Ya, I would think they max out at 64GB. I personally never need more than 24GB, even with Nuke, Photoshop and Maya open, but I don't comp 4K footage. I monitor my memory usage quite closely, so I know I'm not paging more than I need to, even with a 16GB machine. I would like to see actual resource usage for those compositors. After Effects is absolute shit for memory, though; it seems to eat as much as you feed it and doesn't flush its RAM properly. Nuke can take a lot, but it's not broken the way AE is.
I don’t know of many places rigging comp workstations with half a tera, not even for 4k stereo.
Normally, if you're willing to shell out the absurd amount of money that would set you back (because you need the densest modules), you're much better off having 64 or 128 gigs of RAM and instead investing in PCIe storage for tier-0 caching.
There are options that comfortably page back and forth to RAM at cache-ahead speed for 4k playback, so the RAM you actually NEED is just enough for the graph being rendered and for the I/O-ops pool to stay lively, which is far, far less than 512, or 256, or in fact even 128.
In short, no, it's not an attractive option in actual practice. Most dual-Xeon mobos supporting v2 come with 16 slots. To deck out 512GB in one you have to buy 32GB reg ECC modules, and you'd better buy 1866; that sets you back some twelve grand in memory alone. You'd be much better off spending a couple of grand on memory and the rest on a good I/O card. MUCH better.
128GB is more or less where the bang for buck curve (for an expensive configuration) has already plateaued. Past that you are in ridiculous territory and again squarely in crossroads serving.
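Using the figures just quoted (street-price ballparks, they move around), the dollar-per-GB gap is stark:

```python
slots = 16
high_density = slots * 32            # 512 GB from 32GB reg ECC modules
low_density = slots * 8              # 128 GB from 8GB modules

per_gb_512 = 12_000 / high_density   # ~$23/GB at the quoted ~12 grand
per_gb_128 = 2_000 / low_density     # ~$16/GB at a couple of grand
```

So the 512GB build doesn't just cost 4x more in total, it also pays a hefty density premium per gigabyte, before you've bought any I/O card at all.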
We've been comping 4k stereo at 60fps for the past 6 months, and 64 gigs of RAM is barely enough to cache 5-10 seconds' worth of footage, depending on the complexity of the comp. I've had one extreme case, an 8k x 8k single-image comp with something like 90 elements, that required 60 gigs of memory to comp that one frame at full resolution. Granted, there are ways to prerender things out so it doesn't use so much memory, but that takes extra time and is a drag. No doubt 256 gigs of RAM is a lot of memory, but I don't think it's all that extreme if it helps you work better.
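The uncompressed-cache arithmetic roughly backs that up; assuming 4096x2160 RGBA half-float frames here (swap in your real format):

```python
w, h, channels, bytes_per_ch = 4096, 2160, 4, 2   # RGBA, half-float
frame_bytes = w * h * channels * bytes_per_ch     # ~67.5 MiB per eye
per_second = frame_bytes * 2 * 60                 # stereo, 60 fps: ~7.9 GiB/s
seconds_in_64gb = 64 * 2**30 / per_second         # ~8 seconds of cache
```

About eight seconds of footage in 64GB, squarely inside the 5-10 second range quoted, and before the comp graph itself has taken its share of the memory.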
IMO mentioning the Xeon platform and ‘good bang-for-the-buck’ in the same sentence is ironic. If you have the money for dual high-end Xeons, you can probably afford the extra $1300 to go from 128 to 256 gigs of RAM if you do heavy comping. 1866MHz RAM isn't a requirement for the new Xeons; IMO it's not worth the premium. I'd personally much rather have 256 gigs at 1600MHz than 128 gigs at 1866MHz. Does 1866MHz ECC RAM even exist yet? Don't you need ECC RAM if you run more than 64 gigs anyway?
A large PCIe SSD does sound great for comp work, but most of the ones I've looked at either have sketchy reviews/feedback (like the OCZ drives) or they're slower enterprise-grade $5000+ drives. Granted, they cost a lot less than using 32 gig RAM modules, which I agree are outrageously expensive, but I think I'd still opt to just use a large regular SSD or put a couple in a RAID0.
Man, you always seem to work on projects with specs and issues nobody else in the whole world deals with.
At 4k 60fps with the graph still active, you are unlikely to be able to play back at all. Unless you're talking about playing back rendered frames, at which point you could very well do 4k at 30fps, and that's pretty much available with most modern media-oriented flash solutions for PCIe 2.0.
There are 5k 48fps solutions out there, which should match the bandwidth of 4k at 60 (at what ratio, though? big difference between near-square IMAX and CinemaScope 2.35).
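Rough pixel-rate math shows how much the ratio matters; the frame heights here are my assumptions (~1.43:1 for the IMAX-ish case, 2.35:1 for scope), so check against the actual product spec:

```python
four_k_60 = 3840 * 2160 * 60      # ~498 Mpx/s

imax_5k_48 = 5120 * 3580 * 48     # ~880 Mpx/s, ~1.43:1 assumed
scope_5k_48 = 5120 * 2179 * 48    # ~536 Mpx/s, 2.35:1

# Scope 5k@48 sits in the same ballpark as 4k@60;
# the near-square frame needs well over 1.5x the bandwidth.
```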
Anyway, I don't know what solutions you looked at, but two SSDs in RAID0 are far from the same as dedicated tier-0 storage, and the slower enterprise solutions might simply have been a matter of looking at the wrong products (since there are plenty of such things for database servers etc.). They sure aren't slow, nor do they cost 5k for a slow one, unless you're looking at the 6 or 7 tera ones.
Half-tera Fusion-io ioFX drives are remarkably fast, can be coupled (and will then do IMAX 5k at 24 or 4k at 30fps with overscan just fine), and are about a grand a pop.
Every time I looked at large ram (DIMM) storage, it always came out to a TON more than just 1300 bucks difference for the 128 to 256 jump.
What reliable reg-ECC 16x16GB kits have you found that cost only $1300 more than 16x8GB ones? That's better dollar-per-GB than cheapo gamer RAM, for modules with twice the density. Not challenging you here; I'm sincerely interested in where you find them so cheap.
512 is still a plain and loud laugh price-wise, but we seem to agree on that.
Playing final 4k frames isn't an issue, especially if it's final compressed video footage. Where I run into memory bottlenecks is in the comp software, wanting to scrub through the final full-resolution frames while I'm working. You'll run out of memory pretty quickly as it loads up all the layers at full resolution. I have a dedicated SSD cache drive for AE that never seems to use more than 22 gigs. I'll take a look at the Fusion-io drives for storing render layers on, but aren't they like $5500 for their 1.6TB drive?
I'm still not sure fast PCIe SSDs would remove the need for more memory, though, because if I have a bunch of CPU cores on my workstation and need AE to output a comp ASAP, each instance of AE I launch to render a different frame range needs a lot of memory of its own. I know a program like Nuke is better at multithreading than AE, but Nuke also isn't cheap, nor is the time spent learning it, and it still has shortcomings in a lot of the motion-graphics capabilities AE has.
I routinely get asked to “quickly” render out 1-minute stereo turntable animations (which I elect to render at 4k 60fps) of some specific anatomy, to play on the 4k auditorium projector between the doctors' presentations, or as content to span across 12 vertical 1080p screens acting as one. A lot of the work itself isn't anything crazy other than the output spec, but the whole situation is a little backwards from what most small studios probably deal with; they might have more staff, but less expensive equipment, or standard HD or film resolution formats.
As an aside, right now you can't play 4k stereo at 60fps in a video format until h265 is here. You can play 1620p stereo at 60fps or 4k mono at 60fps until h265 is fully out and supported by players and compressors. The ironic thing is, the 4k stereo projector ends up artificially cutting the resolution in half due to its polarizing lens. I render in 4k anyway for the sake of high-res mono playback and a more print-friendly resolution, not to mention it's just better. The 60fps actually helps a lot with smoothing out noise, since you see each frame half as long.
One sorta funny thing is, the video editors figure their 1080i footage is already 60fps, and so figure I should probably just match their framerate.
I don’t know the specific brand or model if you were to buy the memory outright though. I honestly wouldn’t be shocked if it’s as expensive as you say. I typically don’t price out individual parts for server-class anything since we buy those systems ready to go directly from vendors with full warranty.
Well, bear with me here, but this is, as usual, going a very long way around through some extremely singular, if not contrived, scenarios.
So when you said “Compositors would love to run 256 or 512 gigs of ram”, what you really meant, rather than talking generically about compositors, was: “In this entirely unique scenario where I work hooked up to a $45k projection system (because a normal monitor surely doesn't warrant preview playback at 4k), in a format largely unheard of outside my workplace, supported by an IT dept miles away from the ones normally supporting the creative industries that employ specialised compositors, 64GB got constrained and 128 or 256 would have been nice.”
And even then, since you sure won't be evaluating that kind of I/O while rendering at 24fps, let alone 60, at 4k, 1TB of tier-0 storage capable of playing your rendered previews is still preferable to an added 384GB of RAM, IMO.
Of course if you can have both because of a budget to burn through, get both!
Not having a go at you, please don't misunderstand me; it's just that, since this thread has turned a bit into bang for buck on mid-range setups, I don't feel it's a fair statement that, for the price, the ability to load half a tera of RAM is that much of a deciding factor in going with Xeons.
My workstation isn't hooked up to a 4k projector. My main monitor is just a standard 2560x1440 monitor. The 4k and stereo projectors are each one of the final destinations the work is shown on, so I do care about seeing 1:1 pixels at 4k, which I have to zoom in to see on my monitor. It eats a lot of memory when you resolve several frames at 100% full resolution. I often run out of memory, AE drops those previously rendered frames from memory, and I have to rerender them if I want to go back to them.
I do agree 512 gigs of RAM is completely insane if we're talking about bang for the buck. Hell, I wouldn't even choose the Xeon platform if bang for the buck was the main concern. Otherwise, the key benefits of Xeon over i7 are ECC memory, dual sockets (with slower-clocked CPUs), and more PCIe slots. That's about it. There are some other auxiliary things, like Intel Xeon Phis only working on the Xeon platform, not that we know yet how those will actually pan out for real-world rendering.
All I was saying was: what compositor wouldn't love a ton of memory? Because that is something the Xeon platform can offer over the i7 platform. My thought is, if you're going to spend the extra cash for big Xeons and more memory will benefit you, that would normally be one of the first justifications in my mind for paying the higher premium. Dual sockets and more (slower) cores start having severe diminishing returns, but I have to admit the new v2 Xeons are a solid improvement, while the 4930k is a complete let-down compared to the 3930k.
From a sheer processing standpoint, in my mind the new high-end Xeons are finally possibly worth the premium over an overclocked i7 if you do heavy rendering. It's no longer a 40% render-speed improvement for 300% of the i7's price; it's now more like 80% faster rendering for 320% of the price (total system price).
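In perf-per-dollar terms, normalizing the i7 system to 1.0 on both axes and using the ratios above:

```python
old_ratio = 1.4 / 3.0   # 40% faster at 300% the price: ~0.47x the i7's value
new_ratio = 1.8 / 3.2   # 80% faster at 320% the price: ~0.56x the i7's value
# Still worse value per dollar than the i7, but the gap has closed noticeably.
```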