Uh, was the E5-1620 even dual-processor capable? I think not…
(Personally I still have a single i7-3930K, so I didn’t try…)
building a low cost c4d workstation
Depending on the motherboard you get, I'd recommend the second-fastest i7: in both the latest gen and the previous gen, the second-fastest unlocked CPU overclocks so well that the performance of both levels out when overclocked.
If you hate the idea of overclocking then maybe the top ones are worth it, but honestly a decent motherboard nowadays has a single-button-press overclock that high-quality air coolers or any liquid radiator will handle just fine. It will easily take the 3930 or 4930 to 4.2 to 4.4 with no effort or risk, or really even any knowledge of overclocking.
As far as I know all the Xeons are fine in a dual board. It would be pretty silly if they were not.
You are right, all 1xxx Xeons are for single socket systems, they won’t work in a dual CPU configuration.
I would go for an i7 solution as well; in fact I did a couple of weeks ago and now use a thoroughly fast i7-4770K, slightly overclocked to 4.4. Its CB value is one of the reference values that come with CB.
The 1xxx series Xeons are rather pointless. Last I checked, the first digit determines how many CPUs you can use at once, hence they all start with a 1, 2, 4 or 8. A single Xeon makes absolutely no sense as a 3D workstation.
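To illustrate that naming rule, here's a toy sketch (just my reading of the E5/E7 four-digit scheme from that era; it doesn't cover E3 parts or later naming, and the model strings below are only examples):

```python
import re

# Toy decoder for the convention described above: in the E5/E7 four-digit
# model numbers of this generation, the first digit is the maximum number
# of sockets the chip can be used in (1, 2, 4 or 8).
def max_sockets(model: str) -> int:
    m = re.search(r"E[57]-(\d)\d{3}", model.upper())
    if not m:
        raise ValueError(f"not an E5/E7 model string: {model}")
    return int(m.group(1))

for cpu in ["E5-1650 v2", "E5-2687W", "E7-8890"]:
    print(cpu, "->", max_sockets(cpu), "socket(s)")
# E5-1650 v2 -> 1 socket(s)
# E5-2687W -> 2 socket(s)
# E7-8890 -> 8 socket(s)
```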
I don’t totally agree: an E5-1650 v2 costs as much as an i7-4930K and has a slight GHz advantage (if you are not interested in OC), plus it supports ECC memory and some more features. While the E5-1650 v2 is very attractive for its price point, I agree with you that many other Xeon CPUs do not offer the same price/performance ratio.
Which bright spark moved this thread to a forum where nobody will ever see it again?
If I had a pound for every time someone talked about processor/board/gfx combos specifically for C4D, I could retire!
there is no logic at all to this kind of moderation.
Hmm, now I see that i7 does not support ECC RAM, which throws me back into a state of confusion. I want ECC RAM, so that means just Xeons (as far as I can tell)?
I actually could use just the 1x Xeon. I have dumped the dual-processor idea altogether and ordered that WS board. So now I need the fastest Xeon 6-core, with as many MHz as I can get.
which one is that?
I actually used to enjoy all this CPU carry-on, but to me it looks like Intel are trying their best to make it all as confusing as possible, to up the chances of people buying shit they don't need for more than they want to pay.
My CPU budget is 1k and I want one Xeon (socket 2011); am I gonna get a >=6-core CPU with 3.6 GHz or more for that?
There are a couple of discussions on this topic right here in this forum, the last one from just a couple of days ago.
ECC RAM is really most useful for 24/7 high-availability systems. It mostly protects against RAM errors due to cosmic rays. For a normal workstation it usually is not necessary.
The common definition of a workstation system is somewhat outdated.
As far as I can tell Xeon requires ECC, i7 does not, but i7 actually seems to be able to use it as well if the board supports it (2011 does).
what a fantastic way to waste time 
I was going to just buy a Dell box, but they were taking the p*ss on price, imo, which is why I started component hunting. Seems everyone slaps a premium on their boxes at the hint of the word “workstation”.
Correct and imo very much unjustified.
Higher powered i7 systems don’t register for most manufacturers for anything but gaming systems. They either have i5 or low power i7 for business use, or Xeons for “workstation” and server use.
Current Haswell i7s deliver good bang for the buck and run cool and reliably.
As far as I know most Xeons will also work without ECC RAM. I'm also pretty sure that i7s do not take advantage of ECC memory regardless of which motherboard you choose.
For gfx artists, ECC memory offers nothing useful. Some people will see that ECC memory error-corrects and gives a more stable system, and wrongly assume it will help them. The truth is, virtually every crash or error you suffer will be because of buggy code in your software or drivers; ECC only protects against electrical instability caused by extremely rare circumstances.
It's the computer equivalent of buying elephant repellent whilst living in a London apartment…
the only thing I could make my mind up about is the motherboard…
I am now even more confused than ever about a gfx card.
A: the GTX 680, Titan, etc. have just got a massive price drop and that has not been reflected in the prices I see.
And look at this!!


These game cards are basically pants in a pro application, it seems. Is it like this in C4D?
If not, there's something wrong with C4D's OGL implementation? Has me thinking…
I find the result astonishing, considering that ever since I have been using C4D the consensus has been that professional gfx cards are a waste of money. Seems not to be the case in those apps.
GTX 580 looking good for OGL?
That is mainly because professional Quadro and FirePro cards come with special optimized drivers for Maya and Max. It does make quite a difference.
For Cinema 4D my experience is that the high-end AMD consumer cards traditionally outpace both Quadro and FirePro cards. For example, in Cinebench 15, on my aging system, the 7970 GPU score is 87 fps. Quite a bit higher than a Quadro (K)4000.
But it depends on various factors. Also, those SPEC benchmarks only test part of the opengl performance, and both Max and Maya run in DirectX now - which tends to bring a performance boost on Windows workstations as well.
A Titan is probably a good match, and it also supports CUDA-accelerated applications well with its 6 GB.
Although some of these GPU scores look a bit unbelievable, check out this benchmark list:
http://www.cbscores.com/index.php?sort=ogl&order=desc
> CUDA accelerated applications
Beyond a few tech demos, I've never seen a CUDA-accelerated application (maybe some of the new realtime renderers?).
The only place I could use anything like that is OpenCL together with PhotoScan.
I kinda fancy that new Radeon 290X; it's looking good for the money, and I would support AMD out of principle right now, just for making NV drop its prices (they have been taking the p*ss for quite a while now).
Most of my H.264 video encoding applications at home are CUDA-accelerated.
At work we also use cartographic/topologic pro apps that rely on CUDA.