They did a 180 on that, and have now announced they won’t make any more consumer-level mobos but will leave that to the existing manufacturers (Asus, GB etc.).
They are definitely offering socketed CPUs for the foreseeable future.
What next for Intel?
Intel has really gotten lazy in the high-performance department.
Everything is about power savings and mobility with them these days.
It’s not just laziness; they are about to approach architectural limits again, and doubling the number of cores again isn’t going to do much when the alternatives they are researching point towards a more Larrabee-like set of instructions and management, with core counts in the 64-256 range.
It’s always like this: the iX generation was a big deal, but it’s plateauing, and they have to take into account the shift in personal computing, combined with server needs, to make a good dime.
You’ll probably see another shake-up in a (relatively) short while if some collabs come to fruition, but until then expect just a little more of the same.
If you’ve got 40K, Intel makes an 80C/160T machine. They just don’t advertise things like that to consumers. Intel has something like a 90% share in the server world.
Their currently available chips have a manufacturing precision of 22nm, and rumour has it that anything below 17nm won’t function due to the physical limits of the materials.
I’ve got an Intel 860 in my computer as well, and I’ve got no plans to upgrade.
-AJ
Then nothing better than the i7 3770 comes in 2013. Ok. Yeah, I will keep the 860 unless I get a sudden surge of money and nothing to do with it.
Overclocking the 3770k might be nice though…
There have been working FET demo models as small as 5nm since 2002 and 2004 (single-gate and multi-gate respectively).
Last November Samsung showed a 10nm media card, and we’re already at the 14nm node officially.
nVIDIA forecasts 10-11nm by 2015 (despite the current node map giving 10nm in 2016), and Intel argues that node should be skipped to move straight to 8nm.
The smallest possible functional FET in non-self-interfering large arrays is a lot smaller than 17nm, even when you account for the difference between experimental and commercial.
I am, however, looking forward to single-atom transistors and ternary-state protein-based transistors; shortly before machines take over the world, we’ll enjoy a short but blissful period of responsive apps and awesome visuals in games the likes of which humanity has never experienced before.
I’d go for a 3930k before a 3770k. If you’re on a budget for 4 cores though, the 3770k has no real advantage over the 2600k from 1.5 years ago, other than the newer things the new chipsets support, like TRIM support for RAID with SSDs etc.
oooo very cool. Thanks for sharing that. 
I think what I read was maybe that current silicon semiconductors couldn’t carry a strong enough current when gates were smaller than 17nm. I saw this really nifty video from GF where they showed the gist of how they make these things, and demonstrated how a vastly improved technique or process would be needed to get much smaller. Then again, my memory may have betrayed me, and I might be spewing complete rubbish.
Is it more difficult to make a CPU on a smaller scale due to the more complex instruction sets that need to be processed? Or is it simply market conditions that have stalled CPU performance gains? My little 860 is almost 5 years old, but the new 3770K is only about 60% faster.
I really want to get into and learn more about this stuff, but I’ve been trying to get my physics, chemistry, and electricity knowledge up before I dive into those classes.
-AJ
You mean the 860 is 60% slower? I see about a 140% speed bump.
http://www.anandtech.com/bench/Product/108?vs=551
Which is not bad actually. Quite worth the upgrade if you really need it.
I don’t know, maybe I got my numbers wrong. I think the 860 scored a 5 on Cinebench 11.5, and the 3770K scored around 7.8, so I guess you could word it as the 3770K offers 64% more performance than the 860, or 164% of the performance of the 860.
So you have K = X + 0.64X in the first statement, and K = 1.64X in the second. I think it’s just a semantics argument. Either way, the 3770K is a lot quicker at rendering.
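If it helps, here’s a throwaway sketch of that semantics point, using a generic baseline of 100 rather than the actual Cinebench scores:

```python
# "64% more performance than X" and "164% of X's performance" are the
# same claim, just worded differently.
X = 100.0
K1 = X + X * 0.64   # "64% more than X"
K2 = X * 1.64       # "164% of X"
print(K1, K2)       # both print 164.0

def percent_faster(new, old):
    """Turn two scores back into a relative gain, e.g. 164 vs 100 -> ~64 (%)."""
    return (new / old - 1) * 100
```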
-AJ
Not complete rubbish at all, but there’s an important distinction to be made between the various types (pFETs, finFETs, MOSFETs etc.), and between problems with manufacturing the transistor itself, gating it, power consumption and all the correlated issues, versus sub-17nm being plain unachievable, or even inconvenient (Intel has a perfectly functional and gated 10nm with no gate-distance or gate-spacing problems btw, it’s just not fully able to scale up to large consolidated arrays without interference issues).
17nm IS a milestone, but it has more to do with a sweet spot in the curve for current gate modulators than it does with the size of the transistor itself.
Berkeley has an excellent presentation on this:
microlab.berkeley.edu/text/seminars/slides/moroz.pdf
Is it more difficult to make a CPU on a smaller scale due to the more complex instruction sets that need to be processed? Or is it simply market conditions that have stalled CPU performance gains?
Honestly, I don’t know. What little I know about electronic engineering and design comes from the bits and bobs I find interesting and read about. I doubt anybody outside of AMD, Intel or nVIDIA could give a clear and comprehensive picture on this.
The balance between the various parts of manufacturing and repurposing plants, expected yields and so on is trickier than most of us care to map.
I believe there is a strong component of making sure you have a market to sell to driving the evolution, and it’s undeniable that single-pipe, high-pressure designs are disappearing, which makes those investments less advantageous.
The world of CPUs is both polarizing within the existing technologies (more low power and high yield per amp for portable and small form factor, combined with servers scaling differently than they used to), and only now coming close to the physics brick wall of materials and figuring out what to do for the next gen of high performance.
It’s also not dissociated from programming trends, available platforms and so on.
Processing units are only as good as the average developer is willing to make them while respecting time and budget, and with whatever tools you provide them with (CUDA, Intel SIMD etc.).
My little 860 is almost 5 years old, but the new 3770K is only about 60% faster.
Intel had an extremely successful run with the first few models of i7 and the related architectural changes. That was both a spike and an alignment where many things changed together for the better. It’s only normal IMO that, given the current scenario, you don’t see the clock and relative performance jumps you would have seen with P4s five years apart.
I can’t give you a percentage; again, many things come into play beyond sheer speed these days, but the times when you could unsocket your three-year-old CPU, put a new one in the same socket and see render times halved overnight are long gone. I don’t know if a single benchmark is the best way to test, though.
I really want to get into and learn more about this stuff, but I’ve been trying to get my physics, chemistry, and electricity knowledge up before I dive into those classes.
-AJ
Nothing wrong with that.
What I usually do is, when I bump into interesting questions or concepts, I start wiki hopping and looking for university courses and related documentation, see what parts I think I meet the prereqs for, and then start reading on lunch breaks or when I’m bored.
It’s hardly how you get an engineering degree, but I’ve picked up many odd bits and bobs over the years that even years later helped me tons in having some waypoints when I was in some uncharted territory.
I’m the information equivalent of a pack rat and have an oddly dysfunctional memory for such things, but I see no harm in spending my idle time going through uni courses and literature, or economy-related articles and papers, instead of spending it on YouTube watching funny cat videos like a lot of people do.
Thanks Raffaele for your response. I really appreciate you taking time to share your knowledge on this stuff. 
Wow! It’s crazy how wobbly everything looks in those images. Everything looks so square and even in all those diagrams you see in CS101 books. 
I’ve tried going through a Computer Architecture class on Coursera, and I’ve looked at a few books. I didn’t know my watts from my amps until a few months ago, so I’ve got some work to do before I get there.
Hell, even just early last year, I didn’t know many basic algebra concepts. I went through a bunch of videos on Khan Academy and was able to test into a Calculus course at my local CC.
Either way, I’m glad I’ve toned it down with the art stuff and have started to get more into the technology. It’s much more fulfilling.
-AJ
I’d go for a 3930k before a 3770k.
Will the Turbo Boost compensate for the lower clock speed?
I need very strong single threaded performance.
I think a typical highly overclocked 3770k can have something like a 2-3% advantage in single-threading over a typical highly overclocked 3930k, but IMO the ~50% higher multithreaded performance of the 3930k, and potentially 8 memory slots, is worth going that route if you have the extra money.
And honestly, the 3930k can be pushed harder in a predictable way if you wanted to overclock even higher, while the 3770k quickly hits a hard ceiling because of Intel using thermal paste instead of solder to connect the CPU core to the heat spreader. The 3770k has a really bad temp threshold when overclocked hard.
For a production machine, though, I wouldn’t absolutely max out the overclocking, so either chip should be fine for 4.5-4.7GHz.
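A rough back-of-the-envelope way to see where that ~50% comes from, assuming equal per-core IPC and ideal scaling across cores (both assumptions; the 4.7/4.5GHz clocks are just example overclocks from the range above):

```python
# Crude core-GHz comparison for an all-core workload like rendering.
# Assumes equal IPC per core and perfect scaling, which real chips only approach.
chips = {
    "i7-3770K": {"cores": 4, "clock_ghz": 4.7},
    "i7-3930K": {"cores": 6, "clock_ghz": 4.5},
}

baseline = chips["i7-3770K"]["cores"] * chips["i7-3770K"]["clock_ghz"]

for name, c in chips.items():
    core_ghz = c["cores"] * c["clock_ghz"]
    print(f"{name}: {core_ghz:.1f} core-GHz ({core_ghz / baseline:.0%} of the 3770K)")

# i7-3770K: 18.8 core-GHz (100% of the 3770K)
# i7-3930K: 27.0 core-GHz (144% of the 3770K)
```

Even with the slightly lower clock, the two extra cores put the 3930k roughly 40-50% ahead on well-threaded work, while single-thread stays basically a wash.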
What’s next for Intel… They are too late for tablets and phones, as no OEM in the world would want a lock-in to Intel’s architecture. ARM is more than enough for that market and ARM chips are cheap. 64-bit ARM is around the corner too, so the server market will give Intel a run for their money pretty soon as well.
Intel will have to reinvent its x86 architecture so it can compete in supercomputing environments with nVIDIA CUDA and with future 64-bit ARM servers at the same time. Interesting times are ahead.
In the short run there will be minor speed gains for desktop Intel chips and maybe a couple more cores for Xeons, but what will happen after that is beyond my imagination. Maybe hell will freeze over and Intel will actually buy nVIDIA. :twisted:
Yeah, apparently AMD just bought a licence to make 64-bit ARM server chips. Maybe they can one day make a dent in Intel’s near-monopoly on servers.
I’m sure Intel’s working on its own RISC architecture as well though.
I think AMD is going to be pushing its ARM server chips on the premise that they save power. I’m pretty sure the hardware overhead would be a lot higher for an ARM-based datacenter, so it will be interesting to find out at what scale ARM becomes the more cost-effective solution.
I don’t think ARM can compete with x86 on a per-node performance basis, so we’ll likely not see hot workstations or gaming rigs based on ARM any time soon.
Either way, I’m not very informed on this type of stuff, so if I’m way off here, please correct me. 
-AJ
Of course ARM would yield worse performance per square cm of die than x86-64, but the reason Intel has been rushing ARM-like work is that the server market has evolved a lot recently.
Many virtual machines running off the same CPU were almost unheard of a while ago; heavy virtualization is now not only common, it’s about to become dominant even for some tasks that were performance-capped in the past.
Even performance-sensitive things like cloud computing services and the like are moving to many-machine paradigms and revising their pricing models.
It’s not unlikely, not as much as it used to be at least, that many arrayed mini-RISC cores might be a considerable chunk of the future, and Intel is actually late on that, although I would say far from too late.