What next for Intel?


#9

I’d go for a 3930k before a 3770k. If you’re on a budget and limited to 4 cores, though, the 3770k has no real advantage over the 2600k from 1.5 years ago other than what the newer chipsets support, like TRIM for SSDs in RAID, etc.


#10

oooo very cool. Thanks for sharing that. :stuck_out_tongue:

I think what I read was maybe that the current silicon semiconductors couldn’t carry a strong enough current when gates were smaller than 17nm. I saw this really nifty video from GF where they showed the gist of how they make these things, and demonstrated how a vastly improved technique or process would be needed to get much smaller. Then again, my memory may have betrayed me, and I might be spewing complete rubbish. :blush:

Is it more difficult to make a CPU on a smaller scale due to the more complex instruction sets that need to be processed? Or is it simply market conditions that have stalled CPU performance gains? My little 860 is almost 5 years old, but the new 3770K is only about 60% faster.

I really want to get into and learn more about this stuff, but I’ve been trying to get my physics, chemistry, and electricity knowledge up before I dive into those classes.

-AJ


#11

You mean the 860 is 60% slower? I see about a 140% speed bump.
http://www.anandtech.com/bench/Product/108?vs=551
Which is not bad, actually. Quite worth the upgrade if you really need it.


#12

I don’t know, maybe I got my numbers wrong. I think the 860 scored a 5 on Cinebench 11.5, and the 3770K scored around 7.8, so I guess you could word it as the 3770K offering 56% more performance than the 860, or 156% of the 860’s performance.

So you have K = X + X(0.56) in the first statement, and K = X(1.56) in the second. I think it’s just a semantics argument. Either way, the 3770K is a lot quicker at rendering. :stuck_out_tongue:
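
Just to spell the arithmetic out (a throwaway sketch; the scores are the rough Cinebench figures quoted above, not authoritative benchmark numbers):

```c
#include <stdio.h>

int main(void) {
    /* Rough Cinebench 11.5 scores from the posts above. */
    double score_860   = 5.0;
    double score_3770k = 7.8;

    double ratio = score_3770k / score_860;            /* 1.56 */
    printf("3770K = %.0f%% of the 860's performance\n",
           ratio * 100.0);                             /* 156%  */
    printf("3770K = %.0f%% more performance than the 860\n",
           (ratio - 1.0) * 100.0);                     /* 56%   */
    return 0;
}
```

Same ratio both ways; the only difference is whether you count the baseline in the percentage.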

-AJ


#13

Not complete rubbish at all, but there’s an important distinction to be made between the various transistor types (pFETs, finFETs, MOSFETs, etc.), between the problems of manufacturing the transistor itself, gating it, power consumption and all the correlated issues, and sub-17nm being plainly unachievable or even inconvenient (Intel has a perfectly functional and gated 10nm transistor with no gate-distance or gate-spacing problems, btw, it’s just not yet able to scale up to large consolidated arrays without interference issues).

17nm IS a milestone, but it has more to do with a sweet spot in the curve for current gate modulators than with the size of the transistor itself.

Berkeley has an excellent presentation on this:
microlab.berkeley.edu/text/seminars/slides/moroz.pdf

Is it more difficult to make a CPU on a smaller scale due to the more complex instruction sets that need to be processed? Or is it simply market conditions that have stalled CPU performance gains?

Honestly, I don’t know. What little I know about electronic engineering and design comes from whatever bits and bobs I find interesting and read about. I doubt anybody outside of AMD, Intel or nVIDIA could give a clear and comprehensive picture on this.
The balance between the various parts of manufacturing, repurposing plants, expected yields and so on is trickier than most of us care to map.

I believe a strong component driving evolution is making sure you have a market to sell to, and it’s undeniable that single-pipeline, high-clock designs are disappearing, which makes those investments less advantageous.

The world of CPUs is both polarizing across the existing technologies (more low-power, high-yield-per-amp parts for portable and small-form-factor devices, combined with servers scaling differently than they used to), and only now coming close to the physics brick wall of the materials and figuring out what to do next for the next generation of high performance.

It’s also not dissociated from programming trends, available platforms and so on.
Processing Units are only as good as the average developer is willing to make them while respecting time and budget, and with whatever tools you provide them with (CUDA, Intel SIMD, etc.); see the little sketch below.
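
To illustrate what I mean (a minimal sketch in C with SSE intrinsics, which every x86 CPU since the Pentium III supports; the function names are made up for the example): both routines compute the same thing, but only the second actually exercises the vector units the chip ships with. Silicon that software never asks for is wasted capacity.

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

/* Scalar version: one float multiply-add per loop iteration. */
static void saxpy_scalar(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* SSE version: four multiply-adds per iteration. The hardware could
   always do this; whether shipped software ever issues these
   instructions is up to the developer (and their time budget). */
static void saxpy_sse(float a, const float *x, float *y, int n) {
    __m128 va = _mm_set1_ps(a);   /* broadcast a into all four lanes */
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
    for (; i < n; i++)            /* scalar tail for the leftovers */
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[6] = {1, 2, 3, 4, 5, 6}, y[6] = {0};
    saxpy_sse(2.0f, x, y, 6);
    for (int i = 0; i < 6; i++)
        printf("%g ", y[i]);      /* prints: 2 4 6 8 10 12 */
    printf("\n");
    return 0;
}
```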

My little 860 is almost 5 years old, but the new 3770K is only about 60% faster.

Intel had an extremely successful run with the first few models of i7 and the related architectural changes. That was both a spike and an alignment where many things changed together for the better. It’s only normal, IMO, that in the current scenario you don’t see the clock and relative performance jumps you would have seen across five years in the P4 era.
I can’t give you a percentage, since again many things come into play beyond sheer speed these days, but the times when you could expect to unsocket your three-year-old CPU, put a new one in the same socket, and see render times halved overnight are long gone. I don’t know if a single benchmark is the best way to test, though.

I really want to get into and learn more about this stuff, but I’ve been trying to get my physics, chemistry, and electricity knowledge up before I dive into those classes.

-AJ

Nothing wrong with that.
What I usually do, when I bump into interesting questions or concepts, is start wiki-hopping and looking for university courses and related documentation, see what parts I think I meet the prereqs for, and then start reading on lunch breaks or when I’m bored.
It’s hardly how you get an engineering degree, but I’ve picked up many odd bits and bobs over the years that, even years later, helped me tons by giving me some waypoints when I was in uncharted territory.

I’m the information equivalent of a pack rat and have an oddly dysfunctional memory for such things, but I see no harm in spending my idle time going through uni courses and literature, or economy-related articles and papers, instead of watching funny cat videos on youtube like a lot of people do :slight_smile:


#14

Thanks Raffaele for your response. I really appreciate you taking time to share your knowledge on this stuff. :stuck_out_tongue:

Wow! It’s crazy how wobbly everything looks in those images. Everything looks so square and even in all those diagrams you see in CS101 books. :wink:

I’ve tried going through a Computer Architecture class on Coursera, and I’ve looked at a few books. I didn’t know my watts from my amps until a few months ago, so I’ve got some work to do before I get there.

Hell, even just early last year, I didn’t know many basic algebra concepts. I went through a bunch of videos on Khan Academy and was able to test into a Calculus course at my local CC.

Either way, I’m glad I’ve toned it down with the art stuff and have started to get more into the technology. It’s much more fulfilling.

-AJ


#15

that’s the lure… and then, when you’re feeling comfortable with it all, it eats your soul.


#16

I’d go for a 3930k before a 3770k.

Will the turbo boost compensate for the lower clock speed?

I need very strong single threaded performance.


#17

Usually you turn turbo boost off if you overclock.


#18

I think a typical highly overclocked 3770k can have something like a 2-3% advantage in single-threading over a typical highly overclocked 3930k, but IMO the roughly 50% higher multithreaded performance of the 3930k (six cores versus four) and the possibility of 8 memory slots make it worth going that route if you have the extra money.

And honestly, the 3930k can be pushed harder in a predictable way if you want to overclock even higher, while the 3770k quickly hits a hard ceiling because Intel used thermal paste instead of solder to connect the CPU core to the heat spreader. The 3770k has a really bad temperature threshold when overclocked hard.

For a production machine, though, I wouldn’t absolutely max out the overclocking, so either chip should be fine at 4.5-4.7GHz.


#19

What’s next for Intel… They are too late for tablets and phones, as no OEM in the world would want lock-in to the Intel architecture. ARM is more than enough for that market, and ARM chips are cheap. 64-bit ARM is around the corner too, so the server market will give Intel a run for its money pretty soon as well.

Intel will have to reinvent its x86 architecture so it can compete in supercomputing environments with NVIDIA CUDA and with future 64-bit ARM servers at the same time. Interesting times are ahead.

In the short run there will be minor speed gains for desktop Intel chips and maybe a couple more cores for Xeons, but what will happen after that is beyond my imagination. Maybe hell will freeze over and Intel will actually buy NVIDIA. :twisted:


#20

Yeah, apparently AMD just bought a licence to make 64-bit ARM server chips. Maybe they can one day make a dent in Intel’s near-monopoly on servers.

I’m sure Intel’s working on its own RISC architecture as well though.

I think AMD is going to be pushing its ARM server chips on the premise that they save power. I’m pretty sure the hardware overhead would be a lot higher for an ARM-based datacenter, so it will be interesting to find out at what scale ARM becomes the more cost-effective solution.

I don’t think ARM can compete with x86 on a per-node performance basis, so we’ll likely not see hot workstations or gaming rigs based on ARM any time soon.

Either way, I’m not very informed on this type of stuff, so if I’m way off here, please correct me. :blush:

-AJ


#21

Of course ARM would yield worse performance per square cm of die than x86-64, but the reason Intel has been rushing ARM-like work is that the server market has evolved a lot recently.
Many virtual machines running off the same CPU were almost unheard of a while ago; heavy virtualization is now not only common, it’s about to become dominant even for some tasks that were performance-capped in the past.

Even performance-sensitive things like cloud computing services and such are moving to many-machine paradigms and revising their pricing models.

It’s not unlikely, not as much as it used to be at least, that many arrayed mini-RISC cores might be a considerable chunk of the future, and Intel is actually late on that, although I would say far from too late.


#22

I will not have to build a workstation after all, as my new job will put me on a 2012 12-core Mac Pro. I was a bit meh at first because it’s probably the 2.4GHz model, but after thinking a bit, the turbo boost will take care of that. At 3.x GHz, it should be faster than my current workstation anyway.

Not sure they will let me install windows but 12 cores will be nice!


#23

Ahhhh, so soothing to have a nice level-headed hardware discussion after all the nonsense on the GD and News forums. :wise:

I found this article, which has some nice info on the subject (not a huge find, as it was the second result on Google after searching for arm64).

http://www.realworldtech.com/arm64/

It seems like all the Linux distros have already released versions that should run on arm64, almost 2 years before the hardware hits the market. I guess Windows 8 already runs on the current arm32.

I couldn’t dig up any press releases from Intel saying that they’re working on anything RISC, but I’m sure they’re up to something. Or maybe they’ll fall in line for once and start making ARM chips. :shrug:

-AJ


#24

Many-RISC is ARM’s original philosophy and offering.

Intel wants to compete in that market with their Atom line, which is already successful on the laptop end of things. It’s fundamentally a rehash of x86, and already 64-bit capable (although unused as such on smartphone platforms, it’s available on some laptops).

You can look up Clover Trail for the short-term perspective, and they have other generations planned. For now it’s very mobile-oriented; of course, even if they planned to, they wouldn’t upset the server-world balance by stating they’re looking at Atom-based clusters, but that doesn’t mean they won’t, or aren’t already.

They are still lagging behind, especially compared to Tegra, but Clover Trail is a considerable gap-closer compared to some abysmal previous offerings.


#25

I recently predicted that if the x86 architecture’s decline continued and Intel’s fabs didn’t turn out chips at full capacity over an unusual period of time, Intel would turn into a contract manufacturer like TSMC.

According to rumours, that’s on the table now.
http://appleinsider.com/articles/13/03/07/rumor-apple-and-intel-again-mulling-partnership-to-build-a-series-chips


#26

I would disagree with this… Intel has been working on this for a while now, and we should see some new things happening this year:

http://www.xbitlabs.com/news/cpu/display/20121016235425_Intel_s_Next_Gen_Atom_Avoton_Chip_for_Micro_Servers_to_Feature_Eight_Cores_New_Memory_Controller_Media.html

That article is complete Apple drivel… Intel has worked with other mobile companies to get its mobile cores into production to compete against ARM. We should begin to see that this year, hopefully with Avoton.


#27

Haswell will have no more than 4C/8T, so, basically, the 3930K is THE ONLY investment worth the money right now for a personal workstation with a budget up to 1000-1500€, with no fancy stuff like dual-socket Xeon mobos, Quadros, and Keplers…
An i7-9xx, i7-2600, or i7-3770 is still good enough if there is no budget for a 3930K.

