PDA

View Full Version : Intel promises 80 core CPU in 5 years!


pearson
09-26-2006, 10:23 PM
http://news.com.com/2100-1006_3-6119618.html?part=rss&tag=6119618&subj=news

While I'm excited about this, it seems that multicore is not a direction software makers really foresaw. Not many programs are able to take advantage of multiple cores, and if there is a jump from 1 or 2 cores to 80 cores in just 5 years, will software be able to adapt fast enough?

Szos
09-26-2006, 10:52 PM
Yea, I don't think it is that software makers didn't "foresee" the trend toward multi-core... I think it is more that it was convenient/cheaper/less trouble for them not to bother with it. Multi-processor computers have been around for many, many years now, and multi-core is just the next step beyond multi-processor... programming-wise, my guess is that they are very similar to program for (of course I might be quite wrong on that), so from a power-user's perspective, it is only an excuse that not all programs nowadays are multi-core aware.

Of course it is not just the application makers that need to step up - the OS also needs to be fully aware and take full advantage of all those cores.

This is pretty damn cool news and all, but the software has to catch up to the hardware before any of that power can really be used... or else you are going to have an 80-CPU graph in your Task Manager, with 79 of them at 0% and one maxed out at 100%.

PROVIDE3D
09-26-2006, 11:04 PM
I'm wondering what kind of cooling system they're going to use for that...

Kaostick
09-26-2006, 11:05 PM
I'm wondering what kind of cooling system they're going to use for that...

It will give the real tech junkies an excuse to get their own cooling tower. :bounce:

pearson
09-26-2006, 11:12 PM
Here's a pic of the CPU fan: :D
http://www.cleanfreak.com/Qstore/custom/cf_airmover_large.jpg
(click if pic doesn't load: http://www.cleanfreak.com/Qstore/custom/cf_airmover_large.jpg)

hominid
09-26-2006, 11:12 PM
Programming applications that use concurrency (multiple cores) has its own set of issues. Here's a look into one of Microsoft's research projects on that topic:

http://channel9.msdn.com/Showpost.aspx?postid=231495

Cheers,
Pete

3DDave
09-26-2006, 11:17 PM
I don't think floating-point cores are equivalent to CPU cores, are they?

Tlock
09-26-2006, 11:20 PM
I think licensing structures that are limited by processor count will have to change. Who wants to buy an 80-core system for Vista and only be allowed to use 4 cores? That happened with Windows 2000, which didn't fully utilize 2 CPUs with HT since it saw the HT units as physical rather than logical processors. I'm assuming 80 cores would be considered physical and not logical.

charleyc
09-26-2006, 11:38 PM
Once this technology begins to show up I am sure we will see the software end make the move to deal with it. I am more interested in what ways that many processors could be utilized. Will newer software be set up so that different cores do different tasks - dedicated processors for specific operations, sort of like the PS3's Cell system? But no matter what, think of the render farms of the future. Imagine a couple hundred machines, each with 80 processors.

awnold
09-26-2006, 11:42 PM
I think the key sentence is this: "The growing Internet video phenomenon, as evidenced by the spectacular rise of Web sites like YouTube, will keep these processors busy during intensive tasks like video editing, he said."


Basically, Intel now sees multi-threaded, processor-intensive programs going mainstream, meaning this is where their business model is focused going forward, meaning more power for us at good prices... very nice

pearson
09-27-2006, 01:04 AM
Well, for example, how many cores will Vista support when it comes out? 4 cores? 16 cores? Will they be able to scale to 80 in only 5 years? Look at how long 64-bit has been around, yet XP64 is still slower than just running 32-bit XP. 5 years is not a lot of time when you're talking about developing software.

Will 80 cores really mean 80x the power in Maya or Max's modeling windows? Will Photoshop filters be 80x faster? Like Szos said, it's easier to just program for one core and hope the speed of that core keeps going up. Programming for XX cores is a lot harder.

I bought a dual CPU machine years ago, but so few apps used the 2nd CPU that I've never bought another one since. I hope that doesn't happen with these 80 core chips.

BillB
09-27-2006, 02:06 AM
Hmm, how long since they said we'd have 10GHz CPUs soon? ;)

"Otellini, right, stands with Rackable Systems CEO Tom Barton in front of a 22-inch-tall rack of servers with 80 quad-core "Clovertown" Xeon processors. The chips are scheduled to ship in November."

Now that's something I could use. 360 cores in something waist high! Renderfarm in a box! Suh-weet.

psyop63b
09-27-2006, 02:33 AM
Most applications still do not support multiple processors in ANY form. Even some "production" software such as After Effects only utilizes one processor. GridIron Software (http://www.gridironsoftware.com/) has built an entire business out of addressing this limitation.

Soljarag
09-27-2006, 02:42 AM
sweet! 80 vray rendering buckets would probably fill up the entire screen at one time

Tlock
09-27-2006, 04:06 AM
From a development point of view, multi-core support shouldn't take too long considering MS has already added OpenMP support to Visual Studio 2005. So I think we can expect great things in the future.

Myliobatidae
09-27-2006, 04:21 AM
I think the programs that need to be SMP already are. I really don't need MS Word to use all the cores when I'm typing a letter...

Szos
09-27-2006, 04:39 AM
I think the programs that need to be SMP already are. I really don't need MS Word to use all the cores when I'm typing a letter...
:surprised

See quote below:

Most applications still do not support multiple processors in ANY form. Even some "production" software such as After Effects only utilizes one processor.

Also note - even if a program is not a demanding application, it should still be multi-core/multi-processor aware. If not, it can hog the resources of a single core, which will degrade overall system speed.

TheLostVertex
09-27-2006, 05:56 AM
That's right, why make a car faster and more efficient when you can just buy a bunch of crappy ones bundled together :scream:

I don't see an 80-core CPU happening for mainstream computers unless the hardware manufacturers have their hearts set on it. This will only lead to programming hell, and when tools automatically allow programmers to utilize all resources, we will have a new form of bloatware in all likelihood. Going with the previous joke/example, it will be really sad when they find a way to make Microsoft Word use 80 cores.

Just more marketing hype.

-Steven

Myliobatidae
09-27-2006, 06:23 AM
:surprised

See quote below:



Also note - even if a program is not a demanding application, it should still be multi-core/multi-processor aware. If not, it can hog the resources of a single core, which will degrade overall system speed.

I highly doubt it would even be noticeable, considering it's not noticeable now, and I only have two cores. As for After Effects, if it's not multi-threaded, there are always other choices. The only time I ever feel like I could use 80 cores is at rendering time...

pearson
09-27-2006, 07:44 AM
I don't see an 80 core CPU happening for mainstream computers unless the hardware manufacturers have their hearts set on it.
-Steven

But that's just it: Intel and AMD do want this. It's a lot easier for them to just add more cores rather than more MHz. Users want it because it's sexy and easy to market (more is obviously better), so the problem is going to fall to the software devs to get the most out of the hardware.

I highly doubt it would even be noticeable, considering it's not noticeable now, and I only have two cores.

I don't know. When I had two CPUs you could see one peg with a process while the other one was free, but the whole system would still suffer. I'm sure the situation has changed a bit, but if programs don't know about the other cores, the cost of managing all the traffic falls to the OS.

Lorecanth
09-27-2006, 08:12 AM
Surprised to hear 3D or post guys say anything negative. We've always been the ones best able to take advantage of multiple processors (cores) - i.e. the idea of the render farm is nothing more than this. Multiple cores allow for everything from realtime muscle simulation and better dynamic animation controls for animators, to more sophisticated lighting and shading tech for TDs. The gamers should be complaining, not us.

swampjesus
09-27-2006, 08:46 AM
Well, it all comes back down to system bandwidth and how well the operating system scales up. If someone (Microsoft?) were really wise, they ought to buy SGI (well, if they still have any proficient people left)...

playmesumch00ns
09-27-2006, 10:30 AM
Surprised to hear 3D or post guys say anything negative. We've always been the ones best able to take advantage of multiple processors (cores) - i.e. the idea of the render farm is nothing more than this.

Actually a render farm is a hell of a lot different from having one 80-core cpu. Render-farm boxes don't share address space (memory), so you don't have to worry about synchronisation.

Programming parallel applications is NOT an easy task. Fortunately a lot of what we do in CG is parallelisable. I too wonder about how software companies are going to deal with the licensing issues for many-core processors.

almux
09-27-2006, 12:53 PM
In 5 years the CPUs might be built from these laser/silicon hybrid things, so they would run much cooler anyway.
I don't think Vista can afford to be too far behind, as the current OS X can already use 64 (or is it 128?) CPUs, and the next Leopard probably more than that.
I'm no programmer, but a dev-involved guy told me it would need only a few lines of code at the top of an app to make it multi-core aware.

L.Rawlins
09-27-2006, 01:35 PM
I've read that Intel are dropping the Core Quadro by the fall of '06. I envisage the name will have to change... but still, awesome. :)

P_T
09-27-2006, 02:04 PM
Which one would be easier to code for, GPGPU or this 80-core CPU? Because I would imagine in 5 years' time, GPUs would be so powerful that GPU-accelerated physics, GI, maybe even video encoding would be commonplace.

I read in a magazine somewhere that ATI Radeon X1900XTX can manage around 400 gigaflops of theoretical floating point performance while Intel Core2 Duo X6800 can only manage around 40 gigaflops.

80 cores sounds very impressive, but is it practical? Shouldn't they go for a 128-bit processor instead of cramming a chip with as many cores as possible?

Dennik
09-27-2006, 03:18 PM
Hmmm, something tells me that the power consumption of these will be prohibitive for the home user.

Apoclypse
09-27-2006, 03:30 PM
Well, I don't know how MS is handling the SMP stuff. BeOS has had multithreading for a while and has the best support I've seen so far. It uses as many CPUs as you give it, and all programs made for the OS were encouraged to be multi-threaded from the beginning. That is MS's flaw: they put out a lot of excuses about why things don't work properly. These things don't work properly for them because they haven't encouraged devs to take advantage of multi-threading, which they should have done when they introduced the NT platform. They are still stuck in the Win 9x mentality. I have no idea how Vista was developed, but now is the time to change how devs create apps by forcing them to use a multithreaded model. A lot of these issues arise from MS's obsession with legacy. They can never move forward if they are stuck trying to support legacy applications and coding practices.

Qslugs
09-27-2006, 04:02 PM
Surprised to hear 3D or post guys say anything negative. We've always been the ones best able to take advantage of multiple processors (cores).


I think that we've been taking advantage more of processors than cores (rendering, for instance). I've had three 2-processor systems since the mid '90s; on average I get about a year and a half longer life out of them than single-processor systems. Right now I am contemplating a new system for my wife because its single 1.8GHz processor is dog slow on most apps and painful to work on, whereas my dual Xeon 1.7 is still usable. Both were purchased around the same time in 2002.

Also, multiple cores haven't been around that long (3-4 years), and while they help, I'd rather have the 2 or more processors. My opinion is the OS runs a bit smoother overall with more than one processor present.

Tlock
09-27-2006, 04:45 PM
I think there is a misconception that somehow 64-bit is that much better than 32-bit. Why would most home users even need a 64-bit processor - when will they need values greater than a trillion? 64-bit matters for scientific usage, studying very small and very large quantities like the distances in space, but why the heck does Office need something like that? Multiple cores are the logical next step for home users, not 64-bit; the only major issue with 32-bit is the 4GB memory limit, which was reached a long time ago. I'm not saying that we shouldn't ever leave the 32-bit environment, but for most home users 32-bit is more than enough. I have yet to see a single need for myself to ever use a 64-bit operating system, which has very few benefits other than extended variable sizes, while multiple cores in 32-bit give you much more processing power at reduced energy usage. Not to mention we have almost reached the absolute limit of how small we can make chip architecture, which is a simple fact of physics.

neuromancer1978
09-27-2006, 05:01 PM
...Not to mention we have almost reached the absolute limit of how small we can make chip architecture, which is simple fact of physics.

Yep. That is an issue that cannot be avoided. They make them so small that soon, and I do mean soon, electrons simply would not be stable enough to get from one transistor to another, and thus would short out, causing major problems for software and hardware to talk. At least that is what I have read, but I believe it was worded better.

I think this is one of the reasons that Intel and AMD started to do multi-core in the first place: to allow them a bit more time before the processes are at the smallest size physically possible. Although I would like to see an 80-core chip, I cannot imagine how big that would be, unless they plan on using multiple CPUs too? So I guess that is the next step for CPUs, until science finds a better and faster way to compute.

Sorry for my grammar, I just woke up.

Badllarma
09-27-2006, 05:03 PM
Hmm, how long since they said we'd have 10GHz CPUs soon? ;)

"Otellini, right, stands with Rackable Systems CEO Tom Barton in front of a 22-inch-tall rack of servers with 80 quad-core "Clovertown" Xeon processors. The chips are scheduled to ship in November."

Now that's something I could use. 360 cores in something waist high! Renderfarm in a box! Suh-weet.

LOL, we run 150-processor machines at work and they're the size of a large wardrobe. The cooling pipe is such a diameter that you could put your head up it ;-).

Getting double THAT power on my desktop would rock!

jewalker
09-27-2006, 05:32 PM
There are several potential applications that could push users towards multiple cores AND 64-bit systems. The main ones I'm thinking of are photo and video editing. If HD camcorders start becoming affordable to mainstream consumers, the systems required to edit those streams will need LARGE amounts of RAM, huge system buses, and multiple cores to be able to process all of that data. These applications are some of the best at taking advantage of multiple cores.

Now, an 80 core CPU will most likely be used on servers that have to deal with thousands of connections and processes. Imagine running several large, complicated databases that handle thousands of queries a minute. Increase the network bandwidth to transmit the data, use multi-headed disk drives to speed up access of the data, and increase the number of processors to handle the queries.

Szos
09-27-2006, 06:17 PM
Well, I don't know how MS is handling the SMP stuff. BeOS has had multithreading for a while and has the best support I've seen so far. It uses as many CPUs as you give it, and all programs made for the OS were encouraged to be multi-threaded from the beginning. That is MS's flaw: they put out a lot of excuses about why things don't work properly. These things don't work properly for them because they haven't encouraged devs to take advantage of multi-threading, which they should have done when they introduced the NT platform. They are still stuck in the Win 9x mentality. I have no idea how Vista was developed, but now is the time to change how devs create apps by forcing them to use a multithreaded model. A lot of these issues arise from MS's obsession with legacy. They can never move forward if they are stuck trying to support legacy applications and coding practices.

That is MS's problem for just about anything. Crappy drivers can be blamed on the same excuse: MS doesn't demand tight, error-free code when drivers are written, so what do the hardware makers do? Write code that is just "good enough" to pass - and if there is a conflict with some other piece of hardware, well, too bad.

For controlling such a HUGE chunk of the PC market, it is quite surprising how little MS forces Windows developers to write good code. They set guidelines and such (and I believe even some kind of silly "testing" procedure), but MS should really push software makers to support multiple cores and stuff like that for the good of the industry.

playmesumch00ns
09-27-2006, 06:29 PM
Yep. That is an issue that cannot be avoided. They make them so small that soon, and I do mean soon, electrons simply would not be stable enough to get from one transistor to another, and thus would short out, causing major problems for software and hardware to talk. At least that is what I have read, but I believe it was worded better.

As I understand it, once you get to a certain size, quantum tunneling starts happening, i.e. electrons literally jump between pathways on the chip. This manifests itself as noise in the signal. To fix it you have to boost the signal, i.e. raise the power on the chip, which means the chips run hotter and consume more power. I would imagine there is also a point at which you just can't keep the electrons running down certain paths at all.

mdee
09-27-2006, 06:46 PM
It will give the real tech junkies an excuse to get their own cooling tower. :bounce:
Both mr and renderman are multi-threaded apps, that's all I care about ;) I am just afraid that they will start charging licence fees per core really soon.

pearson
09-27-2006, 07:44 PM
Both mr and renderman are multi-threaded apps, that's all I care about ;) I am just afraid that they will start charging licence fees per core really soon.

Well, realistically, if you had to pay for 20 licenses just to use the common, industry-standard 20-core chips (if/when that time comes), the licenses would have to drop in price dramatically.

To me, it's like trying to charge a license fee based on how many Ghz your CPU is. The end result is that the box holding your computer increased in power over time. Whether that power comes from more cores or faster speeds should be irrelevant, imho.

On the topic of miniaturization, I think it's funny that, barring a breakthrough, chips will have to get bigger and hotter. Perhaps we'll get to the point where a top of the line 3D workstation is as big as a Challenge or a Cray. :p

Qslugs
09-27-2006, 08:05 PM
On the topic of miniaturization, I think it's funny that, barring a breakthrough, chips will have to get bigger and hotter. Perhaps we'll get to the point where a top of the line 3D workstation is as big as a Challenge or a Cray. :p

So what youre saying is 3d workstations would actually revert back to their initial size? :)

Syndicate
09-28-2006, 12:45 AM
So we should technically have 15 cores next year right?

BillB
09-28-2006, 02:42 AM
So we should technically have 15 cores next year right?

No, it's not a linear progression - it'd go 5, 10, 20, 40, 80 over five years (roughly speaking; I know Moore's law is every 18 months).

Wonder when HD manufacturers will twig to the idea of doing RAID 5 internally to get some good speed increases? Sacrifice 1 of 3 platters, but the increased speed and reliability would be great. Hmm, where'd I put that patent application...

Surprised no one challenged the "After Effects isn't threaded" claim - to my knowledge it's been threaded pretty much since I started using it at version 3.3 or 4 or something, on dual Celerons.

Jadetiger
09-28-2006, 02:57 AM
80 cores... Somehow I see our future computers having airbags for when they explode.

pearson
09-28-2006, 04:20 AM
I wonder how they got to 80. They are at 4 now, with 8 announced. Don't most computer things double each time, so they should go 2, 4, 8, 16, 32, 64, 128...? :shrug:

parallax
09-28-2006, 11:53 AM
Surprised no one challenged the "After Effects isn't threaded" claim - to my knowledge it's been threaded pretty much since I started using it at version 3.3 or 4 or something, on dual Celerons.


Yes, it is threaded, but it is not 'properly' threaded. GridIron Nucleo is a plug-in that optimizes AE performance by running multiple instances of AE in the background (at least that is what the Task Manager tells me). Apart from that, I think applications such as After Effects are way too poorly written to take advantage of multiple cores (enter GridIron).
In a multiple-layer environment doing e.g. motion graphics or compositing, disk speed isn't that important on a single-frame basis (as far as I can imagine).
For instance, say you've got 4 layers of HD video, multiple graphics, adjustment layers running blurs, displacements, e.g. Magic Bullet, and whatnot. That would take a lot of CPU power, but AFAIK it is processed as a whole. Instead, maybe it should assign specific cores to specific plug-ins, and specific cores to decoding video streams. After Effects should handle CPU power in a more modular way, I imagine.

bitBrain
09-28-2006, 12:40 PM
I wonder how they got to 80. They are at 4 now, with 8 announced. Don't most computer things double each time, so they should go 2, 4, 8, 16, 32, 64, 128...? :shrug:
That's not the case with multicore CPUs. You could have 3-core, 5-core, 7-core CPUs without problem; it's just a matter of the manufacturer setting the next step in performance according to what the current state of technology can cram onto one chip. I doubt we'll see a direct jump from 40- to 80-core, for example.

If we're following Moore's Law, in 5 years we'd have just over a triple doubling; that is, 8 times and then some - like 10 times. If 8-core CPUs were here now, 80 would be correct, but since we are at 4 now, 40 would be a safer estimate. Still good :)

Tlock
09-28-2006, 06:25 PM
BillB: Wonder when HD manufacturers will twig to the idea of doing RAID 5 internally to get some good speed increases? Sacrifice 1 of 3 platters, but the increased speed and reliability would be great. Hmm, where'd I put that patent application...

This has already been done. Multiple cores have been around for ages, just in the mainframe world. Those IBM mainframes have redundancy for EVERY SINGLE device attached, and that includes cores/CPUs/HDs; you name it, there's at least 2 of them.

One day our PCs will be almost exactly like older mainframes. Just like HD RAID started in mainframes and is now commonplace on PCs.

CGTalk Moderation
09-28-2006, 06:25 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.