Top Secret Intel Processor Plans Uncovered (quadcore and octacore)


js33
12-04-2005, 07:58 AM
Intel is ramping up the next generation of chips. They are revealing plans for a quad-core in early 2007 and an eight-core in 2008. Next year we will see dual-core mobile chips (which will more than likely be in the first Intel Mac laptops), dual-core desktops, and, in all, 21 new processors based on 65nm and 45nm.

Link (http://www.tomshardware.com/cpu/20051203/index.html)


Cheers,
JS

Sonk
12-04-2005, 08:04 AM
Intel is ramping up the next generation of chips. They are revealing plans for a quad-core in early 2007 and an eight-core in 2008. Next year we will see dual-core mobile chips (which will more than likely be in the first Intel Mac laptops), dual-core desktops, and, in all, 21 new processors based on 65nm and 45nm.

Link (http://www.tomshardware.com/cpu/20051203/index.html)


Cheers,
JS

8 cores? Sounds good. Hopefully this competition between Intel and AMD will drive prices down... I'd better start saving up for an 8-core CPU :D

Para
12-04-2005, 08:14 AM
Soo... if I have an octacore processor on a dual-processor motherboard, I have 16 cores in total in my system, which means that some programs will demand 16 rendering licenses if I ever want to use my processors at full power.

...meh :) This could be one of those things that will change the industry in terms of pricing in 2007. We'll see...

lukx
12-04-2005, 08:17 AM
If Intel has secrets, I guess AMD does too... so let's wait and see what happens. I'm an old Intel user, but my next PC will for sure be AMD based.

js33
12-04-2005, 08:26 AM
Soo... if I have an octacore processor on a dual-processor motherboard, I have 16 cores in total in my system, which means that some programs will demand 16 rendering licenses if I ever want to use my processors at full power.

...meh :) This could be one of those things that will change the industry in terms of pricing in 2007. We'll see...

Well, I would rather have the 16 cores and find some other software with a more reasonable licensing scheme. :) I'm sure that with the advent of multicore processors, software will have to adapt. I think a license should apply to the machine rather than to the processors. Also, I wonder what version of Windows, or OS X, we will need to run on a 16-core machine?
Cheers,
JS

js33
12-04-2005, 08:30 AM
If Intel has secrets, I guess AMD does too... so let's wait and see what happens. I'm an old Intel user, but my next PC will for sure be AMD based.

Yeah, AMD has the lead at the moment with the Opteron and the X2, but if you read the article it sounds like Intel is serious about surpassing AMD soon. AMD is still on 90nm and having trouble going to 65nm, while Intel already has a quad core taped out at 65nm and is already working on 45nm chips. :)

Also, the article doesn't go into much, if any, detail about which chips will be 64-bit and which won't. I imagine most will be 64-bit, but I think the first dual-core mobile processors are still 32-bit.

Cheers,
JS

Sonk
12-04-2005, 08:46 AM
Well, I would rather have the 16 cores and find some other software with a more reasonable licensing scheme. :) I'm sure that with the advent of multicore processors, software will have to adapt. I think a license should apply to the machine rather than to the processors. Also, I wonder what version of Windows, or OS X, we will need to run on a 16-core machine?
Cheers,
JS

Charging per core is just wrong IMO; they should charge per CPU. Modo's renderer should make good use of all 16 cores with its buckets :D. Maybe by then we'll get 1-second renders :)

Is Vista even capable of supporting that many cores?

The article was a nice read, but it didn't mention whether the new chips are 32-bit or 64-bit.
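(Purely as an illustration of the bucket idea above, and definitely not Modo's actual code: a toy C++ dispatcher that hands image tiles to one worker thread per core might look roughly like this. The tile size and the renderBucket() stub are made-up placeholders.)

```cpp
// Toy bucket dispatcher: one worker thread per hardware core, each worker
// pulls the next free tile off a shared counter. Illustrative only.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Bucket { int x, y, w, h; };

void renderBucket(const Bucket& b) {
    // Stand-in for the real per-tile shading/ray casting work.
    std::printf("rendered %dx%d tile at (%d,%d)\n", b.w, b.h, b.x, b.y);
}

int main() {
    const int imageW = 640, imageH = 480, tile = 64;
    std::vector<Bucket> buckets;
    for (int y = 0; y < imageH; y += tile)
        for (int x = 0; x < imageW; x += tile)
            buckets.push_back({x, y, std::min(tile, imageW - x), std::min(tile, imageH - y)});

    std::atomic<std::size_t> next{0};  // index of the next unrendered bucket
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < cores; ++i)
        workers.emplace_back([&] {
            for (std::size_t b; (b = next.fetch_add(1)) < buckets.size(); )
                renderBucket(buckets[b]);  // each core grabs whatever tile is next
        });
    for (auto& w : workers) w.join();
}
```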

Beamtracer
12-04-2005, 09:32 AM
8 core? It's like shavers, isn't it? When the 8 core computer comes out, Schick will probably release an 8 blade shaver to match.


Yeah, AMD has the lead at the moment with the Opteron and the X2, but if you read the article it sounds like Intel is serious about surpassing AMD soon.

AMD has the lead over Intel for processors suitable for 3D rendering. AMD's are faster. I think it will stay that way for some time.

Intel won't be the only one making multi-core processors in that timeline. Intel still needs to get its act together for 64-bit processors, as AMD's are superior in that regard.

Para
12-04-2005, 11:20 AM
Is Vista even capable of supporting that many cores?

I remember reading somewhere that Vista supports up to 16,384 processors (each core is handled as a processor in the Windows environment). I could be wrong, though, since it was a long time ago that I read that article.
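(As a side note, you can already ask the machine how many logical processors it exposes; here is a minimal, OS-agnostic C++ sketch, nothing Vista-specific about it.)

```cpp
#include <iostream>
#include <thread>

int main() {
    // Every core, and every HyperThreading sibling, shows up as one logical processor.
    unsigned n = std::thread::hardware_concurrency();  // may return 0 if the count is unknown
    std::cout << "Logical processors visible to the OS: " << n << '\n';
    return 0;
}
```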

Lone Deranger
12-04-2005, 12:10 PM
I remember reading somewhere that Vista supports up to 16,384 processors (each core is handled as a processor in the Windows environment). I could be wrong, though, since it was a long time ago that I read that article.

That should make for an interesting-looking Task Manager. :D

Seriously though... I hope that with the advent of all these multi-core setups, software developers will put a bit more effort into getting their products to multi-thread to the max. Even though most (if not all) commercial rendering solutions are multi-threaded, there are still large areas in the DCC pipeline that remain solely single-threaded.

spacefrog
12-04-2005, 03:25 PM
AMD did reveal its roadmap in November:

AMD Roadmap (http://www.amdcompare.com/techoutlook/)

I guess the "steeper" Intel roadmap is just a PR strike-back for the recent AMD X2 speed victory.

nubian
12-04-2005, 05:40 PM
oh man this is going to get ugly.
i'm glad i'm in the middle of it.
this is so exciting.

Shaderhacker
12-04-2005, 07:08 PM
Finally we might see realtime ray-tracing soon!! :buttrock:

I was so excited about the SIGGRAPH paper this year concerning realtime ray-tracing. They talked about how they could probably bring the number down from 20 or so procs to 8 or so. This means that we should be approaching this very soon, within the next 3-4 years.

For those of you who know Mental Ray, you'd better keep your skills up. ;)

-M

Para
12-04-2005, 07:38 PM
Finally we might see realtime ray-tracing soon!! :buttrock:

I was so excited about the SIGGRAPH paper this year concerning realtime ray-tracing. They talked about how they could probably bring the number down from 20 or so procs to 8 or so. This means that we should be approaching this very soon, within the next 3-4 years.

For those of you who know Mental Ray, you'd better keep your skills up. ;)

-M

Realtime ray-tracing is ooooooold (or in other words, it's already been done several times). You should be a bit more precise, like "realtime raytraced radiosity with reflective, refractive and SSS surfaces" :)

js33
12-04-2005, 07:55 PM
How about rendering at the speed of thought? Well, that may be a bit optimistic, but it will be close. :thumbsup:

Cheers,
JS

Hazdaz
12-04-2005, 08:15 PM
Interesting.

I don't know if anyone has already mentioned one of the biggest speedbumps to these multi-core chips.... having software that fully takes advantage of all those cores. Until that happens, it's not gonna be as great as it sounds.

I mean hell, we have had dual-processor PCs for ages now (I am typing this on one right now), but even today I look at my task bar and still see many, many times where a program is taking up nearly 100% of ONE processor while leaving the other at nearly 0%. It would be a lot faster/better if the program was SMP aware and could also feed that 2nd chip with data too.... OR, to make the overall PC run smoother, use about 50% of both chips instead of 100% of one.
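(A minimal sketch of what "SMP aware" means in practice, using a made-up workload: split the same job in half and give each half its own thread, so both chips stay busy instead of one idling at 0%.)

```cpp
// Sum a large array on two threads instead of pegging a single CPU at 100%.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(10000000, 1.0);   // stand-in for any divisible workload
    double sumA = 0.0, sumB = 0.0;
    auto mid = data.begin() + data.size() / 2;

    std::thread first([&]  { sumA = std::accumulate(data.begin(), mid, 0.0); });
    std::thread second([&] { sumB = std::accumulate(mid, data.end(), 0.0); });
    first.join();
    second.join();

    std::cout << "total = " << (sumA + sumB) << '\n';
    return 0;
}
```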

CENOBITE
12-04-2005, 08:23 PM
<<< What Hazdaz said. It's all about having the programs optimized to use the silicon to its fullest. This is one area where it will be interesting to see how programmers will use all of the Xbox 360's cores, or the PS3's Cell processor, to their fullest. In a closed system it is possible... however with PCs, well, we will have to see.

js33
12-04-2005, 08:33 PM
Seems like it should be the OS's job to direct traffic and fully utilize all the cores, rather than leaving it up to each individual program. It will be interesting to see who makes better use of all the cores, since OS X and Windows will soon be running on the same processors.

Cheers,
JS

Srek
12-04-2005, 08:46 PM
Until now the main progress in computing speed came from increasing the speed of a single CPU/core. Software development followed this, and until now there was no real need for most applications to optimize for multiple processors. The fact is that the vast majority of software algorithms are single-threaded and cannot easily be adapted to multithreading.
Let's face it, 3D rendering apps are among the best multithreaded apps there are currently (maybe only rivaled by databases and very specific simulation software). Chances are it will take software technology a long time to adjust to this change in paradigms. What is currently missing is a new and stable foundation in development tools. Even if current compilers are able to produce multithreaded code, programmers still have way too much hassle creating and especially debugging it.
I think once tools like OpenMP become mainstream we will see a nice increase in the efficient use of multi-core CPUs. Chances are, though, that this will take some time, especially since not only the tools need to be ready; the developers need to adjust to this too.

Cheers
Björn
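(To illustrate the OpenMP direction Björn mentions, here is a minimal sketch with a made-up per-pixel function: a single pragma spreads an independent loop across all available cores, which is the kind of low-hassle tooling the post is hoping for.)

```cpp
// Build with OpenMP enabled (e.g. g++ -fopenmp). Without the pragma this is
// plain serial code; with it, the rows are divided among all available cores.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int width = 1920, height = 1080;
    std::vector<float> image(width * height);

    #pragma omp parallel for
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = std::sin(x * 0.01f) * std::cos(y * 0.01f);  // stand-in for real shading

    std::printf("done, first pixel = %f\n", image[0]);
    return 0;
}
```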

Elekko
12-04-2005, 08:54 PM
2, 4, 8, 16 cores... it accelerates! By the way, when will optical chips take over?

Bliz
12-05-2005, 12:16 PM
I think all this multicore chip evolution will make systems very expensive. Just think: every core ideally needs as much RAM as you would give a single-proc machine. And in production that's 2 GB for each 32-bit proc. And then if you want to fully utilise 64-bit computing... well, I think some new motherboards are going to have to be developed that cater for 128 GB of RAM and above.

Canadianboy
12-05-2005, 12:32 PM
8 core? It's like shavers, isn't it? When the 8 core computer comes out, Schick will probably release an 8 blade shaver to match.


I always joke about that too... soon we'll see a Schick commercial introducing the new Schick Octo: 8 blades to get the closest shave and take off 3 millimeters of your skin, revealing a fresh new layer hahah

Apoclypse
12-05-2005, 12:41 PM
I think Intel is trying too hard. They should worry about getting their 64-bit tech up to snuff first. But Intel's strategy always seems to default to "more is better." That is not always true, as AMD has shown: you can do more with less. First fix the inherent problems with the chips you have out now, which are getting trounced even by single-core Opterons, then start adding to them. That is why the X2 and the Opterons are as good as they are: they are building on a solid foundation. If a single-core Opteron can trounce a dual-core Xeon in some tests, or even match its scores, then there is something clearly wrong with the foundation you are building these cores on.

Shaderhacker
12-05-2005, 02:46 PM
Realtime ray-tracing is ooooooold (or in other words, it's already been done several times). You should be a bit more precise, like "realtime raytraced radiosity with reflective, refractive and SSS surfaces" :)

Realtime ray-tracing is old, but it was limited to businesses and wasn't mainstream. With the number of cores multiplying in average home computers, we can hope to start seeing applied realtime ray-tracing at home in common 3D software packages like Maya.

-M

Shaderhacker
12-05-2005, 02:51 PM
If a single-core Opteron can trounce a dual-core Xeon in some tests, or even match its scores, then there is something clearly wrong with the foundation you are building these cores on.

Most of the tests I've seen where the single-core Opteron beat out the dual-core Xeons are in apps that don't really utilize multiple cores. In almost all tests where the dual cores have been used, they've outrun their single-core counterparts.

-M

hanskloss
12-05-2005, 03:14 PM
It's pretty funny, everyone getting excited and all, but no one has stopped for a second to think about... cooling. Can you even begin to imagine what kind of cooling systems these chips are going to need? I think Intel is ridiculous. They can't even get their 64-bit chips to work right and price them competitively, yet they are thinking 8 cores?? They are becoming much like General Motors: the bigger the better, and it really isn't the right way to go. I went AMD over a year ago and haven't looked back at Intel, and I don't plan to again. Waste of money.

littlebluerobots
12-05-2005, 03:25 PM
Can anyone say HT????? Till Intel owns up to its bullshit, I'll stick with AMD....

DrFx
12-05-2005, 03:56 PM
The Sony marketing strategy of spreading tech rumours more than one year ahead of launch has gained a new adept... :deal:

Cronholio
12-05-2005, 04:06 PM
It's pretty funny, everyone getting excited and all, but no one has stopped for a second to think about... cooling. Can you even begin to imagine what kind of cooling systems these chips are going to need? I think Intel is ridiculous. They can't even get their 64-bit chips to work right and price them competitively, yet they are thinking 8 cores?? They are becoming much like General Motors: the bigger the better, and it really isn't the right way to go. I went AMD over a year ago and haven't looked back at Intel, and I don't plan to again. Waste of money.

That's the point of the change in architecture and the shrinking die: these chips are going to run cooler and more efficiently. You can already see the proof of concept in the Pentium M. These new CPUs should be really good. It really looks to me like AMD is going to have a tough road ahead of them. They are on top right now, but they are apparently having trouble shrinking their die, and they haven't made any improvements at all as far as core speed is concerned in well over a year (as long a time as Intel, if maybe a month or two longer). It looks like both AMD and Intel have hit the same wall, and all we have to look forward to is more and more low-powered cores on a single chip. This time next year, Intel will likely once again be the price and performance leader, just watch.

Hazdaz
12-05-2005, 04:11 PM
Like was mentioned above about cooling... also don't forget that these chips are aimed at the 45nm process, which would (or at least should) make them MUCH cooler than today's chips, as well as make them use less electricity (good for laptops).

Tlock
12-08-2005, 06:48 PM
What will be the point of sending data to the video card for processing when you have so many cores that could work in conjunction with a single pool of motherboard memory? Unless video card manufacturers get their butts into gear, the need for advanced video card coding is dead. Now you can have one core running integer math, another floating point, and the others for whatever else you need. All the video card needs to do is present video data on large screens.

Example: have two cores generate the render and have a third send data to the video card as soon as it is ready. Tell me that is not better than a single video card processor trying to render as well as present data.

Hazdaz
12-08-2005, 08:04 PM
What will be the point of sending data to the video card for processing when you have so many cores that could work in conjunction with a single pool of motherboard memory? Unless video card manufacturers get their butts into gear, the need for advanced video card coding is dead. Now you can have one core running integer math, another floating point, and the others for whatever else you need. All the video card needs to do is present video data on large screens.

Example: have two cores generate the render and have a third send data to the video card as soon as it is ready. Tell me that is not better than a single video card processor trying to render as well as present data.

Sounds like we've heard this logic many, many times in the past, and every time it is eventually proven wrong. The CPU (multi-core or not) does a fantastic job at doing work - general work - that a CPU is designed for... It is designed to do everything from running Word to sending email to surfing for p0rn to playing music.

BUT that doesn't mean it is suited to processing graphics information. Graphics chips are specifically designed to do that function and they also do their particular job very well.

Even Microsoft themselves agree with this, by having the next version of Windows (Vista) use the GPU to offload some of the regular windowing/GUI work from the CPU (and the next GUI will be vector-based).

I actually predict that some cheaper PC makers will eventually try to save money on low-end machines by using the extra cores on the CPU to process the graphics info (so you don't need a 'real' graphics card), but that has proven not to be an optimal solution in the past and I don't see it being a good solution in the future.

AA_Tyrael
12-08-2005, 09:04 PM
Well, it would be nice if we could use a software renderer to help the GPU in case we've got multiple cores :-)

Para
12-09-2005, 06:38 AM
What will be the point of sending data to the video card for processing when you have so many cores that could work in conjunction with a single pool of motherboard memory? Unless video card manufacturers get their butts into gear, the need for advanced video card coding is dead. Now you can have one core running integer math, another floating point, and the others for whatever else you need. All the video card needs to do is present video data on large screens.

Modern GPUs were octacore chips already a couple of years ago, so what you're saying has already happened. The multicore scheme was adapted from GPUs to CPUs.

Tlock
12-13-2005, 06:10 PM
It's true, topics in computers are very cyclical. But the question is valid, since I don't need 8 cores to run Word (which I could do with a 200 MHz Pentium), and the most demanding aspect of an OS is graphics. Plus, even though GPUs are designed more for graphics, you have much more limited access to resources and will always rely on the graphics manufacturers to add features. Microsoft and Intel have a plan up their sleeves with this, to somehow cripple OpenGL (guessing). A little history between Intel and Microsoft: Intel wanted to add very specific multimedia features to their chips, which Microsoft was very concerned about and tried to use their influence to put a full stop to.

At present, and for at least the next couple of years, the GPU will be very relevant, but I think it will play a lesser role in the future, taking away all the advantages that OpenGL will have. Think about it: if OpenGL were gone, who would be king in the graphics world, which has always been a sore spot for Microsoft? One move, two markets: gaming and graphics.

Hazdaz
12-13-2005, 06:45 PM
^^^
I think this is the best way that I can put this.

This:
http://a332.g.akamai.net/f/332/936/12h/www.edmunds.com//pictures/VEHICLE/2003/Dodge/100076301/003240-E.jpg

And

This:
http://a332.g.akamai.net/f/332/936/12h/www.edmunds.com//pictures/VEHICLE/2006/Dodge/100645333/20028848-E.jpg

Are both extremely powerful vehicles (that actually share the same engine)... but while one of them is optimized to haul cargo, the other is optimized to haul ass. Even an idling CPU is not the best way to run demanding graphics duties. You have to contend with bandwidth and memory access (and even with new PCs that would still be an issue). The way new graphics cards handle this situation right now seems to be the optimal solution - they run on their own bus (sort of), have their own memory and have their own processor. It does seem wasteful not to have unified memory, but the added cost seems to be counteracted by the added speed.

Also, don't forget the fact that while CPU development is still advancing, it isn't advancing nearly as quickly as GPUs are. It's easier and cheaper to drop in a new GPU (in the form of a new video card) every year than to have to upgrade your entire CPU.




CupOWonton
12-13-2005, 06:45 PM
You know, when you try to process sound on the CPU rather than through a good sound card, you pick up CPU noise. This is because CPUs just aren't good for audio work, because they aren't specifically designed for it.
Video cards are specifically designed to do the main screen rendering.
I think, if anything, GPUs will continue to expand alongside CPUs, keeping video, audio, and general computing in their own respective areas on a computer so as not to cause serious problems. At some point maybe there will be one super-brain processor that computes everything in millions of tiny sub-processor nodes, but until then we're gonna be stuck with a video card, an audio card, and a CPU.
And somewhere in the near future, there should be a PHYSICS card to take care of a lot of motion computing.

Apollux
12-13-2005, 07:02 PM
It's true, topics in computers are very cyclical. But the question is valid, since I don't need 8 cores to run Word (which I could do with a 200 MHz Pentium), and the most demanding aspect of an OS is graphics. Plus, even though GPUs are designed more for graphics, you have much more limited access to resources and will always rely on the graphics manufacturers to add features. Microsoft and Intel have a plan up their sleeves with this, to somehow cripple OpenGL (guessing). A little history between Intel and Microsoft: Intel wanted to add very specific multimedia features to their chips, which Microsoft was very concerned about and tried to use their influence to put a full stop to.

At present, and for at least the next couple of years, the GPU will be very relevant, but I think it will play a lesser role in the future, taking away all the advantages that OpenGL will have. Think about it: if OpenGL were gone, who would be king in the graphics world, which has always been a sore spot for Microsoft? One move, two markets: gaming and graphics.

I think that if they really try hard they could take OpenGL out of your average desktop machine running on a Wintel platform (and it remains to be seen whether Intel will still be the average consumer's choice, because something tells me it won't).

But for the high-end CG market, OpenGL has a stronghold (think of any studio renderfarm)... and there neither Microsoft nor Intel are serious players. There, Linux and AMD are both (and are likely to continue being) the kings of the hill.

Tlock
12-13-2005, 07:17 PM
I think you both actually reinforced my point. When the first CPUs came out they were nothing more than integer pushers. Then along came the floating point unit, which, for those who remember, was VERY VERY slow and separate. The next evolution was to add it directly to the CPU, which resulted in a massive explosion of Intel chips being used for more and more scientific-type calculations. In the end, the floating point units within the CPU run faster than the integer units. So this discussion is not about whether the GPU is better than a CPU at graphics-specific functions, but rather what the relationship between the core CPU and the GPU should be. I think they should consider adding a GPU core to the stack, or somehow merging the memory. The reduced memory overhead alone would produce LIGHTNING-fast performance compared to today's standards. If separate processing units were so amazing, what benefit would there have been to adding the FPU?

Car = CPU
Truck = FPU
Sports Car = GPU
SUV = CPU + FPU
? = CPU + FPU + GPU

Don't get me wrong, I know where you guys are coming from, but it is something to consider.

The Benefits
- Shared memory
- Faster buses
- No more X different versions of OpenGL or other graphics libraries
- More standardized performance
- Standard instruction sets
- Cheaper overall cost, since a single board is required
- and much, much more

The Cons
- Better competition, MAYBE (look at the AMD and Intel fight over FPU performance)
- More flexible upgradability - primarily needed since Microsoft has no control over the DirectX or OpenGL implementations on the card. At present GPU manufacturers need to support at least GDI, GDI+, DirectX (all versions) and OpenGL (all versions), and this will only get worse.
- Feel free to add more; I would be interested to see what people think the other pros and cons would be.

Hazdaz
12-13-2005, 07:33 PM
You are going to end up making the CPU an insanely complex and expensive part (and it already is complex and expensive). I just don't see the point in this when time and time again the computer industry has proven that integrating graphics functions onto the CPU is just not a good idea.

I can see where you are coming from in thinking that "oh, there are going to be soooo many extra CPU cycles with these multi-core chips"... but time and again we have found a way to use up all those cycles.

But like I mentioned in a previous post, I fully expect there to eventually be low-end PCs that use these multi-core CPUs and run "virtual" graphics cards and audio cards on them to save money. There have been "virtual" modems out for ages, so this would be a natural next step.

Tlock
12-13-2005, 08:00 PM
Well, at least from the cost point of view, they would be able to easily add part of the cost of a modern video card to the cost of the CPU.

Maybe the trick should be to make motherboards more modular than ever by separating the CPU, FPU, VPU, and GPU, but have a shared memory environment that they all work from. Oh yeah, maybe throw in a port for fully programmable chips to add more efficient hardware implementations of specific tasks. This way it would also allow other manufacturers to get into the hardware game. Oh, I guess that will never happen.......

JosephGoss
12-14-2005, 09:37 AM
Eight cores, more cores, and then along come quantum computers in about 10 years' time, and that will be cool, right?

I wouldn't mind a computer a million times more powerful than my current computer.

Just a thought: Intel designs a new 32-core CPU and everyone says wow! Then IBM makes a quantum computer and Intel is out of business because they won't change; they will stick with 64-bit (lol, or even 32-bit) for the next 30 years.
Of course AMD will embrace quantum computers.

lol.

Para
12-14-2005, 10:30 AM
I've been dreaming of having some sort of "generic nanomass" for CPUs, which could be expanded by pouring more mass into the mold and letting the system reconfigure itself for efficiency. Of course there would be small subsets in that mass, like data-transfer nanites to link everything, floating-point math nanites for math, memory nanites, etc.

That would also make upgrading easier: you could literally pour a litre of math nanites into your computer and you'd immediately gain thousands of googolhertz of extra calculating power! ;)

CGTalk Moderation
12-14-2005, 10:30 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.