
View Full Version : Industry gears up for 'two-headed' chips


RobertoOrtiz
12-21-2004, 05:41 PM
QUOTE:
"...Though the tiny switches built in silicon are the heart of the digital revolution, they can't shrink forever. And in recent years, chip companies have struggled to keep a lid on power and heat the result of some transistor components getting as thin as a few atoms across.

Now, the world's leading semiconductor companies have unveiled a remarkably similar strategy for working around the problem: In 2005, microprocessors sold for personal computers will sprout what amounts to two heads each.


Instead of building processors with a single core to handle calculations, designers will place two or more computing engines on a single chip. They won't run as fast as single-engine models, but they won't require as much power, either, and will be able to handle more work at..."

>>Link<< (http://story.news.yahoo.com/news?tmpl=story&cid=562&ncid=738&e=2&u=/ap/20041221/ap_on_hi_te/two_headed_chips)
-R

rendermania
12-22-2004, 08:18 AM
Took them long enough to figure that one out, lol. :D What about various people trying to design three-dimensional chips/circuitry instead of today's flat, single-layer ones? Linky:

http://www.extremetech.com/article2/0,1558,5090,00.asp
http://news.com.com/Start-up+has+feel+for+3D+chips/2100-1040_3-276609.html

Thalaxis
12-22-2004, 06:52 PM
It couldn't have taken them as long as people think, because processor projects have around a 3-5 year timetable. AMD disclosed their dual-core plans back in 2001, for example. And the Montecito project (which started under a different code name) was intended to be a dual-core product when development started back in 2000... and intended for launch in 2005.

Those are just a couple of examples... the point is that the transition started a long time ago.

Scott Harris
12-22-2004, 08:25 PM
forget all of that. it's 2005... where the hell are my flying cars?!?

DePingus
12-22-2004, 10:48 PM
forget all of that. it's 2005... where the hell are my flying cars?!?
Right here!
http://www.moller.com/skycar/

Zeruel the 14th
12-23-2004, 04:38 AM
What will this transition mean for our games? Will Doom3, for example, run slower on a brand new dual-core chip because it doesn't take advantage of it? Maybe these new chips will have features that will make the transition easier, i.e. so the current crop / early upcoming games don't run slower.

dotTom
12-23-2004, 05:14 AM
I've been waiting for the general press to catch on to the significance of the move to multi-core CPUs. It's been my experience in over 11 years of interviewing developers that honestly less than 1% of developers can be trusted to write multi-threaded code.

The end of the "for free" year-on-year performance increase we've enjoyed so far for single-threaded code is hugely significant. Even if you can thread safely, not all algorithms are suitable for parallelization.

For the vast majority of home-office users today's machines truly are fast enough. The rabid gamers will probably be OK since game devs tend to have more experience writing parallel algorithms, esp. if they've worked on consoles (or written against a GPU for that matter). Doubtless we'll see dozens of "Teach yourself threading in 21 days" <shudder> books. In years of writing threading code I'm still learning new stuff (i.e. the true semantics of volatile in C++ and the good old double-checked locking pattern etc). To paraphrase the comment about quantum mechanics, "If you're not afraid of multi-threading you haven't understood it", or rather you haven't had to debug it on a 4-way system.
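To make the point concrete, here is a minimal C++ sketch of the double-checked locking idiom mentioned above (the struct name and the use of pthreads are just illustrative assumptions), with comments on why the naive form is unsafe on multiprocessor systems:

#include <pthread.h>

struct Config { int value; };   // hypothetical singleton payload

static Config*         g_instance = 0;
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;

Config* getInstance()
{
    // First check without taking the lock -- the "double-checked" part.
    if (g_instance == 0) {
        pthread_mutex_lock(&g_lock);
        if (g_instance == 0) {
            // DANGER: on a multiprocessor the compiler or CPU may reorder
            // the stores inside 'new Config', so another thread can observe
            // a non-null pointer to a half-constructed object.  Plain C++
            // 'volatile' does NOT prevent this; a real memory barrier (or,
            // later, C++11 atomics / std::call_once) is required.
            g_instance = new Config();
        }
        pthread_mutex_unlock(&g_lock);
    }
    return g_instance;
}

Debugging exactly that kind of subtle reordering on a 4-way box is what makes threading so humbling.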

Thalaxis
12-23-2004, 12:33 PM
What will this transition mean for our games? Will Doom3, for example, run slower on a brand new dual-core chip because it doesn't take advantage of it? Maybe these new chips will have features that will make the transition easier, i.e. so the current crop / early upcoming games don't run slower.
Why would they be any slower? If the game isn't presently multithreaded, it will run on a dual-core processor just as it would on a single-core processor. The only difference is that clock speed will ramp more quickly on the single-core models, since they won't be as hot (fewer transistors). That alone will make the single-core processors more desirable for games for quite a while.

Parallel programming introduces a host of new challenges that very few developers have had to deal with so far, and most of the ones who have, as dotTom pointed out, didn't actually figure it out anyway. However, that won't last forever; the tools will improve, developers will (gradually) learn, and researchers will figure out new ways to parallelize algorithms. Eventually, it won't make any sense to have a single-core processor anyway, and in fact it probably won't even be all that long before we start seeing more than two cores in mainstream processors.

Just watch AMD and Intel shift their marketing war from clock speed to core count ;)

arquebus
12-23-2004, 03:34 PM
So far dual processors are really only used for servers, so I think it will be the same for dual-headed chips. Maybe they should try and go for 128-bit or 256-bit processors instead of just 64-bit.

Thalaxis
12-23-2004, 03:49 PM
So far dual processors are really only used for servers,

And workstations.


so I think it will be the same for dual-headed chips. Maybe they should try and go for 128-bit or 256-bit processors instead of just 64-bit.
No, they shouldn't. It wouldn't do anyone any good; even now we're just edging up on the limits imposed by 32-bit processors, and the amount of memory a 64-bit processor can address, even limited to 40- or 48-bit physical addressing, would, if stacked, reach the moon. We don't have that much physical RAM in the entire world right now.
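For a sense of scale (my own back-of-the-envelope numbers, not from the post), the jump in addressable memory per extra bit of physical addressing is enormous:

#include <cstdio>
#include <cstdint>

int main()
{
    // Addressable bytes for the bit widths mentioned above:
    //   32-bit: 2^32 =   4 GiB
    //   40-bit: 2^40 =   1 TiB  (1024 GiB)
    //   48-bit: 2^48 = 256 TiB  (262144 GiB)
    for (int bits = 32; bits <= 48; bits += 8) {
        std::uint64_t bytes = 1ULL << bits;
        std::printf("%d-bit addressing: %llu GiB\n",
                    bits, (unsigned long long)(bytes >> 30));
    }
    return 0;
}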

So why would anyone want to waste their precious transistor budget on a 128-bit processor? Realistically, there's only one company in the world that could afford to spend transistors that frivolously, and they have more important things to worry about than silly marketing claims, like heat.

There are those who actually believe that more bits = more performance, but there's no relationship
between the two in the real world.

opus13
12-23-2004, 05:26 PM
I've been waiting for multicore processors for quite a while, and quite impatiently, to boot (no pun intended).

The original K8 (Opteron, Clawhammer, and Sledgehammer) included multicore extensions as part of its original design philosophy. I buy Opteron systems because they actually have an upgrade path, including drop-in replacement! (The current batch of dual Socket 940 motherboards will have compatible parts to make them effectively quad-processor systems: dual dual-core.)

I was also wondering when this would make headlines outside of the tech industry. Interestingly, other people will finally see the real-world 'creamy goodness' of SMP computing, while the current SMP crop upgrades to quad processing.

I'm drooling already.

1001 JediNights
12-24-2004, 07:10 AM
Dual processors are only used in servers? Wow. I didn't know that all of Apple's desktop systems were used as servers.

dotTom
12-24-2004, 09:22 AM
Dual processors are only used in servers? Wow. I didn't know that all of Apple's desktop systems were used as servers.
Since when were all of Apple's desktop systems SMP? ;-) Then again, the comment about SMP being a server-only thing is just wrong; it's a server/workstation technology.

1001 JediNights
12-24-2004, 09:27 AM
Okay, most. Three of the four systems available on their site are dual.

halo
12-24-2004, 09:55 AM
Well, for starters duals are useful as render slaves... but anyway, what is still not settled about dual-core chips is the software licensing. The industry can't decide whether to treat them as single or dual chips; consequently, per-processor licenses for rendering and software could get either cheaper or more expensive depending on your position.

So although one part of the industry is moving forward, another is stuck in a quagmire and we're stuck in the middle.

Thalaxis
12-24-2004, 12:18 PM
Microsoft already stated that they're going to do licensing per socket rather than per core.

AMD is basically going to masquerade their processor as hyperthreaded so that it looks like two logical processors rather than two physical ones, from what I understand.

Oracle and their ilk apparently still plan to do licensing per core, but the CG industry hasn't
cast their vote yet, AFAIK. My guess is that software like Cinema and LightWave won't
care; they'll just ask the OS how many processors there are and run with that, just like
they do now.

I'm sure most of the graphics software that works that way now will continue to work that
way on dual core processors.
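As a rough illustration of that "just ask the OS" approach (the function name is my own; the OS calls are the standard Win32 and POSIX ones), an application might do something like:

#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

// Returns the number of logical processors the OS reports, whether they
// are separate sockets, cores, or hyperthreads -- the application simply
// spawns that many worker threads and leaves the topology to the OS.
unsigned logicalProcessorCount()
{
#ifdef _WIN32
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    return si.dwNumberOfProcessors;
#else
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    return n > 0 ? static_cast<unsigned>(n) : 1;
#endif
}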

Terkonn
12-26-2004, 06:18 PM
The main limit to CPU speed has always been heat. The less heat you can get your CPU to generate per second, the more energy you can send through it. Dual-core processors are a way around making one chip cooler. With only one chip, every bit of energy goes through one electrical body, whereas two cores can each take a slight decrease in speed, giving off less heat per body, while the overall speed is quite a bit higher.

At my work I am currently a head researcher for a project where we are trying to make the silicon used in components much more efficient. I can't say too much (government top-secret), but I can say that when this project is finished (hopefully within the next two years) chips will become MUCH more heat efficient, allowing for a much tinier chip and greatly enhancing speed.
This has many benefits: the smaller chip size makes it easier to fit more chips per processor, and the much faster speeds will make much more powerful chips available, but will also result in a huge decrease in standard processor costs.
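As a rough illustration of the heat argument above (my own simplified numbers, assuming dynamic power scales roughly with frequency times voltage squared, and voltage scales with frequency):

#include <cstdio>

int main()
{
    // One core at full clock vs. two cores each at 80% clock and voltage.
    double single_core = 1.0 * 1.0 * 1.0;        // f * V^2, normalised
    double dual_core   = 2.0 * (0.8 * 0.8 * 0.8);
    std::printf("single-core power: %.2f\n", single_core);  // 1.00
    std::printf("dual-core power:   %.2f\n", dual_core);    // ~1.02
    // Roughly the same power budget, but up to ~1.6x the aggregate
    // throughput if the workload parallelises well.
    return 0;
}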

dotTom
12-26-2004, 06:44 PM
The main limit to CPU speed has always been heat. The less heat you can get your CPU to generate per second, the more energy you can send through it. Dual-core processors are a way around making one chip cooler. With only one chip, every bit of energy goes through one electrical body, whereas two cores can each take a slight decrease in speed, giving off less heat per body, while the overall speed is quite a bit higher.

At my work I am currently a head researcher for a project where we are trying to make the silicon used in components much more efficient. I can't say too much (government top-secret), but I can say that when this project is finished (hopefully within the next two years) chips will become MUCH more heat efficient, allowing for a much tinier chip and greatly enhancing speed.
This has many benefits: the smaller chip size makes it easier to fit more chips per processor, and the much faster speeds will make much more powerful chips available, but will also result in a huge decrease in standard processor costs.

All of which is true, but writing software that can actually put all these cores to useful work is much harder (useful here meaning not spending all your time in synchronisation). The serious money will be made by folks who can make parallel programming accessible to the Visual Basic folk (who make up the vast majority of developers). CG packages are not in themselves simple bits of software, so the companies involved probably have the talent to deal with this change - that said, retrofitting concurrency to existing code bases (full of static and global objects) is a complete nightmare. CG is also fortunate in that it's a problem area that allows for fairly elegant parallel implementation, i.e. tile-based within a frame or across frames.

As with almost all things in software (where all the interesting or wholly original work was done back in the 70's and early 80's), the "problem" of making parallel programming accessible to the non-guru developer is not a new one; it's just that now it's not limited to some bespoke bit of transputer hardware in some lab but is everyone's problem. The fact that we've known how hard it is to write dependable, cost-effective parallel software for almost 30 years and still don't have a solution is the more worrying part. Note: by solution I mean a solution for the folks who just want their software to go faster with the algorithms they've successfully been using for all these years; yes, there are things like OpenMP for C++ etc., but they only go so far.
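A minimal sketch of the "tile-based within a frame" parallelism described above, using OpenMP (the image dimensions and shadePixel() function are hypothetical stand-ins for a real renderer):

#include <vector>

const int WIDTH = 640, HEIGHT = 480, TILE = 32;

// Hypothetical per-pixel shading function standing in for the real renderer.
float shadePixel(int x, int y) { return ((x ^ y) & 255) / 255.0f; }

void renderFrame(std::vector<float>& image)
{
    const int tilesX = WIDTH / TILE, tilesY = HEIGHT / TILE;

    // Each iteration renders a disjoint tile, so the loop body needs no
    // synchronisation at all -- the "fairly elegant" case mentioned above.
    #pragma omp parallel for
    for (int t = 0; t < tilesX * tilesY; ++t) {
        const int tx = (t % tilesX) * TILE;
        const int ty = (t / tilesX) * TILE;
        for (int y = ty; y < ty + TILE; ++y)
            for (int x = tx; x < tx + TILE; ++x)
                image[y * WIDTH + x] = shadePixel(x, y);
    }
}

Compiled with -fopenmp (or the equivalent switch), the same code runs sequentially on one core and spreads across however many cores the OS reports.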

Terkonn
12-26-2004, 06:48 PM
All of which is true, but writing software that can actually put all these cores to useful work is much harder (useful here meaning not spending all your time in synchronisation). The serious money will be made by folks who can make parallel programming accessible to the Visual Basic folk (who make up the vast majority of developers). CG packages are not in themselves simple bits of software, so the companies involved probably have the talent to deal with this change - that said, retrofitting concurrency to existing code bases (full of static and global objects) is a complete nightmare. CG is also fortunate in that it's a problem area that allows for fairly elegant parallel implementation, i.e. tile-based within a frame or across frames.

As with almost all things in software (where all the interesting or wholly original work was done back in the 70's and early 80's), the "problem" of making parallel programming accessible to the non-guru developer is not a new one; it's just that now it's not limited to some bespoke bit of transputer hardware in some lab but is everyone's problem.
Very true and well said. It isn't a problem that can be fixed in just a lab or just by a group of programmers; it is more of a mutual concern to all areas of computing. Let's just hope that we all see the benefits of better computing very shortly!

Srek
12-26-2004, 06:53 PM
CG packages are in themselves not simple bits of software so the companies involved probably have the talent to deal with this change
Due to the demand for ever higher performance, this happened long ago. Every major 3D package on the market already has pretty good support for multithreading and therefore multi-CPU / multi-core systems.
I think the step to dual-core CPUs will give 3D apps the most advantage right from the start.
Cheers
Srek

dotTom
12-26-2004, 06:55 PM
Very true and well said. It isn't a problem that can be fixed in just a lab or just by a group of programmers; it is more of a mutual concern to all areas of computing. Let's just hope that we all see the benefits of better computing very shortly!
Yes, I mean there is a lot of language innovation going on; for example, C# 3 will include data manipulation APIs that the compiler / execution engine can easily infer distributable behaviour from. This, however, does not solve the problem for folks who just want the year-on-year increase they've enjoyed to date with their largely sequential implementations. I think we might very well see a lot of software companies who can't afford to re-skill for this change go to the wall. I've seen enough attempts at trying to introduce threads to know that quite often it's better to start from scratch. Take "start from scratch", i.e. a rewrite, to your CTO and see what the reaction is. It's often: go away and think again.

dotTom
12-26-2004, 07:00 PM
Due to the demand for ever higher performance, this happened long ago. Every major 3D package on the market already has pretty good support for multithreading and therefore multi-CPU / multi-core systems.
I think the step to dual-core CPUs will give 3D apps the most advantage right from the start.
Cheers
Srek
True, and maybe I'm going off topic (that is, beyond just CG), but I think the whole migration to multi-core, i.e. pervasive multi-threading, has wider implications for the software industry as a whole. Heck, the major western economies have benefited from and relied upon Moore's Law delivering them the ability to manipulate larger and larger datasets cheaply. It's that "cheaply" bit that's the issue. There are folks who can write this stuff, but they're few and far between and very expensive (which is great for them). Human beings just don't Think Parallel, so I think it will be interesting to see the implications of this across the board.

rendermania
12-26-2004, 10:26 PM
Let me go even more off topic. Is it technically possible to create a 'rendering' motherboard that uses an array of conventional CPUs (Xeons are pretty cheap apiece, for example) just for rendering calculations?

In other words, a heavily cooled box that you stick something like 8 Xeon CPUs into and that taps into those CPUs only as slave calculating units (rather than full CPUs you can run an OS or other software on)?

There has got to be a cheaper way to get a renderfarm in a box than is currently possible.

JeroenDStout
12-27-2004, 12:29 AM
There has got to be a cheaper way to get a renderfarm in a box than is currently possible.
The Grid

No! Wait! forget I ever said that!

dotTom
12-27-2004, 05:47 AM
Let me go even more off topic. Is it technically possible to create a 'rendering' motherboard that uses an array of conventional CPUs (Xeons are pretty cheap a piece for example) just for rendering calculations?

In other words, a heavily cooled box that you stick something like 8 Xeon CPUs into and taps into those CPUs only a slave calculating units (rather than full CPUs you can run an OS or other software on)?

There has got to be a cheaper way to get a renderfarm in a box than is currently possible.
I'd say you're right on topic. It's entirely possible; check out the RenderDrive these folks make: http://www.art-render.com/

Dennik
12-27-2004, 03:08 PM
I'm wondering if it's worth waiting for that technology to hit the home workstation market before I upgrade from my 2.6 P4. I mean, it's fast enough for home use right now; I haven't felt like upgrading for a long time. But if I upgrade, I wouldn't want to bet on an almost-dead horse now...

CGTalk Moderation
01-20-2006, 05:00 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.