
View Full Version : NY Times: G5 ships today


SMH
08-18-2003, 11:15 AM
Hi,

The NY Times says Apple's G5 will be available today (18-8-2003). The company already has over 100,000 pre-orders to deal with... so good luck getting yours. :D

http://www.nytimes.com/2003/08/18/technology/18NECO.html

mark_wilkins
08-18-2003, 05:40 PM
Too bad the dual 2 GHz doesn't ship until the end of the month...

-- Mark

Goon
08-18-2003, 06:45 PM
finally the war may begin in earnest!!

Array
08-18-2003, 08:17 PM
Originally posted by Goon
finally the war may begin in earnest!!

Too bad there won't be a price war anytime soon :rolleyes:

policarpo
08-18-2003, 08:25 PM
Originally posted by Array
Too bad there won't be a price war anytime soon :rolleyes:
For what it's worth, I just configured a dual Xeon 3.06 at BOXX with 1GB RAM, a 40GB HD, and the lowest Quadro 4 (I selected BOXX because the quality is up there with Apple's), and the total price is $3,616.00.

It goes up to $4,026.00 if I add a DVD writer...


so... um... yeah, the war will start.

I know you can build it on your own for a little less... a friend just built his own dual 2.4 for a little over 2k, so the dual 3.06 would be several hundred more... but I like machines that are solid, tested, and worth what you pay.

Here's to Apple and OSX!:beer:

Bring it on!!:buttrock:

Zastrozzi
08-18-2003, 08:41 PM
When someone builds their own PC it is not inherently unstable or flawed in any way. In fact, I think it is better than many of the OEM PCs out there if said person has any idea what they are doing. Dell, HP, and Compaq all throw useless garbage into the OS.

policarpo
08-18-2003, 08:52 PM
Originally posted by Zastrozzi
When someone builds their own PC it is not inherently unstable or flawed in any way. In fact, I think it is better than many of the OEM PCs out there if said person has any idea what they are doing. Dell, HP, and Compaq all throw useless garbage into the OS.

i'm just using BOXX as the example, because I feel that they have the best hardware and configurations out there for Digital Content Creators.

I personally don't like building my own equipment, because I'd rather be doing something else than building a machine (I don't have the patience for such things, to be honest with you). But hey, I like to eat out for lunch a lot, whereas other people like to bring their own lunch from home.

But I bet with the same config as the G5, a BYO dual Xeon 3.06 would still be around the same amount, if not a little more. :thumbsup:

And don't forget to buy the OS, FireWire card, DVD-R, media burning software, DVD authoring package, video editing package, photo library application, and restocking and shipping fees if something doesn't work out. :love:

halo
08-18-2003, 09:24 PM
Dell, HP, and Compaq all throw useless garbage into the OS

Format partition, problem solved.

The only thing that's stopping me from ordering a G5 is no QuarkXPress.

mark_wilkins
08-18-2003, 09:29 PM
No QuarkXpress? Version 6, the OS X version, is out...

http://www.quark.com/products/xpress/mac_osx.html

or do you mean that you personally don't have it?

-- Mark

policarpo
08-18-2003, 09:37 PM
Originally posted by halo
Format partition, problem solved.

The only thing that's stopping me from ordering a G5 is no QuarkXPress.

heheheh....

have you tried out InDesign by chance?

I've used Quark since version 3.x, and recently switched over to InDesign 2.0, and I can honestly say that I really do enjoy it.

I've heard Quark 6 is supposed to be nice (from the Quark site. :P)

Let me know what you think of Quark 6 whenever you get a chance to use it. :thumbsup:

moovieboy
08-18-2003, 09:49 PM
Originally posted by SMH
The company already has over 100,000 pre-orders to deal with...so good luck getting yours. :D


Other threads were mentioning a 7-10 week wait on recent orders for their G5s... My day-job company is transitioning the whole building to OS X from a mix of OS 9 and Windows 2000 boxes/workstations AND we're going from Quark 4.1 to InDesign!

But I guess our IT folks were assuming that the G5s wouldn't be an issue, since everyone was thinking Sept at the earliest, if not much later, for the dual 2GHz G5... Guess they'd better start re-thinking them into the overall plan!

Interesting times, and I'm gonna throw a tantrum if they don't get me one of them G5s soon! :D

-Tom

matty429
08-19-2003, 06:17 AM
BOXX Xeons also happen to be the most expensive... least comparable...

The G5 will absolutely get torched by the Athlon 64... price and performance... especially when they announce a dual nForce...
Nevertheless... I'll give the Mac its 2 weeks' worth of glory...
Then it will go from being fast... to being not so fast... but boy, look at the design of this thing...
...oh yeah, but that's right... you can't compare it... the G5 is the world's most powerful DESKTOP

And the BOXX machines are workstations

Well... I don't need a desktop... I need a workstation...

moovieboy
08-19-2003, 06:33 AM
Originally posted by matty429
BOXX Xeons also happen to be the most expensive... least comparable...

The G5 will absolutely get torched by the Athlon 64... price and performance... especially when they announce a dual nForce...
Nevertheless... I'll give the Mac its 2 weeks' worth of glory...
Then it will go from being fast... to being not so fast... but boy, look at the design of this thing...
...oh yeah, but that's right... you can't compare it... the G5 is the world's most powerful DESKTOP

And the BOXX machines are workstations

Well... I don't need a desktop... I need a workstation...

:rolleyes: ummm, sure... whatever...:rolleyes:

-Tom :D

matty429
08-19-2003, 07:13 AM
Exactly

policarpo
08-19-2003, 07:19 AM
Originally posted by matty429
BOXX Xeons also happen to be the most expensive... least comparable...

The G5 will absolutely get torched by the Athlon 64... price and performance... especially when they announce a dual nForce...
Nevertheless... I'll give the Mac its 2 weeks' worth of glory...
Then it will go from being fast... to being not so fast... but boy, look at the design of this thing...
...oh yeah, but that's right... you can't compare it... the G5 is the world's most powerful DESKTOP

And the BOXX machines are workstations

Well... I don't need a desktop... I need a workstation...

I never understand this.

If you think Apple is a weak performer, why even bother commenting on this thread? It obviously has nothing to do with how you perceive the world.

I know your bio says "Complainer," but don't you have anything better to do than state your dissenting opinion, which has nothing to do with the G5 shipping?

Just wondering. :love:

matty429
08-19-2003, 07:39 AM
Every Apple post always has someone trashing the alternatives...

I'm just adding my perspective... If you can't handle that... that's not my problem...

Peter Reynolds
08-19-2003, 08:06 AM
Can someone explain to me how people who are "creative" and "artistic" can have closed minds?

I just don't get "app wars" or "platform wars".

I don't understand how people are unable to "discuss" if they are working in a creative field.

Doesn't it seem a bit pathetic that threads have to be locked because people cannot carry on a discussion?

moovieboy
08-19-2003, 08:40 AM
Yeah... sigh...

Some psychiatrists and behaviorists explain such actions as coming from the reptilian part of the brain. Basically, it's the earlier evolutionary part of us that craves (in part) being bigger, stronger and supposedly, "better"...

The fact that we still have this lesser brain explains why some of us "need" to have a huuuge SUV, even if the rational part of us says they're impractical... and it explains why some people get off on pissing all over someone else's choice of sport, team, car manufacturer, music, clothing, sound system... and, of course, computing choice.

On a side note, it would be interesting to see what underlying causes make some people so... well, venomous towards other 3D programs and/or hardware to the point of flaming for no reason...

I mean, it's not like as a small child, a Mac Classic chased them into a dark alley and took their milk money, or a Quadra dated their sister and told everyone she was a slut when she wouldn't put out...

I dunno. I'm not a big fan of the folks at "thetruth.com" or big SUVs or certain people running for Governor in California, but I can't see myself actively seeking out ways to flame them...

Oh well, back to work! :D

-Tom

Joviex
08-19-2003, 10:24 AM
Originally posted by policarpo
I never understand this.

If you think Apple is a weak performer, why even bother commenting on this thread? It obviously has nothing to do with how you perceive the world.

Just wondering. :love:


Well, I do understand a little bit, even as I type this on my way-underpowered G4 laptop.

You guys started out OK, but you are comparing Apple to PCs, so I would expect someone to come along eventually and do the reverse. Granted, no one is bashing anything per se, but in the comparison is the devil.

I honestly think the G5 is almost as fast as what is currently out in the PC market at 32-bit. Granted, when 64-bit software becomes available for it, it will probably smoke the 32-bit Intel/AMD stuff that's out; HOWEVER, 64-bit WinTel/WinAMD machines are right around the corner.

And, not to mention the fact, I dislike the way Apple has marketed this new machine. Fastest desktop in the world? That is a very myopic way to sell something, considering I can still get a DEC Alpha (yes, on eBay) for my desktop machine, and it will definitely smelt the new case design to all hell.

I think this is where Apple has lost, and will continue to lose, customers. They need to take a step back and let the ego deflate a bit, especially since they have had no reason for one in the first place (post-1987).

Oh well, back to my PC.

Howzat
08-19-2003, 01:59 PM
Originally posted by policarpo
I never understand this.

If you think Apple is a weak performer, why even bother commenting on this thread? It obviously has nothing to do with how you perceive the world.

I know your bio says "Complainer," but don't you have anything better to do than state your dissenting opinion, which has nothing to do with the G5 shipping?

Just wondering. :love:
*cough* hypocrite *cough*
That is in reference to a certain thread involving many, many postings of a nuclear explosion...

silvergun
08-19-2003, 02:45 PM
Nearly faster than what PCs have on 32-bit? Did you see the real-world tests? As for the guy who says the AMD64 will smoke the G5: it may, it may not, but you'll still have to reboot every time you use a 32-bit app and a 64-bit app. A lot of time wasted there. Plus, you'll be using Windows XP and may have to cough up even more cash for XP Pro. That's more money wasted. Don't forget the heat of these machines... it'll add up to a huge electricity bill.

deepinspace
08-19-2003, 03:16 PM
Well... the Athlon 64 is nowhere to be seen yet. So just shut up and let people enjoy their G5s!!!! ;)

policarpo
08-19-2003, 05:07 PM
Originally posted by Howzat
*cough* hypocrite *cough*
that is in reference to a certain thread involving many many postings of a nuclear explosion....

:drool:

Now now... I'd say I was more of a nuisance than anything. :love:

Thalaxis
08-19-2003, 05:16 PM
Originally posted by amorano
Granted, when 64-bit software becomes available for it, it will probably smoke the 32-bit Intel/AMD stuff that's out,


Well, that flies totally in the face of reality there.

(64-bit vs. 32-bit has negligible impact on performance... and in fact favors 32-bit when the hardware is identical -- as is the case for every 64-bit processor out there OTHER than the Itanium 2 and Opteron/Athlon 64.)


And, not to mention the fact, I dislike the way Apple has marketed this new machine. Fastest Desktop in the world?

It's the classic "set up expectations you can't meet" method. It's probably going to be good for a short-term boost in sales, but it's not a good long-term strategy.

Thalaxis
08-19-2003, 05:20 PM
Originally posted by silvergun
AMD64 will smoke the G5. It may, it may not, but you'll still have to reboot every time you use a 32-bit app and a 64-bit app.


The Opteron has already been demonstrated publicly running 64-bit code AND 32-bit code in 64-bit Linux and in pre-release versions of WinXP64, so that's obviously bunk.


Plus, you'll be using Windows XP and may have to cough up even more cash for XP Pro. That's more money wasted. Don't forget the heat of these machines... it'll add up to a huge electricity bill.

You might want to take a look at the power consumption specs for the G5. That's no embedded microcontroller like the G4.

real
08-19-2003, 07:58 PM
Humans just need some kind of group to belong to. It started with tribes and such. Now we have platform wars and app wars: Apple and Wintel, Quark and InDesign, Shake and Combustion, Nuke, AE, Maya and LightWave, 3ds max.

It's all the same, but the buttons are in a different position. :)

If you like Windows, work with Windows; if you like Mac OS X, then use that. If you don't like either, well, use Linux; and if you don't like any of them, well, fine, I don't care. My composite is better than yours.
real

Goon
08-19-2003, 08:13 PM
mac user.:annoyed:





















I j/k :D But yeah, I totally agree with your point. Not like it hasn't been made 10,001 times already, but we don't seem to listen.

noxy
08-19-2003, 08:53 PM
As a longtime PC user, I used to dis Macs all the time for being expensive and slow, but it seems much harder to do that these days. Whereas a year ago I would've screamed bloody murder if they made me use a Mac at work, I wouldn't mind a bit switching to one of the new G5s. I have plenty of troubles with my 2x 3.06GHz BOXX, so I doubt stability would suffer. Apple seems to be making many savvy decisions these days (marketing aside) and has built up quite a momentum in the fx arena.

Noxy

matty429
08-19-2003, 09:27 PM
Originally posted by deepinspace
Well...........the Athlon 64 is nowhere to be seen, yet. So, just shut up and let people enjoy their G5s!!!! ;)


Where's the G5?

halo
08-19-2003, 09:30 PM
shipping:rolleyes:

real
08-19-2003, 09:36 PM
Originally posted by Goon
mac user.:annoyed:

I j/k :D But yeah, I totally agree with your point. Not like it hasn't been made 10,001 times already, but we don't seem to listen.



We hear what's being said, but it still doesn't change the fact that we need to belong somewhere. It has nothing to do with Microsoft or Apple; it's just human nature.
Mac users know about PCs but like the Mac, and PC users know about the Mac but like PCs.
SIMPLE

As long as it looks good at the end.

mental
08-19-2003, 10:46 PM
Not to add fuel to the flames, but I found this recent exchange with Apple's VP of Hardware Engineering quite amusing:

DMN: Now, you're saying it's the first 64-bit desktop machine. But isn't there an Opteron dual-processor machine? It shipped on June 4th. BOXX Technologies shipped it. It has an Opteron 244 in it.

Rubinstein: Uh...

Akrout: It's not a desktop.

DMN: That's a desktop unit.

Akrout: It depends on what you call a desktop, now. These… From a full desktop per se, this is the first one. I don't know how you really distinguish the other one as a desktop.

DMN: Well, it's a dual processor desktop machine, just like that one.

Akrout: It's not 64, then.

DMN: Yes, it's a 64-bit machine with two Opteron chips in it. It started shipping June 4th.

Akrout: That we'll double check, but in my mind, it wasn't.
http://www.digitalvideoediting.com/2003/06_jun/features/cw_macg5_interview.htm

Uhhh... I guess 'it depends' on who you call VP of Hardware Engineering :p

/edit: my source was www.2cpu.com

-mental :surprised

moovieboy
08-19-2003, 11:30 PM
We're definitely shifting on and off topic, but I think it's about time that we as an internet society start moving back to some semblance of, dare I say it, etiquette?

I know, it's a silly dream. But, imho, people should start really asking themselves why they choose to make this post or that post. I think many flamers and bashers have rationalized themselves into thinking they're behaving acceptably because it's not ALL CAPS OBSCENITIES or overtly angry...

As a rule of thumb: if these same people (in the "real" world) overheard a group of people conversing at, say, another table, would they act the same way? Would they suddenly throw themselves into a conversation for the sole purpose of bullying/bragging/competing under the various guises of "adding perspective" and whatnot? Of course, most of them would refrain, but it is still curious to understand what motivates some people to post the way they do... especially in an environment that is filled with peers and professionals whom they may have to work with or ask for a job from someday...

Okay, enough jabbering & moralizing. Back to my real job :D

-Tom

matty429
08-19-2003, 11:43 PM
If you were yelling and addressing everyone that could hear you, I would say what's on my mind...

Joviex
08-20-2003, 02:14 AM
OK, so let's say that this G5 release goes smoothly, everyone gets one, and the machines seem solid.

I want to seriously know, from the long-term Mac users (I personally cut my use since the early 90's; I use them very rarely now): what happens if (THIS IS ONLY IF) the new G5 falls way short of the hype claims that were made?

Will there be PC users who go "haha, told ya so"? Sure, but ignoring that fact for the moment, what would you do about it work-wise? Would you feel cheated to some extent? Does this make Apple a better company than, say, MS?

Just curious. I notice this board is not zealot-embedded like some of the Apple forums per se, but I would be curious to know what you would say if in fact the G5 didn't quite reach the goals it is claiming to have surpassed already.

moovieboy
08-20-2003, 02:42 AM
Originally posted by amorano
OK, so let's say that this G5 release goes smoothly, everyone gets one, and the machines seem solid.

I want to seriously know, from the long-term Mac users (I personally cut my use since the early 90's; I use them very rarely now): what happens if (THIS IS ONLY IF) the new G5 falls way short of the hype claims that were made?

Will there be PC users who go "haha, told ya so"? Sure, but ignoring that fact for the moment, what would you do about it work-wise? Would you feel cheated to some extent? Does this make Apple a better company than, say, MS?

Just curious. I notice this board is not zealot-embedded like some other Apple forums per se, but I would be curious to know what you would say if in fact the G5 didn't quite reach the goals it is claiming to have surpassed already.

It's kind of an odd question... because I don't think many long-time users of ANYTHING expect their actual experiences to match the hype marketed to them. We don't fly through the trees using Windows XP, LightWave 8 won't make me the next Pixar, and we don't need tanks to protect our "supercomputer" G4s :D

For your "average" long-time user of any hardware or software, I would think all that matters is "can this new machine/OS/3D program make my job/life any easier/faster/better than what I've got now?"

I would have to say most Mac professionals would answer that question, in regard to the G5, with a big "Hell, yes!", but again, there are those who don't need the latest tech and do their jobs fine with old versions of Photoshop on an old PowerMac or what-have-you...

From my standpoint, I could always use a speed boost on my renders in After Effects, LightWave, or even big, complex Illustrator & Photoshop files. I currently work from a 733MHz G4 here at work and a lil' 400MHz Mac Cube at home, so a dual 2GHz G5 would set my pants on fire, hyperbole or not :D And I know my editing life would be much faster since I also use Final Cut Pro...

I don't care about whether or not I have "the fastest" anything, because I look at things like inferno* suites or gymnasium-sized render farms busting a cap into whatever little thing I've got at home and realize the speed game is one I'd never win... :hmm:

-Tom

Byla
08-20-2003, 07:54 AM
All I would like to know is:

are there any Cinema, Maya, or After Effects rendering benchmarks available? I mean, from non-Apple fanatics or Apple staff?

moovieboy
08-20-2003, 08:14 AM
Originally posted by Byla
All I would like to know is:

are there any Cinema, Maya, or After Effects rendering benchmarks available? I mean, from non-Apple fanatics or Apple staff?

My guess is that they'll be right around the corner shortly after the dual 2Ghz G5 ships, since that's the main beast Apple's been bragging about...

Then, we'll all be up to our caches in individual, company and "official" tests :)

-Tom

Byla
08-20-2003, 08:25 AM
Probably you're right...

I do like Macs, but right now, the only thing I care about is benchmarks. If the G5 is indeed as fast as Apple says, I will probably buy one. So, if any of the CGTalk readers got a new G5, could you please test some applications and post benchmarks? :)

manuel
08-20-2003, 12:47 PM
Originally posted by Byla
All I would like to know is:

are there any Cinema, Maya, or After Effects rendering benchmarks available? I mean, from non-Apple fanatics or Apple staff?

Are Luxology Apple fanatics? Not if you ask them... (http://www.luxology.net/company/wwdc03followup.aspx)

Are the people who did the benchmarks for the G5s in Apple's pocket? Not if you look at their list of clients. (http://www.veritest.com/clients/reports/default.asp?visitor=X) Do a Google search on VeriTest; they're not "Apple staff". Oh, and this (http://www.veritest.com/clients/reports/apple/default.asp?visitor=X) is where you download those famous benchmarks everybody has an opinion about without looking at them. Try it.

real
08-20-2003, 09:21 PM
Originally posted by moovieboy
My guess is that they'll be right around the corner shortly after the dual 2Ghz G5 ships, since that's the main beast Apple's been bragging about...

Then, we'll all be up to our caches in individual, company and "official" tests :)

-Tom


I was just wondering if this type of thing happens when Intel or AMD releases new chips. Are there a lot of questions about whether or not it is as fast as Intel/AMD say it is? Or are Dells and Compaqs so different that the benchmarks vary across all the different Wintel companies? I think it is good that we (the users of these systems) have so many questions about the truth of the marketing from all hardware makers. Keeps them on their toes. But then you have to think: do Intel, AMD, and Apple care what the benchmarks mean, or is it mainly about marketing? I say marketing. Apple says "the fastest personal computer on the planet." Maybe? Intel used to say, "Buy a P4 and watch how it speeds up your digital lifestyle." Maybe?


I guess what I'm asking is: are there the same Intel fanatics that act the same as Apple fanatics? Will they take it to their grave that Wintel is better and Apple just sucks?

I really hope the G5 delivers. It would be great for Apple and Intel and AMD. Then everyone would be even, and everybody would have to work harder on better tech to make their machines that much better. That goes for everyone. The main goal here is to get great-looking work done before the deadline. And if Apple, Intel, and AMD can help with that: WOW! That would be great.

Here's to wishing. Cheers :beer:
real

Limbus
08-20-2003, 09:38 PM
I read some benchmarks from a German Mac magazine today. They were able to test the smallest model with 1.6 GHz and 512MB RAM (it ships with 256MB RAM, which is a joke). They compared it to a dual 1.25 G4 and an Athlon XP 2200+ (1.8 GHz). In some benchmarks it came close to the Athlon, but in most cases the rather old Athlon was faster. The G5 1.6 performs like the dual G4. Interesting are the results from the Cinebench benchmark (Cinema 4D rendering): in this benchmark the Athlon was a lot (about 40%) faster than the G5.

I still think that the top G5 with 2x 2GHz is the best value for the money. The two other models are far overpriced. I just hope that some workstation graphics cards will be available soon (without the huge extra price that you normally pay for any Mac graphics card). But for 3k it should have something better than a Radeon 9600.

Florian

moovieboy
08-20-2003, 09:55 PM
Originally posted by real
I guess what I'm asking is: are there the same Intel fanatics that act the same as Apple fanatics? Will they take it to their grave that Wintel is better and Apple just sucks?

Any group, like political parties, sports fans, gear heads, and computer geeks, has a mix of fanatics, quiet majorities, and apathetic folks who just came for the free goodies... :D

And likewise, you've got those who make a habit of attacking those on the other side for no real reason besides that they want to... I'm a Los Angeles Mac user from Minnesota, so between California governor recalls, the tempest after the death of Sen. Wellstone, former Gov. Jesse Ventura, and Mac vs. PC wars... well, it gets pretty sad from a larger perspective :hmm: But people will do as people will do!

-Tom

Limbus
08-20-2003, 10:17 PM
Originally posted by real
I was just wondering if this type of thing happens when Intel or AMD releases new chips. Are there a lot of questions about whether or not it is as fast as Intel/AMD say it is?
If I think back to the introduction of the P4, and all the claims by Intel that it would make browsing the internet faster, there were very critical voices from all over. Right now these voices are very critical about the Athlon XP because it can't match the claimed speed of the P4. I guess it's rather normal to question benchmarks that the manufacturer publishes. Apple just got this much attention because the G5 looks like the first CPU that can actually compete with AMD and Intel.


I think it is good that we (the users of these systems) have so many questions about the truth of the marketing from all hardware makers. Keeps them on their toes. But then you have to think: do Intel, AMD, and Apple care what the benchmarks mean, or is it mainly about marketing?

It depends who you are selling to. For the normal business user who uses his standard office suite, it is rather irrelevant what the benchmarks say. For this market it's just marketing for the processor company, to get the huge PC producers like HP/Compaq, Dell, etc. to build computers with their CPUs. The customer will care less about what's inside the computer. They care about stuff like "total cost of ownership", image stability, service, etc.
But companies who really need computing power (render farms, clusters) will care about the max speed they can fit into a blade server.


I guess what I'm asking is: are there the same Intel fanatics that act the same as Apple fanatics? Will they take it to their grave that Wintel is better and Apple just sucks?

There are. There are just as many flame wars, like AMD vs. Intel or Win vs. Linux.


I really hope the G5 delivers. It would be great for Apple and Intel and AMD. Then everyone would be even, and everybody would have to work harder on better tech to make their machines that much better. That goes for everyone. The main goal here is to get great-looking work done before the deadline. And if Apple, Intel, and AMD can help with that: WOW! That would be great.
Sadly, most people don't understand that hard competition will only improve things for us, the customers. Apple would not have dumped Motorola if the Intel and AMD CPUs had not been that much faster, and vice versa.

Florian

real
08-20-2003, 10:31 PM
Limbus
Thanks for the info.

I have been away from the Wintel side for a long time now. It must get really crazy arguing about AMD and Intel. The only thing us Apple people argue about is Wintel. Another reason to use a Mac: you don't have to fight 3 wars (Apple, Intel, AMD). JK :)


THANKS AGAIN for the info.
Have a good one everyone.


Keep up the good work.

real

mark_wilkins
08-20-2003, 10:52 PM
Why wouldn't a 1.8 GHz Athlon with DDR 266 RAM be faster than a 1.6 GHz G5, particularly for 3D rendering?

The 1.6 G5 has slower memory than the other two G5 machines.

The Athlon has unusually strong floating-point performance and, in this instance, a 13% core clock rate advantage over the G5 to begin with.

Cinema 4D was probably optimized carefully for the G4 on the Mac side, and in many cases the optimizations that work best for the G5 are exactly the opposite of what is ideal for the G4. Hard to say what impact this might have had, though.

Like you say, I'm curious how things go with the dual 2 GHz machine, or even the 1.8 (which is a lot closer to the 2 GHz in memory performance, in particular).

-- Mark

Limbus
08-20-2003, 11:07 PM
Originally posted by mark_wilkins
Why wouldn't a 1.8 GHz Athlon with DDR 266 RAM be faster than a 1.6 GHz G5, particularly for 3D rendering?

Because the Athlon 2200+ is more than a year old and costs less than half of the G5.


Cinema 4D probably was optimized carefully for the G4 on the Mac side, and in many cases the optimizations that work best for the G5 are exactly the opposite of what are ideal for the G4. Hard to say what impact this might have had, though.

Why is that? I thought that the G5 also has an AltiVec engine. What are the other optimizations?


Like you say, I'm curious how things go with the dual 2 GHz machine, or even the 1.8 (which is a lot closer to the 2 GHz in memory performance, in particular.)

The 2 GHz G5 has a much higher memory bandwidth than the Athlon XP 2200+. The Mac has 500 MHz for reading and writing (2x 500 = 1000 MHz), so it only reaches its max memory performance if reading and writing occur at the same time. The Athlon XP 2200+ has 400 MHz. The only Wintel CPU that is in the G5's range is the P4 with 800 MHz. The P4 can read and write at full speed but needs some clock cycles to switch between reading and writing. So it should really depend on the app which memory is faster. And then you have the Opteron, which has a memory bandwidth of 24.5 GB/s compared to the 7 GB/s of the G5 (2 GHz) and 6.4 GB/s of the P4 (with FSB 800).
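The read/write overlap point can be sketched with a toy model. This is purely illustrative (the caps below are made-up numbers, not measured figures for any of these machines): a split unidirectional bus caps each direction separately, so one-way traffic can't borrow the other direction's headroom, while a shared bus serves traffic in either direction up to its total cap.

```python
# Toy model of the difference between a split unidirectional bus pair
# (like the G5's FSB, as described above) and a shared bidirectional bus.
# All GB/s figures here are invented for illustration.

def split_bus_throughput(read_gb_s, write_gb_s, per_direction_cap):
    """Unidirectional pair: each direction is capped independently."""
    return min(read_gb_s, per_direction_cap) + min(write_gb_s, per_direction_cap)

def shared_bus_throughput(read_gb_s, write_gb_s, total_cap):
    """Shared bus: combined traffic is capped as a whole."""
    return min(read_gb_s + write_gb_s, total_cap)

# Pure reads at 6 GB/s offered load: the split bus is stuck at one
# direction's cap, while the shared bus can devote everything to reads.
split_reads  = split_bus_throughput(6.0, 0.0, per_direction_cap=4.0)   # 4.0 GB/s
shared_reads = shared_bus_throughput(6.0, 0.0, total_cap=8.0)          # 6.0 GB/s

# Balanced reads and writes: only now does the split bus hit its peak.
split_mixed = split_bus_throughput(4.0, 4.0, per_direction_cap=4.0)    # 8.0 GB/s
```

So the split design's headline peak is only reachable when reads and writes overlap, which is exactly the caveat in the post above.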

Florian

mark_wilkins
08-20-2003, 11:21 PM
Because the Athlon 2200+ is more than a year old and will cost less than half of the G5.

That's not a reason that the G5 1.6 should be faster; that's a reason that it's not a great value!

It's obvious looking at the specs that the 1.6 is a marketing-hobbled G5 for the low-end buyer, which is why I passed over it when ordering! :D

Why is that? I thought that the G5 also has a Alitvec engine. What are the other optimizations?

Well, there are AltiVec and non-AltiVec optimizations that are the issue.

The AltiVec engine in the G5 is instruction-compatible with that of the G4, but they're not the same. G4-optimized AltiVec code (which is most of the AltiVec code out there) uses certain instructions for prefetching data that are useless on the G5 (which performs prefetching automatically) and apparently cause performance-killing pipeline stalls.

Non-AltiVec optimizations involve ordering of instructions and things like that -- I understand many of these things are ideally somewhat different between G4 and G5 as well.

One thing that Cinema4D could be doing wrong is using its own hand-tweaked (or statically-linked) numerical library, as Apple's numerical library shipping with the G5 addresses all of these optimization issues.

And then you have the Opteron, which has a memory bandwidth of 24.5 GB/s compared to the 7 GB/s of the G5 (2 GHz) and 6.4 GB/s of the P4 (with FSB 800).

I'd like to know what motherboard and what version of Opteron you're talking about -- 800 series? 200 series? The BOXX Opteron systems are much closer to the G5 in memory performance than the numbers you quote, and apparently for self-builders it's hard to find Opteron motherboards with adequate PCI and AGP support for desktop use. In any case, many of the Opteron CPUs are sufficiently expensive as to price them out of the G5's market except at the low end.

(and as we know, you can almost ALWAYS get faster by spending more money...)

-- Mark

Limbus
08-20-2003, 11:36 PM
Originally posted by mark_wilkins

I'd like to know what motherboard and what version of Opteron you're talking about -- 800 series? 200 series? The BOXX Opteron systems are much closer to the G5 in memory performance than the numbers you quote, and apparently for the self-builders it's hard to find Opteron motherboards with adequate PCI and AGP support for desktop use in any case.
-- Mark

As I understand it, all Opterons have the same 3 HyperTransport links and 2 memory channels, so memory performance should be equal on all systems (100 means 1 CPU, 200 means 2 CPUs, and so on). I guess memory bandwidth isn't everything. In the benchmarks I saw, the Opteron did best on file-serving and database benchmarks. Look here: http://www.anandtech.com/cpu/showdoc.html?i=1816&p=4 for database benches and here: http://www.anandtech.com/cpu/showdoc.html?i=1818&p=11 for rendering benches. Most mobos are still made for server use anyway.

Florian

mark_wilkins
08-21-2003, 01:32 AM
ok, so I read those articles, and I'm not sure where your numbers come from on memory throughput. For example:

http://www.anandtech.com/cpu/showdoc.html?i=1815&p=7

says:

"For example, the Opteron supports a maximum of DDR333 SDRAM currently, giving it a peak bandwidth of 5.3GB/s per CPU."

The Apple developer note for the G5 states that the connection from CPU to memory controller supports:

"two independent, unidirectional 800 MHz to 1 GHz frontside buses each supporting up to 8 GBps data throughput per processor"

and that the memory controller has

"two independent processor interfaces"

which seems to imply that the memory controller can max out DDR 400 RAM on two different CPU channels, yielding better throughput on a dual proc machine than the dual Opteron.

Is this evaluation of the specs correct? Who knows? Let's see a range of benchmarks and find out! :)
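In the meantime, the theoretical peaks at least can be sanity-checked with some back-of-the-envelope arithmetic (a sketch only: the bus widths below are the commonly cited ones, and sustained throughput will be lower once packet overhead is accounted for):

```python
# Back-of-the-envelope peak-bandwidth arithmetic (theoretical maxima only;
# packet overhead and memory-controller limits reduce sustained numbers).

def peak_gb_per_s(bus_width_bits, transfers_per_sec):
    """Peak throughput in GB/s: bytes per transfer * transfers per second."""
    return (bus_width_bits / 8) * transfers_per_sec / 1e9

# DDR400 (200 MHz clock, double data rate) on a 128-bit dual-channel bus:
ddr400_dual = peak_gb_per_s(128, 400e6)   # 6.4 GB/s

# Apple's quoted G5 FSB, assuming the commonly cited 32-bit-per-direction
# bus at 1 GHz, two unidirectional halves:
g5_fsb = 2 * peak_gb_per_s(32, 1e9)       # 8.0 GB/s per processor

print(ddr400_dual, g5_fsb)
```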

-- Mark

Thalaxis
08-21-2003, 03:38 AM
Ok, who will you believe, the IBM engineers or the Apple guy?

IBM's engineers claim that the G5's FSB has a throughput of 6.4
GB/s; the disadvantage of using a bidirectional bus is that it
requires a packet-based implementation which eats a bit of
bandwidth.

In any case, a dual-channel 400 MHz DDR implementation only
provides a max of 6.4 GB/s.

The advantage of the bidirectional bus is that it can generally
sustain a higher percentage of its peak throughput than a
unidirectional bus can.

Finally, each processor in the G5 is connected to a single memory
controller, which isn't necessarily the case in an Opteron rig. So
they both share a 6.4 GB/s memory bus, which means that the
system bandwidth is 6.4 GB/s.

The system architecture is very similar to that of the AthlonMP's.

The Opteron implementation puts a memory controller on each
processor -- but that's actually optional; low-end dual-Opteron
rigs can just implement one of the memory controllers, and hang
the second one off of the first, using the NUMA implementation
instead, so they share the single bus to main memory, netting an
advantage in cost, but not in bandwidth.

The more upscale implementation has a memory bus hung off of
every processor, which means that each processor adds to
system bandwidth. The theoretical peak doubles each time that
you double the processor count, but in practice that's not a very
good metric, since the latency for accessing memory that is not
local is considerably higher than for local memory.

So the 25 GB/s figure is actually fairly accurate... in 4-way SMP
configs, with a memory bus for every processor.

In the optimization front... there are significant issues regarding
instruction ordering. The G5 has a very restrictive multi-issue
limitation; without the proper instruction ordering, it won't get
more than one or two instructions per clock cycle on a good day.
The theoretical peak issue rate is 5 instructions per clock cycle...
but in order to compensate for the complexity of tracking the
rather massive 200-something in-flight instructions, it does so in
bundles, and filling the bundles is where the restrictions come in.
Stuffing 5 instructions on it has a lot of caveats associated with it.

I hope that clarifies more than it confuses :)

mark_wilkins
08-21-2003, 06:28 AM
So the 25 GB/s figure is actually fairly accurate... in 4-way SMP
configs, with a memory bus for every processor.

Thanks for clarifying. Still, I'd like to see the pricing on an Opteron system that actually implements some of these multi-way hypertransport buses... to see whether we're still in the same class of machine...

And as for the 25 GB/s number, why would someone compare a 4-way SMP system to a 2-way system in any case?

Incidentally, I cranked a copy of that Macwelt article mentioned earlier (the 1.6 G5 vs. 2200+ Athlon) through babelfish.altavista.com and have some thoughts on it:

First off, they do not clearly specify how much RAM was in the G5 for each test, but it appears that in the Photoshop and Cinebench tests they tried it at 256 and 512 MB on the G5, while the Athlon was at 768 MB the entire time. For these kinds of applications that's likely to be a killer.

For the gaming test (UT2k3), they tested the G5, with less RAM and a GeForce FX 5200 Ultra, against the faster-clocked Athlon, with more RAM and an ATI 9700 Pro. As they point out, the ATI card is older; HOWEVER, it's also as much as three times as fast as the FX 5200 on the same motherboard. Check out this chart of UT benchmarks at Tom's Hardware:

http://www.tomshardware.com/graphic/20030714/vga_card_guide-12.html

Sure enough, the Mac ended up at about 1/3 the frame rate.

Finally, the iTunes test, which again the Mac lost, is guaranteed to be AltiVec intensive. G5s are slow running G4-optimized AltiVec code, apparently because the instructions used for prefetching data on the G4 stall the G5 execution pipeline and the G5 automatically prefetches. So, different optimization probably will make a huge difference here.

So what does all this mean? Not that much. Between the issues of software of unknown optimization, giving the PC a MUCH faster graphics card, giving the PC more RAM, and running the tests against a PC with a clock speed advantage, it's a surprise to me that the PC didn't spank the G5 by even more.

-- Mark

Saurus
08-21-2003, 06:37 AM
Yeah...but does it play Desert Combat? If it can't it sucks!

Saurus

deepcgi
08-21-2003, 07:03 AM
How fast do the Intel and AMD machines run OS X?

skigil
08-21-2003, 07:32 AM
As an employee of Apple I must make the statement that Intel and AMD machines will not run OS X.

Thanks.

-skigil

Zastrozzi
08-21-2003, 09:19 AM
Originally posted by skigil
As an employee of Apple I must make the statement that Intel and AMD machines will not run OS X.

Thanks.

-skigil

It would be nice to try out though

Tim Deneau
08-21-2003, 11:05 AM
All that matters to me is OS X, good for the G5, but give us Panther :D

Thalaxis
08-21-2003, 03:36 PM
Originally posted by mark_wilkins
Thanks for clarifying. Still, I'd like to see the pricing on an Opteron system that actually implements some of these multi-way hypertransport buses... to see whether we're still in the same class of machine...


I'd say that's singularly unlikely, no matter how much you stretch
your definitions. Right now AMD's primary goal is to drive up their
average selling price, which means that they're pricing each model
based on its target market, not cost. 4-way and up SMP rigs are
exclusively aimed at server markets, and in those markets the
cost of the processor isn't the driving force behind the cost of the
machine...


And as for the 25 GB/s number, why would someone compare a 4-way SMP system to a 2-way system in any case?


Beats me. I didn't intend to support the comparison, or even
provide a comparison, just information. And I agree with you; as
long as 4-way SMP rigs are out of consideration due to price, they
are IMO irrelevant here.

I suspect that things will get shaken up a bit more this fall and
winter when Intel hits the market with a $750 64-bit processor,
but it won't have a whole lot of native code running on it when it
launches, most likely. When the native code shows up, then
things will start getting REALLY interesting.

Personally, I think that the Xeon will get caught in the middle
then, since it won't have 64-bit support.



First off, they do not clearly specify how much RAM was in the G5 for each test, but it appears that in the Photoshop and Cinebench tests they tried it at 256 and 512 MB on the G5, while the Athlon was at 768 MB the entire time. For these kinds of applications that's likely to be a killer.


512 MB is enough for CineBench, which I can tell you from trying
it out myself (I favor Cinema). It doesn't overrun physical memory
with 512 MB installed.


For the gaming test (UT2k3), they tested the G5, with less RAM and a GeForce FX 5200 Ultra, against the faster-clocked Athlon, with more RAM and an ATI 9700 Pro. As they point out, the ATI card is older; HOWEVER, it's also as much as three times as fast as the FX 5200 on the same motherboard. Check out this chart of UT benchmarks at Tom's Hardware:
Sure enough, the Mac ended up at about 1/3 the frame rate.


What a silly benchmark.


Finally, the iTunes test, which again the Mac lost, is guaranteed to be AltiVec intensive. G5s are slow running G4-optimized AltiVec code, apparently because the instructions used for prefetching data on the G4 stall the G5 execution pipeline and the G5 automatically prefetches. So, different optimization probably will make a huge difference here.


That's one thing that you'd expect the mac to excel at, even
compared to the x86 competition, but again... clueless tester
syndrome strikes.


So what does all this mean? Not that much. Between the issues of software of unknown optimization, giving the PC a MUCH faster graphics card, giving the PC more RAM, and running the tests against a PC with a clock speed advantage, it's a surprise to me that the PC didn't spank the G5 by even more.


I'd also like to know what the details are on both machines'
configurations. Given the general cluelessness that those
reviewers showed, who knows what the real deal is?

Thalaxis
08-21-2003, 03:38 PM
Originally posted by skigil
As an employee of Apple I must make the statement that Intel and AMD machines will not run OS X.

Thanks.

-skigil

Yes, indeed... another opportunity missed by Apple.

chadtheartist
08-21-2003, 04:40 PM
Originally posted by Limbus
I read some benches from a German Mac magazine today. They were able to test the smallest model with 1.6 GHz and 512 MB RAM (it ships with 256 MB, which is a joke). They compared it to a dual 1.25 GHz G4 and an Athlon XP 2200+ (1.8 GHz). In some benches it came close to the Athlon, but in most cases the rather old Athlon was faster. The G5 1.6 performs like the dual G4. Interesting are the results from the Cinebench benchmark (Cinema 4D rendering). In this benchmark the Athlon was a lot (about 40%) faster than the G5.

I still think that the top G5 with 2x 2 GHz is the best value for the money. The two other models are far overpriced. I just hope that some workstation graphics cards will be available soon (without the huge extra price that you normally pay for any Mac graphics card). But for 3k it should ship with something better than a Radeon 9600.

Florian

One thing that I noticed is that the tests were not using recompiled software. No one will see a speed boost from software that isn't compiled with GCC 3.3 on the G5. Basically, as was stated earlier, the software isn't optimised to address any of the G5 functionality and is more or less running at half capacity. If they recompiled the software that they tested with GCC 3.3, then they would be seeing results like those Luxology, Pixar, Adobe, and Apple previously reported at WWDC.

So I wouldn't put much credence in what this magazine article stated. Wait for real results using software that is optimised to run on the G5. Optimising meaning just a simple recompile with GCC 3.3, not rewriting the software, like some people might be thinking.

And the Opteron does not have a throughput of 25 GB/s. HyperTransport is what the Opteron's FSB is, and HT throughput maxes out at 6.4 GB/s (3.2 GB/s each way)

"The AMD Opteron processor includes three 16-bit HyperTransport technology interfaces capable of operating up to 1600 mega-transfers per second (MT/s) with a resulting bandwidth of up to 6.4 Gbytes/s (3.2 Gbytes/s in each direction). The AMD Opteron processor supports HyperTransport technology synchronous clocking mode. Refer to the HyperTransport™ I/O Link Specification
(www.hypertransport.org) for details of link operation."

Link: http://www.amd.com (Opteron Processor Tech Docs)
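For what it's worth, the 3.2/6.4 figures in that AMD quote follow directly from the link parameters (this is just arithmetic on the quoted numbers, nothing more):

```python
# HyperTransport link arithmetic, using only the figures from AMD's quote:
# a 16-bit link running at 1600 mega-transfers per second.
link_width_bytes = 16 / 8                  # 2 bytes move per transfer
transfers_per_sec = 1600e6                 # 1600 MT/s
per_direction = link_width_bytes * transfers_per_sec / 1e9   # in GB/s
both_directions = 2 * per_direction        # the link is bidirectional
print(per_direction, both_directions)      # 3.2 each way, 6.4 combined
```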

And I also don't know where you got the number of instructions the G5 can do per clock cycle. In Apple's G5 White paper, it clearly states:

"At the heart of the PowerPC G5 is an entirely new superscalar, superpipelined execution core, composed of 12 functional units that execute different types of instructions concurrently for massive data throughput. Before instructions are dispatched into the functional units, they are arranged into groups of up to five. Within the core alone, the PowerPC G5 can track up to 20 groups at a time, or 100 individual instructions. This efficient group-tracking scheme enables the PowerPC G5 to manage an unusually large number of instructions “in flight”: 20 instructions in each of the five dispatch groups, in addition to 100-plus instructions in the various fetch, decode, and queue stages.

Up to 215 in-flight instructions: A wide architecture with 12 discrete processing units enables the PowerPC G5 to contain up to 215 in-flight instructions simultaneously, 71 percent more than the 126 instructions in a Pentium 4."

Link: http://www.apple.com/g5processor/ (G5 White Paper)

It clearly states that the G5 can do 100 individual instructions, in addition to the Grouping you talked about. So explain to me how "On a good day" the G5 could do one or two instructions per clock cycle? Because in the same White paper Apple states this:

"It starts with 512K of L2 cache that provides the execution core with ultrafast access to
data and instructions—up to 32 GBps. Instructions are prefetched from the L2 cache
into a large, direct-mapped 64K L1 cache at up to 64 GBps. As they are accessed from
the L1 cache, up to eight instructions are fetched per clock cycle. Next, instructions are
decoded and divided into smaller, faster-executing operations. In addition, 32K of L1 data cache can prefetch up to eight active data streams simultaneously."

I'd like to hear further information on your findings, because I can't quite figure out how you arrived at the figures you did. Apple says the maximum throughput for the G5 is 8 GBps, which I don't really think is all that hard to believe. The Opteron's maximum throughput, however, is limited to 6.4 GBps because its FSB is HyperTransport. And that is still not 100%, because it depends on the speed of the RAM; the Opteron can only use 100 MHz (DDR200) PC-1600, 133 MHz (DDR266) PC-2100, or 166 MHz (DDR333) PC-2700 DIMMs, according to the Opteron tech docs linked above.

Theoretically, once 500 MHz DDR RAM is available, the G5's bandwidth should be at full speed. But as was stated earlier, 200 MHz is as fast as the G5 will have right now. So 8 GBps might not be as fast as the G5's throughput could be.

Thalaxis
08-21-2003, 04:55 PM
Originally posted by chadtheartist
HyperTransport is what the Opteron's FSB is,

Wrong.

The Opteron's "FSB" is its connection to system memory. The
HyperTransport links are for I/O and interprocessor
communication, an entirely different purpose.


It clearly states that the G5 can do 100 individual instructions, in addition to the Grouping you talked about. So explain to me how "On a good day" the G5 could do one or two instructions per clock cycle? Because in the same White paper Apple states this:


No, it does not. It says nothing of the kind. The instruction groups
are what limit the issue rate; it can issue one group per clock
cycle, which means that it can issue a max of 5 instructions per
clock cycle in some very limited circumstances.

The "100 instructions in flight" is an entirely different metric. The
issue rate determines how many instructions the processor can
START during a given clock cycle. The in-flight rate relates to that
combined with the depth of the pipelines (it's just shy of 20
stages, so a peak of 5 issues per clock * 20 stages should look
familiar). It can retire one group per clock, and groups are
issued and retired in program order, according to IBM. So the
catch is filling groups, which is where the limitation I mentioned
comes from.
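Assuming the figures being traded here are right (5-wide dispatch groups, 20 groups tracked in the core, a roughly 20-stage pipeline), the in-flight numbers do hang together arithmetically; this is just a consistency check on those quoted figures, not new data:

```python
# Consistency check on the quoted G5 dispatch/in-flight figures.
issue_group_size = 5          # max instructions per dispatch group
groups_tracked_in_core = 20   # groups tracked within the core (quoted above)
in_core = issue_group_size * groups_tracked_in_core   # 100 in the core
front_end = 115               # the "100-plus" in fetch/decode/queue stages
                              # (215 minus 100, per the white paper's total)
total_in_flight = in_core + front_end                 # the famous 215
print(in_core, total_in_flight)
```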

The 8 GB/s figure does not account for packet overhead. That
figure comes from IBM's tech docs, your figure comes from Apple's
BS department. Who are you going to believe?

The Opteron has an integrated memory controller as part of the
integrated NorthBridge. It uses 3 HyperTransport links, which are
PRESENTLY (look at the HyperTransport spec sometime, the 3.2
GB/s figure is definitely not the end of the line), limited to 3.2 GB/s
in each direction... in a dual configuration, one is used for the
I/O tunnel, and one is used to communicate with the 2nd
processor.

Each processor's memory controller has a 128-bit DDR333
memory bus, for a throughput of 5.4 GB/s -- each.

When ECC memory is available at 400 MHz, that will increase to
6.4 GB/s per processor.

Limbus
08-21-2003, 05:13 PM
Originally posted by chadtheartist
And the Opteron does not have a thruoughput of 25GB/s. Hypertransport is what the Opteron's FSB is, and HT throughput maxes out at 6.4 GB/s (3.2 GB/s each way)

It's 6.4 GB/s per CPU, so with 4 CPUs you have around 25 GB/s and with 2 CPUs 12.8 GB/sec. Still much more than the G5.

Florian

chadtheartist
08-21-2003, 05:28 PM
Ok, thanks for clarifying that. AMD's tech docs didn't say that the FSB was anything, so I just assumed the HyperTransport was what they meant (they lumped the HyperTransport documentation in with the memory controller information). It's funny though, because in their entire tech doc they don't mention anything at all about the Opteron's FSB speed. Weird.

"It's 6.4 GB/s per CPU, so with 4 CPUs you have around 25 GB/s and with 2 CPUs 12.8 GB/sec. Still much more than the G5."

The dual 2 GHz G5 has a throughput of 16 GB/sec. Still more than the Opteron with 2 processors.

"To ensure maximum efficiency on dual 2.0GHz G5 systems, each processor has its own dedicated 1GHz frontside bus. The resulting bandwidth of 16 GBps offers more than twice the 6.4-GBps maximum throughput of Pentium 4-based systems. In addition to providing fast access to main memory, this high-performance frontside bus architecture lets each G5 discover and access data in the other processor’s cache."

Link: http://www.apple.com/g5processor/architecture.html

mark_wilkins
08-21-2003, 06:00 PM
Originally posted by Limbus
and with 2 CPUs 12.8 GB/sec. Still much more than the G5.

except that the shared DDR 333 memory can't deliver data that fast... so they're really waiting for a better memory subsystem.

-- Mark

Thalaxis
08-21-2003, 06:05 PM
Originally posted by chadtheartist

The dual 2 Ghz G5 has a throughput of 16 GB/sec. Still more than the Opteron at 2 processors.


That's still not true... they are limited to 6.4 GB/s by having a
single 6.4 (dual-channel DDR-400) memory controller to share
between them. The advantage is that they don't have to waste
time (and bandwidth) accessing main memory or memory
controller compute cycles for cache coherency stuff.

It's definitely an improvement over the previous shared-bus
implementation, which forced both processors to fight for
bandwidth, but AMD's implementation does offer potentially better
scalability... if the OS is NUMA-aware and NUMA-optimized.

If the OS allocates memory poorly in a NUMA environment, it can
hurt performance considerably; the pathological worst-case
situation is that it allocates a thread or process to one CPU while
all of the data that it needs is in memory local to the other, and
at the same time does the reverse with another process or
thread.
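A toy model of the pathological case described above (the latency numbers are made up for illustration; real figures vary by chipset and clock):

```python
# Toy NUMA placement model: remote accesses pay an extra HyperTransport hop.
LOCAL_NS = 80      # hypothetical latency to a CPU's own local memory
REMOTE_NS = 140    # hypothetical latency to the other CPU's memory

def avg_latency_ns(fraction_remote):
    """Average access latency for a thread given its remote-access fraction."""
    return (1.0 - fraction_remote) * LOCAL_NS + fraction_remote * REMOTE_NS

numa_aware = avg_latency_ns(0.0)   # OS places data local to the thread
worst_case = avg_latency_ns(1.0)   # every access lands on the wrong node
print(numa_aware, worst_case)      # 80.0 vs 140.0 in this toy model
```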

The on-board memory controller's biggest advantage is not in
bandwidth, it is in latency.

BTW, the reason that they don't talk about "FSB speed" in the
Opteron docs is that the FSB doesn't really exist anymore; the FSB
is the connection between the processor and the NorthBridge,
and since the NorthBridge is now on-die, that connection is
entirely internal.

In the dual G5 (this statement wouldn't make much sense in the
context of a single, even though it's just a special case of the
same architecture), each processor has a dedicated connection to
the memory controller. That connection has a throughput of 6.4
GB/s, after accounting for packet overhead, since it's really two
3.2 GB/s unidirectional connections. The marchitecture doc you
linked is claiming that those combined determine the total system
bandwidth, when in fact that is not the case; that is limited by the
system's bottleneck, which in this case is the single memory
controller.

Hence the 6.4 GB/s figure.

In theory, the Opteron sounds like it has a significant edge in
bandwidth, but in practice we'll see. I think that there will be
certain cases where it turns out to be huge, but in general the
Opteron's biggest performance-related advantage will not be its
bandwidth, but rather the latency that it gets from having such a
fast memory controller.

mark_wilkins
08-21-2003, 11:46 PM
BTW here's a comment on the optimizations used in the current version of Cinebench from one of the Maxon developers that appeared on Appleinsider.com. This seems to confirm the theory that poor optimizations may be the cause of poor Cinebench speeds. However, AltiVec differences are NOT the cause of differences in Cinebench between G4 and G5:


I really wonder about what versions of C4D and Cinebench we are talking about?

Here some first hand clarifications about the optimizations done in Cinebench 2003 and C4D V8...

For the Mac OS the apps are compiled using CodeWarrior 8.3 with all speed optimizations switched on and scheduling is set for G4 7450...

For Windows the apps are compiled with the latest Intel-Compilers and optimization is done for the P4...

The source code (C++) for both platforms is exactly the same, except for a very small OS compatibility layer and some Apple-specific GL optimizations (AppleVertexArrays and AppleFence); those have been implemented on request from Apple...

AltiVec, MMX, ISSE or ISSE2 is not used at all. There are two main reasons for that:

1) The time-consuming calculations (intersections and color) have to be done in 64 bit; there is no way to do those in production quality with 32 bits only...

2) All those Vector Units are optimized for linear data access, always one value directly after the other (guess why they are called Vector Units). Raytracing is by nature exactly the opposite of this. Data is scattered all over memory; the ray is simply not predictable, and if it were, the entire raytracing problem would be solved...

Simple test: Take an iBook G3 900 MHz and a Powerbook G4 867 MHz and run some renderings on both machines using C4D, LW, Maya, or EI. Why is the G4 not really faster? Because no one is able to use the current VE for production-quality rendering...

SIMD (Single Instruction Multiple Data) is great for streaming data (music, post effects, movies), which is what it was made for, but using it as a general FPU is not working...

Since Version 8 the apps are optimized for Hyperthreading, especially the GI (just try V7 against V8 on an HT-enabled machine)...

Maxon is working directly together with Apple, Intel, and AMD to get the best render speed out of the hardware. Depending on how early we get our hands on new hardware, we do what we can to optimize for it. Most times we have been the first app utilizing new hardware because of that...

Regarding the G5, it does not make much sense at all to discuss the current results; right now there are a few prototype machines with new GFX cards (probably beta drivers) and a patched OS running. What can one expect from a configuration like that...


Cheers, Richard (one of the Maxon developers)


and


an example: There is an algorithm that needs to access memory and of course does some calculations. Now if you always use 64 bit numbers you have to move twice as much memory compared to 32 bit numbers. Now there are some calculations to do, but for those 32 bit does not give enough precision...

Can you see the picture?

As you might understand I can not show you the details of our render engine

For more than 8 years we have been working on, tweaking, and optimizing those routines; they have been developed and tested on the 68000, 68020, 68030, 68040 and even the 68060, 80386, 80486, P1, P2, P3, P4, AMD Athlon, PPC 601, PPC 604, G3, G4, and as soon as I get my hands on the G5, they will get optimized for this one as well...


-- Mark

MadMax
08-24-2003, 12:07 AM
Originally posted by silvergun
but you'll still have to reboot every time you switch between a 32-bit app and a 64-bit app. A lot of time wasted there. Plus you'll be using Windows XP and may have to cough up even more cash for XP Pro. That's more money wasted. Don't forget the heat of these machines...it'll add up to a huge electricity bill.


Wow, sure a lot of completely wrong, uninformed information there.

Fact, Opteron and Athlon64 will run 64 bit and 32 bit apps simultaneously without rebooting. Anyone who says otherwise is wrong.

As for the issues of heat, what do you base your comments on? Certainly not real-world facts, that's for sure. Opterons and Athlon 64s run with far less voltage than the Athlon line they replace did, and most certainly run far cooler.

MadMax
08-24-2003, 12:29 AM
Originally posted by Limbus
It's 6.4 GB/s per CPU, so with 4 CPUs you have around 25 GB/s and with 2 CPUs 12.8 GB/sec. Still much more than the G5.

Florian

No that is not correct.

Opteron has 3 HyperTransport links. EACH LINK, INDIVIDUALLY, supports 3.2 GB/s in each direction, for 6.4 GB/s.....

PER HT LINK.

Not 6.4 GB/s total for the system.

You guys need to go back and read the AMD white papers on the subject again, because no one has gotten it right yet.

Someone even quoted it and still didn't get it right.

So one more time....

6.4 GB/s per HyperTransport link.

Opteron has 3 HyperTransport links.

mark_wilkins
08-24-2003, 12:57 AM
Well, ok, but there are a few things that make that 3x6.4 GB/sec number meaningless:

* One of those links goes to RAM and two can go to other CPUs, and each Opteron has only one memory controller. While this is great as a shortcut for passing cached data around, there's simply no way to get that sustained data throughput from another CPU's cache, because the fetches are going to be small and nonsequential. So you'll never see the full throughput of all of these links exercised simultaneously -- only the ones going to RAM are likely to see huge, sustained data transfers.

* While separate banks of DDR400 RAM can be accessed simultaneously by two CPUs with different memory controllers, one CPU has to wait for access if they want to hit the same bank at the same time. This reduces, somewhat, the effectiveness of all of this parallelism (though the dual RAM paths on the G5 probably suffer equally from this.)

* Anyway, in a shared memory system each bank of RAM has a maximum bandwidth that just can't be exceeded, and this is the memory bottleneck in both the G5 and the Opteron systems. Adding extra memory controllers helps with latency but it doesn't help with the raw data transfer rates out of a single bank of RAM itself.
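To put that last bullet in numbers (a toy model; 6.4 GB/s is the dual-channel DDR400 peak used elsewhere in this thread, and real controllers interleave accesses in more complicated ways):

```python
# Toy model: extra memory controllers raise the aggregate peak, but a single
# stream hitting one bank can never exceed that bank's own rate.
BANK_PEAK_GB_S = 6.4   # hypothetical dual-channel DDR400 bank/controller

def aggregate_peak(num_controllers):
    """Best case: every controller streams from its own local memory."""
    return num_controllers * BANK_PEAK_GB_S

def single_stream_peak(num_controllers):
    """One sequential read targets one bank, so controller count is moot."""
    return BANK_PEAK_GB_S

print(aggregate_peak(2), single_stream_peak(2))  # 12.8 aggregate, 6.4 per stream
```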

So, regardless of what integer multiplier of 6.4 we decide to throw around for what system, the difference is not nearly as much as it seems, and what the benchmark impact is has yet to be seen also.

(Incidentally, all these multi-proc Opteron systems are being marketed into server applications, where latency, not throughput, is the issue for the CPU. Thus, adding a lot of expensive hardware to reduce CPU memory-access latency has a huge payoff. For a machine developed specifically with digital content creation in mind, it's unclear whether the tradeoff is as attractive.)

-- Mark

Thalaxis
08-24-2003, 01:13 AM
Originally posted by mark_wilkins

One of those links goes to RAM and two can go to other CPUs,


None of them go to RAM. The HyperTransport links are not
memory ports.


simply no way to get that sustained data throughput from another CPU's cache because the fetches are going to be small and nonsequential.


That is true.


* While separate banks of DDR400 RAM can be accessed simultaneously by two CPUs with different memory controllers, one CPU has to wait for access if they want to hit the same bank at the same time. This reduces, somewhat, the effectiveness of all of this parallelism (though the dual RAM paths on the G5 probably suffer equally from this.)


That just described an issue common to pretty much all shared
memory SMP systems, yes.


* Anyway, in a shared memory system each bank of RAM has a maximum bandwidth that just can't be exceeded, and this is the memory bottleneck in both the G5 and the Opteron systems. Adding extra memory controllers helps with latency but it doesn't help with the raw data transfer rates out of a single bank of RAM itself.


Opteron uses a NUMA approach, not a shared memory approach.
Each processor has a dual-channel memory controller and its own
local memory, unless the memory interface for the second
processor is not implemented (to reduce board costs).


So, regardless of what integer multiplier of 6.4 we decide to throw around for what system, the difference is not nearly as much as it seems, and what the benchmark impact is has yet to be seen also.


True... especially since it doesn't even apply to the memory ports.


(Incidentally, all these multi-proc Opteron systems are being marketed into server applications, where latency, not throughput, is the issue for the CPU. Thus, adding a lot of expensive hardware to reduce CPU memory-access latency has a huge payoff. For a machine developed specifically with digital content creation in mind, it's unclear whether the tradeoff is as attractive.)


Based on the memory performance numbers I've seen so far, it
seems as if it is working rather well. The Opteron has quite a bit
of memory bandwidth even discounting SMP implementations,
and it can make much better use of that bandwidth than an
Athlon and even a P4 due to the combination of low memory
access latencies and large 2nd level cache (large caches on-die
reduce the number of times the processor needs to fetch data
from main memory).

Also, the potential extra bandwidth from having a memory
controller in each processor is going to be more realizable when
the system is running a NUMA-optimized OS, and with NUMA-
optimized software; in that case, memory can be allocated for
each thread in memory local to the processor that thread is
assigned to, which reduces concurrent access issues.

It's still not going to result in a sustained system bandwidth of
10.8 GB/s (two 5.4 GB/s memory controllers at work with their
own memory), but it's still quite possible for it to achieve more
than the 5.4 GB/s of each individual controller in aggregate.

MadMax
08-24-2003, 01:14 AM
Originally posted by mark_wilkins
(Incidentally, all these multi-proc Opteron systems are being marketed into server applications, where latency, not throughput, is the issue for the CPU. Thus, adding a lot of expensive hardware to reduce CPU memory-access latency has a huge payoff. For a machine developed specifically with digital content creation in mind, it's unclear whether the tradeoff is as attractive.)-- Mark


Eh, not true.

BoXX has been selling desktop systems designed and aimed at content creators for months now.

The Asus SK8N nForce3, although a single-CPU solution, is aimed at content creators and desktops, not servers.

I mention the SK8N specifically because it is an nVidia-based board. nVidia has a different approach to design, and it is clearly geared toward squeezing out every ounce of available bandwidth. An Opteron at 2.2 GHz on the SK8N easily outperforms a 3.2 GHz Intel on real-world apps, even with the great difference in clock speeds.

This is particularly relevant because nVidia announced that dual nForce3-based boards will be shipping soon as well. Using single nForce3 numbers to get an idea of its dual performance will give a fair indication of how it will perform against the competition.

Also, Opterons aren't set up to access the same memory. Each processor has its own individual bank of memory to address, although a couple of board designs took the easy way out and have the 2nd CPU fed by the first.

And further, there are dual Opteron boards out there that are workstation boards, not servers.

Based on initial results, there is a very large payoff to the Opteron design. And since I know you don't follow AMD: Opteron prices have been dropping to a very reasonable level for a decently priced workstation.

Thalaxis
08-24-2003, 03:29 AM
Originally posted by MadMax

BoXX has been selling a desktop systems for months now designed and aimed at content creators.

Given that Boxx's primary market is content creation, that's only
logical.

Dual nForcePro boards are good news, especially if they're based
on the newer 250 model.

Also, Opteron isn't set up to access the same memory. Each processor has it's own individual bank of memory to address. Although a couple of board designs took the easy way out and has the 2nd cpu being fed by the first.


I can see this is going to become confusing, due to the
terminology involved. I've been trying to avoid using the word
"bank" to refer to each processor's local memory for that reason;
processors already have multiple banks of memory.

The implementations that only have local memory for one CPU
are low-cost implementations. Enough traces for 2 DDR channels
is a lot of wires to begin with, and doubling that up isn't exactly
cheap to implement.


based on initial results, there is a very large payoff to the Opteron design.


That Intel has reduced the prices of Xeon processors significantly,
launched a Xeon DP with a 1 MB on-die L3, and launched a
dual-Xeon platform based on the Canterwood chipset indicates
that Intel agrees with that statement.

I suspect that Deerfield's pricing reflects similar thinking.

mark_wilkins
08-24-2003, 03:31 AM
OK, I was wrong on a few points about the architecture.

However, I'd understood that the BOXX systems were not an optimal implementation of the Opteron's memory interface in any case. Maybe I'd misunderstood that point though.

-- Mark

MadMax
08-24-2003, 07:03 AM
Originally posted by mark_wilkins
OK, I was wrong on a few points about the architecture.

However, I'd understood that the BOXX systems were not an optimal implementation of the Opteron's memory interface in any case. Maybe I'd misunderstood that point though.

-- Mark

No, the BoXX design isn't "optimal". The motherboard has a single bank of DIMMs connected to CPU 1, which then feeds CPU 2 via NUMA.

Is this a HUGE difference? No. In practice, the difference is minimal: not enough to consider it a huge flaw, but not the best that could have been done.

mark_wilkins
08-24-2003, 08:39 AM
I think a point that many people are missing (and it's a point that's deliberately not part of Apple's advertising campaign) is that the G5 is not so much a revolution in its current form, but instead a much more promising platform on which Apple can build in the future.

IBM is one of the few non-Intel companies who may be able to make good on promises to keep up with Intel in performance. I think we've seen that Motorola hasn't been such a company in a long while...

Anyway, as far as the current G5, though it more resembles Opteron systems in architecture, it's closer to a high-end single-proc Pentium 4 in price. HP, for example, makes a P4 3.2 GHz machine that's about $2900, and has similar features and functionality (including storage and RAM) to the base dual 2 GHz G5.

Incidentally, I happen to have on hand my freshly-upgraded Intel 3.2 GHz Pentium 4, which is very similar in specs to said HP machine. The Cinebench CPU rendering times, with the latest, non-G5-optimized Cinebench, are 66 seconds on the dual G5 (according to various sources) and 69 seconds on the 3.2 GHz P4 of similar cost. Furthermore, once the benchmark (and Cinema 4D) are G5 optimized, there may be an additional 20% or more in speed there, while the P4 is pretty much doing as well as it's likely to ever get.

As for "I can build a cheaper machine in my garage": well, yes, that's possibly true. However, a store-bought computer will always come with a higher level of support, and the system integration is someone else's problem. I've certainly experienced firsthand horror stories of system integration from companies like Dell and HP that make me shudder and almost make me willing to build my own, but Apple tends to do an exceptional job and I'd rather leave the construction of the system to them. One home-built computer is enough, you know?

(My Dell story is the build-to-order computer they made my mother that included a video capture system. Unfortunately, the video capture driver CONFLICTED WITH THE KEYBOARD DRIVER as shipped!! Furthermore, it took several calls to tech support to identify a support person who was even familiar with the particular video capture system that had been installed in the machine! I don't understand how Dell can have customer satisfaction numbers up there with Apple's... I guess that must have been the exception to the rule, but still, it's pretty weak.)

Funny thing is, my 3.2 GHz P4 is sitting two feet away from my 800 MHz Powerbook, but I probably use the Powerbook three or four times as much. I like the interface, I like the content creation tools (particularly Final Cut Pro) and I like the availability of a UNIX command line. Sorry, but compared to UNIX, Windows is just as deficient on the command-line front as DOS was in 1982. :D

Tell you one thing, though, the Windows machine is a screamer, and so will be the dual G5, when it comes. Maybe next week, if they keep their promise?? Who knows?

-- Mark

richcz3
08-24-2003, 05:51 PM
When real-world benchmarks are out, all the pre-release squabbling won't matter anymore. Steve Jobs is swinging for the fence, and he has a powerful and very competent company in IBM to support him in the transition from Motorola.

All companies hype; even Apple has a PR machine that hypes specs and downplays caveats in performance. Frankly I think it's great that Apple is capable of bringing a new processor to the mainstream. That's just good for everyone.

As for AMD (all my rigs are AMD), the Opteron is getting a lot of good press. However, there are prices to pay for being an early adopter and going with Opteron. The price points just aren't getting me to write a check, and this first round of Opteron support in motherboards, 64-bit OSes, and 64-bit applications is sorely lacking (speaking as a Windows user here).

In the meantime, Xeon and P4 prices are dropping, and they have very favorable price-to-performance points. The whole run to 64-bit processing may lead to some fire sales on 32-bit components. :applause:

richcz3

matty429
08-24-2003, 06:34 PM
The only thing I can recall IBM doing is selling off its hard drive division name in order to keep producing hard drives... How many of you had bad IBM hard drives?

Limbus
08-24-2003, 07:01 PM
Originally posted by mark_wilkins
Furthermore, once the benchmark (and Cinema 4D) are G5 optimized, there may be an additional 20% or more in speed there, while the P4 is pretty much doing as well as it's likely to ever get.

Cinema 4D does not seem to have any SSE2 optimization (like Lightwave) so Maxon could improve the speed on both systems.

Florian

MadMax
08-24-2003, 07:28 PM
Originally posted by richcz3
As for AMD (all my rigs are AMD), the Opteron is getting a lot of good press. However, there are prices to pay for being an early adopter and going with Opteron. The price points just aren't getting me to write a check, and this first round of Opteron support in motherboards, 64-bit OSes, and 64-bit applications is sorely lacking (speaking as a Windows user here).

Apple has one choice of motherboard; AMD has several.

Apple lacks a 64-bit OS also, whereas for the Opteron I have in my hand an official Microsoft disc of Windows 64, given to me by Microsoft.

Apple lacks any 64-bit apps as well.

Prices of Opteron systems vs. G5s seem to be about the same.

richcz3
08-24-2003, 10:11 PM
matty429

You know, everyone seems to know IBM blew it with the "Deathstar" (I mean Deskstar) series of drives. That aside, outside the consumer realm IBM has a solid server business, and Apple is being buoyed by a down-sized version of their server CPU.

MadMax

Opteron and Athlon 64 will have quite a solid selection of motherboards by the 1st quarter of 2004. That's not to say what's available isn't good; it's just that AMD kept final board specs quiet from key consumer motherboard makers. There were pin issues for each chip class, and they wanted to avoid a migration to a cheaper Athlon64. The innards are not so different; the pins will define each chip's end usage, and will help keep Athlon64s from being used in MP boards (i.e., dual workstations and servers).

AMD wants plenty of boards supporting the Athlon64, and they want to keep the two markets identifiable. AMD is trying to nail down how Athlon64s and Opterons are marketed. The workstation, gamer, and fringe crowd wants Opteron; the Athlon64 is targeted at the consumer. The quandary is that if everyone wants an Opteron system, how is the Athlon64 going to be leveraged, aside from hiking prices up on the Opteron?

As for Apple not having a 64-bit OS: that is a given, and of course they have said it will be updated in a patch. Remember, the 1st version of OS X didn't support dual processors, but they sold dual 1 GHz machines anyway. To the people shelling out x dollars for a dual G5, that means nothing, just like people using Windows XP with a dual Opteron for the sheer speed improvement. The beta Windows 64 for Opteron was handed out at SIGGRAPH and other events, and I have heard little or nothing about its performance. Care to divulge any info?

For me, buying an Opteron or Athlon64 while waiting a year or more for maximum usage just doesn't sound reasonable.


richcz3

MadMax
08-24-2003, 11:26 PM
Originally posted by richcz3
The beta Windows64 for Opteron was handed out SIGGRAPH and other events and I have heard little or nothing about its performance. Care to divulge any info?

For me, buying an Opteron or Athlon64 while waiting for maximum usage to come in one year or more just doesn't sound reasonable.


Windows 64 isn't much different from using regular XP. It seems stable enough, though. Personally I prefer 64-bit Linux.

As for buying an Opteron now or waiting, I bought now. Asus SK8N single CPU nForce3 board.

I gained a substantial increase for not all that much money.

moovieboy
08-25-2003, 12:57 AM
Originally posted by MadMax
Apple lacks a 64-bit OS also, whereas for the Opteron I have in my hand an official Microsoft disc of Windows 64, given to me by Microsoft.

Apple lacks any 64 bit apps as well.

That's said like it's both a bad thing and a permanent thing. Imho, Apple is moving smartly into 64 bit territory, both in the short and long terms. It would've been unwise to tell developers, programmers and users who have JUST moved their apps/documents/workflows into OSX, "Thanks for all the hard work... Now do it all over again so it'll be all 64 bit by year's end!"

So, for now, Apple/IBM use their "bridge technologies" to get some juice out of the G5 64 bit architecture with 32 bit apps and the programmers can take their time and be ready for when OS X becomes a full fledged 64 bit system.

So, what's the big deal? :D

-Tom

MadMax
08-25-2003, 01:09 AM
Did anyone actually read the post I was responding to???????

The user was quoting reasons not to buy an Opteron. I only pointed out that G5 is in exactly the same position as Opteron.

No 64 bit OS, no 64 bit software. Very expensive.

So if people are suggesting to wait on Opteron, why are people promoting go buy a G5?

If everyone waits on both systems, there never will be any software for them.

mark_wilkins
08-25-2003, 01:17 AM
Originally posted by Limbus
Cinema 4D does not seem to have any SSE2 optimization (like Lightwave) so Maxon could improve the speed on both systems.


??? They use the Intel compiler, which aggressively uses SSE2 automatically. Are you speaking of manual SSE2 optimizations?

-- Mark

moovieboy
08-25-2003, 06:01 AM
Originally posted by MadMax
Did anyone actually read the post I was responding to???????

The user was quoting reasons not to buy an Opteron. I only pointed out that G5 is in exactly the same position as Opteron.

I re-read them. You're right, my bad :beer:

-Tom

MadMax
08-25-2003, 06:15 AM
Originally posted by moovieboy
I re-read them. You're right, my bad :beer:

-Tom

No problem.

I just thought it odd to criticise one and praise the other when their situations are identical.

Limbus
08-25-2003, 08:01 AM
Originally posted by mark_wilkins
??? They use the Intel compiler, which aggressively uses SSE2 automatically. Are you speaking of manual SSE2 optimizations?

-- Mark
If you compare Lightwave and Cinema 4D Benchmarks you can see that Lightwave is much more optimized on Pentium 4. Intel is even telling reviewers that they should use Lightwave as a benchmark because of this.
http://www.computerbase.de/article.php?id=232&page=22#cinema_4d_xl_r8

Florian

Thalaxis
08-25-2003, 01:43 PM
I think that there's more room for optimization on both the
Opteron/Athlon64 (they are the same processor, so I consider
them to be one) and the G5 than for the P4 in its current
iteration.

The Prescott is obviously another story altogether, but I think that
for the Northwood core we're well past the point of diminishing
returns. Any further performance gains on the Northwood, at
least for Cinema, would probably require a massive re-architecting
of Cinema to make heavy use of SIMD.

Opteron obviously has a lot of room due to being AMD's first
processor with SSE2, and also from the architectural tweaks that
become available in 64-bit mode (namely the extra registers).

richcz3
08-25-2003, 04:24 PM
Thalaxis

Yes, until AMD clarifies any real physical difference between the Athlon64, the Athlon64FX, and the Opteron, I would say that the biggest differences are their pin and cache configs. The short answer is that the Opteron is for dual and larger MP arrangements.
Fortunately, they recently released the minimum starting spec for the Athlon64: 2.2 GHz.

The Prescott is going to be interesting for Intel. The current build of the chip dissipates 100 watts of heat, so even though they are using the 90 nm process, it is running very hot.


richcz3

Thalaxis
08-25-2003, 04:37 PM
Originally posted by richcz3
Thalaxis
Yes, until AMD clarifies any real physical difference between the Athlon64, the Athlon64FX, and the Opteron, I would say that the biggest differences are their pin and cache configs. The short answer is that the Opteron is for dual and larger MP arrangements.
Fortunately, they recently released the minimum starting spec for the Athlon64: 2.2 GHz.


From what I can tell, at launch the Athlon64 will be available at
2.2 and 2.0 GHz (maybe a lower speed grade as well, but I don't
expect it, as it would hurt sales of existing AthlonXP units). There
will be a 75-something pin version with 1 DDR channel, and a
higher-end part called Athlon64FX with dual DDR, initially on a
940-pin socket and later with 939 pins. The only real difference
between the Athlon64FX and the Opteron will be SMP support,
which I think might end the Opteron 1xx line, since it seems to me
to be redundant at that point.

On the 90nm node things start getting more interesting, as the
clock speed will ramp, the Athlon64 will get dual-channels, and it
may be a bit cooler than the 130 nm version.

Also, it seems that the Athlon64 is going to be a revision "c" part,
which includes some SSE2-related fixes. The performance of the
Opteron on SSE2 code right now is pretty weak, apparently the
result of some intolerances for current optimization techniques,
which AMD has addressed and will be including in future models.


The Prescott is going to be interesting for Intel. The current build of the chip dissipates 100 watts of heat. So even though they are using the 90nm process, it is running very hot.


Yes, it is. This is a good indication of what everyone else will have
to deal with in transitioning to 90 nm as well.

Of course, there is also a large amount of extra stuff in the
Prescott core, so it's not entirely clear at this point where the
extra power requirements come from. It is possible that a lot of
it is coming from higher resource utilization, in addition to
process-related issues like gate leakage.

MadMax
08-25-2003, 05:28 PM
Originally posted by richcz3
The Prescott is going to be interesting for Intel. The current build of the chip dissipates 100 watts of heat. So even though they are using the 90nm process, it is running very hot.


Imagine how hot that little thing is going to be.

103w and very little surface area to dissipate the heat across.

Prescott is going to make even the hottest running Athlons look cold by comparison.

Thalaxis
08-25-2003, 05:34 PM
Originally posted by MadMax
Imagine how hot that little thing is going to be.

103w and very little surface area to dissipate the heat across.

Prescott is going to make even the hottest running Athlons look cold by comparison.

Yes indeed... and let's just hope that AMD is able to find a solution
for that same heat problem, since they will need to in order to
stay competitive with Prescott next year.

MadMax
08-25-2003, 05:49 PM
Originally posted by Thalaxis
Yes indeed... and let's just hope that AMD is able to find a solution
for that same heat problem, since they will need to in order to
stay competitive with Prescott next year.

They already have. It's called SOI courtesy of their new pals at IBM.

Thalaxis
08-25-2003, 05:53 PM
Originally posted by MadMax
They already have. It's called SOI courtesy of their new pals at IBM.

Yes, that is the typical misconception.

That doesn't make it any more correct, however.

MadMax
08-25-2003, 06:35 PM
Originally posted by Thalaxis
Yes, that is the typical misconception.

That doesn't make it any more correct, however.

how so?

they have lowered power consumption, thus heat. Opteron runs much cooler.

Read AMD's material on SOI and what they expect it to do. Seems pretty accurate to me.

richcz3
08-25-2003, 06:42 PM
Here's the way I see it. nVidia jumped through the hoop first with the .13 die process and ended up with the fireball 5600 card, and TSMC was bedeviled by low yields. Now Intel has to deal with heat dissipation in going to the smaller 90 nm process.

Sure, the 90 nm process will allow for higher clock speeds, but the higher load on the die that generates the heat needs to be engineered out. This is a case where the 1st one through the hoop doesn't necessarily come out the winner.

In the case of ATI, they stayed with the .15 process and were able to squeeze out the 9700 Pro, which beat the 5600 FX soundly. Now that ATI has moved to .13, TSMC has the process ironed out and can produce at .13 with no yield issues. So nVidia bankrolled the process and ATI is the beneficiary.

There is no doubt that AMD has to do something with their die process. If the Opteron and the 64-bit line expect to see growth, they need to go down to 90 nm or Intel will dominate again. 2.8 GHz on the current die is a speculative achievement at best; the current crop of 64s has topped out at 2 and 2.2 GHz.

richcz3

Thalaxis
08-25-2003, 06:49 PM
Originally posted by MadMax
how so?

they have lowered power consumption, thus heat. Opteron runs much cooler.

Read AMD's material on SOI and what they expect it to do. Seems pretty accurate to me.

No one (at least no one who isn't an Intel bigot) will deny that
SOI has its benefits, as demonstrated by Intel's announced
plans to adopt it in the future (they've stated that they intend to
use it on the 65 nm node).

However, it is also quite obvious and quite true that SOI, while
helpful in some areas, isn't going to prevent outright the
difficulties involved in transitioning to a 90 nm process.

Thalaxis
08-25-2003, 07:02 PM
Originally posted by richcz3
Here's the way I see it. nVidia jumped through the hoop first with the .13 die process and ended up with the fireball 5600 card, and TSMC was bedeviled by low yields. Now Intel has to deal with heat dissipation in going to the smaller 90 nm process.


Yup, nVidia gambled on technology and lost. ATI stuck with a
more conservative fab technology, scaled their design back to fit
the more constrained transistor budget, and won.


Sure, the 90 nm process will allow for higher clock speeds, but the higher load on the die that generates the heat needs to be engineered out. This is a case where the 1st one through the hoop doesn't necessarily come out the winner.


There I don't agree. Being the first to deal with these problems
means that Intel will have a leg up on everyone else as far as
fabs go... well, to be more precise, a bigger leg up. They've
already shown that their fab capabilities are unmatched, even by
IBM who can't match their clock speeds even with a supposedly
more advanced fab process, and an almost identical pipeline
depth, let alone match Intel's ridiculously fast caches on the same
process.

Once Intel gets their 90 nm process ironed out, they'll have all
sorts of advantages over the competition, and their head start
means that they'll be generating revenue on their 90nm fabs
when other companies like IBM are producing test silicon.


In the case of ATI, they stayed with the .15 process and were able to squeeze out the 9700 Pro, which beat the 5600 FX soundly. Now that ATI has moved to .13, TSMC has the process ironed out and can produce at .13 with no yield issues. So nVidia bankrolled the process and ATI is the beneficiary.


Actually, of late there has been some information floating around
that implies that the 130 nm yield issues were not entirely TSMC's
doing, but we'll see.

My theory is that the first CineFX implementation was just too
ambitious for the target technology, and they couldn't get the
problems ironed out as quickly as they'd hoped to, by far.

I imagine that in the next generation, ATI won't have the luxury
of further nVidia stumbles to help them out, since every indication
is that nVidia has the yield issues with their new IBM fab ironed
out, and so, supposedly, does IBM.

IBM had better hope so... they need the money -- badly. They've
been running at around -$1.3 billion on that fab in the past year,
which isn't exactly healthy.


There is no doubt that AMD has to do something with their die process. If the Opteron and the 64-bit line expect to see growth, they need to go down to 90 nm or Intel will dominate again. 2.8 GHz on the current die is a speculative achievement at best; the current crop of 64s has topped out at 2 and 2.2 GHz.
richcz3

Most people are expecting the 130 nm Hammers to top out at
2.4 GHz, but we'll see. Given their timetable for the 90 nm
transition, they don't really need very many more speed grades
to fill the interim anyway, in order to keep ramping. The real
problems for AMD will be Prescott and Deerfield, though; not
Northwood.

richcz3
08-25-2003, 07:35 PM
Oh boy... some comparative benchmarks between G5s and PCs are already making the rounds. Just check a few hardware sites and take your pick.

I can see this getting ugly now. :rolleyes:

richcz3

Thalaxis
08-25-2003, 07:44 PM
Originally posted by richcz3
Oh boy...Some comparative benchmarks between G5's and PC's are already making the rounds. Just check a few HW sites and take your pick.


The funniest thing that I see about these reviews is that most of
them made pretty much the same blunder: they couldn't figure
out how to use one of the easiest benchmarking tools out there.

I find it to be pretty amusing and disappointing that so many
people are having so much trouble figuring out how to use
CineBench... especially since it does ALL of the work for you at
the press of a single button, and all that you have to do is copy
and paste the results.

CGTalk Moderation
01-15-2006, 10:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.