ATi FireGL X1


elvis
12-22-2002, 12:39 AM
SPECviewperf 7 results on spec.org's website:

http://www.spec.org/gpc/opc.data/vp7/summary.html

notice in the two identical dell machines the ATi FireGL X1 beats the nVidia Quadro4 900 XGL. same goes for the two fujitsu celsius machines.

impressive feat. early indications were that the quadro4 was beating the new X1's, but i'm guessing ATi have released some better drivers. kudos to ATi for keeping up software support for their high-end cards. i have a feeling the 4 new workstations i'll be purchasing next month will all have X1's in them at this rate. :)

GregHess
12-22-2002, 12:35 PM
Technically shouldn't the X1 OWN the 900 XGL? It is a next gen card based on the R300 chip (Radeon 9700). The 900 XGLs are still set to compete against the previous ATI line... (the FireGL 8800).

elvis
12-23-2002, 05:23 AM
the FireGL 8800 drivers have had a lot longer to mature compared to the X1 drivers. it's the software that makes the card. put the biggest shiniest whizbangery you like on a card, it don't mean squat if the OS/application can't use it (case in point: recent update in maxtreme drivers).

give ATi time. i'm confident they are putting a lot of hard earned dollars into driver R&D. 3 months from now there should be the performance difference we are expecting.

having said that: a geforce4ti4600 can still beat a radeon 9700 in certain openGL games. perhaps the R300 chips are designed for better pixel shader style rendering and not grunt GL wire/texture code? time (and driver updates) will tell, i guess.

GregHess
12-23-2002, 02:18 PM
Originally posted by elvis
it don't mean squat if the OS/application can't use it (case in point: recent update in maxtreme drivers).

Excellent point.

Originally posted by elvis
perhaps the R300 chips are designed for better pixel shader style rendering and not grunt GL wire/texture code? time (and driver updates) will tell, i guess.

We forgot something however... the FireGL X1 sounds a helluva lot faster than a Quadro4 980 XGL. That's gotta be worth at least a hundred horsepower in stickers. (See most Honda Civics, hehe.)

Speaking of the 980 XGL... maybe nvidia will send me one to test with my new Samsung 191T... so I can report the exact same data everyone else has. :)

jscott
12-23-2002, 02:50 PM
Hey greg,

We were supposed to get some 980 XGLs in some new Compaq/HP boxes with AGP 8X mobos over a month ago, but we haven't seen squat.

-jscott

dmeyer
12-23-2002, 02:53 PM
need......quadro....fx......:twisted:

GregHess
12-23-2002, 03:03 PM
Hey Scott,

That's mainly because there is less than a 1% performance difference between 4x and 8x AGP. In some instances 8X AGP is actually slower... I think either xbitlabs or digilife did a review in max comparing them to the 900 XGLs.

Basically nothing makes use of the architecture currently, which is why you shouldn't just toss your 4x AGP cards out the window... (If you do, make sure it's into my window.)
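For reference, here's the back-of-the-envelope arithmetic behind the 4x-vs-8x point. A minimal sketch using the textbook AGP figures (32-bit bus, ~66.66 MHz base clock) rather than measured throughput:

```python
# Peak AGP bandwidth: a 32-bit (4-byte) bus at ~66.66 MHz, times the
# transfer multiplier. Theoretical peaks only, not measured numbers.
BUS_BYTES = 4
BASE_CLOCK_MHZ = 66.66

for mult in (1, 2, 4, 8):
    print(f"AGP {mult}x: ~{BUS_BYTES * BASE_CLOCK_MHZ * mult:.0f} MB/s")
```

So 8x doubles a link (~1.06 GB/s to ~2.1 GB/s) that current cards rarely saturate in the first place, which is why the benchmarks barely move.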

jscott
12-23-2002, 03:13 PM
Oh no, we aren't tossing anything out the window. These were supposed to be evaluation systems, so we could test the new workstation configurations. I was trying to get them to send over a whoop-ass laptop too so I could test that :).

BTW, we had an nVidia guy here a while back and he was going on and on about the 980 XGL and how much better it was going to be. I suspected, like you stated, that the performance increase would be nominal if any.

-jscott

maelstrom
12-23-2002, 11:37 PM
this might interest some of you:

Fire GL X1 Review by Amazon International (http://www.amazoninternational.com/html/benchmarks/graphicCards/atiFireGLX1/atiFireGLX1.asp)

GregHess
12-24-2002, 01:44 AM
From the review: "What is concerning is the performance drop in the Fire GL X1 in the Dual System over a Single P4 with Hyper Threading."

This type of comment immediately points out the level of technical competence the reviewer has. In this case, little or none.

OpenGL is not multithreaded, and thus does not take advantage of the 2nd CPU. In which case a 3.06 will of course be faster than a 2.8... unless my math skills are really bad, that's a 260 megahertz advantage.

Just a note not to judge the card's performance based on this review, as it's pretty obvious there are some gaping holes in the reviewer's arguments.

elvis
12-24-2002, 05:50 AM
one thing that review does confirm is my claim that the driver updates made the difference. check 1020 drivers vs the 1024 drivers.

as for 8X AGP: it's a nice idea, but too early for such a feature. cards are only now making full use of the 4X AGP standard (2 years ago when 4X was released the same thing happened: people got worse results due to immature drivers). give AGP 8X another 12 months and you'll see its full potential. for now it's just a marketing gimmick.

greg: as for the horsepower stickers, check this beast out: the bitchin fast 3d!!! (http://carcino.gen.nz/images/image.php/5e08eed6/bf3d2000.jpg) :applause:

LeeTN
12-24-2002, 02:42 PM
Elvis,

ROTFLMAO

...where can I get one of these???

I may have to seriously mod my Lian-Li to get it to fit, but what a great conversation starter: "Um... what's that big thing sticking out of the front of your computer?"

:D

GregHess
12-24-2002, 02:42 PM
Originally posted by elvis
greg: as for the horsepower stickers, check this beast out: the bitchin fast 3d!!! (http://carcino.gen.nz/images/image.php/5e08eed6/bf3d2000.jpg) :applause:

Hahahahahahahahaahahah!

That's just classic. Thanks for the link, elvis. Happy holidays btw. :)

elvis
12-24-2002, 02:58 PM
i first saw that ad a few years back now, about the time the whole nvidia vs 3dfx thing really started heating up. nvidia fans were bitching about 3dfx using multi-chip designs rather than new technology, and 3dfx fans were bitching about how nvidia used 32bit rendering and crappy directx5 instead of glide. also the whole online review fiasco started kicking in, with people using incomplete or dud benchmarks to make things look better one way or the other.

then some dude came up with that piece of photoshop goodness, and i think it put everything back into perspective. :) very funny stuff. as you can no doubt guess, 256MB at the time of print was twice as much RAM as most people had in their entire systems. (16MB cards were still pretty hard-core.)

well, it's 1:47AM december 25 here in brisbane, so merry christmas to all of you (and i believe i'm supposed to say "happy holidays" to be PC... something australia hasn't caught onto yet, i might add :) ). here's hoping you can all take some well deserved time off and spend it with your loved ones.

dmeyer
12-24-2002, 03:07 PM
I think the best part is "Quake XIII: It's Hammer Time"

LMAO

Speed Racer
12-26-2002, 09:31 PM
jscott:
BTW, we had an nVidia guy here a while back and he was going on and on about the 980 XGL and how much better it was going to be. I suspected, like you stated, that the performance increase would be nominal if any.

As someone in this thread mentioned, it is the software/drivers that make or break a card. Everyone is plugging 8x cards into systems and expecting software to immediately take advantage.

This is not realistic. Drivers and applications have been highly tuned to stay within the limits of 4x AGP. They have to be equally tuned to take advantage of the increased limits of 8x AGP, and that takes time.

Jscott, I am sure that you will see the performance gains from 980 XGL (in an 8x AGP workstation with the _proper_ drivers) that nvidia was promising you.

Looking at recently posted numbers from the workstation vendors, it looks like the 980 XGL gives between a 3% and 30% performance gain over the 900 XGL. Certainly nothing to sneeze at, with their pricing being roughly equal after the recent price drop.

In addition, it appears that Nvidia's lead in software abilities has kept the 980 XGL competitive with, and sometimes faster than, the FireGL X1. With the X1 being based on a chip that sits somewhere between a 980 XGL and the fast-approaching NV30gl in specs, it is interesting to see the X1 losing benchmarks, or barely winning them, in this small window of opportunity it has.

ATI bought a good hardware team in the ArtX guys. Nvidia's move to build the NV30gl on a 0.13-micron process was a risk that has given ATI a window to slip the R300 into. However, ATI has yet to come close to the software talent that is at nvidia.

We will soon see what happens with the NV30gl.

Speed Racer
12-26-2002, 10:43 PM
Oh, and BTW, OpenGL can very much indeed be multi-threaded.

There is still a fair amount of triangle setup that is done on the CPU, as well as the whole interface between the app and system memory on one side and the graphics card and its memory on the other.

Well-written gfx hardware drivers and OpenGL implementations can take advantage of the increased memory bandwidth and processing power that a dual CPU system gives you.

It is not wrong to be a bit disturbed by the X1 taking a perf hit in a dual CPU system.

Gregwid
12-27-2002, 12:00 AM
Originally posted by elvis
SPECviewperf 7 results on spec.org's website:

http://www.spec.org/gpc/opc.data/vp7/summary.html

notice in the two identical dell machines the ATi FireGL X1 beats the nVidia Quadro4 900 XGL. same goes for the two fujitsu celsius machines.

impressive feat. early indications were that the quadro4 was beating the new X1's, but i'm guessing ATi have released some better drivers. kudos to ATi for keeping up software support for their high-end cards. i have a feeling the 4 new workstations i'll be purchasing next month will all have X1's in them at this rate. :)


Me too. I am in for that exact setup. Cool news, Elvis. Thanks. :beer: :applause:

GregHess
12-27-2002, 01:15 AM
Er... all the test data with the 980 XGL vs the 900 XGL shows less than a 1% performance gain... in max, or in specviewperf. I don't think software has anything to do with not seeing a performance increase with these cards... it's more a hardware issue.

The AGP bus really isn't a bottleneck right now. It wasn't with AGP 4x, and it sure as hell isn't with AGP 8x. It's almost akin to the whole ATA bandwidth BS. Sure, Serial ATA is fine with 150 MB/sec of bandwidth... but nothing out there can utilize that bandwidth (especially since, at this current time, all the embedded controllers cannot daisy-chain more than a single drive).

When there are no bottlenecks, increasing bandwidth doesn't result in any real noticeable performance increase.

Quick example: a Pentium III system with SDRAM vs DDR. Technically DDR has twice the available bandwidth... is it noticeable? Not at all. Both in the real world and in benchmarks, there is barely any noticeable change in performance.

Now take another processor, a Pentium 4, and equip one system with SDRAM and the other with DDR. Performance difference? HELL YA. A massive one. In some cases a DDR/Rambus-equipped P4 will perform as if it were 500 megahertz faster than one on an SDRAM platform. (Aka a 1.5 P4 on Rambus would be almost equivalent to a 2.0 P4 on SDRAM.)
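The rough peak-bandwidth arithmetic behind that point, as a minimal sketch using textbook peak figures (PC133 SDRAM, PC2100 DDR, and the P4's quad-pumped "400 MHz" FSB), not measurements:

```python
# Peak memory-bus bandwidth = bus width (bytes) * clock (MHz) * transfers per clock.
# Textbook peak figures only, not benchmark results.
def bandwidth_mb_s(bus_bits: int, clock_mhz: float, transfers_per_clock: int) -> float:
    return bus_bits / 8 * clock_mhz * transfers_per_clock

print(f"PC133 SDRAM:  {bandwidth_mb_s(64, 133, 1):.0f} MB/s")  # ~1064
print(f"PC2100 DDR:   {bandwidth_mb_s(64, 133, 2):.0f} MB/s")  # ~2128
print(f"P4 '400' FSB: {bandwidth_mb_s(64, 100, 4):.0f} MB/s")  # ~3200
```

A P3's 133 MHz FSB tops out around the same ~1 GB/s as PC133 SDRAM, so DDR's extra headroom goes unused; the P4's FSB can consume ~3.2 GB/s, so SDRAM starves it. Same bottleneck logic as the AGP case.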

Of course 8x AGP adds some other features...but these won't be taken advantage of for quite some time. At least not until we've seen the 2nd or 3rd generation of the FX cores.

As for the multithreaded nature of opengl... could you please point us to some articles, information, or real-world tests of the 2nd CPU helping the first in viewport rendering?

My own personal tests show little if any difference in OGL fps in 3dsmax 3 through 5 (d3d and ogl) between single and dual CPU systems. If what you say is true, I'd love to know, as I'm sure many others would, how we can get the 2nd CPU to run multithreaded opengl.

Speed Racer
12-27-2002, 07:56 AM
Looking at the mentioned SPEC Viewperf page and comparing two 3.06 GHz systems, I see boosts of 3%, 18%, 9%, 24%, 18%, and 15%. I expect other examples to be better or worse than this, as this one was taken at random.

I do expect the numbers in the above example to improve as Nvidia continues to tune to AGP 8x and around some possible enhancements in the NV28gl chip.

I honestly don't understand why you keep saying "less than 1% difference" when the data is out there for you to do the math with. I did notice that a number of articles compared the 900 XGL and 980 XGL using drivers that were not yet tuned for 8x AGP (based on my understanding of when that work first showed up and which rev number Nvidia released it with). You may want to check whether the articles you are depending on fall into that category.
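For what it's worth, the math in question is just a percentage gain between two Viewperf scores. A minimal sketch, with made-up placeholder scores rather than the actual spec.org submissions:

```python
# Percentage gain of one card's Viewperf score over another's.
# These score pairs are hypothetical placeholders; substitute the
# real numbers from the spec.org summary page.
scores = {
    "3dsmax-01": (20.0, 20.6),  # (900 XGL, 980 XGL) -- hypothetical
    "ugs-02":    (50.0, 62.0),  # hypothetical
}

for viewset, (old, new) in scores.items():
    gain = (new - old) / old * 100
    print(f"{viewset}: {gain:+.1f}%")
```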

As for multi-threaded OpenGL drivers, I have seen a number of papers and articles about that published by SGI. One of the biggest bottlenecks in 3D perf is host system memory bandwidth. Dual CPUs increase the overall memory bandwidth available, and I have seen numbers from nvidia that show they get a gain from that. On SGI systems they have shown decent gains in graphics-only performance as a result of a second CPU.

I will let you Google for the details. :-)

My 1 min. glance found a pdf from 3dlabs about their "Highly-optimized, Multi-Threaded PowerThreads OpenGL Drivers..."

I see some FireGL 1, 2, 3, and 4 (true FireGL boards :-) info about their multi-threaded drivers...

As I mentioned, even with the geometry power on today's cards there is still triangle setup work that is done in the application and at the application/driver boundary. It is done with the CPU(s), and good driver technology will take advantage of the second CPU for this processing _and_ for the increase in host memory bandwidth.

I stand by the statement that ATI should benefit from a second CPU, let alone not dip in performance with their RadeonGL X1... uh, I mean "FireGL X1 based on the Radeon 9700". :-)

GregHess
12-27-2002, 03:14 PM
For the 8X AGP information, I'm going by nvidia's latest driver sets and by actual real-world usage, as in max 5, maya 4, etc. Most of the spec data from "most" websites, using both publicly available drivers and "beta" ones, shows very little if any performance boost. In fact, some of the sites actually show a PERFORMANCE DECREASE when using AGP 8x.

http://www.tech-report.com/reviews/2002q4/gf4-8x/index.x?pg=9

http://www.xbitlabs.com/video/3dsmax5-980xgl/

I'll post the other urls once the pages go back up. Got another two.

[Of course these tests don't exactly stress max, and what not, but still, you'd think you'd at least get SOMETHING]

You'll also note that in some cases the actual nvidia data is proven completely wrong. If nvidia's claims of performance gains in 3d apps are true, and if they do require specifically optimized drivers... then why release the cards when no such drivers are available to the public? [In case you don't feel like reading: nvidia is claiming a whopping 5%, and in reality this doesn't even amount to 1%, somewhere near 0.1%.]

The problem with using google to search is that it tends to surface the most biased pages first. Doing a search on AGP 8x performance brings you straight into nvidia's lair, which of course is going to be extremely biased.

What's the exact link for the specviewperf data? As shown above, most reviewers, even using unreleased driver sets, have shown little if any performance benefit with CURRENT TECHNOLOGY. Much greater benefit comes from a RAM or clock speed increase than from moving from a 4x platform to an 8x one. Remember, I'm not saying that 8X AGP won't give an advantage; I'm saying it's not giving an advantage now, and won't be until a few card generations down the line (most likely the R400 and the successor to the FX). I don't believe this will be solved through software, as the current hardware just can't utilize the increased bandwidth (it's not designed to; it's mainly just a marketing gimmick). It will take future hardware changes to realize the advantages of AGP 8x.

Off to the other topic... multithreaded drivers...

3dlabs VP and Wildcat4

1. 3dlabs is no longer a competing force, either with or without multithreaded drivers. The average 3dlabs card can be beaten by some cheaply available gaming accelerators. (Even the new Wildcat 7210.)

http://www.3dlabs.com/support/drivers/index.htm

(No available or previous drivers for any "tier 1" cards list support for dual CPU opengl. If they do, I can assure you it's "support", not optimization, meaning it'll merely work in a dual CPU system.)

http://www.3dlabs.com/product/wildcat4/faq.htm

http://www.3dlabs.com/product/wildcat4/features.htm

http://www.3dlabs.com/product/wildcatvp/faq.htm

http://www.3dlabs.com/product/wildcatvp/index.htm

You'd also think that 3dlabs would boast continuously about a feature like "multithreaded opengl"... but it's mentioned nowhere in any of the faqs, drivers, or feature lists of any of their newer cards.

2. ATI. The FireGL line....

Just going to look at the 8800, FireGL4, and X1. [Latest ones]

The 8800... mentions dual CPU support, but neither optimization nor support for multithreaded ogl.

The FireGL4...

http://www.ati.com/support/manualpdf/FGL4UGENG.pdf

Actually mentions Threadsafe OGL 1.2 SMP Support. Which raises the question... why hasn't a single person done a comparison between a single and dual CPU system?

Check out page 50 of the manual now. You'll note that any mention of SMP ability through optimized software has been removed. I also believe the card's ability to do SMP OGL was removed for stability reasons.

The X1. Since this is based on a 9700 pro... I seriously doubt it'll have any sort of SMP optimizations. If it does, I'll be really surprised and give kudos to ATI... but since ATI is usually pretty #$)!@& with drivers (the FireGL 4 was the last real firegl), I seriously doubt we'll see any implementation of this.

3. Nvidia. No nvidia accelerator currently supports SMP Ogl or d3d.

Speed Racer
12-27-2002, 07:36 PM
Greg,

You have used the masterful technique of confusing the heck out of me and have won as a result. :-) I can no longer follow what you are even arguing. :-) ...and honestly, a lot of your statements don't make sense (especially around the idea of "beta" drivers and nvidia's claims).

I posted the SPEC Viewperf 7 comparisons (the URL is well known and was posted in this thread) which showed an average of about 15% gain and a max of 24% for 980 xgl over 900 xgl in identical systems.

I just looked at SPEC apc and there are no systems similar enough to compare without you arguing about the system differences. :-) However, ignoring the system differences, I can see a gain of 1% for max, which fits with what you are seeing. This shows me that this particular test is not large enough to see a difference.

I can assure you that I have worked with major corporation-specific benchmarks that have shown a substantial performance increase with AGP 8x cards under Unigraphics with extremely large model sizes.

The rule of thumb remains the same: if AGP 4x was a bottleneck for your workflow (which it can be, despite your weird claim that gfx cards aren't advanced enough), then AGP 8x with proper drivers will reduce the impact of that bottleneck. This means you need to have very, very large model sizes (common in a host-interface-limited situation) or be doing tons of memory-to-graphics transfers (which is common with HD video work, etc.).

I think what was most strange about your post was the nebulous claim that it is going to take an R400 or NV40 to take advantage of AGP 8x. First, they will probably not be AGP by then (:-)), but more importantly, it shows a lack of understanding of how graphics hardware works (no offense intended).

A graphics card can either be host interface limited or not. I assure you that R300 and NV25/NV28/NV30 can very much be host interface limited at this point. That is not to say that sticking a 980 xgl in an AGP 8x system fixes all your performance woes, but it does change the equation a bit.

Lastly, the 980 xgl is _not_ just about AGP 8x. You can't explain the performance gains over the 900 xgl simply by a 2x gain in host interface, especially considering some of the benchmarks are obviously not host limited. I don't know what the HW/SW differences are, but there is something.

My point to jscott was to say that I have seen 980 xgl benchmarks with large data that have shown a substantial increase in performance over 900 xgl. Some of that was due to AGP 8x, some of that might have been as a result of nv28gl specific mojo.

jscott
12-27-2002, 08:54 PM
I wasn't trying to start a fight but it did make interesting reading. Thank you Greg and Speed :).

-jscott

Christian Mir
12-28-2002, 01:57 PM
Hey guys,

I'd like to add that www.spec.org features the most technically correct and complete suite of benchmarks in the industry. They are an industry standard.

Their latest specviewperf results show that the FireGL X1 is currently dominating the professional 3d arena. Note that ATI has updated their results with the newer drivers. There is a benchmark that shows the X1 beating the 980XGL by 40%!

It's good to see the FireGLs on top, though we still have to see some image quality comparisons between the "high end" graphics cards...

Come on, Greg... www.spec.org has been there for years. And they are really showing some nice improvements of the 980XGL over the 900s. Let's keep in mind that their benchmark suite uses some fairly large (and bandwidth intensive) datasets.

Nice discussion above all!

:beer:

GregHess
12-28-2002, 03:00 PM
I'm not talking about spec, I'm talking about maya, max, xsi, lw, or whatever 3d package you're using. Spec scores are a gauge, not a representation of real-world results.

The only real-world tests out there on the 900s and 980s right now show a VERY minor increase in performance in 3d apps. And the independent tests done by tech-report on the ti 4200 (4x and 8x) versions show minor changes to spec scores (some faster, some slower). Though this could be due to drivers, I don't believe it is.

As stated above, as with the jump from 2x to 4x AGP, we won't see actual jumps in performance until the cards start to take advantage of 8x AGP. I believe it's hardware; Speed Racer believes it's software changes. Two opinions :).

Remember, with most 4x AGP cards you can bump them down to 2x AGP and notice very little change in performance. Hell, even some of the tests between PCI and AGP versions of the same cards don't show that big a change (10-15%, despite the theoretical maximums).

That's my opinion and I'm sticking to it.

elvis
12-28-2002, 10:03 PM
Originally posted by GregHess
I'm not talking about spec, I'm talking about maya, max, xsi, lw, or whatever 3d package your using. Spec scores are a gauge, not a representation of realworld results.

and may i add: hooray for people who optimise drivers for benchmark programs... not. (check the nvidia changelogs in driver revisions for things like "better SVP performance" and "improved 3dmark performance"... there's tonnes of them).

[EDIT: spell check!]

Speed Racer
12-29-2002, 04:11 AM
How can they not pay attention to their HW & driver performance on industry benchmarks when we sit here and compare them and CIOs base a company's standard on them? :-)

There are many larger companies that put forth an enormous effort to define benchmarks around their particular workflow. This is obviously the ideal way to judge what you buy. And when given the chance ATI, 3dlabs, nvidia, etc. will gladly profile their performance with an end user benchmark and look for ways to improve.

You have to be very smart about how you look at benchmarks. For example, the new Viewperf numbers for the X1 look really good. Look at the jump in UG! But then one should take serious notice that there is not one single X1 submission for SPECapc Unigraphics. Why??? Is it because no one has bothered to run the test yet?

No. It is because they get their ass kicked by nvidia or 3dlabs. In my opinion, SPECapc is more important than viewperf but we have all been quoting viewperf for so long that we always go back to it. :-)

Note that Wildcat is now getting beaten in viewperf pretty easily, and yet it remains a decent solution for a lot of people with big models and older software. Why doesn't Viewperf show this? If someone were to suggest that the X1 kicks a Wildcat's butt in application performance, they would either be crazy or have not done any real testing and simply looked at Viewperf 7.

I think looking at the SPECapc benchmarks shows that nvidia is the strongest across-the-board performer, with untouchable bang for the buck. Wildcat is still strong with certain applications and workflows, but it is really starting to show its age and you pay too much for the small percentage gain, if any, over the 980xgl. (I assume nv30gl will really make it ugly for 3dlabs.)

ATI is the one that bothers me. They still don't take the workstation market seriously and seem to have a strategy of destroying any validity in SPEC Viewperf. I base a lot of this on end-user testing and SPEC apc, which shows their performance is not there for real application work. There is not a single SPECapc benchmark for the X1 that was good enough for someone to submit (although they did submit the E1 due to its $/perf).

elvis
12-29-2002, 09:21 AM
Originally posted by Speed Racer
How can they not pay attention to their HW & driver performance on industry benchmarks when we sit here and compare them and CIOs base a company's standard on them? :-)

this is my point. it's a vicious circle. in a bid to find an easy way to compare two totally different pieces of hardware, the benchmark was born. as a CIO, i must admit benchmark results do sway me heavily on what becomes a part of the office and what stays on the shelf. having said that, companies like nvidia, ati and the rest do optimise their drivers for said benchmarks in order to capitalise on that very fact.

having said that, optimising for benchmarks that use varied code like the SVP benchmarks should theoretically boost any application using similar code. or at least one would think so.

at the end of the day i guess if my CAD guys are getting 30-40 FPS as a consistent minimum in a standard 4-viewport CAD package throwing a couple million polys around, i'm pretty happy. but it's still nice to know you've tried to research the best you can give them for your dollar, and again it's off to benchmark land we head.

vicious circle ahoy. as long as people remember to use several different pieces of industry software as well as benchmarking tools, i guess it's a fairly safe bet.

Christian Mir
12-29-2002, 02:19 PM
Hi folks,

What a nice discussion!

It's really great to have the opportunity to talk with so many people with tons of experience and knowledge. CGTALK ROCKS!!!

I would like to add my opinion to the FireGL X1/SPECapc subject:

I think that ATI has the best piece of 3d hardware right now (the GeForce FX is not on the market yet), although they haven't submitted SPECapc results... that's a matter of time... the X1 has just entered the market.

I know everybody (in the pro field) is concerned when it comes to considering buying an ATI piece of hardware (and software)... and I include myself in that pool. But let's look at the work they have done on the FireGL 8800... it is three to four times faster than the consumer-level RADEON 8500 using the same GPU... the Quadro4 is not that much faster than the GeForce4. So the guys at ATI have done a great job on their drivers, at least. And despite the fact that the Quadro4 GPU is younger than the RADEON 8500, I've seen several real-world benchmarks on Xbitlabs and Extremetech that show the GL8800 has a clear advantage in wireframe performance... sometimes it's 2x faster than the 900XGL. And most 3d artists/engineers/technicians spend most of their time in wireframe mode.

Speed Racer: we have to recall that the FireGL E1 has been on the market for months now. I remember seeing it on dell.com a couple of months ago. The X1 is a way younger product. OK, they should have already submitted some APC benchmarks... let's just wait a little bit more.

Changing the subject a little bit... I've seen some early scores of an FX GL prototype (I don't recall the url, it was a link from aceshardware I think)... Guys, if the japanese site was telling the truth, it is 3x faster than the 900XGL in some tests!!!

If 3dlabs is already having a hard time trying to justify the Wildcat 4's price/performance despite its small improvement over the 6210/6110... they will go crazy when Nvidia's FX GL reaches the market. OK, we have to admit that 3dlabs' full scene antialiasing is the best on the market and their image quality is superb... but who can stop an FX GL monster 3X faster than the current 900XGL???

Sorry for the long post. I just couldn't stop writing.

Let's keep the great discussion up!

Regards,

Christian

:beer:

Speed Racer
12-29-2002, 08:46 PM
Christian,

I have to post one correction to your comment.

The SPECapc benchmarks for the FGL X1 _have_ been run. They are slower than 980 XGL (and 900 XGL and possibly even 7x0 XGL).

You can download them and run them yourself, right now.

ATI does not post the results. The workstation vendors that use ATI or nvidia's cards post the results for the entire workstation.

Dell has posted SPEC Viewperf numbers with FGL X1. You can bet your life that in all the time they have had X1 (including pre-release) they have run SPECapc as well.

One of the really bad things about SPEC is that you can pull your results once you start getting whooped. This is why you will see numbers show up for one system and then suddenly they are gone. Why are there Wildcat numbers for one or two SPEC apc benchmarks but not for SPEC apc Unigraphics?

Those of us that pay close attention know exactly why FGL X1 numbers are not listed for SPEC apc right now. Because they get beat. Unfortunately, the casual reader will just assume they haven't been done yet or something similar.

Sort of a bummer since it is a better tool to compare application performance.

Christian Mir
12-29-2002, 10:01 PM
OK...


Christian,

I have to post one correction to your comment.

The SPECapc benchmarks for the FGL X1 _have_ been run. They are slower than 980 XGL (and 900 XGL and possibly even 7x0 XGL).

1. The FGL X1 has been on the market for only a few days... I would wait a little bit more; its drivers are too young. Especially if we consider that the FGL 8800 is (according to extremetech and xbitlabs) faster than the 900XGL in wireframe mode...

You can download them and run them yourself, right now.

2. That would be a waste of time. My main graphics card is a FireGL 2...and I have already tested it with SPEC Viewperf and with Max 4.2 and compared it to my other Gfx cards.

ATI does not post the results. The workstation vendors that use ATI or nvidia's cards post the results for the entire workstation.

3. I've been working in 3d for 9 years now; spec viewperf has probably been out for longer than that. I've known from the beginning that the vendors post the results. Sorry if I was misunderstood.

Dell has posted SPEC Viewperf numbers with FGL X1. You can bet your life that in all the time they have had X1 (including pre-release) they have run SPECapc as well.

4. I wouldn't bet my life. I don't disagree with you. I just think that the X1's drivers are still young. If you consider that the R300 is able to push more pixels and polygons than the GeForce4, and that it was designed from the ground up with the workstation market in mind as a second option, you will know that it is a matter of "maturing" the drivers.

One of the really bad things about SPEC is that you can pull your results once you start getting whooped. This is why you will see numbers show up for one system and then suddenly they are gone. Why are there Wildcat numbers for one or two SPEC apc benchmarks but not for SPEC apc Unigraphics?

5. Why aren't there 900XGL numbers for Solid Edge V11? There is another point to consider: the workstation vendors have contracts with the chip makers... Nvidia dominates more than 50% of the market... so would it be interesting for them (Dell, IBM, Fujitsu) to put an ATI product on top of every benchmark? I'm not making any statement...

Those of us that pay close attention know exactly why FGL X1 numbers are not listed for SPEC apc right now. Because they get beat. Unfortunately, the casual reader will just assume they haven't been done yet or something similar.

6. I may be a casual poster in this forum, although I will be posting here more often from now on... but I am not a casual reader... I do feel a bit of salt in your comment. Let's just keep the discussion HEALTHY. I didn't mean to offend you in my last post.

Sort of a bummer since it is a better tool to compare application performance.

7. I agree with you that SPEC apc is more application-specific (as the name indicates), although the SPEC viewperf 3dsmax-01 viewset uses the same models used in SPECapc for 3ds max™ 4.2.

That's it.

elvis
12-29-2002, 11:22 PM
Originally posted by Speed Racer
The SPECapc benchmarks for the FGL X1 _have_ been run. They are slower than 980 XGL (and 900 XGL and possibly even 7x0 XGL).

here's the SVP 3dsmax benchies (not SPECapc):
http://www.spec.org/gpc/opc.data/vp7/3dsmax-perf.html

and these are the descriptions:
http://www.spec.org/gpc/opc.static/3dsm01.html

q980 = nvidia quadro 4 980XGL
ax1 = ati fire gl x1
both i've taken from the fujitsu benchmark, which enables AGP8X on both cards, so no arguments there on hardware bias.

1, 2, 6, 7, 11, 12 are smooth shaded: at medium complexity the ax1 wins, at high complexity the q980 wins.

3, 4, 8, 9, 13, 14 are facet shaded: the ax1 wins at medium/high complexity, all bar the most complex lit scene, which the q980 takes.

5, 10 are wireframe: the ax1 wins. (a quick tally sketch follows below.)
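a minimal sketch of that tally. the test-to-mode grouping is taken from the breakdown above; the scores themselves are made-up placeholders, so substitute the real per-test numbers from the spec.org page:

```python
# Group the 3dsmax-01 subtests by render mode and count ax1 wins.
# The scores below are hypothetical placeholders, not the actual
# fujitsu submissions.
MODES = {
    "smooth shaded": [1, 2, 6, 7, 11, 12],
    "facet shaded":  [3, 4, 8, 9, 13, 14],
    "wireframe":     [5, 10],
}

# scores[test] = (ax1, q980) in frames per second -- placeholder values.
scores = {t: (10.0 + t, 9.5 + t) for t in range(1, 15)}

for mode, tests in MODES.items():
    ax1_wins = sum(1 for t in tests if scores[t][0] > scores[t][1])
    print(f"{mode}: ax1 wins {ax1_wins} of {len(tests)}")
```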

from this, i'll put forward the following (purely observational, no bias intended):

if i was doing high poly modelling and not much smooth shading, so far i'd go with the firegl x1. it seems to not mind the raw polygon maths side of things.

if i was doing medium poly with occasional smoothshade, i don't think i'd have a preference.

if i was doing low-medium poly modelling with a lot of smoothshading, i'd probably choose the quadro 980xgl. it seems to not mind the lighting and shading style maths that's needed.

as mentioned, the ATi drivers aren't quite mature yet, so i'm expecting good things from them in the future.

at any rate, it's good to see two contenders on the market again. there's nothing worse than one kick arse product and everyone lagging miles behind. bad for competition, bad for technological advance, and bad for my wallet. :)

Speed Racer
01-01-2003, 12:21 PM
Originally posted by Christian Mir
Hey guys,
Their latest specviewperf results show that the FireGL X1 is currently dominating the professional 3d arena.
:beer:

The FireGL X1 is dominating Viewperf, not the "3d arena." It is getting _dominated_ in SPEC apc and in application benchmarks I have worked on.

Looks like ATI is doing a good job at viewperf performance engineering. Let's hope that progresses to actual application performance at some point in the future.

CGTalk Moderation
01-14-2006, 01:00 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.