Video Cards GGGRRRRRRRR


Maaahshoe
01-02-2003, 01:32 PM
Okay back to the basics!

I'm looking at buying a new system any day now, though deciding which video card to get is becoming really confusing!

Working in 3D graphics, I was thinking of getting the FireGL X1, as I heard it's a psycho piece of equipment! BUT I also want to be able to run games on the system, which I heard it sucks at. I read up on the Radeon 9700 and it goes hard with games; if I remember right, it left the NVIDIA Ti 4600 for dead! Would the Radeon perform well in a 3D application such as Maya or the like? I heard the FireGL X1 is based on that card, so what's the deal? And what's the performance difference? I haven't found any comparisons....

Any help would be greatly appreciated! :D



Matt

Kabab
01-02-2003, 02:07 PM
Yeah, from my experience the FireGL series of cards are awesome for 3D work but crappy for games..

Maybe try getting a higher-end NVIDIA card, a Quadro of some sort. They still play games OK and are good for 3D work....

What software are you using? You can usually find a certified video card list with known bugs etc...

Maaahshoe
01-02-2003, 02:15 PM
Software? As in the 3D software? I use Max and Maya, plus the typical Photoshop, Combustion, After Effects.

But a question! If the Radeon 9700 Pro goes hard in games, does that mean it would kick in a 3D app? Isn't the card just there to take the load off the processor for real-time viewport updates? I know it has no effect on rendering. But wouldn't that mean the Radeon would work well in 3D?

But hhmmmmm I've been looking into the Quadro and also the GeForce FX.


Still confused :annoyed:

MaDSheeP
01-02-2003, 02:49 PM
well.. game cards and pro cards are two different things...

the video game cards are good at fill rates... filling geometry with high res texture maps about 60 times a second at 1600x1200 resolution..

but you are rarely doing anything like that with your models... you are dealing with very high poly counts... and that's where game cards start to choke.... i think

like... a mesh of about a million polygons would kill any standard setup's refresh rate, but a really nice Quadro card wouldn't have a single problem with that many polygons.. so..

game cards work well for most people... and you could even look into buying a GF4 4600 and just softquadro it...

but the main thing is.. how the cards work with high poly models, and stability is always an important issue too... you get awesome stability with the pro cards.. but with the game cards you may not be so lucky...

just my incoherent opinion :)

Maaahshoe
01-02-2003, 03:04 PM
Thanks a lot madsheep! I forgot about the polycount deal! But still I'm so annoyed, but I guess you can't have the best of both worlds UNFORTUNATELY!!!!!!! :annoyed:


What did you mean by softquadro it????

Kabab
01-02-2003, 03:19 PM
madsheep is on the money!!

Check the Alias site; they have an extensive list of video cards that are compatible with Maya and what bugs they have..

GregHess
01-02-2003, 04:10 PM
The closest thing to the best of both worlds would be a GeForce4 Ti 4200. It's cheap, it's fast, and it works well in both games and applications.

It doesn't hurt the pocketbook, so you've got cash for other things, like taking your gf out to dinner to appease her over the new video card purchase, or buying more RAM, or a CPU upgrade.

Spliff_Richards
01-02-2003, 04:47 PM
Which one would you guys buy: the GF4 4600 or the 4200?

loop29
01-02-2003, 11:10 PM
Guys, here is a rundown of where a professional card gives you more performance, and of the differences between game cards and professional graphics cards in general:

The customer's expectations
The term professional workstation usually implies high driver and hardware quality, excellent reliability, responsive support, and high performance.
Workstation users have these expectations because they look at their computers from a different point of view. It's a tool, not a toy; they have to work with it.
Their goal is to design, create scenes or present their work. Time is money, so they don't accept crashes, baffling bugs or avoidable delays in their workflow.
They don't change their adapter twice a year, so they expect driver support for at least two years.
Because in most companies purchased hardware has to be qualified, it should be available for a long period of time, whether or not it is still up to date.


Public Opinion
People often ask about the differences between Quadro and GeForce adapters, because their technology appears to be very similar while their prices differ considerably.
Even in CAD-related newsgroups many users report that GeForce adapters work fine in 3ds max, Inventor, AutoCAD and so on.
What people mostly forget is that exactly this similarity is what dropped the prices of professional adapters. NVIDIA's concept worked out for both sides: the quality and versatility of the consumer adapters, and the pricing of the professional products.
But there are lots of differences, and NVIDIA puts considerable effort into creating the perfect solution for both parties.


1.1 Hardware antialiased lines
A unique feature of the Quadro GPUs is support for antialiased lines in hardware. This has nothing in common with GeForce full-scene antialiasing.
It works on lines, not on shaded polygons, without sacrificing performance or using extra video memory for oversampling. Most professional applications support this feature because it is standardized in OpenGL.
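As a minimal sketch (assuming a current OpenGL context), an application enables these standardized smooth lines like this:

    #include <GL/gl.h>

    /* Minimal sketch: turn on OpenGL's standard smooth (antialiased) lines.
       Blending is needed for the coverage values to have a visible effect. */
    void enable_aa_lines(void)
    {
        glEnable(GL_LINE_SMOOTH);
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);   /* ask for the best quality available */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }

On hardware with the feature, the driver can satisfy this path directly, without the oversampling cost described above.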


1.2 Logical Operations
Another unique feature of the Quadro GPUs is support for OpenGL logical operations. These can be applied as a last step in the rendering pipeline, before the result is written to the framebuffer. Workstation applications use this functionality to draw on top of a 3D scene, for example to mark a selection with a simple XOR operation.

Having this done in hardware avoids the significant performance loss that a GeForce adapter would show.
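A minimal sketch of that XOR selection-marking idea in standard OpenGL, assuming a current context; the function name is just for illustration:

    #include <GL/gl.h>

    /* Draw a selection rectangle with the XOR raster operation.  Because XOR is its
       own inverse, drawing the same rectangle again erases it without a full redraw. */
    void draw_xor_selection(float x0, float y0, float x1, float y1)
    {
        glEnable(GL_COLOR_LOGIC_OP);
        glLogicOp(GL_XOR);                 /* combine fragments with framebuffer via XOR */
        glBegin(GL_LINE_LOOP);
        glVertex2f(x0, y0); glVertex2f(x1, y0);
        glVertex2f(x1, y1); glVertex2f(x0, y1);
        glEnd();
        glLogicOp(GL_COPY);                /* restore the default operation */
        glDisable(GL_COLOR_LOGIC_OP);
    }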


2. The OpenGL Differences
On consumer and workstation adapters, OpenGL is used for different purposes:
The most common applications for GeForce adapters are full-screen OpenGL games.
CAD applications work with OpenGL windows, in combination with 2D elements.
The unified driver architecture provides an optimized implementation for both professional and consumer demands. While consumer applications have a fairly simple requirement (bug-free functionality and performance above all), there are several optimizations for professional applications that would not improve anything in full-screen OpenGL. They only make sense for window-based OpenGL and therefore only work on Quadro-based hardware.
They are explained next.


2.1 Clip Regions
A typical workstation application contains 3D and 2D elements. While viewports display window-based OpenGL, menus, rollups and frames are still 2D elements. They often overlap. Depending on how this is handled by the graphics hardware, overlapping windows may noticeably affect visual quality and graphics performance.
When a window has no overlapping windows, the entire contents of the color buffer can be transferred to the frame buffer in a single, continuous rectangular region. However, if other windows overlap the window, the transfer of data from the color buffer to the frame buffer must be broken into a series of smaller, discontinuous rectangular regions. These rectangular regions are referred to as "clip regions".

GeForce hardware supports just one clip region, which is mostly sufficient for displaying menus over OpenGL. Quadro GPUs support up to 8 clip regions in hardware, which keeps performance up in a normal CAD/DCC workflow.


2.2 Hardware-Accelerated Clip Planes
Clip planes allow sections of 3D objects to be cut away so the user can look inside solid objects. Looking inside objects is particularly useful for visualizing assemblies. For this reason, many professional CAD/DCC applications provide clip planes.
The Quadro family of GPUs supports clip-plane acceleration in hardware, a significant performance improvement when clip planes are used in professional applications. Tests 6 and 10 of the SPECopc Viewperf MedMCAD-01 benchmark define a clip plane, and are useful for quantifying the performance benefits of clip-plane support on the Quadro2 family.
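As a small illustration (assuming a current OpenGL context), an application defines and enables a clip plane with standard calls like these; the plane equation here is arbitrary:

    #include <GL/gl.h>

    /* Cut away everything with y < 0 (plane equation 0*x + 1*y + 0*z + 0 >= 0).
       Geometry drawn while the plane is enabled is clipped against it. */
    void enable_cutaway(void)
    {
        static const GLdouble plane[4] = { 0.0, 1.0, 0.0, 0.0 };
        glClipPlane(GL_CLIP_PLANE0, plane);
        glEnable(GL_CLIP_PLANE0);
    }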


2.3 Quadro Memory Management optimization
Another feature offered by the Quadro family of GPUs is Quadro memory management optimization, which efficiently allocates and shares memory resources between concurrent graphics windows and applications. In many situations this feature directly affects application performance and so offers demonstrable benefits over the consumer-oriented GeForce GPU family.
The graphics memory is used for the frame buffer, textures, caching and data. NVIDIA's unified memory architecture dynamically allocates the memory resources instead of reserving a fixed size for the frame buffer. Instead of the remaining frame buffer memory being wasted because it is unused, UMA allows it to be used for other buffers and textures.
Especially when applications require a lot of memory, for example for quad-buffered stereo or full-scene antialiasing, it becomes more important to manage these resources efficiently; this is where a 32MB Quadro shows an advantage over a 32MB GeForce.


2.4 Common problems solved with Quadro drivers:
Two-sided Lighting
Quadro hardware supports two-sided lighting. Objects which are not created as solids may show triangles from their "back side" when the objects are viewed from the inside.
Two-sided lighting prevents the diffuse and specular lighting components from dropping to zero when the surface normal points away from the light source. As a result, these "backward-facing" triangles remain visible from all viewing angles.
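For illustration, this is roughly how an application asks standard OpenGL fixed-function lighting for the two-sided behaviour described above (assuming a current context):

    #include <GL/gl.h>

    /* Light both faces of each polygon so "backward-facing" triangles stay visible. */
    void enable_two_sided_lighting(void)
    {
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
    }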


2.5 Hardware Overlay Planes
The user interfaces of many professional applications often require elements to be drawn interactively on top of a 3D model or scene. The cursor, pop-up menus or dialogs appear on top of a 3D viewport. These elements can damage the contents of the covered windows or hurt their performance and interactivity.
To avoid this, most professional applications use overlay planes. Overlay planes let items be drawn on top of the main graphics window without damaging the contents of the windows beneath. Windows drawn in the overlay plane can contain text, graphics, and so on, the same as any normal window.
The planes support a transparency bit which, when set, allows pixels underneath the overlaid window to show through. They are kept as two separate layers; nothing has to be blended together. This prevents damage to the main graphics window and improves performance. Likewise, clearing an overlaid window to the transparency value and then drawing graphics within it allows user-interface items to be drawn over the main graphics window.
Clearing and redrawing only the overlaid window is significantly faster than redrawing the main graphics window. This is how animated user-interface components can be drawn over 3D models or scenes.
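A hypothetical sketch (not from the presentation) of how a Windows OpenGL application would get a rendering context for an overlay plane through WGL, assuming the window's pixel format exposes one; on hardware without overlay planes the call simply returns NULL:

    #include <windows.h>
    #include <GL/gl.h>

    /* Create an OpenGL rendering context for overlay plane 1.  UI elements drawn with
       this context land in the overlay layer instead of the main graphics window. */
    HGLRC make_overlay_context(HDC hdc)
    {
        /* layer plane 1 is the first overlay plane; 0 would be the main plane */
        HGLRC overlay = wglCreateLayerContext(hdc, 1);
        if (overlay != NULL) {
            wglMakeCurrent(hdc, overlay);   /* UI items are now drawn into the overlay */
        }
        return overlay;
    }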


2.6 Quad buffered stereo
The Quadro GPU family supports quad-buffered stereo; the GeForce GPU family does not. Quad-buffered stereo is OpenGL functionality and does not depend on special stereo hardware to produce the effect. Two pictures are generated, both double-buffered, one per eye. They are displayed alternately or interlaced, depending on the output device.
Many professional applications like 3ds max, SolidWorks or StudioTools let users view models or scenes in three dimensions using a stereoscopic display. It can be done by a plugin, as in SolidWorks, by an application driver like MAXtreme in 3ds max, by an external viewer like QuadroView for AutoCAD-based products, or by the application itself. Stereoscopic display is used to get an overview of complex wireframe constructions, to make walkthroughs much more realistic and impressive, or simply to improve the sense of proportion in large 3D scenes.
Stereo support on the Quadro GPU family significantly benefits professional applications that demand stereo viewing capabilities.
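A minimal sketch of the quad-buffered drawing loop in standard OpenGL, assuming a context created with a stereo-capable pixel format; render_scene_from() is a hypothetical helper that renders the scene from a shifted camera:

    #include <GL/gl.h>

    void render_scene_from(float eye_offset);     /* hypothetical scene renderer */

    /* Render one stereo frame: left and right back buffers, then a single swap. */
    void draw_stereo_frame(void)
    {
        glDrawBuffer(GL_BACK_LEFT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene_from(-0.03f);                /* camera shifted for the left eye */

        glDrawBuffer(GL_BACK_RIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene_from(+0.03f);                /* camera shifted for the right eye */

        /* SwapBuffers()/glXSwapBuffers() then presents both eyes together. */
    }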

This text is taken from a PNY presentation meant to clear up the differences between the Quadro and GeForce GPU families. The features mentioned above are also found on other professional video cards like FireGL and Wildcat: those cards support the same features in hardware, and the difference between the cards is how well they perform these OpenGL operations in professional applications. For example, an NVIDIA Quadro supports 8 lights in hardware, while Wildcat cards support 24 lights in hardware.

regards

elvis
01-02-2003, 11:54 PM
Originally posted by GregHess
It doesn't hurt the pocketbook, so you've got cash for other things, like taking your gf out to dinner to appease her of the new video card purchase

so true!!! i think we've all been there! :applause:

CgFX
01-03-2003, 06:53 AM
Quadro is a superset of GeForce.

A Quadro 900 XGL will play games just as well as a GeForce Ti 4600, but a GeForce Ti 4600 will not handle workstation applications as well as a Quadro 900 XGL.

elvis
01-03-2003, 08:02 AM
Originally posted by CgFX
Quadro is a superset of GeForce.

A Quadro 900 XGL will play games just as well as a GeForce Ti 4600, but a GeForce Ti 4600 will not handle workstation applications as well as a Quadro 900 XGL.

*bzzzt* wrong answer (slightly).

quadro's are slower in most games than their geforce counterparts. "but how can they be???" i hear you ask... WELL:

quadro's have features that enable them to handle a large amount of data in a reliable way. for example: hardware wireframe acceleration and AA, double-sided hardware shading and texturing, and 8 hardware lights are all features of a quadro4. these features allow the quadro to render a large, complexly lit scene at a constant 30FPS... a scene a geforce would probably choke on.

now, take a standard game engine (quake3, UT, whatever). these are relatively simple scenes with low polycounts (in the thousands, rather than the hundreds of thousands or millions), a small number of lights (typically 1 or 2 max) and only single-sided "pre-baked" textures. a geforce4 can render this kind of simple scene at a huge number of FPS (100+ in some cases). benchmark a quadro4 on the same thing, however, and you'll find it only spits out 30-60 FPS. all of the extra features it supports slow down the raw FPS that a game card is designed for.

i've benchmarked a ti4200 and a 900XGL side by side, and the ti4200 won every gaming test despite its lower clock speed. of course the quadro4 kicked the geforce4 in things like SPEC and whatnot, as expected.

this is not to say a quadro4 CANNOT play games! it certainly can. i happily play battlefield 1942 at 1280x1024, 32-bit colour and a good consistent 50 FPS on our workstations after hours (all quadro4 based). it's a neat side effect, but not the card's primary purpose.

elvis
01-03-2003, 08:12 AM
Originally posted by Spliff_Richards
Which one would you guys buy: the GF4 4600 or the 4200?
4200. the 4600 is not (IMHO) worth the extra cash for the speed.

both cards have the same features, just slightly different clock and memory speeds. the extra clock and memory speed will help your games a little, but the effect you'll see on your 3d stuff will be minimal at best.

there are literally hundreds of different types of geforce4 on the market from different manufacturers. i've got a SUMA GF4ti4200-SE. it's got 128MB of 3.3ns DDR ram, which means it overclocks right up past ti4600 speeds without a problem. the 128MB of fast ram cost me a bit more than your average 4200 (almost as much as a 4400 at the time) but in the long run was better value for money. it's also got DVI and VGA out (with a DVI-VGA converter to allow dual VGA out) and tv-out. all in all a great value card, which i'm hoping lasts me at least 18 months before i'm tempted by another product.

my geforce2mx got me 2 years of usability before it was donated to my parents' system.

loop29
01-03-2003, 08:20 AM
Uhm, elvis, I can't quite follow your details. As I mentioned in my post, you should note that these advantages only concern OpenGL features of the Quadro; it won't slow down in DirectX games by any great margin. And it is correct that Quadro is a superset of GF: a Quadro handles OpenGL games as well as a GF because the hardware is mostly the same, otherwise it wouldn't be possible to patch a GeForce 4 into a Quadro 4. It is the higher rendering precision in viewports that will slow a Quadro down compared to a GF in OpenGL.


greets

elvis
01-03-2003, 08:33 AM
Originally posted by loop29
Uhm, elvis, I can't quite follow your details. As I mentioned in my post, you should note that these advantages only concern OpenGL features of the Quadro; it won't slow down in DirectX games by any great margin. And it is correct that Quadro is a superset of GF: a Quadro handles OpenGL games as well as a GF because the hardware is mostly the same, otherwise it wouldn't be possible to patch a GeForce 4 into a Quadro 4. It is the higher rendering precision in viewports that will slow a Quadro down compared to a GF in OpenGL.


greets

loop: try it out for yourself. i've benchmarked quake3 (gl), UT2003 (dx8), 3dmark2001se (dx8), specviewperf 6.1.2 (gl), specviewperf 7.0 (gl) and max 4.2.6 + specapc in gl mode on a dual 2.2GHz Xeon dell workstation, using a geforce4 ti4200 and a quadro4 900 xgl.

the ti4200 won all of the gaming tests (gl and directx), and the quadro won all of the CAD tests. give it a go and you'll see the same results, i can guarantee you.

Spliff_Richards
01-03-2003, 08:40 AM
When I get my card I'll have to ask you guys how to overclock it :)

GregHess
01-03-2003, 04:08 PM
Do remember that you can also now softquadro the GeForce4 Ti 4200s. Might factor into your purchasing decision a bit.

A 150 USD card... with almost the performance of the 600 USD card. hmm...

CgFX
01-03-2003, 04:38 PM
Originally posted by elvis
*bzzzt* wrong answer (slightly).

quadro's are slower in most games than their geforce counterparts. "but how can they be???" i hear you ask...


Ummm..... *gong!* you're wrong. :-) (hey, that rhymes)

I assure you that a Quadro is a superset of GeForce. That being said, yes there are some things that a Quadro does that could potentially slow down games (two sided lighting) but there are also things that it does that could speed up games.

However, most games are DirectX and not OpenGL so things like OpenGL two sided lighting, HW lights, multiple window tages, etc. do not even factor in.

With identical systems and the same (proper) driver, you should see more parity between the two in games than you are suggesting.

GregHess
01-03-2003, 04:41 PM
The easiest way to prove him wrong is to actually test the cards, as he has.

I tend to side with the party that has actually spent time and energy collecting data for an argument, rather than the one that just says the other side is wrong :).

Let's get some more benchmarks!

CgFX
01-03-2003, 04:47 PM
...tough crowd. :-)

On my system (AMD 1500+, nForce1) I have gotten roughly the same 3DMark2001 scores with a Quadro 900 XGL and a GeForce4 Ti 4600.

GregHess
01-03-2003, 04:49 PM
Roughly the same, or different? Roughly the same tends to indicate there is a difference.

And with what driver set, please? 41.09? An earlier one? What OGL/D3D settings?

CgFX
01-03-2003, 05:11 PM
Originally posted by GregHess
Roughly the same, or different? Roughly the same tends to indicate there is a difference.
Run 3DMark2001 five times in a row on the same system and you will get five different numbers. :shrug:

The default settings for Quadro are often different from those for GeForce, so you may want to check that, elvis. E.g. Vsync is on by default on a Quadro.

And with what driver set please. 41.09? An earlier one? What Ogl/d3d settings?
Sorry Greg, but that is all I had time for.

If this discussion requires the kind of world-class benchmarking better suited to a bigger fight, then I gladly concede that a GeForce is a better board for gaming. :)

I can assure you that the Quadro is a better gaming card than the FGL X1. It is a better workstation card too. :buttrock:

loop29
01-03-2003, 06:16 PM
I think the FireGL X1 is based on the Radeon 9700 chip, just as ATI's last professional GPU, the FireGL 8800, was based on a Radeon. I think they decided to do the same thing as NVIDIA. I have early drivers for the FireGL 8800 that could be installed on a Radeon 8500 and let you run professional apps as if you had a FireGL 8800, but unfortunately there is no SoftFire : )
But I think that the FireGL X1 and similar cards should behave like a Radeon 9700 in games..

regards

CgFX
01-03-2003, 06:24 PM
Originally posted by loop29
But I think that the FireGLX1 and similar should behave like a Radeon 9700 in games..

This has not proven to be the case to date.

loop29
01-03-2003, 07:32 PM
Q1: What is a FIRE GL™ X1 graphics card and what chipset does it use?
A1: The FIRE GL™ X1 graphics card is the world's fastest and most advanced graphics board for professional graphics usage. Through a combination of incredible 3D rendering performance, sophisticated real-time visual effects, unsurpassed image quality and cutting-edge video features, it takes the workstation graphics experience to a new level. The FGL™ 9700 VPU (Visual Processing Unit) is ATI's high-performance graphics solution for performance oriented consumers and commercial platforms. The FIRE GL™ X1 comes in two flavours:
FIRE GL X1™ - 256MB with AGP Pro connector, StereoGraphics output, and dual DVI-I output for dual digital or dual analog support (or a combination of the two).
FIRE GL X1™ - 128MB with standard AGP connector and Dual DVI-I output for dual digital or dual analog support (or a combination of the two).
For more information on ATI's FIRE GL™ X1 graphics cards please check out the Specifications section on the FIRE GL™ X1 product page.

http://www.ati.com/products/workstation/fireglx1/faq.html

What does "FGL™ 9700 VPU" sound like to you?

regards

CgFX
01-03-2003, 07:42 PM
Uh, thanks for posting the contents of ATI's marketing web site here. :-\

You misunderstood. I know very well that the X1 is a Radeon 9700 Pro. However, since ATI does not have anything close to a single unified driver (as NVIDIA does), the X1 has proven to be a very bad gaming board to date.

loop29
01-03-2003, 08:33 PM
np m8, if you ever can't find a product information site, ask me : )


But as I'm not very familiar with the driver architecture of ATI's stuff, I thought the Catalyst drivers were UDA; it seems I'm wrong in this case. Has anyone tried installing Catalyst drivers on a FireGL X1?
Maybe they are trying to prevent the same thing happening to them that SoftQuadro did to the nvidia guys...

regards

CgFX
01-03-2003, 09:44 PM
All the vendors (ATI, 3dlabs, Matrox) are scrambling to get to a unified driver architecture because of how much success NVIDIA has had with it in the corporate marketplace.

Their first step is a unified archive. This is something that ATI and Matrox are now doing: there is a single downloadable archive, but the actual drivers (binaries, .DLLs) remain completely different and often come from different software engineering teams. The installation program chooses the proper driver for the particular card in the system.

There is hardware in NVIDIA's chips to support UDA, and I believe it will take some time before the other vendors have a truly unified driver.

This is the main reason that NVIDIA has had such success with stability and performance. It is the same core driver that they have been beating on and tuning for ~five years now.

elvis
01-04-2003, 12:08 AM
Originally posted by CgFX
Run 3DMark2001 five times in a row on the same system and you will get five different numbers. :shrug:

The default settings for Quadro are often different for GeForce so you may want to check that elvis. e.g. Vsync is on by default for Quadro.

Sorry Greg, but that is all I had time for.

If this discussion requires the kind of world-class benchmarking better suited to a bigger fight, then I gladly concede that a GeForce is a better board for gaming. :)

I can assure you that the Quadro is a better gaming card than the FGL X1. It is a better workstation card too. :buttrock:

3dmark results will vary by 100 points or so per test, i agree with that. my tests showed quite clearly that the geforce4 outperformed the quadro4. between each test i uninstalled and cleanly reinstalled all driver files and registry settings to make sure the cards weren't being held back by software. i used the 40.72 detonator drivers at the time, and made sure both cards were set to the same settings (force 32-bit colour, all FSAA and AF set to "off" and vsync set to "off").

keep in mind too that i tested a 4200 vs a quadro 900xgl. the 900xgl lost in 3dmark, which was quite amazing considering its ti4600-style clock speeds. i have the figures jotted down at work, so i'll try to recover them and give real data. the differences weren't several thousand points or anything, but there was still a difference.

your final comment i agree with too. the quadros have always seemed to be gaming cards tweaked for pro applications, and as such have always performed reasonably well in games. the firegl cards have never been great gaming cards, but that was never their intent. having said that, i've never had the opportunity to test anything faster than a firegl2 in a gaming situation, so i can't comment on things like the firegl 8800 and X1/E1. hopefully we'll have some X1s in the office within a month or so, and i can do some proper tests on them and share the data.

jonestation
01-04-2003, 03:18 AM
many people talk about SoftQuadro. has anyone tested it before? what is your opinion of softquadro?

Sieb
01-04-2003, 03:34 AM
I am using it right now. I ran into some issues with Maya and the Maya OpenGL setting, but that's probably because I am running older drivers (40.72). All the Dets after 41 give me a white line across my screen (which actually ends up being my taskbar) and Explorer.exe maxes out. It has something to do with nView/DualView. Disabling my second monitor ends the problem. I haven't figured it out yet. The 42s and the WHQL drivers cause the same problem.

But I have had normal performance in games and apps, with acceptable 3DMark results.

A 1.4GHz Tbird with 384MB PC2100 DDR on an MSI KT3 Ultra ARU and a BFG 4600 results in around 8900 in 3DMark.

CgFX
01-04-2003, 07:07 AM
Please please please, do not discuss the well-respected FireGL 2, 3, and 4 in the same breath (let alone thread) as their shallow imitations in name only, the Radeon-derived FireGL 8x00 and X1/E1.

IBM made a great chip, and FireGL was a great brand, before IBM decided to get out of the graphics chip design business and ATI bought FireGL and started slapping that brand name on Radeons.

:-)

jonestation
01-04-2003, 09:13 AM
Thanks, I have downloaded RT and SQ, but I will not test them for the moment; I might try them in the future.

elvis
01-04-2003, 09:17 AM
Originally posted by CgFX
Please please please, do not discuss the well respected FireGL 2, 3, and 4 in the same breath (let alone thread) as their shallow imitations in name only which are the Radeon derived FireGL 8x00 and X1/E1.

IBM made a great chip and FireGL was a great brand before IBM decided to get out of the graphics chip design business and ATI bought FireGL and started slapping that brand name on Radeons.

:-)

pardon me while i cack myself laughing.

care to test said firegl2/3/4 against an 8800? IBM did not "decide" to leave the graphics chip industry: they weren't making money, because they charged too much for mediocre cards that were being outperformed by retail desktop cards selling for hundreds less.

IBM have great research facilities. they pioneered many things (including the personal computer and x86 chip) only to be outdone by better companies with better manufacturing processes (intel, ati, etc). if they were any good they'd still be around. simple as that. :)

jonestation
01-04-2003, 09:22 AM
Elvis;
Do you know of any link with a guide on how to uninstall the RT and SQ? Just to be safe :-). Or, if you don't mind, share the idea here.

Savage_Henry
01-04-2003, 10:08 AM
Elvis,

How do you like your Suma card?
I'm about 2 days away from deciding between that particular card and the Gainward. I'm swaying more to the Suma side because of the DVI feature, but I'm wondering about driver updates and whatnot. They are charging me a fair enough price... I'm just not familiar with the brand.

Any thoughts?

CgFX
01-04-2003, 10:31 AM
Originally posted by elvis
pardon me while i cack myself laughing.

care to test said firegl2/3/4 against an 8800? IBM did not "decide" to leave the graphics chip industry: they weren't making money, because they charged too much for mediocre cards that were being outperformed by retail desktop cards selling for hundreds less.

IBM have great research facilities. they pioneered many things (including the personal computer and x86 chip) only to be outdone by better companies with better manufacturing processes (intel, ati, etc). if they were any good they'd still be around. simple as that. :)

Elvis,

I will certainly pardon you for forgetting that the 8800 is at least one full generation _after_ the FGL 2, 3, 4 series of boards. IBM got out simply because they didn't see the value in continuing to invest in order to keep up, or at least try to keep up, with ATI and NVIDIA (much to both of their relief).

If you don't have a consumer business to leverage, it is nearly impossible to keep up (as IBM was well aware, and as 3dlabs painfully learned last winter with their solo demise and sellout to Creative).

elvis
01-04-2003, 12:53 PM
Originally posted by jonestation
Elvis;
Do you know of any link with a guide on how to uninstall the RT and SQ? Just to be safe :-). Or, if you don't mind, share the idea here.

i'll happily share my way of doing it:

1) uninstall the nvstrap.sys driver using rivatuner. you don't need to uninstall rivatuner itself if you want to use it for overclocking and whatnot later on.

2) if you installed the drivers via the inf method, remove them from the hardware device manager. if you simply installed the detonator drivers via nvidia's exe installer, remove them using add/remove programs.

3) i recommend this third step: after uninstalling the nvidia drivers, reboot your machine, and press F8 before windows starts to bring up the boot menu. choose safe mode, and boot into that. once in windows, search your c:\windows or c:\winnt folder (including subdirectories) for files named "nv*.*". delete anything the search finds.

also run regedit, browse to the HKEY_LOCAL_MACHINE\software part of the tree, and delete the "nvidia" or "NVIDIA Corporation" registry entries.

reboot once more, install a clean set of detonators, and all softquadro hacks will be safely removed from your system.

it's a bit of a lengthy process, but you don't want old drivers and registry entries lying around making a pain of themselves. you can always double-check by going into your advanced display properties and checking what driver and file versions the software reports. if you see different versions of files, you know you haven't properly (un)installed something.

elvis
01-04-2003, 12:56 PM
Originally posted by Savage_Henry
Elvis,

How do you like your Suma card?
I'm about 2 days away from deciding between that particular card and the Gainward. I'm swaying more to the Suma side because of the DVI feature, but I'm wondering about driver updates and whatnot. They are charging me a fair enough price... I'm just not familiar with the brand.

Any thoughts?

i quite like the card, due to the fact that it overclocks well, and i like my ut2003. :) the 128MB of ram is good for those heavily textured scenes too, which can sometimes be handy.

drivers don't worry me, as i only use nvidia detonators. 3rd-party drivers have never interested me, as most of the features they cater for can be handled better by other software anyway (eg: tvtool for tv-out, etc). the DVI and vga out is nice too if i need to use multi-monitor. eventually i want to get a flat panel, so that's why i got it.

elvis
01-04-2003, 01:03 PM
Originally posted by CgFX
Elvis,

I will certainly pardon you for forgetting that the 8800 is at least one full generation _after_ the FGL 2, 3, 4 series of boards. IBM got out simply because they didn't see the value in continuing to invest in order to keep up or at least try to keep up with ATI and NVIDIA (much to both of their relief).

If you don't have a consumer business to leverage, it is nearly impossible to keep up (as IBM was well aware and 3dlabs painfully learned last winter with their solo demise and sellout to Creative).

ok, my apologies... i was having a go at you back there. :p

the firegl cards were great in their time. when they came out they provided features to 3d artists that previously were only available in multi-million-dollar SGI workstations. i remember well the first firegl1 card we ordered with a 3d system. we fired up our then-current max and autocad packages and were blown away by the speedup it gave us over our 2d-only cards. amazing stuff.

things have changed a lot in the 3d hardware industry of late, as we all know. technology that was once exotic and expensive is now available cheaply to most hardware manufacturers. greats like 3dlabs and 3dfx lost their original market share to cheaper manufacturers. yes it's a shame, but in the end market competition is what makes the world go around.

there's no hard evidence saying that in 5 years' time nvidia and ati will still be around. who knows? maybe some young startup company will make a new product that blows them both out of the water. the point is, embrace what technology there is on offer, and look forward to what's around the corner. IT hardware changes faster than any other technology on the planet, and we're all at its mercy!

CgFX
01-05-2003, 07:41 AM
Originally posted by elvis

drivers don't worry me, as i only use nvidia detonators. 3rd party drivers have never interested me, as most of the features they cater for can be used better by other software anyway (eg: tvtool for tv-out, etc).
FYI, there are no third-party drivers any more. All driver development is done only by NVIDIA now. There are vendor-specific releases (PNY, HP, IBM, Dell, etc.) but they are all tested and certified versions, or point releases, of the same UDA driver from NVIDIA.

elvis
01-05-2003, 10:57 AM
that's what i was referring to. drivers "released" by non-nvidia companies are merely tested and verified nvidia detonators, and occasionally include a tweak or hack or bit of extra code for a third-party tv-out chipset and the like.

possibly the only nvidia card i have that i don't use nvidia detonators for is my old asus 3800, which needs the asus drivers for tv-out without divx and dvd playback skipping, primarily due to the card's age. (newer drivers just aren't optimised for it, and result in dvds skipping all through a movie.)

CgFX
01-06-2003, 10:27 AM
Elvis,

To clarify further: no tweaks, hacks, or extra code come from any board vendor for any such "third party" components.

All components have to be on a list of supported parts from NVIDIA and all software support for these parts comes from NVIDIA.

elvis
01-06-2003, 11:48 PM
i'm talking about things like the asus hardware probes (fan speed and temperature monitors), winfast "winfox" overclocking utilities (front ends for reg hacks), custom DVD software, blah blah blah.

more often than not it's just a detonator with the files renamed and a company logo whacked on it. :)

and no, components do not "have" to be certified by nvidia. nvidia supply the manufacturer with chips and a reference board design, and the manufacturer can then do as they please. the only thing they can't do is market a chip as something it isn't.

eg: they can't market a ti4200 as a ti4600 even if the clock speeds are the same. but they can certainly call it a "ti4200 plus ultra giga thingo" if they like. see the geforce4 mx440 and "mx440SE" debacle to understand what i'm talking about.

CgFX
01-07-2003, 12:09 AM
Originally posted by elvis

and no, components do not "have" to be certified by nvidia. nvidia supply the manufacturer with chips and a reference board design, and the manufacturer can then do as they please.

Elvis,

This is not true. The supported parts that show up on any board have to come off a list of qualified components put together by NVIDIA, with driver support from NVIDIA, regardless of whether or not they do their own board design.

Tempest811
04-11-2005, 04:00 AM
I have a GeForce4 Ti 4600 and let me tell you... it's decent for gaming but not for workstation work. But it's all relative - if you have a $20 card, then the 4600 would be something you should look into... but workstation cards will definitely help you out WAY more.

lots
04-11-2005, 03:11 PM
I'm guessing you ignored the fact that this is a post from early 2003? :)

CGTalk Moderation
04-11-2005, 03:11 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.