View Full Version : Gigabyte GeForce FX5700 256MB: Lightwave Friendly ?


Yiorgz
10-18-2004, 12:06 PM
Anyone here with a Gigabyte GeForce FX5700 256MB?

I'm comparing the ATI Radeon 9200 128MB with this GeForce FX5700 card, looking at getting one of these two cards to use with LW.

Any bad or good experiences with either of these two cards (in particular the GeForce)?

MattClary
10-18-2004, 12:44 PM
Pretty much any nVidia card will be just fine with LightWave (and will be preferable to ATI). Don't get hung up on the manufacturer (e.g. Gigabyte, Asus, Chaintech...), that's pretty much irrelevant.

Yiorgz
10-18-2004, 12:57 PM
Thanks.

Have read a bit about Radeons, and most of the posts talk about bad drivers. I know nVidia's drivers have been pretty good. I already have a GeForce4 Go and a GeForce2 MX which run well.

I was just thinking I could save some money with the Radeon card (82 AUD vs the 240 AUD for the FX5700).

The pricelist I am looking at also has a GeForce FX 5900 T 128MB for around 315 AUD. Do you know if there's much difference between the 5900 and 5700 (performance-wise)? I mean, is it really worth the extra $$ considering it only has half the RAM?

FX5700 256MB RAM @ 240 = approx $0.94 per MB of FX5700 power
vs
FX5900 128MB RAM @ 315 = approx $2.46 per MB of FX5900 power

Now at this roughly 2.6:1 price-per-MB ratio, I would expect at least double the performance out of this FX5900.

Do you think this is the case with this card (the FX5900, that is)?
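Sanity-checking the per-MB arithmetic with a quick Python snippet (prices in AUD as quoted above; purely illustrative, nothing LightWave-specific):

```python
# Price-per-MB comparison for the two cards, using the AUD prices
# quoted above. Plain arithmetic only.
fx5700_price, fx5700_mb = 240, 256
fx5900_price, fx5900_mb = 315, 128

per_mb_5700 = fx5700_price / fx5700_mb   # ~0.94 AUD per MB
per_mb_5900 = fx5900_price / fx5900_mb   # ~2.46 AUD per MB
ratio = per_mb_5900 / per_mb_5700        # ~2.6:1

print(f"FX5700: {per_mb_5700:.2f} AUD/MB")
print(f"FX5900: {per_mb_5900:.2f} AUD/MB")
print(f"price-per-MB ratio: {ratio:.2f}:1")
```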

ericsmith
10-18-2004, 04:13 PM
It's been my experience that all of these cards are way faster than your CPU can keep up with, at least with LightWave. I've run a GeForce 4600, 5200 and 5900 on the same dual 2.4GHz Xeon machine. I don't think I could tell any difference in redraw speed when rotating a shaded view around in Modeler. Considering the spec differences between these cards (i.e. how many vertices and texels they are supposed to be able to crank through), I have to come to the conclusion that the bottleneck is somewhere else.

Eric

Fasty
10-19-2004, 12:12 AM
I'm using a GeForce FX5700 256MB with dual monitors and it runs swimmingly.

Yiorgz
10-22-2004, 05:49 AM
It's been my experience that all of these cards are way faster than your CPU can keep up with, at least with LightWave. I've run a GeForce 4600, 5200 and 5900 on the same dual 2.4GHz Xeon machine. I don't think I could tell any difference in redraw speed when rotating a shaded view around in Modeler. Considering the spec differences between these cards (i.e. how many vertices and texels they are supposed to be able to crank through), I have to come to the conclusion that the bottleneck is somewhere else.

Eric

What sort of polygon count are we talking about here?

And the question: is LightWave's interface truly multithreaded? On a dual-CPU box, will both processors share the load of rotating an object in the perspective viewport? (Maybe someone could confirm this.)

My test was to build a quick object (e.g. OneMinuteSpaceShip.lwo), crank the subdivision level in Modeler up to about 20, switch the perspective view to full screen (keypad 0), and drag it around, zoom in/out, etc.

I freeze the subdiv object and see how that affects the speed. I also try a subdivision level of 2, freeze that, then hit TAB again to convert the result into the new cage, and crank up the divisions again.
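For a rough idea of what that level-20 test pushes at the card: assuming LightWave's display subpatch level N tessellates each four-point cage patch into roughly N x N quads (my assumption, worth verifying against the docs), the on-screen count grows with the square of the level:

```python
# Hedged estimate: assumes display subpatch level N renders each
# four-point cage patch as roughly N x N quads (square growth).
def estimated_polys(cage_patches: int, level: int) -> int:
    """Approximate on-screen quad count for a subpatch object."""
    return cage_patches * level * level

# A modest 100-patch cage at the level-20 test described above:
print(estimated_polys(100, 20))  # -> 40000 quads in the viewport
```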

The ATI Radeon 9200 that I got to play with was pretty good with a frozen object of up to 40,000 polys, with the perspective view on textured wireframe (LW8) at full screen (800x600), on a P4 3.0GHz with 1GB RAM.

Doing some render tests across different machines (e.g. Celeron vs Pentium) at the same clock speed, render times were close (within 10%) for basic scenes, but when you throw the hard stuff at it (e.g. radiosity / refraction etc), the Pentium breaks away with much better results.

Do these bigger graphics cards do the same thing when the polycount is high (i.e. under real load)?

Referring to the Radeon 9200 I spoke about earlier, do you think the P4 3.0GHz is too slow for the Radeon? (This probably depends on the dual-CPU + perspective rendering question.)

If a dual 2.4GHz Xeon shows no difference when rotating an object across a range of different GeForce cards (and assuming both CPUs are involved in the viewport rendering), then pairing up a P4 3.0GHz CPU with any mid-range gfx card should be fine.

Maybe I need to incrementally crank up the polycount (past 40,000) and see where this ATI card starts to become jerky.
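That incremental approach can be sketched as a simple search: keep stepping the polycount up until the measured frame rate drops below whatever you consider smooth. Here `measure_fps` is a hypothetical stand-in for the manual stopwatch reading at each count (not a real LightWave hook):

```python
# Sketch of the incremental polycount test. measure_fps is a
# hypothetical callback returning observed viewport FPS for a count.
def find_jerky_threshold(measure_fps, start=40_000, step=10_000, min_fps=15.0):
    """Return the first polycount at which redraw turns jerky."""
    polys = start
    while measure_fps(polys) >= min_fps:
        polys += step
    return polys

# Illustration only: a fake card whose FPS falls off inversely
# with polycount (60 fps at 40k polys).
fake_fps = lambda p: 60.0 * 40_000 / p
print(find_jerky_threshold(fake_fps))  # first count below 15 fps
```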

Yiorgz
10-22-2004, 06:11 AM
I'm using a GeForce FX5700 256MB with dual monitors and it runs swimmingly.

What spec is the machine, and what resolutions are you running on each monitor?

On the screen you run LW on, do you go full screen? And what kind of "snappiness" or response smoothness do you get when you turn the perspective viewport full screen while rotating around a 40k-polycount object?

CGTalk Moderation
01-19-2006, 01:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.