
View Full Version : 128mb vs 256 mb on video card


dmeyer
03-07-2003, 04:17 AM
I am in the market for a new pro video board at the moment, and right now it's a toss-up between:

Quadro FX 1000 (128 MB) @ $880
FireGL X1 (256 MB) @ $820

I run Maya and Combustion non-stop. I would normally be nVidia all the way, but I'm wondering how much of a difference that 256 MB would make in OpenGL-accelerated Combustion?

GregHess
03-07-2003, 02:02 PM
Are you using the card's dual head features? Are you accelerating both monitors?

If it's just single head, 128 vs 256 won't make too much of a difference.

If you're doing a dual accelerated display...you get the picture. (Half the RAM per display.)

However, this is just comparing RAM, not architectures. Discreet products are usually a lot more nVidia-friendly than ATI-friendly...

dmeyer
03-07-2003, 02:19 PM
Well at the moment I am on a single 22" CRT, but I am thinking about going to a dual LCD setup, hence the interest in the Quadro FX with its dual DVI that can both be run at 16x12.

In maya, a typical workflow would be to have my perspective view open on one and menus such as Hypershade and Hypergraph on the other (although these are menus, they are indeed a bit OpenGL accelerated). This works well with the Wildcat 6110s at work under XP. Any issues under Win2K? I'd like to stick with Win2K at home if possible, I've heard though that dual display OGL is problematic under Win2K.

In combustion, a typical flow would be to have a Layer view, Composite View, and the main operator area open on one screen, and either the timeline or Schematic open on the other.

Of course the occasional typical "Maya on one, Photoshop on the other" instance will come into play too.

hmmm...leaning towards Quadro....why won't nVidia release a 256 MB part already?

GregHess
03-07-2003, 02:37 PM
1600x1200? 128 definitely makes a difference at that res (vs 64). The question is, would the RAM still be divided if you ran with 3D acceleration on the primary monitor and just normal 2D acceleration on the secondary. Hmm..

The Quadro 900 and 980XL can also run dual dvi-i at 1600x1200. Just letting ya know.

PlanetMongo
03-07-2003, 07:28 PM
Ever post on www.mcsetutor.com? Just wondering.. :)

dmeyer
03-07-2003, 07:44 PM
nope

dvornik
03-07-2003, 09:17 PM
Are you familiar with the dual monitor dilemmas that all cards face? Native Windows monitor management vs. one large monitor spanned across two?

The thing is, very few people here are aware of ATI's dual monitor management issues, as far as I know. Nvidia monitor management is a more common topic. I can test Combustion OGL on a 750 XGL, which should be similar to other Nvidia cards, but I don't know how to use Combustion.

Maya was problematic on Nvidia's dualhead cards in Dualview while working well in Span mode.

dmeyer
03-08-2003, 02:14 AM
dvornik,

Any insight you can provide on using the 750 in dual monitor config under Combustion 2 would be greatly appreciated!

Maybe grab a Combustion user and have them put the card through its paces? Just make sure to have OpenGL enabled in the viewports, and maybe swing the comp around in 3D view a bit :)

elvis
03-08-2003, 02:23 AM
Originally posted by dmeyer
Any issues under Win2K? I'd like to stick with Win2K at home if possible, I've heard though that dual display OGL is problematic under Win2K.

using the nview utilities in the nvidia drivers and hydravision for the ati cards i've never had any troubles under win2k getting any form of hardware acceleration working in dual display (OGL, D3D or any other specialised drivers).

CgFX
03-08-2003, 07:31 AM
Greg,

The memory on NVIDIA cards (and ATI) is not divided up per display like a Wildcat. It is a unified pool of memory and is used as needed, regardless of the number of displays connected.

When using nView Span you have only one framebuffer that gets scanned out to two monitors. In the case of dual 1600x1200 you would have a 3200x1200 framebuffer and the memory requirements that brings with it. Including the Z buffer, that is only about 29.4 MB. Using 4x FSAA would increase that (not by 4x, since the Z buffer stays the same size) to about 73 MB. With double buffering and a unified back buffer you would sneak in under 128 MB and start using AGP memory for your textures. This is the absolute worst-case scenario.

DualView mode requires two 1600x1200 framebuffers, with a negligible amount of additional memory used because of that.

In this config I would probably not run 4x FSAA, in which case you would have tons of memory available.

All of this has been assuming fullscreen modes. In practice your 3D viewport is not the full 3200x1200 but a smaller subset (75% or less), which significantly reduces the memory numbers above.
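[Those back-of-the-envelope numbers can be reproduced in a few lines of Python — my own illustration, not from the thread, using the post's stated assumptions: 32-bit color at 4 bytes per pixel, a 32-bit Z buffer that is not multiplied by the FSAA factor, and an optional unified (non-multisampled) back buffer.]

```python
# Rough framebuffer memory estimate for a Span-mode desktop, following the
# assumptions in the post above: 4 bytes/pixel color, 4 bytes/pixel Z, and a
# Z buffer assumed unaffected by the FSAA multiplier.

MIB = 1024 * 1024

def framebuffer_mib(width, height, fsaa=1, double_buffered=False):
    pixels = width * height
    color = pixels * 4 * fsaa                      # color buffer (multisampled under FSAA)
    z = pixels * 4                                 # depth buffer, same size regardless of FSAA
    back = pixels * 4 if double_buffered else 0    # unified back buffer, if double buffering
    return (color + z + back) / MIB

# One 3200x1200 Span-mode surface (dual 1600x1200):
print(round(framebuffer_mib(3200, 1200), 1))          # ~29.3 MiB
print(round(framebuffer_mib(3200, 1200, fsaa=4), 1))  # ~73.2 MiB with 4x FSAA
```

Note that the estimate is linear in pixel count, which is the heart of Greg's later point: doubling the number of equal-resolution displays doubles the framebuffer memory you can put into play.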

To the main question, I would think that Maya's adoption and use of Cg alone would be reason enough to choose the nVidia card. Realtime shaders can also reduce your requirements for texture memory, not to mention being the next step toward working with realtime versions of what your final render will look like.

dmeyer
03-08-2003, 03:55 PM
Originally posted by CgFX
To the main question, I would think that Maya's adoption and use of Cg alone would be reason enough to choose the nVidia card. Realtime shaders can also reduce your requirements for texture memory, not to mention being the next step toward working with realtime versions of what your final render will look like.

This is a good point. My main bias towards the nVidia cards is that there is some talk around work of integrating Cg into our visualization workflow, trying to move more into real time stuff rather than baking off renderings. It'd be nice to work in Cg a bit at home to learn the ropes.

Also, while the GeForce FX's benchies have been less than stellar, the Quadros seem to be pretty solid cards (from the benchmarks out on the web, at least).

dmeyer
03-08-2003, 06:16 PM
FX1K is en route. :airguitar

FreeQ
03-09-2003, 12:16 AM
Originally posted by dmeyer
FX1K is en route. :airguitar

Right choice, for Maya i mean :]

GregHess
03-09-2003, 12:40 AM
Hey Cg,

Thanks for the clarification. I didn't mean to imply anything about nvidia's technology splitting the RAM usage. My feeling is that when you add a second display of equal resolution, you're basically doubling the amount of RAM you could theoretically put into play at any one time. The higher the res, the more RAM needed to perform the same functions. This is classically seen between the 3.3 ns 64 MB and 3.6 ns 128 MB Ti 4200s.

Though the 64 MB cards take the gold in most of the tests, the instant the resolution increases the 128s start to pull away.

What I'm trying to say is...

When you double the number of displays in use, you should expect your RAM requirements to rise as well, whether the card is physically dividing the RAM or not.

elvis
03-09-2003, 02:49 AM
Originally posted by GregHess
When you double the number of displays in use, you should expect your RAM requirements to rise as well, whether the card is physically dividing the RAM or not.

i think it's quite safe to assume this. that would be the starting point i would use for any ram calculations i'd be making.

dvornik
03-09-2003, 04:29 AM
Leaving the technical aspects aside, I've had the impression that with Nvidia cards Maya runs better in Span mode and gets some unwanted artifacts in Dualview. Combustion itself, on the other hand, is aware of the number of monitors, and I assume it would run better in Dualview mode because of that. I don't think one would want one large Combustion window stretched across two monitors.

dmeyer
03-24-2003, 07:45 PM
So i got the Quadro FX1K finally....

It's actually TOO fast, I am going to have to underclock this card..heh. I can't imagine having the FX2K, that would just be ludicrous ;)

Seriously though, monster viewport "feel" increase over my Ti4200.

Mistyk
03-24-2003, 08:25 PM
That's awesome dmeyer, congrats on your new monster (card)!

elvis
03-25-2003, 10:53 AM
hrm... i totally missed this post.

run any benchies on this thing yet?

Mistyk
03-25-2003, 12:13 PM
Elvis, I believe that you said earlier that Dell switches video card supplier every once in a while and thus may skip the Quadro FXs altogether. You wouldn't happen to have any updated info on this?

Thanks!

dmeyer
03-25-2003, 12:32 PM
Originally posted by elvis
hrm... i totally missed this post.

run any benchies on this thing yet?

Soon....

elvis
03-25-2003, 10:40 PM
Originally posted by Mistyk
Elvis, I believe that you said earlier that Dell switches video card supplier every once in a while and thus may skip the Quadro FXs altogether. You wouldn't happen to have any updated info on this?

Thanks!

no official word yet. i've been hassling my dell account manager for some weeks now about this, and he in turn has been hassling the head of hardware over at the dell headquarters in singapore (head office for asia/pacific).

as soon as i hear something i'll let you guys know. dell are usually very tight-lipped about upcoming hardware support, which is a real bummer. i think all of their efforts have gone into laptops of late, but i'd like to see some improvement in their high-end workstations.

Mistyk
03-26-2003, 02:46 PM
Thank you elvis! Do you think that Dell Europe would do the same as Dell Asia/Pacific when it comes to the Quadro FX?

There was some talk about Dell getting a fair share of the now-discontinued GeForce FXs (the first ones). They have yet to show up as options for the desktop lines. Perhaps the Quadros and GeForces will be introduced simultaneously, if at all.

CGTalk Moderation
01-14-2006, 02:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.