2x NVidia GeForce GTX 295 (SLI) or 2x Sapphire Radeon 4870 X2 2GB (Crossfire)


purostar
05-08-2009, 10:43 AM
Hello,

I am in the process of building a new PC. I am having trouble figuring out which is the better route to go with the graphics cards. I have heard that ATI have much better architecture in their cards at the moment than Nvidia. However, I have also heard that ATI Radeon cards can be quite buggy with regard to the viewport.
I have tried searching for these cards within the forums, but not much info has come up for them.

Please can somebody help me choose which card I should go for.
I don't do much animation work within 3ds Max; I mainly use it for low-poly games work and architectural stills.

Here is the rest of the specification for the machine I'm going to be building:

Coolermaster Cosmos Case
Gigabyte GA-EX58-UD5 Intel X58
Intel Core i7 965 3.20GHz Extreme
CoolerMaster Real Power Pro 1000W PSU
Standard CPU Cooling Fan (not sure how many as of yet)
Corsair XMS3 12GB (6x2GB) DDR3 1600MHz
500GB 7200RPM SATA II

Gcards (not sure)

Creative Sound Blaster X-Fi
802.11g WiFi PCI adapter

Thanks for reading,
Dan

imashination
05-08-2009, 12:53 PM
Neither; a single GeForce 285 will be the same speed as your other suggestions and a lot cheaper. Multiple graphics cards do nothing for 3D apps.

thp777
05-08-2009, 01:32 PM
You should go with an i7 920 and overclock it a little, and get 12GB of Corsair Dominator RAM; it's $166 for 6GB on Newegg. Like imashination said, SLI and Crossfire don't work in most 3D apps.
Go Nvidia for the GPU; ATI sucks, IMO.

vlad
05-08-2009, 02:25 PM
I'm pretty sure you could go with a 500W psu with your setup (with a single gpu as mentioned).

thp777
05-08-2009, 03:09 PM
I'd recommend at least a 650W Corsair just to be safe; 500W would be borderline.

purostar
05-08-2009, 03:19 PM
Thank you for your replies.

From reading more and more threads, I am under the impression that these two cards would be a better choice for me:

BFG NVidia GeForce GTX 285 1GB
or
Sapphire ATI Radeon HD 4890 1GB DDR5 PCI Express 2.0

Spec-wise, I have changed it a little:

Coolermaster Cosmos 1000 Case
Gigabyte GA-EX58-UD5 Intel X58
Intel Core™ i7 965 3.20GHz Extreme
Corsair 750Watt PSU
Standard CPU Cooling Fan
Corsair XMS3 12GB (6x2GB) DDR3 1600MHz
500GB 7200RPM SATA II
Samsung SH-S223 22x DVD RW Black SATA

Insert choice of card here

Microsoft Windows XP SP3 Professional (64-bit)
Creative Sound Blaster X-Fi
802.11g WiFi PCI adapter


I'm sure those cards will still be more than adequate on the games front as well.

ThE_JacO
05-08-2009, 03:44 PM
Like it's been said already, SLI/Crossfire is a waste of money for 3D apps, and ATI (which is far from being so vastly superior in architecture) insists on producing rubbish drivers on a regular basis.

Get a single GeForce 200 of your choice, and make sure you have a decent PSU.
A 500W bundled PSU with that configuration will either be spinning its fans at max all the time or simply not cope.
650W is what I'd consider the very minimum (speaking of safe margins), to be honest.
If you decide to go SLI or Crossfire regardless, because of games or some other factor, then make it 800W+.

biliousfrog
05-08-2009, 04:49 PM
Isn't it about time that a sticky was created about multiple graphics cards and 3d apps? This question gets asked at least once a week.

Mind you, nobody reads them anyway.

BoostAbuse
05-08-2009, 05:16 PM
http://www.evga.com/products/moreinfo.asp?pn=02G-P3-1186-AR

Much better than a GTX 295, given that the GTX 295 is two GTX 275s tied together (2 x 896MB GPUs), so the 3D app will only see a single 896MB GPU versus the full 2048MB in the GTX 285.

vlad
05-08-2009, 05:52 PM
Like it's been said already, SLI/Crossfire is a waste of money for 3D apps, and ATI (which is far from being so vastly superior in architecture) insists on producing rubbish drivers on a regular basis.

Get a single GeForce 200 of your choice, and make sure you have a decent PSU.
A 500W bundled PSU with that configuration will either be spinning its fans at max all the time or simply not cope.
650W is what I'd consider the very minimum (speaking of safe margins), to be honest.
If you decide to go SLI or Crossfire regardless, because of games or some other factor, then make it 800W+.

Actually, for the above-mentioned system (single non-OC'd 3GHz CPU, single GPU, single HD, etc.), 400W would probably suffice and 500W would be on the safe side. I have a 750W PSU feeding a dual Xeon with 8GB, an 8800 GTX + 9800 GT, 4 HDs, a DVD drive, 5 case fans (plus the 2 for the CPUs), a fan controller and a USB PCI card, running pretty much 24/7 since January 2007, rendering at full load very often, without any trouble whatsoever. No max-spinning fans or anything.

Anyway, a 1000W PSU is way overkill in this case.

BoostAbuse
05-08-2009, 06:35 PM
The GT200 series requires more power draw than the older GeForce equivalents. If you look at the new GTX 275, they recommend a 600W PSU, and not just any random off-the-shelf unit but something with stronger +12V rails to handle the load that GPUs put out these days. I agree with Jaco that 650W would be a safe starting point with room to grow, and SLI would easily push that to 750-800W+ to ensure you're going to be stable with the system specs the OP has outlined.

vlad
05-08-2009, 07:59 PM
Aren't manufacturers simply doubling the requirements just to cover their butts?

purostar
05-08-2009, 09:35 PM
Thanks for the replies. I will look at changing the PSU to a more sane one; I put that in as a safeguard, as at the time I was considering Crossfire.

I have nailed my cards down to three choices.

SAPPHIRE VAPOR-X HD 4870 2GB GDDR5 PCI-E
http://www.sapphiretech.com/presentation/product/?psn=0001&pid=217

SAPPHIRE HD 4890 1GB GDDR5 PCI-E
http://www.sapphiretech.com/presentation/product/?psn=000101&pid=219

GeForce GTX 285 2GB SC
http://www.evga.com/products/moreinfo.asp?pn=02G-P3-1186-AR

Which of the three above is going to give me the most buggy-viewport issues in Max? I am currently swaying towards the ATI.

Cheers,
Dan.

ThE_JacO
05-09-2009, 12:30 AM
Aren't manufacturers simply doubling the requirements just to cover their butts?
The GTX cards really do draw more, and the big dual-slot ones like the 285 draw even more.
In terms of raw watts, of course, even a 380W PSU would cover it in theory; if you measure power draw at the socket, even a monstrous workstation doesn't pull more than 300 to 500W.

The problem is that the amperage you need on individual rails to keep things stable and happy comes from components, choices and numbers that are only found in much higher nominal-wattage PSUs.
If you use an SLI of 285s or anything like that, you can rest assured that 700W-plus is the bare minimum for safety, and if you have enough drives and even one other PCI-E card then you'd have to look at 760-820W, or accept that you'll be working with tight margins.
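
To put rough numbers on that rail argument, here is a minimal back-of-the-envelope sketch in Python. The wattage figures are assumptions chosen to resemble the parts discussed in this thread (i7 965, single GTX 285), not manufacturer specifications, and the 1.5x headroom factor is just a common rule of thumb.

```python
# Rough PSU sizing estimate. All wattages are assumed, illustrative TDPs,
# not manufacturer specifications.
component_watts = {
    "Core i7 965 (CPU)": 130,
    "GTX 285 (GPU)": 183,
    "X58 board + 12GB RAM": 60,
    "HDD + DVD + fans + add-in cards": 50,
}

total_draw = sum(component_watts.values())   # steady-state estimate in watts
headroom = 1.5                               # margin for spikes, capacitor aging, efficiency

recommended_rating = total_draw * headroom

# Most of the load sits on the +12V rail(s); amps = watts / volts.
amps_12v_cpu_gpu = (component_watts["Core i7 965 (CPU)"]
                    + component_watts["GTX 285 (GPU)"]) / 12.0

print(f"Estimated steady draw: ~{total_draw} W")
print(f"Suggested PSU rating:  ~{recommended_rating:.0f} W")
print(f"+12V amps for CPU+GPU: ~{amps_12v_cpu_gpu:.0f} A sustained")
```

With those assumptions the steady draw is only around 420W, but the suggested rating lands in the 600-650W range, and the CPU plus GPU alone want roughly 26A of sustained 12V capacity, which is exactly the kind of figure cheap 500W units struggle to deliver.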

As for the OP: so you're going to go Crossfire and ATI despite pretty much everybody saying that's crap and/or wasteful for what you want to do. Are you putting together a gaming system and only want to throw a dodgy copy of Max on it to doodle at home, or something like that? Because Crossfire for Max will do absolutely nothing.

purostar
05-09-2009, 12:44 AM
I certainly won't be going Crossfire or SLI, that's for certain.

imashination
05-09-2009, 12:49 AM
The GeForce 285 is a single GPU, AFAIK.

thp777
05-09-2009, 12:59 AM
Yeah, the GTX 295 is the only dual-GPU one.
As for the PSU, you also have to take into account the surge when turning on the machine.

purostar
05-09-2009, 01:00 AM
I believe it is a single GPU, yes. Is that a better card than the SAPPHIRE HD 4890 1GB?

ThE_JacO
05-09-2009, 01:10 AM
I certainly won't be going Crossfire or SLI, that's for certain.
My apologies; re-reading, I clearly misunderstood your last post. I thought you were saying you're still considering Crossfire, which is clearly not what you were saying. :)

purostar
05-09-2009, 01:15 AM
No need for apologies =]

I'm finding this process very difficult, as all readers can tell. Looks like I'm down to two single-GPU cards now by process of elimination:

EVGA GeForce GTX 285 2GB SC
http://www.evga.com/products/moreinfo.asp?pn=02G-P3-1186-AR

SAPPHIRE HD 4890 1GB GDDR5 PCI-E
http://www.sapphiretech.com/presentation/product/?psn=000101&pid=219

BoostAbuse
05-09-2009, 01:16 AM
The GeForce 285 is a single GPU, AFAIK.

Yup, the EVGA is a single 2048MB 512-bit GDDR3 GPU... I've got two here at the house, and the one in this system just loves to hammer away on large polygon sets. The most I've managed to get up to in Mudbox with this card and system is about 135 million polygons, and it's still very usable.

Save yourself the cash and get the regular GTX 285 2GB. The overclock on the SC is so minimal that you could easily use their Precision tool to OC the card to SC specs anyway.

purostar
05-09-2009, 01:17 AM
Yup, the EVGA is a single 2048MB 512-bit GDDR3 GPU... I've got two here at the house, and the one in this system just loves to hammer away on large polygon sets. The most I've managed to get up to in Mudbox with this card and system is about 135 million polygons, and it's still very usable.

That's great information, thank you!

DieMachinist
05-09-2009, 06:33 PM
Google gave me this-


RAM Speed:
GeForce GTX 285 2GB (http://www.evga.com/products/moreInfo.asp?pn=02G-P3-1185-AR) => 2322 MHz (effective)
GeForce GTX 285 2GB SC (http://www.evga.com/products/moreInfo.asp?pn=02G-P3-1186-AR) => 2376 MHz (effective)
GeForce GTX 285 (http://www.evga.com/products/moreInfo.asp?pn=01G-P3-1180-AR) => 2484 MHz (effective)

Memory Bandwidth:
GeForce GTX 285 2GB (http://www.evga.com/products/moreInfo.asp?pn=02G-P3-1185-AR) => 148.6 GB/s
GeForce GTX 285 2GB SC (http://www.evga.com/products/moreInfo.asp?pn=02G-P3-1186-AR) => 152 GB/s
GeForce GTX 285 (http://www.evga.com/products/moreInfo.asp?pn=01G-P3-1180-AR) => 159 GB/s

So does the 2GB actually help with anything?

BoostAbuse
05-09-2009, 07:20 PM
More VRAM to fill, basically. Because most 3D apps shuttle data back and forth from CPU to GPU, the more VRAM you have available, the less likely you are to choke. The memory speeds are more relevant in the games realm, where you obviously need to be pushing data at a very fast rate to keep up with the player's movement and interaction.
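
As a rough illustration of how quickly that VRAM fills, here is a small Python sketch. The scene numbers (fifty 2K textures, five million vertices) and the per-item byte sizes are invented for the example, not measurements from any particular app.

```python
# Back-of-the-envelope VRAM estimate for a viewport scene.
# Scene contents and byte sizes below are illustrative assumptions.

def texture_bytes(width, height, bytes_per_pixel=4, mip_overhead=1.33):
    """Uncompressed RGBA texture plus roughly a third extra for mipmaps."""
    return width * height * bytes_per_pixel * mip_overhead

def mesh_bytes(vertex_count, bytes_per_vertex=32):
    """Position + normal + UV per vertex, roughly 32 bytes."""
    return vertex_count * bytes_per_vertex

scene_bytes = (
    50 * texture_bytes(2048, 2048)   # fifty 2K colour maps
    + mesh_bytes(5_000_000)          # five million vertices of geometry
)

print(f"Approximate VRAM needed: {scene_bytes / 2**20:.0f} MB")
```

A made-up scene like that already comes out around 1.2 GB: past what a 1GB board can keep resident, but comfortably within 2GB, which is the situation where the bigger framebuffer pays off.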

thp777
05-09-2009, 08:02 PM
The 285 2GB SC would be the one to get; it's got a faster GPU speed as well.

purostar
05-09-2009, 10:25 PM
Hello again. Well, I'm pretty much sorted now. I have my eyes set on the 285; however, I have discovered this:

Palit - PALIT GeForce GTX 285 2GB (2048MB)
http://www.palit.biz/main/vgapro.php?id=1098

Is that better than the EVGA card?

BoostAbuse
05-09-2009, 10:50 PM
Hello again. Well, I'm pretty much sorted now. I have my eyes set on the 285; however, I have discovered this:

Palit - PALIT GeForce GTX 285 2GB (2048MB)
http://www.palit.biz/main/vgapro.php?id=1098

Is that better than the EVGA card?

Steer clear of Palit... there's a lot of bad vibes about them in the enthusiast crowd. I'd stick with anything from Asus, EVGA, BFG or XFX to be on the safe side.

ThE_JacO
05-10-2009, 01:29 AM
Palit gets a lot of mixed reviews, which leads me to think it can either be a bit of a Russian roulette or be dependent on the model.
In my case, I have to say I got an EVGA and ended up returning it (busted fan), and had it replaced with a slightly better Palit (because it was in stock while the EVGA was a week's wait), and it's one of the best cards I've owned.

It's very well designed, it's taken all the stress tests I threw at it well, and the off-spec cooling on it is designed well and built better.
It comes with absolutely nothing in the box except a couple of adapters and a CD, which means the price for the OC 216-unit one was in line with the normal 260 from EVGA.

If I were to go just by my experience, I'd recommend the brand, to be honest.
A lot of the complaints from enthusiasts have equivalent bitching about other brands; Palit might just be getting more flak for one fairly unlucky model they put out, the 280, which gets most of the complaints from overclockers, but I'd say that's to be expected from a brand that sells OC models.
It's only normal that if they have OC offerings those end up with the better batches of GPUs, and the standard ones don't take to OCing or stress too well.

With early batches, while Nvidia is still perfecting the manufacturing process for a new transistor size, buying an OC model and clocking it back to normal (or not exceeding the factory OC) is actually decent insurance that it comes from a good batch and has solid memory on the board. A year or two into the process, when batches are usually very consistent, it becomes a bit more of a gimmick and not worth the money.

3DMadness
05-13-2009, 05:40 PM
Hey guys, I'm also buying a new computer, and I'm wondering about the videocard for working with 3ds Max.
I know that SLI is not worth it for the viewport, but what about when we start working with CUDA and PhysX? Can't SLI or a 295 help even in that case?

Cheers!

Flávio

Srek
05-13-2009, 06:03 PM
but what about when we start working with CUDA and PhysX? Can't SLI or a 295 help even in that case?
Ask again once this has been implemented and tried ;)
At this time, any answer to this would be speculation. Chances are it will work, but basing a buying decision on this right now is not a good idea.

Cheers
Björn

ThE_JacO
05-14-2009, 08:13 AM
Hey guys, I'm also buying a new computer, and I'm wondering about the videocard for working with 3ds Max.
I know that SLI is not worth it for the viewport, but what about when we start working with CUDA and PhysX? Can't SLI or a 295 help even in that case?

Cheers!

Flávio
Even when you work with CUDA, it depends on how threading-friendly the application is. CUDA does a really good job of managing that for the developer, but not everything will scale the same way.

I've started playing around with it at home, and I very rarely manage to cap my single-slot 216-unit 260.

And even with CUDA and CL in mind, there really isn't much out there that uses them to any noticeable extent among the all-rounder 3D apps.
I wouldn't worry too much about it this year; by the time your second videocard would be put to any use, enough time will have gone by that, with the money saved by not buying it, you'll probably be able to upgrade to a better single-slot configuration.
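
For the "not everything will scale the same way" point, a quick Amdahl's-law style estimate makes it concrete. The serial fractions below are invented for illustration; they are not measurements of any particular application.

```python
# Amdahl's law: overall speedup is capped by the fraction of work that stays serial
# (host-side scene evaluation, driver overhead, single-threaded plugin code, ...).

def speedup(serial_fraction, n_devices):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_devices)

# Assumed serial fractions, from a well-threaded solver to a mostly serial DCC workload.
for serial in (0.05, 0.30, 0.60):
    gain = speedup(serial, n_devices=2)
    print(f"serial part {serial:.0%}: second GPU gives {gain:.2f}x")
```

Only a workload whose GPU-side portion dominates gets anywhere near 2x from a second GPU (about 1.9x at 5% serial work, 1.25x at 60%); a typical DCC session with a large host-side chunk barely moves, which is why the advice above is to put the money into a single faster card.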

3DMadness
05-14-2009, 12:11 PM
Thanks for the replies, guys. PhysX is already being used in 3ds Max with Thinking Particles and will be available for Particle Flow with Box #2. But I think the single 285 will be my choice.
And Jaco, this computer will be used at a university where upgrades are really rare; that's why I was wondering which card to choose. ;)

rafaelbarriola
05-19-2009, 06:06 PM
Yup, the EVGA is a single 2048MB 512-bit GDDR3 GPU... I've got two here at the house, and the one in this system just loves to hammer away on large polygon sets. The most I've managed to get up to in Mudbox with this card and system is about 135 million polygons, and it's still very usable.

Save yourself the cash and get the regular GTX 285 2GB. The overclock on the SC is so minimal that you could easily use their Precision tool to OC the card to SC specs anyway.

Do you know if the same is true for ZBrush? I'm sorry for the lame question, but I've been reading this forum because I want to build a new workstation, and all the information here has been very good. I always thought that Mudbox or ZBrush would rely on the system memory, not the video card; I was planning to get 12GB of RAM just because of that.

Rafael

3DMadness
05-21-2009, 04:07 PM
Do you know if the same is true for ZBrush? I'm sorry for the lame question, but I've been reading this forum because I want to build a new workstation, and all the information here has been very good. I always thought that Mudbox or ZBrush would rely on the system memory, not the video card; I was planning to get 12GB of RAM just because of that.

Rafael
It relies on the system memory for storing the scene you're working on, the mesh, etc.; the video card will store things to be displayed, like textures. But Mudbox uses a lot of the video card, so it's always good to have some more memory there too, as long as it's fast memory; some builders like to put lots of slow RAM on a slow video card to sell to people who look only at the amount of RAM. ;)

rafaelbarriola
05-21-2009, 04:31 PM
It relies on the system memory for storing the scene you're working on, the mesh, etc.; the video card will store things to be displayed, like textures. But Mudbox uses a lot of the video card, so it's always good to have some more memory there too, as long as it's fast memory; some builders like to put lots of slow RAM on a slow video card to sell to people who look only at the amount of RAM. ;)

I will then get a single 285 with more memory. I wonder if the same is true for Maya; if it holds information in the video card's memory, that would be really helpful.

3DMadness, how's it going? I'm Brazilian too, heheh. I used to live in Balneário Camboriú and lived in Floripa for a few months.

ThE_JacO
05-22-2009, 01:35 AM
I will then get a single 285 with more memory. I wonder if the same is true for Maya; if it holds information in the video card's memory, that would be really helpful.

3DMadness, how's it going? I'm Brazilian too, heheh. I used to live in Balneário Camboriú and lived in Floripa for a few months.
ZBrush doesn't really do much with the video card; Mudbox, on the other hand, seriously hammers it.
As for Maya, it's not simply that "it holds information on the video card". How much difference the video card makes depends on what you're doing.
Some tasks are usually bottlenecked by the CPU (e.g. subd modelling), others by the GPU speed and bus (navigating a large environment), and others by how much memory you have (holding a ton of textures, brickmaps or many different sprites on various clouds).

rafaelbarriola
05-22-2009, 01:44 AM
ZBrush doesn't really do much with the video card; Mudbox, on the other hand, seriously hammers it.
As for Maya, it's not simply that "it holds information on the video card". How much difference the video card makes depends on what you're doing.
Some tasks are usually bottlenecked by the CPU (e.g. subd modelling), others by the GPU speed and bus (navigating a large environment), and others by how much memory you have (holding a ton of textures, brickmaps or many different sprites on various clouds).

That's good to know. I will mostly be animating for now (and I haven't seen a single difference between using a GeForce and a Quadro card), and later doing some modeling, primarily using ZBrush with Maya for textures; for that I was planning on getting 6 or 12GB on a new Core i7.

My biggest question is: what would make viewport animation run better, and not be in slow motion with some rigs? The character I'm animating now has only 4390 faces, and the FPS drops quite fast when I hit play.

Thanks for all the information. I was actually thinking of getting a GeForce 295 too, because at least gaming won't suck, but I'd rather have all the 3D apps working better than a few more FPS in games.

ThE_JacO
05-22-2009, 04:55 AM
That's good to know. I will mostly be animating for now (and I haven't seen a single difference between using a GeForce and a Quadro card), and later doing some modeling, primarily using ZBrush with Maya for textures; for that I was planning on getting 6 or 12GB on a new Core i7.
RAM is mostly important for rendering, simulation and comp. If you don't do that in profuse quantities then I'd say 6GB will do you, and you can always add the other 6 later.

My biggest question is: what would make viewport animation run better, and not be in slow motion with some rigs? The character I'm animating now has only 4390 faces, and the FPS drops quite fast when I hit play.
Rigging and moving those rigs is solely and entirely CPU in Maya (plus correlated elements like its cache and the bus to memory).
Your video card doesn't have a problem drawing those 5k triangles; what is slowing things down is your CPU having to pull the graph together on every update. Even with a 15-times-faster CPU, though, don't expect rigs to suddenly take off; there's a lot to those issues that isn't pure number-crunching but rather latency and waiting for things to happen. Also, a lot of that stuff is crap to parallelize, so it will hardly ever thread well onto multiple cores and is a lot more likely to use one for as much as it can.
It's not necessarily a bad thing to turn off hyperthreading on an i7 if all you're doing is animating and rigging.

Thanks for all the information. I was actually thinking of getting a GeForce 295 too, because at least gaming won't suck, but I'd rather have all the 3D apps working better than a few more FPS in games.
Get a 260 or something in that range and it will be plenty for animating even rather heavy scenes, and it will play games well.
A 9800 won't do badly either.

rafaelbarriola
05-22-2009, 05:58 AM
RAM is mostly important for rendering, simulation and comp. If you don't do that in profuse quantities then I'd say 6GB will do you, and you can always add the other 6 later.


Rigging and moving those rigs is solely and entirely CPU in Maya (plus correlated elements like its cache and the bus to memory).
Your video card doesn't have a problem drawing those 5k triangles; what is slowing things down is your CPU having to pull the graph together on every update. Even with a 15-times-faster CPU, though, don't expect rigs to suddenly take off; there's a lot to those issues that isn't pure number-crunching but rather latency and waiting for things to happen. Also, a lot of that stuff is crap to parallelize, so it will hardly ever thread well onto multiple cores and is a lot more likely to use one for as much as it can.
It's not necessarily a bad thing to turn off hyperthreading on an i7 if all you're doing is animating and rigging.


Get a 260 or something in that range and it will be plenty for animating even rather heavy scenes, and it will play games well.
A 9800 won't do badly either.

Thanks a lot, ThE_JacO. I was wrongly assuming that everything in the viewport was being calculated by the Quadro (or GeForce) cards and that the CPU only really made a difference in render time. I'll probably get a 285 and 6GB.

Thanks

3DMadness
05-22-2009, 03:18 PM
Hey Rafael, good to hear that people who used to live near me are now in the USA. :)

I agree with Jaco: what shows in the viewport is being calculated by the video card, but before sending the information to the viewport the software must calculate lots of things, like mesh deformation, and this is CPU-bound. So if you're working with character animation and have a top-of-the-line card like the 285, it will be waiting most of the time for the CPU to feed it the information; that's why he said a 260 would be enough. So in this case you should save the money to get the fastest CPU you can afford, or even buy good coolers and memory to overclock the CPU if you've got the time. ;)

rafaelbarriola
05-22-2009, 07:18 PM
Hey Rafael, good to hear that people who used to live near me are now in the USA. :)

I agree with Jaco: what shows in the viewport is being calculated by the video card, but before sending the information to the viewport the software must calculate lots of things, like mesh deformation, and this is CPU-bound. So if you're working with character animation and have a top-of-the-line card like the 285, it will be waiting most of the time for the CPU to feed it the information; that's why he said a 260 would be enough. So in this case you should save the money to get the fastest CPU you can afford, or even buy good coolers and memory to overclock the CPU if you've got the time. ;)

Yeah, that's the plan: fast CPU and memory, and the 285. I could get a 260, but I'd rather have something a little faster and with more memory; who knows, I've never tried Mudbox.

Yeah your city was great to live in, I miss it so much, and I visit whenever I can.

3DMadness
05-25-2009, 04:46 PM
Hey Rafael, that's a great choice and you'll get a card that will last a long time.

And if you work with Maya you'll like Mudbox; you should give it a try, it's very intuitive.

The next time you're around here, send me a message so we can have a beer. :D

Cheers!

Flávio

CGTalk Moderation
05-25-2009, 04:46 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.