The benefit of a Quadro vs GeForce?


Old 11 November 2011   #16
Originally Posted by TheGrak: Those 580 GeForce cards are nice. The 590 has 3GB of VRAM.
The high-end Quadro cards' only advantage is the larger VRAM.

Take a look at the GPU capabilities:
The price on the Quadro 6000 is $4k, and you get 448 stream processors.
The price on the 590 is $750, and you get 1024 processor cores.

Unless the Quadro's processors are more than twice as fast as the 590's, I'd say two SLI'd 590s would be a better GPU setup than one Quadro, unless your scene is larger than 3GB.


^ That's the truth. Don't think that your SLI'd cards are sharing memory.

Unless you are loading massively huge scenes onto the GPUs, it doesn't make sense to buy a high-end Quadro. (And that's coming from a Quadro 4000 owner and user.)

The 590 actually isn't a good choice: it's technically two cards put together, and Max can't take advantage of both (since it's like SLI), so it's better to get a GTX 580, which has a faster processor and also comes in a 3GB version.
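For illustration only, here is a back-of-the-envelope version of the throughput-per-dollar argument quoted above, using the prices and core counts as posted. It is a minimal sketch: it deliberately ignores clock speeds, architecture, drivers and VRAM, which are exactly the caveats raised elsewhere in this thread.

```python
# Rough, illustrative throughput-per-dollar comparison using the figures
# quoted in this thread. Real-world performance also depends on clocks,
# architecture, drivers, VRAM and whether the application can use every GPU.
cards = {
    "Quadro 6000": {"price_usd": 4000, "cores": 448},
    "GTX 590":     {"price_usd": 750,  "cores": 1024},  # 2 GPUs x 512 cores
}

for name, spec in cards.items():
    cores_per_dollar = spec["cores"] / spec["price_usd"]
    print(f"{name}: {spec['cores']} cores / ${spec['price_usd']} "
          f"= {cores_per_dollar:.2f} cores per dollar")

# Approximate output:
#   Quadro 6000: 448 cores / $4000 = 0.11 cores per dollar
#   GTX 590: 1024 cores / $750 = 1.37 cores per dollar
```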
__________________
The Z-Axis
 
Old 12 December 2011   #17
This is just my personal experience, but the Quadro cards seem to do better with the lines showing, so wireframe is better with Quadro, whereas the gaming cards seem to be better with shaded solid objects.
__________________
www.jd3d.co.uk - I'm available for freelance work.
 
Old 12 December 2011   #18
I will never waste money on a Quadro again, unless Nvidia uses some low-down dirty tactics like disabling the GeForces somehow. There are a few advantages to a Quadro, just not $3,500 per card worth of advantages for me.
 
Old 12 December 2011   #19
Originally Posted by mynewcat: This is just my personal experience, but the Quadro cards seem to do better with the lines showing, so wireframe is better with Quadro, whereas the gaming cards seem to be better with shaded solid objects.



This is kind of a retro experience IMHO, as it was true with OpenGL. Now that the focus is 100% on Direct3D and its geometry caching, wireframe is blazing fast. Just be sure to disable backface culling, as in wireframe it is done on the CPU side...
Maybe Quadros plus the performance driver were faster with overlaid edge display etc. (just speculation, no firsthand experience), but as the performance drivers are obsolete too now on Max 2012 and up, that wouldn't make an argument pro Quadro either...
__________________

PowerPreview: High Quality Nitrous Previews for 3ds Max 2012|2013|2014

[ Free Download (ScriptSpot) ]

Home of The Frogs | Online Portfolio
 
Old 12 December 2011   #20
They seem to be stuck with a legacy mindset; that is the only reason they still sell them. In the old days it mattered, these days it doesn't. If OpenGL were still a player you can bet a Quadro would be better, much like a 3Dlabs Wildcat was; with DX, forget about it. I am not convinced.

How do you justify the cost of it? Games are all built on DX, and viewports are pretty much strictly developed in DX now, so there really can't be that much of a difference (I am sure there are a few nuances).

I have been fortunate enough to use most of them in one form or another, and since the advent of high-end game cards, VRAM really is the only perceivable difference. In fact I have had better performance on my GTX 480 than on my Quadro 5000 in most circumstances that do not involve GPU + textures (i.e. iRay, etc.). It seems the only major advantage is "real-time" rendering.

Even that is weird: why can't texture memory be split up over SLI? This seems artificial. If SLI can draw a portion of your monitor, an RT render should be able to render a portion of the shot, right? I.e. the first half on one card, the second half on the other. Rendering is all predetermined; it is not like running a physics simulation where A affects B in relation to time.
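Purely as a conceptual sketch of that last point (the function names and the row-splitting scheme here are hypothetical, not any real renderer's API): because every scanline of a predetermined render can be computed independently, a frame can be carved into contiguous row ranges and each range handed to a different GPU.

```python
# Conceptual sketch only: hand each GPU a contiguous block of scanlines
# (first half / second half), since every row of a predetermined render
# can be computed independently. No real GPU or renderer API is used here.
HEIGHT = 1080
DEVICES = ["gpu0", "gpu1"]

def split_rows(height, devices):
    """Assign contiguous scanline ranges to each device."""
    chunk = height // len(devices)
    return {
        dev: range(i * chunk, height if i == len(devices) - 1 else (i + 1) * chunk)
        for i, dev in enumerate(devices)
    }

for dev, rows in split_rows(HEIGHT, DEVICES).items():
    print(f"{dev} renders rows {rows.start}..{rows.stop - 1}")

# gpu0 renders rows 0..539
# gpu1 renders rows 540..1079
```

The catch, as pointed out earlier in the thread, is that each GPU still needs its own full copy of the scene and textures in its own VRAM, which is why SLI'd cards don't add their memory together.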
__________________
poof ~>Vimeo<~
 
Old 12 December 2011   #21
Well,
I don't know the performance of the GTX 580, but my Quadro 4000 seems to perform very well. I get smooth navigation in the viewports with Nitrous in a 12-million-poly scene, so I don't know what to think about that. I have a dual Xeon, but I don't think it's helping in the viewports. So what do you think about such performance? I think it's pretty strong, but I'm curious to hear about the 580's FPS in Nitrous with such a scene. Can anyone who owns one post an FPS test?
Thank you
 
Old 12 December 2011   #22
FWIW, in an iRay class at AU, a representative from NVidia (forgot the name, sorry) and the instructor said that the primary difference between gaming and workstation NVidia cards was that the workstation cards were more stable under long-term load.

Also, I learned that the 3GB 580 was really two 1.5GB cards mashed together.

...and knowing is half the battle.
__________________
www.davetyner.com
 
Old 12 December 2011   #23
Originally Posted by em3: Also, I learned that the 3GB 580 was really two 1.5GB cards mashed together.
Depends on the vendor. There are native 3GB cards, and there are 580 x2 cards, which are like the 590: two GPUs in a single configuration.

-Eric
__________________
"The Evil Monkey hiding in your closet."
 
Old 12 December 2011   #24
Originally Posted by PiXeL_MoNKeY: Depends on the vendor. There are native 3GB cards, and there are 580 x2 cards, which are like the 590: two GPUs in a single configuration.

-Eric


Great to know, Eric, thanks. How can one tell which configuration they have?
__________________
www.davetyner.com
 
Old 12 December 2011   #25
Originally Posted by darthviper107: The 590 actually isn't a good choice: it's technically two cards put together, and Max can't take advantage of both (since it's like SLI), so it's better to get a GTX 580, which has a faster processor and also comes in a 3GB version.


3ds Max will utilize *every* compatible GPU; it will use both, or even more. I work with systems with up to 32 GPUs.

SLI should be disabled, as what gives SLI its advantage is actually a bottleneck for GPU rendering. It isn't a huge difference, but every bit helps. It will work fine with SLI, just not as fast.

As to the GeForce vs. Quadro/Tesla debate: a Quadro/Tesla card can run under heavy loads for long periods (much longer than a GeForce). I have seen several GeForce cards burn up under iRay rendering. I have had my 1x Quadro 5000 + 3x Tesla 2050 Cubix box rendering for a couple of days straight with no issues. In server racks this can be a very important advantage. The Quadro/Tesla cards also offer very high memory limits; I have a Tesla 2075 with 6GB of memory.
__________________
Twitter: @Kelly_Michels
kelly.michels@autodesk.com
3ds Max Senior QA / 3ds Max Beta Manager
M&E Division Beta Administrator
Autodesk, Inc

Last edited by KellyM : 12 December 2011 at 02:34 AM. Reason: Added Tesla memory amount
 
Old 12 December 2011   #26
Originally Posted by em3: Great to know, Eric, thanks. How can one tell which configuration they have?
If you check the product page and they list the number of CUDA cores, that is usually the best indication (per card, the 5xx series has a maximum of 512 cores; anything more is typically a 2-in-1 card). I have never used any of the 2-in-1 cards, so I'm not sure how they show up in the system. You can also check the Nvidia page for the reference card/product specification; anything outside the norm may indicate a 2-in-1 card by the add-in card manufacturer.
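One practical check, assuming a reasonably recent Nvidia driver (which ships the nvidia-smi command-line tool): a 2-in-1 board such as the 590 enumerates as two separate GPUs, each reporting its own memory, whereas a native 3GB 580 shows up as a single GPU with ~3GB. A minimal sketch:

```python
# Rough sketch: list every GPU the Nvidia driver exposes. A dual-GPU board
# such as the GTX 590 shows up as two entries, each with its own ~1.5GB,
# whereas a native 3GB 580 shows up as one entry with ~3GB.
# Assumes the nvidia-smi tool from the Nvidia driver is on the PATH.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    print(line)  # e.g. "0, GeForce GTX 590, 1536 MiB"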

-Eric
__________________
"The Evil Monkey hiding in your closet."
 
Old 12 December 2011   #27
Originally Posted by em3: FWIW, in an iRay class at AU, a representative from NVidia (forgot the name, sorry) and the instructor said that the primary difference between gaming and workstation NVidia cards was that the workstation cards were more stable under long-term load.

Also, I learned that the 3GB 580 was really two 1.5GB cards mashed together.

...and knowing is half the battle.


I was actually the person in the audience to bring that up, and I said 590, not 580...though it sounds like there are also 580 cards that may be set up the same (didn't know that)!

Just thought I'd point that out.
 
Old 12 December 2011   #28
Originally Posted by KellyM: 3ds Max will utilize *every* compatible GPU, it will use both or even more. I work with systems with up to 32 GPUs.


I think all he meant was that it (the GTX 590) wasn't a good choice if you thought you were getting a card with 3GB of VRAM, because you aren't.

Last edited by jfincher : 12 December 2011 at 04:07 PM.
 
Old 12 December 2011   #29
In reality the 590 could be faster than a single 580, as you get 512x2 CUDA cores (at ~79% of the 580's core clock) and 1536MBx2 of VRAM (at ~85% of the 580's memory clock), based on the Nvidia reference specs. As long as the scene can fit in the 1536MB memory space, it could be better than the 580 3GB because you have more processing power.
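To make that concrete, here is a rough effective-throughput estimate from those reference numbers. It is illustrative only and assumes the workload scales perfectly across both GPUs of the 590, which holds for GPU rendering far better than for viewport drawing:

```python
# Back-of-the-envelope estimate from the reference specs quoted above.
# Assumes perfect scaling across both GPUs of the 590, which only roughly
# holds for workloads like GPU rendering, not for viewport display.
gtx580 = {"cores": 512,     "clock_rel": 1.00}  # baseline
gtx590 = {"cores": 2 * 512, "clock_rel": 0.79}  # 2 GPUs at ~79% of the 580's clock

def relative_throughput(card, baseline):
    return (card["cores"] * card["clock_rel"]) / (baseline["cores"] * baseline["clock_rel"])

print(f"GTX 590 vs GTX 580: ~{relative_throughput(gtx590, gtx580):.2f}x")
# -> ~1.58x, provided the scene fits in each GPU's 1536MB
```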

-Eric
__________________
"The Evil Monkey hiding in your closet."
 
Old 05 May 2012   #30
So what's the conclusion?!

I'm finding it a nightmare trying to decide which card to buy next; every discussion on this topic has conflicting opinions. One minute I'm thinking GTX 580, next Quadro 5000, next GTX 285...
 