GTX 680 2GB vs. GTX 580 3GB for this software?


  05 May 2012
Originally Posted by forelle: Hi

We are testing the 680 at the moment. We haven't found any problems and should have a decision very soon about supporting it. If you can hold off a week or so, we should be able to say yes/no.

Jack


I've just had the nod from the QAs that they're happy to support the 680. We've tested it on 1.4v3, and it will be officially supported by the upcoming versions.

With 1.4v3 on the 680 you should just be able to permanently ignore the warning dialog.
 
  05 May 2012
Originally Posted by forelle: I've just had the nod from the QAs that they're happy to support the 680. We've tested it on 1.4v3, and it will be officially supported by the upcoming versions.

With 1.4v3 on the 680 you should just be able to permanently ignore the warning dialog.


Thanks for the information. Good news.
 
  07 July 2012
So, what's better for work (in Maya and Mari, for example)? The GTX 580 3GB with its 384-bit memory interface (512 CUDA cores), or the GTX 680 4GB from Palit or EVGA (1536 CUDA cores) with a 256-bit memory interface? The price is almost identical...

 
  07 July 2012
I second that. I'm in the middle of deciding on a video card. How would the GTX 680 4GB compare to the GTX 580 3GB in viewport performance in Maya and Mudbox when dealing with multi-million-poly meshes? Also, which is better for GPU rendering, for example in V-Ray? It seems the GTX 580 is only preferred because it is more compatible, but doesn't that just depend on downloading up-to-date drivers and on the software manufacturers updating theirs?
 
  07 July 2012
Originally Posted by Jinian: I second that. I'm in the middle of deciding on a video card. How would the GTX 680 4GB compare to the GTX 580 3GB in viewport performance in Maya and Mudbox when dealing with multi-million-poly meshes? Also, which is better for GPU rendering, for example in V-Ray? It seems the GTX 580 is only preferred because it is more compatible, but doesn't that just depend on downloading up-to-date drivers and on the software manufacturers updating theirs?


Nvidia has been terrible since the GTX 400 series when it comes to the viewport; this is due to Nvidia deliberately gimping the consumer drivers to push people toward Quadro. I have the 580 and it's usable, but compared to my friend's mid-range ATI (sorry, I forgot the model) it's pathetic. There are lots of posts about this issue if you Google it, and since it used to be the other way around, a lot of people don't realize it and keep recommending Nvidia for 3D.

It's quite good for Maya's Viewport 2.0 and V-Ray RT, though.
 
  07 July 2012
For me the viewports are just fine. In my testing with V-Ray RT, the new GTX 680 isn't any better than the GTX 580.
__________________
The Z-Axis
 
  07 July 2012
Originally Posted by Ian151: Well, you should be able to have 2 690 cards with 8GB - 4x2GB


It would still be 2GB, because the GPUs don't share memory; each GPU needs its own copy of the scene data in its memory:

2 GTX 690 cards = 4 GPUs
8GB / 4 = 2GB per GPU
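
You can see this with a quick device query. Below is a minimal sketch using the standard CUDA runtime API (output varies by system): each GPU reports its own separate memory pool, which is why VRAM across cards doesn't add up for GPU rendering.

// Enumerate CUDA devices and report per-device memory.
// Each device's totalGlobalMem is a separate pool; scene data must be
// replicated onto every GPU used for rendering, so usable VRAM is the
// size of the smallest card, not the sum.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GB\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}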


Also, the 580 performs significantly better in CUDA, DirectCompute, OpenCL, etc...

 
  11 November 2012
Hi

Is it more important to have more VRAM or a more overclocked GPU?
I'm trying to make the right choice and I don't know which graphics card to buy.
My options are an SLI pair of MSI GTX 680 Twin Frozr III OC 4GB (1058-1124 MHz) or an SLI pair of ASUS GTX680-DC2G-4GD5 4GB (1006-1058 MHz).
Is it easy to overclock?

Thanks for your help
 
  12 December 2012
GTX 670 4GB vs GTX 580 3GB

Hi all,

Since I've found so little info about this comparison, I'm posting my first results here:

3ds Max 2013 Update 6, V-Ray RT 2.30.02
Latest NVIDIA drivers at the moment: 306.97
Intel Core i7 920 @ 2.66 GHz, 12 GB RAM
Windows 7 x64

The scene is a simple one, just for testing: no textures, just teapots, planes, cylinders... primitives.
V-Ray physical camera
Some mesh lights, some self-illuminated materials, one sunlight
DoF and motion blur activated, bokeh FX active
Resolution half HD (1280x720), 256 paths

These are my timings:

GTX 580: CUDA 69 s / OpenCL 58 s
GTX 670: CUDA 106 s / OpenCL 69 s
Both in parallel: CUDA 44 s / OpenCL 33 s

The results are interesting. Definitely, for me at least, OpenCL in this scene is way faster than CUDA rendering. But, and this is weird (though not too surprising, actually), the difference between the two cards (with the GTX 580 always faster) becomes smaller with OpenCL rendering, as if CUDA rendering is not working properly on the Kepler GTX 670. Driver issues?


This might give a rough idea of how the two cards currently compare in V-Ray RT.
Hope it helps somebody. Opinions?
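
For what it's worth, the parallel numbers are close to ideal scaling. Here's a quick sanity check (plain host-side arithmetic using the timings above; "ideal" assumes the two cards' render rates simply add):

// Compare measured dual-GPU times against the ideal combined time
// for two cards rendering the same frame: t_ideal = 1 / (1/t_a + 1/t_b).
#include <cstdio>

static double ideal(double ta, double tb) { return 1.0 / (1.0 / ta + 1.0 / tb); }

int main() {
    // Timings from the post above, in seconds.
    double cudaIdeal = ideal(69, 106);  // ~41.8 s vs 44 s measured
    double oclIdeal  = ideal(58, 69);   // ~31.5 s vs 33 s measured
    printf("CUDA:   ideal %.1f s, efficiency %.0f%%\n", cudaIdeal, 100 * cudaIdeal / 44);
    printf("OpenCL: ideal %.1f s, efficiency %.0f%%\n", oclIdeal, 100 * oclIdeal / 33);
    return 0;
}

Both back-ends come in around 95% of ideal, so the multi-GPU path itself seems to be working fine; the oddity is the 670's single-card CUDA time.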
 
  12 December 2012
Originally Posted by Mork74: The results are interesting. Definitely, for me at least, OpenCL in this scene is way faster than CUDA rendering... the difference between the two cards (with the GTX 580 always faster) becomes smaller with OpenCL rendering, as if CUDA rendering is not working properly on the Kepler GTX 670. Driver issues?


The 670's cores are not as fast as the 580's; as far as I know, the 6xx series is focused more on using less power per core than on brute power per core like the 5xx series.
 
  12 December 2012
GTX 670 vs GTX 580

Yes, true. A positive note is that the GTX 670 runs very cool compared with the GTX 580 and consumes much less power (for me this is important; I'm not after maximum power at ANY cost; I still have to check in detail with a power meter): about 50°C vs 80°C for the GTX 580 under stress.
That is in a small air-cooled Cooler Master case, nothing special, and pretty full (3 hard drives, DVD-R, 2 GTX cards, etc.).
And if you look at OpenCL, the performance hit is not too big, so I'm not exactly disappointed: I have a card which consumes much less and runs much cooler (with another GTX 580, a change of case or cooling system might have been necessary), and which renders just a bit slower.
It also costs less than the 580 did in its golden age, because it's not the top of the line.
I presume the GTX 680 would perform like the 580 but run cooler with less power consumption. Not bad at all!
 
  12 December 2012
Are there any pages running tests like this as part of reviews on a regular basis?

Is the 690/680 really slower than the 580 with CUDA? I'm doing fluid sims etc., so I'm more curious about that than about how well all the game tests run; I know they will push polys around just fine.
 
  12 December 2012
From Nvidia

Quote: But to set expectations, you should not expect the initial Kepler products (out now) to deliver a dramatic speed increase for iray over their Fermi generation predecessors. While Kepler has many more cores than Fermi, they run at much lower power, which means they have less performance per core. The gain you are guaranteed to see is superior performance per watt. This also makes it much easier to fit larger or more GPUs into power-constrained systems.

As for viewport/raster performance, it's quite possible that many high-end GPUs will not report high usage unless your scene is really taxing the GPU, most likely with many programmable shaders. High face counts and texture usage impact memory far more than they do workload. The graphics pipeline itself can also have a bottleneck. Here NVIDIA is working with Autodesk (and many other companies) to eliminate unnecessary data transfers that hold back the GPU. This is not a reflection on Autodesk's ability to design but rather how much more rapidly GPUs have evolved than CPUs, as these practices were often negligible to performance a GPU generation or two ago. The good news is that once these "speed bumps" get removed, all modern GPUs should benefit.

- Phil Miller
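
To put rough numbers on the "more cores at lower power" point, here is a back-of-the-envelope sketch (reference-card clocks; peak FLOPS ignores the scheduling and register-file differences that hurt early Kepler in real compute workloads):

// Rough peak single-precision throughput: cores x 2 (FMA) x clock (GHz).
// Fermi's shader "hot clock" runs at twice the core clock; Kepler drops
// it, so each Kepler core does less work per second.
#include <cstdio>

int main() {
    double gflops580 = 512  * 2 * 1.544;  // GTX 580, 1544 MHz shader clock: ~1581 GFLOPS
    double gflops680 = 1536 * 2 * 1.006;  // GTX 680, 1006 MHz base clock:   ~3090 GFLOPS
    printf("GTX 580: %.0f GFLOPS peak (%.1f per core)\n", gflops580, gflops580 / 512);
    printf("GTX 680: %.0f GFLOPS peak (%.1f per core)\n", gflops680, gflops680 / 1536);
    return 0;
}

On paper the 680 has roughly twice the peak throughput, yet the V-Ray RT timings earlier in the thread show the 580 ahead, which shows how far delivered compute could lag behind paper FLOPS on early Kepler drivers, in line with the caution about expectations above.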
 
  12 December 2012
Hmmm. I guess I will just have to roll the dice and do my own tests and see how it goes.

For a lot of what I really want the speedups for, any card will run out of memory and fall back to the CPU anyway right now. I'm hoping efficient multi-GPU support makes it into the software side soon enough.
 