Old 05-11-2012, 05:35 PM   #31
forelle
Know-it-All
Jack Greasley
London , United Kingdom
 
Join Date: Mar 2010
Posts: 340
Quote:
Originally Posted by forelle
Hi

We are testing the 680 at the moment. We haven't found any problems and should have a decision very soon about supporting it. If you can hold off a week or so, we should be able to say yes/no.

Jack


I've just had the nod from the QAs that they're happy to support the 680. We've tested it on 1.4v3, and it will be officially supported in upcoming versions.

With 1.4v3 on the 680 you should just be able to permanently ignore the warning dialog.
 
Old 05-11-2012, 05:39 PM   #32
shokan
Know-it-All
Peter Van Aken
Canada
 
Join Date: Dec 2001
Posts: 448
Quote:
Originally Posted by forelle
I've just had the nod from the QAs that they're happy to support the 680. We've tested it on 1.4v3, and it will be officially supported in upcoming versions.

With 1.4v3 on the 680 you should just be able to permanently ignore the warning dialog.


Thanks for the information. Good news.
 
Old 05-12-2012, 01:22 AM   #33
shokan
Know-it-All
Peter Van Aken
Canada
 
Join Date: Dec 2001
Posts: 448
EVGA GTX 680 4GB released

 
Old 07-07-2012, 08:57 AM   #34
Fomen
Explorer
 
Mikhail Fomenko
Melnitsa animation studio
St.Petersburg, Russia
 
Join Date: Jun 2011
Posts: 18
So, what's better for work (in Maya and Mari, for example)? A GTX 580 3GB with a 384-bit memory interface (512 CUDA cores), or a GTX 680 4GB from Palit or EVGA (1536 CUDA cores) with a 256-bit memory interface? The price is almost identical...

Last edited by Fomen : 07-07-2012 at 10:40 AM.
 
Old 07-26-2012, 08:36 AM   #35
Jinian
Freelance 3d artist
 
Jin Hao Villa
character artist/digital sculptor
Philippines
 
Join Date: May 2004
Posts: 534
I second that. I'm in the middle of deciding on a video card. How would the GTX 680 4GB compare to the GTX 580 3GB in viewport performance in Maya and Mudbox when dealing with multi-million-poly meshes? Also, what's better for GPU rendering, as in V-Ray? It seems the GTX 580 is preferred only because it's more compatible, but doesn't that just depend on downloading up-to-date drivers and having the software vendors update theirs?
 
Old 07-26-2012, 01:59 PM   #36
nsb
Frequenter
-
Melbourne, Australia
 
Join Date: May 2003
Posts: 203
Quote:
Originally Posted by Jinian
I second that. I'm in the middle of deciding on a video card. How would the GTX 680 4GB compare to the GTX 580 3GB in viewport performance in Maya and Mudbox when dealing with multi-million-poly meshes? Also, what's better for GPU rendering, as in V-Ray? It seems the GTX 580 is preferred only because it's more compatible, but doesn't that just depend on downloading up-to-date drivers and having the software vendors update theirs?


Nvidia has been terrible since the GTX 400 series when it comes to the viewport; this is due to Nvidia deliberately gimping the consumer drivers to push Quadro cards. I have the 580 and it's usable, but compared to my friend's mid-range ATI (sorry, I forgot the model) it's pathetic. There are lots of posts about this issue if you Google it, and since it used to be the other way around, a lot of people don't realize it and keep recommending Nvidia for 3D.

It's quite good for Maya's Viewport 2.0 and V-Ray RT, though.
 
Old 07-26-2012, 04:40 PM   #37
darthviper107
Expert
 
Zachary Brackin
3D Artist
Precocity LLC
Dallas, USA
 
Join Date: Feb 2004
Posts: 3,955
For me the viewports are just fine. In testing with V-Ray RT, the new GTX 680 isn't any better than the GTX 580.
__________________
The Z-Axis
 
Old 07-26-2012, 04:55 PM   #38
CHRiTTeR
On the run!
 
Chris
Graphic designer extraordinaire
Belgium
 
Join Date: Feb 2002
Posts: 4,381
Quote:
Originally Posted by Ian151
Well, you should be able to have 2 690 cards with 8GB - 4x2GB


It would still be 2GB, because the GPUs don't share memory; each GPU needs its own copy of the data in its memory.

2 GTX 690 cards = 4 GPUs
8GB / 4 = 2GB per GPU

Also, the 580 performs significantly better in CUDA, DirectCompute, OpenCL, etc...
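
For anyone who wants to verify the memory point on their own machine, here is a minimal CUDA sketch (assuming the CUDA toolkit is installed; error checking omitted for brevity) that enumerates the installed GPUs and prints each one's separate memory pool. A dual-690 rig shows up as four devices of 2GB each, not one shared 8GB pool:

[code]
// Minimal sketch: each GPU reports its own memory pool;
// memory is not shared or pooled across cards.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);  // per-device figures
        printf("GPU %d: %zu MB total, %zu MB free\n",
               dev, totalBytes >> 20, freeBytes >> 20);
    }
    return 0;
}
[/code]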

Last edited by CHRiTTeR : 07-26-2012 at 04:58 PM.
 
Old 11-19-2012, 09:27 AM   #39
ZoneArkos
New Member
Arkos
Mine, France
 
Join Date: Nov 2012
Posts: 1
Hi

Is it more important to have more VRAM or a higher-clocked GPU? I'm trying to make the right choice and I don't know which graphics card to buy. My options are SLI MSI GTX 680 Twin Frozr III OC 4GB cards (1058-1124 MHz) or SLI Asus GTX680-DC2G-4GD5 4GB cards (1006-1058 MHz). Are they easy to overclock?

Thanks for your help
 
Old 12-12-2012, 05:37 PM   #40
Mork74
New Member
Massimo
italia, Italy
 
Join Date: Aug 2005
Posts: 10
GTX 670 4GB vs GTX 580 3GB

Hi all,

Since I've found so little info on this comparison, I'm posting my first results here:

3ds Max 2013 Update 6, V-Ray RT 2.30.02
Latest NVIDIA drivers at the moment: 306.97
Intel Core i7 920 @ 2.66 GHz, 12 GB RAM
Windows 7 x64

The scene is a simple one, just for testing: no textures, just primitives (teapots, planes, cylinders), a V-Ray physical camera, some mesh lights, some self-illuminated materials, and one sunlight. DOF and motion blur are active, with bokeh effects on. Resolution is half HD (1280x720) at 256 paths.

These are my timings:

GTX 580:          CUDA 69 s,  OpenCL 58 s
GTX 670:          CUDA 106 s, OpenCL 69 s
Both in parallel: CUDA 44 s,  OpenCL 33 s

The results are interesting. For me, at least, OpenCL is way faster than CUDA rendering in this scene. And, weirdly (though maybe not that weirdly), the gap between the two cards (with the GTX 580 always faster) shrinks under OpenCL, as if CUDA rendering isn't working properly on the Kepler GTX 670. Driver issues?

This might give a rough idea of how the two currently compare in V-Ray RT. Hope it helps somebody. Opinions?
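
If anyone wants to sanity-check the CUDA path outside V-Ray, a minimal sketch using CUDA events is the usual way to time identical work on each card (the saxpy kernel here is just a hypothetical stand-in workload, not anything V-Ray actually runs):

[code]
// Minimal sketch: time one kernel launch with CUDA events.
// Run it with each card selected to compare raw CUDA throughput.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;            // ~16M elements
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);       // wait for the kernel to finish

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("saxpy: %.3f ms\n", ms);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
[/code]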
 
Old 12-12-2012, 06:26 PM   #41
BigPixolin
Banned
 
c none
US
 
Join Date: Jun 2007
Posts: 2,764
Quote:
Originally Posted by Mork74
These are my timings:

GTX 580:          CUDA 69 s,  OpenCL 58 s
GTX 670:          CUDA 106 s, OpenCL 69 s
Both in parallel: CUDA 44 s,  OpenCL 33 s

[...] the gap between the two cards (with the GTX 580 always faster) shrinks under OpenCL, as if CUDA rendering isn't working properly on the Kepler GTX 670. Driver issues?


The 670's cores are not as fast as the 580's. As far as I know, the 6xx series focuses more on using less power per core than on the brute per-core performance of the 5xx series.
 
Old 12-12-2012, 06:46 PM   #42
Mork74
New Member
Massimo
italia, Italy
 
Join Date: Aug 2005
Posts: 10
GTX 670 vs GTX 580

Yes, true. A positive note is that the GTX 670 runs very cool compared with the GTX 580 and consumes much less power (important for me; I'm not after maximum power at any cost, though I still have to check the draw in detail with a meter): roughly 50 °C vs 80 °C under stress for the GTX 580. That's in a small air-cooled Cooler Master case, nothing special, and pretty full (3 hard drives, a DVD writer, 2 GTX cards, etc.).

And as you can see, under OpenCL the performance hit isn't too big, so I'm not exactly disappointed: I have a card that consumes much less and runs much cooler (with another GTX 580 I'd probably have needed a new case or a change to the cooling system), and it renders only a bit slower. It also costs less than the 580 did in its golden age, since it's not the top of the line.

I presume the GTX 680 would perform like the 580 but run cooler with less power consumption. Not bad at all!
 
Old 12-12-2012, 07:08 PM   #43
hypercube
frontier psychiatrist
 
Daryl Bartley
vfx goon / gfx ho
hypercube
Los Angeles, USA
 
Join Date: May 2002
Posts: 4,050
Are there any sites running tests like this as part of their regular reviews?

Is the 690/680 really slower than the 580 with CUDA? I'm doing fluid sims and the like, so I'm more curious about that than about how well all the game tests run; I know these cards will push polys around just fine.
 
Old 12-12-2012, 07:28 PM   #44
BigPixolin
Banned
 
c none
US
 
Join Date: Jun 2007
Posts: 2,764
From Nvidia

Quote:
But to set expectations, you should not expect the initial Kepler products (out now) to deliver a dramatic speed increase for iray over their Fermi generation predecessors. While Kepler has many more cores than Fermi, they run at much lower power, which means they have less performance per core. The gain you are guaranteed to see is superior performance per watt. This also makes it much easier to fit larger or more GPUs into power-constrained systems.

As for viewport/raster performance, it’s quite possible that many high end GPUs will not report high usage unless your scene is really taxing the GPU, most likely with many programmable shaders. High face counts and texture usage impact memory far more than they do workload. The graphics pipeline itself can also have a bottleneck. Here NVIDIA is working with Autodesk (and many other companies) to eliminate unnecessary data transfers that hold back the GPU. This is not a reflection on Autodesk’s ability to design but rather how much more rapidly GPUs have evolved than CPUs, as these practices were often negligible to performance a GPU generation or two ago. The good news is that once these “speed bumps” get removed, all modern GPUs should benefit.

- Phil Miller, NVIDIA
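
The per-core tradeoff Phil describes is visible from the CUDA runtime itself; here is a minimal sketch (assuming the toolkit is installed) that prints each card's multiprocessor count and clock. Note that cores per multiprocessor differ by architecture (32 on Fermi's compute 2.0, 192 on Kepler's compute 3.0), which is why Kepler reports fewer, wider SMs at a much lower clock:

[code]
// Minimal sketch: print the architectural numbers behind the
// Fermi-vs-Kepler tradeoff (SM count, clock, compute capability).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // clockRate is reported in kHz
        printf("%s: %d multiprocessors @ %d MHz, compute %d.%d\n",
               prop.name, prop.multiProcessorCount,
               prop.clockRate / 1000, prop.major, prop.minor);
    }
    return 0;
}
[/code]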
 
Old 12-12-2012, 07:56 PM   #45
hypercube
frontier psychiatrist
 
Daryl Bartley
vfx goon / gfx ho
hypercube
Los Angeles, USA
 
Join Date: May 2002
Posts: 4,050
Hmmm. I guess I will just have to roll the dice and do my own tests and see how it goes.

For a lot of what I really want the speedups for, any card will run out of memory and end up back on the CPU anyway right now. I am hoping efficient multi-GPU use gets into the software side soon enough.
 