Integrated Octane Render Engine coming up...

Old 02 February 2013   #31
The specs show that the 660 has much higher CUDA scores than the 580. Would this result in better performance in CUDA-accelerated apps, or would the 580 still do better? The Otoy site seems to recommend the 580.
__________________
"...if you have faith as small as a mustard seed... Nothing will be impossible for you."
 
Old 02 February 2013   #32
The 580 is significantly better than the 660 Ti (about 80% faster in CUDA tasks). These cards are artificially limited by Nvidia to ensure they can still sell their Quadro and Tesla pro cards; the 6xx-series cards are generally around 60% worse at compute than their 5xx equivalents.

Source

It's always worth doing some research on these things; while the 660 Ti was a good step up for gaming, it's not a good chip for any compute work.

This doesn't apply to every scenario, and it's all info I've read myself, so take it as you will. I just wanted to let people know it's not always a clear boost from one generation to the next.

There's a card based on the GK110 chip rumoured for release soon which should have very, very good CUDA capability, but it'll have the price tag to match.

cheers
brasc
__________________
brasco on vimeo
 
Old 02 February 2013   #33
Thanks for the info, Jon. I'd never have known that, since based on the specs on geforce.com the 660 Ti seems to be the better card, particularly for CUDA:

GTX 580:
512 CUDA Cores
772 MHz Graphics Clock
49.4 billion/sec Texture Fill Rate

GTX 660 Ti:
1344 CUDA Cores
915 MHz Base Clock
102.5 billion/sec Texture Fill Rate

I guess that once again proves you can't trust specs alone.
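For what it's worth, a back-of-the-envelope throughput estimate from those specs does favour the 660 Ti on paper. This is a rough sketch only, assuming peak single-precision throughput = cores × 2 ops/clock × clock, and that Fermi's (GTX 580) CUDA cores run at roughly twice the graphics clock; real CUDA performance depends heavily on architecture:

```python
# Rough theoretical single-precision throughput: cores * 2 ops/clock * clock (GHz).
# On Fermi (GTX 580) the CUDA cores run at the shader clock, ~2x the graphics clock;
# on Kepler (660 Ti) they run at the base clock. Illustrative estimate only.
def gflops(cores, clock_ghz):
    return cores * 2 * clock_ghz

gtx580 = gflops(512, 0.772 * 2)   # shader clock ~1.544 GHz
gtx660ti = gflops(1344, 0.915)

print(f"GTX 580:    ~{gtx580:.0f} GFLOPS")
print(f"GTX 660 Ti: ~{gtx660ti:.0f} GFLOPS")
```

The 660 Ti comes out well ahead on paper, which is exactly why real CUDA benchmarks, not headline specs, are the thing to compare.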
__________________
"...if you have faith as small as a mustard seed... Nothing will be impossible for you."
 
Old 02 February 2013   #34
Here's a thread comparing the 580 vs 660ti:
http://forums.cgsociety.org/showthr...ight=Nvidia+gtx

It's interesting (and enviable!) that some PCs are getting a Cinebench OpenGL score of 51.97 with the 660 Ti!
__________________
"...if you have faith as small as a mustard seed... Nothing will be impossible for you."
 
Old 02 February 2013   #35
That's because the OpenGL benchmark in Cinebench is hugely reliant on the CPU: faster CPU = faster OGL score. They didn't post what they were getting in Cinebench before, either, so it's a moot comparison.

If you want a card just for Cinema 4D viewport improvements, go ATI; they're much better at that specific task. I use Octane and other CUDA-reliant applications, so I need an Nvidia card; otherwise I'd go ATI.

cheers
brasc
__________________
brasco on vimeo
 
Old 02 February 2013   #36
My GTX 580 that has 3GB of memory gets a Cinebench OGL score of 54.04.

That's on a PC build with a 3930K cpu.

And the viewport editing is pretty snappy.
__________________
Les Sandelman
 
Old 02 February 2013   #37
Forgot to mention, the 3930K cpu has a mild OC.
__________________
Les Sandelman
 
Old 02 February 2013   #38
Unfortunately, there is no single number that can be compared between cards of different generations to determine performance. A 5-series CUDA core is not equal to a 6-series CUDA core, nor to a 4-series one.
Exactly the same now applies to CPUs as well: two 3.2 GHz i7s of different series will perform differently. We would all like comparisons to be simple, but the only way is to compare benchmarks between different cards, and there are far fewer CG benchmarks than gaming ones.

If you know how Nvidia's naming convention works (and you can certainly be forgiven for not knowing), then you shouldn't expect a 660 Ti to compete with the much more expensive 580, but with the 560 Ti, which it does.
Also, CUDA performance and OpenGL performance can differ quite widely for a given card; for the record, my 660 Ti gets 60 fps on the OpenGL benchmark, but my 3930K is running at 4.4 GHz.
I've also noticed in the past that different drivers can make quite a difference.

When choosing cards for Octane, you should bear in mind that the entire scene and its textures have to fit into the card's memory, so unlike for gaming, it can be worth getting a mid-range card with a bit more memory than standard.
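As a rough sanity check on whether a scene fits in VRAM, a back-of-the-envelope estimate is enough. This is a sketch with illustrative figures; Octane's actual memory layout and internal formats will differ:

```python
# Rough VRAM estimate for a GPU renderer: geometry plus uncompressed textures
# must fit on the card alongside the framebuffer. All figures are illustrative.

def texture_mb(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed texture size in MB (e.g. 8-bit RGBA)."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)

def triangles_mb(count, bytes_per_tri=36):
    """~3 vertices * 3 floats per triangle, unindexed."""
    return count * bytes_per_tri / (1024 ** 2)

# A hypothetical scene: 2M triangles and twenty 2K RGBA textures.
scene = triangles_mb(2_000_000) + 20 * texture_mb(2048, 2048)
print(f"~{scene:.0f} MB")  # comfortably within 1.5 GB (stock 580) or 2 GB (660 Ti)
```

Textures dominate quickly at these sizes, which is why the extra memory on a mid-range card can matter more than raw compute.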
__________________
Cinema 4D R17 Studio, VRAYforC4D, Z Brush, Mudbox, Photoshop CS6.

Last edited by Decade : 02 February 2013 at 09:34 PM.
 
Old 02 February 2013   #39
Quote:
Originally Posted by Decade

When choosing cards for Octane, you should bear in mind that the entire scene & textures has to fit into the card's memory, so unlike for gaming, it can be worth getting a mid-range card with a bit more memory than standard.


Isn't this a GeForce limitation? As I understand it, a Quadro card driving one or more Teslas counts all the RAM, not just the RAM of one card. If so, this would be another reason why Octane users may want to start saving up for Quadros and Teslas instead.
 
Old 02 February 2013   #40
Quote:
Originally Posted by PhoenixCG
Isn't this a GeForce limitation? As I understand it, a Quadro card driving one or more Teslas counts all the RAM, not just the RAM of one card. If so, this would be another reason why Octane users may want to start saving up for Quadros and Teslas instead.


Sorry, I meant that in comparison to a non-GPU renderer, where everything fits into system RAM, which is cheap and plentiful nowadays.
I haven't heard about Quadro and Tesla cards being able to pool the GPU RAM of multiple cards, but maybe. Do you have any more info on that?
__________________
Cinema 4D R17 Studio, VRAYforC4D, Z Brush, Mudbox, Photoshop CS6.
 
Old 02 February 2013   #41
No, I don't; I just thought that was a key benefit of the "Maximus" configuration.

Last edited by PhoenixCG : 02 February 2013 at 11:06 AM.
 
Old 02 February 2013   #43
Nvidia Titan. Zounds.

Titan specs
__________________
2014 Reel
Company website
Behance Portfolio
HyperactiveVR
I reject your reality and substitute my own
 
Old 02 February 2013   #44
Awesome, isn't it? Looking forward to compute benchmarks; here's a more in-depth article that talks specifically about its compute capability.

Reviews in ~36hrs apparently

cheers
brasc
__________________
brasco on vimeo
 
Old 02 February 2013   #45
But can it be crowbarred into a Mac Pro?
__________________
2014 Reel
Company website
Behance Portfolio
HyperactiveVR
I reject your reality and substitute my own
 