CGTalk > Technical > Technical and Hardware
Old 08-18-2013, 02:35 AM   #1
kev030981
New Member
portfolio
Kevin
Bristol, United Kingdom
 
Join Date: Nov 2012
Posts: 6
Graphics Card technical specs

Hi - I'm about to upgrade my system, and would love some help with a few graphics card questions.

My current system is an i7-2600K with 16GB RAM and a GTX 570. I run a 27" display at 2560 x 1440. I use the system for a mix of both gaming and architectural visualisation (3ds Max, Vray, Revit, Adobe Creative Suite).
From advice, I had decided to go with an upgrade to a GTX 780, which looks good for my needs. It now seems I'm to get a bit more for a job I recently finished, so I'm considering upgrading (my upgrade) to a Titan. Rather than ask if it's better than a 780 (I've read countless threads on this), I was wondering if someone could help me understand, as simply as possible, what each of the specs means in terms of real-life use?

What I mean is:
* What will increase viewport speed on complex scenes (3ds Max / Revit) - memory? CUDA cores? etc.
* What will increase rendering speeds?
* What will allow me to run more complex (high-poly) scenes - I suppose this is an extension of viewport speed? (I think?)
* What does this double-precision floating point thing actually mean for the above apps (if anything)? It seems to be the main difference (other than RAM) between the two cards.

I'm quite technical - but am never 100% sure what each of the specs means when I'm comparing the cards. There doesn't seem to be a clear explanation from my searching on the web. If anyone has any thoughts (or a link) I'd be most grateful!

Then with all that said ^ - does anyone think I'd be wasting the money on the Titan? Or will it make a real difference? If so, I'd gladly spend the extra on it, but I don't want to waste the money!
Many thanks!

PS: I've looked into Quadro cards, but want to stick with GTX due to gaming. I've also been dreaming (for at least a few months) about putting a Quadro in and running a dual boot (although this is later - much later). Any thoughts on this - superfluous with the Titan? Which Quadro would make a difference? The K4000, I would assume?
 
Old 08-30-2013, 08:54 PM   #2
Lanre
New Member
portfolio
USA
 
Join Date: Aug 2013
Posts: 11
Hey there.

For viewport speeds in Max, it depends on both memory and cores. For very large scenes with 2 million+ polys, you'll need a lot of memory, especially with Nitrous accelerated viewports. But to play back calculation-intensive animations, like Phoenix FD simulations, you'll need more cores.

For rendering speeds, it depends on the renderer. If you're using a GPU-accelerated renderer like Vray RT GPU, then we can talk about upgrading your GPU. And keep in mind that GPU rendering is mostly based on the OpenCL API, at which AMD excels, way more than Nvidia. But for normal rendering like mental ray or Vray, it's all on your CPU, and the i7-2600K, i7-2700K, i7-3770K, i7-3960K, i7-3970K, and i5-3570K, just to name the top of the spec, pack all the punch you'll ever need, with just a little overclocking. My i7-3770K renders at lightspeed, at 4.8GHz, ON AIR. No stock coolers, no water, no liquid nitrogen.

The answer to your third question is what I stated first.

Double precision floating point performance, measured in GFLOPS, is typically the amount of floating point operations (numbers after the decimal point) a GPU can handle in 1 second. GPUs are based on parallelism, and so they naturally have more GFLOPS than CPUs (CPUs are optimized for integer calculations, but come with a subsidiary floating point unit (FPU) or vector unit (VU) or both, as seen in the PlayStation 2's primary processor, the great Emotion Engine).

Based on all this, I will advise dual AMD Radeon HD 7970s in CrossFireX. AMD is superior to Nvidia in OpenCL, the API on which GPGPU is based (the means by which you can render with your GPU). It also seems superior to the GTX 780 in DX11 acceleration, which translates into your Nitrous viewport frame rate. I hope you understand all I've said.

Cheers ^.^
 
Old 09-03-2013, 04:46 PM   #3
kev030981
New Member
portfolio
Kevin
Bristol, United Kingdom
 
Join Date: Nov 2012
Posts: 6
Thanks Lanre, that actually answers my question pretty much perfectly. I've posted this same question in a few places and you're the first to be able to directly relate a specification to a real-world application (or function of an application - not sure how to phrase it!)

I've realised that the Vray I'm using is the CPU renderer, and I'm relatively happy with render speeds (do you know if RT is regarded as superior - or is it just that some people prefer GPU rendering - or perhaps the real-time element?).

Therefore I guess what I'm most interested in is the viewport speed (thanks to you I now understand that!) - are there any benchmark sites showing card comparisons specific to this, or is it as simple as looking at the DX11 performance?

Where I've come down is what most people recommend - have a gaming card for gaming and a professional card for working. I'm going to stick with the 780 despite agreeing the dual 7970s are faster - I'm just happier with one card for gaming, y'see.

The next future upgrade is going to be either a new Quadro K4000, or a second-hand K5000 if possible, and I'll run a dual-boot system with the different drivers - one for working, one for gaming. It might be pricey, but I think it's the only way to get the best of both worlds!

Thanks again for your kind advice!
 
Old 09-04-2013, 05:49 PM   #4
Lanre
New Member
portfolio
USA
 
Join Date: Aug 2013
Posts: 11
Hey there.

I would advise against benchmarking sites, unless they have benchmarks for 3ds Max itself. They may benchmark with other software, mostly games, but the games may be dependent on a performance feature that is specific to only one GPU generation or manufacturer, like Nvidia PhysX. Even though a card outperforms the competition because of these features, the target software, which is 3ds Max, may not use the feature. This is why benchmarks are often misleading.

But for viewport speed, as long as you don't have a 3D model of planet Earth, of six zillion polys, with every last house in the right place =D, the Nvidia GTX 780 or Radeon HD 7970 is all you need. Even more than enough.

As for Vray Adv and Vray RT, RT is far inferior to the CPU renderer. RT can't even render PhoenixFD simulations, Chaos Group's own fluid simulator, not to mention FumeFX. I emailed them on this issue, and they said they will try to fix it in later revisions. While RT may render faster, especially on multi-GPU systems, it is notorious for not being able to handle large scenes. Not to mention the huge number of bugs in it. RT is mainly beneficial when doing ActiveShade; that's its main advantage. But for production renders, I advise against it, except for brief test renders or animations.

Stick to the GTX 780 for now. But when you have enough money, I don't advise buying another GPU, because the GTX 780 is almost on par with the Titan, and that's all the power you'll need. Instead, use the money to buy a server computer with a few stacks of Intel Xeons - Sandy Bridge-E, or Ivy Bridge-E if it's out by that time. CGI rendering is still largely CPU-based, which is likely to remain the case for a few decades.

Cheers
 
Old 09-04-2013, 07:43 PM   #5
imashination
Expert
 
imashination's Avatar
portfolio
Matthew ONeill
3D Fluff
United Kingdom
 
Join Date: May 2002
Posts: 9,073
If I might make a different suggestion... the 570 is still a very good card; it's really quite unlikely that this will be the bottleneck in terms of editor playback speed. Chances are it's much more of a CPU speed issue than the graphics card; if you upgraded to either the 780 or the Titan, you'd likely be a bit upset that there's very little difference.
__________________
Matthew O'Neill
www.3dfluff.com
 
Old 09-04-2013, 11:39 PM   #6
ThE_JacO
MOBerator-X
 
ThE_JacO's Avatar
CGSociety Member
portfolio
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,955
Quote:
Originally Posted by Lanre
For rendering speeds, it depends on the renderer. If you're using a GPU-accelerated renderer like Vray RT GPU, then we can talk about upgrading your GPU. And keep in mind that GPU rendering is mostly based on the OpenCL API, at which AMD excels, way more than Nvidia. But for normal rendering like mental ray or Vray, it's all on your CPU, and the i7-2600K, i7-2700K, i7-3770K, i7-3960K, i7-3970K, and i5-3570K, just to name the top of the spec, pack all the punch you'll ever need, with just a little overclocking. My i7-3770K renders at lightspeed, at 4.8GHz, ON AIR. No stock coolers, no water, no liquid nitrogen.

That's fundamentally incorrect.
There are currently more available CUDA or dual OCL + CUDA implementations than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn't putting much stock in OCL and optimizing accordingly (unsurprisingly), it can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only for now.
That precludes you from a lot of products such as Octane (CUDA only), RedShift (CUDA only) and some pretty decent CUDA extensions for other software.
At this point in time CUDA is quite simply more adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable. Not unless you are putting together a folding rig, or a bitcoin miner, and so on. In DCC, OCL is largely not of the relevance we'd all hope for yet.

Quote:
Double precision floating point performance, measured in GFLOPS, is typically the amount of floating point operations (numbers after the decimal point) a GPU can handle in 1 second.

Incorrect at best.
FLOPS are floating point operations per second, NOT double precision ops. Big difference. DP FLOPS aren't an atomic operation; you don't measure by them.
Double precision involves a lot more to be taken into consideration, not least that many video cards are artificially crippled in their DP for market phasing (i.e. GTX 6 and 7 series cards, but not the 5 series, Quadros or Titans).
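[Editor's aside: to make the single vs. double precision distinction concrete, here is a quick Python illustration of the two number formats themselves - nothing GPU-specific, just the IEEE 754 binary32 vs. binary64 representations that SP and DP throughput figures refer to.]

```python
import struct

# 2**24 + 1 = 16777217 is the first integer a single-precision float
# cannot represent exactly: its significand is 24 bits, vs 53 bits
# for double precision.
n = 2**24 + 1

# Round-trip n through each binary format.
as_single = struct.unpack('<f', struct.pack('<f', n))[0]
as_double = struct.unpack('<d', struct.pack('<d', n))[0]

print(int(as_single))  # 16777216 -- rounded, precision lost
print(int(as_double))  # 16777217 -- represented exactly
```

Single-precision and double-precision throughput are therefore separate specs on a card's datasheet; the headline GFLOPS number on consumer cards is the single-precision one.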

Quote:
GPUs are based on parallelism, and so they naturally have more GFLOPS than CPUs (CPUs are optimized for integer calculations, but come with a subsidiary floating point unit (FPU) or vector unit (VU) or both, as seen in the PlayStation 2's primary processor, the great Emotion Engine).

I'm not entirely sure where you are going with this.
It's a 386-to-Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors will also come into play (i.e. whether the compiler was set to, or even able, to take advantage of some features).

Quote:
Based on all this, I will advise dual AMD Radeon HD 7970s in CrossFireX. AMD is superior to Nvidia in OpenCL, the API on which GPGPU is based (the means by which you can render with your GPU). It also seems superior to the GTX 780 in DX11 acceleration, which translates into your Nitrous viewport frame rate. I hope you understand all I've said.

You keep mentioning OCL as if it's the premiere, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such an overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It's easier to find something CUDA-only than it is to find something OCL-only among products of any relevance.

Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that, the 79xx has considerable synchronicity issues when it really gets taxed (i.e. offline rendering on a GPU).

Quote:
Originally Posted by kev030981
PS: I've looked into Quadro cards, but want to stick with GTX due to gaming. I've also been dreaming (for at least a few months) about putting a Quadro in and running a dual boot (although this is later - much later). Any thoughts on this - superfluous with the Titan? Which Quadro would make a difference? The K4000, I would assume?

The K4000 is daylight robbery. Unless you need Quadro-specific features (12-bit colour buffers, proper stereo support in Nuke, etc.), in which case the K5000 or the K6000 are where it's at, a Titan or a 7xx, or a 580 if on a budget, tend to be better bang for buck.
I have a Titan and don't regret it, but I do a fair chunk of CUDA work. Unless you need the uncrippled DP (which most software doesn't require) or the abundant RAM, the 770 and 780 are better value.
On a budget, especially if you need DP on a budget (unlikely), the 5xx remain strong cards.
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes

Last edited by ThE_JacO : 09-04-2013 at 11:50 PM.
 
Old 09-06-2013, 12:55 PM   #7
Lanre
New Member
portfolio
USA
 
Join Date: Aug 2013
Posts: 11
Quote:
That's fundamentally incorrect.
There are currently more available CUDA or dual OCL + CUDA implementations than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn't putting much stock in OCL and optimizing accordingly (unsurprisingly), it can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only for now.
That precludes you from a lot of products such as Octane (CUDA only), RedShift (CUDA only) and some pretty decent CUDA extensions for other software.
At this point in time CUDA is quite simply more adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable. Not unless you are putting together a folding rig, or a bitcoin miner, and so on. In DCC, OCL is largely not of the relevance we'd all hope for yet.


I guess you're correct in saying that CUDA is used a lot more in software, but the most used renderers are not limited to CUDA. AMD, even though limited to OpenCL, excels more than Nvidia at OpenCL. And benchmarks have shown that Nvidia's CUDA speeds are almost, if not exactly, equivalent to their OpenCL speeds. Which means I can conclude that if AMD were to support CUDA, Nvidia's gonna get their ass whipped. It's a pity that all these renderers only support CUDA; I guess its SDK is more user-friendly. But as for Vray RT, where OpenCL and CUDA are on par, AMD beats Nvidia, WITH THE EXCEPTION OF THE TITAN.

Quote:
Incorrect at best.
FLOPS are floating point operations per second, NOT double precision ops. Big difference. DP FLOPS aren't an atomic operation; you don't measure by them.
Double precision involves a lot more to be taken into consideration, not least that many video cards are artificially crippled in their DP for market phasing (i.e. GTX 6 and 7 series cards, but not the 5 series, Quadros or Titans).


I wasn't going into depth; that's just more or less an introductory explanation.

Quote:
I'm not entirely sure where you are going with this.
It's a 386 to Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors will also come into play (IE: whether the compiler was set to, or even able, to take advantage of some features).


But I cannot be proved wrong. Indeed, CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS, while the Radeon HD 7970 does about 4.2 TFLOPS?
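[Editor's aside: the figures quoted here are roughly what the standard back-of-envelope formula predicts: peak FLOPS = execution units x clock x FLOPs issued per unit per cycle. A sketch follows; the per-cycle and clock figures are the commonly quoted ones for these parts, not taken from the thread, and real workloads reach only a fraction of theoretical peak.]

```python
def peak_gflops(units, clock_ghz, flops_per_cycle):
    """Theoretical peak throughput = units x clock (GHz) x FLOPs per unit per cycle."""
    return units * clock_ghz * flops_per_cycle

# i7-2600K: 4 cores at ~3.4 GHz; AVX can issue up to 16 single-precision
# FLOPs per core per cycle (one 256-bit add plus one 256-bit multiply).
cpu = peak_gflops(4, 3.4, 16)       # ~217.6 GFLOPS single precision

# HD 7970 GHz Edition: 2048 stream processors at ~1.05 GHz,
# 2 FLOPs per ALU per cycle (fused multiply-add).
gpu = peak_gflops(2048, 1.05, 2)    # ~4300 GFLOPS, i.e. ~4.3 TFLOPS single precision

print(f"CPU peak: {cpu:.1f} GFLOPS")
print(f"GPU peak: {gpu/1000:.2f} TFLOPS")
print(f"ratio: ~{gpu/cpu:.0f}x")
```

With double precision (8 FLOPs per core per cycle on Sandy Bridge) the CPU figure halves to roughly 109 GFLOPS, the ballpark of the 125 GFLOPS quoted above; either way, the GPU's wide parallelism gives it a huge edge in raw peak throughput, which is the point being made here.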

Quote:
You keep mentioning OCL as if it's the premiere, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such an overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It's easier to find something CUDA-only than it is to find something OCL-only among products of any relevance.

Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that, the 79xx has considerable synchronicity issues when it really gets taxed (i.e. offline rendering on a GPU).


You're wrong there; Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.

I still do not think a GPU upgrade is necessary. Invest in Xeons.
 
Old 09-06-2013, 02:43 PM   #8
kev030981
New Member
portfolio
Kevin
Bristol, United Kingdom
 
Join Date: Nov 2012
Posts: 6
Thanks all for the continued help! I don't think I've ever considered so many options for a hardware purchase - attempting to bridge the gap between gaming and workstation is a first for me (normally I've just got by with gaming tech, but now that I've pretty much qualified as an architect, it's becoming a bit more necessary to have a bit of professional oomph).

I did eventually get the 780... It almost became a K4000 for a moment, but I decided I'd like to go with a gaming card for now and see how my visualisation freelance work takes off. Whilst I hear the advice here about my 570 being OK, it isn't coping well (viewport fps) with the scenes from my student projects, and as I become more proficient it's just going to be a more annoying problem. I've also realised that rendering isn't really an issue for me, as my CPU seems to belt through Vray scenes with little or no problem, so OK there for now. It turned out my main concern was limited to the viewport, which I think the 780 will help with for now!

Cheers for all the help - you've all been great at steering me through a difficult purchase!

-Kev
 
Old 09-06-2013, 03:54 PM   #9
vlad
Expert
 
vlad's Avatar
 
Join Date: Aug 2002
Posts: 1,195
Quote:
Originally Posted by Lanre
...
You're wrong there; Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.

I still do not think a GPU upgrade is necessary. Invest in Xeons.


Absolutely not. Max and Revit (and all other DCC apps, AFAIK) just don't support it for viewport operations. Only GPU renderers will, but even then SLI is neither required nor beneficial.

Oh, and I wouldn't "invest" in Xeons either...
 
Old 09-06-2013, 11:20 PM   #10
ThE_JacO
MOBerator-X
 
ThE_JacO's Avatar
CGSociety Member
portfolio
Raffaele Fragapane
That Creature Dude
Animal Logic
Sydney, Australia
 
Join Date: Jul 2002
Posts: 10,955
Quote:
Originally Posted by Lanre
I guess you're correct in saying that CUDA is used a lot more in software, but the most used renderers are not limited to CUDA. AMD, even though limited to OpenCL, excels more than Nvidia at OpenCL. And benchmarks have shown that Nvidia's CUDA speeds are almost, if not exactly, equivalent to their OpenCL speeds. Which means I can conclude that if AMD were to support CUDA, Nvidia's gonna get their ass whipped. It's a pity that all these renderers only support CUDA; I guess its SDK is more user-friendly. But as for Vray RT, where OpenCL and CUDA are on par, AMD beats Nvidia, WITH THE EXCEPTION OF THE TITAN.

AMD can't support CUDA; it's nVIDIA's property.
Besides, what "most used renderers" are not limited to CUDA? VRay RT is the only relevant one I can think of with dual support.
Octane, RedShift and iRay are all CUDA-only.



Quote:
I wasn't going into depth; that's just more or less an introductory explanation.

Sorry, but it was a WRONG explanation, not an introductory one.



Quote:
But I cannot be proved wrong. Indeed, CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS, while the Radeon HD 7970 does about 4.2 TFLOPS?

Huh?
I wasn't saying the FLOPS figures are not like that; I was saying your explanation of CPUs was poor at best, and overly dated for sure.


Quote:
You're wrong there; Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.

Neither XFire nor SLI accelerates any major DCC app viewport. That's known, and confirmed by employees of the various software houses.
People on these forums ask these questions because they are about to spend their hard-earned money; unless you know something for certain, hold the advice, especially when it's bad advice.

Quote:
I still do not think a GPU upgrade is necessary. Invest in Xeons.

And why should one "invest" in Xeons? It's got absolutely nothing to do with the topic.
Oh well...
__________________
"As an online CG discussion grows longer, the probability of the topic being shifted to subsidies approaches 1"

Free Maya Nodes
 
Old 09-07-2013, 01:10 PM   #11
imashination
Expert
 
imashination's Avatar
portfolio
Matthew ONeill
3D Fluff
United Kingdom
 
Join Date: May 2002
Posts: 9,073
Quote:
You're wrong there; Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.


No, they don't.
__________________
Matthew O'Neill
www.3dfluff.com
 
Old 09-07-2013, 03:53 PM   #12
Lanre
New Member
portfolio
USA
 
Join Date: Aug 2013
Posts: 11
I said he should invest in Xeons because it would help his rendering, as it is the final render that matters; nobody except himself cares about the viewport. And the Xeons will really benefit his rendering, far more than CUDA or OpenCL. CPU rendering is still dominant in VFX for films, so forget about the Quadros, Keplers, and Tahitis. Get a good server with a couple of Sandy Bridge-E Xeons and start yourself a render farm/supercomputer.

But I read on a forum that SLI/Crossfire benefit Max's viewport.
Here's the link
 
Old 09-07-2013, 04:19 PM   #13
vlad
Expert
 
vlad's Avatar
 
Join Date: Aug 2002
Posts: 1,195
Quote:
Originally Posted by Lanre
...
But I read on a forum that SLI/Crossfire benefit Max's viewport.
Here's the link


OK, so you have a single post on a viz forum from a few years back vs. the software developers themselves stating SLI is not supported by their software...

Quote:
Originally Posted by Lanre
nobody except himself cares about the viewport


I guess you haven't been in CG for very long.

Last edited by vlad : 09-07-2013 at 04:26 PM.
 
Old 09-07-2013, 07:25 PM   #14
Lanre
New Member
portfolio
USA
 
Join Date: Aug 2013
Posts: 11
Actually, I haven't been in CG for long because I haven't been in life for long. I'm just 14; I have school to keep up with, and I barely have time to work, but when I do, the results come out pretty good. I've been working with Max since 2009, but I'm far from expertise. I'm at the limit of what my time allows me to do. You can't blame me.

I didn't check the date on that, but since it's the only post I saw, I thought so. That's why I'm saving up to get another 7970 to make Crossfire.
 
Old 09-07-2013, 08:15 PM   #15
vlad
Expert
 
vlad's Avatar
 
Join Date: Aug 2002
Posts: 1,195
Quote:
Originally Posted by Lanre
...That's why I'm saving up to get another 7970 to make Crossfire.

 