Graphics Card technical specs


#4

Hey there.

I would advise against benchmarking sites, unless they have benchmarks for 3ds Max itself. They usually benchmark with other software, mostly games, but a game may depend on a performance feature that is specific to one GPU generation or manufacturer, like Nvidia PhysX. A card may outperform the competition because of those features, yet the target software, 3ds Max, may not use them at all. This is why benchmarks are often misleading.

But for viewport speed, as long as you don't have a 3D model of planet Earth with 6 zillion polys and every last house in the right place =D, an Nvidia GTX 780 or a Radeon HD 7970 is all you need. More than enough, even.

As for V-Ray Adv and V-Ray RT, RT is far inferior to the CPU renderer. RT can't even render PhoenixFD simulations, Chaos Group's own fluid simulator, let alone FumeFX. I emailed them about this, and they said they will try to fix it in later revisions. While RT may render faster, especially on multi-GPU systems, it is notorious for not being able to handle large scenes, not to mention the huge number of bugs. RT is mainly beneficial for ActiveShade; that's its main advantage. For production renders I advise against it, except for brief test renders or animations.

Stick with the GTX 780 for now. When you have more money, I don't advise buying another GPU, because the GTX 780 is almost on par with the Titan and that's all the power you'll need. Instead, put the money into a server machine with a few Intel Xeons, Sandy Bridge-E or Ivy Bridge-E if it's out by then. CGI rendering is still largely CPU based, and that is likely to remain the case for a few decades.

Cheers


#5

If I might make a different suggestion… the 570 is still a very good card; it's really quite unlikely to be the bottleneck in terms of editor playback speed. Chances are it's much more of a CPU speed issue than the graphics card. If you upgraded to either the 780 or the Titan, you'd likely be a bit upset that there's very little difference.


#6
That's fundamentally incorrect.
There are currently more CUDA-only or dual OCL + CUDA implementations available than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn't putting much stock in OCL and optimizing accordingly (unsurprisingly), it can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only for now.
That cuts you off from a lot of products, such as Octane (CUDA only), RedShift (CUDA only) and some pretty decent CUDA extensions for other software.

At this point in time CUDA is quite simply more widely adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable, not unless you are putting together a folding rig or a bitcoin miner. In DCC, OCL simply doesn't have the relevance we'd all hope for yet.

Double precision floating point performance, measured in GFLOPS, is typically the amount of floating point operations (numbers after the decimal point) a GPU can handle in 1 second.

Incorrect at best.
FLOPS are floating point operations per second, NOT double-precision ops. Big difference. DP FLOPS aren't an atomic operation; you don't measure by them.
Double precision involves a lot more to be taken into consideration, not least that many video cards are artificially crippled in their DP for market segmentation (i.e. GTX 600 and 700 cards, but not the 500s, Quadros or Titans).
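
To put rough numbers on that crippling, here is a hedged back-of-the-envelope sketch. The figures are approximate published specs, using the usual peak estimate of shaders x clock x 2 ops per cycle, which no real workload ever reaches:

```python
# Hedged sketch: the same peak-FLOPS formula gives very different DP results
# depending on how a card's DP rate is capped. Specs below are approximate.

def peak_tflops(shaders, clock_ghz, ops_per_cycle=2):
    # peak = shader count x clock x ops per cycle (an FMA counts as 2 ops)
    return shaders * clock_ghz * ops_per_cycle / 1000.0

titan_sp = peak_tflops(2688, 0.876)   # GTX Titan, single precision: ~4.7 TFLOPS
titan_dp = titan_sp / 3               # Titan DP runs at ~1/3 of SP: ~1.5 TFLOPS

gtx780_sp = peak_tflops(2304, 0.9)    # GTX 780, single precision: ~4.1 TFLOPS
gtx780_dp = gtx780_sp / 24            # GeForce 600/700 DP capped at ~1/24 of SP: ~0.17 TFLOPS

print(titan_sp, titan_dp, gtx780_sp, gtx780_dp)
```

That ratio, not anything about how FLOPS are "measured", is why the Titan and the Quadros sit apart from the rest of the GeForce line for DP work.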

GPUs are based on parallelism and so they naturally have more GFLOPS than CPUs (CPUs are optimized for integer calculations, but come with a subsidiary floating point unit (FPU) or vector unit (VU) or both, as seen in the PlayStation 2's primary processor, the great Emotion Engine)

I'm not entirely sure where you are going with this.
It's a 386-to-Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors also come into play (i.e. whether the compiler was set to, or even able to, take advantage of certain features).

Based on all this, I would advise dual AMD Radeon HD 7970s in CrossFireX. AMD is superior to Nvidia in OpenCL, the API on which GPGPU is based (the means by which you can render with your GPU). It also seems superior to the GTX 780 in DX11 acceleration, which translates into your Nitrous viewport frame rate. I hope you understand all I've said,

You keep mentioning OCL as if it's the premier, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It's easier to find something CUDA-only than something OCL-only among products of any relevance.

Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that the 79xx has considerable synchronization issues when it really gets taxed (i.e. offline rendering on a GPU).
The K4000 is daylight robbery. Unless you need Quadro-specific features (12-bit colour buffers, proper stereo support in Nuke, etc.), in which case the K5000 or K6000 is where it's at, a Titan or a 7xx, or a 580 if you're on a budget, tends to be better bang for the buck.
I have a Titan and don't regret it, but I do a fair chunk of CUDA work. Unless you need the uncrippled DP (which most software doesn't require) or the abundant RAM, the 770 and 780 are better value.
On a budget, especially if you need DP on a budget (unlikely), the 5xx cards remain strong.

#7

That's fundamentally incorrect.
There are currently more CUDA-only or dual OCL + CUDA implementations available than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn't putting much stock in OCL and optimizing accordingly (unsurprisingly), it can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only for now.
That cuts you off from a lot of products, such as Octane (CUDA only), RedShift (CUDA only) and some pretty decent CUDA extensions for other software.
At this point in time CUDA is quite simply more widely adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable, not unless you are putting together a folding rig or a bitcoin miner. In DCC, OCL simply doesn't have the relevance we'd all hope for yet.

I guess you're correct in saying that CUDA is used a lot more in software, but the most used renderers are not limited to CUDA. AMD, even though limited to OpenCL, excels at OpenCL more than Nvidia does. And benchmarks have shown that Nvidia's CUDA speeds are almost, if not exactly, equivalent to their OpenCL speeds, so I conclude that if AMD were able to support CUDA, Nvidia would get their ass whipped. It's a pity that all these renderers only support CUDA; I guess its SDK is more user-friendly. But as for V-Ray RT, where OpenCL and CUDA are on par, AMD beats Nvidia, WITH THE EXCEPTION OF THE TITAN.

Incorrect at best.
FLOPS are floating point operations per second, NOT double-precision ops. Big difference. DP FLOPS aren't an atomic operation; you don't measure by them.
Double precision involves a lot more to be taken into consideration, not least that many video cards are artificially crippled in their DP for market segmentation (i.e. GTX 600 and 700 cards, but not the 500s, Quadros or Titans).

I wasn't going into depth; that was just more or less an introductory explanation.

I'm not entirely sure where you are going with this.
It's a 386-to-Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors also come into play (i.e. whether the compiler was set to, or even able to, take advantage of certain features).

But I cannot be proved wrong. Indeed CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS while the Radeon HD 7970 does about 4.2 TFLOPS?

You keep mentioning OCL as if it's the premier, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It's easier to find something CUDA-only than something OCL-only among products of any relevance.

Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that the 79xx has considerable synchronization issues when it really gets taxed (i.e. offline rendering on a GPU).

You're wrong there; CrossFire and SLI increase viewport speeds, most notably when handling very large scenes.

I still do not think a GPU upgrade is necessary. Invest in Xeons.


#8

Thanks all for the continued help! I don't think I've ever considered so many options for a hardware purchase. Attempting to bridge the gap between gaming and workstation is a first for me (normally I've just got by with gaming tech, but now that I've pretty much qualified as an architect, it's becoming a bit more necessary to have some professional oomph).

I did eventually get the 780… It almost became a K4000 for a moment, but I decided I'd like to go with a gaming card for now and see how my freelance visualisation work takes off. Whilst I hear the advice here about my 570 being OK, it isn't coping well (viewport fps) with the scenes from my student projects, and as I become more proficient it's just going to become a more annoying problem. I've also realised that rendering isn't really an issue for me, as my CPU seems to belt through V-Ray scenes with little or no problem, so I'm OK there for now. It turned out my main concern was really the viewport, which I think the 780 will help with for now!

Cheers for all the help - you've all been great at steering me through a difficult purchase! :slight_smile:

-Kev


#9

Absolutely not. Max and Revit (and all other DCC apps, AFAIK) just don't support it for viewport operations. Only GPU renderers will use a second card, but even then SLI is neither required nor beneficial.

Oh, and I wouldn't "invest" in Xeons either…


#10

AMD can't support CUDA; it's nVIDIA's property.
Besides, which "most used renderers" are not limited to CUDA? VRay RT is the only relevant one I can think of that supports both.
Octane, RedShift and iRay are all CUDA-only.

I wasn't going into depth; that was just more or less an introductory explanation.

Sorry, but it was a WRONG explanation, not an introductory one.

But I cannot be proved wrong. Indeed CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS while the Radeon HD 7970 does about 4.2 TFLOPS?

Huh?
I wasn't saying the FLOPS figures aren't like that; I was saying your explanation of CPUs was poor at best, and certainly outdated.
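
For what it's worth, both of those headline figures are just peak theoretical throughput, roughly units x clock x FLOPs per cycle, which is exactly the parallelism point and nothing more. A rough sketch with approximate specs (and the two numbers probably aren't even the same precision; the CPU figure looks like double precision while the 7970 figure is single precision):

```python
# Rough origin of the quoted headline numbers: peak theoretical throughput only.
# Specs are approximate; sustained, real-world throughput is far lower on both.

def peak_gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

# i7-2600K: 4 cores at ~3.8 GHz turbo, ~8 double-precision FLOPs/cycle/core with AVX
cpu = peak_gflops(4, 3.8, 8)        # ~122 GFLOPS, in the ballpark of the quoted ~125

# HD 7970: 2048 stream processors at ~1 GHz, 2 single-precision FLOPs/cycle (FMA)
gpu = peak_gflops(2048, 1.0, 2)     # ~4096 GFLOPS, i.e. ~4 TFLOPS

print(cpu, gpu)
```

None of which says anything about viewport speed, which is the question that actually started this thread.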

You're wrong there; CrossFire and SLI increase viewport speeds, most notably when handling very large scenes.

Neither XFire nor SLI accelerates any major DCC app's viewport. That's well known, and confirmed by employees of the various software houses.
People in these forums ask these questions because they are about to spend their hard-earned money; unless you know something for certain, hold the advice, especially when it's bad advice.

I still do not think a GPU upgrade is necessary. Invest in Xeons.

And why should one “invest” in Xeons? It’s got absolutely nothing to do with the topic.
Oh well…


#11

You're wrong there; CrossFire and SLI increase viewport speeds, most notably when handling very large scenes.

No, they don’t.


#12

I said he should invest in Xeons because it would help his rendering; it's the final render that matters, and nobody except him cares about the viewport. The Xeons will really benefit his rendering, far more than CUDA or OpenCL. CPU rendering is still dominant in VFX for film, so forget about the Quadros, Keplers and Tahitis. Get a good server with a couple of Sandy Bridge-EX chips and start yourself a render farm/supercomputer.

But I read on a forum that SLI/CrossFire benefits Max's viewport.
Here's the link


#13

OK, so you have a single post on a viz forum from a few years back versus the software developers themselves stating SLI is not supported by their software…

I guess you haven't been in CG for very long :wink:


#14

Actually, I haven't been in CG for long because I haven't been alive for long. I'm just 14. I have school to keep up with and barely have time to work, but when I do, the results come out pretty good. I've been working with Max since 2009, but I'm far from an expert. I'm at the limit of what my time allows me to do. You can't blame me.

I didn't check the date on that, but since it was the only post I saw, I assumed it was right. That's why I'm saving up to get a second 7970 for CrossFire.


#15

:banghead:


#16

Why hit your head against the wall?


#17

Last I heard Chaos Group was planning on dropping OCL support in V-Ray. So I guess that leaves what, Indigo?


#18

I can't stand watching AMD being segregated like this. It simply isn't fair. Wicked Nvidia and renderer developers.


#19

Aiming a gun squarely at your own feet and then pulling the trigger isn’t ‘being segregated’…


#20

It’s not segregation at all.
nVIDIA got there first, provided GPGPU-focused resources for farming (Tesla) early on, and is therefore ahead of the curve. As a brand it's much more widely adopted in the DCC market, and CUDA is simply a more mature, better served and better documented platform than OCL for many applications, not to mention that nVIDIA has broad technology support, a clearer and more responsive driver roadmap and bugfix rate, and has had Linux and scientific community support for much longer than ATI did, both pre- and post-acquisition.

It's simply a much stronger candidate and an easier platform to work with. AMD is way behind in all those regards, and while OCL is an open standard, which is good, that's far from enough to push people into adoption, especially when the CPU partners don't care much for it.
We'll see whether HSA blows anybody's socks off, which won't be for a few years anyway, and whether they ever get anywhere close to the farm market, but for now you can't blame developers or call foul play if they decide not to support OCL yet. They have practically no incentive to.

So, what happened to all these amazing OCL products you said dominate the market, the ones you'd advise buying an OCL-focused card for? Thought of any yet? :slight_smile:


#21

Let’s not rip his heart out just yet, folks.

Lanre, you're young and while your intentions are good, butting heads with long-standing and very experienced techs and CG artists here will not win you any points, son. Relax. Do your own thing, but listen to what people here are saying. It will help you; nobody is telling you these things to hurt you. But if you come in here and misinform people about topics you're merely speculating on, you're not going to get good reactions. The original poster here was asking for advice on how to spend his money - and you gave him bad advice. Don't do that. Better to know nothing than to know something wrong.

If you blow your own money on a dual-GPU Radeon or CrossFire setup only to find, just as everyone else in the universe (including the people who developed the software you're using in the first place) has told you, that it has no effect on the viewport itself, you'll feel pretty bad! And we don't want you to feel bad.

Relax. Absorb all this information, and it’s entirely okay to be wrong. To be wrong is to be a scientist! Being right is easy. :slight_smile:


#22

I'd just like to add that I did eventually get the 780 (MSI Twin Frozr for £499). I have been blown away by its gaming performance, but as Imashination suggested early on… it hasn't actually improved my 3ds Max / Revit / SketchUp / Rhino viewport experience much. My scenes take roughly the same time to load (I'm going to see how an SSD remedies this in the coming weeks) and about the same time to respond to a command (say, orbit)… however, once orbiting, the card is much smoother and runs at what I would estimate to be a fair improvement in fps (sadly I didn't Fraps my old card, so I can't compare, sorry). In conclusion: I am glad I didn't spend extra on a Titan that probably wouldn't have helped much more.

What I am doing next is trying to improve my CPU by overclocking (it's running at a stock speed of 3.4 GHz, which I hope to get up to 4.5 or higher), and I've bought a fancy water cooler (H100i) to help with this (I know there are cheaper alternatives, but my system was running hot and this one had good reviews).

Finally… in the hopefully not too distant future, I hope to get a Quadro card. I am hoping this will give a significant increase in viewport fps, and I will probably (nearer the time) start a new post asking for advice on how best to do this (and which one to go for). Thanks all very much for the help, and it's been interesting reading the follow-on conversation :thumbsup:


#23

That has nothing to do with the video card (if you assumed it would), and it might or might not be affected by a new drive.
If the long load time is due to data transfer (large files), then it will improve considerably. But long load times can just as commonly be down to a complex graph or a preflight check, which are CPU bound.

and about the same time to respond to a command (say, orbit)…

Also, usually, not video card dependent.

however, once orbiting, the card is much smoother and runs at what I would estimate to be a fair improvement in fps (sadly I didn't Fraps my old card, so I can't compare, sorry).

That's the only thing you mentioned that is somewhat video card related, hence it's the only place where you saw an improvement :slight_smile:

In conclusion: I am glad I didn’t spend extra on a Titan that probably wouldn’t have helped much more.

In terms of what you've described, not at all. It would have made an enormous difference if you needed DP or if you were maxing out the RAM of your 780, but you don't sound like you're doing either.

What I am doing next is trying to improve my CPU by overclocking (it's running at a stock speed of 3.4 GHz, which I hope to get up to 4.5 or higher), and I've bought a fancy water cooler (H100i) to help with this (I know there are cheaper alternatives, but my system was running hot and this one had good reviews).

If you haven't unboxed it and can return it, you might as well get an H50. It's unlikely to be the cooler that makes the difference between 4.3 (practically guaranteed) and 4.7 (about one third of CPUs make it there, only a small fraction past that); it will come down to how lucky your roll of the dice with that particular CPU turns out to be.
Cooling only makes a difference when you are truly and aggressively pushing the voltage to stabilize a big boost, and Haswell CPUs will normally hold heat internally and stay relatively high in temperature regardless of the small difference in dissipation capability between an H50 and an H100.

Finally… in the hopefully not too distant future, I hope to get a Quadro card. I am hoping this will give a significant increase in viewport fps, and I will probably (nearer the time) start a new post asking for advice on how best to do this (and which one to go for). Thanks all very much for the help, and it's been interesting reading the follow-on conversation :thumbsup:

In what situation? Because chances are it won’t :slight_smile: