For viewport speeds in Max, it depends on both memory and cores. For very large scenes with 2 million+ polys, you'll need a lot of memory, especially with Nitrous accelerated viewports. But to play back calculation-intensive animations, like Phoenix FD simulations, you'll need more cores.
For rendering speeds, it depends on the renderer. If you're using a GPU-accelerated renderer like Vray RT GPU, then we can talk about upgrading your GPU. And keep in mind that GPU rendering is mostly based on the OpenCL API, at which AMD excels far more than Nvidia. But for normal rendering like mental ray or Vray, it's all on your CPU, and the i7-2600K, i7-2700K, i7-3770K, i7-3960K, i7-3970K, and i5-3570K, just to name the top of the spec, pack all the punch you'll ever need with just a little overclocking. My i7-3770K renders at lightspeed, at 4.8 GHz, ON AIR. No stock coolers, no water, no liquid nitrogen.
The answer to your third question is what I stated first.
Double precision floating point performance, measured in GFLOPS, is typically the amount of floating point operations (numbers after the decimal point) a GPU can handle in 1 second. GPUs are based on parallelism and so they naturally have more GFLOPS than CPUs (CPUs are optimized for integer calculations, but come with a subsidiary floating point unit (FPU) or vector unit (VU) or both, as seen in the PlayStation 2's primary processor, the great Emotion Engine).
Based on all this, I would advise dual AMD Radeon HD 7970s in CrossFireX. AMD is superior to Nvidia in OpenCL, the API on which GPGPU (the means by which you can render with your GPU) is based. It also seems superior to the GTX 780 in DX11 acceleration, which translates into your Nitrous viewport frame rate. I hope all that makes sense.
Thanks Lanre, that actually answers my question pretty much perfectly. I've posted this same question in a few places and you're the first to be able to directly relate a specification to a real-world application (or function of an application - not sure how to phrase it!)
I've realised that the Vray I'm using is the CPU renderer, and I'm relatively happy with render speeds (do you know if RT is regarded as superior - or is it just that some people prefer GPU rendering - or perhaps the real-time element?).
Therefore I guess what I'm most interested in is the viewport speed (thanks to you I now understand that!) - are there any benchmark sites showing card comparisons specific to this, or is it as simple as looking at the DX11 performance?
Where I've come down is what most people recommend - have a gaming card for gaming and a professional card for working. I'm going to stick with the 780 despite agreeing the dual 7970s are faster - I'm just happier with one card for gaming, y'see.
The next future upgrade is going to be either a first-hand Quadro K4000, or a second-hand K5000 if possible, and I'll run a dual-boot system with the different drivers - one for working, one for gaming. It might be pricey, but I think it's the only way to get the best of both worlds!
I would advise against benchmarking sites unless they have benchmarks for 3ds Max itself. They may benchmark with other software, mostly games, but the games may depend on a performance feature that is specific to only one GPU generation or manufacturer, like Nvidia PhysX. Even though a card may outperform the competition because of such features, the target software, which is 3ds Max, may not use the feature at all. This is why benchmarks are often misleading.
But for viewport speed, as long as you don't have a 3D model of planet Earth, of six zillion polys, with every last house in the right place =D, the Nvidia GTX 780 or Radeon HD 7970 is all you need. Even more than enough.
As for Vray Adv and Vray RT, RT is far inferior to the CPU renderer. RT can't even render PhoenixFD simulations, Chaos Group's own fluid simulator, let alone FumeFX. I emailed them about this issue, and they said they would try to fix it in later revisions. While RT may render faster, especially on multi-GPU systems, it is notorious for not being able to handle large scenes, not to mention the huge number of bugs in it. RT is mainly beneficial for ActiveShade; that's its main advantage. But for production renders, I advise against it, except for brief test renders or animations.
Stick to the GTX 780 for now. But when you have enough money, I don't advise buying another GPU, because the GTX 780 is almost on par with the Titan, and that's all the power you'll need. Instead, use the money to buy a server computer and a few stacks of Intel Xeons - Sandy Bridge Extreme, or Ivy Bridge Extreme if it's out by that time. CGI rendering is still largely CPU-based, and that is likely to remain the case for a few decades.
If I might make a different suggestion… the 570 is still a very good card; it's really quite unlikely that this will be the bottleneck in terms of editor playback speed. Chances are that it's much more of a CPU speed issue than the graphics card; if you upgraded to either the 780 or the Titan, you'd likely be a bit upset that there's very little difference.
That's fundamentally incorrect.
There are currently more CUDA-only or dual OCL + CUDA implementations available than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn't putting much stock in OCL and optimizing accordingly (unsurprisingly), it can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only for now.
That precludes a lot of products such as Octane (CUDA only), RedShift (CUDA only), and some pretty decent CUDA extensions for other software.
At this point in time CUDA is quite simply more widely adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable; not unless you are putting together a folding rig, a Bitcoin miner, and so on. In DCC, OCL is largely not of the relevance we'd all hope for yet.
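As an aside, if you want to see for yourself which compute platforms a given box actually exposes before spending money, something like the sketch below will list them. This is only a minimal sketch, assuming the pyopencl bindings and the vendor's OpenCL drivers are installed; CUDA devices can be enumerated in much the same way with pycuda.

```python
# Minimal sketch: list the OpenCL platforms and devices the current
# drivers expose. Assumes the pyopencl package and a working ICD/driver.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
```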
Double precision floating point performance, measured in GFLOPS, is typically the amount of floating point operations (numbers after the decimal point) a GPU can handle in 1 second.
Incorrect at best.
FLOPS are floating point operations per second, NOT double-precision ops. Big difference. DP FLOPS aren't an atomic operation; you don't measure by them.
Double precision involves a lot more to be taken into consideration, not least that many video cards are artificially crippled in their DP for market phasing (e.g. GTX 600 and 700 cards, but not the 500s, Quadros, or Titans).
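To put rough numbers on that crippling, here's a small sketch. The single-precision peaks and DP ratios are approximate figures pulled from public spec sheets, meant only to illustrate the SP-to-DP gap on different cards, not to benchmark anything.

```python
# Approximate single-precision peaks (GFLOPS) and DP:SP ratios from public
# spec sheets; figures are illustrative, not measured.
cards = {
    #  name            (SP peak GFLOPS, DP fraction of SP)
    "GTX 580":        (1581, 1 / 8),    # Fermi GeForce: DP at 1/8, far less capped than Kepler GeForce
    "GTX 680":        (3090, 1 / 24),   # Kepler GK104: heavily capped
    "GTX 780":        (3977, 1 / 24),   # Kepler GK110 GeForce: capped
    "GTX Titan":      (4500, 1 / 3),    # GK110 with the DP switch enabled
    "Radeon HD 7970": (3789, 1 / 4),    # Tahiti
}

for name, (sp_peak, dp_ratio) in cards.items():
    print(f"{name:16s} SP ~{sp_peak:5.0f} GFLOPS   DP ~{sp_peak * dp_ratio:6.0f} GFLOPS")
```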
GPUs are based on parallelism and so they naturally have more GFLOPS than CPUs (CPUs are optimized for integer calculations, but come with a subsidiary floating point unit (FPU) or vector unit (VU) or both, as seen in the PlayStation 2's primary processor, the great Emotion Engine)
I'm not entirely sure where you are going with this.
It's a 386 to Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors will also come into play (e.g. whether the compiler was set to, or even able to, take advantage of certain features).
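To make that last point concrete, here's a minimal sketch (plain Python plus NumPy, nothing Max-specific): the same CPU posts wildly different effective throughput depending on whether the code path actually exploits its optimized vector routines, which is exactly why raw spec-sheet numbers say little about what a given application achieves. Absolute numbers will vary from machine to machine.

```python
# Compare a naive interpreted loop with an optimized, vectorized path
# for the same dot product; the hardware is identical, the throughput isn't.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Naive scalar loop: no vectorization, heavy interpreter overhead.
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i] * b[i]
loop_time = time.perf_counter() - start

# Vectorized path: hands the work to an optimized, SIMD-aware BLAS routine.
start = time.perf_counter()
total_np = np.dot(a, b)
numpy_time = time.perf_counter() - start

flops = 2 * n  # one multiply + one add per element
print(f"loop:  {flops / loop_time / 1e6:.1f} MFLOPS")
print(f"numpy: {flops / numpy_time / 1e6:.1f} MFLOPS")
```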
Based on all this, I would advise dual AMD Radeon HD 7970s in CrossFireX. AMD is superior to Nvidia in OpenCL, the API on which GPGPU (the means by which you can render with your GPU) is based. It also seems superior to the GTX 780 in DX11 acceleration, which translates into your Nitrous viewport frame rate. I hope all that makes sense.
You keep mentioning OCL as if it's the premier, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such an overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It's easier to find something CUDA-only than something OCL-only among products of any relevance.
Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that, the 79xx has considerable synchronicity issues when it really gets taxed (e.g. offline rendering on a GPU).
The K4000 is daylight robbery. Unless you need Quadro-specific features (12-bit colour buffers, proper stereo support in Nuke, etc.), in which case the K5000 or the K6000 are where it's at, a Titan, a 7xx, or a 580 if on a budget tends to be better bang for the buck.
I have a Titan and don't regret it, but I do a fair chunk of CUDA work. Unless you need the uncrippled DP (which most software doesn't require) or the abundant RAM, the 770 and 780 are better value.
On a budget, especially if you need DP on a budget (unlikely), the 5xx cards remain strong.
That’s fundamentally incorrect.
There are currently more CUDA-only or dual OCL + CUDA implementations available than there are OCL-only engines.
VRay RT, for example, offers both.
And while nVIDIA isn't putting much stock in OCL and optimizing accordingly (unsurprisingly), it can and will run both unless there are hardware-targeted optimizations.
AMD will lock you into OCL only for now.
That precludes a lot of products such as Octane (CUDA only), RedShift (CUDA only), and some pretty decent CUDA extensions for other software.
At this point in time CUDA is quite simply more widely adopted, more mature, better documented, and much better served. Giving it up for some mythical apps that benefit from OCL is not reasonable; not unless you are putting together a folding rig, a Bitcoin miner, and so on. In DCC, OCL is largely not of the relevance we'd all hope for yet.
I guess you're correct in saying that CUDA is used a lot more in software, but the most-used renderers are not limited to CUDA. AMD, even though limited to OpenCL, excels more than Nvidia at OpenCL. And benchmarks have shown that Nvidia's CUDA speeds are almost, if not entirely, equivalent to its OpenCL speeds. Which means I can conclude that if AMD were to support CUDA, Nvidia's gonna get their ass whipped. It's a pity that all these renderers only support CUDA; I guess its SDK is more user-friendly. But as for Vray RT, where OpenCL and CUDA are on par, AMD beats Nvidia, WITH THE EXCEPTION OF THE TITAN.
Incorrect at best.
FLOPS are floating point operations per second, NOT double-precision ops. Big difference. DP FLOPS aren't an atomic operation; you don't measure by them.
Double precision involves a lot more to be taken into consideration, not least that many video cards are artificially crippled in their DP for market phasing (e.g. GTX 600 and 700 cards, but not the 500s, Quadros, or Titans).
I wasn't going into depth; that's just more or less an introductory explanation.
I’m not entirely sure where you are going with this.
It’s a 386 to Pentium set of notions. Modern CPUs are not that simple.
There is a lot more differentiating the two than that, and CPU architecture these days is ridiculously complex. A number of other factors will also come into play (e.g. whether the compiler was set to, or even able to, take advantage of certain features).
But I cannot be proved wrong. Indeed, CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS while the Radeon HD 7970 does about 4.2 TFLOPS?
You keep mentioning OCL as if it's the premier, or even the only, GPU rendering platform. That is a million miles off the truth.
nVIDIA has such an overwhelming dominance in the DCC market that nobody in their right mind would make an OCL-only commercial product. It's easier to find something CUDA-only than something OCL-only among products of any relevance.
Crossfire will do absolutely nothing for your viewport, and is generally regarded as a waste of money outside of gaming.
On top of that, the 79xx has considerable synchronicity issues when it really gets taxed (e.g. offline rendering on a GPU).
You're wrong there; Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.
I still do not think a GPU upgrade is necessary. Invest in Xeons.
Thanks all for the continued help! I don't think I've ever considered so many options for a hardware purchase - attempting to bridge the gap between gaming and workstation is a first for me (normally I've just got by with gaming tech, but now that I've pretty much qualified as an architect, it's becoming a bit more necessary to have a bit of professional oomph).
I did eventually get the 780… It almost became a K4000 for a moment, but I decided I'd like to go with a gaming card for now and see how my visualisation freelance work takes off. Whilst I hear the advice here about my 570 being OK, it isn't coping well (viewport fps) with the scenes from my student projects, and as I become more proficient it's just going to be a more annoying problem. I've also realised that rendering isn't really an issue for me, as my CPU seems to belt through Vray scenes with little or no problem, so OK there for now. It turned out my main concern was limited to the viewport, which I think the 780 will help with for now!
Cheers for all the help - you've all been great at steering me through a difficult purchase!
Absolutely not. Max and Revit (and all other DCC apps, AFAIK) just don't support it for viewport operations. Only GPU renderers will, and even then SLI is neither required nor beneficial.
AMD can't support CUDA; it's nVIDIA's property.
Besides, which “most used renderers” are not limited to CUDA? VRay RT is the only relevant one I can think of with dual support.
Octane, RedShift, and iRay are all CUDA only.
I wasn't going into depth; that's just more or less an introductory explanation.
Sorry, but it was a WRONG explanation, not an introductory one.
But I cannot be proved wrong. Indeed, CPUs are increasingly complex nowadays, but can you explain why the Intel i7-2600K is capable of about 125 GFLOPS while the Radeon HD 7970 does about 4.2 TFLOPS?
Huh?
I wasn't saying the FLOPS figures are not like that; I was saying your explanation of CPUs was poor at best, and overly dated for sure.
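For what it's worth, neither of those figures is mysterious; both are theoretical peaks you can derive from the spec sheets, roughly as sketched below. The clock and FLOPs-per-cycle values are approximate, the big GPU number is single precision, and the figures you quoted are only roughly in line with these peaks. The gap is real, but it says nothing about how well a given renderer or viewport actually uses either chip, which was my point.

```python
# Rough theoretical-peak arithmetic, using approximate spec-sheet values.
# Peak GFLOPS ~= cores * clock (GHz) * FLOPs per core per cycle.

# i7-2600K (Sandy Bridge): 4 cores at ~3.4 GHz.
# With AVX each core can retire ~8 double-precision FLOPs per cycle
# (one 4-wide add plus one 4-wide multiply), or ~16 single-precision.
cpu_dp = 4 * 3.4 * 8     # ~109 GFLOPS double precision
cpu_sp = 4 * 3.4 * 16    # ~218 GFLOPS single precision

# Radeon HD 7970 (Tahiti): 2048 stream processors, each able to issue
# a fused multiply-add (2 FLOPs) per cycle.
gpu_sp_standard = 2048 * 0.925 * 2 / 1000   # ~3.8 TFLOPS at 925 MHz
gpu_sp_ghz_ed   = 2048 * 1.05  * 2 / 1000   # ~4.3 TFLOPS (GHz Edition)
gpu_dp          = gpu_sp_standard / 4        # Tahiti runs DP at 1/4 rate

print(f"i7-2600K  ~{cpu_dp:.0f} GFLOPS DP, ~{cpu_sp:.0f} GFLOPS SP")
print(f"HD 7970   ~{gpu_sp_standard:.1f}-{gpu_sp_ghz_ed:.1f} TFLOPS SP, ~{gpu_dp:.1f} TFLOPS DP")
```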
You're wrong there; Crossfire and SLI increase viewport speeds, most notably when handling very large scenes.
Neither XFire nor SLI accelerates any major DCC app's viewport. That's known, and confirmed by employees of the various software houses and so on.
People in these forums ask these questions because they are about to spend their hard-earned money; unless you know something for certain, hold the advice, especially when it's bad advice.
I still do not think a GPU upgrade is necessary. Invest in Xeons.
And why should one “invest” in Xeons? It’s got absolutely nothing to do with the topic.
Oh well…
I said he should invest in Xeons because it would help his rendering; it's the final render that matters, and nobody except himself cares about the viewport. The Xeons will really benefit his rendering, far more than CUDA or OpenCL. CPU rendering is still dominant in VFX for film, so forget about the Quadros, Keplers, and Tahitis. Get a good server with a couple of Sandy Bridge-EX chips and start yourself a render farm/supercomputer.
But I read on a forum that SLI/Crossfire benefit Max's viewport. Here's the link
OK, so you have a single post on a viz forum from a few years back vs. the software developers themselves stating that SLI is not supported by their software…
Actually, I haven't been in CG for long because I haven't been in life for long. I'm just 14; I have school to keep up with and barely have time to work, but when I do, the results come out pretty good. I've been working with Max since 2009, but I'm far from expert. I'm at the limit of what my time allows me to do. You can't blame me.
I didn't check the date on that, but since it was the only post I saw, I took it at face value. That's why I'm saving up to get another 7970 for Crossfire.
It’s not segregation at all.
nVIDIA got there first, provided GPGPU-focused resources for farming (Tesla) early on, and is therefore ahead of the curve. As a brand it's much more widely adopted in the DCC market, and CUDA is simply a more mature, better-served, and better-documented platform than OCL for many applications, not to mention that nVIDIA has broad technology support, a clearer and more responsive driver dev map and bugfix rate, and has had Linux and scientific-community support for much longer than ATI did, pre- or post-acquisition.
It's simply a much stronger candidate and an easier platform to work with. AMD is way behind in all those regards, and while OCL is an open standard, which is good, that's far from enough to push people into adoption, especially when the CPU partners don't care much for it.
We'll see if HSA will blow anybody's socks off, which won't be for a few years anyway, and whether they will ever get anywhere close to the farm market, but for now you can't blame developers or call foul play if they decide not to support OCL yet. They have practically no incentive to.
So, what happened to all these amazing OCL products you mentioned as dominating the market, the ones you'd advise buying an OCL-focused card for? Thought of any yet?
Lanre, you're young, and while your intentions are good, butting heads against long-standing and very experienced techs and CG artists here will not win you any points, son. Relax. Do your own thing, but listen to what people here are saying. It will help you; nobody is telling you these things to hurt you. But if you come in here and misinform people about topics you're merely speculating on, you're not going to get good reactions. The original poster here was asking for advice on how to spend his money - and you gave him bad advice. Don't do that. Better to know nothing than to know something wrong.
If you blow your own money on a dual-GPU Radeon or Crossfire setup only to find, just as everyone else in the universe (including the people who developed the software you're using in the first place) has told you, that it has no effect on the viewport itself, you'll feel pretty bad! And we don't want you to feel bad.
Relax. Absorb all this information, and it’s entirely okay to be wrong. To be wrong is to be a scientist! Being right is easy.