There’s no real point in running them together, to be honest.
SLI does little (in fact, nothing) for DCC apps and would require identical cards anyway; the K5000 already supports more monitors than you can put on a sturdy desk without cracking it, and it vastly outpowers the K2000.
They should run just fine in non-SLI mode if you want more monitors/resolution than the K5000 can handle alone.
Each card will drive the monitor it’s hooked up to independently.
If you run them in SLI mode (not recommended for 3D apps, since it offers nothing), the K5000 will be throttled to the K2000’s slower speed so the two can run in parallel.
Could I just keep the K2000 in there for its CUDA cores? Adobe CC is supposed to recognize two cards once it’s released. I guess I’ll test it out and see how it responds. Can’t hurt.
If there isn’t much of a difference, I’ll just sell the K2000.
By saying “for its CUDA cores” it sounds like you want them in SLI.
As mentioned, SLI is:
A) not used by most DCC apps
B) usually not stable, if working at all, between different cards
C) IF it does work between different cards, it will address memory and parallelism by the lowest common denominator, which means you would have two half-arsed K2000s with the memory of just one (e.g. a 4GB K5000 paired with a 2GB K2000 leaves you an effective 2GB, since SLI mirrors memory across cards), instead of the superior K5000 on its own.
The only scenario where two different cards tend to have some use is when you want/need two different processing and rendering pipes going to two different monitors, something we just don’t do much in your average CGI-oriented software.
Sell the K2000; running them in tandem is unlikely to ever produce any notable benefit in any plausible scenario.
You can leave the K2000 in there, with no monitor connected, and it will give your CUDA rendering applications a small boost: the After Effects 3D renderer, Premiere CC, etc.
A) It’s not recommended in general, but of the apps you listed, AE is practically the only one that will, in a limited subset of situations, actually use a second card just for its available CUDA cores, SLI or not.
B) Yes, the available VRAM will be capped to the card with the lowest amount of RAM (and you should usually count on only about 70-80% of that RAM being available, although if the main card driving the screens has more RAM you might get more favorable usage, since only the rendering part needs to be memory-aligned and therefore capped; multi-GPU rendering is synchronous, so there is no way to work around that).
AFAIK only the 3D raytracer will actually pick those cores up, as it is, functionally speaking, a rendering engine. The After Effects documentation puts it this way:
If you have multiple GPUs installed, the GPU-accelerated ray-traced 3D renderer will use the CUDA cores on all of them, as long as they are of the same CUDA compute level. (See the technical specifications of your GPU for its CUDA compute level.) After Effects will also use all of the VRAM on the installed GPUs, with the caveat that both cards will be treated as if they each have the amount of VRAM on the card with the lesser amount of VRAM.
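If you want to sanity-check what a mixed pair looks like to CUDA before committing, here’s a minimal sketch using the standard CUDA runtime API (compile with nvcc). The ~75% usable-memory factor is just the rule of thumb mentioned above, not something the API reports:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    size_t minMem = (size_t)-1;  /* smallest VRAM seen so far */
    int major0 = 0, minor0 = 0;  /* compute level of GPU 0 */
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, compute %d.%d, %zu MB VRAM\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem >> 20);
        if (i == 0) { major0 = prop.major; minor0 = prop.minor; }
        else if (prop.major != major0 || prop.minor != minor0)
            printf("  -> compute level differs from GPU 0; AE's raytracer will skip mismatched cards\n");
        if (prop.totalGlobalMem < minMem) minMem = prop.totalGlobalMem;
    }
    /* Per the AE docs quoted above, every card is treated as if it had
       the smallest card's VRAM; ~75% usable is a rule of thumb, not an API value. */
    printf("Effective cap per card: %zu MB (~%zu MB usable)\n",
           minMem >> 20, (size_t)(minMem * 0.75) >> 20);
    return 0;
}

For what it’s worth, the K5000 and K2000 are both compute 3.0 Kepler parts if I remember right, so they’d pass the compute-level check; the VRAM cap is the real cost of the pairing.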
I can’t comment on Premiere; I simply don’t use it or have any interest whatsoever in it, so I don’t know which components would actually use a GPU.
Thank you, JackO, for taking the time to respond.
One last question: would you keep the K5000 or use the GTX Titan for 3ds Max? I’m worried about viewport performance.
It seems like the two classes of GPUs are blurring together more now. Nothing I do is mission-critical, and I live in a big town, so it isn’t out of the question to just go to the store and pick up a card if one falters.
I’m going to sell the K2000; it doesn’t seem like it would give me that much more performance.
Thanks,
Joe
The Titan is a great card, and from the limited testing I’ve seen it still beats the 780 by a wide margin in DCC work (the 780, despite not being as badly off as the 6xx series, still seems to have received SOME double-precision crippling that the Titan isn’t suffering from; games are another story).
Unless you absolutely and desperately need 6GB instead of 4, or you’re strapped for cash (which you don’t sound like), I see no reason to sell a K5000 to pick up a Titan.
I’d never buy a K5000 here in Sydney: Oz prices for Quadros are grossly inflated, compared to only a minor markup on the GTX line, and below $1000 I could import tax-free from the States, making the divide even greater. But those would be the only grounds on which I’d pick a Titan hands down over a K5K.
If I had a K5K for free I wouldn’t feel a pressing need to trade it for a Titan, to be honest, and in some apps (Nuke and Mari) the K5K is reported to hold more than a small edge.
For DX11 or offline rendering I doubt there would be much difference swinging one way or the other, memory aside.