Multiple GPUs - Need some clarification


I’ve read that CPUs have PCIe lanes and that those are used by the graphics card.

1.- So if a processor has 20 PCIe lanes, does that mean I can only run one graphics card at x16 (or two at x8)?

2.- Does that mean if I want two cards at x16 I need a processor with 40 PCIe lanes?

3.- If that’s the case, why does having multiple graphics cards on a 20 PCIe lane processor actually make a difference while GPU rendering?

(About the third question, here’s a post showcasing it)

4.- If having multiple graphics cards on a 20 PCIe lane processor doesn’t negatively affect GPU rendering, then when does the lane count actually matter? (negatively)

Thank you :slight_smile:

  1. More or less, yes.
  2. If you want both to run at full bandwidth then yes.
  3. The PCI Express lanes are unlikely to be a bottleneck in that scenario.
  4. Older PCI Express versions, where the per-lane frequency is lower (so the number of lanes matters more); huge I/O devices like high-performance disk controllers or 100Gb-and-up network adapters; and, eventually, future graphics cards.
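To put rough numbers on answers 2 and 4, here’s a small sketch of the per-lane bandwidth math. The transfer rates and encoding overheads (8b/10b for PCIe 1.0/2.0, 128b/130b for 3.0/4.0) come from the PCIe spec; the point is just to show why a 3.0 x8 link still moves a lot of data.

```python
# Rough per-lane throughput math for PCIe links.
PCIE_GEN = {
    # generation: (GT/s per lane, payload bits / encoded bits)
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
}

def bandwidth_gbps(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    gt_per_s, efficiency = PCIE_GEN[gen]
    # GT/s * efficiency = usable Gbit/s per lane; divide by 8 for GB/s.
    return gt_per_s * efficiency * lanes / 8

print(f"3.0 x16: {bandwidth_gbps('3.0', 16):.1f} GB/s")  # ~15.8 GB/s
print(f"3.0 x8:  {bandwidth_gbps('3.0', 8):.1f} GB/s")   # ~7.9 GB/s
print(f"1.0 x8:  {bandwidth_gbps('1.0', 8):.1f} GB/s")   # 2.0 GB/s
```

So dropping a render GPU from x16 to x8 on PCIe 3.0 still leaves roughly 7.9 GB/s each way, which is usually plenty once the scene data is resident on the card.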

Bonus reading material. There are motherboards designed for cryptocurrency mining where each GPU has only one PCI Express lane so more GPUs can be used with a single CPU, drive, etc. Not saying this is a good idea for you, just saying it’s an interesting thing that exists.

Some GPU workloads are very dependent on the PCI Express lanes to deliver work and data. Other workloads not so much.


Thank you very much! I guess I’m going dual x8 since it fits my needs :slight_smile:


Little heads-up if you’re thinking about dual GPUs. I’ve been considering that option myself. However, I just discovered that NVIDIA is most likely abandoning SLI with the upcoming GTX 20xx series. It seems that they might be replacing it with some version of NVLink.

From what I’ve been reading, which is a lot, you might want to hold off on SLI anyway. SLI does (generally) improve performance. However, there are some caveats. Some are obvious, others less so.

  1. Cost. If you’re considering SLI then, well, you already know this.

  2. PSU requirements. Double the GPUs means (almost) double the power requirements. Don’t go in for SLI if you don’t have a beefy PSU.

  3. SLI bridge. Beware. Know exactly how far apart you’ll be placing these cards. That will determine what size bridge you get.

  4. SLI bridge (part 2). If you insist on doing SLI, invest in a High Bandwidth bridge. While performance is similar in a general sense, the flexible and stock NVIDIA ones are somewhat slower.

  5. Spacing. Give your cards room to breathe. Place them too close together and you run the risk of blocking off airflow. That’s true even if you’ve got a hybrid liquid cooled card.

  6. Not all games support SLI. Those that don’t might actually end up performing worse than if you just had a single card.

  7. Games that do support SLI can suffer performance issues. The link that enables them to operate in tandem isn’t 100% perfect, which seems to be why NVIDIA might be ditching SLI. As such, you can expect an occasional stutter in some games. (YouTube has video examples on display.)

  8. GPU accelerated rendering varies from app to app. For example, if you’re rendering in Blender and opt to also use GPU rendering then you need to turn off SLI. From what I’ve read, you can still use both cards, but individually. SLI is a no go. Other apps may impose a similar restriction.

  9. Again, it looks like SLI is being phased out. There have been leaks regarding a new MSI 2080ti (pics and all) and SLI has been swapped out for the (reportedly) more stable NVLink.

  10. SLI is pretty much wasted if you don’t intend on doing 4K. You can still rock all current games and many future ones at 1080p with a single 1080ti. Even if you do go 4k, a single such card can actually run 4k games at respectable rates too. (25-60 fps depending on the game.) Dual cards might future proof your gaming performance in 4k, but that’s up to you.

  11. It should go without saying. HOWEVER, beware of heat. If you’re using a mid-tower case and are running multiple GPUs then you better have good cooling. A single card alone can usually drive up case temps when you’ve got a full load. Multiple cards? You’re looking to overheat your system if you don’t have ample cooling and airflow. IMO, I probably wouldn’t attempt a dual setup in a cramped mid-tower. Full tower (or super full) only. (Bigger cases are better, imo, anyway. You can open yourself up to potential liquid cooling, extra case fans, and ventilation from multiple angles.)

One other thing. If you’re a game developer (indie or pro), know your target audience. Even if your game takes 2 years to make, it’s still unlikely that 4K will be standard by that time. 4K will still be the sole domain of the enthusiasts by 2020. Unless console gaming and streaming services push that market segment hard, I don’t see 4K displays being the norm in 2 years’ time. As a developer, you don’t want to target only 1% of your potential audience. Most gamers - especially those on consoles or playing ports on PC - will still be at 1080p in 2020.

IMO, save your money. Invest in the fastest single GPU you can afford atm. That’ll last you longer and probably serve you better. Even if you have to switch it out in a few years, replacing one GPU is always cheaper than replacing two. Until NVIDIA gets its act together on multiple GPU setups, you may want to steer clear. All signs point to SLI going bye bye for future cards. (The 20xx series reportedly goes on sale sometime within the next few weeks, afaik.)


That’s a lot of info about SLI, but if you’re going to be GPU rendering you wouldn’t want SLI active! It can have a detrimental effect on the speed/reliability of GPU rendering - in Redshift at least.
No need for bridges or any of that. Multiple GPUs will work just fine without.

NVLink is a whole new ball game. It looks like the 2070 won’t have it; the 2080 and 2080ti will. But you will only be able to connect 2 GPUs at a time. So you could theoretically have 4 GPUs, with 2 sets of combined memory.

Redshift uses the PCIe lanes more than Octane for out-of-core rendering, texture swapping, etc. - so my advice in RS would be to go with x8 PCIe 3.0 lanes per GPU. With Octane I’ve seen setups that have far fewer (like a single x1 lane in some cases), but as it’s only loading the data once into GPU memory it’s not too much of a hit.
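A quick back-of-envelope sketch of why that x8-vs-x1 distinction matters: the time to push a texture set to the card scales inversely with lane count. The ~0.985 GB/s per-lane figure is the theoretical PCIe 3.0 maximum; the 4 GB scene size here is a made-up example, not a Redshift benchmark.

```python
# Back-of-envelope: time to move a texture set over PCIe 3.0 links
# of different widths. ~0.985 GB/s is the theoretical per-lane max.
PCIE3_GBPS_PER_LANE = 0.985

def transfer_seconds(payload_gb: float, lanes: int) -> float:
    """Seconds to move payload_gb over a PCIe 3.0 link with `lanes` lanes."""
    return payload_gb / (PCIE3_GBPS_PER_LANE * lanes)

scene_gb = 4.0  # hypothetical texture set
for lanes in (16, 8, 1):
    print(f"x{lanes}: {transfer_seconds(scene_gb, lanes):.2f} s")
```

A one-time 4-second upload on x1 is tolerable for a renderer like Octane that loads everything once, but if out-of-core rendering is re-streaming textures mid-frame, an 8x slower link hurts on every swap.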

Lastly, a note on PLX chips, which some motherboards use to provide more lanes than the CPU has: there have been major compatibility issues with NVIDIA drivers causing crashes. The very latest driver seems to have resolved this somewhat, but I’d still stay away.