GTX 1080 Ti - Three and Four Way SLI


#1

Hi there,

Very new to the hardware side of things.
I am building a custom PC and I have just come across this article:

http://www.pcworld.com/article/3082708/components-graphics/nvidia-quietly-kills-3-and-4-way-sli-support-for-geforce-gtx-10-series-graphics-cards.html

I am wondering: does this only affect gaming?

Is there anyone running a three-way or four-way SLI setup with a 10-series NVIDIA card and happily using Cinema 4D, Houdini, Arnold, Octane, etc.?


#2

A multi-GPU setup for CG work is useful only for GPU rendering (with Octane, for example), and it works without an SLI configuration, i.e. without the SLI bridge. You can add as many GPUs as your motherboard can hold, and they don’t have to be identical: for example, you can have a GTX 1080 for the viewport and add two or three 1070s for rendering, or one 1070 and two 1060s. GPU rendering software will use all the devices it can detect, but the limiting factor is VRAM: the usable amount is capped by the GPU with the least VRAM.
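To make that last point concrete, here is a rough Python sketch (assuming the nvidia-ml-py / pynvml package and an NVIDIA driver are installed) that lists every GPU the system exposes and the effective cap for a renderer that has to mirror the whole scene onto each card, which is what most GPU renderers traditionally do:

# Rough sketch: list the GPUs NVML can see and the smallest VRAM pool,
# which is the practical scene-size cap for renderers that mirror the
# whole scene onto every card. Requires the nvidia-ml-py (pynvml) package.
import pynvml

pynvml.nvmlInit()
totals = []
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    totals.append(mem.total)
    print(f"GPU {i}: {name}, {mem.total / 2**30:.1f} GiB VRAM")

if totals:
    print(f"Effective cap if the scene is mirrored on every GPU: "
          f"{min(totals) / 2**30:.1f} GiB")
pynvml.nvmlShutdown()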

For viewports, SLI is useless in all the CG software I’m aware of. It only matters for gaming, and not even in every game. In most cases it’s better to buy a single higher-end GPU than two lower-end ones in SLI; as far as I know, many games have scaling issues due to poor SLI driver support.


#3

hi there,

thanks for the reply, this definitely clears things up!
could you explain what you mean by 'the limiting factor here is VRAM: it’s limited by the GPU with the lowest VRAM'? am I right in understanding that, for example, if you have two GPUs, only the one with the lowest VRAM would get utilized and the other one would be ignored?

many thanks


#4

You’re welcome. No, what I meant was that if you have, for instance, a 1070 (8 GB VRAM) and a 1060 (6 GB VRAM) working together on rendering, your GPU renderer could only handle a scene up to about 6 GB, because both GPUs have to load the same data at the same time, so the 6 GB of the 1060 becomes the limiting factor.

There are some GPU renderers, like Redshift, that can use the main system’s RAM if the GPU’s memory comes up short. This is a big plus, especially in ArchViz, where most scenes are very memory hungry.
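If it helps, here is a purely conceptual Python sketch of that trade-off; the function and the numbers are made up for illustration and are not how Redshift actually manages memory:

# Conceptual illustration only; real renderers manage memory far more finely.
def plan_render(scene_gb: float, vram_gb: float, out_of_core: bool) -> str:
    if scene_gb <= vram_gb:
        return "scene fits in VRAM: full GPU speed"
    if out_of_core:
        # The overflow is paged in from system RAM over PCIe, so expect some
        # slowdown; how much depends on the renderer and the size of the overflow.
        return "renders, but part of the scene lives in system RAM"
    return "out of memory: the render fails or falls back to CPU"

# Hypothetical case: a 7 GB scene on a 6 GB card.
print(plan_render(7.0, 6.0, out_of_core=True))
print(plan_render(7.0, 6.0, out_of_core=False))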


#5

right - gotcha!
after this I’m definitely leaning more towards getting one good GPU as opposed to several lower-end ones.
it’s funny how they don’t tell you these things when you’re purchasing.

thanks again for the info!


#6

Hi there,
some further questions; these may need a thread of their own, but…

  • as far as my research has gone… more graphics cards (not SLI) doesn’t necessarily mean faster rendering because of the VRAM limiting factor, so how would you get around this?

  • also, I was under the impression that multiple graphics cards (not SLI) would also increase viewport performance, which you say is incorrect - so what is the best way to increase viewport performance in, say, Cinema 4D?


#7

The VRAM factor is a well-known issue, and each user should do the math on the scene sizes they usually work with before moving to GPU rendering.

If the VRAM is enough, though, GPU rendering speed usually scales close to linearly when adding more GPUs, with the exception of some renderers of course (I think Blender’s Cycles GPU mode is one of them).
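As a back-of-the-envelope illustration (an idealized model I’m assuming here, not a measured benchmark): the GPU-bound part of a frame divides across the cards, while per-frame overhead such as scene export and upload does not.

def estimated_frame_minutes(gpu_bound_minutes: float, num_gpus: int,
                            overhead_minutes: float = 0.0) -> float:
    # Idealized: the GPU-bound work scales linearly, the overhead does not.
    return overhead_minutes + gpu_bound_minutes / num_gpus

# Hypothetical 20-minute frame with 1 minute of non-scaling overhead.
for n in (1, 2, 3, 4):
    print(f"{n} GPU(s): {estimated_frame_minutes(20.0, n, 1.0):.1f} min")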

As I’ve already mentioned, Redshift is one of the GPU renderers that can make use of the system’s RAM if the VRAM is not enough. To be honest, I don’t know whether this comes with a latency penalty. Maybe some Redshift users here in the forums could shed some light on this. It has been announced that Blender 2.8 will have this feature too (falling back to the system’s RAM when the VRAM is not enough). As a Blender user, I’m looking forward to this with great interest.

Viewport performance is usually considered to be tightly dependent on GPU performance, but my limited experience shows that this is not entirely true. Especially with C4D, the OpenGL test of the well-known Cinebench benchmark shows that fps also depends on core count and overall CPU performance. I’ve swapped various CPUs in my personal rig over the last 4-5 years and had the opportunity to test them with the same GPUs and see the differences. Yes, upgrading the GPU increased the fps, but overclocking the CPU did the same. Recently I found that higher core counts also had a positive impact on fps, which is something I couldn’t have imagined before trying it myself. This review shows the same behaviour: https://www.pugetsystems.com/labs/articles/AutoDesk-3ds-Max-2017-GeForce-GPU-Performance-816/

I wish someone (maybe Srek, who works for Maxon) could give us some more information and feedback about these interesting phenomena.


#8

To the best of my knowledge, any substitution of VRAM with RAM (swapping) comes with a huge speed penalty that makes it basically useless.

As for the odd scaling: while specific display tasks very often can’t be multithreaded efficiently, that doesn’t mean other tasks that contribute to display speed can’t be worked on at the same time. Depending on the software and the exact task at hand, more cores will speed up the display as well; it’s just that single-core speed is often the single biggest factor besides the GPU.


#9

It depends on how far over you go and on the nature of the scene. I’ve tested Octane with scenes that need 7 GB to render, and my 4 GB card chewed through them at a very impressive rate. Perhaps it’s slower than if the card had enough memory to hold everything, but it’s a long way from being useless.


#10

many thanks for all the info!


#11

Just a bit of extra info.
In Redshift, VRAM is not scaled down to the smallest GPU’s amount. Each GPU uses its own VRAM, so if you have a 12 GB Titan and some old 4 GB card, the Titan will still use all 12 GB and the other card will use its own 4 GB.


#12

this is interesting, I’ll have to re-think the build
thanks for this!