Render with graphics card?


#1

Greetings all,
This is maybe a dumb question, but I was wondering:
Is it possible to use the high-performance graphics card in my computer to get faster renders?
As far as I know, renderers use the processor to get the job done. Is it possible to use the graphics card instead?
Regards


#2

It depends on whether you are using one of the new renderers that make use of the GPU, like FurryBall or V-Ray RT, or even Maxwell or Octane. Mental ray lacks this right now.


#3

That’s great news! Especially since I only have a 4-core i5 laptop that renders painfully slowly.
Another question: what about RenderMan? Does it support the GPU, or will it soon?


#4

Maxwell Render does not use the GPU. It’s a 100% pure CPU rendering engine.


#5

For GPU rendering, I and many others would suggest Redshift; it truly is fast. But do note that it currently only supports NVIDIA cards, no AMD cards as of yet.

For many companies GPU rendering is the future, and for many even the present.


#6

Hi all, we just released the new FurryBall RT version with a brand new benchmark, so you can now test your system very easily and compare with others.
http://furryball.aaa-studio.eu/support/benchmark.html


#7

As far as I know, mental ray does not support the GPU. I do know V-Ray RT and iRay both support GPU rendering, but if you stay with mental ray, spending lots of money on a video card is a bit of a waste. It’s always great to have more CPU power.


#8

Have you tried Redshift with a decent graphics card? It’s quite fast. Some people say they got rendering speeds around 10-15 times faster with it compared to their previous renderer (Arnold, in one example I remember), but to be fair they probably had a decent GPU.

Personally, in my tests, Redshift has been between 2 and 5 times as fast as V-Ray. I have a 3-year-old dual Xeon and a GTX 970.


#9

To be honest I’ve never used Redshift or even heard of it until now. I just checked out their website, and you can produce some amazing renders with it! I guess it depends on how good the application of materials and textures is. I’ll bear this one in mind next time :slight_smile:


#10

It is really cool. During work I use V-Ray, but after work it’s Redshift time! I believe small companies can already use it in production easily (and to be honest, they have been for a while), and in a year or two, when better graphics cards with more RAM come out, it will be even easier for more companies to go GPU.

Another thing is that you can put up to 4 cards in one machine, which means you need 1/4 the number of computers. That uses less power, and of course means you only need 1/4 of the licenses for all the different software you run (OS, 3D app, antivirus, etc.), so you save a bit of money there, as the rough sketch below shows.
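
To put rough numbers on the licensing point, here is a minimal Python sketch; the node counts and the per-machine license cost are invented examples, not real prices.

```python
# Hypothetical comparison: 4 GPUs per box means 1/4 the machines,
# and therefore 1/4 of the per-machine licenses (OS, 3D app,
# antivirus, ...). All prices are made up for illustration.

def total_license_cost(machines, cost_per_machine):
    """Combined per-machine license cost across the farm."""
    return machines * cost_per_machine

cpu_nodes = 16
gpu_nodes = cpu_nodes // 4      # one 4-GPU box replaces four CPU boxes
per_machine = 1500              # hypothetical combined license cost

print("CPU farm licenses:", total_license_cost(cpu_nodes, per_machine))
print("GPU farm licenses:", total_license_cost(gpu_nodes, per_machine))
```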

You should check out this movie, made entirely by Alf Lovvold, with a single machine used for rendering:

https://vimeo.com/129346968


#11

Amazing work, man! V-Ray or Redshift? Can you share your machine specs / render times?


#12

I’m not Alf Lovvold unfortunately! :slight_smile:

He used Redshift, and quickly reading through his WIP thread over at the Redshift forums, he mentions early on using 4x GTX 780 Ti cards. Not sure if he kept them throughout the whole project (I’m not reading through the 17 pages again just to find out, sorry :P).

May I suggest making an account and going over there to read up! Lots of cool stuff there. If you do, the URL to his thread is below:
https://www.redshift3d.com/forums/viewthread/3192/P15


#13

In short, as others have pointed out, yes, it is possible to render using GPUs, and it seems to be the direction the industry is taking.

Why use GPUs?

Well, for starters, they are much faster and more efficient at handling parallel workloads, and they can scale almost linearly. In plain English, they are much better at calculating light paths and bounces. Scaling means that 2 GPUs will render (almost) twice as fast as 1 GPU; see the sketch below.
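
As a rough illustration of that near-linear scaling, here is a minimal Python sketch, assuming a fixed per-frame overhead (scene upload and so on) that does not parallelize. All numbers are invented for illustration, not benchmarks of any real renderer.

```python
# Toy model of near-linear GPU scaling: the parallel part of a frame
# divides across GPUs, a small fixed overhead does not.
# All numbers below are hypothetical.

def render_time(parallel_minutes, overhead_minutes, num_gpus):
    """Estimated minutes per frame with num_gpus cards."""
    return overhead_minutes + parallel_minutes / num_gpus

for gpus in (1, 2, 4):
    t = render_time(parallel_minutes=20.0, overhead_minutes=1.0, num_gpus=gpus)
    print(f"{gpus} GPU(s): ~{t:.1f} min/frame")
```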

As far as computer builds go, every other generation or so of CPUs changes the socket the motherboard needs, so upgrading your CPU would most likely mean upgrading the motherboard as well.

GPUs use PCIe x16, a standard slot on virtually all motherboards, and can be easily upgraded. Most motherboards have more than one PCIe slot; mine, for example, has 4, allowing for a potential 4 GPUs.

Then it boils down to the rendering engine.

They usually come in one of two flavours: biased and unbiased. I myself made the transition from mental ray (biased) to iRay (unbiased) years ago and never went back.

Biased vs Unbiased

This simply refers to how the engine calculates light. iRay is a physically correct engine and will keep calculating light bounces indefinitely. This produces instant yet grainy results that gradually improve over time (see the sketch below). Light behaves as it does in the real world: lights produce shadows and reflections, and intensities are on par with the real wattage ratings of light bulbs. This makes it very friendly, as little to no setup is required to get great results.

The downside is that some power users want to tweak lights, manage alpha channels (for material cutouts), and more. Since everything behaves as it would in reality, there are some restrictions; for example, all objects will cast shadows. Unbiased engines and GPUs make a very good combination, and most people use them together. iRay will only officially work with NVIDIA graphics cards.
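
To see why an unbiased, progressive render starts grainy and cleans up, here is a toy Monte Carlo sketch in Python: each sample is a noisy light-path estimate, and the average converges as samples accumulate. This is a generic illustration, not code from iRay or any real renderer.

```python
# Noise in a progressive render falls off roughly as 1/sqrt(samples):
# each pass adds noisy estimates, and their average converges.
import random

def estimate_pixel(true_value, samples):
    """Average `samples` noisy light-path estimates of one pixel."""
    total = 0.0
    for _ in range(samples):
        total += true_value + random.gauss(0.0, 0.5)  # one noisy path
    return total / samples

for n in (1, 16, 256, 4096):
    print(f"{n:5d} samples -> estimate {estimate_pixel(1.0, n):.3f}")
```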

On the other hand, we have biased engines like V-Ray, which produce amazing results. V-Ray takes a similar approach to iRay, but it cuts rendering time by making (very good) guesses about where light will bounce: sometimes limiting light bounces to where the camera is pointing, and averaging light directions instead of calculating the entire path (a toy version of the bounce-limiting idea is sketched below). Biased engines tend to be CPU-based; the release of V-Ray RT is a step in the biased/GPU direction.
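
As a toy version of one such shortcut, the sketch below caps the number of bounces instead of tracing light indefinitely; the trade is a small systematic bias for a big cut in work. This is a generic illustration, not V-Ray’s actual algorithm, and the energy numbers are invented.

```python
# Biased shortcut in miniature: stop after a fixed bounce limit.
# Each bounce contributes less energy, so truncating loses only a
# little light while skipping most of the work. Numbers are made up.
import random

def traced_light(depth_limit, survival=0.7):
    """Sum toy light contributions up to depth_limit bounces."""
    energy, total = 1.0, 0.0
    for _ in range(depth_limit):
        total += energy * random.uniform(0.0, 0.2)  # this bounce's light
        energy *= survival                          # light lost per bounce
    return total

print("3-bounce estimate :", round(traced_light(3), 3))
print("16-bounce estimate:", round(traced_light(16), 3))
```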

That said, there are exceptions like the Corona engine, which is CPU-only yet delivers impressive results with short render times.

RAM

Another important aspect is scene size. A GPU must be able to fit the entire scene inside its own RAM. Nowadays the standard seems to be pushing 6 GB of VRAM, which for most is plentiful; if the scene does not fit, there is a performance penalty. CPU-based rendering, on the other hand, can use your computer’s system RAM, which is inexpensive and easy to upgrade, unlike GPU RAM, which is soldered to the card. A rough fit check is sketched below.
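
A back-of-the-envelope version of that fit check might look like the Python below; the component sizes are hypothetical examples, and real engines add their own overheads.

```python
# Rough check of whether a scene might fit in a card's VRAM.
# All sizes are invented examples, not measurements.

def scene_fits(geometry_gb, textures_gb, buffers_gb, vram_gb):
    """Return (fits, footprint) for a rough scene memory estimate."""
    footprint = geometry_gb + textures_gb + buffers_gb
    return footprint <= vram_gb, footprint

fits, used = scene_fits(geometry_gb=1.5, textures_gb=3.0,
                        buffers_gb=0.5, vram_gb=6.0)
print(f"~{used:.1f} GB needed, fits in 6 GB: {fits}")
```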

Hope this helps!

-Vii out


#14

That’s not true for Redshift. It works pretty well with its out-of-core architecture: it gets slower, yes, but you can do a lot with 6 GB of RAM.
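
For what it’s worth, here is the out-of-core idea in miniature: stream data through a fixed memory budget in chunks instead of requiring it all to be resident at once. This is a generic sketch of the concept, not Redshift’s implementation.

```python
# Out-of-core in miniature: work through data in chunks that fit a
# small "VRAM" budget, instead of loading everything at once.

def process_out_of_core(items, budget):
    """Yield results chunk by chunk; only `budget` items are resident."""
    for start in range(0, len(items), budget):
        chunk = items[start:start + budget]  # the part that fits "in VRAM"
        yield sum(chunk)                     # stand-in for render work

data = list(range(100))                      # "scene" bigger than budget
print(sum(process_out_of_core(data, budget=8)))
```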