View Full Version : AMD or NVIDIA for GPU Rendering

10-07-2011, 12:21 PM

Finally I made it to CGSociety! Great to be here ;)

I recently rebuilt my PC system: in short, an i7 processor, two CrossFired Radeon HD 6870s, and all the surrounding components. I still don't have the money to buy some BOXX system.. ;) My main field of work is archviz.

I want to focus on GPU rendering, since it's way faster than CPU rendering. My system is quite good enough for all my modeling.

My question now is: which GPU renderers are good, or will be in the near future?

Octane is in beta but already produces impressive results, and the price is still low because of the beta phase. I was leaning towards that renderer, but then found out that it doesn't support AMD cards..!
Does exporting files to Octane work well? And if you change something in the model, are there any workarounds to sync/update the model in Octane?

I still have two days left to decide whether to send back my two CrossFired HD 6870s and get, e.g., an NVIDIA GTX 580/570. Right now the second HD card is also blocking four of my SATA ports, so buying it was a bit of bad advice from some technician..

On the other hand, I was thinking of switching to 3ds Max and V-Ray 2.0. Would there be no performance loss if I used a GTX 580 there, compared to the HD 6870?

Any other GPU renderers worth mentioning? Is focusing on GPU rendering the right way to go at all?

I know these questions aren't easy to answer, so I'll break it down to these:

Should I send back my two HD 6870s and get a GTX 580?
If so, should I buy a GTX 580, or is a GTX 570 enough (about 100 difference), with the maximum VRAM I can get?
Or any other advice?

Thanks a lot in advance :)


10-07-2011, 04:38 PM
OK, I just talked to my vendor; I'm going to change several parts, including the graphics cards.
I'm buying just one GTX 570 OC for now, with room to upgrade later.

Octane, for example, can use just one of two installed cards if you want, so you can keep modeling while Octane renders on the other card.

Anyway, if someone has an answer regarding the other questions (which GPU renderers are out there),
I would appreciate any information (e.g. links).


10-08-2011, 04:30 AM
Octane has been in beta for an incredibly long time. If you do focus on GPU rendering, it seems like NVIDIA cards and their CUDA cores are the way to go. But if you do, you need as much memory on the card as you can afford if you plan to do large scenes. My experience with Octane, and with unbiased rendering in general, is that it only seems faster because you see an image immediately; getting a good image takes almost as much time as a CPU render, because so many passes are needed for an acceptable result. There is an inherent grain to a GPU-rendered image that takes a very long time to clean up.

Also, while you can tweak an image while it's rendering in Octane, when you do, the image starts rendering from scratch again. I think GPU rendering will be good someday, as the core counts increase and the software improves, but day-to-day production with it seems a few years away. Just my opinion.
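The grain-versus-passes tradeoff described above is just Monte Carlo behavior: per-pixel noise falls off roughly as one over the square root of the sample count, so halving the visible grain costs about four times the samples. A minimal sketch of that convergence, in plain Python with a made-up noise model (nothing Octane-specific; the "true" pixel value and noise range are illustrative assumptions):

```python
import random
import statistics

def pixel_estimate(n_samples, rng):
    """One progressive pass: the mean of n noisy path samples for a pixel.

    The true value (0.5) and uniform noise range are made-up stand-ins
    for real light-transport estimates, just to show the convergence rate.
    """
    true_value = 0.5
    return statistics.fmean(true_value + rng.uniform(-0.3, 0.3)
                            for _ in range(n_samples))

def noise_level(n_samples, trials=2000, seed=42):
    """Std-dev of the pixel estimate across many trials = visible grain."""
    rng = random.Random(seed)
    return statistics.stdev(pixel_estimate(n_samples, rng)
                            for _ in range(trials))

# 4x the samples should roughly halve the grain (1/sqrt(N) convergence),
# which is why the last bit of cleanup takes so disproportionately long.
g16 = noise_level(16)
g64 = noise_level(64)
```

This is why an unbiased render looks "almost done" early on and then crawls: each extra halving of the grain quadruples the work.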

10-08-2011, 09:08 AM

Thanks for sharing your opinion.

I read a few articles and watched some videos from 2009 and 2010; all the users were very impressed. But you're right, it's like an extended light-cache rendering that takes its time refining the image until it's clean.

The Octane renderer is between 10x and 50x faster than CPU rendering, at least that's what they say on their website. In the Octane forums, lots of people say they'll stick with it because of its power. Those guys mostly have MONSTER machines, for sure ;)

Anyway, I think I'll switch to GTX cards, try out Octane, and we'll see.

I still don't know what's meant by "if you want to render large models"..

Is a large model a character with hundreds of millions of polys, or a huge landscape with un-instanced 3D plants? ;)

Will it handle even large-scale architecture (let's say an opera house or a museum) with some parametric environment like Vue? Any references to compare against?

For good renderings you'll certainly have to use high-quality textures. Will those eat into the VRAM as well?

What happens if my model exceeds my graphics card's VRAM limit? Will it still render, or won't it start at all?

Regards, YPF, Niko

10-08-2011, 09:21 AM
Honestly, you don't want to render archviz on the GPU. You will run out of GPU memory faster than you anticipate. Get V-Ray and be done with it. The GPU is not ready for serious work.

10-08-2011, 09:34 AM
Okay.. too bad ("Mist", as we say in German).

I'm going to check it out anyway- but not expecting too much.

Does it still make sense to buy the card with 2.5 GB of VRAM instead of 1.2 GB?

It's really hard to pick the right components, because all the reviews focus on gaming, and a conclusion like "no game made use of the extra memory" doesn't really help..

10-08-2011, 09:46 AM
For archviz, I'd get the fastest CPU possible and _at least_ 16 GB of RAM.
With typical archviz scenes (many objects, many splines, high poly counts, proxy scattering), the graphics card is not that important for viewport speed; the CPU matters more.
The second graphics card won't do anything except when GPU rendering, which may only be useful for very simple scenes.
To speed up rendering, a bunch of low-spec (but fast-CPU) render nodes is the best way, although cloud rendering is a big topic right now. I suspect cloud rendering will be mainstream long before GPU rendering really takes off.

10-08-2011, 10:53 AM
Thanks for all this information,

I'm quite satisfied with my i7 right now. I thought 8 GB of RAM wasn't too bad, but maybe I'd better add some.

I was certainly looking at render farms for additional resources, but this is the first time I've heard of "cloud rendering". I'm always a little behind on what's current. I just sometimes add my laptop as a render node.. ;)

I was fine with my old system, which was far weaker than the one I use now, so everything should be all right.

Thanks again. I think I'll be back here soon; this seems like a nice place :)


10-08-2011, 11:00 AM
I have an EVGA GTX 460 (EVGA is my choice in graphics cards because of their quality and their included Precision utility, which controls overclocking and fan speed) with 336 CUDA cores for a very reasonable $150. I have the Octane demo, but my card has only 768 MB on board, and all geometry and textures have to be loaded into that memory. So if you want to render large models with high-quality textures, you need as much on-card memory as possible. 2.5 GB is about today's limit, and those cards are pricey. But since the Octane demo is free, you can render the same scene at the same resolution with your CPU-based renderer and compare speeds, and know for sure. You definitely don't need an NVIDIA Quadro-series card for Octane.
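On the memory point: a quick back-of-envelope estimate tells you whether a scene has any chance of fitting, since the geometry and all (uncompressed) textures must sit in VRAM at once. A rough sketch; the per-triangle byte count and the RGBA texel size are illustrative assumptions, not Octane's actual memory layout:

```python
def scene_vram_mb(n_triangles, textures, bytes_per_triangle=100,
                  bytes_per_texel=4):
    """Very rough VRAM footprint estimate in MB.

    bytes_per_triangle lumps vertices, normals, UVs and acceleration-structure
    nodes together; textures is a list of (width, height) pixel sizes, assumed
    uncompressed RGBA. Both constants are illustrative guesses, not any
    renderer's real layout.
    """
    geo = n_triangles * bytes_per_triangle
    tex = sum(w * h * bytes_per_texel for w, h in textures)
    return (geo + tex) / (1024 * 1024)

# E.g. a 2M-triangle archviz scene with ten 2048x2048 textures
# comes out around 350 MB on these assumptions:
estimate = scene_vram_mb(2_000_000, [(2048, 2048)] * 10)
fits_768mb_card = estimate < 768
```

Even with generous assumptions, a few high-resolution texture sets add up fast, which is why the 2.5 GB cards matter for GPU rendering while making no difference in game reviews.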

Here is a very interesting CPU/GPU talk by Brad Peebler, CEO of Luxology, the company that makes Modo.

10-08-2011, 12:30 PM
It's a parallel world.. Good conclusion.

Very interesting video, even if it leans a bit towards the GPU side, which is understandable.
Some true points in there for sure. (And man, I always get so jealous when I see those network renders with 30+ cores doing the work..!)

Another big problem is that Octane doesn't support SSS, displacement (or does it already?), and so on.

One last question mark: if a scene doesn't "fit" into the graphics card's memory, will it not render at all??

Anyway, I won't be able to afford more than one GTX 570 with 2.5 GB, and it's only about 30 bucks more than the 1.2 GB version, so that's probably going to be the one.

10-08-2011, 02:35 PM
I think the 570 is a good choice. My experience with Octane is that if the scene is too big, it won't load onto the card at all; as far as I can tell, there's no paging the overflow out to disk.

Even if GPU rendering doesn't turn out to be what you need, a good NVIDIA card is always a good investment.

Good luck!

10-08-2011, 04:52 PM
I think so, too. Thanks ;)

See you here, next time!

10-18-2011, 08:10 AM
I'm sure we'll see benefits from GPU rendering at some point, and perhaps there are large studios using it. But every iteration I've tested and seen had severe limitations; if not RAM itself, then shaders, flexibility, or no net gain in render speed/quality.

I also opted for the GTX 460 for my archviz setup. Octane isn't useful for my daily deadlines; neither are iray or V-Ray RT, for that matter, yet. All are decent, but in my experience none of them are viable on a time crunch yet.

Basically, until they're real-time every time, they won't even matter to me. And even then, there will still be heavy math and shaders that mainstream renderers support and GPUs won't. It's still a long way off.

CGTalk Moderation
10-18-2011, 08:10 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.