Asus Nvidia GTX 690 graphics card advice


HUCZOK
01-11-2013, 11:47 AM
Does anybody use the Asus Nvidia GTX 690 graphics card with Maya? If so, does it improve Maya's performance? I have this card but am not sure whether it is worth using: I am not a gamer but a CGI animator, and I would like to know whether it will benefit me.

Thanks

habernir
01-11-2013, 12:32 PM
Well, if you use 3ds Max, which is based on DirectX, you will hardly see a difference. But OpenGL applications are another story: in the OpenGL 3D world, GeForce performance is garbage compared to Quadro. So if you want really good performance, have the money, and use an OpenGL application like Maya, buy a Quadro.

derMarkus
01-11-2013, 01:10 PM
Hard to say if you don't post your current graphics card specs...
Performance-wise, the 690 should be good.


The Quadro/GeForce discussion is another matter, but I for myself decided never to buy a Quadro again. I use Maya all day long with CAD data and other stuff, currently on a GeForce GTX 660 Ti, with no problems at all apart from some UI glitches I mentioned in another thread, which I don't think are related to the graphics card anyway.

habernir
01-11-2013, 03:55 PM
Well, he asked whether he should use the GeForce 690 when even a Quadro 2000 will beat the GeForce 690 in performance. If he switches to a GeForce 690 he will not notice a big change, but with a Quadro he will; that's why I said it.

It all depends on the size of the scenes he works with.

darthviper107
01-11-2013, 04:00 PM
The Quadro 2000 is crap; it's way overpriced for what you get, with little memory and much lower speed.

I would think Maya probably can't take advantage of a GTX 690 because it is a dual-GPU card; 3ds Max definitely can't, at least for the viewport, which can only use one GPU.
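
If you want to check which device Maya's viewport is actually driving, a quick check from the Script Editor looks roughly like the sketch below. It assumes a Maya build new enough for the ogs command to expose a deviceInformation flag (newer than the 2013 releases discussed here), so treat that flag as an assumption rather than a given.

    # Rough sketch: ask the viewport renderer which device it is using.
    # Assumes a Maya version where cmds.ogs exposes the deviceInformation flag.
    import maya.cmds as cmds

    try:
        for line in cmds.ogs(deviceInformation=True):
            print(line)  # typically one line each for GPU name, driver, API and memory
    except TypeError:
        print("This Maya build does not expose ogs -deviceInformation.")

Only a single GPU ever shows up there, which is consistent with the point above about dual-GPU cards in the viewport.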

SheepFactory
01-11-2013, 04:30 PM
Thank you for this breaking Cg News topic.

habernir
01-11-2013, 04:38 PM
The Quadro 2000 is crap; it's way overpriced for what you get, with little memory and much lower speed.

I would think Maya probably can't take advantage of a GTX 690 because it is a dual-GPU card; 3ds Max definitely can't, at least for the viewport, which can only use one GPU.

Yes, you're right that the only problem the Quadro 2000 has is its memory, but slower in Maya????

Do you have a Quadro? Or did you look at the benchmarks? In OpenGL the Quadro 2000 is far, far ahead in Maya performance; check before you write wrong facts.

And yes, you're right that it's overpriced, but he said he won't play games, and the GeForce 690 costs more than the Quadro 2000 (you can check that). And yes, you're right that it has low memory (just 1 GB).

The only things where the 690 is faster than the Quadro are DirectX applications, games, and CUDA-based applications.

Don't believe me? Check it yourself.

AND WHY IS THIS IN THE NEWS TOPIC????

tswalk
01-11-2013, 07:45 PM
You'll get a lot of "general" opinions on this topic, but in reality there is a huge dependency on the application, and Maya is one that benefits greatly from Quadro hardware and driver optimizations.

r0b3030
01-18-2013, 05:59 AM
Hey guys, I'm scanning the Nvidia/card threads from the last 7 or 8 months to get a sense of what card to get in my new rig.

Trying to decide between a GeForce GTX 660 (with 2 GB) or the low-end Quadro 2000 (can't afford the higher-priced Quadros above that).

(I'm going to start my own thread, but you guys know a lot more about this than me, so in case you're still checking this thread or getting updates from it, I thought I'd put the question here as well.)

I used to work in 3D modeling and lighting (mostly assets for games, but also some medical illustrations, 2D art, etc.). These days my processing/GPU needs are different: I'm getting back into 3D, but not in large-volume production, just for my own art and experimenting with models for 3D printing. So my poly counts/geometry could get large but probably not "huge", and there are no big worlds or heavy production deadlines.

I used to use an old version of 3ds Max, so I'll likely be getting into Max, modo, ZBrush, possibly C4D (I liked using its 3D paint tools a bit in the past), and Photoshop for general painting, but no serious video editing and no gaming.

I'm going to have two PCI-E slots on the new ASUS motherboard and may add a second card in a year or two. (I'm not going to run SLI; I had it on my old system but doubt I ever took advantage of it. From what I understand it's best for gaming and multiple real-time viewports.)

On my budget, I've got an i5 with 8 GB of system RAM on Win7 Pro 64-bit.

Thanks for any advice/feedback, much appreciated.

Kinematics
05-09-2013, 07:13 PM
Hey guys,

I need simple, clean advice; everyone tells me different things. Basically, I want to get a second-hand card.

GTX 690 $1,100 5GB
GTX Titan $1,250 6GB
(in my currency, which is Singapore dollars, SGD)

I use Maya 2013 with V-Ray 2.3 and want to use V-Ray RT for quick shader tweaking. I also need a good graphics card to handle the polycount in the viewport.

One of our other machines has a Quadro K5000. People tell me it's all about the RAM and CUDA cores, so essentially a Quadro is a waste of money and I should just get a GTX 690 or even a GTX 680.

No intention to SLI anything, by the way, even in the near future.

So which should I buy? Some reports keep comparing the Titan to the GTX 670 and say they're on par, but that's supposedly because of shitty Titan drivers for now.

I'd like to just pick one and go for it. Please chime in. Thanks.

NoxLupi
05-11-2013, 01:30 PM
Just get a GeForce x60-x80 model, gen 4, 5 or 6. They are all crippled cards and they are slower than Quadros in all respects (OpenGL, OpenCL, CUDA, etc.) except for games, but they are decent enough. Quadros are way too expensive in my opinion, but they are faster at what they do! Way faster!

The x90 series are dual-GPU cards, and that second GPU is not utilized by any of the 3D packages that exist today.

Beware that the generation 6 cards (e.g. the 680) are more crippled than the gen 5 cards (e.g. the 580) when it comes to OpenCL.

Last but not least, if you just need fast viewport response and high compatibility, the 2nd gen (285) cards are wonderful and you can get them cheap; they are faster than any of the newer cards in OpenGL, especially in Maya (regular viewport, not VP 2.0).

ThE_JacO
05-11-2013, 02:12 PM
Hey guys,

I need simple, clean advice; everyone tells me different things. Basically, I want to get a second-hand card.

GTX 690 $1,100 5GB
GTX Titan $1,250 6GB
(in my currency, which is Singapore dollars, SGD)

I use Maya 2013 with V-Ray 2.3 and want to use V-Ray RT for quick shader tweaking. I also need a good graphics card to handle the polycount in the viewport.

One of our other machines has a Quadro K5000. People tell me it's all about the RAM and CUDA cores, so essentially a Quadro is a waste of money and I should just get a GTX 690 or even a GTX 680.

No intention to SLI anything, by the way, even in the near future.

So which should I buy? Some reports keep comparing the Titan to the GTX 670 and say they're on par, but that's supposedly because of shitty Titan drivers for now.

I'd like to just pick one and go for it. Please chime in. Thanks.
The 690, compared to the 680, is a waste of money to begin with; let's start with that.
The Titan is not a PoS card, although one could argue it's not quite worth that amount of money; it's a luxury product. That said, the Titan doesn't suffer from some of the crippling issues the 6xx had with DP, but again, that seldom matters much for what you do.

Between a 690 and a Titan: a Titan, without a shadow of a doubt. Personally, at this point, with the 7xx allegedly coming this quarter, I think you should really, REALLY consider waiting a few weeks; your Titan might drop a third of its price overnight this month or the next.

ThE_JacO
05-11-2013, 02:15 PM
Just get a GeForce x60-x80 model, gen 4, 5 or 6. They are all crippled cards and they are slower than Quadros in all respects (OpenGL, OpenCL, CUDA, etc.) except for games, but they are decent enough. Quadros are way too expensive in my opinion, but they are faster at what they do! Way faster!
Sorry, but could you please, please stop spreading these old myths?

Quadros are NOT way faster.
They really, really aren't. A Titan will blaze past any Quadro priced at up to one and a half times as much, in practically any regard, and the Kepler generation of Quadros is late to the game, overpriced, and has generally been received as underwhelming.

NoxLupi
05-11-2013, 07:01 PM
Sorry, but could you please, please stop spreading these old myths?

Quadros are NOT way faster.
They really, really aren't. A Titan will blaze past any Quadro priced at up to one and a half times as much, in practically any regard, and the Kepler generation of Quadros is late to the game, overpriced, and has generally been received as underwhelming.

I have worked with both cards, and still do from time to time. Yes, when people compare a 580/680 with a lower-end or old Quadro they are underwhelmed. I once did a comparison between a Quadro 5000 (work), my GTX 285 (home) and my current 570 (home). The difference was roughly this: GTX 285, 2.1 million polys at about 15-16 fps with simple shading; GTX 580, 0.4 million polys at 2-4 fps; Quadro 5000, 40 million polys at around 80 fps. I scaled the scene to match the capability of the cards instead of just relying on fps alone. I don't remember the exact numbers, but that was the general behaviour. The Quadros are faster, or rather the GeForce cards are slower, when handling double-sided lighting and pixel readback in OpenGL.

Edit: This was tested in the Maya viewport only, and not VP 2.0. Oh, and no, I do not support the pricing of Quadros!

Here is a rather stark test where they compare two cards side by side: a GeForce 670 (a card toward the high end) and a Quadro 600 (a card at the very low end of the Quadro range), and the Quadro still eats the GeForce in that particular task. http://www.youtube.com/watch?v=kl4yNCgD3iA
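
If anyone wants to run the same sort of crude legacy-viewport test on their own card, here is a minimal sketch for Maya's Script Editor; the poly count and frame count are arbitrary, and timing forced refreshes only approximates the HUD fps counter, so treat it as a rough probe rather than a proper benchmark.

    # Minimal sketch of a legacy-viewport speed test: build a dense mesh,
    # spin it, and time forced viewport refreshes.
    import time
    import maya.cmds as cmds

    cmds.file(new=True, force=True)
    # Roughly a million quads; duplicate the sphere to push the count higher.
    xform, _ = cmds.polySphere(subdivisionsAxis=1000, subdivisionsHeight=1000)

    frames = 100
    start = time.time()
    for i in range(frames):
        cmds.setAttr(xform + '.rotateY', i * 3.6)  # force the geometry to redraw
        cmds.refresh(force=True)
    elapsed = time.time() - start
    print("%d redraws in %.2fs -> about %.1f fps" % (frames, elapsed, frames / elapsed))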

ThE_JacO
05-12-2013, 12:37 AM
I have worked with both cards, and still do from time to time.
So have I, for years, including literally side by side with a monitor switcher, hopping between the two workstations (i.e. one with a 580 and one with a Quadro 4000).

Yes, when people compare a 580/680 with a lower-end or old Quadro they are underwhelmed. I once did a comparison between a Quadro 5000 (work), my GTX 285 (home) and my current 570 (home). The difference was roughly this: GTX 285, 2.1 million polys at about 15-16 fps with simple shading; GTX 580, 0.4 million polys at 2-4 fps; Quadro 5000, 40 million polys at around 80 fps. I scaled the scene to match the capability of the cards instead of just relying on fps alone. I don't remember the exact numbers, but that was the general behaviour. The Quadros are faster, or rather the GeForce cards are slower, when handling double-sided lighting and pixel readback in OpenGL.
See, that's one of the worst ways to test a video card you could possibly think of, not to mention rather dated.
That's just not how video cards or their drivers work, but anyway...

Here is a rather stark test where they compare two cards side by side: a GeForce 670 (a card toward the high end) and a Quadro 600 (a card at the very low end of the Quadro range), and the Quadro still eats the GeForce in that particular task. http://www.youtube.com/watch?v=kl4yNCgD3iA
You are aware that the 6xx is DP-crippled, and can therefore be made to perform horribly in some artificial tests, right? The 580 will absolutely BLAZE past a 690, for example, if you toss them both at a Fast Fourier Transform running in double precision.
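
For anyone curious what that looks like in practice, a rough probe along these lines will show the single- vs double-precision gap on whatever card you have. This is just a sketch assuming CuPy is installed, not a formal benchmark, and the claim about how wide the gap gets on DP-crippled cards is the one made above, not something the snippet proves by itself.

    # Rough sketch: compare single- vs double-precision FFT timing on the GPU.
    # Assumes CuPy is installed; sizes and iteration counts are arbitrary.
    import time
    import cupy as cp

    n = 1 << 22  # about 4 million samples
    for dtype in (cp.complex64, cp.complex128):
        data = cp.random.random(n).astype(dtype)
        cp.fft.fft(data)                       # warm-up / plan creation
        cp.cuda.Stream.null.synchronize()
        t0 = time.time()
        for _ in range(20):
            cp.fft.fft(data)
        cp.cuda.Stream.null.synchronize()      # wait for the GPU before stopping the clock
        print("%s: %.1f ms per FFT" % (dtype.__name__, (time.time() - t0) * 1000 / 20))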

The Titan doesn't have the crippling, which lets us hope the 7xx, based on the same silicon, won't either.

That's also why many people consider the 2xx and the 5xx the best gaming card gens for 3D.

Regardless, let me restate: no, Quadros aren't faster. They are exactly the same cards as the GTX line, recently at lower clocks, with their on-board ID changed by a resistor (see the resistor-hack thread I posted, where a 680 is turned into a K5000) to let the drivers throttle features, and occasionally (depending on the line-up) some cores lasered out or in.

The Titan will smoke a K5000 in day-to-day use with Maya, in my experience and that of others.

The generic statement "Quadros are faster" is so fundamentally flawed as a blanket statement that it's annoying beyond belief to see it constantly repeated by people barging in and out, when it has been disproven a ridiculous number of times at this point.

NoxLupi
05-12-2013, 09:46 AM
So have I, for years, including literally side by side with a monitor switcher, hopping between the two workstations (i.e. one with a 580 and one with a Quadro 4000).

See, that's one of the worst ways to test a video card you could possibly think of, not to mention rather dated.
That's just not how video cards or their drivers work, but anyway...

You are aware that the 6xx is DP-crippled, and can therefore be made to perform horribly in some artificial tests, right? The 580 will absolutely BLAZE past a 690, for example, if you toss them both at a Fast Fourier Transform running in double precision.

The Titan doesn't have the crippling, which lets us hope the 7xx, based on the same silicon, won't either.

That's also why many people consider the 2xx and the 5xx the best gaming card gens for 3D.

Regardless, let me restate: no, Quadros aren't faster. They are exactly the same cards as the GTX line, recently at lower clocks, with their on-board ID changed by a resistor (see the resistor-hack thread I posted, where a 680 is turned into a K5000) to let the drivers throttle features, and occasionally (depending on the line-up) some cores lasered out or in.

The Titan will smoke a K5000 in day-to-day use with Maya, in my experience and that of others.

The generic statement "Quadros are faster" is so fundamentally flawed as a blanket statement that it's annoying beyond belief to see it constantly repeated by people barging in and out, when it has been disproven a ridiculous number of times at this point.
This conversation is going nowhere, because it seems to me that you are basically saying the same things as me on many points.

I know that the GPUs are identical between GTX and Quadro. Only minor changes on the board determine the ID and which state the GPU will operate in, and I know that the cards are crippled (especially the 6xx series); didn't you read my posts?

The crippling is why they perform badly in Maya and many other applications; they seem to have problems with pixel readback and two-sided lighting, among other important things, and that should by all means mean that the GeForces run slower in Maya, thus making the Quadros faster!

You are right that in terms of spec the GTX cards are faster cards. And this is why Nvidia should be sued, because many people upgraded their GeForce cards to ones with higher specs across the board, only to find out that those capabilities were not available. And it didn't say so on the box!

I don't care how I tested the card; it wasn't an official test of anything. The test showed me that the GeForce cards are crippled to perform at only about 5-10 percent of their Quadro counterparts in professional applications.

As for the DIY GTX-to-Quadro mod: so far I have only heard of people losing their cards to it, and the ones who got it working for a little while never got it to perform like the Quadros (missing features).

ThE_JacO
05-12-2013, 10:57 AM
If you read further up, the toss-up was between a Titan and a 690.

The Titan doesn't suffer from the same crippling.
On top of that, the DP crippling has hardly any effect on the viewport, but that's beside the point.

The blanket statement "Quadros are faster" is what I take issue with.
It's not true in general terms, and it's not true in absolute terms, as the top end of the GTX series (the Titan, and most likely soon the 7xx, which will probably not differ much) is ahead of the K5000 even in some of the peskier artificial tests that saw the 6xx generation keel over.

Again, a blanket "GeForces test at 5% of Quadros" is too generic, and more often untrue than true.

vlad
05-12-2013, 02:35 PM
...
I don't care how I tested the card; it wasn't an official test of anything. The test showed me that the GeForce cards are crippled to perform at only about 5-10 percent of their Quadro counterparts in professional applications.
...

There's that blanket statement again :rolleyes: I don't work in Maya, but I bet it's another story in Viewport 2.0, since it's DX-based. Max, whose viewport is also DX-based, is another "professional application" that doesn't benefit (in the vast majority of situations) from Quadros.

NoxLupi
05-12-2013, 04:15 PM
There's that blanket statement again :rolleyes: I don't work in Maya, but I bet it's another story in Viewport 2.0, since it's DX-based. Max, whose viewport is also DX-based, is another "professional application" that doesn't benefit (in the vast majority of situations) from Quadros.
Why did you pull that part out of context? Read and follow my statements! I explained that it has to do with features called by the applications, regardless of DX or OpenGL. Professional applications such as Maya, Max, XSI, CAD packages, etc. call functions which are not used in games, such as overlays, pixel readback, DP rendering, and two-sided lighting; this is where Nvidia crippled the cards.

VP 2.0 is for visualizing; it is way too flawed to work with! I do hope they find a way to make it work like the normal viewport, even make it the default, but so far I consider it bells and whistles!

Now show me a test or benchmark where a GTX is faster than a Quadro in Maya or Max, and point me to the source of your "the Titan is not crippled" statement.

I have been working in the industry for about 25 years; guys, I am not trying to fill your heads with BS. Please prove me wrong! Tell me how I can get my GTX 570 to pull models with over 20 million polygons at, at least, 30-40 fps, which I can do with a Quadro 5000.

NicholasG
05-12-2013, 08:45 PM
Why did you pull that part out of context? Read and follow my statements! I explained that it has to do with features called by the applications, regardless of DX or OpenGL. Professional applications such as Maya, Max, XSI, CAD packages, etc. call functions which are not used in games, such as overlays, pixel readback, DP rendering, and two-sided lighting; this is where Nvidia crippled the cards.

VP 2.0 is for visualizing; it is way too flawed to work with! I do hope they find a way to make it work like the normal viewport, even make it the default, but so far I consider it bells and whistles!

Now show me a test or benchmark where a GTX is faster than a Quadro in Maya or Max, and point me to the source of your "the Titan is not crippled" statement.

I have been working in the industry for about 25 years; guys, I am not trying to fill your heads with BS. Please prove me wrong! Tell me how I can get my GTX 570 to pull models with over 20 million polygons at, at least, 30-40 fps, which I can do with a Quadro 5000.

http://content.screencast.com/users/m0bus/folders/Jing/media/b2a9241b-e777-41cc-991b-6f12aaeb504d/sven_rig.png


http://screencast.com/t/mX8lLyujxCW

NoxLupi
05-12-2013, 08:56 PM
http://content.screencast.com/users/m0bus/folders/Jing/media/b2a9241b-e777-41cc-991b-6f12aaeb504d/sven_rig.png


http://screencast.com/t/mX8lLyujxCW
Please explain what I am looking at.

I see a rather low-poly-looking playback in VP 2.0 using a Titan. What is that proving?

ThE_JacO
05-12-2013, 10:58 PM
Why did you pull that part out of context? Read and follow my statements! I explained that it has to do with features called by the applications, regardless of DX or OpenGL. Professional applications such as Maya, Max, XSI, CAD packages, etc. call functions which are not used in games, such as overlays, pixel readback, DP rendering, and two-sided lighting; this is where Nvidia crippled the cards.
I don't think you are peddling BS, and I think you've held your composure fairly well; I'm not finding you impolite.
But the bases of your arguments are fairly out of date.

Overlays are gone; applications don't composite like that anymore on Windows OR Linux, so they are irrelevant.
DP rendering doesn't exist. DP types, on the other hand, do, but they are irrelevant to drawing the viewports; double precision simply isn't required there. And again, the Titan doesn't have the issue, nor do the 5xx or earlier cards; literally only the 6xx do. But that is, again, beside the point, as you have to do some fairly specific things before the double-precision end of things buckles on the 4 out of the last 50-odd cards that have GTX and Quadro counterparts.

Pixel readback: sorry, but you might need to clarify here.
Pixel readback is technique-dependent; games have absolutely no problem doing it to extreme levels, even in buffered multipass, on GTX cards. Are you referring to PBO handling in OpenGL? Because PBOs are only of benefit for asynchronous readback and for preventing thread locking, which isn't exactly common in DCC apps AFAIK, and even where it is used, and PBO handling -is- strongly driver-dependent, I can't say I've seen the 6xx have any issues with it.

You might be thinking of old FBO handling issues, but those have long been gone.

Double-sided lighting I honestly haven't looked into for a long while, nor monitored, nor needed, so I'll let that one go.

Most of what you mention has been irrelevant for three to six years.

Have you written much for OGL or CUDA?

steven168
07-15-2013, 06:58 AM
Saw this thread:

GTX 690: first Geforce certified for Maya 2014?
http://forums.cgsociety.org/showthread.php?f=23&t=1105221

shokan
02-17-2014, 08:51 AM
Does ANY 3D software that you've heard of use both GPUs of the GTX 690? I've looked around and it seems the answer is 'no'.

imashination
02-17-2014, 02:04 PM
The only use for multiple GPUs is GPU-based renderers.

shokan
02-17-2014, 05:54 PM
The only use for multiple GPUs is GPU-based renderers.
I thought it was the same deal with the GPU renderers such as V-Ray RT, Octane, FurryBall, Thea... that none of them can use both GPUs in a GTX 690, but that they do use multiple cards.

imashination
02-18-2014, 08:05 AM
This is news to me; I don't see why they wouldn't be able to access the second chip. Unless the 690 is hardwired to be SLI rather than being seen as two cards, and the render engine can't work across an SLI'd card?

shokan
02-18-2014, 08:50 AM
This is news to me; I don't see why they wouldn't be able to access the second chip. Unless the 690 is hardwired to be SLI rather than being seen as two cards, and the render engine can't work across an SLI'd card?

FurryBall 4.6, for example, lists the GTX 690 and puts "2 cores" after it in parentheses. I guess they mean 2 chips; I don't know.
http://www.aaa-studio.cz/furrybench/benchResults4.php

I have to ask around more about the other ones, particularly V-Ray RT. I was under the impression, by the way, that the 690 arrangement is SLI.

ThE_JacO
02-18-2014, 11:34 PM
This is news to me; I don't see why they wouldn't be able to access the second chip. Unless the 690 is hardwired to be SLI rather than being seen as two cards, and the render engine can't work across an SLI'd card?
AFAIK it's that.
The x90s are internal SLI on the same PCB, which is slightly different from cross-slot SLI. The benefit is that they require less bandwidth for nearly identical SLI output to two separate cards (and less power, translating into less heat); the downside is that it's not the exact equivalent of two separate cards in some cases.

I can't comment on rendering engines dealing with it though, since not having an x90 means I never had to worry about SLI vs splits.

shokan
02-19-2014, 12:15 AM
AFAIK it's that.
The x90s are internal SLI on the same PCB, which is slightly different from cross-slot SLI. The benefit is that they require less bandwidth for nearly identical SLI output to two separate cards (and less power, translating into less heat); the downside is that it's not the exact equivalent of two separate cards in some cases.

I can't comment on rendering engines dealing with it though, since not having an x90 means I never had to worry about SLI vs splits.
I think the reason I can't find much information about using the GTX 690 with 3D software and render engines is that practically no one would buy one (at $1000-plus) for those applications, because only one chip of the two gets used. No one except me. I usually research the heck out of stuff before buying. Not this time, lol. But it seems one or more of the GPU-accelerated render engines may, indeed, use both. Big maybe. In any case, a Titan or the upcoming Titan Black is on my list to add to the machine.

ThE_JacO
02-19-2014, 12:28 AM
I think the reason I can't find much information about using the GTX 690 with 3D software and render engines is that practically no one would buy one (at $1000-plus) for those applications, because only one chip of the two gets used. No one except me. I usually research the heck out of stuff before buying. Not this time, lol. But it seems one or more of the GPU-accelerated render engines may, indeed, use both. Big maybe. In any case, a Titan or the upcoming Titan Black is on my list to add to the machine.
The Titan is kind of bad value for money compared to a 780, especially for games, BUT I got one for two reasons. First, I like playing with CUDA, and the unlocked DP computation matters to me; that's not a big deal for most apps, though, since DP is either not widely used or used sparingly so as not to see GTX cards crippled. More important, though: 6 GB of fast RAM and an excellent power-draw to compute-power ratio.

In light of that, for GPU rendering it's not a bad deal.

For gaming the 690 is regarded by many as a better choice, and as a much better choice if you care about immersive triple-monitor setups with a reduced bandwidth impact on the bridges. For professional CG, though, a lot of that added value disappears instantly IMO, which often places the 690 below a 680.

All that said, if you find on-board memory not to be a bottleneck, then a 780 Ti is better bang for the buck than a Titan, and will be slightly faster as well whenever DP isn't involved, which for most people is the staggering majority of the time.

shokan
02-19-2014, 12:40 AM
The Titan is kind of bad value for money compared to a 780, especially for games, BUT I got one for two reasons. First, I like playing with CUDA, and the unlocked DP computation matters to me; that's not a big deal for most apps, though, since DP is either not widely used or used sparingly so as not to see GTX cards crippled. More important, though: 6 GB of fast RAM and an excellent power-draw to compute-power ratio.

In light of that, for GPU rendering it's not a bad deal.

For gaming the 690 is regarded by many as a better choice, and as a much better choice if you care about immersive triple-monitor setups with a reduced bandwidth impact on the bridges. For professional CG, though, a lot of that added value disappears instantly IMO, which often places the 690 below a 680.

All that said, if you find on-board memory not to be a bottleneck, then a 780 Ti is better bang for the buck than a Titan, and will be slightly faster as well whenever DP isn't involved, which for most people is the staggering majority of the time.
The Titan is the card recommended by FurryBall and Octane, the GPU renderers, because of the 6 GB of VRAM. I imagine it's much the same for the other GPU renderers. Man, with the tax I'm looking at another $1200. So be it. Thanks for your comments and suggestions.

ThE_JacO
02-19-2014, 01:16 AM
RAM, or the lack thereof, on a video card has been one of the longest-standing issues facing GPU rendering, so yeah, the Titan being a 6 GB card yet sitting between GTX and Quadro budgets makes it good value if you've suffered from that bottleneck but don't want to fork out the preposterous amount of money for the high-memory Quadros. Not surprising they recommend one. The Redshift guys were also pretty enthusiastic the last time I read them write about it.

It depends on what you render as well, though. If you do a lot of normal-res, textureless pack shots then you might very well never bump into the issue. If you do environment work then it'll be a godsend.

All in all I don't feel mine was wasted money; it's just situational value-wise. Only a niche, which you might very well be part of, really gets its money's worth from one.

furryball
03-05-2014, 08:35 PM
Yes, FurryBall uses ALL CUDA cores in the system.
On our benchmark page we also have 4 Titans:
http://www.aaa-studio.cz/furrybench/benchResults4.php

BTW, "2 cores" on the 590, 690 and 790 cards means one card (but in fact there are two cards inside).

Next question: no, FurryBall doesn't need any SLI; it just uses all the CUDA cores in the system.
For example, this setup has 4x 590s (8 cores) :argh: :argh: :argh:
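
If you want to see what your own system exposes, the sketch below simply shells out to nvidia-smi (which ships with the Nvidia driver); a dual-GPU card like the 690 should show up as two entries, each with its own memory pool. The exact output format varies by driver version.

    # Sketch: list every GPU the Nvidia driver exposes, with its own memory pool.
    # A GTX 690 appears here as two separate entries, matching the "2 cores" wording.
    import subprocess

    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,name,memory.total",
        "--format=csv,noheader",
    ])
    for line in out.decode().strip().splitlines():
        print(line)  # e.g. "0, GeForce GTX 690, 2048 MiB" and "1, GeForce GTX 690, 2048 MiB"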

shokan
03-05-2014, 11:42 PM
Yes, FurryBall uses ALL CUDA cores in the system.
On our benchmark page we also have 4 Titans:
http://furryball.aaa-studio.eu/abou...benchmarks.html

BTW, "2 cores" on the 590, 690 and 790 cards means one card (but in fact there are two cards inside).

Next question: no, FurryBall doesn't need any SLI; it just uses all the CUDA cores in the system.
For example, this setup has 4x 590s (8 cores) :argh: :argh: :argh:
http://www.aaa-studio.cz/furrybench/benchResults4.php?setupID=2388,2226&fbhwid=&showCompact=0&hideHTML=&orderByScene=1&orderByTime=best_time&orderByID=0&version=4.6#2388
I have the GTX 690, which I plan to use for two displays. The GTX Titan will only be used for the GPU-accelerated renderer.

In the case of FurryBall (which I'm considering), does my setup make sense? I assume there's a dialog to choose which card gets used for display and which for FurryBall. Yes?

Thanks for your info.

furryball
03-06-2014, 05:58 AM
I have the GTX 690, which I plan to use for two displays. The GTX Titan will only be used for the GPU-accelerated renderer.
In the case of FurryBall (which I'm considering), does my setup make sense? I assume there's a dialog to choose which card gets used for display and which for FurryBall. Yes?
Thanks for your info.

In FurryBall you can select which card is used for CUDA rendering and which for the rasterized render (DX-based rendering).

For CUDA it is very IMPORTANT to combine the SAME cards, because if you use a TITAN and a 680, for example, it will work, but the Titan will wait for the slower card and you will have ONLY 3 GB of memory (the maximum of the lowest card; GPU memory is NEVER summed across GPUs!!!)...

But a clever combination is to use the first core of the 690 as the system GPU, the second core of the 690 for rasterized rendering, and the Titan for CUDA.
If you use a different card for the rasterized render than for the system GPU, it will NOT slow down the viewport while FurryBall renders. You will hardly notice that the system is rendering.
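
In other words, the working VRAM for a mixed CUDA pool is that of the smallest card in it, never the sum. A tiny sketch of that rule, with the card names and sizes as examples only:

    # Sketch of the rule above: usable VRAM across a CUDA render pool is the
    # minimum of the selected devices, never the sum. Names/sizes are examples only.
    selected = {"GTX Titan": 6144, "GTX 680": 3072}  # MiB per device
    usable = min(selected.values())
    print("Usable memory per device in this pool: %d MiB (not %d MiB)"
          % (usable, sum(selected.values())))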