Asus Nvidia GTX 690 graphics card advice

  05 May 2013
Originally Posted by ThE_JacO: So have I, for years, including literally side by side with a monitor switcher, hopping between the two workstations (i.e. one with a 580 and one with a 4k).


See, that's one of the worst ways to test a video card you could possibly think of, not to mention rather dated.
That's just not how video cards or their drivers work, but anyway...


You are aware that the 6xx series is DP-crippled, and can therefore be made to perform artificially horribly in some tests, right? The 580 will absolutely BLAZE past a 690, for example, if you toss them at a double-precision Fast Fourier Transform.
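To make that concrete, here's a minimal sketch (my own illustration, not anything benchmarked in this thread) of the kind of double-precision workload where a DP-crippled part falls over: timing a batch of double-precision (Z2Z) FFTs through cuFFT. The transform size and iteration count are arbitrary.

[code]
/* Sketch: time repeated double-precision (Z2Z) FFTs with cuFFT.
   Input data is left uninitialized; only throughput matters here. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cufft.h>

int main(void)
{
    const int N = 1 << 20;                 /* 1M-point complex transform */
    cufftDoubleComplex *data;
    cudaMalloc((void **)&data, sizeof(cufftDoubleComplex) * N);

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_Z2Z, 1);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (int i = 0; i < 100; ++i)          /* repeat for a stable timing */
        cufftExecZ2Z(plan, data, data, CUFFT_FORWARD);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("100 x %d-point Z2Z FFTs: %.1f ms\n", N, ms);

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
[/code]

Run that on a 580 and on a 6xx card and the gap should show up immediately, in a way no viewport test ever would.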

The Titan doesn't have the crippling, which lets us hope the 7xx, based on the same silicon, won't either.

That's also why many people consider the 2xx and the 5xx the best GeForce generations for 3D work.

Regardless, let me re-state: no, Quadros aren't faster. They are exactly the same cards as the GTX line, recently at lower clocks, with their on-board ID changed by a resistor (see the resistor-hack thread I posted, where a 680 is turned into a K5000) so the drivers can throttle features, and occasionally (depending on the line-up) some cores lasered out or in.

The Titan will smoke a K5000 in day-to-day use with Maya, in my experience and that of others.

The generic statement "Quadros are faster" is so fundamentally flawed as a blanket claim that it's annoying beyond belief to see it constantly repeated by people barging in and out, when it's been disproven a ridiculous number of times at this point.

This conversation is going nowhere, because it seems to me that you are basically saying the same thing as me on many points.

I know that the GPUs are identical between GTX and Quadro cards; only minor changes on the board determine the ID and which state the GPU will operate in. And I know that the cards are crippled (especially the 6xx series). Didn't you read my posts?

The crippling is why they perform badly in Maya and many other applications; they seem to have problems with pixel readback and two-sided lighting, among other important things. That should, by all means, mean that the GeForces operate slower in Maya, thus making the Quadros faster!

You are right that in terms of specs, the GTX cards are the faster cards. And this is why Nvidia should be sued: many people upgraded their GeForce cards to ones with higher specs at every level, only to find out that the features were not available. And it didn't say so on the box!

I don't care how I tested the card; it wasn't an official test of anything. The test showed me that the GeForce cards are crippled to perform at only about 5-10 percent of their Quadro counterparts in professional applications.

As for DIY GTX-to-Quadro conversions: so far I have only heard of people losing their cards to it, and the ones who got it working for a little while never got it to perform like the Quadros (missing features).
 
  05 May 2013
If you read further up, the toss-up was between a Titan and a 690.

The Titan doesn't suffer from the same crippling.
On top of that, the DP crippling has hardly any effect on the viewport, but that's beside the point.

The blanket statement "Quadros are faster" is what I take issue with.
It's not true in general terms, and it's not true in the absolute, as the top end of the GTX series (the Titan, and most likely soon the 7xx, which will probably not differ much) is ahead of the K5000 even in some of the peskier artificial tests that brought the 6xx generation to its knees.

Again, a blanket "GeForces test at 5% of Quadros" is too generic, and more frequently untrue than true.
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles
 
  05 May 2013
Originally Posted by NoxLupi: ...
I don't care how I tested the card; it wasn't an official test of anything. The test showed me that the GeForce cards are crippled to perform at only about 5-10 percent of their Quadro counterparts in professional applications.
...


There's that blanket statement again. I don't work in Maya, but I bet it's another story in Viewport 2.0, since it's DX-based. Max, whose viewport is also DX-based, is another "professional application" that doesn't benefit (in the vast majority of situations) from Quadros.
 
  05 May 2013
Originally Posted by vlad: There's that blanket statement again. I don't work in Maya, but I bet it's another story in Viewport 2.0, since it's DX-based. Max, whose viewport is also DX-based, is another "professional application" that doesn't benefit (in the vast majority of situations) from Quadros.

Why did you pull that part out of context? Read and follow my statements! I explained that it has to do with features called by the applications, not specific to DX or OGL. Professional applications such as Maya, Max, XSI, CAD packages, etc. call up functions which are not used in games, such as overlays, pixel readback, DP rendering, two-sided lighting, etc. This is where Nvidia crippled the cards.

VP 2.0 is for visualizing; it is way too flawed to work with! I do hope they find a way to make it work like the normal viewport, and even make it the default! But so far, I consider it bells and whistles.

Now show me a test/bench where a GTX is faster than a Quadro in Maya or Max! And point me to the source of your "the Titan is not crippled" claim.

I have been working in the industry for about 25 years, guys; I am not trying to fill your heads with BS. Please prove me wrong! Tell me how I can get my GTX 570 to pull models with over 20 million polygons at at least 30-40 fps, which I can do with a Quadro 5000.
 
  05 May 2013
Originally Posted by NoxLupi: ...
Please prove me wrong! Tell me how I can get my GTX 570 to pull models with over 20 million polygons at at least 30-40 fps, which I can do with a Quadro 5000.
...


http://content.screencast.com/users...4d/sven_rig.png


http://screencast.com/t/mX8lLyujxCW
 
  05 May 2013

Please explain what I am looking at.

I see a rather low-poly-looking playback in VP 2.0 using a Titan. What is that proving?
 
  05 May 2013
Originally Posted by NoxLupi: Why did you pull that part out of context? Read and follow my statements! I explained that it has to do with features called by the applications, not specific to DX or OGL. Professional applications such as Maya, Max, XSI, CAD packages, etc. call up functions which are not used in games, such as overlays, pixel readback, DP rendering, two-sided lighting, etc. This is where Nvidia crippled the cards.

I don't think you are peddling BS; I think you've kept your composure fairly well, and I'm not finding you impolite.
But your points of reference are fairly out of date.

Overlays are gone; applications don't composite like that anymore on Windows OR Linux, so they are irrelevant.
"DP rendering" doesn't exist. DP data types, on the other hand, do, but they are irrelevant to drawing the viewports; double precision simply isn't required anywhere there. And again, the Titan doesn't have the issue, nor do the 5xx or earlier cards; literally only the 6xx do. But that is, again, beside the point, as you have to take on some very specific workloads before you see the double-precision end of things buckle, and then only on 4 of the last 50 cards that have GTX and Quadro offerings.

Pixel readback: sorry, but you might need to clarify here.
Pixel readback is technique-dependent; games have absolutely no problem doing it to extreme levels, even in buffered multipass, on GTX cards. Are you referring to PBO handling in OGL? Because PBOs are only of any benefit for asynchronous readback and for preventing thread locking, which isn't exactly common in DCC apps AFAIK; and even where it is used (and PBO handling -is- strongly driver-dependent), I can't say I've seen the 6xx series have any issues with it.
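For reference, this is the asynchronous PBO readback pattern I mean: a minimal sketch assuming a current GL context and PBO support (GL 2.1 or the ARB extension), not something any particular DCC app is confirmed to do. Two buffers are ping-ponged so glReadPixels never blocks the calling thread.

[code]
/* Sketch: async pixel readback via two ping-ponged PBOs.
   glReadPixels into a bound GL_PIXEL_PACK_BUFFER returns
   immediately; we map last frame's buffer instead of stalling
   on this frame's transfer. */
#include <GL/glew.h>

static GLuint pbo[2];
static int frame = 0;

void init_readback(int w, int h)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void read_frame(int w, int h, void (*consume)(const void *pixels))
{
    int cur = frame % 2, prev = (frame + 1) % 2;

    /* Kick off this frame's readback; with a PBO bound, the last
       argument is a byte offset, and the call does not block. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    /* Map the buffer filled a frame ago; by now the transfer is
       usually finished. (The very first frame maps an empty buffer.) */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        consume(pixels);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    ++frame;
}
[/code]

The point being: whether this is fast comes down to the driver and to how the app schedules the map, not to some Quadro-only hardware path.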

You might be thinking of old FBO handling issues, but those have been gone for a long time.

Double-sided lighting I honestly haven't looked into, monitored, or needed for a long while, so I'll have to let that one go.

Most of what you mention has been irrelevant for three to six years.

Have you written much for OGL or CUDA?
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles

Last edited by ThE_JacO : 05 May 2013 at 11:02 PM.
 
  07 July 2013
Saw this thread:

GTX 690: first GeForce certified for Maya 2014?
http://forums.cgsociety.org/showthr...?f=23&t=1105221
 
  02 February 2014
Does ANY 3D software that you've heard of use both GPUs of the GTX 690? I've looked around, and it seems the answer is 'no'.
 
  02 February 2014
The only use of multiple GPUs is for GPU-based renderers.
__________________
Matthew O'Neill
www.3dfluff.com
 
  02 February 2014
Originally Posted by imashination: The only use of multiple GPUs is for GPU-based renderers.
I thought it was the same deal with the GPU renderers such as V-Ray RT, Octane, FurryBall, Thea... that none of them can use both GPUs in a GTX 690, but that they do use multiple cards.
 
  02 February 2014
This is news to me; I don't see why they wouldn't be able to access the second chip, unless the 690 is hardwired as SLI rather than being seen as two cards, and the render engine can't work across an SLI'd card?
__________________
Matthew O'Neill
www.3dfluff.com
 
  02 February 2014
Originally Posted by imashination: This is news to me; I don't see why they wouldn't be able to access the second chip, unless the 690 is hardwired as SLI rather than being seen as two cards, and the render engine can't work across an SLI'd card?


FurryBall 4.6, for example, lists the GTX 690 and puts "2 cores" after it in parentheses. I guess they mean 2 chips; I don't know.
http://www.aaa-studio.cz/furrybench/benchResults4.php

I have to ask around more about the other ones, particularly V-Ray RT. I was under the impression, BTW, that the 690 arrangement is SLI.
 
  02 February 2014
Originally Posted by imashination: This is news to me; I don't see why they wouldn't be able to access the second chip, unless the 690 is hardwired as SLI rather than being seen as two cards, and the render engine can't work across an SLI'd card?

AFAIK it's that.
x90s are internal SLI on the same PCB, which is slightly different from cross-slot SLI. The benefit is that they require less bandwidth for nearly identical SLI output to two separate cards (and less power, which translates into less heat); the downside is that it's not the exact equivalent of two separate cards in some cases.

I can't comment on rendering engines dealing with it, though, since not having an x90 means I never had to worry about SLI vs. split devices.
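The compute side is easy enough to sanity-check, though. Here's a minimal sketch (mine, untested on an actual 690) of what a CUDA-based renderer sees when it enumerates devices; as far as I know, the x90 boards report as two separate CUDA devices, since SLI lives in the display path, not in compute.

[code]
/* Sketch: enumerate CUDA devices. A dual-GPU board such as the
   GTX 690 should show up here as two devices, SLI or not. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %d SMs, %.0f MB\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
[/code]

Whether a given render engine then bothers to spread work across both devices is another matter entirely.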
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles
 
  02 February 2014
Originally Posted by ThE_JacO: AFAIK it's that.
x90s are internal SLI on the same PCB, which is slightly different from cross-slot SLI. The benefit is that they require less bandwidth for nearly identical SLI output to two separate cards (and less power, which translates into less heat); the downside is that it's not the exact equivalent of two separate cards in some cases.

I can't comment on rendering engines dealing with it though, since not having an x90 means I never had to worry about SLI vs splits.
I think the reason I can't find much information about the GTX 690 for use with 3D software and render engines is that practically no one would buy one (at $1000-plus) for that use, because only one of the two chips gets used. No one except me. I usually research the heck outta stuff before buying. Not this time, lol. But it seems one or more of the GPU-accelerated render engines may, indeed, use both. Big maybe. In any case, a Titan or the upcoming Titan Black is on my list to add to the machine.
 