
SLI GeForce6 6800's... tomorrow (?)


3Dfx_Sage
06-27-2004, 11:07 PM
well, not really "SLI" but it's the same principle (actually it may be real SLI, but I doubt it). Tomorrow nVidia should be showing everyone just what can be done with TWO GeForce6's in one machine. This is kind of like the rig Alienware had, but it should be native nVidia and not require extra 3rd-party drivers like the Alienware rig did.

3Dfx_Sage
06-28-2004, 04:59 AM
well, people, here (http://www.hardwareanalysis.com/content/article/1728/) it is!

creative destructions
06-28-2004, 06:20 AM
Wow 3Dfx assets finally put to good use.

Digs up his old Voodoo3.

Wonders what ATI will come up with.

3Dfx_Sage
06-28-2004, 06:45 AM
Wow 3Dfx assets finally put to good use.

Digs up his old Voodoo3.

Wonders what ATI will come up with.
actually, there's a lot more 3Dfx tech in the NV3x/4x than you might think. IIRC pretty much the entire 2D engine was taken from 3Dfx... not that that's a major part. There were also some design philosophies carried over from 3Dfx, specifically regarding the parallelism in the NV3x.

dmeyer
06-28-2004, 03:51 PM
Dual Xeons with dual 6800s, sounds like you're going to need dual power supplies as well...

:scream:

peanuckle
06-28-2004, 03:57 PM
Dual Xeons with dual 6800s, sounds like you're going to need dual power supplies as well...

:scream:
Alienware's rig with their two-video-card technology used an 800-watt power supply.

pea~

kex
06-28-2004, 05:25 PM
at least you can chuck out your old radiator and buy some earplugs.

what the hell will your electricity bill be?

3Dfx_Sage
06-28-2004, 07:00 PM
of course, the most important part of this is.... DUAL QUADROS as well!

the thing is, I can't see how this will increase geometry performance, because each GPU renders part of the same frame. In order to know which triangles land in which part of the frame, you have to do the geometry transformation first. And I really don't think that little link is fast enough to transfer scene geometry over. Could be wrong though, there's a lot left to be discovered.
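
To make that redundancy concrete, here is a minimal toy sketch (Python, purely illustrative; nothing here is NVIDIA's actual SLI logic) of split-frame rendering: each GPU has to transform every vertex just to find out which triangles touch its half of the screen, so only the pixel work is actually divided.

    # Toy model of split-frame rendering across two GPUs -- purely illustrative,
    # not NVIDIA's actual SLI implementation.

    def transform(vertex):
        # Stand-in for the full model-view-projection transform; a pass-through
        # here just to keep the sketch runnable.
        x, y, z = vertex
        return (x, y)

    def rasterize(screen_tri):
        # Placeholder for the per-pixel work (rasterization + pixel shading).
        pass

    def render_split_frame(triangles, gpu_id, num_gpus=2):
        # Each GPU owns one horizontal band of the screen: y in [lo, hi).
        lo = gpu_id / num_gpus
        hi = (gpu_id + 1) / num_gpus
        for tri in triangles:
            screen_tri = [transform(v) for v in tri]   # duplicated on every GPU
            ys = [y for (_, y) in screen_tri]
            if max(ys) >= lo and min(ys) < hi:         # does it touch my band?
                rasterize(screen_tri)                  # only this part is split

    # Both GPUs walk the same scene; only the band they rasterize differs.
    scene = [[(0.1, 0.1, 0.0), (0.4, 0.2, 0.0), (0.2, 0.8, 0.0)]]
    for gpu in range(2):
        render_split_frame(scene, gpu)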

shehbahn
06-28-2004, 09:11 PM
i haven't looked at the specs so take this with a grain of salt but :

- i doubt the link transfers anything but synchronisation data. the PCI-E bus transfers the actual scene data. the connector seems to have around 30 contacts, not much of a high-bandwidth bus.

- the article mentions dual AGP being too slow : i remember reading a number of articles about AGP x16 and the fact that the bottleneck was the memory bus, not the GPU bus. PCI-E won't really fix that, but it suddenly makes sense if you are going to be pushing the data to 2 GPUs instead of one...

- the GPU nowadays does handle the bulk of the CTE transformations. in this case however, both GPUs cooperating on the same image implies redundant operations. I am guessing that this setup is aimed at the gaming market, where increasingly the rendering load is spent in pixel shaders rather than geometry (i.e. don't push more polygons, make them look prettier).
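
A rough Amdahl-style back-of-the-envelope supports that last point: in a split-frame scheme the pixel work divides across the GPUs while the duplicated vertex work does not, so the payoff depends on how pixel-bound the frame is. The fractions below are made up purely for illustration.

    # Made-up numbers, Amdahl-style: pixel work divides across GPUs in a
    # split-frame setup, vertex work is repeated on each card.

    def sfr_speedup(pixel_fraction, num_gpus=2):
        vertex_fraction = 1.0 - pixel_fraction
        return 1.0 / (vertex_fraction + pixel_fraction / num_gpus)

    for pf in (0.5, 0.8, 0.95):
        print("%.0f%% pixel-bound -> %.2fx with 2 GPUs" % (pf * 100, sfr_speedup(pf)))
    # 50% pixel-bound -> 1.33x, 80% -> 1.67x, 95% -> 1.90x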

3Dfx_Sage
06-28-2004, 09:26 PM
- the GPU nowadays does handle the bulk of the CTE transformations. in this case however, both GPUs cooperating on the same image implies redundant operations. I am guessing that this setup is aimed at the gaming market, where increasingly the rendering load is spent in pixel shaders rather than geometry (i.e. don't push more polygons, make them look prettier).
if it's smart, then it will just do the transformation part of the vertex shader and then decide whether it needs to do the rest. This will result in a partial speedup in vertex processing, but pro apps usually don't use VS programs. And this is assuming it can figure out how to run just the transformation part.
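
As a sketch of that "transform first, decide later" idea (a hypothetical structure, not anything NVIDIA has confirmed): run only the position part of the vertex work, test the triangle against this GPU's screen band, and skip the expensive remaining attributes when it falls outside.

    # Hypothetical "position first, decide later" flow -- an assumption about
    # how a driver *could* split vertex work, not a description of NVIDIA's.

    def position_only(vertex):
        # Cheap pass: only the clip-space position (the transform part).
        x, y, z = vertex
        return (x, y)

    def full_vertex_shade(vertex):
        # Expensive pass: normals, texture coords, per-vertex lighting, etc.
        x, y, z = vertex
        return {"pos": (x, y), "attrs": (z,)}

    def process_triangle(tri, band_lo, band_hi):
        ys = [position_only(v)[1] for v in tri]
        if max(ys) < band_lo or min(ys) >= band_hi:
            return None                               # outside my band: skip the rest
        return [full_vertex_shade(v) for v in tri]    # only now pay the full cost

    # Example: a triangle entirely in the top half is skipped by the bottom-half GPU.
    tri = [(0.1, 0.1, 0.0), (0.4, 0.2, 0.0), (0.2, 0.3, 0.0)]
    print(process_triangle(tri, 0.5, 1.0))   # -> None
    print(process_triangle(tri, 0.0, 0.5))   # -> shaded vertices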

now, where this will be useful is with Gelato. More pixel power definitely means faster render times.

shehbahn
06-28-2004, 11:11 PM
>the transformation part of the vertex shader and then decide if it needs to do the rest.

correct - occlusion culling can speed things up tremendously. somehow, i don't think it's that easy though... i would have to go look at some whitepapers on the different pixel pipelines since my recollections right now are a bit fuzzy.

> this will result in a partial speedup in vertex processing, but pro apps usually don't use VS programs.

correct - that's why i am thinking this is aimed at markets that do heavy RT texturing & shading. incidentally, "if you build it they will come": it's fairly safe to assume the major 3D apps will jump on those features to offer better interactivity in their lighting & shading tools.

i also don't see handling of very heavy geometry as being all that high on the demand list of 3D app users. the more problematic area is interactive traversal of the scene graph: namely, resolving the very complicated articulations of modern characters, with all the muscles / skin / dynamics and other blend shapes, in a timely manner. the display of surfaces is not where the bottleneck is, in my experience.

>More pixel power definitely means faster render times.

still ambivalent on that one : i'll want to hear more about it before commenting - though i am definitely drooling at the new GPUs.

stephen2002
06-29-2004, 12:50 AM
Alienware's rig with their two-video-card technology used an 800-watt power supply.
Their demo system was also dual-CPU.

3Dfx_Sage
06-29-2004, 02:25 AM
>More pixel power definitely means faster render times.

still ambivalent on that one : i'll want to hear more about it before commenting - though i am definitely drooling at the new GPUs.
well, that was in reference to Gelato. With Gelato the pixel shaders are what do the hard work, and doubling the number of pixel shader pipelines would, of course, speed things up.

shehbahn
06-29-2004, 05:59 PM
yes i got that - what i am ambivalent about is that NVidia Cg is a watered-down shading language with a few critical pieces missing, mostly for memory architecture reasons. In other words, i am curious to see which tasks get deferred to the GPU by a Mental Ray or Gelato app, and how much of a performance boost results from this over a distributed render across traditional render nodes. I love the new GPUs, i am just not quite sold on the idea that we can replace the more flexible software solutions. The idea has been kicked around quite a few times already (Pixar computer? ART render boxes?) and never all that successfully.
