PDA

View Full Version : GeForce4*2 vs. Quadro


Helix
05-21-2002, 06:06 PM
i read the articles on cgchannel or whatever, and was wondering:

if the GeForce4 Ti4600 is comparable to the top Quadro board, or even if it's not...

why not put two GeForce cards in your machine for less than the price of one Quadro? i'm thinking of doing this...

Tellerve
05-21-2002, 09:18 PM
I'm assuming you don't mean two 4600's. Otherwise how would you get two GeForce 4600s in your case? I'm pretty sure there isn't a motherboard that has two AGP slots. Now, I think I saw a GeForce4 MX card that was PCI, but ick, an MX card. Or you could use an older GeForce2 or something that might still be PCI. If you meant that, then sorry, I misunderstood you.

Tellerve

ZrO-1
05-21-2002, 09:55 PM
I don't think I've ever heard of any mobo coming with dual AGP slots. In fact, from what I know of the architecture, it's impossible. You could do a PCI card, but what I think you are thinking of is some sort of bridging effect where both cards output to the same screen. The last board you could do that with was the old Voodoo series. If you want dual screens, then any of the newer cards' multi-monitor support would be fine. If you're thinking of trying to use two cards as one... no can do.

Helix
05-22-2002, 01:09 AM
no, i meant for dual display. i figured the only draw for the Quadro was this very support. you could do a PCI GeForce3 and an AGP GeForce4 and get just as good quality (pretty much) for a lot less cost... am i right?

3DMadness
05-22-2002, 02:26 PM
In this case you would need an AGP and a PCI card. But you would only have OpenGL acceleration on the primary display.
I think that if you want to save some money you should get a GeForce4 Ti4600; it can also output to two displays, and you can have both displays accelerated.
Also, there's no PCI GeForce3; the fastest PCI card would be a GeForce2 MX400... :rolleyes:
Hope that clears it up. :thumbsup:
Cheers!

3D Madness

Helix
05-22-2002, 06:37 PM
so there's an adapter for the TV-out plug on the Ti4600? (which i plan on getting anyway)

3DMadness
05-22-2002, 07:50 PM
What kind of adapter? Most of the GF4 Ti4600 cards I've seen come with one CRT output, one S-Video in/out, and one LCD output with an LCD/CRT adapter so that you can hook two CRT monitors up to it. There are also some with two LCD outputs that come with two adapters to plug CRT monitors into.
Cheers!

3D Madness

Helix
05-22-2002, 08:44 PM
oh ok thx. i got it now :)

a bit confused with all the orifices :D

jscott
05-22-2002, 11:14 PM
Guys, in case you didn't know, the second port on the GeForce4 4600 is not untapped video processing power waiting for you to plug a monitor into.

The GeForce4 cards have two RAMDACs. When driving one monitor, both RAMDACs work together to drive the single display. When using two monitors, each RAMDAC is assigned to drive one monitor only.

So hooking up two monitors to a GeForce4 cuts the board's power in half. This is why so many people use one AGP card and one PCI card. That way they have at least one monitor running full tilt, and the other monitor can be used for other windows such as browsers, tools, dialog boxes, etc.

If you are working with large or high-poly-count models, you may not want to take the performance hit of hooking up that other monitor. Do your research...

-jscott

Tellerve
05-24-2002, 01:11 AM
Huh, I hadn't heard that...where did you read this? A link please, as I'd like to study up.

As for the most powerful PCI card... not sure exactly, but I do know they make a GeForce4 PCI card. However, it is an MX version, though still more powerful than a GeForce2.

Tellerve

jscott
05-24-2002, 02:02 AM
Damn, I can't find a link on NVIDIA's site where they admit that in a dual monitor setup each RAMDAC drives only one monitor. Here is the PDF that states the board has two RAMDACs: http://www.nvidia.com/docs/lo/1457/SUPP/Q4_nView.jb2_final.pdf

You see they hope you don't figure that out or you would just keep your GeForce3 and get a PCI card.

I have a Wildcat 5110 at work, which is a dual-head card. With the driver set to push two monitors, frame rates for playing an animation in 3ds max are approximately half what I was getting with the drivers set to drive only one monitor. This includes orbiting around a model, too.

I'm 99% sure NVIDIA cards work similarly, but I don't have one. It just makes sense.

If I find a link where someone else states how it works I'll post it.

later,

-jscott

3DMadness
05-24-2002, 02:09 PM
I agree with you Scott, but if you put your viewport on just one monitor, is it still half as fast?
I was thinking that this slowdown would only happen when you have viewports on both monitors. Then it would be the same as increasing the resolution on one monitor: more pixels to accelerate means a slowdown.
I was also thinking about something: if you have two displays running at 1280x960 each, that means 1280x960x2 = 2,457,600 pixels, and a single monitor at 1600x1200 means 1600x1200 = 1,920,000 pixels; that's not a big difference. What I want to say is that it can be more interesting to use two monitors instead of one in high resolution.
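Sanity-checking that arithmetic in a quick Python sketch (same resolutions as above):

```python
# Total pixels driven by two 1280x960 displays vs. one 1600x1200 display
dual = 1280 * 960 * 2    # two monitors side by side
single = 1600 * 1200     # one high-resolution monitor

print(dual)              # 2457600
print(single)            # 1920000
print(dual / single)     # 1.28 -> the dual setup pushes only ~28% more pixels
```

So if the slowdown really scaled with pixel count, two medium-resolution monitors shouldn't be much worse than one big one.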
Please, help me with these questions! :)
Cheers!

3D Madness

ZrO-1
05-24-2002, 04:21 PM
I have never read any review that mentioned the frame rates or even the general performance dropping when the second monitor was enabled on NVIDIA cards. I have read that the 3DLabs cards do take almost a 50% hit with two monitors enabled.

Here's a snippet from a great review I read over at ExtremeTech:

"Like many previous OpenGL cards, all four boards have dual monitor outputs, though most of our previous tests focused solely on single monitor output.

"What happened to performance" we wondered, "if you fire up that second monitor, perhaps to spread your application over both screens, or to run a different application altogether? Seems like something a lot of folks may want to do." Well, we spent a lot of time exercising these boards with dual monitor scenarios in addition to a large number of single monitor tests.

This first segment covers low-level performance characteristics, while Part II looks at application performance and stability, and Part III focuses on dual monitor features and performance."

Here's a link right to the article in another thread here on the T&H main page. http://www.cgtalk.com/showthread.php?s=&threadid=8699

jscott
05-24-2002, 04:59 PM
...if you put your viewport on just one monitor, is it still half as fast?

I believe it is always going to be roughly 50% slower with two monitors hooked up, no matter what is in each viewport.

Like I said, I haven't been able to absolutely prove this yet, but I have every indication that it works this way, unless NVIDIA has something special going on. Which I doubt, since 3DLabs has been the leader in OpenGL dual-head cards for years, and all the Wildcats take the performance hit because the processing power is split and each half is dedicated to one monitor.

Even when nothing is happening on a monitor, the video card is still sending the signal; it's just that the signal hasn't changed. When something moves, there is a change in the signal and the video card sends the new info to the monitor. This is constant. If the RAMDAC quit processing the signal for monitor 2 to help out an operation on monitor 1, then monitor 2 would go blank while the RAMDAC did the processing for the signal going to monitor 1. I'm not a video technician, but this seems a likely reason why each RAMDAC is dedicated in a dual config. Now, it would be interesting if through drivers you could have RAMDAC 2 help out RAMDAC 1 without completely stopping processing of the signal to monitor 2. I don't know; we'll just have to see.

We did get a Quadro4 900XGL in for testing earlier this week, but I haven't been able to set it up in one of the machines yet. If we don't have a solid answer between us by then, I'll post some test info when I get the board installed.

ZrO-1:

I have never read any review which mentioned the frame rates or even the general performance dropping when the second monitor was enabled on the Nvidia cards.

I haven't either, but I see no reason why the NVIDIA cards would allow some way for the two RAMDACs to dynamically choose which monitor needs the most attention. Not that it's not possible; that's why in a previous post I stated that I was 99% sure. I haven't tested it myself, and I have not read anywhere exactly what the performance is like. That is also why I said "do your research". I can promise you that there are not unused processors on the board waiting for you to connect a second monitor. Ever notice how virtually all benchmarks are done on one monitor? A lot of people who buy a GeForce4 4600 will never hook up two monitors. So why would NVIDIA waste money putting processors on a board that a large portion of consumers won't ever use? If it were set up like that, there would be two separate products: a cheaper single-head board and a more expensive dual-head board.

I have already read the review you quoted, but Part III, with the dual monitor performance tests, has not been posted yet. I can only find Part 1 and Part 2. I am eagerly waiting to see how it works out.

Don't get me wrong, I like NVIDIA boards, and I wish there were a way to run full single-monitor acceleration on dual monitors. Again, I have strong suspicions, and I just wanted you guys to be sure about what you were getting into when planning upgrades.

-jscott

3DMadness
05-24-2002, 05:49 PM
Hi Scott!

Thanks for your reply! It's interesting to know that it makes no difference whether there's anything on the second monitor... have you tried it with the Wildcats? I thought it would be the same as when you are running something in a small viewport versus full screen; in a small viewport it always plays back faster.
BTW, I remember you from Discreet's forum. :) And I remember Greg always says the Wildcats have poor performance under max... maybe there's something else, I don't know. But I'm looking forward to your test with the Quadro4 900XGL; then you can tell us how it goes with one and two monitors on it. I'm also waiting for Part III of ExtremeTech's article! :thumbsup:
Cheers!

3D Madness

jscott
05-24-2002, 06:11 PM
This was my test on a Wildcat 5110.

The dragon character animation from the 3ds max scenes dir. The file is an animated dragon flying; nothing else but clouds moving in the background. I used Max's FPS counter, which I believe averages the FPS. All I did was open the file and hit play with only the camera viewport full screen. Max was maximized at 1280x1024, with nothing on the second monitor.

Wildcat 5110 single monitor - 27.2 fps
Wildcat 5110 dual monitor - 16.5 fps
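For the record, here's what the hit works out to from those two numbers (just arithmetic on the figures above):

```python
# Frame rates measured above on the Wildcat 5110
single_fps = 27.2   # one monitor enabled
dual_fps = 16.5     # two monitors enabled

drop = 1 - dual_fps / single_fps
print(f"{drop:.0%}")  # 39% -> close to, but not quite, the 50% worst case
```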

-jscott

ZrO-1
05-24-2002, 08:14 PM
Hey Scott, I just wanted to make sure you knew I wasn't arguing with you about your point. Actually, I was agreeing about the 3DLabs card, and just stating that I personally hadn't heard anything about the NVIDIA cards.

Man, I'm waiting with tense anticipation for Part 3 of the ExtremeTech article. I've spent the last 2.5 months, since before Parts 1 and 2 came out, scouring the 'net for any review I could find of the latest pro cards. And I have to say the ExtremeTech article seems to me to be the most comprehensive and complete real-world testing of the cards. I'm probably going to base 90% of my decision on which card to buy on this review.

jscott
05-24-2002, 08:38 PM
ZrO-1,

It's cool man....

peace,

-jscott

3DMadness
05-24-2002, 10:36 PM
Damn Scott, I didn't know there was such a big difference from just having another monitor hooked up, even if it's blank. Thanks a lot for this info, man! :thumbsup:
Now I'm with ZrO-1, waiting for ExtremeTech to put Part III up... or waiting for you to run the same simple test you did on the Wildcat with the Quadro4. ;)
Cheers!

3D Madness

Speed Racer
12-26-2002, 10:24 PM
jscott:
I believe it is always going to be about ~50% slower with two monitors hooked up no matter what is on each viewport.

JScott, I should clarify a bit as you are not totally correct on this.

1. Dual RAMDACs do not both operate while only driving one display. One RAMDAC is hardwired to one display channel and the other RAMDAC is hardwired to the second. When running just one monitor, the other RAMDAC isn't doing anything.

2. RAMDACs do not affect a 3D card's performance. A RAMDAC only plays a role in the maximum refresh rates a card can scan out and in the quality of the analog display. The RAMDAC comes _after_ all the 3D work has been finished. It simply scans the completed data out of the framebuffer (scan out) and to the monitor. Often this is even completely asynchronous (vblank off) to what is going on in the 3D pipeline.

3. When two displays are hooked up, a 3D card is _not_ 50% slower. All things being equal (total desktop resolution), a 3D card with two-channel output should perform 100% the same, regardless of whether you are scanning out to one monitor or two (Wildcats are the last remaining exception; see below).

The one poster was correct in that even if you have 2x the pixel count for your desktop (framebuffer), if the 3D viewport occupies the same resolution on the screen as before then the performance of the application should be 100% identical.

e.g. If your desktop goes from 1600x1200 to 3200x1200, then you potentially have 2x the pixels to fill. If your 3D viewport occupies the entire desktop, you can see anywhere from a 5% performance hit to a >500% performance hit, depending on the card and the 3D scene. If it occupies only 1600x1200, it should perform the same as before.

4. 3DLabs Wildcat: This is a very different beast, and because of that it creates the situation you are seeing. True Wildcats (Wildcat I, II, III, and the last-of-its-type IV) are all based on older technology that is used in a brute-force way. Unlike the new VPs, X1s, and Quadros, the true Wildcats are based on a pipeline that is laid out on the board rather than on-chip (separate geometry, texture, and pixel-fill ASICs).

In addition, the high-end versions of these boards had two completely separate pipelines laid out on the board. When using a single monitor, the driver divides the work between the two pipelines, and the results are filled into a single framebuffer and scanned out to one monitor.

In the case of a two monitor config, 3dlabs _could potentially_ switch into a mode where one pipeline handles one half of the desktop (one display) and the other pipeline handles the other half of the desktop (the second display).

This would be the easiest way to handle this and would allow for a more predictable level of performance. However, it would hardwire 50% of the performance of the entire board to each display (just as you were seeing) regardless of what was going on in your desktop.

(BTW, this is also why the memory capacity of a true Wildcat is misleading. Although there is a large amount of texture memory divided between the two pipelines, a copy of each texture has to be stored in the memory of both pipelines, as you can't predict which scanline or side of the display it will be needed on. This is why you have to take the total cache and texture memory capacities and divide them in half to compare them to the capacity of the VP, X1, and Quadro style of solutions.)
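The memory point above works out like this (the capacity figure below is illustrative, not an actual Wildcat spec):

```python
# Dual-pipeline board: texture memory is split between the two pipelines,
# and every texture must be duplicated in both halves of that memory.
total_texture_mb = 128                 # hypothetical advertised capacity
per_pipeline_mb = total_texture_mb / 2

# Effective unique texture capacity is one pipeline's share, because the
# other pipeline's memory holds copies of the same textures.
effective_mb = per_pipeline_mb
print(effective_mb)  # 64.0 -> compare this, not 128, against a single-pipeline card
```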

Single-chip, 4-8-pipelines-on-a-chip solutions like the VP, X1, and Quadro have an easier time directing their performance to any portion of the display. The architecture doesn't care if you are scanning that out to one monitor or four.

P.S. The above ignores other variables such as Microsoft's DualView mode. It assumes the VP/X1/Quadro are using their natural method of multi-display support. If you are forced to use the OS's DualView mode (only required for supporting different resolutions on each monitor), then there are other OS factors that will affect the performance comparison to a single-monitor setup.

jscott
12-27-2002, 09:05 PM
Thanks for the explanation Speed. I did at a much later time discover that the Quadro4 worked much differently than the Wildcats in a dual display situation. I have since recommended Quadros as our future display card of choice for dual monitor systems.

We do still have one OpenGL application that a Wildcat 6110 simply and totally stomps all over a 900XGL and we have yet to discover why.

peace,
-jscott

elvis
12-27-2002, 09:47 PM
i'm going back in time a little, so i apologise if this is askew...

just to clarify: you cannot have dual AGP slots on a motherboard. AGP is designed to have asynchronous access to the system RAM, independent of such features as the PCI bus and CPU. the AGP device still needs to use an interrupt to access CPU time, but for system RAM (controlled in amount by the AGP aperture) it does not.

as such, there is no physical way to put two AGP slots on a single controller, as the system would be totally unusable while the cards fight for resources (instant blue screen). perhaps someone somewhere has researched a bridging-controller option to make the idea work, but the cost-to-benefit ratio would be so low, i don't think anyone would bother.

jscott
12-28-2002, 04:27 PM
I could be wrong, but I thought I read before that the AGP 8x specification had some provision for dual AGP ports. I agree, though, that with the abundance of dual-output video boards today we may never actually see the need for dual AGP ports, unless through drivers or other means this allowed a doubling of video processing power.

-jscott

Gyan
12-28-2002, 04:46 PM
Originally posted by jscott
I could be wrong, but I thought I read before that the AGP 8x specification had some provision for dual AGP ports. I agree, though, that with the abundance of dual-output video boards today we may never actually see the need for dual AGP ports, unless through drivers or other means this allowed a doubling of video processing power.

-jscott

Kinda offtopic, but isn't PCI-X the next big thing now ?

Speed Racer
12-28-2002, 08:28 PM
It is my understanding that there will be an AGP version of PCI-X (AGP-X) that puts us in the same single specialized (DMA) slot situation again. Anyone?

I think there are some other bus standards that may have an impact as well.

I do think this is an issue, jscott, as there are quite a few three-channel solutions. I believe it is a much more natural setup, with a full channel in front of your vision rather than a seam between two monitors. There are a number of three-DFP desktop solutions as well as three-projector solutions for immersive rooms.

There will need to be a vast improvement in multi-machine solutions, cards with three channels of output, or a system architecture that allows for two full-speed cards (for a total of four monitor outputs).

elvis
12-28-2002, 09:41 PM
Originally posted by Gyan
Kinda offtopic, but isn't PCI-X the next big thing now ?
PCI-X 1.0b should allow for 1GB/s (at 133MHz). the PCI-X 2.0 spec proposes 2 to 4 times this amount, which is well within the usable range for current video devices. (same as AGP 4x to 8x, by my calculations. someone correct me if i am wrong.)
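for anyone checking my math, here are the peak theoretical numbers (published bus widths and clocks; real-world throughput is lower due to protocol overhead):

```python
# Peak theoretical bandwidth = bus width (bytes) x clock (MHz) x transfers per clock
def bandwidth_mb(width_bits, clock_mhz, transfers_per_clock=1):
    return width_bits / 8 * clock_mhz * transfers_per_clock  # MB/s

pci_x_133 = bandwidth_mb(64, 133)    # PCI-X 1.0: 64-bit bus @ 133 MHz
agp_4x    = bandwidth_mb(32, 66, 4)  # AGP 4x: 32-bit bus @ 66 MHz, 4 transfers/clock
agp_8x    = bandwidth_mb(32, 66, 8)  # AGP 8x: 8 transfers/clock

print(pci_x_133)  # 1064.0 MB/s, i.e. about 1 GB/s
print(agp_4x)     # 1056.0 MB/s
print(agp_8x)     # 2112.0 MB/s
```

so PCI-X 1.0 lands right around AGP 4x, and PCI-X 2.0's doubling/quadrupling brackets AGP 8x.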

3dfx pushed SLI (scan line interleave: using two video cards to render alternate lines of the screen to effectively [almost] double performance). considering 3dfx and nvidia are now one, and high-bandwidth PCI is around the corner, i'm pretty interested in what they can achieve together. imagine being able to fill all of your spare PCI slots with high-speed video cards, each increasing your FPS slightly. sounds like a high-end workstation user's dream.
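for the curious, the interleave idea can be sketched in a few lines of python (a toy model: real SLI split scanlines in hardware, and these function names are made up):

```python
def render_row(card_id, y, width):
    # Stand-in for one card rasterizing one scanline of the frame
    return [(card_id, y, x) for x in range(width)]

def sli_frame(height, width, num_cards=2):
    # Scan line interleave: card i handles rows i, i+num_cards, i+2*num_cards, ...
    frame = []
    for y in range(height):
        frame.append(render_row(y % num_cards, y, width))
    return frame

frame = sli_frame(height=4, width=2)
# Even rows came from card 0, odd rows from card 1
print([row[0][0] for row in frame])  # [0, 1, 0, 1]
```

since each card touches only half the scanlines, fill work per card is roughly halved, which is where the "almost double performance" comes from.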

of course the argument stands that if you wait 12 months, a single card will outperform your cluster of cards, but that's not the point, is it? :)

CGTalk Moderation
01-13-2006, 06:00 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.