GeForce 7800 GTX (G70) Specs!


daart
05-18-2005, 12:55 PM
0.11 micron TSMC process
430 MHz core / 1.4 GHz 256 MB GDDR3 memory
256-bit memory interface
38.4 GB/s memory bandwidth
10.32 Gpixel/s fill rate
860M vertices/second
24 pixels per clock
400 MHz RAMDACs
NVIDIA CineFX 4.0 engine
Intellisample 4.0 technology
64-bit FP texture filtering & blending
NVIDIA SLI Ready (7800 GTX only)
DX 9.0 / SM 3.0 & OpenGL 2.0 supported
G70 comes in 3 models: GTX, GT and Standard
Single card requires min. 400W PSU with a 12V rating of 26A
SLI configuration requires min. 500W PSU with a 12V rating of 34A
Launch: 22nd of June
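The fill-rate figure checks out against the core clock and pixel count above, while the 38.4 GB/s figure actually implies a 1.2 GHz effective memory clock rather than the 1.4 GHz quoted. A quick back-of-envelope check in Python (purely illustrative, the constants are just the rumored numbers above):

# Back-of-envelope check of the rumored figures above (illustrative only).
core_clock_mhz = 430            # core clock
pixels_per_clock = 24           # pixel pipelines
bus_width_bits = 256            # memory interface width
mem_clock_effective_mhz = 1200  # effective GDDR3 rate implied by 38.4 GB/s (the list quotes 1.4 GHz)

fill_rate_gpixels = core_clock_mhz * pixels_per_clock / 1000.0
bandwidth_gbs = (bus_width_bits / 8) * mem_clock_effective_mhz / 1000.0

print(f"Fill rate: {fill_rate_gpixels:.2f} Gpixel/s")  # 10.32, matches the list
print(f"Bandwidth: {bandwidth_gbs:.1f} GB/s")          # 38.4, matches the list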

Solothores
05-18-2005, 01:57 PM
edit: duh :argh:

FlyByNight
05-18-2005, 05:33 PM
Interesting release timing :rolleyes: I think we're gonna be seeing PCs as fast as these next-gen consoles sooner than we think... I wouldn't be too surprised if we hear some new info on processors coming out soon as well... something smells fishy...

splintah
05-18-2005, 06:00 PM
Dude,

time for a new PSU :-)


3 models?
GTX, GT, Standard
I bet they will bring out like 10 different names/versions for it, so no one remembers which is which.

I bet it's got crazy speed in Gelato.

-=TF=-
05-18-2005, 06:20 PM
OK, I'll take one :scream:

novadude
05-18-2005, 06:25 PM
3 models?

That's fewer than the current generation (Ultra Extreme [short run], Ultra, GT, and vanilla), and it only got worse when Dell introduced the GTO and other companies released other variants on the names.

Oh, and I'll take two on one board so I have room for a PCI-E SAS board :)

NUKE-CG
05-18-2005, 06:52 PM
Spec-wise, it is not a big jump, it seems. But benchmarks are more conclusive than words and numbers.

We shall wait.

Edit: I think buying the card at release will be kind of pointless for gamers; the three most graphics-intensive games run fine on 6800Us...

rakmaya
05-18-2005, 07:49 PM
Spec-wise, it is not a big jump, it seems. But benchmarks are more conclusive than words and numbers.

We shall wait.

Edit: I think buying the card at release will be kind of pointless for gamers; the three most graphics-intensive games run fine on 6800Us...


Agreed, but the problem with that statement is if ATI brings in something first that's as powerful as the next gen. Remember, that NVIDIA guy at the conference said the PS3 GPU is twice as fast (in terms of calculation) as the 6800 Ultra. So it becomes natural for them to make sure PC games don't vanish in the mist. Also, it takes at least 6 to 8 months for a GPU to become mainstream in the PC market, so it is sort of an early step up to make sure it becomes the standard by the time the consoles come out.

Viper
05-18-2005, 09:19 PM
Wow, this is good news :D It means the 6s will go down in price ;) BTW, what does SLI-ready mean exactly? If I have an SLI mobo, can't I put a non-SLI-ready graphics card in it?

NUKE-CG
05-18-2005, 09:35 PM
Yeah, that 2X quote is getting corrected by a lot of hardware sites, saying that it's just the shader calculations. No doubt that is an intense procedure, but it's not everything.

I'd love to see some high-MHz cores and optimized polygon-pushing capability myself.

rakmaya
05-18-2005, 09:41 PM
Yeah, it is only the calculation (I did say "in terms of calculation"). But nevertheless, it is very fast and a big jump.

Hazdaz
05-18-2005, 09:58 PM
I think we're gonna be seeing PCs as fast as these next-gen consoles sooner than we think...

Well, except that that video card will probably cost more than an entire next-gen console. When it comes to just gaming, I know what I will be spending my money on - and it isn't wasting it on a PC.

Curious what this new card means to people that do 3D (not games)... if it is even that big of a jump in performance.

BillSpradlin
05-18-2005, 10:01 PM
Video cards are WAY overpriced and have been for quite some time. There's absolutely no reason why a top-end gaming card should cost upwards of $500.

erilaz
05-18-2005, 11:50 PM
Video cards are WAY overpriced and have been for quite some time. There's absolutely no reason why a top-end gaming card should cost upwards of $500.

Yes there is; because they can.:D

t-man152
05-19-2005, 12:23 AM
Those consoles are scaring me, PC-gaming-wise. If we don't get better processors, PC gaming might vanish. Kind of a scary thought if you ask me.

novadude
05-19-2005, 12:25 AM
Yes there is; because they can.:D

Because they have convinced gamers that they need to buy a new card every six months so that they can continue to play the latest games at ultra-high resolutions with every drop-down choice on the game's video page cranked as far as it will go. Add to that game developers putting a few poorly implemented shaders into the new generation of games so that only the next-gen cards with that option will suffice, plus monitors ever increasing in resolution, and the list goes on...

This generation of cards won't add much in outright performance (and who needs it when you're running upwards of 400fps in games?), but will just implement some new features to make games look more realistic via better shading/lighting/textures. Cheap RAM makes increased world sizes possible, while increasing CPU capabilities make better AI and physics possible.

Zenbu
05-19-2005, 12:44 AM
Those consoles are scaring me, PC-gaming-wise. If we don't get better processors, PC gaming might vanish. Kind of a scary thought if you ask me.

I really doubt that would happen, since PC gaming usually has an "aftermarket" readily available for people to tinker with in most games that come out (i.e. mods, levels, etc.).

FREE, mind you.

That is a terrible thought, though.

Hazdaz
05-19-2005, 01:17 AM
Those consoles are scaring me, PC-gaming-wise. If we don't get better processors, PC gaming might vanish. Kind of a scary thought if you ask me.
And that is bad, why?? I personally couldn't care less about PC gaming - I haven't even installed a PC game in at least 5 years (maybe closer to 10). The only thing I would miss is all the PC gamers who buy these overpriced video cards and other leading-edge components and help push their prices down.

Kabab
05-19-2005, 01:21 AM
Those consoles are scaring me, PC-gaming-wise. If we don't get better processors, PC gaming might vanish. Kind of a scary thought if you ask me.
That's not going to happen.

Remember when the PS1 came out: at the time it was much better than any PC, same as when the PS2 came out, etc...

These consoles have a 5-year shelf life, so after about 1-2 years PCs once again shoot ahead, and by the end of 5 years the consoles look pretty sorry...

It all just goes in cycles.

Some types of games will always suit PCs, while others suit consoles...

t-man152
05-19-2005, 01:26 AM
A mouse and keyboard are much better for first-person shooters than controllers can be (unless Nintendo has something amazing controller-wise).

NickBFTD
05-19-2005, 01:56 AM
Those consoles are scaring me, PC-gaming-wise. If we don't get better processors, PC gaming might vanish. Kind of a scary thought if you ask me.

Dual-core CPUs are coming later this year, with quad-core planned for 2007.

BillSpradlin
05-19-2005, 02:36 AM
x86 technology is dead and has been for a while. Intel and AMD are just squeezing everything they can out of it by doubling up the CPUs on die. Pathetic if you ask me. Bring on the biotechnology, for shit's sake.

As far as consoles go, I never could get into them for the very fact that the controllers sucked ass and could never give me the response time of a mouse and keyboard.

rakmaya
05-19-2005, 04:40 AM
I really doubt that would happen, since PC gaming usually has an "aftermarket" readily available for people to tinker with in most games that come out (i.e. mods, levels, etc.).

FREE, mind you.

That is a terrible thought, though.


That is something that will change in the next generation, because the X360, PS3 and Revolution are insisting very much on networked games. Apart from shooters, the only reason PC games will survive is the independent developers who want to do something for less. But with programs like Xbox Live and online-enabled RPGs, PC games will have a hard time keeping up with console games.

They will have an aftermarket, but depending on the cost/production/time ratio, it can vanish quickly before we realize it.

My plan for the next year is only to buy a dual G5 and 3 consoles. I have a 6800 and am sticking with it for quite some time. No more updating the PC GPU, because there aren't any games on it I want to play for that long. The 6800 is enough for me in terms of testing shaders and effects for work. So unless they bring the GPU price down to much less than 500 bucks, it ain't going mass-market.

durbdk
05-19-2005, 06:46 AM
YES PLEASE! :drool::drool::drool:

Thanks for the info. I was getting ready to upgrade my card, but I think I'll wait for a while and see what develops. Exciting times...

imashination
05-19-2005, 10:24 AM
x86 technology is dead and has been for a while.

They've said that every year for the past X years, and it's still a load of crap.

Thalaxis
05-19-2005, 04:51 PM
They've said that every year for the past X years, and it's still a load of crap.

Apparently some people just can't read... or think... what else can you expect? Just reading that post was amusing, since he's basically criticizing AMD and Intel for doing exactly what these consoles are showcasing in terms of technological progress.

It's pretty likely that this console generation will have a huge effect on PC gaming, for the simple reason that writing a game for a PC means targeting a platform that doesn't exist when you get started, yet it still has to run on the hardware that is current when you get started. So with a 2-3 year game development cycle, you have to target no less than SIX generations of hardware, and just on the graphics and processor side that's already a huge number of combinations to worry about. Throw in networking, chipsets, audio, and controllers, and the sky's the limit. With WHQL, no one but Dell and HP would be able to ship a working machine at all.

The consoles are a known and static target. They're therefore easier to code for AND easier to QA and support. The next generation is also heavily optimized for gaming and multimedia, so even though PCs are going to be more powerful and equipped with more memory, more disk, and by 2007 more horsepower, newer graphics and audio technology and so on, they'll still be general-purpose machines, not optimized gaming platforms. That means that the extra muscle will probably just make up for not being as optimized. :)

BillSpradlin
05-19-2005, 09:40 PM
Apparently some people just can't read... or think... what else can you expect? Just reading that post was amusing, since he's basically criticizing AMD and Intel for doing exactly what these consoles are showcasing in terms of technological progress.

It's pretty likely that this console generation will have a huge effect on PC gaming, for the simple reason that writing a game for a PC means targeting a platform that doesn't exist when you get started, yet it still has to run on the hardware that is current when you get started. So with a 2-3 year game development cycle, you have to target no less than SIX generations of hardware, and just on the graphics and processor side that's already a huge number of combinations to worry about. Throw in networking, chipsets, audio, and controllers, and the sky's the limit. With WHQL, no one but Dell and HP would be able to ship a working machine at all.

The consoles are a known and static target. They're therefore easier to code for AND easier to QA and support. The next generation is also heavily optimized for gaming and multimedia, so even though PCs are going to be more powerful and equipped with more memory, more disk, and by 2007 more horsepower, newer graphics and audio technology and so on, they'll still be general-purpose machines, not optimized gaming platforms. That means that the extra muscle will probably just make up for not being as optimized. :)


You are exactly right, some people's reading comprehension is baffling. I wouldn't agree by any means that the x86 architecture has been "dead" for a long time; quite the contrary, it's flourished well beyond what most of us thought it would. However, when you have the leaders of AMD and Intel both scrambling to squeeze every last drop out of it, that's a clear indication that it's time for something new. This hasn't been the case over the past 10 years; in fact, there was quite a bit of room for improvement up until recently.

The console makers are by no means left out of criticism; they fall under the umbrella of disdain right alongside Intel and AMD.

Maybe I should have been clearer, but I was in a hurry and just threw out a post. I'll be sure to be less subtle for those only reading what they want to hear.

Thalaxis
05-19-2005, 09:57 PM
You are exactly right, some people's reading comprehension is baffling. I wouldn't agree by any means that the x86 architecture has been "dead" for a long time; quite the contrary, it's flourished well beyond what most of us thought it would. However, when you have the leaders of AMD and Intel both scrambling to squeeze every last drop out of it, that's a clear indication that it's time for something new. This hasn't been the case over the past 10 years; in fact, there was quite a bit of room for improvement up until recently.


There's obviously a lot of room for improvement left. Going multi-core isn't a cop-out.


The console makers are by no means left out of criticism; they fall under the umbrella of disdain right alongside Intel and AMD.


That's nonsensical.

Maybe I should have been clearer, but I was in a hurry and just threw out a post. I'll be sure to be less subtle for those only reading what they want to hear.

The problem was the content, not the presentation.

Hazdaz
05-19-2005, 10:14 PM
What BILLSPRADLIN said is semi-correct. The x86 architecture isn't exactly dead - nor will it be any time soon... BUT what AMD and Intel must go through to keep on increasing the performance of their chips is getting more and more expensive and more and more (needlessly) complex (and these dual-core chips are just another example), and they basically have to find ways AROUND the original design of the architecture to keep everything compatible. That is really the biggest issue - all the legacy hardware and software that HAS TO run no matter what.

If they were allowed to design a brand-new chip from a clean sheet of paper, they wouldn't have to contend with all the backward compatibility and outright limitations of an architecture that has been around for ~20 years. Now mind you, I realize that there is no real way to "start clean" - after all, ~98% of all personal computers run on this architecture - and chips from Intel and AMD will continue to get more and more powerful, but at some point in the future something will have to happen.

rakmaya
05-19-2005, 10:41 PM
The x86 execution model is a very fine and robust model. The basic principle behind the architecture is as good as Cell, PowerPC and everything else. However, it is the way of implementing it that is probably old. Intel and AMD are giving their systems more muscle. Even if the internal architecture and the way the chip is designed to run the x86 model change in the future, the x86 execution model will always be alive.

Getting back to the point of the thread, millions of people and I love consoles because even buying all 3 systems costs less than one PC upgrade. In 2 years, the high-end graphics card will become mainstream, the price will drop to maybe 300, it might get included with new PCs, and those might be dual-core as that becomes standard, etc... But the price to build such a system for gaming is too much. If you do 3D modeling or compositing, you have the extra power to use for something useful, but that doesn't add up to any significant population. So from a global perspective, PC upgrades are not that useful when it comes to games, unless of course the particular game of interest doesn't exist on or doesn't work very well with a console (FPS, etc.).

Zenbu
05-19-2005, 11:13 PM
That is something that will change in the next generation, because the X360, PS3 and Revolution are insisting very much on networked games. Apart from shooters, the only reason PC games will survive is the independent developers who want to do something for less. But with programs like Xbox Live and online-enabled RPGs, PC games will have a hard time keeping up with console games.

They will have an aftermarket, but depending on the cost/production/time ratio, it can vanish quickly before we realize it.

I meant more on the individual side. For example: I used to edit and fiddle with all the Infinity/Aurora engine games (Baldur's Gate, Planescape, Neverwinter Nights), and even mess with the FPS stuff (Doom, Quake). There's just more freedom in editing almost anything and everything about a game on the PC - stuff like Natural Selection and countless other Half-Life mods happens with that kind of freedom. Freedom that I personally think consoles will never have.

rakmaya
05-19-2005, 11:19 PM
Yes, modding is something the PC will always thrive on. That is one major reason PC games live on even after the end of the world.

Oh, but if MS puts their OS onto the Xbox 360 and bundles a Windows + Norton AntiVirus package, keyboard and mouse, you could do that on the Xbox 360 :rolleyes:

Hazdaz
05-19-2005, 11:36 PM
Yes, modding is something the PC will always thrive on. That is one major reason PC games live on even after the end of the world.

Oh, but if MS puts their OS onto the Xbox 360 and bundles a Windows + Norton AntiVirus package, keyboard and mouse, you could do that on the Xbox 360 :rolleyes:

Extrapolating some ideas from what I have seen already - my prediction is that you'll actually be able to edit next-gen XB360 games on your PC, and then upload them to the console to play (and most definitely share with your friends). Sure, you probably won't be able to do full-on new games like you can on a PC, but making new levels and stuff like that is to be expected.

DotPainter
05-20-2005, 12:22 AM
The consoles are a known and static target. They're therefore easier to code for AND easier to QA and support. The next generation is also heavily optimized for gaming and multimedia, so even though PCs are going to be more powerful and equipped with more memory, more disk, and by 2007 more horsepower, newer graphics and audio technology and so on, they'll still be general-purpose machines, not optimized gaming platforms. That means that the extra muscle will probably just make up for not being as optimized. :)

The key to me is how many companies go through all of the effort to publish games that cater to high-end hardware that is exclusive to the PC. With the next-gen consoles being very much graphics powerhouses that will not see the full advantage of their capabilities for a few years, companies may decide to focus on pushing the GFX envelope on the consoles first and then port it to the PC second. I am sure that there will be quite a few effects that will run better on the "highly optimized" consoles and will not run well or be implemented on the PC (due to ATI and NVIDIA's console-only "special sauce"). Does anyone remember how real-time shadows like those in Splinter Cell did not make it to the PC until way after the consoles? On top of that, with so many game studios closing or merging, the mega-corporations may not see the PC business as profitable and may decide to push console titles to the hilt. Likewise, there are only a few powerhouse game engines out there anyway, and the most notable of these, the Unreal engine, has already shown a HEAVY slant toward the consoles versus the PC. That makes it even easier for a small house to focus on console titles. These are the factors I am more concerned about than anything. Lastly, how many titles at E3 showcased any sooper dooper, smack ya mama graphics that were exclusive to the PC? ...

That's right... a whopping total of 0! Yeah, there may be new titles that will allow you to create custom maps and all that kind of stuff, but, bottom line, the graphics are taking a back seat to the consoles. And there may be even better hardware available for PCs in a few years. But unless developers focus on building games that exploit those features, it will be a moot point (especially since most won't have that hardware anyway). Yes, PC titles will have exclusive features that the consoles don't for a while longer, but graphics goodness won't be one of them. :)

And, let's face it, in order for the new consoles to do well, they HAVE to get new games out that showcase the capabilities they are all touting at E3. So it is only natural that a lot of developers will be focusing their energies there for a while, trying to get a piece of the pie.

On a related note, what I want to know is when 3D software will really take advantage of the new capabilities of the next-gen video cards and CPUs that are on the way. I mean, it gets me fairly upset to see a fully textured, normal-mapped environment and models moving around in it, with HDRI and soft shadows, all being rendered in real time. Especially because I know a similar scene, with similar amounts of detail, WILL NOT RUN on my PC in real time with any 3D app. 3D apps should be taking note of the "revolution" in computing power and start staging a software revolution to take advantage of these features:

1) close to or almost real time previews of complex scenes with sophisticated texturing and lighting
2) close to or almost real time compositing of effects WITHIN one package
3) close to or almost real time volumetrics
4) close to or almost real time HDRI previews
5) close to or almost real time GI and caustics.
6) close to or almost real time BLOOM, Bokeh Effects, light halos and effects, lens flares.... and so on
7) close to or almost real time lens effects, depth of field, blurs (camera, object and general purpose)
8) close to or almost real time NPR shader effects......
and on and on

All of these things are possible with the power that will be in PCs in the next year or two, especially for those doing DCC work. I even read somewhere that someone at E3 said they were doing real-time SSS for skin effects... Even if we can't get everything in real time, I am sure that using GPUs as part of the render pipeline will be able to enhance things like lighting previews as well as animation pre-renders and maybe even the final render. The key to it is the algorithms that game developers use to simulate advanced effects on GPUs. These algorithms are becoming very powerful and would give software rendering a significant advantage if they were incorporated. Right now, the brute-force, all-or-nothing school of 3D apps is showing signs of being too rigid and not allowing room for competing algorithms that produce similar quality in shorter time frames...

3D software has a ways to go to catch up and take advantage of hardware... Almost makes me want to take a next-gen game engine and convert it to a general-purpose 3D platform... On second thought, maybe 3D apps and game engines should merge. The capabilities of the two are coming closer and closer together anyway... :)

Hazdaz
05-20-2005, 12:42 AM
1) close to or almost real time previews of complex scenes with sophisticated texturing and lighting
2) close to or almost real time compositing of effects WITHIN one package
3) close to or almost real time volumetrics
4) close to or almost real time HDRI previews
5) close to or almost real time GI and caustics.
6) close to or almost real time BLOOM, Bokeh Effects, light halos and effects, lens flares.... and so on
7) close to or almost real time lens effects, depth of field, blurs (camera, object and general purpose)
8) close to or almost real time NPR shader effects......
and on and on


Amen brotha.

I have been saying the same thing for a long, long time, and with each passing generation of video card that is "supposed" to do all these things, I am always disappointed by software that doesn't let me. http://images.nasioc.com/forums/images/smilies/furious.gif

rakmaya
05-20-2005, 01:53 AM
If you have done software engineering for a 3D or compositing application, you will understand the real nature of 3D packages.

Games deploy a concentrated strategy that restricts effects once you get past a certain point. Raytracing, for example, is not something games use. Lighting in games relies on a different philosophy, yet these are unavoidable parts of CG film production. Most 3D renderers use a different rendering strategy because of the need for advanced, CPU-expensive features.

If you are talking about accelerated previewing in the 3D view of a 3D package, it is already there, just not fully implemented for the general public. A lot of studios have programmers writing plugins to create RenderMan and mental ray shaders, and HLSL shaders as well.

However, you are correct in one respect. There are features of the renderers used in 3D packages that could be moved onto the GPU to speed up processing (some have already started doing so). However, due to the complexity and non-uniformity of the PC world, this is almost never done. Maybe time will change that and certain things might become GPU-accelerated. On a side note, anyone who uses an FX5900 or less shouldn't even dream about these things :rolleyes:

DotPainter
05-20-2005, 02:28 AM
It is a paradigm shift in thinking about 3D. If you look at some of the latest research in 3D at universities and such, a lot of it has to do with using GPUs to perform various tasks like raytracing, GI, displacement and all sorts of stuff. I agree, most 3D packages tend to be conservative when it comes to cutting-edge research, and many wait until the last possible moment to implement years-old research, mostly to keep up with the Joneses. That model keeps most of the benefits of current technology, research and algorithms 4 or 5 (or more) years away from commercial products. Otherwise, studios hire programmers to take advantage of the cutting edge for custom solutions, which may never make it to the public.

To me, a 3D application should just be a general-purpose framework that allows you to use whatever is best suited to get the job done. Basic research just needs to be done and implemented in some of the fundamental areas: passing geometry, vertex and pixel data back and forth between software, CPU and GPU.

Your 3D package should at least be able to generate and optimize geometry for use on a GPU for things like normal mapping. Imagine using a ZBrush-type app that uses the GPU to render high-poly normal-mapped objects from low-poly objects as you edit in real time. Real-time manipulation and preview of UV textures. These things are already possible, but the main issue is how to pass geometry info from a 3D package to the GPU while automatically optimizing the mesh for the GPU, something similar to what happens when using sub-pixel displacement: the geometry is triangulated, subdivided and rendered. Take that same approach, but only let the software (or maybe the GPU) do the first pass of triangulating the mesh, and then let the GPU handle the displacement.

If there were a way to treat this data from the GPU as a pass and assign it to a pipeline on the GPU, it would be easy to assign various passes to GPU/CPU pipelines, which would make current paradigms for 3D packages obsolete overnight. Build your base geometry, then make passes for everything else: normal maps, lighting, displacement, texturing, masks (shadow, highlight, color, diffuse, z-buffer, alpha, etc.), blur, HDRI, particles and so on. Each of those passes can be processed by the CPU, passed to the GPU, or both. The advantage of passes is that you get to use the pipelining of the upcoming dual and quad (or more) GPU architectures, so that you can really scale performance quite well with added hardware. However, even basic setups will work. In this scheme, the 3D app is more like an orchestrator where you create your passes and set the various parameters; it then sends the work to whatever is most appropriate for the job (a rough sketch of this idea follows below). This, of course, may require almost an operating system of its own, since you will be orchestrating the activities of potentially many CPU cores and GPU cores with direct control over how resources are used to achieve a result. But heck, that is what Cell, SLI and the new Windows graphics subsystem are SUPPOSED to be bringing about anyway...
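A minimal sketch of that orchestrator idea in Python. Every class, function and pass name here is hypothetical and the "backends" are just stand-ins, not any real 3D package or GPU API; it only illustrates routing passes to a GPU or CPU path:

# Hypothetical sketch of the pass "orchestrator" idea above.
# Nothing here is a real 3D package or GPU API -- the backends are stand-ins.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class RenderPass:
    name: str                      # e.g. "normal_map", "displacement", "hdri_preview"
    prefer_gpu: bool = True        # fall back to the CPU path if no GPU is available
    params: Dict[str, float] = field(default_factory=dict)

class Orchestrator:
    """Routes each pass to whichever backend is available and preferred."""

    def __init__(self, gpu_available: bool):
        self.gpu_available = gpu_available
        self.backends: Dict[str, Callable[[RenderPass], str]] = {
            "gpu": lambda p: f"{p.name}: rendered on the GPU",
            "cpu": lambda p: f"{p.name}: rendered in software",
        }

    def run(self, passes: List[RenderPass]) -> List[str]:
        results = []
        for p in passes:
            backend = "gpu" if (p.prefer_gpu and self.gpu_available) else "cpu"
            results.append(self.backends[backend](p))
        return results

# The same pass list works whether or not a GPU is present.
passes = [RenderPass("base_geometry", prefer_gpu=False),
          RenderPass("subpixel_displacement"),
          RenderPass("lighting_preview")]
print(Orchestrator(gpu_available=True).run(passes))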

rakmaya
05-20-2005, 02:37 AM
Yes, good thought process. The difference is that 3D packages such as Maya and all the polygonal modeling ones rely on a basic algorithm that cannot be changed. Unlike with ZBrush, since these are poly-based we need a way of taking everything we can do on the CPU and putting it on the GPU. Unfortunately, since the GPU is not general-purpose, that ultimate limitation cannot be overcome.

I do hope real-time previewing in the 3D viewport will get better. Having worked on a virtual reality project for 2 years, I am surprised how long Alias and Discreet will hold off on rewriting their old techniques. This is where some of the new modeling applications are benefiting.

Thalaxis
05-20-2005, 05:07 AM
BUT what AMD and Intel must go through to keep on increasing the performance of their chips is getting more and more expensive and more and more (needlessly) complex (and these dual-core chips are just another example),


And look at what, just by way of example, IBM has to do to keep their stuff moving forward... while losing millions per quarter just doing fabrication R&D. The cost of semiconductors in general is just staggering now, regardless of the instruction set.


If they were allowed to design a brand-new chip from a clean sheet of paper, they wouldn't have to contend with all the backward compatibility and outright limitations of an architecture that has been around for ~20 years. Now mind you, I realize that there is no real way to "start clean" - after all, ~98% of all personal computers run on this architecture - and chips from Intel and AMD will continue to get more and more powerful, but at some point in the future something will have to happen.

My guess is that it will be software emulation like IEL or Transmeta's code morphing, but we'll see. That's one of the biggest driving forces behind Intel's Itanium development; look at the transistor counts for the cores, and think about the cost of SRAM vs the cost of dynamic (and leaky) logic, and you'll see why a 1.7-billion-transistor Montecito can have a production cost comparable to that of a 200-million-transistor P4.


The x86 execution model is a very fine and robust model. The basic principle behind the architecture is as good as Cell, PowerPC and everything else. However, it is the way of implementing it that is probably old. Intel and AMD are giving their systems more muscle. Even if the internal architecture and the way the chip is designed to run the x86 model change in the future, the x86 execution model will always be alive.


There you're wrong. The x86 instruction set is constrained by design compromises that resulted from putting together a processor in a week. The only reason that x86 has reached such heights is pure volume. Intel is still making record revenues on x86 sales, and AMD is making a profit without even giving Intel a slight bruise. That's saying a lot about the market.


Lastly, how many titles at E3 showcased any sooper dooper, smack ya mama graphics that were exclusive to the PC?


Also note that the big two graphics card makers are launching their next-generation flagship parts on the consoles, not the PC. When the PS2 launched, the high-end graphics cards had a more impressive feature set, but this time it's the other way around.

Anyone else notice the "free" hardware anti-aliasing engine in the Xbox 360? You know, the one that's based on a 192-tap 10 MB eDRAM back-buffer custom-built by NEC? I'll give you three guesses on when we'll see that on a PC add-in card. And keep in mind that both of the "big two" consoles are targeted at 3/5 of the price of a current high-end graphics card. Sure, they have less memory... but they also have 4x the interconnect bandwidth.

That's more bandwidth than a current quad-Opteron system. It's more bandwidth than PCI-X could hope to deliver. It's also more bandwidth than 4 channels of DDR2-667 would give you.

But a well-configured PC with specs like that would be called Prism and sell for around $40k. :)
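For reference, the rough arithmetic behind the last two comparisons -- back-of-envelope peak figures only, assuming standard 64-bit (8-byte) memory channels and 64-bit/133 MHz PCI-X; illustrative Python, not from the post:

# Rough numbers behind those comparisons (peak rates, 8-byte-wide channels assumed).
def channel_bw_gbs(mega_transfers_per_sec, bytes_per_transfer=8):
    """Peak bandwidth of one channel/bus in GB/s."""
    return mega_transfers_per_sec * bytes_per_transfer / 1000.0

ddr2_667_single = channel_bw_gbs(667)   # ~5.3 GB/s per channel
ddr2_667_quad = 4 * ddr2_667_single     # ~21.3 GB/s for four channels
pci_x_133 = channel_bw_gbs(133)         # 64-bit PCI-X @ 133 MHz, ~1.06 GB/s

print(f"DDR2-667 x4 channels: {ddr2_667_quad:.1f} GB/s")
print(f"PCI-X 64-bit/133 MHz: {pci_x_133:.2f} GB/s")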

BillB
05-20-2005, 05:29 AM
companies may decide to focus on pushing the GFX envelope on the consoles first and then port it to the PC second

I'm missing what it is people think has changed fundamentally here. We're just at the same point of the cycle we've been to twice before, when a new console comes out and for a short time it's at the forefront of what's possible graphically. These consoles will come out, and less than a year to eighteen months later a PC will be able to kick their butts graphically, and over the 5-year life of the console it'll look worse and worse. Picture Half-Life 2 or Doom 3 on a PS2 - ick!

"The reports of the death of the PC as a gaming platform are greatly exaggerated."

DotPainter
05-20-2005, 09:52 AM
Yes, good thought process. The difference is that 3D packages such as Maya and all the polygonal modeling ones rely on a basic algorithm that cannot be changed. Unlike with ZBrush, since these are poly-based we need a way of taking everything we can do on the CPU and putting it on the GPU. Unfortunately, since the GPU is not general-purpose, that ultimate limitation cannot be overcome.

Right, but I am not suggesting that we use the GPU for modeling... directly.

Imagine this:

I have a piece of geometry I have modeled and it is fairly low-poly. The modeling was done purely on the CPU. However, I decide to make a new pass called the sub-pixel displacement pass for viewing and editing a high-resolution displacement-mapped version of the same geometry. Yes, the base geometry was modeled and can still be edited using the CPU, but I can also go into the sub-d displacement pass and edit a high-poly sub-pixel-displaced version of the object that is rendered by either the GPU or the CPU, depending on the configuration of the hardware. Obviously, if you don't have the GPU hardware, then you can only use the CPU/software engine to render it, which will be much slower than using the GPU. So in this sense, the GPU is not "general purpose": it is rendering a normal-mapped, displaced version of the base geometry and calculating changes to that geometry. This is not too dissimilar to what will happen with normal-mapped, displaced characters in next-gen games. All the other passes work the same way; these are not "general purpose" but passes focusing on achieving a specific effect, either rendered/manipulated on the GPU or rendered/manipulated by the CPU/software. A good example of what I mean is how Particle Illusion works. Particle Illusion is all GPU-based. A pass in my concept would be similar to what happens AFTER you pass your geometry to Particle Illusion. The only difference is that it would all happen in ONE package and you would have more flexibility in configuring the passes, since you have access to more of the geometry/masking/g-buffer data, which can be used for advanced compositing effects right within the same package... particle illusion (http://www.wondertouch.com/info1.asp)

Another, more basic example of what a rendering pass or preview could look like in the same package would be insta viz. (http://www.eovia.com/products/amapi_addons/instant_viz.asp)
The only difference in my concept would be that the texture channels - bump, specular, diffuse, glossiness, etc. - would all be more complex. Since the pass would be rendered by either the CPU or the GPU, the settings would allow the GPU to render a pass with the same quality and level of detail as the CPU/software side, but in less time. The issue is making the shader languages used for building the textures compatible from the software engine to the GPU. The nice thing is that you would have direct access to each texture channel as a separate sub-pass unto itself, which can then be manipulated for compositing purposes... directly within the same package.

I guess the best comparison I can make is to something that already exists called SmodeStudio. (http://www.galago.fr/index.php?module=pagemaster&PAGE_user_op=view_page&PAGE_id=12&MMN_position=39:31) The only difference in my concept would be that you can create and edit the geometry on the fly within the same package and all the other effects would instantly be updated with the new geometry info.

rakmaya
05-20-2005, 10:32 AM
There you're wrong. The x86 instruction set is constrained by design compromises that resulted from putting together a processor in a week. The only reason that x86 has reached such heights is pure volume. Intel is still making record revenues on x86 sales, and AMD is making a profit without even giving Intel a slight bruise. That's saying a lot about the market.



I am not saying the processor architecture is robust. The x86 execution model and x86 processors are independent. AMD and Intel have different processor architectures to run the x86 execution model. The x86 execution model is simply the underlying programming/code model, which has very little direct relation to the processor implementation. Remember that the RISC execution model was very primitive as well, until many developers added things on top of it. Of course, that was the beauty of that model. However, now it has become the most preferred one in many ways because of its simplicity. Of course, processor manufacturers (Sun/IBM, etc.) do rely on different mechanisms to implement it as well. Changing the processor architecture does not mean they lose backward compatibility, as long as their new one supports the x86 execution model.

Since the x86 execution model is only implemented by AMD and Intel, you could say that if AMD or Intel goes out of commission, or stops creating chips for it, it is dead. But I don't see that happening in the near future either.

Thalaxis
05-20-2005, 03:44 PM
I am not saying the processor architecture is robust. The x86 execution model and x86 processors are independent.


Not really; the processor designers have to jump through a lot of hoops to decode instructions, extract ILP, and retire them in order, as required by the ISA. The current performance lead that x86 has over the vast majority of the RISC processors out there is purely a result of the fact that x86 has a lot more effort thrown at it in the form of custom circuit design, process optimization, and compiler development.

It would be very cool to see what AMD and Intel processor teams could do without the x86 baggage, but x86 has so much momentum behind it that not even Intel could kill it if they tried. They'd go out of business first, and even without AMD, x86 would probably end up living on for a long time. Via'd be quite pleased about it, I suppose ;)


Remember that the RISC execution model was very primitive as well, until many developers added things on top of it. Of course, that was the beauty of that model.


That's very true, and it's actually almost purely a result of some very bad business decisions by IBM and some very good business decisions by Intel and Microsoft that pushed x86 into the lead. Don't forget that x86 got to a position of market leadership LONG before it had anywhere close to performance parity, let alone leadership. Right now, the only things out there that are faster are either custom or VERY high end (e.g. POWER5 and Itanium2/9M). And even those don't actually lead x86 across the board, which is IMO simply amazing. (That x86 is doing so well, that is.)


However, now it has become the most preferred one in many ways because of its simplicity. Of course, processor manufacturers (Sun/IBM, etc.) do rely on different mechanisms to implement it as well. Changing the processor architecture does not mean they lose backward compatibility, as long as their new one supports the x86 execution model.


x86 is preferred primarily because of software, not simplicity; it's among the more complex ISAs out there, and definitely the cruftiest. AMD-64 is actually a step toward removing some of the cruft -- they do not, AFAIK, support x87 in long mode (one of x86's biggest bottlenecks), and they don't support real mode, IIRC (so no more DOS, hardly a big loss nowadays :)).


Since the x86 execution model is only implemented by AMD and Intel, you could say that if AMD or Intel goes out of commission, or stops creating chips for it, it is dead. But I don't see that happening in the near future either.

And Via, don't forget... they have almost 1% of the market now :)
But you're right; AMD's situation is still a bit precarious, though they seem to be doing pretty well (let's hope!). Intel, meanwhile, is still pulling down record revenues, and is now also the only semiconductor company left with enough resources to do the entire semiconductor pipeline (develop new processes, build new fabs, and design, validate, and ship processors) alone. Everyone else, including IBM, has been teaming up to share the costs because the new processes and fabs are getting so expensive that they're making it nearly impossible for anyone else to go it alone.

rakmaya
05-20-2005, 05:51 PM
x86 is preferred primarily because of software, not simplicity; it's among the more complex ISAs out there, and definitely the cruftiest. AMD-64 is actually a step toward removing some of the cruft -- they do not, AFAIK, support x87 in long mode (one of x86's biggest bottlenecks), and they don't support real mode, IIRC (so no more DOS, hardly a big loss nowadays :)).

No, I meant RISC is preferred because of its simplicity, and its execution model is so open that most processor makers can develop and push the envelope. This is probably because of the lack of extreme developers like Intel and AMD, who have already pushed many aspects of their processors to the edge. Once we get the same type of research and development on RISC, we will begin to see its limits as well (although not any time soon).

Thalaxis
05-20-2005, 06:16 PM
No, I meant RISC is preferred because of its simplicity, and its execution model is so open that most processor makers can develop and push the envelope. This is probably because of the lack of extreme developers like Intel and AMD, who have already pushed many aspects of their processors to the edge. Once we get the same type of research and development on RISC, we will begin to see its limits as well (although not any time soon).

Sorry, I misunderstood your context, but obviously we're saying the same thing here :)

I don't think we'll get that level of R&D on RISC until x86 emulation works well enough to make x86 compatibility irrelevant. :hmm:

CGTalk Moderation
05-20-2005, 06:16 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.