
View Full Version : Nvidia - Tesla


vauric
11-26-2008, 11:06 AM
Sorry if this has been posted already.

http://www.nvidia.com/object/personal_supercomputing.html

impressive.

Ikaria
11-26-2008, 11:23 AM
Would be even more so if it ran CGI apps...

DaveWortley
11-26-2008, 12:27 PM
Why wouldn't it? It runs 64-bit Windows.

meyers3d
11-26-2008, 01:14 PM
Yes, I've read about this, looks promising.

Ikaria
11-26-2008, 01:24 PM
Why wouldn't it? It runs 64-bit Windows.


Don't know the particulars, but someone mentioned having looked into
it on Highend3D's Maya listserv. My guess is that the Tesla GPUs might require
special drivers for apps like Maya, and Autodesk might need to rewrite
things on their end to take full advantage of all that's there.

Also, Boxx only mentions scientific apps in conjunction with their Tesla machine --
http://www.boxxtech.com/products/BOXX_PSC/BOXX_PSC_spec.asp

Szos
11-26-2008, 01:36 PM
Don't know the particulars, but someone mentioned having looked into
it on Highend3D's Maya listserv. My guess is that the Tesla GPUs might require
special drivers for apps like Maya, and Autodesk might need to rewrite
things on their end to take full advantage of all that's there.

Also, Boxx only mentions scientific apps in conjunction with their Tesla machine --
http://www.boxxtech.com/products/BOXX_PSC/BOXX_PSC_spec.asp

I would think so too.
This Tesla computer might be technologically impressive, but has little to no connection with what the vast majority of us do here on this website.
It looks like a specialized computer aimed at a niche (scientific) market... not unlike the market that SGI was in - high end scientific number crunching. Just the fact that much of that data is also in 3D does not mean it relates to apps like MAX or Maya.



in other words:
http://evilhideout.com/images/chief_wiggum.gif
"Nothing to see here... Move along folks."

airpot
11-26-2008, 01:46 PM
You could think of Tesla as a PC with an unbelievably fast GPU. Today I think there are only a few 3D applications that can make use of that power. XSI (via Ageia) can use CUDA for particle and rigid-body simulation. I also know that the grading software Scratch uses the GPU intensively, but its tasks are so simple that a Tesla would be overkill. Gelato could also run on a Tesla. And since Nvidia bought Mental Images, there is hope that in the future mental ray, or something similar, might be GPU-based. :)

Even ordinary gaming GPUs outperform today's CPUs.

BigPixolin
11-26-2008, 04:17 PM
"Nothing to see here... Move along folks."


I would not be so sure about this.
Think future: mental ray + CUDA + Tesla = pure awesomeness.

CHRiTTeR
11-26-2008, 05:06 PM
I would not be so sure about this.
Think future: mental ray + CUDA + Tesla = pure awesomeness.

Yes, if it weren't for the fact that it gets stated over and over again that raytracing on a GPU is not as efficient as raytracing on a CPU.

mustique
11-26-2008, 05:58 PM
The GPU is very young tech compared to the CPU, so it's no surprise that no 3D/rendering app makes proper use of it yet.

Bottom line is, today's fastest GPUs are up to 250 times faster than the fastest multicore CPUs at raw number crunching. Which means the realtime-raytracing 3D app/game everybody dreams about could, in theory, be a reality today.

The real question is whether the mainstream market will demand that much power and let this tech blossom. If so, it's going to be the gaming market, not us 3D guys or researchers, that will be the deciding factor.

705
11-26-2008, 07:00 PM
Well, there are many other ways to do DCC besides using 3D apps, right? ;)



Long live graphics programmers :D

enygma
11-27-2008, 03:53 PM
The real question is whether the mainstream market will demand that much power and let this tech blossom. If so, it's going to be the gaming market, not us 3D guys or researchers, that will be the deciding factor.
The gaming market has already demanded the power, and they use it. The core technology of the Tesla GPU is not unlike that in the Quadro FX 5800 or the GeForce GTX 280. The stream cores in the GPU are essentially the shader processing units, which is what allows for highly programmable shaders.

The bonus is that they happen to be so flexible that they can also be used for general-purpose programming. The added bonus on top of that is that, rather than telling programmers they must somehow bend the DirectX or OpenGL shader languages to do general work, NVIDIA has developed a set of libraries called CUDA, which is far more GPGPU-friendly for the programmer than GLSL or HLSL. It even supports explicit multi-GPU use (which is not the same as SLI).

So at this point, it is really up to the developers to adopt the technology and use it to their advantage, and the advantage of those that use their software. Heck, if there are any developers out there that wish to do remote testing of their CUDA enabled applications, I'm more than happy to allow that on my in-house test system (http://www.tycrid.com/?page_id=85).

Szos
11-27-2008, 08:21 PM
The gaming market has already demanded the power, and they use it. The core technology of the Tesla GPU is not unlike that in the Quadro FX 5800 or the GeForce GTX 280. The stream cores in the GPU are essentially the shader processing units, which is what allows for highly programmable shaders.

The bonus is that they happen to be so flexible that they can also be used for general-purpose programming. The added bonus on top of that is that, rather than telling programmers they must somehow bend the DirectX or OpenGL shader languages to do general work, NVIDIA has developed a set of libraries called CUDA, which is far more GPGPU-friendly for the programmer than GLSL or HLSL. It even supports explicit multi-GPU use (which is not the same as SLI).

So at this point, it is really up to the developers to adopt the technology and use it to their advantage, and the advantage of those that use their software. Heck, if there are any developers out there that wish to do remote testing of their CUDA enabled applications, I'm more than happy to allow that on my in-house test system (http://www.tycrid.com/?page_id=85).

But that is always the case, and that is always where the problems creep in. Without developer support, Nvidia or ATI or whoever can come out with the fastest CPU/GPU/whatever out there, but if it doesn't get the support it deserves, then all it is is a fancy paperweight.

instinct-vfx
11-27-2008, 08:27 PM
Having had some CUDA in-house development happen recently, I've got to add that CUDA is pretty incredibly limiting in many ways, which makes it unlikely to be a target for serious commercial software developers. Scientific apps? Sure! In-house? Maybe! Commercially? Sure....

There have been a lot of requests in the Chaos forums, for example. Just read Vlado's answers to get an overview of the COMMERCIAL use of systems like that... (cough cough, next to none, cough cough, especially with the rise in CPU power every 6 months, cough cough :D)

Regards,
Thorsten

enygma
11-27-2008, 09:37 PM
I can't really disagree with those points, as they are valid issues. One thing I could point out though is that while CPU speeds increase, so do GPU speeds.

Do you have a link to Vlado's post? I'd like to check it out.

instinct-vfx
11-28-2008, 07:58 AM
There are quite a few of them, with different bits of info. A search for "GPU*" on the Chaos forum brings them up. Here's an example:

http://www.chaosgroup.com/forums/vbulletin/showthread.php?t=40583&highlight=GPU*

Basically it comes down to "hassle vs. advantage" :D

Regards,
Thorsten

CGTalk Moderation
11-28-2008, 07:58 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.