Nvidia unveils raytracing-focused Quadro RTX GPU family


#1

I just read this:

Nvidia


#2

It's happening. The question is: how much do these GPUs cost, and how long before we can use them in Max, Maya, C4D, and so on to speed up offline rendering?

Also, let's see how long it takes them to produce a real-time ray tracing solution for games.


#3
  1. $2,300-$10,000 per card, depending on the configuration

  2. They'll offer immediate performance gains for GPU-accelerated renderers. Renderers such as Redshift, Arnold, and PRMan are also being optimized specifically for this new architecture, and updated builds should be available shortly.

  3. There’s already a build of Unreal Engine which supports this:

//youtu.be/Z85aPqqJzs0

Hypothetically, we could see publicly downloadable demos of this tech in the coming months. Probably 3-5 years max before it becomes the norm in consumer gaming cards.


#4

Which is the better card - Titan V or this one? What's the difference?


#5

What about Blender EEVEE? Do you have any idea?


#6

Yes, Blender as well. It was shown as one of the software packages that supports this technology. Anything that uses CUDA will see performance gains. I'm curious about OpenCL-enabled applications, though.

You can watch the full presentation here:

//youtu.be/jY28N0kv7Pk


#7

Blender Cycles is mentioned here as well:

https://blogs.nvidia.com/blog/2018/08/13/turing-industry-support/


#8

From the above link:

[quote=]Chaos Group: Preview of Project Lavina using Microsoft’s DXR to deliver 3-5x real-time ray-tracing performance over Volta generation for scenes exported from Autodesk 3ds Max and Maya. VRAY GPU using RT Cores in Quadro RTX for substantial acceleration over NVIDIA Pascal generation.[/quote]


#9

It seems subpar compared to traditional rendering. In short, it looks fake.


#10

Are you referring to the Porsche spot? At this point, whether something rendered with this technology looks ‘fake’ depends on the artistic execution of the particular shot. For non-real-time rendering, the images produced by this technology will be indistinguishable from those generated by an MCRT renderer running on a traditional CPU. In the case of the Porsche sequence, I think this was more a demo of real-time ray tracing within the context of Unreal Engine. The car paint shader could probably be a bit better, but then they may have taken a performance hit on the rendering.
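For anyone unfamiliar with the acronym: MCRT (Monte Carlo ray tracing) just means each pixel is an average over many randomly sampled rays, with the noise shrinking as the sample count grows - which is why the offline results converge to the same image regardless of which hardware traced the rays. A toy CUDA sketch of the idea (the shading function is a made-up stand-in, not any real renderer's code):

```cpp
// Toy Monte Carlo estimator: each pixel averages spp random samples.
// Noise falls off roughly as 1/sqrt(spp). Purely illustrative.
#include <curand_kernel.h>

__device__ float shadeSample(float px, float py, curandState* rng) {
    // Hypothetical stand-in for tracing one randomly jittered ray into a scene.
    float jx = curand_uniform(rng);
    float jy = curand_uniform(rng);
    return 0.5f + 0.5f * __sinf((px + jx) * (py + jy) * 0.01f);  // fake "radiance"
}

__global__ void render(float* image, int w, int h, int spp, unsigned long long seed) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    curandState rng;
    curand_init(seed, (unsigned long long)(y * w + x), 0, &rng);

    float sum = 0.0f;
    for (int s = 0; s < spp; ++s)          // more samples -> less noise
        sum += shadeSample((float)x, (float)y, &rng);

    image[y * w + x] = sum / spp;          // Monte Carlo average
}
```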


#11

So they teamed up with some three dozen development partners two or more years ago, and we’re only now being told that real-time ray tracing is “the next thing”.

I guess Nvidia wanted us to keep buying Titans and Titan Vs, rather than wait for the new RTX GPUs.

Typical Nvidia.


#12

Innovation takes time, especially on the scale of these new RT Cores. As stated in their SIGGRAPH presentation, this is the biggest advancement from Nvidia since CUDA, their general-purpose computing API, was released in 2007. I don’t think there was any kind of secret conspiracy between Nvidia and their development partners to keep information out of the public’s hands. It’s just that any software which currently uses Nvidia’s CUDA and/or OptiX APIs gets an automatic performance boost with this new hardware.

As for Nvidia’s Pascal- and Volta-based hardware, I doubt that anyone who purchased it over the past few years for commercial projects feels in any way slighted or cheated by this new release. Having a Volta or Pascal card for accelerated rendering/computing is still preferable to a generic CPU for a lot of applications.


#13

It looks like the consumer/gaming versions of these cards might be coming sooner rather than later:

https://www.techradar.com/news/the-nvidia-rtx-2080-reportedly-costs-only-dollar649

There’s an Nvidia Geforce event scheduled for August 20th where these consumer cards may be unveiled.


#14

Good points, thank you. Not only the Porsche spot, but also the dancing-soldier demo had something “strange” about it. I don’t know whether the reflections on that character model were meant to be irregular, but it added to the feeling of a gimmick for me personally.
It’s definitely a great leap, or rather the start of one. It just should have happened five years ago. Maybe it’s happening now because AMD is catching up with its new many-core processors.


#15

Yes, the dancing soldier looked a bit strange - a bit plastic and too bright, imo - while the rest of the video was great. I think you still have to differentiate between two scenarios: using this tech for real-time rendering, or using it for accelerated rendering in DCC apps. I saw some videos from the Autodesk booth showing Arnold GPU on a Spider-Man shot - that was impressive and looked more realistic than the real-time demos.


#16

By far the Turing board: it has both Tensor Cores (denoising) and RT Cores (BVH traversal + triangle intersection). The Volta architecture only has Tensor Cores and has to emulate triangle intersection in CUDA software; it is nowhere close to the 10 gigarays/s of the higher-end RTX boards. (A rough sketch of what that intersection work looks like in software is below.)

Remember the little Star Wars elevator short: it took four Titan Vs and barely ran at 25 FPS. Turing runs it faster on a single 2080 Ti (I’ll let you do the perf/$ math).
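To make that concrete, here's a minimal CUDA sketch of the Möller-Trumbore ray/triangle test (all names here are illustrative, not any vendor API). On Pascal/Volta, this test plus the BVH traversal around it runs as ordinary CUDA code occupying the SMs; Turing moves both into fixed-function RT Core hardware, which is where the gigarays-per-second figures come from:

```cpp
// Software ray/triangle intersection (Möller-Trumbore) - the kind of
// work Turing's RT Cores do in fixed-function hardware. Illustrative only.
#include <cuda_runtime.h>

struct Ray { float3 o, d; };          // origin, direction
struct Tri { float3 v0, v1, v2; };    // triangle vertices

__device__ float3 sub(float3 a, float3 b) { return make_float3(a.x-b.x, a.y-b.y, a.z-b.z); }
__device__ float3 cross3(float3 a, float3 b) {
    return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
}
__device__ float dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns hit distance t, or -1.0f on a miss.
__device__ float intersect(const Ray& r, const Tri& tri) {
    const float EPS = 1e-7f;
    float3 e1 = sub(tri.v1, tri.v0);
    float3 e2 = sub(tri.v2, tri.v0);
    float3 p  = cross3(r.d, e2);
    float det = dot3(e1, p);
    if (fabsf(det) < EPS) return -1.0f;          // ray parallel to triangle
    float inv = 1.0f / det;
    float3 s = sub(r.o, tri.v0);
    float u = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;      // outside barycentric range
    float3 q = cross3(s, e1);
    float v = dot3(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t = dot3(e2, q) * inv;
    return (t > EPS) ? t : -1.0f;
}

// One thread per ray, brute force over all triangles. A real renderer
// walks a BVH here instead - exactly the traversal RT Cores accelerate.
__global__ void trace(const Ray* rays, const Tri* tris, int nTris,
                      float* hitT, int nRays) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;
    float best = 1e30f;
    for (int j = 0; j < nTris; ++j) {
        float t = intersect(rays[i], tris[j]);
        if (t > 0.0f && t < best) best = t;
    }
    hitT[i] = (best < 1e30f) ? best : -1.0f;     // nearest hit, or miss
}
```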


#17

RTX 2070 is $600, RTX 2080 is $800


#18

I reckon we’ll see robust acceleration of major renderers with RTX in 5-10 years. By that time, several generations of cards with this technology will have come and gone. It’s really nice, as it’s directly usable for rendering, but it’s just a glimpse of the future.


#19

…five to ten YEARS…!!! I can’t wait that long…!! :frowning:


#20

Redshift, Octane, and V-Ray GPU will support it within months rather than years. This really is calling time on CPU rendering.