I just read this:
It's happening. The question is: how much do these GPUs cost, and how long before we can use them in Max, Maya, C4D and so on for speeding up offline rendering?
Also, let's see how long they take to produce a real-time ray-tracing solution for games.
$2300-$10,000 per card, depending on the configuration
They'll offer immediate performance gains for GPU-accelerated renderers. Renderers such as Redshift, Arnold, and PRMan are currently being optimized for this new architecture and should be available shortly.
There’s already a build of Unreal Engine which supports this:
Hypothetically, we could be seeing some publicly available downloadable demos of this tech in the coming months. Probably 3-5 years max before this becomes the norm in consumer gaming cards.
Which is the better card - Titan V or this one? What's the difference?
What about Blender EEVEE, do you have any idea?
Yes, Blender as well. It was shown as one of the software packages which support this technology. Anything which uses CUDA will see performance gains. I'm curious about OpenCL-enabled applications though.
You can watch the full presentation here:
Blender Cycles is mentioned here as well:
From the above link:
[quote=]Chaos Group: Preview of Project Lavina using Microsoft's DXR to deliver 3-5x real-time ray-tracing performance over the Volta generation for scenes exported from Autodesk 3ds Max and Maya. V-Ray GPU using RT Cores in Quadro RTX for substantial acceleration over NVIDIA Pascal.
It seems subpar to traditional rendering. In short, it looks fake.
Are you referring to the Porsche spot? At this point, whether or not something rendered with this technology looks 'fake' depends on the artistic execution of a particular shot. For non-real-time rendering, the images produced by this technology will be indistinguishable from those generated by an MCRT renderer running on a traditional CPU. In the case of the Porsche sequence, I think this was more of a demo of real-time ray tracing within the context of Unreal Engine. The car paint shader could probably be a bit better, but then they may have taken a performance hit on the rendering.
So they teamed up with some 3 dozen development partners 2 or more years ago, and we are only now being told that realtime raytracing is “the next thing”.
I guess Nvidia wanted us to keep buying Titans and Titan Vs, rather than wait for the new RTX GPUs.
Innovation takes time, especially on the scale of these new RTX cores. As stated in their Siggraph presentation, this is the biggest advancement from Nvidia since CUDA, their general-purpose computing API, was released in 2007. I don't think that there was any kind of secret conspiracy between Nvidia and their development partners to keep information out of the hands of the public. It's just that any software which currently uses Nvidia's CUDA and/or OptiX APIs gets an automatic performance boost with this new hardware.
In regards to Nvidia’s Pascal and Volta based hardware, I doubt that anyone who has purchased said hardware over the past few years for the purpose of using them for commercial projects feels in any way slighted or cheated by the release of this new hardware. Having a Volta or Pascal card for accelerated rendering/computing is still preferable to using a generic CPU for a lot of applications.
It looks like the consumer/gaming versions of these cards might be coming sooner rather than later:
There’s an Nvidia Geforce event scheduled for August 20th where these consumer cards may be unveiled.
Good points, thank you. Not only the Porsche, but also the dancing soldier demo had something "strange" about it. I don't know if the reflections were meant to be irregular on that character model, but it added to a feeling of being a gimmick for me personally.
It's definitely a great leap, or rather the start of one. It just should have happened like 5 years ago. Maybe it's due to AMD catching up with its new many-core processors.
Yes, the dancing soldier looked a bit strange, a bit plastic and too bright imo, while the rest of the video was great. I think you still have to differentiate between two scenarios: using this tech for real-time, or using it for accelerated rendering in DCC apps. Saw some videos showing the Autodesk booth and Arnold GPU doing a Spider-Man shot - that was impressive and looked more realistic than the real-time demos.
By far the Turing board: it has both tensor cores (denoising) and RT cores (BVH traversal + triangle intersection). The Volta architecture only has tensor cores and has to emulate triangle intersection in CUDA software: it is nowhere close to the 10 GRays/s of the higher-end RTX boards.
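To make the "triangle intersect" part concrete: this is roughly the operation an RT core runs in fixed-function hardware, millions of times per frame, and which Volta has to grind through in CUDA software. Here's a minimal sketch of the standard Möller-Trumbore ray/triangle test in plain Python - the function name and vector helpers are my own illustration, not any NVIDIA API:

```python
# Möller-Trumbore ray/triangle intersection - an illustrative sketch of
# the per-triangle test that RT cores accelerate in hardware.
def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                         a[2]*b[0]-a[0]*b[2],
                         a[0]*b[1]-a[1]*b[0])

def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det     # distance along the ray
    return t if t > eps else None

# Ray shot from z = -1 straight at a triangle in the z = 0 plane:
print(ray_triangle_intersect((0, 0, -1), (0, 0, 1),
                             (-1, -1, 0), (1, -1, 0), (0, 1, 0)))  # 1.0
```

Doing this in a shader costs a handful of cross/dot products per triangle candidate; the point of the RT cores is that this test, plus the BVH traversal that feeds it, no longer competes with shading work for CUDA cores.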
Remember the little Star Wars elevator short: it took 4 Titan V's and barely ran at 25 FPS - Turing runs it faster on a single 2080 Ti (I'll let you do the perf/$$$).
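Taking up that invitation with some back-of-the-envelope numbers - the prices here are my own assumptions (roughly the 2018 launch MSRPs), not figures from this thread, and I conservatively assume the single Turing card only matches rather than beats the 25 FPS:

```python
# Rough perf-per-dollar comparison for the Star Wars elevator demo.
# Prices are ASSUMED approximate launch MSRPs, not quoted in the thread.
titan_v_price   = 3000   # USD per card (assumed)
rtx_2080ti_price = 1200  # USD (assumed)

volta_cost  = 4 * titan_v_price    # four Titan V's, ~25 FPS
turing_cost = 1 * rtx_2080ti_price # one 2080 Ti, reportedly faster

fps = 25  # conservative: assume equal frame rate for both setups

print(volta_cost / fps)   # dollars per FPS on Volta:  480.0
print(turing_cost / fps)  # dollars per FPS on Turing:  48.0
```

Even under these conservative assumptions that's a ~10x improvement in cost per frame, before accounting for Turing actually running the demo faster.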
RTX 2070 is $600, RTX 2080 is $800
I reckon we will see robust acceleration of major renderers with RTX in 5-10 years. By that time, several generations of these cards will have passed. It's really nice, as it's directly usable for rendering. But it's just a glimpse of the future.
…five to ten YEARS…!!! I can’t wait that long…!!
Redshift, Octane, and V-Ray GPU will support it within months rather than years. This really is calling time on CPU rendering.