Anybody get a 2080? :)


#1

Hi,
So…did anyone here GET a 2080 for themselves? :slight_smile: I’m curious as to what your experiences were… and are AMD/Intel/Imagination/anybody else showing any signs of competing? :slight_smile: How are they faring? Skeebertus? :slight_smile:


#2

Here I’ve posted some real-world benchmarks of various render engines: Octane 4 RC1 With Integrated Brigade Realtime Pathtracer Tech Available To Test Now


#3

The numbers don’t look right - it looks like the RT cores aren’t being used at all.


#4

That’s because the RT cores can’t be used yet and won’t be used for a long time (my guess is about a year, but I could be wrong), so if you buy this card hoping for the miraculous speed gain that marketing wants you to believe in, you will be disappointed. It’s just an incremental gain over the previous generation (which is two years old), usually about 30% faster (but it costs 50% more).
Other independent tests:


#5

RT cores won’t be used for a long time?!

What… really? :frowning: how do you know?


#6

Because developing specialized features takes time and resources. Even when it becomes available I really doubt you will see large improvements, and honestly I don’t trust a company that tells you that you will see this kind of gain (between 300 and 800%): https://evermotion.org/articles/show/11111/nvidia-geforce-rtx-performance-in-arch-viz-applications
but then, when it comes to reality, performs like this (just look at the raytracing benchmarks, about 30%): https://www.pugetsystems.com/all_news.php All of this after two years, with a 50% increase in cost over the previous generation and without increasing the VRAM amount. Puget Systems even discovered that in reality you won’t be able to double your VRAM using NVLink on the gaming cards, and that was the only real improvement for me; this feature will still be available only on the far pricier Quadro: https://www.pugetsystems.com/labs/articles/NVIDIA-GeForce-RTX-2080-2080-Ti-Do-NOT-Support-Full-NVLink-1253/


#7

Actually it’s all happening pretty fast, it seems. BUT not a lot is using it right now… so is it a reason to pay the premium over the 1080 Ti? We’ll have to see…


#8

That’s just Nvidia marketing; I always prefer facts. So far Nvidia marketing has promised up to an 800% speed increase in render time, but as a matter of fact they have only delivered about 30%.


#9

The advertised speedup is for ray intersections, which get fixed-function acceleration for the first time in Turing. IIRC the CUDA implementation on the Titan V (Volta) only pushes 1.1 GRays/s, against the 10 GRays/s from the new RT cores: that’s your 8x. That’s why the Star Wars demo was running on four Titan Vs (several kilowatts) and barely scratching 24 FPS, while a single 2080 Ti has no problem with it.

However, the bulk of your render time typically goes into shading & lighting computations, which is why your overall speedup is much more modest.
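Here’s a quick back-of-the-envelope sketch of that point (the 10 vs 1.1 GRays/s numbers are the ones above; the per-frame ray-intersection fractions are purely illustrative assumptions, not measurements):

```cpp
#include <cstdio>
#include <initializer_list>

// Amdahl's-law style estimate: if only the ray-intersection portion of a
// frame is accelerated by the RT cores, the overall speedup is bounded by
// how much of the frame that portion actually occupies.
double overall_speedup(double ray_fraction, double ray_speedup) {
    // Normalized frame time before: 1.0.
    // After: untouched (shading/lighting) work + accelerated ray work.
    return 1.0 / ((1.0 - ray_fraction) + ray_fraction / ray_speedup);
}

int main() {
    // 10 GRays/s vs 1.1 GRays/s from the post above -> roughly 9x on the
    // intersection work itself.
    const double ray_speedup = 10.0 / 1.1;
    for (double frac : {0.10, 0.25, 0.50, 0.90}) {
        std::printf("ray work = %2.0f%% of frame -> overall speedup ~%.2fx\n",
                    frac * 100.0, overall_speedup(frac, ray_speedup));
    }
    return 0;
}
```

With an assumed ~25% of the frame spent on intersections, the overall gain comes out around 1.3x, which is in the same ballpark as the ~30% people are measuring.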

With that said, while I can’t get into the details, there is still a fair chunk of work needed to bring all these apps up to speed (pun intended).


#10

The speedup they advertise isn’t about ray intersection, it’s about overall path-tracing renderer performance.


#11

RTX gives a speed increase of 800 percent compared with previous CPU-based technology.

This is a quote from Allegorithmic - and put in context, it’s comparing apples to oranges.

There is however a genuine order of magnitude advance in ray-tracing with Turing over previous architectures (see my previous post)


#12

A question: If I wanted to learn to PROGRAM these cards, so that I can one day make graphics like in the posts above, where would I start? Is learning Vulkan a good idea? (I’m on a Linux system…)


#13

Ray casting is only a small part of the job in a modern renderer. Nvidia marketing is just misleading, claiming they can replace entire Hollywood blockbuster render farms with a few GPUs and so on. They invented terms like “gigarays” that basically tell you nothing at all about what to expect in real life when using these new cards. The sad part is that the more I read on the web, the more I see people hoping to buy these GPUs expecting a breakthrough in performance, while real-world benchmarks show just a little incremental gain over Pascal - actually only about 15% year over year, since Pascal is more than two years old. That’s not revolution, it’s barely evolution, especially considering the 50% price increase.

Without proper competition Nvidia is simply following Intel’s strategy of giving modest improvements while increasing the price; compared to Intel they are very good at marketing, though, and with this latest release they really do have an 800% gain (in overhyping). I really hope for a comeback from AMD on the GPU side too, because Intel ruined the CPU world for a decade until Ryzen, and the same is already happening here with Nvidia.


#14

Assuming you are interested in path-tracing, I would start here: https://www.pbrt.org/

On Windows, Microsoft added the DXR extension to DX12 (made public with RS5) and you can start coding today (google for tutorial links)

On Linux, Vulkan is pretty much your only choice, but you would have to use NVIDIA vendor extensions at the moment to implement ray tracing. I haven’t been involved with Khronos for a while, so I have no idea what the timetable is for integrating the equivalent of DXR into the core spec.
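If it helps to get your feet wet before a full tutorial, here’s a minimal sketch (my assumptions: Vulkan SDK and a recent NVIDIA driver installed; build with something like `g++ check_rt.cpp -lvulkan`) that just asks the first GPU whether it exposes the NVIDIA ray-tracing vendor extension. Depending on driver vintage it may show up under the older experimental VK_NVX_raytracing name instead.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Minimal check for the VK_NV_ray_tracing vendor extension on the first GPU.
// Error handling is stripped down; a real app should check every VkResult.
int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    if (gpuCount == 0) {
        std::fprintf(stderr, "no Vulkan devices found\n");
        return 1;
    }
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    uint32_t extCount = 0;
    vkEnumerateDeviceExtensionProperties(gpus[0], nullptr, &extCount, nullptr);
    std::vector<VkExtensionProperties> exts(extCount);
    vkEnumerateDeviceExtensionProperties(gpus[0], nullptr, &extCount, exts.data());

    bool hasRayTracing = false;
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, "VK_NV_ray_tracing") == 0)
            hasRayTracing = true;

    std::printf("VK_NV_ray_tracing: %s\n",
                hasRayTracing ? "available" : "not available");

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

From there the actual ray-tracing setup (acceleration structures, shader binding table, ray-gen/hit/miss shaders) is where the real learning curve starts; the vendor samples and the pbrt book cover the concepts.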

just a little incremental gain over Pascal

Here is an (incomplete) list of new features:

  • tensor cores to support AI inference (DLSS, up-rezzing, …)
  • RT cores for ray-tracing
  • concurrent integer / floating-point operations
  • new rasterization pipeline (mesh shading)
  • variable rate shading (more flexible AA)
  • decoupled shading (lighting in texture space, temporal shading reuse)

Doesn’t look like minor increments to me, but I guess it will take a while for the peanut gallery on the interwebs to catch up to what happened…


#15