Octane 4 RC1 With Integrated Brigade Realtime Pathtracer Tech Available To Test Now


To be honest, the only one mentioning 5 rays per pixel per second is you; I’ve only cited the 5 rays per pixel figure for RT as reported by Vlado. I agree that he should have provided some context about resolution and fps; nonetheless, whatever the context, it requires hundreds of times more render power to provide RT on a complex production scene (and this seems obvious to me).
After all, if this technology can’t even provide sustained real-time ray tracing for games, which are a lot simpler to compute, what are the chances of getting real-time ray tracing for production (which these cards were marketed for)?
My guess is that these new cards will provide maybe a 50% increase in performance (comparing, say, the very popular 1080 from 2016 with a 2080). That’s good, but all of it comes with pricier cards, after more than two years of development, and without an increase in memory size over the previous generation (unless you pay a huge amount of money for a Quadro). You can double your memory now, and that’s very good, but only after paying for two cards plus the cost of the link.
IMO that’s not a revolution; it’s just the normal year-over-year increase in performance, plus some welcome features like the link for gaming cards.
Again, that’s my guess, but I prefer to wait for benchmarks. Who knows… maybe I’m wrong and you will cut render time by a factor of 10 ;)


I strongly disagree. There are three technologies in play here that can work cooperatively.

-There will be an immediate 20-30% speed increase from the higher CUDA core count.

-According to VRAY and OTOY techs, there will ultimately be a 2-8 fold increase from the integration of RT; this may be context-limited and will take time to be integrated into renderers. These are distinct chips in addition to CUDA.

-There will also be huge speed increases from AI/hardware de-noising. Octane 4’s de-noising is a very big deal in reducing render time, and that’s not even tied to dedicated/advanced hardware-assisted AI… as we’ll see in Nvidia OptiX… driven with Tensor Cores. Again, these chips are discrete.


I think we have all already given our opinions here, but in the end only benchmarks will tell the truth. Your bet is up to an 800% increase, mine is up to 50%; let’s wait and see who gets closer when it comes to rendering in Octane or V-Ray once they’re released ;)


My money is on a 30% real-world difference. The days of “ermergerd, 500% faster than last gen!” are way behind us. Every CPU, GPU, and memory speed bump is incremental, every year or two at best, from here on out.



The actual reason we didn’t publish any benchmarks is because we were expressly forbidden to do it.

Best regards,


Yes, we agree on that: the benchmarks will be the final tell. And we won’t see full RTX integration for a while, so I’m not suggesting anyone race out and buy the new cards.

To be clear about what I’m predicting: I believe that, using Octane 3 running on a Kepler-era Nvidia card as the baseline, there will be a 200-600% real-world speed increase.

Nvidia’s AI denoising is already a huge deal, and AI is not only going to impact noise removal but also the ability to upscale rendered images with much greater fidelity.


Try being a LITTLE LOGICAL. If the speed increase were ONLY 50% over a 1080 there would be NO WAY AT ALL TO RAYTRACE ANYTHING IN GAMES WITH THAT GPU.

You could take 4 x 1080 Ti GPUs together, try ray tracing Battlefield 5 on their CUDA cores, and I’d be surprised if you even reached 30 FPS at 1080p with those four GPUs combined.

That is PRECISELY WHY THERE WAS A PRESSING NEED IN THE FIRST PLACE FOR DEDICATED RAYTRACING CORES - the boring old CUDA cores simply cannot do it fast enough. They are NOT designed for raytracing.

The most likely speed increase where raytracing is involved is probably going to be somewhere between 300% and 800%, depending on HOW WELL NVIDIA ENGINEERED THE 1ST GENERATION RT CORES.


S.A.Skeebert… you do know that using capitalisation with emboldened text to express an opinion has the opposite effect, undermining the points you’re trying to make…? In other words, it’s like standing up and shouting in a room like Rick from The Young Ones (an early-’80s BBC comedy) while people are sitting down next to you.

Capitalised, emboldened text should only be used for the main headings and subheadings of a document or printed article, or otherwise for artistic purposes. Besides being poor use of the English language, it’s actually much more difficult to read, requiring one to slow their reading speed significantly. Typically, people will not take the time to read it all, or read it properly… or attempt to respond to it… just like with that dude shouting in the room.

I mean, I can give strong, to-the-point opinions on things, but I always feel it’s important to show that I know how to communicate effectively and intelligently. Always remember that you are communicating with people on the other side of your screen, not simply pinning a poster to a lamp post for people to look at. :slight_smile:

I’m posting this as some friendly advice…because I’m seeing this in a lot of your posts on here.


I have to agree with Scott here, come on man, stop shouting :slight_smile:

Besides the Redshift dev’s response, there is also Chaos Group’s “initial impressions” post that Bullit shared a while ago: https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

Should be a fun read.

For what it’s worth, personally I’m thinking it won’t be thaaaat big of a difference, but time will tell I suppose :slight_smile:

edit: I’ve somehow butchered Bullit’s nickname.


Very glad you joined the conversation, and I hope you can tell us more when allowed.
Don’t get me wrong, but I’ve seen tons of gaming benchmarks of the new RTX cards (and quite a few testers were disappointed with the results, as far as I can tell); the fact that nobody has posted real ray tracing benchmarks makes me suspicious. I mean, if I had something that much faster than before, I would be more than happy to show it.


The previous generation was Pascal (Volta never came to consumers); before that we had Maxwell, and before that Kepler. The Kepler architecture was from 2012 (dog-slow GPUs like the 680, for instance). Achieving 2-6x the performance over a six-year-old architecture seems totally possible but not impressive at all; even a top-of-the-line CPU from that time was 5-6x slower than what we have now, but again, this is just evolution. All that said, marketing reports those numbers against Pascal, not Kepler; I don’t know if you have confused the two in your prediction.


Ok, now you try to be logical: ray tracing a handful of rays for gaming with dedicated hardware is possible to some extent; doing the same in a high-end path tracer that needs to compute tons of rays is a very different task, and the speed gain will be different as well. If you don’t get this, you’ve never used a render engine.
Besides that, it seems that even in gaming they can’t reach a 50% increase on average over the previous gen: https://www.techpowerup.com/247323/nvidia-geforce-rtx-2080-ti-benchmarks-allegedly-leaked-twice — that’s barely 37% on average, so I’m not sure how they expect to reach their goal on far more complex calculations.
While waiting for render engine benchmarks, for now we have some numbers:
2.5-year-long development
1:1 the same memory amount (a bit faster, though)
50% increase in price (comparing a 1080 Ti with a 2080 Ti from the benchmark)
37% average increase in gaming performance
I can’t quantify aesthetics, but the case IMO is a lot nicer than the previous GTX, if that matters…
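Taken together, those numbers make the value proposition easy to sanity-check. A minimal sketch, assuming the ~37% performance and ~50% price figures quoted above (leaked-benchmark numbers from this thread, not official specs):

```python
# Rough performance-per-dollar check using the figures quoted in this
# thread (illustrative assumptions, not official Nvidia numbers).
perf_gain = 1.37   # ~37% average gaming increase (2080 Ti vs 1080 Ti)
price_gain = 1.50  # ~50% increase in price

perf_per_dollar = perf_gain / price_gain
print(f"Relative performance per dollar: {perf_per_dollar:.2f}")  # 0.91
```

By this reading, raw gaming performance per dollar actually drops slightly generation over generation; any RT-core gains in renderers would have to make up the difference.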


I sure hope you are correct and this remains true for the Turing architecture, but it’s not out of the realm of possibility that Nvidia could further separate its pro and gaming cards’ capabilities, forcing production-quality renders onto the Quadro line for all but the simplest of scenes.


The new chips are already 30% faster just from the CUDA bump alone…before we consider:
-AI DeNoising
-AI upscaling
-Deployment of the dedicated RT chips


The other big thing with the new Nvidia cards is the NVLink add-on. With NVLink you can combine two cards and get double the speed AND double the available VRAM. So two 2080 Tis would give you 22 GB of available memory. That would easily handle 99.98% of scenes for us users.

Most users don’t need that much; 11 GB is a lot. And Octane has out-of-core memory management, where system RAM gets used once VRAM is exhausted. But a pair of the Turing cards would be compelling.


Yes, you are correct. I meant Turing over Pascal… a 200-600% speed increase when factoring in the CUDA count increase, AI de-noising and up-scaling, and the integration of the RTX chips, which will come later. (My prediction)


When they announced the Quadro cards, I thought Nvidia might do exactly that… feature the ray tracing chips only on those cards… or perhaps limit NVLink to them. But they didn’t. The GeForce cards have the same type of ray tracing chips and also an NVLink bridge ($79).

The differences are VRAM amount, core count, and driver refinement. Most pros will opt for the cheaper GeForce cards, just as they have in the past.


On Twitter Otoy has a little tease of Octane + RTX


Apparently, it’s also running on Vulkan… Interesting stuff.



First results give a modest advantage to the 2080 Ti over the 1080 Ti: about 25% less render time.
The 2080 is about as fast as a 1080 Ti; actually slightly slower, according to this benchmark.
2080 Ti: 52 s
1080 Ti: 69 s
2080: 70 s
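In percentage terms, a shorter render time understates the throughput gain: 25% less time corresponds to roughly a 1.33x speedup. A quick sketch using the times above:

```python
# Converting the quoted benchmark render times (seconds) into both
# common readings: percent render time saved and throughput speedup.
times = {"2080 Ti": 52, "1080 Ti": 69, "2080": 70}
baseline = times["1080 Ti"]

for card, t in times.items():
    time_saved = (baseline - t) / baseline * 100  # % shorter render vs 1080 Ti
    speedup = baseline / t                        # throughput multiplier
    print(f"{card}: {time_saved:+.1f}% time saved, {speedup:.2f}x")
```

So the 2080 Ti saves about 24.6% of the render time, a ~1.33x speedup, while the 2080 is fractionally slower than the 1080 Ti.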

Anybody get a 2080? :)