Octane 4 RC1 With Integrated Brigade Realtime Pathtracer Tech Available To Test Now


#1

If you own an Octane 3 license, apparently you can test-drive an experimental build of Octane 4 now:

It comes with Otoy’s long-awaited Brigade pathtracer integrated, so should be interesting to play around with.


#2

Everything I’m seeing on their site indicates this is still an NVIDIA-only product. Is that true or am I missing something?


#3

It is currently NVIDIA only, but AMD support is supposedly being worked on.


#4

That was announced more than 2 years ago, actually for version 3.


#5

Yeah that was my recollection too. Disappointing there’s been no progress. That said, some of the technology being demonstrated in this app is potentially worth making a switch to PC for, given all the other app benefits of having an NVIDIA card. PC + RTX looking solid. Been waiting this long for Mac Pro update but ironically may end up ditching at the last minute anyway. If Apple’s lack of NVIDIA support on the eGPU side is any indicator, their next Mac Pro won’t support it either.


#6

So from what I’ve heard, they (Otoy) have abandoned the OpenCL integration that would run on AMD hardware. I could be wrong, but I think it was also because Apple switched to Metal, so OpenCL was pretty much dead on that platform (thanks Apple!).

Now that being said, they are working on a Vulkan compute integration. So AMD / everything OSX will run on Vulkan.

In a nutshell, OpenCL was abandoned but AMD compatibility is still on its way. I think you can see some of the progress on YouTube, where people use Octane on iPhones etc…

Full disclaimer: I am not a fan of OTOY by any means.


#7

Slightly off-topic, but after my moaning about earlier versions of Octane, version 4.0 is – while not exactly rock solid – much improved on Mac. I used it for ages the other day without any crashes. It can still be flaky with certain scenes (usually lots of objects/clones), but it’s much more usable now. And when using adaptive sampling and denoising you can get very impressive results much more quickly. Just thought I’d give credit where it was due.


#8

They abandoned OpenCL on Mac in favor of Metal. Otoy’s demo videos and SIGGRAPH releases, which seem to be PowerPoint files put on YouTube, talk about the RNDR backend for CUDA, x86, Vulkan, D3D, and Metal. They show it running on MacBooks, iMacs, and even iPads and iPhones in a video from March 2018. Start at 9:31 in their SIGGRAPH 2018 video for mentions of Metal. The video released in March 2018 showed macOS ports and iOS. This one focuses on iOS. Their CUDA cross-compiler is insane work. They claim in this video there is no performance penalty on AMD.

//youtu.be/6ChweQETKTE

Us Mac folk are getting impatient - seeing it run on an iPhone X is neat, but that’s not where we work in C4D! I’m with Blinny. I feel that if NVIDIA Mac support is coming, it’s coming with the Mac Pro, which is taking too long to come to market (my bet is December 2019, announced in the summer, though). Might be time to build another PC and move on, at least for working in C4D.

Octane seems to be the holy grail for Mac users. Now if they could just get two promised bits of tech off the ground - the Metal-compatible Octane, and headless rendering, which would allow Mac users to have PC slaves full of GPUs do the work. I’m sure that’s a lot of work, but again, I’m impatient at best. :wink:


#9

+1 with Darth - V4 Octane is much less crash-prone on my Mac too. Hoping they don’t add the bugs back in the final release!


#10

While I’m always leery of Otoy roadmaps (more so than other companies’), I forgot they did specifically claim they’re moving things to Metal for AMD users. So we’ll have to see, but either way it’s tempting to just give up on AMD and whatever the next Mac Pro is. Might end up being a simple race: what comes first, Puget Systems with RTX cards or the Late 2018 / Early 2019 Mac Pro (whatever they end up calling it)?

[And then again, maybe not. I just found an article at Puget that wasn’t there last time I checked. Seems RT cores may not have the flexibility to work with rendering engines, and may only be applicable to things like games and VR / AR.]


#11

@BubblegumDimension

Thanks for providing the link I should have provided and clarifying the situation :slight_smile: Didn’t want anyone to feel confused by my post.

@Blinny
Vlado spent a minute or two talking about the new RT cores on the Turing cards. Check it out here https://youtu.be/CgelDAZLuhU?list=TLGGu1Lftkv60hMyNzA4MjAxOA&t=1587 .


#12

RT Cores can be used with renderers. Chaosgroup showed an early example of VRay on RT Cores:

//youtu.be/yVflNdzmKjg


#13

Just curious since I don’t own Octane - precisely what does Brigade add to the renderer in V 4.0?

The Brigade demos they showed before were many networked CPUs doing pathtracing of 3D scenes in realtime.

Can Octane 4 do this now?


#14

This video shows Octane 4 (which is in release-candidate form right now; some people are using it for production). Unfortunately this walk-through is for Houdini’s implementation of Octane, but the features are generally equivalent in the C4D version. There is a little discussion of the Brigade integration, which is only partially integrated at this point.

https://m.youtube.com/watch?v=UUp1jRkKWQ4


#15

Oh no. I feel very similar. Saw the ridiculous RTX NVIDIA demos and it’s very tempting. Thing is, the cards are very, very expensive: $10,000 for the top-of-the-range one, and that’s US, so I’d be looking at nearly $20,000 for a graphics card here in Aus!

I may have got my numbers wrong, but I’m sure he said there was a 10k card, which blew my mind. I will find the clip.

I didn’t get my numbers wrong and here’s the slide that made me gulp


#16

Yeah, that demo was really cool. They were really hyping up the fact that 4 of the high end GPUs could potentially replace a whole row of server racks at 1/4 the cost. Insane.

The good news is that the products for mere mortals, the 20XX series, are also looking great. The 2080 card is supposedly outperforming a 1080 Ti. Although so far NVIDIA has not shown a blower design on their cards. I’m wondering how Octane on a PC would do with two fan-based 2080 Ti cards side by side. Is that a concern in a decent-sized tower? I can’t even imagine throwing 3 GPUs in a system. The noise and cost are probably a bit nuts.


#17

Yes, in their dreams. This won’t happen anytime soon. Save this thread and see in a few years from now how many Hollywood farms moved from CPU to GPU.
Since in Silicon Valley a company like Theranos raised billions purely on hype, I’m not that surprised anymore by sensational, unrealistic claims.


#18

I think it would be foolish to buy a Quadro. You’ll be able to get most of the same tech with the GeForce RTX cards ($500-1,200). But it will be some time (months? Years?) before any of the render engines take advantage of the RTX bonus tech. So for now one should expect a 15-30% speed increase over the previous generation.

Look at the 2070, 2080 and 2080 Ti cards.


#19

On the Redshift forum, Panos, the head of the company, said that RTX ray tracing improves render times more when materials are simple; with complex shading, the work still falls to the CUDA cores.
So typically simple MoGraph stuff (materials with only one texture or one simple shader) would benefit the most from the ray-tracing hardware. Complex shading isn’t improved by it directly, and might only benefit indirectly if the RT cores “liberate” the CUDA cores. The rough sketch after the links below illustrates why.

Plus this:

[quote=]It’s not just shading. There’s also volume grids (VDB) which won’t be accelerated at all. And there’s non-triangular primitive types (such as hair or points) which will only be partially accelerated and, by my estimates, not very well (or at all) compared to our existing tech. So if your scene uses good amounts of that, the percentage to be accelerated will be even less.[/quote]
Chaos Group confirms:

https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray
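
To put rough numbers behind the “simple shading benefits more” point, here is a back-of-the-envelope Amdahl’s-law sketch. The traversal/shading splits and the 10x RT-core figure are purely illustrative assumptions on my part, not measurements from Redshift or Chaos Group:

```python
# Rough Amdahl's-law sketch: RT cores accelerate ray traversal/intersection,
# while shading stays on the CUDA cores, so the overall gain is capped by
# how much of the frame time is traversal. All numbers are made up for
# illustration only.

def rtx_speedup(traversal_fraction, rt_core_speedup=10.0):
    """Overall frame speedup when only the traversal portion is accelerated."""
    shading_fraction = 1.0 - traversal_fraction
    new_frame_time = shading_fraction + traversal_fraction / rt_core_speedup
    return 1.0 / new_frame_time

# Simple MoGraph scene, one flat shader: traversal dominates the frame time.
print(rtx_speedup(traversal_fraction=0.7))   # ~2.7x overall

# Layered shaders, VDB volumes, hair: traversal is only a small slice.
print(rtx_speedup(traversal_fraction=0.2))   # ~1.2x overall
```

Even if the RT cores made traversal free, a scene where shading takes 80% of the frame time can’t get more than about a 1.25x overall speedup, which matches what Panos and Chaos Group are saying.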


#20

This is just Generation 1 of realtime raytracing technology. A company like Nvidia won’t give you “Hollywood Level 3D In Realtime” in generation 1 even if it were technically possible.

We’re looking at the same game as before in GPUs - every 1 to 2 years an incremental leap in how good these cards raytrace in realtime.

This is a good thing - it means that someone with better intentions and better tech can sweep in and knock Nvidia sideways.

The future of this may not even be GPUs - it may be co-processor boards full of ASICs from different vendors.

You’d be able to buy dedicated Realtime Raytracing Accelerator Cards that you could use even with a weaker Nvidia GPU like, say, a GTX 1060.