|3 Weeks Ago|
Originally Posted by Skeebertus: "As for Nvidia's "amazing denoisers", Nvidia is trying desperately to find a way to sell professional 3D/DCC software users hundreds of thousands of their ultra expensive top-of-the-line GPUs with tensor cores in them."
Nothing wrong with trying to keep your hardware company profitable.
I just wish companies like Daz Inc. and now (quite regrettably) Reallusion
were not so quick to implement the "free" brute-force Iray render engine, which requires
the purchase of a mid- to high-end Nvidia card to be actually useful.
I see Daz users letting renders run overnight trying to get a single clean image.
Rendering animations is out of the question.
Of course they routinely use models with multiple 4K textures,
but it is still utter lunacy for still images.
I'm quite glad there are so many rendering alternatives on the market today.
|3 Weeks Ago|
Originally Posted by Skeebertus: There is nothing special about DXR/RTX.
Well, yeah, it's just math and code if you put it that way, but on the other hand it is special because Microsoft and Nvidia put significant R&D resources into making it efficient and easy to use through a high-level API. You can try to reinvent the wheel on your own, but just like CUDA vs OpenCL, there is a reason why most developers use CUDA despite it being proprietary: ease of use, support, performance...
Of course there's a catch and of course they're doing it for you to buy their cards. Nobody is denying it. There's no free lunch.
Plus, Nvidia does have patents and/or a great deal of experience in many areas (QMC samplers, AI denoising...), and that is not easy to replicate.
AMD is indeed trying to offer an alternative to DXR/RTX, but it doesn't seem to be quite on the same level yet, and when it is, it'll be integrated into Radeon ProRender in C4D, which leaves the Tachyon/URender guys with what as a competitive advantage?
|3 Weeks Ago|
Originally Posted by EricM: Well, yeah, it's just math and code if you put it that way, but on the other hand it is special because Microsoft and Nvidia put significant R&D resources into making it efficient and easy to use through a high-level API. You can try to reinvent the wheel on your own, but just like CUDA vs OpenCL, there is a reason why most developers use CUDA despite it being proprietary: ease of use, support, performance...
There are five major companies that have shown working realtime raytracing demos to date:
2) Imagination (the GPU maker)
So Nvidia's API is just one API that has been announced. It's kind of like the very first Oculus Rift development kit. Yes, Facebook did it first. But that didn't stop competitors like the HTC Vive, Microsoft HoloLens, Magic Leap, Apple, and possibly now also Intel from working on their own competing VR/AR products.
First-to-market doesn't guarantee that you become "the dominant standard" for anything - if your competition does it better or cheaper, your proposed standard may quickly be forgotten.
Originally Posted by EricM: AMD is indeed trying to offer an alternative to DXR/RTX, but it doesn't seem to be quite on the same level yet, and when it is, it'll be integrated into Radeon ProRender in C4D, which leaves the Tachyon/URender guys with what as a competitive advantage?
If you want to do something like an archviz walkthrough/flythrough of a complex office building or shopping mall at 30 or 60 FPS in 4K UHD resolution, doing that with 1st-gen or even 2nd- or 3rd-gen realtime raytracing GPUs may not even be possible.
Will a profit-driven company like Nvidia really sell you a single affordable 3K-priced GPU that lets you do that? Or will you instead be buying a large water-cooled Nvidia GPU box that holds up to eight high-end Nvidia GPUs and costs north of 30K or 40K when fully loaded?
For many real-world 3D projects you may need four to eight (or more) of the absolute highest-end Nvidia 11XX-series GPUs working together to do what you need at 4K resolution in realtime.
Whereas something raster-based like URender may be able to do it in pretty decent quality - with a bit of pre-baked GI lighting here and there - on just one or two 1080 gaming GPUs.
Also consider that there is nothing whatsoever stopping the URender guys from also adding selective hybrid raytracing to their OpenGL renderer.
When OpenGL gets realtime-raytracing extensions, URender may give you pretty much exactly what Nvidia gives you with its gaming-oriented DirectX API.
What isn't shiny or overly reflective and looks good enough in raster would run as fast GPU raster. What needs to be raytraced - the reflective floor of a company HQ lobby for example - would be GPU raytraced.
The hybrid realtime raytracing currently being proposed is literally piss-ordinary GPU raster graphics + the ability to shoot raytracing rays into select parts of the raster render.
So there is nothing particularly wrong or redundant about writing a fast, good-looking GPU raster renderer like Tachyon/URender and then adding selective GPU raytracing or pathtracing afterwards.
Imagine that you are doing the opening broadcast graphics for a TV News program. Stuff that flies around but isn't actually reflective would render as raster on GPU.
But when you have the "XYZ NEWS AT 10" text flying into the picture, made of reflective and refractive glass, that's where selective raytracing would be used.
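The raster/raytrace split described above boils down to sorting scene materials into two passes. Here is a minimal sketch in Python of that pass assignment; the material names, reflectivity values, and threshold are illustrative assumptions, not anything from an actual URender or DXR API:

```python
# Minimal sketch of selective hybrid rendering pass assignment.
# Matte materials go to the fast GPU raster pass; reflective/refractive
# ones (the glass logo, the lobby floor) get selective raytracing.
# All names and values here are hypothetical, for illustration only.

REFLECTIVITY_THRESHOLD = 0.3  # assumed cutoff for "needs raytracing"

def assign_passes(materials):
    """Split {name: reflectivity} into (raster, raytraced) material lists."""
    raster, raytraced = [], []
    for name, reflectivity in materials.items():
        if reflectivity < REFLECTIVITY_THRESHOLD:
            raster.append(name)      # matte set dressing: plain GPU raster
        else:
            raytraced.append(name)   # shiny/glassy: shoot rays for this part
    return raster, raytraced

if __name__ == "__main__":
    scene = {
        "studio_backdrop": 0.05,    # matte wall
        "flying_lower_third": 0.1,  # opaque motion graphic
        "lobby_floor": 0.6,         # polished, needs reflections
        "news_at_10_logo": 0.9,     # refractive glass text
    }
    raster, raytraced = assign_passes(scene)
    print("raster:", sorted(raster))
    print("raytraced:", sorted(raytraced))
```

The point is only that the per-frame cost of raytracing scales with how much of the scene lands in the second list, which is why a mostly-matte broadcast graphic stays cheap.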