Interesting points. To the last couple, it’s an interesting thought that maybe ARM or someone else like that could build some specialized PCIe cards to handle some of these workflows. It would be nice to see some new entrants in the hardware market given how diverse and healthy the software side of the market is for rendering tech. I think if such a thing were to happen the cards would be pretty expensive initially given that it would be more of a niche product, but in general it would be good for everyone.
I disagree with you. The migration to GPU rendering is now all but complete. Even VRAY and Arnold have moved over. The factors holding studios back from using it have been:
1 - The amount of VRAM, now much less of an issue with the way these new cards can cluster
2 - Legacy and proprietary tools that are tied into CPU rendering pipelines.
Studios are getting killed financially right now. They have no choice but to make cheaper films, and a lot of the budget has been tied to special effects. They have no alternative but to shift over to GPUs, which will make everything faster and cheaper…as well as reduce electrical bills and space requirements.
The studios won't use real-time rendering, but they will use renderers that leverage CUDA + Tensor Cores + the RT cores employed in non-real-time rendering.
Yes, sure, all good points, but I've been reading those arguments for at least 3-5 years ;)
Again, bookmark this thread and come back in a few more years and see how many Hollywood movies are made on GPU only, and how many large CPU farms have turned to GPU for final production.
Go to the Chaosgroup forums and you will see how things are going in the real world despite all the Vray GPU marketing attempts. It's all about flexibility, and there is a reason why only a very limited number of Vray users switch to GPU for final frames, and the ones that do usually aren't producing complex scenes.
All the RT core stuff is just overhyped. Now you can process 5 rays per pixel; that may be "good" for games and very simple setups, but unfortunately real scenes require thousands of rays, and the new dedicated cores won't help that much with this: the bulk of the calculation will still be done by traditional CUDA cores, like before. That's why (despite having access to early hardware) nobody has posted real-world benchmark results from Octane, Vray or others. Realistically, these new cards will just give the usual incremental update for final frames over the previous generation, instead of the miraculous results that let you turn an entire CPU farm into a single GPU box, get real-time visualization for blockbuster movies, and save millions on hardware and electrical bills (basically, those were the claims made by the Nvidia presenter).
Maybe I’m wrong, let’s wait and see:)
Bringing up Vray is the worst case for GPU rendering…it has had far fewer features, and only now does it appear to have a speedy IPR.
Besides Arnold and Renderman (AFAIK they do not support GPU in an official release yet), Vray is probably the only other commercial engine used in Hollywood productions that runs on both CPU and GPU, so it's not the worst case; it's the only case where you can compare CPU and GPU in the same renderer.
You are right, after years of existence there are still features missing compared to the CPU counterpart (it's easy to guess why), but you can bet that, when available, both Renderman and Arnold will have a hard time keeping 1:1 the same functionality and flexibility on different hardware. That's one of the reasons why GPU farms won't replace CPU farms anytime soon.
Outside of Hollywood, mainstream users will not spend money on Quadro cards; they will just buy a 2080 or similar. This will replace the 1080, which is over two years old, so let's see how much improvement is made after two years in software like Octane, Vray, etc.
Nvidia’s most powerful GPUs are rated at 10 Gigarays/second. That’s 10 Billion raytracing rays per second.
Where did you get “5 rays per pixel” from? Are you rendering at 16K or 32K resolution or something?
4K UHD is 3,840 x 2,160 pixels. That’s 8.3 Million Pixels.
10 Billion Rays DIVIDED BY 8.3 Million Pixels EQUALS 1,205 RAYS PER PIXEL PER SECOND IN 4K UHD RESOLUTION.
Even if it's a 4K UHD game running at 60 FPS, you get 20 rays per pixel per frame.
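The arithmetic above can be sketched in a few lines (assuming Nvidia's quoted 10 gigarays/s peak, which is a marketing figure rather than a measured throughput):

```python
# Rays-per-pixel budget for a card rated at 10 gigarays/second.
GIGARAYS_PER_SEC = 10e9       # Nvidia's quoted peak for its top Turing card
PIXELS_4K = 3840 * 2160       # 4K UHD, ~8.3 million pixels

rays_per_pixel_per_sec = GIGARAYS_PER_SEC / PIXELS_4K
print(int(rays_per_pixel_per_sec))       # 1205 rays/pixel/s at 4K

# At 60 frames per second the per-frame budget shrinks accordingly:
rays_per_pixel_per_frame = rays_per_pixel_per_sec / 60
print(int(rays_per_pixel_per_frame))     # 20 rays/pixel/frame
```

This is peak-rate arithmetic only; real renderers lose some of that budget to shading, memory traffic and incoherent rays.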
There are millions of people who render 3D stuff daily around the world. 3D rendering is not such a small market anymore.
Plus, the same PCIe cards could become a cornerstone of offering photorealistic Virtual Reality or Augmented Reality experiences to a broader market.
Most people who try VR/AR complain currently that “the 3D looks fake”. These accelerator cards might change that.
Also, these accelerator cards may become compatible with 3D games as well. That might be interesting to e-sports participants and gamers who live for "Maximum Graphics Settings".
So if Nvidia can’t provide you with 4K UHD @ 90 FPS raytraced for your favorite VR game, maybe an accelerator card by Intel, ARM or Imagination may.
It is irrelevant to compare the CPU and GPU versions of the same renderer if there exists a GPU renderer that is better. You also have to ask why Arnold and Vray would invest in GPU; it is not a trivial investment, as we can see from the time they need to bring a product to the level of the CPU version.
And there is another advantage:
[quote=]Another advantage of using Redshift is that it offers amazing speed and power using off-the-shelf software and hardware. For the artist workstations, Hydraulx invested in powerful Nvidia GPUs, and for final rendering, Hydraulx created its own GPU render farm consisting of machines with up to four GPUs each.
This meant that the existing Hydraulx CPU infrastructure was freed up and Hydraulx was quick to take advantage of the new opportunities this offered. "Since we were able to render on the GPU, it created a positive scenario that our CPU farm could focus on simulation and compositing," Chun explains. With this new parallel process pipeline, it allowed shots to have a faster turnaround.[/quote]
From Vlado: “Yes, we can trace 5 rays per pixel instead of 1, in realtime, and that helps, but final production quality requires thousands of rays”
I imagine he knows very well what he is talking about. You can't be so naive as to suppose that game and production rendering tasks are similar :)
Well, considering ILM already used a few realtime shots of the robot in Rogue One, and they're also already using OptiX for their destruction/smoke system, I think there is clearly going to be a growing number of shots made on GPU, even in Hollywood.
They’ll keep the CPU farm for the most complex shots, and will add more and more GPU nodes on the side for the “other shots” (which could actually be the majority of them in the end)
Having a GPU renderer is a great option because there are certainly many tasks that run as fast or faster on GPU, and maybe in the future (5+ years) it may be a viable option for anyone, regardless of scene complexity. My point is that, unlike Nvidia claims, a few GPUs can't replace a whole farm for a large blockbuster. I haven't seen Rampage, but to be honest the image in the Redshift article looks as fake as a B movie, and you should ask yourself why they used Redshift for those basic scenes while all the other stuff (incidentally, all the complex scenes) was made by Weta/Scanline using CPU renderers.
Don’t get me wrong, as I said, for some tasks GPU rendering is great, but beyond that most of the claims made by Nvidia are just unrealistic for now. I see these new cards as a good update over the previous generation, but nothing more, and certainly not revolutionary like the marketing wants you to believe. Again, time will tell, and after all it’s your money, so if you really hope you will get real-time rendering by using the “revolutionary” Nvidia GPUs, then go for it :)
I’m missing your point. Despite the doubters and naysayers…in the past 3-4 years GPU renderers have in fact taken over the majority of independent, agency and small-studio work. Arnold and VRAY have raced to move to GPU so they don’t get left in the dust.
When you survey the presenters at NAB and SIGGRAPH over the past 3 years…the majority of top C4D guys are using Octane or Redshift. This isn’t projecting the future…this is logging historical fact.
As for image quality, I think your opinion is unenlightened. (Or maybe I’m misunderstanding your argument :)) There is zero difference in quality. Zero. If someone doesn’t like how Octane looks…they don’t like how reality looks…how light works. It’s unbiased. With Redshift you can dial in any look you want.
As for Turing and the new features…Nvidia has grown tenfold and is now nearly rivaling the market cap of Intel. I suspect they’ve done their homework on where the market is headed.
The thing holding back Hollywood’s use of GPU rendering, again, has been two things. Let’s take a quick look.
The big screen…and all the dense polygonal structures and effects can require 6-128 GB of memory. Pixar explicitly told that to Nvidia a couple of years ago. The new Quadro cards can be configured with up to 96 GB. So there are maybe 5-10% of Hollywood scenes that can’t be rendered on GPU now.
Weta and RenderMan XPU are already starting to integrate Nvidia’s new OptiX GPU tech.
The second holdup is legacy plugins, extensions, etc. When studios can save millions of dollars per movie…I think it’s safe to say those legacy tools will get updated or replaced.
REALTIME MEANS AT 30 FPS. So Vlado is basically saying that their current implementation does 150 Rays Per Pixel Per Second.
If he was talking about 4K UHD resolution, that’s even better. It means that you get a good 600 Rays Per Pixel Per Second when rendering a 1080 HD frame.
That’s less than the 1,200 rays/pixel/second calculation I did earlier in the thread.
But what he’s saying does NOT mean that these cards can only do 5 rays/pixel/second.
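The interpretation above can be checked with the same back-of-the-envelope arithmetic (the 30 FPS realtime rate and the 4K test resolution are assumptions; Vlado didn't state either):

```python
# Vlado's figure: 5 rays/pixel per frame, in realtime (assume 30 fps).
FPS = 30
RAYS_PER_PIXEL_PER_FRAME = 5

rays_per_pixel_per_sec = RAYS_PER_PIXEL_PER_FRAME * FPS
print(rays_per_pixel_per_sec)                 # 150 rays/pixel/s

# If that was measured at 4K UHD, the same total ray budget spread
# over a 1080p frame (a quarter of the pixels) gives 4x the rate:
PIXELS_4K = 3840 * 2160
PIXELS_1080 = 1920 * 1080
total_rays_per_sec = rays_per_pixel_per_sec * PIXELS_4K
print(total_rays_per_sec // PIXELS_1080)      # 600 rays/pixel/s at 1080p
```

Both numbers land below the ~1,200 rays/pixel/s peak computed earlier in the thread, which is consistent with an early, unoptimized implementation.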
It’s not difficult to get my point: I’m not talking about small studios, agencies or freelancers, I’m talking about large Hollywood productions, since those were mentioned in the Nvidia presentation.
Cinema 4D is not used for movies besides titles and a very few selected circumstances, so I’m not surprised if the majority of its users jump on the GPU bandwagon; Cinema’s target is mostly motion graphics and similar, and that is perfect for GPU.
You don’t get my point about image quality either; I was simply referring to the article posted about the Rampage movie. Those renders look like crap from a B movie to me; I’m not saying that you can’t achieve good results using GPU. Aesthetic considerations aside, my point was that they used a GPU renderer only for very simple scenes of the movie, leaving all the complex stuff to CPU. And yes, GPU alternatives are being studied by every major player, including Pixar, Weta etc., but so far they are still on CPU for final rendering; eventually they will switch sometime in the future, but this is not the time.
Judging companies by their market cap is not a reliable indicator; it can lead you anywhere from Apple, to Theranos, to Enron. Also, while in the last 3 years Nvidia grew 10 times, AMD grew 14 times; should that be an argument to choose them over Nvidia?
What he is saying is rather simple: now they can do 5 rays per pixel in realtime (whatever resolution or frame rate they mean), while production quality requires thousands of rays per pixel (but let’s say only 1,000 for simplicity). 1000/5 = 200, so to render production scenes in realtime they would need hardware 200 times faster.
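Spelled out, using Vlado's 5 rays/pixel figure and the round 1,000-ray production number (an assumption for the arithmetic, not a measured requirement), with 30 fps assumed as the realtime rate:

```python
rays_realtime = 5        # rays/pixel per frame, per Vlado's quote
rays_production = 1000   # round number for "production quality"

# Hardware would need to be this many times faster to trace a
# production-quality frame in the same realtime frame budget:
print(rays_production // rays_realtime)          # 200

# Equivalently: at 5 rays/pixel every 1/30 s (150 rays/pixel/s),
# tracing 1000 rays/pixel takes this many seconds of pure tracing
# per frame, before shading and everything else:
rays_per_pixel_per_sec = rays_realtime * 30
print(rays_production / rays_per_pixel_per_sec)  # ~6.7 s/frame
```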
Besides all the talk here, let’s wait for some benchmarks on actual renderers :)
No one is claiming there will be real time 30 fps movie-quality rendering. In the Nvidia presentation the CEO spoke about rendering 5 movie caliber scenes per day, with a much smaller, less expensive, less energy-intensive GPU setup.
Meanwhile the real time rendering for games will be around 30-40 fps but quite limited in bounce count, etc.
GPU rendering was already 2-5 times faster than CPU with CUDA, but with ray-tracing acceleration that gap will grow. It will take time to get integrated into Redshift, Octane and others…just as it took time to integrate CUDA cores.
Otoy’s Jules said that Octane gets 400 million rays per second with current cards, but gets 3.2 billion with Turing. An 8-fold speed improvement. Now, I very much doubt the real-world speed will jump 8x, but it’s going to make a big difference when Octane is optimized for the RT portion of those cards.
Apple hasn’t invested billions of dollars in 3D rendering. Nvidia has.
Nvidia’s top 20xx GPU can do 10 billion rays/second. It is the RESOLUTION IN PIXELS and FRAMERATE of the render that DETERMINE HOW MANY RAYS PER PIXEL PER SECOND ARE AVAILABLE OUT OF THOSE 10 BILLION RAYS PER SECOND.
Vlado’s “Only 5 Rays Per Pixel Per Second” is COMPLETELY AND UTTERLY MEANINGLESS AS A NUMBER IF YOU DO NOT QUOTE PIXEL RESOLUTION AND FRAMERATE ALONGSIDE IT.
So NO, THESE CARDS ARE NOT FIXED-RATED AT JUST 5 RAYS PER PIXEL PER SECOND.
VLADO PROBABLY GOT 5 RAYS PER PIXEL PER SECOND AT A SPECIFIC RESOLUTION AND SPECIFIC FRAMERATE ON HIS VERY EARLY PROJECT LAVINIA PROTOTYPE.