Presentation of GPU-based V-Ray solution was one of the highlights of SIGGRAPH 09


jmottle
08-10-2009, 06:12 AM
This demo was presented by Chaos Group on August 6, 2009 at the SIGGRAPH 2009 Chaos Group User Event, where the first public showing of their GPU rendering tests was presented to a packed house. This video showcases the recently released V-Ray RT running on an NVIDIA GeForce 285 GPU. While this GPU version is not a shipping product yet, this technology demo already features rendering speeds and quality surpassing every GPU rendering application we've seen to date. Several high-level industry representatives in attendance commented that this demo was the highlight of SIGGRAPH 2009.

http://www.cgarchitect.com/news/SIGGRAPH-2009-CHAOS-GROUP-GPU.shtml

Enjoy!

Cheers,
Jeff

mister3d
08-10-2009, 06:37 AM
Do I have to wait until it downloads? My browsers freeze.

jmottle
08-10-2009, 06:40 AM
Do I have to wait until it downloads? My browsers freeze.

Nope, it's a streaming video. If it's freezing up, it sounds like a potential problem with your Flash install. The player is an MP4 streamed through a Flash player (JW Player).

Kabab
08-10-2009, 06:44 AM
Amazing!

The rendering revolution has started!

*edit mine freezes as well..

BColbourn
08-10-2009, 06:51 AM
wow that was impressive at the end with the car. wish the demo guy had moved some stuff around so it didn't look like he was cruising around a baked environment.

mister3d
08-10-2009, 07:12 AM
Nevermind, it works now.

Spacelord
08-10-2009, 07:28 AM
My God, that's just nuts!
This year is going to be the year of real-time rendering.
I wonder how it would handle rendering print res?

mister3d
08-10-2009, 07:37 AM
Seems it was a problem with my computer playing the video.

mister3d
08-10-2009, 08:51 AM
Just watched and it's very impressive. I would say it's even more impressive than Caustic's; Caustic will have to try very hard to beat it.
The GPU scaling is what's so marvellous about this: if you pair two powerful cards via SLI, you should get double the speed. And in a year or two it will be almost realtime.
There was some caustic noise in the interior scene which didn't go away though; I'm wondering why.

So, it's basically 100x faster than CPU rendering? :eek: The glossy car interior is what blew my mind.

Spacelord
08-10-2009, 08:53 AM
Hi Jeff,
Did you get to film Mental Ray's iray? I'd love to see that too.

cheers

mister3d
08-10-2009, 09:12 AM
"What seperate their solution from all others is that the GPU rendering output MATCHES the production render quality from a CPU rendered frame buffer exactly!" - true, I haven't seen this before. :thumbsup:

jupiterjazz
08-10-2009, 09:15 AM
Hi Jeff,
Did you get to film Mental Ray's iray? I'd love to see that too.

cheers

forget iray... it was demoed at the nVidia booth, near the optix/nvirt demo which is faster & more general, so nobody who understands these things really gave a damn about the former (for kicks: iray is a progressive path tracer, cuda-based, but not using the optix/nvirt api... I mean these guys don't even look at what nvidia is doing... pathetic...)

As per the vray video it speaks for itself, it is really impressive.
Kudos to Vlado & co. B) Best of Luck!

This should trigger the following actions:

@ nvidia: wonder what the hell the mental institute is doing, and whether it still makes sense to protect them under their financial wing, because it is clear that even with the advantage of privileged access to NV technology they just can't do it anymore

@ adsk: reconsider their centralized mental ray strategy

I am not surprised there's no video emerging on the net about iray; at mental they must be sitting there not believing their eye rays hehehe..

p

Hamburger
08-10-2009, 09:30 AM
Good news for Maya users too:

is it, somehow, possible to use VRay RT with maya?



Yep; it will be implemented. In fact a large amount of that work is already done for the V-Ray support for IPR in Maya.

Best regards,
Vlado

Source: http://www.chaosgroup.com/forums/vbulletin/showthread.php?t=46365

Magnus3D
08-10-2009, 09:36 AM
I'm sure the tech is impressive but the video is impossible to watch; I tried numerous times. It freezes, it stops, it even crashed my browser. And yes, it's all updated and fresh, but please make the video easier to watch if you want people to comment on what it tries to show.

/ Magnus

mister3d
08-10-2009, 09:43 AM
I'm sure the tech is impressive but the video is impossible to watch; I tried numerous times. It freezes, it stops, it even crashed my browser. And yes, it's all updated and fresh, but please make the video easier to watch if you want people to comment on what it tries to show.

/ Magnus

I had problems with my computer too, nothing helped. Just watch it on another machine.

mustique
08-10-2009, 10:09 AM
insane stuff!... CPUs never looked so lousy.

mister3d
08-10-2009, 10:27 AM
CPUs never looked so lousy.

Haha, true: just imagine how much it would cost to buy 100 CPUs, and how much power they would consume. :scream:

rebb
08-10-2009, 10:49 AM
Very nice.
I guess it was just a matter of time after the introduction of high-level GPGPU solutions like CUDA before all the heavy-computation applications like offline rendering made a move towards them.

cresshead
08-10-2009, 11:21 AM
Finally, fprime has a contender!

looks like good quality output and maybe a little faster than fprime.. mind you, that came out some 4 years ago and is now on version 3.

[works with lightwave]

mister3d
08-10-2009, 11:39 AM
Finally, fprime has a contender!

looks like good quality output and maybe a little faster than fprime.. mind you, that's 4+ years old :beer:

Did Fprime use the GPU? And how is it used now? Vray is very popular, therefore I imagine this technology will be adopted very quickly by the market. I wonder what happened to Fprime though.

cresshead
08-10-2009, 11:41 AM
fprime's still being developed; it's on version 3 now and sold and used in productions
http://www.worley.com/E/Products/fprime/fprime.html

Q on Vray RT: will it support deformation, volumetrics, motion blur, dof etc or just static scenes?

mister3d
08-10-2009, 11:43 AM
fprime's still being developed; it's on version 3 now and sold and used in productions
http://www.worley.com/E/Products/fprime/fprime.html

Q on Vray RT: will it support deformation, volumetrics, motion blur, dof etc or just static scenes?

So does Fprime use the CPU or GPU? I think it's important to realise it's the GPU being used in this demo.
They say in the video it will support deformation. I'm sure it will support all effects, as it's identical to CPU rendering.

cresshead
08-10-2009, 11:48 AM
fprime works on the cpu... so really it's doing some 'magic' when you compare how gpus outperform cpus these days.

anyhow, don't want to send this too far off topic, just wanted to put it out there that vray RT looks to have caught up with fprime, or will do once it's back within max so you can edit your scenes and see the results in realtime.

shuggie
08-10-2009, 11:59 AM
Very impressive; with 3 gfx cards running this thing would be a monster. It will be interesting to see if/how it deals with particle effects and plugins like fume and the like.

mister3d
08-10-2009, 12:20 PM
fprime works on the cpu... so really it's doing some 'magic' when you compare how gpus outperform cpus these days.

anyhow, don't want to send this too far off topic, just wanted to put it out there that vray RT looks to have caught up with fprime, or will do once it's back within max so you can edit your scenes and see the results in realtime.

In the Fprime examples they didn't show many fast glossy effects (it stopped after a certain threshold and didn't improve any more; that's why I think they mostly showed mirror-like reflections), so in my opinion Vray was still faster with its RT (cpu) version. Much faster.

thablanchh
08-10-2009, 12:23 PM
fprime works on the cpu... so really it's doing some 'magic' when you compare how gpus outperform cpus these days.

anyhow, don't want to send this too far off topic, just wanted to put it out there that vray RT looks to have caught up with fprime, or will do once it's back within max so you can edit your scenes and see the results in realtime.

You see the result in realtime in Max... But Vray RT is not really in competition with Fprime.
Even if the way it works can seem similar, Vray RT is (for now) sold as a Vray previewer, so a workflow enhancer, and not as a full renderer, even if you can use it as your main render engine. The current version of Vray RT is limited for motion blur, dof and other things. Time will tell if it will become a full render engine, but for now, it is a toy; a really useful, performant toy.

Speedwise, Vray RT works on a distributed rendering platform, which makes it quicker than Fprime. I'm talking about the current version, and not the future GPU version.

mr-doOo
08-10-2009, 12:25 PM
It seems it has already been available for purchase for 3dsmax since June: http://www.chaosgroup.com/en/2/vrayrt.html

O-Green
08-10-2009, 12:29 PM
Looks amazing! Though it's kinda hard to estimate its potential from that video. Looks a bit quicker than Fprime though.

mister3d
08-10-2009, 12:34 PM
You see the result in realtime in Max... But Vray RT is not really in competition with Fprime.
Even if the way it works can seem similar, Vray RT is (for now) sold as a Vray previewer, so a workflow enhancer, and not as a full renderer, even if you can use it as your main render engine. The current version of Vray RT is limited for motion blur, dof and other things. Time will tell if it will become a full render engine, but for now, it is a toy; a really useful, performant toy.

Speedwise, Vray RT works on a distributed rendering platform, which makes it quicker than Fprime. I'm talking about the current version, and not the future GPU version.

The only limitation I found for RT is the absence of motion blur (http://www.chaosgroup.com/en/2/vrayrt.html). What kind of limitations are you talking about?

Looks amazing! Though it's kinda hard to estimate its potential from that video. Looks a bit quicker than Fprime though.

Show me something like a production-quality glossy reflection demo from Fprime. :) None of their demos show final-quality rendering, only some noisy results.

thablanchh
08-10-2009, 12:40 PM
Mister3d:

Limitations in the 3dsMax integration, like procedural map support.

As for the noisy stuff, the renderings shown took something like 6 to 10 seconds.. and this was shot with a handheld camera.. It takes Fprime quite a bit of time to get rid of the noise.

mister3d
08-10-2009, 12:59 PM
Mister3d:

Limitations in the 3dsMax integration, like procedural map support.

As for the noisy stuff, the renderings shown took something like 6 to 10 seconds.. and this was shot with a handheld camera.. It takes Fprime quite a bit of time to get rid of the noise.

Well, I think procedural maps are not a big problem really, if you think about it. I think we will see good integration within a year or so. So I wouldn't call it a toy. :)
I'm not sure what you mean by it being shot with a handheld camera, but I'm sure it got to a noise-free result in this demo, regardless of the quality we saw. You can see it in the highlights.
So at least you agree that Fprime is slower with glossy effects. That means a much slower raytracer. So saying that Vray RT GPU is "a bit faster" is misleading. :)

thablanchh
08-10-2009, 01:06 PM
I've been using the current version of Vray RT for 3 months now.. and it is really kicking a** as it is, so, can't wait to see it GPU based..

cresshead
08-10-2009, 01:11 PM
I've been using the current version of Vray RT for 3 months now.. and it is really kicking a** as it is, so, can't wait to see it GPU based..

how does the cpu-based version of vray RT compare to the gpu one demonstrated in that video?

mister3d
08-10-2009, 01:14 PM
how does the cpu-based version of vray RT compare to the gpu one demonstrated in that video?

AFAIK 10x, so it's 100x over standard cpu rendering.

thablanchh
08-10-2009, 01:17 PM
how does the cpu-based version of vray RT compare to the gpu one demonstrated in that video?

It works in distributed rendering, so it depends on how much horsepower you have..

I'm usually connected to 3 to 6 machines, doing mostly archviz interior shoots, and something like 10-20 seconds is enough to get a clear idea of a complex material.
As I mentioned, it is for now more of a working tool than a complete renderer. You can adjust material, lights and camera settings in seconds, "realtime", and when you are satisfied, you launch the final rendering with your Vray renderer.

mister3d
08-10-2009, 01:42 PM
and when you are satisfied, you launch the final rendering with your Vray renderer.

So it can't make the same quality image? I'm wondering how that stands with this new version.

thablanchh
08-10-2009, 01:58 PM
So it can't make the same quality image? I'm wondering how that stands with this new version.

Yes, you could.. but you would have to wait a little longer (just like Fprime).
If the new RT can work up a final image in seconds, maybe the workflow will change.

AJ
08-10-2009, 02:00 PM
So it can't make the same quality image? I'm wondering how that stands with this new version.

It's more a case that your RT window will be relatively small (640x480... etc.) - you still render the full-sized image separately. Plus, as others have mentioned, RT can't (currently) do a few things V-Ray can (motion blur, SSS... etc).

mister3d
08-10-2009, 02:08 PM
It's more a case that your RT window will be relatively small (640x480... etc.) - you still render the full-sized image separately. Plus, as others have mentioned, RT can't (currently) do a few things V-Ray can (motion blur, SSS... etc).

So you think it will not be used as a production renderer?

lazzhar
08-10-2009, 02:10 PM
So it can't make the same quality image? I'm wondering how that stands with this new version.

Usually it's the test renders that are boring and eat up most of the time. I don't mind firing off the render and going home or out, leaving it to render overnight.
Go Vray Go!

thablanchh
08-10-2009, 02:21 PM
So you think it will not be used as a production renderer?

Resolution can be changed, and some people are using it in production. Some other people are using it as a working tool.

mister3d
08-10-2009, 02:37 PM
Usually it's the test renders that are boring and eat up most of the time. I don't mind firing off the render and going home or out, leaving it to render overnight.
Go Vray Go!

Yeah, totally. Though I would prefer to use it for animation also.

Resolution can be changed, and some people are using it in production. Some other people are using it as a working tool.

I've never worked with a path-tracing engine like Maxwell, so I can barely imagine how the quality builds up and at what point it becomes production quality. How much longer must you wait for the final output after you can already clearly see the result? And what are your predictions from this demo: how much time will it take to generate a GPU-based final rendering?

ThirdEye
08-10-2009, 02:46 PM
Since this is GPU based I wonder how it behaves with hi-res textures; so far all I've seen were some untextured models.

AJ
08-10-2009, 02:51 PM
So you think it will not be used as a production renderer?

I'm using the current V-Ray RT in production, but no, not to output the final image (the project I'm currently working on is 12'000x12'000). Bear in mind I'm talking about the CPU version of RT; I have no idea what the speed increases etc. of the GPU version will mean.

TwinSnakes
08-10-2009, 04:06 PM
A full-fledged ray tracer on the GPU! It's going to change the game. ChaosGroup is going to be King of the Mountain for a long time; unless some of the other players have an ace up their sleeve, it's game over. :cool:

ulb
08-10-2009, 04:27 PM
I wonder why GPUs can be faster than CPUs for this.

Is it a matter of massive multithreading with a lot of cores, or optimisations for specific computations?

I find it a bit strange that it can be so much faster. What would prevent CPU manufacturers from providing such enhancements for 3d rendering and other tasks?

The processors on graphics cards run at slower frequencies than CPUs, so what makes the difference?

ulb
08-10-2009, 04:43 PM
I wonder why GPUs can be faster than CPUs for this.

Is it a matter of massive multithreading with a lot of cores, or optimisations for specific computations?

I find it a bit strange that it can be so much faster. What would prevent CPU manufacturers from providing such enhancements for 3d rendering and other tasks?

The processors on graphics cards run at slower frequencies than CPUs, so what makes the difference?

I guess the answer is CUDA.

here are some explanations (http://video.google.com/videoplay?docid=-565754234179733207)

soulburn3d
08-10-2009, 05:20 PM
Well, I think procedural maps are not a big problem really, if you think about it.

Well, it is a problem for anyone who wants to use procedurals :)

- Neil

morimitsu
08-10-2009, 05:28 PM
Wow, I just watched it and it's outstanding!

In the next 2 to 5 years, the next generation of video games might be able to render that at 60fps at high res.

Unbelievable!

ienrdna
08-10-2009, 06:21 PM
Games are already gpu accelerated; they will only benefit from raw power.

CHRiTTeR
08-10-2009, 06:35 PM
This looks very promising.

I wonder if they can also use the GPU to render with the brute force, irradiance mapping or photon mapping engines, but they probably aren't going to put time into that because this already looks fast enough to make those techniques obsolete?

The issue with using it for print resolutions would probably be the 'limited' memory on current graphics cards. (just a guess)

Anyway: BRING IT OOOON!!

And yes, pls keep support for procedurals. :D

Someone mentioned the examples were untextured, but you can clearly see the colosseum is textured when they zoom in.

ThirdEye
08-10-2009, 06:48 PM
Someone mentioned the examples were untextured, but you can clearly see the colosseum is textured when they zoom in.

I doubt those were 4k or 8k textures though.

jmottle
08-10-2009, 06:49 PM
Hi Jeff,
Did you get to film Mental Ray's iray? I'd love to see that too.

cheers

I didn't, but I have a private call with Mental Images this week and they are going to show me a demo. I hope to have some recordings or info to show soon.

jmottle
08-10-2009, 06:54 PM
fprime's still being developed; it's on version 3 now and sold and used in productions
http://www.worley.com/E/Products/fprime/fprime.html

Q on Vray RT: will it support deformation, volumetrics, motion blur, dof etc or just static scenes?

They said explicitly that they will not support motion blur, as the calculations are too heavy to do in real-time right now.

jmottle
08-10-2009, 06:56 PM
It seems it has already been available for purchase for 3dsmax since June: http://www.chaosgroup.com/en/2/vrayrt.html

That is the CPU version. The video shows the same app using the GPU, which is not yet available.

jmottle
08-10-2009, 06:57 PM
AFAIK 10x, so it's 100x over standard cpu rendering.

The numbers they quoted with a GeForce 285 were 20-40x. I've asked for a demo of the GPU version so I can test on some of the newer GPUs from Nvidia. (It's CUDA-only right now)

mister3d
08-10-2009, 06:59 PM
They said explicitly that they will not support motion blur, as the calculations are too heavy to do in real-time right now.

That's a pity, and it limits its use for animation. I hope they will find a way to solve this.

jmottle
08-10-2009, 06:59 PM
Since this is GPU based I wonder how it behaves with hi-res textures; so far all I've seen were some untextured models.

I did specifically ask Peter about this and he said textures are supported, but you are right, these demos did not show any. I imagine it has something to do with offloading them to the GPU memory... not sure how that works yet.

TwinSnakes
08-10-2009, 07:13 PM
I wonder why GPUs can be faster than CPUs for this.

Is it a matter of massive multithreading with a lot of cores, or optimisations for specific computations?

I find it a bit strange that it can be so much faster. What would prevent CPU manufacturers from providing such enhancements for 3d rendering and other tasks?

The processors on graphics cards run at slower frequencies than CPUs, so what makes the difference?

It's not CUDA... it's the architecture of a GPU vs a CPU. CPUs are designed for linear tasks, while GPUs are designed for parallel tasks.

A simple example: if a CPU is a Professor, and the GHz of the CPU is the speed at which that Professor solves an equation, then a GPU would be the equivalent of 400+ Professors (depending on the video card) working together on the same equation but thinking at a slightly slower pace.

What makes VrayRT on the GPU so fascinating is that they've designed a solution for the lighting algorithm that can be solved in parallel. No one has come close to what they've achieved IMO. Not Intel, not nVidia, no one. Like I said, unless the other players just haven't shown their hand yet. You'd think nVidia would be in the lead in this arena. But they couldn't even give Gelato away.

ulb
08-10-2009, 07:21 PM
It's not CUDA... it's the architecture of a GPU vs a CPU. CPUs are designed for linear tasks, while GPUs are designed for parallel tasks.

A simple example: if a CPU is a Professor, and the GHz of the CPU is the speed at which that Professor solves an equation, then a GPU would be the equivalent of 400+ Professors (depending on the video card) working together on the same equation but thinking at a slightly slower pace.

What makes VrayRT on the GPU so fascinating is that they've designed a solution for the lighting algorithm that can be solved in parallel. No one has come close to what they've achieved IMO. Not Intel, not nVidia, no one. Like I said, unless the other players just haven't shown their hand yet. You'd think nVidia would be in the lead in this arena. But they couldn't even give Gelato away.

Are you sure they didn't use CUDA?? Is it written somewhere?

I wonder if that also works with ATI cards.

jmottle
08-10-2009, 07:22 PM
It's not CUDA... it's the architecture of a GPU vs a CPU. CPUs are designed for linear tasks, while GPUs are designed for parallel tasks.

A simple example: if a CPU is a Professor, and the GHz of the CPU is the speed at which that Professor solves an equation, then a GPU would be the equivalent of 400+ Professors (depending on the video card) working together on the same equation but thinking at a slightly slower pace.

What makes VrayRT on the GPU so fascinating is that they've designed a solution for the lighting algorithm that can be solved in parallel. No one has come close to what they've achieved IMO. Not Intel, not nVidia, no one. Like I said, unless the other players just haven't shown their hand yet. You'd think nVidia would be in the lead in this arena. But they couldn't even give Gelato away.

One of the guys at NVIDIA explained it to me this way: if you needed to locate a specific word in a book using a CPU, it would search each page in order until it found the word. With a GPU (and assuming you had as many cores as you did pages), every page would be searched simultaneously. Obviously this is a MUCH faster way to compute.

DrBalthar
08-10-2009, 07:26 PM
One of the guys at NVIDIA explained it to me this way: if you needed to locate a specific word in a book using a CPU, it would search each page in order until it found the word. With a GPU (and assuming you had as many cores as you did pages), every page would be searched simultaneously. Obviously this is a MUCH faster way to compute.

Well, that's only half of the story; it depends on whether you have to synchronize or communicate with every thread after each page to check if it has found the word yet. If you do, your parallelism goes to crap, and that's the main crux with GPUs: everything is fine as long as you don't have to synchronize across the threads; if you do, performance drops like off a steep cliff. A lot of rendering algorithms need synchronization or communication though.
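
(Aside, to make the book analogy concrete: below is a minimal CUDA sketch of the idea; the kernel and names are hypothetical and nothing from V-Ray. Each thread scans one "page", and the single atomic write at a match is exactly the kind of synchronization point DrBalthar describes; an algorithm that had to synchronize after every word would lose the parallel advantage.)

```cuda
#include <climits>
#include <cuda_runtime.h>

// One thread per "page": all pages are scanned simultaneously instead of
// in order. Launch with enough threads to cover numPages and initialize
// *foundPage to INT_MAX on the host. The only synchronization is the
// atomicMin on a match; synchronizing every step would erase the gain.
__global__ void findWord(const int* pages, int numPages, int wordsPerPage,
                         int target, int* foundPage)
{
    int page = blockIdx.x * blockDim.x + threadIdx.x;
    if (page >= numPages) return;
    const int* words = pages + page * wordsPerPage;
    for (int i = 0; i < wordsPerPage; ++i)
        if (words[i] == target)
            atomicMin(foundPage, page);  // lowest matching page wins
}
```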

hassearo
08-10-2009, 07:27 PM
I use vray RT all the time now (the regular CPU one), and it's really much easier to set up lighting and reflections, HDRI, and for once really test out shaders and glossy reflections.
I use it for previewing animations; you can just scrub through the timeline and see that nothing weird or ugly shows up somewhere in the animation. Textures work no matter the resolution, it seems. I believe the most common procedurals work perfectly.
A useful shader that doesn't work (as of now) is SSS.

Once it looks good you need to kick it out through regular vray, so your time savings are
in tuning the scenes, not the final render.

And that CPU one is incredibly fast; if this GPU one is double that or more, you can just throw away all the realtime viewport stuff that doesn't match the final output.


//

ThirdEye
08-10-2009, 07:32 PM
Are you sure they didn't use CUDA?? Is it written somewhere?

Nobody said they're not using CUDA; he just explained to you the advantages of using a GPU vs a normal CPU, CUDA or no CUDA.

oddforce
08-10-2009, 07:39 PM
I wonder if there's a limitation to GPU rendering when it comes to some of the modern rendering features.

It would be such a shame if there's all this massive speedup but we wouldn't be able to use displacement or SSS (just as examples) because something about them is not parallelizable (is this a word?)

I'm cautiously optimistic, but I think I'll be holding my breath a little while longer :argh:

ulb
08-10-2009, 07:40 PM
Nobody said they're not using CUDA; he just explained to you the advantages of using a GPU vs a normal CPU, CUDA or no CUDA.

Ah ok, I didn't get it.

So it is massive multithreading with specific optimisations.

Hopefully CPUs will have tens of cores soon! :drool:

TwinSnakes
08-10-2009, 07:53 PM
Are you sure they didn't use CUDA?? Is it written somewhere?

I wonder if that works also with ATI cards.

CUDA is specific to higher-end nVidia cards. nVidia even has this server-type thing that has a ton of GPUs in it for scientific research stuff on CUDA. So CUDA is just the GPU environment the program runs in. They are porting the program over to OpenCL so it can run in any GPU environment, including ATI, and even motherboards that have a GPU built in.

But I wasn't addressing CUDA in my previous post. I was attempting to answer your question about why the GPU outperforms the CPU.

ulb
08-10-2009, 08:10 PM
CUDA is specific to higher-end nVidia cards. nVidia even has this server-type thing that has a ton of GPUs in it for scientific research stuff on CUDA. So CUDA is just the GPU environment the program runs in. They are porting the program over to OpenCL so it can run in any GPU environment, including ATI, and even motherboards that have a GPU built in.

But I wasn't addressing CUDA in my previous post. I was attempting to answer your question about why the GPU outperforms the CPU.
Thank you for your answers!

CUDA doesn't seem to be limited to high-end products though, gaming cards are in the list of supported products (http://www.nvidia.com/object/cuda_learn_products.html).

But indeed I understand it is "just" a programming environment designed to use GPUs to their full potential.

Their work gives amazing results. I hope we will not have to wait as long for the release of GPU-enabled VRayRT as we did between the first demos and the first release of VRayRT.

TwinSnakes
08-10-2009, 08:31 PM
Well, that's only half of the story; it depends on whether you have to synchronize or communicate with every thread after each page to check if it has found the word yet. If you do, your parallelism goes to crap, and that's the main crux with GPUs: everything is fine as long as you don't have to synchronize across the threads; if you do, performance drops like off a steep cliff. A lot of rendering algorithms need synchronization or communication though.

So true! And that's what really knocked me out of my seat watching that video. I just can't imagine how they got around synchronization. Unless they aren't doing it at all. Maybe they are just firing photons around like Maxwell, and that's how it converges so quickly: because instead of 8 cores, it's 400+ cores. It's like having your own render farm in an off-the-shelf consumer PC with a decent graphics card.

thablanchh
08-10-2009, 08:34 PM
So true! And that's what really knocked me out of my seat watching that video. I just can't imagine how they got around synchronization. Unless they aren't doing it at all. Maybe they are just firing photons around like Maxwell, and that's how it converges so quickly: because instead of 8 cores, it's 400+ cores. It's like having your own render farm in an off-the-shelf consumer PC with a decent graphics card.

The current CPU version of VrayRT uses Progressive Path Tracing (PPT), so quite similar to the Maxwell / Fryrender way of working. PPT is available in the standard render engine as well, even if I do not think many people are using it.
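
(For anyone wondering what "progressive" means mechanically, here is a rough sketch in CUDA-style code; the names are made up and this is not ChaosGroup's actual implementation. The frame buffer keeps a running average of one-sample passes, which is why the image starts noisy and cleans up the longer you leave it: variance falls off roughly as 1/N with N accumulated passes.)

```cuda
#include <cuda_runtime.h>

// Fold pass number `passIndex` into a running average: after N passes each
// pixel holds the mean of N independent one-sample estimates, so the noise
// visibly converges on screen while you keep editing the scene.
__global__ void accumulatePass(float3* accum, const float3* pass,
                               int numPixels, int passIndex)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;
    float n = (float)passIndex;  // passes already stored in accum
    accum[i].x = (accum[i].x * n + pass[i].x) / (n + 1.0f);
    accum[i].y = (accum[i].y * n + pass[i].y) / (n + 1.0f);
    accum[i].z = (accum[i].z * n + pass[i].z) / (n + 1.0f);
}
```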

scottsch
08-10-2009, 10:29 PM
Very cool stuff.

Is there a release date target for this application? Did I understand right that this will be a standalone application that only requires a script exporter? That is great news for Maya & XSI users if so.

I didn't see any refraction samples... vray is great with GI, but bogs down like all other renderers when you throw a lot of refraction into the scene.

Also, I am surprised there is criticism of mental ray. I still get better & faster stuff from mental ray due to user knowledge (yeah, that), & not being able to figure out some vray methods. Of course that goes out the window if the RT-GPU actually produces noiseless 4K stills in less than a minute.

Bullit
08-10-2009, 11:45 PM
Talking about Mentalray, there is already Holomatix Rendition around that works with it. I have a good opinion of it from using it with XSI a year ago.

http://www.holomatix.com/cat/rendition/

soulburn3d
08-11-2009, 01:04 AM
I doubt those were 4k or 8k textures though.

Maybe they could get a mipmap thing going in the texturemaps, so that when you're further from the model, it will just use smaller mipmapped textures, and so wouldn't need all the memory overhead of the full 8k texture all the time. It's still so sad that most of the major renderers don't deal with mipmapping much.

- Neil

mister3d
08-11-2009, 04:58 AM
Maybe they could get a mipmap thing going in the texturemaps, so that when you're further from the model, it will just use smaller mipmapped textures, and so wouldn't need all the memory overhead of the full 8k texture all the time. It's still so sad that most of the major renderers don't deal with mipmapping much.

- Neil

Why not use bitmap proxies?

soulburn3d
08-11-2009, 06:14 AM
Why not use bitmap proxies?

A whole bunch of reasons. They are useful, but you can only have a single alternate resolution per bitmap, you have to manually turn it on and possibly fine-tune it per bitmap rather than the system handling it for you, it has to write out new files to disk, etc. It's fine at doing the one or two very specific things it was designed for, but it does not replace a real mipmap system.

- Neil
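
(To illustrate what a real mipmap system buys you, a small sketch with hypothetical names; none of this is V-Ray code. The level is just the log2 of the texel footprint, so a distant object reads a small prefiltered level and never touches the full-resolution map.)

```cuda
#include <math.h>

// Pick a mip level from the screen-space texel footprint: a hit covering
// roughly 8x8 base-level texels per pixel reads level 3, a map 1/8 the
// size in each dimension, so far-away objects never pull the full 8k
// original into video memory.
__device__ float mipLevel(float texelsPerPixel, int numLevels)
{
    float level = log2f(fmaxf(texelsPerPixel, 1.0f));
    return fminf(level, (float)(numLevels - 1));
}
```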

Syndicate
08-11-2009, 07:00 AM
That's a pity, and it limits its use for animation. I hope they will find a way to solve this.

It's best to do motion blur in post. Render a velocity pass and use that for accurate motion blur in post.

I know it would be nice to have a 1-button render, but honestly, when you are working on a production you don't have time to include motion blur calculations (unless you have a huge renderfarm).

Something I want to see... more textures being used. I imagine this won't be available until the end of the year though. Seems they have a lot left to do. Impressive nonetheless.
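
(Roughly what the velocity-pass trick looks like, as a hypothetical CUDA sketch rather than any particular compositor's implementation: each output pixel averages colour samples along its own 2D motion vector. The known weakness is that it smears across object boundaries, since one vector per pixel can't separate overlapping movers.)

```cuda
#include <cuda_runtime.h>

// Naive post motion blur: average `taps` samples (taps >= 2) along each
// pixel's motion vector from the velocity pass, centred on the pixel.
__global__ void velocityBlur(const float3* color, const float2* velocity,
                             float3* out, int width, int height, int taps)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);
    for (int t = 0; t < taps; ++t) {
        float s = (float)t / (float)(taps - 1) - 0.5f;  // -0.5 .. +0.5
        int sx = min(max(x + (int)(velocity[idx].x * s), 0), width - 1);
        int sy = min(max(y + (int)(velocity[idx].y * s), 0), height - 1);
        float3 c = color[sy * width + sx];
        sum.x += c.x; sum.y += c.y; sum.z += c.z;
    }
    out[idx] = make_float3(sum.x / taps, sum.y / taps, sum.z / taps);
}
```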

PiotrekM
08-11-2009, 07:41 AM
sorry but no post-mb can match a 3d one, period.

It's best to do motion blur in post. Render a velocity pass and use that for accurate motion blur in post.

I know it would be nice to have a 1-button render, but honestly, when you are working on a production you don't have time to include motion blur calculations (unless you have a huge renderfarm).

Something I want to see... more textures being used. I imagine this won't be available until the end of the year though. Seems they have a lot left to do. Impressive nonetheless.

mister3d
08-11-2009, 08:06 AM
It's best to do motion blur in post. Render a velocity pass and use that for accurate motion blur in post.

I know it would be nice to have a 1-button render, but honestly, when you are working on a production you don't have time to include motion blur calculations (unless you have a huge renderfarm).

Something I want to see... more textures being used. I imagine this won't be available until the end of the year though. Seems they have a lot left to do. Impressive nonetheless.

Yes, you are right, as it's often the only solution we have time-wise. But quality-wise it's best to render true 3d motion blur, as you know.
That's why I use mental ray's rapid motion blur if an animation scene does not have mirror-like reflections.
I wish I didn't need to bother with buggy post motion blur. I hope we will have a solution within several years. AFAIK Gelato has very fast motion blur, so a GPU-accelerated motion blur is possible.

Syndicate
08-11-2009, 08:30 AM
sorry but no post-mb can match a 3d one, period.

That's the same thing as saying that Maxwell render will look better than v-ray (unbiased vs biased etc). It's never a question of matching, it's a case of achieving the desired effect... in a timely manner :)

As another point, I have seen some amazing post motion-blur. Add to that post DOF plugins like Lenscare etc, and I would say that the majority prefer more control over their motionblur/DOF to having it "accurate"... also think about the fact that you don't always want things to be accurate. Sometimes you want things to be physically incorrect, but in doing so appear more realistic to the viewer.

Don't worry, I used to think the same way. I used to want everything to be done in one render pass... and in doing so spent more time staring at the buckets (render) than doing work.

Kel Solaar
08-11-2009, 09:59 AM
Really impressive stuff, with VRay For Maya coming soon that's some great news for the Rendering field.

KS

leopadua
08-11-2009, 12:07 PM
@ Neil and others regarding textures.

I'm not an expert and this might not be the case, but I have seen some docs on CUDA and it has calls from and to the CPU, so there is a chance we'll see Vray or any other solution using RAM + CPU to deal with texture loading and caching, while using the GPU + its memory to handle the PPT.

I would suspect that someone with RT running on a 295 or so surely has 6GB of RAM or more, so that should not be a problem at all.

Let's wait and see... as said before, this is an impressive result nonetheless.

Regards,

CHRiTTeR
08-11-2009, 03:31 PM
AFAIK Gelato has very fast motion blur, so a GPU-accelerated motion blur is possible.

Gelato and VrayRT don't use the same method to generate/calculate an image, so it's not as simple as just dropping the way Gelato does mb into VrayRT; that probably won't work at all.

mister3d
08-11-2009, 03:35 PM
Gelato and VrayRT don't use the same method to generate/calculate an image, so it's not as simple as just dropping the way Gelato does mb into VrayRT; that probably won't work at all.

I understand, though I wasn't referring to Vray RT GPU, rather to upcoming products like Larrabee and such.

CHRiTTeR
08-11-2009, 03:41 PM
@ Neil and others regarding textures.

I'm not an expert and this might not be the case, but I have seen some docs on CUDA and it has calls from and to the CPU, so there is a chance we'll see Vray or any other solution using RAM + CPU to deal with texture loading and caching, while using the GPU + its memory to handle the PPT.

I would suspect that someone with RT running on a 295 or so surely has 6GB of RAM or more, so that should not be a problem at all.

Let's wait and see... as said before, this is an impressive result nonetheless.

Regards,

I'm no expert at this either, but from what I remember, what you mention here was exactly the reason why people said that doing raytracing on the gpu was not a good idea: you lose a lot of time in the cpu/ram <-> gpu communication.

Apparently something has changed regarding this problem. Whether it's because of changes in CUDA or an adjusted GPU design, I don't know.

I really doubt everyone with a gf 295 has 6gb in his system, really. More like 4GB by default, and not everyone who buys a system with a gaming GPU needs more than that.
But I suspect it doesn't matter much how much system ram they have anyway, as I think vrayRT uses the video card's memory, because this gives the gpu the fastest possible access to the data, no?
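
(The cost CHRiTTeR is pointing at is the PCIe copy between system RAM and video memory. A minimal sketch using the standard CUDA runtime API, with hypothetical names: the copy itself is the slow part, which is why an engine would want to upload textures once up front rather than page them per frame.)

```cuda
#include <cuda_runtime.h>

// Stage a texture from system RAM into video memory over PCIe. cudaMalloc
// fails once video memory runs out, which is the "limited memory" worry
// raised earlier in the thread for print-resolution work.
cudaError_t uploadTexture(const unsigned char* hostPixels, size_t bytes,
                          unsigned char** devPixels)
{
    cudaError_t err = cudaMalloc((void**)devPixels, bytes);
    if (err != cudaSuccess) return err;          // e.g. out of video memory
    return cudaMemcpy(*devPixels, hostPixels, bytes,
                      cudaMemcpyHostToDevice);   // host -> device over PCIe
}
```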

soulburn3d
08-11-2009, 05:14 PM
It's best to do motion blur in post. Render a velocity pass and use that for accurate motion blur in post.

Doing motionblur in post isn't something I'd consider 'better'; it's just another way that's available to the user, with its own set of advantages and disadvantages. And I'd generally consider doing it in the 3d render to be more accurate, unless you're rendering a coverage channel or something to reconstruct objects behind the motionblurred objects.

I know it would be nice to have a 1-button render, but honestly, when you are working on a production you don't have time to include motion blur calculations (unless you have a huge renderfarm).

Well, the company I work for does, and so we do our motionblur in the render.

- Neil

soulburn3d
08-11-2009, 05:17 PM
I'm not an expert and this might not be the case, but I have seen some docs on CUDA and it has calls from and to the CPU, so there is a chance we'll see Vray or any other solution using RAM + CPU to deal with texture loading and caching, while using the GPU + its memory to handle the PPT.

That would certainly be cool, since there are probably a lot of things that the GPU currently can't do, and so you'd need the cpu to handle a number of tasks. Hopefully that won't slow down the interactiveness of the render much; it would suck to have a nice fast gpu render where the gpu is always waiting for the cpu to catch up :) Again, I don't have a lot of experience with this sort of technology, so I'm just asking questions aloud.

- Neil

PiotrekM
08-11-2009, 06:25 PM
ure so wrong mate...

That's the same thing as saying that Maxwell render will look better than v-ray (unbiased vs biased etc). It's never a question of matching, it's a case of achieving the desired effect... in a timely manner :)

As another point, I have seen some amazing post motion-blur. Add to that post DOF plugins like Lenscare etc, and I would say that the majority prefer more control over their motionblur/DOF to having it "accurate"... also think about the fact that you don't always want things to be accurate. Sometimes you want things to be physically incorrect, but in doing so appear more realistic to the viewer.

Don't worry, I used to think the same way. I used to want everything to be done in one render pass... and in doing so spent more time staring at the buckets (render) than doing work.

leopadua
08-11-2009, 10:50 PM
@ CHRiTTeR

Well, it might indeed be a bad assumption on the amount of ram, but since the i7's triple-channel DDR3, 6GB is becoming more common than 4GB on new systems.

As for the possible slowdown due to communication between ram, cpu and gpu, I guess it's going to be something we will have to live with, because as much as I agree that the gpu has faster access to its own memory, how many cards have more than 1GB of mem?

I think it would limit the market share if they said... "Well, from now on, everyone will need a graphics card with more than 2GB if they want to see textures on the RT render." One would rather spend money buying ram that also helps other aspects of the system.

It might be a way to use the ram as an "in-between" while graphics cards catch up on the amount of memory. Examples: Bunkspeed and Shaderlight rely heavily on ram in order to cache and speed things up.

Anyway... the ChaosGroup people are the ones entitled to talk about their software and I'm just speculating. ;)

Syndicate
08-12-2009, 06:57 AM
ure so wrong mate...

My renders: 2min/frame
Your renders: 1hr/frame

I should mention though, that if you are able to do 3d motion blur then do it. I was simply trying to explain a trick I see used a lot.

also, don't waste a post on a one-liner... explain your thoughts next time.

ciao

mister3d
08-12-2009, 07:16 AM
My renders: 2min/frame
Your renders: 1hr/frame

I should mention though, that if you are able to do 3d motion blur then do it. I was simply trying to explain a trick I see used a lot.

also, don't waste a post on a one-liner... explain your thoughts next time.

ciao

Have you tried 3d rapid motion blur in mental ray? It's better than post because it's real 3d motion blur for a fraction of the time of full 3d motion blur.

PiotrekM
08-12-2009, 07:54 AM
ok.
the thing is, 3d moblur vs post moblur is not like vray vs maxwell.

vray and maxwell can both produce realistic results.
Using vray over maxwell is not cheating. Maxwell is not a 'physics simulator'. Your objects inside 3d are made of polygons, not billions of particles, etc, etc, etc.
Any renderer claiming that it is physically correct... this is stupid.

Back on topic, post moblur is useful in only 10% of the jobs I do. 90% of the time it FAILS.

3d blur vs post is not a real choice; you pick post moblur ONLY when you don't have time+money/renderfarm.

If your renderer of choice is 30 times slower with true moblur, then you should buy another one that's faster.

A simple thing like 3d moblur can do A LOT to make your animation look more believable.

I can agree only about doing DOF in post. Lenscare has failed me only a few times; it has its glitches but overall I like it more than true 3d dof.

My renders: 2min/frame
Your renders: 1hr/frame

I should mention though, that if you are able to do 3d motion blur then do it. I was simply trying to explain a trick I see used a lot.

also, don't waste a post on a one-liner... explain your thoughts next time.

ciao

Syndicate
08-12-2009, 08:15 AM
Re: Mister3d - Mental Ray Rapid Moblur... I use V-ray, so going back to mental ray would be a bit problematic. Also check out this problem with mental ray... pretty funny:
http://cgpov.com/?p=153

Ok Piotrek, I agree with what you said completely. I thought from your first post that you had never tried post moblur. I was mistaken... sorry about that.
Still, 10% usability is a lot for me :) a lot of the jobs I do are commercials that usually have 2-3 people working on them. I guess the biggest problem is re-rendering and time-remapping, which is why post mblur wins for me.

I also find that I don't just do an overall motionblur but split it into parts (avoiding crossing elements on one layer)... After Effects renders the 1920x1080 frames in about 10-20 sec a frame at most.

Anyway, I guess the best solution for true motionblur that is both affordable and fast is a dedicated hardware card. The Pure cards used to do some cool stuff.

I do wonder what V-ray RT would run like on an Nvidia Tesla :D

Either way, for Archviz, Vray RT is definitely the best solution yet (IMO)

ChaosGroup
08-12-2009, 01:35 PM
Chaos Group made a number of significant announcements at SIGGRAPH 2009. The two new V-Ray products that are already rocking the 3D world in 2009, V-Ray for Maya and V-Ray RT, got their public presentation at one of the biggest and most exciting events in the industry. Without any doubt, the most remarkable of the Chaos Group announcements was the full presentation of the GPU-based V-Ray solution during the V-Ray User Event on August 6th. Chaos Software revealed test results showing that the new upcoming technology already features rendering speeds and quality that exceed existing GPU-accelerated raytracing applications.

More than 150 attendees at the New Orleans Marriott Hotel at the Convention Center followed the demonstration of test rendering performed by V-Ray on the GPU. Chaos Group showed a complex scene with 800,000 polygons and multiple bounces of global illumination rendered with V-Ray on the GPU at 6-7 frames per second at VGA resolution. Several CG experts commented that the demonstration of the GPU-based V-Ray solution was one of the highlights of the Siggraph exhibition. The full presentation of V-Ray on the GPU is available for review and download at the following link:

http://www.spot3d.com/vrayrt/gpu20090725.mov

Chaos Software performed extensive research into current and emerging technologies for the acceleration of raytracing. The already great and ever-increasing power of GPUs and the acceptance and implementation of OpenCL as an industry standard were the decisive factors for selecting this platform for future development. Massively parallel general-purpose GPUs are becoming more widely available, and with industry-standard APIs now in place to utilize them, using GPUs to accelerate raytracing finally becomes practical with V-Ray. Combined with the existing distributed rendering architecture of V-Ray RT, this solution will offer unparalleled raytracing performance far beyond what is available today. The goal of Chaos Software is to deliver on the GPU the same level of photorealistic rendering now available in the V-Ray RT engine, with complex material and lighting effects including physically accurate global illumination, glossy reflections, area lights, layered materials, etc., at speeds never imagined before.

http://chaosgroup.com/mtrlimg/sig2009/SIGGRAPH09.jpg

http://chaosgroup.com/mtrlimg/sig2009/UserEvent.jpg

Chaos Group

mister3d
08-12-2009, 02:26 PM
Thank you.

Poirot
08-12-2009, 02:47 PM
Real time architectural visualization comes to mind. So many possibilities. The future will be interesting.

mister3d
08-12-2009, 03:22 PM
Now it's visible in better quality, and Vlado says the renders are identical (and it looks like it; the brute-force interior looks completely smooth in 2 minutes! Damn, it would take half an hour on an i7). So perhaps it will be possible to use it as a production renderer? Why not, really? I wish Vlado would provide such a possibility for animation.

DrBalthar
08-13-2009, 08:57 PM
Hurray Chaos Group, they decided to go for OpenCL and not the CUDA bollocks! Very wise. Heterogeneous computing to the power!

playmesumch00ns
08-14-2009, 01:39 PM
My renders: 2min/frame
Your renders: 1hr/frame

I should mention though, that if you are able to do 3d motion blur then do it. I was simply trying to explain a trick I see used a lot.

also, don't waste a post on a one-liner... explain your thoughts next time.

ciao

Just use a proper renderer like prman :)

/troll

CHRiTTeR
08-14-2009, 02:14 PM
Just use a proper renderer like prman :)

/troll

With all respect, prman is a great renderer, but it isn't the best for everyone.

Just looking at the galleries over here is the best proof of that. Not that many individuals use it.

Prman is great for bigger studios; they've got the budget to spend on lighting/shading and technical TDs and to build a pipeline around it. That's where prman shines.

But for not-so-technical artists, or artists who want more ease of use and to keep the process simple and intuitive, it isn't very ideal.

From what I understood, prman is amazingly fast because it is a rasterizer (that also probably makes it easier to run on a GPU). But from the moment you throw some raytracing into the mix it chokes.
Maybe that has changed with the latest versions, I don't know?

Also, if you go and look at the pricing there's a huge difference.

Xharthok
08-15-2009, 01:59 AM
With all respect, prman is a great renderer, but it isn't the best for everyone.
This is an important point. And I have a totally mega hyper serious, slightly oversimplified one-point checklist for people who are considering learning Renderman:

For Maya:
You're using AA 0-2 for most of your work and you think it's the best quality/rendertime tradeoff?

-If yes, don't think about Renderman anymore.
-If no, learn it, you're gonna need it.

:)

ThE_JacO
08-15-2009, 02:41 AM
One of the guys at NVIDIA explained it to me this way: if you needed to locate a specific word in a book using a CPU, it would search each page in order until it found the word. With a GPU (and assuming you had as many cores as you did pages), every page would be searched simultaneously. Obviously this is a MUCH faster way to compute.
I'm not surprised that comes from a guy at nvidia... :)
The speed increments in parallel scalability relate to how efficiently you can thread while keeping the threads safe and not saturating the pipes.
Not everything can be threaded efficiently, and in general most things that aren't stateless will either scale inefficiently with parallelism or even lose performance while working around the limitations, making the routines advance at the speed of a quadriplegic monkey while you have 300 tiny units waiting for the last one to push out a massive mathematical turd that simply couldn't be recombined if you split it.

In the example the guy gave you, he forgot to mention what happens when his myopic reader is handed just one long page and needs to read a single sentence that can't be split into words for the other readers without ceasing to make sense :)

I'm as excited as anybody at seeing what Vlado and co. are doing, but the generalizations and predictions are getting a bit out of hand here.
In the first place, the difference isn't really about GPU vs CPU per se, but down to many specialized units in parallel VS fewer, more generic units. CPUs have been going the way of parallel scaling themselves for a while and will keep doing so, so the boundaries between the two will keep blurring.

Offloading calculations to the GPU also isn't a silver bullet; there still are severe limitations when working with them, and there is a difference between a GPU-accelerated previewer used to do a first pass on lighting and rendering before you defer the final task to normal rendering, and this being a full engine replacement.
For some markets this will no doubt be a huge improvement and might even contribute to final pictures; for many more, however, this is absolutely irrelevant and other raytracers look a lot more interesting (IE: Arnold).

Farms aren't going to be equipped with videocard-shaped 200W electric heaters anytime this year or the next. It's not practical logistically or economically, as you sure can't go cheap on any other component just because you put two Watt vampires in SLI in a 4U unit, when before you could stuff two extremely thermally and power-efficient 2U units with four quad-core CPUs in the same rack space.
And even if somebody was stupid enough to do just that, videocards still have limits as to what they can do and how much memory they come with, and many things still perform WORSE when widely split for threading and pushed through SPUs than they do when run on a quadcore.

I'm playing a lot with CUDA at home and have been for a while now, and am looking forward to OpenCL too. I'm absolutely positive that this will make more than ripples in the pond and has its place, but in no way will this be the CPU killer people seem to think it is.
Thread safety and latency issues alone are enough to make any grown man cry when trying to implement certain things, and even when you finally shed your previous 10 years of assumptions and comfort and rewire considerable portions of your brain to re-design and re-implement a lot of stuff, some problems are simply impossible to solve efficiently, or the workarounds so involved and brittle that they really aren't worth the tradeoff.

What I find extremely good about news such as this and other GPU technologies, though, is that it means that programming, and a number of other things, are finally getting out of the stale ditch they were in, and are now progressing in such a way that future generations of hardware won't be sitting there, crippled and bleeding, 90% of the time, like my dual quad-core workstation at work when it runs maya with a CPU graph that never makes it above 13% :)

So let's keep rejoicing in the innovation and new tools brought to the table, brothers! But don't sell your CPU yet or start blindly swearing by whatever half-arsed statements nVIDIA and ATI feel like throwing around to convince you that a nuclear-powered 4-way Xfire or SLI is exactly what you need in 2010 ;)
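
(ThE_JacO's point about serial bottlenecks is essentially Amdahl's law; the numbers below are illustrative, not measurements. If a fraction p of the work parallelizes and the rest is serial, N cores give a speedup of

$$S(N) = \frac{1}{(1 - p) + p/N}$$

so with p = 0.9 even 400 cores only yield about 1/(0.1 + 0.9/400), roughly a 9.8x speedup: the 400 "professors" spend most of their time waiting on the serial 10%.)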

mister3d
08-15-2009, 07:53 PM
Farms aren't going to be equipped with videocard-shaped 200W electric heaters anytime this year or the next. It's not practical logistically or economically, as you sure can't go cheap on any other component just because you put two Watt vampires in SLI in a 4U unit, when before you could stuff two extremely thermally and power-efficient 2U units with four quad-core CPUs in the same rack space.
And even if somebody was stupid enough to do just that, videocards still have limits as to what they can do and how much memory they come with, and many things still perform WORSE when widely split for threading and pushed through SPUs than they do when run on a quadcore.
But comparing the speed, won't a handful of videocards soon be enough to replace a big farm (considering the speed increase), especially for those who are starting to build new farms? And what kind of limitations do you think there will be in terms of practical rendering with GPUs?

Titus
08-15-2009, 08:49 PM
From what I understood, prman is amazingly fast because it is a rasterizer (that also probably makes it easier to run on a GPU). But from the moment you throw some raytracing into the mix it chokes.

PRman is fast because you can tweak your scenes and use all the tricks you want. I've used PRMan in small studio environments, as the only shading TD, with no problems. But you're right, as a PRMan user you need more technical abilities than just clicking a render button.

ThE_JacO
08-16-2009, 06:12 AM
But comparing the speed, won't a handful of videocards soon be enough to replace a big farm (considering the speed increase), especially for those who are starting to build new farms? And what kind of limitations do you think there will be in terms of practical rendering with GPUs?
I wish I had a crystal ball and could tell you exactly what the future holds :) Not being a rendering engine developer I can't really say what the problems of GPU ports would be in that specific domain, but at the very least, given my experience in other fields, I can imagine memory being a pretty big issue, not to mention a limit on what algorithms you can implement, because some scale terribly in parallel, or mostly get bottlenecked by pipe width and would be severely affected by too many threads pulling data from the same pipe.

The moment you leave the realm of video memory and start paging in and out of the system's ram, things fall apart quickly. Given how many times memory, even today with 8 or 16GB per blade, has been the limiting factor in delivering a shot, I'd say that having to fork out 2.5 grand more for a system with only 3.8GB of available VRam would make it pretty limiting and expensive.

Also, given how very few things make any decent use of GPUs right now, and the cost of equipping a farm with decent videocards, I'd say that even if you were building a new farm, if it had to be generically purposed, it's not time yet. When all is considered, a quadro-enabled farm costs over time and a half a normal one to set up, and considerably more to maintain.

Maybe if you have in-house dev that is heavily biased in this direction it already makes sense to have at least part of the farm that way (maybe ILM has done something already?), but that's only a limited number of players who could consider it, even in the top tier.

Again, if you could be certain that just deferring part or all of the processes to GPUs would give you 10x the speed, then sure, this might be interesting, but in reality a lot of stuff can't be deferred to a GPU at all, or when it is, it doesn't perform any better than it does on a CPU.

I would definitely see workstations or personal/departmental minifarms, based on solutions like Intel's 2nd-generation plans for Larrabee or nVIDIA Tesla boxes, spearheading this a lot sooner than I can see a lot of blades being mounted with SLI quadros.

mister3d
08-16-2009, 09:52 AM
Thanks ThE_JacO, that's interesting.

TwinSnakes
08-17-2009, 04:24 PM
Again, if you could be certain that just deferring part or all of the processes to GPUs would give you 10x the speed, then sure, this might be interesting, but in reality a lot of stuff can't be deferred to a GPU at all, or when it is, it doesn't perform any better than it does on a CPU.

I would definitely see workstations or personal/departmental minifarms, based on solutions like Intel's 2nd-generation plans for Larrabee or nVIDIA Tesla boxes, spearheading this a lot sooner than I can see a lot of blades being mounted with SLI quadros.

I have to totally disagree here. They (ChaosGroup) did what nVidia could not do with a 3-5 year head start AND a whole building full of GPU programmers. nVidia said Gelato was production ready, and they couldn't even give it away for free. I'm not in the industry at the production level, but I'm sure we all suffer from the same albatross - Time.

Consider Pixar: 360 servers running RenderMan, each rendering a different frame at 17+ hours per frame for production films. That's with no GI, fake reflections, and some textures for decals. I can't think of a deployment/support dollar amount that would be cost prohibitive for them if it meant a 1,000% or greater speedup in per-frame render times.

VrayRT is not production ready, but when it is..... :bowdown:

ThE_JacO
08-17-2009, 07:40 PM
I have to totally disagree here. They (ChaosGroup) did what nVidia could not do with a 3-5 year head start AND a whole building full of GPU programmers. nVidia said Gelato was production ready, and they couldn't even give it away for free. I'm not in the industry at the production level, but I'm sure we all suffer from the same albatross - Time.

Consider Pixar: 360 servers running RenderMan, each rendering a different frame at 17+ hours per frame for production films. That's with no GI, fake reflections, and some textures for decals. I can't think of a deployment/support dollar amount that would be cost prohibitive for them if it meant a 1,000% or greater speedup in per-frame render times.

VrayRT is not production ready, but when it is..... :bowdown:
On all those frames, though, they also have rather large point clouds, motionblur and quite a few other things.
The increment wouldn't be 1000%; in fact, the "increment" would be that things wouldn't render at all on the current GPU-powered engines out there, including vray's first iteration ;)

The examples relating to gelato I'm not sure what to make of. That was quite a few years back, well before things such as CUDA and OpenCL, and it was mostly a bastard child of BMRT; it was also not exactly given away for free until it was considered dead. If anything, it goes to show that even with a free product people still wouldn't equip farms with videocards.

Of course things will change in time. But I still doubt you're going to see mid-sized and up studios running out to buy videocards for all the blades on the farms anytime this year. I would expect small clusters or renderstations to happen a lot sooner than that, in fact.

CHRiTTeR
08-17-2009, 08:20 PM
Of course things will change in time. But I still doubt you're going to see mid-sized and up studios running out to buy videocards for all the blades on the farms anytime this year. I would expect small clusters or renderstations to happen a lot sooner than that, in fact.

Of course it won't be lots of 'videocards' for renderfarms.
But like someone already mentioned earlier in this thread, think what will be possible with tesla systems.

http://www.nvidia.com/object/tesla_computing_solutions.html

And intel & amd are clearly also interested and competing.

Sure, this change/adoption will take some time, but I think it will happen relatively fast.

ThE_JacO
08-18-2009, 12:20 AM
Of course it won't be lots of 'videocards' for renderfarms.
But like someone already mentioned earlier in this thread, think what will be possible with tesla systems.

http://www.nvidia.com/object/tesla_computing_solutions.html

And intel & amd are clearly also interested and competing.

Sure, this change/adoption will take some time, but I think it will happen relatively fast.
It was probably me mentioning those :p
That's what I meant with "I would expect small clusters or renderstations to happen a lot sooner than that, in fact."

CLouDZeRo
09-21-2009, 06:10 PM
this sounds too good to be true. someone is going to lose a lot of money because of this, everyone from renderfarm dealers to nvidia's quadro line. i think there will be lawsuits, or chaos is going to release it in slow increments to milk as much money from customers as possible

CGTalk Moderation
09-21-2009, 06:10 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.