mental ray + CUDA = mental ray 4.0?


#1

News has been floating around that nVidia has closed down Gelato and moved that team on to porting mental ray to CUDA. For those who don’t know what CUDA is, it lets you run C code on the GPU. I have a friend who has ported several of his applications to run on CUDA and has seen a 12x to 180x increase in performance going from a 2.4 GHz Intel quad-core to an 8800 GT. The architecture requires code to be massively multithreaded: MS Word would run like crap, but Photoshop could be 10x as fast (likely more than that).
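To give a flavor of what that looks like (just a toy sketch of my own, nothing to do with mental ray), a CUDA “kernel” is basically a plain C function that gets launched across thousands of GPU threads at once:

```
// Toy CUDA sketch -- illustrative only, every name here is made up.
#include <cuda_runtime.h>

// A plain C-style function, marked __global__ so it runs on the GPU.
__global__ void brighten(float *pixels, float amount, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < count)
        pixels[i] += amount;
}

int main()
{
    const int count = 1 << 20;                      // ~1M values
    float *d_pixels;
    cudaMalloc((void **)&d_pixels, count * sizeof(float));
    cudaMemset(d_pixels, 0, count * sizeof(float));

    // Launch one thread per pixel, grouped into blocks of 256.
    brighten<<<(count + 255) / 256, 256>>>(d_pixels, 0.1f, count);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    return 0;
}
```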

This seems to be the next big revolution. I know there are naysayers who will claim that CUDA isn’t that good. It has some strong limitations, but most of those don’t affect rendering at all. nVidia’s GPUs are designed from the ground up for rendering.

What are everyone else’s thoughts on this?


#2

Not everyone runs nVidia, or the chipsets that would make use of CUDA. They’d polarize the community that uses mental ray. I think it’ll stay a small community, like Gelato’s was.


#3

Yes, but since mental ray is owned by nVidia, I don’t think they are going to worry too much about offending ATI. Intel flipped the whole GPU industry the bird when they bought Havok, canned the GPU products, and shifted to CPU-only methods. Having said that, nVidia would be smart to offer both a CUDA version and a non-CUDA version, in my opinion.


#4

Why would there have to be two versions? Couldn’t it be as simple as turning on a “use CUDA” button?

You know, like when you can use the “high quality renderer” in the viewport if you have a video card that can handle it.

If your computer can use the technology, great. If it can’t, you could still render as usual through your CPU, no?

I’m by no means a programmer, but why discourage the implementation if it could bring us 10x+ speedups on certain rendering tasks?

…I mean… I don’t see how everyone would not move to this technology if all it took was an investment in video cards for 10x the speed.

That’s pretty cost-effective if you compare it to investing in massive render farms to get the equivalent of a couple of video cards.


#5

Hi,

the newest render kernel from mental images is not mental ray. Mental images has produced another render engine, the RealityServer. This new technology is for the future.

Look here: http://www.mentalimages.com/2_3_realityserver/index.html


#6

From what I read on the MI site, RealityServer doesn’t look like it competes with mental ray in any way. It doesn’t even provide the same functions.

Am I wrong?


#7

RealityServer is a platform for a different industry.

The CUDA idea seems like a fairly good one, to an extent, although cost-wise I’m not sure. A bank of Quadros chugging away can’t be cheap.


#8

I would have thought that would be true too, but there have been problems with frames looking different when you do that. RenderMan frames rendered on Intel used to look different from RenderMan frames rendered on AMD. Maya 64-bit renders slightly differently than Maya 32-bit, and so on.

I totally agree, it’s a big problem for mental images to support two different versions of the same renderer. It’s also a big problem for them to limit which video cards people can use. But at 10x the performance, it all seems more than worth it.

That would be true, but you can run this on any 8-series nVidia card or higher (only the new 10-series will do double-precision floats, though). The 10x figure comes from a ~$200 GeForce 8800 GT. Even if that is off by a factor of two, it’s still very cost-effective.

There are some problems with CUDA, but the benefits will be so huge that I can’t imagine it not being well worth it, especially now that nVidia owns mental images. Anything that sells more video cards is a win for them.


#9

The possibilities are endless.

I don’t even need a 10x increase in rendering… if I could take advantage of my video card in any way, I would be happy (I have to have one anyway). Especially for calculating motion blur, radiosity, caustics, refraction, etc.

I don’t see how this couldn’t be incorporated into the viewport. I mean, nVidia shaders are already in Maya for real-time preview, so why can’t we dream of Maya or mental ray shaders handing shading work off to the video card, or of real-time feedback in the viewport?

One can only hope…:drool:

…and who the hell cares about ATI not being able to keep up? That’s exactly what keeps progress going… competition!

I’m sure ATI would have to come up with a solution to compete, which sounds good to me. Especially since they’re owned by AMD now… now that’s a pair I want to see get creative! :beer:


#10

Raytracing calculations would be similar to physics calculations, so I wonder if the recent Ageia acquisition will play any role in future mental ray rewrites.

Getting every feature of mental ray to translate into CUDA might not be possible in the short run, but every possible acceleration through nVidia GPUs is welcome.

People are just asking too much from mainstream CPUs. After all, why should an office worker’s CPU have the floating-point power of a render farm in the future?


#11

What? Gelato’s dead?
Carry on the interesting discussion anyway :slight_smile:


#12

From what I have heard, developing in CUDA is very different from coding for a normal x86 CPU. For instance, you never use loops; instead, you fork out little threads to do what a loop would do. So instead of looping over something 10,000 times, you run 10,000 threads. nVidia GPUs need 1,000 or more threads to run at their best. They have a funky hyperthreading-like architecture that lets them switch between threads while one is waiting on memory. On a normal CPU you can get, at best, 50% of your memory bandwidth; with nVidia GPUs you can get better than 90%, and their bandwidth is 4 to 6 times as high to begin with.

In short, nVidia GPUs are CRAZY fast if you can give them a grid of things to do. The architecture has some serious limitations if what you’re doing can’t be multithreaded. Rendering is where CUDA should really shine.
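To make the “no loops, just threads” idea concrete, here’s a hedged little comparison (my own invented example, not code from mental ray or Gelato): the same 10,000-sample job written once as a CPU loop and once as a CUDA grid of threads.

```
// Invented example: apply a gain to 10,000 samples, two ways.
#include <cuda_runtime.h>

// CPU style: one thread walks the whole array in a loop.
void gainCPU(float *samples, float gain, int count)
{
    for (int i = 0; i < count; ++i)
        samples[i] *= gain;
}

// CUDA style: no loop at all. Every element gets its own thread, and the GPU
// hides memory latency by switching to other threads while one waits on a load.
__global__ void gainGPU(float *samples, float gain, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        samples[i] *= gain;
}

int main()
{
    const int count = 10000;                 // 10,000 samples -> ~10,000 threads
    float *d_samples;
    cudaMalloc((void **)&d_samples, count * sizeof(float));
    cudaMemset(d_samples, 0, count * sizeof(float));

    // Grid of threads, grouped into blocks of 256.
    gainGPU<<<(count + 255) / 256, 256>>>(d_samples, 2.0f, count);
    cudaDeviceSynchronize();

    cudaFree(d_samples);
    return 0;
}
```

Same math in both versions; the GPU one just trades the loop for a grid launch, which is why it only pays off when there are thousands of independent things to do.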

If we did get a real-time preview, I would guess it would just be a refreshed, IPR-type thing. Even 10x isn’t anywhere near real time for most of what I work on. It would still be awesome, though!


#13

It is when 4 of them outperform a €3,500,000 supercomputer:
http://fastra.ua.ac.be/en/benchmarks.html

Can’t wait :thumbsup:


#14
  • No, the team has not been moved to mental ray; actually, several people on the team left, and the others moved to other projects.

  • No, mental ray won’t ever be ported to CUDA; there is no intention to, and it simply can’t be. Only some parts could be, and if so it will be done just for marketing reasons.

p


#15

For the future of what?

There is basically nobody using RealityServer, which in any case is not designed for production needs. It is (IMO) unsuccessfully aimed at a completely different market.
The latest kernels of mental ray are actually inheriting technology from RS, e.g. the BSP2.

p


#16

So Gelato is officially dead, huh? Sad… it had great potential…

That’s even sadder…

Could you provide a quick explanation of why you believe this? Technically, I mean.


#17

Even if only parts of it are ported, there is the potential to speed up the process, even if it’s just for passes (Gelato’s ambient occlusion was fairly quick). And I know a fair number of studios use nVidia Quadros (we have a couple hundred).

Given the current generation of GPUs, it wouldn’t seem a stretch to use the technology for film rendering eventually. I know there was also talk (not sure how far it’s gotten) of porting mental ray to Cell processors.

IBM, not being a slouch on the technology front, already uses them alongside Opterons in Roadrunner. It’s doubtful they spent that much R&D on a marketing ploy. I’d be more interested in that result, as it’s more immediately possible.


#18

Here you go.

So, to make things work on CUDA you need a lot of coherent computations to perform. The problem with mental ray is that it casts one ray, then calls a native C shader to shade the intersection, then maybe that shader casts another batch of rays, and so on. Very incoherent.
This is just not suited to GPU computation unless mental images does a massive rewrite/refactoring of the pipeline, which will not happen soon enough (I am talking about years).
Some specific tasks like fast AO could definitely be done, though (and even that won’t happen soon).
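Just to sketch what I mean by coherent vs incoherent (everything below is invented for illustration, it is not mental ray’s actual code or API):

```
// Made-up sketch of the two styles -- all types and names are hypothetical.
#include <cuda_runtime.h>

struct Ray { float o[3], d[3]; };
struct Hit { float t; int prim; };

// --- CPU / mental ray style (conceptually) ---------------------------------
// One ray at a time, and the native C shader callback can decide to cast more
// rays in the middle of shading. Every ray takes its own path: incoherent.
static Hit intersectOne(const Ray &) { return Hit{1.0f, 0}; }      // stub
static float shadeOne(const Hit &, int depth)
{
    if (depth > 0) {
        Ray secondary{};                     // the shader spawns another ray
        return 0.5f * shadeOne(intersectOne(secondary), depth - 1);
    }
    return 1.0f;
}

// --- GPU-friendly style -----------------------------------------------------
// Whole batches of rays are intersected in lockstep, then shaded in another
// batch: thousands of threads all doing the same kind of work, i.e. coherent.
__global__ void intersectBatch(const Ray *rays, Hit *hits, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        hits[i] = Hit{1.0f, 0};              // stand-in for real traversal
}

int main()
{
    shadeOne(Hit{1.0f, 0}, 2);               // the recursive, incoherent path

    const int n = 10000;
    Ray *d_rays;  Hit *d_hits;
    cudaMalloc((void **)&d_rays, n * sizeof(Ray));
    cudaMalloc((void **)&d_hits, n * sizeof(Hit));
    intersectBatch<<<(n + 255) / 256, 256>>>(d_rays, d_hits, n);
    cudaDeviceSynchronize();
    cudaFree(d_rays);
    cudaFree(d_hits);
    return 0;
}
```

Getting from the first shape to the second is exactly the kind of rewrite/refactoring I mean.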

Also, another problem with mental ray is that the codebase is 15 years old (and it’s not a whisky): native C for the CPU, full of pointers, memory allocations, and other shit, so, to use a euphemism, it’s a fcking mess.

p


#19

Eventually yes, I agree.

> I know there was also talk (not sure how far it’s gotten) of porting mental ray to Cell processors.
>
> IBM, not being a slouch on the technology front, already uses them alongside Opterons in Roadrunner. It’s doubtful they spent that much R&D on a marketing ploy. I’d be more interested in that result, as it’s more immediately possible.

It might end up working, but it will never be as fast as classic x86, and above all it will never be cost-effective (given the price of Cell blades).

p


#20

Thanks a lot for your thorough explanation, Paolo.

I guess I’ll stick to Gelato for now. It does a lot of things very fast and will only get faster as new hardware comes out.

Even though development has stopped, it will do just fine until pigs fly and we get our hands on a more resource-efficient renderer.

Thanks again. :slight_smile: