mental ray + Cuda = Mental Ray 4.0 ?


#41
  Yes it is around the corner. Around six months from what I hear.
  
  The link you posted just backs up a few of my points. Unless I'm missing something, FireStream requires the Stream Computing Software Stack (which in turn depends on Catalyst drivers, which require an AMD GPU; correct me if I'm wrong).
  So just re-read my last post, replace "CUDA" with "software stack" and "Nvidia" with "AMD", and it should all make just as much sense. Also note my comments about using these in farms ...

Last time I worked in VFX (roughly two weeks ago), 99% of the big places had Linux render farms. The Stream Computing SDK from AMD is Windows XP only (and, like CUDA, closed source).
Might prove a bit of a roadblock, even if anyone were crazy enough to try implementing support in their renderer as soon as FireStream cards can be purchased (not before September).
They would face the issue that anyone interested in then using it (probably around the time Nehalem becomes available anyway) would need to switch their farm to Windows XP. I think most places would rather eat their shorts. :stuck_out_tongue:

  Going back to CPUs: AMD, too, will likely have a CPU with specs similar to Intel's Nehalem.
  And this generation of CPUs will be out much sooner than any vendor could add production-ready support for GPU stuff to their renderer, and much, much sooner than any sane decision-making person at a big place will smoke a pot big enough to hallucinate about putting cards of any sort into their farms' blades. ;)
  
  As such, it is simply contraindicated to spend time on adding GPU support to one's renderer right now. Even more so with the rapid developments on the hardware side of things.
  
  From your comments in other threads and the love you regularly express for mental ray, I would guess that you are probably one of those people who render single-frame architectural stuff on one or several desktops, and that mental ray is probably the only renderer whose use (and quirks) you understand more or less thoroughly.
  
  If that is the case, consider that almost everything you believe you know about "rendering" does not apply in the world of VFX or full-CG features, and I don't mean that in any pejorative way. Consider then that all my comments apply solely to the application of rendering in those two use cases.
  
  You are mistaken in assuming that my regularly expressed amusement at people's affection for mental ray stems from a lack of knowledge about this renderer.
  
  I worked in commercials for many years before switching to VFX, and in that field many places still haven't seen the light; they put their artists through the daily pain that comes with using mental ray for most kinds of animation work.
  Working freelance, I thus inevitably ended up, sooner or later, at a place where I had to cope with it. My opinion on mental ray is based solely on these "experiences" (and there were enough of them for two lives).

Let's just say I'd gladly have missed every one of them. :slight_smile:

  .mm

#42

LOL, we all know that you work in VFX; repeating it forever doesn't make you any better, believe me

from your comments I'm sure you don't know anything about mental ray (and probably not much about most of the things you talk about), but you like to talk…
oh, and yes, I know you work in VFX LOL

what are you talking about? you're talking about software engineering, this is not “rendering”

really? PROVE IT

LOL

yep, you've seen the light… on the road to Damascus?

yep, you were frustrated because you don't know how to work with it… that was clear to me :wink:


#43

Cool down, guys. AMD and Nvidia GPUs have reached “1 teraflop” processing power.

http://www.techpowerup.com/63068/AMD_FireStream_9250_Breaks_the_1_Teraflop_Barrier.html

http://www.tomshardware.com/reviews/nvidia-cuda-gpu,1954.html

Projects like Folding@Home, along with many other individual projects and tests on hardware and software geek sites, are showing that the processing power of GPUs is in some cases 100 times that of the latest and greatest Intel CPU. There's no denying that. Period.

So power-consumption-wise, the GPU is a no-brainer. Double-precision 64-bit floating point IS there. The real problem IS that this tech needs an industry standard that developers can embrace freely.
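
To illustrate (a minimal sketch of my own, not anything from the thread or from mental images): on CUDA hardware that exposes double precision, a 64-bit kernel looks exactly like a 32-bit one; only the compile target changes. The kernel name and launch sizes below are made up for illustration.

```
// Minimal sketch: a double-precision "daxpy" kernel.
// Assumes a GPU with compute capability >= 1.3 (e.g. GTX 260/280) and
// compilation with "nvcc -arch=sm_13"; on older parts nvcc quietly
// demotes the doubles to single precision.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // 64-bit multiply-add per element
}

// Host-side launch, assuming x and y were allocated with cudaMalloc
// and filled via cudaMemcpy:
//   daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
```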

Nvidia with CUDA and its GPUs, AMD with its GPUs and upcoming CPU/GPU hybrid, and Intel with Larrabee (also called “Laughabee” by the Nvidia CEO) will fight their asses off to win this race, because it obviously IS the future.

I'll have to agree, though, that RenderMan has a huge advantage over mental ray during this revolution. Not only because RenderMan is REYES-based, but because mental ray, which is now owned by Nvidia, will place all its bets on CUDA. So if CUDA fails to establish itself as an industry standard, mental ray might end up as pretty much bloatware.

AND who knows. Maybe we'll see another renderer surpass both RenderMan and mental ray: Maxwell, V-Ray, Houdini's Mantra, etc… Or maybe a totally new renderer born on the GPU, like Crytek's CryEngine.


#44

Hey Mauritius, the new Nvidia GTX 260 and 280 do full double precision with very good performance; it's one of the bigger features of these cards…

rBrady, I'm pretty sure (like 99% confident) the lighting in that clip is not pre-baked and is all realtime; if anyone knows better, feel free to correct me.


#45

Matteo,
you are actually not answering the point.
So besides the anger (which probably comes from abuse of mental technology), what is your point?

from your comments I'm sure you don't know anything about mental ray (and probably not much about most of the things you talk about), but you like to talk…
oh, and yes, I know you work in VFX LOL

Elementary, Watson.
So, can you detail the chain of syllogisms that led you to this conclusion?

what are you talking about? you're talking about software engineering, this is not “rendering”
really? PROVE IT
LOL

Prove what? I don't recall you proving your comments before. Why should he have to?

yep, you've seen the light… on the road to Damascus?

Don’t watch the Blues Brothers too many times, it might impact your sense of judgement.
So, contrary to what Mauritius assumed, did you ever actually (ab)use another renderer in your life? And not just for architectural pr0n.

yep, you were frustrated because you don't know how to work with it… that was clear to me :wink:

C’mon, man. Are you joking? Be respectful.

And I'm not gonna comment on the Italian cook provocation: never mix silly work with absolute pleasure.

Pace.

P


#46

Kabib, you might be right. I tend to think of cityscapes like that as having the lighting baked, but I could be wrong. I also think that comparing realtime rendering performance to our offline renderers is not quite fair; offline renderers normally take far more samples per pixel.

I was also re-reading the email about Disney's GPU farm. I misspoke: he didn't mention CUDA specifically. So they could be using ATI's CTM (Close To Metal). I would be a bit surprised if they are using CTM; it's all assembly-level and pretty hard to develop on.


#47

Speaking of being respectful, everyone needs to chill. It's obvious people will continue to use mental ray for their own reasons, and that should be respected. Personal attacks because people use mental ray are pretty silly and make others who use RenderMan seem elitist. They're flippin' renderers! Paolo is doing a good job helping educate people on 3Delight on his site. I would expect more of that versus, “hahaha I've been in VFX longer than you and I use RenderMan because MR sucks… and I know because I've been in VFX longer!”

This wasn’t directed at anyone, but the ego trips need to go or else this thread is pointless and should be shut down.


#48

I agree with you.


#49

I know you did not mean to direct your criticism at anybody.
I just want to point out that Moritz is one of the administrators of Liquid, a contributor to the RenderMan API, a Siggraph course teacher and the developer of the open-source Affogato for XSI, so he has done a sh!t load more than me for the community, and he is always keen to share his knowledge in an open way.

The criticism that lots of the VFX people direct at the makers of mental ra(dela)y is because their policy is to trap people in a tricky & closed API and in a closed business model that is unfit compared to newer, more democratic, modern ones.

I worked there, I know them, I absolutely avoid them.
There is nothing terribly bad about mental ray in the end; you can do pretty much everything with it (except animation - ok, you can do it, but you need to be a masochist). It's just the business model, the management and the API that suck ba||s.

p


#50

Yeah, but does it really matter? If it's running at 30 fps at 2K, run it at 6K at 10 fps and resize: boom, you've got the best sampling possible and it's still a crapload faster…
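
For what it's worth, here's a rough sketch of that idea (my own illustration, not from any shipping renderer; the kernel name and the 3x ratio are just assumptions): rendering 6K and box-averaging down to 2K is effectively 9 samples per destination pixel.

```
// Hypothetical sketch: collapse each 3x3 block of a 6K framebuffer into
// one 2K pixel, i.e. 9 effective samples per output pixel.
__global__ void box_downsample_3x3(const float3 *src, float3 *dst,
                                   int srcW, int srcH)
{
    int dstW = srcW / 3, dstH = srcH / 3;
    int dx = blockIdx.x * blockDim.x + threadIdx.x;   // output pixel x
    int dy = blockIdx.y * blockDim.y + threadIdx.y;   // output pixel y
    if (dx >= dstW || dy >= dstH) return;

    float3 sum = make_float3(0.f, 0.f, 0.f);
    for (int j = 0; j < 3; ++j)                       // 3x3 box filter
        for (int i = 0; i < 3; ++i) {
            float3 s = src[(dy * 3 + j) * srcW + (dx * 3 + i)];
            sum.x += s.x; sum.y += s.y; sum.z += s.z;
        }
    dst[dy * dstW + dx] = make_float3(sum.x / 9.f, sum.y / 9.f, sum.z / 9.f);
}

// Launch, assuming d_src and d_dst were allocated with cudaMalloc:
//   dim3 block(16, 16), grid((srcW/3 + 15) / 16, (srcH/3 + 15) / 16);
//   box_downsample_3x3<<<grid, block>>>(d_src, d_dst, srcW, srcH);
```

Whether nine box-filtered samples really stand up to an offline renderer's adaptive sampling is a different question, of course.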

I can't see how offline rendering can keep up; there is simply vastly more money being invested in realtime rendering hardware and software.


#51

More grist for the mill in this hot-topic thread. :smiley:

Nvidia Cuda Zone
Tomshardware
CGTalk


#52
I agree, this reminds me of the days when SGI was big. They couldn't keep up with gaming cards. Now all workstation cards are based on gaming cards (the Quadro/FireGL lines are slightly modified, but basically the same). Volume wins even when the architecture doesn't quite fit.

It's interesting to see the storm brewing around the next generation of computing. x86, while still viable, is showing its age. As far as the next step goes, this is what I have heard.

AMD has their Fusion architecture to fill the gap. This is basically an ATI GPU added to their CPU die. AMD is a smart bunch and I'm sure it will be very good, but I worry about the memory bandwidth limitations of putting everything on one die.

Intel has Larrabee, which, while it sounds cool, seems too little too late. Intel's follow-on will be a joint project with Cray. Cray has been the master of vector computing for 30 years, but has usually had problems with manufacturing. With Intel being the master of manufacturing, this should be a good match. But since we won't see anything for a few years, it's hard to get too excited.

nVidia's CUDA is unique in that it's here now, it works, and people are already using it. It also appears that the other alternatives will be equivalent at best. Intel has a huge amount of ground to cover over the next few years, and even if they cover it, they are late to market. The best scenario would be ATI and nVidia agreeing on a standard for a CUDA-like architecture, very much like AMD and Intel agreed on a 64-bit standard. I have my doubts that this will ever happen, for several reasons. But one can hope.

#53

the point is: there isn't just one Market in the renderer world, there isn't one Philosophy; there are many impressive renderers and many different types of work. Your (VFX) market isn't the only one, nor the first!! I really hate absolutism, contrary to Mauritius

syllogism?
this is enough for me

you know this is not true, even if you talk only about VFX (and again, that isn't the only market in the world!!); you have changed many of your opinions over the past years, Paolo, let me just say that

ahah Paolo, this is the cat chasing its own tail
I was thinking he doesn't know anything about mental ray but I never expressed that thought; he was the first to write this:

so, perhaps he has a guilty conscience? :wink:

you are probably kidding me, Paolo, but again, your opinions have changed over the past years, so I can understand this answer; so probably:

  • V-Ray
  • Maxwell
  • fR
  • Turtle
  • modo (renderer)
  • Brazil

those aren't “real” renderers for you right now
I gave your 3Delight shaders a try, do you remember that, Paolo?
I use what I think can help me in my work, and right now mental ray is what best covers my needs
and again, we are talking about rendering; archviz is part of this world, even if you don't like it

I respect those who respect me first; if someone wants to prove I'm a f**ing monkey because I'm using a particular piece of software, I become much less respectful

pace

M


#54

…AMEN! :wink:


#55

Chill out, everyone! It's disturbing to see such respected folks throwing trash at each other in a public forum over this.

Paolo, mm, dagon, all you guys are very, very well respected in these forums and everywhere else on the net for your contributions. So you definitely know better than to bicker over this pointless, age-old renderer 1 vs. renderer 2 debate.

And dagon does have a point. VFX isn't the only industry that needs to render stuff. Quite a few renderers over the years have found their niche markets and are doing comfortably well in their respective turfs. So it's pointless to argue that one renderer is better than another, as you much more experienced folks would surely agree. You know better than anyone that the end user will adopt whatever suits his job.

Btw, archi porn? Hehe, dig that term :wink:

-Sachin


#56

To be cheeky, I would say VFX is becoming the niche market in the world of rendering :slight_smile:


#57

You can't fight ego with ego; they both just get stronger. Let's stay on topic.


#58

I don't like posting links to other forums, but if this is of any interest to anyone here, there's some kind of CUDA contest going on:
http://www.cgtantra.com/forums/forumdisplay.php?f=149


#59

I am not going to reply line by line, otherwise I would just fire this thread up further.

So, read carefully: I meant that you were not answering the point. We were speaking about CUDA and all the misguided speculation about porting mental ray to it, not about “this market is better than that one”. The rant you keep up in other threads as well is not necessary here (nor there). So please stop it.

Also, nobody called you an f-monkey, if I recollect correctly, while I think you are being very aggressive with Moritz (e.g. going to his personal blog to complain about wrong spaghetti bolognese etc… that was really pathetic). So I tried to explain to you that you should respect him because he has done a lot for the community, and I tried to calm you down. It looks like I did not manage.

This thread is about mental ray + cuda speculation.
Feel free to contribute if you have something to say.
Otherwise relax.

p


#60

really? or about “speculations”? :wink:
this is the only “official” news, Paolo:

yours and Moritz's are just speculations, nothing more, nothing less