
Mental Images Berlin office closed?


ACantarel
05-26-2011, 11:30 PM
Just stumbled over this one...

https://twitter.com/#!/virtualritz/status/72603197121896448

Has anyone more news about it?

mercuito
05-27-2011, 12:11 AM
I wouldn't pay too much attention to rumors. The official nvidia press release seems much different.

Bullit
05-27-2011, 12:11 AM
Mental Images is dead. NVIDIA says it will integrate the technology into its Quadro GPU line development, so it appears the Quadro line will get a render engine.
The mental ray name will probably be kept for the software part.

Bullit
05-27-2011, 12:16 AM
More info :
http://forum.mentalimages.com/showthread.php?8432

mercuito
05-27-2011, 12:39 AM
More info :
http://forum.mentalimages.com/showthread.php?8432

Hmm interesting stuff in that thread. This could definitely be a good thing for mr.

EdtheHobbit
05-27-2011, 01:00 AM
hrm. I'm cautiously optimistic -- it seems like mental ray's development has been a little "here and there" for a long time. I think it's been lacking a clear set of goals, which maybe nvidia's research group will be able to help drill down.

Kabab
05-27-2011, 01:08 AM
"OK, let me clarify – yes, we’ve just gone through a reorganization, and are now combined with other professional software teams within NVIDIA. There have been staff reductions, but most were due to overlapping disciplines and some priority shifts. At the same time, teams on core technologies like mental ray, iray and cloud rendering have staffed-up by a fair bit. All the development managers are in place, and are now working closely with their peers within NVIDIA. The truth is, that being “independent” from NVIDIA has kept us from leveraging some serious resources and know-how there (and vice versa). We will now take full advantage. Things are changing here – but we think you’re going to like the results."

Sounds good to me!

mustique
05-27-2011, 08:44 AM
That's all corporate nonsense talk. Mental images seems to be dead and the future of mental ray will probably be the same as Nvidia's other renderers. Mental images was acquired to keep Intel (and its Larrabee team) away from mental ray anyway.

Well, I hope that's the final wake up call for ADSK to look for another renderer :wise:

ThE_JacO
05-27-2011, 09:10 AM
hrm. I'm cautiously optimistic -- it seems like mental ray's development has been a little "here and there" for a long time. I think it's been lacking a clear set of goals, which maybe nvidia's research group will be able to help drill down.
Oh yeah, because historically that went great for Larry Gritz and Gelato ;)

If this is as literal as Moritz puts it in his tweet, MRay will become an architectural rendering engine oriented towards selling as many GPUs as possible.

Looking forward to nVIDIA killing yet another product in the effort to convince us that our renderfarms should pack a power-hungry 400W SLI setup in every blade :) At least this time around, if they do it, they will be killing something that's been holding the industry back more than it's been helping it for years now.

Bitter
05-27-2011, 09:21 AM
This has been in the works for some time.

That's all corporate nonsense talk. Mental images seems to be dead and the future of mental ray will probably be the same as Nvidia's other renderers. Mental images was acquired to keep Intel (and its Larrabee team) away from mental ray anyway.

mental images was acquired for two reasons: patent portfolio and their knowledge of rendering for both hardware and software. Many corporations will invest in a smaller company rather than take the time to recruit and build their own. That is slow and costly as opposed to just buying what you want.

Now, the above mental images quote isn't corporate nonsense. The above quote is from Barton Gawboy, their director of training. Someone I have worked with very closely as of late. He is not given to flights of fancy or corporate mumbo jumbo in the slightest. He's very good at not putting his foot in his mouth. The feeling I get is genuine excitement from some of the people at mental images.

This change has already been underway for some time and you haven't seen an end to mental ray in that time; in fact you have seen the creation of MetaSL (mental mill) and iRay.

Nvidia has a very special interest in mental ray. Look here: http://www.irayrender.com/

It wouldn't behoove them to dismantle it.

Bitter
05-27-2011, 09:29 AM
Oh yeah, because historically that went great for Larry Gritz and Gelato

Mmm, yeah there's a much larger back story there. Part of the purchase of BMRT -> Exluna/Entropy, etc was a calculated move since they were being pursued legally by Pixar.

Once that was over, Nvidia lost a lot of those developers.

ThirdEye
05-27-2011, 09:55 AM
So basically it took Alias/Autodesk 10 years to integrate MR decently into Maya and now MR dies?

mustique
05-27-2011, 09:56 AM
This has been in the works for some time.



mental images was acquired for two reasons: patent portfolio and their knowledge of rendering for both hardware and software....

Nvidia has a very special interest in mental ray. Look here: http://www.irayrender.com/

It wouldn't behoove them to dismantle it.


The day MI was acquired, it was obvious that it was to keep Intel away from MI's patent portfolio. It was obvious that MR wouldn't get any x86-related code development (threading performance improvements, AVX acceleration...), because Nvidia is a GPU company with the single goal of selling as many GPUs as it can. MR was dead the day Nvidia acquired MI; today just makes it official.

And as nice as Barton Gawboy is, I don't expect him or any other Nvidian to spill this truth in any open forum, just as I don't expect iRay, MetaSL and RealityServer to make it as rendering tools. They are just marketing tools for overly expensive Quadros.

Bitter
05-27-2011, 10:26 AM
MetaSL is also the rendering language for some people beyond mental images and is slated to be integrated into Vray.

But looking at this from a financial point of view: mental ray is more often integrated than any other package for rendering. It is not a niche product. mental images distributes mental ray, neuray, iray, and reality server to businesses that integrate these products. There are dozens of these businesses, from direct partners like ILM to companies like Autodesk.

Ending this business arrangement with multiple vendors wouldn't make a lot of sense.

Gelato never gained traction. Despite the opinion of some very vocal users in forums, mental images has been commercially successful and survived over 20 years while a few others have come and gone on their own.

So mental images, beyond the intellectual property, also has their customer base. These are now customers of Nvidia. . . . I can't say that those contracts are worth nothing to Nvidia in their quest for dominance.

mustique
05-27-2011, 10:55 AM
...

So mental images, beyond the intellectual property, also has their customer base. These are now customers of Nvidia. . . . I can't say that those contracts are worth nothing to Nvidia in their quest for dominance.

All those contracts are gold if you're a small company, but pocket change for a behemoth like Nvidia who bravely fights the x86 world. Don't get me wrong, I too hope that the GPU will make 60 fps photoreality possible one day. I just hope that this GPU won't be a Quadro/Tesla server!

Bullit
05-27-2011, 11:15 AM
MRay will become an architectural rendering engine

Isn't it about time to stop this nonsense? Any renderer that can simulate reality with precision can be used for everything.

.....
Mental ray currently is a step from disaster. Will they now be able to fix the thing? I am not sure, but I hope so, since unfortunately Autodesk doesn't give customers a free choice of renderer. Customers are stuck with mental ray, and if they want another render engine they have to fork out more $$$.

davius
05-27-2011, 01:37 PM
Guess it's about time to learn VRay... :D

Airflow
05-27-2011, 01:49 PM
Davius, you should not have waited this long.. Shame on you. :)

MasterZap
05-27-2011, 02:16 PM
Wow, the doomsday scenarios :)

Yes, there's been reorganization. Yes, some people had to leave (mostly redundancies in the combined organization). No, nobody is stopping development of mental ray; quite the contrary, we are starting a period of focus on mental ray like never before.

Alas, I plan a post on my blog on the topic.

/Z

jupiterjazz
05-27-2011, 02:27 PM
MetaSL is also the rendering language for some people beyond mental images and is slated to be integrated into Vray.


MetaSL is vaporware; the industry runs on RSL and, if anything, will look into OSL.
AFAIK VRay is not, at present, using MetaSL, and I think the fact that mental images has dissolved into dust will weigh on any eventual future adoption at Chaos.


But looking at this from a financial point of view: mental ray is more often integrated than any other package for rendering. It is not a niche product. mental images distributes mental ray, neuray, iray, and reality server to businesses that integrate these products. There are dozens of these businesses, from direct partners like ILM to companies like Autodesk.


Wrong.

neuray was not deployed anywhere; it was shipped with reality server. And reality server was a complete failure with no real customers around. In fact I am quite sure NVidia is smart enough to drop it asap.


mental images has been commercially successful and survived over 20 years while a few others have come and gone on their own.


Survive. Good choice of verb.
Reread your previous statement I quoted and see how badly "survived" matches.


P.

jupiterjazz
05-27-2011, 02:34 PM
This has been in the works for some time.
mental images was acquired for two reasons: patent portfolio and their knowledge of rendering for both hardware and software. Many corporations will invest in a smaller company rather than take the time to recruit and build their own. That is slow and costly as opposed to just buying what you want.



A preface, since I am answering you again: I have nothing against you personally, it's just that your perception is quite far off from reality :)

1. 80M USD for mental images was hugely overpaying. The stock traded at 40 USD before the mental images deal; after that it's been in freefall.

2. the FACT that for 4 years mental images was disconnected (again, see Bart's message kindly confirming this) proves that buying what you want is just not enough. You need to /integrate/ it with what you do right away, and this assumes you are working in an OPEN environment, which a mental institute obviously is not.



This change has already been underway for some time and you haven't seen an end to mental ray in that time; in fact you have seen the creation of MetaSL (mental mill) and iRay.


I think both MetaSL and mental mill will most likely disappear; they have no place in NV's core business (GPUs) and, most importantly, they are not a standard. Even Bart only mentioned mental ray, iray and DiCE in his statement. Just read between the lines.

They can still sell it to ADSK (as 3dsmax now kinda depends on it) though, so the propagation of the Mad Shading Language could persist :)


Nvidia has a very special interest in mental ray. Look here: http://www.irayrender.com/
It wouldn't behoove them to dismantle it.


iray != mental ray.

I am not even gonna comment on the website quality, and it worries me that it is not under the NV main site... This was probably done way before the restructuring.

P.

ThE_JacO
05-27-2011, 02:46 PM
Mmm, yeah there's a much larger back story there. Part of the purchase of BMRT -> Exluna/Entropy, etc was a calculated move since they were being pursued legally by Pixar.

Once that was over, Nvidia lost a lot of those developers.
I do remember the backstory. nVIDIA rammed the GPU angle of Gelato down Exluna's throat, tried to sell it (for the wrong price too) in the hope people would introduce their power-hungry, overheating hardware into computational centres and clusters, failed at that, tried to make it a freely available platform after it had hemorrhaged developers and the will to fight, and then after years canned it when it had been dead and barely twitching for a while.

Mentions of MetaSL, neuray and RealityServer are on the joke side of things for our industry, some a joke across the board.

MI has survived through standardization for the last 10 years, and because it had no real competition in the hybrid/raytracing business that was truly viable for film (and therefore hype) for a while. It's been obsolete and subpar for years now, and still solely exists because it's bundled.

Take away migration issues and have AD offering other options in place of MRay, for the same price, tomorrow, and you would see the userbase halved overnight.
It's been the Paris Hilton of rendering engines for a while now. It has its fans and is hugely popular, but it's famous for being famous, and the world would be better without it at this point ;)

jupiterjazz
05-27-2011, 02:52 PM
It's been the Paris Hilton of rendering engines for a while now.

Amazing definition!

P

Buexe
05-27-2011, 03:53 PM
I love renderer discussions on cgtalk! :love:

republicavfx
05-27-2011, 03:55 PM
So basically it took Alias/Autodesk 10 years to integrate MR decently into Maya and now MR dies?

lol no they didnt ;)

eikonoklastes
05-27-2011, 03:57 PM
I wonder if a little turtle might sneak its way into Maya soon...

popol
05-27-2011, 04:35 PM
It's been the Paris Hilton of rendering engines for a while now.
This is the best definition of mental ray that I have ever heard. X'D

davius
05-27-2011, 04:58 PM
Davius, you should not have waited this long.. Shame on you. :)

Hey! Take it easy on me! :D

But maybe all this is a good thing! I remember in the early days of 3ds Max, when MR wasn't bundled, I used to use VRay. The game changed once AD decided to "give" MR with Max - this made me and a lot of people I know abandon VRay and focus on MRay instead. Heck, I even remember thinking "why didn't AD include VRay instead?" since I HAD some knowledge of it! AD did a huge favor to MI by integrating MR across all its products.

Anyways, MR is a good render engine, has its quirks and problems (reflection of bright light sources here (http://forums.cgsociety.org/showthread.php?f=6&t=927778) and here (http://forums.cgsociety.org/showthread.php?t=837884) , weird bump (http://www.neilblevins.com/cg_tools/cg_wishlist/3dsmax/wishlist_skylight_bump_mr.htm) mapping, lack of compatibility (http://www.neilblevins.com/cg_tools/cg_wishlist/3dsmax/wishlist_berconnoise_in_mr.htm) with the great Bercon Maps, and so on) but it gets the job done. I really don't think the big issue here is MR or MI, but NVidia with its conflict of interest, which is primarily to sell GPUs rather than to support a CPU renderer.

Sil3
05-27-2011, 05:01 PM
It's been the Paris Hilton of rendering engines for a while now.

I'm so tempted to print a T-shirt with this phrase :)

RebelPixel
05-27-2011, 05:54 PM
There are a lot of problems concerning mental ray, not only NVIDIA's conflict of interest in selling GPU stuff.

The problem is that it's a render engine that hasn't gotten serious development compared to other engines in recent years, and it's a render engine that is implemented into 3D packages by another company which implements things in its own way, leaving out many things.

Mental Ray itself might be good, but it is old, hasn't invented anything new in years, hasn't helped artists in their daily work, doesn't include many, many features people have been asking for for ages, hasn't fixed its global illumination method, hasn't optimized anything, and has left users with a convoluted way of working with everything that is bound to this engine.

Just think about the Irradiance Particles / Importons failure: once introduced, they didn't change anything: half finished, poorly executed, dropped.
Displacement render times, DOF, mblur, caustics, photons, SSS: yeah, the quality is good, but heck, it's so slow and convoluted in those processes that you are punishing yourself to use it.

Add to this scenario how AD implements it in every 3D package in a totally different way, with missing features, half implemented; this doesn't help at all.

As usual, the one who suffers is the end user, who has to upgrade to the latest release of a 3D package to get the latest release of the render engine (which anyway doesn't bring anything groundbreaking to the table). Unified sampling? After 2 months I still don't understand what the use of that mode is. Totally useless.

MR is still the same as years ago; it hasn't developed or updated anything significant except for mia_material and portal lights, which if I'm not wrong were in 3.3 or 3.4. Before and after there is the abyss of nothing.

I don't understand how people can be positive about the whole nvidia/MI scenario, when the past (which is the present, since the acquisition was done months ago) didn't shine at all. And what was the news about MR? None. Adesk released 2012 packages, and again another year of nothing added to MR.
It is a shame compared to the constant updates from other software houses: chaosgroup, cebas, 3delight, just everyone.

As Jaco said, it would be amazing to see how many people would buy Mental Ray if AD shipped the software without the render engine.

The future doesn't look promising at all, if there will even be a future.

3DMadness
05-27-2011, 06:16 PM
AFAIK VRay is not, at present, using MetaSL, and I think the fact that mental images has dissolved into dust will weigh on any eventual future adoption at Chaos.
In the latest 2.0 SP1 release, Chaos Group added this new feature:

Added VRayGLSL material for direct rendering of GLSL shaders with V-Ray extensions;

So you're right, they are not using MetaSL, but they can render stuff made with it if it's exported to GLSL.

And I just loved the paris hilton definition for mental ray... :applause:

Kzin
05-27-2011, 06:48 PM
Adesk released 2012 packages, and again another year of nothing added to MR.


That is one thing I don't understand. There has been no mr feature update here in years, but it looks like the Max users are happy with the situation. So it's up to you to report all the problems to AD, but it looks like only a fraction of the users are willing to report, write mails to AD or things like that to improve the situation. I've been reading it from disappointed Max users for years, but nothing has changed. So if the Max users have been writing mails for years, why does AD not react?

There are a lot of problems in mr, but a lot of them come from bad/missing integration. I don't get how you can make MI responsible for all the problems without naming AD here.


DOF rendering is way faster with unified sampling, and motion blur gets a big improvement in mr 3.9, render-time wise. GI is one area that needs more work; IBL is a step in the right direction, and brute-force IP could be an addition but needs to be faster. But I don't know what MI's plans are here. It is not as bad as someone might think after reading your post, but of course there is a lot of work left for MI.

3DMadness
05-27-2011, 06:50 PM
That is one thing I don't understand. There has been no mr feature update here in years, but it looks like the Max users are happy with the situation...
They think they're happy until they try vray... :twisted:

RebelPixel
05-27-2011, 07:21 PM
That is one thing I don't understand. There has been no mr feature update here in years, but it looks like the Max users are happy with the situation. So it's up to you to report all the problems to AD, but it looks like only a fraction of the users are willing to report, write mails to AD or things like that to improve the situation. I've been reading it from disappointed Max users for years, but nothing has changed. So if the Max users have been writing mails for years, why does AD not react?

There are a lot of problems in mr, but a lot of them come from bad/missing integration. I don't get how you can make MI responsible for all the problems without naming AD here.


DOF rendering is way faster with unified sampling, and motion blur gets a big improvement in mr 3.9, render-time wise. GI is one area that needs more work; IBL is a step in the right direction, and brute-force IP could be an addition but needs to be faster. But I don't know what MI's plans are here. It is not as bad as someone might think after reading your post, but of course there is a lot of work left for MI.

Most Max users go with Vray because, luckily for them, it is really a beast on 3ds Max in terms of speed/features/updates.
You should hook people into using your product by adding stuff, developing features, and making their work easier. If you don't do this from the start (which you are supposed to do), you can't really expect people to get interested in a render engine with a 10-year-old workflow, no appeal, slower than the other available solutions, and pretty stagnant development.

And you expect users to give you feedback too? I guess people are tired. Everyone asked for a new GI, for photon mapping to be dropped in the 21st century, for something to be done about the ridiculous displacement times, for something to be done about the bump calculation when you use it with FG.
It's not that nobody sent them feedback; they just got totally ignored.
And here is the result; you can't blame the userbase, for sure.

Check this out:
http://forum.mentalimages.com/showthread.php?7780-Unified-Sampling-in-3.9

IBL is a step in the right direction, yeah, but too bad IBL isn't exposed, too bad progressive rendering isn't exposed, iRay is only on Max, BSDF materials are not exposed, IP/importons are not exposed (except in Softimage), and even where they are, they have pretty much been abandoned.

As you can see, it's frustrating as hell; users aren't supposed to hunt for string options and geometry shaders to use features that should already be integrated into what they pay for.
In my post I blame both AD and MI equally for delivering a totally subpar, horribly implemented render engine.

There isn't a single year, damn it, where you can say "hey, wow, look how cool MR is, look what they did!"
It's always non-existent updates.
To me it's absurd that it has come this far; it is really absurd the way Mental Ray is handled in AD products and by MI itself. Then you open a random application like modo and render GI in 10 seconds with 0 artifacts, and you want to shoot yourself.

EdtheHobbit
05-27-2011, 07:32 PM
So, back to the topic at hand...

Don't you think this move might be a big part of mental images and nvidia addressing those very concerns?

ACantarel
05-27-2011, 09:06 PM
In general, no matter what nVidia does with MR in the future, if it is true that thirty-something people on the team had to go, I'm not sure future development will be improved by this step. We'll see what happens.

Bullit
05-27-2011, 11:40 PM
Don't you think this move might be a big part of mental images and nvidia addressing those very concerns?

They have had the resources to do that. But they didn't. Does this mean they will start now? I also wonder whether it isn't better to start fresh than to use the existing framework: who still wants to think about whether to use Final Gathering and GI, or only GI, or only Final Gathering?

Like I said above, they have great resources. The question is: will they do anything with them?

CHRiTTeR
05-27-2011, 11:56 PM
Choosing to integrate it was a really bad move from Autodesk imho; it was already obvious back then that its development wasn't going the way it used to.

Bitter
05-28-2011, 01:10 AM
A preface, since I am answering you again: I have nothing against you personally. . . .

Your posts generally provide guidance and a reference designed to make the reader feel beneath your contempt. Or a salty-sweet mixture thereof. I'm perfectly used to it. No harm.

And I don't suspect that every mental images product will survive. Nvidia is interested in what's best for them, naturally. But that doesn't mean there won't be alignment; otherwise there would have been no reason to purchase them. In fact, it could be good for everyone when it comes to cross-licensing.

As for predictions, I have a decent track record. Not perfect, but most people tend to like me enough to talk to me.

Southern charm.

BColbourn
05-28-2011, 01:20 AM
Guess it's about time to learn VRay... :D

you're in luck. it's a lot easier than mental ray

conflict
05-28-2011, 02:18 AM
Switched to vray last year, it's been pretty solid. Those guys are adding features every day.

Kabab
05-28-2011, 02:59 AM
What still baffles me is why ADSK didn't acquire MI; it would have been an acquisition that makes sense.

thorsten hartmann
05-28-2011, 09:28 AM
Always the same. The Vray dummies say (not the Vray pros): mental ray is too old, too slow and cannot make my tea. :applause:

But back to the thread. I think it is good news for mental ray and iRay users. At this time 3DSMax has the best integration of all the 3D programs. With Render Optimizer, iRay Manager and Shader Utilities you have all the functions and more. ;)

mfg
hot chip

DutchDimension
05-28-2011, 09:53 AM
I love renderer discussions on cgtalk! :love:

Yeah, but I think we're missing a few Italians slamming each other the way only Italians do. ;)

MasonDoran
05-28-2011, 10:01 AM
Oy Paolo, when is AtomKraft going to be released? I keep checking you out every couple of months to see what's new, but still waiting. I can't wait to start rendering/lighting in Nuke.

mustique
05-28-2011, 10:23 AM
I can't help but feel sorry for those who still think there's a future for MR/iRay. :cry:
For those who haven't the time to reread this thread and translate the corporate wording:

Mental ray was suffering for a long time and is officially dead as of yesterday. The creatively named iRay will probably follow other former Nvidia renderers to the junkyard, as soon as Nvidia realizes that it doesn't help sell additional Quadro/Tesla GPU setups.

Software that comes bundled just dies eventually (hint: Toxik/Matchmover).
So keep using them if it pays your bills, but don't expect a bright future.
Thank God there are good renderers like vray/3delight/RfM/Maxwell etc...

Thank God ADSK didn't acquire MI.

mister3d
05-28-2011, 11:08 AM
Mental ray is not slow, you just don't know how to use it. http://img641.imageshack.us/img641/682/defaultum.jpg (http://img641.imageshack.us/i/defaultum.jpg/)

ThirdEye
05-28-2011, 11:52 AM
Mental ray is not slow, you just don't know how to use it. http://img641.imageshack.us/img641/682/defaultum.jpg (http://img641.imageshack.us/i/defaultum.jpg/)

That's the same mantra i've been hearing for years from people who never used anything else in the first place.

ApaczoS
05-28-2011, 01:13 PM
Alberto, you made my day :applause:

jupiterjazz
05-28-2011, 01:19 PM
Your posts generally provide guidance and a reference designed to make the reader feel beneath your contempt. Or a salty-sweet mixture thereof. I'm perfectly used to it. No harm.



As for predictions, I have a decent track record. Not perfect, but most people tend to like me enough to talk to me.

Sure dude. You really rock.

P

Buexe
05-28-2011, 01:35 PM
Yeah, but I think we're missing a few Italians slamming each other the way only Italians do. ;)

Hey, no spoilers, please! :D

Daniel-B
05-28-2011, 03:54 PM
I use both Vray and Mental Ray heavily, and know them each very well. The reality is they are both awesome in their own ways. Mental ray CAN be fast if you know what you're doing. Another reason people assume Vray is faster is because the default settings are actually too low in my opinion to get a decent quality image. If you put Vray's settings where they should be, you will get render times more akin to Mental Ray. In some cases I've had Mental Ray rendering faster than the same scene with Vray (yes, I know how to optimize both for speed).

However, sometimes Vray will have an edge when high poly counts are concerned, because Mental Ray has to translate the scene from 3ds max, which can take additional time. Vray requires no translation and just renders straight out.

I hope for the continued development of both mental ray and iray, because I quite enjoy them both. I really don't understand all the doom and gloom predictions, but maybe it's because I'm not familiar with Nvidia's business history.

mister3d
05-28-2011, 03:59 PM
I use both Vray and Mental Ray heavily, and know them each very well. The reality is they are both awesome in their own ways. Mental ray CAN be fast if you know what you're doing. Another reason people assume Vray is faster is because the default settings are actually too low in my opinion to get a decent quality image. If you put Vray's settings where they should be, you will get render times more akin to Mental Ray. In some cases I've had Mental Ray rendering faster than the same scene with Vray (yes, I know how to optimize both for speed).


So you can do true DOF and 3D motion blur in mental ray on par with vray? I tested both engines by matching the noise visually, and mental ray is faster in some instances, like with materials. But if you need animation, it starts to crawl. And how about displacement in mental ray? With vray I can render displacement, no big deal, even with GI.
You can see some results here http://forums.cgsociety.org/showthread.php?f=21&t=852993&highlight=battle

BigPixolin
05-28-2011, 04:07 PM
Why has this become a Mental Ray and Vray pissing match?

bgawboy
05-28-2011, 04:11 PM
Since my name has been mentioned, and I've been quoted, I'm going to enter this discussion, but I would hope everyone keeps this courteous and polite as suggested by the board.

We are increasing the size of the mental ray team, significantly.

We have been working on future-looking and disruptive rendering technologies based on the path of increasing hardware (CPU & GPU) capabilities, and we're focused on delivering them within the rendering technology with which you are familiar, mental ray and iray.

We also know that we have to focus our efforts on some of the more near-term issues of integration, and on features that ease use.

We are doing these things incorporating user feedback, from large studio users to the individuals using a 3D app. This has increased significantly even in the past year.

We are excited, and we don't require you to be. But if you are, we are glad if you can help us create something greater than the sum of individual efforts.

And as our work with Autodesk is intensifying, now is a good time for the users to work with Autodesk and NVIDIA, all three groups together, to create something great.

SheepFactory
05-28-2011, 07:46 PM
That's the same mantra i've been hearing for years from people who never used anything else in the first place.

Yea, same here. It always ends with someone posting an archviz render of a near-empty room with (bad) GI and going "look, mr is more than capable!"

mustique
05-28-2011, 10:12 PM
I don't want to sound harsh. Especially to the many kind and gifted devs of former MI / new NV ARC people. It's not personal.

It's just that I and many coworkers felt so stupid throughout the years for not being able to tackle mysterious production problems with a renderer that was so shamelessly hyped and praised. I know it is not the fault of the devs, but ours for buying into all that marketing, but here we are.

MR and anything associated with it is really not trustworthy anymore. And the fact that it now operates under a GPU maker who has canned other renderers in the past because they weren't helping to sell some premium GPU line doesn't help either.

Please don't come with press releases or PM messages, especially if they contain absurd future product names like "Wonderay". It is very cheesy even as a code name. Getting rid of anything that recalls "mental", "ray" etc. would be the best first step you could take IMO.

JeffPatton
05-28-2011, 10:39 PM
Why has this become a Mental Ray and Vray pissing match?

I'm with you on that. Plus it's quite surprising to see a few forum leaders/moderators assisting in the derailment of this thread as well. Some of us that actually use these products would like to read more information on this situation without having to listen to people whine about why they dislike it. Geeesh. :banghead:

LordAuch
05-28-2011, 11:39 PM
Thanks for setting things straight, Jeff, bgawboy, Bitter, MasterZap, but how can we really expect anything else? With reading comprehension at such low levels, and ego and unearned self-assurance so high, how can "certain" users be expected to grapple with the complexities of a modern render package? The best that can be hoped for is rote memorization of a GUI. Push-button render... is the only hope for some.
Computer graphics needs its Sarah Palins, and there seems to be no shortage.
Since some have all the answers: do it yourself. Stop being a forum disinformation troll and write your own rendering package, because learning something and admitting you don't understand as much as you think is much too difficult.

Bitter
05-29-2011, 12:05 AM
Best known for mental ray – the most widely used rendering software within industry leading design and animation tools — the company has expanded its offerings to include iray, a renderer that makes photorealism easy to achieve faster by running GPUs, as well as technologies that enable interactive, cloud-based rendering.

From this you can surmise which teams were evaporated and which teams were increased.

The focus looks to be mental ray, iray, and cloud rendering. So some teams were lost while others increased their size/resources. This is all part of the delightful strategy businesses use called core business (http://en.wikipedia.org/wiki/Core_business) .

MasterZap
05-29-2011, 07:58 AM
I can't help but feel sorry for those who still think there's a future for MR/iRay. :cry:


Okay, guys. I was planning on blogging this on Monday, and actually enjoying my sunday, but... no, apparently not.


For those who haven't the time to reread this thread and translate the corporate wording:

Mental ray was suffering for a long time and is officially dead as of yesterday.

mental ray is one of (if not the) most widely used renderers in the world, integrated in more applications than you can shake a whole chopstick factory of sticks at. "Suffering"? The core mental ray business was *never* the issue with mental images; what people don't seem to realize is all the other things the mental images portfolio consisted of.

You need to understand that mental images was not solely a building full of people only working on mental ray; there were many other projects, many other products, sub-products, technologies; there were entire sales, development and business development teams, not to mention QA people, human resources people, IT staff, etc.

Yes, we are now fully integrated into NVidia, and yes, there have been reductions in staff, but guess what - *all* of it is in those other parts. First the redundancies that come from joining a much larger organization that already has HR and IT departments, as well as dedicated sales and business development staff.

The core work, the engineers, and in *particular* mental ray and iRay hasn't lost a person but has gained engineers (with probably more being added soon), mostly by simply refocusing existing engineering talent from other projects to pure mental ray dev work.

Things like the name - "NVidia Advanced Rendering Center (ARC)" rather than "mental images" - will change, yes, but it completely boggles my mind how anyone can conclude from that "the death of mental ray" or "the death of iray"!?

There's so much more I would like to say about cool and new things, but being part of a publicly traded company turns my tongue into a collection of bitemarks already, so I think I'll end it there, for now.

/Z

Bullit
05-29-2011, 08:04 AM
This message has been deleted by Array. Reason: Received a message from Mental Images regarding discussion of yet unannounced products.

Strange that my post was deleted.
As the link I posted showed, I only posted publicly available text.

Chokmah
05-29-2011, 08:19 AM
All of this reminds me of old stories:

'Autodesk buys Maya -> Maya will die and be included in Max!!!'

Years later ...

'Autodesk buys XSI -> XSI will die and be included in Max and Maya!!!'

and now

'Mental Images is now fully integrated into NVidia -> it's the end of MR, we will all die!!'

Oh and I forgot: 'We will all die in 2012, 'cause some people said so!!!'

lol ^^

So now, what if we take a look at the actual situation? ...
Maya is still there, Softimage is still there,

and MR will probably still be there, and will continue to do the job it was developed for ;)

(Sorry for my english :) )

Kabab
05-29-2011, 08:21 AM
So I assume it's the end of the Mental Images brand? Or are they going to keep that alive?

Chokmah
05-29-2011, 08:38 AM
So we shall see what comes in the future ;) That's the only thing we have to do: keep an eye on it and keep working :p

Syndicate
05-29-2011, 08:55 AM
I'm surprised to see so many negative comments toward mental ray.

A lot of pipelines I see still consist of this:

Skin/SSS/Water/Glass - Mental Ray
HD Exteriors/Interiors/Metals/Glossies - Vray

A true VFX artist knows the strengths of each tool and applies them accordingly.

One thing that's for certain is that Iray is no joke, and neither is Maxwell. The advent of CUDA/GPU rendering means we now have 80-core workstations (equivalent) for under 10 grand.

To say a renderer is slow or not is really "old talk" because if you learn to use the renderer properly and also use the tremendous power of compositing, you can really save time and boost quality.

I'm going to say this about Nvidia... Acquiring MR is a really good move and we will begin to see some fantastic developments from the GPU giant. Nvidia really does listen to the creative world because their business depends on it.

Chaosgroup have a fantastic vision for V-ray and they have succeeded in revolutionising archviz and photorealism for small studio artists as well as industry pros.
Part of this success is the fact that they listen to their customers and integrate features based on requests, NOT marketability, which is the problem that plagues giants such as Autodesk and Adobe.

I could talk about this on and on, but ultimately I would like to say... chill out guys :) we will always have change in our industry and it's up to us to learn new methods and employ them in the best possible way.

Chokmah
05-29-2011, 09:01 AM
Thanks Syndicate ;)

It's all been said in your comment ^^

I second you :thumbsup:

Buexe
05-29-2011, 09:43 AM
I am happy for those who can get stuff done with mr, but the reality and experience on my end is telling me another story. I am not gonna chime into the whining, because I have stated my criticism before and don't want to belong to that group of people who repeat their point over and over again (both directions). I guess nobody will mind a great product coming from Nvidia's end, but it will take some convincing stuff to make me try it, simply because the other render options are really strong IMHO and so far I am not convinced on this GPU render thing. But we will see...

eikonoklastes
05-29-2011, 12:47 PM
Looks like we'll just have to wait a while longer before we get any updates.

Folks using mental ray are already used to this...

SreckoM
05-29-2011, 03:13 PM
^^ You got a point there.

But let's hope that this will bring better times...

derMarkus
05-29-2011, 08:40 PM
One thing I'm a little bit concerned about is that nvidia could now maybe complicate things for other developers of GPU rendering solutions, now that they have their own team on that. Just a thought.

CHRiTTeR
05-29-2011, 09:15 PM
Wouldn't be the smartest thing to do business-wise.

Syndicate
05-30-2011, 02:56 AM
One thing I'm a little bit concerned about is that nvidia could now maybe complicate things for other developers of GPU rendering solutions, now that they have their own team on that. Just a thought.

The only real problem could be that Nvidia will release new features on its own solution first, but to be honest the Nvidia developer network is pretty open about new features and SDKs.
Also I absolutely love Vray RT, and it runs on the GPU (which wouldn't be possible without Nvidia or AMD in the first place).

I am happy for those who can get stuff done with mr, but the reality and experience on my end is telling me another story. I am not gonna chime into the whining, because I have stated my criticism before and don't want to belong to that group of people who repeat their point over and over again (both directions). I guess nobody will mind a great product coming from Nvidia's end, but it will take some convincing stuff to make me try it, simply because the other render options are really strong IMHO and so far I am not convinced on this GPU render thing. But we will see...

I was skeptical about GPU rendering, but loading textures is the only major problem.
Currently most consumer cards cap out at 1-2 GB whereas the pro Nvidia cards have 3-6 GB (Tesla/Quadro). In terms of cost-effectiveness it's still a little bit on the heavy side, but within 1-2 years I'd say we will start seeing GPUs take on most of the tasks that the CPU does.
And rightfully so; I'm tired of not being able to use my workstation while a render is happening. With GPU rendering we can offload things like 3D rendering while we work on compositing, which is CPU-reliant. Makes sense to me :D

Also, have a look at Jeff Patton's blog. He's loving Iray and he is getting some really nice results in little time.

CHRiTTeR
05-30-2011, 03:34 AM
but within 1-2 years I'd say we will start seeing GPUs take on most of the tasks that the CPU does..

I seriously doubt it

Mauritius
05-30-2011, 07:24 AM
mental ray is one of (if not the) most widely used renderers in the world
Yeah, and the WSJ and USA Today are the most-read newspapers in a certain country.
One is read primarily by folks who brought us the financial crisis and who are actively working on the next one, and the other one is so devoid of any standards that I don't even know where to start.

What most sheep, er, people do (read/use/insert other verb of choice here) is rarely a clever, much less an advisable, choice.
Definitely so when it comes to an area like rendering, where very few folks have actual insight into what happens under the hood.
And by "insight" I don't mean knowing the flaws of one product so well that you can circumnavigate them.

.mm

Kabab
05-30-2011, 08:00 AM
Yeah, and the WSJ and USA Today are the most-read newspapers in a certain country.
One is read primarily by folks who brought us the financial crisis and who are actively working on the next one, and the other one is so devoid of any standards that I don't even know where to start.

What most sheep, er, people do (read/use/insert other verb of choice here) is rarely a clever, much less an advisable, choice.
Definitely so when it comes to an area like rendering, where very few folks have actual insight into what happens under the hood.
And by "insight" I don't mean knowing the flaws of one product so well that you can circumnavigate them.

.mm
What's the point of this bashing?

Okay, so Mental Ray has issues (heck, I'm not really a fan of it either), but the guys that ran Mental Images are definitely good businessmen: they managed to get their product very widely adopted and managed to make a stack of cash off Nvidia, so my hat's off to them for their success.

There is more to tools than just their technical merit; you can have the best tools in the world, but if you don't have the right sales/business strategy then they are probably never going to get off the ground and make an impact...

In short I think Mental Images was a very successful business, and I bet you every other renderer developer out there envies what they have been able to achieve.

thorsten hartmann
05-30-2011, 08:28 AM
hi

In Germany, iRay is not used in visualisation companies or special FX companies.

iRay is for the industrial designer or architect, when they create the product. The designer gets a very fast and photorealistic result, and a 6GB RAM video card is usually enough. For example: Tricon Design AG creates trains. A single wagon needs 20 billion polygons with at most 10 materials. I need 1.8 GB of video RAM for this scene (Quadro 3000). When I use a Tesla with 6GB RAM, I can render two wagons. When Tricon needs an artistic high-end rendering, they switch to mental ray with the same scene and don't need to convert! iRay is not a first priority for the 3D artist; it is for the architect or industrial designer, when they create the product. In the future, the client will only need us 3D artists for the marketing images.

mfg
hot chip

Bitter
05-30-2011, 08:53 AM
I suppose it could be more cost-effective to use someone else's research and technology as the foundation of your renderer: REYES in 3Delight (http://en.wikipedia.org/wiki/3Delight)

But an advantage Nvidia has, like Pixar, is that a lot of the technology is their own, either through the acquisition or prior. This means the resources of an 11.6 billion dollar company and the lessons learned from growing that business are accessible to the Nvidia team creating mental ray.

As such, you aren't really dealing with mental images anymore. So since mental images as a separate entity is indeed "dead" you are now dealing with Nvidia.

So shift focus some: observations about management and business plan will be more in line with that of Nvidia. Which makes a great deal of the chatter involving mental images a moot point. Now, it would be more accurate to analyze Nvidia and their success instead. And it is their success that allows them to sell their products and advance without having to give it away for free to entice customers. . . .

Might be time to look up more information regarding Nvidia. Asking in here is a little like asking the wolf about the missing chickens.

ThirdEye
05-30-2011, 08:59 AM
What's the point of this bashing?

Okay, so Mental Ray has issues (heck, I'm not really a fan of it either), but the guys that ran Mental Images are definitely good businessmen: they managed to get their product very widely adopted and managed to make a stack of cash off Nvidia, so my hat's off to them for their success.

There is more to tools than just their technical merit; you can have the best tools in the world, but if you don't have the right sales/business strategy then they are probably never going to get off the ground and make an impact...

In short I think Mental Images was a very successful business, and I bet you every other renderer developer out there envies what they have been able to achieve.

What he was trying to say is that even if Britney Spears sells millions of records her music still sounds like crap. It's his opinion, everyone is entitled to have one.

Bitter
05-30-2011, 09:43 AM
Plus it's quite surprising to see a few forum leaders/moderators assisting in the derailment of this thread as well.

It should indeed be about the possible impact of the business and ours as a whole.

Opinions are important, when they are relevant. . . .

dagon1978
05-30-2011, 09:45 AM
What he was trying to say is that even if Britney Spears sells millions of records her music still sounds like crap. It's his opinion, everyone is entitled to have one.
so, everyone is entitled to call other people "sheep" here
is it still possible to express an opinion without offending people?

and this is not the first time he's acting like this, you know, but you don't care; this is why I started to leave this forum of really "professional" people

Bitter
05-30-2011, 09:55 AM
and this is not the first time he's acting like this, you know, but you don't care; this is why I started to leave this forum of really "professional" people

Please tell me there's an alternative I haven't already explored. (But please PM me if it's so. :thumbsup: )

Constructively, leadership should continue to focus the thread on the topic: The possible industry impact of the acquisition. Preferably based on previous events, track records, and current goals/facts.

The current facts don't support the doomsday scenario. They in fact support promise for the future. Yes, that's just a hopeful promise. But it's certainly different from "We're closed. Production has ceased. Thanks for the memories."

At the least it supports what even detractors wanted to change: "We're under new management." Unless the detractor is a competitor.

ThirdEye
05-30-2011, 10:02 AM
so, everyone is entitled to call other people "sheep" here

If your (or someone else's) opinion is that people are like sheep, I have nothing to say about it. He never called you or anyone else "sheep" personally as far as I can see.

Bullit
05-30-2011, 10:22 AM
...It's his opinion, everyone is entitled to have one.

With opinions like that... for a start, the WSJ was one of the rare newspapers that had articles beforehand explaining the credit bubble that was being built...

Anyway. The poster should also know that most people ("sheep") can't choose render engines; it comes with the application. Mental ray is an outdated render engine, and like someone said, by the time they have something good in the future it might be too late. With a hiatus they can lose the market. They need to be fast.

ThirdEye
05-30-2011, 10:28 AM
With opinions like that... for a start, the WSJ was one of the rare newspapers that had articles beforehand explaining the credit bubble that was being built...

He might reply that they knew it because they were (or were in contact with) the very same people who caused the issue in the first place; do you realize that? However, back to render engines: let's leave finance and newspapers out of this.

Anyway. The poster should also know that most people ("sheep") can't choose render engines; it comes with the application. Mental ray is an outdated render engine, and like someone said, by the time they have something good in the future it might be too late. With a hiatus they can lose the market. They need to be fast.

That's something I disagree with. There were not so many alternatives to MR on AD apps: sure, Max has always had a lot of 3rd party engines, but what about Softimage or Maya? Vray is still in beta on those apps, same for Arnold, and I'm sure there are not so many people using a REYES or a so-called "unbiased" (MW) engine. They still hold a big chunk of the market (MR works on HDN and C4D too btw); if MR keeps coming preinstalled on AD apps they can take their time without losing too much, I think.

Airflow
05-30-2011, 10:33 AM
Wow, I wonder if a "vray gets sold to Autodesk" news item would get so heated. :)
Ok, there is the business side of things to discuss, but that's not going to happen without some people airing their grievances about using a specific piece of software. I say just roll with it. I personally don't think it's the end of mental ray, and I hope it makes the product I use in 3dsmax and Maya (not in beta, V2 is available now) stronger and better integrated. But I feel people need to be able to explain/bitch about the products they use. If there is an opportunity to do so, then threads will get derailed. Look past it.

Kabab
05-30-2011, 10:44 AM
To be honest I'd say there is something bigger going on with Autodesk and Nvidia which has pushed this move as well..

After reading comments from various Autodesk developers on this forum it seems like Autodesk & Nvidia are getting very close.

Let's see what the future holds; it should be very interesting.

Bitter
05-30-2011, 10:45 AM
Integration is the biggest issue. It's the elephant in the room. I've been stabbing it repeatedly and it won't die.

But, this is where Nvidia comes into play. This alters the playing field. Nvidia is interested in licensing their products. But how they play ball with integrators might be different since they have the ability to dictate more.

mental images alone lacked agility. A lot of their changes are dictated by their largest customers. And other good changes don't get integrated. How's that for sucky? Not the best business plan, is it?

Now, Nvidia is different management. Some of whom I know. And I'm seriously hoping they don't disappoint me. :twisted:

Someone once told me they wished Autodesk would just integrate Vray. I looked at them like they had 2 heads. It shows no one understands that means Autodesk becomes your paramount customer. He who has the pesos gets the say-so. It's good to get paid to do what you do best. But not when you have little control over it. How many of us have generated crap that doesn't go on our reel. . .but it's exactly what the customer wanted?

Syndicate
05-30-2011, 11:38 AM
I seriously doubt it

I just quickly searched for these articles, but if you look at the slides you will see how the GPU really does have the potential to overtake the CPU.
Already it's possible to do intensive tasks such as rendering, compression, math / physics etc.
So I don't understand how you "seriously doubt it"... because it's a reality :P

The first article talks about how GPUs are a threat to Intel:

http://www.beyond3d.com/content/articles/31/

"NVIDIA believes GPU will overtake CPU as future heart of computer (AMD hopes so too)"

http://www.geek.com/articles/chips/nvidia-believes-gpu-will-overtake-cpu-as-future-heart-of-computer-amd-hopes-so-too-20080429/

If there is a reason why the GPU won't replace the CPU as the future core component of the PC, I'd love to know it.

Laa-Yosh
05-30-2011, 11:49 AM
GPUs have a very different computing model; they can't handle branches in code as well. There's not much freedom in controlling the flow of the program you're running, which is necessary for most of the applications we're running.

It's basically "do this same thing for a lot of data you're going to get" and anything you can't pack into this approach isn't going to be running well, most likely not at all.
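In code terms it looks roughly like this. This is a minimal CUDA sketch I'm making up purely for illustration; the kernel and buffer names are hypothetical, and it isn't taken from any actual renderer:

#include <cstdio>
#include <cuda_runtime.h>

// Every thread runs this same function on its own element:
// the "do the same thing for a lot of data" model.
__global__ void scale_positive(const float* in, float* out, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Branches are allowed, but if threads within one warp take
    // different paths, the hardware runs both paths one after the
    // other (divergence). There is no free per-thread flow of
    // control the way a CPU core has.
    out[i] = (in[i] > 0.0f) ? in[i] * k : 0.0f;
}

int main()
{
    const int n = 1 << 20;                  // ~1 million elements
    const size_t bytes = n * sizeof(float);

    float* h_in  = new float[n];
    float* h_out = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = float(i % 7) - 3.0f;

    // The GPU has its own memory, so inputs and results must be
    // copied across the bus explicitly.
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in, bytes);
    cudaMalloc((void**)&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    scale_positive<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0f, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[4] = %f\n", h_out[4]);
    cudaFree(d_in); cudaFree(d_out);
    delete[] h_in; delete[] h_out;
    return 0;
}

The kernel body is trivially parallel; anything that needs real per-thread control flow or scattered memory access doesn't map onto this model nearly as well.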

I think you should read up on general computer science instead of blindly accepting anything these companies say to push their products.

Syndicate
05-30-2011, 01:55 PM
I don't blindly accept anything. I test new methods and technology when they become available.
As for what the GPU does compared to the CPU, I mentioned rendering, compression, math and physics... exactly what is it about those processes that doesn't involve number crunching? (Or doing "the same thing", as you put it.)

Will the CPU be around for a while? Yes (up to developers, really). Will it be more feasible to add more and more cores to CPUs vs using the GPU for a particular task? Probably not.

You should note that most CG technical artists have a background in computing science, so no need to throw your top hat at me ;)

SreckoM
05-30-2011, 02:10 PM
Well, then you realize why the GPU is still limited compared to the CPU. It might happen, but it might not ...

Laa-Yosh
05-30-2011, 02:13 PM
I'm not going to compose and post here a long dissertation on the architectural differences in how CPUs and GPUs work; if you're interested, please do the work yourself. And it's not about doing any hat tricks either, just that you can't expect anyone to explain several years' worth of material to you in a forum post.

So in short, there are fundamental problems with graphics processors, like the lack of free memory access or flow control, that keep them from running general code. There's plenty of stuff in rendering, math, physics and other tasks that requires such features, cannot be ported to data-driven architectures, and thus GPUs can only be used for subsets of these tasks.


You could say that the GPU would act as a sort of co-processor to the CPU; but in that case one has to wonder if it's the best approach to place such a co-processor on a separate bus, with a separate memory bank, consuming more power and space - instead of putting the transistors into the CPU itself. And it still would not replace the CPU in the end, only help it out.
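To make the "separate bus, separate memory bank" point concrete, here is a small sketch (again CUDA, with a made-up buffer size; nothing vendor- or renderer-specific) that just times how long it takes to push a buffer across to the card before the GPU can even start working on it:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Hypothetical 256 MB buffer; the size is arbitrary. The point is
    // only that every byte has to cross the bus into the GPU's own
    // memory before a kernel can touch it, and cross back afterwards.
    const size_t bytes = 256u * 1024u * 1024u;

    float* host = NULL;
    cudaMallocHost((void**)&host, bytes);     // pinned host memory
    float* device = NULL;
    cudaMalloc((void**)&device, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host -> device: %.2f ms (%.2f GB/s)\n",
           ms, (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(device);
    cudaFreeHost(host);
    return 0;
}

A co-processor that shared the CPU's address space wouldn't pay this staging cost at all, which is exactly the trade-off described above.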

CHRiTTeR
05-30-2011, 03:24 PM
The only thing I personally like about mental ray is the various cool shaders floating around. There are some really nice shaders for it and I wish my renderer of choice had them. I can see this (custom shaders) being quite a big deal for big studios. But working with mental ray in general is quite "unproductive" compared to some other renderers.

Just my personal opinion.

CHRiTTeR
05-30-2011, 03:40 PM
Already it's possible to do intensive tasks such as rendering, compression, math / physics etc.

Yes, and how is this even close to what a CPU must be able to do? That is a very limited list of possibilities, and that's also the same reason why the GPU is faster and better at some stuff.

Even when doing rendering, GPU renderers can't do everything a CPU renderer can.

If you made a GPU that could handle everything a CPU can handle, you'd actually be making a CPU again.

Let's take the CPU out of your computer and try to run Windows on it. ;)

articles

Most of these 'articles' are just expressions of some writer's opinion on the future, especially in this case where no one really knows if GPUs will survive or not. And it's funny to see how 97% of these articles are written by ppl who don't know what they are talking about.
I can just as easily google some articles saying the GPU will die... they mean nothing at this point.

In my opion the gpu wont survive and intel will integrate something simular in their cpu.
Intel and AMD are actually already doing this (intel's AVX, for example, which does help exactly at the same tasks you mentioned). Which is a much more realistic and possible task then making a gpu that can handle cpu tasks.
I mentioned this was comming quite some time before, but no one cares, because everyone seems to be all hyped up by nvidias marketting crap. Someone mentioned sheep behaviour and thats exactly what it is.


Also, 1 or 2 years is really really really fast for such a huge step.

DrBalthar
05-30-2011, 04:41 PM
Oops, late to the game again.

So now we've got two graphics hardware companies that each own a 3D rendering software company, one of them trying to break away from its main market and into the other's territory (not overly successfully so far – that would be nVidia). I wonder which one will survive long term.

My prediction is that both will tank and take their 3D rendering software companies with them.

DrBalthar
05-30-2011, 04:44 PM
mental images was acquired for two reasons: patent portfolio and their knowledge of rendering for both hardware and software. Many corporations will invest in a smaller company rather than take the time to recruit and build their own. That is slow and costly as opposed to just buying what you want.

If you had read their actual patent portfolio you would know two things:

1.) The patents are unlikely to withstand any serious legal challenge.
2.) They will run out (the most significant ones date back to the mid-90s), so at best they have about 5 years left on them.

DrBalthar
05-30-2011, 05:05 PM
I was skeptical about GPU rendering, but loading textures are the only major problem.
Currently most consumer cards cap out at 1-2gig whereas the pro Nvidia cards have 3-6gig (Tesla/Quattro). In terms of cost-effectiveness its still a little bit on the heavy side, but within

This problem will go away with the next generation of APUs. Oops, nVidia can't do APUs – too bad, and that's why they will be irrelevant in a few more years. So will mental ray, I'm afraid.

thorsten hartmann
05-30-2011, 08:50 PM
Hi guys,

I have some new info (from God) about mental ray's future: the next version of mental ray will be able to calculate Final Gathering on the GPU. NVidia wants to integrate more and more GPU rendering into mental ray itself (I don't mean iray).

mfg
hot chip

DuttyFoot
05-30-2011, 09:52 PM
I love renderer discussions on cgtalk!

same here :) especially when you have people who know the inner workings of the render engine

PiotrekM
05-30-2011, 10:07 PM
Hi guys,

I have some new info (from God) about mental ray's future: the next version of mental ray will be able to calculate Final Gathering on the GPU. NVidia wants to integrate more and more GPU rendering into mental ray itself (I don't mean iray).

mfg
hot chip

Sure they want to – Quadro = profit – but who really cares? Current Quadros top out at 6GB of GDDR5; first, that is not really enough for any serious work, and second, these cards cost almost $15K!
Maybe in 5 years' time...

mister3d
05-30-2011, 10:30 PM
Yep, that's part of the problem – having to buy Quadros. And the word "NVIDIA" here doesn't promise anything bright.

thorsten hartmann
05-30-2011, 10:44 PM
For me the price of a Quadro card or a Tesla is OK, because a German student cannot afford such a card, and pirates can only copy software, not hardware. I think that is a good way to make money in the future.

mfg
hot chip

Syndicate
05-31-2011, 12:02 AM
CHRiTTeR (http://forums.cgsociety.org/member.php?u=2544)

I never said the GPU will replace the CPU... In fact I know Intel tried to somewhat merge both with Larrabee to leverage what each processing unit was best at.

Yes, and how is this even close to what a CPU must be able to do? That is a very limited list of possibilities, and that's also the same reason why the GPU is faster and better at some stuff.

That limited list of possibilities is actually enough (or will be enough) to satisfy a CG artist.
I just feel that everyone wants something that does everything, rather than something that does one task really well. Yes, there are limitations in memory etc., but the amount of time saved by being able to do almost real-time global illumination / raytracing is amazing.
We used to have to resort to solutions like the $2000 ARTVPS Pure cards to do what gamer cards can now do for 300 bucks. I don't see CPU prices dropping or CPU performance increasing by that much. Yes, more cores have been added, but trying to utilise each core and segment memory gives maybe just double the speed of the equivalents from 4 years ago.

Example: my Intel Xeon E5520 with 12 gigs of RAM renders a shattered-glass scene in 25min.
My Q6600 from 4 years ago with 4 gigs of RAM renders the same scene in 40min.
Using my Quadro FX at work the scene renders in 8min.
Just an example of how I'm seeing a 5x performance increase where I'd otherwise see 2x.
What annoys me is when marketing puts out figures like 100x etc., when that is clearly false in a real-world scenario.

I definitely don't follow marketing or hype, just real-world results. I'm particularly interested in how well gamer cards perform. Affordability is king, and so far I'm excited to say that a $400 graphics card is going to take me further than spending $800 on a new motherboard, more RAM and a faster CPU.

Just a note that I'm giving my 2c, my views are just my views.

ThE_JacO
05-31-2011, 10:32 AM
That limited list of possibilities is actually enough (or will be enough) to satisfy a CG artist.
That list is awfully generic, and within those broad categories there are a million and one things that GPUs can't even cope with, and thrice as many for which they aren't optimal.
"GPUs are good at maths, physics and compression", just for one, is awfully shortsighted.

Plenty of operations in those three categories range from inconvenient to impossible on a GPU. Anything that can't be threaded safely or efficiently (and many things can't be threaded at all) is not ideal for a highly specialised, small-input/single-output processing unit.
Part of it has been, and will further be, addressed by new techniques and algorithms that do the same things we've been able to do for years, but differently, so that they comply with the nVIDIA gods. Many of these things are now being done that way – at higher cost and in an inconvenient manner, just to leverage GPUs (due to availability and specialisation more than convenience) – and many are done more efficiently or quicker than before. Many other things, though, are simply unaddressable, and do and will continue to require computational models inaccessible to GPUs.

I'm not dissing the use of GPUs in our industry – I've personally been playing with CUDA for quite a while for many reasons – but your blanket statement simply doesn't apply, and no amount of defending it will give it any sense. You simply bought into a lot of hype, I'm afraid ;)

mister3d
05-31-2011, 01:06 PM
For me the price of a Quadro card or a Tesla is OK, because a German student cannot afford such a card, and pirates can only copy software, not hardware. I think that is a good way to make money in the future.

mfg
hot chip

Quadros are almost identical to their gaming analogues. It's not a very clever decision on either count: it can be worked around by using gaming cards, and it's not ethical towards professionals either. The speed and stability of current gaming cards, plus the recent stable versions of AD products, make Quadros obsolete.
If you're OK with buying hardware for each program, great. But from a consumer's point of view, this sucks in many ways. And then they will push consumers into buying new Quadros every year by adding new incompatible features.

Syndicate
05-31-2011, 01:36 PM
I still don't see what it is exactly that the GPU can't do that you would want it to do :S

Your point basically means that developers have to go out of their way to write code that runs on the GPU. The simple fact is that if you have not yet written your application, it's not a problem to create something that can utilise the GPU – because the thought process is different. Adapting an existing application is a different matter, and the GPU giants are offering interesting tools like Nexus that help with the transition.

The biggest bone of contention is that the GPU often can't use the most optimised algorithms, but rather the simplest and least efficient ones. The key difference is that once you multiply by the massive parallel capability of a GPU, you can still quickly end up with a faster way of computing than using the CPU.

At the end of the day, if my animations render faster and my physics/particle sims calculate faster, I'd be a happy man, because I get to go home on time. What's not to like?

Regarding hype... I'm not sure what you mean. I'm using 3ds Max 2012, it comes with iray/PhysX (I'm also using V-Ray RT), and it DOES make a difference. It would be a different scenario if I had said something like "death of the CPU", which I already said won't happen...

Bitter
05-31-2011, 01:38 PM
You can't always hack a GTX into a Quadro anymore. It used to be that you could apply a driver hack; now the differentiation is more often done in hardware instead.

I would argue that using software to sell hardware wouldn't work.

But then there's Apple. Especially since Apple hardware isn't unique to Apple. But their software moves their hardware out the door very quickly.

But what about using hardware to sell software?

Migration to CUDA has been pretty prevalent in a lot of facets of the VFX industry. PhysX as well. So the market penetration is there. I've worked at lots of different places and their video cards were all Nvidia. Hundreds of them. So the investment exists for some people that are looking to get more from it. One place is exclusively Vray, but they use CUDA for simulation and PhysX plug-ins (Thinking Particles is pretty ubiquitous. Rayfire as well.) These were GTX cards pretty often. Another studio uses Arnold, but again, all Nvidia cards in Apple Mac Pro hardware.

Hardware vendors also favor Nvidia. Lenovo, Dell, HP, etc have an option for ATI cards but their default is Nvidia for workstations (Mid-High range). Ironically, Apple is the opposite for workstation graphics.

What if I want to run Final Cut Pro? This is an old argument about "making" you buy something and it doesn't change the reality of the situation. People will buy it or they won't. But a lot already have it.

As for memory constraints, that's not very future-looking. Memory density is increasing pretty rapidly. But it has always lagged behind need. So I don't expect it will be here as fast as we want (Holodeck, where is my :twisted: Holodeck!) But it's coming. I'd rather have the software ready to use it than hardware ready without software.

Who'd buy a piece of hardware without software to use it?

But then we've come full circle.

Kabab
05-31-2011, 02:10 PM
I still don't see what it is exactly that the GPU can't do that you would want it to do :S
Think of it like this....
A GPU is like throwing 1000 school kids at a problem and getting them to solve it...
A CPU is like using 10 professors with PhDs to solve a problem..

There are just some things which are too complex for a GPU to process efficiently, and on the other hand there are things which are very simple to process but come in massive volume..

So really these two processors complement each other..

I really encourage you to watch this video from Luxology:

http://www.youtube.com/watch?v=4bITAdWvMXE

It's a comparison of CPU/GPU rendering with some pretty good benchmarking and reasoning.

Syndicate
05-31-2011, 02:25 PM
But I agree with what you are saying... I was asking, as a CG/VFX artist, what is it in your daily workflow that takes a lot of time to calculate/process and that the GPU could not possibly do?
I'm interested in real-world problems, not hypothetical ones, that's all.

I have seen the Luxology vid and quite a few others... The problem with the Luxology video is that the comparison isn't really applicable to a real-world scene.

A real test would show that not only is the GPU capable of giving similar if not better times for rendering a scene (minus SSS or other as-yet-unsupported features), but you also get DOF effects for free (just one of the upsides I have found).
So far the only drawback is the features that are not yet developed. Looking at Octane Render, iray and V-Ray RT, I see practical applications of the GPU.

CiaranM
05-31-2011, 02:30 PM
Do any GPU renderers (iRay included) support programmable shaders?

Syndicate
05-31-2011, 02:35 PM
Think of it like this....
A GPU is like throwing 1000 school kids at a problem and getting them to solve it...
A CPU is like using 10 professors with PhDs to solve a problem..


To be fair, I'd say it's more like this:

A GPU is like throwing 1000 school kids at a problem, working step by step but all able to contribute to the end result at the same time.
A CPU is like 10 professors with PhDs who can skip steps and use advanced formulas to use their time more efficiently.

Ultimately though, volume still wins: even if the method is slower than another, the sheer mass of parallel calculation compensates for it.

EdtheHobbit
05-31-2011, 02:51 PM
thanks for the link, kabab

E_Moelzer
05-31-2011, 03:50 PM
Ultimately though, volume still wins: even if the method is slower than another, the sheer mass of parallel calculation compensates for it.


No it does not. In my experience, doing things on the GPU gets exponentially slower with complexity, not linearly. Sometimes a single simple branch can cut your speed in half.
Something that actually improves speed on the CPU – as simple as adding a variable and the half line of code needed to SKIP the shading on some samples in a volume renderer – can cause a 50% slowdown on the GPU (see the sketch below)!
That kind of change would make a software renderer X times faster, and on the GPU it causes a slowdown. Just to give an example.
Add the smaller amount of memory available on the GPU, and the fact that you need to keep anything that is in GPU memory in main memory as well, and you quickly realise that the GPU is not meant to be a general-purpose CPU. It is meant for special tasks. So we use the GPU for previews in VoluMedic, and that is something where it really shines. Everything is a bit simpler and it does a bit less than the CPU rendering, but it looks close enough and it is frigging fast!
IMHO at some point the GPU and CPU will get really close to each other in terms of speed and what they can do. You will have fast multicore CPUs that do general-purpose stuff a bit faster than GPUs, and on the other side you will have GPUs that do specialised, highly threadable tasks a bit faster than the CPU.
Where will that leave Nvidia? I have no idea. All I know is that I hope they will be around for a long time to come, as they are the only ones writing somewhat decent OpenGL drivers.
ATI is OK-ish, but Intel and the rest just outright suck.
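To give a rough idea of the branch problem in code, here is a hypothetical CUDA shading kernel (names invented, nothing to do with VoluMedic's actual code): the early-out that would make a CPU renderer faster buys almost nothing once threads execute in lockstep.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical volume-shading kernel; 'active' marks samples worth shading.
    __global__ void shadeSamples(const float* density, const int* active,
                                 float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (!active[i]) {            // On a CPU this early-out is pure profit.
            out[i] = 0.0f;           // On a GPU, threads run in warps of 32 in lockstep:
            return;                  // if even one thread in the warp takes the expensive
        }                            // path below, the "skipping" threads just sit idle,
                                     // so the branch saves little and adds its own overhead.
        float v = 0.0f;
        for (int s = 0; s < 256; ++s)            // stand-in for expensive shading work
            v += expf(-density[i] * (s * 0.01f));
        out[i] = v;
    }

    int main()
    {
        const int n = 1 << 20;
        float *density, *out;
        int* active;
        cudaMalloc(&density, n * sizeof(float));
        cudaMalloc(&active,  n * sizeof(int));
        cudaMalloc(&out,     n * sizeof(float));
        cudaMemset(density, 0, n * sizeof(float));   // placeholder inputs, enough for a demo
        cudaMemset(active,  0, n * sizeof(int));

        shadeSamples<<<(n + 255) / 256, 256>>>(density, active, out, n);
        cudaDeviceSynchronize();
        printf("done\n");

        cudaFree(density); cudaFree(active); cudaFree(out);
        return 0;
    }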

beestee
05-31-2011, 04:05 PM
After having read through this entire discussion, it seems to me that the renderer situation within Max has the potential to either get better or stay the same. I really do not see it getting any worse.

Scenario 1: Mental Ray under the supervision of Nvidia succeeds and makes great strides with improvements/new features. This in turn will put some pressure on <insert render engine developer of choice here> to catch up/keep up/stay ahead.

Scenario 2: Mental Ray under the supervision of Nvidia fails, and Autodesk moves on to another renderer or puts some effort into developing their own raytracer/radiosity engine further.

Scenario 3: Worst case, we end up somewhere in between, the development of Mental Ray remains in its current 'me too' mode, and nothing changes.

thorsten hartmann
05-31-2011, 06:20 PM
Quadros are almost identical to their gaming analogues. It's not a very clever decision on either count: it can be worked around by using gaming cards, and it's not ethical towards professionals either. The speed and stability of current gaming cards, plus the recent stable versions of AD products, make Quadros obsolete.
If you're OK with buying hardware for each program, great. But from a consumer's point of view, this sucks in many ways. And then they will push consumers into buying new Quadros every year by adding new incompatible features.


- You can't 100% hack a GeForce into a Quadro, that is wrong. You can hack the drivers, but not the hardware. Second, at this time only a Quadro offers 6GB of video RAM, and I need those 6GB. Less memory makes no sense; it's only enough for a small scene – for example a car + HDRI lighting.

- I'm not buying hardware for each program. I'm buying software + hardware for my workflow. ;)

mfg
hot chip

CHRiTTeR
05-31-2011, 06:32 PM
- You can't 100% hack a GeForce into a Quadro, that is wrong. You can hack the drivers, but not the hardware. Second, at this time only a Quadro offers 6GB of video RAM, and I need those 6GB. Less memory makes no sense; it's only enough for a small scene – for example a car + HDRI lighting.

- I'm not buying hardware for each program. I'm buying software + hardware for my workflow. ;)

mfg
hot chip

The point is that Quadro and GeForce GPUs are as good as identical. The only important difference is the driver; that's why so many people say Nvidia is ripping people off.

At least that's how it was not so long ago; I don't know if the difference is still close to nil today (I wouldn't be surprised if it is).

thorsten hartmann
05-31-2011, 06:55 PM
Sorry, but that is wrong. Look at the hardware specs: graphics clock, processor clock... A Quadro has more power, and where is the 6GB GeForce? I can't see one.

davius
05-31-2011, 09:22 PM
Sorry, but that is wrong. Look at the hardware specs: graphics clock, processor clock... A Quadro has more power, and where is the 6GB GeForce? I can't see one.
No Thorsten, Quadros and GeForces are basically the same, and that's why softmodding has been possible. http://en.wikipedia.org/wiki/Nvidia_Quadro#PCI_Express

They (GeForces and Quadros) usually share the same core with slight differences – most of the time the GeForce counterparts are clocked higher, with faster memory modules and more CUDA cores. Only recently have we seen NVidia putting more memory on the Quadro cards, in an effort to differentiate the product from the GeForces and market iray at the same time. Anyone, me included, who has ever tested a Quadro against an equivalent GeForce could see little to no difference in performance. Also worth mentioning is that AD is dropping support for its performance drivers (Maxtreme) – such is the "improvement" one sees when using a Quadro card.

Of course this cannot be said of the recent 6GB Quadro when using iray, but if a manufacturer (Asus, XFX, PNY) decides to make a GeForce with that amount of RAM, I doubt the Quadro could keep up with it.

thorsten hartmann
05-31-2011, 10:50 PM
OK, when you all say it is the same, I will change my opinion. :beer:

davius
05-31-2011, 11:00 PM
OK, when you all say it is the same, I will change my opinion. :beer:
I always knew you were a nice guy! :beer:

Syndicate
05-31-2011, 11:33 PM
What happened to Quadro cards being optimised for OpenGL? I thought that was the whole point...
I use a Quadro FX at work and, other than the memory gains for rendering, it's really not showing me value.
It even crashes the Windows display driver when switching between Photoshop/3ds Max/After Effects.
I'm a bit worried that iray and certain other tools will become Quadro-exclusive just to give buyers a reason to choose them over GeForce cards. There still isn't a reason to buy a Quadro card other than the memory, and the stability has gone out the window.
I'm still hoping concepts like Larrabee will come to fruition and we will see a new architecture in processing units. There has got to be a modular way of adding performance without having to upgrade the mainboard, RAM etc. every time. Not to mention how environmentally unfriendly this is.
Come on Nvidia... be more GREEN :D (not just in your logo colours!)

ThirdEye
05-31-2011, 11:50 PM
After having read through this entire discussion, it seems to me that the renderer situation within Max has the potential to either get better or stay the same. I really do not see it getting any worse.

Scenario 2: Mental Ray under the supervision of Nvidia fails, and Autodesk moves on to another renderer or puts some effort into developing their own raytracer/radiosity engine further.

You say it won't get any worse, and then your second option is a total disaster. Do you realize that might take a decade to happen? Just look at how much time passed before the Maya-MR integration got barely decent.

davius
06-01-2011, 01:36 AM
What happened to Quadro cards being optimised for OpenGL? I thought that was the whole point...
What? Buying a Quadro to get OpenGL performance? That was the case ten years ago, but now that API is as good as dead in Max. All the recent optimizations in Max viewport performance were made using DirectX (a gaming API, optimized for gaming cards ;) ) and, more recently, Nitrous, which I believe still uses DX somehow. Somebody like Bobo may know the inner guts of Nitrous (it would be awesome if he shared a word or two about it).

slipknot66
06-01-2011, 02:43 AM
So basically it took Alias/Autodesk 10 years to integrate MR decently into Maya, and now MR dies?

It's still badly integrated.

Bitter
06-01-2011, 03:09 AM
Integration was originally mental images.

It's been up to Autodesk for a few years now. :hmm: Stagnation?

This is why when someone suggests integrating another renderer, it makes me cringe.

I'm still hoping concepts like Larrabee will come to fruition and we will see a new architecture in processing units.

Larrabee could never match the performance, so it's dead. Even before a possible release it was so far behind it couldn't compete. I want to say I heard something similar about Nvidia, but that was some time ago.

ThE_JacO
06-01-2011, 03:32 AM
But I agree with what you are saying... I was asking, as a CG/VFX artist, what is it in your daily workflow that takes a lot of time to calculate/process and that the GPU could not possibly do?
I'm interested in real-world problems, not hypothetical ones, that's all.
Evaluating DAGs: pull-only graphs can't be fully and widely threaded efficiently.
The only way to brute-force and parallelise that would be to fan out every possible evaluation path and match up the choices at the end, something that will only be viable when quantum computing becomes efficient.

The above is the -entirety- of rigging and animation.

Getting and setting large amounts of data in the most commonly used tree structures cannot be parallelised efficiently past very low amounts of branching before you hit fatal race conditions and synchronisation problems. That covers a large amount of the evaluating and cache writing in dynamics, and even a 50x speed improvement (a lot more than you get in most cases with a GPU port) would still not be enough to shift the paradigm from caching-intensive towards purely computational all the time.

The above is a solid chunk of "dynamics".

Real world enough?
You keep saying you haven't heard of anything relevant to CG artists where the GPU couldn't do the job more efficiently than a CPU; that simply shows you have bought into the hype, and are thinking only at the level of abstraction from the real problems that marketing wants you to think at. Developers know differently.

Volume SELDOM wins, and writing for volume is a major pain in the arse, fraught with workarounds, limitations and compromises. It pays off in some cases; in the majority it doesn't.
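To put the DAG point in code, here is a minimal host-side sketch (invented names, nothing to do with any real package) of pull-based evaluation; it is plain host C++, deliberately with no GPU in sight. Which nodes run is only discovered while evaluating, and every node blocks on the inputs it chooses to pull, which is why you can't flatten this into one wide data-parallel launch:

    #include <cstdio>

    struct Node {
        bool  dirty  = true;
        float cached = 0.0f;
        virtual ~Node() {}
        virtual float compute() = 0;     // may pull() whichever upstream nodes it needs

        float pull()
        {
            if (!dirty) return cached;   // lazy: untouched branches are never evaluated
            cached = compute();          // inherently sequential: a node cannot finish
            dirty  = false;              // before the inputs it chose to pull have finished
            return cached;
        }
    };

    struct Constant : Node {
        float value = 0.0f;
        float compute() { return value; }
    };

    // A switch/blend node: which upstream branch gets evaluated depends on a runtime
    // value, so the set of work is only known during evaluation itself.
    struct Switch : Node {
        Node* a = 0;
        Node* b = 0;
        bool  useA = true;
        float compute() { return useA ? a->pull() : b->pull(); }
    };

    int main()
    {
        Constant c1, c2;
        c1.value = 1.0f;
        c2.value = 2.0f;
        Switch s;
        s.a = &c1; s.b = &c2; s.useA = false;
        printf("%f\n", s.pull());        // evaluates only c2; c1 is never touched
        return 0;
    }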

theotheo
06-01-2011, 08:11 AM
There is a fair amount of misconception about GPU computing in general. For those who are looking for a more "applied" version of why some things are threading/parallelism friendly, I highly recommend Mark's excellent write up on the subject over at OdForce.

http://forums.odforce.net/index.php?/topic/6421-multithreading-gpgpu/page__hl__gpu__fromsearch__1
(written in 2008, but still very relevant imho)

-theo

Laa-Yosh
06-01-2011, 11:58 AM
That is a very good writeup on the issue, even though the majority of the info about why GPUs are more limited is only at the end. Still, everyone should read it before making any claims about the future.

With that settled, maybe the thread can also get back to Mental Ray now... ;)

Kungfujackrabbit
06-01-2011, 04:05 PM
This sounds a lot like the Max vs Maya discussions I've heard for years. lol :rolleyes:

There is always a faster car, a stronger man, a renderer that does something you like and understand.

Best of luck to anyone who can manage to get some good art out there! :thumbsup:

DrBalthar
06-01-2011, 07:02 PM
Hardware vendors also favor Nvidia. Lenovo, Dell, HP, etc have an option for ATI cards but their default is Nvidia for workstations (Mid-High range). Ironically, Apple is the opposite for workstation graphics.

That's not irony, it's nVidia's own fault for being unable to deliver – deliver within the promised specs and so on. Jobs' ego is bigger than JHH's, so he loses! One of the few points where I actually agree with Jobs.


What if I want to run Final Cut Pro? This is an old argument about "making" you buy

Well, if you want to do that, you've already doomed yourself!

DrBalthar
06-01-2011, 07:06 PM
Sorry, but that is wrong. Look at the hardware specs: graphics clock, processor clock... A Quadro has more power, and where is the 6GB GeForce? I can't see one.
You're wrong: Quadros actually have less compute power than GeForces, simply because they have to guarantee that the chip will last 3 years under full strain. They do, however, have more RAM, as you said.

DrBalthar
06-01-2011, 07:09 PM
Come on Nvidia... be more GREEN :D (not just in your logo colours!)
Hahaha, that's a great joke – it will never happen. Every generation so far, nVidia's cards have been more power hungry than the competition's. At least they usually had a significant performance gain to show for it, but even that went away in the last two generations. nVidia has the typical US company philosophy: BIG = GOOD! That's yesterday's philosophy, it doesn't work anymore, and that's why nVidia will tank!

techmage
06-02-2011, 05:47 AM
Being somewhat of a fan of both Nvidia and mental images, I see this as very good, and entirely expected. I've been expecting Nvidia for years to turn mental ray into a product tightly integrated with CUDA for GPU rendering. When Nvidia originally bought mental images, that was the first thought that came to my head as to the reason why. Nvidia had long been trying to work the GPU into the film industry, even before the MI acquisition. After the MI acquisition I've been sitting around these past years wondering: what will Nvidia do with MI? How is Nvidia going to make mental ray into the big GPU-based renderer? Well, this seems an obvious way to go about it: dismantle mental images, move them into Nvidia and integrate them with Nvidia engineers. Clearly this is to get mental ray, or rather iray, running as well as it can on Nvidia GPUs. But I am all for that. I think this is essentially going to secure CUDA as the dominant computing language, and also secure iray as the dominant GPU renderer. I mean, how can another company possibly compete with Nvidia + mental ray combined to produce a GPU renderer? Nvidia can just put new silicon into their Quadro cards and have iray all set up to use that new silicon before any other company has even heard of it. Other rendering companies really won't be able to keep up anymore in the GPU sector. This is kind of unfortunate for competition's sake, but... I personally think it needs to be done. Nothing is going to advance a GPU renderer faster than the maker of that GPU renderer working in collaboration with the GPU company.

Also, about Nvidia's past effort Gelato failing: I don't see how that could happen with mental ray and iray. Gelato was ended because it never had a userbase to begin with, but mental images already has a userbase and contracts to sell to. I suspect that Nvidia isn't attempting to turn iray into a profitable venture by itself, but is rather pushing iray to make CUDA cards the cards to have for professionals; Nvidia probably expects to make more money from selling GPUs than from selling iray licenses themselves. Actually, it would be quite cool if Nvidia just gave out iray for free, like they did with Gelato.

RebelPixel
06-02-2011, 03:52 PM
...and also secure iray as the dominant GPU renderer. I mean, how can another company possibly compete with Nvidia + mental ray combined to produce a GPU renderer? Nvidia can just put new silicon into their Quadro cards and have iray all set up to use that new silicon before any other company has even heard of it. Other rendering companies really won't be able to keep up anymore in the GPU sector. This is kind of unfortunate for competition's sake, but... I personally think it needs to be done. Nothing is going to advance a GPU renderer faster than the maker of that GPU renderer working in collaboration with the GPU company.

Be sure many will compete. I think the amount of money you have to invest in an iray setup is way too high compared to any of the competitors.
They want you to buy the iray setup, CUDA, Quadros – anything that makes them sell hardware too.
Now think about it: iray doesn't come alone. I mean, I don't think you can buy iray on its own; it is always bundled with a 3D package or with standalone mental ray. Add to that the money you have to invest in hardware to make it decent... and well, I don't have the data in front of me, but I think you would spend a quarter of that on Octane + an SLI gaming card setup.

That said, I don't want to derail this into a this-vs-that thread; I just hate how MR is implemented in AD packages.
Still, I think the implementation of mental ray / iray in Autodesk products is the main thing they need to work on. It's pointless to add features to a render engine when you can't find them in your 3D software because someone else decided not to put them in. Sometimes I feel many people here don't even know how many features mental ray currently has compared to the ones exposed in the Autodesk 3D packages.
Working with a 10+ year old implementation is not really acceptable.
When AD does something like "Mental Core" (I still don't understand why no one from AD bothered to do these integrations in a professional way) for all the 3D applications where mental ray/iray is present, then we'll see an improvement. Until then, it is just marketing and empty words to me.

DrBalthar
06-02-2011, 08:23 PM
But I am all for that. I think this is essentially going to secure CUDA as the dominant computing language, and also secure iray as the dominant GPU renderer.

I would highly doubt that. They would have to throw iray away completely and start from scratch, since it currently can't compete with the competition at all – it is laughably slow compared to the rest out there. You cannot manufacture inspiration.
As soon as nVidia's core business starts crumbling – and the first signs are already written on the wall – the company is history. Hence their desperate attempt to get a foothold in the SoC sector, because they know themselves that GPUs will become mostly irrelevant as a standalone product. There are at best two (maybe three) more generations in it, and then it's over!

DrBalthar
06-02-2011, 08:25 PM
Sometimes I feel many people here don't even know how many features mental ray currently has compared to the ones exposed in the Autodesk 3D packages.

You think the Autodesk products have bad integration? Check out the products from Dassault – that's what I would call really bad!

Kabab
06-03-2011, 12:57 AM
Iray in V6 looks pretty good..

saycon2000
06-03-2011, 10:17 AM
Hi! I just want to know if any of you have a comparison between Blue Sky's CGI Studio and Solid Angle's Arnold renderer. Which one is the most powerful and fast? :-)

I think there are only 3 real production raytracers: mental ray, CGI Studio... and of course Arnold!

Two are used by studios: Arnold at Sony Pictures Imageworks, which will be available soon from Solid Angle, and CGI Studio at Blue Sky (which will never be available?)... and N.A.R. (Nvidia Advanced Ray) :-).... Don't smile, I am serious – it is still used by major studios all over the world and integrated into a lot of application software.

With Nvidia working on Project Denver (i.e. future ARM CPU + GPU chips), there will be a killer solution to speed up mental ray!!! I know :-) some of you think it's not that simple to port a CPU renderer to the GPU... but maybe some components could be, like FG, GI, DOF....
(My bet is that iray will reach the production studio market faster than Arnold!) And it will help sell more chips for render servers.

Sorry for my English.

PS: some interesting links about Nvidia's Project Denver and a possible GPU Arnold renderer!

Quote from Larry Gritz of Sony Pictures Imageworks:
The move to unbiased path-traced rendering “has been a major shift for us,” he said.

When questioned about utilising GPUs, he provided a standard response, explaining that moving code to GPUs would be time consuming and most companies’ renderfarms don’t posses any GPUs. As a result, there would be no major speed benefit of switching to GPUs and it would require a major investment in hardware. Also, scenes using 500GB of textures per frame would not fit into the current line of GPUs.

http://raytracey.blogspot.com/2011/01/arnold-render-to-have-full-gpu.html
"it doesn’t make sense to cram the kinds of scenes we throw at Arnold every day, with tens of thousands of piece of geometry and millions of textures, at the GPU. Not today. Maybe in a few years it will."..........Arnold render is a unidirectional path tracer, so it makes a perfect fit for acceleration by GPUs. "Maybe in a few years it will" could be a reference to Project Denver. When Project Denver materializes in future high-end GPUs from Nvidia, there will be a massive speed-up for production renderers like Arnold and other biased and unbiased renderers. The implications for rendering companies will be huge: all renderers will become greatly accelerated and there will no longer be a CPU rendering camp and a GPU rendering camp. Everyone will want to run their renderer on this super-Denver-chip. GPU renderers like Octane, V-Ray RT GPU and iray will have a headstart on this new platform. Real-time rendering (e.g. CryEngine 4) and offline rendering (e.g. Arnold) will converge much faster since they will be using the same hardware.

http://raytracey.blogspot.com/2011/01/carmack-excited-about-nvidias-denver.html
Nvidia's project Denver is very important in this respect and will bring the theoretical maximum speedup (limited by Amdahl's law) much closer to reality, because CPU cores and GPU cores are located on the same chip and are not depending on any bandwidth restrictions. The ARM CPU cores will take care of the latency sensitive sequential parts of the code, while the CUDA cores will happily blast through the parallel code. For ray tracing in particular, this means that the ARM CPU cores will be able to dynamically build acceleration structures and speed up tree traversal for highly irregular workloads with random access, and that the plentiful CUDA cores will do ray-triangle intersection and BRDF shading at amazing speeds. This will make the Denver chip a fully programmable ray tracing platform which greatly accelerates all stages of the ray tracing pipeline. In short, a wet dream for ray tracing enthusiasts like myself :D! Based on the power-efficient ARM architecture, I think that Denver-derived chips will also be the platform of choice for cloud gaming services, for which heat and power inefficiency from the currently used x86 CPUs are creating a huge problem.

http://raytracey.blogspot.com/2011/03/some-details-about-project-denver.html
With such extremely fast memory bandwidth between the ARM CPU and the Maxwell GPU (both on the same die), real-time ray tracing of dynamic scenes will benefit greatly because building and rebuilding/refitting of acceleration structures (such as BVHs) is still best handled by the CPU (although there are parallel implementations already, see the HLBVH paper by Pantaleoni and Luebke or the real-time kd-tree construction paper by Rui Wang et al.)
David Luebke (Nvidia graphics researcher and GPU ray tracing expert) said in a chat session preceding the GTC 2010 conference in September:"I think Jacopo Pantaleoni's "HLBVH" paper at High Performance Graphics this year will be looked back on as a watershed for ray tracing of dynamic content. He can sort 1M utterly dynamic triangles into a quality acceleration structure at real-time rates, and we think there's more headroom for improvement. So to answer your question, with techniques like these and continued advances in GPU ray traversal, I would expect heavy ray tracing of dynamic content to be possible in a generation or two." This would imply that the Maxwell generation of GPUs would be able to raytrace highly dynamic scenes and that path tracing of dynamic scenes could be feasible as well. A pretty exciting thought and much sooner than expected :-)

DrBalthar
06-03-2011, 07:53 PM
Iray in V6 looks pretty good..
Only because the number of features/options and shaders for iray can be counted on one hand!

jogshy
06-04-2011, 06:03 PM
Maybe related to this?
http://www.informationweek.com/news/windows/microsoft_news/229900137

Laa-Yosh
06-04-2011, 06:36 PM
You misunderstand that news bit - Microsoft apparently made this deal back when they licensed an Nvidia GPU for the first Xbox, so it's more than a decade old.

It gives them the option to have first refusal of purchase if anyone attempts to buy more than 30% of Nvidia's stock. Basically they can stop a hostile takeover of the company.

There's no info at all on whether anyone wants to buy Nvidia right now.

davius
06-04-2011, 08:00 PM
@saycon2000 moving the rendering load from the CPU to the GPU is already proving to be a challenge. Imagine the headache involved in moving all our tools from x86 to ARM. Not worth the trouble in the foreseeable future.

Kerem
06-09-2011, 12:02 AM
Cool, so all the investment (books and DVDs) in learning mental ray for Maya will become useless somehow? Especially after a member on the first pages talked nonsense about how "Autodesk should look for another renderer"...

Bitter
06-09-2011, 12:45 AM
Cool, so all the investment (books and DVDs) for learning Mental Ray For Maya will become useless somehow?

All this means right now is that mental images is now Nvidia.

From most perspectives, this is a good thing.

Kungfujackrabbit
06-09-2011, 02:37 AM
Kerem,
Don't worry about it. All the 3D forums pop up with a "sky is falling" type of discussion like this from time to time. :cry: If this actually scared you, just look at the Maya vs Max discussions from the past 10-plus years and you'll understand... most comments should be read as entertainment. :beer: Go about your studying and make good art and you'll do fine.
Later-

EdtheHobbit
06-09-2011, 02:40 AM
All this means right now is that mental images is now Nvidia.

From most perspectives, this is a good thing.

Besides, if your mental ray books and dvds are any good, they'll be teaching you theory alongside mr-specific material. You can apply just about any mental ray technique to other raytracers, even if the terminology and methods are a little different.

EdtheHobbit
06-09-2011, 02:42 AM
Oops, sorry, forgot to add this:

AAAAAAAAAAHHHHHHHHHHHHH THE SKY IS FALLING AHHHHHHHH

Samo
06-09-2011, 08:10 AM
Example: my Intel Xeon E5520 with 12 gigs of RAM renders a shattered-glass scene in 25min.
My Q6600 from 4 years ago with 4 gigs of RAM renders the same scene in 40min.
Using my Quadro FX at work the scene renders in 8min.

Rendering performance depends on the scene and the algorithms used too, not only on the hardware. GPU rendering is, for instance, very good at ray intersection work. That's no small feat, as in some scenes ray intersection can make up to 70% of the computing time. But the GPU is bad at any kind of optimisation hack, which is why you see GPU raytracers doing fairly simple, dated brute-force stuff like path tracing. We should remember that path tracing wasn't invented with animation in mind, and cannot render some global illumination effects and cases efficiently. On the other hand, path tracing delivers consistency, if you have a render farm good enough to always get you far down the noise curve.
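For anyone wondering why ray intersection in particular maps so well onto the GPU, here is a minimal Möller-Trumbore ray/triangle test (an illustrative sketch, not taken from any particular renderer): a handful of multiplies, adds and early-outs, identical for every ray, which is exactly the shape of work thousands of GPU threads can chew through in parallel.

    #include <cstdio>
    #include <math.h>

    struct Vec3 { float x, y, z; };

    __host__ __device__ inline Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    __host__ __device__ inline float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    __host__ __device__ inline Vec3  cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Returns true and the distance t if the ray (orig, dir) hits triangle (v0, v1, v2).
    __host__ __device__ bool intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t)
    {
        const float eps = 1e-7f;
        Vec3 e1 = sub(v1, v0);
        Vec3 e2 = sub(v2, v0);
        Vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (fabsf(det) < eps) return false;          // ray parallel to the triangle
        float inv = 1.0f / det;
        Vec3 s = sub(orig, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        *t = dot(e2, q) * inv;
        return *t > eps;                             // hit in front of the ray origin
    }

    int main()
    {
        Vec3 v0{0, 0, 0}, v1{1, 0, 0}, v2{0, 1, 0};
        float t;
        bool hit = intersect({0.25f, 0.25f, 1.0f}, {0, 0, -1.0f}, v0, v1, v2, &t);
        printf("hit=%d t=%f\n", hit, t);   // expect hit=1, t=1.0
        return 0;
    }

Building and updating the acceleration structures around this kind of kernel is another story, which is exactly where the "optimisation hacks" the GPU struggles with come in.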

Anyway thanks to everybody for your input on GPU rendering versus CPU. The hype about GPU rendering is even worse in other communities. I will point them to this thread in the future.

BTW, I don't think photon mapping is outdated.

Kungfujackrabbit
06-09-2011, 04:28 PM
Edthehobbit AKA EdtheLiteral Nice one, me likes! :bounce:

strangerman
06-09-2011, 09:26 PM
BTW, I don't think photon mapping is outdated.

Hehehe, of course you don't – it's one of the main GI systems used by YafaRay ;) and it's pretty stable and efficient, at least in Yafa, which I happened to test some time ago.
I think it all depends on the implementation. Mental ray's is an old one and not very good for non-baked animation, though it works quite well for baked animations and stills. Yafa's photon mapping works quite well for non-baked animations, even with caustics.

Don't take me wrong, I'm a mental ray guy, but its GI systems are starting to show some age wrinkles and need an overhaul. I'm quite excited about this Nvidia thing, as it seems it will be a boost for mr development.

mocaw
06-13-2011, 08:09 PM
Like most highly generalized tools, mr is capable, but often feels a bit unrefined and lacking on the specifics.

Yes – unless you've been living in a cave, you know that on the whole it's slower than many render engines in many areas. It's almost all I've used since moving to SI several years ago, but every time I even dabble in another render engine (3Delight, Modo, Vray, pmG etc.), I am, for the most part, amazed at how much faster and equally or more stable they are than mr. While I'm not an mr genius, I'm not a complete mr noob/idiot either, and I feel the "you just don't know how to use it" clause is overused. This is not to say these "alternative render engines" are perfect, but mr, being "the most widely used*" render engine, should at least do better than it does.

In the end, I couldn't care less what part of Nvidia the mr development team works under, or how many employees work there now. The proof that mr is headed down the wrong/right track will come when we actually see real-world, usable improvements to mr (hopefully not all GPU-centric) that go beyond limited tech demos and two cycles.

That said... we're starting with a mixed track record on both mr's and Nvidia's part when it comes to delivering on talk, hype and feature sets with render engines. In most people's minds it's an uphill battle, and this restructuring move does nothing to change that mentality (pun not intended).

Personally, I'd love to see mr dead, and for people to have more choice in what render engine they use than just going with the "bundled" one.

CGTalk Moderation
06-13-2011, 08:09 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.