Oh, the humanity! (Maya 2015 Final Gather Woes)


#21

Funny! :smiley:





#22

quality=1.0 isn’t really all that high. Try quality=2.5 and max samples=256. See what improvement you get and how it affects render time.
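
If it helps, here’s roughly the same change from script instead of the Render Settings UI. This is only a sketch: the string-option route below follows the unified sampling write-ups on the elementalray blog (linked a few replies down), and the option names, types and the miDefaultOptions layout are assumptions based on those posts, so verify them against your Maya build (newer versions expose Quality and Max Samples directly in Render Settings).

```python
import maya.cmds as cmds

def set_string_option(name, value, opt_type):
    """Write a mental ray string option on miDefaultOptions, reusing a slot if the name exists."""
    # Assumes the Mayatomr plugin is loaded so miDefaultOptions exists,
    # and that stringOptions entries use the usual .name/.value/.type children.
    indices = cmds.getAttr("miDefaultOptions.stringOptions", multiIndices=True) or []
    target = None
    for i in indices:
        if cmds.getAttr("miDefaultOptions.stringOptions[%d].name" % i) == name:
            target = i
            break
    if target is None:
        target = (max(indices) + 1) if indices else 0
    base = "miDefaultOptions.stringOptions[%d]" % target
    cmds.setAttr(base + ".name", name, type="string")
    cmds.setAttr(base + ".value", value, type="string")
    cmds.setAttr(base + ".type", opt_type, type="string")

set_string_option("unified sampling", "on", "boolean")
set_string_option("samples quality", "2.5", "scalar")   # up from the 1.0 used in the test
set_string_option("samples max", "256", "scalar")
```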


#23

Ok… I’ll have to read up on what that actually does, because right now it feels too much like ‘turn it up a notch or two as needed’. Which isn’t necessarily bad, except that it gives no intuitive, controllable understanding. I would much prefer something where I know for sure that setting it to 1 gives the best possible quality.


#24

If you are using the AO built into the mia shader, I’m pretty sure the grain you’re experiencing is from the mia AO, which has its own sampling settings per shader.
Try increasing this sampling under the mia AO rollout.
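
If you want to bump it for every mia material in the scene at once, a rough snippet like the one below should do it. It assumes mia_material_x nodes and the standard ao_on / ao_samples parameter names, so adjust if you’re on plain mia_material.

```python
import maya.cmds as cmds

# Raise the per-shader AO sampling on every mia_material_x that has its built-in AO enabled.
# Assumes the usual ao_on / ao_samples attribute names; 16 is the typical default sample count.
for shader in cmds.ls(type="mia_material_x") or []:
    if cmds.getAttr(shader + ".ao_on"):
        cmds.setAttr(shader + ".ao_samples", 64)
        print("%s: ao_samples -> 64" % shader)
```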


#25

Yes. Most of it is from the area lights though… it’s a weird mix of parameters at work here, trying to balance AO samples, the quality in the unified sampler and the area light samples, for example.

Probably the most damning aspect of this is the lack of proper, elegant documentation.

(I hate to think about what will happen when I start on materials and textures in this scene.)


#26

Yes, I’ve been toying with best practices for unified sampling for a while and I’m getting closer to getting the idea. I’ve noticed unified sampling can provide significant speed improvements with motion blur, although I still fall back on legacy sampling for stills since I am very familiar with it. For many years I would never use an area light in mental ray, but now it’s more feasible.

Here are a few links that are a good read.
https://elementalray.wordpress.com/2011/10/31/unified-sampling/
https://elementalray.wordpress.com/2011/11/30/unified-sampling-visually-for-the-artist/
https://elementalray.wordpress.com/category/unified-sampling/page/2/

Pretty much all modern renderers require some amount of discovery to reach efficient production-quality settings. If you know of one that doesn’t, let us know.


#27

Some amount of discovery. ‘Some’ amount! The problem is that while a few renderers do take only some amount, a few others seem to take close to infinity. And I’m afraid mental ray is not that far from being one of the latter…


#28

Sometimes there isn’t really a great way to get rid of the grain without upping render times a lot. You often have to either crank AA samples up or increase the resolution and use more reasonable samples and then downsize in post.

That said, I hate using FG. Even when you think you’ve got a solid image and semi-decent render time without getting blotchies from frame to frame, once you get the exr in post and start doing color corrections to crush the blacks some, you end up seeing blotchies in the darks that you weren’t seeing before.


#29

FG does have limitations, but then it is a camera-based shortcut that approximates global illumination. Just don’t try to get high detail using FG, especially with high-contrast lighting. If you set the point interpolation high enough you can prevent the blotchies. Combined with appropriate use of AO, as is possible with the mia_material, you can get the lost detail back.

Sentry66, do you normally view renders in Maya with Render View color management turned on for a gamma boost? I’m asking because with the viewer gamma boosted you should have seen any blotchies directly in the Render View.


#30

I know if you crank up FG settings high enough you can reduce the blotches, but there’s a point where the settings you need make the render too expensive to finish on time. I later read about the FGshooter script that uses multiple cameras for a more stable FG solution, but never got around to trying it out.

I don’t need perfect GI accuracy, so in the meantime I’ve been using AO nodes with a really wide radius and spread, with a bright color for the “dark” color, to give a semi-approximation of diffused GI/FG. I’d much rather have minor pixel noise that looks similar to film grain than huge splotches.
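
For reference, the kind of node I mean is roughly the sketch below. The names and values are only illustrative, and I’m assuming the standard mib_amb_occlusion parameters (samples, spread, max_distance, dark).

```python
import maya.cmds as cmds

# A mib_amb_occlusion set up as a cheap, wide "fake bounce" rather than tight contact AO.
ao = cmds.shadingNode("mib_amb_occlusion", asTexture=True, name="fakeGI_ao")
cmds.setAttr(ao + ".samples", 64)          # enough samples that the noise reads like film grain
cmds.setAttr(ao + ".spread", 0.9)          # very wide spread
cmds.setAttr(ao + ".max_distance", 50.0)   # large radius, in scene units
cmds.setAttr(ao + ".dark", 0.35, 0.35, 0.4, type="double3")  # brightened "dark" color as a diffused-GI stand-in
# ...then multiply the result into the material's diffuse as usual.
```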

When I was using FG/GI, I used the mia simple and photographic exposure lens shaders to account for linear gamma correction. What I rendered out as raw EXRs didn’t have blotches when viewed normally, similar to how it looked in the Render View, but it did have blotches when I did any sort of extreme artistic color correction. Most people probably didn’t notice much until I pointed it out, but I got sick of seeing it.

I know MR is cooking up some new GI/FG stuff, specifically with the new MILA shaders, so I’m hopeful about its future; otherwise I would have jumped to V-Ray if I seriously needed accurate animated GI.


#31

Why should it be different? Given that FG didn’t change in Maya 2015, I can’t understand why you’re doing this test.
Maybe a test of the new (brute-force) indirect illumination would be more interesting?


#32

I didn’t say to crank up FG settings; I suggested only increasing the point interpolation. Increasing the FG accuracy settings will approach the render times of brute-force GI.
I would also like to see some new developments in GI with mental ray. Hopefully the MILA shaders will offer that.
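
To be concrete, the only change I’m suggesting is along these lines. It’s a sketch that assumes the Render Settings “Point Interpolation” field maps to miDefaultOptions.finalGatherPoints (and Accuracy to finalGatherRays), which is how it appears in recent Maya builds; verify the attribute names on yours.

```python
import maya.cmds as cmds

# Raise only the FG point interpolation; leave Accuracy (rays) and Point Density alone.
cmds.setAttr("miDefaultOptions.finalGatherPoints", 50)       # assumed default is around 10
# Accuracy stays where it was, so render time shouldn't climb toward brute-force GI:
print(cmds.getAttr("miDefaultOptions.finalGatherRays"))
```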


#33

Hehe, you may have a point there. :slight_smile:

The reason for this is somewhat hinted at in the first posts of this thread. I simply wanted to see what mental ray has to offer in 2014 in terms of GI and arch viz rendering, and compare that - workflow-wise, speed-wise, any-wise - to V-Ray. As I have said, I have been away from Maya, and implicitly mental ray, for two or three years, and I just wanted to test this as a challenge among colleagues.

All these renderings will be compared against an older version of V-Ray, which we still use, not 3.0.
So… I think this will become an ongoing thing that will include version 3 as well.

In the meantime, I am still translating and rebuilding the scene in Maya - so proper renderings will come.


#34

In my experience FG didn’t change for ages in mray :slight_smile: (except for progressive refinement, which is anyway based on the old FG tech).
GI (photons) is even older, and IP is close to its end.
So mray is offering more or less what you left two or three years ago in terms of global illumination.
But you can see a light at the end of the tunnel: the new GPU-accelerated GI is meant to be completely new tech, and should work really well with the MILA stuff…

I wouldn’t insist too much on FG optimization. What is better in the MILA approach is the flexibility and speed for complex scenes (layering and so on); for basic speed improvements combined with better GI, I would look elsewhere.

P.S.
Arch viz is not only GI; this is mainly about interior rendering. For exteriors, mray can compete with and outperform V-Ray, in my experience; for interiors… well, you need to know some tricks to get there.


#35

I tried all combinations of settings, including high point interpolation, which did reduce the noise to smaller areas while increasing render times somewhat. It still didn’t solve the blotches when the image was tone-mapped in an extreme way in post. I came to the conclusion that the blotches are always there to some degree - our eyes just aren’t sensitive enough to see them when the contrast is low, but any sort of large color adjustment in post magnifies them.


#36

Yes, I haven’t yet tried GPU-accelerated anything… I think the tech is still pretty young, and to my knowledge the only feature officially exposed in Maya 2015 is AO on the GPU. I don’t know MILA well enough yet to be at all excited about it, other than the fact that it is something new and possibly better.

I am beginning to feel convinced that mray hasn’t really introduced anything new in all these years. Not that V-Ray has offered a plethora of new features itself, but it has been (subjectively, if you want) better at GI and ‘everything else’ than mental ray. I feel that you are simply able to accomplish more out of the box than with mental ray in Maya - for example, I haven’t really felt the need to search online for custom shaders or to look for scripts or templates that expose this or that feature.

What kind of tricks would those be?! :slight_smile: I know you can very easily get lost in abstract setups with mental ray; however, one of my main focus points here is also ease of use. With V-Ray you get pretty good defaults; you don’t have to worry about fgShooter scripts, raySwitchers, string options, etc…


#37

This is exactly why I’m so aggressive with this kind of testing… with a scene like this you shouldn’t get splotches. I also have the GI my colleague rendered out in V-Ray, and yes, it does have splotches - the thing is that he mainly just hit render… he didn’t do any optimizing on the GI, except raising the settings just enough to make the splotches subtle - this is how he gained enough time to focus on texturing, lighting, etc.

So why is it that a renderer, in this case mental ray, has so much trouble with such a simple scene?


#38
  1. Unified Sampling
  2. GI GPU
  3. MILA - which brings with it the ability to layer shaders
  4. Vastly improved Maya UI (which will continue to get better)
  5. Separate Plugin (finally, which means updates are decoupled from Maya releases)
  6. Built In IBL (not there when I first started using mental ray)
  7. MDL (announced and coming later)
  8. LPE (hopefully)
  9. A better integration of progressive rendering into IPR (which will get better)
  10. Light Importance Sampling
  11. Multiple Importance Sampling (not there yet but this comes with MDL)
  12. Object Lights

Some of these are Maya issues and not mental ray itself, but the point is that work is being done to make the entire user experience better. I don’t know why people keep acting like nothing is happening in terms of mental ray and its integration. Clearly a lot of effort is going into solving user grievances, and the results are already readily apparent in Maya.


#39

It’s because most of those things are brand new or still in development, so they’re not exactly battle-proven yet, or they’re additions still based on old tech. Half that list is announced features currently being worked on but not yet available.

We know all the cool stuff is coming, and coming together soon, so everyone is hopeful, but the day hasn’t quite arrived where the big problems have been solved and the tools are in people’s hands.


#40

It was pretty evident that this post intended to show how much better V-Ray is when it comes to global illumination and noisy results. You only have to search the phrase “vray gi noise problems” to notice the dozens of posts made by confused V-Ray users. Again, it’s pretty clear that with all the current renderers “you need to know some tricks to get there”. This forum is a great source for such knowledge.