Maya 2012 Unified Sampling in mental ray


How can I get rid of the Unified Sampling string options altogether so they stop showing up when I open Maya? When I delete the strings and save, then re-open my scene, all the strings are back.

Is there a way to just get rid of them without having to keep going into my miDefaultOptions and clicking delete for each one?


Couple things:

Are you using a script that might restore them? Like the Native UI integration?

Are you saving it after deleting them?


Save the scene without the strings.

It saves the settings per scene.


It depends on what you use to integrate the string options.


OK, sorry guys. Silly me. I just had to delete them and then restart Maya. Now they don't appear. For whatever reason they kept coming back when I was opening another scene. Thanks


sorry to revive an old thread, but it doesn’t seem right to start a new one.

I’ve been playing with unified sampling in maya 2013 extension, doing all the optimizations that have been documented, and I see the theoretical benefits, but I’m not convinced it’s better in every case.

I don’t usually have scenes with heavy transparency or severe raytracing features other than shadows; my medical animations rely more on SSS, which makes FG and GI less worthwhile. I don’t render DOF or motion blur directly in my renders, and I know that’s one of the major benefits of unified sampling that I’m not using.

I agree that the edge quality is inferior to adaptive sampling for the same render time regardless of how I tweak it. Textures have a little more pop with unified sampling, but can also be a bit grainier.

I’m still finding that I get a far superior image by rendering with adaptive sampling at 2x the target resolution with AA -1 min, 1 max, .17 contrast threshold, and a slightly higher than normal filter like gaussian 3.5 (with SSS lightmaps clamped to the target resolution), then downsizing the final frames to the target resolution. Despite the AA settings being slightly noisy at 2x resolution and using way fewer AA samples, it looks fantastic when downsized to the desired resolution - much better than unified sampling at 1x with high quality AA.

Depending on the scene, those render settings take around the same time as a 1x resolution with adaptive AA set to 0 min, 3 max, and .08 contrast threshold at 2.5 gaussian filter.

Comparing at 1x resolution, unified sampling seems to render slightly faster for roughly the same quality depending on the scene, but for my scenes it still doesn’t compare to rendering at 2x resolution with far fewer samples and then downsizing the frames.

Also, you get to keep all the calculated pixels at the 2x higher resolution instead of all that sampling work being filtered down and averaged into fewer higher quality pixels at 1x resolution.

This is how I’m able to render at 2160p for roughly the same cost as 1080p, and the 2160p frames look fantastic in motion because the pixels are so small anyway; it’s difficult to see the extra AA noise from the lower sampling, especially if you use some slight film grain in post.
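A minimal sketch of the downsize step described above, assuming a plain 2x2 box average (real pipelines use an image tool or compositor, often with fancier filters; the function name and grayscale layout here are just for illustration):

```python
# Render-at-2x-then-downsize: averaging each 2x2 block of the 2x frame
# into one final pixel. This is the averaging that folds the extra
# low-quality AA samples into smoother final pixels at 1x.

def downsample_2x(pixels):
    """Average each 2x2 block of a grayscale image (list of rows) into one pixel."""
    h, w = len(pixels), len(pixels[0])
    assert h % 2 == 0 and w % 2 == 0, "2x frame dimensions must be even"
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(block / 4.0)  # each final pixel averages 4 rendered pixels
        out.append(row)
    return out

# A noisy 2x2 block collapses to its mean, which is why low-sample noise
# at 2x resolution reads as smooth at 1x.
print(downsample_2x([[1.0, 0.0],
                     [0.0, 1.0]]))  # [[0.5]]
```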

Anyway, just posting my findings…


Rendering really large and then downsizing is a pretty old technique, typically used to catch fine details. Your post seems to say that Unified will catch these at 1x resolution faster than Legacy AA at 1x. (This is expected from Unified Sampling, without the extra work or sampling.) The 2x resolution is really 400% of the pixels (2 × 2 = 4, and 4 × 4 = 16, so 400% of the area). So what you’re really doing is capturing coarse details quickly and reducing them, possibly sharpening in the process.
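The area arithmetic above can be sketched quickly (the 1920x1080 frame size matches the 1080p/2160p figures used in this thread; it is only an example):

```python
# "2x resolution is really 400% of the area": doubling both width and
# height quadruples the pixel count, so the same per-pixel sample budget
# costs 4x overall - unless you cut sampling way down, which is the
# trade-off being described.

def pixel_count(width, height, scale=1):
    """Total pixels when both dimensions are multiplied by `scale`."""
    return (width * scale) * (height * scale)

hd  = pixel_count(1920, 1080)     # 2,073,600 pixels at 1x (1080p)
uhd = pixel_count(1920, 1080, 2)  # 8,294,400 pixels at 2x (2160p)
print(uhd / hd)  # 4.0, i.e. 400% of the area
```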

rQMC sampling benefits complex scenes greatly. Legacy AA may still outperform on scenes with less complexity or no expensive effects like Depth of Field, glossiness, or Motion Blur.

This isn’t any different than the Vray docs stating that there are cases where Adaptive Subdivision will outperform DMC, just that it’s very unlikely as scenes increase in complexity.

Unified Sampling will provide better anti-aliasing of small features like hair, wires, etc., as well as textural details. I’m curious why you are increasing the gaussian filter so much; I would say most animation should stick to gaussian 2 2 to provide crisp details without artifacts.

An example of Unified Sampling and Brute Force rendering:

Longest frames at HD with motion blur were still sub-3 hours.

Another benefit of more brute force rendering like Unified Sampling is that quality is consistent more often across scenes. This is more of a set-it-and-forget-it scenario where you send something to the farm and go home.

Sadly, the current trend in lighting and rendering is simplicity at the cost of speed (iray, Arnold, etc.), but Unified Sampling can still provide a speed improvement with simplicity when used in a modern context.


This is very interesting information. I almost never use unified anymore either. Most of the time it’s a lot slower than an adaptive render of equal quality. (I mostly render smartphone TVCs, btw.)
I will try out your double resolution settings for comparison as soon as I get some time to spare.

Thanks for sharing.


yeah I’m not doubting unified sampling’s benefits, I’m just not seeing them in my scenes which are mainly 20-60 objects with 500k-3mil polys each with few higher end raytracing features. I absolutely agree that scenes with lots of scattered glossy transparent surfaces and motion blur/DOF will probably see huge benefits with unified sampling.

I know rendering at 2x resolution is an old-school approach, but it’s still AA and a valid way to curb the number of samples adaptive AA uses.

For a full quality AA 2x render, I absolutely won’t use a higher gaussian filter, but in the cases where I’m “artificially” rendering at 2x by cutting the sampling way down, I’m effectively treating the 2x resolution as if it was an upsampled 1x resolution image. The higher filter helps soften the noise from the low sampling.

Believe me, I did a million test renders with different filter settings. After I downsample from 2x to 1x, the 3.5 gaussian filter at 2x resolution with low sampling basically looks like using a 2 gaussian filter at 1x resolution with full quality AA settings. Going lower than 3.5 starts looking more like the Box filter at 1 with a regular high quality AA 1x resolution render.
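A rough way to see why gaussian 3.5 at 2x lands near gaussian 2 at 1x, assuming the filter footprint simply scales with the downsize (a simplification - real filters don't compose this cleanly, and the original poster arrived at 3.5 by testing, not by this math):

```python
# Filter widths are specified in pixels at render resolution, so after a
# 2x -> 1x downsize the effective footprint measured in final pixels is
# roughly halved. Back-of-the-envelope only, not an exact equivalence.

def effective_width(filter_width, render_scale):
    """Approximate filter width measured in final-resolution pixels."""
    return filter_width / render_scale

print(effective_width(3.5, 2))  # 1.75 final pixels - in the ballpark of gaussian 2 at 1x
```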

The other upside is that alphas are SUPER crisp at 2160p and all your 2D post effects get an extra amount of AA after the final frames are downsized to 1x. The bad news, of course, is that the file sizes are huge and comp times are slower, but in the end I have viable 2160p frames.


Large file sizes when comping are another possible problem, true.

Are you doing motion blur in post? If so, I would try Unified Sampling and use motion blur in-render. With a few raytraced effects like reflections blurring etc, you will find it might be more realistic looking.


yeah I do motion blur and DOF in post

I know it’d be better if directly rendered and I will eventually experiment with it. The issue with my job in particular is that doctors often want to use frames from animations for something, and it’s really nice to have the option to just open the file and disable the motion blur so they quickly get a nice crisp image with no re-render time.

The other thing is my animations usually don’t have anything very fast moving to really make a huge difference in motion blur quality. I’ve also been doing more 60fps animations lately which again cuts the need for perfect motion blur in half.

I’ve been able to get by just fine with minor or medium amounts of fake DOF. I know you can’t truly fake an extreme DOF shot in post, but so far I’ve been able to dodge those shots one way or another.


Sidenote: Unified Sampling can undersample an image by setting the min samples to fractional values.

For example:

samples min 0.1
samples max 100
samples quality 1.5

This means a minimum of 1 sample every 10 pixels. I see you are undersampling in your renders at higher resolution.
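The fractional minimum above works out like this (pure arithmetic, not a renderer simulation; the HD frame size is just an example):

```python
# "samples min 0.1" budgets 0.1 initial samples per pixel on average,
# i.e. one guaranteed sample every 1/0.1 = 10 pixels. Unified Sampling
# then adds samples adaptively, up to "samples max", wherever the
# "samples quality" setting demands it.

def pixels_per_min_sample(samples_min):
    """How many pixels share one guaranteed initial sample."""
    return 1.0 / samples_min

def guaranteed_samples(width, height, samples_min):
    """Average number of guaranteed initial samples for the whole frame."""
    return width * height * samples_min

print(pixels_per_min_sample(0.1))           # 10.0 pixels per initial sample
print(guaranteed_samples(1920, 1080, 0.1))  # ~207,360 samples across an HD frame
```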


Yeah, I also tried the same undersampling at 2x resolution with unified sampling - tried several combinations of fractional values like .25 or .5 min and several different max values.

The resulting images using the fractional min values were ugly and really noisy. No matter what combination I tried, I couldn’t get an acceptable combination of render time and quality at 2x resolution that could compete with my undersampling adaptive AA settings at 2x resolution.

My conclusion was that for my scenes, adaptive AA was better for previews, unified sampling possibly better for full quality full resolution images, and adaptive AA was better for cheap 2160p renders. In the end, I’ll always choose the 2160p adaptive AA renders over the unified sampling 1080p renders when they both take around the same render time and the 2160p looks better anyway.


If you undersample you can’t really use a more brute force shader setting either. You would have to work more like AA settings with higher reflection rays.

Is there an example of one of your typical animations somewhere?


I don’t have anything within the last 4 years online yet. My stuff is all copyrighted and so I can’t really just post anything at will.

There is an animation from 4 years ago here:

It’s been completely butchered: repurposed, re-edited, new audio, with a picture-in-picture insert of a seriously old animation I had 3 hours to do for a newscast. But it gives you a sense of what I do. My anatomy models are massively more detailed now than when this animation was made.

I wish our company would put more of my current work online, but right now they’re more concerned with filling their webpages with content for all the various medical disciplines on their website, no matter how dated it is.

But as you can see, I don’t have anything crazy going on with raytracing features. It’s pretty stripped down and basic with the main emphasis being on anatomy modeling detail.


Ah ok, this gives me an idea. You may not need lots of sampling overall. And your camera moves, like you said, are probably more subtle.

This sort of underscores a lot of newer techniques. Most are designed to handle more and more complexity as machines get faster; so more brute force rendering can be done on previously prohibitive scenes easily.

You can probably add more details to your anatomy renders but you are limited by:

  1. Human anatomy itself doesn’t get more complex over the years.
  2. Illustrative/Instructive work still needs to be clear to the viewer.

So added effects and complexity for you may actually make it harder for the viewer to understand what is going on. Changing shader techniques will be of more help for you than core rendering schemes. The layering library, when it’s released, will be helpful for you: things like SSS are easier, there’s no more lightmapping, and more complex looks are easier to achieve without having to write something. The shaders are designed with Unified Sampling in mind but will work with anything.


yes, exactly.

I’ve had to strip detail out of a lot of my work at times because it was too distracting for what the main point needed to be. I have several sets of shaders and automation scripts for different looks or rendering speed needs.

We’re getting to the point where it isn’t feasible to fake things anymore with a texture map or displacement map; those tiny details need to actually be modeled and react correctly to light, or it’ll just look lame.

What’s been neat is that as more detail gets put into the modeling, it becomes less work for setting up textures. We don’t even really need bump or displacement maps a lot of the time anymore. Most of the textures are 3d procedural because it’s a UV nightmare with bitmaps.

I so look forward to getting rid of SSS lightmaps - can’t wait. And I really look forward to true translucency that isn’t insanely expensive like it is now.

Yeah, anatomy isn’t going to change, but there is so much of it that we’ve hardly scratched the surface with that will require multiple thin geometry layers using hardcore blurry refraction shaders, translucency, etc.


Keep an eye on the SSS2 controls (not the phenomenon), those will lead you to the new SSS shader controls when released.


Yeah, I read that MR hinted that the next Maya release will have SSS without lightmaps, so I’m excited and can’t wait to see what raytracing possibilities that might unlock.

So many CPU cores are wasted generating lightmaps. Even with the resolution cut down, it’s sometimes still not great.

