Maya 2012 Unified Sampling in mental ray


#21

Thanks for the technical insights as well as the nice pics, great job David…!:thumbsup:

Btw, as well as “imf_disp” you can also use the “ProEXR” plug-in, in conjunction with Photoshop CS4 x64, to view the layers embedded in the “diagnostic.exr” OpenEXR file provided…

Link to the high rez version:

http://www.samui3d.com/CGTALK/Capture_Pro_EXR.jpg

One last little ramble about the “Unified Sampling” String Options syntax, and btw you were correct from the start…!

The doubts I expressed in my previous post about the decimal-point syntax were caused by the “irradiance particles scale” String Option (20) “Value” field…

In the old Maya classroom scene used for these tests, it was still set to the integer value (1), which is actually wrong…

The decimal-point value (1.0) should be used instead, since the “Type” field is set to “scalar” by default in String Option (20).

These are just leftovers from previous tests, when I was messing around with the Mental Ray Importons and Irradiance Particles features…!

Guess I’m just tired after going through thousands of pages of Maya and Mental Ray technical info for the creation of the Tutorial Guide…:blush:

A little recap… If you input an integer such as the value (64) in the “Value” field of the “samples max” String Option (47), whose “Type” field is set to “float”…

Mental Ray, which expects a decimal-point value instead, will attempt to convert it on the fly, or else will just fall back to the standard “Adaptive Sampling” algorithm.

Hence disabling the “Unified Sampling” String Options controls as well, am I correct…?

This could be the reason for the different samples interpolation shown in the previous tests, as well as for the increased render times.
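That behaviour can be pictured with a toy parser (a Python sketch of my own, not Mental Ray’s actual option reader; the strict mode is a hypothetical, just to show the fallback case):

```python
def read_option(raw, declared_type, strict=False):
    """Toy string-option reader: coerce the raw literal to the declared type.

    In strict mode, a scalar/float option rejects a literal written without
    a decimal point (returns None, i.e. "fall back to defaults").
    """
    if declared_type in ("scalar", "float"):
        if strict and "." not in raw:
            return None                # mismatched literal: fall back
        return float(raw)              # "64", "64." and "64.0" -> 64.0
    if declared_type == "integer":
        return int(raw)
    return raw                         # plain string option

# a permissive parser makes all three spellings equivalent:
values = [read_option(s, "float") for s in ("64", "64.", "64.0")]
```

So whether the different render times come from an on-the-fly conversion or from a silent fallback depends entirely on how strictly the reader treats the literal.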

A last couple of tests to show how important the syntax is for “Unified Sampling”, and for the Mental Ray String Options controls in general…

In the first test I forced Mental Ray to convert values “on the fly” by setting the “Value” field of the “samples max” String Option (47) to the decimal-point value (64.0)…

Link to the high rez version:

http://www.samui3d.com/CGTALK/Decimal_Point.jpg

Notice the increase in render times compared to the next test, in which I used the correct decimal-point value (64.) shown by the String Options posted at the start of this thread.

Link to the high rez version:

http://www.samui3d.com/CGTALK/No_Decimal_Point.jpg

It seems that both input syntaxes used in the previous tests lead to the same samples interpolation; only the render times differ, and quite noticeably…

In the next test I did the opposite, omitting the decimal digit (0) in the “Value” field of the “samples quality” String Option (48), which was set to the input value (1.)…

Please note that the “Type” field is set to “scalar” by default in String Option (48) as well.

Link to the high rez version:

http://www.samui3d.com/CGTALK/No_Decimal_Point_Quality_Scalar.jpg

Notice the increase in render times compared to the next test, in which I used the default decimal-point value (1.0) in the “Value” field of the “samples quality” String Option (48).

Link to the high rez version:

http://www.samui3d.com/CGTALK/Decimal_Point_Quality_Scalar.jpg

Hope it helps…!

Alex


#22

Thanks so much for the MEL commands!

I do have some questions, however. When unified is active, does it act as an override of all the settings in the Render Globals? What settings does it override?

I made a simple motion-blur test of a sphere with a 3D displacement added (amounting to about 750k polys) and sent it moving and rotating at high speed along a reflective floor (using MIA shaders). Added 2 area lights as well. The results are initially disappointing: 1:48 with unified/quality 1; 1:12 with unified off/max samples 2; 15 secs with unified off/rasterizer motion blur on (vis samples 4/shading quality 1).

Now, rasterizer did not blur the reflections (as expected) but produced a less noisy blur than the other 2 tests. The non-unified raytraced blur seemed a bit grainier than the unified test.

The point is that I did not see any significant speed up when rendering motion blur, which is disappointing. What should I be expecting? What kind of scenarios will make unified pull ahead?


#23

Unified overrides the regular adaptive sampling. It also operates as a control for the Rasterizer if it is enabled. This is in the first post of this thread, hence the term “Unified”.

Your scene isn’t complex enough to benefit from Unified. Unified does a lot more work in the background than regular adaptive. Part of this is to simplify your work by giving you fewer controls. The drawback is that the most benefit can be seen in more complex scenarios that were prohibitive before. Also, do not use Scanline. Turn it to Raytrace (scanline off).

Rasterization will yield smoother motion blur, but Rasterization is not true raytraced motion blur. Not only are the benefits of rasterization eaten away when you raytrace, but you must shade triangles more than once to blur raytraced effects. Your performance will degrade faster and with less accurate results. (Highlights, for example, darken in rasterized blur where they do not in raytrace.)

Another fallacy is that most people will tune their motion blur to an absolutely perfect, smooth result for animation.

Don’t do that. It won’t be noticed in an animation. The vast majority of blur you see in film is honestly, really grainy. 10-12 hours a frame using Vray for Hereafter and the blur was still substantially noisy. No one noticed.

If you render blur with regular raytrace you will see that not only is the grain there, but the time samples create an odd temporal artifact between samples (it looks layered or stuttered). Unified has better temporal and adaptive measurement between actual sub-pixel samples. This means that even with motion blur on, it will sample adaptively across the image. Some pixels may still only require 1 sample (or your minimum) to resolve both the blur and the detail. This is where the savings become much more apparent in a complex scene.
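That per-pixel loop can be sketched in a few lines of Python (a toy of my own, not mental ray’s actual algorithm; the error metric and the 1/quality threshold are assumptions for illustration only):

```python
def sample_pixel(shade, s_min, s_max, quality):
    """Toy adaptive loop: keep taking sub-pixel samples until the noise
    estimate falls below a threshold derived from the quality setting."""
    threshold = 1.0 / quality          # higher quality -> stricter threshold
    samples = []
    while len(samples) < s_max:
        samples.append(shade())
        if len(samples) >= s_min:
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            error = (var ** 0.5) / len(samples) ** 0.5  # std error of the mean
            if error < threshold:
                break
    return sum(samples) / len(samples), len(samples)

# a flat region resolves at the minimum sample count straight away
flat_value, flat_count = sample_pixel(lambda: 0.5, 1, 64, 1.0)
```

The point is simply that flat pixels stop at the minimum while noisy (e.g. blurred) pixels keep sampling, so the cost concentrates where the image actually needs it.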


#24

Hey David, I hope that when this stuff makes it into an official documented release, Autodesk employs you (or someone with your knowledge) to write said documentation. Your detailed analysis, complete with examples, is invaluable. Thank you :thumbsup:

David


#25

Thanks for the info Bitter. Ack, yeah, I missed your blurb about the overrides in the first post. So much new info to digest in this thread, that part slipped through the cracks. :slight_smile:

This is overall very good news, as I’ve been wanting to drift away from the rasterizer for a while now: mr proxies don’t work with it, high RAM usage, inaccurate results, etc. But its combo of fur and detail shadow maps makes it awesome for animation with furry critters as far as render times are concerned. What I really want to do is use area lights with fur, which has traditionally been an insanely expensive combo that produced poor-looking results. I’ll do some tests using unified and see if such a combo is now feasible…


#26

Thanks, wish I had more images to show but most of what I’ve used while testing this can’t be shown. I also used to teach college, so I’m used to boring students with large amounts of technobabble. :slight_smile:

You should find raytraced hair is much easier to do now. Look here: Adaptive vs Unified

As this feature is used more, feedback will help tune its performance for future iterations. For now I have some edge issues but in motion it’s very useful/fast. And for situations where I need more quality but complexity means tuning individual settings will take forever, I can just crank up Quality and send it to the render farm.


#27

But since mine can be shown… here we go with a little update from the main Tutorial Guide…

A link to a lighter copy of the original “diagnostic.exr” OpenEXR 32-bit floating-point image file output, rendered at a resolution of 800x600, since the original “diagnostic.exr” file is almost 100 MB in size:

http://www.samui3d.com/CGTALK/diagnostic.rar

A screenshot of the “Diagnose samples” checkbox, which should be enabled in the “Raytrace/Scanline Quality” section under the “Quality” tab of the main “Render Settings” panel…

This ensures that the “Unified Sampling” String Options controls will write the “diagnostic.exr” OpenEXR 32-bit floating-point image file output to the main “SPONZATUTORIAL/renderData/mentalray” Maya 2012 project directory:

Link to the high rez version:

http://www.samui3d.com/CGTALK/Unified_Sampling_Diagnose_Samples.jpg

As I already said in my previous post, you can also use the “ProEXR” plug-in in conjunction with Photoshop CS4 x64 to view the layers embedded in the “diagnostic.exr” OpenEXR file…

A screenshot of the “mr_diagnostic_buffer_time.[Y]” layer embedded in the “diagnostic.exr” OpenEXR 32-bit floating-point image file output…

Displayed in Photoshop CS4 x64, used in conjunction with the “ProEXR” plug-in:

Link to the high rez version:

http://www.samui3d.com/CGTALK/Alex_Diagnostic_Pro_Exr_1.jpg

A screenshot of the “mr_diagnostic_buffer_samples.[Y]” layer, tonemapped using the Photoshop “Image/Adjustments/Exposure” control set to the input value (-5.0)…

Displayed in Photoshop CS4 x64, used in conjunction with the “ProEXR” plug-in:

Link to the high rez version:

http://www.samui3d.com/CGTALK/Alex_Diagnostic_Pro_Exr_2.jpg

But as already pointed out by “Bitter”:

To view the layers embedded in the “diagnostic.exr” OpenEXR 32-bit floating-point image file output, as well as the samples-per-pixel value (S) displayed by the “samples” layer and the “time” layer, which reads in seconds per pixel, you should use the Mental Ray “imf_disp” image display utility instead.

A screenshot of the main “mr_diagnostic_buffer” layer, embedded in the “diagnostic.exr” OpenEXR 32-bit floating point image file output…

Displayed using the Mental Ray “imf_disp” image display utility:

Link to the high rez version:

http://www.samui3d.com/CGTALK/Alex_Diagnostic_Imf_Disp_1.jpg

A screenshot of the main “mr_diagnostic_buffer_error” layer, embedded in the “diagnostic.exr” OpenEXR 32-bit floating point image file output…

Displayed using the Mental Ray “imf_disp” image display utility:

Link to the high rez version:

http://www.samui3d.com/CGTALK/Alex_Diagnostic_Imf_Disp_2.jpg

A screenshot of the main “mr_diagnostic_buffer_samples” layer, embedded in the “diagnostic.exr” OpenEXR 32-bit floating point image file output…

Displayed using the Mental Ray “imf_disp” image display utility and tonemapped using the “Exposure” field control, set to the input value (-5.0):

Link to the high rez version:

http://www.samui3d.com/CGTALK/Alex_Diagnostic_Imf_Disp_3.jpg

A screenshot of the main “mr_diagnostic_buffer_time” layer, embedded in the “diagnostic.exr” OpenEXR 32-bit floating point image file output…

Displayed using the Mental Ray “imf_disp” image display utility and tonemapped using the “Gamma” field control, set to the input value (2.2):

Link to the high rez version:

http://www.samui3d.com/CGTALK/Alex_Diagnostic_Imf_Disp_4.jpg
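For reference, both of the view adjustments used above are simple per-pixel transforms; a quick sketch (my own helper functions, not imf_disp or Photoshop code):

```python
def apply_exposure(value, stops):
    """Photographic exposure: each stop halves or doubles the value.
    -5 stops maps a raw sample count of 32 down to 1.0, i.e. into the
    displayable 0-1 range, which is why the samples buffer needs it."""
    return value * (2.0 ** stops)

def apply_gamma(value, gamma):
    """Display gamma: brightens the dark end of the range, useful for the
    per-pixel time buffer whose values are tiny fractions of a second."""
    return value ** (1.0 / gamma)
```

This is why the “samples” buffer needs a negative exposure (its raw values are whole sample counts, far above 1.0) while the “time” buffer benefits from a 2.2 gamma lift instead.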

Ever heard anything about the Ultimate Maya (which in Sanskrit means “illusion”) Tutorial Guide?

http://forums.cgsociety.org/showthread.php?f=87&t=922792&page=4

Hope it helps…!

Alex


#28

Ever heard anything about the Ultimate Maya (which in Sanskrit means “illusion”) Tutorial Guide?

OH, ok. I was looking for something on your site but since it’s not out, guess I’ll have to wait. :cool:


#29

Today at work I tried Unified Sampling for an upcoming project. Basically it’s a character with a LOT of fur, and with Unified it rendered almost 5 times faster! Unbelievable, and extremely nice for animations imho.


#30

I just got my educational copy of Maya 2012, so I’ll be playing with this later, and will probably put together a really simple MEL interface for tweaking the main controls as well. I’ll put it up tonight if I get it done.

In the meantime: how does the unified sampling scheme affect other sampled processes, like mib_amb_occlusion or soft shadows?


#31

There already exists a script UI for the 3.9 features on the forum, so no need to duplicate features. But you can probably alter it to suit your needs: enjoyMentalRayStringOptions

Unified doesn’t affect the occlusion samples directly. Meaning a value of 32 on an occlusion shader is still 32. But Unified can make the pattern of the sampling in the image more acceptable. Noise is easier to resolve in Unified. And in some cases you can simply crank up the Quality knob on Unified as opposed to tweaking the Occlusion shader.

Not as efficient, but ok if you need to send it to a farm to cook.
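A quick way to see why that works (a toy Python sketch of my own, not anything from mental ray): the occlusion shader still shoots its fixed 32 rays per call, but more image samples per pixel mean the shader is called, and averaged, several times, which cuts the pixel-to-pixel noise roughly with the square root of the call count, at the cost of those extra shader calls.

```python
import random
import statistics

def occlusion_call(rng, rays=32, true_occ=0.6):
    """Toy occlusion shader: averages a fixed number of binary visibility
    rays, so each call is a noisy estimate of the true occlusion."""
    return sum(rng.random() < true_occ for _ in range(rays)) / rays

rng = random.Random(7)
# one shader call per pixel vs. four calls averaged per pixel,
# measured over 300 "pixels" each:
one_call_px  = [occlusion_call(rng) for _ in range(300)]
four_call_px = [sum(occlusion_call(rng) for _ in range(4)) / 4
                for _ in range(300)]
```

Averaging four calls roughly halves the noise, which matches the point above: cranking image-level Quality smooths the occlusion without touching the shader, it just isn’t the cheapest way to do it.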


#32

Bitter: thanks for the info!

I think I’m going to make something similar to the enjoyMR script, with only those controls that are related to unified sampling. No need for a ton of UI just to play with this feature!

cheers


#33

I just chanced upon this thread, and this unified sampling thing looks really handy. I tried it in a scene I’ve got with lots of mr proxy instances and raytracing, and it appears to render twice as fast, even with motion blur!

One problem I’m having is that I’ve got some spinning wheels on cars, and the wheels appear a lot bigger than they should. I’ve tried changing the samples quality/min/max settings but to no avail. I can switch it over to the rasterizer and the motion blur renders out correctly, but render times are much slower.

Any ideas on how to get this working?


#34

I am unaware of that problem. Can you post a comparison?

As for spinning wheels, how many motion segments are you exporting in the motion blur settings? Spinning blur can come apart sometimes if not enough segments are exported.
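The geometry behind that is straightforward: each motion segment replaces an arc of the spin with a straight chord, and the worst-case gap between chord and arc (the sagitta) shrinks quickly as segments are added. A toy calculation of my own, assuming a hypothetical 90 degrees of rotation during the shutter:

```python
import math

def max_chord_error(angle, segments, radius=1.0):
    """Worst-case distance (sagitta) between a circular arc and the
    piecewise-linear path produced by `segments` motion steps."""
    per_segment = angle / segments
    return radius * (1.0 - math.cos(per_segment / 2.0))

# a wheel rim spinning 90 degrees while the shutter is open:
coarse = max_chord_error(math.pi / 2, 1)   # ~0.29 of the radius off the arc
fine   = max_chord_error(math.pi / 2, 8)   # under 0.005 of the radius
```

So with one segment the blurred rim can sit visibly off its true circular path, while a handful of segments brings the error well under a pixel for most framings.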


#35

Here is a comparison of renders:

It’s not as bad as it was, because I really cranked the settings up this time, but you can see that the motion-blurred wheel is displaced vertically compared to the non-motion-blurred wheel, which makes the cars look like they’ve had lowered springs when comped into the render.

Settings I used were:
samples min: 1.
samples max: 256.
samples quality: 8.

Motion Blur: No Deformation
Motion Blur by: 1.000
Shutter open: 0.000
Shutter closed: 1.000
Displace Motion Factor: 1.000
Motion Steps: 15
Time Samples: 20
Time Contrast: 0.050

The wheels are Mental Ray proxies.


#36

Yeah, those settings are extremely high.

I’d say:

Min 1
Max 64-100
Quality 1.0 - 1.5 unless this is for print.

Motion Samples may be a bit high, I can usually get away with 6-8 on a rotation.

Ignore time samples when using Unified. Set the time samples back to 1. You do not really need extra evaluation, Unified will do it for you and it’s possible this will slow you down.

Is there any other animation on the wheel?

Do you need proxies for memory reasons?


#37

Wow, you’re all over this thread!

Yes, I know the settings are extremely high, I was just trying to rule them out. I didn’t really know what effect they would have on the unified rendering so thanks for clearing that up.

Is there any other animation on the wheel?

The wheels are moving right to left in the frame.

Do you need proxies for memory reasons?

Yes, in some of the scenes there are hundreds of cars (the wheels are linked to car bodies in another layer). Proxies were the only way I could make the scenes manageable.

PS, really appreciate all the info on this new sampling technique.


#38

Wow, you’re all over this thread!

I get bored at night.

I can’t seem to replicate that.

Is the car also moving? Or the camera? Samples will be taken at different times while the shutter is open, and if the cars are moving, the frame evaluation will shift each of those samples, especially if the parent node is moving.


#39

Yes, the camera is moving and a bunch of its parents are too. It’s mainly horizontal movement though; the cars are travelling down a road, right to left, and the camera is following them with a tiny bit of vertical motion to make it feel more realistic.

The thing is, if I render the masterLayer, the wheels are still displaced vertically compared to the cars.


#40

Hmm, I would say check the animation and layer settings. I can’t reproduce this with just using Unified.

I tried this on a clean scene and a scene with animation on a 16 million tri vehicle and I can’t replicate it. But if you figure it out it would be good to know. I seem to recall something similar from someone on the mental images forum but can’t find it now.