DOF lens shader and unified sampling weirdness


lostparanoia
02-13-2012, 10:05 AM
Hi there.
I'm getting a weird error, as you can see in the following image:
http://i1052.photobucket.com/albums/s443/lostparanoia/dofWeirdness.jpg

It happens with both the mia_lens_bokeh and the physical_lens_dof shaders.

These are my render settings:

Unified sampling: on
Max samples: 256
Samples quality: 8

Multi-pixel filtering: Gaussian, 3x3

The material:
mia_material_x with anisotropic reflections
File textures in the diffuse and glossiness slots
Mipmap texture filtering, filter: 0.5 (the error still happens if I turn filtering off)

It happens with both the photographic and the exposure_simple lens shaders.

I'm using area lights for lighting, plus final gather (final gather does not affect the issue).

The issue only seems to happen with unified sampling. If I turn unified off and increase the DOF shader samples, it just looks like regular grain, as it should.
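For reference, unified sampling in this era of Maya/mental ray is typically driven through mental ray string options. A minimal MEL sketch of the settings above (the stringOptions indices and type tags are assumptions; check what your miDefaultOptions node already contains):

// Enable unified sampling via miDefaultOptions string options.
// Indices 0-2 are assumed to be free; adjust for your scene.
setAttr "miDefaultOptions.stringOptions[0].name" -type "string" "unified sampling";
setAttr "miDefaultOptions.stringOptions[0].value" -type "string" "on";
setAttr "miDefaultOptions.stringOptions[0].type" -type "string" "boolean";
setAttr "miDefaultOptions.stringOptions[1].name" -type "string" "samples max";
setAttr "miDefaultOptions.stringOptions[1].value" -type "string" "256";
setAttr "miDefaultOptions.stringOptions[1].type" -type "string" "scalar";
setAttr "miDefaultOptions.stringOptions[2].name" -type "string" "samples quality";
setAttr "miDefaultOptions.stringOptions[2].value" -type "string" "8";
setAttr "miDefaultOptions.stringOptions[2].type" -type "string" "scalar"; // some builds may expect "color" here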

m0z
02-13-2012, 10:29 AM
Did you set the samples to 1 for brute force (on the lens shader)?
And you aren't using a bokeh file?
I've never had issues like this with samples at 1, unified on, and the bokeh map off.

lostparanoia
02-13-2012, 10:53 AM
Did you set the samples to 1 for brute force (on the lens shader)?
And you aren't using a bokeh file?
I've never had issues like this with samples at 1, unified on, and the bokeh map off.

Yep: 1 sample, no bokeh map.

I tried turning off motion blur, and voila, the problem disappears!

So it's somehow related to DOF and motion blur in conjunction. Hmm...

lostparanoia
02-13-2012, 11:07 AM
I've had problems before with motion blur and DOF shaders in conjunction with the scanline renderer, but back then all I had to do was switch to full raytracing and it worked... that doesn't solve the problem in this case, though.

I'm guessing the DOF shader, motion blur, and unified sampling sometimes don't play nicely together.

Bitter
02-13-2012, 05:27 PM
Do you have a sample file? I've used motion blur and DOF with unified sampling together without a problem.

m0z
02-13-2012, 05:41 PM
Maybe it's the bokeh shader? I've had these block artefacts recently too; without motion blur they were gone. Hmm.

lostparanoia
02-13-2012, 06:40 PM
Do you have a sample file? I've used motion blur and DOF with unified sampling together without a problem.

I'm at home now, but I will post a file first thing tomorrow.



Maybe it's the bokeh shader? I've had these block artefacts recently too; without motion blur they were gone. Hmm.

Yeah, they only appear with motion blur on. And it's the same with the physical_lens_dof shader; I assume the two shaders partly share the same code.

lostparanoia
02-14-2012, 09:27 AM
Here's a scene containing the error: http://dl.dropbox.com/u/11041045/DofError.mb

Bitter
02-15-2012, 07:34 AM
It appears to be some odd clumping caused by the DOF shader. Possibly scene-dependent, because I've not seen it before; or maybe it was subtle enough in motion that I didn't notice.

Raising the samples on the DOF shader to 2-4 seemed to help: 4 looked best, 2 was a decent tradeoff. The render time increase shouldn't be linear because you're using unified sampling. And in actual motion I might consider some noise inconsequential.
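In MEL terms that's just the samples parameter on the lens shader node, e.g. (node name hypothetical):

// Hypothetical node name; raises the per-ray DOF sampling on the bokeh lens shader.
setAttr "mia_lens_bokeh1.samples" 4;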

Side note: you have 'single sample from environment' ticked on the shader for the anisotropic material, but I don't see a mia_envblur node anywhere, or a global environment. Was that intentional?

lostparanoia
02-15-2012, 12:27 PM
It appears to be some odd clumping caused by the DOF shader. Possibly scene-dependent, because I've not seen it before; or maybe it was subtle enough in motion that I didn't notice.

Raising the samples on the DOF shader to 2-4 seemed to help: 4 looked best, 2 was a decent tradeoff. The render time increase shouldn't be linear because you're using unified sampling. And in actual motion I might consider some noise inconsequential.

Thanks for having a look at it.
Render times are already a lot longer than I'd like, but I guess I'll just have to bite the bullet. The animation has a sort of Matrix-like, high-speed-to-slow-mo style, so the noise will definitely be visible otherwise. :/

Side note: you have 'single sample from environment' ticked on the shader for the anisotropic material, but I don't see a mia_envblur node anywhere, or a global environment. Was that intentional?

Well, it was intentional back when I had a mia_envblur node, but then I switched to unified and I guess I forgot to untick it... it shouldn't make a difference anyway.

Bitter
02-15-2012, 01:26 PM
DOF at render time is expensive in any renderer (except maybe a path tracer), so it's generally done in post, with objects rendered in layers.

The mia_envblur node is still worthwhile with brute-force unified sampling.

Instead of sending multiple samples into a pixel to resolve a blurry environment, unified can send just one. So I would still use it where possible.
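A minimal MEL sketch of the hookup, with hypothetical node names:

// Route the environment texture through mia_envblur, assign the blur node
// as the camera's environment shader, and let the material fetch it with
// a single sample. All node names here are hypothetical.
string $blur = `createNode mia_envblur`;
connectAttr "envTexture.outColor" ($blur + ".environment");
connectAttr ($blur + ".message") "cameraShape1.miEnvironmentShader";
setAttr "mia_material_x1.single_env_sample" 1;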

lostparanoia
02-15-2012, 03:13 PM
DOF at render time is expensive in any renderer (except maybe a path tracer), so it's generally done in post, with objects rendered in layers.

The mia_envblur node is still worthwhile with brute-force unified sampling.

Instead of sending multiple samples into a pixel to resolve a blurry environment, unified can send just one. So I would still use it where possible.

Yes, except in this case I have meshes splitting into hundreds of pieces, spinning around, building up another shape that explodes into hundreds of other pieces, spinning around, and so on. So unfortunately I can't really split that into layers effectively. :/

Hmm, I haven't noticed any real difference in render times with mia_envblur, since for a single ray it will be slightly more computationally expensive. But I guess it can still be beneficial then. Thanks for the heads-up.

Bitter
02-15-2012, 03:19 PM
Hmm, I haven't noticed any real difference in render times with mia_envblur, since for a single ray it will be slightly more computationally expensive. But I guess it can still be beneficial then. Thanks for the heads-up.

Why would it be computationally more expensive?

It should just call the rasterized environment from the mia_envblur node, which is much less expensive than firing multiple samples to resolve an environment, and should result in less grain, meaning fewer samples from unified.

mia_envblur rasterizes the environment once, when first called, and stores the result for future strikes. That takes just a couple of seconds for most maps I've seen.

lostparanoia
02-15-2012, 05:41 PM
Why would it be computationally more expensive?

It should just call the rasterized environment from the mia_envblur node, which is much less expensive than firing multiple samples to resolve an environment, and should result in less grain, meaning fewer samples from unified.

mia_envblur rasterizes the environment once, when first called, and stores the result for future strikes. That takes just a couple of seconds for most maps I've seen.

I don't know exactly how the envblur node works, but I assumed it would need to rasterize the environment many times, depending on how many different blur amounts are required. Say, for example, that you have a glossiness map on your material ranging from 0.1 to 0.9; many different levels of blurriness would be required to render it accurately.
I also ran a test once. At the time I didn't really gain either speed or quality, so I concluded it was redundant when using unified sampling.

Then again... I don't know exactly how it works and I assume that you do, so I'll take your word for it. :)

Bitter
02-15-2012, 06:18 PM
envblur is a trick: it can blend together differently blurred amounts of the original rasterization to generate a correct blur for the shader's setting.

If you find the detail for small amounts of blur is poor, you can increase the resolution attribute on the envblur node (that's why it's there: to make sure detail doesn't get blurred away completely).

During rendering you should see the shader rasterize the texture once (with progress messages); from then on, all shaders set to single sample from environment will collect from that.
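For example (hypothetical node name):

// Bump the rasterization resolution if fine detail washes out at low blur amounts.
setAttr "mia_envblur1.resolution" 1024;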

lostparanoia
02-15-2012, 09:04 PM
I know how it works settings-wise; I don't know how it works mathematically, though.
If I understand it correctly, it works more or less like this (simplified, of course, and reflection only). Please correct me if I'm wrong:

At the start of the render:
A low-res environment image is rasterized. This takes a small amount of time.

When a pixel is rendered:
An eye ray is sent; it hits a surface.
The gloss value for that surface position is stored.
A reflection ray is sent and hits the environment.
A blur whose amount is scaled by the inverse gloss value is applied to the affected environment pixels.
That value is returned for further calculation to get the final pixel value.

This renders a lot faster with adaptive sampling because we set the shader to grab just a single sample when it hits the environment, whereas without envblur + single sample it needs to send a whole bunch of reflection rays.
However, a blur requires quite a few calculations: it needs to work out how many pixels need to be blurred, then collect the RGB values of those pixels, add them together, and divide by the number of pixels collected (depending on what kind of blur algorithm is used, I guess; I only know the very basics of this stuff).

Now we come to the big question. Since you seem to be the go-to guy when it comes to unified sampling, you might be able to answer this. :)
Will unified send only one reflection ray for that particular pixel if it hits the environment, or will it send one reflection ray per eye ray?
Because if it sends one reflection ray per eye ray, I believe this could indeed become slower to calculate than without the envblur. You might get a lot of noise to up-sample even without noise in the reflection (from area lights, textures, etc.), and if you do, you might need to recalculate the blur for each reflection ray sent, even though it's not really necessary.

Disclaimer: this theory is based on some actual knowledge but mostly on a lot of assumptions, so please don't chop my head off if I'm wrong. I'm here to learn. :D

Bitter
02-15-2012, 10:19 PM
I only chop off co-workers' heads.

If the environment is sufficiently detailed and you have a glossy surface sending one ray at a time, then unified will keep pounding that surface until it resolves the details in the glossy reflection to your chosen quality. This is because one reflection ray is unlikely to strike something that the next ray will also see. Generally that's efficient and fast for complex interactions between geometry, but it's overkill for an environment that is already a texture we can manipulate more cleverly.

Using single sample from environment + mia_envblur means one ray is enough to resolve the glossy reflection from the environment.

Now the *possibly painful* hidden overhead of brute force WITHOUT single sample: if the shader is relatively complex, it's not JUST sampling the reflection; it's doing diffuse, light loops, etc. each time. Shaders best designed for unified will separate this at the component level (stay tuned: there's a thread on the NVIDIA ARC forums that explains this and how to do it better), so for now each little strike trying to resolve the reflection may be forced to do that other work as well.

So sending just one ray is best. :-) It strikes the environment, calls the already pre-baked rasterized texture, does a lookup, and returns that color result. The sample next door does the same, and a comparison says you're fine.

Easy test: put a Maya checkerboard on the IBL and run brute-force unified with single sample from environment on for one sphere and off for another. Look at the sample and time buffers: the sphere without envblur takes more samples and more time, and it still looks worse than the envblur sphere.
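Sketched in MEL, with hypothetical node names, the comparison is just:

// Two spheres with identical glossy settings; one reads the pre-rasterized
// envblur result, the other lets unified fire many environment rays.
// Node names are hypothetical.
setAttr "mia_material_x_A.refl_gloss" 0.5;
setAttr "mia_material_x_A.single_env_sample" 1;
setAttr "mia_material_x_B.refl_gloss" 0.5;
setAttr "mia_material_x_B.single_env_sample" 0;
// Render with the diagnostic sample/time framebuffers and compare.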
