Mental Ray 3.6 - More details...

Fus|on
09-12-2007, 01:52 AM
Hey everyone

Sorry if this has already been posted.

The new update looks awesome. :thumbsup:

Good work Mental Images and Master Zap :scream:

__________________________________________

http://area.autodesk.com/blogs/blog/4/blogpost/5244/

mental ray for Maya rendering

Increase rendering efficiency by converting textures to an optimized format: You can convert textures to an optimized format with a tileable structure to increase rendering efficiency. Using this format, mental ray does not load entire images but only the portion of the images required to render. Memory consumption is therefore reduced and you can render larger scenes and scenes that include textures with a higher resolution.

This feature increases rendering efficiency when you have a complex scene with a lot of textures that cannot all fit into memory at the same time.

* Converting textures to optimized format
* Rendering preferences
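For reference, the same conversion can also be done outside Maya with the imf_copy utility that ships with mental ray. A minimal sketch (the file names are placeholders; the -p flag builds the tiled/pyramid version):

imf_copy -p myTexture.tif myTexture.map map

The resulting .map file is memory-mapped at render time, so only the tiles actually needed for a bucket get paged in. You can also run this from MEL via system() if you want to batch it.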

Create mental ray transfer maps in object space: You can now create mental ray transfer maps in object space.

Render hardware particles with mental ray: You can now render the following particle types with mental ray: Points, MultiPoint, Spheres, Sprites, Streak, MultiStreak.

Note: Motion blur is supported for hardware particle rendering in mental ray.

Hardware particles are shaded in a way that is similar to software particles: a shading group must be assigned to the particle shape, and its surface shader completely determines the appearance of the particles.
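As a rough illustration of that workflow, something along these lines should do it in MEL (an untested sketch; node names are placeholders):

// create a shading group and drive it with a blinn
sets -renderable true -noSurfaceShader true -empty -name particleSG;
shadingNode -asShader blinn -name particleBlinn;
connectAttr -f particleBlinn.outColor particleSG.surfaceShader;
// assign the shading group to the particle shape
sets -e -forceElement particleSG particleShape1;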

New mia_material_x shader: A new mia_material_x shader has been added that lets you simulate materials used in architectural and product design renderings. This shader is similar to the mia_material shader but offers extra features, such as additional bump mapping attributes, and returns multiple outputs.

User framebuffers in output passes: You can now use user framebuffers in mental ray camera output passes. When you create an output pass, you can now select user framebuffers and output them to file.

When you select a user framebuffer to use with an output pass, mental ray for Maya automatically makes the required framebuffer and output pass node connections.

The user framebuffers are rendering attributes (color, alpha, depth, and so on) that control which image channels are passed to the shader and in what format. When used in conjunction with output passes, framebuffers are useful for splitting a render into component passes that you can later composite.

Set maximum resolution for Material Sample swatches: You can now set a maximum resolution for your Material Sample swatches. If your file texture is above this resolution, a swatch will not be created until explicitly requested.

This reduces memory consumption and reduces the Hypershade load time, increasing performance especially when dealing with many large textures. This is most useful for initial load of scenes with many large file textures.

Convert Maya hair to native mental ray hair: By default, Maya hair is converted to native mental ray hair so that it can be rendered with mental ray standalone.

New options for the mental ray for Maya command line renderer: You can now control rendering options, such as the Auto Memory Limit or Render Threads option, via the mental ray command line renderer as well as via the Render > Batch Render and Render > Render Current Frame option windows.

Smaller .mi files are created when you export fur and hair using the binary format: When you export your objects to a .mi scene file using the binary format, Hair and Fur (Hair Primitive) objects will be approximately 50% smaller compared to previous releases of Maya. This results in significantly smaller .mi files.
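For what it's worth, a binary .mi export can be triggered from MEL with the Mayatomr command, roughly like this (a sketch from memory; check the Mayatomr flags in your install):

// export the scene to a binary .mi file; the file name is a placeholder
Mayatomr -mi -binary -file "myScene.mi";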

Improvement with IPR for mental ray for Maya: Maya 2008 provides improved performance with the startup of IPR in mental ray for Maya. In addition, a snapshot of the model view is provided when IPR is used. The default for IPR Quality is now Render Settings.

Simplified workflow for creating lightmaps with the mental ray fast subsurface scattering shaders: When you create any of the misss_fast_shader, misss_fast_simple_maya, misss_fast_skin_maya nodes via the Hypershade, Maya automatically creates the lightmap network for you.

New mental ray shaders have been added:

Use the mia_light_surface shader to represent the shape of a light source.

Use the mia_envblur shader to blur the environment in a way that looks very similar to shooting an extremely large number of glossy reflection rays into it.

Use the mia_portal_light shader to create a portal light to reduce final gather rays and rendering times.

Use the mia_exposure_photographic shader to perform tone mapping.

Use the mia_lens_bokeh shader to simulate depth of field.

Use the contour_shader_silhouette shader to put contours at the silhouette of objects.

Use the contour_shader_maxcolor shader to take the maximum (in each color band) of the two material colors on each side of the contour.

Improved workflow for mia_roundcorners shader: The mia_roundcorners shader can be used with any Maya shader that has bump mapping. You can also chain other bump textures to the Bump_vector attribute of the mia_roundcorners shader to layer the bump effect to your shader.

Attribute Editor templates supported for mental ray structs and arrays: Maya now supports Attribute Editor templates for mental ray structs and arrays.

Support for input of DDS files: mental ray for Maya now supports input of the .dds format.

Improved method for subdivision approximation node: Beginning with Maya 2008, the subdivision approximation node produces ccmesh primitives instead of subdivision base mesh primitives. ccmesh primitives can define polygons with an arbitrary number of edges, rather than just triangles and quads. This new method can be up to several times faster than the previous one. For best performance, use triangles and quads or a combination of both.

Support for native mental ray light linking: mental ray for Maya now uses native mental ray light linking. Shaders are no longer responsible for their own light linking. All the Maya linking information is automatically handled by the mental ray core and shaders automatically receive the relevant light information.

Custom Text supports Python code: The Custom Text Editor now supports Python code.

Swatch now available for mental ray texture nodes: The mib_texture_lookup, mib_texture_lookup2, and mib_texture_filter_lookup nodes now provide a swatch display.

Maya hardware renderer improvements: The hardware renderer improves the display of layered textures as well as multiple UV sets for different textures in a scene.

Because the Maya hardware renderer interprets the Env Ball/Env Cube map in a different way from the Maya software renderer, you can choose between the different configurations through the Render Settings: Maya Hardware tab.

A new option has been added to control the specular highlight so that it only appears on the opaque surface and not the transparent surface.

Maya now provides shader files that developers can incorporate into their own CgFX shaders.

The Maya hardware renderer now supports negative lighting.

Support for HLSL hardware shaders: Maya now provides support for HLSL hardware shaders.

Jozvex
09-12-2007, 05:34 AM
All sounds pretty good to me!

:thumbsup:

Gal
09-12-2007, 11:31 AM
any idea when it will be out though?

techmage
09-12-2007, 11:49 AM
still. no shadow pass...

do big mental ray studios really not use shadow passes in their pipeline?

mental
09-12-2007, 03:10 PM
Render hardware particles with mental ray: You can now render the following particle types with mental ray: Points, MultiPoint, Spheres, Sprites, Streak, MultiStreak.

Hardware particles are shaded in a way that is similar to software particles: a shading group must be assigned to the particle shape, and its surface shader completely determines the appearance of the particles.
@Zap (if you're reading this): Does this mean that a 3rd party shader that employs the rasterizer, like Bigmuh's muhHair (http://animus.brinkster.net/stuff/stuff.asp), will be needed to get self-shadowing/deep-shadow-like effects on particles rendered within a reasonable amount of time?

Does this also mean that a shader like Guy Rabiller's lm_2DMV (http://www.alamaison.fr/3d/lm_2DMV/lm_2DMV.htm) is applicable to these particle types now?

I do not have access at the moment to Maya 2008 to test this out myself.

Also, are there any cool new particle rendering tricks that can be pulled off with the inclusion of these particle types, or with mentalray 3.6 in general?

asche
09-12-2007, 03:42 PM
does anyone here know how to use the output passes of the mia material?!
is it possible to get all the shaders (given that you only use mia) to write the indirect light pass into a single image? if that is possible, how do you do that in maya?!

mental
09-12-2007, 03:59 PM
does anyone here know how to use the output passes of the mia material?!
is it possible to get all the shaders (given that you only use mia) to write the indirect light pass into a single image? if that is possible, how do you do that in maya?!
What does this have to do with discussing new MR 3.6 features? :shrug:

asche
09-12-2007, 04:02 PM
well, it is a new feature of mentalray 3.6, so i thought it would kinda fit into a thread about new MR 3.6 features... :)

cpan
09-12-2007, 04:32 PM
you can with the mia_material_x, as that one has a struct output of all the 'passes' you'll ever
need. And use ctrl_buffers (or any buffer writer shader) to write those to the hdd.

asche
09-12-2007, 05:17 PM
what a pity, i thought it was somehow integrated into mr 3.6 so you could output the passes directly, without using 3rd party plugins :(

anyways, the new maya and mr 3.6 look like solid releases

royter
09-12-2007, 05:50 PM
anyways, the new maya and mr 3.6 look like solid releases

it doesn't look solid at all.
it's PURE marketing; they are introducing more and more new features of debatable priority. what's the purpose of new features if already existing elements aren't working seamlessly and in harmony with maya's architecture, and vice versa?

i hope this time that they work on the implementation in maya and test the software before they release it.

acidream
09-12-2007, 09:36 PM
any idea when it will be out though?

It was out to Platinum subscription members on Monday.

cpan
09-13-2007, 06:12 AM
it doesn't look solid at all.
it's PURE marketing; they are introducing more and more new features of debatable priority. what's the purpose of new features if already existing elements aren't working seamlessly and in harmony with maya's architecture, and vice versa?

i hope this time that they work on the implementation in maya and test the software before they release it.

/*OT hat
2008 was more of a 'bring previous tools into the modern age' release, coupled with several
strong features and serious performance improvements in certain areas, and all of
these were requested a lot, believe it or not. Still, there's a lot of room to improve
on the 'small houses'/'freelancers' side, but I'm sure good things are on 'alias's' todo
list, as unhappy guys actually send requests to the devs.

I see you bitching (bitching != criticizing) about maya/mentalray and maya quite a lot and
I'm wondering why you don't switch to other 3d software, as most of them are
'much better than maya', to quote you. :)
*/

mentalray 3.6 is awesome!
(had to post something regarding ray hehe)

slipknot66
09-13-2007, 06:38 AM
/*
I see you bitching (bitching != criticizing) about maya/mentalray and maya quite a lot and
I'm wondering why you don't switch to other 3d software, as most of them are
'much better than maya', to quote you. :)
*/

mentalray 3.6 is awesome!
(had to post something regarding ray hehe)

lol.. was wondering the same thing.. i work with max too and i hate how mental ray is integrated inside max, not to mention max itself.. lol

inguatu
09-13-2007, 11:30 AM
lol.. was wondering the same thing.. i work with max too and i hate how mental ray is integrated inside max, not to mention max itself.. lol

and don't forget how Max users can kick off Max/MR render jobs on their backburner (or other farm software) using "unlimited" free nodes, while Maya users are forced to buy seats of Maya just to run command-line Maya/MR renders on their farm. It's utter crap which AutoAliasSkymatter refuses to even acknowledge as an issue for Maya users. Autodesk pansies.

Kabab
09-13-2007, 01:12 PM
and don't forget how Max users can kick off Max/MR render jobs on their backburner (or other farm software) using "unlimited" free nodes, while Maya users are forced to buy seats of Maya just to run command-line Maya/MR renders on their farm. It's utter crap which AutoAliasSkymatter refuses to even acknowledge as an issue for Maya users. Autodesk pansies.
Have you ever considered that there may be some outstanding contractual agreements between Alias/Autodesk and Mental Images with regard to MR licenses for Maya?

Tierackk
09-13-2007, 01:21 PM
you can with the mia_material_x, as that one has a struct output of all the 'passes' you'll ever
need. And use ctrl_buffers (or any buffer writer shader) to write those to the hdd.

Could you please expand on this a bit more? PLLEEAASSSE :D

ctrl.studio
09-13-2007, 01:41 PM
I'm not sure ctrl.buffer can work with maya2008. It seems the framebuffer API has changed in this last mray release.

max

alexx
09-13-2007, 01:43 PM
and don't forget how Max users can kick off Max/MR render jobs on their backburner (or other farm software) using "unlimited" free nodes, while Maya users are forced to buy seats of Maya just to run command-line Maya/MR renders on their farm. It's utter crap which AutoAliasSkymatter refuses to even acknowledge as an issue for Maya users. Autodesk pansies.

aha.. that is completely new to me..
my last info on max was: 8 satellite licenses (as in maya unlimited) and unlimited max renderer licenses (as with the maya software renderer).

and i really don't think they changed that.

back to topic: did anyone test drive the particle rendering already? to be honest, that is the 2008 feature that sounds most promising to me.

cheers

alexx

Fus|on
09-13-2007, 01:49 PM
back to topic: did anyone test drive the particle rendering already? to be honest, that is the 2008 feature that sounds most promising to me.


was wondering when that would happen :applause:

hopefully more people will soon be testing 3.6 and passing along some informative comments.

dagon1978
09-13-2007, 02:00 PM
aha.. that is completely new to me..
my last info on max was: 8 satellite licenses (as in maya unlimited) and unlimited max renderer licenses (as with the maya software renderer).



unlimited max & mental ray licences in netrender (backburner) ;)

alexx
09-13-2007, 02:33 PM
DOH.. (about max)
and
DOH (about particle rendering):

nice, but a bit memory hungry.
i cannot test whether a memory limit would help.

stats:
630.000 particles
random creation expression for color and opacity (the latter was a really bad idea)
memory usage in the GUI, without tweaking, at production quality: over 4 GB (i only have 4, so it was really slow once it started swapping)

but: self shadows :)

cheers

alex

http://img213.imageshack.us/img213/6752/particles630kwo7.jpg

cpan
09-13-2007, 02:52 PM
I'm not sure ctrl.buffer can work with maya2008. seems framebuffer api are changed in this last mray release.

max

yep, but one can still use maya's own buffer creation & writer and store things in them
with the buffer_store (the new maya mentalrayUserBuffers are numbered 0, 1, 2 etc.,
it seems, so it's not much guesswork... wondering if there's a builtin store function though?)
still, ctrl_buffers is a lot better... waiting for the 2008 version eheh!! multilayer EXR FTW :scream:

Htogrom
09-13-2007, 04:17 PM
I would like to see geometry shaders working in MR. That is the thing that is most needed in a production pipeline, not some fancy arch shaders.

royter
09-13-2007, 05:01 PM
I see you bitching (bitching != criticizing) about maya/mentalray and maya quite a lot and
I'm wondering why you don't switch to other 3d software, as most of them are
'much better than maya', to quote you. :)
*/
mentalray 3.6 is awesome!
(had to post something regarding ray hehe)

i am not criticizing Maya, which i think is great, nor Mental ray, which is a great renderer. i am criticizing the IMPLEMENTATION of Mental ray in maya, which is a big joke, and i have more than 100 reasons to say that. i'll name a few:


- attaching a MR shader or a physical light shader to a maya light, and not knowing what option controls what from then on... because autodesk forgot to gray out the light options that will be overridden by the MR shader.

- the lack of a decent MR layered shader (not a 3rd party shader) and the fact that autodesk always relies on other people to do the job for them.

- the hours wasted finding the problem in the batch rendering of your AO pass because objects start to disappear (oh.... it turns out that the "load objects on demand" option doesn't work with the ao pass). How are artists supposed to know that?

- the hours wasted finding the problem in a batch render that crashes. OHH.... it turns out that rendering motion blur with "scanline" is extremely buggy and slow; you should use "raytrace". How are artists supposed to know that?

- in MR you can render this, and you can render that, but no..... you cannot render this with this (it is forbidden by the ....).

- the mentalray/animation related problems that pop up at the last minute. It's amazing how an animated mesh looks fine in the viewport and even in the maya renderer, but in the MR render it looks different and the elbow is not moving.
http://forums.cgsociety.org/showthread.php?f=87&t=532204

- the lack of simple logic in the MR render globals. Why not have a simple click for your reflection map, environment map, FG map... just like Vray.

- the pathetic memory management. spending hours tuning bsp settings, memory limit, task size, converting to .map, ........ but still "OUT OF MEMORY" (no comment).

- i could go on like this for pages but i am short on time.

I am totally aware of MR's power, i think it's a great renderer. And Maya is an amazing, sophisticated piece of software. But the two working together is a complete joke. Sometimes i wonder why MR is very well implemented in MAX and in XSI, and so rubbishly in Maya.
Someone is not doing his job, that's the answer.
In the meantime users will still have to investigate and gather evidence to achieve a simple thing, instead of spending time "creating" 3D content.


i am not willing to throw away years of experience and i will keep criticizing "unacceptable" imperfections until i am relatively satisfied.

BTW, i really hope that maya 2008 'brings previous tools into the modern age', that would be great, and i hope the same goes for the MR implementation..... fingers crossed.

Cheers

jupiterjazz
09-13-2007, 05:15 PM
DOH.. (about max)
and
DOH (about particle rendering):

nice, but a bit memory hungry.
i cannot test whether a memory limit would help.

stats:
630.000 particles
random creation expression for color and opacity (the latter was a really bad idea)
memory usage in the GUI, without tweaking, at production quality: over 4 GB (i only have 4, so it was really slow once it started swapping)

but: self shadows :)

cheers

alex

http://img213.imageshack.us/img213/6752/particles630kwo7.jpg

I see no motion blur? ;)

So, I tested this implementation already, and it is totally not production ready; personally, I wouldn't recommend it for any production shot.

Besides, as you can read in the ray user guide, mental ray has no optimized point primitive (hello, we are in 2007: several opensource renderers have that), and probably (I say 'probably' because I don't work at mental anymore, so I am not 100% sure) the Maya implementation is creating pseudo-optimized points within the plugin and intersecting them with the particle intersector shaders (judging by the mayabase.mi shipped with Maya), but it's not as efficient imho; besides, 4GB of RAM for ~half a million particles demonstrates how weak it is.

Also, hw particles do not support rgbPP and opacityPP, making them quite useless if you want to do something with them.

Not that I have anything against using ray, which is good for architectural and design types of renders, but for the animation & film production VFX domain the only smart choice for these things is a renderman renderer; I personally like 3delight and prman since they provide nice Maya integrations.

Both of their integrations can in fact render several million particles with 3D motion blur and deep shadows, with very low memory requirements, and very, *very* quickly. They also support particle displacement and all the PP attributes (even custom ones).



p.

T-R
09-13-2007, 06:15 PM
Does 3.6 for maya have
mip_matteshadow
mip_cameramap
mip_mirrorball
mip_motionblur
as shown in Zap's dancing robot video, or is it a max thing?

dagon1978
09-14-2007, 06:12 PM
Good work Mental Images and Master Zap??
I wanna say great work Zap, surely! But i still wanna know: where are the core enhancements in mray 3.6?
It seems like a 3.5.x SP1 with many new shaders (and this is Zap's work), but i can't see improvements in the core.

Ok, shaders are great, translator fixes are great too, but... hey mental, we need something more. we are still waiting for something new in the GI engine: something like lightcache (i'm starting to hate photons!), the ability to combine different GI methods a la vray/fR, a decent brute force method for quality renders, something usable as a GI preview (vray standalone has a PPT with interactive realtime preview; where's mental ray?), something to fix the FG flickering (again, vray has a new IC method interpolated frame-by-frame for this!), ...

In the release notes i can see just this:
- BSP2
which seems great and very helpful for heavy scenes, but... it's buggy!! instances don't work if you use BSP2 for the eye rays... and the worst thing is... i can't use grid anymore!! that's sad
- ccMesh
is this implemented in maya? i still have to investigate...
- FG force
seems interesting for architectural... but... where's this new feature? sad!
- Importons
i don't think we can use this feature in maya 2008 (and also in max 2008!), am i wrong?

i'm a little bit disappointed, especially when i have to see an annoying FG bug untouched in mray3.6!! please mental! stop working on the rasterizer (who's using the rasterizer for production??) and fix these kinds of bugs! :banghead:

jupiterjazz
09-14-2007, 10:43 PM
Good work Mental Images and Master Zap??
I wanna say great work Zap, surely! But i still wanna know: where are the core enhancements in mray 3.6?
It seems like a 3.5.x SP1 with many new shaders (and this is Zap's work), but i can't see improvements in the core.

Ok, shaders are great, translator fixes are great too, but...


Good point. Shaders, no matter how good they are, should not be the only way to improve a renderer.
The core should receive most of the attention; shaders can still be made by good programmers. And they were kind enough in the past to share them for free on cgtalk.

You are right to congratulate Zap, since without his shaders the mr integrations would not stand up to their competitors.



hey mental, we need something more. we are still waiting for something new in the GI engine: something like lightcache (i'm starting to hate photons!), the ability to combine different GI methods a la vray/fR, a decent brute force method for quality renders, something usable as a GI preview (vray standalone has a PPT with interactive realtime preview; where's mental ray?), something to fix the FG flickering (again, vray has a new IC method interpolated frame-by-frame for this!), ...



Modo has it too.
And if I am not wrong, Turtle too.

And PRMan/3Delight/Pixie all use a point-based occlusion and color bleeding technique, which is also used efficiently for area lights of arbitrary shape and texturing, and for glossy reflections.




In the release notes i can see just this:
- BSP2
which seems great and very helpful for heavy scenes, but... it's buggy!! instances don't work if you use BSP2 for the eye rays... and the worst thing is... i can't use grid anymore!! that's sad


More than sad, it's an error to remove grid before bsp2 is able to render all previous scenes with enough stability. This is also very strange, since AD is always concerned about compatibility.



- ccMesh
is this implemented in maya? i still have to investigate...



CCMesh *should* 1) provide better performance with displacement and 2) support mixed tri+quad polygon meshes converted to SDS at render time.
AFAICS from my tests the feature is very shaky: I had several crashes while trying to render mixed quad-tri polys as SDS, and when it manages, it takes quite some time. Features should be tested carefully before being claimed as 'features'; a more proper name would be 'experi-mental features' ;)



- FG force
seems interesting for architectural... but... where's this new feature? sad!


Maybe here they just forgot.


- Importons
i don't think we can use this feature in maya 2008 (and also in max 2008!), am i wrong?


Not in Maya 2008, it seems: it requires some preprocessing to build the importance photon map, as you can read in the ray release notes.

Aaanyway... this stuff is coming too late; the present and future of GI in production are point-based techniques.





i'm a little bit disappointed, especially when i have to see an annoying FG bug untouched in mray3.6!! please mental! stop working on the rasterizer (who's using the rasterizer for production??) and fix these kinds of bugs! :banghead:


Your concerns are quite understandable, Matteo.

The rasterizer tries to address the speed of the REYES renderman algorithm, using "microtriangles" instead of REYES micropolygons, with a similar system of shading samples decoupled from pixel samples. It can actually perform decently compared to renderman, but only if you use undisplaced polygon meshes (even motion-blurred ones, though).

It is basically a poor man's REYES, since it is slowed down by the performance hit of NURBS, hair and SDS tessellation and of displacement, performs badly with massively ray-dependent algorithms like AO, FG and glossy, and is very tricky shading-wise with volumes & transparency.

So, you are probably right: it would make more sense to concentrate the efforts only on the design and architectural market and on the relevant core features, leaving the production rendering and animation VFX market to renderers that can really handle it without circumnavigating bugs and limitations with legions of programmers.

Finally, two terms which are rarely used: ergonomics and usability studies.
Efforts should be concentrated there too.

Well, I think for today we earned our 2 cents for the nightly product strategy update lessons ;)

p

Puppet|
09-15-2007, 10:09 AM
Does Paolo or Master Zap or someone else know why the new mip_* shaders (about 14) are hidden in Maya 2008?
And where is the new framebuffer library?

jupiterjazz
09-15-2007, 10:28 PM
Does Paolo or Master Zap or someone else know why the new mip_* shaders (about 14) are hidden in Maya 2008?
And where is the new framebuffer library?


I guess - since I am not involved with mental ray anymore - it is because they are unsupported.
Just grep all the .mel scripts for 'suppress'; probably the node creation in the UI is hidden.
But you can create them by hand:

MEL:
createNode mip_whatever


whatever... :)
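For instance, one of the shaders T-R listed earlier should come in with just this (assuming the shader library itself is loaded, which it is by default in Maya 2008):

createNode mip_matteshadow;

It will simply lack an icon and a nice Attribute Editor template.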

As per the buffer stuff it's obviously not in.
No passes and no point primitive... very smart.

p

Puppet|
09-16-2007, 12:06 PM
I guess - since I am not involved with mental ray anymore - it is because they are unsupported.
I suppose that "unsupporting" is only that no
.xpm icons and no beauty AETempletes :)

MEL:
createNode mip_whatever
Just edit your mentalrayCustomNodeClass.mel to unhide all the shaders.

It looks like that's what the Max developers do (hide half of the standard mr shaders). Because Autodesk "loves" users and cares about vulnerable beginners' minds.

As per the buffer stuff it's obviously not in.
No passes and no point primitive... very smart.
As far as I can see, the Maya native shaders are ready for passes. Just view mayabase.mi: Maya's shaders split out many passes, like mia_material, but there is no way to store them to buffers. And no additional outputs (from shaders) in the Maya GUI.


And a question for the mr developers...
Why is TIFF with compression (as a texture) still not supported, for example LZW or ZIP? What is the problem with it?

MasterZap
09-16-2007, 02:58 PM
I guess - since I am not involved wit mental ray anymore - because they are unsupported.


The reason is very simple and down-to-earth: resources on the Quality Assurance side. I.e. for something to be released officially with the product as an officially supported feature, it requires a very resource-demanding round of testing, which takes a lot of people and a lot of time. For this release, the people and time were simply not available.

The compromise was to include the shaders in this "unsupported" way (the alternative would have been to not have them included at all).

Stay tuned to my blog for more info. The shaders should work fine with Maya, and if they don't - let me know.

The same will be true for 3ds max 2008 as it is released, and you can read Ken Pimentel's (the product manager for 3ds max) own words on the subject here (http://area.autodesk.com/blogs/blog/5/blogpost/5212/)

/Z

dagon1978
09-16-2007, 09:00 PM
i'm definitely loving the new BSP2, it seems incredibly fast when you have to use many samples

some mia preset:
2.272.358 polys
AA -1 2
ct 0,033
FG 200 0,4 20
glossy samples 8

BSP = 16m 20s
BSP2= 9m 58s
speedup = 63%

http://img338.imageshack.us/img338/1946/buddah2bsp16m20skm9.jpg


FG bruteforce scene:
544.579 polys
AA -1 2
ct 0,025
FG 4
glossy samples 8

BSP = 4m 20s
BSP2 = 3m 06s
speedup = 39%

http://img338.imageshack.us/img338/3940/2bsp23m06sye9.jpg

Fus|on
09-17-2007, 12:56 AM
man that is fast, thanks for the results dagon :applause: - one more reason why I might upgrade now...

feel free to share any more tests :D


i'm definitely loving the new BSP2, it seems incredibly fast when you have to use many samples

jupiterjazz
09-17-2007, 08:11 AM
I suppose that "unsupporting" is only that no
.xpm icons and no beauty AETempletes :)


Just edit your mentalrayCustomNodeClass.mel to unhide all the shaders.


That's the one I was after by grepping the .mel scripts.


It looks like that's what the Max developers do (hide half of the standard mr shaders). Because Autodesk "loves" users and cares about vulnerable beginners' minds.


Yep, kids shouldn't play with dynamite :)


As far as I can see, the Maya native shaders are ready for passes. Just view mayabase.mi: Maya's shaders split out many passes, like mia_material, but there is no way to store them to buffers. And no additional outputs (from shaders) in the Maya GUI.


Yes, I am aware, the structs are all in place:


# Blinn
#
declare shader
	struct {
		color "outColor",
		color "outGlowColor",
		color "outMatteOpacity",
		color "outTransparency",
		# BRDF components.
		color "outAmbient",
		color "outIncandescence",
		color "outIrradiance",
		color "outDiffuseShadowed",
		color "outDiffuseNoShadow",
		color "outShadow",
		color "outSpecularShadowed",
		color "outSpecularNoShadow",
		color "outReflection",
		color "outRefraction",
		color "outScatter",
		color "outOpacity"
	}



but there is no way to generate those, or custom ones, except maybe with some dynamic attributes... I wouldn't be surprised if there is a way, I know the "style"...


And a question for the mr developers...
Why is TIFF with compression (as a texture) still not supported, for example LZW or ZIP? What is the problem with it?

Dunno why; a shot in the dark: it could be because of .map generation from those.


p

jj80
09-17-2007, 10:28 AM
Is there a way with MR 3.6 to sample user framebuffers independently? This is the only thing holding me back from using a framebuffer-based render pipeline, and a huge waste of time.

I heard from a little bird that even a big TVC series with a little green alien is suffering tremendously from that...

Kryzys
09-17-2007, 11:36 AM
Dagon, can you tell us something more about the Final Gathering "force" mode?
I can't figure out where it is. Maybe in miDefaultOptions? But no, there are only the old modes (Multiframe, Automatic etc.). I searched in other places, but without results. If you can share more information about this here, the many people beginning their journey in MRay would be grateful. For example me ;)

... and sorry for my weak English.

Greetings for all Mray users.

MasterZap
09-17-2007, 01:25 PM
There is a string option of type "boolean" named "contrast all buffers" that you can set to "on". This will make oversampling be driven by all the buffers.

You can't get *different* sampling in each buffer (that would be just impossible to implement), but it will make sure all output buffers are individually well oversampled.

/Z

jj80
09-17-2007, 01:56 PM
oh, that's great news!

Do I find this option in the renderGlobals? If not, what would be the best way to switch it "on"?

techmage
09-17-2007, 02:23 PM
Dagon, where did you find the FG force switch? I can't find it anywhere.

Also, for the BSP2, I found it in miDef, but the switch seems to be a little buggy as in. If you start on BSP1 and go to BSP2, the BSP1 options stay open. But if you start on grid and go to BSP2, the grid options stay open and the BSP1 greyed. So I am wondering, do the BSP1 options of depth and size even have an effect on BSP2?

Also, guys did you notice theres a new feature that may have the ability to function as a proxy on-deman objects. There is even some example code for a geometry shader that can use them in the mental ray docs. I don't have anything set up to compile this and try. So I'm wondering, would anyone else be willing to expiriment with it?
file:///C:/Program%20Files/Autodesk/Maya2008/docs/Maya2008/en_US/RefGuide/assembly_api.html

ctrl.studio
09-17-2007, 02:58 PM
Do I find this option in the renderGlobals? If not, what would be the best way to switch it "on"?

From ray 3.5 on, new options are no longer hardcoded into a fixed struct that ray uses to set up itself and the scene; they can be added as string options. In maya2008 there's a first, buggy implementation of a GUI for adding string options to the options block of the mi scene. As it does not work, I get/set them manually via mel..


to contrast all buffers.... (didn't test it)
setAttr -type "string" miDefaultOptions.stringOptions[0].name "contrast all buffers";
setAttr -type "string" miDefaultOptions.stringOptions[0].value "on";
setAttr -type "string" miDefaultOptions.stringOptions[0].type "boolean";

to enable fg bruteforce...
setAttr -type "string" miDefaultOptions.stringOptions[1].name "finalgather mode";
setAttr -type "string" miDefaultOptions.stringOptions[1].value "force";
setAttr -type "string" miDefaultOptions.stringOptions[1].type "string";


you can also use ctrl.ghost.settings, which just does the same via a geometry shader and adds string options via a special interface. (for the record... 'importon' on boolean, and 'ambient occlusion cache' on boolean, seem broken in the api and need to be set via the maya client).

max

techmage
09-17-2007, 03:06 PM
Thanks, I see... where do i get ctrl.ghost.settings?

and what exactly do you mean by setting variables manually via maya client?

also, I've been wondering this for a while, does ctrl.studio have a website?

ctrl.studio
09-17-2007, 03:09 PM
and what exactly do you mean by setting variables manually via maya client?

that even if you can find them in ctrl.ghost.settings, they do not work because the api is broken.

setAttr -type "string" miDefaultOptions.stringOptions[2].name "importon"
setAttr -type "string" miDefaultOptions.stringOptions[2].value "on"
setAttr -type "string" miDefaultOptions.stringOptions[2].type "boolean"

setAttr -type "string" miDefaultOptions.stringOptions[3].name "ambient occlusion cache"
setAttr -type "string" miDefaultOptions.stringOptions[3].value "on"
setAttr -type "string" miDefaultOptions.stringOptions[3].type "boolean"

then control the respective values from the geoshader.


i'm definitely loving the new BSP2, it seems incredibly fast when you have to use many samples

for comparison you should also report the bsp1 settings. maybe you did not set them correctly, and that's why bsp2 seems so fast. sure, it's easier to get good results without having to set up the bsp values manually on heavy scenes. :-)

max

techmage
09-17-2007, 04:08 PM
The BSP size and depth settings do seem to affect BSP2.... but overall BSP2 is still faster no matter what numbers I try.

MasterZap
09-17-2007, 04:36 PM
http://img338.imageshack.us/img338/3940/2bsp23m06sye9.jpg


Nice... can we have that with the new mia_exposure_photographic and a vignetting of at least 1, please? ;)

/Z

jupiterjazz
09-17-2007, 06:28 PM
Is there a way with MR 3.6 to sample user framebuffers independently? This is the only thing holding me back from using a framebuffer-based render pipeline, and a huge waste of time.


Nope, you can't.
And you can't reduce the multipixel filtering or the filter size either...

Out of curiosity, do you want to recompose these images back in compositing?

Anyway, to mention the standard way used in the production world: with renderman renderers you can override - besides the obvious datatype/format/quantization - the multipixel filtering technique and the filter width.

An implementation of this system is 3delight for Maya, where you can set said filter/filter size for the 13 shader output passes (diffuse, spec, shadow, translucence etc...), and for unlimited secondary passes, with support for all the renderer variables (just render a pass with N, and you get the normals...) and several options for alpha and matte exclusivity. You do this simply in the UI, look here:

http://www.3delight.com/en/uploads/docs/3dfm/3dfm_4.html#SEC11

To add a cherry on top of the cake (if that makes sense in english ;) ), if you use Delayed Read Archives you can also use custom render pass settings on a per-DRA basis, so you can demand-load your procedural hierarchy of geometry and apply a specific render pass to it; each render pass can have multiple output passes with independent filtering ;)
Wi(l)dely used with Massive and with LOD too.
And you do all this from the UI, no need to custom develop.

If you have access to the old MTOR or the new RenderMan for Maya 2 / Pro, you have access to around "42 passes: 17 of the renderer's primitive variables (P, N, s, t etc) and 25 shader output variables to be used for secondary images":

http://www.fundza.com/rman_shaders/secondary_images/index.html



p

reptil
09-17-2007, 07:15 PM
Yes, renderman & 3delight are very powerful, but very obscure too, and there are not enough good tutorials for a fast move from mental ray to renderman.
Dagon, very good test :thumbsup:

jupiterjazz
09-17-2007, 07:22 PM
Yes, renderman & 3delight are very powerful, but very obscure too, and there are not enough good tutorials for a fast move from mental ray to renderman.


Your idea is already in the pipeline, my friend: "TRIX R 4 KIDS - renderman version" is in production.


Dagon very good test :thumbsup:

reptil
09-17-2007, 08:24 PM
i'm very happy to know this, Paolo!!! :scream:

dagon1978
09-17-2007, 10:11 PM
Nice... can we have that with the new mia_exposure_photographic and a vignetting of at least 1, please? ;)

/Z

hey zap, i was rendering the scene with the exposure_photographic, but i encountered a strange problem with the mia_mat (a bug?)

this is without glossy (exposure_simple)
http://img409.imageshack.us/img409/9572/ajaxit9.jpg

this is with glossy (exposure_simple)
http://img409.imageshack.us/img409/269/ajaxbugyw4.jpg

other shaders (dgs, mib_glossy, blinn, etc) work fine and i can't reproduce it in other scenes

dagon1978
09-17-2007, 11:34 PM
never mind, it was a problem related to the FG bruteforce samples (with less than 5 samples the mia_mat behaves badly, don't know why only the mia_mat...)

here's the ajax with vignetting = 2
http://img404.imageshack.us/img404/2327/ajaxdoftr8.jpg

edit: updated with DOF

dagon1978
09-17-2007, 11:43 PM
for comparison you should also report the bsp1 settings. maybe you did not set them correctly, and that's why bsp2 seems so fast. sure, it's easier to get good results without having to set up the bsp values manually on heavy scenes. :-)

max

my tests are not a BSP vs BSP2 comparison,
just a user-side point of view.
i mean, how many users are optimizing the BSP? 1%? 0,1%?
so, this is a "default BSP" vs BSP2 race.
to be 100% honest, i also tested a scene (with just 1 object with 1M+ polys) where BSP was faster than BSP2 (about 20% with few samples, but less than 5-10% with many samples)... but i don't have much time to test right now...

jude3d
09-18-2007, 11:57 AM
all those BSP tests are really relative and scene-dependent, so be careful about this BSP speed; just try both in your pipeline.

floze
09-18-2007, 12:56 PM
i'm definitely loving the new BSP2, it seems incredibly fast when you have to use many samples

some mia preset:
2.272.358 polys
AA -1 2
ct 0,033
FG 200 0,4 20
glossy samples 8

BSP = 16m 20s
BSP2= 9m 58s
speedup = 63%

http://img338.imageshack.us/img338/1946/buddah2bsp16m20skm9.jpg


FG bruteforce scene:
544.579 polys
AA -1 2
ct 0,025
FG 4
glossy samples 8

BSP = 4m 20s
BSP2 = 3m 06s
speedup = 39%

http://img338.imageshack.us/img338/3940/2bsp23m06sye9.jpg
Hey dagon, were you able to check out the new 'final gather contrast' feature? It wasn't working quite well with 8.5; maybe check out this thread: http://forum.lamrug.org/viewtopic.php?f=6&t=1168&sid=610fd3f430ce79914f9f7a7d78f3d208

I really halved the fg calculation times with it; just set it to ~0.45 or something, the quality won't suffer much.

jupiterjazz
09-18-2007, 01:23 PM
Hey dagon, were you able to check out the new 'final gather contrast' feature? It wasn't working quite well with 8.5; maybe check out this thread: http://forum.lamrug.org/viewtopic.php?f=6&t=1168&sid=610fd3f430ce79914f9f7a7d78f3d208


Funny, looks like even mentals don't know what it's made for... ;)



I really halved the fg calculation times with it, just set it to ~0.45 or something, the quality wont suffer much.

So, the official explanation from the release notes:

"mental ray has a new string option "finalgather contrast" r g b a and command line option
-finalgather_contrast r g b a which let the user or application specify the contrast threshold for the adaptive final gathering pre-computing phase independently of the contrast used for beauty rendering. If it is not set then the rendering contrast is used as previously."


Hence, this parameter affects the precomputation of FG points starting from ray35. Up to version 34, FG points in the precomputation phase (besides other rules, check “TRIX R 4 KIDS Vol.2”) were also placed adaptively based on the pixel contrast, but with the same logic that applies to the contrast-driven antialiasing sampling.

From 35 on, this attribute allows you to have a different sampling distribution, by contrast threshold, just for FG points in the precomputation stage, independently of the global contrast.

So this would be the classic mental-like explanation, with no usability information :)


In the real world of people struggling with badly designed renderers populated with millions of parameters and no documentation, it should mostly be left unspecified, so the contrast is the same as the renderer contrast; BUT it might be useful to turn it off for *preview renderings* with a very low number of fg rays, which show an uneven distribution of FG points.

So, for preview renderings of stills the pipeline could be:

- Set automatic FG: -finalgather_mode automatic (although it works with all the FG modes)
- Set <1 presampling density
- Set low accuracy and rays: -finalgather_accuracy view 100
- HERE IT IS: switch off contrast-driven FG sampling: -finalgather_contrast 1 1 1 1
- Set rebuild off: -finalgather_rebuild off
- Tune to your needs: -finalgather_points [you don’t need to recompute the FGmap]

This is at least what I remember from when I was trying to find a use for it more than a year ago ;)

On second thought, it could be used for IPR-ing FG, but to be really honest IPR is really unusable, and instead of adding such little optimizations AD and mental images together (since they do jointly develop mental ray for maya, as reported in their press news: http://www.mentalimages.com/1_1_news/news_texte/020722.html) should look at Modo to see how a preview renderer solution works...

So, to sum it up in a one-sentence user guideline: leave fg contrast as it is.
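If you do want to experiment with it from Maya, following the stringOptions pattern posted earlier in the thread it should be something like this (an untested sketch; the array index is arbitrary, and the "color" type name is an assumption based on the r g b a signature in the release notes):

setAttr -type "string" miDefaultOptions.stringOptions[4].name "finalgather contrast";
setAttr -type "string" miDefaultOptions.stringOptions[4].value "1 1 1 1";
setAttr -type "string" miDefaultOptions.stringOptions[4].type "color";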


p

jj80
09-18-2007, 01:39 PM
Nope, you can't.
And you can't reduce the multipixel filtering or the filter size either...

Out of curiosity, do you want to recompose these images back in compositing?



that's right, I have a nice setup which renders an arbitrary number of passes, but I always have to set my minSamples = maxSamples ...

floze
09-18-2007, 01:39 PM
Funny, looks like even mentals don't know what it's made for ;)




So, this parameter affects the precomputation of FG points starting from ray35. Up to version 34, FG points in the precomputation phase (besides other rules, check “TRIX R 4 KIDS Vol.2”) were also placed adaptively based on the pixel contrast, but with the same logic that applies to the contrast-driven antialiasing sampling.

From 35 on, this attribute allows you to have a different contrast threshold just for FG points in the precomputation stage.

So this is the classic mental-like explanation, with no usability information :)

In the real world of people struggling with badly designed renderers populated with millions of parameters and no documentation, it should mostly be left unspecified, so the contrast is the same as the renderer contrast; BUT it can be useful to turn it off for *preview renderings* with a very low number of fg rays, which show an uneven distribution of FG points.
BTW, the thing works with all the new FG modes.

So, for preview renderings of stills the pipeline could be:

- Set automatic FG: -finalgather_mode automatic
- Set <1 presampling density
- Set low accuracy and rays: -finalgather_accuracy view 100
- HERE IT IS: switch off contrast-driven FG sampling: -finalgather_contrast 1 1 1 1
- Set rebuild off: -finalgather_rebuild off
- Tune to your needs: -finalgather_points [you don’t need to recompute the FGmap]

This is at least what I remember from when I was trying to find a use for it more than a year ago ;)



p
Yeah, I figured that, and besides that it works quite well in combination with fixed sampling rates, but that's only practical in a few cases, for obvious reasons.

You sound quite frustrated by your ex-employer; not sure if this is the right place to express these feelings. ;)

elvis75k
09-18-2007, 01:54 PM
Funny, looks like even mentals don't know what it's made for ;)



I've wasted 5 (five) years of my life trying to understand what i was doing,
and just now i'm going far, far away from mental ray..
and this is the right place to express these feelings. (autumn is coming)




http://www.hermann-uwe.de/files/images/leave.preview.jpg

adios

jupiterjazz
09-18-2007, 02:05 PM
You sound quite frustrated by your ex-employer; not sure if this is the right place to express these feelings. ;)

On the contrary, I am finally free to express my personal user opinion. I sometimes have to do consulting with it.


Besides, I care a lot about Maya and I am always happy to help other users. In the period I was working for mental I did not have access to the internet, so I couldn't really contribute to the user community, and you know from my TR4K series that I like to do it ;P

dagon1978
09-18-2007, 02:19 PM
Hey dagon, were you able to check out the new 'final gather contrast' feature? It wasnt working quite well with 8.5, maybe check out this thread: http://forum.lamrug.org/viewtopic.php?f=6&t=1168&sid=610fd3f430ce79914f9f7a7d78f3d208

I really halved the fg calculation times with it, just set it to ~0.45 or something, the quality wont suffer much.

thanx floze! ;)
i was using it in maya 8 with the old ctrl.ghost and it was working very well
now i'm testing the new features, so i ignored this one... does it work with the miDef, or are you using ctrl.ghost as well?
also, i wanna know what FG importance does... from what bart said it seems related to the shaders... but why is there an FG importance in the miDef?

jj80
09-18-2007, 02:19 PM
out of curiosity, jupiter, why were you curious if I comp them? ;)

Do you know something I should know about this workflow?

Thanks for your TRIX4Kids btw, very useful resource.

jupiterjazz
09-18-2007, 02:28 PM
thanx floze! ;)
i was using it in maya 8 with the old ctrl.ghost and it was working very well
now i'm testing the new features, so i ignored this one... does it work with the miDef, or are you using ctrl.ghost as well?
also, i wanna know what FG importance does... from what bart said it seems related to the shaders... but why is there an FG importance in the miDef?

An explanation from my "TRIX R 4 KIDS Vol.4" (unpublished, siggraph 2006):


The problem:
FG density behind semi-transparent objects
in mental ray 3.4 the density of FG points generated behind semi-transparent objects was deliberately reduced to 40% as an optimization. This however produced some artifacts in particular scenes, such as interior architectural renderings, where many transparent surfaces are present and where indirect illumination is a key factor for the final quality of the image.

The shader API TRICK:

In order to solve this problem, in mental ray 3.5+ there is an 'importance' parameter in the shader API, in 'mi_compute_avg_radiance':


miBoolean mi_compute_avg_radiance(
	miColor  *result,
	miState  *state,
	miUchar  face,                          /* 'f' front, 'b' back */
	struct miIrrad_options *irrad_options); /* options to overwrite */


typedef struct miIrrad_options {
	int        size;                   /* size of the structure */

	/* finalgather part */
	int        finalgather_rays;       /* no. rays in final gather */
	miScalar   finalgather_maxradius;  /* maxdist for finalgather */
	miScalar   finalgather_minradius;  /* mindist for finalgather */
	miCBoolean finalgather_view;       /* radii in raster pixels? */
	miUchar    finalgather_filter;     /* finalgather ray filter */
	miUchar    padding1[2];            /* padding */

	/* globillum part */
	int        globillum_accuracy;     /* no. GI photons in estimation */
	miScalar   globillum_radius;       /* maxdist for GI photons */

	/* caustics part */
	int        caustic_accuracy;       /* no. caustic photons in est. */
	miScalar   caustic_radius;         /* maxdist for caustic photons */

	/* 3.5 extensions */
	miUint     finalgather_points;     /* #fg points for interpolation */
	miScalar   importance;             /* importance factor */   <<<

	/* this structure may be extended in the future */
} miIrrad_options;


In Maya 8.5 shaders support importance;
an importance of 1.0 means 100% of fg rays are shot also behind semi-transparent geometry.

Also in Maya 8.0 the Mayabase shaders do support this new 'importance', but with some trickery.
You can act in two ways: set this parameter on a per-shape basis (so on each shape node) or globally. In both cases you need to enable the ‘finalgather override’ in the UI.

In the shape nodes of objects behind semi-transparent geometry you should:

- enable 'final gather override'
- addAttr -at "float" -ln miFinalGatherImportance -dv -1.0 <shape>

-1.0 is the default of ray34 (40% fewer fg rays); set it to anything from 0.1 (10%) to 1.0 (100%)
- addAttr -at short -ln miFinalGatherPoints -dv 0 <shape>

Alternatively you can add a global option to miDefaultOptions:

- finalGatherImportance type scalar and set it to +1.0

Then you need to enable the final gather override attributes on all the objects in the scene and set them to the same values as in your render settings:

- MEL: select `ls -typ geometryShape`
- Then go to the Maya Attribute Spread Sheet and set the parameters there.

You can easily script it.
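A minimal MEL sketch of such a script, looping the addAttr calls above over every shape (untested; the name of the override checkbox attribute, miOverrideFinalGather, is an assumption and may differ per Maya version):

string $shapes[] = `ls -type geometryShape`;
for ($s in $shapes)
{
	// enable 'final gather override' on the shape (attribute name assumed)
	setAttr ($s + ".miOverrideFinalGather") 1;
	// add the importance/points attributes if they are not there yet
	if (!attributeExists("miFinalGatherImportance", $s))
		addAttr -at "float" -ln miFinalGatherImportance -dv 1.0 $s;
	if (!attributeExists("miFinalGatherPoints", $s))
		addAttr -at short -ln miFinalGatherPoints -dv 0 $s;
}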

The architectural material (mia_material) implements this importance factor.

NOTE that the importance sampling of fg points is not visualized in the finalgather diagnostic mode, because it does not change the density of FG points; it changes the number of FG rays shot around from each FG point.
You will, however, see a higher number of fg rays shot in the rendering statistics.

p

dagon1978
09-18-2007, 02:39 PM
thanx for the info paolo ;)

jupiterjazz
09-18-2007, 03:35 PM
out of curiosity, jupiter, why were you curious if I comp them? ;)

Do you know something I should know about this workflow?


hehe ;)
I do remember that there were a couple of ways to use such a pass.
So I just imagined you needed that for comp, maybe having unfiltered results in a special pass to use for saying "what's in - what's out", like a special depth-compare pass... to be honest I forget the real need for it. :p



Thanks for your TRIX4Kids btw, very useful resource.

Thanks dude, appreciated!

p

elvis75k
09-18-2007, 04:47 PM
Halleluja!! Finally we have a full-time HERO! We badly need a hero over here to keep going..
Sorry for the off-topic, and i like the "maya grindhouse".

yeah!

jupiterjazz
09-18-2007, 05:38 PM
Halleluja!! Finally we have a full-time HERO! We badly need a hero over here to keep going..
Sorry for the off-topic, and i like the "maya grindhouse".

yeah!


Ok, wait, although it's fun to read, I smell some over-excitement here.. ;)

First, there are already a lot of heroes here, and they even provide very nice free tools that make your cg-life peachier. And these are heroes not only for the user community but also for the "vendors community" ;)

On my side, if I can help I will - paraphrasing the good'ol Nike ad line - just do it.
But regarding the "full time" I must advise:

1) I am often busy on other things
2) I love holidays


As for the 'Maya Grindhouse' title.
It's an adaptation of the four-hands double feature by Tarantino+Rodriguez you well know, paraphrased into this year's four-hands double Siggraph Maya Masterclass with me and Sergey Tsyptsyn: "Maya GrindHouse: Planet Nucleus and Render Proof".
It was a wild stunt on "Alternative nucleus simulations & rendering techniques". B)


p

mental
09-18-2007, 09:16 PM
Hey Paolo,

Do you know if Autodesk intends to release "Maya GrindHouse: Planet Nucleus and Render Proof" as part of the Masterclass material or as a separate DVD release?

Thanks!

wizzackr
09-19-2007, 07:44 AM
Yes, renderman & 3delight are very powerful, but very obscure too, and there are not enough good tutorials for a fast move from mental ray to renderman.
Dagon, very good test
Your idea is already in the pipeline, my friend: "TRIX R 4 KIDS - renderman version" is in production.
That would be the best thing ever. We're a small shop with just a few artists, no dedicated TDs and/or programmers. Yet, at least once a year we seriously consider giving 3delight a serious look - which up to now has always ended with a bit of fiddling, saying "yeah, that could be awesome"... and then falling back to mr. Mostly due to what reptil already said: it seems "very powerful, but very obscure too, and there are not enough good tutorials for a fast move from mental to renderman"...

(...) Finally, two terms which are rarely used: ergonomics and usability studies.
Efforts should be concentrated there too. (...)
Before I forget: big fat thanks for all the great effort, Paolo. I guess the trix4kids series was the first time i started to wrap my head around mr... :thumbsup:

jj80
09-19-2007, 08:00 AM
hehe ;)
I do remember that there were a couple of ways to use such a pass.
[...]


I think I missed something ;) Wasn't actually talking about a specific pass. Just framebuffers to put out *any* pass you want, using a shader which stores stuff in frameBuffers and then a framebuffer (geo) shader.

Anyhow, I'll try the oversampling option variable as soon as I get a chance, sounds like it might do the trick.

I'm very interested in 3Delight as well btw !

jupiterjazz
09-19-2007, 08:06 AM
That would be the best thing ever. We're a small shop with just a few artists, no dedicated TDs and/or programmers. Yet, at least once a year we seriously consider giving 3delight a serious look - which up to now has always ended with a bit of fiddling, saying "yeah, that could be awesome"... and then falling back to mr. Mostly due to what reptil already said: it seems "very powerful, but very obscure too, and there are not enough good tutorials for a fast move from mental to renderman"...


Duly noted, don't worry.

As I said, I am already doing it, it just takes time. But it will possibly come within this year.
Note also that escape studios is doing some great documentation for renderman; not sure if it will be available for free.


Before I forget: big fat thanks for all the great effort, Paolo. I guess the trix4kids series was the first time I started to wrap my head around mr... :thumbsup:

Thanks, again. :)

p

jupiterjazz
09-19-2007, 08:09 AM
Hey Paolo,

Do you know if Autodesk intends on releasing "Maya GrindHouse: Planet Nucleus and Render Proof" as part of the Masterclass material or a seperate DVD release?

Thanks!

We are in the process of discussing what to do with all that material right now.

Paolo

jupiterjazz
09-19-2007, 08:42 AM
I think I missed something ;) I wasn't actually talking about a specific pass, just framebuffers to put out *any* pass you want, using a shader which stores stuff in framebuffers and then a framebuffer (geo) shader.

Anyhow, I'll try the oversampling option variable as soon as I get a chance, sounds like it might do the trick.


Oversampling the main buffer is the solution, but obviously not a good one.
It's like when, in the old Maya renderer, people were sometimes required to render at double size... C'mon!!!

So you should avoid oversampling, and avoid setting min samples = max samples, for obvious reasons.

Besides, I don't know if they broke (again) the coverage option.
There is also a 'contrast all buffers' option...
Even having played with this renderer for a looong time, I'm starting to have difficulty understanding it myself: there are too many options, deprecated options, experimental options... when I was doing options cross-testing, the permutations were becoming too many.
All this served with an incredible lack of documentation.

But I do remember - Puppet can help here - that with coverage on and adaptive sampling you then have filtering problems, aka artifacts: more artifacts with high-order filters than with box 1 1.
AFAIK this problem was present up to and including Maya 8.5.
Dunno if they fixed it in 3.6. Ask Puppet, I remember he needed it.

Anyway, that's why I mentioned that you can do it easily in 3delightForMaya.
renderman renderers don't have min and max samples (that is, no adaptive QMC sampling); RenderMan uses 'stochastic sampling' and you tune only the 'pixel samples' param.
And they can independently filter each secondary pass with different techniques, without artifacts.



I'm very interested in 3Delight as well btw !

You should. It's really a powerful renderer and I like the way they are distributing it: the first license is free, even for commercial use. AFAIK it is the only product which includes in the same package the maya plugin (3delight for maya) and the 'standalone' renderer (renderdl), so it's like getting mayatomr + mental ray standalone.

This, together with the fact that if you buy support you get client updates (new builds) almost on a daily basis, and for all the platforms, made me decide to use it for 'Maya Grindhouse', so that anybody could use it.

This is a fresh, modern way to do software development, and, always IMO, the right way to handle the relationship with customers.

Paolo

Puppet|
09-19-2007, 09:25 AM
If your beauty pass is the sum of all the other passes, in most cases you really don't need any oversampling. For example, 0 2 will be enough, and you don't need the 'contrast all buffers' option either.

There is one exception: if you have passes that have no effect on the beauty, like masks and some others. In that case you should enable the 'contrast all buffers' option.

Oversampling is a bad solution, because the render time will be greater than rendering all the passes independently.
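For reference, here is a quick sketch of what those numbers mean, assuming the usual reading of mental ray sample levels (each level quadruples the sample count; this is my understanding of the docs, so verify against yours):

```python
# Quick sketch, assuming the usual reading of mental ray sample levels:
# level L means 4**L samples per pixel (negative levels spread one
# sample over several pixels), and "samples min max" lets the contrast
# threshold pick a count between the two extremes.
def samples_per_pixel(level):
    return 4.0 ** level

for lo, hi in [(0, 2), (-2, 0)]:
    print(f"samples {lo} {hi}: {samples_per_pixel(lo)} to "
          f"{samples_per_pixel(hi)} samples per pixel")
```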


I can't check the filtering artifacts with mr 3.6 because I have no mental ray standalone, but my p_MegaTK_pass shader broke after recompiling it for mr 3.6. The shader works fine, but no passes are created or written. Looks like I need to edit some code... more...

To the developers...
Why are the buffers not created and written? The docs say that the old buffer format is still supported and should work without any modification. Maybe I missed something?

tostao_wayne
09-20-2007, 07:57 AM
Please, how can I use the multiple outputs of the mia_material_x?

There is nothing about it in the documentation.


Thanks

Tostao

ACamacho
09-20-2007, 11:17 AM
Well, there is no easy way to get the outputs from the shader. With Maya 2008 you can create the framebuffer, and you can save the framebuffer out, but there is no simple way of storing something IN the framebuffer. What I ended up using was ctrl_buffer's "buffer store" node, piping each output of mia_material_x into it.

I posted this in the other thread but I guess I can post it here too. I can get the outputs from the mia_material_x, and they are in float, but the images are clipped before they are saved to float EXR. I made sure the framebuffers were rgba float in both the primary and secondary buffers. :-/

Also, I tried Mega_TK with 2008 yesterday, and yeah, it renders fine but I get an error when outputting buffers. Actually, in order to get the Mega_TK to render I had to pipe it through a p_maya_shading_engine node. If I just piped the TK node into the shading group it would give me a struct error.

Puppet|
09-20-2007, 11:34 AM
I can get the outputs from the mia_material_x, and they are in float, but the images are clipped before they are saved to float EXR. I made sure the framebuffers were rgba float in both the primary and secondary buffers. :-/
Master Zap, can you confirm this problem?

Also, I tried Mega_TK with 2008 yesterday, and yeah, it renders fine but I get an error when outputting buffers.

If you disable 'Export Post Effect' there will be no error, but still no passes (none defined, none saved).

Actually, in order to get the Mega_TK to render I had to pipe it through a p_maya_shading_engine node. If I just piped the TK node into the shading group it would give me a struct error.
If you connect p_MegaTK (or one of my shaders) directly to the shading group, Maya 2008 automatically enables 'Export with Shading Engine', and that causes an error during render. To avoid it, just use p_maya_shadingengine or disable the 'Export with Shading Engine' option.

ACamacho
09-20-2007, 11:48 AM
If you connect p_MegaTK (or one of my shaders) directly to the shading group, Maya 2008 automatically enables 'Export with Shading Engine', and that causes an error during render. To avoid it, just use p_maya_shadingengine or disable the 'Export with Shading Engine' option.

I've noticed that too. That's good, because I have been caught not checking the export option and having Maya crash on me.

Renderman is sounding more and more lovely lol.......I kid I kid. ;)

iosifkefalas
09-23-2007, 11:30 PM
I just read in the mental ray documentation about a new final gather mode in mi 3.6 called "force". I can't find that option anywhere in the render globals or even in the miDefaultOptions node. Is it possible that it's not implemented in Maya 2008?

jupiterjazz
09-24-2007, 07:39 AM
I just read in the mental ray documentation about a new final gather mode in mi 3.6 called "force". I can't find that option anywhere in the render globals or even in the miDefaultOptions node. Is it possible that it's not implemented in Maya 2008?


You can enable it with ctrl.studio's ctrl.ghost.

Anyway, you will probably rarely use that option since it's hard to get a good image out of it; that's also probably why it's not in the UI.
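If you don't have ctrl.ghost, a possible workaround - a sketch, not tested - is to export the scene to .mi and patch the options block yourself. The `finalgather mode "force"` statement below is my reading of the mi 3.6 docs, so verify it against your version:

```python
# Hypothetical sketch: enable the mi 3.6 "force" final gather mode by
# patching an exported .mi file, since Maya 2008 exposes no UI for it.
# The exact statement syntax is an assumption taken from the mi 3.6
# docs -- double-check it before relying on it.
import re

def force_fg_mode(mi_path, out_path):
    text = open(mi_path).read()
    # append the mode statement right after the first "finalgather on"
    patched = re.sub(r"(finalgather\s+on)",
                     '\\1\n    finalgather mode "force"',
                     text, count=1)
    open(out_path, "w").write(patched)

# force_fg_mode("scene.mi", "scene_force.mi")  # paths are placeholders
```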

p

iosifkefalas
09-24-2007, 07:47 AM
I use a Mac, so I don't have the opportunity to try any of the wonderful ctrl_shaders. Too bad..

iosifkefalas
09-24-2007, 09:26 AM
So does anybody know which of the new features of mental ray 3.6 are implemented in Maya 2008? Because I constantly read about features of 3.6 that just don't exist in the GUI - or maybe they do, but are evaluated automatically under the hood, and I really don't know under what circumstances. Man, why is the mental ray documentation so badly written?

jupiterjazz
09-24-2007, 12:40 PM
So does anybody know which of the new features of mental ray 3.6 are implemented in Maya 2008? Because I constantly read about features of 3.6 that just don't exist in the GUI - or maybe they do, but are evaluated automatically under the hood, and I really don't know under what circumstances. Man, why is the mental ray documentation so badly written?


Never heard that one before... :)
I have only answers with a high amount of sarcasm, so this time I will skip them... ;)

Simply try to avoid using hidden features, except for the taste of experimenting or a tendency towards masochism. Most of the time they are not production ready.
In this case you won't get much help from the force FG mode (that is, an uninterpolated brute-force method), just longer render times and a noisy look. You should be able to get what you need with the other modes.

p

iosifkefalas
09-24-2007, 02:06 PM
Never heard that one before... :)
I have only answers with a high amount of sarcasm, so this time I will skip them... ;)

Simply try to avoid using hidden features, except for the taste of experimenting or a tendency towards masochism. Most of the time they are not production ready.
In this case you won't get much help from the force FG mode (that is, an uninterpolated brute-force method), just longer render times and a noisy look. You should be able to get what you need with the other modes.

p


OK, then they shouldn't be in the Maya help files at all, because they simply don't exist in Maya. Although I like experimenting, the reason I was looking in mental ray's what's-new doc was absolutely not that I'm a masochist; I was hoping for a better FG solution that doesn't require ridiculously high FG ray counts to render an animation without flickering.

jupiterjazz
09-24-2007, 02:26 PM
OK, then they shouldn't be in the Maya help files at all, because they simply don't exist in Maya.


I agree.



Even though I like experimenting, the reason I was looking in mental ray's what's-new doc was absolutely not that I'm a masochist; I was hoping for a better FG solution that doesn't require ridiculously high FG ray counts to render an animation without flickering.

I understand.
I wasn't referring to you with that 'masochist', just to the average user struggling with a plethora of poorly documented parameters.

FG force is not suitable for animations. It makes sense only for stills, and IMHO it does not work too well anyway.

Furthermore, the intrinsic nature of FG makes it, in general, unsuitable for animation, especially when the lighting changes.

If you want to use FG solutions for animation, VRay and the Modo renderer (as of v301) offer time-aware final gathering modes, which provide better results.

iosifkefalas
09-24-2007, 02:37 PM
I agree.
If you want to use FG solutions for animation, VRay and the Modo renderer (as of v301) offer time-aware final gathering modes, which provide better results.

Hmm, VRay, eh? The only thing I've missed since I migrated from Max (PC) to Maya (Mac).

P.S. I could try Modo 301 on the Mac, though, if I can manage to export my animation to modo..

slipknot66
09-24-2007, 02:42 PM
If you want to use FG solutions for animation, VRay and the Modo renderer (as of v301) offer time-aware final gathering modes, which provide better results.

Well, with V-Ray you will have the same problems that you have with mental ray.
We did some tests here with V-Ray for max, and it was as difficult as with mental ray to achieve flicker-free animation, especially when the light changes. It was also slower than mental ray.

jupiterjazz
09-24-2007, 03:25 PM
Well, with V-Ray you will have the same problems that you have with mental ray.
We did some tests here with V-Ray for max, and it was as difficult as with mental ray to achieve flicker-free animation, especially when the light changes. It was also slower than mental ray.


I guess it all comes down to "what you are doing" and "how you are doing it".

For the average user, I would check out Modo 301 and see if it matches your needs, also because it's way easier to set up than VRay or mental ray.

If you want to investigate a powerful pipeline solution instead, go check the point-based renderman stuff; the future is there, since the algorithm is basically noise-free and you get occlusion and color bleed at very high speed with a very low memory footprint.

p

dagon1978
09-24-2007, 03:49 PM
Well, with V-Ray you will have the same problems that you have with mental ray.
We did some tests here with V-Ray for max, and it was as difficult as with mental ray to achieve flicker-free animation, especially when the light changes. It was also slower than mental ray.

Did you test the new "Interpolation frames" feature? It seems very powerful.

BTW, slower or faster... it depends on what you have to do.

iosifkefalas
09-24-2007, 04:00 PM
Well, with V-Ray you will have the same problems that you have with mental ray.
We did some tests here with V-Ray for max, and it was as difficult as with mental ray to achieve flicker-free animation, especially when the light changes. It was also slower than mental ray.

VRay indeed, as I remember, was a bit slower than mental ray's FG in my tests, but the result on animation flickered much less than FG.

i-d
09-25-2007, 10:21 PM
Some indirect lighting and two portal lights - not something that could not be done with area lights (maybe faster), but with portals it's there.

[attached render]

Puppet|
09-26-2007, 08:55 AM
Master ZAP, what is the difference between the new FG mode and a path tracing solution?

MasterZap
09-26-2007, 08:59 AM
Master ZAP, what is the difference between the new FG mode and a path tracing solution?

The "force" mode? Not much, really. Of course, it's only "path tracing" going "out" from the 1st generation FG point, not from the eye. (Of course, technically, multibounce FG has always been of a "path tracing" style, the real difference here is that the interpolation engine for the FG map (what most other people call an "irradiance cache", btw) is disabled with "force", so you are really brute-force looking up for every single shade point....)

BTW, if you are into the "path traced look", you can get a "best of both worlds" method with the new "AO with bleed" in the mia_material_x ... if you set the AO radius large, you get pretty much a "brute force QMC" solution for the small details (i.e. "first-generation locally path traced"), while further bounces are picked up from FG.
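For the shader-writers following along, here is a conceptual toy sketch of that idea - not mental ray's actual code: gather short rays locally, take the bleed color on a hit, and fall back to the cached FG result past the AO radius. `trace_nearest` is a hypothetical stand-in for a real ray tracer:

```python
# Conceptual toy sketch only -- NOT mental ray's implementation.
import math, random

def sample_hemisphere():
    """Random direction in the +Z hemisphere (toy version)."""
    u, v = random.random(), random.random()
    phi, z = 2.0 * math.pi * u, v
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def ao_with_bleed(point, fg_irradiance, ao_radius, rays, trace_nearest):
    """trace_nearest(origin, direction, max_dist) stands in for a real
    ray tracer; it returns an object with a .color, or None on a miss."""
    acc = [0.0, 0.0, 0.0]
    for _ in range(rays):
        hit = trace_nearest(point, sample_hemisphere(), ao_radius)
        src = hit.color if hit else fg_irradiance  # bleed vs. cached FG
        for i in range(3):
            acc[i] += src[i]
    return tuple(c / rays for c in acc)
```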

/Z

Puppet|
09-26-2007, 09:42 AM
Thanks.
I already have a 'path tracing' solution with multibounce etc. in my shader. I was just interested in the advantage of 'force' in comparison with 'path'.

MasterZap
09-26-2007, 11:52 AM
Oh... of course. I didn't even read who I was replying to, oops. ;)

Hi Puppet. ;)

/Z

dagon1978
09-26-2007, 01:27 PM
Oh... of course. I didn't even read who I was replying to, oops. ;)

Hi Puppet. ;)

/Z

hey zap! is it possible to implement the AO+bleed in custom shaders? I mean... an easy way for developers (like Puppet)? This is probably the best 3.6 improvement for viz work ;)

MasterZap
09-26-2007, 01:33 PM
Well, AFAIK Puppet already did (I guess he beat me to it ;) )

Actually in an ideal world this would have been a core feature... but I'm the "shader guy", so it didn't make it into the core in this version. It may, some day.... if so, consider the mia_material_x implementation a testbed....

For use in other shaders... well... you can always use the diffuse component of mia together with any other shader you want. Or use only the diffuse component of mia with NO LIGHTS assigned, to use it as an "indirect light only" shader.

/Z

dagon1978
09-26-2007, 02:08 PM
Actually in an ideal world this would have been a core feature... but I'm the "shader guy", so it didn't make it into the core in this version. It may, some day.... if so, consider the mia_material_x implementation a testbed....


that would be great ;) but please, keep it on a per-shader basis (I really like to choose which surfaces I'll add detail to, instead of a general option as in vray)



For use in other shaders... well... you can always use the diffuse component of mia together with any other shader you want. Or use only the diffuse component of mia with NO LIGHTS assigned, to use it as an "indirect light only" shader.

/Z

thanx for the tips :thumbsup:

ciau

mat

Matadŏr
09-26-2007, 02:53 PM
Nice test i-d :thumbsup:

I've been testing the mia_portal_light node and I can't see many improvements over the use of regular MR area lights, at least in render times.
The look of the renders using the two types can be matched and the times are really close... I will keep testing and post some images.
I mainly work in arch viz, and I'm really trying to see some advantage in using this new feature. It promises a lot, but I haven't been able to materialise those promises so far ;).

greets

MasterZap
09-26-2007, 05:17 PM
Nice test i-d :thumbsup:

I've been testing the mia_portal_light node and I can't see many improvements over the use of regular MR area lights, at least in render times.


The portal is a mental ray area light, and the new maya "area light" modes contain optimizations similar to those the portal lights use.


The look of the renders using the two types can be matched and the times are really close... I will keep testing and post some images.


This is quite true, although the portal does use a very specific sampling technique to minimize noise and do proper intensity attenuation based on a "subtended solid angle" of the opening, i.e. the "amount of sky visible from the shading point", rather than some simplified "distance squared" or similar falloff.

The main *point* of the portal is that it automatically gets its color and intensity from the environment map (generally, the mr sky), which automatically gives you the correct light, rather than "something I eyeballed with an area light".

Also, it blocks final gather rays, so you do not get an unbalanced final gather solution because some rays see the very bright sky and some rays only see the comparatively dim interior light... now FG "sees" only the interior, letting you use lower FG settings for equal or better quality.
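A toy numeric illustration of that "subtended solid angle" attenuation - not mental ray's sampler. The on-axis solid angle of a disk opening of radius r at distance d is 2*pi*(1 - d/sqrt(d^2 + r^2)), which stays bounded near the opening where a naive 1/d^2 falloff blows up:

```python
import math

def disk_solid_angle(r, d):
    # on-axis solid angle subtended by a disk of radius r at distance d
    return 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))

r = 1.0  # "window" radius
for d in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"d={d}: solid angle {disk_solid_angle(r, d):.3f} sr, "
          f"naive 1/d^2 = {1.0 / (d * d):.2f}")
```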


I mainly work in arch viz, and I'm really trying to see some advantage in using this new feature. It promises a lot, but I haven't been able to materialise those promises so far ;).


I think some people have overstated its "promise"; it does a specific job, that is: "replace light coming in through an opening - which otherwise would have been handled via FG (often with suboptimal quality or long render times) - with direct (area) light of the same intensity and color, which yields nicer shadows more easily".

That's the only actual promise.. ;)

/Z

dagon1978
09-26-2007, 05:38 PM
and there is also the photon advantage ;)

KidderD
09-26-2007, 05:57 PM
New tutorial on the IPR, portal light and tone mapper. Bottom of the page.

http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=9974388

iosifkefalas
09-26-2007, 06:09 PM
Oh thank you for this. I was waiting for those videos to update. Downloading now...

MasterZap
09-26-2007, 06:13 PM
and there is also the photon advantage ;)

True; I forgot: it gives you photons from the sky!

/Z

iosifkefalas
09-26-2007, 06:41 PM
Hey KidderD and guys. I know this is a mental ray thread; I just wanted to add that Autodesk also updated the first modeling workflow video.

http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=9974388

KidderD
09-26-2007, 07:17 PM
Thanks Sifis,

Yeah, I actually posted that here:

http://forums.cgsociety.org/showthread.php?f=7&t=544045

Just want to comment though: that reduction tool is just slick. And can anyone mention the specs for the machine they use in the rendering? It seems like pretty quick IPR to me, but if they are on some multi quad-core machine or something, well, maybe not so great.

dagon1978
09-26-2007, 10:05 PM
And can anyone mention the specs for the machine they use in the rendering? It seems like pretty quick IPR to me, but if they are on some multi quad-core machine or something, well, maybe not so great.

The IPR is working much better now (also with FG, finally!) but it's far behind FPrime (or Rendition); we need proper progressive refinement, please Alias! :scream:

Matadŏr
09-27-2007, 07:42 AM
Thanks, MasterZap, for all the help and the valuable "inside scoop" you provide to everyone in these forums.
I have to say that the new features in MR are great, and so far they are working with zero problems :)

The portal is a mental ray area light, and the new maya "area light" modes contain optimizations similar to those the portal lights use.

I didn't know that MR area lights were optimized; in my tests I noticed that they seemed a little faster.

This is quite true, although the portal does use a very specific sampling technique to minimize noise and do proper intensity attenuation based on a "subtended solid angle" of the opening, i.e. the "amount of sky visible from the shading point", rather than some simplified "distance squared" or similar falloff.

The main *point* of the portal is that it automatically gets its color and intensity from the environment map (generally, the mr sky), which automatically gives you the correct light, rather than "something I eyeballed with an area light".

Also, it blocks final gather rays, so you do not get an unbalanced final gather solution because some rays see the very bright sky and some rays only see the comparatively dim interior light... now FG "sees" only the interior, letting you use lower FG settings for equal or better quality.

Now all this portal light stuff is a lot clearer to me (yes, a little slow on the thinking :rolleyes:). Thanks for that "real world" explanation.

I think some people have overstated its "promise"; it does a specific job, that is: "replace light coming in through an opening - which otherwise would have been handled via FG (often with suboptimal quality or long render times) - with direct (area) light of the same intensity and color, which yields nicer shadows more easily".

That's the only actual promise.. ;)

/Z

I admit that when I first saw that video in the max 2008 features demo, I immediately thought of it as a life-saving, amazing thing that would ease the making of interior images in MR to the point that I could return to my mountain biking and finish that pile of books that has been waiting for years. My fault for being lazy :D

And that photon casting feature that dagon1978 pointed out really is a great thing; I tested with photons and it really helps.

Many thanks again for your help everyone
Zé Pedro

cpan
09-27-2007, 08:10 AM
Portal lights, color bleed AO "FG enhancer", photographic tonemapper, bokeh lens, light_surface, and an overall easier-to-use mia_material all make mentalray a much better archviz solution :)

What's missing is a material preset database for mia_material, eh :D
An online one with a direct interface into maya (download/update/create from inside maya... err, another reason to bring back that webBrowser command :>) would be awesome! A cross-platform format, as Zap already suggested, would be even nicer :)


Now bring those importons in 3.7 :p

MasterZap
09-27-2007, 08:16 AM
I admit that when I first saw that video in the max 2008 features demo, I immediately thought of it as a life-saving, amazing thing that would ease the making of interior images in MR to the point that I could return to my mountain biking and finish that pile of books that has been waiting for years. My fault for being lazy :D


Well, as a matter of fact... it really is almost "that easy". Some people have called this "the closest thing to a 'make pretty' button yet".

The point is - sure - you can "fake this with your own area light", but then you have to figure out the right intensity, the right color, etc. (and deal with some sampling noise issues), and deal with photons.

The portals make this a no-brainer thing. Just put it in and hit render and go out and mountain-bike! ;)


And that photon casting feature that dagon1978 pointed out really is a great thing; I tested with photons and it really helps.


Yes indeed; the trouble was that before, if you wanted bounced light from the sky, you were locked into using multibounce final gather, because any time you turned on photons, the sky was reduced to a single bounce (photons turn off multi-bounce final gather, but the physical sky doesn't actually generate any photons - it's just "an environment").

The portals can actually "turn" an environment into "photons and light".

Also note that while it works best with a "smooth" environment like the sky, it actually works with any environment (it works less well if the environment map contains a lot of high-frequency detail, but you can get around that by wrapping your environment in mia_envblur).

/Z

MasterZap
09-27-2007, 08:21 AM
Portal lights, color bleed AO "FG enhancer", photographic tonemapper, bokeh lens, light_surface, and an overall easier-to-use mia_material all make mentalray a much better archviz solution :)


Thanks, it's appreciated. And we try, we try...


What's missing is a material preset database for mia_material, eh :D
An online one with a direct interface into maya (download/update/create from inside maya... err, another reason to bring back that webBrowser command :>) would be awesome! A cross-platform format, as Zap already suggested, would be even nicer :)


Yes, I suggested this multiple times... someone with a lot of free time (which means... NOT ME ;) ) could make some MEL script which outputs simple mia_material parameter sets (plus any attached bitmaps) into some... I dunno... XML format.

Then some other equally clever person writes a similar maxscript one. And a third clever person does it for XSI. And someone makes an AutoLISP port... and....

All you really need to support is the mia_material parameters, and the ability to put a plain old bitmap on them (or the most "important" ones) and you are pretty much "done".

/Z

iosifkefalas
09-27-2007, 08:29 AM
Thanks, it's appreciated. And we try, we try...

All you really need to support is the mia_material parameters, and the ability to put a plain old bitmap on them (or the most "important" ones) and you are pretty much "done".

/Z

Or better, an XPM icon instead of BMP. Please don't forget the Mac Maya users..

dagon1978
09-27-2007, 08:36 AM
now bring those importons in 3.7 :p

or something better... :D

MasterZap
09-27-2007, 08:39 AM
Or better, an XPM icon instead of BMP. Please don't forget the Mac Maya users..

When I said "bitmap" I meant any bitmap texture, .jpg, .tif, whatever. The idea was that in an XML (or whatever) format you'd simply store info like:

diffuse_color = molly.jpg
reflection_color = 1 1 1

....that's all I meant. So it's either a value, or a simple reference to a file. No magical settings or UV mappings - all that is "lost" - just the simple info "this texture goes there".
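A sketch of what such a minimal preset file and exporter could look like - the schema and parameter names are purely illustrative, not an existing format:

```python
# Illustrative sketch only -- the schema and the parameter names are
# made up for the example, not an existing preset format.
import xml.etree.ElementTree as ET

preset = {
    "diffuse_color": "molly.jpg",  # a plain file reference...
    "reflection_color": "1 1 1",   # ...or a literal value
}

root = ET.Element("mia_material_preset", name="example")
for param, value in preset.items():
    ET.SubElement(root, "param", name=param).text = value
ET.ElementTree(root).write("example_preset.xml")
```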

/Z

Matadŏr
09-27-2007, 10:14 AM
I just tried that mia_material shadow shading trick that I read on Zap's mental ray tips (http://mentalraytips.blogspot.com/) in a scene where I was having problems with render times going up (and the client on my back)... and I'm truly surprised to see the render time go down from 15'32'' to only 5'12''!!
All the settings are the same and the image is virtually the same (only a shadow on some small object changed slightly). Great time saver!

Just one more big thanks to the great MasterZap :thumbsup:

Zé Pedro

doody
09-27-2007, 02:43 PM
Is there still a gamma problem with mr, or can you just render like with other renderers yet?

MasterZap
09-28-2007, 06:25 AM
Is there still a gamma problem with mr, or can you just render like with other renderers yet?

There isn't, and never has been, a "gamma problem" with mental ray.

Gamma is something somebody in the imaging pipeline has to take care of if you want to render anything remotely resembling physically correct and display it on a standard computer monitor (using standard run-of-the-mill files like .jpgs as texture input).

If some "other renderer" allows you to "just render", one of three things is happening:

a) The renderer hides away all the gamma complexity without even telling you (e.g. Maxwell)

b) You are savvy enough to handle gamma in your imaging pipeline yourself, and have everything already set up, so you "just render"

c) You are getting an inherently non-physical, useless, non-photo-real result, because you are rendering with no regards to gamma whatsoever at any stage, and you are "just rendering" in ignorant bliss of the errors in your results.


The trick is that c) is very common, and very deceptive. It may "seem" to work because the trivial transfer function of textures "seems" correct, as in "my texture looks like it did when I painted it in photoshop in my final render".

The uninitiated may then think that c) is "not a problem" and "just render". People have been doing exactly that for way over 20 years.

The thing is that doing c) is great for "Toy Story I" style renders, with no regard to any form of physical accuracy. It's been - and unfortunately continues to be - the default setting in many popular 3D packages.

If you want to render physically, you must make sure that the color space "inside" your render is true scene-referred linear light (i.e. values in the renderer are directly proportional to cd/m^2 measurements in the real world).

And something, somewhere in the pipeline must make sure that whatever you put in (say, texture files, color values) and stuff you get out (image files) are handled properly.

Since most computer screens have a gamma of 2.2-ish (actually they are sRGB, which is close), this means that a good 99.9% of the textures you put in, which you simply "painted in photoshop", come with an embedded gamma of 2.2. If you do not linearize those images on input, you are putting the wrong thing in.

And when you want to view your render, if you do not display it in some way that takes the gamma of the display device (screen) into account somewhere en route from the true scene-referred linear values to the values stored in the video card's framebuffer that drives the actual pixels, you are displaying the wrong thing.

The error made by most people who "just render" (those who do "c" above) is that those two problems cancel each other out for textures.

They see "my texture goes in, and then comes out similar" and are happy. But since you are putting in something wrong, and then again viewing it wrong (in the opposite way), the light math in the middle - which in itself is correct - ends up producing wrong results because you view them wrong.


The big problem in getting people to render correctly w.r.t. gamma isn't getting them to add the gamma at the end. That's easy. You can easily show people "look, now it looks more realistic" with some simple models and demonstrations. There are multiple ways to make sure the gamma gets applied (lens shaders, setting framebuffer gamma in the viewer, rendering to linear .exr and using a gamma-aware viewing tool, etc. etc. etc.)

No, the obstacle is when people put some color in and do not understand what color space they are using. "I made my sphere 255,64,64 but it comes out all pink", or "my texture is washed out", are common complaints from these people.

The reason for their complaint is that "255,64,64" are pixel values, which have the gamma embedded. The value needs to be linearized to be a "correct input" to the renderer (any renderer).

Similarly, most textures are painted on (or at least adjusted to be viewed on) a computer screen, which means they have the gamma embedded too. Unless this is linearized, you are putting the wrong thing in.

So of course you get the wrong thing out.
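To make the two conversions concrete - a minimal sketch, approximating the display with a plain 2.2 power curve (real sRGB differs slightly):

```python
# Minimal sketch of the two conversions, assuming a plain 2.2 curve.
GAMMA = 2.2

def to_linear(pixel_0_255):
    """A swatch/texture value like 64 -> scene-linear reflectance."""
    return (pixel_0_255 / 255.0) ** GAMMA

def to_display(linear):
    """Scene-linear render result -> display-encoded value."""
    return linear ** (1.0 / GAMMA)

# The "pink sphere" complaint: 64 out of 255 is NOT 25% linear light...
print(to_linear(64))              # ~0.047 -- the correct renderer input
print(to_display(0.047) * 255.0)  # ~64    -- and it round-trips back
```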
Allow me an analogy.


Imagine you have a glass half-full of water, and you have a machine which is supposed to fill the glass with oil, such that the amount of water and oil is the same. I.e. the glass is supposed to contain double the volume when it comes out, half water, half oil.

However, the machine is built simply, on the assumption that the operator knows what he is doing. Rather than measuring actual volumes, the machine senses the level of water, and fills in oil until the level is twice as high. Out comes a glass with a nice layer of water, and on top of this a nice layer of oil (oil floats on water).

Now, imagine the water is colored brightly red, so it can be easily seen in relation to the oil. Anyone who glances quickly at this machine will see the level of water going in, and the level of water and oil coming out.


The person who renders with no regard to gamma at all in the pipeline is like the person who puts in glasses filled with some amount of water. Only the glass isn't straight... it's slightly V-shaped, say at an angle of 30 degrees, wider at the top than the bottom.

The glasses go into the machine, the machine does its thing, and similar glasses come out, with more fluid in them; the level is indeed twice as high!

If you put an "incoming" glass next to an "outgoing" glass, you can easily see that the level of water is the same, and the level of oil is twice as high! Fantastic; it's working. Or is it?

The thing is that, due to the shape of the glass, the volume of oil is actually much larger than the volume of water. The machine did its job properly - raised the level to double its height - but the user put the wrong thing in (a V-shaped glass).

The c) style user, though, is happy. He only cares that the levels "look" the same coming out as they were going in. This is because "seeing" height is much easier than "seeing" volume.



The person with a correct gamma workflow is like a person who fills the glass to some level with the red water, only his glass is straight, perfectly cylindrical. The glass goes into the machine, and it comes out again with twice the volume in it.

Again the incoming glasses set side by side with the outgoing glasses have the same level of water, and the same level of oil, but now the volumes are correct.

We both preserved our input (the red level) and got the physically correct output (twice the volume).



Now comes Bobby. Bobby doesn't really know he is working with V-shaped glasses. However, someone told him to put straight glasses on the output side of the machine. He has been told that straight glasses are good, but he's not sure why. And he isn't even aware that his incoming glasses aren't straight.

So he starts the machine. He puts in a V-shaped glass with some amount of red in it... the machine takes it, puts the fluid in a straight glass, then doubles the level, and outputs it.

Bobby is perplexed. What happened? I put in 2 inches of water, and out comes only 1? This machine is broken? To heck with it! This "straight glasses" workflow sucks. He immediately throws away his stack of straight output glasses and goes back to the V-shape....


The fact that the V-shaped glasses have been manufactured for over 20 years, and nobody ever actually cared much about the shape of the glass (because it didn't really matter until someone started to actually care about the accuracy of the outgoing volume of oil), doesn't make things easier. Nor does the fact that every machine manufacturer just assumed it was obvious that you should put in straight glasses, because, well, it's obvious..... and so on....



The unfortunate reality is that most people who attempt a gamma correct workflow end up like poor Bobby. They enable some display gamma, quickly conclude "meh, everything went washed out", disable it again, and continue to render incorrectly like c).


It's all education, and I'm trying.... a user at a time....


/Z

Buexe
09-28-2007, 08:39 AM
Thanks for this explanation. My belief is still that good interface design should be aware of this problem and offer an option to generally de-gamma stuff, so that the "correct" stuff gets fed into the machine. It wouldn't be hard to add a small checkbox to the render settings window to de-gamma all color info fed in via files and/or procedural textures, so that the renderer can take this info and do the de-gamma itself. There are many 3D artists I know who don't hang around in forums like this one and thus lack this valuable information so kindly provided here, and if it is not in the docs ...

ctrl.studio
09-28-2007, 09:47 AM
it's there already if you use the built-in gamma option in an 8-bit workflow. implementing a custom lens shader that fills in a global gamma value which single texture nodes can then refer to is not a big deal either.
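on the maya side, the stock gammaCorrect utility node can do the per-texture de-gamma too. a sketch - node and attribute names are from memory, so verify them in your maya version; gamma = 1/2.2 (~0.4545) linearizes a 2.2-encoded texture because the node raises its input to 1/gamma:

```python
# Sketch: wire a stock gammaCorrect utility in front of a file texture
# to linearize it. Node/attribute names as I remember them from Maya
# 2008 -- verify in your version before relying on this.
import maya.cmds as cmds

def degamma_file_texture(file_node, material_attr, gamma=1.0 / 2.2):
    gc = cmds.shadingNode('gammaCorrect', asUtility=True)
    cmds.setAttr(gc + '.gamma', gamma, gamma, gamma, type='double3')
    cmds.connectAttr(file_node + '.outColor', gc + '.value', force=True)
    cmds.connectAttr(gc + '.outValue', material_attr, force=True)
    return gc

# e.g. degamma_file_texture('file1', 'mia_material1.diffuse')
```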

max

Buexe
09-28-2007, 09:54 AM
it's there already if you use the built-in gamma option in an 8-bit workflow. implementing a custom lens shader that fills in a global gamma value which single texture nodes can then refer to is not a big deal either.

max

Maybe not for you, but there are people like me who don't have this advanced knowledge. Try telling a student that he has to pull a lens-shader trick so that his file textures are de-gammaed, when he's happy just to have gotten a file texture onto his shader, haha! And if it is so easy, why have there been such long discussions about this topic on this board again and again?

MasterZap
09-28-2007, 10:14 AM
Thanks for this explanation. My belief is still that good interface design should be aware of this problem and offer an option to generally de-gamma stuff,

I agree, and I'm working for that every day.

The reality is that all the "interfaces" were created long before someone even thunk the thought "hey, before we do this stuff, perhaps we should consider whether this is actually accurate relative to the brightness of screen pixels??"

Basically, nobody cared to "get it right" before because computer graphics was such voodoo-trickery that the slight nonlinearity of the display was the least of anyone's worries. (The irony, of course, is that so much work was then put into workarounds for this over the years, which wouldn't have been necessary had it been done right from day one.)

Basically: yes, it's easy to "fix". Getting 20+ years of old habits out of people, and getting them to accept "now things work differently, that is, correctly", is the problem.

Also, if one tool "fixes" it, it then works differently from others. Since people think what "the others do" is the norm, they perceive the "fixed" tool as broken.

We are really in a slow, painful paradigm shift. The advent of floating-point files has finally got us to the point where we are taking a look at the accuracy of our colors and going "oh crap".

It took the Orphanage boys writing eLin to get Adobe to wake up to float compositing!

/Z

Puppet|
09-28-2007, 10:19 AM
There is no problem with gamma in mental ray; it all works fine.
But really, the Gamma option in the render settings is the better one to use.

If you understand how 'gamma' correction works and why it is used, you'll never have questions.
The problem is that beginners use the 'One Button Sky' and really don't know what is happening (what shaders are created and what they do).

jupiterjazz
09-28-2007, 10:48 AM
I'm going to add my two cents here.

In case textures are involved, note that mental ray applies gamma correction to all output images that are *not* floating-point, _and_ applies *reverse gamma* to all non-fp input textures.

AFAIK there is (at least in 8.5) currently no way to block the reverse gamma of non-fp input textures from within mrfM, except by disabling gamma correction in mental ray altogether.

p

Buexe
09-28-2007, 10:51 AM
There is no problem with gamma in mental ray; it all works fine.
But really, the Gamma option in the render settings is the better one to use.

If you understand how 'gamma' correction works and why it is used, you'll never have questions.
The problem is that beginners use the 'One Button Sky' and really don't know what is happening (what shaders are created and what they do).

The gamma thing is just one example of the many technical aspects one has to learn about to create nice images in mr. Too many, if you ask me, and once you do have it running, the render times put it out of the question. It is always easy to blame the users' lack of knowledge. I switched to another renderer and in two weeks was able to create realistic (in my sense and context) images in realistic render times that I was not getting out of mr in two years. I read books (yes, buying them is not enough : ) ), watched DVDs, read forums, talked to people who certainly have advanced knowledge of mr, but still the whole render process was always frustrating, especially rendering animations. Well, I'm glad I can now focus on content creation and go to bed early : )

MasterZap
09-28-2007, 10:53 AM
I'm going to add my two cents here.

In case textures are involved, note that mental ray applies gamma correction to all output images that are *not* floating-point, _and_ applies *reverse gamma* to all non-fp input textures.


...which is exactly what you want a good 99% of the time - except for bump maps and displacement. So while this is a good try at "automatic under the hood", it's not 100% automatic under the hood.

The problem is that the color swatches in Maya aren't gamma corrected, so the colors you "see" aren't correct.

Also, other apps (like max) don't use the "mental ray" gamma, but add their own layer of gamma management on top, which makes this much more difficult to describe "easily". (But at least max can gamma-correct the color swatches ;) )

/Z

Buexe
09-28-2007, 10:59 AM
The presence of mr brain power here is intimidating : )
I didn't want to hijack this thread, I'm out. And the whole gamma thing made me smarter, so thanks guys!

MasterZap
09-28-2007, 11:09 AM

The gamma thing is just one example of the many technical aspects one has to learn about to create nice images in mr.

I can't stress enough how this is in no way, shape, or form specific to mr.

It may appear to be because I am passionate about getting people to do this right. "The others" may not care, for all I know.

/Z

Buexe
09-28-2007, 11:16 AM
I can't stress enough how this is in no way, shape, or form specific to mr.

It may appear to be because I am passionate about getting people to do this right. "The others" may not care, for all I know.

/Z

And I know and understand that, and your efforts are appreciated by me and probably by many others, too! So zap on, Zap! : )

jupiterjazz
09-28-2007, 11:17 AM
...which is exactly what you want a good 99% of the time - except for bump maps and displacement. So while this is a good try at "automatic under the hood", it's not 100% automatic under the hood.


Well, that's quite an "except"... (especially for bump, since it is used a lot to overcome ray discussable performance when displacing geometry).

And in any case, sometimes 99% is not good enough: I had a client who was stuck right in that 1% category.

p

ctrl.studio
09-28-2007, 11:48 AM
The reality is that all the "interfaces" were created long before...

...even interface designers or product specialists were there. that's the point I was trying to spell out in my previous post. if things are so easy to implement, why aren't they? I think because the 'culture' for it just wasn't there. now that the motivations are coming up, a new culture is coming up too, and new designers are coming up to fill the gap. but.. old ways must fall. it's a kind of revolt, and revolts always start from the people. to get some grip on the actual state of things there must, imho, be an (r)evolution: end-users and developers must work together in a new spirit. if you want, end-users must also be a little bit developers, and developers must finally also be end-users. you see, mray is not only a physical virtual system. it's also a real-world system that brings people and ideas together. it evolves with us as we're evolving.

max

MasterZap
09-28-2007, 12:27 PM
And in any case sometimes 99% is not good enough: I had a client which was stuck right in that 1% category.


So you prefer the reverse situation, where 99% is WRONG instead?!?!? :rolleyes:

/Z

jupiterjazz
09-28-2007, 12:42 PM
So you prefer the reverse situation, where 99% is WRONG instead?!?!? :rolleyes:

/Z

No, I prefer when it's 100% wrong, so I can be sure the opposite works like a charm.

Just kidding :)

I prefer when there is no need to find out where that 1% wrong tiny bit is, for almost every feature - or at least when the docs are good enough to make it clear per se ;)

But as Max is wisely (and Darwinistically) saying, the situation is evolving.

p

thematt
09-28-2007, 12:43 PM
Mr Zap, thanks for the explanation, seriously, but I've read about all these gamma things and tone mapping and still can't get the hang of it.
What exactly is going on?
Example: I create an object, texture it in Photoshop, apply the shader and all the stuff I want to it in Maya, render in MR, adjust lights etc. until it looks right, then I export the layers I need, put them in comp and adjust to my heart's content until I'm satisfied or the client is satisfied. Done..
So exactly where was I wrong? Where is this gamma thing taking over - once after the texture, once at rendering? Probably it is the interface or something, but until now I was definitely not aware of all this stuff, and yes, I was a "c" type of person. I'd like to change, but I'd sure need some clear explanation with pictures to understand what's going on.. can I find that somewhere?

Anyway, I will reread all that; thanks again for the explanation.

cheers

ctrl.studio
09-28-2007, 01:48 PM
But as Max is wisely (and Darwinistically) saying, the situation is evolving.

I don't consider evolution a natural progression. I think people make all the effort and take the risk to change their minds. it's not individualism either, as there's a 'system' we can refer to. what I mean is that it's the system that brings us, first, the problems and then their solutions.

Paolo, I'll be in Venice (well, at the Lido actually) for the next couple of months. Are you also there? :rolleyes:

max

jupiterjazz
09-28-2007, 02:00 PM
I don't consider evolution a natural progression. I think people make all the effort and take the risk to change their minds. it's not individualism either, as there's a 'system' we can refer to. what I mean is that it's the system that brings us, first, the problems and then their solutions.


Personally I think the natural process is "resistance to evolution" until the system breaks and the paradigm shift happens.


Paolo, I'm in Venice (well at the Lido actually) in the next couple of months. Are you also there ? :rolleyes:
max

Coolness.
Yes, I will stay in Italy for another week, very near Venice, so let's meet. I'm gonna contact you now. ;)

p

doody
09-28-2007, 02:09 PM
how about mr just puts a gamma button on its renderer and gets all this explaining over with? why do we have to work so hard to make a render look correct when other renderers look correct from the get-go.. and even if you guys are right that the other renderers' gamma is wrong.....well, i don't hear any complaints from clients or other artists about faded colors or too-bright or shiny materials from those other renderers...i do thank you for all the info though...

MasterZap
09-28-2007, 02:34 PM
how about mr just puts a gamma button on its renderer and gets all this explaining over with? why do we have to work so hard to make a render look correct when other renderers look correct from the get-go..

You keep missing the fact that this has nothing to do with mental ray. I dunno if I have to tattoo this on my forehead or something ;)

It has to do with any renderer, and if either you, your renderer or your pipeline isn't inherently handling this.... it's wrong.

and even if you guys are right that the other renderers' gamma is wrong.....well, i don't hear any complaints from clients or other artists about faded colors or too-bright or shiny materials from those other renderers...

Yes, yes you do. You hear all these complaints about lights acting unnaturally, how you have to add bounce lights, how you have to do all sorts of things to get light to look natural, and yet it still somehow "looks CG".

Ever hear anyone complain about how everything near a light completely blows out if you use a physical distance-squared falloff? I bet you have.

The real problem is that everyone is already used to all the "old tricks"... tricks that exists solely to work around this very problem.

The reason you hear these complaints about "faded colour" is from people who are "Bobby" in my earlier example... they added the gamma at the end, but nowhere else. They "expect" to just be able to plug their V-shaped glasses into the machine with no concern, and "complain" that their water level is "wrong" going out (referring back to my little analogy).

Maybe your client can't vocalize what is wrong in your render; meanwhile you spend so much time in comp with tricks and "transfer modes" of various perverted kinds to get it to "look right" - none of which would have been necessary had it been done right in the first place.

If you ever have to go to comp and use the "Screen" transfer mode for anything, that is simply a big mark underlining your failure to output the correct thing in the first place. If you ever comp layers in anything but add or multiply, you are compensating for doing stuff wrong earlier in the pipe.

A good render doesn't even need any comping tweaks. Yes, all the layers should be there so you can, but if you are rendering to "fix it in post", your whole rendering mentality is fallacious from frame #1. But don't worry - you are not alone; most people are like that.
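A toy numeric illustration of why add is the honest operator, assuming scene-linear passes and a 2.2 display curve:

```python
# Light is additive, so beauty = diffuse + specular holds for linear
# passes. Adding the *encoded* passes instead overshoots -- which is
# exactly what pushes people toward "screen" as a workaround.
GAMMA = 2.2
encode = lambda x: x ** (1.0 / GAMMA)

diffuse, specular = 0.18, 0.07              # linear radiance contributions
beauty = encode(diffuse + specular)         # correct: add, then encode
wrong = encode(diffuse) + encode(specular)  # encode, then add
print(beauty, wrong)                        # ~0.53 vs ~0.76 (too bright)
```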

And what caused this in the first place? Doing thing in the wrong gamma.


Ever read a tip about separating the diffuse and specular into passes, and comping them with "screen"? COMPENSATION FOR DOING IT WRONG.

Ever have someone tell you to apply curves to a reflection layer and then comp it in screen? COMPENSATION FOR DOING IT WRONG.

Ever have someone tell you to "screen your render over itself"? CONGRATULATIONS, you just compensated for doing it wrong. (Actually, "screen over yourself" evaluates mathematically to roughly applying a gamma of 2.0 ;) )
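For the curious: "screen" is s(a, b) = 1 - (1-a)(1-b), so screening an image over itself gives 1 - (1-x)^2 - a midtone lift in the same family as a gamma curve. A quick check of that identity:

```python
# screen(a, b) = 1 - (1-a)*(1-b), so screening an image over itself is
# 1 - (1-x)**2 -- compare with x ** 0.5, i.e. an exact gamma-2.0 lift.
for x in (0.1, 0.25, 0.5, 0.75):
    screened = 1.0 - (1.0 - x) ** 2
    print(f"x={x}: screen-over-self={screened:.3f}, gamma-2.0={x ** 0.5:.3f}")
```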



/Z

doody
09-28-2007, 02:42 PM
I'm sorry, but all these complaints you are talking about, and that I've heard, only circle around mr 95% of the time. I practically never hear of people using vray or maxwell or final render 2 complaining about light or materials or gamma, and I have never once complained about final render's lighting; it has looked perfect ever since I started using it. The only thing I like about mr is that it supports everything in maya.

MasterZap
09-28-2007, 02:52 PM
I'm sorry but all these complaints you are talking about and i've heard only circle around mr,

This is because it's too easy to end up in the "Bobby" situation if you don't know what you are doing.

Yes, I admit, when we introduced the sun&sky we also introduced the proper exposure (including gamma) so you could actually view these physical values in the correct way (so it actually looks like sun and sky and not a pink-and-blue gradient and a yellow light, like it looks viewed in gamma=1.0 which is what you get by default in most competing implementations).

This was then integrated in the apps with a nice "one click" feature. Nice. But the other end (incoming) wasn't handled one-clicky. Therein lies the problem. The mistake was thinking that people would already have a gamma-aware workflow. The reality was vastly different. People went "gamma.... huh?".

I practically never hear of people using vray

So the dozens of tutorials for "linear workflow" in vray don't exist. Mkay. ;)

or maxwell

maxwell has all this built in under the hood and doesn't even tell you. It de-gammas every texture and color you put in magically.

or final render 2 complaining about light or materials or gamma

They quite probably don't know what they should be seeing.

The error runs deep. People are so used to seeing the wrong thing done that they hardly react. (Most people react to seeing the right thing, thinking it's wrong, while it's measurably correct. Yes, it's a thankless task ;) )

and I have never once complained about final renders lighting, it looked perfect ever since i started using it

Well, then, don't use any gamma in mental ray either, if that makes you happier! Easy. Your problem is solved, you can continue to fix everything in post or use linear falloff lights, or whichever. Go ahead, nobody is stopping you.

I'm just trying to tell you how to do physically correct rendering (regardless of renderer, totally separate from mental ray). Don't mind me. ;)

/Z

doody
09-28-2007, 03:04 PM
i do mind you and like all your input, it's really good info, i'm just giving you mine. but the fact is that mr is ridiculous when it comes to gamma problems. you say all the other renderers have problems and tutorials; well, everyone i know moved from mr to other renderers because they have no texture or lighting problems with them. I don't know anyone who ever asked me "do you know of any tutorials for vray? my lighting looks horrible"... mr is the only renderer i know where, without a ridiculous amount of tweaking time that studios don't have, your render comes out looking bad, whether it's correct or not... it isn't worth the time to get the same quality as other renderers. there is a thread called "mr, vray like interiors", not "vray, mr like interiors".... they showed how to make mr look like vray....there is a reason for that.....and they almost did make mr look like vray, but at what cost? hours of wasted time to get the same quality as another, incorrect renderer, as you state.

djx
09-28-2007, 03:07 PM
MasterZap, I'm loving that you continue this conversation (and many others) with such passion. Where I work we are starting to move away from working solely in 8-bit color space for rendering and compositing, and your examples and insights have really helped me understand much more about this whole "linear/gamma" thing.

I'm someone who has had to change my cg lighting techniques and adjust to this new way of doing it - but so far I'm really happy with the results I am getting.
Keep it up.

-- David

MasterZap
09-28-2007, 03:08 PM
of another incorrect renderer as you state.

I never claimed vray was "incorrect". The vray situation is exactly the same as mental ray's; you need to handle this correctly in vray as well. And if you don't, your result is just as incorrect. Which is where all those "linear workflow" tutorials come in.

/Z

doody
09-28-2007, 03:19 PM
i don't have much experience with vray; i only know that the few times i did use it, i hit render and it came out looking perfect. i would love to get the same from mr. don't get me wrong, i'm not here to just bash mr, but it's ridiculous that mr didn't make any upgrades to make the workflow easier for people who don't have hours upon hours to spend tweaking. I know the saying "if it ain't broke, don't fix it"; well, i think this is something that isn't broke, just handled very poorly..... but then again i'm sure i'm not as knowledgeable in this area as you. that last statement isn't sarcastic even if it kinda sounds that way, damn internet... but i do listen to what everyone is complaining about, and it's true we all do just want a one-button push for mr too: quick, fast, and looks perfect.....why can't mental images make a one-button push that makes everything correct? it's gotta be possible, or we wouldn't be able to do it manually....

yann22
09-28-2007, 03:54 PM
"TRIX R 4 KIDS - renderman version"

Looking forward to it, the Mental Ray version was one of the most helpful and in-depth but understandable documents on rendering I've come across :thumbsup:.

inguatu
09-28-2007, 04:36 PM
i don't have much experience with vray; i only know that the few times i did use it, i hit render and it came out looking perfect. i would love to get the same from mr. don't get me wrong, i'm not here to just bash mr, but it's ridiculous that mr didn't make any upgrades to make the workflow easier for people who don't have hours upon hours to spend tweaking. I know the saying "if it ain't broke, don't fix it"; well, i think this is something that isn't broke, just handled very poorly..... but then again i'm sure i'm not as knowledgeable in this area as you. that last statement isn't sarcastic even if it kinda sounds that way, damn internet... but i do listen to what everyone is complaining about, and it's true we all do just want a one-button push for mr too: quick, fast, and looks perfect.....why can't mental images make a one-button push that makes everything correct? it's gotta be possible, or we wouldn't be able to do it manually....

strange.. I haven't really seen many complaints about this. *shrug*

doody
09-28-2007, 04:38 PM
you must be new to this then.

MasterZap
09-28-2007, 04:58 PM
i don't have much experience with vray; i only know that the few times i did use it, i hit render and it came out looking perfect. i would love to get the same from mr. don't get me wrong, i'm not here to just bash mr, but it's ridiculous that mr didn't make any upgrades to make the workflow easier for people who don't have hours upon hours to spend tweaking.

I dunno, it's like you haven't been using the last couple of versions of mr at all.... :rolleyes:

You are talking largely nonsense too, since the issues are identical for these other renderers. Probably, just nobody dares to tell people how to do it right for those. I dare tell people how to do it right for mr. So sue me. ;)

why can't mental images make a one-button push that makes everything correct?

We don't make the buttons - Autodesk do.


Btw, one reason you see "more complaints in mr" than for, say, vRay, is that vRay is predominantly run in max, which has tools for a proper gamma workflow (including the ability to gamma-correct color swatches and the material editor), whereas Maya doesn't.

Hardly mr's fault.


/Z

theotheo
09-28-2007, 05:09 PM
Hi Zap, I also just chimed in to read your posts. I do find them very useful, and you are bringing much needed light to the whole gamma issue in cg renders. Keep it up.

-theo

inguatu
09-28-2007, 05:09 PM
you must be new to this then.



not likely.. but hey.. thanks for assuming. carry on.

ACamacho
09-28-2007, 05:11 PM
doody, it seems like your frustration is misguided. Blame Autodesk for not implementing MR very well....not mental images. At least not in regards to this ;)

doody
09-28-2007, 05:11 PM
Well, I use finalRender, and there is no issue with gamma. I'm sure you're gonna tell me that's because it's incorrect; but it looks perfect... I guess I am at fault because with other renderers you push render and it comes out great, or maybe it's because I don't want to spend hours of tweaking to get the same results, or probably it's Maya's fault for not making correct buttons. Sounds to me like everything else is at fault, not MR... yet everything else works to people's satisfaction... I started a thread about this once, it's over 130 pages long now if I'm correct, and the same conclusion is what I kinda hear here: it's everyone else's or other programs' fault, not mr's...

doody
09-28-2007, 05:12 PM
Well, ACamacho, I used XSI also, and it uses MR - same problems. So XSI is the problem also?

ACamacho
09-28-2007, 05:42 PM
Well, I use finalRender, and there is no issue with gamma. I'm sure you're gonna tell me that's because it's incorrect; but it looks perfect... I guess I am at fault because with other renderers you push render and it comes out great, or maybe it's because I don't want to spend hours of tweaking to get the same results, or probably it's Maya's fault for not making correct buttons. Sounds to me like everything else is at fault, not MR... yet everything else works to people's satisfaction... I started a thread about this once, it's over 130 pages long now if I'm correct, and the same conclusion is what I kinda hear here: it's everyone else's or other programs' fault, not mr's...

Dude, if you think the output from finalRender or vray or renderman or blah blah blah is great and perfect, then great, more power to you. But Zap is simply stating that the workflow we have been using in the past is not the "physically" correct way to do it (in any program that doesn't account for linear workflow). Nothing more. I am frustrated too, having to hack in a linear workflow in Maya... but I fault Autodesk for not changing Maya to accommodate it. And I don't use XSI, but if you have to ungamma your colors and procedural textures there too, then yes, I do think it's wrong.

I am not sure why this is going as far as it is to be honest...

*edit: I am not saying MR is perfect by any stretch, there are many things that are frustrating me with the renderer, but this isn't one of them.*

doody
09-28-2007, 06:00 PM
I am not blaming Zap for mr or for Maya; his input is well needed and appreciated. It's just that blame can't be completely put on others or other programs. I mean, who makes the mr shaders? I haven't seen a gamma correction mr shader that will gamma correct the entire scene. I mean, there are tone mappers, but they do not fix the problems; after you have ungamma'd everything, then the tone mappers work great... It's up to the user. I prefer to have a render in minutes that a client thinks looks great, not a render in hours that a client sends back to me to fix the color, lighting or textures... just my personal experience. I guess I'll just have to wait for Maya or XSI to correct all their faults for mr so we can render easier and have better workflows like other renderers....

slipknot66
09-28-2007, 06:06 PM
I guess I am at fault because with other renderers you push render and it comes out great

That is not correct; people use Photoshop a lot to resolve the problems in post.
Also, there's nothing wrong with gamma in mental ray.

Saturn
09-28-2007, 06:12 PM
I think you should post a pic of what you think is perfect or not, to avoid any more confusion.

As for the gamma thing, this is a common problem in any pipeline, whatever renderer you use. Typically I am using renderman and MR almost every day and we have to take care of that. For film it's slightly more complicated because we watch our renders through look-up tables. Even in videogames the problem of gamma is addressed (eg: Crysis).

However, Renderman and MR are, unlike Maxwell, multi-purpose renderers. That means you can do photorealistic or non-photorealistic renders. If I remember right, there is an option in MR (and renderman, and even in Vray actually) that allows you to specify what your output gamma is and then ungamma your textures automatically. It has been present for ages but nobody uses it, so don't blame the software on this but rather the users (or the documentation maybe).
The gamma problem you pointed out isn't a MR problem but a user problem of not doing the right thing. As Zap mentioned, we are so used to seeing the wrong thing that we accepted it as the right thing.

Have a look at this:
http://www.xsi-blog.com/archives/133#comments
It's another way to explain the problem.

It took me a year to get rid of all the confusion and to accept that I had been doing the wrong thing for 5 years.

doody
09-28-2007, 06:46 PM
Where is this "ungamma automatically, even textures" button you are talking about? Do you mean under framebuffer? Please tell me, I would like to know and try it.

i-d
09-28-2007, 06:52 PM
Yep, rendering is not easy.

Here is a quick tip for the gamma bitching crowd.
People, look at your textures before complaining about them being washed out: the majority of textures, commercial or free, are not suitable for tonemapped rendering (they should be 16bit in the first place), but more importantly they lack contrast.
Your woods must be darker, more saturated and with lots of good contrast. Imagine you are painting an undercolor and the mia material adds some more on top of it.
I still use some gamma nodes on product textures, but for everything else only Photoshop. And that specific tone you need to push your texture to is best achieved in ps with a click of a button (edit).
With the new photographic exposure this works like a clock.

Hans-CC
09-29-2007, 12:58 AM
The gamma info from Zap has been very valuable for me!! And I don't think mr is that hard!!
So please don't make this thread another renderer fight, doody!!

Zap, your input is much appreciated by me!!

thanks

h.

rfer79
09-30-2007, 08:49 AM
I thought I understood gamma until quite recently. A CRT monitor physically produces a gamma of 2.5. A LUT inside of the monitor linearizes this by applying a gamma correction. However, our eyes like a little bit of gamma, so this correction is the inverse of 2.2 instead of 2.5. So our monitors (and video) work with an end-to-end gamma of 1.1 / 1.2.

I thought this was all I needed to know in the colour workflow... but apparently not! I couldn't understand Master Zap's explanation. (I'm sorry, I'm really slow with metaphors.) Do you think you could explain an actual workflow (I imagine there is more than one) from texture creation to compositing to deal with the gamma issue in Maya/Mental Ray? I would be particularly interested in a workflow that can be applied to a small design company with no access to a colour scientist.

THANK YOU!

Fus|on
10-01-2007, 12:40 AM
Bloody hell ZAP - you're much too compassionate with your info :bowdown:

Awesome stuff. Really opened my eyes to a few issues, and I was again amongst the many who think other renderers are easier than MR etc etc etc..... you are entirely correct and it flippin' makes so much sense.

Again, your mental ray myth faq on your blog is awesome as well...... when will you be updating it with some details for mr 3.6 :p sorry man, I know you're hell busy.

Guys, keep up the awesome posts and all the great info that's coming out. Please be careful not to take the next kazillion posts into vray vs mr etc etc.... unless of course it's necessary or information worthy....... remember, Zap and the other mr dudes make total sense out of most of the things they are saying, and they come with much more experience than most of us have........... sorry, don't mean to offend anyone.

sixbysixx
10-01-2007, 12:53 PM
I thought I understood gamma until quite recently. A CRT monitor physically produces a gamma of 2.5. A LUT inside of the monitor linearizes this by applying a gamma correction. However, our eyes like a little bit of gamma, so this correction is the inverse of 2.2 instead of 2.5. So our monitors (and video) work with an end-to-end gamma of 1.1 / 1.2.

I thought this was all I needed to know in the colour workflow... but apparently not! I couldn't understand Master Zap's explanation. (I'm sorry, I'm really slow with metaphors.) Do you think you could explain an actual workflow (I imagine there is more than one) from texture creation to compositing to deal with the gamma issue in Maya/Mental Ray? I would be particularly interested in a workflow that can be applied to a small design company with no access to a colour scientist.

THANK YOU!

Which workflow you prefer also depends on whether your renders are 32bit or not, but in my opinion the simplest one with the least confusion is setting the framebuffer gamma to 0.455.

Also check this thread: http://forums.cgsociety.org/showthread.php?f=87&t=543800

http://farmhousepost.com/weblinks/GammaWorkflow.jpg
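
In Maya terms, that global route is a one-liner of MEL (a sketch; it assumes mental ray is the current renderer, so the usual miDefaultFramebuffer node exists):

// 1/2.2 = 0.455; mental ray then gamma-corrects the frame buffer on output
setAttr miDefaultFramebuffer.gamma 0.455;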

MasterZap
10-02-2007, 08:46 AM
I thought I understood gamma until quite recently.

Well, you are not alone in misunderstanding the topic. It's tricky, it's hard, and it's filled with strange terminology that confuses people. For example, some people say "linear" when they mean the perceptually linear space of a computer monitor, i.e. that which has a gamma 2.2-ish relation to actual luminance values.

ILM even had to name this specifically in the OpenEXR spec as scene referred linear, which means the values in the file are directly linear references to real luminance values in the real world.

A CRT monitor physically produces a gamma of 2.5.

Close enough; a standard monitor has an sRGB curve. This is a linear segment with a gamma 2.4-ish curve on top. This all works out, in practice, to the same as a gamma 2.2 curve. (If you plot the difference between sRGB and a gamma 2.2 curve, it's nigh imperceptible.)

A LUT inside of the monitor linearizes this by applying a gamma correction.

NO! N O !

This is a very common misconception, but it is false. Any LUT you see, in your graphics card or elsewhere, has the job of calibrating the monitor to exactly sRGB, and nothing else.

Unfortunately, many people misunderstand these LUTs, and often they are poorly labeled. For example, most NVIDIA cards have a "gamma" control which normally says "1.0", leading people to believe this makes their graphics "magic gamma 1.0". But this is a transfer gamma, where "1.0 = unchanged". So it isn't compensating for anything in either direction when set to 1.0.

Some cards more correctly show the gamma as 2.2, but this then gives people the other wrong impression - that the card is "compensating" for the gamma of 2.2. This is also WRONG. What the card is doing is making sure your monitor adheres to sRGB, i.e. making sure it has the correct 2.2 gamma.

However, our eyes like a little bit of gamma, so this correction is the inverse of 2.2 instead of 2.5. So our monitors (and video) work with an end-to-end gamma of 1.1 / 1.2

This is technically true, but for this discussion beside the point. The idea of "rendering intent" (a special term unrelated to computer rendering) within video, which works out to "people want stuff a tad brighter than it really is", or an "end-to-end" gamma slightly above 1.0, is really a completely different issue to what we are talking about here. (I could elaborate if I had time, but just trust me that this is not relevant.)

BTW: Some people wonder "why not set the screen to really be 1.0 and be done with it"?

Well, if the frame buffer of your graphics card were 16 bits, or floating point, then yes, indeed it would work!

But for an 8-bit frame buffer (like 99% of them out there are, having 256 levels of red, green and blue at their disposal), an actual gamma=1.0 on the screen doesn't work, because our eyes resolve detail in the dark regions much more than in the bright regions. So we would see banding in the dark areas, because the 256 levels are not enough. Levels 0, 1, 2, 3 would be visibly different, whereas levels 251, 252, 253, 254 and 255 would look completely identical. We don't have enough resolution at the "bottom" and waste resolution at the "top".

This is exactly why sRGB is there. The sRGB curve matches our "perception", so each "step" in it looks "similar" to our eyes, and the banding is minimized, and the "waste of resolution" is minimized.

Reality, however (the real world we try to mimic in CG) has real world physical light values. If we want the pixel values to have a valid relation to the real world physical light values, we must take the fact that the screen is sRGB into account.

I thought this was all I needed to know in the colour workflow...

Many people do. Another common misconception is "I calibrated my monitor, so I don't need to care". This is WRONG.

When you calibrate your monitor, you calibrate it to be exactly sRGB, i.e. to have a gamma 2.2 curve. Then the idea is that all your software should adhere to the sRGB standard, i.e. assume you are viewing stuff on an sRGB monitor.

Most print places assume the .jpg's you send them from Photoshop, for example, are sRGB (or Adobe RGB, which is similar enough for our discussion), which has the gamma baked in. If they didn't assume this (and assumed pixel values were actual scene referred linear), the print would come out way too dark.

but apparently not!

Indeed. You have to assume your screen is sRGB, and any "8 bit" image (like .jpg's), which most software displays "directly" to the screen (w. no processing), comes with this sRGB baggage "baked in".

So you need to linearize such things coming in. And you need to make sure that the physically correct values in the renderer come out onto your sRGB screen with the fact that the screen is sRGB taken into account somewhere in the pipeline.

Exactly where in your imaging pipeline this happens is up to you. (Professional movie folks always render with an output gamma of 1 to floating point files, and keep everything in true scene-referred linear all the way, but any time they view such data on an sRGB screen, the viewing software makes sure to apply the gamma for them.)

I would be particularly interested in a workflow that can be applied to a small design company with no access to a colour scientist.

When I get infinite time, I will create a site about this. I already own the domain name. But for now, I haven't had time... yet.

(But perhaps I could just round up my latest posts on the subject and use that as site content ;) )

/Z
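
For reference, the sRGB curve being described is easy to write down. A minimal MEL sketch (the constants come from the sRGB spec, not from this thread):

proc float srgbToLinear(float $c)
{
    // linear segment near black, 2.4-exponent curve above it;
    // together they track a plain gamma 2.2 curve almost exactly
    if ($c <= 0.04045)
        return ($c / 12.92);
    return pow(($c + 0.055) / 1.055, 2.4);
}

proc float linearToSrgb(float $c)
{
    if ($c <= 0.0031308)
        return ($c * 12.92);
    return (1.055 * pow($c, 1.0 / 2.4) - 0.055);
}

// e.g. a mid-grey 0.5 in an 8-bit sRGB texture is only ~0.21 in linear light:
print (srgbToLinear(0.5));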

MasterZap
10-02-2007, 08:50 AM
Bloody hell ZAP - you're much too compassionate with your info :bowdown:

Thanks, man. I really try. Words like yours warm ye olde hearte... ;)

Again, your mental ray myth faq on your blog is awesome as well...... when will you be updating it with some details for mr 3.6 :p sorry man, I know you're hell busy.

Soon. Very. Soon.

/Z

MasterZap
10-02-2007, 08:55 AM
but in my opinion the simplest one with the least confusion is setting the framebuffer gamma to 0.455

In the applications which allow direct access to the "mental ray" internal gamma setting, I would tend to agree - to a point.

The "problem" is that I can't simply tell people to set the global gamma (1/2.2 = 0.4545) for a couple of reasons:

a) It doesn't fix your color swatches. Your color swatches will not take gamma into account, making you "surprised" that what looks like a "deep red" in your color swatch comes out "borderline pink" in the render.

b) Certain applications actually layer their own gamma handling on top. For example, 3ds max has probably the most mature gamma handling of the apps, but it is not using mental ray's internal gamma at all. Which means the rules are 100% those of 3ds max, not the "mental ray" rules (in max, no matter what gamma you set globally, from the point of view of the mental ray core the gamma is 1.0. It is the shaders that handle the gamma in max, both going in and going out. This is largely a good thing, but it makes giving a "general suggestion for a workflow" between applications really difficult.... which is the main reason my big linear workflow site is taking so long to make... ;) )


/Z
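
For point (a), the usual Maya-side workaround is to leave the render untouched and linearize each sRGB input by hand with a gammaCorrect utility node. A sketch only - the file1 and lambert1 node names are placeholders for your own texture and material:

// Maya's gammaCorrect node computes outValue = value^(1/gamma),
// so 0.455 here means "raise to ~2.2", i.e. sRGB -> linear
string $gc = `shadingNode -asUtility gammaCorrect`;
setAttr ($gc + ".gammaX") 0.455;
setAttr ($gc + ".gammaY") 0.455;
setAttr ($gc + ".gammaZ") 0.455;
connectAttr -f file1.outColor ($gc + ".value");
connectAttr -f ($gc + ".outValue") lambert1.color;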

sixbysixx
10-02-2007, 09:28 AM
In the applications which allow direct access to the "mental ray" internal gamma setting, I would tend to agree - to a point.

The "problem" is that I can't simply tell people to set the global gamma (1/2.2 = 0.4545) for a couple of reasons:

a) It doesn't fix your color swatches. Your color swatches will not take gamma into account, making you "surprised" that what looks like a "deep red" in your color swatch comes out "borderline pink" in the render.

Of course (I guess I should have mentioned that ;-) but every method has some pitfalls and is slightly awkward, and for me the color swatch issue is the smallest one.


b) Certain applications actually layer their own gamma handling on top. For example, 3ds max has probably the most mature gamma handling of the apps, but it is not using mental ray's internal gamma at all. Which means the rules are 100% those of 3ds max, not the "mental ray" rules (in max, no matter what gamma you set globally, from the point of view of the mental ray core the gamma is 1.0. It is the shaders that handle the gamma in max, both going in and going out. This is largely a good thing, but it makes giving a "general suggestion for a workflow" between applications really difficult.... which is the main reason my big linear workflow site is taking so long to make... ;) )


/Z

I was only referring to Maya. I had no idea Max handles this differently.
I guess these things make it so much more tricky for you guys to make Mental Ray easy to use and intuitive, if every application handles this stuff completely differently.

A suggestion: why don't you build a gamma option into the shaders, like Sphere did with the mia_material_rg?

Appreciate all your input Zap:buttrock:

floze
10-02-2007, 11:50 AM
In the applications which allow direct access to the "mental ray" internal gamma setting, I would tend to agree - to a point.

The "problem" is that I can't simply tell people to set the global gamma (1/2.2 = 0.4545) for a couple of reasons:

a) It doesn't fix your color swatches. Your color swatches will not take gamma into account, making you "surprised" that what looks like a "deep red" in your color swatch comes out "borderline pink" in the render.

b) Certain applications actually layer their own gamma handling on top. For example, 3ds max has probably the most mature gamma handling of the apps, but it is not using mental ray's internal gamma at all. Which means the rules are 100% those of 3ds max, not the "mental ray" rules (in max, no matter what gamma you set globally, from the point of view of the mental ray core the gamma is 1.0. It is the shaders that handle the gamma in max, both going in and going out. This is largely a good thing, but it makes giving a "general suggestion for a workflow" between applications really difficult.... which is the main reason my big linear workflow site is taking so long to make... ;) )


/Z
One idea, MasterZap: wouldn't it make sense to apply an 'inbetween correct' gamma solution, i.e. by only correcting the value of the rendered pixels, and leaving the hue and saturation as is? Of course that's a quite dirty approach, but quick and easy as well, since you don't have to mess with anything else than the screen pixels..? I'd say it's (obviously) dirtier than the regular gamma correction approach where you supposedly correct the sRGB textures, but cleaner than the no-gamma-at-all approach, hence 'inbetween correct'.

And thanks a million for your contribution, and for shaking up the community!

MasterZap
10-04-2007, 06:33 AM
I was only referring to Maya. I had no idea Max handles this differently.
I guess these things make it so much more tricky for you guys to make Mental Ray easy to use and intuitive, if every application handles this stuff completely differently.


Exactly. Therein lies my headache: how to provide
a) Tips
b) Shaders
...which support all the desirable workflows without making the shaders laden with app-specific stuff (keeping them general), and yet allow flexibility without becoming too confusing.

I admit that in some cases we may have erred on the side of "too confusing" ;)

The whole thing gets hugely complicated, also, by the fact that the max "Logarithmic Exposure Control" behaves as a gamma in and of itself, so if you combine that with real gamma correction, you have a huge mess on your hands.

This will be solved by the new photographic exposure in max 2008, which handles this correctly. It'll be a relief, I promise. ;)


A suggestion: why don't you build a gamma option into the shaders, like Sphere did with the mia_material_rg?


If it was only up to me, I'd surely consider such a thing. But it's not solely up to me.

/Z

MasterZap
10-04-2007, 06:41 AM
One idea, MasterZap: wouldn't it make sense to apply an 'inbetween correct' gamma solution, i.e. by only correcting the value of the rendered pixels, and leaving the hue and saturation as is? Of course that's a quite dirty approach, but quick and easy as well, since you don't have to mess with anything else than the screen pixels..? I'd say it's (obviously) dirtier than the regular gamma correction approach where you supposedly correct the sRGB textures, but cleaner than the no-gamma-at-all approach, hence 'inbetween correct'.

One would - naïvely - think that this would work. And indeed, perhaps some in-between approach could work. However, it would still be wrong in a technical sense.

What happens if you try to just modify the "value" of a pixel is that the natural balance in color (which you'd think you preserve) actually gets thrown completely out of whack.

The fact is that in video, film, etc. it is completely normal that a brightly lit object appears less saturated, as it pushes "towards" the overexposed region. If you do a full preserve of "hue and saturation", you get this:

http://i51.photobucket.com/albums/f366/MasterZap/zap-not-ok-5.jpg
(image calculated w. gamma applied to the "value" of the color, leaving hue and saturation alone)

You see this looks pretty horrid. The reason it looks horrid is that you get a luminance compression without the accompanying saturation compression. The net effect ends up being that the bright things seemingly increase in saturation, which is something you absolutely do NOT want.


Whereas using a per-component gamma would yield:
http://i51.photobucket.com/albums/f366/MasterZap/zap-new-tm.jpg

If one still thinks the gamma-correct image lacks saturation, one has to consider that:
a) It probably means you didn't put in properly linearized (de-gammaed) colors in the first place
or
b) Your scene is extremely over-lit or has been massively contrast-compressed in the tone mapper
and
c) ...If it's still an issue, it's fairly simple to simply turn up saturation a hair in post!


...which of course is the reason that the new mia_exposure_photographic has a very carefully programmed little "saturation" knob built in (set it at around 1.3 for a nice thick look), as well as a "shadow crush" which puts some bite into the shadows. :thumbsup:


/Z
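
The desaturation effect described here is easy to verify with a few lines of MEL (the colour values are example numbers of mine, not taken from the images above):

// a bright, saturated red in linear space
float $c[] = {1.0, 0.2, 0.2};
float $g = 1.0 / 2.2;

// per-component gamma: every channel goes through the curve, so the
// channel ratios compress and the colour desaturates as it brightens
float $pc[3];
$pc[0] = pow($c[0], $g); // 1.00
$pc[1] = pow($c[1], $g); // ~0.48
$pc[2] = pow($c[2], $g); // ~0.48 -> HSV saturation falls from 0.80 to ~0.52

// "value-only" gamma: one scale factor for all channels, so the ratios
// (and the full 0.80 saturation) survive - the oversaturated-brights look
float $s = pow($c[0], $g) / $c[0]; // curve applied to the max component only
float $vo[3];
$vo[0] = $c[0] * $s; // 1.0
$vo[1] = $c[1] * $s; // 0.2 - unchanged, still fully saturated
$vo[2] = $c[2] * $s; // 0.2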

Bitter
10-04-2007, 09:02 AM
'There's only so much you can pour into a shotglass of a mind.' -- Bud Bundy

My brain is full.

So, given the normal workflow from Photoshop to Maya, out through mental ray, or renderman, or whoever... how can I spit out images (most of which may start as 8-bit from a camera) that, when rendered, won't be incorrect? I know Spitzak mentioned conversion to linear space from 8-bit files. Can I do that, then render?

http://mysite.verizon.net/spitzak/conversion/index.html

In theory, "correcting" in post is an artistic consideration. Correct or not, especially for film, I have to consider where I want to focus the viewer. Do I want to purposefully push a highlight on the edge of a blade? Do I need to deepen the colors to make a point, or skew something to the unreal so the audience sees it as realistic? (Like cartoon animation, you have to be more expressive.) But I am very interested in the correct workflow, so that the image-making process is less likely to make me want to quit my job.

Feel free to direct this to another post since that was not the intent of the thread.

As a side note, (or maybe ANOTHER sidenote) a tool is a tool. Renderman, mental ray, Maxwell, etc. If your vision lacks quality or education, your output will never be satisfactory. I don't care what you use. If I can model in NURBS, polys, AND Sub-Ds, then I will never go unemployed.

I for one don't want to lose my job when the pipeline changes. Clinging to the idea that any one product will solve your problems forever and ever will doom you to be in one job. . .until they replace you with someone more flexible. Right now that job is mental ray. Am I going to bite the hand that feeds me?

Not likely.

I enjoy using mental ray and all the headaches particular to it just as much as the headaches particular to Renderman, Maxwell, etc.

Knowing the right way to do it just might eliminate one of those headaches without killing me. :shrug:

I'm assuming that's why we're really here. No? Thanks for the help Zap, much appreciated.

Matadŏr
10-04-2007, 11:13 AM
Wow, this thread is starting to look like an entire repository on CG techniques, optics, color theory and image creation!
The information pulled together by some of you, and in particular Zap, is very important and interesting for the rest of us.
I always admire those who put such effort into helping other users.

And now, a doubt (one more ;):
The more I look at 3D raw images (with no PS treatment at all), the more I see that one of the most important contributions to image quality (towards the "real") is the contribution of one material's color and aspect to all others in the scene or near it, and vice versa.
I know that AO now has the color bleed option, and that's really helpful for some effects, but I'm thinking that what I loved most about the old radiosity renderers like Lightscape was the way color bled from all materials, and we saw that on those beautiful white walls with all the rich shadowing with all kinds of hues, values and saturation blends. What I'm trying to say is, basically, that when I look at MR or fR images they always look much more "grey" than Maxwell (and I know that I shouldn't compare them) and Vray.

Is there any way or technique we should be aware of in preparing our materials and lighting to try to maximize this bleeding factor, without it being that basic bounce of an alien green spilled by a carpet onto the ceiling ;).

Contribution is the key word, both for this problem and on these forums :)
Once more, many thanks everyone for all this discussion and help.

Zé Pedro
(sorry for my lazy English, no revision on this one, and I'm sure it needs it ;))

ACamacho
10-04-2007, 11:34 AM
I think soon someone may have to convert a lot of Zap's info into a PDF like the Vray and Maxwell threads. :)

MasterZap
10-04-2007, 12:09 PM
the contribution of one material's color and aspect to all others in the scene or near it, and vice versa.


But mental ray does this inherently with GI and FG techniques.

But what is funny is that if I had a dime for every time some guy told me "I have white walls, then I put in a brown wood floor, and the sun is now shining in through the window, and my stupid walls are now brown tinted, your software is broken, waah, waah, waah".... well... then I would have... like.... three dimes... at least. ;)

Seriously, though. It's a common complaint: "We get too much color bleed".

And of course, at the risk of sounding like a broken record.... one reason for the "too much bleed" effect is... working in the wrong gamma, because saturation gets overly enhanced that way. Another reason for the "too much bleed" effect is that people simply have materials with too high a diffuse reflectance. (The "1 1 1 white" material doesn't really exist in reality.)

But now you come and want more bleed. Interesting. ;)



I know that AO now has the color bleed option,

Not really; AO is used for two things:
a) "fake GI" for movie production, where you only care about getting nice contact shadows and a rough GI-ish effect, with no risk of strange flicker due to different bounce light calculations between frames
b) to enhance an intentionally over-smooth GI solution (the "detail enhancement" in mia_material, for example, can take an intentionally oversmoothed - and hence quick - GI solution and put "the details back")

You don't really use AO for the "bleed" itself. (But of course we added precisely that option in 2008, where you have an "AO with bleed" thing... kind of like a "local brute force GI" mode.)

and that's really helpful for some effects, but I'm thinking that what I loved most about the old radiosity renderers like Lightscape was the way color bled from all materials, and we saw that on those beautiful white walls with all the rich shadowing with all kinds of hues, values and saturation blends.

I agree. And mr does that. And some users complain to no end when it does ;)

What I'm trying to say is, basically, that when I look at MR or fR images they always look much more "grey" than Maxwell (and I know that I shouldn't compare them) and Vray.

Shouldn't be the case. The basic math is well known and the same in pretty much all renderers.

/Z

dagon1978
10-04-2007, 02:11 PM
Zap, vray has more "bleed" because of the lightcache's "infinite" light bounce calculation, and I think maxwell has something similar; in mray you need to "cut" all the unnecessary bounces because of the time cost.

It's 3 years now I've been asking for this: please, mentals, add the lightcache, or add something similar to mray, and please separate primary and secondary bounces (look at fR, vray... and turtle 4.1 now...).
I know you aren't involved in this part of the development, but I'm a bit frustrated when I look at the other renderers' GI :sad:

Back on topic... is it possible to add a "screen" vs "world" radius option for the AO+colorbleeding? This would be great ;)

floze
10-04-2007, 02:57 PM
Zap, vray has more "bleed" because of the lightcache's "infinite" light bounce calculation, and I think maxwell has something similar; in mray you need to "cut" all the unnecessary bounces because of the time cost.

It's 3 years now I've been asking for this: please, mentals, add the lightcache, or add something similar to mray, and please separate primary and secondary bounces (look at fR, vray... and turtle 4.1 now...).
I know you aren't involved in this part of the development, but I'm a bit frustrated when I look at the other renderers' GI :sad:

Back on topic... is it possible to add a "screen" vs "world" radius option for the AO+colorbleeding? This would be great ;)
I second this, lightcache is pimp!

Matadŏr
10-04-2007, 04:30 PM
Master Zap, I hope you didn't take my previous inquiries as some more yelling about how MR is wrong and the sky is falling on our heads :rolleyes:
I'm a happy user of Maya since v 1.0 and, in fact, a happy user of MRfM since its appearance. Your work on both the mia materials and Sky/Sun was a big revolution in my daily routines, and a welcome one.
I know there is bleed from materials and that GI/FG accounts for that. I'm very comfortable with the way MR works in Maya, besides some little "features", many of which you and others mentioned, and all is well.
All this to say that I'm not another unsatisfied user. Urraahh.
I was talking about something that I see in images - I don't know if I expressed my ideas well - and I'm interested in knowing what's happening in the background, on the renderer side, for the images being produced to be different in that specific way.

The behavior could be related to the facts that dagon1978 mentioned. I think it all comes down to what we could perhaps call simple bleeding VS dense/deep bleeding?
A more "rich" way of scattering the values of light through the scene... I really don't know how to express it correctly.

And with your knowledge on the matter, you are the perfect person to reach for some answers :)


Zé Pedro

MasterZap
10-04-2007, 05:15 PM
Master Zap, I hope you didn't take my previous inquiries as some more yelling about how MR is wrong and the sky is falling on our heads :rolleyes:


No no no, please take what I say with a big smile and a nudge-wink.

I'm kinda trying to be funny, a bit. I'm a funny guy. Don't take it too seriously. ;)

Nudge-wink-know-what-I-mean....

/Z

MasterZap
10-04-2007, 05:21 PM
Ooops, double post. Durned the internets!

/Z

Fus|on
10-05-2007, 02:41 AM
I think soon someone may have to convert a lot of Zap's info into a PDF like the Vray and Maxwell threads. :)


hehe yea... how about we just PDF Zap's brain? :curious:

bkircher
10-05-2007, 04:42 PM
First off: I love this discussion, this clears up a lot. Thanks a lot to MasterZap for all the insight (:

I got confused with all this discussion about FG and colour bleeding and did a little test:
I tried FG, FG with secondary diffuse bounces on, and finally a CTRL irradiance shader that increased the irradiance saturation (thanks to the CTRL people).

I always thought FG would look into the scene and sample the colours it found. It seems to be doing so only after the first ray. Why is that, the poor MR user asks himself? (The colours are there, only extremely dim; checked vs. a version with no fg, no bleed at all.)

sixbysixx
10-05-2007, 05:21 PM
I got confused with all this discussion about FG and colour bleeding and did a little test:
I tried FG, FG with secondary diffuse bounces on, and finally a CTRL irradiance shader that increased the irradiance saturation (thanks to the CTRL people).

I always thought FG would look into the scene and sample the colours it found. It seems to be doing so only after the first ray. Why is that, the poor MR user asks himself? (The colours are there, only extremely dim; checked vs. a version with no fg, no bleed at all.)

You have to ask yourself: what would this look like in reality?
I would say more like the first image, no?
Your light (which looks like the default light without decay?) comes from the top, so the sphere isn't really lit from underneath. I wouldn't expect the ground to pick up much of the colours of the sphere like this in reality.

If you had a light pointing up from underneath the sphere, I'm sure you're gonna see some colour bleeding onto the ground, also with only primary FG...

sixbysixx
10-05-2007, 05:27 PM
OOOPS - double post...

dagon1978
10-05-2007, 05:45 PM
You have to ask yourself: what would this look like in reality?
I would say more like the first image, no?
Your light (which looks like the default light without decay?) comes from the top, so the sphere isn't really lit from underneath. I wouldn't expect the ground to pick up much of the colours of the sphere like this in reality.

If you had a light pointing up from underneath the sphere, I'm sure you're gonna see some colour bleeding onto the ground, also with only primary FG...

absolutely right :thumbsup:

so, here another wish on my list:
- more control over the FG map (saturation, contrast)

dagon1978
10-05-2007, 06:58 PM
There is definitely a performance issue with the mia_mat in maya 2008.
I do know this post (http://mentalraytips.blogspot.com/2007/09/sitting-in-shade-with-maya-miamaterial.html) on the Zap blog, but there's something more.

Here are some tests:

http://img229.imageshack.us/img229/2912/mianoshadows43spz2.jpg

http://img229.imageshack.us/img229/3066/miasimplesorted1m31shm8.jpg

http://img229.imageshack.us/img229/5998/miasegments1m35ssz4.jpg

bkircher
10-05-2007, 07:12 PM
Originally Posted by sixbysixx
You have to ask yourself: what would this look like in reality?
I would say more like the first image, no?
Your light (which looks like the default light without decay?) comes from the top, so the sphere isn't really lit from underneath. I wouldn't expect the ground to pick up much of the colours of the sphere like this in reality.

If you had a light pointing up from underneath the sphere, I'm sure you're gonna see some colour bleeding onto the ground, also with only primary FG...


The light used was a MR area light (linear decay, I think), at a diagonal angle, with mia_exposure_simple and a gamma-corrected pink. Checked with quadratic falloff with a similar outcome.

I find the effect very subtle, and probably less than I'd expect in a real situation with a 100% pink sphere and white walls, though the other results are very pinkish?

Bitter
10-06-2007, 02:00 AM
Back to the pink spheres. . .I have noticed in MR that color bleeding is best served with photons and not Final Gathering. By design or not, it seems to work fine for me.

FG was originally designed to complete GI and not as a direct substitute.

I assume you're looking for the physical correctness of the color bleed? If that is the case you must include GI. If you do that, does it look better?

Bitter
10-06-2007, 02:39 AM
This might have been covered, but the description of light caching sounds like the description of importons.

Backward traced from the camera.

Similar results?

MasterZap
10-06-2007, 09:04 AM
There is definitely a performance issue with the mia_mat in maya 2008.


The performance issue is known, as I posted on my blog, but the artifact is new. Have you logged a bug? Always, always log a bug.

/Z

MasterZap
10-06-2007, 09:07 AM
Back to the pink spheres. . .I have noticed in MR that color bleeding is best served with photons and not Final Gathering. By design or not, it seems to work fine for me.

There shouldn't really be a difference.


FG was originally designed to complete GI and not as a direct substitute.


Originally - yes. But with FG multibounce it is a direct substitute.

I assume you're looking for the physical correctness of the color bleed? If that is the case you must include GI. If you do that, does it look better?

No, this is not true. The physical correctness should be the same. But what are your FG bounces set to? Maya's defaults on all things raytracing are very "1989", in that they are set to things like 1 bounce and such. Also, I see no shadow of the sphere, so I have a hard time telling how close to the floor it is.

You should get a correct 1st bounce immediately; if not, something is not set up right - probably in your FG trace depths, or something funky in the material settings(!?)

/Z

dagon1978
10-06-2007, 02:02 PM
The performance issue is known, as I posted on my blog, but the artifact is new. Have you logged a bug? Always, always log a bug.

/Z

But the performance issue is not related to the "segment" mode; it's always slow in simple/sorted too.
OK, I'll log the bug with Autodesk.

And here's another problem in maya 2008:

If you use the "render settings" you get just 1 single FG diffuse bounce; if you want more you have to set it via miDefaultOptions.
I would like to know who the mr4maya developer is... :banghead:

ctrl.studio
10-06-2007, 02:33 PM
I would like to know who the mr4maya developer is...

mentalimages :)

Bitter
10-06-2007, 04:26 PM
Oddly enough, some training videos we'd received from Autodesk about a year ago mentioned that GI and FG together were necessary for physically correct rendering.

As a side note, in the Master Class this year for the mia_material, her example did not produce noticeable bleed until she added GI. She already had FG bounces set to 2 (miDefaultOptions) but the bleed wasn't as pronounced until she showed a render with GI.

Is that a flaw with Maya's integration?

And to my knowledge mr4maya is co-developed with Autodesk, it is not 100% up to mental images how it is integrated. Is that the case?

sixbysixx
10-06-2007, 07:18 PM
Oddly enough, some training videos we'd received from Autodesk about a year ago mentioned that GI and FG together were necessary for physically correct rendering.

As a side note, in the Master Class this year for the mia_material, her example did not produce noticeable bleed until she added GI. She already had FG bounces set to 2 (miDefaultOptions) but the bleed wasn't as pronounced until she showed a render with GI.

Is that a flaw with Maya's integration?


I always found that the colour bleed with photons in MR is overpronounced.
I guess it all comes down to taste anyway, but when it comes to realism I find that the GI colour bleed is too much. But that's just my opinion anyway ;-)

floze
10-07-2007, 11:41 AM
But the performance issue is not related to the "segment" mode; it's always slow in simple/sorted too.
OK, I'll log the bug with Autodesk.

And here's another problem in maya 2008:

If you use the "render settings" you get just 1 single FG diffuse bounce; if you want more you have to set it via miDefaultOptions.
I would like to know who the mr4maya developer is... :banghead:
Those are actually Maya 8.5 issues as well; I encountered the mia shadow shader problem quite some time ago (only I didn't tell anyone about it, ugh, should have logged a bug.. uhmmm), and the fg diffuse bounces also default to 1 in the miDefaultOptions node.

I should indeed log those bugs and inconveniences in the future.. :wip:

floze
10-10-2007, 01:27 PM
The final gather contrast option still does not seem to work properly in Maya 2008, ugh? :argh:

dagon1978
10-10-2007, 04:24 PM
The final gather contrast option still does not seem to work properly in Maya 2008, ugh? :argh:

It doesn't work.
I'm working only with ctrl.ghots right now... it's much simpler...

asche
10-11-2007, 08:42 AM
Hi, I am wondering if anyone else here has had problems with the mip_gamma_gain node.

I am trying to use it, and I run into a couple of issues:
You can't use mip_gamma_gain.outValueA... no alpha... mental ray doesn't even bother to render anymore :(

You can't access the values via MEL... so if I run these small lines of code:

float $rgb[] = `getAttr "mip_gamma_gain1.outValue"`;
print $rgb;

I always get "0 0 0", no matter what goes into the gamma node or what color or settings it is set to.
The colors in the viewport are displayed correctly, though, even when I use the outAlpha of the node...
Anyone here with a solution to that?

(I want to access the alpha simply to see if it is gamma corrected too... and if it's not, I want to use it...)

MasterZap
10-11-2007, 10:22 AM
(I want to access the alpha simply to see if it is gamma corrected too... and if it's not, I want to use it...)

The alpha isn't gamma corrected, only the color.

If you want to "see" the alpha, you can probably use the mib_color_alpha shader, which simply returns the alpha of the input in all channels....

/Z
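
A sketch of that wiring in MEL - hedged: the attribute names assume the standard base-shader declaration of mib_color_alpha (a colour 'input' and a scalar 'factor'), and mip_gamma_gain1 and surfaceShader1 stand in for the nodes in your own scene:

// copy the alpha of the gamma node's output into all colour channels
string $ca = `createNode mib_color_alpha`;
setAttr ($ca + ".factor") 1.0;
connectAttr -f mip_gamma_gain1.outValue ($ca + ".input");
// route the result somewhere visible, e.g. into a surface shader
connectAttr -f ($ca + ".outValue") surfaceShader1.outColor;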

floze
10-11-2007, 11:32 AM
It doesn't work.
I'm working only with ctrl.ghots right now... it's much simpler...
I figured I can use the new string options in the miDefaultOptions node; it's pretty simple and straightforward too imho:

setAttr -type "string" miDefaultOptions.stringOptions[1].name "finalgather contrast";
setAttr -type "string" miDefaultOptions.stringOptions[1].value "0.45 0.45 0.45 0.45";
setAttr -type "string" miDefaultOptions.stringOptions[1].type "color";

Although it's a shortcoming that these options are not built in from the beginning, at least we can add them in easily this way.. sigh.

A little explanation for those who don't use stuff like MEL too often:
Copy and paste the above lines into the script editor and execute them. Doing so adds a new item under Extra Attributes > String Options in the miDefaultOptions node.
Be aware that the '[1]' in these lines is an index, and indexing starts at zero, i.e. '[0]' - so '[1]' practically means the second position instead of the first (that's weird programmer thinking).
The reason I'm using '[1]' is that the first position is already taken by default by some motion factor option (which is probably an example by the devs?).
If you wanted to add another string option to the miDefaultOptions you would have to continue with the index, for example:

setAttr -type "string" miDefaultOptions.stringOptions[2].name "importon";
setAttr -type "string" miDefaultOptions.stringOptions[2].value "on":
setAttr -type "string" miDefaultOptions.stringOptions[2].type "boolean";

The next option would take '[3]', and so on.
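
A small helper (not from floze's post, just a convenience sketch) that appends a string option at the next free index, so you never have to track '[3]', '[4]' by hand:

global proc addStringOption(string $name, string $value, string $type)
{
    // next free element of the stringOptions multi attribute
    int $i = `getAttr -size miDefaultOptions.stringOptions`;
    setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].name") $name;
    setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].value") $value;
    setAttr -type "string" ("miDefaultOptions.stringOptions[" + $i + "].type") $type;
}

// e.g.: addStringOption("finalgather contrast", "0.45 0.45 0.45 0.45", "color");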

Olegr
10-11-2007, 12:09 PM
What exactly is the benefit of setting the FG contrast?

cpan
10-11-2007, 12:13 PM
Florian, do you happen to know more useful hidden string attributes (maybe a list with all the attributes hidden until fully tested)? :)

salut,
calin

floze
10-11-2007, 12:23 PM
What exactly is the benefit of setting the FG contrast?
It controls the adaptivity of the final gathering pre-computation. It works similarly to the regular sampling contrast option, but instead of deciding where to put (or rather neglect) new image samples, it decides where to put final gathering points.

So with a fg contrast color towards white there should generally be fewer final gathering points, because the algorithm decides to neglect points where the contrast is not high enough - hence it 'adapts' the point density. This reduces rendering time and quality, but imho it generally converges better than if I reduced the overall point density.

floze
10-11-2007, 02:50 PM
Florian, do you happen to know more useful hidden string attributes (maybe a list with all the attributes hidden until fully tested)? :)

salut,
calin
Well, for now:


contrast all buffers //boolean

ambient occlusion cache //boolean
ambient occlusion rays //integer
ambient occlusion cache density //scalar
ambient occlusion cache points //integer

importon //boolean
importon density //scalar
importon merge //scalar
importon trace depth //integer
importon traverse //boolean

finalgather mode //string ('3.4', 'strict 3.4', 'automatic', 'multiframe', 'force')
finalgather contrast //color


Whereas the data types need to be as follows:


miBoolean "bool[ean]" 'on' 'off' 'true' 'false' '0' '1'
miInteger "int[eger]" integer value
miScalar "scal[ar]" floating-point value
miScalar "float" floating-point value
miVector "vec[tor]" 3 floats
miColor "col[or]" 4 floats
miString "[string]" string

The stuff in the [] brackets means you can write either the full form, e.g. 'boolean', or the shorter 'bool'. The data type on the left, with the mi prefix, is the mental ray equivalent. Be aware that a color means four values, i.e. R, G, B, A.

The importons take quite long to collect if you have their density at around 1.0, but it seems to work well with values like ~0.1.

The ambient occlusion cache is sort of a mystery for me right now; it does something, but I don't quite get the expected visual feedback.

MAV4d
10-11-2007, 03:39 PM
So after reading all of this... in one sitting (my head hurts :D)... I'm still trying to figure out the workflow for physically correct gamma.

Is the new gamma setting in the photographic lens shader the way to get physically correct gamma? But doesn't that negate the whole point of starting with gamma-corrected textures? I also saw bit depths being thrown around; will other image file formats naturally be post-processed for gamma? (ie not jpg... maybe iff or targa?)

I'm really, really trying to get my head around correct workflows, but I can't seem to find any hard information about it. I would also love to see industry standards for lighting/shading/rendering workflows... BUT! that's another topic.

I am a student, graduating soon and hopefully getting a job in the industry, and I would love to be able to hit the ground running.

Thanks a lot

-Mike

Saturn
10-11-2007, 05:18 PM
Well, for now:


The ambient occlusion cache is sort of a mystery for me right now; it does something, but I don't quite get the expected visual feedback.

Let's speculate then:

- Point-based occlusion a la Prman?
- A separate FG map for occlusion only?

eddgarpv
10-11-2007, 06:29 PM
From the docs:


-----------------------------


To speedup rendering, ambient occlusion caching may be enabled with the "ambient occlusion cache" string option. If caching is turned on, several preprocess passes are computed. In the first pass, some ambient occlusion points are created on a coarse grid. Subsequent passes refine the grid adaptively. The density of the grid is identified by the "ambient occlusion cache density" string option which gives the upper bound to the number of ambient occlusion points per pixel.

During tile rendering, ambient occlusion is interpolated from several ambient occlusion points closest to the lookup location. The number of points used for interpolation is given by the "ambient occlusion cache points" string option. The default value is 64.

Ambient occlusion caching is disabled by default.
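
Mapped onto the string-option mechanism floze described a few posts back (a sketch using the addStringOption helper from above; the density value is only an illustration, while 64 is the documented default for the interpolation points):

addStringOption("ambient occlusion cache", "on", "boolean");
addStringOption("ambient occlusion cache density", "1.0", "scalar");
addStringOption("ambient occlusion cache points", "64", "integer");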

jupiterjazz
10-11-2007, 08:07 PM
Let's speculate then:
- Point-based occlusion a la Prman?
- A separate FG map for occlusion only?


Briefly:

- No, it involves raytracing; point-based approaches do not use raytracing, and imho they are going to be the future in production.

- No, it's like a progressive approach that creates 1 pass at a certain density and then refines samples adaptively in later passes (2 or 3). Nothing is written to disk, although it's called a 'cache'.

An intelligent approach (that at least would give a meaning to the name 'cache') would be writing out a cache that accelerates rendering by storing occlusion estimates and re-using them at nearby locations... other renderers do this very well. Check AIR renderman if you want an AO cache. 3Delight, PRMan and Pixie all do point-based occlusion and color bleeding without raytracing.

If you wanna know more about the AO cache in mr, check the mr docs; there are a few lines about it with no usability info (how strange...) ;)

What I can say is:

- you need to rewrite your shaders to use it
- it should be faster than regular occlusion, but slower than adaptive occlusion
- it will provide a more washed-out / blurred look

p

Olegr
10-11-2007, 08:21 PM
Briefly:
- No, it involves raytracing; point-based approaches do not use raytracing, and imho they are going to be the future in production.


Raytracing is not the future? That seems a bit odd to me, when you have major hardware vendors like Intel working on solutions for realtime raytracing, and raytracing is arguably a more accurate reproduction of reality (which is, after all, what we want to do).

Why shouldn't raytracing become the standard for production as hardware increases in power and removes the only disadvantage of raytracing (speed)? Raytracing is, after all, a problem that can be easily parallelized. I can accept that PRMan might be a superior renderer now (when paired with a raytracer when you need raytracing), but arguing that raytracing is not the future seems far fetched. At best.

jupiterjazz
10-11-2007, 08:29 PM
Dude.. read well..
I am not saying raytracing is not the future.
I am saying that IMO point-based techniques for effects like bleed and occlusion are the present, and will be the future, in animation PRODUCTION.

Ah, edit, let me also add a note: you said that photorealism is ">after all< what we want to do". "After all" is inadequate, since having everything physically correct and photoreal, especially in a framework that does not allow easy shading changes or that lacks a shading language for artists, reduces (photo)surrealism, which IS the realm of art and fantasy.

So rather than "after all" it is "depending on your aim".

In any case it would be a lot of fun to play with the Larrabee.
For those who do not know: http://en.wikipedia.org/wiki/Larrabee_%28GPU%29


p

Buexe
10-11-2007, 09:57 PM
Point-based stuff is cool; it divides my render time by a factor of 2 and there is no notable visual difference in my stuff. Me likes a much!

Saturn
10-12-2007, 11:28 AM
Briefly:

- No, it involves raytracing; point-based approaches do not use raytracing, and imho they are going to be the future in production.



Yep, I know. But I can't see why you could not approach the problem the same way (I mean, no raytracing) in MR. MR is a raytracer, OK, but you are not obligated to use raytracing.

For instance, you could generate the point cloud when you pass the geometry and reuse it later.

Am I wrong?
Anyway, thanks for the explanation.

jupiterjazz
10-12-2007, 12:09 PM
Yep, I know. But I can't see why you could not approach the problem the same way (I mean, no raytracing) in MR. MR is a raytracer, OK, but you are not obligated to use raytracing.
For instance, you could generate the point cloud when you pass the geometry and reuse it later.
Am I wrong?
Anyway, thanks for the explanation.


- cumbersome and incomplete KD-tree API (to create the point cloud)
- no shading function API ready for it (to shade using point cloud data)
- no brickmaps (you will end up with gigabytes of point clouds in your pipe...)
- the rasterizer performs way slower compared to REYES implementations

p

Visor66
10-13-2007, 12:07 AM
I would really like to learn more about the pointCloud/brickMap stuff in Renderman. Is there a good source to learn from? Paolo maybe?

Thanks!

dagon1978
10-13-2007, 12:34 AM
http://graphics.pixar.com/

MAV4d
10-13-2007, 04:30 PM
So after reading all of this... in one sitting (my head hurts :D)... I'm still trying to figure out the workflow for physically correct gamma.

Is the new gamma setting in the photographic lens shader the way to get physically correct gamma? But doesn't that negate the whole point of starting with gamma-corrected textures? I also saw bit depths being thrown around; will other image file formats naturally be post-processed for gamma? (ie not jpg... maybe iff or targa?)

I'm really, really trying to get my head around correct workflows, but I can't seem to find any hard information about it. I would also love to see industry standards for lighting/shading/rendering workflows... BUT! that's another topic.

I am a student, graduating soon and hopefully getting a job in the industry, and I would love to be able to hit the ground running.

Thanks a lot

-Mike


I'm bumping myself: one, because I really would like to know, and two, I think a lot of other people are in the dark about this.

Hell, according to Zap most of the industry is, lol.

jupiterjazz
10-13-2007, 04:35 PM
I would really like to learn more about the pointCloud/brickMap stuff in Renderman. Is there a good source to learn from?

Thanks!


I'm working on some learning material.


p

Visor66
10-13-2007, 06:02 PM
I'm working on some learning material.

I know, and I'm counting the days! ;)

slipknot66
10-13-2007, 08:20 PM
I've been testing 3delight, and well.. I think I will still use mental ray, especially where I need raytracing; things can get really slow in 3delight when using raytracing. But I'm really impressed with the displacement maps; that's the only thing where mental ray needs some work.

Als
10-13-2007, 09:08 PM
self censored


Als

jupiterjazz
10-14-2007, 02:18 PM
I've been testing 3delight, and well.. I think I will still use mental ray, especially where I need raytracing; things can get really slow in 3delight when using raytracing. But I'm really impressed with the displacement maps


that's the only thing where mental ray needs some work.


Depends what you are doing, but in general yes, as of now. Although if scenes get very complex you might get surprised.

Anyway, saying that displacement is the only area that needs improvement is a mega bold statement, man.

I identify at the very very least:

- 3d motion blur
- intense particle counts
- NURBS tessellation
- SDS tessellation (yes, even with the new ccmeshes..)
- AOVs/passes
- procedural, hierarchies of geo archives (yes, even with the new assemblies..)
- hair and fur
- frame consistency in animation
- antialiasing
- depth of field
- etc etc...

As I said other times, I think it's very good for things like archviz stuff and product design, but for animation, especially full cg animation with lots of arbitrary geometry, it's quite shaky. If you have a team of programmers then you can do something and in some cases fix-it-yourselfTM; still, it can't handle large databases, even when you have enough RAM.
And in any case, as HW becomes more complex, so does VFX, so you will never fit it all in memory.

So really depends on what you are doing.

Ah, of course this is just my point of view.


p

slipknot66
10-14-2007, 03:27 PM
Well, I may agree with you on some of the things you pointed out there, but what I meant by saying displacement is the only thing mental ray needs work on is that, in my opinion, displacement is completely broken in mental ray. All the other things you mentioned can be resolved in some way, like DOF for example, which you can do in post. That is not possible with displacement maps.

I also agree that things could be a lot better in mental ray, especially the mental ray integration with Maya. But saying that RenderMan or 3Delight is the answer for everything, because that's what it sounds like the way you were putting things here, is not correct. It may be the answer for some things, but not all. Maybe that's why the big studios use both render engines for specific things. Also, you know better than I do that as soon as you start doing serious things with 3Delight you will face the same problems you face with mental ray, where you will need to know some programming. But as you said, that's your point of view... and this is my point of view :)
Btw, you did a nice job with Trickz are for kidz for mental ray :thumbsup:

Als
10-14-2007, 06:18 PM
But saying that RenderMan or 3Delight is the answer for everything, because that's what it sounds like the way you were putting things here, is not correct.

I'm afraid it was/is.
Numerous films are made only with RenderMan-type renderers (including 3Delight, Air, etc.) without using mental ray at all, and there is clearly a reason why many studios are still using them.

https://renderman.pixar.com/products/whatsrenderman/movies.html

I'd love to render film-quality stuff with mental ray, but honestly I haven't found out how.
Have you seen any tutorial, or is there any information inside Maya on how to render for film?

But you are also correct that in Maya there isn't ANY renderer which can render ALL the objects from Maya, and that's really ANNOYING.

And I've tested the raw polygon-crunching speed of 3Delight vs. the Maya and MR renderers, and it's scary...


Als

Olegr
10-14-2007, 08:39 PM
Thank god the major movie studios are not as religious about renderers as the majority of forum people out here and just use the right tool for the job.

The "major movie studio" arguments are also in most cases void: the big studios can pick and choose without having to worry too much about licensing. The ones who need to complain about renderers are the small or one-person firms who don't have the money to spend on ten different renderers. I don't, but luckily for me buildings usually don't move, so MR does a splendid job. :)

Olegr
10-14-2007, 09:02 PM
Double post.

jupiterjazz
10-14-2007, 10:38 PM
The ones who need to complain about renderers are the small or one-person firms who don't have the money to spend on ten different renderers.

You forgot the category of "the ones who spent years trying to get a bunny outta the - full-of-holes - hat".

;)


p

Buexe
10-15-2007, 01:36 AM
ArchViz is not the only application; mr is flexible enough to be used in a variety of media, for example games:
http://www.mentalimages.com/4_2_games/index.html

floze
10-15-2007, 10:53 AM
ArchViz is not the only application; mr is flexible enough to be used in a variety of media, for example games:
http://www.mentalimages.com/4_2_games/index.html
lmao they still have that site up.. :scream:
I worked together with the guy who did the image in the middle, Norbert Raetz, when I was a teenager. And trust me, although we were both working at the same game dev company back then, that image doesn't have anything to do with games.. ^^

Now that I think about it.. it might even have been some 3dsmax cebas/pyrocluster test, so no mental ray at all. :D

Sorry to spoil the bash with the OT, please bear with me.

dagon1978
10-15-2007, 02:23 PM
lmao they still have that site up.. :scream:


this site needs a serious refresh... old images, old information (mental ray 3.4??), no mray forums; they seem to be living in the past :shrug:

jupiterjazz
10-15-2007, 05:06 PM
this site needs a serious refresh... old images, old information (mental ray 3.4??), no mray forums; they seem to be living in the past :shrug:

If you browse it with NCSA Mosaic it looks better, though. ;)

(sorry, this was served up on a silver platter and I couldn't resist...)

p

Bitter
10-15-2007, 09:16 PM
I know people at EA and Microsoft that have used mental ray for some game projects for current consoles.

I also know some lighters and technical directors at larger studios (ILM, Imageworks, Digital Domain) that honestly don't seem to love or hate a specific renderer.

But then they use several options for output based on the strength of the package.

If there were a perfect answer, I don't think we'd have as many choices as we do. So let me illustrate the problem in a familiar form: which is better, OS X or Windows?

The argument is ridiculous. If there were truly as many flaws in the software as has been claimed in this thread, then no one would use it; everyone would use the same software on the same platform. We know that to be untrue, so the argument is moot.

Now, back to the original idea of the thread... has anyone experienced weirdness with framebuffers in 3.6? We might migrate, but before I make the change I want to know about any existing issues I might be missing.

Buexe
10-15-2007, 09:31 PM
Forgot to put on your funny hat today, hmm?

Bitter
10-15-2007, 09:35 PM
The dry cleaners lost it.

djx
10-15-2007, 11:35 PM
has anyone experienced weirdness with framebuffers in 3.6?
I can only get 8-bit output from them. The output file is a 32-bit exr, but the content has been clipped to the 0-1 range.

-- David

jupiterjazz
10-16-2007, 08:46 AM
The dry cleaners lost it.

No worries, I found it :)

floze
10-16-2007, 11:09 AM
I can only get 8-bit output from them. The output file is a 32-bit exr, but the content has been clipped to the 0-1 range.

-- David
Did you maybe use any lens shaders? Some of them actually clip.

djx
10-16-2007, 12:15 PM
Did you maybe use any lens shaders? Some of them actually clip.
Yes, I was using mia_exposure_simple in my tests. In fact, the lens shader is being ignored by the userBuffers, so everything is really bright and clipped. I would be able to get some use out of 8-bit userBuffers if I could figure out how to get them tone-mapped, but that would still be a compromise.
I have also tried with no lens shader and I still get clipping.

But I would love to be wrong here. Are you getting high-dynamic-range 32-bit exr's using userBuffers?

-- David
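
One way to check the clipping objectively rather than eyeballing it: read a channel back from the written file and inspect its maximum. A minimal sketch using the OpenEXR Python bindings plus numpy; the file name is a placeholder:

import OpenEXR
import Imath
import numpy as np

# Read the R channel of a rendered pass back as 32-bit floats.
exr = OpenEXR.InputFile("Render_pass.exr")   # placeholder path
pt = Imath.PixelType(Imath.PixelType.FLOAT)
r = np.frombuffer(exr.channel("R", pt), dtype=np.float32)

# A genuinely HDR buffer should exceed 1.0 somewhere near highlights;
# a clipped one tops out at exactly 1.0.
print("max R value:", r.max())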

Bitter
10-19-2007, 05:31 AM
Hmm, I can't get the framebuffer shaders in a file from 8.5 to render in 2008 at all. :sad:

Bitter
10-22-2007, 02:21 AM
OK, you're right, it's clipping. Baby steps, I suppose... :shrug: I haven't tried the 'contrast all' option yet, it's on the list, but clipped buffers are a pain. I'll report it if they can't give me a solution.

Kel Solaar
11-08-2007, 03:39 PM
OK, you're right, it's clipping. Baby steps, I suppose... :shrug: I haven't tried the 'contrast all' option yet, it's on the list, but clipped buffers are a pain. I'll report it if they can't give me a solution.

Any news on this clipping issue?

Bitter
11-08-2007, 05:52 PM
It's possible Autodesk will patch Maya with a newer build, like 3.6.something. They have released different builds before, but nothing concrete or announced. If that happens, it would probably be a download via the old Platinum support deal (Gold now, I think) rather than a newly shipped disc. As for a workaround, I don't see a good one, and I am about to go into production. :sad:

Puppet|
11-08-2007, 06:51 PM
If nobody has reported the clamping bug to Autodesk yet, I could do it.
What bug exactly? How do I reproduce it?

Bitter
11-08-2007, 08:43 PM
http://www.alias.com/glb/eng/support/bug_report/reportBug.jsp?Product=Maya

Just follow the directions. As issues are reported, they get pushed up the list to be fixed. Same thing with features... majority rules.

Kel Solaar
11-09-2007, 10:42 AM
It's possible Autodesk will patch Maya with a newer build, like 3.6.something. They have released different builds before, but nothing concrete or announced. If that happens, it would probably be a download via the old Platinum support deal (Gold now, I think) rather than a newly shipped disc. As for a workaround, I don't see a good one, and I am about to go into production. :sad:

A workaround is to render to an .hdr file; the problem with that is that you can't have an alpha channel in your image, which can be annoying but is manageable.


If nobody has reported the clamping bug to Autodesk yet, I could do it.
What bug exactly? How do I reproduce it?


Glad to see you here, Pavel. The bug is easy to reproduce: render to a floating-point file format and check whether it's clamped. EXR and TIFF32 are currently broken.

Kel Solaar
11-09-2007, 11:02 AM
Just posted the defect form. Hope they will repair this issue fast.

Puppet|
11-09-2007, 11:24 AM
Everything works fine with iff, tiff and exr, but only with 'Maya Batch'; 'Render View' always clamps the image. Really, it's not a big problem, because you can always use 'Maya Batch' instead of 'Render View'.
But it nevertheless looks like a bug.

I have logged a case.
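
For scripting the "use batch instead of Render View" workaround, the command-line renderer that ships with Maya can be driven from Python; a minimal sketch, with the scene file, frame range and output directory as placeholders:

import subprocess

# Batch-render one frame with mental ray; Render View never enters the
# picture, so the floating-point framebuffers should come through unclamped.
subprocess.check_call([
    "Render",
    "-r", "mr",              # select the mental ray renderer
    "-s", "100",             # start frame (placeholder)
    "-e", "100",             # end frame (placeholder)
    "-rd", "images_tmp",     # output directory (placeholder)
    "scene.mb",              # placeholder scene file
])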

Olegr
11-09-2007, 11:31 AM
So I can render out 32-bit passes to exr if I use batch and not Render View? I would like to use passes, but I was planning to hold off on them until the 32-bit clamping problem was fixed. If this works with batch, then I should start figuring out how passes work.

floze
11-09-2007, 11:38 AM
Everything works fine with iff, tiff and exr, but only with 'Maya Batch'; 'Render View' always clamps the image. Really, it's not a big problem, because you can always use 'Maya Batch' instead of 'Render View'.
But it nevertheless looks like a bug.

I have logged a case.
I just checked it; there should be a big emphasis that this is only happening with the mentalrayUserBuffer system in Maya 2008, isn't it? The newly introduced output pass system seems to be broken or half-baked; the internal framebuffers seem to be alright, there's probably just short data being written to them. I couldn't check it with other 3rd-party framebuffer stuff yet, though (like yours, Puppet ;)).
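
If the "short data" suspicion is right, the damage would look like this: a toy Python sketch using 8-bit quantization as a stand-in for whatever short format is written before the value reaches the float buffer (the 3.7 sample is made up):

# An HDR sample squeezed through 8-bit quantization loses everything
# above 1.0, even though the file it ends up in is 32-bit float.
value = 3.7                                    # bright specular sample (made up)
quantized = min(int(round(value * 255)), 255) / 255.0
print(quantized)                               # 1.0 -- the dynamic range is gone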

Kel Solaar
11-09-2007, 12:46 PM
Everything works fine with iff, tiff and exr, but only with 'Maya Batch'; 'Render View' always clamps the image. Really, it's not a big problem, because you can always use 'Maya Batch' instead of 'Render View'.
But it nevertheless looks like a bug.

I have logged a case.
Oh yeah, I forgot to specify: I was talking about the framebuffers here.

Here are parts of the .mi file when I export it:

options "miDefaultOptions"
object space
desaturate off
colorclip raw
premultiply on
dither on
gamma 1.
acceleration bsp
bsp size 10
bsp depth 40
task size 0
contrast 0.05 0.05 0.05 0.05
samples -1 1
filter clip mitchell 4. 4.
jitter 0.
samplelock on
scanline on
trace on
trace depth 1 6 7
shadow segments
shadowmap on
shadowmap rebuild off
"shadowmap pixel samples" 3
caustic off
globillum off
finalgather on
finalgather accuracy view 64 35. 35.
finalgather scale 1. 1. 1. 1.
finalgather secondary scale 1. 1. 1. 1.
finalgather rebuild on
finalgather filter 0
finalgather falloff 0. 0.
finalgather trace depth 1 6 0 7
finalgather presample density 4.
"finalgather mode" "multiframe"
"finalgather points" 24
finalgather file "default.fgmap"
lens on
volume on
geometry on
displace on
displace presample on
output on
merge on
autovolume on
hair on
pass on
face both
"motion factor" 1.
"maya filter size" 0.0001
"maya reflect blur limit" 1
"maya refract blur limit" 1
"maya render pass" 0
"maya shader filter" on
"maya shader glow" on
"maya shadow limit" 6
frame buffer 0 "+rgb"
frame buffer 1 "+rgb_fp"
frame buffer 2 "+rgb_fp"
frame buffer 3 "+rgb_fp"
frame buffer 4 "+rgb_fp"
frame buffer 5 "+rgb_fp"
frame buffer 6 "+rgb_fp"
frame buffer 7 "+rgb"
frame buffer 8 "+rgb"
frame buffer 9 "+rgb"
frame buffer 10 "+rgb"
frame buffer 11 "+rgb"
frame buffer 12 "+rgb"
frame buffer 13 "+rgb_fp"
frame buffer 14 "+rgb_fp"
frame buffer 15 "+rgb_fp"
frame buffer 16 "+rgb"
state "maya_state" (
"passAlphaThrough" off,
"passDepthThrough" off,
"passLabelThrough" off,
"glowColorBuffer" 17
)
data "miDefaultOptions:data"
end options

camera "perspShape"
output "+rgba_fp,fb16" = "shaderGlow1:perspShape"
# Diffuse_mentalrayOutputPass
output "fb1" "exr" "images_tmp/Render_04_2008.0100_Diffuse.exr"
# Epidermal_Scatter_mentalrayOutputPass
output "fb2" "exr" "images_tmp/Render_04_2008.0100_Epidermal_Scatter.exr"
# Subdermal_Scatter_mentalrayOutputPass
output "fb3" "exr" "images_tmp/Render_04_2008.0100_Subdermal_Scatter.exr"
# Back_Scatter_mentalrayOutputPass
output "fb4" "exr" "images_tmp/Render_04_2008.0100_Back_Scatter.exr"
# Glossy_mentalrayOutputPass
output "fb5" "exr" "images_tmp/Render_04_2008.0100_Glossy.exr"
# Specular_mentalrayOutputPass
output "fb6" "exr" "images_tmp/Render_04_2008.0100_Specular.exr"
# Occlusion_mentalrayOutputPass
output "fb13" "exr" "images_tmp/Render_04_2008.0100_Occlusion.exr"
# Normal_mentalrayOutputPass
output "fb14" "exr" "images_tmp/Render_04_2008.0100_Normal.exr"
output "+rgba_fp" "exr" "images_tmp/Render_04_2008.0100.exr"
resolution 1920 1080
aspect 1.777
aperture 1.41732
frame 100 100.
clip 1. 200.
focal 2.95275
lens = "mia_exposure_simple"
environment = "Environment_mip_rayswitch"
end camera

ctrl.studio
11-10-2007, 04:09 PM
Hi Thomas,

Can I see an excerpt of the output window too? I.e., the options part.

max

hoi
11-10-2007, 07:55 PM
Hey Thomas
Would you be so kind as to show us how we can actually get the passes out of the MR buffers, and how to set it up?

Thanks in advance :)