OpenEXR 2.0: Where is it?!


#1

I’ve read so many promising articles about OpenEXR 2.0 and the advent of deep compositing for all, but nothing has materialized. It was supposed to be released at SIGGRAPH last year.

Nuke has deep image support, and there are great plugins like Peregrine’s Bokeh that support deep depth of field, but since there is no standard for deep images (like OpenEXR 2.0), only PRMan users can take advantage of any of it.

What’s the deal :stuck_out_tongue:


#2

There was an update on the openexr-devel list a while back,
but I don’t think those changes have made it into the OpenEXR repository:

http://lists.gnu.org/archive/html/openexr-devel/2011-08/msg00006.html
http://cvs.savannah.gnu.org/viewvc/OpenEXR/?root=openexr
http://cvs.savannah.gnu.org/viewvc/IlmBase/?root=openexr


#3

Can I ask what are deep images?


#4

Here’s more info on deep images:
http://www.deepimg.com/
http://www.fxguide.com/fxguidetv/fxguidetv_095/


#5

Is the deep-image DOF as accurate as DOF from the renderer?

Where, for example, did Weta use deep image compositing on Avatar?


#6

Peter Hillman talks here about Rise of the Planet of the Apes and deep comp, and Peregrine talks about their DOF plugin:
http://www.fxguide.com/fxguidetv/fxguidetv-118-rise-of-the-apes-nuke-plugins/

I don’t think deep comp is for everyone, because the data is really huge:

http://www.fxguide.com/quicktakes/animfxnz-avatar/


#7

For the record, there is beta support in V-Ray too, using shade map files and a custom reader for Nuke.

Regards,
Thorsten


#8

Are there plugins for compositing which support it?


#9

Well, there are Nuke’s built-in deep tools. For additional tools, the only one that comes to mind is pgBokeh (which is great and cheap, deep or not, heh).

Regards,
Thorsten


#10

Yeah, I’ve been waiting for EXR 2 as well, because I heard that mental ray basically can’t export any deep data until then.

I appreciate the EXR devs, because I don’t think they’re getting any money for this; my understanding is that it’s a community effort.

RenderMan has had deep data for a while, so it’s just the rest of us not using RenderMan who are waiting.


#11

Thanks, I can’t wait to read up on this stuff. It will probably take me a few months to understand what you’re all talking about.


#12

I was reading up on this, and am I correct in thinking that one of the biggest advantages of the new standard is support for Z-depth information on semi-transparent shaders?


#13

OK, a quick explanation of the advantages of deep compositing:

  1. Depth of field that respects antialiasing, transparency, motion blur, and fine details like fur.
  2. Deep holdouts, so you can correctly composite elements behind other objects even if they are motion blurred. A good example is volumetrics: you can comp an object inside a cloud, for instance.

Those are the big ones I can think of.

And yes, the problem with them is file size/speed. They average around 100 MB but can get up to 2–4 GB depending on how complex the shot is. That said, computer speeds are improving so quickly that I think this will be less of an issue in the future.
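To make the two advantages above concrete, here is a toy sketch of what a deep pixel is: a list of per-sample (depth, color, alpha) tuples that only gets flattened, front to back with the over operator, at comp time. This is a hedged illustration of the general idea, with made-up names; it is not the OpenEXR 2.0 API.

```python
# Toy model of a "deep pixel": a list of (depth, color, alpha) samples
# instead of a single flattened value. One color channel for brevity.
# Hypothetical helper names -- not the OpenEXR 2.0 API.

def flatten(samples):
    """Composite deep samples front to back with the 'over' operator."""
    color, alpha = 0.0, 0.0
    for depth, c, a in sorted(samples):  # nearest sample first
        color += (1.0 - alpha) * c * a
        alpha += (1.0 - alpha) * a
    return color, alpha

# 50%-transparent glass at z=1 in front of an opaque surface at z=5;
# both samples survive in the file, so a comper can still split them:
pixel = [(5.0, 1.0, 1.0), (1.0, 0.8, 0.5)]
print(flatten(pixel))
```

Because every sample keeps its own depth and alpha, DOF and holdouts can be computed per sample instead of against one averaged Z value, which is why transparency, motion blur, and fine detail like fur stop breaking.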


#14

Don’t rule out storage space and network performance too quickly on that one. We’re talking about rapidly rising storage needs, and that’s quite a big hit. It will most definitely level out, but currently it can be quite a hassle at times, sadly.

Thorsten


#15

Apologies for resurrecting an old thread, but does anyone have any news on a possible release of OpenEXR 2 this SIGGRAPH?

Also, has anyone tried using the dtex file from PRMan as a holdout matte via the hider’s mattefile string option? I’m not noticing any render-time advantage when using the dtex image as a matte file vs. holding out the actual geometry in the scene (which had volumetrics and brickmap geo). The PRMan forums are mostly mum on this. And the file sizes are truly massive for volumetric data!


#16

I don’t use the deep texture format in RenderMan because it’s too freaking huge.
I wonder if eventually there will be a lossless compression scheme good enough to deal with the data, or a version of deeptex that carries less data but is still good enough for most cases (like half-float EXR). Until then, it won’t fit into a small pipeline.
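As a rough sanity check on the half-float idea, Python’s standard struct module can pack a channel value as a 16-bit half versus a 32-bit float. This is only a back-of-the-envelope sketch of the storage/precision trade-off, not anything specific to deeptex or EXR’s actual on-disk layout.

```python
import struct

# Rough sketch: storing samples as 16-bit half floats instead of
# 32-bit floats halves per-channel storage, at the cost of precision.
value = 0.2345678
as_float = struct.pack('<f', value)   # 4 bytes
as_half = struct.pack('<e', value)    # 2 bytes (IEEE 754 half)

print(len(as_float), len(as_half))    # 4 2
roundtrip = struct.unpack('<e', as_half)[0]
print(abs(roundtrip - value))         # small precision loss
```

Half floats keep roughly three decimal digits, which is usually fine for color but is one reason depth channels tend to stay full 32-bit float.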


#17

Hey guys, was EXR 2.0 ever released? I’m looking at rendering from Houdini to EXR for deep compositing; the rat-to-dtex workflow is not working for us at present…

thanks


#18

The biggest advantage from my perspective has been that when rendering volumes (fire and smoke), you don’t need to render with holdouts. You just give your deep render to a compositor, and the volume composites with ANYTHING that has deep info. So if they change a model slightly, you don’t have to re-render the smoke.
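A toy sketch of why this works (made-up names, not a real deep-comp API): merging two deep sample lists by depth makes the holdout implicit, so the smoke render never needs to know where the object is.

```python
# Hypothetical deep-merge sketch: samples are (depth, color, alpha)
# tuples with one color channel for brevity.

def deep_merge(a, b):
    """Merge two deep sample lists; the holdout falls out of depth order."""
    return sorted(a + b)

def flatten(samples):
    """Front-to-back 'over'; assumes samples are sorted near-to-far."""
    color, alpha = 0.0, 0.0
    for depth, c, a in samples:
        color += (1.0 - alpha) * c * a
        alpha += (1.0 - alpha) * a
    return color, alpha

# Smoke rendered once, as semi-transparent slices through the volume:
smoke = [(2.0, 0.5, 0.3), (4.0, 0.5, 0.3), (6.0, 0.5, 0.3)]
# An opaque object later moved to z=3 -- no smoke re-render needed:
obj = [(3.0, 1.0, 1.0)]
print(flatten(deep_merge(smoke, obj)))
```

Only the cheap merge and flatten re-run in comp when the object moves; the expensive volume render is untouched.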


#19

It’s still in the beta phase, but you can get the code over here:
https://github.com/openexr

And surely someone has compiled and implemented it in Houdini.


#20

Deep data & Alembic support - Push the boundaries of Deep compositing with new OpenEXR 2.0 Deep data read and write capability, as well as read and write geometry and cameras to and from Sony Pictures Imageworks’ Alembic file format.

http://www.thefoundry.co.uk/articles/2012/11/29/450/nuke-70-is-out-now/#feature_alembic

Nuke 7 apparently has OpenEXR 2 support. I thought Nuke 7 was out, but the site still says beta.

http://forums.cgsociety.org/showthread.php?f=59&t=1082534

This post says it’s out… aaaahhhh. What is out and what isn’t?