OpenEXR 2.0 released

Old 04 April 2013   #1
OpenEXR 2.0 released

Quote:
"
April 9, 2013 - Industrial Light & Magic (ILM) and Weta Digital announce the release of OpenEXR 2.0, the major version update of the open source high dynamic range file format first introduced by ILM and maintained and expanded by a number of key industry leaders including Weta Digital, Pixar Animation Studios, Autodesk and others.

The release includes a number of new features that align with the major version number increase. Amongst the major improvements are:
  1. Deep Data support - Pixels can now store a variable-length list of samples. The main rationale behind deep images is to enable the storage of multiple values at different depths for each pixel. OpenEXR 2.0 supports both hard-surface and volumetric representations for Deep Compositing workflows.
  2. Multi-part Image Files - With OpenEXR 2.0, files can now contain a number of separate, but related, data parts in one file. Access to any part is independent of the others, pixels from parts that are not required in the current operation don't need to be accessed, resulting in quicker read times when accessing only a subset of channels. The multipart interface also incorporates support for Stereo images where views are stored in separate parts. This makes stereo OpenEXR 2.0 files significantly faster to work with than the previous multiview support in OpenEXR.
  3. Optimized pixel reading - decoding RGB(A) scanline images has been accelerated on SSE processors providing a significant speedup when reading both old and new format images, including multipart and multiview files.
  4. Namespacing - The library introduces versioned namespaces to avoid conflicts between packages compiled with different versions of the library.
"

http://www.openexr.com/
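To illustrate point 1 above (variable-length sample lists per pixel), here is a minimal sketch of the deep-pixel idea in Python. This is purely conceptual — the names and structure are made up for illustration and are not the actual OpenEXR C++ API.

```python
# Conceptual sketch of a "deep" pixel: instead of one value per pixel,
# each pixel stores a variable-length list of (depth, alpha, colour) samples.
# Illustrative only -- not the real OpenEXR API.

from dataclasses import dataclass

@dataclass
class DeepSample:
    depth: float   # distance from camera
    alpha: float   # coverage/opacity of this sample
    rgb: tuple     # colour of this sample

# A deep image maps (x, y) to a list of samples; lists can differ in
# length from pixel to pixel (the "variable-length list" in the release notes).
deep_image = {
    (0, 0): [DeepSample(1.0, 0.3, (0.2, 0.2, 0.2)),   # a wisp of fog
             DeepSample(5.0, 1.0, (0.8, 0.1, 0.1))],  # an opaque surface behind it
    (0, 1): [],                                        # empty pixel: zero samples
}

def sample_count(img, xy):
    """Number of stored samples at a pixel (varies per pixel)."""
    return len(img.get(xy, []))
```

The key difference from a flat image is that pixel (0, 0) above keeps both the fog sample and the surface behind it, rather than a single pre-composited value.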
__________________
LW FREE MODELS:FOR REAL Home Anatomy Thread
FXWARS
:Daily Sketch Forum:HCR Modeling
This message does not reflect the opinions of the US Government

 
Old 04 April 2013   #2
How long do you think before this filters down to the common man? Your average Vray user with Nuke, for example.
__________________
"Don't confuse me with the facts, I've got my mind made up"!

www.willelliott.co.uk

Polycount Checker Max script
 
Old 04 April 2013   #3
Nuke and Vray nightlies are up to date with EXR2.
I see the problem more in storage for the deep data... those files are huge...
__________________
...
 
Old 04 April 2013   #4
I took a look at Wikipedia and some other sites, as I'm not familiar with the deep comp concept.

To my understanding, the new EXR 2 spec allows compositing without garbage masks because every pixel carries depth info? Like ZBrush's "pixol" tech?

So for the EXR 2.0 rendered passes of a watch, for example... the glass, the displaced numbers, the underlying mechanism could be recomposited in Nuke without any masks and would respond well to a Z-depth created in post? If what I understood is right, it really could be worth the extra file sizes, imo.

Are there other practical examples of usage scenarios that take advantage of EXR 2.0 you can give?
__________________
"Any intelligent fool can make things bigger, more complex & more violent..." Einstein
 
Old 04 April 2013   #5
here is an example..
http://www.youtube.com/watch?v=19w3vkFp5X0
 
Old 04 April 2013   #6
Originally Posted by mustique: To my understanding, the new EXR 2 spec allows compositing without garbage masks because every pixel carries depth info? Like ZBrush's "pixol" tech?


afaik, a pixol just uses a zdepth buffer. Deep data is much more advanced.
 
Old 04 April 2013   #7
Originally Posted by oglu: here is an example..
http://www.youtube.com/watch?v=19w3vkFp5X0


thx that made everything clear
 
Old 04 April 2013   #8
Originally Posted by oglu: Nuke and Vray nightlies are up to date with EXR2.
I see the problem more in storage for the deep data... those files are huge...



hopefully not long then

cheers!
 
Old 04 April 2013   #9
Does anybody know much about the mental ray release?
 
Old 04 April 2013   #10
So how is this much more advanced than zDepth?

That Planet of the Monkeys video was good, but I was struggling to see what was different (other than implementation) from what zDepth would give.
__________________
www.jd3d.co.uk - I'm available for freelance work.
 
Old 04 April 2013   #11
Originally Posted by mynewcat: So how is this much more advanced than zDepth?

That Planet of the Monkeys video was good, but I was struggling to see what was different (other than implementation) from what zDepth would give.


From what I understand, it takes into account invisible info behind the object, which it renders (?).
 
Old 04 April 2013   #12
Originally Posted by mynewcat: So how is this much more advanced than zDepth?

That Planet of the Monkeys video was good, but I was struggling to see what was different (other than implementation) from what zDepth would give.

It's not vastly different when dealing with solid opaque materials, but it's miles different when dealing with semi-transparent or volumetric objects.

ie let's say you have a fog layer (this is the classical example), and you now need to place your CG character in there.

Depending on where you place the character, the fog will have a different effect on its look. Before deep compositing, this would have to be faked and/or rendered with that in mind.

With deep compositing, every pixel has information for itself at different depths within the fog. So you can place the character at whatever depth you want, and the compositing program will take into account every pixel sample in front of that depth.

It's less like zDepth and more like raymarching or deep shadows, where the 2D dataset can account for the various samples along its path in 3D space.
(ie my translucent hair can exist on its own but can also take into account the hairs behind and in front of it)
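The fog example above can be sketched in a few lines. This is a toy, single-channel version of the front-to-back "over" compositing a deep merge performs, assuming sorted, premultiplied samples; the function name and data layout are illustrative, not Nuke's or OpenEXR's actual API.

```python
# Toy sketch of flattening one deep pixel: composite its (depth, alpha, value)
# samples near-to-far with the "over" operation. Illustrative only.

def flatten(samples):
    """samples: list of (depth, alpha, value) tuples for ONE pixel.
    Returns (value, alpha) after compositing near-to-far."""
    value, alpha = 0.0, 0.0
    for _, a, v in sorted(samples, key=lambda s: s[0]):
        # "over": what is behind only shows through the remaining transparency
        value += (1.0 - alpha) * a * v
        alpha += (1.0 - alpha) * a
        if alpha >= 1.0:    # fully opaque: nothing behind can contribute
            break
    return value, alpha

# Two fog samples at depths 1.0 and 3.0; drop an opaque character sample
# at depth 2.0 between them -- no masks needed, sorting handles it.
fog = [(1.0, 0.25, 0.5), (3.0, 0.25, 0.5)]
character = (2.0, 1.0, 1.0)
merged = flatten(fog + [character])   # the near fog sample dims the character
```

Because the character is fully opaque, the far fog sample never contributes; only the fog sample in front of it affects the result, which is exactly the "different effect depending on where you place it" behaviour described above.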
 
Old 04 April 2013   #13
Originally Posted by oglu: Nuke and Vray nightlies are up to date with EXR2.
I see the problem more in storage for the deep data... those files are huge...


What is the size of a deep-pixel EXR 2 file at 1920*1080? 100 MB?
 
Old 04 April 2013   #14
Originally Posted by mynewcat: So how is this much more advanced that zDepth?


Much more advanced. It produces far higher quality results and makes depth-compositing much easier - you just need to plug your deep nodes into a deep-merge node and it will handle the depth-sorting for you.

Originally Posted by mister3d: From what I understand, it takes into account invisible info behind the object, which it renders (?).


It can, but it's certainly not the default behaviour, or something you'd want to do except in a few rare cases. If the renderer is not set to render hidden and backfacing surfaces, it won't include them in the deep file.

Deep renders will only record visible depth samples until the pixel becomes 100% opaque along that depth; after that it's assumed that you're not going to see anything behind that element, so rendering stops. Note that it takes much longer to render hidden/backfacing surfaces.

This is why deep data mainly stands out for volume rendering. The same opacity rule applies there, though - once the volume becomes 100% opaque, rendering along that depth stops.

Originally Posted by bigbossfr: What is the size of a deep-pixel EXR 2 file at 1920*1080? 100 MB?


Varies greatly depending on how opaque, how deep and how big on screen it is - it also depends on how many samples you record, as you can control how coarse/fine the depth samples are. A character (which is largely opaque and not very deep) might only take up 5-15 MB per frame, whereas a large dust cloud which is semi-opaque might take up a few GB per frame. With an FX-heavy shot you could end up with deep files taking up quite a few terabytes of space.
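The size argument above is simple arithmetic: pixels x average samples per pixel x bytes per sample. A rough sketch, assuming uncompressed half-float RGBA (4 channels x 2 bytes) plus a 4-byte depth per sample — the channel layout is an assumption, and real files are compressed, so actual sizes are typically smaller:

```python
# Back-of-envelope size of an uncompressed deep frame.
# Assumes half-float RGBA (4 x 2 bytes) + 4-byte depth per sample.
# Illustrative numbers only.

def deep_frame_bytes(width, height, avg_samples_per_pixel,
                     bytes_per_sample=4 * 2 + 4):
    return width * height * avg_samples_per_pixel * bytes_per_sample

# A mostly opaque character, averaging ~2 samples per pixel:
solid = deep_frame_bytes(1920, 1080, 2) / 1e6    # ~50 MB
# A semi-opaque volumetric cloud, averaging ~40 samples per pixel:
volume = deep_frame_bytes(1920, 1080, 40) / 1e6  # ~995 MB
```

This is why sample count dominates: the same frame goes from tens of megabytes to around a gigabyte just by raising the average depth-sample count, matching the character-versus-dust-cloud comparison above.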
 
Old 04 April 2013   #15
Originally Posted by earlyworm: Varies greatly depending on how opaque, how deep and how big on screen it is - it also depends on how many samples you record, as you can control how coarse/fine the depth samples are. A character (which is largely opaque and not very deep) might only take up 5-15 MB per frame, whereas a large dust cloud which is semi-opaque might take up a few GB per frame. With an FX-heavy shot you could end up with deep files taking up quite a few terabytes of space.


The footage I played with had 500 MB per frame... that's no fun...
 