Old 04-10-2013, 03:21 PM   #1
RobertoOrtiz
[Forum Leader]
 
CGTalk Forum Leader
Roberto Ortiz
Illustrator/ Modeler
Washington DC, USA
 
Join Date: May 2002
Posts: 32,025
OpenEXR 2.0 released

Quote:
"
April 9, 2013 - Industrial Light & Magic (ILM) and Weta Digital announce the release of OpenEXR 2.0, the major version update of the open source high dynamic range file format first introduced by ILM and maintained and expanded by a number of key industry leaders including Weta Digital, Pixar Animation Studios, Autodesk and others.

The release includes a number of new features that align with the major version number increase. Amongst the major improvements are:
  1. Deep Data support - Pixels can now store a variable-length list of samples. The main rationale behind deep images is to enable the storage of multiple values at different depths for each pixel. OpenEXR 2.0 supports both hard-surface and volumetric representations for Deep Compositing workflows.
  2. Multi-part Image Files - With OpenEXR 2.0, files can now contain a number of separate, but related, data parts in one file. Access to any part is independent of the others; pixels from parts that are not required in the current operation don't need to be accessed, resulting in quicker read times when accessing only a subset of channels. The multi-part interface also incorporates support for stereo images, where views are stored in separate parts. This makes stereo OpenEXR 2.0 files significantly faster to work with than the previous multiview support in OpenEXR.
  3. Optimized pixel reading - decoding RGB(A) scanline images has been accelerated on SSE processors providing a significant speedup when reading both old and new format images, including multipart and multiview files.
  4. Namespacing - The library introduces versioned namespaces to avoid conflicts between packages compiled with different versions of the library.
"

http://www.openexr.com/
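For anyone who wants to poke at the new multi-part API straight away, here's a minimal, untested sketch against the 2.0 C++ library (the file name "shot.exr" is just a placeholder) that opens a file and lists its parts - each part carries its own header and can be read without touching the others:

Code:
#include <ImfMultiPartInputFile.h>
#include <ImfHeader.h>
#include <iostream>
#include <string>

int main ()
{
    // Open a multi-part OpenEXR 2.0 file ("shot.exr" is a placeholder name).
    Imf::MultiPartInputFile file ("shot.exr");

    // Each part has its own header; parts not needed by the current
    // operation are never read from disk.
    for (int i = 0; i < file.parts (); ++i)
    {
        const Imf::Header &h = file.header (i);
        std::cout << "part " << i
                  << "  name: " << (h.hasName () ? h.name () : std::string ("<unnamed>"))
                  << "  type: " << (h.hasType () ? h.type () : std::string ("<unknown>"))
                  << "\n";
    }
    return 0;
}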
__________________
LW FREE MODELS:FOR REAL Home Anatomy Thread
FXWARS
:Daily Sketch Forum:HCR Modeling
This message does not reflect the opinions of the US Government

 
Old 04-11-2013, 09:05 AM   #2
irwit
aka willelliott.co.uk
 
Will Elliott
Chief Colourer Inner.
RealtimeUK
Manchester, United Kingdom
 
Join Date: Apr 2004
Posts: 1,112
How long do you think before this filters down to the common man? Your average Vray user with Nuke, for example.
__________________
"Don't confuse me with the facts, I've got my mind made up"!

www.willelliott.co.uk

Polycount Checker Max script
 
Old 04-11-2013, 09:49 AM   #3
oglu
Christoph Schädl
 
Christoph Schädl
Austria
 
Join Date: Mar 2003
Posts: 3,243
Nuke and Vray nightlies are up to date with EXR2.
I see the problem more in storage for the deep data... those files are huge...
__________________
...
 
Old 04-11-2013, 01:31 PM   #4
mustique
Expert
copywriter
 
Join Date: Jul 2002
Posts: 2,016
I took a look at wiki and some other sites as I'm not familiar with the deep comp concept.

To my understanding, the new exr 2 spec allows compositing without garbage masks because they have depth info for every pixel? Like Zbrush's "pixol" tech?

So the EXR 2.0 rendered passes of a watch, for example... the glass, the displaced numbers and the underlying mechanism could be recomposited in Nuke without any masks, and would respond well to a Z-depth created in post? If what I understood is right, it really could be worth the extra file sizes imo.

Are there other practical usage scenarios you can give that take advantage of EXR 2.0?
__________________
"Any intelligent fool can make things bigger, more complex & more violent..." Einstein
 
Old 04-11-2013, 01:38 PM   #5
oglu
Christoph Schädl
 
Christoph Schädl
Austria
 
Join Date: Mar 2003
Posts: 3,243
__________________
...
 
Old 04-11-2013, 01:45 PM   #6
CHRiTTeR
On the run!
 
Chris
Graphic designer extraordinaire
Belgium
 
Join Date: Feb 2002
Posts: 4,381
Quote:
Originally Posted by mustique
To my understanding, the new exr 2 spec allows compositing without garbage masks because they have depth info for every pixel? Like Zbrush's "pixol" tech?


afaik, a pixol just uses a zdepth buffer. Deep data is much more advanced.
 
Old 04-11-2013, 02:18 PM   #7
mustique
Expert
copywriter
 
Join Date: Jul 2002
Posts: 2,016
Quote:
Originally Posted by oglu


thx that made everything clear
__________________
"Any intelligent fool can make things bigger, more complex & more violent..." Einstein
 
Old 04-11-2013, 02:55 PM   #8
irwit
aka willelliott.co.uk
 
Will Elliott
Chief Colourer Inner.
RealtimeUK
Manchester, United Kingdom
 
Join Date: Apr 2004
Posts: 1,112
Quote:
Originally Posted by oglu
Nuke and Vray nightlies are up to date with EXR2.
I see the problem more in storage for the deep data... those files are huge...



hopefully not long then

cheers!
__________________
"Don't confuse me with the facts, I've got my mind made up"!

www.willelliott.co.uk

Polycount Checker Max script
 
Old 04-11-2013, 09:55 PM   #9
gauranga108
Lord of the posts
jake mr
makaha, US
 
Join Date: Apr 2010
Posts: 736
Does anybody know much about the mental ray release?
 
Old 04-11-2013, 11:51 PM   #10
mynewcat
Feels the need for speed!
 
Justin Dowling
Freelancer
JD3D CGI
Bristol, United Kingdom
 
Join Date: Mar 2005
Posts: 1,271
So how is this much more advanced than zDepth?

That Planet of the Monkeys video was good, but I was struggling to see what was different (other than implementation) from what zDepth would give you.
__________________
www.jd3d.co.uk - I'm available for freelance work.
 
Old 04-11-2013, 11:56 PM   #11
mister3d
Expert
 
asdasd adsasd
Kiev, Ukraine
 
Join Date: Nov 2004
Posts: 6,041
Quote:
Originally Posted by mynewcat
So how is this much more advanced than zDepth?

That Planet of the Monkeys video was good, but I was struggling to see what was different (other than implementation) from what zDepth would give you.


From what I understand, it takes into account invisible info behind the object, which it renders (?).
 
Old 04-12-2013, 12:40 AM   #12
DagMX
Frequenter
Dhruv Aditya Govil
New Delhi, India
 
Join Date: Jun 2006
Posts: 266
Quote:
Originally Posted by mynewcat
So how is this much more advanced than zDepth?

That Planet of the Monkeys video was good, but I was struggling to see what was different (other than implementation) from what zDepth would give you.

It's not vastly different when dealing with solid opaque materials, but it's miles different when dealing with semi-transparent or volumetric objects.

i.e. let's say you have a fog layer (this is the classic example).
You now need to place your CG character in there.

Depending on where you place the character, the fog will have a different effect on its look.
Before deep compositing, this would have to be faked and/or rendered with that in mind.

With deep compositing, every pixel has information for itself at different depths within the fog.
So you can place the character at whatever depth you want, and the compositing program will take into account every pixel sample in front of that depth.

It's less like zDepth and more like raymarching or deep shadows, where the 2D dataset can account for the various samples along its path in 3D space.
(i.e. my translucent hair can exist on its own, but can also take into account the hairs behind and in front of it.)
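Roughly what the deep merge is doing per pixel, as a toy C++ sketch (illustration only, not the actual OpenEXR or Nuke code - the struct is made up, the maths is just the standard premultiplied "over"):

Code:
#include <algorithm>
#include <vector>

// One deep sample: premultiplied colour + alpha stored at a given depth.
struct DeepSample { float depth, r, g, b, a; };

// Flatten one pixel's deep samples to a single RGBA value: sort the samples
// front to back, then composite them with the "over" operator.
DeepSample flatten (std::vector<DeepSample> samples)
{
    std::sort (samples.begin (), samples.end (),
               [] (const DeepSample &x, const DeepSample &y)
               { return x.depth < y.depth; });

    DeepSample out = {0.f, 0.f, 0.f, 0.f, 0.f};
    for (const DeepSample &s : samples)
    {
        float vis = 1.f - out.a;   // how visible this sample still is
        out.r += s.r * vis;
        out.g += s.g * vis;
        out.b += s.b * vis;
        out.a += s.a * vis;
    }
    return out;
}

// "Placing the character in the fog" is just adding its sample at the chosen
// depth; re-flattening then attenuates it by every fog sample in front of it,
// with no garbage masks and no re-render.
void placeInFog (std::vector<DeepSample> &fogPixel, const DeepSample &character)
{
    fogPixel.push_back (character);
}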
 
Old 04-12-2013, 08:59 AM   #13
bigbossfr
aka DeeX
 
Damien BATAILLE
Renderer/lighter/shading
Paris, France
 
Join Date: Feb 2005
Posts: 577
Quote:
Originally Posted by oglu
Nuke and Vray nightlies are up to date with EXR2.
I see the problem more in storage for the deep data... those files are huge...


What is the size of an EXR 2 deep pixel file at 1920x1080? 100 MB?
 
Old 04-12-2013, 04:00 PM   #14
earlyworm
car unenthusiast
 
Will Earl
craftsperson
Grizzly Country, Canada
 
Join Date: Mar 2005
Posts: 1,685
Quote:
Originally Posted by mynewcat
So how is this much more advanced than zDepth?


Much more advanced. It produces far higher quality results and makes depth compositing much easier - you just need to plug your deep nodes into a deep-merge node and it will handle the depth sorting for you.

Quote:
Originally Posted by mister3d
From what I understand, it takes into account invisible info behind the object, which it renders (?).


It can, but it's certainly not the default behaviour, or something you'd want to do except in a few rare cases. If the renderer is not set to render hidden and backfacing surfaces, then it won't include them in the deep file.

Deep renders will only store visible depth samples until the pixel becomes 100% opaque along that depth; after that it's assumed that you're not going to see anything behind that element, so it will stop rendering. Note that it takes much longer to render hidden/backfacing surfaces.

This is why deeps mainly stand out for use in volume rendering. However, the same rules with opacity apply: once the volume becomes 100% opaque, it will stop rendering along that depth.
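A toy illustration of that early-out (made-up numbers, not tied to any particular renderer): march through a constant-density fog along one pixel's ray, emit a deep sample per step, and stop as soon as the accumulated opacity is effectively 1.0, because nothing behind that point can be seen.

Code:
#include <cstdio>
#include <vector>

struct Sample { float depth, alpha; };

int main ()
{
    const float stepSize  = 0.5f;    // depth covered per march step
    const float stepAlpha = 0.15f;   // opacity each step of fog contributes
    const float opaqueAt  = 0.999f;  // treat this as "100% opaque"

    std::vector<Sample> deepPixel;   // samples that would go into the deep file
    float accumulated = 0.f;
    float depth = 0.f;

    while (accumulated < opaqueAt)
    {
        deepPixel.push_back ({depth, stepAlpha});
        accumulated += stepAlpha * (1.f - accumulated);  // "over" accumulation
        depth += stepSize;
    }

    std::printf ("stored %zu samples, stopped marching at depth %.1f\n",
                 deepPixel.size (), depth);
    return 0;
}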

Quote:
Originally Posted by bigbossfr
What is the size of an exr2 in deep pixel in 1920*1080 ? 100mo ?


Varies greatly depending on how opaque, how deep and how big on screen it is - it also depends on how many samples you record, since you can control how coarse/fine the depth samples are. A character (which is largely opaque and not very deep) might only take up 5-15 MB per frame, whereas a large dust cloud which is semi-opaque might take up a few GB of space per frame. With an FX-heavy shot you could end up with deep files taking up quite a few terabytes of space.
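As a rough back-of-envelope (uncompressed, with made-up averages): one deep RGBA sample at half-float colour plus a 32-bit depth is roughly 12 bytes, and 1920x1080 is about 2.07 million pixels. So averaging 1 sample per pixel is ~25 MB per frame, 10 samples per pixel is ~250 MB, and a thick volume averaging 50+ samples per pixel is already past a gigabyte before compression - which is why the character and dust-cloud numbers above are so far apart.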
 
Old 04-13-2013, 03:09 PM   #15
oglu
Christoph Schädl
 
Christoph Schädl
Austria
 
Join Date: Mar 2003
Posts: 3,243
Quote:
Originally Posted by earlyworm
Varies greatly depending on how opaque, how deep and how big on screen it is - it also depends on how many samples you record, since you can control how coarse/fine the depth samples are. A character (which is largely opaque and not very deep) might only take up 5-15 MB per frame, whereas a large dust cloud which is semi-opaque might take up a few GB of space per frame. With an FX-heavy shot you could end up with deep files taking up quite a few terabytes of space.


The footage I played with had 500 MB per frame... that's no fun...
__________________
...
 