New camera enables focus after snapping shutter


ZacD
07-29-2009, 01:31 AM
http://www.uxbydesign.org/2009/07/22/new-camera-enables-focus-after-snapping-shutter/

The device captures the entire light field entering the lens, which can be compared to a three-dimensional CT scan, enabling doctors to effectively look at the interior of a person from any direction. This technique has several possible advantages. For one thing, being able to focus images after the fact means that cameras could take a picture sooner without waiting for an auto-focus mechanism to lock in. For another, because the depth of field also is adjustable along with focus, a pro photographer could fine-tune a picture to properly blur a background or get just the right amount of a subject in focus.

Sounds like it basically creates a depth pass for the photo and allows focusing and DoF to be adjusted authentically after the photo has been taken. I'd like to see this implemented in video cameras; it could revolutionize the filming process and VFX. Thoughts? Opinions?

I'm sure the data it captures could also be used to mask off people/objects based on distance, which would eliminate the need for green screens on certain shots.
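If the camera's software could export a per-pixel depth channel alongside the RGB, the masking step itself would be trivial. A minimal numpy sketch of the idea (the depth channel and all the array names here are my own assumption, not anything the actual camera exposes):

```python
import numpy as np

def depth_mask(rgb, depth, near, far):
    """Keep only the pixels whose depth falls inside [near, far]."""
    matte = (depth >= near) & (depth <= far)        # boolean matte from distance
    return rgb * matte[..., np.newaxis], matte      # black outside the range

# stand-ins for a real photo and a hypothetical depth channel
rgb = np.random.rand(480, 640, 3)
depth = np.random.uniform(0.5, 20.0, (480, 640))
subject, matte = depth_mask(rgb, depth, near=2.0, far=4.0)   # keep 2-4 m
```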

Animasta
07-29-2009, 01:44 AM
The thread title says "after snapping shutter", meaning it's technically a post effect, right?

ZacD
07-29-2009, 01:55 AM
Sounds like it.

kemijo
07-29-2009, 03:26 AM
I think calling it a "post effect" would be misleading, since that makes it sound like a 2D pixel manipulation. The camera collects enough info at shutter time to change the focal point and depth of field later, using software that can read and use that extra data. It requires this new camera; you couldn't apply the effect to just any image.

It was first mentioned a few years ago on this site. One of the developers is Pat Hanrahan, a founding employee of Pixar and one of the chief architects of RenderMan. Here's a link to the 2005 tech paper describing the technique. I have been looking forward to seeing it materialise in a real product...amazing technology!

http://graphics.stanford.edu/papers/lfcamera/

The same site has a video clip that explains how it works.

http://graphics.stanford.edu/papers/lfcamera/lfcamera.avi

grafikimon
07-29-2009, 01:32 PM
Wow, that is amazing. It's not a post effect. It's like getting an HDR image and picking what range you want for a 24-bit RGB image; that's how you can reveal details in a shadow, for instance. This allows refocusing, and even changing depth of field and position to a small extent. Combined with raw photography, it is going to dramatically improve the way people process images.

I wonder when Photoshop will be able to edit such photos?
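To make the HDR analogy concrete: with float HDR data you pick which luminance window gets mapped into the 8 bits per channel, exactly like choosing what to reveal in a shadow. A toy sketch (the numbers are arbitrary, just for illustration):

```python
import numpy as np

def pick_range(hdr, lo, hi):
    """Map the luminance window [lo, hi] of float HDR data to 8-bit."""
    windowed = (hdr - lo) / (hi - lo)               # chosen range -> 0..1
    return (np.clip(windowed, 0.0, 1.0) * 255).astype(np.uint8)

hdr = np.random.uniform(0.0, 100.0, (480, 640, 3))  # stand-in HDR image
shadows = pick_range(hdr, 0.0, 5.0)                 # reveal shadow detail
highlights = pick_range(hdr, 20.0, 100.0)           # expose for highlights
```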

ZacD
07-29-2009, 04:02 PM
Probably once these cameras are mass-produced. I wonder how much larger this will make the image files. The tilt thing they showed was pretty crazy too.

cyrille32
07-29-2009, 04:12 PM
In my opinion it is a post effect: fine-tuning an image with added channels, like tweaking a 3D render with the help of RPF layers (MOBlur, etc.). But this is just a play on words.

What's even more interesting here is the possibility of making a Z mask for depth compositing. If this works, it means no more blue or green screens for compositing; we'll just use the Z channel.

I know there were some camera prototypes that could shoot a Z channel along with the image, but its quality was only a quarter of the RGB resolution. Things might have improved since then.
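If a full-resolution Z channel ever arrives, the compositing step really is that simple. A minimal sketch of depth keying, assuming both plates come with per-pixel Z (all the names here are invented for illustration):

```python
import numpy as np

def depth_composite(fg_rgb, fg_z, bg_rgb, bg_z):
    """Per-pixel Z comparison: whichever plate is nearer wins.
    The matte comes from depth, not from chroma -- no green screen."""
    nearer = fg_z < bg_z
    return np.where(nearer[..., np.newaxis], fg_rgb, bg_rgb)

fg_rgb = np.random.rand(480, 640, 3)                # live-action plate
fg_z = np.random.uniform(1.0, 10.0, (480, 640))     # its Z channel
bg_rgb = np.random.rand(480, 640, 3)                # CG background plate
bg_z = np.full((480, 640), 5.0)                     # background plane at 5 m
comp = depth_composite(fg_rgb, fg_z, bg_rgb, bg_z)
```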

Stankluv
07-30-2009, 09:45 AM
I suspect that it can't be like a depth pass...not from a single optical exposure, unless it uses radar or something.

My theory is that the data is some type of point cloud that represents the lens volumetrically...each point in the cloud would act like a mini chrome ball, and that data is then sorted and reconstructed per focus setting.

ZacD
07-30-2009, 04:18 PM
There must be some sort of depth information in the data to be able to tweak DoF.
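Even the crudest DoF tweak needs per-pixel depth. Here's a toy 2D approximation of what I mean (this is only the depth-pass version, not whatever the camera actually does internally; the arrays are stand-ins):

```python
import numpy as np

def tweak_dof(rgb, depth, focus_depth, in_focus_range=1.0):
    """Crude DoF tweak from a depth pass: keep pixels near the chosen
    focal distance sharp, box-blur everything else."""
    # 5x5 box blur built from shifted copies of the image
    blurred = sum(np.roll(rgb, (dy, dx), axis=(0, 1))
                  for dy in range(-2, 3) for dx in range(-2, 3)) / 25.0
    in_focus = np.abs(depth - focus_depth) <= in_focus_range
    return np.where(in_focus[..., np.newaxis], rgb, blurred)

# stand-ins for a real photo and its depth pass
rgb = np.random.rand(480, 640, 3)
depth = np.random.uniform(0.5, 20.0, (480, 640))
refocused = tweak_dof(rgb, depth, focus_depth=3.0)   # focal plane at ~3 m
```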

^Lele^
07-31-2009, 01:44 AM
From the whitepaper linked above:
"This paper presents a camera that samples the 4D light field on its
sensor in a single photographic exposure. This is achieved by inserting
a microlens array between the sensor and main lens, creating
a plenoptic camera. Each microlens measures not just the total
amount of light deposited at that location, but how much light arrives
along each ray. By re-sorting the measured rays of light to
where they would have terminated in slightly different, synthetic
cameras, we can compute sharp photographs focused at different
depths."

In other words, it's a clever math trick, made possible by the number of different points of view captured inside a single camera.
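For the curious, the re-sorting reduces to the classic "shift-and-add" synthetic refocus: shift each sub-aperture view in proportion to its offset from the aperture centre, then average. A toy numpy sketch of that trick (the 4D array layout is my assumption, not the paper's data format):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocus of a 4D light field.

    lightfield : (U, V, H, W) array -- one H x W sub-aperture view per
                 sampled direction (u, v)
    alpha      : focus parameter; each view is shifted in proportion to
                 its (u, v) offset from the aperture centre, then all the
                 views are averaged. Sweeping alpha sweeps the focal plane.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))   # shift proportional to the
            dv = int(round(alpha * (v - V // 2)))   # view's offset from centre
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# stand-in light field: 9 x 9 directional samples of a 256 x 256 scene
lf = np.random.rand(9, 9, 256, 256)
near_focus = refocus(lf, alpha=1.5)    # pull the focal plane nearer
far_focus = refocus(lf, alpha=-1.5)    # push it farther away
```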

CGTalk Moderation
07-31-2009, 01:44 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.