PDA

View Full Version : Sayonara to Blurry Pics


tevih
11-21-2005, 06:17 PM
http://wired.com/news/technology/0,1282,69594,00.html?tw=wn_tophead_4

A prototype camera made by a Stanford University graduate student could herald the end of fuzzy, poorly lit photos.

A computer science Ph.D. student at Stanford University has outfitted a 16-megapixel camera with a bevy of micro lenses that allows users to take photos and later refocus them on a computer using software he wrote.

Pretty freakin' cool!! :eek:

ivanisavich
11-21-2005, 06:23 PM
Holy moly! If that's for real, it could change the way we take photos as we know them!

sumpm1
11-22-2005, 03:42 AM
This has already been posted here

eek
11-22-2005, 04:02 AM
Some of Ng's papers:

http://graphics.stanford.edu/papers/fourierphoto/

"main result is a theorem that, in the Fourier domain, a photograph formed by a full lens aperture is a 2D slice in the 4D light field."

umm 4d light slice?:argh:
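Roughly, and up to the paper's exact parametrization, the theorem says a photograph refocused by a factor alpha is a 2D slice through the 4D Fourier transform of the light field:

```latex
% Fourier Slice Photography (Ng), stated up to a 1/\alpha^2 scale factor:
% \hat{E}_\alpha is the spectrum of the refocused photo,
% \hat{L} the 4D spectrum of the captured light field L(x, y, u, v).
\hat{E}_{\alpha}(k_x, k_y) \;=\; \frac{1}{\alpha^{2}}\,
  \hat{L}\bigl(\alpha k_x,\ \alpha k_y,\ (1-\alpha)k_x,\ (1-\alpha)k_y\bigr)
```

So refocusing never has to touch most of the 4D data: it just extracts a different 2D slice.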

For lack of a better word, check this out, it's bl**dy amazing!

http://graphics.stanford.edu/papers/lfcamera/

oh and watch these videos NOW!

http://graphics.stanford.edu/papers/lfcamera/refocus/

eek

Hazdaz
11-22-2005, 09:31 AM
Wow - that is impressive - especially those video clips.

I'm still having a tough time wrapping my brain around how he is doing it, though. Seems to me that he would have to be capturing an insane amount of depth info to get this to work. Like, instead of snapping a 1600x1200 pixel image, he would have to be capturing 1600x1200 x 256 levels of depth (or however accurate his depth is). That is a HELL of a lot of info, if that is how he is doing it.

Most impressive though.

tah
11-22-2005, 09:53 AM
Very impressive...
Gives a great idea for a new render engine ;)

danimat0r
11-22-2005, 12:13 PM
Whoah.


The implications for filmmaking could be mind-boggling. :argh: Still photos with z-depth... freaking cool.

instinct-vfx
11-22-2005, 12:24 PM
Read the paper... it is not capturing the depth of the image, and it's not a post "blur" applied to a completely sharp image. They use an array of VERY small lenses, one per pixel of the final resolution. So each final pixel is represented by a whole group of actual sensor pixels (hence the final resolution is a LOT lower than the 16 megapixels the camera can do). And each circle actually contains multiple light rays for different focuses. So in order to assemble the final image you have to select the right pixel from each of the circles... hard to explain... just read the paper :P
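A toy numpy sketch of that select-and-sum step, assuming a hypothetical layout (not the paper's actual code) where the captured data has been rearranged into a 4D light field with one low-res sub-aperture view per (u, v) direction sample. Refocusing is then shift-and-add: slide each view by an amount proportional to its angular offset, then average.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, S, T), one (S, T) sub-aperture
    view per (u, v) direction sample (i.e. per position under each
    microlens).  alpha picks the synthetic focal plane; alpha = 0
    reproduces the plain all-views average.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # lens centre, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha re-renders the same exposure focused at different depths, which is exactly what the refocus videos show.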

Regards,
Thorsten

P.S. If you want video with z-depth, that has been available for quite a while already. Namely, ZCam is an add-on for your camera that creates a z-buffer (I think it uses infrared to do so)... nice for depth keying, selective corrections, etc.... see

http://www.3dvsystems.com/gallery/samples.html

Blazer
11-22-2005, 12:35 PM
does not compute.... does NOT compute..... DOES NOT COMPUTE...... DDooEssS NNooOTTT CCoomm...PPuutTT..eeEE.ee.ewRfse.kdfhdsfmsd/f.lksdnfksd,nf AAAAAAAAAAAHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH



Seriously though, I wonder if this technique can change the way DOF is applied as a post effect in 3D.

danimat0r
11-22-2005, 01:20 PM
Thanks Thorsten. :) I only had time to skim the article. It's just the closest thing to practical z-depth I've seen. :D

tevih
11-22-2005, 01:27 PM
I'm sorry if this has been posted already, I checked before I posted and didn't see anything. (Still don't see the other post...)

Thanks for showing those videos!! That's insane!! :surprised

Valrik
11-22-2005, 07:53 PM
I'm sorry if this has been posted already, I checked before I posted and didn't see anything. (Still don't see the other post...)


Original Post: http://forums.cgsociety.org/showthread.php?t=290015

Still very cool technology though.

slaughters
11-22-2005, 08:13 PM
Most blurry photos come from dim lighting conditions and the correspondingly slow shutter speeds. This will not fix that problem.

jeremybirn
11-22-2005, 08:28 PM
Like, instead of snapping a 1600x1200 pixel image, he would have to be capturing 1600x1200 x 256 levels of depth (or however accurate his depth is). That is a HELL of a lot of info, if that is how he is doing it.

It seems more like it's using up most of a finite number of pixels sampling parts of the microlenses. It seems as if he has a high-res chip in a camera producing very low-res images, like 320x240.
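Back-of-the-envelope, using the article's figures (a 16-megapixel sensor and about 90,000 microlenses; these are the article's round numbers, not the paper's exact ones):

```python
sensor_pixels = 16_000_000   # article: 16-megapixel sensor
microlenses = 90_000         # article: ~90,000 microlenses

# Each microlens image spends its sensor pixels on direction, not
# position, so the output's spatial resolution is set by the lens count.
pixels_per_lens = sensor_pixels / microlenses
side = round(microlenses ** 0.5)   # assume a roughly square lens grid

print(f"~{pixels_per_lens:.0f} sensor pixels per microlens")
print(f"~{side} x {side} output image")
```

So roughly 178 directional samples per output pixel, and an output image around 300x300 — which is why a 16 MP chip ends up producing something close to the "very low res" images guessed at above.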

Most blurry photos come from dim lighting conditions and the correspondingly slow shutter speeds. This will not fix that problem.

True.

-jeremy

tevih
11-22-2005, 08:47 PM
True, but focusing a camera correctly can often take up valuable seconds (or even fractions of a second) which could cause someone to miss The Shot. With this, you don't have to worry about that.

Thanks for that link - didn't realize it was so old! That news article just came up on wired.com...

enygma
11-22-2005, 09:14 PM
There are a few quotes that make me think we won't see this any time soon.
A computer science Ph.D. student at Stanford University has outfitted a 16-megapixel camera with a bevy of micro lenses that allows users to take photos and later refocus them on a computer using software he wrote.
Ng's camera pits about 90,000 micro lenses between the main lens and sensor.
A photographer could get pretty good results by modifying an 8-megapixel camera with Ng's invention, but it wouldn't be possible to refocus over as wide a range.
So basically, the lower the resolution of the camera, the less focus range this technique is going to give you.

It is a great proof of concept, and it definitely has its applications, but cost may keep it professional-only for the foreseeable future, and even then the resulting image isn't quite the same as having the focus there in the first place, judging from the images in the article. Looking closely, the reconstruction leaves an area around the refocused object that is still out of focus. I'm not sure if that is specific to that image, or if you will see this kind of reconstruction issue in other scenarios.

Props for being able to achieve that capability though. I'll have to look into the paper itself once I have time.

EDIT: The more I read into it, though, the more it seems similar to some of the stuff we are doing with acoustics.

eek
11-22-2005, 09:40 PM
Well, Kodak are releasing 31 and 39 megapixel CCD chips by the end of next year, so I reckon you'll easily get consumer cameras with 12-16 MP sensors. The EOS 1n Mark II has 12 already.

As for outfitting a camera with 90,000 micro lenses, a la a compound eye, I don't know. What I do know is that there are no moving parts and only one lens; it's not having to use multiple lens elements. So as a market product, I don't know. It might become similar to Hasselblad's camera backs.

There are some good articles on plenoptic cameras on the web :thumbsup:

eek

Hugh-Jass
11-22-2005, 09:48 PM
Sounds similar to the technology on crime TV shows where they take a 128x128-pixel satellite image and upres it to see the full text of the university parking ID sticker on the windshield of the car that was previously contained in four pixels...

God, I hope that doesn't mean the end of depth of field... everything being really crisp would be kind of annoying.

enygma
11-22-2005, 10:01 PM
It isn't really the end of DoF. It can be digitally recreated depending on the focus range you want to give it. Each picture that is taken contains far more information about the light at a given sample location, and taking this information into the Fourier domain allows them to adjust the focus based on that additional information.

You could eliminate DoF if you want, or just adjust the focus parameters so that DoF is still in the image, and present at a specified focal range.
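A sketch of the aperture half of that, assuming a hypothetical 4D light-field layout with one sub-aperture view per (u, v) direction sample: averaging only the views near the lens centre mimics stopping the lens down (deep DoF), while using all the views reproduces the original wide-aperture shot (shallow DoF).

```python
import numpy as np

def synthetic_aperture(lightfield, radius):
    """Average only the sub-aperture views within `radius` of the
    lens centre.  radius = 0 keeps a single pinhole-like view
    (everything sharp); a large radius uses the full aperture.
    """
    U, V, S, T = lightfield.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((S, T))
    n = 0
    for u in range(U):
        for v in range(V):
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                out += lightfield[u, v]
                n += 1
    return out / n
```

Combined with a refocus shift, the same capture can be rendered at any f-stop and focal plane after the fact.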

CGTalk Moderation
11-22-2005, 10:01 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.