Rendering Synthetic Objects into Legacy Photographs

Old 10 October 2011   #1

Interesting bit of technology...

http://kevinkarsch.com/publications/sa11.html

Quote: We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.
 
Old 10 October 2011   #2
Me after watching the video = Mind Blown!

That is truly amazing and could save a ton of render time!
__________________
Archerx.com - Portfolio site
CG Cast -Ep35- Not dead yet
CG Chat - Lets Chat About CG!
 
Old 10 October 2011   #3
WOAH!
How do they do that? XD
__________________
Oh the pain!
 
Old 10 October 2011   #4
Originally Posted by Snooba: WOAH!
How do they do that? XD


A lot of math. I hate math.
 
Old 10 October 2011   #5
I can normally follow research papers and sort of see how they're doing it, even though I don't understand any of the maths. But I can't wrap my head around how this one gets the results it does; it's amazing. Really looking forward to seeing this applied to commercial work.
 
Old 10 October 2011   #6
That was pretty mind-blowing. The way he explained the tech made total sense, too. This is how I understood it:

It basically constructs a simple virtual replica of the environment in the photo, based on the user's input of where the light sources are, on shadow detection, and on detecting the shapes of the hotspots the light sources leave in the image. So instead of replicating the scene in painstaking 3D detail, it does it fast, using only the most relevant information. The replicated virtual environment is used solely to calculate the effect the lighting would have on the inserted objects, so the user never needs to see the replica.
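The "invisible replica" idea above boils down to differential rendering: render the proxy scene twice, with and without the inserted object, and add only the difference (the object, its shadows, its color bleed) back onto the photo. A rough sketch of that compositing step, with illustrative names not taken from the paper:

```python
import numpy as np

def composite(photo, render_with_obj, render_no_obj, obj_mask):
    """Differential-rendering composite (illustrative, not the paper's code).

    photo, render_with_obj, render_no_obj: float arrays in [0, 1], shape (H, W, 3).
    obj_mask: (H, W), 1 where the inserted object covers a pixel, else 0.
    """
    obj_mask = obj_mask[..., None]  # broadcast over the color channels
    # Everything the object changed in the proxy scene: shadows, bounce light, etc.
    diff = render_with_obj - render_no_obj
    # Object pixels come straight from the render; everywhere else the original
    # photo is modulated by the change the object caused.
    out = obj_mask * render_with_obj + (1 - obj_mask) * (photo + diff)
    return np.clip(out, 0.0, 1.0)
```

The nice property is that the proxy geometry only has to be good enough for `diff` to look plausible; its absolute rendering quality never reaches the viewer.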
 
Old 10 October 2011   #7
Wow, this is nuts!! Aaaaand a bit scary. I dread to think what Facebook pics will look like in a few years, what with people inserting anything and everything into their own pics
 
Old 10 October 2011   #8
Really impressive, but... what about color bleeding from light bounces? I only see direct lights working.
__________________
Life is too short to be wasted in front of any screen.
 
Old 10 October 2011   #9
Originally Posted by ShinChanPu: Really impressive, but... what about color bleeding from light bounces? I only see direct lights working.


I'm assuming that when you define the walls, ceiling, floor, etc., you've already given the algorithm enough information to calculate the effects of bounced light, because it only has to scan those surfaces in the photo for light and dark patterns, as well as colors, and fit them to the possible bounce pattern from the light sources. It really is quite ingenious how they simplified all of this and made it so easy to use.

If a colored object in the photo is in the middle ground, making the image a bit more complex, I'm sure you could just define it quickly with a bounding box so the algorithm knows it's a freestanding object with a specific color, then emulate it in the virtual scene so it can bounce its color onto whatever you insert.
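The "scan the surface values" step amounts to a per-pixel albedo estimate: under a Lambertian assumption, the observed color is roughly albedo times incoming irradiance, so dividing one by the other recovers the surface color the bounce calculation needs. A minimal sketch, assuming the irradiance has already been estimated from the annotated light sources (names are illustrative):

```python
import numpy as np

def estimate_albedo(observed, irradiance, eps=1e-4):
    """Per-pixel diffuse albedo under a Lambertian model (illustrative sketch).

    observed:   pixel colors scanned from the photo, floats in [0, 1]
    irradiance: estimated incoming light at those pixels, same shape
    Model: observed ≈ albedo * irradiance  =>  albedo ≈ observed / irradiance.
    """
    # eps guards against division by zero in unlit regions; the result is
    # clipped because a physical diffuse albedo cannot exceed 1.
    return np.clip(observed / np.maximum(irradiance, eps), 0.0, 1.0)
```

With albedos in hand, the freestanding object you boxed out can be given that color in the proxy scene and bounce it onto the inserted geometry.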

Last edited by Lunatique : 10 October 2011 at 12:46 PM.
 
Old 10 October 2011   #10
Originally Posted by Lunatique: I'm assuming that when you define the walls, ceiling, floor, etc., you've already given the algorithm enough information to calculate the effects of bounced light, because it only has to scan those surfaces in the photo for light and dark patterns, as well as colors, and fit them to the possible bounce pattern from the light sources. It really is quite ingenious how they simplified all of this and made it so easy to use.

If a colored object in the photo is in the middle ground, making the image a bit more complex, I'm sure you could just define it quickly with a bounding box so the algorithm knows it's a freestanding object with a specific color, then emulate it in the virtual scene so it can bounce its color onto whatever you insert.


Yes, maybe what you describe is what's happening there... but I think you need really simple scenes to get accurate solutions. Small objects can be ignored, but medium and large ones need to be "rebuilt", and their materials may have textures with several contrasting colors (e.g. patterns on curtains, or light filtered through translucent ones!). Quick solutions for sure... but perhaps not much accuracy.
__________________
Life is too short to be wasted in front of any screen.
 
Old 10 October 2011   #11
Originally Posted by vitalmaya: Wow, this is nuts!! Aaaaand a bit scary. I dread to think what Facebook pics will look like in a few years, what with people inserting anything and everything into their own pics


Good point. This is one of many ways 3D models will reach wide consumer audiences in the near future. Then artists can make a good living selling their models online!
 
Old 10 October 2011   #12
Awesome stuff!

The only issue is defining reflectance for each real object in the scene. E.g., if they rolled that ball down a glossy wooden floorboard, I'm pretty sure there would be some inaccurate results.
But you never know... SIGGRAPH blew me away this year, and seeing these vids is constant reassurance that I'm in the right industry
 
Old 10 October 2011   #13
That's awesome. It makes me curious as to whether this will replace traditional methods of integrating 3D into plates.
__________________
This should take less than a few minutes.
 
Old 10 October 2011   #14
We kind of already do this in vfx. Chrome balls and fisheye HDRIs are good and all, but are only really spatially correct from the location where the ball/fisheye was shot.

Certainly within the last two years I've been doing a lot of reprojections of plates and reference photography onto geometry in order to get better reflections and indirect light. Lights also get extracted from HDR reference photos for use as reflection cards (or on rare occasions - as area lights or light emitting geometry).

Although I certainly welcome anything like this that simplifies the process.
 
Old 10 October 2011   #15
Well, if tools like that find their way into compositing apps, a lot of work will get easier and faster.
Now, what about compositing like this into footage with a moving camera?
__________________
Nemoid | Illustrator | 3D artist
.::Creating for you::.
www.lwita.com
 