NEW 3D Texture extraction software from 2 photos.


#1

Hi Guys!

I've just developed a software tool to extract 3D models and 3D textures from 2 photos taken from different viewpoints. Here is an image.
[img]http://www.photosculpt.fr/images/frontpage7.jpg[/img]
The project started as a curiosity 2 years ago, but I've worked on it constantly and now the results are genuinely impressive (well, I think so):
 - ultra-high 3D depth resolution
 - fully automatic
 - incredibly fast
 - exports color, normal, depth, ambient occlusion and specular maps
 - exports OBJ files too
 - beautiful seamless/tileable mode, crop, perspective corrections
 - really fun user experience
 - master the software in 5 minutes

PhotoSculpt is the registered name; here is the logo.

And a brand-new website to go with it:
www.photosculpt.net
(surprises will come there soon)

More Videos here:
http://www.youtube.com/user/hipe0

I'm still working on and finalising the program. What are your needs? What are your expectations? Let's discuss!

#2

I saw your videos and I must say, if those models were made with the help of only two pictures, then your program is really powerful. I hope you continue working on it :slight_smile:


#3

I’ve tried a number of these programs and none of them work very well, so I’m sorry to say I’m skeptical. I also noticed that none of the videos show a wireframe.


#4

Well I’ll bet the mesh is a regular world-space grid, which is kind of expected for a scanned mesh. You would usually re-topologize these to use them for animations, for example with something like CySlice.

I understand how you derive normal/displace/AO maps, but what process do you use for deriving specular maps? Maybe something similar to Crazy Bump… start with an inverse of the diffuse and tweak from there? Would be nice to have some more info.

Ryan Clark had a similar method for generating normal maps from multiple photos, but his system required controlled lighting for each shot. I like that your method uses stereo photos instead, very cool.

Also reminds me of Microsoft’s Unwrap Mosaics.


#5

Hi guys, sorry for not replying sooner (I’m sick and have to stay in bed).

Eric, you’re right, it is a grid, but one so dense you won’t believe it. The level of detail is very high, but of course I understand that such a grid cannot be used directly in production. I actually rarely if ever use or save the grid itself (it’s too big); I use a subdivision of it and export maps instead: displacement maps, normal maps and so on.
Retopo is the other option; my program doesn’t do that.
Specular maps: I obtained good results by playing with a combination of the AO and color (diffuse) maps.
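
(To illustrate, since the exact mix isn’t given here, a rough guess at such a combination might look like the sketch below; the file names, weights and gamma are placeholders, not PhotoSculpt’s actual formula.)

```python
# Hypothetical recipe, NOT PhotoSculpt's actual formula: combine the AO map
# with the inverted diffuse, then tweak the contrast. File names, weights and
# the gamma value are placeholders.
import numpy as np
from PIL import Image

diffuse = np.asarray(Image.open("diffuse.png").convert("L"), dtype=np.float32) / 255.0
ao = np.asarray(Image.open("ao.png").convert("L"), dtype=np.float32) / 255.0

# Darker diffuse areas often read as glossier, so start from the inverse,
# then attenuate with AO so crevices stay dull.
spec = (1.0 - diffuse) * (0.5 + 0.5 * ao)
spec = np.clip(spec, 0.0, 1.0) ** 1.5  # contrast/gamma tweak to taste

Image.fromarray((spec * 255).astype(np.uint8)).save("specular.png")
```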

philnolan3d: I will make one with a wireframe for you and post it here.

Stefanlp: thanks

I will post more soon when I get better. Stay tuned!


#6

Looking forward to it. Hope you feel better!


#7

Looks quite cool and useful, would love to play with it.


#8

This seems just awesome, gonna keep following this thread!


#9

philnolan3d, here is the video you requested, with wireframe mode. Does that answer your question?

http://www.youtube.com/watch?v=e3pOIKsskq8

Here you’ll see 400,000 triangles maximum, to keep the display fluid for the video.
This model is actually a small square crop from the model displayed at http://www.youtube.com/watch?v=WaEP2RwDxVc

I have plenty of models in stock to display (about 10 000). Do you wish to see more? Any other questions?


#10

Oh I see, that’s very nice. I like that you can change the resolution too. Looks like it would be a good match with 3D-Coat’s retopology tools.


#11

wow, impressive :slight_smile:

I wonder what it looks like when you try it with two photos of a face?


#12

That will work for faces too, but it’s not an easy task, and not one for beginners.
I’ll do a special tutorial for this someday.

I can’t wait to show you faces, but please be patient. I need to ‘hire’ some friends who agree to appear on YouTube, organise a shoot when I have the time, and so on.

Have you seen this already?
http://www.youtube.com/watch?v=rzvN3qOeDPw


#13

This looks very promising and interesting! However, I wonder about one thing: can I use more than two photos to generate a model? Say I’ve got 12 or 24 or even 36 photos of a subject from all different directions. Can the program process all of those into a complete model?

/ Magnus


#14

More than 2 photos is not possible with the software right now.

I faced the following problem: doing it in 360° is very difficult. Even the shooting is difficult; you need to make sure the lighting and everything else stays constant all the way around. A turntable is not an option, as it makes the shadows move on the object, so you cannot avoid artefacts. There are also many problems left for the artist to solve: models are often distorted and unusable, the UV map is useless, and at best that means days of manual rework. People who have tried it can testify; I’ve discussed it with a lot of them.

Well, as an artist I don’t want that.

My program is the opposite of that. I wanted it fast and simple to use, with a nice drag-and-drop interface, nice UV maps, no stitching mess, beautiful seamless tileable textures of all kinds (normals, displacement), and everything automatic.

I know you might be disappointed not to have the 360° option, but think of all those wonderful models around you that you can capture anyway, even if you cannot walk all the way around them.

If you really want to, just make 4 models (front/right/back/left) and stitch them together. That’s all I can propose for now. Maybe one day I’ll take on the challenge of making it full 3D, who knows?

I hope I’ve answered your question? It’s an excellent question that many, many people ask. Thanks for your interest. Ideas, suggestions? Please ask!


#15

Thanks for your reply Hipe, I appreciate it!

Yes, you’re right that the light is not uniform all around the object, but I don’t think it matters that much. In some cases it’s important, but sometimes you just want to capture the shape and the textures don’t matter.

Your program already looks good as it is, but I’m thinking about future development and where you could take this application a step further to make it even better.
Maybe it’s something to think about for future versions: run some surveys online and see whether people would be interested in full 360° capture, with the non-uniform lighting and the other drawbacks it brings; that way you’d know whether more people besides me are interested and whether it’s worth spending time coding.

Thanks again for your replies, and don’t worry. :slight_smile:

/ Magnus


#16

One annoyance with photogrammetry meshes is the stretched pixels you get in the color map wherever the surface runs parallel (or nearly so) to the displacement direction. I know replacing those pixels with new data is not easy to solve, but it would help if your tool generated a mask representing the derivative, so I could add detail there in another program. For example, I might use an automated Perlin dUdV function to add randomized detail, masked to the areas with extreme slope.
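
(As an illustration of the kind of mask meant here, one could approximate it outside the tool from an exported depth map; the sketch below is only an assumption-laden NumPy/Pillow example, not a PhotoSculpt feature.)

```python
# Sketch of the requested mask: the gradient magnitude of an exported depth map
# highlights steep (stretched) areas. File names and scaling are assumptions;
# this is an external workaround, not something PhotoSculpt outputs.
import numpy as np
from PIL import Image

depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32) / 255.0

# Slope of the height field: large gradient magnitude means the surface is
# nearly parallel to the projection direction, i.e. stretched texels.
gy, gx = np.gradient(depth)
slope = np.sqrt(gx * gx + gy * gy)

mask = np.clip(slope / (slope.max() + 1e-8), 0.0, 1.0)
Image.fromarray((mask * 255).astype(np.uint8)).save("slope_mask.png")
```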

Curious about the tileable texture option. Any details? PhotoSculpt could be very useful for environment artists.

Do you support floating-point output? 8-bit grayscale is sometimes OK for displacement, but often I need more precision. It would also be helpful to have some options for normal maps, like posX vs. negX and posY vs. negY.


#17

Perhaps instead of handling more than 2 photos for now, you could take 2, turn the object, take 2 more, and so on, then merge the meshes together with something like MeshLab, or probably even more easily in 3D-Coat. Of course the texture wouldn’t work then, but you’d get a full model.


#18

To Eric: Thanks for your very precise questions

Stretched pixels: you have a point. It’s so true that I try to avoid the situation at the shooting stage by shooting objects frontally (for the first photo). Sometimes that cannot be avoided, and manual rework is the only way. At least you have something good to start from.

Adding a derivative filter: well, that’s relatively easy to do, but I fear it won’t work. I’ve tried many functions like that, and you end up over-filtering the good stuff too.

I note your need for floating-point depth map output. I think that could be useful too, but what software can read that today, do you know? Do you mean 16-bit TIFF?

My normal maps cannot be inverted right now, but adding an option shouldn’t be a big deal. Do you confirm you really use that? Which software gives you trouble? Does it have abnormal normals? :o)

Tileable option: you’re right, it’s really a key feature. I’m very fond of this function, which took me a few months to set up correctly. It’s a one-click function with a “mix” option so you can vary the intensity of the tiling effect. It’s all in 3D and, I think, something never seen before. Oh yes, I confirm its usefulness to environment people (and they confirm that too).
For more information:
http://www.youtube.com/watch?v=bJ1RD88sepc
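
(The algorithm itself isn’t described here; as a generic illustration of what a seamless tile with a “mix” strength means, a classic wrap-offset cross-blend looks like the sketch below. It is not PhotoSculpt’s method, and the file names are placeholders.)

```python
# Generic illustration of seamless tiling with a "mix" strength, NOT
# PhotoSculpt's actual algorithm. The image is wrap-shifted by half its size
# and cross-faded in toward the borders, so opposite edges end up matching.
import numpy as np
from PIL import Image

def make_tileable(img, mix=1.0):
    """img: float HxWxC array in [0, 1]; mix: 0 = untouched, 1 = fully seamless."""
    h, w = img.shape[:2]
    shifted = np.roll(img, (h // 2, w // 2), axis=(0, 1))
    # Blend weight rises toward the borders, where the original would show seams.
    wy = np.abs(np.linspace(-1.0, 1.0, h))[:, None]
    wx = np.abs(np.linspace(-1.0, 1.0, w))[None, :]
    weight = np.maximum(wy, wx)[..., None] * mix
    return img * (1.0 - weight) + shifted * weight

src = np.asarray(Image.open("texture.png").convert("RGB"), dtype=np.float32) / 255.0
out = make_tileable(src, mix=0.8)
Image.fromarray((out * 255.0).astype(np.uint8)).save("texture_tiled.png")
```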

To philnolan: I agree, you put it very well. It depends on the subject, though; it’s only doable for a simple shape. A house would be perfect: snap the facades and so on.

To Magnus: yes, you’re right, let’s stay positive; it might be a future development. I hope my explanation made sense.


#19

16-bit TIFF would be good. Many image editing and rendering tools can use this format; for me that means Photoshop and 3ds Max. EXR is another common float format.
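
(For reference, quantizing a float depth buffer to 16-bit TIFF is straightforward, for example with the tifffile package; the sketch below uses a random array as a stand-in for real data.)

```python
# Sketch: quantize a float displacement buffer to 16 bits and save it as TIFF,
# which Photoshop and 3ds Max can read. The random array is a stand-in only.
import numpy as np
import tifffile

depth = np.random.rand(512, 512).astype(np.float32)  # placeholder depth map
lo, hi = float(depth.min()), float(depth.max())
depth_u16 = np.round((depth - lo) / max(hi - lo, 1e-8) * 65535.0).astype(np.uint16)
tifffile.imwrite("displacement_16bit.tif", depth_u16)
```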

Normal map inverting is needed because different shaders use different standards: some want the red channel to point left, others want it to point right, and the same goes for the green channel. Integrating this into your tool means one less step to perform in an image editor. It would be helpful to be able to set this “permanently” instead of per session.
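
(The flip itself is just a per-channel inversion; a minimal NumPy/Pillow sketch, with file names assumed:)

```python
# Sketch: convert a normal map between +Y and -Y (green-channel) conventions
# by inverting that channel. Use channel 0 instead for an X (red) flip.
# File names are placeholders.
import numpy as np
from PIL import Image

nm = np.asarray(Image.open("normal.png").convert("RGB")).copy()
nm[..., 1] = 255 - nm[..., 1]
Image.fromarray(nm).save("normal_flipped_y.png")
```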

Tiling looks great!


#20

Nice work, Hippolyte. I remember reading about relief mapping in a SIGGRAPH paper some time ago. It’s good to see you implementing something commercial and user-friendly. All the best!

-Sachin