So, I’ve decided to try to make a plugin (or something similar) for Maya that can take 2D images rendered from 8 or 16 directions and derive 3D models from them. The original images are all orthographic, viewed from roughly 30 degrees above horizontal, at 8 or 16 different directions around the original object.
I’ve tried multiple methods without much luck, but I’m convinced there has to be a way to do this effectively.
The closest I’ve come so far is using 2D fluid shapes with the appropriate alpha image applied to them via the Paint Fluids tool. Each shape is then converted to polygons and stretched along its depth so that all of the resulting objects intersect. The problem at that point is that the Booleans &gt; Intersection tool doesn’t like objects with holes or strange geometry: once I combine two of them, it refuses to intersect with a third angle and the mesh disappears.
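For what it’s worth, extruding each silhouette along its view direction and intersecting the results is essentially "visual hull" carving, and it can be done on a voxel grid instead of with mesh booleans, which sidesteps the degenerate-geometry problem entirely (you can mesh the voxels afterward with something like marching cubes). Here is a minimal numpy sketch of the idea, not Maya code; for simplicity it assumes the cameras circle the Y axis looking horizontally (adding the 30-degree tilt is just one more rotation), and `carve_visual_hull` is my own illustrative name:

```python
import numpy as np

def carve_visual_hull(silhouettes, angles, res=64):
    """Intersect orthographic silhouettes (boolean images, one per view
    angle around the Y axis) into a boolean voxel occupancy grid."""
    # Voxel centers on a grid spanning [-1, 1] in each axis.
    lin = np.linspace(-1, 1, res)
    x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
    occupied = np.ones((res, res, res), dtype=bool)
    for sil, ang in zip(silhouettes, angles):
        # Rotate voxel centers into this view's image plane.
        c, s = np.cos(ang), np.sin(ang)
        u = c * x + s * z              # horizontal image axis
        v = y                          # vertical image axis
        h, w = sil.shape
        # Map [-1, 1] coordinates to pixel indices.
        ui = np.clip(((u + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
        vi = np.clip(((v + 1) / 2 * (h - 1)).round().astype(int), 0, h - 1)
        # Carve away every voxel that falls outside this silhouette.
        occupied &= sil[vi, ui]
    return occupied
```

With 8 or 16 views this gives the same intersection the boolean tool is failing to produce, and it never cares about holes or non-manifold edges in the input shapes.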
The other method involved taking each of the 2D images and using a projection to turn it into a 3D texture. That’s where things get tricky: I’ve tried using a layered texture to intersect the alpha components and applying the result to various 3D fluid shapes or volume shaders, but I haven’t been able to get anything to work correctly.
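In case it helps to pin down what that layered-texture intersection is computing: a multiply- or min-style combine of projected alphas gives each 3D point a density equal to the smallest alpha any view projects onto it, which is a soft version of the silhouette intersection. A small numpy sketch of that per-point density (same simplified camera setup as above, `projected_density` is a name I made up):

```python
import numpy as np

def projected_density(points, alphas, angles):
    """Density at each 3D point = min over views of the alpha value that
    view projects onto it (what a min/multiply layered texture of
    projected alphas approximates)."""
    density = np.ones(len(points))
    for alpha, ang in zip(alphas, angles):
        c, s = np.cos(ang), np.sin(ang)
        u = c * points[:, 0] + s * points[:, 2]   # horizontal image axis
        v = points[:, 1]                          # vertical image axis
        h, w = alpha.shape
        ui = np.clip(((u + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
        vi = np.clip(((v + 1) / 2 * (h - 1)).round().astype(int), 0, h - 1)
        density = np.minimum(density, alpha[vi, ui])
    return density
```

Feeding that density into a volume shader should match what the layered-texture setup is aiming for, and thresholding it recovers the hard hull.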
I am using an older version of Maya (2012), so there might be something newer that could help with this project.
My plan after the alpha intersections was to use lights from each direction to project the color data from each image onto whatever surface or shape I came up with, generating a texture for the object.
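One detail worth deciding up front for that step is how overlapping projections blend. A common choice is to weight each view’s color by how directly that camera faces the surface point, i.e. by max(0, -normal · view_direction), so grazing and back-facing views contribute little or nothing. A tiny sketch of that weighting for a single surface point (`blend_view_colors` is my own illustrative name, not a Maya function):

```python
import numpy as np

def blend_view_colors(normal, view_dirs, colors):
    """Blend per-view projected colors at one surface point, weighting
    each view by how directly its camera faces the point.
    view_dirs: (n, 3) unit vectors pointing from camera toward object.
    colors:    (n, 3) RGB sampled from each view's image."""
    normal = normal / np.linalg.norm(normal)
    # Front-facing views get positive weight; back-facing views get zero.
    weights = np.clip(-(view_dirs @ normal), 0.0, None)
    if weights.sum() == 0:
        return colors.mean(axis=0)   # no view sees this point; crude fallback
    weights /= weights.sum()
    return weights @ colors
```

With 8 or 16 views this hides most projection seams, since each point is dominated by the one or two cameras that actually saw it head-on.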
In theory there’s no reason why this shouldn’t work, and I’m really interested to see what can be generated from these types of graphics. Any help is appreciated.