We’re looking to get feedback from game developers and artists.
We’ve put together a few videos showing some of the effects you can achieve with our system. They only cover a small subset of the technology at this point, but you may find them interesting, and there’s much more to come. We’re just at the point where we can show this to a wider audience and, we hope, generate more interest among developers.
Whatif Productions has developed a complete solution for developing real-time 3D games (PC and console), and we are now looking for creative teams to partner with. We are trying to find the ideal partner and game design to best showcase our system’s capabilities. We would be the tech group, working directly with an outside group’s creative team to ensure the success of the game, with full support during production.
You can download the videos here…
http://www.whatif-productions.com/video.htm
GameProcessor Sampler
Shows some of the effects you can do.
Intro to GameProcessor Content Development
A step-by-step walkthrough of creating and exporting a simple interactive setup in 3ds max that blends materials and shape.
Pre Alpha Art Tools Overview
A step-by-step look at how bones and meshes are blended, plus an overview of a few of our art tools.
People have asked about the technology behind our blending system… GameProcessor uses a unique system we call possibility mapping.
A possibility map is a multi-dimensional space filled with the poses the artist chooses. Each pose contains a distinct set of characteristics or appearances for the entity. In the case of data from 3ds max, these characteristics may include geometry, materials, UVs, bones, position/rotation/scale, environment maps, special effects, light settings, etc. Other data from outside sources can also be added to the possibility map, including behaviors, sounds, etc.
The space between the poses is where the blending occurs. GameProcessor uses possibility maps to provide a wide range of interaction with the entity, through user input, artificial intelligence, behavioral explanations, etc.
As the entity travels through the possibility map, the pose percentages dynamically adjust, transforming the entity. The poses in the space can be thought of as a point cloud of data, with the possibility map linking these poses together. The shape of the cloud can be tuned by the artist from within 3ds max to create different types of interactivity. One of the strengths of possibility maps is that an artist can express his/her vision without requiring programmer involvement.
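To make the idea concrete, here is a minimal sketch of distance-based pose blending. The pose layout, the inverse-distance weighting, and all names are illustrative assumptions, not GameProcessor’s actual implementation; a single RGB tint stands in for the full set of pose attributes (geometry, materials, bones, and so on).

```python
import math

# Four poses at the corners of a 2D possibility space, each carrying a
# simplified attribute (an RGB tint standing in for geometry/materials/etc.).
POSES = [
    ((0.0, 0.0), (1.0, 0.0, 0.0)),  # demon    -> red
    ((1.0, 0.0), (0.0, 1.0, 0.0)),  # minotaur -> green
    ((0.0, 1.0), (0.0, 0.0, 1.0)),  # alien    -> blue
    ((1.0, 1.0), (1.0, 1.0, 0.0)),  # robot    -> yellow
]

def blend(x, y, eps=1e-6):
    """Blend pose attributes with inverse-distance weights:
    the nearer the entity is to a pose, the more that pose contributes."""
    weights = []
    for (px, py), _ in POSES:
        d = math.hypot(px - x, py - y)
        weights.append(1.0 / (d + eps))
    total = sum(weights)
    result = [0.0, 0.0, 0.0]
    for w, (_, attr) in zip(weights, POSES):
        for i in range(3):
            result[i] += (w / total) * attr[i]
    return result

# At the centre of the space all four poses contribute equally;
# sitting exactly on a pose recovers (almost) pure that pose.
print(blend(0.5, 0.5))  # -> roughly [0.5, 0.5, 0.25]
print(blend(0.0, 0.0))  # -> almost pure red
```

As the entity moves through the space, the weights (the “pose percentages”) shift continuously, which is what produces the transformation.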
The n-dimensional nature of the possibility map allows the artist to arrange how the different attributes are combined into the final solution. Unlike the three dimensions of traditional Cartesian space, possibility maps can contain up to 256,000 dimensions. Poses can be placed onto separate axes of this space, which allows the user/AI/etc. to choose which axes to move through to create the entity’s current state. Movement through the dimensions of the possibility map and movement through the dimensions of the rendered scene are completely separate, although they can be linked if needed.
Rather than using linear interpolation, GameProcessor automatically generates an ease curve between each pose in the possibility space, producing smooth transitions between all poses. Since the ease curve softens the transitions between poses, artists don’t need to hack the speed of their animations to cover up jerky motion caused by traditional animation blending in games. Blending in GameProcessor is not limited to triangle borders; it can be masked at the pixel level.
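The difference between the two approaches can be sketched with a common ease function. Smoothstep is used here purely as an illustration; GameProcessor’s actual curve isn’t documented in this post.

```python
def lerp(a, b, t):
    """Linear blend: constant speed, so it starts and stops abruptly."""
    return a + (b - a) * t

def ease(a, b, t):
    """Smoothstep ease: zero velocity at both endpoints, so the
    transition accelerates in gently and decelerates out gently."""
    s = t * t * (3.0 - 2.0 * t)
    return a + (b - a) * s

# Blending a joint angle from 0 to 90 degrees: near the endpoints the
# eased value barely moves, while lerp changes at a constant rate.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  lerp={lerp(0, 90, t):6.2f}  ease={ease(0, 90, t):6.2f}")
```

The flat start and end of the ease curve are what hide the “pop” you normally see when one animation hands off to another.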
In the videos you can see a character example featuring a demon, a minotaur, an alien, and a robot. The possibility map was created using three axes of poses. Two axes are used for blending vertex shape, materials, UVs, bone weights, etc. The third axis is used to blend bone position/rotation/scale. So in GameToolkit the character can blend between the four characters and between the four bone animations at the same time.
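The independence of the axes is the key point: the setup above can be sketched as a bilinear blend over two appearance axes plus a separate blend along an animation axis. All names and values here are hypothetical stand-ins, with a single scalar in place of the full pose data.

```python
def bilinear(corners, u, v):
    """Blend four corner values by position (u, v) in the unit square:
    this stands in for the two appearance axes of the possibility map."""
    (c00, c10), (c01, c11) = corners
    top = c00 * (1 - u) + c10 * u
    bottom = c01 * (1 - u) + c11 * u
    return top * (1 - v) + bottom * v

def anim_blend(w, pose_a, pose_b):
    """Third axis: bone position/rotation/scale blended on its own."""
    return pose_a * (1 - w) + pose_b * w

# Appearance corners: demon, minotaur / alien, robot
# (a scalar standing in for shape/materials/UVs/bone weights).
shape_corners = ((0.0, 1.0), (2.0, 3.0))

shape = bilinear(shape_corners, 0.5, 0.0)  # halfway demon <-> minotaur
bones = anim_blend(0.75, 10.0, 20.0)       # mostly the second animation
print(shape, bones)                        # -> 0.5 17.5
```

Because the two blends are computed independently, moving along the appearance axes never disturbs the animation blend, and vice versa, which matches the four-character example in the video.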
Additionally, we’re working on an IK-like system for adjusting bones based on collisions so that, for example, feet won’t slide when they contact the floor or when characters climb an incline or stairs.
I can tell you, as an artist, that it’s a lot of fun to play with our system. I can do so much with it, without waiting for others to integrate my work into the interactivity pipeline. I can play with things right away. Very exciting. I can’t wait until we have others developing a game with us.
We’re pretty excited about what’s possible, and we think real-time 3D artists and coders will be interested in the system. We’re curious to hear what you think, or if you have any questions.
[edit… same product, new name]


