Looks like the animation morphing is like vertex morphing, but on a per-object/bone basis instead of per-vertex. I guess you need to define quite a lot of states to have a walk cycle?
If you’re using bones, then you just lay down note keys to “capture” as many poses as you wish. If you’re using morph targets, then you’d need a great many poses for a walk cycle… not recommended, since it’s a terrible waste of GPU/CPU cycles. But hey, if your hardware can take it, by all means feel free to do so.
Blending between poses can be per-bone, per-mesh, per-material-ingredient, per-behavior, etc., even per-vertex. You can use pixels to control blending, so blending can be on a per-pixel basis… depending on whether the hardware supports it; otherwise it falls back to per-vertex. This content-adjusting-to-hardware is managed automatically by the system, which is hardware-agnostic and at the same time hardware-aware.
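To make the per-bone idea concrete, here’s a minimal Python sketch of blending two poses with a separate weight per bone. All the names here are hypothetical, nothing to do with GameProcessor’s actual API:

```python
# Minimal sketch of per-bone pose blending (hypothetical names): each bone
# gets its own blend weight, so e.g. the upper body can follow pose B while
# the legs stay on pose A.
from dataclasses import dataclass

@dataclass
class BoneTransform:
    pos: tuple  # (x, y, z) translation
    rot: tuple  # (x, y, z, w) quaternion

def lerp(a, b, t):
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(len(a)))

def nlerp(qa, qb, t):
    # Flip one quaternion if needed so we interpolate the short way around.
    if sum(x * y for x, y in zip(qa, qb)) < 0.0:
        qb = tuple(-c for c in qb)
    q = lerp(qa, qb, t)
    mag = sum(c * c for c in q) ** 0.5
    return tuple(c / mag for c in q)

def blend_poses(pose_a, pose_b, weights):
    """pose_a/pose_b: {bone: BoneTransform}; weights: {bone: 0..1}."""
    return {
        bone: BoneTransform(
            pos=lerp(pose_a[bone].pos, pose_b[bone].pos, weights.get(bone, 0.0)),
            rot=nlerp(pose_a[bone].rot, pose_b[bone].rot, weights.get(bone, 0.0)),
        )
        for bone in pose_a
    }
```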
You can have contact-based pixel generation to control blending. So for example you could slice into a character’s arm and see the muscle tissue and eventually bone, and even slice all the way through. Where the sword contacts skin, “mask” pixels would start to be generated, which would blend in the “gore” model. You can use as many “levels” as you like, and as the contact generates brighter mask pixels, more levels get blended into that spot. Does that make sense? Think of Resident Evil, with contact-specific damage. Or slicing bread. Or cutting an apple. We’re working on samples to show this off…
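Here’s a rough sketch of how a mask-driven level scheme like that could work, assuming a simple 0–1 damage mask and a fixed number of levels (again, invented names, not our actual implementation):

```python
# Rough sketch of the mask-level idea (all names hypothetical): a contact
# writes brightness into a damage mask, and brighter mask values reveal
# deeper "levels" (skin -> muscle -> bone) at that spot.
def level_for_pixel(mask_value, num_levels):
    """Map a 0..1 mask brightness to a level index (0 = undamaged surface)."""
    level = int(mask_value * num_levels)
    return min(level, num_levels - 1)

def apply_contact(mask, hits, strength=0.25):
    """Each contact brightens the mask at the hit pixel, clamped to 1.0."""
    for x, y in hits:
        mask[y][x] = min(1.0, mask[y][x] + strength)
    return mask

# Repeated hits at the same spot push the mask brighter, blending in deeper
# levels: one slice shows muscle, slicing again exposes bone.
mask = [[0.0] * 4 for _ in range(4)]
for _ in range(3):
    apply_contact(mask, [(1, 2)])
print(level_for_pixel(mask[2][1], num_levels=3))  # -> 2 (deepest level)
```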
Btw, do you still need mirroring in Verty? I’ve no time… but maybe I can take some…
Mirroring isn’t all that necessary; there are ways around it. What I would love to see is some refinement of the Insert New Verts function. It gets the vertex order perfect, but I wish it would give a better position for the new vert, perhaps an average of its neighbors, or even better… an approximation of the source vert’s position relative to its neighbors. While I’m making requests…
I would love to see UV, material ID, and smoothing group info created for the new vert as well. Perhaps just copy this info from the source vert? Or maybe average it from surrounding verts in the target mesh? Verty is great, thanks for sharing it.
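For what it’s worth, here’s a quick Python sketch of both requests, assuming a simple per-vertex data layout (nothing to do with Verty’s actual internals):

```python
# Sketch of the two requests above (hypothetical data layout): position the
# new vert at the average of its neighbors, and copy UV / material ID /
# smoothing-group info straight from the source vert.
def insert_new_vert(target, neighbor_ids, source_vert):
    n = len(neighbor_ids)
    # Average the neighbor positions for a reasonable starting position.
    avg = [0.0, 0.0, 0.0]
    for vid in neighbor_ids:
        for axis in range(3):
            avg[axis] += target["pos"][vid][axis] / n
    new_id = len(target["pos"])
    target["pos"].append(tuple(avg))
    # Copy per-vertex attributes from the source vert.
    target["uv"].append(source_vert["uv"])
    target["mat_id"].append(source_vert["mat_id"])
    target["smooth_group"].append(source_vert["smooth_group"])
    return new_id
```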
I want to see execution of something applicable, as I mentioned in my previous post, before I can appreciate how it works or what it is. So far, I see a modeling/shader type dealie with a scripting language, but nothing related to creating visual interactivity. Is this another SDK, more or less? I’m interested in authoring (being an artist).
We’re working on it! It’s not just an SDK, it’s a whole system. Some of the tools are primitive right now, enough to demonstrate with, but we have a lot more ideas and tools in development. We’re keenly interested in building tools for creating interactivity, since we feel this is one of the real strengths of GameProcessor. As soon as we have something to share, believe me, we will share it.
Check out this demo a friend and I made with 3DS and Director:
Hey petterms, that’s cool. It’s amazing what good lightmaps can do. I like the reflection in the hardwood floor too. I wish there was something more up those stairs…
Calling quick export shouldn’t rely on hard-wired/coded params within the macro scripts.
Well, like the video says, the tools are pre-alpha. So these are preliminary tools at best, the quickest route for us to get functionality until we can start full tools production… soon! I like the persistent vars idea. Right now we save an ini file using the same dir and filename as the max file. Then anytime the exporter sees the two files together, it can use the settings stored in the ini (or you can override). But the ini files will probably be replaced too, we’ll see.
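For the curious, the ini-sidecar scheme works roughly like this. A sketch with made-up key names, using Python’s configparser rather than our actual exporter code:

```python
# Sketch of the ini-sidecar scheme: settings live next to the .max file with
# the same base name, and the exporter picks them up whenever the two files
# sit together. Section and key names here are invented.
import configparser
import os

def load_export_settings(max_path, overrides=None):
    # The ini sidecar shares the .max file's directory and base name.
    ini_path = os.path.splitext(max_path)[0] + ".ini"
    settings = {"scale": "1.0", "export_normals": "true"}  # defaults (made up)
    cfg = configparser.ConfigParser()
    if cfg.read(ini_path) and cfg.has_section("export"):
        settings.update(cfg["export"])  # stored per-file settings win
    if overrides:
        settings.update(overrides)      # explicit overrides beat the ini
    return settings
```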
I don’t use the quick export tools anymore; they’re just quickie one-offs and will certainly be replaced by better tools. The one we do continue to use is the Mass Exporter, for auto-exporting reams of test content, which we do anytime the exporter code is updated, to easily check for any possible errors.
You should add ‘inspector’-like applications/tools.
Yes, that’s the idea once we get to full tools production. Additionally, we’re continually updating the Opoc specification document, currently about 120 pages or so, as we develop the format and the tools. This contains a large amount of information about each feature you can export from Max (and from other tools not yet created)… what is supported, what isn’t, pitfalls to avoid, etc. I haven’t seen this kind of document for many other tools, especially not for interactive apps. It takes a lot of work to develop and maintain… but ultimately it is worth every minute.
I now see that the morphing of poses (a bit like the Ani Pose script I used… is it from Grant?) of course needs something you called “parallelism”.
In GameProcessor (new name, same product), “parallelism” is only needed when blending between two or more separate skeletons. It isn’t needed if you are blending poses for a single character… if so then you just animate as you normally would, then “capture” however many keyframes/poses you want to use (by laying down note keys), and GameProcessor will blend between them to create animation or interaction. Really pretty simple.
But if you have two separate character models, each with their own bones, and you want them to blend, then parallelism is needed… the same bones need to be assigned per vert. This is also actually pretty simple; the tool does a good job of preserving vert weights.
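A tiny sketch of what parallelism boils down to, with a hypothetical skin-data layout: each vert must reference the same set of bones on both characters, even if the weights differ.

```python
# Hypothetical layout: skin_a/skin_b are lists of {bone_name: weight} dicts,
# one per vert. Parallel means matching bone sets per vert on both skins.
def is_parallel(skin_a, skin_b):
    if len(skin_a) != len(skin_b):
        return False
    return all(set(a) == set(b) for a, b in zip(skin_a, skin_b))
```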
Morphing is really not an apt term for what we’re doing, since the blending is a bit more complex. We use Akima splines to connect all the poses, so the space between them can be as curved or as linear as you wish. Curvature can distort both time and space between the poses, adjusting the way the pose network is traversed. Very cool stuff.
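If you want to play with the idea, SciPy’s Akima1DInterpolator gives the same flavor of curve; this sketch (with made-up pose values, and obviously not our actual traversal code) interpolates smoothly through a handful of captured poses:

```python
# Sketch of spline-connected poses: an Akima spline passes through every
# key pose without the overshoot ordinary cubic splines can produce.
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Five captured poses: each row is a flattened pose vector (e.g. all bone
# rotations concatenated); the values here are invented for illustration.
times = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
poses = np.array([
    [0.0, 0.0],
    [0.3, 1.0],
    [1.0, 0.5],
    [0.6, -0.2],
    [0.0, 0.0],
])

curve = Akima1DInterpolator(times, poses)  # interpolates each column
print(curve(0.4))  # an in-between pose, smoothly curved between the keys
```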
Well, I could go on and on. But please do ask more questions if you have them, and I’ll do my best to answer them.
BTW, I haven’t tried Ani Pose, but here’s the page…
http://www.chuggnut.com/scripts/anipose/anipose.htm