Predict next vertex 3D position based on last two positions


I have an Alembic file in which I’m tracking certain vertex positions over time.
In this particular example, the vertex numbering of the model changes quite often, and the vertex I originally chose to track jumps to a different location. So I’m looking for a way to ‘estimate’ where it would have been based on the last known 2 or 3 sets of x,y,z positions. At that point, I can take the ‘estimated’ position and search for the nearest vertex to continue tracking.
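In case it helps frame the question, the simplest version of that estimate is linear extrapolation: assume the vertex keeps moving with the same velocity it had between the last two known frames. A minimal sketch (positions as plain x,y,z tuples is my assumption, not anything Alembic-specific):

```python
# Predict the next position by linear extrapolation from the last two
# known positions: p_next = p_curr + (p_curr - p_prev) = 2*p_curr - p_prev.
def predict_next(p_prev, p_curr):
    """p_prev, p_curr: (x, y, z) tuples for the last two frames."""
    return tuple(2 * c - p for p, c in zip(p_prev, p_curr))

# Example: a vertex moving +0.5 per frame along X keeps moving +0.5.
print(predict_next((1.0, 2.0, 3.0), (1.5, 2.0, 3.0)))  # (2.0, 2.0, 3.0)
```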

Any thoughts on how this would be done?

I’m pretty novice at this, so any detail would help!



Oh, is it a game “Guess where the third one is hiding”?

But on a serious note, please explain why we need to “guess” instead of simply taking it from the file. Usually, the information in the file is straightforward enough.

(it would be helpful if you posted the file you are trying to read)


Let’s say it’s vertex id #1 that we are tracking, and it’s at the right shoulder of a 3D character for the first 30 frames.
With this particular Alembic file structure, the vertex ids change over time. So the vertex id #1 that was at the shoulder at frame 30 is now located at the ankle at frame 31. For whatever reason, there is a different number of polygons and vertices in the file at frame 31. It could be that it’s a moving character and things have been optimized, but the point is that the vertex id number on the right shoulder is different than it was a frame before, and I would like the software to ‘predict’ as best it can which new vertex id is located where the old vertex id ‘should’ have been.
So, I’m storing the x,y,z positions in an array, and I’ve already figured out when the change is going to occur with a simple vertex-count-per-frame check. At that point, I know I need to start tracking a different id number at frame 31…I just want to see if it can be chosen automatically based on where the last good vertex was, by choosing the closest vertex to that location.
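For the “choose the closest vertex” step, a brute-force nearest-neighbor search is enough for a first pass (a KD-tree would scale better, but this keeps it dependency-free). Sketched here assuming you already have the new frame’s vertex positions as a list of tuples:

```python
import math

# Brute-force nearest-vertex lookup: given the predicted (x, y, z) and the
# new frame's vertex positions, return the index of the closest vertex.
def closest_vertex(predicted, vertices):
    """vertices: list of (x, y, z) tuples, indexed by vertex id."""
    return min(range(len(vertices)),
               key=lambda i: math.dist(predicted, vertices[i]))

verts = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
print(closest_vertex((0.9, 1.1, 0.0), verts))  # 1
```

The returned index is the new vertex id to continue tracking from.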

Hope this helps.


Sorry…forgot to add that I can’t post the file right now due to an NDA I’ve signed.


According to what you say, does this mean that each frame can be a different mesh in topology? (different number of vertices, faces, different order of vertices in faces).


You don’t tell anyone. And we won’t … :stuck_out_tongue_winking_eye:


Yes. It’s from a system that does a live capture of a subject, turning it into an animated 3D mesh. So the mesh changes in both vertex count and number of polys, if needed, on a frame-by-frame basis.
It’s then exported as an alembic mesh object.
I’m currently selecting a vertex I want to track, then advancing to the next frame where I know the mesh changes, and having to select a new vertex that is as close as possible to the last ‘real world’ location (meaning the shoulder point I mentioned before).
Here’s a file that may help. It’s in both a text format and excel format.
It shows the manual selections I’m doing, and time (marked by the ‘Y’ in the change column) the mesh changes.
So, at frame 3, I would like to use the data from frames 0, 1, and 2…to predict where it will be at frame 3, where the new vertex is needed. I would then run a ‘get closest vertex’ function that I have to find the real, new vertex closest to that prediction.
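Since three frames are available here, you could go one step past linear extrapolation and fit a constant-acceleration model. A sketch, again with positions as plain x,y,z tuples (my assumption):

```python
# Quadratic extrapolation: fit a constant-acceleration model to three
# consecutive frames (p0, p1, p2) and project it one frame forward.
#   velocity     = p2 - p1
#   acceleration = (p2 - p1) - (p1 - p0)
#   p3 = p2 + velocity + acceleration = 3*p2 - 3*p1 + p0
def predict_frame3(p0, p1, p2):
    """p0, p1, p2: (x, y, z) tuples for three consecutive frames."""
    return tuple(3 * c - 3 * b + a for a, b, c in zip(p0, p1, p2))

# Example: x accelerates (0.0 -> 0.1 -> 0.3), so the next step lands at ~0.6.
print(predict_frame3((0.0, 0, 0), (0.1, 0, 0), (0.3, 0, 0)))
```

The prediction then feeds straight into the ‘get closest vertex’ function you mentioned.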

Alembic Sample.xlsx (17.7 KB)

Alembic Sample for CGS.txt (7.4 KB)


No… it can’t help in any way.

I assume we’re talking about some 3D scanning?

As far as I understand, it’s not about some kind of real-time solution. It’s more about preprocessing to make the mesh deformation data consistent from frame to frame, right?


No, trust me.
Real time capture of live talent.

Here’s a link to the hardware used:


I’m asking about what you want to do with it, not how it’s done. I am quite familiar with the technology described above.


Sorry…guess I’m not explaining it correctly.

Maybe this pic will help.


You’re trying to explain your process, but what I want to know is the ultimate goal you’re aiming for.

As I understand it, you want to achieve the goal of creating a “constant” mesh that preserves its topology during deformation. There are two main tasks involved in this: first, building the “constant” mesh, and second, generating morph-targets for traditional morph-target deformation.

Additionally, there is a potential extension of the task, which includes generating or optimizing the skeleton for the “constant” mesh (optimal bone placement) and calculating the skinning weights.

You are now working on a “constant” mesh… Am I right?


No, I’m not trying to alter the existing mesh at all…it will remain as is. No morphing.
I’m trying to pick a specific point on a mesh and track it through the recorded animation of the meshes as it exists.
I’ll later use that tracking info to attach another item to. That’s my ultimate goal.
I’m already achieving that goal manually, by using the existing vertex for as long as the mesh doesn’t change its vertex assignments… and when it does, I manually have to pick the new vertex that represents that spot on the mesh I want to track, and use that until the mesh changes again.
I was just hoping to automate the process a bit more.
I’m beginning to think that even if I do come up with something, it won’t be accurate enough. The characteristics of the human movement (mesh) are probably too chaotic to be predictable.

Thanks anyway…I appreciate the effort!


I’m confident that predicting the position of a specific vertex from the original mesh in the new sequenced mesh is impossible. It would require knowledge of the triangulation algorithm and manual calculations, which is not feasible.

If your character has textures, I recommend trying to snap to a point on the texture rather than the geometry. This approach would provide a computable 3D position that can be unambiguously calculated for all animation phases (sequenced meshes).
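A minimal sketch of that UV-snapping idea, assuming you can pull per-face UV coordinates and 3D positions out of each frame (the data layout below is hypothetical, not an Alembic API): find the triangle whose UV footprint contains the tracked UV point, then blend its 3D corners with barycentric weights.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in UV triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if den == 0:
        return None  # degenerate UV triangle
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w0, w1, 1.0 - w0 - w1

def uv_to_position(uv, triangles):
    """triangles: list of ((uv0, uv1, uv2), (p0, p1, p2)) per face."""
    for uvs, pts in triangles:
        w = barycentric(uv, *uvs)
        if w and all(x >= -1e-6 for x in w):  # point is inside this face
            return tuple(sum(wi * pi[k] for wi, pi in zip(w, pts))
                         for k in range(3))
    return None  # tracked UV point not covered by any face

# Example: one triangle covering the lower-left half of UV space.
tri = (((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
       ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)))
print(uv_to_position((0.25, 0.25), [tri]))  # (0.5, 0.5, 0.0)
```

This only helps if the UV layout is stable across the topology changes, of course.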


I’m sort of getting to that same conclusion myself!
I knew it wouldn’t (or couldn’t) be exact, but I was hoping to get close.
I’ve thought about looking at the texture method as well, but that’s way out of my league (which isn’t saying much).
I’m afraid that would run into similar issues, in that the texture image also changes every time the model/vertex count does. A new single texture map is created and is used until the count changes again.


What are you working on, a game or a film (movie, cinematic)?


Nothing that high end!
Just trying to make a script that will help streamline a process for a couple of vfx guys for their proprietary process.
It’s closer to game development than anything, but very specific in its needs.

Originally, I was really hoping that with 3 or 4 known x,y,z positions, and maybe figuring out the trend in velocity, it might be possible to predict the next one…but the deeper I get into it, the harder it looks!

As I stated up front, I’m a novice when it comes to scripting, and sadly, my math skills are decades old, which is a big understatement :slight_smile:



does each mesh have exactly the same number of verts?

does the mesh have uv’s?


No…both verts and polys change from time to time, depending on the complexity needed during the live capture of the subject.
Picture a man standing, then crossing his arms across his body…fewer polys are needed where the arms combine together across the chest area.

Yes it has uv’s.


does each model have a unique texture, or is it one for all models?