MaxScript (Animation Face with deformers) Sticky Points! HELP!


The script allows you to create points that are attached to an object. You can deform that object (via any modifier or transformation), and the point will stay stuck to its surface. But the big addition here is that you can animate the point on top of the ‘constraint’. You can use this feature for animating controls on top of your morph animation, as shown in the video. Some animators like this system because morphs allow you to do a fast blocking of the animation, and then you can refine with the control points if needed.

I saw this.
How can I write this in MaxScript?
What technology did he use?


I’ve written a similar system that works like this: you can add clusters on a Mesh/Poly. Besides that, there are some other features, including setting weights, copy/paste/mirror weights, and mirroring the clusters on the mesh. The system does not use any bones or skinning stuff; it works directly on the mesh or poly.
This script will be released soon for free, and then you can see it.


Here is a thread about this topic… I don’t know if you read it:

Hope this helps!

That sounds like a very interesting script!


Hey, as Sami said, the link he points to contains a lot of information related to that. That was a very nice and collaborative thread, with several people aiming for the same goal :slight_smile: So if after reading the thread you are still wondering what’s happening there, I’ll tell you the recipe :wink:

  • 1 Visible Head (the mesh you are going to see in the viewport).
  • 1 Reference Head, which is a copy of the Visible Head. Link it to the Visible Head, so it inherits its transforms.
  • 1 bone, skinned to the Visible Head, linked to…
  • 1 helper, that drives the bone. In the video this is the red sphere. It is linked to…
  • 1 point attached (via an Attachment constraint) to a vertex of the Reference Head, linked to…
  • 1 helper, that is linked to the Reference Head.

Use the script that Phoelix explained in the other thread as a controller on the bone, so it only moves when the red sphere is moved. After placing and skinning the Visible Head, you are done!
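In case it helps, here is a minimal MAXScript sketch of that recipe (all node names are placeholders for your own scene objects, and the bone creation/skinning steps are left out for brevity):

```maxscript
-- hypothetical names: $VisibleHead and $ReferenceHead already exist in the scene
visibleHead = $VisibleHead
refHead = $ReferenceHead
refHead.parent = visibleHead       -- reference head inherits the visible head's transforms

baseHelper = point name:"BaseHelper"
baseHelper.parent = refHead        -- helper linked to the reference head

-- point stuck to the reference head's surface via an Attachment constraint
stickyPt = point name:"StickyPoint"
ac = Attachment()
ac.node = refHead
stickyPt.pos.controller = ac
k = AttachCtrl.addNewKey ac 0f
k.face = 0                         -- 0-based index of a face touching the chosen vertex
k.coord = [1,0]                    -- barycentric coordinate on that face
stickyPt.parent = baseHelper

-- the 'red sphere' control that will drive the bone skinned to the visible head
ctrlSphere = sphere name:"CtrlSphere" radius:1 wirecolor:red
ctrlSphere.parent = stickyPt
```

The bone skinned to the Visible Head would then be linked under ctrlSphere and driven by it, as the recipe describes.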

This system has been used in production and it works OK. I’m also using ‘Sticky Points’ (cool name! :D) in the project I’m working on :wink:


Thanks guys! I am glad to hear it.
Wamo, when you finish the script, please let me know! :beer:
My mail
Thanks for this cool stuff!


I had a go at testing this recently, based on a tutorial I saw. My basic approach is:

One mesh ‘A’ has a Morpher modifier and Skin. Its Morpher has a target referencing a copy of itself, ‘B’, plus the other targets, with automatic update on.

A has a point ‘P’ attached to one of its vertices, which is parented to its corresponding skinned bone. So e.g. if it was on the face it would be parented to a head bone, or its reference space.

A bone is placed at the same vertex position on the reference mesh ‘B’ and is added to a Skin modifier on B along with a root bone.

There is a control shape/object ‘C’ parented to P at its position.

C and B both have position list controllers, with an instanced bezier position controller shared between the two. This is the active controller.

C has a position_script controller after its instanced controller, which has a target variable pointing to B’s instanced controller but negated, e.g. ‘-counter’.

Here’s my summary.

A control object is parented to a point attached to a vertex on a mesh, and a bone is positioned at the same spot on a reference mesh. The control and the bone share a position controller, under a list controller, via instancing. An additional script controller, pointing to the negative value of the bone’s instanced controller, is added to the control. This counters the double transformation that occurs because the reference mesh uses the bone as part of its skin, and is in turn referenced, as a ‘live’ morph target, by the mesh the control’s parent point is attached to.
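To make the instancing part of that summary concrete, a minimal sketch (node names are hypothetical; assigning one controller object into two list slots instances it in MAXScript):

```maxscript
-- hypothetical names for the control object 'C' and the bone on reference mesh 'B'
ctrl = $Control_C
refBone = $RefBone_B

-- give both a position list controller
ctrl.pos.controller = position_list()
refBone.pos.controller = position_list()

-- one bezier position controller instanced into both lists:
shared = Bezier_Position()
ctrl.pos.controller.available.controller = shared
refBone.pos.controller.available.controller = shared
-- moving ctrl now writes keys that the bone reads too, and vice versa
```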


Hi, eek
What if you rotate your head and then try to move the point control - won’t it have an unknown direction (coordinate space)?
Could you please show a simple example scene of this? It would help a lot. Thanks. :slight_smile:


This is the first thing I tried, to make sure its relative space is correct - and it works. So the key is that your two meshes are both skinned to root bones. On the base mesh the attach point is parented to its root bone. On the reference mesh the child bones under the skin are parented to its root bone.

The reference mesh only has a root bone and the bones driving the lips, brow etc., nothing else. The main mesh has the root, and any bones you need like the jaw etc. - but probably not lip bones.

So a basic process could be as follows:

A head mesh ‘A’ has morphs and is skinned to a head and a jaw bone. In its Morpher it references a ‘live’ copy of itself.

This copy ‘B’ just has a Skin modifier, with a root bone and a lip control bone (its child) placed at a vertex position.

The head mesh ‘A’ has a point attached to the same vertex as on the copy. It doesn’t follow the mesh (a checkbox), but is instead parented to the head bone. This is the reference space. The root bone of the copy and the head bone should probably have the same coordinate space. Now a control point is parented to the attached point, and its position is instanced between it and the lip bone on the copy.

Now at this point there’s a double transform, so you need to add a controller to the control that points to the negative of the bone’s instanced position. This counters the doubling.
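A sketch of that counter controller (names hypothetical; assumes the control’s position list already holds the bezier controller instanced with the bone, as described above):

```maxscript
-- hypothetical names: the control and the bone on the reference copy
ctrl = $Control_C
refBone = $RefBone_B

-- append a script controller after the instanced entry in the control's list
ps = position_script()
ctrl.pos.controller.available.controller = ps

-- target variable reading the bone's instanced sub-controller (first list entry)
ps.addTarget "counter" refBone.pos.controller[1]

-- return the negated value, cancelling the inherited copy of the motion
ps.setExpression "-counter"
```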

A side note: if the copy ‘B’ isn’t skinned to the bone, the control parented to the attached point on the main head won’t move at all.

The key is that both meshes have a skinned bone that is the reference for their children: in the head it’s the head bone, the parent of the attach point; in the copy it’s the root bone, the parent of the child lip bone.

All your main bones (head, jaw etc.) need to be on the main head mesh. Only the lip controls (local controls) and a reference-space root bone should be in the skin of the reference.

I did a test where I added extra bones in the reference, like a jaw, and it caused a lot of mess on the main mesh - this is because its reference space is changed.

The math is that the control point is in a reference space relative to its attached point, which in turn is in the same space as its head bone. This is why we can rotate the main head mesh with the head bone and it won’t cause odd coordinate space issues: the attach point is still relative to it, and the control to that.

So the key is that the attached point and the bone on the reference always need to be in the same rotational space. Even if we rotate the head bone, the relative space between it and the attached point hasn’t changed.

If I have some time, I’ll script up a demo for you - really busy at the moment. The thing would be to make a simple script that creates a point, a control point parented to it, and a bone. Then you just parent the point and the bone to their relative spaces and skin away.


Thanks for the info, Charles. I want to take a look at your solution, because right now the system I have for the points on the surface is not compatible with the squash/stretch solution I have for the head in my characters (some controls to offset the visible Head from the main Head bone).


Hey Iker,

How are you getting the correct face for the vertex index? I get odd faces :frowning:



Sorry for asking like this - I didn’t read all the previous posts… Are you talking about Iker’s red control dots on the face?

Why not just pick any face attached to the selected vertex, using GetVertexFace? I guess on a character face it doesn’t matter which face you attach the control object to, as character faces are relatively round most of the time, so it wouldn’t matter which triangle/face the control uses. Or are you using an attachment constraint at all?

I did a sticky point script for myself when the previous thread was going on, but haven’t had time to come back to it since. I went the route of picking locations on faces, not picking vertices.

Then I worked out the location on the face using mapScreenToWorldRay and intersectRayEx, then the position on the face using barycentric coords, and finally, optionally, oriented the control to the face. I just picked locations near a vertex and then the script created the rest of the stuff needed for the meshes.
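For anyone curious, a rough sketch of that flow as a mouse tool (names are made up; assumes the object to pick on is the current selection and is an Editable Mesh):

```maxscript
tool pickSticky
(
    on mousePoint clickNo do
    (
        local r = mapScreenToWorldRay viewPoint   -- ray through the mouse position
        local hit = intersectRayEx $ r            -- #(rayAtHit, faceIndex, baryCoords) or undefined
        if hit != undefined do
        (
            -- drop a point helper stuck to the picked spot via an Attachment constraint
            local p = point name:"StickyPoint"
            local ac = Attachment()
            ac.node = $
            p.pos.controller = ac
            local k = AttachCtrl.addNewKey ac 0f
            k.face = hit[2] - 1                   -- AttachKey.face is 0-based
            k.coord = [hit[3].x, hit[3].y]        -- barycentric coords of the hit
        )
        #stop
    )
)
startTool pickSticky
```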


$.getVertexFace $.selectedVerts[1].index 4 - this doesn’t return any of the faces next to my vertex.

I guess I could go down the road of firing rays etc… I just wanted it to be simple: pick a vert on the editable poly and away you go…


to eek:

Thanks for such a good answer :). All works great, except if I skin my vertex on mesh A to more than one bone - then this vertex has a mixed parent space and it’s hard to animate.

In the attachment: at frame 15 the red sphere moves OK; at frame 30 - oops.



Um. I assumed you already had a vert selected? Thinking about this, I don’t actually know how, or if, it is possible to pick a vertex with the mouse from the screen and get its index. That’s why I went the face route, as intersectRayEx also shows the face number you hit.
But if you have the vert number, you should see the index of the Nth face connected to that vertex, just like you said; it works with EditablePoly. This shows the index of the first face connected to the first selected vertex:

         $.GetVertexFace ($.selectedVerts[1].index) 1
    This prints all faces connected to the first vertex in the vertex selection of an EditablePoly, and selects the last face connected to it:

         a_face = 0
         vert_index = $.selectedVerts[1].index
         vface_count = $.GetVertexFaceCount vert_index
         for a = 1 to vface_count do
         (
             a_face = $.GetVertexFace vert_index a
             format "vertface %'s index is: %\n" a a_face
         )
         -- select the last face connected to the vertex
         $.EditablePoly.SetSelection #Face #{a_face}
         setSelectionLevel $ #face
    But anyway, I don’t know if this is needed - I think intersectRayEx would be the easier way… at least for me, as I don’t know any other way! :)


Hmm… yes, I see: if you rotate a bone influencing the verts of A, weirdness happens. I’ll have to have a think.


testing…never mind… :blush:

It seems that rotating a bone in the skinned mesh that the point is attached to does some oddities to the transform space. If the attached point’s parent is in the same space as the bone it works, otherwise you get issues…


I found that working with EditableMesh is easier than EditablePoly. Sometimes trying to understand why some functions on the latter don’t work as expected drives me nuts. So here’s the function I’m using for getting the vertex number, under EditableMesh:

fn getVertex obj button =
    if obj != undefined do
    (
        snapMode.active = true
        snapMode.type = #3D
        local vertsArr = obj.verts
        local vertexCount = obj.verts.count
        local myPos = for i = 1 to vertexCount collect vertsArr[i].pos as string
        lookedPos = pickPoint snap:#3D
        lookedPos_st = lookedPos as string
        vertexvar = findItem myPos lookedPos_st
        button.text = vertexvar as string -- this is for the UI
        -- get the faces that own the vertex
        surrFaces_bit = meshOp.getFacesUsingVert obj vertexvar -- also available in EPoly >>> polyop.getFacesUsingVert
        surrFaces = surrFaces_bit as array
        -- take the barycentric coords of the vertex on the first face
        baryC = meshOp.getBaryCoords obj surrFaces[1] lookedPos
        snapMode.active = false
    )
So here we have all the data needed, and with this (it’s a piece of code inside a function):
    attachedP.pos.controller = Attachment()
    attachedP.pos.controller.node = selObj -- selObj is the current object
    addNewKey attachedP.pos.controller 0f
    theAKey = AttachCtrl.getKey attachedP.pos.controller 1
    theAKey.face = (surrFaces[1] - 1)
    theAKey.coord = baryC

… and you have a point attached where you wanted it. Hope this clears things up a little… :wink:


Yes, EPoly is a nightmare - thanks for the help.


Allo allo. I solved this problem in a different way, without double transformations and without a reference object. I attached one geometry object to the surface with Skin Wrap, where only one vertex influences this object - so there are no deformations on it. Next I added some bones, which are linked to this object. When you add a morph target and change it, the object follows the surface but the pivot stays in its original place - so the bone stays with it.

Next I added a helper with an Attachment constraint to this Skin Wrap object, to get the local transformation and a pivot in the right position. Then I added the proper control object, linked it to the helper with the Attachment constraint, and wired the bone to it.

The main problem with this recipe is that the controllers will follow the morphing surface but the bones will stay. When you move the controllers in their local transforms, the bones will move in their locals. This is my morning idea; it’s not perfect, but maybe it will inspire you in some way to solve this. I’m quite interested in this topic, because I wanted to add such a function to my rigging tools earlier, but I didn’t solve it and gave up. Sorry for any language mistakes.
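A very rough sketch of that Skin Wrap proxy setup (all names are placeholders; the Skin_Wrap property names and values here are from memory and may need checking against your Max version):

```maxscript
-- small proxy object riding the morphing head via Skin Wrap
proxy = sphere radius:0.5 name:"SurfaceProxy"
sw = Skin_Wrap()
addModifier proxy sw
sw.meshList = #($Head)   -- the morphing surface the proxy should follow
sw.falloff = 0.001       -- very tight falloff, so effectively one vertex drives it

-- helper with an Attachment constraint on the proxy, to recover a clean local pivot
h = point name:"LocalPivotHelper"
ac = Attachment()
ac.node = proxy
h.pos.controller = ac
k = AttachCtrl.addNewKey ac 0f
k.face = 0
k.coord = [1,0]
```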

Here is the link to a sample scene (sorry for rapid, I don’t know where to add an attachment)



What is this intersectRayEx?
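For what it’s worth, intersectRayEx is a built-in MAXScript function that casts a ray at an Editable Mesh and reports what it hits. A quick illustration (the sphere and ray values are arbitrary):

```maxscript
s = convertToMesh (sphere radius:10)   -- any Editable Mesh works
r = ray [0,0,50] [0,0,-1]              -- ray from above the sphere, pointing down
hit = intersectRayEx s r
-- hit is undefined on a miss; on a hit it is an array:
-- hit[1] = a ray at the hit point (position + surface normal)
-- hit[2] = the index of the face that was hit
-- hit[3] = the barycentric coordinates of the hit on that face
```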

Rivet (Maya): it constrains a locator to polygon or NURBS surfaces. Select two edges on a polygon surface, or one point on a NURBS surface, and click the rivet icon.

The script creates a locator. After that you can parent or constrain any object to it, or constrain objects to the locator from another hierarchy.
Note: the script doesn’t create expressions. Calculation is fast and interactive!