Eye joints - what and why?


#1

Hey everyone!

I’m making a rig right now, and I can’t figure out how to rig the eyeballs.

I know some people do it with joints: just place a joint where I want the pivot, bind the eyeball to it, then add an aim constraint/orient constraint, etc.

I was thinking though, why add a joint to the eyes if I can just parent them to an existing joint (like the base of the head joint), and give them an aim constraint based on their existing pivot?
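Roughly what I mean, as a sketch (all object names are just placeholders, not anything from an actual rig):

```python
import maya.cmds as cmds

# Placeholder names throughout: 'L_eye_geo' is the eyeball mesh with its pivot at the
# centre of the eye, 'head_jnt' is the existing head joint, and 'L_eye_aim_ctrl' is a
# control floating in front of the eye.
cmds.parent('L_eye_geo', 'head_jnt')
cmds.aimConstraint('L_eye_aim_ctrl', 'L_eye_geo',
                   aimVector=(0, 0, 1), upVector=(0, 1, 0),
                   worldUpType='objectrotation', worldUpObject='head_jnt',
                   maintainOffset=True)
```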

I can’t see the benefit of using a joint over not using one, so I thought I’d ask here.

Right now I’m only looking for a way to rig the eyeballs, as the eyelids will be blendshaped. If I were rigging the eyelids too, I could see the benefit of making an eye joint, but if not, I really can’t figure out the difference between having one and not having one.

Thank you all :)


#2

A joint is generally only used to deform geometry, so if you don’t deform your eyeballs, feel free not to use them.
As a general rule, follow your intuition and run your own tests; don’t blindly follow tutorials from the internet. As long as it works for you, that’s cool!


#3

Copied from Jason Osipa’s Stop Staring, 2nd edition:

No Parenting to Joints
In most rigs, even mine discussed in other chapters, you are usually told to parent things that can be parented. It runs a little faster interactively in the scene, and well, if you can just parent some geometry to some joints, why not? The problem is, if you parent something like the eyes to eye joints in rig one and then bend and twist them to be squishy in rig two, the eyes will basically “fall out” of any blend that has both rigs plugged in at once, and that is annoying and confusing.

My facial rigs always have several layers blending into one bridge geometry, so I don’t just parent geometry to any joint. The other reason is to keep things organized: if you parent your eyeball geo to the eyeball or head joint and then hide the joints to keep them away from animators, the eyes disappear with them. Just to be safe. My animators are quite naughty; you can tell them to turn the joint display filter off, but it’s better not to let them see any joints at all.


#4

Thanks for the replies!

So I understand I shouldn’t parent the eyes to the base head joint?

What is the solution if that’s the case?
I do want to deform the head in this rig. I was thinking of making a lattice to hold all of the head parts and driving it with set driven keys… but I haven’t tested it yet.

But the eyes are still bugging me. Is there a tutorial I can read or something specific you can tell me to do with the eyes?

Thanks again.


#5

Eyes… well, do automatic soft eyelids! Make your eyelids move with your eyeballs the way they do in reality. Try searching for ‘soft eye rigging’; there are many methods. Don’t forget to make a switch for it so the animator can decide to turn the automatic effect on/off.

Try using a wire deformer on the lattice that deforms the whole head. You’ll get nice cartoony effects in no time!
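One way to read that, as a rough sketch (placeholder names; you could also put the wire on the head copy itself instead of the lattice):

```python
import maya.cmds as cmds

# Placeholder names: 'head_local' is a local copy of the head, 'head_wire_crv' is a curve
# drawn where you want the cartoony deformation to come from.
ffd_node, lattice, base_lattice = cmds.lattice('head_local', divisions=(4, 5, 4),
                                               objectCentered=True)
# Drive the lattice points with the wire so moving the curve bends the whole head region.
cmds.wire(lattice, wire='head_wire_crv', dropoffDistance=[(0, 20)])
```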

Apply each deformation to a different copy of the head, then blend all of them into one bridge head using parallel-mode blendShapes, and then blend that into your global head, which will have only a single blendShape and a skinCluster on top.
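In script form, the layering could look something like this (a minimal sketch with placeholder names):

```python
import maya.cmds as cmds

# Placeholder names: each local head carries one kind of deformation, 'bridge_head'
# collects them, 'global_head' is the mesh the animator actually sees.
local_heads = ['head_lattice_local', 'head_wire_local', 'head_shapes_local']

# Stack each local head onto the bridge with a parallel-mode blendShape so their
# deformations add together instead of fighting over the deformation order.
for head in local_heads:
    cmds.blendShape(head, 'bridge_head', parallel=True, weight=(0, 1.0))

# The global head reads the summed result through a single blendShape placed in front
# of its skinCluster.
cmds.blendShape('bridge_head', 'global_head', frontOfChain=True, weight=(0, 1.0))
```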


#6

Wow, I’ve been toying around with the wire tool since I read what you said… this is just plain amazing. Seeing things come to life and start working is so satisfying :) Thanks so much :)

I do have a few questions (sorry, this has opened a Pandora’s box now ^^)

I managed to make a wire and put a blendshape on it so it deforms according to the curve of the eye as I rotate the wire around the required pivot.
The problem is that the end result of the deformation is inevitably a linear movement from the top of the eye to the bottom (even when not using a blendshape, it’s just a deformer pulling verts in a certain direction).

How do I keep the curve of the eye on the eyelid? I was thinking of making a wire deformer for each edge loop on the eyelid and having them move relative to each other using set driven keys, but that seems really messy, and there must be some cleaner way…

Also, I tested the lattice+wire method on the whole face, and it worked like magic! It’s so awesome. My buddy will go crazy tomorrow when I show it to him, haha. The only thing I noticed after doing this is that the wire deformer I put on the eyelid doesn’t follow the entire face mesh, not even when I parent the face mesh on top of it. The curve moves, but the effect of the wire doesn’t. It’s as if the actual deformer is hidden somewhere and all I’m moving is its controller, which doesn’t move its area of effect.

How do I make it so the whole thing moves? So that when I move the face, or deform it with the lattice, the wire curve will move and take the wire’s effect with it according to the way I’m moving it (so that the eyelid closing won’t be messed up after I put on the lattice).

Hope the question is clear enough… I wasn’t sure how to put it into words, so sorry for the tl;dr.

And thanks again. I’m starting to suspect we just might have a viable rig at the end of this crazy process. :)


#7

As I said, each deformer is applied to a different copy of the mesh (the head). Then all of them are plugged into the ‘bridge mesh’, which holds everything. So my answer to your question is: yes, you create the deformers on different copies of the head (the local rig). You might end up with five different copies of the mesh with different deformation effects: one for the eyelids, brow, and jaw using wires and joints; another for the whole face deformed by a lattice and a wire; and another holding six blendShape nodes for every single part of the face. Just make sure they don’t have to move together.

They all get added together on that bridge mesh. This means all those heads, including the bridge head, DO NOT HAVE TO MOVE at all. They stay where they are; only the deformation data is passed through via blendShape. It’s the main rig (the global rig) that the animator actually moves around the scene, and it has a single blendShape coming from the bridge mesh.

Create direct connections from your global rig controls to the individual clusters that control your wire curves, or to the joint rotations/translations, and you’ll get all the effects on the global rig. Don’t use any constraints between the local rigs and the global rig, since the local rigs must stay still; a constraint would try to move them with your global rig.
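For example, a direct connection rather than a constraint could look like this (placeholder names):

```python
import maya.cmds as cmds

# Placeholder names: 'face_ctrl' lives on the global rig, 'upperLid_clusterHandle' is the
# cluster handle driving a wire curve on the local rig, which never moves in world space.
if not cmds.attributeQuery('upperLid', node='face_ctrl', exists=True):
    cmds.addAttr('face_ctrl', longName='upperLid', attributeType='double', keyable=True)

# A direct connection passes the value across without pulling the local rig around.
cmds.connectAttr('face_ctrl.upperLid', 'upperLid_clusterHandle.translateY', force=True)
```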

For the eyelids I usually use joints. Place them at the center of the eyeball: one for the ball, one (or more) for the upper lid, and the rest for the lower lid. Do some careful weight painting and you’ll get a nice arc motion on the eyelid when you rotate the joints.
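Something like this (placeholder names and positions):

```python
import maya.cmds as cmds

# Placeholder values: the eyeball centre is assumed to sit at this world position.
eye_centre = (3.2, 160.0, 4.5)

cmds.select(clear=True)
upper_lid_jnt = cmds.joint(position=eye_centre, name='L_upperLid_jnt')
cmds.select(clear=True)
lower_lid_jnt = cmds.joint(position=eye_centre, name='L_lowerLid_jnt')

# Bind the lid area (here a local copy of the head) and refine with the Paint Skin
# Weights tool; rotating the joints then gives the arcing eyelid motion.
cmds.skinCluster([upper_lid_jnt, lower_lid_jnt], 'head_lid_local', toSelectedBones=True)
```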

In the lattice’s Attribute Editor, set the ‘Outside Lattice’ attribute to ‘All’ so it won’t go crazy when part of the mesh falls outside the lattice area.
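Or, in script form (assuming the deformer node is called ‘ffd1’; 1 is the ‘All’ setting):

```python
import maya.cmds as cmds

cmds.setAttr('ffd1.outsideLattice', 1)  # 0 = Inside, 1 = All, 2 = Falloff
```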


#8

Thanks again, I really appreciate the detailed help.

I will have to take a look at how all of this works in practice, connecting things using blendshapes (I never thought about using blendshapes to link deformations… it’s really useful).

I hope I won’t have too many further questions. :)

Thanks again! I have a lot to work with.


#9

Hey again,

So there’s been some progress. What I ended up doing with the eyes is rigging them with joints and painting the weights as best I could, but because the character’s eyes bulge a lot, I had to find a way to complete the curve of the eyelid at the extreme “eye closed” position. So I created a blendshape that does that and set a driven key between the joint and the blendshape. It works pretty well with some tweaking.
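The driven key part looks roughly like this (all names are placeholders for my rig, and the driver values are just examples):

```python
import maya.cmds as cmds

# Placeholder names: 'L_upperLid_jnt' rotates the lid, 'eyelid_bs.lidClosedFix' is the
# corrective blendShape target that completes the eyelid curve at the closed extreme.
# Corrective shape off while the lid is open...
cmds.setDrivenKeyframe('eyelid_bs.lidClosedFix',
                       currentDriver='L_upperLid_jnt.rotateX',
                       driverValue=0, value=0)
# ...and fully on at the closed extreme.
cmds.setDrivenKeyframe('eyelid_bs.lidClosedFix',
                       currentDriver='L_upperLid_jnt.rotateX',
                       driverValue=35, value=1)
```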

I have a question about blendshapes though. We’re working with references.
When I rig the character, the model comes from a separate file, and I don’t know if I should create the blendshape models inside the model file and bring them along with it, or just make them in the rig file.

Is there any preferred way?

Also, I’m struggling to find a way to apply blendshapes to only half the face.
I have my eyelid blendshape model, with both eyelids down, and I want them separated.
How is this usually done?

Thanks :)


#10

Hi,
Why reference the model? Is there any specific reason to keep the model and rig separated?

My best answer would be to keep all the blendshape geo and the bridge geo in the rig file, because you have to make connections to the blendShape node, and that way, if you ever lose your reference, you won’t lose those connections. You can just re-reference the model, hook a blendShape from the bridge geo to the model, and you’re good to go.

You can paint weights on a blendShape! Go to Edit Deformers > Paint Blend Shape Weights Tool.
Select the verts, flood one half black (zero), and duplicate the result; now you have only one half of the influence. Then do the same for the other half.
I also recommend a free tool called ‘abSymMesh’. The script has been around for a long time but is still useful.
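If you’d rather script the split than paint it, the same idea can be done by setting the per-vertex target weights on the blendShape node. A rough sketch (placeholder names, assuming the face is roughly symmetrical across X):

```python
import maya.cmds as cmds

# Placeholder names: 'face_geo' is the base mesh, 'face_bs' is its blendShape node, and
# target index 0 is the shape with both eyelids down.
geo = 'face_geo'
for i in range(cmds.polyEvaluate(geo, vertex=True)):
    x = cmds.pointPosition('{}.vtx[{}]'.format(geo, i), world=True)[0]
    weight = 1.0 if x >= 0 else 0.0  # keep one half, zero out the other
    cmds.setAttr('face_bs.inputTarget[0].inputTargetGroup[0].targetWeights[{}]'.format(i),
                 weight)
```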


#11

We reference the model because it’s not final yet, and due to a very tight schedule we have to clear the rigging out of the way before all the models are finalized… And since we’re about 3 people working on everything, and I have most of the experience, we end up working a lot with references to be able to push the whole thing forward together instead of one step at a time.
Luckily, no reference trouble so far…

I solved the blendshape problem, it all works awesome now. Thanks!

I do have a big problem with the eyelids though. I thought I had solved it before, but apparently it’s not good enough yet.

I looked around for different solutions to this and found a few, but all of them involve some insanely complex rigs. I heard that the Animation Mentor rig (Bishop) has something like a joint per vertex for the eyelids, which is something I considered. I also thought about making wires that deform with a blendshape based on their rotation around the eyeball pivot, and putting them on the edge loops around the eyeball…

But all these solutions seem so convoluted and overly complex; isn’t there a more straightforward way?

The problem with this character’s eyes is that they bulge out quite a bit, and the eyelid really has to travel around almost half a sphere before it’s completely closed. That means there’s always a point where the eye underneath pokes through the mesh.

This is really frustrating. :)


#12

Using joints on eyelids, you have to have clean topology and quite a lot of edge loops around the eye. The joint orientation has to be properly aligned to the eye socket angle too.

A more straightforward way is to use in-between blendshapes. You can first do the eyelid rotation with joints, duplicate the mesh at several positions, and then use the duplicates as in-between blendshapes. This way you can tweak vertices by hand, which is very straightforward.
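Adding the in-betweens can be scripted too. A minimal sketch (placeholder names; the duplicates are assumed to be captured with the lid joint at 25%, 50%, and 75% closed):

```python
import maya.cmds as cmds

# Placeholder names: 'head_geo' is the base head, 'lid_closed_full' is the fully closed
# duplicate, and the others are duplicates captured part-way through the joint rotation.
bs = cmds.blendShape('lid_closed_full', 'head_geo', name='blink_bs')[0]
for shape, w in [('lid_closed_25', 0.25), ('lid_closed_50', 0.5), ('lid_closed_75', 0.75)]:
    # Each duplicate becomes an in-between on the same target, so the blink follows the
    # arc you sculpted instead of moving linearly from open to closed.
    cmds.blendShape(bs, edit=True, inBetween=True, target=('head_geo', 0, shape, w))
```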


#13

Wouldn’t that require many inbetween blendshapes to make the linear transition less obvious? Or is there a way to make the transition spline-like?


#14

Yes, you will need quite a number of in-betweens. But don’t forget this is an eyelid; unless you have an extreme close-up shot of an eye in slow motion, no one will see that linear motion. One advantage you’ll get over joints is total control over each vertex. Or you can just use joints and make some corrective blendshapes where things go wrong.


#15

We do have a couple of shots with a full head close-up, with blinks in them too… I’ll have to get it looking right.

I tried making corrective blendshapes, but there’s ALWAYS an area with part of the eye pushing out, unless I use a lot of in-betweens, which ruin the smooth motion of the joint-based deformation.

What about using clusters or something? Wouldn’t that be a good solution?

By the way, thanks to the help you’ve given me, our rig is already far better than we could ever have hoped for. I’d really like to thank you. :)

All that’s left is to solve the eyelid problem, do some body deformation, and set up a wrap deformer to transfer all the information to the high-poly character.


#16

Hi,
If you want to stick with joints, you may want to make sure your eyelid area is capable of rotating in an arc, which means you may have to take a closer look at your model. It should have even spacing around the eyeball. Some sculpting should stop the eye from pushing out. Having the eyelid half-way closed in the default pose is also a good idea. I always use joints; they’re a lot easier to manage than in-between blendshapes, and I usually run into the exact problem you’re having here. So tweaking the model should do it.

ps. you’re welcome.


#17

Eyes half-closed is very clever! Though I think it’s a problem now that I have quite a few blendshapes applied, all of them built from the previous default eye position…

Unless there’s a way to change the eyelid default position after I rigged all the facial blendshapes without having to redo them all?


#18

If the blendshapes you were talking about don’t affect the eye socket area, I think the ‘abSymMesh.mel’ script would do the job, but only with the same topology - so you can’t add edge loops.

Tweak the vertices around the eye socket all you want on a duplicate geo, then select that geo as the ‘base mesh’ (hit the ‘select base geometry’ button). Then you can go to each blendshape geo, select the edge loops around the eye socket, convert the selection to vertices, and hit the ‘revert selected to base’ button. The script will go through each vertex you selected and tweak it to match the base with the new eye tweaks.


#19

After much research, I found an AMAZING Python-based script that basically does the whole eyelid deformation PERFECTLY.

Here’s a link:
http://www.paolodominici.com/products/zvradialblendshape/

It works like magic. It basically creates a pivot around which the vertices rotate instead of going linearly from point A to point B. Really, it solved the whole thing in a second; it works amazingly well.

Now, to another problem (this just doesn’t end…)
I tried to rig soft eyelids for the character, and it works well until I bend the character left and right from the spine (and I guess any other way…). I have no idea what’s causing this.
Basically, I used set driven keys to connect the rotation of the eyeball (the mesh itself) to the radial blendshape values, which lift and lower the eyelids. I used the eyeball’s X axis to control those. Then I connected the eyeball’s Y axis to a twist parameter, to widen the eyelids when the pupil goes left and right.

My eyeball mesh is parent constrained to a joint. I’m using an ikRP solver to aim my eyes, so this joint is the parent of another joint, which is connected to the aim controllers.

So it’s a joint that rotates my eyeball, and the eyeball is moving the blendshapes of the eyelids using a set driven key.

If this is not enough info, I’d be happy to send over the rig for you (or anyone else who’s interested) to look at and try to figure this out… this is really puzzling me right now. I tried 3 ways to rig the eyeball to the blendshape controls. This one works best, apart from this quirky problem.

Thanks again, and I hope I at least helped someone a bit with the script I posted above. :)


#20

Hi,
I can’t quite follow your explanation, but I do have some general guidance:

The up-and-down motion of the soft eyelid is probably the same as the eyelid closing and opening, so the blendShape that does this should be driven by the sum of two values:

  1. From the eyeball joint, which is orient constrained to the eyeball control, whose group is aim constrained to the eye aim control.
    *(A constraint changes the rotate values, so we can use those values to drive something else; parenting makes the child follow, but no values change.)

  2. From an attribute the animators are going to animate.

    • The soft eyelid should have some kind of switch to turn the behavior on/off in case the animator wants to control it manually. Using a blendColors or multiplyDivide node to multiply the value by zero is fine; see the sketch just after this list. You might feel more comfortable with set driven keys, but it can be a little difficult to set up this kind of switch for the animator that way.
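A minimal sketch of that sum-plus-switch as a node network (all names are placeholders, and it assumes looking down is about +30 degrees on the joint’s rotateX):

```python
import maya.cmds as cmds

# Placeholder names: 'eye_jnt' is the orient-constrained eyeball joint, 'eye_ctrl.lidFollow'
# is the animator's manual attribute, 'eye_ctrl.autoLid' is the 0-1 on/off switch, and
# 'lid_bs.softUpperLid' is the blendShape weight being driven.

# 1. Remap the joint rotation (0..30 degrees assumed) into the 0..1 blendShape range.
sr = cmds.createNode('setRange', name='autoLid_setRange')
cmds.connectAttr('eye_jnt.rotateX', sr + '.valueX')
cmds.setAttr(sr + '.oldMinX', 0)
cmds.setAttr(sr + '.oldMaxX', 30)
cmds.setAttr(sr + '.minX', 0)
cmds.setAttr(sr + '.maxX', 1)

# 2. Multiply the automatic value by the switch, so 0 turns the behaviour off entirely.
md = cmds.createNode('multiplyDivide', name='autoLid_switch')
cmds.connectAttr(sr + '.outValueX', md + '.input1X')
cmds.connectAttr('eye_ctrl.autoLid', md + '.input2X')

# 3. Add the animator's manual value on top, then drive the blendShape weight.
pma = cmds.createNode('plusMinusAverage', name='autoLid_sum')
cmds.connectAttr(md + '.outputX', pma + '.input1D[0]')
cmds.connectAttr('eye_ctrl.lidFollow', pma + '.input1D[1]')
cmds.connectAttr(pma + '.output1D', 'lid_bs.softUpperLid', force=True)
```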

The side-to-side soft eye motion is another blendshape, driven by some other value (rotateY?).

Since they’re two different blendShapes, you might have to check that they combine well together. Imagine the character looking to the left and blinking at the same time.

The problem you’re having might come from the ikRP solver you’re using. ikRP needs a pole vector to determine which direction is ‘up’ when aiming. Try an aimConstraint with ‘object rotation up’ mode instead.

The hierarchy of controls would be:

[[eyeGeo]] >> parentCons/smoothBind >> [[eyeJoint]] >> orientCons >> [[eyeControl]] >> aimCons >> [[eyeAimControl]]
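A rough sketch of wiring that chain up, with the aim using ‘object rotation up’ as suggested above (all names are placeholders, and ‘head_ctrl’ is just an assumed stable up reference):

```python
import maya.cmds as cmds

# Placeholder names throughout. eyeGeo follows eyeJoint, eyeJoint copies eyeControl's
# rotation, and eyeControl's offset group is aimed at eyeAimControl.
cmds.parentConstraint('eyeJoint', 'eyeGeo', maintainOffset=True)   # or a smooth bind instead
cmds.orientConstraint('eyeControl', 'eyeJoint', maintainOffset=True)
cmds.aimConstraint('eyeAimControl', 'eyeControl_grp',
                   aimVector=(0, 0, 1), upVector=(0, 1, 0),
                   worldUpType='objectrotation', worldUpVector=(0, 1, 0),
                   worldUpObject='head_ctrl', maintainOffset=True)
```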

ps. Upload your file, man, I would love to see it.