Hugh & MDuffy have already outlined a good way of implementing a ‘HyperShade-like’ system.
I’ve thought about crowd sims, but never really written my thoughts down, so here goes:
For each agent, you have one Brain Node, which evaluates all the ‘thinking’ nodes connected to it; those nodes determine what action to do next. That is the basic system.
An action is an index of a certain move the agent already knows. In the case of a crowd sim for an animation, the actions would be a list of clips the character has, such as walking, falling, jumping and so forth. To go even more advanced, you can make the action a certain character movement control: instead of having an index for running, you’d have controls for parts of the agent such as legs, arms and head. Not necessarily animation controls, but ACTION controls specific to each body part. A leg action control would then contain several clips based on walking, running and falling. Combining these controls would result in numerous mixes of animation.
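The per-body-part idea above could be sketched like this (all the class and clip names here are hypothetical, just to illustrate the combination of controls):

```python
# Hypothetical sketch of per-body-part action controls: each control holds
# the list of named clips the agent already knows for that body part.
class ActionControl:
    def __init__(self, body_part, clips):
        self.body_part = body_part          # e.g. "legs", "arms", "head"
        self.clips = list(clips)            # clip names this control can play

    def pick(self, clip):
        if clip not in self.clips:
            raise ValueError(f"{self.body_part} has no clip {clip!r}")
        return (self.body_part, clip)

legs = ActionControl("legs", ["walk", "run", "fall"])
arms = ActionControl("arms", ["swing", "aim", "wave"])

# Combining the controls yields one mixed pose for the whole agent.
pose = [legs.pick("run"), arms.pick("aim")]
print(pose)   # [('legs', 'run'), ('arms', 'aim')]
```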
A thinking node is a basic conditional-type function which takes certain attributes or another node as input and processes them in a user-specified way. Thinking nodes can blend several actions together or determine which action is appropriate or has a higher priority, again depending on the user. Thinking nodes can also be made specific to a certain action only.
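A thinking node in this sense could be as small as one conditional function; this is a minimal sketch, assuming nodes read a dict of agent attributes and propose weighted actions (the attribute and action names are made up):

```python
# Minimal sketch of a "thinking node": a user-specified conditional that
# reads agent attributes and proposes actions with blend weights, so a
# downstream node (or the brain) can blend or prioritise them.
def flee_or_walk(agent):
    # Propose "run" when a threat is close, otherwise a relaxed "walk".
    if agent["threat_distance"] < 10.0:
        return [("run", 1.0)]
    return [("walk", 0.6)]

print(flee_or_walk({"threat_distance": 5.0}))    # [('run', 1.0)]
print(flee_or_walk({"threat_distance": 50.0}))   # [('walk', 0.6)]
```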
The brain node takes all its inputs from the several thinking nodes and makes a list of actions to apply to the agent.
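That merging step might look like this; a sketch where each thinking node is just a function returning action names (the two example nodes are hypothetical stand-ins):

```python
# Sketch of the brain node: evaluate every connected thinking node and
# merge their proposals into one list of actions for the agent.
def look_node(agent):
    return ["lookAt"]        # hypothetical eye-direction node

def aim_node(agent):
    return ["aimSniper"]     # hypothetical weapon-aim node

def brain(agent, thinking_nodes):
    actions = []
    for node in thinking_nodes:
        actions.extend(node(agent))
    return actions

print(brain({}, [look_node, aim_node]))   # ['lookAt', 'aimSniper']
```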
Example 1:
Let’s say we’ve got a thinking node which determines where the agent’s eyes should be looking, and another thinking node which controls the direction the agent is aiming its weapon. The brain then compiles a list which will look something like:
ActionCtrl “eyesController -perform lookAt(-20,-10,50)”
ActionCtrl “aimController -perform aimSniper”
The brain node will then start performing these actions and go through the nodes again once they are done. Controllers need an action to perform and, if needed, some attributes, as shown in the list compiled by the brain node. Compiling the list just makes it easier to perform the actions with the needed attributes already at hand, instead of going back through the thinking nodes to find which attribute is needed.
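The compiled list could be sketched as plain (controller, action, attributes) entries that mirror the two example commands, so performing them never touches the thinking nodes again (the dispatch here just formats the command string, as a stand-in for the real controller call):

```python
# Sketch of the compiled action list: each entry carries the controller,
# the action, and the attributes that action needs, ready to perform.
action_list = [
    ("eyesController", "lookAt", (-20, -10, 50)),
    ("aimController", "aimSniper", None),
]

def perform(controller, action, attrs):
    # Stand-in for dispatching the command to the agent's controller.
    if attrs is None:
        return f"{controller} -perform {action}"
    return f"{controller} -perform {action}{attrs}"

for entry in action_list:
    print(perform(*entry))
```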
But then WHAT IF you want the agents to be smarter? What if, let’s say, in the middle of performing a Running action, a big wall suddenly appears in front of the agent? We can solve that by having runtime thinking nodes which check the actions being performed by the agents and make sure they are still valid.
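A runtime validity check like that could be one more conditional run every tick; a sketch, with the `wall_ahead` attribute and the fallback action both hypothetical:

```python
# Sketch of a runtime thinking node that re-validates an action in
# progress, e.g. cancelling "run" when a wall suddenly appears ahead.
def still_valid(action, agent):
    if action == "run" and agent["wall_ahead"]:
        return False
    return True

current = "run"
agent = {"wall_ahead": True}
if not still_valid(current, agent):
    current = "stop"    # fall back until the nodes propose a new action
print(current)          # stop
```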
Another thing to think about is hierarchies and priorities: which actions have more importance than others, and which actions affect other actions. Since this isn’t going to be a full-on, complete AI-controlled system where each agent is capable of thinking on its own and able to do everything (like an advanced bot), it’s going to be easier, since the user can make simpler agents that do only a certain number of things and have specified hierarchies and priorities.
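Those user-specified priorities could be as simple as a lookup table: when several nodes propose conflicting actions for the same controller, the highest-priority one wins (the table below is a hypothetical example, not a fixed scheme):

```python
# Sketch of user-specified priorities: when thinking nodes propose
# conflicting actions for the same controller, the highest wins.
PRIORITY = {"fall": 3, "run": 2, "walk": 1}   # hypothetical table

def resolve(proposals):
    # proposals: action names competing for the same controller;
    # unknown actions default to priority 0.
    return max(proposals, key=lambda a: PRIORITY.get(a, 0))

print(resolve(["walk", "fall", "run"]))   # fall
```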
Anyway, I just woke up, so I hope I made sense. This is really interesting though, AI and animation; makes me wanna make my own crowd sim for fun hehe.