MDuffy and Hugh: Great info! I’m working with an “open” node-based system right now and have always wondered how the big apps got around the problems I face with my app. It makes a lot more sense now.
-b
The other thing that might be interesting to think about (warning: advanced feature ahead) is upstream flows as well as down…
This is another Shake thing...
Remember how I said that, in Shake, image plugs have loads of children plugs. Well, not all of these are down-stream plugs - some of them send data UP the tree…
Why's that?
Well, I can only describe it with relation to image nodes - I can’t think straight off how you’d use it with straight data, but I’m sure someone could come up with a use…
Shake uses a PULL/PUSH method (thanks to Angus for the terminology… I just wish I could do the hand signals at the same time!) - don’t confuse this with the PUSH/PULL that I was talking about earlier - they are completely different… I’m just calling them the same thing to confuse you.
At the bottom of the tree, Shake PULLs the image data. This passes a chain reaction up the tree, as each node needs the data from the previous node to generate its own output. However, when the reaction gets to the top of the tree, instead of sending the data straight back down again, it PULLs on an upstream plug, asking for data from further down the tree. This data includes what the current time is, which part of the image is wanted, and which colour channels are wanted. This chain flows back down to the bottom of the tree, where the main output has to PUSH some data back up. So for any node, it has to ask for an image (PULL), but before it gets the image, it is asked about some specs for the image that it wants, and it has to answer (PUSH).
If that made no sense to you, I'll try to explain....
Let's say we've got the following nodes
FileIn ("/some/path/to/an/image.tif")
|
Blur (20)
|
Reorder (“rg00”)
When the user says that they want to look at the output of the Reorder node (which switches around channels - in this case it says to keep red and green, and make blue and alpha 0), the following happens:
GUI: Pull on Reorder’s output image buffer
Reorder: Pull on Blur’s output image buffer
Blur: Pull on FileIn’s output image buffer
FileIn: Pull on Blur’s mask plug
Blur: Pull on Reorder’s mask plug
Reorder: Pull on the GUI’s mask plug
GUI: Return “rgba” to Reorder
Reorder: See “rgba”, AND it with “rg00”, and return “rg00” to Blur
Blur: Return “rg00” to FileIn
FileIn: Pass red and green channels of image to Blur
Blur: Do the blur on the red and green channels
Reorder: Pass the image straight through to the GUI
GUI: Display the image
So you see that the saving here is that the Blur node is only doing half the work it would be if it was blurring all four channels, because it already knows that the blue and alpha channels will be lost somewhere below it. Each node has to take the inputs from below, modify them depending on what its parameters are, and pass them on up the tree.
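To make that concrete, here’s a rough C++ sketch of the pull/push flow (all the names - Node, pullImage, pullRequest, Request - are invented for illustration; this isn’t the Shake SDK):

```cpp
// Minimal sketch of the pull (image) / push (request specs) idea described
// above. All names are hypothetical; this is not the Shake SDK API.
#include <iostream>
#include <string>

// The "request" that flows back down the tree: which channels are wanted.
struct Request { std::string channels; };   // e.g. "rg00"

struct Node {
    Node* input  = nullptr;   // node above us (closer to the FileIn)
    Node* output = nullptr;   // node below us (closer to the viewer/GUI)

    // PULL: someone below wants our image.
    virtual std::string pullImage() = 0;

    // PUSH: someone above asks which channels we actually need.
    // Default behaviour: just forward the question further down the tree.
    virtual Request pullRequest() { return output->pullRequest(); }
    virtual ~Node() = default;
};

struct FileIn : Node {
    std::string pullImage() override {
        // Before reading, ask downstream which channels are really needed.
        Request r = output->pullRequest();
        std::cout << "FileIn: reading channels " << r.channels << " from disk\n";
        return "image(" + r.channels + ")";
    }
};

struct Blur : Node {
    std::string pullImage() override {
        std::string img = input->pullImage();
        Request r = output->pullRequest();
        std::cout << "Blur: blurring only " << r.channels << "\n";
        return "blur(" + img + ")";
    }
};

struct Reorder : Node {
    std::string mask;                    // e.g. "rg00"
    std::string pullImage() override {
        return input->pullImage();       // channels already reduced upstream
    }
    Request pullRequest() override {
        Request below = output->pullRequest();   // e.g. "rgba" from the GUI
        Request r;
        // AND the downstream request with our own channel mask.
        for (int i = 0; i < 4; ++i)
            r.channels += (below.channels[i] != '0' && mask[i] != '0') ? mask[i] : '0';
        return r;
    }
};

struct GUI : Node {
    Request pullRequest() override { return {"rgba"}; }   // the viewer wants everything
    std::string pullImage() override { return input->pullImage(); }
};

int main() {
    FileIn fileIn; Blur blur; Reorder reorder; GUI gui;
    fileIn.output = &blur;    blur.input = &fileIn;
    blur.output  = &reorder;  reorder.input = &blur;
    reorder.output = &gui;    gui.input = &reorder;
    reorder.mask = "rg00";

    std::cout << gui.pullImage() << "\n";   // triggers the whole pull/push chain
}
```

Running it, the FileIn ends up reading (and the Blur ends up blurring) only “rg00”, which is the saving described above.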
Anyway, hope that was interesting… it’s a nice way of doing it - passing dirty bits around would still work - you’d just have to think of each node as having lots of connections (and you’d have to make sure that you’d got dependencies implemented, as it could get really messy otherwise!)
Wooooaaahhh!!! Loads of info!! Cheers Hugh & MDuffy.
Just had a thought tho. Is any of this state-based? As in with Shake or Maya, you’re always producing a final image at the end of the graph, however with AI or logic, there surely must be a result, whether it be state or the current anim frame/position?
I’m guessing there’s no major difference, but just wondered what anyone thought…
Actually you can achieve this without the pull approach if you think in terms of dirty attributes and not dirty nodes. In your example you had one set of attributes holding image data, and another holding a channel mask. So your dependency flow actually looks like this if you think in terms of attributes:
GUI.image
|
Reorder.image
|
Blur.image
|
FileIn.image
|
FileIn.mask
|
Blur.mask
|
Reorder.mask
|
GUI.mask
Notice in the above that FileIn.image requires data from FileIn.mask. All the other connections are external node-to-node connections, but the link between FileIn.image and FileIn.mask is an internal dependency. The FileIn node probably requires inputs of “mask” and “filename”, and has an output of “image”. The “image” output is dependent on clean values from mask and filename, so these two internal dependencies are created by default when the FileIn node is created.
You can accomplish the same as a Push paradigm by simply propagating dirty flags, and then letting the pull approach actually get the data. For example, let’s say that the file on disk changes, thus invalidating the FileIn node (the FileIn node would probably have to be set up with some timed polling event so it could check if the files on disk changed). The FileIn node sets its filename attribute to dirty. Since there is a dependency connection from filename to image, the image attribute is set to dirty. When the FileIn.image attribute is dirtied, it passes the dirty state along its connections, so Blur.image is set to dirty. This in turn sets Reorder.image to dirty, which sets GUI.image to dirty.
Now the next time the GUI is updated, it sees that it needs to recalculate the image, so it calls on Reorder.image to give it data, which asks Blur for its image attribute, and so on down the line. Once it gets to FileIn, the cached value for mask is used because that attribute wasn’t dirtied this time around, so it should still be valid.
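Roughly, in code, the “push the dirty flag, pull the data” idea might look like this (Attr, markDirty and evaluate are invented names, not from any real package):

```cpp
// Sketch of attribute-level dirty propagation plus cached pulls.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Attr {
    std::string name;
    bool dirty = true;                       // start dirty so the first pull computes
    std::string cache;                       // last clean value
    std::vector<Attr*> downstream;           // attrs that depend on us
    std::function<std::string()> compute;    // how to (re)build our value

    // PUSH: something changed, so invalidate everything below us.
    void markDirty() {
        if (dirty) return;                   // already propagated
        dirty = true;
        for (Attr* a : downstream) a->markDirty();
    }

    // PULL: give back a clean value, recomputing only if needed.
    const std::string& evaluate() {
        if (dirty) {
            cache = compute();
            dirty = false;
            std::cout << "recomputed " << name << "\n";
        } else {
            std::cout << "cache hit for " << name << "\n";
        }
        return cache;
    }
};

int main() {
    Attr filename{"FileIn.filename"}, mask{"FileIn.mask"},
         fileImage{"FileIn.image"}, blurImage{"Blur.image"};

    // Internal dependencies created when FileIn is created,
    // plus the external FileIn.image -> Blur.image connection.
    filename.downstream  = {&fileImage};
    mask.downstream      = {&fileImage};
    fileImage.downstream = {&blurImage};

    filename.compute  = [] { return std::string("/some/path/to/an/image.tif"); };
    mask.compute      = [] { return std::string("rg00"); };
    fileImage.compute = [&] { return "read(" + filename.evaluate() + "," + mask.evaluate() + ")"; };
    blurImage.compute = [&] { return "blur(" + fileImage.evaluate() + ")"; };

    blurImage.evaluate();    // first pull: everything recomputes
    filename.markDirty();    // the file on disk changed
    blurImage.evaluate();    // image chain recomputes, mask is a cache hit
}
```

The second evaluate only recomputes the filename/image chain; mask comes straight from its cache, which is exactly the saving described above.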
And yes, Hugh. I do happen to be a Maya user…
And no I’m not copying their MEL syntax… completely… hehehe…
Cheers,
Michael Duffy
Just found this and thought it might be useful http://www.boost.org/libs/graph/doc/index.html
Has anyone used this? It looks a lot easier than rolling your own. (You still have to deal with your own GUI callbacks, though.)
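From a quick look at the docs, something like this seems to be the shape of it: the library stores the graph topology and gives you traversals like a topological sort, which you could use as an evaluation order (just a sketch, not something I’ve built a real app on):

```cpp
// Rough sketch of using the Boost Graph Library to hold a node graph and
// get an upstream-to-downstream evaluation order.
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/topological_sort.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    using Graph  = boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS>;
    using Vertex = boost::graph_traits<Graph>::vertex_descriptor;

    Graph g;
    std::vector<std::string> names = {"FileIn", "Blur", "Reorder", "GUI"};
    Vertex fileIn = boost::add_vertex(g), blur = boost::add_vertex(g),
           reorder = boost::add_vertex(g), gui = boost::add_vertex(g);

    // Edges point from a node to the node that consumes its output.
    boost::add_edge(fileIn, blur, g);
    boost::add_edge(blur, reorder, g);
    boost::add_edge(reorder, gui, g);

    // topological_sort writes vertices in *reverse* topological order,
    // so reversing the result gives a valid evaluation order.
    std::vector<Vertex> order;
    boost::topological_sort(g, std::back_inserter(order));
    for (auto it = order.rbegin(); it != order.rend(); ++it)
        std::cout << names[*it] << "\n";   // FileIn, Blur, Reorder, GUI
}
```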
Simon
Cheers Simon, will look into that when I get some time! Busy, busy, busy - too many projects finishing!!
Ta
Dan
I thought about this for a while some time ago, and I thought it’d be fun and easy to implement. I’m just wondering, are you guys all doing this for fun in your own projects?
Hi orgee
Why I’m looking into this is for a crowd sim project I’m working on at the moment, and I’m thinking of designing a node network so that users can code the ‘brains’ of the agents of the simulation.
I’d be interested to hear your views/ideas on node-based programming and designing a similar system to material networks (i.e. HyperShade in Maya, Houdini’s procedural coding interface, etc.)…
Hugh & MDuffy has already outlined a good way of implementing a ‘HyperShade-like’ system. ![]()
I’ve thought about crowd sims, but never really written them down, so here goes:
For each agent, you have one Brain Node, which evaluates all the ‘thinking’ nodes connected to it, and those determine what action to do next. That is the basic system.
An action is an index into a set of moves the agent already knows. In the case of a crowd sim for an animation, the actions would be a list of clips the character has, such as walking, falling, jumping and so forth. To go even more advanced, you can make the action a character movement control: instead of having a single index for running, you’d have controls for parts of the agent such as legs, arms and head. Not necessarily animation controls, but ACTION controls specific to each body part. The leg action control would then contain several clips based on walking, running and falling. Combinations of these controls would result in numerous mixes of animation.
A thinking node is a basic conditional-type function which takes certain attributes or the output of another node as input and combines them in a user-specified way. Thinking nodes can blend several actions together or determine which action is appropriate or has a higher priority, again depending on the user. Thinking nodes can also be made specific to a certain action only.
The brain node takes all its inputs coming from several thinking nodes, and makes a list of actions to be applied to the agent.
Example 1:
Let’s say we’ve got a thinking node which determines where the agent’s eyes should be looking, and another thinking node which controls the direction the agent is aiming its weapon. The brain then compiles a list which will look something like:
ActionCtrl “eyesController -perform lookAt(-20,-10,50)”
ActionCtrl “aimController -perform aimSniper”
The brain node will then start performing these actions and go through the nodes again once they are done. Controllers need an action to perform and, if needed, some attributes, as shown in the list compiled by the brain node. Compiling the list just makes it easier to perform actions and have the attributes needed for those actions ready, instead of going back through the thinking nodes to find which attribute is needed.
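In code, the brain/thinking split might look roughly like this (ThinkingNode, BrainNode, Action and AgentState are made-up names for the sketch):

```cpp
// Sketch of the brain-node idea: each thinking node proposes an action,
// and the brain compiles them into a list and performs it.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct AgentState {
    int  targetX = -20, targetY = -10, targetZ = 50;   // where to look
    bool enemyVisible = true;
};

struct Action {
    std::string controller;   // e.g. "eyesController"
    std::string command;      // e.g. "lookAt(-20,-10,50)"
};

// A thinking node is just a conditional function over the agent's state.
struct ThinkingNode {
    std::function<Action(const AgentState&)> think;
};

struct BrainNode {
    std::vector<ThinkingNode> inputs;

    // Evaluate every connected thinking node and compile the action list.
    std::vector<Action> compile(const AgentState& s) const {
        std::vector<Action> list;
        for (const ThinkingNode& n : inputs) list.push_back(n.think(s));
        return list;
    }
};

int main() {
    AgentState state;
    BrainNode brain;

    // Thinking node 1: where should the eyes look?
    brain.inputs.push_back({[](const AgentState& s) {
        return Action{"eyesController",
                      "lookAt(" + std::to_string(s.targetX) + "," +
                                  std::to_string(s.targetY) + "," +
                                  std::to_string(s.targetZ) + ")"};
    }});
    // Thinking node 2: where is the weapon aimed?
    brain.inputs.push_back({[](const AgentState& s) {
        return Action{"aimController",
                      s.enemyVisible ? "aimSniper" : "lowerWeapon"};
    }});

    for (const Action& a : brain.compile(state))
        std::cout << "ActionCtrl \"" << a.controller << " -perform " << a.command << "\"\n";
}
```

This prints a compiled list just like the two ActionCtrl lines above.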
But then WHAT IF you want the agents to be smarter? What if, let’s say, in the middle of performing a Running action, a big wall suddenly appears in front of the agent? We can solve that by having runtime thinking-nodes which check the actions being performed by the agents and make sure they are still valid.
Another thing to think about is hierarchies and priorities: which actions are more important than others, and which actions affect other actions. Since this isn’t going to be a full-on complete AI-controlled system where each agent is capable of thinking on its own and able to do everything (like an advanced bot), it’s going to be easier, since the user can make simpler agents that do only a certain amount of things and have specified hierarchies and priorities.
Anyway, I just woke up, so I hope I made sense. This is really interesting though - AI and animation makes me wanna make my own crowd sim for fun, hehe.
It seems like the important part is having a good scene graph and then worrying about updating it efficiently afterwards.
Simon
Good Question. MDuffy? Hugh? Would this stuff be under design patterns or something similar?
I haven’t run across any overall document on node based architecture. You’ll have to pick up bits and pieces here and there, and then combine them to create the framework you want.
For design patterns, I used the book Design Patterns by Gamma, Helm, Johnson, and Vlissides. You probably won’t use any single pattern exclusively, but will use them as ideas on how to handle certain challenges. I use the Iterator pattern to access my nodes, the State pattern to define what the node types are, and I think I use an Observer pattern to keep iterators and smart pointers up to date.
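Roughly, the Observer part might look something like this (a generic sketch of the pattern applied to node handles, not the actual implementation): the node notifies its observers when it goes away, so handles can null themselves out instead of dangling.

```cpp
// Generic Observer sketch: a handle registers with a node and is told
// when the node is destroyed.
#include <algorithm>
#include <iostream>
#include <vector>

struct NodeObserver {
    virtual void nodeDestroyed() = 0;
    virtual ~NodeObserver() = default;
};

struct Node {
    std::vector<NodeObserver*> observers;

    void attach(NodeObserver* o) { observers.push_back(o); }
    void detach(NodeObserver* o) {
        observers.erase(std::remove(observers.begin(), observers.end(), o),
                        observers.end());
    }
    ~Node() {
        for (NodeObserver* o : observers) o->nodeDestroyed();   // notify everyone
    }
};

// A "smart" handle that nulls itself out instead of dangling.
struct NodeHandle : NodeObserver {
    Node* node = nullptr;

    explicit NodeHandle(Node* n) : node(n) { node->attach(this); }
    ~NodeHandle() { if (node) node->detach(this); }
    void nodeDestroyed() override { node = nullptr; }
    bool valid() const { return node != nullptr; }
};

int main() {
    auto* n = new Node;
    NodeHandle handle(n);
    std::cout << "before delete: " << handle.valid() << "\n";   // 1
    delete n;                                                   // notifies the handle
    std::cout << "after delete:  " << handle.valid() << "\n";   // 0
}
```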
I rely on the book “Algorithms” by Robert Sedgewick a lot when I’m needing to figure out how to traverse a graph efficiently, or handle tasks that computer scientists have studied. For example, the pseudo code in that book served as the backbone of the Regular Expression parser I wrote a few weekends ago (couldn’t find a good cross-platform regular expression parser that met my needs… had to write my own). “Algorithms” has some node/tree traversal code in there, and I’d recommend the book without hesitation.
Some node traversal code is under the topic of Graph Theory, but there is a lot of work in this field and not all of it will apply.
You will also figure out how you want to design the node system based on your experience with other packages. My node system shares some overall design similarities with Maya’s node graph, but will be traversed in an entirely different way than how Maya approaches it internally. I also learn pros and cons of different design approaches by looking at how Houdini, Modo, Dark Tree, Slim, compositing packages, and other software handle node approaches. Heck, my storage format is even similar to Lightwave’s LWS format, except that it is in XML and addresses the shortcomings of LWS that make it difficult to work with.
So I haven’t found any all-in-one source reference for node based coding, but rather a lot of different sources that I can take one or two ideas from. There aren’t any revolutionary new ideas in my code… just hopefully good implementations of common knowledge. You just have to look around and see how others do it, take the good ideas, leave the bad ideas, avoid patented ideas, and come up with something of your own.
Cheers,
Michael Duffy
Woaahh!! Cheers for the responses!! :eek: Haven’t seen this thread for a while and there’s loads to investigate!!
I agree with MDuffy about the design patterns book by Gamma (et al). Very useful for all types of problems, plus very easy to pick up for a good read!
In terms of node-based architectures, probably the best documentation I found was the Shake SDK docs (pointed out to me by Hugh, cheers!!) that include how their node engine works and how the ‘push-pull’ driven part of the network works in relation to dependencies and re-evaluating nodes, etc. A good read if you can get it! Also, the Maya DG is another good example, and the API docs or one of the many books on the API are usually good for an in-depth explanation.
For my own use (i.e. brain nodes, neural net stuff) I’m starting to look into ‘Motivational Graphs’ (from an article in AI Prog Wisdom 2), if anyone’s heard of them?
Pretty much everything I would have said has been said already…
I’ve never properly developed a node-based system, so pretty much all I know is from using other APIs… These being Maya and Shake…
I’d recommend Complete Maya Programming, which has some great info on Maya’s DG, and also, as Dan mentioned, the Shake SDK docs…
Just another related question. If you want to ignore a node, what do you do about its upstream connections? Should it block, pass through, or have the option of either?
And if you do have it pass through, but have different data types (i.e. colours and floats in a HyperShade-like application), how would you handle the conversion? (max, min, avg, luminance, etc.?) Do you need some sort of temporary override conversion node?
Simon
If a node is ignored, all connections should just pass through…
Actually, I just re-read your post… I didn’t realise that you were working with multiple data-types in your tree… in that case, I’m not sure…
You could have a generic conversion for each type which is used when you ignore a node, but ideally, ignoring should only be allowed when the input and output types are the same…
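A rough sketch of what that could look like (hypothetical types, with luminance as the generic colour-to-float fallback): pass straight through when the types match, otherwise run the registered conversion.

```cpp
// Sketch of bypassing ("ignoring") a node: pass through if the incoming
// value already has the type the node's output promises, otherwise convert.
#include <iostream>
#include <variant>

struct Colour { float r, g, b; };
using Value = std::variant<float, Colour>;

// Generic fallback conversions a bypassed node could use.
float  toLuminance(const Colour& c) { return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b; }
Colour toColour(float f)            { return {f, f, f}; }

Value bypass(const Value& in, bool wantsColour) {
    if (wantsColour) {
        if (std::holds_alternative<Colour>(in)) return in;   // same type: pass through
        return toColour(std::get<float>(in));                // float -> colour
    }
    if (std::holds_alternative<float>(in)) return in;        // same type: pass through
    return toLuminance(std::get<Colour>(in));                // colour -> float
}

int main() {
    Value upstream = Colour{0.8f, 0.4f, 0.1f};
    Value out = bypass(upstream, /*wantsColour=*/false);      // an ignored float-output node
    std::cout << "bypassed value: " << std::get<float>(out) << "\n";   // luminance
}
```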