Beyond that, I have a larger problem I'd like to solve: I have several hundred images imported from Harvard's Whole Brain Atlas
that I'd like to turn into a detailed generic brain model. I've written a nice little set of scripts (with a few kludges to work around MEL's nested UI slider issue) that puts each image into its own file texture on its own polyplane in its own layer, and lets me show zero to five layers of each perspective with arbitrary offsets from the central layer specified by the slider -- e.g., show the layers that are offset -10, -5, 0, 5, 10 layers from the slider setting.
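To be concrete about the offset scheme, here's a rough Python sketch of the arithmetic my MEL does (function name and parameters are just placeholders, not my actual script):

```python
# Given the slider value (the central layer index), a list of offsets,
# and the size of the layer stack, return which layer indices to show.
# Indices that would fall off either end of the stack are dropped.
def visible_layers(center, offsets, num_layers):
    shown = []
    for off in offsets:
        idx = center + off
        if 0 <= idx < num_layers:
            shown.append(idx)
    return shown

print(visible_layers(50, [-10, -5, 0, 5, 10], 100))  # -> [40, 45, 50, 55, 60]
```

The MEL version does the same thing, just with layer visibility toggles instead of a returned list.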
What I WANT to be able to do is:
1) create some kind of scaling mechanism where I can show an arbitrary brain slice polyplane, click at the top, bottom and sides of the image, and create a scaled version of my template based on my mouse-clicks. I'd like some visual feedback of what the thing will look like before I create it, even if it's just drawing the points where I clicked.
2) I'd like to be able to go back to these arbitrary templates, focus on areas of interest, and add a cluster of CVs that are percolated through an arbitrary number of templates while keeping their relative ordering, so that when I add the wrinkles and then loft the curves I don't get weird distortions -- the brain is complicated enough without introducing wrinkles that don't exist! (I want to simplify, not complicate.) I'd like visual feedback on this beyond what the existing tools give me.
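For (1), the math I have in mind is simple -- derive a non-uniform scale and a center from the four clicks. Here's a Python sketch (names like `template_w` are made up; points are (x, y) tuples in screen or plane space):

```python
# Derive a non-uniform scale and center for a template from four clicks:
# top, bottom, left, right around the brain slice in the image.
# template_w / template_h are the template's unscaled width and height.
def fit_template(top, bottom, left, right, template_w, template_h):
    width = right[0] - left[0]        # horizontal extent of the clicks
    height = top[1] - bottom[1]       # vertical extent of the clicks
    center = ((left[0] + right[0]) / 2.0,
              (bottom[1] + top[1]) / 2.0)
    sx = width / template_w           # non-uniform scale factors
    sy = height / template_h
    return sx, sy, center
```

In Maya terms, I'd picture grabbing the clicks with something like `draggerContext`, then applying sx/sy/center to a duplicate of the template -- and ideally drawing the four clicked points as locators for the preview feedback.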
I can do this without visual feedback, but I think it would be much easier with feedback AND it would be a useful thing to know how to do anyway.
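For (2), my current thinking on keeping the relative ordering consistent is to sort each template's cluster of CVs the same way -- e.g. by angle around the cluster's centroid -- before lofting, so CV n on one slice corresponds to CV n on the next. A Python sketch of that idea (just an illustration, not my actual approach in MEL):

```python
import math

# Sort a cluster of (x, y) CVs by angle around the cluster centroid,
# so corresponding CVs line up across templates before lofting.
def order_cvs(cvs):
    cx = sum(p[0] for p in cvs) / len(cvs)
    cy = sum(p[1] for p in cvs) / len(cvs)
    return sorted(cvs, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```

As long as every slice's cluster is ordered the same way, the loft shouldn't twist; the visual feedback I want would be something like drawing numbered markers at each CV so I can eyeball the correspondence.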
Anyone know of any sample code that I might examine that is beyond "hello world" and not so far beyond it that I'd get lost trying to keep track of what is what?