The Viewport MAXScript Editor - NEW


The circle idea has been conceptually profitable:

The grey circle represents each node’s weld proximity. Nodes within each other’s weld proximity evaluate left to right as code. This eliminates the need for endless connection lines/arrows inside single-line statements.

Proximity size can be user-set, and node size can be as well. Large nodes could be built to support multiple small nodes branching off. Replacing can now be based on an inner proximity check (does the black circle touch the target’s black circle? If so, the user wants to replace the node; otherwise, the user wants to link the node to the target node). The target node’s circles could turn green to indicate that max understands you want to replace this node with the node you are dragging. No need for a queryBox to confirm a yes/no, and no need to link any nodes together… so fewer steps for the user. Fewer steps with the same result = better.
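The replace-vs-link decision described above reduces to two circle-overlap tests. Here is a minimal, language-agnostic sketch in Python (the names `core_radius` and `weld_radius` are invented for illustration, not the script’s actual properties):

```python
import math

def circles_touch(p1, r1, p2, r2):
    """True if two circles (center, radius) overlap or touch."""
    return math.dist(p1, p2) <= r1 + r2

def drop_intent(dragged_pos, target_pos, core_radius, weld_radius):
    """Decide what a drop means, per the rules above:
    inner (black) circles touching -> replace the target,
    outer (grey) weld circles touching -> link to the target,
    otherwise the drop does nothing."""
    if circles_touch(dragged_pos, core_radius, target_pos, core_radius):
        return "replace"
    if circles_touch(dragged_pos, weld_radius, target_pos, weld_radius):
        return "link"
    return "none"
```

Because the inner check runs first, a drop that satisfies both tests is treated as a replace, which matches the “black circles win” behavior described above.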


Probably my school is too old, but the text representation looks more intuitive, simple, dynamic, powerful, and compact to me than the graphical one.
PS. I think that Visual MAXScript is the most useless tool in MAX, and I’ve never used it in my MAXScript development.


Here’s a fun looking case statement:

denisT: Probably my school is too old, but the text representation looks more intuitive, simple, dynamic, powerful, and compact to me than the graphical one.
PS. I think that Visual MAXScript is the most useless tool in MAX, and I’ve never used it in my MAXScript development.

@Denis: Much thanks for all the help you have given me and others! I agree that the text representation is much better in all ways… that’s what I’m used to… however, I think there are different ways to interact with and develop code beyond typing. I usually hate writing case statements because my head can’t remember the proper syntax, but building the one above felt liberating and somewhat enjoyable. I think it’s because there’s no formal syntax that I have to obey when constructing these nodes. And, in a way, the nodes separate the user from the language being written. I’m sure a node could be written to interpret a nodeset as MAXScript, or C#, or Python… maybe even CIL code. It’s just writing strings to a file based on objects. If I can build the nodes to work with any language, then any nodeset should be able to ‘compile’ to any file type. And since nodes can be written/saved/loaded as simple .ms files, this allows the community to start developing their own custom nodes. I value your opinion and direction very highly, denisT! :slight_smile:
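Since ‘compiling’ a nodeset is just writing strings based on the nodes, the same tree can target any language by swapping the emitter. A hypothetical sketch in Python (the node representation here is invented for illustration; it emits a MAXScript-style case statement like the one pictured):

```python
def emit_case_maxscript(test_expr, branches, default=None):
    """Emit a MAXScript case statement from a plain node-like
    structure: branches is a list of (value, action) pairs."""
    lines = ["case %s of" % test_expr, "("]
    for value, action in branches:
        lines.append("    %s: %s" % (value, action))
    if default is not None:
        lines.append("    default: %s" % default)
    lines.append(")")
    return "\n".join(lines)

# A two-branch case node, flattened to source text.
src = emit_case_maxscript(
    "i",
    [(1, "createSphere()"), (2, "createBox()")],
    default='print "unknown"',
)
```

A second emitter with the same signature could write the identical branch list out as Python or C# instead; the nodeset never changes, only the string templates do.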


Here’s one more for today. Top is zoomed in on the try(). Bottom is zoomed out to show the whole try()catch().

This assumes nodesets can be grouped and customized. Now imagine traversing through a script by zooming in and out of circles… instead of scrolling up and down. And this is all still in xyz space, so you could start stacking them and separating them by layers, since they are, after all, both sceneNodes and code.


If I don’t like the Visual MAXScript editor, it doesn’t mean that I don’t wish for a good one. I use visual UI editors in many other places; I just don’t like MAX’s.


Just started messing with your script, pretty cool stuff :).

I do find it a little… unintuitive to scroll through/pick something in the menu and instantly have it created in the viewport. It would be better if you could have drag/drop to create a new UI node, or even choose it in the list and press a button to create it.


Here is progress on the script. I’m preparing version 1.1 to be released, which will have value nodes that can be linked to rollout elements based on a padding/proximity value (grey circle).

Notice the 1st dropdown? This allows me to organize objects into categories, paving the way for all kinds of awesome nodes. Once the values can be linked up and properly written as code, I’m going to extract the object’s build parameters and write them to external .ms files - effectively turning the .ms file into a node that can be built and written based on its contents.
This does three major things:

  1. opens up the nodes to be customized by max users.
  2. paves the way for the VMeditor to write nodesets as a single node (.ms file).
  3. because of (2), allows the user to simplify and navigate scripts using custom nodes.
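Point (2) above could be as simple as serializing a node’s build parameters out to a script file. A rough Python sketch of the idea (the parameter names and file layout are made up, not VMEditor’s actual format):

```python
import os
import tempfile

def write_node_file(path, node_name, params):
    """Write a node's build parameters out as a simple .ms file,
    so the file itself can later be loaded back as a custom node."""
    with open(path, "w") as f:
        f.write("-- auto-generated node: %s\n" % node_name)
        for key, value in params.items():
            f.write("%s = %s\n" % (key, value))

# Example: serialize a hypothetical sphere node's parameters.
path = os.path.join(tempfile.gettempdir(), "myNode.ms")
write_node_file(path, "myNode", {"radius": 10, "pos": "[0,0,0]"})
```

Loading would be the mirror image: parse the key/value lines back into a parameter table and rebuild the node from it.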

@Kickflipkid687: Thanks man!
Let me explain my reasoning behind putting the nodes in a dropdown: it allows me to group them into categories by adding another dropdown above it. So now a user can select UI Elements or Code from the categories, then select what they’d like to create from the ‘create’ dropdown.

Now let me address why there isn’t a ‘confirm’ or ‘create’ button. This approach is specifically tailored to working quickly, and assumes that when you select something, you want it created. Nothing annoys me more than Windows Vista or Windows 7 asking ‘are you sure you want to do that?’ with a popup, or forcing me to confirm. So it’s my attempt at addressing one of the big problems in UI today: unnecessary confirmations. I’m also aiming to eliminate steps the user has to take, which also simplifies learning how the script works.

I want drag and drop. However, drag and drop would have to be done in dotNet (which I don’t know), and would require some clever code to determine where on the depth axis (the z axis in screen mode) the node would be ‘dragged’ to. You also have to address having a ‘library’ of nodes (icons) that takes up screen real estate. I did some concepts along this route, but ultimately couldn’t implement them. I considered learning Helium to write the nodes in, but I don’t want to deal with the additional install, and I didn’t have visual control over the nodes in the manner that I wanted.

So, I hope you don’t read that as “I disagree, you’re wrong”, but rather as “Good thoughts, there is a lot to be considered while creating this, please keep contributing your feedback”. I’d much prefer to have an elegant drag and drop method. :slight_smile:


Yeah, that makes sense. I also hate the confirms, so it’s not bad. Dotnet isn’t too hard to learn; it’s a little different/confusing at first maybe, but not too bad.
But you are right that choosing what you want and having it created right then is a good method.

I could take a look into drag/drop of stuff into the viewport, for this project and just because it might be useful for me or others in the future. So I’ll poke around and see if I can’t get something working.
As far as the z-depth, I think just having them at the origin would be best. So if everything is always at the origin, it should all be good. You could probably also lock the axis to only allow the user to move them in x/y and not back/forward.
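Pinning dropped nodes to the origin plane sidesteps the depth-axis problem entirely: you can intersect the drop ray with z = 0. A hedged geometric sketch in Python (the ray inputs stand in for whatever max’s viewport ray query, e.g. mapScreenToWorldRay, would return):

```python
def drop_point_on_origin_plane(ray_origin, ray_dir):
    """Intersect a pick ray with the z = 0 plane, so a dragged
    node always lands on the origin plane regardless of view angle.
    Returns None when no forward intersection exists."""
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    if dz == 0:
        return None           # ray is parallel to the plane
    t = -oz / dz              # ray parameter where z reaches 0
    if t < 0:
        return None           # plane is behind the viewer
    return (ox + t * dx, oy + t * dy, 0.0)
```

Locking movement to x/y afterwards is then just a matter of never writing a nonzero z back to the node.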

There might be a way to use the GW drawing methods in max as well, instead of 3D objects. So it’s then overlaid and not an actual node. But that might not be as… user friendly. I would have to look into that.


This post has some interesting stuff. Seems like you might be able to combine some of this, so you could drag/drop from a list and copy the name, and if it’s “button”, then create a button at the mouse.pos. Then you can drag the button around to place it and click on build UI or something.


K, so I got this

But I just learned of LoneRobot’s method for drag/dropping via dotNet. So I’ve got it now where I can drag/drop from a DialogMenu and it creates a box. Now I just gotta see if it can do the same, but with a dropdown list: just get the arg/value you selected, and if it’s, say, Button, then make a rectangle.


Garrick, this is awesomely inspiring stuff! :applause:
Massive respects and best of luck with it in the future!


@Kickflipkid687: Nice post! I’ll be digging thru that and learning more about potential drag and drop solutions.
Please feel free to put together a drag and drop script. Here are my thoughts on how I see that working: the drag and drop library would load all the .ms files (with corresponding .png icons) from a folder and display them visually in a way that doesn’t take up too much space. This could be accomplished with scrollbars, a paging system, dynamically scaling icons (think Adobe Bridge), or however. It’s important that the library be able to load as many .ms and .png files as the folders contain, so users can add as many nodes as they’d like to the library folders. If there is a way to do it with .jpgs or another file type, that may be better. I suggest .pngs because of transparency.
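The library-loading step described above is essentially a folder scan that pairs each .ms file with a same-named icon. A Python sketch under those assumptions (the folder layout and naming convention are mine, not the script’s):

```python
import os

def load_node_library(folder):
    """Scan a folder for node scripts (.ms) and pair each with a
    same-named .png icon if one exists; the icon entry is None
    when no matching image is found."""
    library = []
    for name in sorted(os.listdir(folder)):
        base, ext = os.path.splitext(name)
        if ext.lower() != ".ms":
            continue
        icon_path = os.path.join(folder, base + ".png")
        library.append({
            "name": base,
            "script": os.path.join(folder, name),
            "icon": icon_path if os.path.exists(icon_path) else None,
        })
    return library
```

Because the scan is driven purely by folder contents, users can grow the library just by dropping new .ms/.png pairs in; nothing in the tool needs to change.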


@Kickflipkid687: Nice! Just saw your .avi! I like that functionality. I’d like to see dragging a button into the viewport and building a box. If that’s a solid route to go, then we may just have drag and drop after all! :slight_smile: :slight_smile: :slight_smile:

@elT: Thanks man! I hope Adsk realizes what potential an interface like this has.


Yeah, having something like buttons with images as their background for each node would be easy enough to do. It would just read each image and .ms file in on rollout open, and fill the thumbnail array with those. I already have that code working for another tool I made, but it collects objects’ bitmaps from their materials. I could easily switch it to collect images from a folder.


One of my biggest concerns is being able to actually develop useable code using nodes.
So, to see how the design I’ve described might ‘play out’, I’ve attempted to build the VMeditor code inside of the latest flowCode node concept designs. I’ve gotta say, building the node sets was pretty fun and fast… and zooming in and out of the code feels like I’m writing atoms together under a microscope. Weird, but kinda ‘natural’. Here are lots of images:

First, a zoomed out view of the nodeset.

Here’s the input(bottom) and output(top) links for the rollout.

Here are the 4 main functions. I didn’t realize there were four until I began organizing the code. I color-coded them as well. This will help later on :wink:

Above is a zoomed in view on the rollout, showing how lines can link UI elements to nodesets.
Below is a custom nodeset describing the compileScriptsButton on pressed event. I envision being able to ‘open’ these custom nodes and drill down to a more granular view of the code.

Here is a drill-down into the createDD selected i event. Here’s where the colors come into play, as I can easily see which case evaluates to which of the rollout’s main functions. In this case, it’s handling 7 UI elements, 3 values, and 1 custom function build (the VME editor - in purple).

Here’s the timer tick event that calls updateOffsets (one of the rollout’s main functions, in blue).

I’ve shown this to some friends and they all say it looks like bacteria/atoms. :slight_smile:
More coming soon.


@Kickflipkid687: That’s it man! Let’s plan to build that kind of functionality into the script.
That is awesome!!!


Ok, so I took my other code and got this working.

Right now it takes the images in a folder, looks for .png images, reads them in, and sets their size in that rollout based on their actual bitmap size. Then, internally, it sets the buttons’ names in the rollout based on their filenames, so Button or checkbutton, etc.

After that, when you drag/drop the button into the viewport, it reads that button’s name and sets the text to be that same name. :slight_smile:


Here’s the .ms file


That’s all the basic functions covered! That was fast man!
How do you see the integration into the current script happening?
I think the script needs more work before it can handle loading the .ms files.
I’d like to get linking working before extracting the UI elements and nodes to .ms files.
However, if you’d like to integrate it into the current script, that would be interesting to see, because it would give us the ability to start troubleshooting and expanding upon the drag and drop idea.

It’s your call. :slight_smile:
I’ll be working as fast as I can to get the linking and writing of values done.


Well, I figured I’d give you the .ms script so you can implement it if you want. Or I could tinker around with it more and figure out a better setup.

What might be better is having a dropdown for categories, then below that a side-scrolling window showing all the thumbs of the buttons, etc., that you can work with.

So you would switch to UI Elements, and it would update the thumbs with those button images; or if you switched to operators, it would show thumbs of your other nodes, like multiplication and so forth.


@DenisT: I’ve been thinking more about how to bridge the code and node worlds. Here is a concept of ‘opening a custom node’ (a .ms file represented as a node):

The first comment has a typo in it, it should read “–this is the code view of the writeSource node”.
The buttons on the bottom represent save and cancel; it’s just a quick way to access the real code while still retaining the ability to traverse the script as nodesets. I’m sure something in dotNet could be made even better than this example (which was made in about 20 seconds using VMEv1).

Maybe we could drag and drop nodes into this code snippet popup, to add nodes directly into actual code? Hmm…