Real-Time 3D modeling collaboration.


Imagine you could call your buddy Tony, who lives halfway around the globe, and tell him:

YOU: “Hey pal, what’s up? Listen, I’m kind of stuck with this character I’m working on. Could you give me a hand with it? I’m in kind of a hurry.”
TONY: “Sure thing, you finish the face while I complete the clothing. Just give me your IP address.”

Now imagine it is real!!!

The Blender Foundation just released a Verse-enabled version of Blender, making real-time 3D collaboration a real and present thing. I just tested it myself and it works!!

Verse is a collaboration protocol for 3D data… and it isn’t limited to Blender. Gimp, Maya, and 3DS Max Verse plug-ins are being worked on right now… (I believe a Photoshop plug-in is also in the works). More details at the end of the linked URL.


Now that sounds pretty handy :smiley:


That’s pretty cool. I don’t even want to think of all of the potential problems that this could cause in the app and in any workflow, but it’s definitely nice to see it at least being attempted.


Hehe…“Hey man…can you make the arms/legs/torso/face/hair while I work on the eyelashes???”


“I really want to apply meshsmooth to this character, but currently it is not shaped like the most awesome firedemon ever imagined and more like a box. Could you help me out?”

It’d be nice for ‘real-life’ learning, though: you could give lessons across the net and show people in real time what they should do here and there :slight_smile: Imagine the teacher making something and telling the students: “Good, now make your own, here are your cubes.”


It isn’t as experimental as it sounds. The guys behind VERSE have been working on it for years now. Verse itself is already on version 2.0 (meaning a whole 0.x and 1.0 experience to polish it up). More details about the VERSE protocol at and

Blender isn’t the first application to adopt the VERSE protocol, there were many others before it, but Blender is the first “full featured” 3D app to use it. (Noticed the quotation marks? That’s because I don’t intend to start a discussion about Blender’s feature set vs. X app’s feature set :slight_smile: )

BTW, I did some internet digging, and it seems the GIMP plugin is already available at and some real use examples can be seen at


modeller A: that’s my pixel!

modeller B: No, it’s MINE!

*push and pull contest ensues…


Haha, wouldn’t it be cool if the other person’s view were represented like a camera in 3D space, so you could see where he is looking… Perhaps you could ‘check out’ regions of the mesh and then check them in when finished. Also a ‘free for all’ option as well, of course. :wink:

(If that’s exactly how it works already, then forget what I said.)


It is kind of a “Free for all” way of working.

It works like this:

- You are working on some scene and decide that you would like some collaboration on one or more objects already present in your scene.
- So you connect to a Verse server (running locally or over the internet) and tell your 3D application to publish your object on the Verse server.
- Everybody else connected to the same Verse server can now see a new subscription option… it is up to them to decide whether they want to subscribe to that new object or ignore it.
- As soon as somebody subscribes to the object you published, any changes that person makes to the object are shown in real time in your application, even while you are modifying the object yourself (yes, it can lead to push and pull vertex contests).
- At any time you can disconnect from the Verse server and keep working on the object as a usual local object.
According to the Verse FAQ, subscribed objects can be anything from a mesh to an image… so you could be giving the final touches to your model while the texture artist is trying out and modifying texture maps on it.
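The publish/subscribe flow above can be sketched in a few lines of Python. To be clear, this is a toy mock-up, not the real Verse API — every name here (`ToyVerseServer`, `publish`, `subscribe`, `edit`) is invented for illustration:

```python
# Toy sketch of the publish/subscribe flow (NOT the real Verse API --
# all class and method names are invented for illustration).

class ToyVerseServer:
    """In-memory stand-in for a Verse server: tracks published
    objects and notifies every subscriber when one changes."""

    def __init__(self):
        self.objects = {}       # name -> object data (here: a list of vertices)
        self.subscribers = {}   # name -> list of change callbacks

    def publish(self, name, data):
        # Step 2: a client publishes an object on the server.
        self.objects[name] = data
        self.subscribers.setdefault(name, [])

    def list_objects(self):
        # Step 3: everyone connected can see what is available to subscribe to.
        return list(self.objects)

    def subscribe(self, name, on_change):
        # Step 3/4: a client opts in and will receive future changes.
        self.subscribers[name].append(on_change)
        return self.objects[name]

    def edit(self, name, index, new_vertex):
        # Step 4: any subscriber's edit is pushed to everyone else immediately.
        self.objects[name][index] = new_vertex
        for callback in self.subscribers[name]:
            callback(index, new_vertex)

server = ToyVerseServer()
server.publish("character_head", [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])

seen = []
server.subscribe("character_head", lambda i, v: seen.append((i, v)))
server.edit("character_head", 1, (1.0, 0.5, 0.0))   # a collaborator moves a vertex
print(seen)  # -> [(1, (1.0, 0.5, 0.0))]
```

The real protocol does the same kind of thing over the network, streaming small deltas instead of whole objects.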

The officially released Blender version is an alpha preview of the upcoming Blender 2.40, and it only allows meshes to be subscribed. By the time the final 2.40 is released it should allow all types of objects to be subscribed. Even being an alpha release, it is quite stable; I just spent almost an hour playing with Verse over a LAN, and the whole time the operations seemed to happen in real time and without glitches.


Really, I think this is the future. Imagine the increase in workflow speed. A modeler creates a character’s head… begins working on the torso. At the same time, texturing begins on the head. Next, modeling is completed. The animator begins to animate. Mid-animation, the texturing is completed and added. A belt is modeled on the character and updated in real time. If it’s done correctly and smoothly this could be very useful. But things like UV mapping, weight mapping, etc. would have to blend in smoothly when the model is updated. I can see a lot of problems with it, along with a lot of potential.


Models take a long time to make, at least quality models do. You’d be better off investing in some other asset management software IMO. I applaud their efforts, but I think they’re misdirected.


True. But at the very least, this could have some potential for training.


There is another freeware way of doing this without plugins, and all applications are supported by the way this works. It is called VNC.

RealVNC allows you to set up a server (or connect to another person’s) so the two computers can be linked together. Whatever is on your computer shows up on the other computer. The other person does not need to have the software installed (e.g. MAX) in order to see your version of MAX running. What you do on your computer shows up on the other, and the other person can also move the mouse cursor on your computer and use the application as well.

For example, I know very little about rigging and animation, but my partner is the expert in this area. He lives in California and I am in Israel. We set up VNC not too long ago and I fired up MAX and loaded a model. He could see my entire desktop on his computer. I sat back and watched him build a rig on MY computer in my version of MAX. At a certain point I asked if I could take over and do some of the rigging. Now he was watching me do it and advising me. We were using Skype to talk while we visually worked through VNC.


Looks nice as well, though the big difference is that Verse is completely free (and open source) and VNC isn’t. Also, most would opt for the commercial version of VNC, since it seems to have the required security features.

Plus, VNC is a bit different: VNC is complete desktop control, while Verse is real-time interaction between modelers, texture artists, etc.

Unfortunately, because of the stigma that everyone seems to attach to the horror that is Blender :rolleyes: , Verse probably isn’t going to get the attention it deserves immediately. I don’t think Blender is taken as seriously as it should be either, but that’s another thread altogether. Granted, I rarely use Blender beyond experimenting with its new tools. That said, I do think Verse has some potential. I think people should at least try Verse in a work situation before dismissing it.


Desktop sharing (à la VNC) has been a freeware reality since… well, since forever!!

It is one of the main concepts behind UNIX/Linux’s X11 architecture, and the capability has been shipping with Windows at least since Windows 3.11 (there is even a whole category of malware to classify the possible exploits for this on Windows; anybody remember the NetBus trojan?). Mac OS X, being a specially revamped UNIX, also has desktop sharing capability… in other words, pretty much every single computer in use today can do that, and you don’t need to buy any expensive tool.

But Verse is very different from virtual desktop sharing:

First, it isn’t limited to two people… as long as your network can handle it, the entire CGTalk userbase could be collaborating on a single project (wishful thinking, no network is that strong).

Second, all parties involved keep working at the same time. With VNC-like programs, you have to let go of your console for others to take over… so there is no real increase in productivity. In Unix at least you can share per application, but the one-person-at-a-time limit remains.

Third, in your example both you and your friend were working in the same app (MAX), but with Verse you could be working with Max while your friend works with Maya, and the data flow would be seamless (a funny example, considering the recent Autodesk-Alias developments). Likewise, a second friend could be working with Photoshop, and their data would be seamlessly integrated into both Max and Maya.
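That last point is the key difference: Verse shares *scene data*, so any number of clients (possibly different applications) hold their own synchronized copy and receive small deltas, while VNC streams one desktop’s pixels to one viewer at a time. A toy sketch of the idea, with all names invented for illustration:

```python
# Hedged sketch: many Verse-aware apps each keep a local mirror of the
# shared object and apply incoming change deltas. Nothing here is the
# real Verse API -- Client/broadcast are invented for illustration.

class Client:
    """Stand-in for any Verse-aware app (Max, Maya, Photoshop, ...)."""
    def __init__(self, app_name):
        self.app = app_name
        self.local_copy = []    # this app's own mirror of the shared mesh

    def receive(self, delta):
        # Apply a small change description, not a redrawn screen.
        op, payload = delta
        if op == "add_vertex":
            self.local_copy.append(payload)

def broadcast(clients, delta):
    # Every party gets the change at once -- nobody gives up their seat.
    for c in clients:
        c.receive(delta)

clients = [Client("Max"), Client("Maya"), Client("Photoshop")]
broadcast(clients, ("add_vertex", (0.0, 1.0, 0.0)))

# All three applications now see the same geometry:
print([c.local_copy for c in clients])
# -> [[(0.0, 1.0, 0.0)], [(0.0, 1.0, 0.0)], [(0.0, 1.0, 0.0)]]
```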


This is cool in ways I have yet to imagine. Thanks for sharing…


Wow, that is really VERY cool, I’m dying to try it out now. Only I can’t find the so-called Max/Maya plugins (or Photoshop, for that matter).
Are they on the uni-verse site? I click the downloads button and all I get is a small popup with a black screen :shrug:
And is uni-verse trying to make a one-plugin-fits-all type thing, or are they making lots of little plugins for different apps?.. wishes for XSI


Do you see what people are working on in real time, or do they have to commit to the server first?


It’s about freakin’ time! openCanvas for 3d! :slight_smile:

Speed-modelling comps will go into overdrive with this baby.


You see it in real time. It is kind of freaky, as if you’ve got a ghost in the shell.

Layer01, it is one plug-in for each app. So far I have tested the plug-ins for Blender and for The Gimp… I believe the plug-ins for the other apps aren’t finished yet. But who knows, the Gimp plug-in isn’t finished yet either and it already works like it should :shrug:

BTW, Here on the C4D forum, some users are already suggesting to write an open source Cinema-Verse plugin.