new compositor - very early stages


#25

Playmesum,

“what happens when all your images don’t fit in memory?”

this app will be optimized for fast cards with 256 MB-1 GB of onboard memory.

“What happens when you try to composite 2 2k float images together and they don’t fit on the GPU?”

You’re right-- system performance could be slowed by large images with a lot of processing on them. Those trees can be written to disk. I may include a write-to-RAM caching system later, but it will not be built directly into the flow of the app, as it would slow performance when not needed.

Also, sorry for the vague wording in my last response. I’ll try to explain this better-- I have a rough simulator built for testing texture/PBO performance. I’ve simulated the following:

  1. glTexSubImage2D() (straight texture uploads)
  2. a single-PBO image pusher
  3. a double PBO, using two pixel buffer objects.

I’ve found the double PBO to be the fastest. I then simulated caching back to system RAM and found an appreciable decrease in speed on a modest system (approx. 50% when using glReadPixels, and 20-25% with asynchronous readback), due to the bottleneck of sending the data back to system RAM and then back to the GPU for display.
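For anyone curious, the double-PBO path looks roughly like the sketch below-- a stripped-down illustration, not the simulator’s actual code, with placeholder sizes and a dummy fillFrame() helper:

```cpp
// Minimal sketch of the double-PBO upload pattern (two pixel buffer
// objects, ping-ponged so the CPU fills one while the GPU reads the
// other). The sizes and fillFrame() are placeholders, not the
// simulator's real code.
#include <GL/glew.h>

const int W = 2048, H = 1556;                             // e.g. a 2K plate
const size_t BYTES = size_t(W) * H * 4 * sizeof(float);   // RGBA float

GLuint tex, pbo[2];

// Placeholder: in the real app this would decode/copy frame N into dst.
void fillFrame(void* dst, int frame) { (void)dst; (void)frame; }

void init()
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, W, H, 0,
                 GL_RGBA, GL_FLOAT, 0);

    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, BYTES, 0, GL_STREAM_DRAW);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}

void upload(int frame)
{
    int curr = frame % 2, next = (frame + 1) % 2;

    // 1. Update the texture from the PBO that was filled last frame;
    //    glTexSubImage2D returns quickly since it reads from the PBO.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[curr]);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H, GL_RGBA, GL_FLOAT, 0);

    // 2. Meanwhile, map the other PBO and let the CPU fill it.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[next]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, BYTES, 0, GL_STREAM_DRAW); // orphan
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);
    if (dst) {
        fillFrame(dst, frame + 1);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}
```

The win is that the texture update reads from one buffer while the CPU is free to fill the other, so the upload and the decode overlap.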

The two trouble spots for performance without RAM caching are: 1. playback will slow down as processing/layers are added, and 2. letting the paint tools save the frames (a later, optional write-to-RAM step can be implemented for previewing, followed by a write to disk).

Also, about the different nodes-- correct, the ‘creator’ nodes have no inputs in the node flow-- some are image-input nodes, and some will create geometry, etc… I have yet to fully optimize the app’s flow, but sorry for the confusing wording anyway…

So, where I’m at right now with the app is:
-must optimize the flow
-must design and implement the node-tree system
-must implement the GUI
-EDIT: also thinking about a plugin-maker as a separate app, so others can add GLSL/Cg shaders as plugins and semi-automate the extra coding that would otherwise be needed

I’ve noticed several people with much more programming experience attempt a node-based compositor and stop; I don’t want to get lost writing the backend or hand-coding extensive GUI functions. I’m just trying to get it to alpha as soon as possible so people here and elsewhere can dig through it and hopefully make changes/additions.


#26

Odub,

SDL sounds like an excellent option during beta once the app is up and running, but I don’t want to get myself lost with a switch-over-- I need to keep this as simple as I can until I have a working foundation. I’ve been concerned about the Windows-centricity of Visual Studio, as I may switch development of the app over to Linux once the app is in alpha.


#27

Questions for everyone:

-what features would you like to be scriptable?
-in Python?

Any other features anyone would like to see in a compositor?


#28

I’d say just glue the whole thing together in Python. Write the important stuff in C++-- your GL library and your base node and graph classes-- then derive everything else in Python. Since all the script will be doing is handling the UI, setting up framebuffers, and calling GL shaders, you won’t have to worry about speed.

This means every part of it will be customisable… it’ll also make developing it a lot quicker.
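To illustrate the split (just a sketch-- the names are made up, and the binding layer could be Boost.Python, SWIG, or whatever you prefer):

```cpp
// Illustration only -- the kind of minimal C++ base classes I mean.
// Everything else (file readers, colour corrections, the UI) would be
// derived from Node in Python through a binding layer such as
// Boost.Python (just one option; names here are made up).
#include <vector>

class Node {
public:
    virtual ~Node() {}
    // process() renders this node's result for the given frame,
    // typically into an FBO-backed texture, and returns its GL id.
    virtual unsigned int process(int frame) = 0;
    void addInput(Node* n) { inputs.push_back(n); }
protected:
    std::vector<Node*> inputs;
};

class Graph {
public:
    Node* output;                       // the node whose result gets displayed
    std::vector<Node*> nodes;
    Graph() : output(0) {}
    unsigned int render(int frame) { return output->process(frame); }
};
```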


#29

Playmesum,

cool.

I’m thinking ahead to a final render engine for export, since I’m not all that happy with OGL filtering for final renders… As Cg has many similarities to RenderMan, I’m looking at the possibility of integrating CGKit for Python and RenderMan, as this would give the system an incredible amount of scriptability as well as adding an excellent, fast, flexible, customizable render engine. This is subject to change, but I am reviewing the possibility extensively.


#30

It would be very nice to have another visual shader editor - the problem is the same as a compositor (it could easily be argued it is the same problem - and a hard one too).

An OpenGL preview would be especially nice, as the only one I know of that does this right now is Houdini.

One thing that could be a problem is that most RenderMan renderers expect their textures in proprietary formats. Usually these are tiled TIFF files, or based on them-- but most other software only ever bothers with scanline TIFFs. This extra conversion step could be a hassle.
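(For prman the conversion itself is a one-liner with its txmake utility-- roughly `txmake input.tif output.tex`-- but it’s still an extra pass over every frame and another set of files to manage.)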

I think Gelato and Mental Ray(?) may let you write your own texture import plugins. Neither is RenderMan compliant.

You may also be able to optimize more if you keep everything 2D-- although, saying that, it seems most compositing packages grow some sort of 3D projection system eventually.

Simon

PS I agree with Anders about having it set up as a Python API with a GUI on top-- it makes it very flexible.


#31

Actually, thinking about it, you probably could support other file formats by using a shadeop DSO. However, you’d then need to deal with filtering, memory, etc. yourself-- which is why most people don’t bother. At least prman 13’s SIMD DSOs let you do this more efficiently than before. You’d also need to compile it for each renderer you want to support.

Simon


#32

Odub,
I finally got a good look at SDL-- very attractive. I may be going in that direction for further development. :slight_smile:

Anders and Simon,
Agreed-- I’m knee-deep in Python at the moment. I have scripts ready for camera shake and the like, and I’m trying to figure out how to add Python scriptability to the GLSL/Cg shaders while simultaneously making them animatable in the graph editor, yet keeping them user-writable and editable…
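The rough direction I’m leaning in (a sketch only-- names and structures are placeholders, nothing final) is to treat each GLSL uniform as a parameter object that holds keyframes, so the graph editor and Python scripts both just set keys on it:

```cpp
// Rough sketch only -- how a GLSL uniform might become a scriptable,
// animatable parameter. Names and structures are placeholders, not
// final code; keys are assumed to be sorted by frame.
#include <string>
#include <vector>

struct Key { float frame, value; };

struct Param {
    std::string name;            // matches the uniform name in the GLSL source
    std::vector<Key> keys;       // set from Python or from the graph editor

    float eval(float frame) const {
        if (keys.empty()) return 0.0f;
        if (frame <= keys.front().frame) return keys.front().value;
        if (frame >= keys.back().frame)  return keys.back().value;
        for (size_t i = 1; i < keys.size(); ++i)
            if (frame < keys[i].frame) {
                float t = (frame - keys[i - 1].frame) /
                          (keys[i].frame - keys[i - 1].frame);
                return keys[i - 1].value + t * (keys[i].value - keys[i - 1].value);
            }
        return keys.back().value;
    }
};

struct ShaderNode {
    std::string fragmentSource;  // user-writable/editable GLSL text
    std::vector<Param> params;   // evaluated and pushed via glUniform1f each frame
};
```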

note: the RenderMan compliance wouldn’t be for the first generation of the compositor-- definitely down the road a bit. I may be including a proprietary RenderMan-compliant renderer with the app at that time…


#33

I don’t think having a separate final renderer from what you’re using in the GUI is a good idea. It means you have to maintain two different render paths. Trying to retarget a 3D renderer to produce 2D images just for the sake of having a shading language is a bit of a waste: after all, a well-constructed DAG is a visual programming language. If you have a good set of base components you can do almost anything you want, and you can always expose a node that lets you put in custom fragment shaders.
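That custom-shader node really is just a few GL calls wrapped around whatever string the user typed-- something like this sketch (error checking and the quad drawing left out):

```cpp
// Sketch of the custom-fragment-shader node: take whatever GLSL the
// user typed in, compile and link it. (Bare bones -- no error
// reporting, no uniforms, and drawing the quad over the input texture
// happens elsewhere in the graph evaluation.)
#include <GL/glew.h>
#include <string>

GLuint makeUserProgram(const std::string& fragSrc)
{
    const char* src = fragSrc.c_str();
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &src, 0);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;                 // glUseProgram(prog) before drawing the quad
}
```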

I don’t understand your concerns about OGL filtering. Surely you don’t want to do any filtering at all? There’s a chapter in GPU Gems II by the people who developed Apple’s Motion that includes some information on making sure the GPU does predictable pixel lookups-- perhaps that’s what you mean?
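If that’s what you mean, the usual recipe is simply to switch filtering off and draw the image 1:1-- a minimal sketch:

```cpp
// The usual recipe for bit-exact lookups: nearest filtering, clamped
// edges, and a viewport that maps texels 1:1 onto screen pixels.
#include <GL/glew.h>

void setupExactLookup(GLuint tex, int imageWidth, int imageHeight)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glViewport(0, 0, imageWidth, imageHeight);  // then draw the quad 1:1
}
```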

pmsc.


#34

Playmesum,

the app is eventually going to have a very robust 3D system, with .obj import, lights, a 3D camera, a 3D tracker, HDRI import (and painting), deformable meshes, etc… I already have code for this stuff at least halfway written, just not implemented yet. I have 32x32 bicubic filtering ready to go in GLSL, which should be more than adequate for 2D; I’ll need more options for the more advanced 3D stuff later. The separate renderer is being considered because of several requests to make the app able to render on a render farm without OpenGL hardware… Again, this is a long way down the road, not version 1…

For Version 1, I have code now for creating nodes, defining edges and noodles, etc… I have yet to complete the design of the plugin architecture to include the needed GUI elements, graph editability, and Python scriptability. I also still need to design and implement the graph editor…


#35

also, regarding video cards, I did a bit of research on inexpensive cards and found one with 512 MB of RAM, 10.7 GB/sec of memory bandwidth, a fill rate of 2.8 billion pixels per second, and an HDTV output for $115 USD (!). The 256 MB version is $80. 256 MB will likely be the minimum requirement for doing HD with the compositor. These cards are SLI capable (as the compositor will be).
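As a rough sanity check on that minimum: a single 1920×1080 RGBA frame at float precision is 1920 × 1080 × 4 channels × 4 bytes, roughly 33 MB, so a 256 MB card only holds a handful of those once the framebuffer and intermediate results are counted-- the 512 MB card gives considerably more breathing room.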

I would like to take advantage of the HDTV outputs on these cards for previewing composites without additional hardware.


#36

Hi everyone, I kinda stumbled across this thread and I have to say I don’t have a clue about graphics programming. But I think the most common problem when developing a program inside the open-source community is this:
it is made by a programmer, for a programmer.
Please don’t take me wrong. This is a problem, but most of all a challenge to be addressed throughout the software development pipeline.
Most compositing software is similar, and I’m speaking only in terms of the GUI-- the rest I don’t know much about. So here is my suggestion: forget about the usual compositing GUI and think about the opposite. For instance, since you have to manage a node system (which I think is one of the best inventions ever), why not present those nodes in something similar to Mac OS X Tiger’s Dashboard or Exposé-- it’s a great user interface.
Also remember that we use both hands to work, not only the right one. Try to make something similar to Maya’s hotbox for menus like color correction and filters.
Bear in mind that these are only suggestions and I don’t even know how to make them work for you. Like I said, I’m not a programmer, only an end user.
About your software: is it going to be open source?
I hope so.
hugs, bern


#37

Bernie,

yes, I am considering several keyboard shortcuts for accessing the image, node, and graph windows, as well as play, stop, go to the beginning, etc., so it can be a two-handed process.

also, I mentioned previously that if I use Qt to create the program, I will be bound to a GPL license for the software. I’m not sure if I’m going to use Qt yet… I would actually like to make it open source for the first few releases, then offer updates for free. Afterwards I will also be making a commercial compositor with some of the same components (maybe compatible) and additional features.


#38

Not to confuse matters, but you might want to consider wxWidgets too: http://www.wxwidgets.org/

Simon


#39

Render,

yes, I have downloaded wxWidgets to look at as well. I’m also currently checking out FreeMat, the free equivalent to MATLAB, for some additional processing (it includes a compiler for apps/processors made within MATLAB/FreeMat).

I’m hoping to have a serious portion of the code posted by mid-December.


#40

sorry for the delay-- I just bought an Intel Mac Pro. Still working on the app development. OS X has some very interesting development tools: Xcode, an OpenGL shader builder/tester, and Quartz Composer, which apparently can be used to create FxPlug plugins for Motion and Final Cut Pro, etc…

Has anyone seen Conduit? Sheesh! $150 for an OpenGL node-based compositor inside Motion and Final Cut! I’ve heard it’s a bit noodly to get around in, though…


#41

Hello,

Extremely interested in your compositing endeavour. Basically, I’m thinking of starting my own.

And-- don’t listen to people who claim that DAG traversal is hard. I implemented it in ShaderMan in two evenings (actually, over a weekend). Just good planning, and probably a little bit of OOP-- but you’re on C, so: good planning.
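The traversal itself is a few lines once the planning is done-- something in the spirit of this sketch (illustrative only; a real graph also wants cycle detection and dirty flags):

```cpp
// Minimal DAG evaluation sketch: depth-first, each node evaluated once
// per pass. Illustrative only -- a real graph also wants cycle
// detection and dirty-flag propagation.
#include <vector>

struct Node {
    std::vector<Node*> inputs;
    bool evaluated;              // clear on every node at the start of a pass

    Node() : evaluated(false) {}

    void evaluate() {
        if (evaluated) return;
        for (size_t i = 0; i < inputs.size(); ++i)
            inputs[i]->evaluate();   // upstream nodes first
        // ... run this node's own processing here ...
        evaluated = true;
    }
};
```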

Please keep us informed, and you can count on me regarding the testing and publicizing.

Alexei Puzikov
renderman.ru - co-creator, admin
dream.com.ua - ShaderMan and other stuff
[unnamed studio in Canada] - pipeline


#42

Alexei,

thank you for your kind words. ShaderMan is certainly an inspiration for this program.

The date for posting code is being moved back approximately a month (to mid-January), due to the platform switch and a commercial project in the works.


#43

Any news since then? A wonderful new Mac OS X Universal build alpha? :slight_smile:


#44

If you’re going to open the source, why not register it as a project on SF.net or similar? They provide source control (CVS, SVN) so that will encourage more developers to contribute patches, etc.

(I do hope you’re currently using version control too.)