View Full Version : this may be a dumb question but....
10 October 2009, 07:27 PM
I'm assuming that a compositing program like Shake is generally used only for still images, or to help define texture colors, specularity, etc., which can then be tweaked in Maya or some other 3D package to render out a final beauty pass. Is this process of compositing ever used for full animations?
Otherwise, I would think this would become very time-consuming if you had to render out all of these separate passes for every frame of an animation to composite in Shake.
Am I correct in my assumption?
10 October 2009, 01:47 AM
Yes. Compositing programs, used in tandem with multiple passes from the 3D application, give you the ability to tweak specific elements of the render without sending the whole shot back to 3D for a re-render. The thought behind it is that if it can be fixed in post, it won't take more time on the render farm to correct, and other shots can go through the pipeline without having to wait for access to the farm.
10 October 2009, 02:01 PM
I'm just a little guy but even so, compositing is probably the single most important technique that I use. Dunno, I probably over-use it. But I've never had nearly "enough" computer-power. From the very beginning I had to learn how to economize; how to do stuff without a "farm."
I use the 3D software to produce the content, and I separate the various layers of possible output material: shadows, specularity, and so on. The idea is that once the computer has done this once, the rest of it is "tweaking." Which is basically a two-dimensional mixdown process ... ergo, "very fast." There might be twenty or thirty channels of information going into a scene and there is a knob on every one. All of those channels of information were calculated ahead of time.
I do use Blender for most of that mixdown work. In other words, there is one file which produces the underlying layers, and another file which contains a compositing network, and believe it or not, Unix Makefiles (or equivalent Perl scripts, these days) which cause the appropriate re-renders and re-mixing to occur at the proper time.
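The Makefile-style setup described above boils down to one rule: re-run a render or a comp only when one of its inputs is newer than its output. A minimal sketch of that dependency check in Python (file names and the render/comp commands here are hypothetical placeholders, not the poster's actual scripts):

```python
import os

def needs_rebuild(output, inputs):
    """True if `output` is missing or older than any of its `inputs`."""
    if not os.path.exists(output):
        return True
    out_time = os.path.getmtime(output)
    return any(os.path.getmtime(src) > out_time for src in inputs)

def build(output, inputs, command):
    """Run `command` only when the output is out of date, like a Makefile rule."""
    if needs_rebuild(output, inputs):
        command()

# Hypothetical usage: re-comp a shot only if a pass changed.
# build("shot01_comp.exr",
#       ["shot01_diffuse.exr", "shot01_spec.exr"],
#       lambda: run_compositor("shot01"))   # run_compositor is assumed
```

This is exactly what `make` gives you for free; a script version just makes it easier to wire into a render pipeline.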
So, "if all you want to do is to make that shadow just a little darker, or to change the color of that reflection," you can do it, and you can do it now. Not quite right? Twist the knob a little bit and hit "Play" again. Instant gratification... I like that. ;)
10 October 2009, 07:22 PM
The process of writing buffer information out is not so time-consuming on a frame-by-frame basis, considering the advantage you gain by tweaking the passes in comp. The render time is the same as for the beauty pass, plus the time to simply write out the individual passes, because calculating the beauty means calculating all the passes first anyway and then compositing them back into the final beauty; nothing more than that.
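The point above can be illustrated with a toy example: for additive light passes, the beauty is just the per-pixel sum of the separate contributions, so a 2D tweak (say, dimming a reflection) replaces a full 3D re-render. Pass names here are illustrative; real AOV sets and their combine rules vary by renderer.

```python
import numpy as np

h, w = 4, 4  # a tiny "image" stands in for a full frame
rng = np.random.default_rng(0)
diffuse    = rng.random((h, w, 3))
specular   = rng.random((h, w, 3))
reflection = rng.random((h, w, 3))

# The renderer effectively computes the passes first, then sums them:
beauty = diffuse + specular + reflection

# Re-comping in 2D (e.g., halving the reflection) avoids a 3D re-render:
tweaked = diffuse + specular + 0.5 * reflection
```

Per frame, writing the extra passes is mostly disk I/O, which is why the cost over the beauty render alone is small.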
10 October 2009, 03:14 AM
There's a great museum in Tucson, Arizona called the Center for Creative Photography. You can actually get your hands on Ansel Adams' large-format photo negatives and do things with (copies of) them in a darkroom. (Ansel even created hands-on lessons, if you're so inclined ...)
And what you really see is: "the picture is made in the darkroom." You can hold in your hands the negative to Moonrise Over Hernandez (http://www.hcc.commnet.edu/artmuseum/anseladams/details/moonrise.html) and it looks nothing at all like the final print.