Baking Plug-ins?


#1

Hello, gentlemen

Ramjac soft and Paralumino are working together on a new XP-based plug-in. All is going normally, but we have a problem: access to animation channels (same as XP-interchange and some other things) is restricted in Camera. Thus we are forced to look for solutions like: “let the user run a preview; the plug-in saves the animated model in a file that is “substituted” in Camera instead of the usual plug-in generate phase”. Hmm… maybe it makes sense to “bake” other plug-ins too (not only for our concrete task)?
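
A minimal sketch of the bake pass we have in mind (all function names are hypothetical placeholders, not the real SDK):

    #include <stdio.h>

    /* Hypothetical bake pass, run during the user's preview: evaluate the
       plug-in for every frame and dump the resulting mesh snapshots to one
       file that Camera later reads instead of running the plug-in's usual
       generate phase. */
    void bake_preview(int first_frame, int last_frame, const char *bake_path)
    {
        FILE *f = fopen(bake_path, "wb");
        if (!f)
            return;
        for (int frame = first_frame; frame <= last_frame; ++frame) {
            /* evaluate_plugin(frame);          animate channels, deform mesh */
            /* write_mesh_snapshot(f, frame);   vertices, normals, UVs        */
        }
        fclose(f);
    }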

Positives:

  • fast rendering from the saved file, avoiding plug-in re-calculation at each pass (bitmap shadows, glows, etc.);

  • no problems with weight maps, XP, etc. - all these things are absent in Camera;

  • portability - you can copy the file to all your render slaves; you don’t need the plug-in itself if its result is baked.

Negatives:

  • numerous headaches with the saved file (it can be really large);
  • motion blur (maybe it can’t always be provided adequately - we aren’t sure).

Your opinion, gentlemen?


#2

It would seem that with Camera’s limitations, there isn’t really a choice here. Perhaps we should be thinking of methods to make the baking process efficient, and see if it’s even a viable solution. Personally, I think the idea of baking the plug-in calculations and results is a fine one, provided its implementation is effective and predictable. Particle systems like Dante, Fyreworks, and PPPro already use a somewhat similar approach: they can write out a cache of particles to assist Camera. From what I remember from Blair, Camera lacks the ability to look forward or backward in time, thus a particle cache is needed.

Plug-in baking is definitely an acceptable solution in my book.


#3

One idea I completely missed the boat on is how NextLimit handles fluid data. They have a file for every single frame of the animation, named with the frame number. Yes, there will be a ton of files, but you’re far less likely to run into 2 GB file-size limits.
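
The naming scheme itself is trivial to implement; a sketch (the base name is just an example):

    #include <stdio.h>

    /* Build NextLimit-style per-frame cache names: fluid.0001.bin,
       fluid.0002.bin, ...  Each frame's file stays small, so no single
       file ever approaches the 2 GB limit. */
    void frame_cache_name(char *out, size_t size, const char *base, int frame)
    {
        snprintf(out, size, "%s.%04d.bin", base, frame);
    }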

The big problem with any plug-in-related scratch file is that Renderama knows nothing about it and wouldn’t be able to automagically send it to the slaves.


#4

Hi, Brian

They have to write a cache because particles need their “history”, but that has nothing in common with baking.

Can’t imagine such “Travelling in time” at render time :slight_smile:


#5

I’m sure a particle cache and baked plug-in data are definitely two different things. I look at it from the perspective that if one type of system can do it, why wouldn’t baking plug-ins work too? Seems feasible to me.

My understanding of simulations is that it’s important for them to be able to look forward and backward in time. My guess is that this is for dynamic accuracy, but I don’t really know for certain. I’m not a programmer. :wink: Since Camera can’t do that, a particle cache must be generated.


#6

Hi, Brian

But, Brian, common logic is exactly the same for programmers and non-programmers (programmers are only more familiar with concrete implementation details, nothing more). Imagine a renderer that can see “past and future”. Do you think it would be a big happiness? Frame 5 asks frame 4, but frame 4 asks frame 5… What is it called in English? A “deadly embrace”, right? And how many times would frame 5 need to repeat the whole generation process at frame 4? Yes, Camera (not camera) doesn’t do this simply because it’s an efficient program :slight_smile:
“Baking” looks like a rational idea overall, but we aren’t sure about the tech details (especially this motion blur, grr…)


#7

I think it’s called an “infinite loop” in English. =)

I think Mental Ray works like that, examining the next frame (not sure about the previous one) in order to calculate the motion blur. I’m not sure about this, but it seems logical, as I don’t know how else you’d figure out how much blur to add. Again, I’m no programmer, so take my words with a grain of salt.

I’m not familiar with the lineage, but it appears to me that it’s time EITG opened up Camera a bit more. Camera is a hot renderer, no doubt about it, but there are some things that plug-in developers should simply have access to in order to make their cool plug-ins more robust.

I’m sure there’s a lot I don’t know about this issue, but if you look at the success of a renderer like Mental Ray, one might argue it has a lot to do with how open it is. I think Camera is every bit as good as Mental Ray, and it’d be in the best interest of EI to open up Camera a bit.

Just my 2 uninformed cents. =)


#8

Baking here sounds like a sound solution. ‘Running the preview’ to save out the file already happens with the RealFlow plug-in. I can’t think of any other way to do it.

As for motion blur… is there any need to look backward and forward in time?
When using Blaster you must render a minimum of two frames to get motion blur; perhaps something similar would be the case here.

The benefits seem rather large here; anyway, there is only one way to find out :slight_smile:
Ian


#9

Hi, Brian S., Ian

We never shared the opinion that “Camera should be opened” :slight_smile:
In this concrete case (motion blur) we cannot blame our host for “not enough service/info”.

A plug-in doesn’t need to know how motion blur is drawn, etc. For each vertex, a plug-in passes a “blur position” (the vertex position at the previous frame) to the host; that’s all the render engine needs. Plug-ins are also informed about any object’s motion, scaling, etc. (all linear transformations). Thus in many cases the motion blur can be calculated very simply, as
blur_vertex_position = current_vertex_position * blur_transform_matrix
However, this doesn’t work for plug-ins with mutable topology (like MrBlobby) and for skinned groups. In those cases a plug-in needs to interpolate the final blur position based on the blur of its child groups. For example, MrBlobby (btw: this plug-in is open :slight_smile: ) calculates blur as an average of the blob sources’ blurs. We see no way to repeat/reproduce such blur with “baking”.
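
To make the two cases concrete, here is a minimal sketch in C (the vector/matrix types and the blob-source handling are our own illustration, not the actual SDK API):

    #include <stddef.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float m[4][4]; } Mat4;  /* row vectors, translation in row 3 */

    /* Rigid/linear case: the blur position is just the current position
       pushed through the host-supplied blur transform. */
    Vec3 blur_rigid(Vec3 p, const Mat4 *b)
    {
        Vec3 r;
        r.x = p.x*b->m[0][0] + p.y*b->m[1][0] + p.z*b->m[2][0] + b->m[3][0];
        r.y = p.x*b->m[0][1] + p.y*b->m[1][1] + p.z*b->m[2][1] + b->m[3][1];
        r.z = p.x*b->m[0][2] + p.y*b->m[1][2] + p.z*b->m[2][2] + b->m[3][2];
        return r;
    }

    /* Mutable-topology case (MrBlobby style): vertex counts change between
       frames, so there is no per-vertex correspondence to the previous frame.
       Estimate the previous position by averaging the motion of the blob
       sources instead. */
    Vec3 blur_blobby(Vec3 p, const Vec3 *src_now, const Vec3 *src_prev, size_t n)
    {
        Vec3 d = { 0.0f, 0.0f, 0.0f };
        for (size_t i = 0; i < n; ++i) {
            d.x += src_prev[i].x - src_now[i].x;
            d.y += src_prev[i].y - src_now[i].y;
            d.z += src_prev[i].z - src_now[i].z;
        }
        if (n > 0) { d.x /= n; d.y /= n; d.z /= n; }
        p.x += d.x; p.y += d.y; p.z += d.z;
        return p;
    }

A baked file keeps only the final vertices; the source information needed for this averaging is gone, which is exactly the problem.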


#10

I know it’s the wrong kind of baking, but I would be happy just to be able to bake out shaders, GI & occlusion to UV maps. Can Camera render to UV space, or is there a way to automatically deform an object to its UVs (like a morph)?

BB


#11

Hi, Brian B.

AFAIK Camera cannot do this. However, IMO this baking isn’t as attractive as it looks at first. “Baking GI & occlusion” is exactly what radiosity does. Baking procedurals is problematic because a lot of shaders are view-dependent. And no baking increases raytracing speed. So, what is the sense? “Make fast Phong faster?” :slight_smile:


#12

Baking comes in pretty handy with realtime engines. Not that I do much of that work, aside from playing around. Fancy MForge metal in a game engine… woo hoo! =)


#13

Hello Igors,

Occlusion baking to textures (and to normal maps) makes sense, since the renderer does not need to render the occlusion for static objects again and again each frame. It’s not about making Phong faster, but about substituting long raytracing render times with short Phong render times. It’s not only good for realtime purposes, but also for architectural renders and environments in general.

The benefit of baking to textures, as opposed to radiosity, is that radiosity needs many polygons and subdivisions to store the shading within the vertex colors of the object. If you have a simple cube, you have six faces (12 polygons). If you bake to a texture, you still have 6 faces and a texture that stores all the subtle shading information an occlusion pass usually provides. If you use radiosity instead, you increase the polygon count by a factor of some hundreds.

This may not sound like a real problem for a simple cube, but when it comes to detailed architectural models it can be, since the benefit of the baking is eaten up by the increased number of polygons to be rendered.

A general benefit of texture baking is that the texture can store far more shading information. It is possible to bake detailed maps containing all the details of the original model, which can later be substituted with a low-poly version of the same model - the render will look identical but take only a fraction of the render time compared to the raytraced original.

It’s all about render efficiency.


Baking of models to a rendering database (as initially asked for in this thread) is something I personally really do not like, because it is very time-consuming to use on large-scale projects: the distribution of the cache file to the render slaves only works manually. It gives major headaches when you want to change things in your setup quickly.

I don’t have a solution for this, but it is the reason I no longer work with particle plug-ins in EI and rather do that kind of thing in After Effects or somewhere else.

Jens


#14

With most of the major 3rd-party developers and even Matt Hoffmann visiting CGTalk from time to time, let me ask you:

Is it really not possible to teach Renderama to distribute plug-in cache data?
Why does it have to sit in the socket folder?

As a non-developer, I imagine there is only a path variable to be passed on (and modified for the slaves).
Like so:

  • the plug-in tells EIAS that it has cache data and where to find it
  • EIAS saves this information into the project file and tells Renderama
  • Renderama copies these files over to the slaves and changes the path to the slave’s temp folder (see the sketch after this list)
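
A rough sketch of what that remapping could look like (everything here is hypothetical; no such hooks exist in the current SDK):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical manifest entry a plug-in would register with the host. */
    typedef struct {
        char   path[1024];   /* where the cache file lives on the master     */
        time_t modified;     /* timestamp, so slaves can detect stale caches */
    } CacheEntry;

    /* Hypothetical Renderama-side step: after copying the file, point the
       entry at the slave's temp folder, keeping only the file name. */
    void remap_for_slave(CacheEntry *e, const char *slave_tmp)
    {
        const char *name = strrchr(e->path, '/');
        char remapped[1024];
        snprintf(remapped, sizeof remapped, "%s/%s",
                 slave_tmp, name ? name + 1 : e->path);
        memcpy(e->path, remapped, sizeof e->path);
    }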

There is probably more info to be passed over, like timestamps to catch changes and so on, but the situation right now is pretty old-school, if you know what I mean.
I like my socket folder to stay clean, not cluttered with project-specific files.


#15

Hi, Brian S.

Sorry, but MF is an unlucky example. Anisotropic speculars cannot be baked, same as MF’s reflections and other view-dependent gradients. Thus the effect of “MF baking” would be only small :slight_smile:


#16

OK, I think Jens beat me to it, but to reiterate:

Baking GI/ambient occlusion: like rendering shadows only once, except it’s way better! And editable in Photoshop.

Baking shaders: I know that some shaders are view/incidence-dependent, but most are not. Usually all I need is diffuse & bump/normal. It’s important because although I have lots of very nice shaders, I rarely get to use them when I’m working as part of a larger pipeline. Co-workers may not have the same shaders, or may not even be using EIAS.

I do see benefits in baking out animation & geometry deformations, especially for your new tressle & scrim + mrs bebel. However, my checkbook will be out much sooner for the other kind of baking.

Thanks
BB


#17

Hi, Jens

The fate of radiosity is not very lucky (in EI or anywhere else), but IMO it’s a rational idea. We understand that radiosity’s subdivision raises a lot of problems. However, the technical reasons are fully clear: it’s still much faster than “baking occlusion as a texture”. Even a very rough estimation shows that “occlusion baking” has excellent chances to be very, very slow. Why? Because it needs to calculate “each pixel” instead of “each vertex”. Because it needs to calculate ALL pixels, not only the visible ones (you want to fly fast through the baked scene, right?). Because it needs to calculate MORE pixels (the texture should be large enough). So, how slow is it? Set GI sampling to 1x1, run a test, and multiply the render time by 8-10 (be sure, that’s a very generous coefficient). We guess it would cool your enthusiasm for baking illumination a little :wink:
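
A back-of-the-envelope illustration of the “more pixels” point (the screen and texture sizes are our own example numbers):

    #include <stdio.h>

    int main(void)
    {
        long frame = 640L * 480L;    /* one frame shades ~307k visible pixels      */
        long bake  = 2048L * 2048L;  /* a bake shades every texel, visible or not:
                                        ~4.2M samples                              */
        printf("bake/frame sample ratio: %.1fx\n", (double)bake / frame); /* ~13.7x */
        return 0;
    }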

We also talked about this :slight_smile:

No problem, we’ll use our time to write other (more relevant) things :slight_smile:


#18

Hi, Uwe

Hmm… it’s not clear whether you’re asking Igors (or even Matt Hoffman) :slight_smile: OK, let us answer what we know.

There are 2 kinds of temp files plug-ins typically create:

a) A temp file to avoid second and further generations during the passes of a single frame (bitmap shadows, glows, etc.). This kind has no problems with Rama, because the plug-in creates this file in a specific temp folder and Camera automatically removes all temps at finish. Note, however, that second and later passes are not instantaneous with a temp file; there are parts of a plug-in’s work that can be performed only at the final pass (for example, motion blur and calculating UVs). Note also that not all plug-ins create such temps to optimize performance, and it’s not always rational. For example, a plug-in cannot create constructive UVs at the first pass because the child’s UVs are unknown yet.
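
The usual pattern for case (a) looks roughly like this (a sketch; the file name and the load/generate routines are hypothetical placeholders):

    #include <stdio.h>

    /* Hypothetical per-pass hook: generate the expensive result once at the
       first pass, reuse it from the temp file on every later pass. */
    void on_pass(int frame, const char *temp_dir)
    {
        char path[1024];
        snprintf(path, sizeof path, "%s/myplugin.%04d.tmp", temp_dir, frame);

        FILE *f = fopen(path, "rb");
        if (f) {                      /* second and later passes: reuse */
            /* load_cached_result(f); */
            fclose(f);
        } else {                      /* first pass: generate and save  */
            f = fopen(path, "wb");
            /* generate_and_save_result(f); */
            if (f)
                fclose(f);
        }
        /* Camera removes everything in temp_dir when rendering finishes. */
    }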

b) A “history” temp file (like particles use). We agree that your propositions are quite rational, but, sorry, IMO they are not “conceptual” enough. Really, adding “transport features” looks like no trivial task, but what is the result? “Now EIAS copies plug-in files automatically (of course, with new versions of the plug-ins) - save the time you spent on manual copying!!” Hmm… looks like the mountain gives birth to a mouse, right? :slight_smile:


#19

Hi Igors,

I am aware that this is not a big “buzz” feature.
BUT drag-and-drop reordering in the project window wasn’t a big feature either.

It’s about a more modern approach, about letting the user spend his time on the artwork and not on file management.
Like Jens said, doing this kind of stuff manually is just annoying and invites errors (e.g., if you forget to update one slave out of 5 with the new cache data -> please render again).

If you compare Renderama with other network render solutions, Rama seems a little old-fashioned and VERY limited!
But that is another story, and I have already posted my requests for it over at EITG’s forum with no official reply so far. Too bad.


#20

Hello Igors,

Why does GI sampling need to be set to 1x1 for occlusion baking? But even if I assume the bake to be, say, 80 times slower than a “normal” GI-rendered frame, an animation of 1000 frames (a very usual frame count) will still be much faster with a baked object (including baking time) than with GI calculated for every frame: the bake costs roughly 80 frame-times once, versus 1000 GI frame-times for the whole animation. And usually you would do the baking exactly once and then be able to lay out your animation very quickly and flexibly.

Edit: Another benefit of baking is that your animation will not look grainy/noisy. I have noticed that GI animations produce a lot of noise if the sampling and ray amount are set too low. So, for a really clean animation, a sampling of 32x32 is not sufficient anyway.

Rendering is always also about testing and testing until you have a final. So - at least here - it is not unusual to render the 1000-frame sequence 4 to 8 times until it is final. It would certainly be an improvement not to need to render it with GI every time…

A working radiosity solution in EI would of course still be welcome.


The fact that I do not like object caches is only my personal opinion. But if it is the only solution to the problem, it certainly should be done this way. The problem is not the object cache itself, but the distribution in a render network, and that task should ideally be solved by the host. I sure know about the difficulties of getting the host changed…

Jens

Edit: I also think Uwe is right. It’s the task of the host to deliver all system features like file distribution and so on. These might not be buzz features, but reliable features like this separate professional software from toy software. What have been the most valuable features in the past? Object reordering and context menus. Yes, of course also GI. I absolutely do not want to diminish the big features that have been introduced to EI in the recent past. But if you need to work with it every day, the value and reliability of the “small” features do far more for your working experience than the big ones.