PDA

View Full Version : Baking Plug-ins?


Igors
06-20-2006, 11:31 PM
Hello, gentlemen

Ramjac soft and Paralumino are working together on a new XP-based plug-in. All is going normally, but we have a problem: access to animation channels (like XP-interchange and some other things) is restricted in Camera. So we are forced to consider solutions like: "let the user run a preview, and the plug-in saves the animated model to a file that is then substituted in Camera in place of the plug-in's usual generate phase". Hmm... maybe it makes sense to "bake" other plug-ins too (not only for our concrete task)?

Positives:

- fast render with the saved file; plug-in re-calculation at each pass (bitmap shadows, glows, etc.) is avoided

- no problems with weight maps, XP etc. - all these things are absent in Camera;

- portability - you can copy the file to all your slaves; you don't need the plug-in itself if its result is baked


Negatives:

- numerous headaches with the saved file (it can be really large)
- motion blur (maybe it can't always be reproduced adequately - we aren't sure)

Your opinion, gentlemen?

Vizfizz
06-20-2006, 11:52 PM
It would seem that with Camera's limitations, choice is not an option here. Perhaps we should be thinking of methods to ensure efficiency in the baking process and see if it's even a viable solution. Personally, I think the idea of baking the plugin calculation and results is a fine idea, provided its implementation is effective and predictable. Particle systems like Dante, Fyreworks, and PPPro already use a somewhat similar approach. They can write out a cache of particles to assist Camera. From what I remember from Blair, Camera lacks the ability to look forward or backward in time, thus a particle cache is needed.

Plug-in baking is definitely an acceptable solution in my book.

NorthernLights
06-20-2006, 11:53 PM
One idea I completely missed the boat on is how NextLimit handles fluid data. They write a file for every single frame of the animation, named with the frame number. Yes, there will be a ton of files, but you're far less likely to run into 2 GB size limits.
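Roughly, in pseudo-Python (the file names, folder and binary format here are made up, just to illustrate the one-file-per-frame idea):

import struct
from pathlib import Path

CACHE_DIR = Path("fluid_cache")          # hypothetical cache folder

def frame_cache_path(frame):
    # One small file per frame, named by frame number, instead of one
    # giant file that can hit 2 GB limits.
    return CACHE_DIR / ("fluid_%05d.bin" % frame)

def write_frame(frame, positions):
    CACHE_DIR.mkdir(exist_ok=True)
    with open(frame_cache_path(frame), "wb") as f:
        f.write(struct.pack("<I", len(positions)))
        for x, y, z in positions:
            f.write(struct.pack("<3f", x, y, z))

def read_frame(frame):
    with open(frame_cache_path(frame), "rb") as f:
        (count,) = struct.unpack("<I", f.read(4))
        return [struct.unpack("<3f", f.read(12)) for _ in range(count)]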

The big problem with any plugin-related scratch file is that Renderama knows nothing about it and wouldn't be able to automagically send it to the slaves.

Igors
06-21-2006, 12:02 AM
Hi, Brian

Particle systems like Dante, Fyreworks, and PPPro already use a somewhat similar approach. They can write out a cache of particles to assist Camera.

They have to write it because particles need their "history", but it has nothing in common with baking.

From what I remember from Blair, Camera lacks the ability to look forward or backward in time, thus a particle cache is needed.

Can't imagine such "travelling in time" at render time :)

Vizfizz
06-21-2006, 12:17 AM
I'm sure a particle cache and baked plugin data are definitely two different things. I look at it from the perspective that if one type of system can do it, why wouldn't baking plugins work too? Seems feasible to me.

My understanding of simulations is that it's important for them to be able to look forward and backward in time. My guess is it's for dynamic accuracy, but I don't really know for certain. I'm not a programmer. ;) Since Camera can't do that, a particle cache must be generated.

Igors
06-21-2006, 12:48 AM
Hi, Brian

My understanding of simulations is that it's important for them to be able to look forward and backward in time. My guess is it's for dynamic accuracy, but I don't really know for certain. I'm not a programmer. ;) Since Camera can't do that, a particle cache must be generated.

But, Brian, common logic is exactly the same for programmers and non-programmers (the latter are only more familiar with concrete implementation details, nothing more). Imagine a renderer that could see "past and future". Do you think it would be a great happiness? Frame 5 asks frame 4, but frame 4 asks frame 5... how is that called in English? A "deadly embrace", right? And how many times does frame 5 need to repeat the whole generation process for frame 4? Yes, Camera (not camera) doesn't do this simply because it's an efficient program :)
"Baking" looks like a rational idea overall, but we aren't sure about the technical details (especially this motion blur, grr...)

sacslacker
06-21-2006, 02:28 AM
I think it's called an "infinite loop" in English. =)

I think Mental Ray works like that, examining the next frame (not sure about the previous one) in order to calculate motion blur. I'm not sure about this, but it seems logical, as I don't know how else you'd figure out how much blur to add. Again, I'm no programmer, so take my words with a grain of salt.

I'm not familiar with the lineage, but it appears to me that it's time EITG opened up Camera a bit more. Camera is a hot renderer, no doubt about it, but there are some things that plugin developers should simply have access to in order to make their cool plugins more robust.

I'm sure there's a lot I don't know about this issue, but if you look at the success of a renderer like Mental Ray, one might argue it has a lot to do with how open it is. I think Camera is every bit as good as Mental Ray. I think it'd be in EI's best interest to open up Camera a bit.

Just my 2, uninformed cents. =)

halfworld
06-21-2006, 08:20 AM
Baking sounds like a sound solution here. 'Running the preview' to save out the file already happens with the RealFlow plug-in. I can't think of any other way to do it.

As for motion blur... is there any need to look backward and forward in time?
When using Blaster you must render a minimum of two frames to get motion blur; perhaps something similar would be the case here.

The benefits seem rather large here; anyway, there is only one way to find out :)
Ian

Igors
06-21-2006, 10:47 AM
Hi, Brian S., Ian

I'm not familiar with the lineage, but it appears to me that it's time EITG opened up Camera a bit more. Camera is a hot renderer, no doubt about it, but there are some things that plugin developers should simply have access to in order to make their cool plugins more robust.

We never shared the opinion that "Camera should be opened" :)
In this concrete case (motion blur) we cannot blame our host for "not enough service/info".

A plug-in doesn't need to know how motion blur is drawn, etc. For each vertex, a plug-in passes a "blur position" (the vertex position at the previous frame) to the host; that's all the render engine needs. Plug-ins are also informed about any object's motion, scaling etc. (all linear transformations). Thus in many cases the motion blur can be calculated very simply, as
blur_vertex_position = current_vertex_position * blur_transform_matrix
However, this doesn't work for plug-ins with mutable topology (like MrBlobby) or for skinned groups. In those cases a plug-in needs to interpolate a final blur position based on the blur of its child groups. For example, MrBlobby (btw: this plug-in is open :) ) calculates blur as an average of the blurs of the blob's sources. We see no way to repeat/reproduce such blur with "baking".
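In rough Python/NumPy terms - purely an illustration of the two cases above, not the XP SDK's actual data structures or calls:

import numpy as np

def blur_rigid(current_positions, blur_transform):
    # Linear case: push the current vertex positions through the
    # previous-frame ("blur") transform matrix.
    homogeneous = np.hstack([current_positions,
                             np.ones((len(current_positions), 1))])
    return (homogeneous @ blur_transform.T)[:, :3]

def blur_from_sources(vertex_position, source_positions, source_blur_positions):
    # Mutable-topology case (MrBlobby-style): the vertex has no
    # "previous self", so offset it by the average of the blur
    # offsets of the blob's sources.
    offsets = source_blur_positions - source_positions
    return vertex_position + offsets.mean(axis=0)

# Example: one surface vertex blurred from two moving blob sources.
vertex = np.array([0.0, 1.0, 0.0])
sources_now = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
sources_prev = np.array([[-0.5, 0.0, 0.0], [2.0, -0.5, 0.0]])
print(blur_from_sources(vertex, sources_now, sources_prev))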

bbuxton
06-22-2006, 12:49 AM
I know it's the wrong kind of baking, but I would be happy just for it to be possible to bake out shaders, GI & occlusion to UV maps. Can Camera render to UV space, or is there a way to automatically deform an object to its UVs (like a morph)?

BB

Igors
06-22-2006, 03:02 AM
Hi, Brian B.

I know it's the wrong kind of baking, but I would be happy just for it to be possible to bake out shaders, GI & occlusion to UV maps. Can Camera render to UV space, or is there a way to automatically deform an object to its UVs (like a morph)?

AFAIK Camera cannot do this. However, IMO this baking isn't as attractive as it looks at first. "Baking GI & occlusion" is exactly what radiosity does. Baking procedurals is problematic because a lot of shaders are view-dependent. And no kind of baking increases RT speed. So, what's the point? "Make fast Phong faster"? :)

sacslacker
06-22-2006, 07:46 AM
Baking comes in pretty handy with realtime engines. Not that I do much of that work aside from playing around, though. Fancy MForge metal in a game engine... woo hoo! =)

Jens C. Möller
06-22-2006, 09:10 AM
Hello Igors,

occlusion baking to textures (and to normal maps) makes sense since the renderer does not need to render the occlusion for static objects again and again each frame. It's not about making Phong faster, but about substituting long raytracing render times with short Phong render times. It's not only good for realtime purposes, but also for architectural renders, and environments in general.

The benefit of baking to textures, as opposed to radiosity, is that radiosity needs many polygons and subdivisions to store the shading in the vertex colors of the object. If you have a simple cube you have six (12) polygons. If you bake to a texture you still have 6 polygons plus a texture that stores all the subtle shading information an occlusion pass usually provides. If you use radiosity instead you increase the polygon count by some hundreds of polygons.

This may not sound like a real problem for a simple cube, but when it comes to detailed architectural models it can be a problem, since the benefit of the baking is eaten up by the increased number of polygons to be rendered.
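As rough, purely illustrative arithmetic (the texture size and subdivision level below are assumptions, not measurements):

# How many shading samples each approach stores for the same cube.
faces = 6
texture_res = 512                                   # one 512x512 bake, assumed
baked_samples = texture_res * texture_res           # ~262k shading samples
baked_polys = faces * 2                             # still 12 triangles

subdiv = 16                                         # radiosity-style subdivision, assumed
radiosity_polys = faces * subdiv * subdiv * 2       # 3,072 triangles
radiosity_samples = faces * (subdiv + 1) ** 2       # ~1.7k vertex samples

print(baked_polys, baked_samples)                   # 12  262144
print(radiosity_polys, radiosity_samples)           # 3072  1734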

A general benefit of texture baking is that the texture can store far more shading information. It is possible to bake detailed maps containing all the details of the original model, which can later be substituted with a low-poly version of the same model - the render will look identical but take only a fraction of the render time compared to the raytraced original model.

It's all about render efficiency.

-------------

Baking (as initially asked for in this thread) of models to a rendering database is something I personally really do not like, due to the fact that it is very time-consuming to use on large-scale projects, since the distribution of the cache file to render slaves only works manually. It gives major headaches when you want to change things in your setup fast.

I don't have a solution for this, but it's the reason I don't work with particle plugins in EI anymore and rather do them in After Effects or somewhere else.

Jens

bronco
06-22-2006, 09:35 AM
with most of the major 3rd party developers and even Matt Hoffmann visiting CGTalk from time to time, let me ask you:

is it really not possible to teach renderama to distribute plugin cache data?
why does it have to sit in the socket folder?

as a non-developer i imagine there is only a path variable to be passed on (and modified for the slaves).
like so:

- plugin tells EIAS that it has cache data and where to find it
- EIAS saves this information into the project file and tells renderama
- renderama copies these files over to the slaves and changes the path to the slaves' temp folder

there is probably more info to be passed over, like timestamps to catch changes and so on, but the situation right now is pretty oldschool, if you know what i mean.
i like my socket folder to stay clean and not cluttered with project-specific files.
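A toy sketch of that three-step hand-off (the function names, project structure and paths are hypothetical - this is not how Renderama actually works today):

import shutil
from pathlib import Path

def register_cache(project, plugin_name, cache_path):
    # Steps 1+2: the plugin tells the host about its cache, the host
    # stores the path in the project description.
    project.setdefault("plugin_caches", {})[plugin_name] = str(cache_path)

def distribute_caches(project, slave_temp_dirs):
    # Step 3: copy each cache to every slave's temp folder and return
    # the per-slave path each slave should hand back to the plugin.
    per_slave = {}
    for slave, temp_dir in slave_temp_dirs.items():
        mapping = {}
        for plugin, src in project.get("plugin_caches", {}).items():
            dst = Path(temp_dir) / Path(src).name
            shutil.copy2(src, dst)      # keeps timestamps, so changes can be detected
            mapping[plugin] = str(dst)
        per_slave[slave] = mapping
    return per_slave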

Igors
06-22-2006, 12:02 PM
Hi, Brian S.

Baking comes in pretty handy with realtime engines. Not that I do much of that work aside from playing around, though. Fancy MForge metal in a game engine... woo hoo! =)

Sorry, but MF is an unlucky example. Anisotropic speculars cannot be baked, same as MF's reflections and other view-dependent gradients. Thus the effect of "MF baking" would be only small :)

bbuxton
06-22-2006, 12:14 PM
OK I think Jens beat me to it, but to reiterate:-

Baking GI/ambient occlusion: like rendering shadows only once, except it's way better! And editable in Photoshop.

Baking shaders: I know that some shaders are view/incidence dependent, but most are not. Usually all I need is diffuse & bump/normal. It's important because although I have lots of very nice shaders, I rarely get to use them when I'm working as part of a larger pipeline. Co-workers may not have the same shaders or even be using EIAS.

I do see benefits in baking out animation & geometry deformations, especially for your new tressle & scrim + mrs bebel. However, my checkbook will be out much sooner for the other kind of baking.

Thanks
BB

Igors
06-22-2006, 01:15 PM
Hi, Jens
occlusion baking to textures (and to normal maps) makes sense since the renderer does not need to render the occlusion for static objects again and again each frame. It's not about making Phong faster, but about substituting long raytracing render times with short Phong render times. It's not only good for realtime purposes, but also for architectural renders, and environments in general.

The benefit of baking to textures, as opposed to radiosity, is that radiosity needs many polygons and subdivisions to store the shading in the vertex colors of the object. If you have a simple cube you have six (12) polygons. If you bake to a texture you still have 6 polygons plus a texture that stores all the subtle shading information an occlusion pass usually provides. If you use radiosity instead you increase the polygon count by some hundreds of polygons.

The fate of radiosity is not very lucky (in EI or anywhere), but IMO it's a rational thing/idea. We understand that radiosity's subdivision raises a lot of problems. However, the technical reasons are fully clear: it's still much faster than "baking occlusion as a texture". Even a very rough estimate shows that "occlusion baking" has excellent chances to be very, very slow. Why? Because it needs to calculate "each pixel" instead of "each vertex". Because it needs to calculate ALL pixels, not only the visible ones (you want to fly fast through the baked scene, right?). Because it needs to calculate MORE pixels (the texture should be large enough). So, how slow is it? Set GI sampling to 1x1, test, and multiply the render time by 8-10 (be sure that's a very generous coefficient). We guess it would cool your enthusiasm for baking illumination a little ;-)
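A back-of-the-envelope version of that "each pixel instead of each vertex" argument (the frame size and texture count are purely illustrative assumptions):

visible_pixels = 800 * 600              # what a normal GI frame shades
bake_texels = 4 * 1024 * 1024           # four 1K bake textures, visible or not
extra_work = bake_texels / visible_pixels
print(round(extra_work, 1))             # ~8.7x more shading points than one frame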

It's all about render efficiency.

We're also talking about this :)

Baking (as initially asked for in this thread) of models to a rendering database is something I personally really do not like, due to the fact that it is very time-consuming to use on large-scale projects, since the distribution of the cache file to render slaves only works manually. It gives major headaches when you want to change things in your setup fast.

No problem, we'll use our time to write other (more topical) things :)

Igors
06-22-2006, 01:57 PM
Hi, Uwe

with most of the major 3rd party developers and even Matt Hoffmann visiting CGTalk from time to time, let me ask you:

Hmm... it's not clear whether you are asking Igors (or even Matt Hoffman) :) Ok, let us answer what we know.

is it really not possible to teach renderama to distribute plugin cache data?
why does it have to sit in the socket folder?

as a non-developer i imagine there is only a path variable to be passed on (and modified for the slaves).
like so:

- plugin tells EIAS that it has cache data and where to find it
- EIAS saves this information into the project file and tells renderama
- renderama copies these files over to the slaves and changes the path to the slaves' temp folder

there is probably more info to be passed over like timestamps to catch changes and so on, but the situation right now is pretty oldschool, if you know what i mean.
i like my socket folder to stay clean and not cluttered with project specific files.
There are 2 kinds of temp files plug-ins typically create:

a) a temp file to avoid second and further generations during the passes of a single frame (bitmap shadows, glows etc.). This kind has no problems with Rama, because the plug-in creates the file in a specific temp folder and Camera automatically removes all temps at the finish. Note, however, that the second and later passes are not instantaneous with a temp file; there are parts of a plug-in's work that can be performed only at the final pass (for example, motion blur and calculating UVs). Note also that not all plug-ins create such temps to optimize performance, and it's not always rational. For example, a plug-in cannot create constructive UVs at the first pass because its child's UVs are not known yet.

b) a "history" temp file (like particles use). We agree that your propositions are quite rational, but, sorry, IMO they are not "conceptual" enough. Really, adding "transport features" looks like a far-from-silly task, but what is the result? "Now EIAS copies plug-in files automatically (of course, with new versions of the plug-ins) - save the time you spent on manual copying!!" Hmmm... looks like the mountain gives birth to a mouse, right? :)

bronco
06-22-2006, 02:28 PM
Hi Igors,

i am aware that this is not a big "buzz" feature.
BUT drag and drop reordering in the project window wasn't a big feature either.

it's about a more modern approach, about letting the user spend his time on the artwork and not on file management.
like Jens said, doing this kind of stuff manually is just annoying and screams for errors (like, if you forget to update one slave out of 5 with the new cache data -> please render again).

if you compare renderama with other network render solutions, rama seems a little old-fashioned and VERY limited!
but this is another story and i have already posted my requests for that over at EITG's forum with no official reply so far. too bad.

Jens C. Möller
06-22-2006, 03:24 PM
Hello Igors,

Set GI sampling to 1x1, test, and multiply the render time by 8-10 (be sure that's a very generous coefficient). We guess it would cool your enthusiasm for baking illumination a little ;-)

Why does GI sampling need to be set to 1x1 for occlusion baking? But even if I assume render times to be, say, 80 times slower than a "normal" GI-rendered frame, an animation of 1000 frames (a very usual frame count) will still be much faster with a baked object (including baking time) than with GI calculated for every frame. And usually you would do the baking exactly once and then be able to lay out your animation very quickly and flexibly.
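As a worked example (all the times below are assumed, illustrative values, not benchmarks):

frames = 1000
gi_frame_time = 10.0                     # minutes per GI frame, assumed
phong_frame_time = 0.5                   # minutes per baked/Phong frame, assumed
bake_time = 80 * gi_frame_time           # the "80 times slower" one-off bake

gi_total = frames * gi_frame_time                    # 10,000 min
baked_total = bake_time + frames * phong_frame_time  # 800 + 500 = 1,300 min
print(gi_total, baked_total)             # the one-off bake wins by a wide margin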

Edit: Another benefit of baking is that your animation will not look grainy/noisy. I've noticed that GI animations produce a lot of noise if sampling and ray amount are set too low. So, for a really clean animation, a sampling of 32x32 is not sufficient anyway.

Rendering is always also about testing and testing until you have a final. So - at least here - it is not unusual to render the 1000-frame sequence 4 to 8 times until it is final. It would certainly be an improvement not to need to render it with GI every time...

A working radiosity solution in EI would of course still be welcome.

-----------

The fact that I do not like object caches is only my personal opinion. But if it is the only solution to the problem, it certainly should be done this way. The problem is not the object cache itself, but the distribution in a render network, and that task should ideally be solved by the host. I sure know about the problems of changing the host...

Jens

Edit: I also think Uwe is right. It's the task of the host to deliver all system features like file distribution and so on. These might not be buzz features. But reliable features like this separate professional software from toy software. What have been the most valuable features in the past? Object reordering and context menus. Yes, of course also GI. I absolutely do not want to diminish the big features that have been introduced to EI in the recent past. But if you need to work with it every day, the value and reliability of the "small" features do far more for your working experience than the big ones.

Igors
06-22-2006, 04:10 PM
Hi, Uwe
i am aware that this is not a big "buzz" feature.
BUT drag and drop reordering in the project window wasn't a big feature either.

Oh, yes, "gratitude is the most momentary of all feelings" :)

it's about a more modern approach, about letting the user spend his time on the artwork and not on file management.
like Jens said, doing this kind of stuff manually is just annoying and screams for errors (like, if you forget to update one slave out of 5 with the new cache data -> please render again).

if you compare renderama with other network render solutions, rama seems a little old-fashioned and VERY limited!
but this is another story and i have already posted my requests for that over at EITG's forum with no official reply so far. too bad.

Uwe, it's very easy to find at least 10 such problems in the render (weight maps?) and 100 such problems in the app. In any 3D app, not only in EI. But... and what of it? "We should report to the host so that it gets fixed immediately"? That looks a bit naive; there are no hosts that act immediately. The usual formula is "(maybe) it will be fixed in the next version", and that's normal. Many, many of Max's bugs have lived long and happily since version 4, but there we never hear labels like "old fashioned", "very limited" etc. :)

Let's get back to "plug-in baking" (our original theme). "Why does the plug-in repeat its calculation over and over?" - we've heard this question at least 25 times in the recent 3 years. Ok, now we propose a solution. Maybe our solution isn't perfect, maybe it's even bad (quite possible). But... we see you aren't interested in this little tech detail; you prefer to discuss other themes (right now your projects don't have this problem). Very well, let's talk about occlusion baking, Renderama problems etc. - that's also interesting for us. But please note: it's talk only, not a concrete proposition of ours as the plug-in baking was.

Vizfizz
06-22-2006, 04:34 PM
EI has never really been suited for OpenGL type applications like previs and game construction. However, with the advent of 6.5 I could see a potential shift in that area if we could develop a few handy tools.

When I construct previs in Maya we use a large number of visual cheats to make an OpenGL hardware rendering look like it's almost been software rendered by baking the lighting into simple texture maps. Maya only supports 8 hardware lights and that usually isn't enough. By baking lighting into the textures we get the appearance of more. It's a fast, no-cost solution. It's a far simpler solution than a full-blown radiosity requirement or even baking the information into the geometry. Though both of those are much cooler.

Personally, I would love to see a plugin that acts similarly to Steamroller. Child objects are parented to the plug, the plug analyzes the children, fires off renderings to Camera, and for each object under the plugin it generates texture maps that include all the software-rendered textures and lighting, saving a new texture map that can be applied back to the object in Animator using standard texture controls. I would want individual maps for each object. Or, for the sake of speed, the entire scene could be rendered and applied back onto the geometry as a camera map. Those textures could then be utilized in Animator by activating "Display Texture" for the object. Acceptable for OpenGL-type applications, and it can work quite readily in a software-rendered solution.

Is this solution as cool as radiosity or geometry/occlusion baking? No... but it certainly could be usable. It's great for creating fake shadows under objects, lights up against walls and stuff like that.

Vizfizz
06-22-2006, 04:36 PM
Let's get back to "plug-in baking" (our original theme). "Why does the plug-in repeat its calculation over and over?" - we've heard this question at least 25 times in the recent 3 years. Ok, now we propose a solution. Maybe our solution isn't perfect, maybe it's even bad (quite possible). But... we see you aren't interested in this little tech detail; you prefer to discuss other themes (right now your projects don't have this problem). Very well, let's talk about occlusion baking, Renderama problems etc. - that's also interesting for us. But please note: it's talk only, not a concrete proposition of ours as the plug-in baking was.

I truly believe it's worth exploring.

Igors
06-22-2006, 04:52 PM
Why does GI sampling need to be set to 1x1 for occlusion baking?

Because you said "occlusion texture baking"

But even if I assume render times to be, say, 80 times slower than a "normal" GI-rendered frame, an animation of 1000 frames (a very usual frame count) will still be much faster with a baked object (including baking time) than with GI calculated for every frame. And usually you would do the baking exactly once and then be able to lay out your animation very quickly and flexibly.

We heard that a young talent once said to Thomas Edison, "I have discovered a universal solvent, it dissolves absolutely everything!" And Edison answered something like: "In what bottle will you keep it?" We aren't Edisons, but let us ask you: "Your arithmetic with 1000 frames is perfect, but how do you plan to set up such a scene?"

Edit: Another benefit of baking is that your animation will not look grainy/noisy. I've noticed that GI animations produce a lot of noise if sampling and ray amount are set too low. So, for a really clean animation, a sampling of 32x32 is not sufficient anyway.

32x32 is out of the question. Even 8x8 is not enough for animation anywhere (including EI)

What have been the most valuable features in the past? Object reordering and context menus.

Bravo, Jens. We always appreciate a man who has his own opinion outside the chorus. But developers cannot afford such luxury - they are more dependent on common public opinion. We are too.

Jens C. Möller
06-22-2006, 05:35 PM
Hello Igors,

Because you said "occlusion texture baking"

Yes, sure. Right now, when I attempt something like baking in EI - that means making an occlusion pass and using it as a texture afterwards - I never set it to 1x1 but use the default settings. This works very well. My question was, why should this not work when baking to a texture?

We heard that a young talent once said to Thomas Edison, "I have discovered a universal solvent, it dissolves absolutely everything!" And Edison answered something like: "In what bottle will you keep it?" We aren't Edisons, but let us ask you: "Your arithmetic with 1000 frames is perfect, but how do you plan to set up such a scene?"

I set up such a scene like I set up every other scene: place models, light models, make test renders, test things multipass in Photoshop, and decide on a final look. I almost always do this with occlusion. I do not understand the question in this regard. What should be different in the setup of a scene with and without baking?

Bravo, Jens. We always appreciate a man who has his own opinion outside the chorus. But developers cannot afford such luxury - they are more dependent on common public opinion. We are too.

Again, it's my personal experience from 12 years of working as a 3D professional. Some of the best new features in EI 2.7.5 were "shadow object only", "generate shadow mask" and camera maps - not really buzzword features. In the old days proposals for such features came directly from professionals who worked with the software - aka ILM - and these were the features that lifted EI into the high-end realm.

There are different types of features for developers to take care of. First is the feature for the advertising billboards - the buzzword features; yes, I agree these are important. But there is a level beyond marketing, and that is keeping users with a software and keeping them happy (actually this is also marketing). This second type of feature does not shine in the ads, but shines when you work. Good software has both. It's not a luxury at all to have such a solid basis.

O2F is a perfect example. What does it do? It is a dumb file converter - not really shiny software, and surely a feature that should be built into the host. But it isn't. And you know what? Even though it does not create buzz at all, it sells and sells, because people need it to work. Do people like to buy O2F? No, they would rather see this in the host (me also). But it isn't.

All the buzz does not help if you unpack your new toy but need to recharge the batteries every 15 minutes - unless someone invents a solution for this. Until then it is wasted money.

Jens

Igors
06-22-2006, 06:17 PM
Hi, Jens

Yes, sure. Right now, when I attempt something like baking in EI - that means making an occlusion pass and using it as a texture afterwards - I never set it to 1x1 but use the default settings. This works very well. My question was, why should this not work when baking to a texture?

I set up such a scene like I set up every other scene: place models, light models, make test renders, test things multipass in Photoshop, and decide on a final look. I almost always do this with occlusion. I do not understand the question in this regard. What should be different in the setup of a scene with and without baking?

Look at the movie Uwe posted recently (the Modo discussion). That's a "normal" radiosity (we absolutely don't know what it's actually called, though). The "illumination resolution = vertex resolution". But it's not the same for a texture. Yes, you can render a GI pass and then use this map, but it has nothing in common with baking as we know it, because your GI render is view-dependent. If an object is far away, it occupies only a few pixels. Yes, you can bake them, but you cannot hope for something reasonable if the camera gets closer to it (like Uwe walking through his room). And without "I can fly!" the baking idea loses its charm instantly.


Again, it's my personal experience from 12 years of working as a 3D professional. Some of the best new features in EI 2.7.5 were "shadow object only", "generate shadow mask" and camera maps - not really buzzword features. In the old days proposals for such features came directly from professionals who worked with the software - aka ILM - and these were the features that lifted EI into the high-end realm.

There are different types of features for developers to take care of. First is the feature for the advertising billboards - the buzzword features; yes, I agree these are important. But there is a level beyond marketing, and that is keeping users with a software and keeping them happy (actually this is also marketing). This second type of feature does not shine in the ads, but shines when you work. Good software has both. It's not a luxury at all to have such a solid basis.

Oh, yeah, we were also amazed by the abilities that "shadow mask" gives! In most cases the result looks "absolutely natural", but... impossible to achieve without this feature! We tried to explain, but... no luck, people say something like "What are you trying to show?" (it's soo obvious, it should be) :)

O2F is a perfect example. What does it do? It is a dumb file converter - not really shiny software, and surely a feature that should be built into the host. But it isn't. And you know what? Even though it does not create buzz at all, it sells and sells, because people need it to work. Do people like to buy O2F? No, they would rather see this in the host (me also). But it isn't.

Hmm... and maybe they are right? (same as with EPS import). It's always a good idea to think before writing a product, not after.

All the buzz does not help if you unpack your new toy but need to recharge the batteries every 15 minutes - unless someone invents a solution for this. Until then it is wasted money.

Jens, your philosophical sentences have no answers (at least for us). We can only say: it would be an absolutely crazy and intolerant world with "professionals only" :)

Jens C. Möller
06-22-2006, 06:25 PM
Hello Igors,

Jens, your philosophical sentences have no answers (at least for us). We can only say: it would be an absolutely crazy and intolerant world with "professionals only" :)

:) Agreed.

All I wanted to say was that suggestions from professionals for features that do not create buzz but good workflow should be seriously considered.

Jens

bronco
06-22-2006, 06:57 PM
Look at the movie Uwe posted recently (the Modo discussion).

it was Hans, not me.

All I wanted to say was, that suggestions from professionals for features that do not create buzz but good workflow should be seriously considered.

amen

Igors
06-22-2006, 08:07 PM
Hi, Uwe

it was Hans, not me.

Yes, it was Hans, sorry for the confusion

iKKe
06-22-2006, 08:25 PM
Look at the movie Uwe posted recently (the Modo discussion). That's a "normal" radiosity (we absolutely don't know what it's actually called, though). The "illumination resolution = vertex resolution". But it's not the same for a texture. Yes, you can render a GI pass and then use this map, but it has nothing in common with baking as we know it, because your GI render is view-dependent. If an object is far away, it occupies only a few pixels. Yes, you can bake them, but you cannot hope for something reasonable if the camera gets closer to it (like Uwe walking through his room). And without "I can fly!" the baking idea loses its charm instantly.


The scene was one object with one UV-mapped texture (small texture map, low render settings), just to test the baking process.

The GI 'pass' is not view-dependent; it's baked GI based on the UV map of the model.

This was the GI bake from the test model:
http://homepage.mac.com/groothuis/modo/baked.jpg

Imagine you separate objects (floor, wall etc.) and bake separate high-res images (maybe only an occlusion pass). Then you can Phong-render an interior animation with a GI appearance and without the GI splotches jumping around ;-)

For me this kind of baking opens new possibilities, keeps the render times low, and still gives nice GI/occlusion scenes.

I would love to do the baking in EI, even if it were only occlusion...

Cheers

Hans

Vizfizz
06-23-2006, 06:48 AM
This is the same process I use in Maya for previs. Maya's bake-lighting capabilities can create texture maps that are aligned with the target's UVs, and the results are quite beneficial.

It does automatically what I tend to do by hand in EI with standard mapping techniques and snapshots taken from key locations.

plsyvjeucxfw
06-23-2006, 07:12 AM
Jens,

what happened to the Ramjac gallery? You had an example in there of a Lightwave scene where the lighting was baked out to a texture, then the model was converted with OBJ2FACT and rendered in EIAS.

I've always remembered that example of texture baking (from an app that could do it) combined with render speed from EIAS.

Igors
06-23-2006, 09:12 AM
Hi, Hans

The scene was one object with one UV-mapped texture (small texture map, low render settings), just to test the baking process.

The GI 'pass' is not view-dependent; it's baked GI based on the UV map of the model.

This was the GI bake from the test model:

Imagine you separate objects (floor, wall etc.) and bake separate high-res images (maybe only an occlusion pass). Then you can Phong-render an interior animation with a GI appearance and without the GI splotches jumping around ;-)

For me this kind of baking opens new possibilities, keeps the render times low, and still gives nice GI/occlusion scenes.

I would love to do the baking in EI, even if it were only occlusion...

For us it's always interesting to see different appreciations of the same things :) Imagine EI proposed (announced) a technique for such a scene as you posted. We know well what would happen :) Hans would be one of the first to point out the visible edges, the "faceting" etc. "Professional soft should not..." (right, Jens?). But... it's not in EI, so your loyalty turns 180 degrees: "it's interesting", "it gives new abilities", and, of course, it's sooo bad that EI doesn't have it.

How about learning a toy carefully before asking for it here? Where is the "production" test that Jens likes to talk about but is in absolutely no hurry to do when he has the ability? Why is Hans keeping modest silence about how long the baking phase for his scene was? :)

Jens C. Möller
06-23-2006, 09:37 AM
How about learning a toy carefully before asking for it here? Where is the "production" test that Jens likes to talk about but is in absolutely no hurry to do when he has the ability? Why is Hans keeping modest silence about how long the baking phase for his scene was? :)

ohhhh - I hurry :) I did not post an image here, but be sure I'm working on it - didn't I show you my latest renders? The animation will (and already does) utilize baking. It's also not sooo important that this is in EI, since Modo does this very well - so it's all there, but not for the average customer who only wants to work in one package - and needs to make a decision on what package to buy (and looks up the buzzword list for illumination baking).

Why are you so sure that baking in EI would produce such a bad response? Where do you see loyalty turning 180° because baking is not in EI? If people were not loyal, they would not say anything here.

what happened to the Ramjac gallery? You had an example in there of a Lightwave scene where the lighting was baked out to a texture, then the model was converted with OBJ2FACT and rendered in EIAS.

I've always remembered that example of texture baking (from an app that could do it) combined with render speed from EIAS.

I need to redo the O2F gallery. When we initially made O2F the conversion of baked objects was surely very interesting for us - so we made this Lightwave test. I have also baked illumination in Maya for an earlier project and brought the model into EI with O2F. Back in those days baking was not very user-friendly. But with Modo you just push a button and that's it.

There will be a new O2F gallery showing the results. But I can not say when.

Jens

Here is the animation with baked illumination from Maya (no illumination from lights in the scene), rendered in EI (in 2004). Ah, and yes, the animation has exactly 1000 frames :)

http://www.jcm-animation.de/downloads/Inkallakta.mov

bbuxton
06-23-2006, 09:49 AM
Baking occlusion/GI is already part of many users' pipelines. Those that need this functionality find a way; I have used Lightwave to do this for ages. A big caveat is that unless there is a compelling reason to render in EIAS (MForge for example), it's usually much easier & quicker to stay in the other package. It is for this reason that I have found myself using EIAS less & less for what I bought it for - its SPEED! Having the means to bake out lighting etc. from within EIAS would reverse that trend.

bronco
06-23-2006, 09:59 AM
...but not for the average customer who only wants to work in one package - and needs to make a decision what package to buy.
...

this would be me for instance.
my modeling needs are well covered with Silo and EI Modeler (yes, i still use it), so i have no plans to buy Modo just to use baking.
there are ways to fake something like this in EIAS right now, but the process is long and not a real pleasure. jens will remember the garage we did a few years ago. it wasn't perfect by today's standards, but it worked.
and i can also see the benefit of just one very long render hit compared to 1000s of frames of GI rendering.
what if radiosity could calculate its solution to a texture, thus avoiding the high poly count on the final render?
well, i would always prefer GI baking.

Igors
06-23-2006, 01:39 PM
Hi, Uwe

what if radiosity could calculate its solution to a texture, thus avoiding the high poly count on the final render? well, i would always prefer GI baking.

We're glad to see you're beginning to understand us :)

Let us explain what we know (of course, we could be mistaken). "Occlusion baking" is NOT a principally new technique; it's the SAME radiosity, just with another "output cascade". Yes, you get a map (or set of maps) instead of tons of vertices (who knows which is better?), but it changes nothing in the core radiosity engine. Radiosity remains radiosity.

Igors
06-23-2006, 01:51 PM
Hi, Jens

It's also not sooo important that this is in EI, since Modo does this very well

We appreciate your opinion, but it still would be preferable to see a concrete animation and concrete render/baking times in order to discuss how well Modo does this

Here is the Animation with baked illumination from Maya (no illunination from lights in the scene) rendered in EI (in 2004). Ah, and yes, the animation has exactly 1000 Frames :)

http://www.jcm-animation.de/downloads/Inkallakta.mov
Nice movie, but please agree: it contains zero information about how fast and easily this can be achieved

Jens C. Möller
06-23-2006, 02:28 PM
We appreciate your opinion, but it still would be preferable to see a concrete animation and concrete render/baking times in order to discuss how well Modo does this

I'll let you know as soon as I have something to report.


Nice movie, but please agree: it contains zero information about how fast and easily this can be achieved

It was not easy at all, since baking in Maya - at least back in 2004 - was not very intuitive, since Maya is not very intuitive, at least not for me. Brian may have another opinion about Maya. It also took a long time to calculate (several hours). This is also the reason why I did not use it further. But it was the only way to create something like a "GI" render in EI back then. Rendering was, on the other hand, very fast - just geometry with textures and some ray lights. Of course much faster than if I had used GI, which was not present in EI in 2004.

Maybe Hans can give some benchmark results for his test with the room?

Jens

iKKe
06-23-2006, 02:44 PM
My workflow is very straightforward: modeling in Modo (EIM for special modeling needs) & I do ALL my animation/rendering in EIAS, as I have for the past 10 years ;-)

When I use GI in EIAS on an animation, I render a separate GI pass and composite it in post; this way I have more control (and the jumping GI splotches effect can be reduced). I like EIAS GI rendering: it's fast, and I get the "Camera" quality I want. But a GI animation pass takes a lot of render time, and you can't easily re-render when you need to make small adjustments to your animation. So the use of GI is always a cost vs. benefit decision.

The situation changed for me a few weeks ago: my modeler can suddenly bake all sorts of information into texture maps. When scene objects have baked occlusion textures, a re-render is very fast, EIAS fast :-)

Sorry about the baking time silence:
http://homepage.mac.com/groothuis/modo/occlusion_screen.jpg

14 min - I don't think that's bad for a baked occlusion (on my 3-year-old Mac).
(the 'jaggies' at the edges are repeated pixels that overlap the UVs by a few pixels)
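That padding step, sketched in plain NumPy (hypothetical data, not what Modo actually runs - just the general "grow the baked islands outward by a few pixels" idea):

import numpy as np

def dilate_bake(texture, coverage, passes=4):
    # texture: HxWx3 float image; coverage: HxW bool mask of baked texels.
    tex, mask = texture.copy(), coverage.copy()
    for _ in range(passes):
        new_tex, new_mask = tex.copy(), mask.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted_mask = np.roll(mask, (dy, dx), axis=(0, 1))
            shifted_tex = np.roll(tex, (dy, dx), axis=(0, 1))
            fill = shifted_mask & ~new_mask      # empty texels next to a baked one
            new_tex[fill] = shifted_tex[fill]
            new_mask |= fill
        tex, mask = new_tex, new_mask
    return tex

# Tiny demo: a single baked texel in an 8x8 map grows outward.
tex = np.zeros((8, 8, 3)); tex[4, 4] = (1.0, 0.5, 0.25)
mask = np.zeros((8, 8), dtype=bool); mask[4, 4] = True
print(dilate_bake(tex, mask, passes=2)[4, 2])    # padded copy of the baked colour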

A scene I am working on now took about 6 hours of baking time (six 4K textures), and Camera is rendering the animations for this scene with the speed and quality I like so much. Setting up the model for an occlusion bake takes no time at all, besides creating UV maps.

The advantage of a baked texture vs. a radiosity solution baked into the model is predictability and control (for example, a baked texture can be edited in Photoshop). I never used radiosity in production; the whole process is just too time-consuming.

Until recently I didn't use baking, as it was not possible with the applications in my workflow. Now that I've tasted it, I can't imagine ever working without it. It's like raytracing - remember when we did all our work with only Phong rendering?

Now why would baking in EIAS be interesting for me now that Modo can do it? That's easy to explain: from what I've seen in Modo so far, Camera is the better renderer, faster and better quality. For me it would be great if I could work on the scene, send the baking to Renderama, and continue working.

Cheers

Hans

Igors
06-23-2006, 03:45 PM
It was not easy at all, since baking in Maya - at least back in 2004 - was not very intuitive, since Maya is not very intuitive, at least not for me. Brian may have another opinion about Maya. It also took a long time to calculate (several hours). This is also the reason why I did not use it further. But it was the only way to create something like a "GI" render in EI back then. Rendering was, on the other hand, very fast - just geometry with textures and some ray lights. Of course much faster than if I had used GI, which was not present in EI in 2004.

There is no lighting technique without problems. Here is a very simple experiment:

- create an XZ plane
- place a radial light above it
- animate the light (move it up along Y-axis)

Result: a more distant light produces MORE illumination (not less, as in real life). Of course it's just an example, but please note: even an "elementary" thing has problems (btw: "elementary" does not mean "simple"). If we consider, say, bitmap shadows, we can find a dozen problems, not to mention RT shadows and GI. Each method has its advantages and disadvantages; it's the normal "dialectic" of the CG world. But, Jens, we listen to you and we hear "advantages only" :) (of the baking approach). Looks like only narrow/limited developers don't understand the great advantages of this technique ;) LOL
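A small sketch of that effect, assuming a radial light with no distance falloff (the numbers and the shading model are illustrative only, not Camera's code):

import math

def lambert_no_falloff(light_height, point_x=2.0):
    # Point on the XZ plane at x = point_x, normal pointing up (+Y);
    # the light sits on the Y axis at the given height.
    to_light = (-point_x, light_height, 0.0)
    length = math.sqrt(sum(c * c for c in to_light))
    cos_angle = to_light[1] / length     # dot(normal, light_dir), normal = +Y
    return max(cos_angle, 0.0)           # no 1/d^2 term applied

for h in (1.0, 2.0, 4.0, 8.0):
    print(h, round(lambert_no_falloff(h), 3))
# 1.0 0.447, 2.0 0.707, 4.0 0.894, 8.0 0.97 - brighter as the light moves away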

Sorry, but in our experience CG is not a land of "milk rivers and jelly shores" (we don't know how to say it in English). Let us remind you of the back side of this great approach:

- several hours of pre-calculation is absolutely NOT slow/bad for the scene you showed, from an engineering point of view. But please tell us how to work with this practically. Our first acquaintance with EI radiosity: calculating... Two hours later: has the indicator definitely moved? Or not? Next morning: still calculating... Ok, the "direct way" is not the way to go, clear; simplify the settings, ok, here are rough but fast enough results. Ok, "interpolate" the render time, balance speed/quality etc. Yes, everything is possible (and not even very complex), but... please agree: any talk about "easy to use" is out of the question.

- the class of "fly-only" animated scenes (nothing moves/changes except the camera) is not small, but please tell us: what to do with the others? A partial solution (even a perfect one) still remains partial (for camera fly-throughs only). Yes, it's not a small class, but... it's only a part

- (maybe most important) Any change to the scene forces re-baking. "But what's the difference? I need to re-render with GI too, right?" Yes, right, but with baking you can see nothing until the baking is finished. And that's a big difference in practical work

Please think about these little things listed above before you, gentlemen, applaud illumination baking and talk condescendingly about modest Monte Carlo GI :)

Igors
06-23-2006, 04:16 PM
Hi, Hans

For me it would be great if I could work on the scene, send the baking to Renderama, and continue working.

Hans, there is a good medicine for "love at first sight" - it's simply the second and further sights. Please test and use it more - then we would be happy to hear about your results/experience

Jens C. Möller
06-23-2006, 05:08 PM
Each method has its advantages and disadvantages; it's the normal "dialectic" of the CG world. But, Jens, we listen to you and we hear "advantages only" :) (of the baking approach). Looks like only narrow/limited developers don't understand the great advantages of this technique ;) LOL

Please do not get me wrong. I never said that baking is the solution to all GI render problems, but it certainly has advantages over a plain GI render and is more efficient *for certain tasks*. Does it provide a solution for changing light situations? No. Does it provide a solution for the interaction between moving objects and baked objects? No. Does it need rebaking if you want to change the scene? Yes. As you said, it's all true.

It is just a very useful tool/technique - at least for me. Monte Carlo GI is also a useful and much appreciated tool and technique for me, but not the most efficient *in all cases*. I do not argue against you, I argue from my experience as a CG artist - it's just my opinion. But if I listen to you I could get the idea that you diminish the advantages and that you might think it is an overrated feature. I think it's not. It's just a normal controversial discussion.

Please think about these little things listed above before you, gentlemen, applaud illumination baking and talk condescendingly about modest Monte Carlo GI :)

I applaud it for the benefits. I do not use it when there is no benefit. It's one technique of many, with its pros and cons - just like MC GI. Baking is not needed in EI - but if you look at it that way, what kind of new feature is needed at all in EIAS?

----------

I think we should pick up at the point where Uwe suggested utilizing the radiosity engine for baking, since that was an approach that caught your attention. So, how could this be done, and what are your thoughts in this regard?

Jens

bronco
06-23-2006, 05:19 PM
- the class of "fly-only" animated scenes (nothing moves/changes except the camera) is not small, but please tell us: what to do with the others? A partial solution (even a perfect one) still remains partial (for camera fly-throughs only). Yes, it's not a small class, but... it's only a part

- (maybe most important) Any change to the scene forces re-baking. "But what's the difference? I need to re-render with GI too, right?" Yes, right, but with baking you can see nothing until the baking is finished. And that's a big difference in practical work

Please think about these little things listed above before you, gentlemen, applaud illumination baking and talk condescendingly about modest Monte Carlo GI :)

i am sure you understand that illum baking is already in use by the people here and others, with all the pros and cons you mentioned. and i guess they are all aware of this.
if you use it just for fly-bys or use it for the environment and comp your main subject in later (with dynamic lighting/shading), it is still a very useful technique.
it is NOT limited to camera fly-throughs.

for a project i did in 3ds max some time ago i used baking (not only illum but shaders also) for the whole environment (a highway with vegetation around it) with a truck crashing on it.
the action was rendered separately in Brazil, the rest with the max scanline renderer. after the first tests we calculated around 2 weeks of pure render time on 5 machines -> past the deadline
with baking and multi-layer rendering this project was rendered in around 3 days of pure render time. sorry, i can't give you exact numbers, because this was freelance work for another company and i don't have the project anymore.

so, let me repeat: it is already in use, just not in EIAS.

iKKe
06-23-2006, 05:34 PM
There is no lighting technique without problems. Here is a very simple experiment:

- create an XZ plane
- place a radial light above it
- animate the light (move it up along Y-axis)

Something like this?

http://homepage.mac.com/groothuis/modo/animatedlight.mov

For me this thread is about possibilities. I like EIAS GI, it's very good and very usable in production! The GI rendering is one of the best things added to EIAS in the last few years.
My complaint about splotchy animation renders is a general GI problem; all the applications I have seen so far have this problem.

(Although Vray has an interesting approach to this problem http://www.spot3d.com/vray/help/VRayHelp150beta/tutorials_imap2.htm but I don't use MAX)

Cheers

Hans

Vizfizz
06-23-2006, 06:16 PM
Although my experience at ILM was primarily in previsualization, I did spend a month or two working on THX-1138. For this show I was required to create a complete final shot from start to finish. I could use any tools I wanted. I wanted to stretch myself a bit so I used Maya and a lot of ILM's proprietary tools. I've attached a snapshot of the scene. The LUT is all out of whack in this snapshot, so it might look a little weird, but you get the idea.

The concept of baking occlusion lighting is used all the time in the Matte Painting department at ILM. A single high-resolution ambient occlusion or GI still frame is generated and composited into the scene using any number of compositing packages. For most applications, a single frame was enough. If there was a slight need for camera movement, a camera map could be used. It was combined with diffuse, reflection, reflection occlusion, and specularity passes. Other passes were used if called for. It was ILM that got EI to include reflection occlusion, shadow masks and render passes in the first place, but back then there were no ambient occlusion / GI capabilities in EI. They used the equivalent of a manually generated illuminator to create "fake" ambient occlusion. If I remember right, they called it a "Bee Hive" lighting rig. This of course worked well for lock-offs and CG-generated background plates, and thanks to EI's speed, fake animated ambient occlusion was possible. Generating a true animated GI pass / ambient occlusion pass that will accommodate lighting shifts is considerably more expensive. So the ability to bake the GI pass would add a tremendous speed advantage, but at a major cost as well.
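A rough sketch of the general "dome of lights" idea behind such a rig (the actual ILM setup is proprietary; the light count, distribution and intensities here are assumptions): many dim lights scattered over a hemisphere approximate an ambient-occlusion look in a plain direct-lighting renderer.

import math, random

def beehive_lights(count=64, radius=100.0, total_intensity=1.0, seed=1):
    random.seed(seed)
    lights = []
    for _ in range(count):
        # Cosine-weighted-ish scattering over the upper hemisphere.
        u, v = random.random(), random.random()
        theta = math.acos(math.sqrt(u))      # angle from the zenith
        phi = 2.0 * math.pi * v
        x = radius * math.sin(theta) * math.cos(phi)
        y = radius * math.cos(theta)         # up axis
        z = radius * math.sin(theta) * math.sin(phi)
        lights.append(((x, y, z), total_intensity / count))
    return lights

for position, intensity in beehive_lights(count=4):
    print(position, intensity)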

The ILM philosophy is to have as much control over the appearance of the scene as possible. Although we created a GI pass, it was never baked into the textures/geometry. We were trained to break everything out into separate render passes... and I mean everything. Some of my After Effects comps were hundreds of layers deep. Kinda crazy. But you know, it came in handy when the director wanted to pixel f*@$ your scene. However, for the average user, a baked GI pass could be a boon.

I sometimes wonder if the reason EI doesn't have lighting/texture baking is EIAS' lack of a decent UV editing system. If we were ever to get that implemented by the host, I think this sort of thing would become a reality much faster.

Igors
06-23-2006, 08:49 PM
Please do not get me wrong. I never said that baking is the solution to all GI render problems, but it certainly has advantages over a plain GI render and is more efficient *for certain tasks*. Does it provide a solution for changing light situations? No. Does it provide a solution for the interaction between moving objects and baked objects? No. Does it need rebaking if you want to change the scene? Yes. As you said, it's all true.

Same as everything you said is true too. Yes, "I can fly" (practically in real time) is a really cool feature, but, Jens, the user should be informed well and exactly about how much it "costs" in pre-calculation time and effort - otherwise, sorry, Jens, it's an irresponsible promise of "communism tomorrow" - we've seen a lot of those.

I applaud it for the benefits. I do not use it when there is no benefit. It's one technique of many, with its pros and cons - just like MC GI. Baking is not needed in EI - but if you look at it that way, what kind of new feature is needed at all in EIAS?

If you ask which feature should come first for EI (if we understand you correctly), then we can answer: a multi-layered rendering and post-processing system. That's our opinion

Igors
06-23-2006, 10:13 PM
Hi, Uwe

i am sure you understand that illum baking is already in use by the people here and others, with all the pros and cons you mentioned. and i guess they are all aware of this.

Uwe, you are not someone who shares the "ich (aux verb we don't remember) klein Mann" ("I'm just a little man") approach :)

Igors
06-24-2006, 07:31 AM
Hi, Brian

However, for the average user, a baked GI pass could be a boon.

The average user (same as the advanced one) has been able to do this with radiosity since 5.0. Hmm... it's really amazing how attractive an old thing in a new bright envelope can be :)

Jens C. Möller
06-24-2006, 11:27 AM
The problem with the radiosity engine since EI 5.0 is that it has never worked well.

If you ask which feature should come first for EI (if we understand you correctly), then we can answer: a multi-layered rendering and post-processing system. That's our opinion

Agreed.

Jens

Igors
06-24-2006, 03:17 PM
Hi, Jens

The problem with the radiosity engine since EI 5.0 is that it has never worked well.

We aren't radiosity fans/specialists, so we cannot judge. But, Jens, look at the "occlusion baking" you are excited about:

- a rather long pre-calculation phase;

- topology problems (visible edges etc.);

- a huge amount of data (btw: much more with textures);

- the distant prospect of an almost instantaneous render ("twenty years of persistent labour, a thousand years of happiness" - hmm... people (count us in) need a little happiness, but now)

Is that all correct? If so, the list above is a classic description of radiosity as it has been known for the last 5 years. Maybe Modo discovered a new, friendlier radiosity? Maybe - that we don't know. But in any case their marketing is genius (and 100% correct). Of course, you can name it "occlusion baking" (a new revolutionary approach - Alonzo, where are you? ;-) You can even name a cat a cow. But such a "cow" still says "miaow" and gives no milk :)

Jens C. Möller
06-24-2006, 07:34 PM
You do not like baking.

I like it.

I guess that is clear by now.

Lets move on to a more productive topic, shall we?

Jens

iKKe
06-24-2006, 11:21 PM
You do not like baking.

I like it.
You're not alone; every major 3D application supports this feature in some form ;-)
I just never realized it could be so effective on interior scenes... and such a time saver!


I guess that is clear by now.

Lets move on to a more productive topic, shall we?

Jens
Maybe multi-layered rendering?

Cheers

Hans

Igors
06-25-2006, 08:05 AM
Hi, Jens

You do not like baking.

I like it.

I guess that is clear by now.

It wasn't us who started this aspect, as is clear from the beginning of this thread. But since it came up, let us note: the thing you like (as you just said) and the thing you don't like (radiosity) are... the same thing, in fact :)

Let's move on to a more productive topic, shall we? Sure! BTW: we are still waiting for the latest XP-server changes - help!

Igors
06-25-2006, 08:31 AM
Hi, Hans. Maybe multi-layered rendering? Sorry, Hans, we have nothing to say except an annoyed "it should be in the host" :sad: So it's just "our dream"

BTW: we would be interested in details about SSS in Modo. A little explanation/some images from you would be welcome. Of course, no problem if you have no time.

Jens C. Möller
06-25-2006, 11:11 AM
Hi, Jens. It was not we who started this aspect, as is clear from the beginning of this thread. But since it happened, let us note: the thing you like (as you just said) and the thing you don't like (radiosity) are... the same thing, in fact :)

Maybe it's the same thing, but I have never worked with a workable radiosity engine, whereas my experience with baking has been good so far.

Sure! BTW: we are still waiting for the latest XP-server changes - help!

Did Patrick give you a date for a new build? I have contacted him to see what's up with it.

Jens

plsyvjeucxfw
06-25-2006, 06:40 PM
For the Igors, this thread has drifted off topic.

Did you ever get an answer to the possibility of baking out particle data in a preview run?

Igors
06-25-2006, 08:34 PM
Hi, Kurt. Did you ever get an answer to the possibility of baking out particle data in a preview run? Initially we proposed "baking" for all "capricious" plug-ins that, for example, repeat their calculations, are a bit slow, have some problems in Camera, etc. - in short, just "calculate once" for plug-ins. Yes, for particles too (only the final particles are baked, not their database).

MagicEgger
06-27-2006, 12:18 AM
Hey Igors, Jens, Uwe, Hans, Brian..

I'm really liking these CGTalk chats... but I'm really busy and going crazy trying to finish my FIAT job.
Playing with Maya, I found they love to use baked data: Shave and a Haircut (hair and model instances), Syflex (cloth and skin)... Applications like RealFlow do too.
A long time ago Kishore helped me write a script to export Maya animations in OBJ format to be read into EIAS (using OBJ2FACT, thanks Jens) and used in EIAS with the new cycling feature... but EIAS needs to read all the sequences in the host, right?
So I always asked myself: why can't EIAS read data from the hard disk? Now we have G5s, Pentiums, fast HDs... and Animator would have more memory to work with.
It's the same for baked data, I guess.
If a plug-in has done all the math, and you don't change any channel anymore... why doesn't it read the data from the HD?
How do morphing targets work? How does Animator store each target? Or does it access each target at each frame?
If Animator always read 3 frames of animation data from the HD around the current time position, it would always understand motion blur and the viewport preview, correct?
I agree with some positions... like that Renderama would need to handle the data distribution to the render farm for us.
I have 30 machines here... it's impossible to do it by hand without making a mistake.

And changing the subject, of course... like I love to do.
I like GI and texture baking, like all the other users here.
I remember when J. Banta showed me how ILM baked all the GI using the LightWave plug-in to render in EIAS on the Pearl Harbor feature film... it's a really interesting and pipeline-approved tool.
I know... like Igors always say, every technique has its good and bad sides.
But I'm pretty sure all the users here know about the problems with texture and GI baking... like poor quality on zoom-ins... but with some tricks, like creating a second map only for the area you will zoom into, this GI map or texture map will make our Camera renders really fly.
I liked how Modo does it... let's test Modo more and see how it works.
But Igors, think a bit more... isn't it interesting how many pro users are asking for it?
I remember when I bought my first EIAS version and asked Matt Hoffman: Matt, isn't it possible to add a feature that flattens all the textures (like Photoshop) into only one texture to render faster? Without knowing anything about 3D... which means baking textures... and at that time baking hadn't been invented in 3D yet.

Thankssss

Tomas

Igors
06-27-2006, 02:06 AM
Hi, Tomas. I'm really liking these CGTalk chats... but I'm really busy and going crazy trying to finish my FIAT job. We know, we know :) BTW: your letters would be very easy to recognize even without the "sssss" and "Tomas" - it's your style to discuss 10-15 themes at the same time :) As always, we ask you to be more concrete, and we will be happy to answer all we know for all your questions.

MagicEgger
06-27-2006, 11:00 PM
Haha,

As you know, I'm a multi-core brain.

OK, let's start with baking data for plug-ins:
Why can't EIAS read data from the hard disk? Now we have G5s, Pentiums, fast HDs... and Animator would have more memory to work with.
It's the same for baked data, I guess.
If a plug-in has done all the math, and you don't change any channel anymore... why doesn't it read the data from the HD?
How do morphing targets work? How does Animator store each target? Or does it access each target at each frame?
If Animator always read 3 frames of animation data from the HD around the current time position, it would always understand motion blur and the viewport preview, correct?
I agree with some positions... like that Renderama would need to handle the baked data distribution to the render farm for us.
I have 30 machines here... it's impossible to do it by hand without making a mistake.

Thanks
Tom (Dual Core Brain)

Igors
06-27-2006, 11:55 PM
Hi, Tomas
OK, let's start with baking data for plug-ins:
Why can't EIAS read data from the hard disk? Now we have G5s, Pentiums, fast HDs... and Animator would have more memory to work with. Reading the data from the HD doesn't reduce its size - Animator will need the same amount of memory.
It's the same for baked data, I guess.
If a plug-in has done all the math, and you don't change any channel anymore... why doesn't it read the data from the HD? We guess you are talking about "common caching". But, unlike other caching applications/usages, it's not a rational idea to cache all the generated geometry at every frame. For example, caching a plug-in like Ubershape - for what? Such a cache of the animation can occupy many gigabytes of disk space, and even reading it will be slower than simply forcing the plug-in to rebuild its analytical model.
How do morphing targets work? How does Animator store each target? Or does it access each target at each frame? Any non-linear transformations (deforms, morphs, plug-ins) are recalculated from scratch if any of their source data is changed (or if they have time-sensitive flags). AFAIK that's the same in all apps.
If Animator always read 3 frames of animation data from the HD around the current time position, it would always understand motion blur and the viewport preview, correct? Motion blur requires knowing each vertex's position at the previous frame, but this position is never read from the HD. Each "transformer" is responsible for creating correct motion data (often it needs to repeat all its calculations with a "minus time delta").
I agree with some positions... like that Renderama would need to handle the baked data distribution to the render farm for us. Hmmm... agreeing/disagreeing doesn't make things faster :)
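
As a minimal sketch of the "minus time delta" idea above - hypothetical Python, not actual EIAS SDK code, and evaluate_geometry is a made-up callback standing in for a plug-in's generate phase - the motion vector can be built by simply evaluating the same transformer twice:

from typing import Callable, List, Tuple

Vec3 = Tuple[float, float, float]

def motion_vectors(evaluate_geometry: Callable[[float], List[Vec3]],
                   t: float, dt: float = 1.0 / 30.0) -> List[Vec3]:
    """Per-vertex motion vectors for the frame at time t."""
    current = evaluate_geometry(t)        # vertices at the current frame
    previous = evaluate_geometry(t - dt)  # repeat the whole calculation at "minus time delta"
    return [(c[0] - p[0], c[1] - p[1], c[2] - p[2])
            for c, p in zip(current, previous)]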

MagicEgger
06-28-2006, 12:55 AM
Igors,

What do you have in mind to make plug-ins faster to calculate?

Tomas

Igors
06-28-2006, 01:41 AM
Hi, Tomas. What do you have in mind to make plug-ins faster to calculate? What we said in post #1 of this thread: link a slow (or problematic) plug-in to a "saver" that provides a file cache.
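
To make the "calculate once" idea concrete, here is a minimal sketch of such a saver as a per-frame file cache (hypothetical Python, not a real EIAS plug-in; generate(frame) stands in for the slow plug-in's generate phase):

# Per-frame cache: compute the plug-in result once, then reuse the saved
# file on every later pass (bitmap shadows, glows, net render slaves...).
import os
import pickle

class BakedPluginCache:
    def __init__(self, cache_dir, generate):
        self.cache_dir = cache_dir      # folder holding one file per frame
        self.generate = generate        # the slow plug-in's generate(frame) callback
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, frame):
        return os.path.join(self.cache_dir, "bake_%05d.dat" % frame)

    def geometry_for(self, frame):
        path = self._path(frame)
        if os.path.exists(path):                 # already baked: just load it
            with open(path, "rb") as f:
                return pickle.load(f)
        data = self.generate(frame)              # slow path, runs only once per frame
        with open(path, "wb") as f:
            pickle.dump(data, f)
        return data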

Jens C. Möller
06-28-2006, 10:13 PM
Hi, Tomas. What we said in post #1 of this thread: link a slow (or problematic) plug-in to a "saver" that provides a file cache.

You are thinking of a plug-in that other, problematic plug-ins can be linked to, to save out one model for each frame? Sounds cool. Wouldn't this be incomplete without also a "loader" plug-in that takes care of the model sequence?

This loader plug-in could "check" the geometry of the previous and the next frame and so maybe provide the motion vector for the blur, if the API allows for this.

This sounds to me like a useful object-cycling plug-in. The problem is still the distribution of the model sequences. The easiest would be to just write all the data to one single fact file that stores all the different "poses" of the mesh, like

sequence // parent effector
group#000 //mesh indexed with frame number
group#001
group#002
...
group#n

Edit: Another question would be whether the saver saves the sequence in its local coordinate system or in the world coordinate system...

The disadvantage of this would be that with high-density meshes the resulting fact file would be really large. I absolutely like the idea that the cache data would be editable, since it would simply be a fact file. Also, the distribution of a single file sounds easier to me than the distribution of a folder with a fact sequence.

The next problem to solve would be texturing. As long as the models have a UV space it should be easy. The saver plug-in could also write the current texture space as UVs to the facts, like Contortionist does.

To solve the distribution issues, maybe this could be coupled with a drag-and-drop utility in the Finder that lets you define remote folders and simply copies all data that you drop on it to all assigned slave folders (a rough sketch of such a copy utility follows below)... sounds very unfancy to code, but would solve a lot of problems.

Jens
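
A minimal sketch of the distribution utility described above - the slave folder paths are invented for illustration, and a real tool would read them from Renderama's configuration, which this sketch does not attempt:

# "Copy to all slaves" droplet: copies every file dropped on it into each
# assigned slave folder so the baked cache is available on the whole farm.
import shutil
import sys
from pathlib import Path

SLAVE_FOLDERS = [                      # example mount points of the render slaves
    "/Volumes/slave01/bake_cache",
    "/Volumes/slave02/bake_cache",
]

def distribute(files):
    for src in map(Path, files):
        for slave in map(Path, SLAVE_FOLDERS):
            slave.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, slave / src.name)
            print("copied", src.name, "->", slave)

if __name__ == "__main__":
    distribute(sys.argv[1:])           # the files dragged onto the droplet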

AVTPro
07-01-2006, 07:30 AM
What's the plug-in? Wouldn't the objective dictate the guidelines of its procedure? How can you determine which process is effective without knowing its use? Tell me what the plug-in is and I can tell you what I think about baking during my art creation.

What concerns me is intuitive use for the artist. Intuitive interaction is paramount for creativity. For example, I hear the process of getting baked cloth is so convoluted I would never consider it. I don't want to save a thousand linked files or even write the script. It's a bottleneck, leading in a direction away from the task at hand... creating.

A functional, streamlined workflow is the only thing that makes sense to me nowadays. Most highly optimized workflows still require iteration upon iteration because of artistic scrutiny.
Artists seek to make this laborious refinement process as painless, or even as pleasant, as possible. If you have to manually run a thousand baked files through a sluggish process every time you want to make a change, then how creative can one be?

The preview mode would have to be dead on... but what if the client wants to preview it?
Also, I thought blur was a post-process on a frame, so it could be applied to anything?

Actually, I am evaluating Cloth as opposed to Syflex. There's a script called EZ Flex that makes Syflex even more interactive. One has the properties to create blue-jeans fabric with ten buttons; the other has a preset with one button. I'm picking the Mustang over a mule. I just want to get there. Whichever is fast, looks good and is tame!! (cooperates with what I want to do). Yes, I want depth of control, but at my prerogative.

If baking, saving and spitting out a bunch of files were controlled with something like a render wrangler or an auto-queue controller like Renderama, then I guess that would be OK.

AVTPro
07-01-2006, 08:09 AM
Hi, Tomas. What we said in post #1 of this thread: link a slow (or problematic) plug-in to a "saver" that provides a file cache.

I must have missed this too. That's a no-brainer. A solution, not a question. Anything that makes it faster and smoother is good.

AVTPro
07-05-2006, 07:24 PM
Sorry, I misunderstood; "baking" is kind of confusing to me. To me it means clearing the simulation process within the application to speed up the interface or perform other operations. It seems you're talking about something outside the application to speed things up for the renderer only? Or something for interactivity in the interface? In which case it would load back in?

I didn't edit out my previous post because that's how I feel about CG now anyway.

Igors
07-05-2006, 10:33 PM
Hi, Alonzo, glad to hear from you :) Sorry, I misunderstood; "baking" is kind of confusing to me. To me it means clearing the simulation process within the application to speed up the interface or perform other operations. It seems you're talking about something outside the application to speed things up for the renderer only? Or something for interactivity in the interface? In which case it would load back in? "Baking" is a broad term that, as we understand it, means: "pre-calculate, save, and then load instead of calculating again". It can be used for render and preview as well, depending on the concrete implementation.

AVTPro
07-06-2006, 01:44 AM
Here is a very small cloth sample with only a few vertices that I baked. Once I baked it, the vertices are stored in the animation channel and are no longer simulated; it becomes a regular animation based on points. Thus, the animation becomes efficient in preview speed and calculation time. It also has the added benefit of being controllable through animation channels.

http://i47.photobucket.com/albums/f190/AVTPro/Bake_cloth17.jpg

I found it most helpful in the case of a coin dropping with the use of dynamics (before the Rodeo dynamics simulator by Ramjac - now a falling coin takes only 2 minutes in EIAS with Rodeo). So you can tweak the channels once a suitable simulation has been achieved.

So I can see this being very helpful with Rodeo, which does dynamics (I don't remember if it already "bakes" or not). It does work with Maya dynamics baked into EI via FBX. I would love to see something like this with FBX for cloth, or some Maya script that could export cloth files or objects and import them into EI.

So there's my frustration: to get a cloth simulation from Maya into EI, each frame must be individually saved manually. Then each object must be converted into a model format EI can use.

I would love to see this automated for EI.
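
A minimal sketch of how that per-frame saving could be automated from Maya - this is not Tomas's script, just a hypothetical example assuming Maya's bundled objExport plug-in; the resulting OBJ sequence would still need to go through OBJ2FACT or Transporter:

# Export the selected cloth mesh as one OBJ per frame.
import maya.cmds as cmds

def export_obj_sequence(out_dir, start, end):
    cmds.loadPlugin("objExport", quiet=True)      # make sure the OBJ exporter is available
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)        # step the simulation to this frame
        path = "%s/cloth_%04d.obj" % (out_dir, frame)
        cmds.file(path, force=True, exportSelected=True, type="OBJexport",
                  options="groups=0;ptgroups=0;materials=0;smoothing=0;normals=1")

# example: export_obj_sequence("/projects/cloth_bake", 1, 120)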

MagicEgger
07-07-2006, 01:20 AM
Yeah,

It's a good addition.
I have a script which exports a cloth animation from Maya in .OBJ format... but you need to convert all the models with OBJ2FACT or Transporter.

Thanksss

Tomas

AVTPro
07-07-2006, 04:38 AM
Tomas,

You never cease to amaze me... Beyond your great art is your great heart.

Thanks. Where can I get the script? I would like to work on making this cloth process smooth for EI users. With FBX import of character motions, we should have the functionality to append dynamics simulations.

Once again Tomas, you have my utmost gratitude. Thanks for leading the way and setting the standard for how to raise the bar (by helping each other).

BTW, nice photo; iPhoto will get the red out better than Visine. Just two clicks :)

AVTPro
07-07-2006, 04:48 AM
Maybe you can just send the script by e-mail: athreet@adelphia.net.

Then I can find something to load the models automatically as a "replacement" animation object. Hopefully without a huge database or a crazy project file. :}

CGTalk Moderation
07-07-2006, 04:48 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.