Baking Plug-ins?


#51

Hi, Jens

We aren’t radiosity fans or specialists, so we can’t judge. But, Jens, look at the “occlusion baking” you’re excited about:

  • a rather long pre-calculation phase;

  • topology problems (visible edges etc.);

  • a huge amount of data (btw: even more with textures);

  • the distant prospect of an almost instant render (“twenty years of persistent labor, a thousand years of happiness” - hmm… people (count us in) need a little happiness, but now)

Is that all correct? If so, the list above is a classic description of radiosity as it has been known for the past 5 years. Maybe Modo discovered a new, friendlier radiosity? Maybe - we don’t know. But in any case their marketing is genius (and 100% correct). Of course, you can call it “occlusion baking” (a new revolutionary approach - Alonzo, where are you? :wink:). You can even call a cat a cow. But such a “cow” still says “meow” and gives no milk :slight_smile:


#52

You do not like baking.

I like it.

I guess that is clear by now.

Let’s move on to a more productive topic, shall we?

Jens


#53

You’re not alone; every major 3D application supports this feature in some form :wink:
I just never realized it could be so effective on interior scenes, … and such a time saver!

Maybe multi-layered rendering?

Cheers

Hans


#54

Hi, Jens

We weren’t the ones who started this topic, as is clear from the beginning of this thread. But since it came up, let us note: the thing you like (as you just said) and the thing you don’t like (radiosity) are… the same thing, in fact :slight_smile:

Sure! BTW: we’re still waiting for the latest XP-server changes; any help there would be welcome


#55

Hi, Hans

Sorry, Hans, we’ve nothing to say besides an annoyed “it should be in the host” :sad: So it’s just “our dream”

BTW: we would be interested in details about SSS in Modo. A little explanation or some images from you would be welcome. Of course, no problem if you have no time


#56

Maybe it’s the same thing, but I have never worked with a usable radiosity engine, while my experience with baking has been good so far.

Did Patrick give you a date for a new build? I have contacted him to see what’s up with it.

Jens


#57

For the Igors, this thread has drifted off topic.

Did you ever get an answer to the possibility of baking out particle data in a preview run?


#58

Hi, Kurt

Initially we proposed “baking” for all “capricious” plug-ins - those that, for example, repeat their calculations, are a bit slow, have some problems in Camera, etc. In short: just “calculate once” for plug-ins. Yes, for particles too (only the final particles are baked, not their database).


#59

Hey Igors, Jens, Uwe, Hans, Brian…

I’m really enjoying these CGTalk chats… but I’m really busy and going crazy trying to finish my FIAT job.
Playing with Maya, I found they love to use baked data: Shave and a Haircut (hair and model instances), Syflex (cloth and skin)… Applications like RealFlow do it too.
A long time ago Kishore helped me write a script to export Maya animations in OBJ format to be read into EIAS (via OBJ2FACT - thanks, Jens) and used in EIAS with the new cycling feature… but EIAS needs to read all the sequences in the host, right?
So I always asked myself… why can’t EIAS read data from the hard disk? Now we have G5s, Pentiums, fast HDs… and Animator would have more memory to work with.
It’s the same for baked data, I guess.
If a plug-in has done all its math, and you don’t change any channel anymore… why doesn’t it read the data from the HD?
How do morphing targets work? How does Animator store each target? Or does it access each target at each frame?
If Animator always read 3 frames around the current position in the animation’s timeline from the HD, it would always handle motion blur and the viewport preview correctly, right?
I agree on some points… for instance, Rama would need to handle distributing the data to us on the renderfarm.
I have 30 machines here… it’s impossible to do it by hand without a mistake.

And changing the subject, of course… as I love to do.
I like GI and texture baking, like all the other users here.
I remember when J. Banta showed me how ILM baked all the GI using the LightWave plug-in to render in EIAS on the Pearl Harbor feature film… it’s a really interesting and pipeline-approved tool.
I know… as the Igors always say, every technique has its good and bad sides.
But I’m pretty sure all the users here know about the problems with texture and GI baking… like poor quality on zoom-ins… but with some tricks, like creating a second map just for the area you will zoom into, this GI map or texture map will make our camera renders really fly.
I like how Modo does it… let’s test Modo more and see how it works.
But Igors, think a bit more… isn’t it interesting how many pro users are asking for it?
I remember when I bought my first EIAS version and asked Matt Hoffman: Matt, isn’t it possible to add a feature that flattens all the textures (like Photoshop does) into just one texture, to render faster? Without knowing anything about 3D, that means baking textures… and at the time baking hadn’t even been invented in 3D yet.

Thankssss

Tomas


#60

Hi, Tomas

We know, we know :slight_smile: BTW: your letters would be very easy to recognize even without the “sssss” and “Tomas” - discussing 10-15 topics at the same time is simply your style :slight_smile: As always, we ask you to be more concrete, and we’d be happy to answer everything we know for all your questions


#61

Haha,

As you know, I’m a multi-core brain.

OK, let’s start with baking data for plug-ins:
Why can’t EIAS read data from the hard disk? Now we have G5s, Pentiums, fast HDs… and Animator would have more memory to work with.
It’s the same for baked data, I guess.
If a plug-in has done all its math, and you don’t change any channel anymore… why doesn’t it read the data from the HD?
How do morphing targets work? How does Animator store each target? Or does it access each target at each frame?
If Animator always read 3 frames around the current position in the animation’s timeline from the HD, it would always handle motion blur and the viewport preview correctly, right?
I agree on some points… for instance, Rama would need to handle distributing the baked data to us on the renderfarm.
I have 30 machines here… it’s impossible to do it by hand without a mistake.

Thanks
Tom (Dual Core Brain)


#62

Hi, Tomas

Reading data from the HD doesn’t reduce its size - Animator will need the same amount of memory.

We guess you are talking about “common caching”. But, unlike other caching applications and usages, it’s not a rational idea to cache all generated geometry at each frame. Take, for example, caching a plug-in like Ubershape. For what? Such a cache of the animation can occupy many gigabytes of disk space, and even reading it back will be slower than simply forcing the plug-in to rebuild its analytical model again.
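
The Igors’ point - a cache only pays off when reading actually beats rebuilding - can be sketched in a few lines of Python. All names here (`eval_with_cache`, `build_ring`, the cache path) are hypothetical, purely for illustration:

```python
import math
import os
import pickle
import tempfile

def eval_with_cache(build_fn, cache_path, prefer_cache):
    """Hypothetical per-frame cache: load the mesh from disk only when
    the caller decided that reading beats rebuilding; otherwise rebuild
    the analytical model (the Ubershape case) and refresh the cache."""
    if prefer_cache and os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    mesh = build_fn()
    with open(cache_path, "wb") as f:
        pickle.dump(mesh, f)
    return mesh

# Toy "plug-in": a parametric ring of vertices, cheap to recompute.
def build_ring(n=8):
    return [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i in range(n)]

path = os.path.join(tempfile.gettempdir(), "ubershape_frame_0001.cache")
first = eval_with_cache(build_ring, path, prefer_cache=False)   # rebuild + write
second = eval_with_cache(build_ring, path, prefer_cache=True)   # read back
```

For a cheap analytical shape like this, `prefer_cache` would stay `False` every frame - which is exactly the argument against blanket caching.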

Any non-linear transformations (deforms, morphs, plug-ins) are recalculated from scratch if any of their source data changes (or if they have time-sensitive flags). AFAIK that’s the same in all apps.

Motion blur requires knowing each vertex’s position at the previous frame, but this position is never read from the HD. Each “transformer” is responsible for creating correct motion data (often it needs to repeat all its calculations with a “minus time delta”).
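
A minimal sketch of that “minus time delta” idea, with a toy deformer standing in for a real transformer (all names hypothetical):

```python
FRAME = 1.0 / 24  # assume a 24 fps time step

def deform(points, t):
    """Toy time-dependent transformer: slide every vertex along X by t."""
    return [(x + t, y) for (x, y) in points]

def motion_vectors(points, t, dt=FRAME):
    """Re-run the same transform at t and at t - dt and subtract:
    the transformer itself supplies the motion data for the blur;
    nothing is read back from disk."""
    now = deform(points, t)
    before = deform(points, t - dt)
    return [(nx - bx, ny - by) for (nx, ny), (bx, by) in zip(now, before)]

vecs = motion_vectors([(0.0, 0.0), (3.0, 2.0)], t=1.0)
# every vertex moved dt units along X between the two evaluations
```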

Hmmm… agreeing or disagreeing doesn’t make things faster :slight_smile:


#63

Igors,

What do you have in mind to make plug-ins faster to calculate?

Tomas


#64

Hi, Tomas

What we said in #1 of this thread: link a slow (or problematic) plug-in to a “saver” that provides a file cache


#65

You’re thinking of a plugin that problematic plugins can be linked to, to save out one model for each frame? Sounds cool. Wouldn’t this be incomplete without a “loader” plugin as well, one that takes care of the model sequence?

This loader plugin could “check” the geometry of the previous and next frame and so maybe provide the motion vector for the blur, if the API allows for this.

This sounds to me like a useful object-cycling plugin. The problem is still the distribution of the model sequences. The easiest would be to just write all the data to one single fact file that stores all the different “poses” of the mesh, like

sequence // parent effector
group#000 // mesh indexed with frame number
group#001
group#002
…
group#n
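
Using OBJ group syntax purely as a stand-in for the proprietary FACT layout, Jens’s single-file idea could be sketched like this (function and file names are hypothetical):

```python
import os
import tempfile

def write_pose_sequence(path, frames):
    """Write every frame's vertex list into one file as a named group
    (group#000, group#001, ...), one group per frame of the sequence."""
    with open(path, "w") as f:
        for i, verts in enumerate(frames):
            f.write(f"g group#{i:03d}\n")
            for x, y, z in verts:
                f.write(f"v {x} {y} {z}\n")

# demo: two one-vertex "poses" of the mesh
frames = [[(0.0, 0.0, 0.0)], [(0.0, 1.0, 0.0)]]
path = os.path.join(tempfile.gettempdir(), "poses_sequence.obj")
write_pose_sequence(path, frames)
```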

Edit: Another question would be whether the saver saves the sequence in its local coordinate system, or in the world coordinate system…

A disadvantage of this would be that with high-density meshes the resulting fact file would get really large. I absolutely like the idea that the cache data would be editable, since it would simply be a fact file. Also, distributing a single file sounds easier to me than distributing a folder with a fact sequence.

The next problem to solve would be texturing. As long as the models have a UV space it should be easy. The saver plugin could also write the current texture space as UVs to the facts, like Contortionist does.

To solve the distribution issues, maybe this could be coupled with a drag-and-drop utility in the Finder that lets you define remote folders and simply copies any data you drop on it to all assigned slave folders… sounds unglamorous to code, but it would solve a lot of problems.

Jens
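
The slave-folder copy utility Jens sketches at the end really is only a few lines of script. A hedged Python sketch (folder and file names are made up for the demo):

```python
import shutil
import tempfile
from pathlib import Path

def distribute(files, slave_folders):
    """Copy every dropped cache file to each assigned slave folder,
    so all render machines see the same baked data."""
    for folder in slave_folders:
        folder = Path(folder)
        folder.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, folder / Path(f).name)

# demo: one cache file, two hypothetical slave folders
root = Path(tempfile.mkdtemp())
cache = root / "cloth.fact"
cache.write_text("baked data")
slaves = [root / "slave_a", root / "slave_b"]
distribute([cache], slaves)
```

With 30 machines, a loop like this (pointed at 30 mounted slave folders) removes exactly the by-hand copying mistakes Tomas worries about.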


#66

What’s the plug-in? Wouldn’t the objective dictate the guidelines of its procedure? How can you determine which process is effective without knowing its use? Tell me what the plugin is and I can tell you what I think about baking during my art creation.

What concerns me is intuitive use for the artist. Intuitive interaction is paramount for creativity. For example, I hear the process of getting baked cloth is so convoluted I would never consider it. I don’t want to save a thousand linked files or even write the script. It’s a bottleneck, leading away from the task at hand… creating.

A functional, streamlined workflow is the only thing that makes sense to me nowadays. Most highly-optimized workflows still require iteration upon iteration because of artistic scrutiny.
Artists seek to make this laborious refinement process as painless, or even as pleasant, as possible. If you have to manually run a thousand baked files through a sluggish process every time you want to make a change, then how creative can one be?

The preview mode would have to be dead-on… but what if the client wants to preview it?
Also, I thought blur was a post process on a frame, so it can be applied to anything?

Actually, I am evaluating Cloth as opposed to Syflex. There’s a script that makes Syflex even more interactive, called EZ Flex. One has properties to create blue-jeans fabric with ten buttons; the other has a preset with one button. I’m picking the Mustang over a mule. I just want to get there. Whichever is fast, looks good, and is tamed!! (cooperates with what I want to do). Yes, I want depth of control, but at my prerogative.

If baking, saving, and spitting out a bunch of files were controlled with something like a render wrangler, or an auto-queue controller like Renderama, then I guess that would be OK.

#67

I must have missed this too. That’s a no-brainer. A solution, not a question. Anything that makes it faster and smoother is good.


#68

Sorry, I misunderstood; “baking” is kind of confusing to me. To me it means clearing the simulation process within the application to speed up the interface or perform other operations. It seems you’re talking about something outside the application, to speed up the renderer only? Or something for interactivity in the interface? In which case it would load back in?

I didn’t edit out my previous post because that’s how I feel about CG now anyway.


#69

Hi, Alonzo, glad to hear from you :slight_smile:

“Baking” is a wide term that, as we understand it, means: “pre-calculate, save, and then load instead of calculating again”. It can be used for both render and preview, depending on the concrete implementation.
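
That definition fits in one small function - a disk memoizer, sketched here with hypothetical names (`bake`, `slow_simulation` are illustrations, not anything from EIAS):

```python
import json
import os
import tempfile

calls = {"n": 0}  # counts how often the expensive step actually runs

def bake(calc_fn, path):
    """Calculate once, save, and on every later run load instead of
    calculating again - the whole idea of "baking" in miniature."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    result = calc_fn()
    with open(path, "w") as f:
        json.dump(result, f)
    return result

def slow_simulation():
    calls["n"] += 1
    return [i * i for i in range(5)]   # stand-in for an expensive sim

path = os.path.join(tempfile.mkdtemp(), "sim.bake")
a = bake(slow_simulation, path)   # calculates and saves
b = bake(slow_simulation, path)   # loads; no recalculation
```

The second call returns the same data but never touches the simulation - which is why both the preview and the final render can reuse it.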


#70

Here is a very small cloth sample with only a few vertices that I baked. Once I baked it, the vertices were stored in the animation channel, and it is no longer simulated but is a regular animation based on points. Thus the animation becomes efficient in preview speed and calculation time. It also has the added benefit of being controllable through animation channels.

[img]http://i47.photobucket.com/albums/f190/AVTPro/Bake_cloth17.jpg[/img]

I found it most helpful in the case of a coin dropping with the use of dynamics (before the Rodeo dynamics simulator by Ramjac - now a falling coin only takes 2 minutes in EIAS with Rodeo). So you can tweak the channels once a suitable simulation has been achieved.

So I can see this being very helpful with Rodeo, which does dynamics (I don’t remember if it “bakes” already or not). It does work with Maya dynamics baking into EI via FBX. I would love to see something like this with FBX for cloth, or some Maya script that could export cloth files or objects and import them into EI.

So there's my frustration, to get cloth simulation from Maya into EI, each frame must be individually saved manually. Then each object converted into a model format EI can use. 

I would love to see this automated for EI.
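
The automation Alonzo asks for is essentially a batch loop around an existing converter. A hedged sketch - here `convert` is a placeholder for whatever OBJ-to-model tool the pipeline uses (e.g. the OBJ2FACT mentioned earlier in the thread), and the demo substitutes a plain file copy:

```python
import shutil
import tempfile
from pathlib import Path

def convert_sequence(src_dir, dst_dir, convert):
    """Run the converter on every per-frame export in src_dir instead of
    saving and converting each frame by hand."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    done = []
    for src in sorted(Path(src_dir).glob("*.obj")):
        out = dst / (src.stem + ".fact")
        convert(src, out)
        done.append(out)
    return done

# demo with a dummy "converter" that just copies each file
root = Path(tempfile.mkdtemp())
(root / "exports").mkdir()
for i in range(3):
    (root / "exports" / f"cloth.{i:04d}.obj").write_text("frame")
out = convert_sequence(root / "exports", root / "facts", shutil.copy2)
```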