m:studio 2.4d in use on SA Sports Hall of Fame


#41

Hahah - I know exactly what you mean - tweak and tweak and tweak… :wink:

Well, I like the look a lot, I just think that - depending on the customer - it may need to shout “gold” more in the end (I had that happen once), so I could imagine going more towards a yellowish reflection tint. But that’s just me.

That AA problem really sucks. You can’t do much besides cheap tricks, like using a find-edges pass as a mask for a blur in comp, etc. Nothing elegant about that. If you render to fields, you can try a half- to one-pixel vertical blur - this not only prevents flickering interlace lines, but often helps smooth the look and isn’t as recognizable as a full blur.
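For anyone curious what that vertical blur trick actually does, here’s a rough sketch - pure Python on a 2D list of luminance values, not a real compositing pipeline, and the 3-tap kernel is just my approximation of a “half to one pixel” blur:

```python
# Sketch of the half-to-one-pixel vertical blur trick for field rendering:
# a [0.25, 0.5, 0.25] kernel applied down each column softens hard
# horizontal edges so interlaced fields flicker less. Pure-Python toy.

def vertical_blur(image):
    """Return a copy of `image` blurred vertically with a 3-tap kernel."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            above = image[max(y - 1, 0)][x]         # clamp at top edge
            below = image[min(y + 1, rows - 1)][x]  # clamp at bottom edge
            out[y][x] = 0.25 * above + 0.5 * image[y][x] + 0.25 * below
    return out

# A hard one-pixel horizontal line (worst case for interlace flicker)...
img = [[0.0] * 4, [1.0] * 4, [0.0] * 4]
soft = vertical_blur(img)
# ...gets spread across three scanlines: 0.25 / 0.5 / 0.25
```

A one-scanline detail that used to exist in only one field now bleeds into both, which is why the flicker goes away.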

Taron! Help! :bowdown: :bowdown: :bowdown: Please!


#42

The AA artifacts nearly go away when scaling down from 1440x1152 to 360x288.

Looks good to me but the price to get to this is way too high. :sad:

Video Toaster’s Aura effects package has this tool built in and I use it to great effect, even when not rendering to fields (it still gets broadcast in fields). CG is often very sharp and this filter just takes the edge off it - makes it rock solid and really yummy for TV.


#43

That would only work if you could use no AA - then the render time should be relatively similar to the final size at higher AA. But you would have to render at 2x to 4x the size…
AA isn’t much different from that process in the end.

Be careful though: if you disable AA, the soft shadows are rendered perfectly - and slowly. Another bug IMO, since if you disable AA, you normally want fast results…

Cheers,


#44

:cry: these are tears of joy…FINALLY! I’m continuously trying to tell people that, but never quite so beautifully…thank you! :wise: :slight_smile:

And while we are at it, we’ve revisited the AA issues with overexposed pixels and are adding an option to do a pre-AA clipping, which will make it all work fine again. You will not need to go to extra high AA settings to get a smooth render! Oddly enough, that wouldn’t have helped before either…hehe…
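My understanding of why pre-AA clipping helps, as a toy in Python (these are made-up numbers, not messiah’s actual pipeline): a single super-bright HDR subsample dominates the average, so the filtered edge pixel still clamps to full white and the stair-step survives. Clamping the samples before averaging restores the coverage gradient:

```python
# Why overexposed HDR pixels defeat edge antialiasing, and how clamping
# the samples *before* the AA filter fixes the look. Toy model only.

def aa_pixel(samples, pre_clip=False):
    """Average subsamples into one pixel, optionally clamping first."""
    if pre_clip:
        samples = [min(s, 1.0) for s in samples]
    value = sum(samples) / len(samples)
    return min(value, 1.0)  # final clamp to displayable range

# An edge pixel: one of four subsamples hits a hot highlight (40.0),
# the rest are background (0.0).
samples = [40.0, 0.0, 0.0, 0.0]

aa_pixel(samples)                 # -> 1.0  (average is 10.0, clamps to
                                  #  full white: the edge stays a hard stair)
aa_pixel(samples, pre_clip=True)  # -> 0.25 (a proper 1/4-coverage grey)
```

It also shows why cranking the AA settings never helped before: more samples of a 40.0 highlight still average to something above 1.0.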

So anyway, I thought I had to react to your wonderful post and we have to react to the great job you’re doing! Makes us happy and proud! :buttrock:

Thanx a bunch and count on a patch very soon…we’ll give it a test spin first, of course, but you’ll be the first to know!


#45

Hi Paul!
Now the text and the lion get buried in the reflections.
If you add a little blur to the reflections you will get a more realistic and expensive gold look - try it! And you should also try to bring back some, but very subtle, diffuse.

/ Svante


#46

I designed this project with messiah in mind, with no intention of falling back onto Lightwave. I wouldn’t commit to a fairly large commercial project if I thought messiah would fail me. Of course the AA issue popped out as a threat . . .

Displaying this WIP on this forum also speaks about my confidence in messiah. On the other hand, I’ve grown to value the messiah community. Even though it’s only a shiny logo thingy without character animation, any crits on my work can only improve it and help me and others to learn.

Now there’s a spot of welcome news! Wow! If this AA issue can be solved, man, I’ll be happy, since you can see I’m sort of knee-deep in shiny metal on this project.

What can I say . . . :blush: thank you all for your encouragement. I’ll do my best :wip: :arteest:

:bounce:wooot!! I’m excited. :bounce: Here’s hoping that will be before I need to hit the final render button on this project!


#47

Hey, thanks for the post! This layout is not for my project, just for fun, part of scratching the itch, you know!? In the final setup I’ll show more respect for the client’s logo. :wip:

Would you suggest blurring the reflection map, or should I soften the reflection in the material? The latter was very slow when I tried it and then I just dropped it, but I did wonder what the best would be.

I’ll play with the diffuse in the final scene. I pulled it down so that the dark parts of the probe can influence the reflection more. In studio photos of shiny metal, reflections are sometimes pitch black in places. I’ll also play with Tint to Surface Colour, although I’m already at 70%.


#48

Blurred reflections are rather expensive to compute, so I always use pre-blurred HDRIs. Either HDRShop or Photoshop CS2 can do the job. It works best if you first convert the probes to spherical maps.
I keep maps with Gaussian blur values between 5 and 40 pixels available and use them depending on the scene.

With smaller probes, I convert them to a higher resolution in the mirrored-ball-to-spherical-map conversion and then apply the blur, so there are more pixels for subtle changes.
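For what it’s worth, the mirrored-ball-to-spherical conversion those tools hide boils down to one mapping: where does a given world direction land on the ball image? A sketch of that math, assuming the usual light-probe convention (orthographic camera, unit ball) - the function name is mine, not from any tool:

```python
# Map a world direction onto mirrored-ball probe coordinates.
# Assumes an orthographic camera looking down -z at a unit ball,
# which is the common light-probe convention.
import math

def direction_to_ball_uv(rx, ry, rz):
    """Map a unit reflection direction to mirrored-ball coords in [-1, 1]."""
    # The ball normal n that reflects the viewer toward r satisfies
    # r = 2*n_z*n - (0, 0, 1); solving for n and reading off (u, v) gives:
    denom = math.sqrt(2.0 * (1.0 + rz))
    return rx / denom, ry / denom

# Direction straight back at the camera -> center of the ball:
direction_to_ball_uv(0.0, 0.0, 1.0)   # -> (0.0, 0.0)
# A 90-degree side direction lands at radius ~0.707, NOT at the rim --
# the probe's outer pixels cover a huge chunk of the sphere, which is
# why upsampling before the spherical conversion helps.
direction_to_ball_uv(1.0, 0.0, 0.0)   # -> (~0.707, 0.0)
```

Blurring after the conversion (rather than on the raw ball image) avoids smearing that badly stretched rim region unevenly.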

Cheers!


#49

Thanks for the tips Thomas :thumbsup:

Nice to learn from the wisdom of others. Max always had a blur parameter for texture maps. Does messiah not have a Blur node? Or TLHPro 1.2 perhaps? :curious:


#50

No, there is no blur in messiah, and I still haven’t started to look into 2D stuff…
From a speed point of view, I would always blur in advance anyway…

Cheers!


#51

How about a node called “Upstream Cache” which can be useful for nailing down any shader calculations that the user feels do not need repeating for each frame. The idea is to do the normal processing on the 1st frame, then write the node cache (the result of upstream shaders leading to that node) to disk, and then enjoy faster shader computation for any subsequent frames.

In this way, if there was a Blur node (wink, nudge) my flow would look like this:

[frame 1] Texture -> Color Correction -> Blur -> Upstream Cache (write) -> Material

Let’s say I’m rendering a sequence from frame 1 to 60. The shader flow gets computed as usual on frame 1 and Upstream Cache writes the results to disk. From frame 2 to 60, only the cache is accessed (loaded into RAM), bypassing all the nodes before Upstream Cache for subsequent frames.

[frame 2] Upstream Cache (read) -> Material
[frame 3] Upstream Cache (read) -> Material
[frame 4] Upstream Cache (read) -> Material

If I render from frame 73 to 950, then 73 does the caching and 74 to 950 benefit.

Make sense?

Being a node, it gives the user the freedom to use it where and how they want, letting them decide for themselves what can benefit from caching.

Hitting F9 for a single test render should not activate the cache - only rendering sequences should.
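To make the idea concrete, here’s a toy of the proposed node in Python - all the names are hypothetical (this is the suggestion from the post, not a messiah API): evaluate the upstream branch once, write the result to disk, and short-circuit on every later frame.

```python
# Toy "Upstream Cache" node: frame 1 pays the full upstream cost and
# writes the result to disk; every later frame reads the cache instead.
import os, pickle, tempfile

class UpstreamCache:
    def __init__(self, upstream, cache_path):
        self.upstream = upstream        # callable: frame -> shader result
        self.cache_path = cache_path

    def evaluate(self, frame):
        if os.path.exists(self.cache_path):          # frames 2..N: read
            with open(self.cache_path, "rb") as f:
                return pickle.load(f)
        result = self.upstream(frame)                # frame 1: full cost
        with open(self.cache_path, "wb") as f:       # ...written once
            pickle.dump(result, f)
        return result

# Simulate an expensive Texture -> Color Correction -> Blur branch:
calls = []
def expensive_branch(frame):
    calls.append(frame)                 # record how often we really run
    return [0.5] * 8                    # stand-in for a blurred probe

path = os.path.join(tempfile.mkdtemp(), "node.cache")
node = UpstreamCache(expensive_branch, path)
for frame in range(1, 61):
    node.evaluate(frame)
len(calls)  # -> 1: the branch ran on frame 1 only; 59 frames hit the cache
```

The F9 rule would just mean single test renders bypass this object entirely and call the upstream branch directly.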


#52

Yeah, we discussed this before.

In XSI/Mental Ray this is called Lightmaps, but its use is more limited and less universal than you might think:

The first thing is that many things depend on the viewing angle and how close the camera is to a surface. You can’t bake/buffer reflections, refractions, or even specular highlights.
Shadows change as soon as something moves…

The second thing is that baking/caching is always bound to a resolution. What you can do is create 2D textures from a shader tree (with the above limitations) and then work further on those, down the tree - this allows, for instance, the blurring of procedurals.
But at the same time, you lose the resolution-independence of procedurals…

Also, this has to be done on a per-surface/object basis and can therefore grow very large.

So while tempting, this is way more involved and limited than one may think - but it also opens doors to very interesting effects not possible otherwise.

I would be very happy for now if we could at least bake out surfaces (GI, for instance) to textures as a first step.

Cheers,


#53

Well, I never thought of this in the sense of baking, but rather as a way to cache shader tree computations before the result hits the surface. Sort of the way 2D flows are computed in Digital Fusion or Shake. Imagine that the material surface is like the last node in a 2D compositor. A shader tree feeds into that final node to create the result. Some portions of the tree can be very processor intensive, and so those get cached only once IF the user sees the need for it.

For example, let’s say I have an HDR probe environment texture (a 5MB file) and then decide to use the Color Correction node to kill the colour saturation on it. I decide that this will be static for the entire animation. So I’d like to tell messiah that this is static (I won’t animate Color Correction changes down the line), so please compute it only once and cache it.

Dropping saturation was perhaps very fast, so I did not gain much.

But how about a probe with 0 saturation, higher contrast and a heavy blur (wink, nudge), which I combine with another probe modified by the same nodes. That’s 2 probes with 3 processes applied to each of them on every frame - quite an overhead for this one material (a heavy scene may have many more such situations, compounding the overhead). An Upstream Cache would be able to collapse the result into a single probe texture with no additional processing. For heavy shader trees this can also reduce the memory requirement.

Digital Fusion does this sort of thing automagically, detecting if nothing will change on any subsequent frames, caching the static stuff, and never revisiting those nodes. It saves A WHOLE LOT in render time.

As I understand it, right now messiah treats everything as dynamic, even if the user sets it up as static. Surface Displacement currently works like that too. This is not efficient, although the job still gets done.
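A sketch of how the automatic version might decide what to cache - this is a hypothetical node model I’m making up to illustrate, not Fusion’s or messiah’s actual internals: each node reports whether it is animated, a branch counts as static only if nothing beneath it is animated, and static branches are evaluated once and memoized.

```python
# Fusion-style static-branch detection, as a toy: propagate an
# "animated" flag up the tree and memoize any fully static subtree.

class Node:
    def __init__(self, name, func, inputs=(), animated=False):
        self.name, self.func = name, func
        self.inputs, self.animated = list(inputs), animated
        self._memo = None

    def is_static(self):
        return not self.animated and all(i.is_static() for i in self.inputs)

    def evaluate(self, frame, counts):
        if self.is_static() and self._memo is not None:
            return self._memo                      # static branch: reuse
        counts[self.name] = counts.get(self.name, 0) + 1
        args = [i.evaluate(frame, counts) for i in self.inputs]
        result = self.func(frame, *args)
        if self.is_static():
            self._memo = result                    # evaluated exactly once
        return result

# Static probe -> desaturate -> blur branch feeding an animated material:
probe = Node("probe", lambda f: 8.0)
desat = Node("desat", lambda f, x: x * 0.5, [probe])
blur  = Node("blur",  lambda f, x: x + 1.0, [desat])
mat   = Node("material", lambda f, x: x * f, [blur], animated=True)

counts = {}
for frame in range(1, 11):
    mat.evaluate(frame, counts)
# counts -> {'probe': 1, 'desat': 1, 'blur': 1, 'material': 10}
```

Over 10 frames, the whole static branch runs once while only the animated material re-evaluates - which is exactly the win being described, and also shows the cost Thomas mentions: every evaluation now pays for the `is_static` bookkeeping.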


#54

OK, perhaps I shouldn’t be drawing a parallel between a 2D compositor and a 3D shader tree. In 2D you know what you’re working with, you know the resolution, and you’re doing one thing after another to reach an end goal. In 3D you’re probably not computing that end result before rendering, but basically moving up the tree in reverse to see what influences the surface being rendered under the current pixel in the camera view.

So perhaps we should say that certain nodes relating to bitmap textures should have an Upstream Cache feature. In this way we could parallel 2D compositors and gain both speed and memory benefits.


#55

Ah, I see.

Yes, the old dream of compositing in messiah basically, with good buffering…
I’ve wanted that since I first saw the cool nodes in messiah…

So yes it would be fantastic, but on the other hand, the things you describe are so easy to do externally, that it is almost a waste of development time in the current state.
I always use layered TIFF files in Photoshop for stuff like this. They can be read by all the 3D apps I have with no problem, but at the same time, the layers keep me very flexible.
CS2 with the Live Object Layers even keeps the resolution if you scale something…

But where Fusion has a rather simple job to do, 3D is much more complicated, and finding out whether something has changed or not can be tricky. Even XSI, which with its “lazy evaluation” approach optimizes everything like you describe, sometimes has problems recognizing a change somewhere in the scene tree…

As much as I like the idea, I see more pressing things on the list :wink:

As far as Displacement goes: it actually uses a kind of caching already, if you think of the displacement map as a buffer. Displacement isn’t meant as much for static stuff as for deforming objects, and there your caching fails.
For static objects, you can easily freeze the mesh in ZBrush etc. - nothing other than what the cache would do in the end.

In programming, there is always a trade-off to make between speedup and slowdown. Testing whether something has changed first slows down processing. This must lead to a major improvement later on to be worth it.

Sometimes it is more efficient to put some trust in the user’s brain and let them find the best approach themselves. Since after a while, an app with all those little bells and whistles turns into something called MAYA… :wink:

Cheers :slight_smile:


#56

I was thinking, like, uh, messAYA.

OK, I’ll get on with being a user. :thumbsup:


#57

Just for your viewing pleasure I did the unthinkable - rendered a 60-frame sequence at 4x resolution and scaled it down 4x just so we can all get some ultra-shiny eye candy. It ran overnight at between 15 and 20 minutes per frame. A very painful thing to do. :eek:
Ufuzzi gold medallion QT (620KB)

Once again, don’t evaluate this as part of a commercial project, just a little sequence done on the side for, uh, educational purposes or R&D or whatever official thing you want to call it.

Post processing in DF was a short Gaussian blur (defocus to in focus) and a bit of extra Gain.

Now imagine the HDR AA issue fixed, and this sequence could render at 1.5 minutes a frame at the desired resolution . . . that will be sweeeeeeeeet! :stuck_out_tongue:
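The render-big-then-scale-down trick is just supersampling done by hand: each final pixel averages a block of rendered pixels. A toy box-filter downscale in Python (a real pipeline would use a better filter, but the principle is the same):

```python
# "Render at 4x, scale down 4x" = manual supersampling: every output
# pixel averages a non-overlapping factor x factor block of input pixels.

def downscale(image, factor):
    """Box-filter an image (2D list) down by an integer factor."""
    rows, cols = len(image) // factor, len(image[0]) // factor
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            block = [image[y * factor + j][x * factor + i]
                     for j in range(factor) for i in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hard diagonal edge rendered at 4x4 "resolution"...
hi = [[1.0, 0.0, 0.0, 0.0],
      [1.0, 1.0, 0.0, 0.0],
      [1.0, 1.0, 1.0, 0.0],
      [1.0, 1.0, 1.0, 1.0]]
downscale(hi, 4)  # -> [[0.625]]: the jaggies average into coverage grey
```

Which is also why the overexposure bug bites so hard here: one HDR pixel far above 1.0 inside a block can still blow out the averaged result unless it’s clipped first.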


#58

Hahaha - cool.

For the final Goldfinger touch, I would reduce the HDRI brightness to maybe 0.5 and push the reflection tinting full throttle, so that it is really yellow.
But under the “metal” topic it looks really great.
Do you have a gradient on it at all for the reflection?

:thumbsup:


#59

I see some members of the audience just have to have ‘real’ gold. I’ll see what I can do.

Nope. Just plain reflection. I’m thinking of using an airbrushed map to govern reflection, with the most polished areas on top and the duller areas to the sides.

Did you have any other suggestions with gradients?


#60

In the File Tab under the Multiply Items block, if there was a “Reload” button next to “Clone”, “Mirror” and “Replace” it would save some frustration.

Right now I have to click replace, then hunt around in a folder for an object which is already selected in messiah.

Or is there something like this but I didn’t notice? I couldn’t find it in the docs.