Transition between displacement maps in the same shader?


#1

Hello everybody,

I’m currently a VFX student at Savannah College of Art and Design. My class and I are working on a large project involving a cinematic.

We need a character’s displacement maps to transition between each other to account for small, high-detail traits such as wrinkles and muscle movement, which can’t be animated directly since that would require a much higher-resolution mesh. In other words, the blendshapes change, but the displacement maps don’t follow along with them. Does that make sense? Any ideas on a process that could achieve these transitions between displacement maps?

We’re working in Maya 2014 and rendering with RenderMan.

Thank you.


#2

This should probably be in the Maya-specific forum?

Anyways…

A place to start is with math (multiplication) nodes: bring all your disp maps in normalised to 0–1, then multiply each by a weight before combining them into a final map, i.e. ×0 is off, ×1 is on. You can set this up with driven keys so it matches the weighting of your blendshapes: as Growl BS = 1 and Frown BS = 0, you’ll have Growl disp ×1 and Frown disp ×0. As this changes from 1:0 to 0:1, the driven keys ensure the disp maps follow correctly. Finally, take the output map and remap it to whatever range RenderMan requires.
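In case it helps, here’s a minimal sketch of that setup in maya.cmds. The blendshape attributes (blendShape1.Growl / blendShape1.Frown), file paths, and node names are placeholders for whatever your rig actually uses, and the final hookup into the RenderMan displacement shader is left open since it depends on your shader:

```python
# Minimal sketch of the driven-key blend described above, using maya.cmds.
# Run in Maya's script editor; all names/paths below are placeholders.
import maya.cmds as cmds

def make_weighted_disp(map_path, driver_attr, name):
    """File texture multiplied by a weight that follows a blendshape."""
    f = cmds.shadingNode('file', asTexture=True, name=name + '_file')
    cmds.setAttr(f + '.fileTextureName', map_path, type='string')

    mult = cmds.shadingNode('multiplyDivide', asUtility=True, name=name + '_mult')
    cmds.connectAttr(f + '.outAlpha', mult + '.input1X')

    # Driven keys: weight 0 when the blendshape is 0, weight 1 when it's 1.
    for v in (0.0, 1.0):
        cmds.setDrivenKeyframe(mult + '.input2X',
                               currentDriver=driver_attr,
                               driverValue=v, value=v)
    return mult + '.outputX'

growl = make_weighted_disp('sourceimages/growl_disp.exr', 'blendShape1.Growl', 'growl')
frown = make_weighted_disp('sourceimages/frown_disp.exr', 'blendShape1.Frown', 'frown')

# Sum the weighted maps; the result feeds your displacement shader
# (the final remap/connection depends on what RenderMan expects).
total = cmds.shadingNode('plusMinusAverage', asUtility=True, name='dispSum')
cmds.connectAttr(growl, total + '.input1D[0]')
cmds.connectAttr(frown, total + '.input1D[1]')
```

Putting the weight on input2X keeps all the driven keys on a single scalar per map, which makes the network easier to debug.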

The math can/should vary (i.e. you don’t need to normalise and then remap; you can just perform one set of blending calculations) depending on the renderer’s requirements, the bit depth of your disp maps, etc.

Worth noting that, just as with blendshapes, you’ll get a blur between the maps as you blend from one to the other. Depending on how much detail you require and how much change there is between maps, this can look bad. An easy way to check is to recreate the shading network in compositing software and watch the maps as the changes occur — you’ll see if things get too blurry.
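If you’d rather script that check than rebuild the network in comp, here’s a rough Python sketch of the same idea. The maps are stood in by numpy arrays, since loading the EXRs depends on which image library your pipeline has:

```python
# Quick blur check outside Maya: linearly blend two displacement maps and
# compare how much high-frequency detail survives at the midpoint.
import numpy as np

def detail(img):
    """Rough high-frequency measure: mean gradient magnitude."""
    gy, gx = np.gradient(img)
    return float(np.mean(np.hypot(gx, gy)))

def blend(a, b, w):
    return (1.0 - w) * a + w * b

# Stand-ins for two loaded, 0-1 normalised displacement maps.
neutral = np.random.rand(512, 512)
growl = np.random.rand(512, 512)

mid = blend(neutral, growl, 0.5)
print(detail(neutral), detail(growl), detail(mid))
# If detail(mid) drops well below both endpoints, the crossfade
# will likely read as mushy in the render.
```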

Anyway, there are more advanced setups you can use, but this is probably a good place to start.

Finally, I just want to add that blending displacement maps is something you want to avoid if you can. It’s a pain to QC and, depending on the quality you need, it’s often just as effective to switch out to a higher-resolution mesh and use displacement (or normals) only for the higher-frequency detail. I’d hazard a guess that you might be using too low-res a mesh for your final binding. Usually I’d recommend a low-res mesh for the animators (with enough detail to still see what’s going on with facial features, etc.) that swaps out during lighting for a higher-resolution, render-friendly mesh.


#3

Hey Axiomatic,

Thanks for your response. I’ll be forwarding that information to my team for them to test.

I also emailed a studio called 3Lateral, and they replied with:

"You need to setup an animated displacements network which mimics blendshapes. This can be done through utility nodes by reproducing a formula

R = (T1 - N)*m1 + (T2 - N)*m2 + … + (Tn - N)*mn

where:

R - resulting displacement
N - displacement in the neutral state
T - target displacement (one for every high-resolution shape)
m - multiplier connected to GUI controls via SDK (set driven keys)

This can all be done through utility nodes in Hypershade; no additional plugins required. Best done with 32-bit displacement, but use it sparingly — these maps easily get very big.

Hope that helps.

Vladimir Mastilovic"

We tested this last approach and it works, but I’ll forward your response anyway. Once again, thanks.
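For anyone who wants to rebuild that network, here’s a minimal maya.cmds sketch of the formula above. The target maps, control attributes (faceCtrl.growl, etc.), and node names are placeholders, and adding N back onto the summed result is left to however your RenderMan displacement shader is set up:

```python
# Sketch of the 3Lateral formula R = sum((Ti - N) * mi) as Hypershade
# utility nodes, via maya.cmds. All paths and attribute names are placeholders.
import maya.cmds as cmds

def file_node(path, name):
    f = cmds.shadingNode('file', asTexture=True, name=name)
    cmds.setAttr(f + '.fileTextureName', path, type='string')
    return f

neutral = file_node('sourceimages/neutral_disp.exr', 'N_disp')

targets = {  # target map -> GUI/SDK attribute supplying the multiplier m
    'sourceimages/growl_disp.exr': 'faceCtrl.growl',
    'sourceimages/frown_disp.exr': 'faceCtrl.frown',
}

result = cmds.shadingNode('plusMinusAverage', asUtility=True, name='R_sum')
for i, (path, m_attr) in enumerate(targets.items()):
    t = file_node(path, 'T%d_disp' % (i + 1))

    # (T - N)
    delta = cmds.shadingNode('plusMinusAverage', asUtility=True,
                             name='T%d_minus_N' % (i + 1))
    cmds.setAttr(delta + '.operation', 2)  # 2 = subtract
    cmds.connectAttr(t + '.outAlpha', delta + '.input1D[0]')
    cmds.connectAttr(neutral + '.outAlpha', delta + '.input1D[1]')

    # (T - N) * m
    mult = cmds.shadingNode('multiplyDivide', asUtility=True,
                            name='T%d_times_m' % (i + 1))
    cmds.connectAttr(delta + '.output1D', mult + '.input1X')
    cmds.connectAttr(m_attr, mult + '.input2X')

    cmds.connectAttr(mult + '.outputX', result + '.input1D[%d]' % i)

# result.output1D is the summed delta displacement; add N back (or feed it
# to the shader however your RenderMan setup expects).
```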


#4

Is this also the logic behind facial “stress map” technology, like the one used at the 2-minute mark below?

//youtu.be/38wh5Fn4WEs

So the basic logic is: you have a map representing what you want to happen (i.e. a stress map), then a math node attached to a driver, and then you funnel that into the finished material?

And you can have multiple maps in setups like this, right? Just like demonstrated on the devil’s face in the demo, since you can use math nodes to “activate/deactivate” them?

I’m really interested in this topic.


#5

Please post software- and technique-specific questions in the appropriate forums. GD is NOT a catch-all for any given topic one is unsure about :)
Moving this to Maya/Rendering for you.


#6

It seems you already have the correct way to do this, but another, admittedly more hacky, way is to rename all your displacement maps as an image sequence. Then you can use the standard file texture node with the image-sequence option. Break the connection to the expression that gets created, and you have a file switcher that can jump to any frame containing the proper displacement map. You can even paint in-betweens for your displacements in an image editor and have certain sections of the image sequence be the tweens (e.g. frames 1–5 = neutral vein to pumped-up vein). You can then use set driven keys to move through a certain frame range and attach that to any effector you need.
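A rough maya.cmds sketch of that switcher, assuming placeholder paths and a hypothetical armCtrl.pump driver:

```python
# Image-sequence hack: a file node in sequence mode whose frame is driven
# by a set driven key instead of scene time.
import maya.cmds as cmds

f = cmds.shadingNode('file', asTexture=True, name='dispSwitcher')
cmds.setAttr(f + '.fileTextureName', 'sourceimages/disp.0001.exr', type='string')
cmds.setAttr(f + '.useFrameExtension', True)

# Maya creates an expression driving frameExtension from scene time; break
# that connection so the driven key below takes over.
src = cmds.listConnections(f + '.frameExtension',
                           source=True, destination=False, plugs=True)
if src:
    cmds.disconnectAttr(src[0], f + '.frameExtension')

# e.g. frames 1-5 run from neutral vein to pumped-up vein.
for driver_val, frame in ((0.0, 1), (1.0, 5)):
    cmds.setDrivenKeyframe(f + '.frameExtension',
                           currentDriver='armCtrl.pump',
                           driverValue=driver_val, value=frame)
```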


#7

Hello everybody again,

Sorry for not replying before, it’s been crazy busy for me lately.

We’ve set up a node network based on the formula I posted earlier, using multiplyDivide nodes (driven by SDKs for the blending) and a plusMinusAverage node to add up all the displacements. The resulting displacement is connected to a RenderMan shader, and the renders so far look very promising.

Unfortunately, we ran into a problem when submitting renders to the render farm: it seems to only use the displacement blend from the frame the scene was saved at, i.e. it does not recompute the displacement for each frame.

I was wondering if any of you have run into this before — utility nodes failing to recompute per frame on a render farm? Just as a reminder, we’re using RenderMan.

Any advice would be greatly appreciated. Thanks again!

Best regards,
Gian


#8

Any ideas?