Yeah, it’s really only half-working right now: if a middle-layer object sticks out in front of the top layer, or if a bottom-layer object sticks out in front of the middle or top layer (within the threshold), that point is simply shaded with the top (or middle) layer’s shader. I call it a half-fix because, while it won’t hide the offending geometry, it will at least let it blend in: an inner layer poking through is extremely obvious when it’s shaded differently, whereas this makes the protruding piece look like it’s just part of the outer layer.
Actually hiding the geometry is going to take a little bit more work. I won’t have time to do more with it for a couple days, but I’ll post the source code if you want to mess around with it.
#include <shader.h>
#include <geoshader.h>
struct cloth_layers {
    miColor  top;
    int      top_object_label;
    miColor  middle;
    int      middle_object_label;
    miColor  bottom;
    int      bottom_object_label;
    miScalar threshold;
};
DLLEXPORT int cloth_layers_version(void) {return(1);}
DLLEXPORT miBoolean cloth_layers (
    miColor             *result,
    miState             *state,
    struct cloth_layers *paras)
{
    //declare variables
    int       top_object, middle_object, bottom_object;
    miScalar  threshold;
    double    distanceCovered = 0;
    miVector  normalGeom = state->normal; //copy state->normal into a local, so that mi_vector_neg() below doesn't modify state variables
    miBoolean satisfied = miFALSE;
    miBoolean middleHit = miFALSE;
    miBoolean bottomHit = miFALSE;
    int       label;

    /* The first test is whether the eye ray has hit the top layer, so
       evaluate the label of the top layer. It is not necessary at this
       point to evaluate the labels of the bottom or middle layers (a
       slight optimization). */
    top_object = *mi_eval_integer(&paras->top_object_label);
    if (mi_query(miQ_INST_LABEL, state, state->instance, &label)) { //get the label of the object the eye ray has hit
        if (label == top_object) { //already at the top layer, we're done
            *result = *mi_eval_color(&paras->top);
        } else {
            //the eye ray hasn't hit the top layer, so check whether it hit the middle or bottom layer
            middle_object = *mi_eval_integer(&paras->middle_object_label);
            bottom_object = *mi_eval_integer(&paras->bottom_object_label);
            threshold     = *mi_eval_scalar(&paras->threshold);
            mi_vector_neg(&normalGeom); //reverse normalGeom so it points directly inwards, for the secondary tracing below
            if (middle_object == label) {
                middleHit = miTRUE;
            }
            if (bottom_object == label) {
                bottomHit = miTRUE;
            }
            /* Optimization: the shader isn't coded to handle negative
               thresholds, and a threshold of zero requires no further
               tracing. Otherwise, tracing is necessary. */
            if (threshold > 0) {
                state->child->point = state->point;
                while (!satisfied) { //trace until we've gone past the threshold, or mi_trace_probe() misses
                    if (mi_trace_probe(state, &normalGeom, &state->child->point)) {
                        distanceCovered += state->child->dist; //distanceCovered is measured from the original shading point (from the eye ray), so add the new tracing distance each time
                        if (distanceCovered > threshold) { //gone over the threshold, finished with the loop
                            satisfied = miTRUE;
                        } else {
                            if (mi_query(miQ_INST_LABEL, state, state->child->instance, &label)) { //query the label of the new point hit by secondary tracing
                                if (label == top_object) { //found the top layer within the threshold, we're done
                                    *result = *mi_eval_color(&paras->top);
                                    return(miTRUE);
                                }
                                if (label == bottom_object && !middleHit) {
                                    bottomHit = miTRUE;
                                }
                                if (label == middle_object) {
                                    middleHit = miTRUE;
                                    bottomHit = miFALSE;
                                }
                            } else {
                                return(miFALSE);
                            }
                        }
                    } else { //mi_trace_probe() didn't hit anything, we're done with the loop
                        satisfied = miTRUE;
                    }
                }
            }
            if (middleHit) { //the middle layer was hit, return the middle color
                *result = *mi_eval_color(&paras->middle);
            } else if (bottomHit) { //no middle hit, but a bottom hit, return the bottom color
                *result = *mi_eval_color(&paras->bottom);
            }
        }
    } else { //if mi_query() fails, return miFALSE. This shouldn't happen.
        return(miFALSE);
    }
    return(miTRUE);
}
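For actually hiding the geometry (the part I said still needs work), one direction I might try, purely as an untested sketch, is to let the eye ray pass straight through the protruding point instead of recoloring it. mental ray's mi_trace_transparent() continues the ray behind the current intersection, so where the loop above finds the top layer within the threshold and recolors the point, you could do something like:

if (label == top_object) { //the top layer is behind us within the threshold, so this point is protruding
    //Untested sketch: pass the eye ray through the surface and shade
    //whatever is behind it (usually the outer layer) instead of recoloring.
    return(mi_trace_transparent(result, state));
}

The caveat is that mi_trace_transparent() continues along the eye ray, not along the inward normal the probe used, so the next thing it hits isn't guaranteed to be the top layer; for thin layer stacks it usually would be, though.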
Also, I really think you should be using the interpolated normals for this instead of the eye ray direction. The example you gave with the two spheres is perfect for showing why; I’m attaching an image that illustrates it. If you did the secondary tracing directly along the eye ray, then in setups like your top-layer sphere within a bottom-layer sphere, the bottom layer would be clearly visible as you approach the edge of the sphere.
The only real problem with using the normals is that they aren’t always going to point straight inwards, but for the vast majority of cases I think it’ll be close enough.
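To make that concrete, here's roughly what the two choices of secondary-tracing direction look like (just a sketch; state->normal and state->dir are standard miState members, and the helper name is mine):

#include <shader.h>

/* Sketch only: choosing the secondary-tracing direction. Probing along the
   reversed interpolated normal crosses the layer stack more or less
   perpendicularly; probing along the eye ray grazes the stack near a
   sphere's silhouette, so the threshold gets used up before the outer
   layer is reached. */
static miVector probe_direction(miState *state, miBoolean use_normal)
{
    miVector dir;
    if (use_normal) {
        dir = state->normal; //interpolated normal at the shading point
        mi_vector_neg(&dir); //flip it so it points into the surface
    } else {
        dir = state->dir;    //eye ray direction, already points inward
    }
    return dir;
}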