View Full Version : Subsurface Scattering Using Depth Maps
ajm000 09-21-2005, 09:35 AM How do I go about completing this program? The part I'm confused about is how to compute the 2D lightDepthTex from the float4 dist. Does it go like this: the vertex position (x, y, z) is projected into texture space (s, t), and then the depth (dist) is stored there? Thanks so much!!!
If you have a better way to do subsurface scattering, please share it. Just not the texture-diffusion method (e.g. blur shaders).
DEPTH PASS
//VERTEX PROG
struct a2v {
    float4 pos : POSITION;
    float3 normal : NORMAL;
};

struct v2f {
    float4 hpos : POSITION;
    float dist : TEXCOORD0;
};
v2f main(a2v IN,
         uniform float4x4 modelViewProj,
         uniform float4x4 modelView,
         uniform float grow)
{
    v2f OUT;
    // Grow the object slightly along its normals.
    float4 P = IN.pos;
    P.xyz += IN.normal * grow;
    OUT.hpos = mul(modelViewProj, P);
    // Distance to the vertex in view (light) space; take .xyz so the
    // homogeneous w component does not contribute to the length.
    OUT.dist = length(mul(modelView, IN.pos).xyz);
    return OUT;
}
//FRAGMENT PROG (SEPARATE FILE)
float4 main(float dist : TEXCOORD0) : COLOR
{
    // Write the interpolated distance out to the depth texture.
    return dist;
}
THICKNESS COMPUTATION PASS
float trace(float3 P,
            uniform float4x4 lightTexMatrix, // to light texture space
            uniform float4x4 lightMatrix,    // to light space
            uniform sampler2D lightDepthTex)
{
    // Project the point into the light's texture space and look up the
    // depth of the surface nearest the light (where the ray entered).
    float4 texCoord = mul(lightTexMatrix, float4(P, 1.0));
    float d_i = tex2Dproj(lightDepthTex, texCoord.xyw).x;
    // Distance from the light to the point being shaded (where it exits).
    float4 Plight = mul(lightMatrix, float4(P, 1.0));
    float d_o = length(Plight.xyz);
    // Distance the light traveled through the material.
    float s = d_o - d_i;
    return s;
}
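As a sanity check on what trace() computes, here is a minimal Python sketch with hypothetical numbers (not from the thread): for a sphere of radius 1 centred 5 units from the light, a ray through the centre enters at depth 4 and exits at depth 6, so s is the full diameter.

```python
def thickness(d_i, d_o):
    """Distance the light travels inside the object: s = d_o - d_i."""
    return d_o - d_i

# Hypothetical setup: sphere of radius 1.0, centre 5.0 units from the light.
d_i = 5.0 - 1.0   # depth stored in the light's depth map (entry point)
d_o = 5.0 + 1.0   # distance from the light to the shaded point (exit point)
s = thickness(d_i, d_o)   # the sphere's diameter, 2.0
```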


playmesumch00ns
09-22-2005, 01:08 PM
What's the texture diffusion method?
ajm000
09-23-2005, 01:51 AM
Gaussian blur and the like, sir.
I'd like to implement something like a realistic-looking marble shader, hopefully in real time. And I don't think Gaussian blurs produce believable results that mimic marble. Am I right?
ajm000
09-23-2005, 02:16 AM
Looking at the program and reading lots of OpenGL and Cg tech specs, I was somehow enlightened about how this works, or how I might make it work. If I am not mistaken, this program says that...
Firstly, I need to set up a lightDepth texture, which is the same as a shadow depth texture.
Secondly, I need to set up a light matrix using gluLookAt, which I will use to project every incoming position so I can get the correct length for d_o. I will use that later to get s (the distance traveled through the material), which is just the difference between d_o (where the light exits) and d_i (where the light entered).
The remaining thing I am doubtful about is the lightTexMatrix. Is this the product of the lightMatrix and the texture matrix (CG_GL_TEXTURE_MATRIX)? Assuming so, I implemented it and was then able to compute s, which I used in the expression
exp(-s*sigma_t)*other_colorComputations;
What is a good value for sigma_t (e.g. known values to simulate something like jade or marble)?
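I can't vouch for measured jade or marble coefficients, but the qualitative behaviour of the exponential falloff is easy to check. A small Python sketch, with made-up sigma_t values purely for illustration:

```python
import math

def attenuation(s, sigma_t):
    """Beer-Lambert style falloff: fraction of light surviving a path of
    length s through a medium with extinction coefficient sigma_t."""
    return math.exp(-sigma_t * s)

# Zero thickness transmits everything; thicker or denser means darker.
# These sigma_t values are illustrative, not measured material data.
thin = attenuation(0.1, 2.0)
thick = attenuation(1.0, 2.0)
```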
I got bad results :( Maybe it's not good to combine this with self-shadowing (NVIDIA cg_skin)?
Help :_(
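On the lightTexMatrix question: for projective texturing, such a matrix is commonly assembled as bias * lightProjection * lightView, where the bias matrix remaps clip-space [-1, 1] into texture-space [0, 1]. A minimal Python sketch of that assembly (the function and matrix names here are illustrative, not taken from cg_skin or GPU Gems):

```python
def matmul(a, b):
    """Row-major 4x4 matrix product over nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Remaps clip-space coordinates from [-1, 1] to texture space [0, 1]:
# scale by 0.5, then offset by 0.5.
BIAS = [[0.5, 0.0, 0.0, 0.5],
        [0.0, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]

def light_tex_matrix(light_projection, light_view):
    """lightTexMatrix = bias * lightProjection * lightView."""
    return matmul(BIAS, matmul(light_projection, light_view))
```

With identity projection and view matrices the result is just the bias matrix, which is a quick way to sanity-check the multiplication order.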
ajm000
09-23-2005, 10:39 AM
Hello. I modified my computation for s. I thought about it for hours, reading more and more about projective texturing and shadow mapping. The Cg shader code above came from GPU Gems 1, by the way (thanks for the code, Mr. Simon Green). I just wanted to implement real-time subsurface scattering with self-shadowing using shadow/depth maps (implemented with pbuffers; see cg_skin from NVIDIA).
What I thought was that instead of light maps, I could use the shadow maps to compute the distance s traveled through the material. In my implementation, as far as I know (and please correct me if I'm wrong), the vertices in eye space are projected into the shadow map by multiplying them by the texture projection matrix (or texture matrix for short?). Once the shadow map is complete, the second pass starts: in the vertex program, incoming vertices are multiplied by the texture projection matrix to compute their texCoords, which are then used in the fragment program to look up/sample the shadow map. The result is, in effect, d_i.

(Imagine a sphere intersected all the way through by a ray coming from the light, so there are two intersection points. Projected back toward the light and saved to a texture, both intersection points land on the same texel, but the value stored is the depth of the first one, hence d_i.)

Now, taking the eye-space position of the current fragment being shaded, I projected it into light space so I could get its length (distance from the light). Note that this point can be either the first or the second intersection point. If it is the second, then I have d_o, and the difference between d_o and d_i gives s. But if it is the same point... hmm (I haven't thought about that yet).
But then my question is: where is the best place to use s? So far I have modulated the diffuse component of the color with it, i.e. diffuse*exp(-s*some_value). Then I tried modulating every other part, then everything. lol. But the results, all of them, were still horrible :_( Anyway, I'm looking for suggestions, or for anything you can find wrong with my implementation; anything would help. Light maps and shadow maps basically work the same way, right? I know there are some subtle differences, but if you could point them out for me (relative to my implementation and Mr. Green's) I would appreciate it, sir ^_^ Thanks so much!
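On the "same point" question above: if the shaded point is the entry point itself (a surface facing the light), then d_o equals d_i and s collapses to zero, which seems like the right answer, since no material is traversed before that point. A small Python sketch of the two-intersection geometry (illustrative only, not shader code):

```python
import math

def sphere_hits(center_dist, radius, offset):
    """Entry/exit depths along a ray from the light that passes at lateral
    distance `offset` from the centre of a sphere `center_dist` away."""
    half_chord = math.sqrt(radius * radius - offset * offset)
    return center_dist - half_chord, center_dist + half_chord

d_i, d_o = sphere_hits(5.0, 1.0, 0.0)   # ray through the centre
s_back = d_o - d_i    # shading the far side: s is the full diameter
s_front = d_i - d_i   # shading the near side: the shaded point IS the
                      # entry point, so s is zero
```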
CGTalk Moderation
09-23-2005, 10:39 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.