Volume Rendering... help needed


#1

Hi

Has anyone tried to implement the volume rendering method described in this paper?

http://graphics.stanford.edu/~fedkiw/papers/stanford2003-02.pdf

Any ideas about:
- How to get colors, since the voxel grid only contains intensity and illumination (both floats?).
- What would be an efficient algorithm for tracing light rays through the voxel grid, and how to handle antialiasing, since one voxel row seems to correspond to one pixel on the screen?
- How should the particle mapping be done? The algorithm I have now just maps all of the particles, but is that really necessary?

I am trying to get the actual rendering system working first, and then concentrate on the fluid dynamics.

Any help would be appreciated.

t:\Pekko

ps. thanks


#2

In answer to your questions:

  1. You map colours to ranges of the floating-point intensity values, i.e.

0 < intensity < 0.1 = black
0.1 < intensity < 0.6 = brown
0.6 < intensity < 0.7 = red

… and so on, up to white for the most intense parts. Those are your colours (there is a rough sketch of this at the end of this post). In the paper, illumination is stored as a colour value, but if you want to save space you could just assume white light, or treat external and internal illumination differently, with two different colours.

  2. a) There are many algorithms for stepping through a voxel grid; search Google for “grid traversal algorithm” or something similar. Tracing for illumination is more difficult, because the voxel grid is not uniformly shaped in world space.
    b) The whole point is that you don’t have to worry about antialiasing, since the grid cells are aligned with your pixels.

  3. What do you mean by “particle mapping”? Interpolation?
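
For point 1, something like this small transfer function is all I mean (just a sketch; the thresholds and RGB values are arbitrary examples, not from the paper, so tune them to whatever looks right):

```cpp
struct Color { float r, g, b; };

// Map a voxel intensity (assumed to lie in [0, 1]) to a colour.
// The bands below are arbitrary examples, not values from the paper.
Color intensityToColor(float intensity)
{
    if (intensity < 0.1f) return Color{0.0f,  0.0f,  0.0f};   // black
    if (intensity < 0.6f) return Color{0.45f, 0.27f, 0.07f};  // brown
    if (intensity < 0.7f) return Color{1.0f,  0.0f,  0.0f};   // red
    // ... more bands as needed ...
    return Color{1.0f, 1.0f, 1.0f};                           // white for the most intense parts
}
```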


#3

By mapping I mean the process of mapping, or “putting”, the densities of the particles into the voxel grid. What I have now is a very basic algorithm that goes through all the particles:

- Is the particle inside the voxel grid?
- How many samples do we need (depends on the distance to the camera)?
- Take the required samples and assign them to the corresponding voxels.

The paper says that they do not need the particles that are deep inside the smoke and thus not visible, but I don’t know an efficient way to do this yet.
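
In code, what I have looks roughly like this (just a sketch: VoxelGrid, worldToGrid and the world-space bounds are placeholders of my own, and it only does the simplest single-sample version of the steps above, with no skipping of buried particles):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3  position;   // world-space position
    float density;    // density carried by this particle
};

// Dense voxel grid storing one density value per cell.
struct VoxelGrid {
    int nx, ny, nz;
    std::vector<float> density;

    VoxelGrid(int nx_, int ny_, int nz_)
        : nx(nx_), ny(ny_), nz(nz_),
          density(std::size_t(nx_) * ny_ * nz_, 0.0f) {}

    bool contains(int x, int y, int z) const {
        return x >= 0 && x < nx && y >= 0 && y < ny && z >= 0 && z < nz;
    }

    float& at(int x, int y, int z) {
        return density[(std::size_t(z) * ny + y) * nx + x];
    }
};

// Placeholder transform from world space to fractional grid coordinates.
// Here it is just a fixed axis-aligned box mapped onto the grid; in my
// renderer this is really the hybrid screen-x,y / camera-z transform
// described below.
Vec3 worldToGrid(const Vec3& p, const VoxelGrid& grid)
{
    const Vec3 boxMin = {-1.0f, -1.0f, -1.0f};   // made-up world bounds
    const Vec3 boxMax = { 1.0f,  1.0f,  1.0f};
    return Vec3{ (p.x - boxMin.x) / (boxMax.x - boxMin.x) * grid.nx,
                 (p.y - boxMin.y) / (boxMax.y - boxMin.y) * grid.ny,
                 (p.z - boxMin.z) / (boxMax.z - boxMin.z) * grid.nz };
}

// Very basic mapping pass: every particle deposits its whole density into
// the single voxel it falls in. No multi-sampling based on camera distance,
// and no skipping of particles buried deep in the smoke.
void mapParticles(const std::vector<Particle>& particles, VoxelGrid& grid)
{
    for (const Particle& p : particles) {
        Vec3 g = worldToGrid(p.position, grid);
        int x = int(std::floor(g.x));
        int y = int(std::floor(g.y));
        int z = int(std::floor(g.z));
        if (!grid.contains(x, y, z))
            continue;                       // particle lies outside the grid
        grid.at(x, y, z) += p.density;
    }
}
```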

The paper said that they were able to render images 2000 pixels wide. That would make something like a 2000×1000×200 grid (the y and z sizes are my estimates), and with 2 GB of RAM, two floating-point values per voxel is the maximum I can store in the grid.

I understood the mapping of color values you explained, but I am still a little confused about colored lights and light scattering (concerning colors).

For the ray traversal for lights I am trying the following:

I use a hybrid space (a combination of camera space and screen space) to do the traversal. The x and y coordinates are put into screen space, but the z coordinates are just divided by the voxel depth.
This should give me a “voxel grid space”. I was thinking that if I do this for every coordinate, I may be able to treat the grid as a normal cube instead of a truncated pyramid.
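
The transform I mean is roughly this (a sketch of my own idea, not something from the paper; viewPlaneDistance and voxelDepth are my own parameters):

```cpp
struct Vec3 { float x, y, z; };

// My own parameters (not from the paper): distance from the eye to the
// view plane, and the camera-space thickness of one voxel slab.
const float viewPlaneDistance = 1.0f;
const float voxelDepth        = 0.1f;

// Transform a camera-space point into "voxel grid space": x and y are
// perspective-projected onto the view plane (so, after a scale and offset
// into pixel units, they line up with the pixel grid), while z is simply
// divided by the voxel depth. In this space the frustum-shaped grid can be
// traversed as if it were an ordinary axis-aligned box.
Vec3 cameraToGridSpace(const Vec3& p)
{
    Vec3 g;
    g.x = p.x / (p.z / viewPlaneDistance);   // projected x, in view-plane units
    g.y = p.y / (p.z / viewPlaneDistance);   // projected y, in view-plane units
    g.z = p.z / voxelDepth;                  // slab coordinate along the view direction
    return g;
}
```

One thing I noticed: a light ray that is straight in camera space is in general no longer straight in this grid space (only rays through the eye stay straight), which I guess is why the illumination traversal is trickier, as was pointed out above.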

t:\Pekko


#4

One of the authors of the paper replied to me and said that they actually store 4 floats in each voxel (density and RGB). Now it is easy to have colors there. It takes a lot of memory, though.

t:\Pekko


#5

I’ve been reading through various papers, and it would be great if one of you guys could explain how the voxels end up aligned with pixels. The grids are usually in world space, aren’t they? So how can the voxels be in anything other than world space?

thanks

Rob


#6

Having the voxels aligned with pixels is actually pretty simple: think of your voxel grid as the view frustum, with each pixel in your final image corresponding to one voxel row.
- First I transform the particles into camera space.
- Then I project the x and y coordinates of each particle onto the view plane: particle.x / (particle.z / viewplanedistance), and the same for y.
Then you know the particle’s location (x and y coordinates) in the view-aligned voxel grid.
The z coordinate I leave in camera space and use it to calculate the corresponding voxel for the particle.
So in the end I have one voxel row per pixel. You don’t have to store the locations of individual voxels; you just calculate one when you need it (for light calculations, for example).
This is a very beautiful idea, because you don’t have to use a huge voxel grid size in the z direction (away from the camera). I have tested with a value of 50 and it seems to work fine.
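
To make the “just calculate a voxel when you need it” part concrete, here is a small sketch of that reconstruction (my own parameter names and assumptions, e.g. that the view plane is viewPlaneWidth wide in camera space and that I want the centre of voxel (i, j, k)):

```cpp
struct Vec3 { float x, y, z; };

// My grid / camera parameters (adjust to your own setup).
const int   gridWidth         = 640;    // voxels in x = image width in pixels
const int   gridHeight        = 480;    // voxels in y = image height in pixels
const int   gridDepth         = 50;     // voxels along the view direction (50 works for me)
const float viewPlaneDistance = 1.0f;   // distance from the eye to the view plane
const float viewPlaneWidth    = 1.0f;   // camera-space width of the view plane
const float voxelDepth        = 0.1f;   // camera-space thickness of one slab

// Camera-space centre of voxel (i, j, k). Nothing is stored per voxel; the
// position is reconstructed on demand, e.g. for light calculations.
Vec3 voxelCenterCameraSpace(int i, int j, int k)
{
    const float viewPlaneHeight = viewPlaneWidth * float(gridHeight) / float(gridWidth);

    // Point on the view plane at the centre of pixel (i, j).
    float px = ((i + 0.5f) / gridWidth  - 0.5f) * viewPlaneWidth;
    float py = ((j + 0.5f) / gridHeight - 0.5f) * viewPlaneHeight;

    // Depth of slab k, then push the view-plane point out along its view
    // ray: a point at depth z on that ray is the view-plane point scaled
    // by z / viewPlaneDistance.
    float z = (k + 0.5f) * voxelDepth;
    return Vec3{ px * z / viewPlaneDistance,
                 py * z / viewPlaneDistance,
                 z };
}
```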

t:\Pekko


#7

Can anyone describe a less mathematical way to do light scattering as a diffusion process, and how to accelerate it with a hierarchical method? I have read the papers, but the mathematics in them is a little too difficult for me at this point. So if anyone could outline (in pseudocode) the basic algorithm for light scattering in a voxel volume, it would be a great help.

