volume rendering of scientific data

I have a stack of binary images. I combined them into a 3D array in MATLAB and used the isosurface/isocaps and patch commands to extract and visualize the isosurface, but it's too slow. I want to do the same with CUDA. I am new to CUDA. I saw a project called ‘volume render’ in the CUDA SDK, which reads a raw file (‘Bucky’) and renders it. I want to know how I can stack my images into a 3D volume, extract the isosurfaces, and render them. Could the ‘volume render’ sample be modified to achieve this? Should I convert my images into raw format first? Please help.

Not sure about the first phase, but from what you describe, it sounds easy.

As for isosurface extraction, check out the SDK sample called “marching cubes” - it can be used to extract the isosurface of a given scalar function or data set.

I saw the marching cubes example in the CUDA SDK. It uses a raw file, “Bucky.raw”, as input. I tried combining all my images into a single raw file, but the results are not as expected. I have a sequence of binary images which I want to combine into a volume dataset so that I can use it with this example. Please help.
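For what it's worth, stacking decoded slices into a headerless raw blob can be sketched roughly like this (the function names are mine, and this assumes each slice has already been decoded to width*height 8-bit grayscale bytes - a real BMP has a 54-byte header, a palette for 8-bit images, and rows padded to 4-byte boundaries and stored bottom-up, so you'd strip all that first):

```cpp
// Sketch: stack per-slice 8-bit grayscale buffers into one flat raw volume,
// slice 0 first, in the x-fastest / z-slowest order the SDK samples read.
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Concatenate decoded slices (each width*height bytes) into one flat buffer.
std::vector<uint8_t> stackSlices(const std::vector<std::vector<uint8_t>>& slices) {
    std::vector<uint8_t> volume;
    for (const auto& slice : slices)
        volume.insert(volume.end(), slice.begin(), slice.end());
    return volume;
}

// Write the buffer as the kind of headerless .raw blob Bucky.raw is.
void writeRawVolume(const std::string& path, const std::vector<uint8_t>& volume) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(volume.data()),
              static_cast<std::streamsize>(volume.size()));
}
```

Usage would be something like `writeRawVolume("volume.raw", stackSlices(slices));` - no guarantees, but that's the general shape of the file the sample expects.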

Check out www.accelereyes.com

They have a CUDA toolkit for MATLAB. That MIGHT help - I have no idea if it would solve your problem, but you may want to check it out.

It seems they have two ways of using the code:

  1. using Bucky.raw

  2. using an inline function

I’m unsure what your storage format is, as binary images could basically represent anything…

So is it some kind of point cloud, or do the binary values combine into a scalar field in 3D, or…?

I could also imagine that the data is binary and you want to “connect the dots” to make the isosurface - so that would be an isosurface with iso-value 1 in a space of ones and zeros?

Notice also that the marching cubes example expects each dimension of the grid to be a power of two, so if you have other dimensions you must pad your data up to the next power-of-two boundary (with some benign value).
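The padding step might look something like this (a minimal sketch; `nextPow2` and `padToPow2` are my own names, and zero is used as the benign fill value):

```cpp
// Sketch: round each dimension up to the next power of two and zero-pad,
// since the marching cubes sample assumes power-of-two grid dimensions.
#include <cstdint>
#include <vector>

// Smallest power of two >= n (for n >= 1).
unsigned nextPow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Copy a w*h*d volume into a padded pw*ph*pd volume, filling new space with 0.
std::vector<uint8_t> padToPow2(const std::vector<uint8_t>& src,
                               unsigned w, unsigned h, unsigned d) {
    unsigned pw = nextPow2(w), ph = nextPow2(h), pd = nextPow2(d);
    std::vector<uint8_t> dst(pw * ph * pd, 0); // benign fill value
    for (unsigned z = 0; z < d; ++z)
        for (unsigned y = 0; y < h; ++y)
            for (unsigned x = 0; x < w; ++x)
                dst[(z * ph + y) * pw + x] = src[(z * h + y) * w + x];
    return dst;
}
```

E.g. a 30x30x50 volume would come out as 32x32x64, with the original data in the low corner and zeros elsewhere.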

I have modified the marching cubes example to fit my own needs (now I can basically pass it any power-of-two floating-point data and it just generates the VBOs for me; it expects GL and CUDA to have valid contexts). I would love to share that work, but I'm unsure about licensing issues - can someone tell me whether it's OK to post modified SDK samples here?

Also, in order to achieve this, I think some “getting your hands dirty” coding will be necessary.

I will tell you how my data is:

I have many grayscale BMP images which I wish to combine into a 3D volume, extract isosurfaces from, and render.

I have uploaded one such image (1.bmp) in this post.

As you can see from my image, it has many circles. It is just one section of a cuboid like volume.

Basically, I have dipped some glass spheres in a fluorescent dye, and these images are taken at different sections of that volume.

So, what you see in this single image is one such section. In the consecutive sections/images, the radii of these circles keep increasing/decreasing (each image cuts a different section through each glass sphere).

I have also uploaded a screenshot of the isosurface rendering of my volume data (done in MATLAB): file name ball_surface.

It is too slow in matlab, hence I wanted to use CUDA.

I know about AccelerEyes/Jacket, but it's commercial and I cannot publish that work.

I wanted to do it entirely in my own code.

If you are not getting a clear picture of the kind of data I have, please give me your email address and I will explain further.
ballsurface.bmp (828 KB)
1.bmp (57.9 KB)

OK, so basically your data is a scalar field. Therefore, what you seek is implementable with minor engineering effort using the marching cubes example as a starting point, or with a little more effort by implementing marching cubes from scratch, if you can't use the sample implementation due to licensing issues (I have no idea whether there are problems with the license NVIDIA grants - I'm no lawyer).

What probably went wrong in your attempt to convert your BMPs to a single raw file was either

a) The “color format” of the raw file you fed in was wrong (the sample expects 8-bit scalar values; could your data have been 16 or 32 bits per pixel?), or
b) The size of the input was “incorrect”: the sample expects a 32x32x32 volume by default (look for the global gridSizeLog2)

Fixing these should fix your problems. Also note that you would need to change your input data somehow so that its dimensions are powers of two (either by stretching the images, or by filling the empty space with zeros).
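If the pixel depth turns out to be the culprit, down-converting is trivial - here's a minimal sketch assuming your samples are 16-bit (the name `to8bit` is mine):

```cpp
// Sketch: down-convert 16-bit samples to the 8-bit scalars the SDK
// volume samples expect, by keeping only the high byte of each sample.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> to8bit(const std::vector<uint16_t>& src) {
    std::vector<uint8_t> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = static_cast<uint8_t>(src[i] >> 8); // drop the low byte
    return dst;
}
```

Dropping the low byte preserves relative intensities, which is all an iso-value threshold cares about; if your 16-bit data doesn't use the full range you'd want to rescale instead.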