I have a project that needs to visualize several billion particles. The simulation and generation of them can all be done in CUDA.
But I want to RASTERIZE those particles… they’ll end up as disks (not points), each with a 3D position and a depth as viewed from a camera.
What’s the best strategy for this kind of visualization?
I assume that using the GPU’s own rasterizer would be most efficient, so the plan would be: generate lists of particles in CUDA, share those buffers with an OpenGL context, rasterize them in OpenGL, then go back to CUDA for the next load of particles.
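For concreteness, here’s my rough understanding of the one-time setup for that handoff, pieced together from the CUDA–OpenGL interop docs. This is an untested sketch; `BATCH` and the float4 (xyz position + radius) layout are placeholders I made up:

```cpp
#include <GL/glew.h>            // or whatever GL loader you use
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

static const size_t BATCH = 1 << 24;   // particles per batch: a made-up number
static GLuint vbo;
static cudaGraphicsResource* vboRes;

// One-time setup: create a GL vertex buffer big enough for one batch
// (xyz position + radius packed into a float4) and register it with CUDA,
// so a kernel can write particles straight into GL-owned memory.
void initInterop()
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, BATCH * sizeof(float4), nullptr, GL_DYNAMIC_DRAW);
    cudaGraphicsGLRegisterBuffer(&vboRes, vbo, cudaGraphicsRegisterFlagsWriteDiscard);
}
```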
This is roughly the strategy used by the Fluids and n-body SDK examples, although those deal with mere millions of particles and can do them all in one batch.
But would it be effective to do billions of particles this way? I’d probably have to flip-flop back and forth between OpenGL and CUDA, processing the particles in batches. Would that cause any problems?
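Concretely, I’m picturing a per-batch loop like the one below (continuing the sketch above). `generateBatch` is a stand-in for my simulation kernel, and the point-sprite shaders that turn points into depth-tested disks aren’t shown:

```cpp
// Hypothetical simulation kernel: fills dptr with particles [first, first+n).
__global__ void generateBatch(float4* dptr, size_t first, size_t n);

// Per batch: map the VBO into CUDA, fill it, unmap (handing it back to GL),
// then draw it as points. The map/unmap pair is the flip-flop in question;
// it synchronizes the two APIs once per batch.
void renderAllParticles(size_t totalParticles)
{
    for (size_t done = 0; done < totalParticles; done += BATCH) {
        size_t n = (totalParticles - done < BATCH) ? totalParticles - done : BATCH;

        cudaGraphicsMapResources(1, &vboRes);
        float4* dptr = nullptr;
        size_t bytes = 0;
        cudaGraphicsResourceGetMappedPointer((void**)&dptr, &bytes, vboRes);

        generateBatch<<<(unsigned)((n + 255) / 256), 256>>>(dptr, done, n);

        cudaGraphicsUnmapResources(1, &vboRes);   // GL may now read the buffer

        // Point-sprite shaders (not shown) expand each point into a
        // depth-tested disk in screen space.
        glDrawArrays(GL_POINTS, 0, (GLsizei)n);
    }
    glFinish();   // make sure everything has landed in the framebuffer
}
```

Is a map/unmap round trip per batch like this reasonable, or is the per-iteration synchronization going to dominate at this scale?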
Is there any problem doing this on a card NOT being used for display, especially a Tesla, or maybe a “spare” second GPU in a machine?
I don’t expect any of this to be real time, since we’re really talking about brute-force rendering of 50B+ 3D spheres, but that’s fine; it’s not meant to be interactive. (Yes, I also realize that with that many particles there’s going to be intense occlusion and so on. That’s OK: I can work on optimization and culling later, but I want a brute-force solution first as a baseline.)
Are there other approaches to rasterization, especially methods besides OpenGL/Direct3D? The other obvious idea is a software rasterizer in CUDA itself, but that’s likely not nearly as efficient as using the dedicated rasterization hardware.
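For completeness, here’s the kind of thing I’d imagine for the pure-CUDA route: a 64-bit atomicMin per pixel, with depth packed into the high bits so the nearest fragment wins. This is just a sketch I put together, with a bare pinhole projection, a placeholder color, and only the particle’s center pixel splatted rather than the full disk footprint; I gather 64-bit atomicMin needs compute capability 3.5 or newer:

```cpp
#include <cuda_runtime.h>

// Pinhole projection of an eye-space particle (camera at origin, looking
// down -z, focal length f in pixels). Returns false if off-screen.
__device__ bool project(float4 p, int W, int H, float f,
                        int* sx, int* sy, float* depth)
{
    if (p.z >= -1e-3f) return false;              // behind the camera
    *sx = (int)(f * p.x / -p.z + 0.5f * W);
    *sy = (int)(f * p.y / -p.z + 0.5f * H);
    *depth = -p.z;                                // eye-space distance
    return *sx >= 0 && *sx < W && *sy >= 0 && *sy < H;
}

// One thread per particle: pack (depth, color) into 64 bits and atomicMin
// into the framebuffer, so the smallest depth (the nearest fragment)
// survives. Expanding the splat to the projected disk footprint would just
// loop this over the covered pixels.
__global__ void splatParticles(const float4* particles, size_t n,
                               unsigned long long* framebuf,
                               int W, int H, float f)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i >= n) return;

    int sx, sy; float depth;
    if (!project(particles[i], W, H, f, &sx, &sy, &depth)) return;

    // The bit pattern of a non-negative float orders the same as the float,
    // so it can go straight into the high 32 bits of the atomic word.
    unsigned long long frag =
        ((unsigned long long)__float_as_uint(depth) << 32)
        | (unsigned int)(i & 0xFFFFFFu);          // placeholder color
    atomicMin(&framebuf[(size_t)sy * W + sx], frag);
}
```

The framebuffer would be cleared to ~0ULL (maximum depth) before the pass, and the low 32 bits read back as color afterwards. I have no feel for how this would compare to the hardware path, which is partly why I’m asking.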
Thanks for any advice and pointers. I’m more of a science guy than an OpenGL hacker, so rasterization is all new to me.