CUDA C++ and/or OpenGL for fast CubeMap Textures computations

Hi, I have a triangulated model with (let’s say) 15 million faces. I want to assign a distinct ID (or maybe color) to each triangle (i.e. 15 million IDs), and after that I want to generate a few million cubemap textures from regular points within this model. Then, for each cubemap texture, I want to scan which distinct colors exist and keep a global (shared) accumulating counter across all cubemap textures. In other words, I want to count how many times each distinct ID (i.e. triangle) appears across all cubemaps. For example, the triangle with ID 412345 (or color RGB 240,102,248) is visible in half of the cubemap textures.
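Just to make the bookkeeping concrete, here is a rough CPU sketch in plain Python (all names and the "ID buffer" layout are made up for illustration, not any particular API): pack each triangle ID into a 24-bit RGB color, then, per cubemap, collect the set of distinct IDs and bump a shared counter. On the GPU the accumulation step would typically be a parallel histogram, but the logic is the same:

```python
from collections import Counter

def id_to_rgb(tid):
    # pack a triangle ID into a 24-bit RGB triple (room for ~16.7M IDs)
    return (tid & 0xFF, (tid >> 8) & 0xFF, (tid >> 16) & 0xFF)

def rgb_to_id(r, g, b):
    # inverse of id_to_rgb
    return r | (g << 8) | (b << 16)

def visible_ids(cubemap_faces):
    # each face is a 2D list of triangle IDs (a hypothetical ID buffer);
    # the set of distinct IDs is what "appears" in this cubemap
    return {tid for face in cubemap_faces for row in face for tid in row}

# toy run: 4 triangles, two tiny one-face "cubemaps"
counts = Counter()
for cm in ([[[0, 1], [1, 2]]], [[[2, 2], [3, 2]]]):
    counts.update(visible_ids(cm))  # each ID counted once per cubemap
print(sorted(counts.items()))  # [(0, 1), (1, 1), (2, 2), (3, 1)]
```

Here triangle 2 appears in both cubemaps, so it ends up the "most visible"; at your scale the `Counter` would become one array of 15M integers updated atomically.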

With that process I will be able to see which triangle is the most/least visible. I do not care about visualizing anything; I only want the counts, and for that I am searching for the fastest possible approach.

Can I achieve this, fast, by using only CUDA C++ parallelism? Or am I going to need other specific tools/frameworks/libraries for that (e.g. OpenGL)? What would the fastest approach be? By the way, I have no experience with either CUDA C++ or OpenGL; I can only speak Java, Python, R and Julia.

Thank you for your time.

This sounds overly complicated, and given the amount of memory you’ll be using and the dependent lookups into these cube maps, I doubt you will get good performance out of it.

Can you explain your end goal a little more clearly? If you aren’t visualizing anything, then what does it mean for a triangle to be the “most visible”? The way I am interpreting your question is probably incorrect, but in order for triangle X to be more visible than triangle Y, that means that there is some viewpoint V from which X is “in front of” Y.

For what it’s worth, if this is actually what you are doing, you don’t really even need CUDA for this task. Assuming the mesh you have fits in GPU memory, you might want to look at Z-buffering:

This of course would mean you need to get your scene set up with OpenGL, have a viewpoint, etc., in which case you may as well visualize. However, you don’t need OpenGL to do Z-buffering; you can implement it on your own with CUDA. But the driver developers behind your OpenGL implementation likely have some crazy good implementations, since Z-buffering is a “staple” in deferred rendering with OpenGL :)
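To make the Z-buffering idea concrete, here is a toy CPU version in Python (the fragment list stands in for what a rasterizer would emit; all names are illustrative): per pixel, keep the ID of the fragment with the smallest depth, which is exactly the "X is in front of Y" test described above:

```python
def id_buffer(width, height, fragments):
    # fragments: (x, y, depth, triangle_id) samples, as a rasterizer
    # would emit them; keep the nearest triangle ID per pixel
    depth = [[float("inf")] * width for _ in range(height)]
    ids = [[-1] * width for _ in range(height)]
    for x, y, z, tid in fragments:
        if z < depth[y][x]:  # the classic depth test
            depth[y][x] = z
            ids[y][x] = tid
    return ids

# two triangles overlap at pixel (0, 0); the nearer one (ID 3) wins
frags = [(0, 0, 2.0, 7), (0, 0, 1.0, 3), (1, 0, 0.5, 7)]
print(id_buffer(2, 1, frags))  # [[3, 7]]
```

With OpenGL you would instead render triangle IDs as colors into an offscreen framebuffer and read the buffer back; the depth test happens in hardware.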

Sorry if I’m misunderstanding your question!

My voxel model is 3-dimensional. However, let’s simplify the case study into a square 2-dimensional model, to help you understand what I am trying to calculate.

If you visit the following link you will see a complete (towards all directions in 2D space) raycasting scan from a red point.

Now, let’s assign a distinct color to each square of this 2D model and then convert the entire (360°) scan into a single (cubemap) texture. Within this image, all visible colors indicate that their corresponding squares are visible from the red point. Conversely, all colors that do not appear in this image (although they were assigned somewhere) belong to squares that are not visible from the red point. Now, let’s store which squares are visible from the red point.

Next step, generate more red points (at other places) and repeat the procedure. In the end, I want to calculate which square is in total (among all textures) more “visible” (i.e. being shown the most times) and of course this will also show me the opposite (i.e. which square is the most hidden in total).

This is it. In my real case, I have a 3D voxel model with about 3 million voxels and the “red” points are about 200k.
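For what it’s worth, the 2D version of that procedure can be sketched in a few lines of Python (a crude fixed-step ray march standing in for the cubemap render; the function name and parameters are illustrative): cast rays in all directions from a point and record the first solid cell each ray hits:

```python
import math

def visible_cells(grid, px, py, n_rays=720, step=0.1):
    # march rays from (px, py) through a 2D occupancy grid;
    # the first solid cell each ray hits counts as visible
    h, w = len(grid), len(grid[0])
    seen = set()
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        dx, dy = math.cos(a), math.sin(a)
        x, y = px, py
        for _ in range(int(20 * max(w, h))):
            x += dx * step
            y += dy * step
            cx, cy = math.floor(x), math.floor(y)
            if not (0 <= cx < w and 0 <= cy < h):
                break  # ray left the grid
            if grid[cy][cx]:
                seen.add((cx, cy))  # solid cell: visible from (px, py)
                break
    return seen

# toy grid with a single solid square at (2, 1)
grid = [[0, 0, 0],
        [0, 0, 1],
        [0, 0, 0]]
print(sorted(visible_cells(grid, 0.5, 0.5)))  # [(2, 1)]
```

Repeating this for each “red point” and merging the returned sets into one counter gives the per-square visibility totals; the 3D case swaps the ray march for a cubemap (or DDA voxel traversal) but the accumulation is identical.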

What approach would you suggest for such a calculation? Maybe the idea of using cubemap textures is overkill and something smarter could be done here.

PS. I am working on a laptop: GTX 980M (4 GB) / 32 GB RAM / i7 / M.2 SSD

Unfortunately, I don’t understand; hopefully somebody else does. If you just do ray casting through the voxel grid to form a cube map with, say, only “forward”, “backward”, “leftward”, “rightward”, “upward”, and “downward” directions, I think you’re going to end up with really distorted results.



If x1 is your first viewpoint, you won’t have any information about where the little crevice is (pointed to by the v), but viewpoint x2 will. This is really bad ASCII art, but you would have to really densely sample the entire space in order to get accurate results?

as I said, I don’t understand, so I’m going to disgracefully bow out. sorry for adding noise :/