Need to cast lots of rays and determine what they hit in a scene. Can CUDA help?

I’m back at the drawing board with this and was wondering if a GPU-based approach might help.

Basically I have a position in a scene and, looking in one direction, I need to figure out whether I hit a bounding box, the sky, nothing, or any object in that scene, and do that hundreds of thousands of times, as fast as possible.
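For anyone landing here later: the per-ray query being described is essentially a ray-vs-AABB intersection, which is usually done with the classic “slab” test. A minimal CPU sketch of that test is below; on the GPU, each CUDA thread would run the same logic for one ray. The `Ray` and `Aabb` names are just illustrative, not from any particular API.

```cpp
#include <limits>
#include <utility>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };      // dir need not be normalized
struct Aabb { Vec3 min, max; };

// Slab test: intersect the ray with each pair of axis-aligned planes
// and check that the three parameter intervals overlap.
bool rayHitsAabb(const Ray& r, const Aabb& b) {
    float tmin = 0.0f;
    float tmax = std::numeric_limits<float>::max();
    const float o[3]  = { r.origin.x, r.origin.y, r.origin.z };
    const float d[3]  = { r.dir.x,    r.dir.y,    r.dir.z    };
    const float lo[3] = { b.min.x, b.min.y, b.min.z };
    const float hi[3] = { b.max.x, b.max.y, b.max.z };
    for (int i = 0; i < 3; ++i) {
        const float inv = 1.0f / d[i];  // IEEE infinity when d[i] == 0
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = t0 > tmin ? t0 : tmin;
        tmax = t1 < tmax ? t1 : tmax;
        if (tmax < tmin) return false;  // slab intervals do not overlap
    }
    return true;
}
```

Running hundreds of thousands of these in parallel is exactly the embarrassingly parallel workload GPUs are good at, which is why the RT-core route discussed below pays off.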

I’m doing this in Unity, which has a raycasting component that does this quite fast. However, our app would greatly benefit if we could turn this up a few notches.

Now, in my mind the gpu has all the data we need.

So is there a way to find that out via CUDA and query the scene directly?

I searched the web but it’s one of those things where I’m not sure what exactly to look for.

Any input would be great!

Oh, and yes, this is close to pixel counting. But I don’t want to wait for the end of the frame.

RTX GPUs have ray-tracing hardware built in. It is not directly accessible from CUDA; however, you can access it from OptiX or one of the graphics APIs (DirectX, OpenGL, Vulkan).

Here’s a recent discussion:

https://devtalk.nvidia.com/default/topic/1047673/is-there-any-performance-difference-implementing-a-ray-tracer-in-cuda-vs-rendering-pipelines-/

Thank you! There is really a lot of valuable info on there. I’m a single dev doing this for a client.

What would be a reasonably priced card to get for that? Or will any RTX do?

For example, an RTX 2080 would do, I assume?

Yes, the RTX 2080 has ray-tracing hardware built in.

Excellent, thanks. Out shopping then :)

Hi there, dished out the money for the 2080 and am starting to read the OptiX examples.

Is there an easier way to access OptiX Prime from Unity than going through a custom C++ layer?
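In case it helps others: the custom C++ layer is usually a thin native plugin, since Unity can call into a shared library through P/Invoke. A hypothetical sketch of the C++ side is below; the struct and function names are made up for illustration, and the body is a stub that marks every ray as a miss. In a real plugin, the `CastRays` body would hand the buffers to an OptiX Prime query (`rtpQuerySetRays`, `rtpQueryExecute`, etc.) instead.

```cpp
#include <cstddef>

extern "C" {

// Plain-data layouts so C# can marshal the buffers directly.
struct RayIn  { float ox, oy, oz, dx, dy, dz; };
struct HitOut { int primitiveId; float t; };   // primitiveId == -1 means no hit

// Stub: a real implementation would forward these buffers to OptiX Prime.
void CastRays(const RayIn* rays, HitOut* hits, std::size_t count) {
    (void)rays;  // unused in the stub
    for (std::size_t i = 0; i < count; ++i) {
        hits[i].primitiveId = -1;
        hits[i].t = 0.0f;
    }
}

} // extern "C"
```

On the Unity side this would be declared with something like `[DllImport("RayPlugin")] static extern void CastRays(...)` and called with pinned or native arrays; batching all rays into one call keeps the managed/native transition overhead down.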

There is a forum for OptiX-specific questions on these forums, under Professional Graphics > Advanced Graphics. You may want to ask your question there. I can move this thread there if that is what you want.

That would be great, yes. Sorry if this was wrong here.