RT Cores

Hello, I just wondered… Is there any existing documentation about accessing the newly announced Ray-Tracing cores through CUDA code? Or will that even be possible? (I kind of failed to find anything)

One single search gave me this:
https://devblogs.nvidia.com/introduction-nvidia-rtx-directx-ray-tracing/
https://devblogs.nvidia.com/ray-tracing-games-nvidia-rtx/

Thanks; I might find this to be useful.

However, I was searching for more of a GL-less approach (without OpenGL/DirectX), either directly from a CUDA kernel or from host code, similar to what we have for tensor core operations.
(The main purpose was/is to make my path tracer faster, and it does not contain any graphics APIs.)

I am also waiting for more information on it, because I would be interested to see whether the API can be adapted for mechanical wave propagation, where the result is not an image but an amplitude response.
So it seems we have to get some popcorn, hold on and see if Nvidia showcases a bit more of it.

So, it looks like we’re waiting for CUDA 10 then.

CUDA early access might be a way to get a hold of it early. I talked about it in another thread.

https://devtalk.nvidia.com/default/topic/1038619/api-for-bvh-traversal-on-turing-gpus/?offset=2#5277475

Cbuchner1, I couldn’t find anything that describes the ray-tracing features of the new hardware. Do you have any insider information?
I hijacked TheDonsky’s thread as we are interested in different objectives (image rendering vs wave propagation).
Sorry, TheDonsky. :-)

saulocpp, be my guest :-)

Yes, I too would be interested in having a lower-level, compute-centric API for the new hardware-accelerated ray-tracing and BVH traversal capabilities included in Turing, similar to the paradigm used for leveraging tensor core support in Volta. There are definitely a lot of uses for path tracing and discretized-space traversal algorithms outside of graphics rendering applications.

I had exactly the same question: what does this new ray-tracing hardware mean for GPGPU applications?

In particular, my GPU Monte Carlo simulation code for particle tracking is essentially a ray tracer, except that it not only traces the ray but also does a lot of data processing along the way (scattering, attenuation, etc.). I currently do everything in a CUDA kernel (ray tracing, data processing, and storage), but I would like to know whether this newly released hardware can beat my heavily tuned ray tracer (specifically, the ray-voxel intersection), which is as short as 30 lines of C code:

https://github.com/fangq/mcx/blob/master/src/mcx_core.cu#L250-L301

If so, I would be very interested in finding out whether there is a CUDA interface that would let me replace my ray tracer with the new hardware, or another framework in which I could re-code this algorithm. My impression is that ray tracers built for graphics rendering pipelines are not flexible enough to accommodate GPGPU computation, but my impression could be wrong here.

So, mmm… I read this https://medium.com/@sulej.robert/the-moon-made-twice-at-home-a2cb73b3f1e8
and I suppose this is a non-standard use of RT cores, to say the least. :)