RTX in Python

Is there a way to use hardware ray tracing to render an interactive scene in real time on Linux in Python?
I want to avoid C/C++ as much as possible because of the added complexity of a compiler setup and the lack of flexibility compared with Python.
I don’t need to render a scene at 1000 FPS, and the easy-to-use environment of Python makes it quick to test and refine.
I already asked ChatGPT for solutions, but with no success.

Hi Jeremy,

The only way to use RTX is through a graphics API that supports it, for example Vulkan on Linux.
To leverage that with Python, you should look for Python wrappers for Vulkan.
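If it helps, here is a minimal sketch of what the first detection step could look like with the `vulkan` package from PyPI (a CFFI binding that mirrors the C API). Treat it as a sketch rather than tested code; the package choice and call pattern are assumptions on my side:

```python
# Sketch: list GPUs and check whether the Vulkan ray tracing device
# extensions are exposed. Assumes the 'vulkan' PyPI package (CFFI wrapper).
import vulkan as vk

app_info = vk.VkApplicationInfo(
    sType=vk.VK_STRUCTURE_TYPE_APPLICATION_INFO,
    pApplicationName="rt-check",
    applicationVersion=vk.VK_MAKE_VERSION(1, 0, 0),
    pEngineName="none",
    engineVersion=vk.VK_MAKE_VERSION(1, 0, 0),
    apiVersion=vk.VK_MAKE_VERSION(1, 2, 0))

instance = vk.vkCreateInstance(vk.VkInstanceCreateInfo(
    sType=vk.VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
    pApplicationInfo=app_info), None)

for device in vk.vkEnumeratePhysicalDevices(instance):
    name = vk.vkGetPhysicalDeviceProperties(device).deviceName
    extensions = {e.extensionName for e in
                  vk.vkEnumerateDeviceExtensionProperties(device, None)}
    has_rt = ("VK_KHR_acceleration_structure" in extensions and
              "VK_KHR_ray_tracing_pipeline" in extensions)
    print(name, "-> ray tracing extensions available:", has_rt)

vk.vkDestroyInstance(instance, None)
```

The actual setup of acceleration structures, the shader binding table, and the ray tracing pipeline then follows the same steps as in the C/C++ tutorials, just going through the wrapper.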

I hope that helps.

Thanks!

And how do I set up hardware ray tracing in Vulkan?

Check the link I shared before: https://nvpro-samples.github.io/vk_raytracing_tutorial_KHR/

That tutorial covers the ray tracing extension in Vulkan, which will implicitly use hardware acceleration through the driver if it is supported.

Thanks.

There is actually a Python wrapper for the OptiX 7 and 8 API inside the OptiX Toolkit repository on GitHub:
https://github.com/NVIDIA/optix-toolkit

(Beware of CUDA memory alignment restrictions which must be fulfilled:
https://forums.developer.nvidia.com/t/illegal-memory-access-when-adding-parameters-in-pyoptix/286806 )
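To illustrate what that alignment pitfall looks like in practice, here is a small sketch of packing a launch-parameter block on the host with NumPy, using explicit offsets so the float4 members stay 16-byte aligned. The struct layout is purely hypothetical, not taken from the toolkit:

```python
import numpy as np

# Hypothetical launch-parameter struct on the CUDA side:
#   struct Params { float4 cam_eye; float4 cam_u; uint2 image_size; CUdeviceptr image; };
# float4 members need 16-byte alignment, uint2 needs 8, a 64-bit pointer needs 8.
params_dtype = np.dtype({
    "names":    ["cam_eye", "cam_u", "image_size", "image"],
    "formats":  ["4f4", "4f4", "2u4", "u8"],
    "offsets":  [0, 16, 32, 40],   # explicit offsets give explicit control over padding
    "itemsize": 48,                # must match sizeof(Params) on the device
})

params = np.zeros(1, dtype=params_dtype)
params["cam_eye"]    = (0.0, 0.0, -3.0, 1.0)
params["image_size"] = (1920, 1080)
# params["image"] = int(d_output_buffer)  # CUdeviceptr of the output buffer
# params.tobytes() is then what gets copied into the device-side launch parameters.
```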

Does it allow for real-time ray tracing that can be interactive, and how could I implement such a ray tracer with OptiX in Python? Is 8 GB of VRAM good enough for 1080p?

The ray tracing happens on the GPU, so the performance depends much more on which GPU you’re using than on the language from which you’re calling the OptiX API functions.
The newer and more high-end your GPU, the faster the hardware ray tracing.

I cannot say what overhead Python has versus native C++ applications for that use case. All my OptiX applications are written in C++.

The memory for the output buffer is less of an issue for raytracing.
Let’s say you’re implementing a progressive renderer which accumulates results into a float4 buffer. That alone would only take 1920 * 1080 * 16 bytes ≈ 32 MB.
Where the VRAM limits your ray tracing capabilities is the scene size inside the acceleration structures (AS) and vertex attribute data, and how big your material textures are.
A very coarse rule of thumb is that about 12.5 MTriangles fit into 1 GB with AS compaction on RTX boards.
You need more memory during the initial AS build.
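Just to make those numbers concrete, here is a back-of-the-envelope budget in Python using the figures above. The 1 GB headroom reserved for buffers, textures, and runtime overhead is only my own assumption:

```python
# Back-of-the-envelope VRAM budget for a 1080p progressive ray tracer.
width, height = 1920, 1080
accum_bytes = width * height * 16                    # float4 accumulation buffer
print(f"accumulation buffer: {accum_bytes / 2**20:.1f} MiB")   # ~31.6 MiB

vram_gb     = 8
headroom_gb = 1                                      # assumed reserve for buffers, textures, overhead
tris_per_gb = 12.5e6                                 # coarse rule of thumb with AS compaction
print(f"rough triangle budget: {(vram_gb - headroom_gb) * tris_per_gb / 1e6:.0f} MTris")  # ~88 MTris
```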

The overall ray tracing performance itself depends on the number of rays shot, so the rendering resolution affects the performance, as does what happens inside the different program domains. (Think of hit programs as the equivalent of fragment shaders inside a rasterizer.) The rendering resolution also affects rasterizers, but their performance is in turn more susceptible to the number of primitives rendered.

Ray tracing is very good at instancing the same model geometry multiple times, and updating the instance acceleration structures for affine (rigid-body) animations is also very fast.

Depending on what you’re doing, it’s possible to implement real-time ray tracing with all three ray tracing APIs (OptiX, Vulkan RT, DXR), but since OptiX is based on CUDA, the final display will always need to be done with some graphics API anyway, and CUDA supports interoperability with all three graphics APIs (OpenGL, Vulkan, Direct3D) to transfer the ray-traced image into a texture for final display.
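As a rough idea of what that final display step could look like from Python, here is a sketch using PyCUDA’s OpenGL interop to copy the ray-traced frame (already in CUDA device memory) into a GL pixel buffer object. Window, texture, and PBO creation (e.g. with GLFW/PyOpenGL) are omitted, and the helper name is mine:

```python
# Sketch: hand a CUDA-resident frame to OpenGL for display via a pixel buffer object.
# Assumes a GL context is already current and `pbo` is the GLuint of a PBO
# sized width * height * 4 bytes (RGBA8).
import pycuda.driver as cuda
import pycuda.gl as cuda_gl
import pycuda.gl.autoinit  # creates a CUDA context that shares the current GL context

registered_pbo = cuda_gl.RegisteredBuffer(int(pbo))

def blit_frame(d_image_ptr, nbytes):
    """Copy the ray-traced frame from CUDA device memory into the GL PBO."""
    mapping = registered_pbo.map()
    dst_ptr, dst_size = mapping.device_ptr_and_size()
    cuda.memcpy_dtod(dst_ptr, d_image_ptr, min(nbytes, dst_size))
    mapping.unmap()
    # Afterwards bind the PBO, update the display texture with glTexSubImage2D,
    # and draw a full-screen quad as usual.
```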

Have a look at this thread which explains some of the differences between OptiX and Vulkan RT.

For more information, please refer to the OptiX Programming Guide and the OptiX developer forum which has more explanations on specific details.