Dear NVIDIA Jetson developer team,
Following David's suggestion in this post (OptiX 7.3 on Jetson AGX Xavier - #6 by dhart), I would like to express my interest in running OptiX on Jetson devices.
First, thank you for the great work. We enjoy using your modules with our robots, especially for deep learning inference. Recent improvements to your Orin modules have even allowed us to use them as standalone systems on both ground and aerial robots.
Our robots use 3D scenes (triangle meshes) as maps of the environment, both indoors and outdoors. We use MICP-L (GitHub - uos/rmcl: Software Tools for Mobile Robot Localization in 3D Meshes.) to localize the robot in 6D in real time within those maps; internally, it uses OptiX to perform ray intersection tests against the map. On our ground robots we can run this software by equipping them with a NUC mini-PC and a mobile RTX GPU for hardware-accelerated ray tracing.
Our drone, however, cannot carry the NUC mini-PC: it is simply too heavy and consumes too much energy. So we considered using Jetson devices instead. However, the OptiX shared library is missing from the driver for Jetson devices. We therefore switched to Intel Embree, which computes the necessary operations efficiently on the ARM processor of the Jetson devices. It works, but it somehow feels wrong to compute millions of ray intersections on the ARM CPU while an even more powerful GPU sits idle on board. And even if that were the right approach, we would still like to be able to distribute computations across all available computing devices as flexibly as possible.
With this small story, I hope it is a bit clearer why we would be very happy to see OptiX integrated into JetPack. At the same time, I'm completely unsure how much effort that would actually take; I only know that it is possible to run OptiX applications on NVIDIA GPUs without RT-core hardware acceleration as well.