Hello, I would like some advice. I run a large number of ROS nodes on my Orin, and for easier development I have placed these nodes inside Docker. However, some of the inference nodes I develop require CUDA dependencies, so they must be compiled in a Docker environment that contains CUDA. How can I best integrate the inference nodes into my system? If I use the CUDA-enabled Docker images provided by NVIDIA, does my Orin still need JetPack installed natively? Isn't it wasteful to have JetPack both inside and outside the Docker container? How is this situation handled in industry?
Moving to Jetson Orin Forum for better support.
Please see the link below for some available containers.
For JetPack 5, the JetPack components are installed directly within the container, so you no longer need to install them natively on the Jetson.
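As a minimal sketch of that workflow: you can pull an `l4t-jetpack` image from NGC and run it with the NVIDIA container runtime so CUDA is available inside for compiling the inference nodes. The tag `r35.4.1` and the workspace path `~/ros_ws` below are example values, not from this thread; match the tag to your installed L4T/JetPack release.

```shell
# Pull a JetPack 5 base image that bundles CUDA, cuDNN, and TensorRT
# (tag r35.4.1 is an example; pick the one matching your L4T release).
docker pull nvcr.io/nvidia/l4t-jetpack:r35.4.1

# Run with the NVIDIA container runtime so the GPU is exposed inside,
# mounting a ROS workspace (hypothetical path) to build the inference nodes.
docker run -it --rm \
  --runtime nvidia \
  --network host \
  -v ~/ros_ws:/ros_ws \
  nvcr.io/nvidia/l4t-jetpack:r35.4.1 \
  /bin/bash
```

With this approach the host only needs the L4T BSP and the NVIDIA container runtime; the CUDA toolchain lives in the image, avoiding the duplicate JetPack installation you asked about.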
Thank you for your suggestion. I'll take some time to study it.