How to enable CUDA in a Docker container under JetPack 6.0?

I upgraded to JetPack 6.0 on the Jetson AGX Orin and built a Dockerfile based on the dustynv/ros:noetic-ros-base-l4t-r35.4.1 image.

Unfortunately, when I opened Python to check whether CUDA-enabled torch was working, it wasn't functioning as expected. How can I use GPU resources?

Here's a checklist of what I did:

  1. Attached to the container
  2. Opened Python
  3. Imported torch and checked torch.cuda.is_available(); the result is False
  4. Ran pip list | grep torch in the terminal:
torch                        2.0.0+nv23.5
torchvision                  0.15.1a0+42759b1
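The checklist above can be sketched as shell commands. This is a minimal sketch; the container name ros_container is a placeholder, so adjust it to your setup:

```shell
# Attach to the running container (container name is a placeholder)
docker exec -it ros_container bash

# Inside the container: check whether torch can see the GPU
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# Confirm which torch/torchvision wheels are installed
pip list | grep torch
```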

This setup worked well with JetPack 5.X,
and I used the docker run option --runtime nvidia.
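For reference, a minimal sketch of the run command as described, using the image tag from my Dockerfile and the --runtime nvidia option (the interactive bash entrypoint is just for testing):

```shell
# Start the container with the NVIDIA runtime so the GPU is exposed
docker run -it --rm --runtime nvidia \
  dustynv/ros:noetic-ros-base-l4t-r35.4.1 \
  bash
```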


Unfortunately, there are some dependencies between the CUDA driver and the SDK.
You will need to use the same OS (L4T) version for both the image and the Jetson environment.

For JetPack 6, which is r36.2, please build the container on top of a dustynv/ros:xxx-l4t-r36.2.0 tag.
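As a sketch, assuming the same ROS flavor as the original image, the rebuild could look like the following. The exact tag noetic-ros-base-l4t-r36.2.0 is an assumption based on dusty-nv's tag naming scheme; check which tags are actually published for r36.2 before pulling:

```shell
# Pull an r36.2-based image matching the Jetson's L4T version (tag is an assumption)
docker pull dustynv/ros:noetic-ros-base-l4t-r36.2.0

# Rebuild your Dockerfile on top of it, e.g. with:
#   FROM dustynv/ros:noetic-ros-base-l4t-r36.2.0
docker build -t my-ros-app .
```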


Thank you for the reply.

I am utilizing ROS1 Noetic alongside Torch with CUDA on Python 3.8.
Considering this setup, what would be the advisable course of action to ensure optimal performance and compatibility?
Should I downgrade to JetPack 5.X, or would upgrading the CUDA version be a more appropriate solution?


The default Python on Ubuntu 22.04 (JetPack 6) is 3.10.
If you want to use Python 3.8, JetPack 5 should provide better compatibility.
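A quick way to confirm which default Python a given base image ships is to check inside the container (the exact minor version printed depends on the image):

```shell
# Inside the container: report the default Python version
python3 --version
```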

