Docker with GPU support on Jetson Orin NX

I have been trying to set up Docker with CUDA support for my application. I have tested various methods, and the NVIDIA CUDA image works. I built a container and installed PyTorch in it. The CUDA check (torch.cuda.is_available()) returns True, but the container does not allocate any GPU memory, which is causing a problem.

The device has 16 GB of total memory, but inside the Docker container the allocated GPU memory stays at 0 GB.
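For reference, this is roughly the kind of check I am running inside the container (a minimal sketch, assuming only PyTorch; the exact script is not shown here):

```python
import torch

# Basic visibility check inside the container
print("CUDA available:", torch.cuda.is_available())  # returns True
print("Total memory (GB):",
      torch.cuda.get_device_properties(0).total_memory / 1024**3)

# Place a tensor on the GPU and see how much memory PyTorch reports
x = torch.ones((1024, 1024), device="cuda")
print("Allocated (MB):", torch.cuda.memory_allocated(0) / 1024**2)
```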

I need assistance in resolving this issue.

Goal:

I want to build a Docker container with CUDA support on Jetson Orin NX, ensuring that it runs without any issues.

Hello,

Thanks for visiting the NVIDIA Developer forums! Your topic will be best served in the Jetson category.

I have moved this post for better visibility.

Cheers,
Tom

Hi,

When running Docker with the NVIDIA runtime, you should have GPU access.
Please make sure the installed PyTorch package was built with CUDA support enabled.
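For example, after starting the container with the NVIDIA runtime (e.g. `docker run -it --rm --runtime nvidia <image>`), a quick way to check the PyTorch build is a sketch like the following (assuming PyTorch is installed in the container):

```python
import torch

# torch.version.cuda is None when PyTorch was built without CUDA support
print("CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:", props.name)
    print("Total memory (GB):", props.total_memory / 1024**3)
```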

Alternatively, you can find our prebuilt containers at the link below:
The images with the igpu tag work on Jetson.

Thanks.