I have been trying to set up Docker with CUDA support for my application. After testing several approaches, the NVIDIA CUDA base image works: I built a container from it and installed PyTorch inside. The CUDA check `torch.cuda.is_available()` returns `True`, but the container does not appear to allocate any GPU memory, which is causing a problem.
The device has 16 GB of total memory, but inside the Docker container the reported allocated GPU memory stays at 0 GB.
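For reference, the check I am running inside the container looks roughly like this (a minimal sketch; if I understand correctly, `torch.cuda.memory_allocated()` only counts memory held by PyTorch tensors, so it reads 0 until a tensor is actually placed on the device):

```python
import torch

print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Total memory:       {props.total_memory / 1024**3:.1f} GB")
    print(f"Allocated (before): {torch.cuda.memory_allocated(0) / 1024**3:.3f} GB")

    # Placing a tensor on the GPU should make the allocated counter move
    x = torch.empty(1024, 1024, device="cuda")
    print(f"Allocated (after):  {torch.cuda.memory_allocated(0) / 1024**3:.3f} GB")
```

This is what leads me to think the container itself is not handing the GPU through correctly, rather than PyTorch being broken.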
I need assistance in resolving this issue.
Goal:
I want to build a Docker container with CUDA support on a Jetson Orin NX so that PyTorch inside the container can actually use the GPU.