GPU access inside Docker on Jetson Xavier

Hi,
Since I have some specific requirements, I am building my own Docker image for OpenCV 4.x and PyTorch on Jetson Xavier. I have tried two base images: nvcr.io/nvidia/l4t-base:r32.4.3 and nvcr.io/nvidia/l4t-cuda:10.2.460-runtime. When I try to compile OpenCV with the CUDA flag ON, it succeeds in the former but fails in the latter. Also, torch.cuda.is_available() returns False. So my questions are: i) which is the correct base image to start from, and ii) how do I access the GPU inside the Docker container? Are there any Git repos to try?

Thanks.
Naveen

Hi,

The difference is that l4t-base mounts the CUDA libraries from the Jetson host at runtime, while l4t-cuda has the CUDA libraries installed inside the container itself.
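
As a minimal sketch of the l4t-base workflow (assuming a standard JetPack 4.x host with the CUDA toolkit installed and nvidia-docker set up, and using the same image tag as in the question), launching the container with the NVIDIA runtime makes the host's CUDA libraries visible inside it:

```
# On the Jetson host: start the container with the NVIDIA runtime so the
# host's CUDA installation is mounted into it.
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.3

# Inside the container: the mounted CUDA toolkit should now be visible.
ls /usr/local/cuda
nvcc --version

# If PyTorch is installed in your image (it is not in l4t-base itself),
# you can also verify GPU visibility from Python:
python3 -c "import torch; print(torch.cuda.is_available())"
```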

Please note that currently we only provide a runtime version of the CUDA container (l4t-cuda:10.2.460-runtime).
This means the image only contains the libraries needed to run an executable, not the full toolkit.
So you may fail to build CUDA binaries due to missing headers.

For l4t-base, please refer to the link below for information on how to access nvcc at build time:
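
As a rough sketch of what that setup typically involves (assuming a standard JetPack install; the image tag below is just a placeholder), the NVIDIA runtime can be made the Docker default so that nvcc and the CUDA headers from the host are also mounted while the image is being built:

```
# 1. Make the NVIDIA runtime the default in /etc/docker/daemon.json so the
#    host's CUDA toolkit is available during `docker build` as well:
#
#    {
#        "runtimes": {
#            "nvidia": {
#                "path": "nvidia-container-runtime",
#                "runtimeArgs": []
#            }
#        },
#        "default-runtime": "nvidia"
#    }
#
# 2. Restart Docker and build your image on top of l4t-base
#    (my-opencv-pytorch is a hypothetical tag):
sudo systemctl restart docker
sudo docker build -t my-opencv-pytorch .
```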

Thanks.

