Hi,
I have created a custom container using the image nvcr.io/nvidia/pytorch:23.05-py3.
The host computer runs Ubuntu 23.04, with nvidia-docker2 installed and NVIDIA drivers above version 530 so that CUDA can be used. Here is the output of nvidia-smi:
The container runs well, but the graphics card drivers are not detected inside it:
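One quick way to double-check whether Docker itself can see the GPU is to run nvidia-smi directly through the same image (a minimal check, assuming the image has already been pulled):
# If this prints the same table as on the host, the runtime is fine and the
# problem is in the devcontainer/docker-compose configuration instead.
docker run --rm --gpus all nvcr.io/nvidia/pytorch:23.05-py3 nvidia-smi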
I have used the following devcontainer file:
and the following docker-compose file:
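For reference, a minimal docker-compose service that requests the GPU looks roughly like this (a sketch with placeholder names, not my exact file):
services:
  dev:
    image: nvcr.io/nvidia/pytorch:23.05-py3
    command: sleep infinity
    deploy:
      resources:
        reservations:
          devices:
            # Request all NVIDIA GPUs inside the container
            - driver: nvidia
              count: all
              capabilities: [gpu]
The devcontainer.json then only needs to point at this service through the dockerComposeFile and service properties; with nvidia-docker2, setting runtime: nvidia on the service is an alternative to the deploy block.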
Thank you for your help.
Best regards
Just a small update after some research: it may be that I need to use the sudo command to build the image. However, how can I achieve the same result without needing sudo?
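In case it helps someone, the standard way to use Docker without sudo is to add the user to the docker group (a sketch; a re-login is needed, and membership in the docker group is effectively root-equivalent):
# Add the current user to the docker group so docker can be run without sudo
sudo usermod -aG docker $USER
# Log out and back in (or run "newgrp docker"), then verify:
docker run --rm hello-world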
Hello,
This may be related to a reported outage with nvcr.io.
Please follow this thread: I can't pull nvidia images - #5 by TomNVIDIA
Thanks,
Tom
Hello,
I have checked, and it is related to the topic described in:
I have come across a potential rough edge with the NVIDIA Docker runtime provided with JetPack 4.2.1.
All of the following is run on a TX2 module mounted on a Colorado Engineering XCarrier carrier board.
I am working with a deviceQuery binary built locally from the CUDA samples provided in JetPack, and I can run it successfully in any user account on the device itself.
When I try to run it in a container under the root user, e.g.:
FROM nvcr.io/nvidia/l4t-base:r32.2
COPY deviceQuery .
CMD ./de…
I will check the information from that topic to see if I can make it work.
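From what I understand of that topic, the container has to be started with the NVIDIA runtime so that the CUDA devices become visible; a typical invocation would be something like the following (a sketch, assuming the image built from the Dockerfile above is tagged devicequery):
# Build the image from the Dockerfile quoted above (hypothetical tag)
docker build -t devicequery .
# Run it with the NVIDIA container runtime so the GPU devices are mounted
docker run --rm --runtime nvidia devicequery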
Best regards,
Albert
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.