I am working on a device called ZF ProAI, built around the NVIDIA Xavier SoC (8-core CPU @ 2.1 GHz, Volta GPU with 4 TPCs), running the Linux tegra-ubuntu 4.14.78-rt44-tegra OS.
The hardware ships with this OS and CUDA 10.1 preinstalled for AI development.
A standalone Python object-detection application works fine on this hardware: retinanet_resnet50_fpn model, Python 3.7, Conda environment.
Now I want to containerize this application, but I am unable to find an exactly matching base image on Docker Hub, so I built my image from the closest match I could find:
```
# Dockerfile
FROM nvidia/cuda:11.2.1-base-ubuntu18.04
```
When I run a container from this image with "--gpus all" (as explained in How to Use the GPU within a Docker Container), the following error shows up:
```
nvidia@tegra-ubuntu: docker run --gpus all gpu-nvidia-test
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.
ERRO error waiting for container: context canceled
```
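Could the problem be in how the NVIDIA runtime is registered with Docker on my device? For reference, the documented way to register it is an entry in /etc/docker/daemon.json roughly like the sketch below (taken from the nvidia-container-runtime documentation; I am not certain this is how my preinstalled system is configured):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```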
I have tried several other approaches, but was still unsuccessful. Can anyone help me with this issue? Thank you in advance.
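To clarify what I am looking for: since this is a Tegra/Xavier device, I suspect I need an L4T-based image rather than the x86_64 nvidia/cuda image. A sketch of the kind of Dockerfile I imagine should work (the nvcr.io/nvidia/l4t-base image is real, but the r32.4.3 tag and the detect.py filename are assumptions on my part; I have not found a tag matching my exact L4T version):

```dockerfile
# Sketch: Jetson/Tegra devices typically need an L4T base image,
# not the x86_64 nvidia/cuda images.
# The tag must match the L4T/JetPack release on the device --
# r32.4.3 here is only an assumed example.
FROM nvcr.io/nvidia/l4t-base:r32.4.3

# On L4T base images, the CUDA libraries are mounted in from the
# host by the NVIDIA container runtime, so the image stays small.
WORKDIR /app
COPY detect.py /app/detect.py
CMD ["python3", "detect.py"]
```

Is this the right direction, or is there a recommended base image for this specific L4T 4.14.78-rt44-tegra / CUDA 10.1 combination?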