I understand this may be a duplicate question but I’m looking for some advice.
I currently have a docker container that allows me to run an inference script using my TensorFlow PB model.
After flashing my Jetson AGX Xavier, I am getting the following error:
I am starting my docker container with the following command:
xhost + && sudo docker run --privileged \
  --runtime nvidia --network host \
  -ti raptor/model_inference:latest bash -i
And I've discovered that the docker container runs perfectly without the "--runtime nvidia" parameter.
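In case it's relevant: my understanding is that the "nvidia" runtime has to be registered with the Docker daemon in /etc/docker/daemon.json. I believe the JetPack default looks roughly like the sketch below, but I haven't verified this against my own setup, so I can post my actual file if it would help:

```
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```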
I didn't create the docker container myself, so if you need to see the Dockerfile or any other Docker-related files, let me know and I'll attach them.
Because of my inexperience with this software, I'm clutching at straws for a solution and could do with a gentle nudge in the right direction.
My guess is that the problem lies with the CUDA version installed on my Jetson Xavier, but I'm not sure.
I have tried the following forums and have tested many of the suggested solutions but have had no success:
- Error response from daemon: OCI runtime create failed · Issue #661 · NVIDIA/nvidia-docker · GitHub
- nvidia-docker error: docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: ... · Issue #1264 · NVIDIA/nvidia-docker · GitHub
- Can not use nvidia-docker. docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: ... · Issue #1225 · NVIDIA/nvidia-docker · GitHub
- Error response from daemon: OCI runtime create failed: container_linux.go:348 · Issue #683 · NVIDIA/nvidia-docker · GitHub
Thanks in advance for any help or advice; it's much appreciated.