I have a Deep Learning AMI on AWS EC2 (Deep Learning AMI (Ubuntu 18.04) Version 48.0), and I need to run TensorRT on it.
I set up my NGC credentials (API key) and then pulled the TensorRT container (docker pull nvcr.io/nvidia/tensorrt:20.11-py3).
After that, when I try to run the Docker image with this command,
docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorrt:20.11-py3
I get this error.
When I try to run the image with a different command, I get a different error.
What should I do to solve this problem and get TensorRT running on my EC2 instance?
This is fairly urgent, so I would appreciate a quick reply.
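For reference, these are the sanity checks I can run to verify GPU access from Docker before starting the container (this assumes the NVIDIA Container Toolkit / nvidia-docker2 is installed on the AMI, which I believe ships with the Deep Learning AMI):

```shell
# Check that the host driver sees the Tesla K80
nvidia-smi

# Check that Docker can pass the GPU through to a container
# (requires the NVIDIA Container Toolkit)
docker run --gpus all --rm nvcr.io/nvidia/tensorrt:20.11-py3 nvidia-smi
```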
TensorRT Version: TensorRT 7.2.1
GPU Type: Tesla K80
Nvidia Driver Version: 450.142.00
CUDA Version: 11.1.0 (included in the container)
CUDNN Version: 8.0.4 (included in the container)
Operating System + Version: Ubuntu 18.04 (Deep Learning AMI Version 48.0)
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.5
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Container (nvcr.io/nvidia/tensorrt:20.11-py3)
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered