What should I do to solve this problem and get TensorRT running on my EC2 instance?
This is an urgent problem, so please help me as soon as possible.
Thanks
Environment
TensorRT Version: 7.2.1
GPU Type: Tesla K80
Nvidia Driver Version: 450.142.00
CUDA Version: container includes NVIDIA CUDA 11.1.0
CUDNN Version: container includes NVIDIA cuDNN 8.0.4
Operating System + Version: (Ubuntu 18.04) Version 48.0
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.5
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
The TRT container includes:
- NVIDIA CUDA 11.1.0
- NVIDIA cuDNN 8.0.4
- NVIDIA NCCL 2.8.2
So, to begin with, I only need NVIDIA driver 455 or later.
Since I am using an EC2 Deep Learning AMI instance, it comes with the NVIDIA driver pre-installed. So I don't need any pre-installation for TensorRT, right? Or do I need some extra installation for this purpose?
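For reference, here is a minimal sketch of how to check the driver requirement mentioned above (455 is the stated minimum for the CUDA 11.1 container; the `nvidia-smi` query is left as a comment since it only runs on the instance itself, and `driver_ok` is just an illustrative helper name):

```shell
# Hedged sketch: compare the installed driver's major version against a
# required minimum. On the instance you would feed in the live value:
#   driver_ok "$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)" 455
driver_ok() {
  # $1 = installed driver version string (e.g. "450.142.00")
  # $2 = required major version (e.g. 455)
  major=${1%%.*}           # keep only the part before the first dot
  [ "$major" -ge "$2" ]
}

if driver_ok "450.142.00" 455; then
  echo "driver is new enough for this container"
else
  echo "driver is older than the container's stated minimum"
fi
```

With driver 450.142.00 this reports that the driver is older than the minimum, which is why the driver question matters before pulling the container.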
Hi @skilic ,
In the error shown in the screenshot, I see you are running the command as-is:
docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorrt:xx.xx-py3
However, here you need to replace local_dir:container_dir with your host directory and mount directory, respectively.
You need to mount a path on your host machine into the container.
Can you please try that and let us know?
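As a concrete sketch of that substitution (the paths below are examples, not requirements, and xx.xx stays a placeholder for whatever release tag you pulled), building the `-v` flag from variables and echoing the command first lets you inspect the expanded mount spec before actually running it:

```shell
# Example only: pick a real directory on your EC2 host and an arbitrary
# mount point inside the container; the -v flag is host_path:container_path.
HOST_DIR="$HOME/models"            # directory on the EC2 host (example)
CONTAINER_DIR="/workspace/models"  # where it appears inside the container (example)

# Print the fully expanded command; drop the leading "echo" to run it.
echo docker run --gpus all -it --rm \
  -v "${HOST_DIR}:${CONTAINER_DIR}" \
  nvcr.io/nvidia/tensorrt:xx.xx-py3
```

Once the mount spec shows a real path on both sides of the colon instead of the literal local_dir:container_dir, the error from the screenshot should go away.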
Hi @AakankshaS , I am working on an AWS Amazon Linux TensorFlow Deep Learning AMI EC2 instance. I am able to train the model, convert it to a TFLite model, and convert it to ONNX, but when I try TensorRT I get the error shown in the image. It says TensorFlow is not built with TensorRT, even though these are already installed by default. Can you let me know how to solve this issue?
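To narrow down where that error comes from, one hedged diagnostic (variable names here are mine) is to check whether the TensorRT Python bindings and the libnvinfer runtime are actually visible in the environment where TensorFlow runs; on a stock DLAMI outside the NGC container it is common for both to be missing, which would match the "not built with TensorRT" message:

```shell
# Diagnostic sketch: report whether the TensorRT pieces are visible here.
# Neither check modifies anything; each just records found/missing.
if python3 -c "import tensorrt" 2>/dev/null; then
  trt_py=found
else
  trt_py=missing
fi

if ldconfig -p 2>/dev/null | grep -q libnvinfer; then
  nvinfer=found
else
  nvinfer=missing
fi

echo "tensorrt python bindings: $trt_py"
echo "libnvinfer runtime:       $nvinfer"
```

If both report missing, TensorFlow in that environment has nothing to link against, regardless of what is installed inside the TensorRT container.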