Could not load dynamic library 'libnvinfer.so.5'

I’m trying to use TensorRT for the first time.

Environment:
OS: Ubuntu on an Amazon AWS EC2 instance
Python 3.7
TensorFlow 2.0

I’ve built a model in TensorFlow and am trying to convert it with TensorRT, following the TensorFlow 2.0 usage example here:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example

When I run:

converter = trt.TrtGraphConverterV2(input_saved_model_dir='models/mymodel')

I get the error:
2019-11-14 17:29:07.427738: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.5'; dlerror: libnvinfer.so.5: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64/openmpi/lib/:/usr/local/cuda/lib64:/usr/local/lib:/usr/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/mpi/lib:/lib/:/home/ubuntu/src/cntk/bindings/python/cntk/libs:/usr/local/cuda/lib64:/usr/local/lib:/usr/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/mpi/lib:/usr/lib64/openmpi/lib/:/usr/local/cuda/lib64:/usr/local/lib:/usr/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/mpi/lib:/lib/:
2019-11-14 17:29:07.427783: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)

Looking at the thread here, it seems like it may be an issue with needing to set LD_LIBRARY_PATH:
https://devtalk.nvidia.com/default/topic/1036527/tensorrt/importerror-libnvinfer-so-4-cannot-open-shared-object-file-no-such-file-or-directory/

However, from what I can find, setting LD_LIBRARY_PATH only seems necessary when manually installing TensorRT from the tar file. I have not built or installed TensorRT separately and am just using what’s bundled with TensorFlow 2.0. Can anyone advise on what’s wrong?
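For what it’s worth, here is a quick way to check whether the dynamic linker can see the library at all. This is just a diagnostic sketch; the soname (libnvinfer.so.5) is the one from the error above:

```shell
# Is libnvinfer in the linker cache?
ldconfig -p | grep nvinfer || echo "nvinfer not in ldconfig cache"

# Also scan each directory on LD_LIBRARY_PATH directly:
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | while read -r d; do
  [ -n "$d" ] && [ -e "$d/libnvinfer.so.5" ] && echo "found in $d"
done
```

If neither check finds the file, the library simply isn’t installed, and no amount of LD_LIBRARY_PATH tweaking will help.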

Thanks

I solved the problem. While it seems like some portion of TensorRT support is bundled with tensorflow-gpu, the TensorRT runtime libraries themselves still have to be installed separately (see https://www.tensorflow.org/install/gpu#ubuntu_1804_cuda_10)

I already had CUDA drivers installed (10.1). I ran the following code, without modifying version numbers, and this allowed me to successfully convert the model. I’m not sure whether I needed the “Install NVIDIA driver” step, as I’m assuming the driver was already installed: I had already been running tensorflow-gpu without issue.

# Add NVIDIA package repositories
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update

# Install NVIDIA driver
sudo apt-get install --no-install-recommends nvidia-driver-418
# Reboot. Check that GPUs are visible using the command: nvidia-smi

# Install TensorRT. Requires that libcudnn7 is already installed.
sudo apt-get install -y --no-install-recommends libnvinfer5=5.1.5-1+cuda10.0 \
    libnvinfer-dev=5.1.5-1+cuda10.0
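After the packages install, a quick sanity check (illustrative; the soname matches the libnvinfer5 package installed above) is to confirm the linker can now resolve the library:

```shell
# Confirm the dynamic linker can now resolve the TensorRT runtime.
ldconfig -p | grep libnvinfer.so.5 \
  && echo "libnvinfer.so.5 found" \
  || echo "libnvinfer.so.5 still missing"
```

Once that reports the library as found, the TrtGraphConverterV2 call from the original post runs without the dlerror.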