TensorRT on TensorFlow Docker image: ld cannot find cudnn

Hello,
I have an x86 desktop computer with two Titan X cards running Ubuntu 16.04, with CUDA 9.0
correctly installed on the host PC.
I started off with TensorFlow’s official Docker image and ran it as:

docker run --runtime=nvidia -it tensorflow/tensorflow:1.12.0-gpu bash

I can confirm that TensorFlow works as expected from Python and that the GPUs are used correctly for training.
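(For reference, checks along these lines inside the container confirm this; output omitted for brevity:)

$ nvidia-smi
$ python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"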

I am trying to get TensorRT working correctly with this image, so I downloaded the “TensorRT 5.1.5.0 GA for Ubuntu 16.04 and CUDA 9.0 tar package” from https://developer.nvidia.com/nvidia-tensorrt-5x-download

I followed the installation instructions for the tar package.

Everything seemed to install correctly. All of the following was run as root:
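(The tar package had already been extracted inside the container, along the lines of the following; the exact archive filename depends on the download:)

$ tar xzvf TensorRT-5.1.5.0.*.tar.gz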

$ cd TensorRT-5.1.5.0/lib
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
$ cd ../python/
$ pip install tensorrt-5.1.5.0-cp27-none-linux_x86_64.whl
Processing ./tensorrt-5.1.5.0-cp27-none-linux_x86_64.whl
Installing collected packages: tensorrt
Successfully installed tensorrt-5.1.5.0
$ cd ../uff/
$ pip install uff-0.6.3-py2.py3-none-any.whl
Processing ./uff-0.6.3-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python2.7/dist-packages (from uff==0.6.3) (1.15.4)
Requirement already satisfied: protobuf>=3.3.0 in /usr/local/lib/python2.7/dist-packages (from uff==0.6.3) (3.6.1)
Requirement already satisfied: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf>=3.3.0->uff==0.6.3) (40.5.0)
Requirement already satisfied: six>=1.9 in /usr/local/lib/python2.7/dist-packages (from protobuf>=3.3.0->uff==0.6.3) (1.11.0)
Installing collected packages: uff
Successfully installed uff-0.6.3
$ cd ../graphsurgeon/
$ pip install graphsurgeon-0.4.1-py2.py3-none-any.whl 
Processing ./graphsurgeon-0.4.1-py2.py3-none-any.whl
Installing collected packages: graphsurgeon
Successfully installed graphsurgeon-0.4.1
$ which convert-to-uff
/usr/local/bin/convert-to-uff
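(As an extra sanity check, the Python binding can be verified with an import along these lines; note that the C++ sample build below does not depend on this wheel:)

$ python -c "import tensorrt; print(tensorrt.__version__)"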

However, when I attempt to compile the samples, the build fails:

$ cd ../samples
$ make
make[1]: Entering directory '/app/Downloads/TensorRT-5.1.5.0/samples/sampleCharRNN'
../Makefile.config:7: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:10: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
Linking: ../../bin/sample_char_rnn_debug
g++ -o ../../bin/sample_char_rnn_debug ../../bin/dchobj/sampleCharRNN.o ../../bin/dchobj/../common/logger.o  -L"/usr/local/cuda/lib64" -L"/usr/local/cuda/lib64" -L"../lib" -L"../../lib" -L../../bin -Wl,--start-group -lnvinfer -lnvparsers -lnvinfer_plugin -lnvonnxparser -lcudnn -lcublas -lcudart -lrt -ldl -lpthread -Wl,--end-group
/usr/bin/ld: cannot find -lcudnn
/usr/bin/ld: cannot find -lcublas
collect2: error: ld returned 1 exit status
../Makefile.config:161: recipe for target '../../bin/sample_char_rnn_debug' failed
make[1]: *** [../../bin/sample_char_rnn_debug] Error 1
make[1]: Leaving directory '/app/Downloads/TensorRT-5.1.5.0/samples/sampleCharRNN'
Makefile:49: recipe for target 'all' failed
make: *** [all] Error 2

I understand that this error means the linker could not find libcudnn.so and libcublas.so.
libcudnn.so → found as /usr/lib/x86_64-linux-gnu/libcudnn.so.7
libcublas.so → found as /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcublas.so.9.0
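(For anyone reproducing this, the copies inside the container can be located with something along these lines; output omitted:)

$ find / -name 'libcudnn.so*' 2>/dev/null
$ find / -name 'libcublas.so*' 2>/dev/null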

so I added these directories to LD_LIBRARY_PATH:

$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/x86_64-linux-gnu/:/usr/local/cuda-9.0/targets/x86_64-linux/lib/

Even after this, make fails with the same error.
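From the CUDA_INSTALL_DIR / CUDNN_INSTALL_DIR messages in the Makefile output above, I wonder whether LD_LIBRARY_PATH even matters here, since as far as I understand it only affects the runtime loader and not ld at link time. Would pointing the sample build at the directories explicitly be the right approach, e.g. something along these lines (the exact values are my guess)?

$ cd TensorRT-5.1.5.0/samples
$ make CUDA_INSTALL_DIR=/usr/local/cuda-9.0 CUDNN_INSTALL_DIR=/usr/lib/x86_64-linux-gnu

Or does the container also need the unversioned libcudnn.so and libcublas.so names (which -lcudnn and -lcublas resolve to), given that only the .so.7 / .so.9.0 files are present? How can this be fixed?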