Description
I am converting a TensorFlow model (.h5 → SavedModel → TensorRT) to a TensorRT model using TensorFlow 2.5.0 (attached tensorrt.py). The resulting TensorRT model occupies almost 3.5GB of GPU memory.
If I load the same model in the environment specified below, the TensorRT model occupies at most ~1.1GB of GPU memory:
TensorRT Version: 5.1.2.2-1+cuda10.1
GPU Type: GeForce RTX 2080 Ti
Nvidia Driver Version: 418.87.00
CUDA Version: 10.1
CUDNN Version: 7.6.2
Operating System + Version: Ubuntu 16.04.7 LTS
Python Version (if applicable): 3.6.13
TensorFlow Version (if applicable): 1.14.1
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): NA
I also tried using nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb with TF 2.5, but the code won't run because it requires libnvinfer.so.7. The code only runs if we use CUDA 11.1.
Environment
TensorRT Version: 7.2.3-1+cuda11.1
GPU Type: NVIDIA GeForce RTX 3080
Nvidia Driver Version: 470.57.02
CUDA Version: 11.2 & 11.1 (11.1 was pulled in by the TensorRT installation)
CUDNN Version: 8.1.0.77-1+cuda11.2
Operating System + Version: Ubuntu 18.04.5 LTS
Python Version (if applicable): 3.9.6
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): NA
Relevant Files
tensorrt.py (4.2 KB)
dummy.h5 (4.6 MB)
gpu_usage (272 Bytes)
Output files: models
gpu_usage_dummy.txt (1.4 KB)
tensorrt_output.txt (16.5 KB)
Steps To Reproduce
Setting up the environment:
- nvidia-driver installation
- libnvinfer installation as mentioned here (libnvinfer 7.2.3-1+cuda11.1):

```
sudo apt-get install -y --no-install-recommends \
    libnvinfer7=7.2.3-1+cuda11.1 \
    libnvinfer-dev=7.2.3-1+cuda11.1 \
    libnvinfer-plugin7=7.2.3-1+cuda11.1
```

- CUDA installation steps: here, with the install command changed to:

```
sudo apt-get -y install cuda-11-2
```

- cuDNN installation: [from .deb](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installlinux-deb)
- tensorflow installation: `pip install tensorflow==2.5.0`
Adding paths in .bashrc:
export PATH=/usr/local/cuda-11.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/bin/bash
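After the setup above, a quick sanity check that TensorFlow sees the GPU and can load libnvinfer may help isolate environment issues; this is a hedged sketch, and the final import fails with a linker error if libnvinfer.so.7 is not on LD_LIBRARY_PATH:

```python
import tensorflow as tf

# Confirm the installed TF build (expect 2.5.0 per the environment above)
print(tf.__version__)

# Confirm the GPU is visible to TensorFlow (should list the RTX 3080)
print(tf.config.list_physical_devices("GPU"))

# This import fails if TF cannot dlopen libnvinfer.so.7
from tensorflow.python.compiler.tensorrt import trt_convert as trt
```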
Running the code:
- Put the dummy.h5 model in the models directory
- Run tensorrt.py & run gpu_usage to observe the memory occupied
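The attached gpu_usage script is not reproduced here; the following is a minimal Python sketch of the same idea, polling nvidia-smi for memory usage (function names are assumptions, and the attached script may differ):

```python
import subprocess

def parse_mib(field):
    """Parse an nvidia-smi CSV field like '3521 MiB' into an integer MiB value."""
    return int(field.strip().split()[0])

def gpu_memory_used():
    """Return per-GPU memory usage in MiB (requires the NVIDIA driver)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader"],
        text=True)
    return [parse_mib(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    print(gpu_memory_used())
```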