TF-TRT optimization

Hello NVIDIA Developers,

I am currently doing transfer learning from the pre-trained SSD-MobileNet-v2 model: I have created and trained my own neural network with the goal of detecting a single class of objects, people. For this I use TensorFlow version 1.15.5, specifically the tensorflow==1.15.5+nv21.5 build rather than tensorflow-gpu.
The Jetson Nano runs Ubuntu 18.04 and the JetPack version is 4.5.1.

My goal is to integrate this model on a Jetson Nano and have it do real-time processing.
Since the model is too heavy for an embedded system like the Jetson Nano, I decided to optimize it with TF-TRT (TensorFlow-TensorRT).

I have a saved_model.pb file containing the pre-trained model, but I did not know where to place it in this code.
I have already found many code samples on the Internet, but none of them uses a pre-trained .pb file.

Among these sites, I tried this one: Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation

It gives this:

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "r/home/images"   # should this be the folder with saved_model.pb?
output_saved_model_dir = "r/home/tf_trt"

converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    max_workspace_size_bytes=(1 << 32),  # 4 GiB; the guide prints "(11<32)", which looks like a typo for 1 << 32
    precision_mode="FP16",
    maximum_cached_engines=100)
converter.convert()
converter.save(output_saved_model_dir)

with tf.Session() as sess:
    # First load the converted SavedModel into the session.
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        output_saved_model_dir)
    # input_tensor / output_tensor are placeholders here; they must be
    # replaced with the real tensor names from the model's signature.
    output = sess.run([output_tensor], feed_dict={input_tensor: input_data})

The path to the folder containing the .pb model and the .pbtxt model is “r/home/saved_model”, but I didn’t know where to put it in this code.
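
From what I understand, input_saved_model_dir should probably point at that folder. To find the real tensor names to use for input_tensor / output_tensor, I could first inspect the model’s signature with something like the sketch below (this is just my guess, assuming my export used the default “serve” tag and “serving_default” signature key):

import tensorflow as tf

saved_model_dir = "r/home/saved_model"  # the folder that contains saved_model.pb

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel with the default "serve" tag.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], saved_model_dir)
    # Print the default serving signature to discover the actual
    # input/output tensor names expected by sess.run().
    sig = meta_graph.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    print("inputs :", {k: v.name for k, v in sig.inputs.items()})
    print("outputs:", {k: v.name for k, v in sig.outputs.items()})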

When I run the conversion code above via python3 code.py, I get:

2021-05-31 16:41:59.329369: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
2021-05-31 16:42:06.500247: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.500494: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.500538: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2021-05-31 16:42:06.503361: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.503417: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)

The first messages seem related to the CUDA library and to the fact that I use tensorflow and not tensorflow-gpu.
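
The later warnings, though, are about TensorRT itself: TensorFlow tries to dlopen libnvinfer.so.7 and fails before aborting. Here is a small diagnostic sketch I could run to check whether those libraries are visible to the dynamic linker at all (just an idea of mine, not something from the guide):

import ctypes
import ctypes.util

# Ask the dynamic linker where (if anywhere) it can see TensorRT.
print("nvinfer:", ctypes.util.find_library("nvinfer"))

for lib in ("libnvinfer.so.7", "libnvinfer_plugin.so.7"):
    try:
        ctypes.CDLL(lib)  # the same dlopen() call TensorFlow performs
        print(lib, "loads fine")
    except OSError as err:
        print(lib, "failed to load:", err)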

Do you have any advice?
I also found some examples with many more lines; am I missing some elements?
Thank you in advance

Paul Griffoul

Hi,
We recommend you check the sample links below, as they might answer your concern.

If the issue persists, we request you to share the model and script so that we can try reproducing the issue at our end.
Thanks!

Hi @paul.griffoul,

This looks like an environment setup issue. Please make sure all dependencies are installed properly. Please refer to the Installing TF-TRT section of Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation.
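
As a quick sanity check (a minimal sketch, not an official test), you can verify that the TF-TRT Python bridge is present in your TensorFlow build; if these imports succeed but conversion still aborts, the problem is with locating the TensorRT runtime libraries rather than with TensorFlow itself:

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# If both imports succeed, the TF-TRT bridge is compiled into this
# TensorFlow build; the remaining requirement is that libnvinfer.so.7
# can be found at runtime.
print("TensorFlow:", tf.__version__)
print("TrtGraphConverter available:", hasattr(trt, "TrtGraphConverter"))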

We suggest you use the NVIDIA TensorFlow container, which comes with TensorRT enabled, to avoid any system dependency issues.

Thank you.

Hi, @spolisetty @NVES

Following your recommendations, I decided to use a Docker container to do the optimization via TensorFlow-TensorRT.
So I wanted to pull the NVIDIA TensorFlow container onto my Jetson Nano.

Since I am on TensorFlow 1.15.1+nv21.5, I registered on the NGC registry and then executed the command below:
docker pull nvcr.io/nvidia/tensorflow:21.05-tf1-py3

The site indicates that the download occupies only 4.69 GB, but near the end of the pull I keep getting a message that there is no more disk space available, and that is really the case: I freed more than 13 GB, yet the problem persists and I am back down to 0 free space by the end of the installation.

Given that I am using a 32 GB SD card and that the system already occupies a good part of it, it will be difficult to free much more space.

Would you have a solution for me?
Thank you
Paul Griffoul

Hi @paul.griffoul,

If you have memory constraints, it is better to set up TensorFlow locally. Coming to the original errors, it looks like a configuration issue. We would recommend you post your concern on the Jetson Nano forum to get better help.

Thank you.