TF-TRT on Jetson Nano

Hello NVIDIA Developers,

Doing transfer learning from the pre-trained SSD-MobileNet-v2 model, I have created and trained my own neural network with the goal of detecting a single class of objects: people. For this I use TensorFlow 1.15.5, specifically the tensorflow==1.15.5+nv21.5 build and not tensorflow-gpu.
The Jetson Nano runs Ubuntu 18.04 and the JetPack version is 4.5.1.

My goal is to integrate this model on a Jetson Nano and have it do real-time processing.
Since the model is too heavy for an embedded system like the Jetson Nano, I decided to optimize it with TF-TRT (TensorFlow to TensorRT).

I have a saved_model.pb file containing the trained model, but I don't know where to place it in the code below.
I have already found many examples on the Internet, but none of them uses a pre-trained .pb file.

Among these sites, I tried this one: Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation

It gives this:

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# input_saved_model_dir must point at the directory that contains
# saved_model.pb (and its variables/ folder).
input_saved_model_dir = "r/home/images"
output_saved_model_dir = "r/home/tf_trt"

converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    max_workspace_size_bytes=(1 << 32),  # 4 GB; the "(11<32)" in the docs is a typo
    precision_mode="FP16",
    maximum_cached_engines=100)
converter.convert()
converter.save(output_saved_model_dir)

with tf.Session() as sess:
    # First load the converted SavedModel into the session
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        output_saved_model_dir)
    # input_tensor and output_tensor are not defined above; they have to be
    # looked up by name in the loaded graph, e.g.
    # sess.graph.get_tensor_by_name("image_tensor:0").
    output = sess.run([output_tensor], feed_dict={input_tensor: input_data})

The path to the folder containing the .pb model and the .pbtxt file is "r/home/saved_model", but I don't know where to put it in this code.

When I run this Python code via python3 code.py, I get:

2021-05-31 16:41:59.329369: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
2021-05-31 16:42:06.500247: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.500494: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.500538: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2021-05-31 16:42:06.503361: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.503417: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)

The first messages relate to the CUDA libraries and to the fact that I use tensorflow and not tensorflow-gpu.
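
As a quick sanity check (a minimal sketch; the soname libnvinfer.so.7 is taken from the error log above), the following tests whether the TensorRT runtime can be loaded at all:

import ctypes

# Attempt to load the TensorRT runtime library that TF-TRT is looking for.
try:
    ctypes.CDLL("libnvinfer.so.7")
    print("libnvinfer.so.7 loaded successfully")
except OSError as err:
    print("TensorRT 7 runtime not found:", err)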

One of the NVIDIA developers suggested that I use the NVIDIA TensorFlow container, which comes with TensorRT enabled, to avoid any system dependency issues.

Following this recommendation, I decided to use a Docker container to do the optimization via TensorFlow-TensorRT.
So I wanted to pull the TensorFlow image on my Jetson Nano.

Being on TensorFlow 1.15.1+nv21.5, after registering on the NGC registry, I executed the command below:
docker pull nvcr.io/nvidia/tensorflow:21.05-tf1-py3

The site indicates that the image occupies only 4.69 GB, but near the end of the pull I consistently get a message that there is no storage space left, and this is really the case: even starting with more than 13 GB free, I still end up at 0 by the end of the download.

Given that I am using a 32 GB SD card and that the system image already occupies a good part of it, it will be difficult to free much more space.
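
For reference, here is how I check the remaining space (a small sketch using the Python standard library; Docker keeps its image layers under /var/lib/docker, which sits on the root filesystem by default):

import shutil

# Free space on the root filesystem, which holds /var/lib/docker by default.
usage = shutil.disk_usage("/")
print("free: %.1f GB of %.1f GB" % (usage.free / 2**30, usage.total / 2**30))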

Would you have a solution for me?
Thank you

Paul Griffoul

Hi,

Based on the error, it seems you don’t have TensorRT v7.1.3 in your environment.

Which JetPack did you install?
For compatibility, please use JetPack 4.5.1 for TensorFlow v1.15.1+nv21.5.

Also, since your target is real-time inference, we recommend using pure TensorRT rather than TF-TRT.

You can find an SSD-related example in our OSS GitHub below:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffSSD
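
For reference, the pure-TensorRT route for an SSD model goes through a UFF file instead of a SavedModel. Below is a minimal sketch of engine building with the TensorRT 7 Python API; the input/output tensor names and the input shape are the ones used by sampleUffSSD and are assumptions for any custom model:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# SSD needs the NMS plugin, so register TensorRT's built-in plugins first.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

def build_engine(uff_path):
    # Build an FP16 engine from a UFF file exported from the frozen graph.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 28  # 256 MB is safer on a Nano
        builder.fp16_mode = True              # the Nano supports fast FP16
        # Tensor names and shape follow sampleUffSSD; adjust for a custom model.
        parser.register_input("Input", (3, 300, 300))
        parser.register_output("NMS")
        parser.parse(uff_path, network)
        return builder.build_cuda_engine(network)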

Thanks.

Hi @AastaLLL,
Thank you for your answer.
Indeed, I did not install TensorRT v7.1.3; I had some difficulties with libnvinfer dependencies during the installation.

I am using JetPack 4.5.1.

Do you advise me to use the TensorRT container, i.e. run the command below?
docker pull nvcr.io/nvidia/tensorrt:21.05-py3

Looking at the link below,
https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html

I see that only TensorRT v7.2.3.4, TensorRT v7.2.2.3 and TensorRT 7.2.2.3+cuda11.1.0.024 are listed.
How could I get TensorRT v7.1.3?
I already have CUDA 10.2.

On the same page, I see information about the TensorFlow container, which also contains TensorRT. Which container do you think is better for me to install?

Also, I was wondering why it is better to use pure TensorRT. I already have a saved_model.pb, and my plan was to optimize it via TensorFlow-TensorRT to get a new, optimized saved_model.pb.

Thanks in advance

Paul Griffoul

Hi,

First, you will need to install TensorRT on your Nano directly.
The Jetson (aarch64) package can be found in the JetPack installer.

Please note that Jetson docker images mount these libraries from the host.
After installing the package, you can check the l4t-tensorflow or l4t-ml container.
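
Once TensorRT is installed on the host, a quick way to confirm that the container sees it (a minimal check; run inside the l4t-tensorflow or l4t-ml container):

# Confirm that the host's TensorRT libraries were mounted into the container.
import tensorrt as trt
print(trt.__version__)  # should report 7.1.x on JetPack 4.5.1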

And pure TensorRT is expected to give better performance.

Thanks.
