Converting a TensorFlow model to a TensorRT model

My hardware is a Jetson TX2, and I have installed JetPack 3.2 and TensorFlow 1.9.
I am converting a TensorFlow model to a TensorRT model.

I followed this sample:
GitHub - NVIDIA-AI-IOT/tf_trt_models: TensorFlow models accelerated with NVIDIA TensorRT

Here is my code:
[
# coding=utf-8

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

from tf_trt_models.detection import download_detection_model
from tf_trt_models.detection import build_detection_graph

# Download the model config and checkpoint
config_path, checkpoint_path = download_detection_model('ssd_inception_v2_coco')

# Build a frozen TensorFlow graph from the checkpoint
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path
)

# Convert the frozen graph with TF-TRT
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50)

# Import the TensorRT graph into a new graph and run:
output_node = tf.import_graph_def(
    trt_graph,
    return_elements=output_names)  # the output names from build_detection_graph
]

Here is my error:
[
2019-02-13 16:15:37.672887: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:438] MULTIPLE tensorrt candidate conversion: 7
Segmentation fault (core dumped)
]

I checked a related topic on the forum, but I realize it was never actually fixed, because the author did not run the TensorFlow-TensorRT code anymore, so I hope you can support me all the way through. Here is the topic I checked:
https://devtalk.nvidia.com/default/topic/1044593/jetson-tx2/tf-trt-issue-/1

Besides, I followed the code in ACCELERATING INFERENCE IN TENSORFLOW WITH TENSORRT, and it also failed, but with a different error. In this topic I want to focus on the error above.

I have some questions. How can I know when the TF-to-TRT conversion with this code has completed successfully? Will I get a real TensorRT model file in the directory? And how can I apply the TensorRT model (that I converted) to DeepStream SDK 1.5?
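By the way, this is how I am saving the converted graph at the moment (just a sketch; the output file name is my own choice):
[
# Write the TF-TRT GraphDef to disk so it can be inspected or reloaded;
# note this is still a TensorFlow .pb file, not a standalone TensorRT engine
with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
]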

I hope you can answer all my questions.
Thank you so much.

Hi,

May I know where your TensorFlow package comes from?
Please note that there are some dependencies between TensorFlow, cuDNN, the CUDA toolkit, and the GPU driver.

For TX2, it's recommended to use JetPack 3.3 with our official TensorFlow release:
[url]https://devtalk.nvidia.com/default/topic/1038957/jetson-tx2/tensorflow-for-jetson-tx2-/[/url]
The topic you mentioned was solved by reflashing the environment with JetPack 3.3.

There are two suggestions for you:

1. TensorFlow-TRT is different from pure TRT: it uses TensorRT for acceleration but keeps the TensorFlow implementation for things like input handling and preprocessing.
To use it, you still need to load the whole TensorFlow library, which occupies a lot of resources and memory.

It's more recommended to use our pure TensorRT, but it needs some porting effort.
Please check this page for the tutorial (a minimal sketch of the flow follows after this list):
GitHub - NVIDIA-AI-IOT/tf_to_trt_image_classification: Image classification with NVIDIA TensorRT from TensorFlow models.

2. DeepStream 1.5 doesn't support UFF models, which is the format TensorFlow models are converted through.
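
To give a feel for the porting effort in 1., the sketch below shows the pure TensorRT flow. It assumes a setup where the TensorRT Python API and the UFF toolkit are available; the file names, input/output names, and shapes are placeholders you must replace with your model's:

import uff
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Convert a frozen TensorFlow graph to UFF (file and output name are placeholders)
uff_model = uff.from_tensorflow_frozen_model('frozen_graph.pb',
                                             output_nodes=['scores'])

# Parse the UFF model into a TensorRT network
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.UffParser()
parser.register_input('input', (3, 300, 300))  # placeholder name/shape
parser.register_output('scores')               # placeholder name
parser.parse_buffer(uff_model, network)

# Build the engine and serialize it to a PLAN file
builder.max_batch_size = 1
builder.max_workspace_size = 1 << 25
engine = builder.build_cuda_engine(network)
with open('model.plan', 'wb') as f:
    f.write(engine.serialize())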

Thanks.

Hi aastaLLL,
Sorry, I needed a few days to run the example you gave me.

My TensorFlow version comes from the documentation "TensorFlow for Jetson TX2".
I don't know why we have both TF-TRT and TRT. What is the difference between them?

I followed your link (GitHub - NVIDIA-AI-IOT/tf_to_trt_image_classification: Image classification with NVIDIA TensorRT from TensorFlow models.). I converted inception_v1.ckpt and got "inception_v1.plan" at the end of the process. Is that the TensorRT format?

And do you have any suggestions (or a sample) for deploying the TensorRT model (inception_v1.plan) into DeepStream SDK 1.5 on Jetson TX2?

Thanks so much.

Hi,

1) TF-TRT and TRT are different (see the snippet at the end of this post).
TensorRT is our high-performance inference engine: https://developer.nvidia.com/tensorrt
TF-TRT is a wrapper in TensorFlow that helps users run their model with the TRT engine.

2) The link you shared (GitHub - NVIDIA-AI-IOT/tf_to_trt_image_classification: Image classification with NVIDIA TensorRT from TensorFlow models.) is the right one to show how to use pure TRT.

3) This is a good question.
By default, DeepStream 1.5 doesn't support TensorFlow models, since the TF->TRT converter is not enabled.
But since your PLAN file is ready, you can try feeding it in as if it were a PLAN file generated from a Caffe model to see if it works:

[primary-gie]
...
cache=file:///path/to/the/TRT/PLAN
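
One more note on 1): you can see the wrapping directly in the converted graph, because the subgraphs handed over to TensorRT show up as TRTEngineOp nodes. A quick check (a sketch, not part of the sample; trt_graph is the GraphDef returned by trt.create_inference_graph()):

# Count how many subgraphs TF-TRT replaced with TensorRT engines
trt_engine_nodes = [n for n in trt_graph.node if n.op == 'TRTEngineOp']
print('TensorRT engine nodes: %d' % len(trt_engine_nodes))
print('Total nodes: %d' % len(trt_graph.node))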

Thanks.

Hi AastaLLL,
1. What is a PLAN file? I don't understand it clearly.

2. In [primary-gie], we have model-cache, model-file, proto-file, labelfile-path, etc.

model-cache links to the PLAN file; what about the others?

I have a TF model, not Caffe, so I don't know how to set model-file, proto-file, etc. correctly.

I hope you can describe this in more detail for me.

Thanks a lot.

Hi,

1. A PLAN is a serialized TensorRT engine (see the short sketch below).
Here is our tutorial for your reference:
[url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#fit[/url]

2. DeepStream only supports caffemodel and TensorRT PLAN files.
So you will need to generate the TensorRT engine first and feed it into DeepStream.
model-file, proto-file, etc. are ignored when a PLAN file is given (a config sketch is at the end of this post).
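
As a rough illustration of 1. (a sketch assuming a TensorRT release with the Python runtime API; the file name is just an example), a PLAN is simply the serialized engine bytes, and it can be loaded back without rebuilding the network:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# 'engine' here is an ICudaEngine built earlier
# (e.g. with builder.build_cuda_engine); write it out as a PLAN file...
with open('inception_v1.plan', 'wb') as f:
    f.write(engine.serialize())

# ...and later deserialize it directly into a runnable engine
with open('inception_v1.plan', 'rb') as f:
    runtime = trt.Runtime(TRT_LOGGER)
    engine = runtime.deserialize_cuda_engine(f.read())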

Please note that your PLAN may not be supported by DeepStream, since it is generated from TensorFlow.
But it is still worth a try.
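
For completeness, the [primary-gie] group for this experiment might look like the sketch below. The paths are hypothetical, and only keys already mentioned in this thread are used:

[primary-gie]
# Point the cache at the serialized TensorRT engine (hypothetical path);
# model-file/proto-file are left out since they are ignored with a PLAN
cache=file:///home/nvidia/inception_v1.plan
labelfile-path=file:///home/nvidia/labels.txt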

Thanks.