Deploy TensorFlow model on TX2 with TensorRT

Hi,

I developed a basic TensorFlow model and I would like to deploy it on my TX2 using TensorRT.
I am following this doc in order to achieve it: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/topics/workflows/tf_to_tensorrt.html

But there is something I don't understand, and I couldn't find a clear answer…

In this doc, it seems that I can convert my TensorFlow model to a UFF file directly. But this function is only available on an x86 Ubuntu distribution. Moreover, installing TensorFlow on such a distribution is very painful…

So do I need to create a frozen model from my trained model (on an x64 OS), then copy the file to an x86 Ubuntu machine to convert it into a UFF file and then a TensorRT engine, and finally copy that engine to my TX2 to execute it?

Or is it possible to simply use uff.from_tensorflow() on the TX2 directly?

Thank you,
Matthieu

Hi,

To run TensorRT with a TensorFlow model on Jetson, the following steps are required:
1. Convert the TensorFlow model to UFF format (a Python sketch follows this list)

  • Requires an x86 Linux platform
  • Python interface
  • Sample is located at '/usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/'

2. Create a TensorRT engine from the UFF file

  • Can be done on Jetson
  • C++ interface
  • Sample is located at '/usr/src/tensorrt/samples/sampleUffMNIST/'
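
For step 1, a minimal sketch of the conversion on the x86 host could look like the following, using the uff Python package that ships with TensorRT; the file name model.pb and the output node name "output" are placeholders for your own model:

import uff

# Parse a frozen TensorFlow GraphDef and serialize it to UFF.
# "model.pb" and the output node name "output" are placeholders.
uff_model = uff.from_tensorflow_frozen_model(
    frozen_file="model.pb",       # frozen graph produced by freeze_graph
    output_nodes=["output"],      # names of the graph's output ops
    output_filename="model.uff")  # destination UFF file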

By the way, TensorFlow releases pre-built packages for the x86 Linux environment:

(tensorflow)$ pip install --upgrade tensorflow      # for Python 2.7
(tensorflow)$ pip3 install --upgrade tensorflow     # for Python 3.n
(tensorflow)$ pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU
(tensorflow)$ pip3 install --upgrade tensorflow-gpu # for Python 3.n and GPU

Thanks.

Hi AastaLLL,

Thank you very much for your answer. It's clear to me now, and I managed to convert my TensorFlow model to UFF format.

Hi,

My experience with TensorRT has been quite rough:

a) python3 -m uff.bin.convert_to_uff tensorflow -o "$out_file" --input-file="$pb_file" -I input_node,new_node_name,dtype,dim1,dim2… -O output_node1 -O output_node2 -O …

First, the converter removes identity ops from the frozen .pb, which produces a very lengthy "output node name".

Then an exception is raised in uff/converters/tensorflow/converter.py:

convert_tf2numpy_const_node()
    np_dtype = cls.convert_tf2numpy_dtype(tf_node.attr['dtype'].type)

where tf_node is a "Sub" op that has only one attribute (no 'dtype' attribute):

T: {"type":"DT_FLOAT"}

BTW, its parent stack frame is inside the function convert_transpose().

b) After I removed those ops (the whole post-processing stage was removed), the UFF file could be generated successfully, but then "Unsupported operation _ExpandDims" was reported while loading the UFF.
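
For reference, one way to prune such ops programmatically before conversion (a sketch, assuming TF 1.x and a hypothetical final network node named "logits") is graph_util.extract_sub_graph:

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Load the frozen graph ("model.pb" is a placeholder name).
graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Keep only the subgraph needed to compute "logits"; the unsupported
# post-processing ops (e.g. ExpandDims) downstream of it are cut away.
pruned = graph_util.extract_sub_graph(graph_def, ["logits"])

with tf.gfile.GFile("model_pruned.pb", "wb") as f:
    f.write(pruned.SerializeToString())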

Hi,

When I worked with TensorRT, I was using Keras models, and I used the following function to convert them into a .pb file:

from keras.models import load_model
import keras.backend as K
from tensorflow.python.framework import graph_io
from tensorflow.python.tools import freeze_graph
from tensorflow.core.protobuf import saver_pb2
from tensorflow.python.training import saver as saver_lib

def convert_keras_to_pb(keras_model, out_names, models_dir, model_filename):
    # Set the learning phase to "test" before loading the model so that
    # layers like dropout and batch norm are baked in inference mode.
    K.set_learning_phase(0)
    model = load_model(keras_model)  # loading registers the graph in the session
    sess = K.get_session()
    # Checkpoint the session's variables, then write the graph definition.
    saver = saver_lib.Saver(write_version=saver_pb2.SaverDef.V2)
    checkpoint_path = saver.save(sess, 'saved_ckpt', global_step=0,
                                 latest_filename='checkpoint_state')
    graph_io.write_graph(sess.graph, '.', 'tmp.pb')
    # Freeze the graph: fold the checkpointed variables into constants.
    freeze_graph.freeze_graph('./tmp.pb', '',
                              False, checkpoint_path, out_names,
                              "save/restore_all", "save/Const:0",
                              models_dir + model_filename, False, "")
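
For example, it can then be called like this (the file names and output node name here are hypothetical):

# Hypothetical paths and node name -- adapt them to your own model.
convert_keras_to_pb('my_model.h5', 'dense_2/Softmax', './models/', 'frozen_model.pb')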

This function can be found at this link: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html

It worked pretty well: with these files, I could generate UFF files from both the Python API and the C++ API (directly on the TX2).
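
As a reference for the engine-building side, here is a minimal sketch with the TensorRT Python API, assuming a release whose Python bindings include the UFF parser; the node names "input"/"output" and the (3, 224, 224) input shape are placeholders for your model's values:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Build a CUDA engine from model.uff; names and shape are placeholders.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))  # CHW input shape
    parser.register_output("output")               # output node name
    parser.parse("model.uff", network)
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28           # 256 MiB workspace
    engine = builder.build_cuda_engine(network)
    # Serialize the engine so it can be reloaded without rebuilding.
    with open("model.engine", "wb") as f:
        f.write(engine.serialize())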

The only bug was that sometimes the UFF file was generated successfully but the resulting inference engine did not perform properly; regenerating the UFF file was enough to fix it.

Hope this helps ;)