TensorRT and UFF support for NVIDIA DRIVE PX 2


We are working to deploy a neural net defined in TensorFlow on NVIDIA DRIVE PX 2 using TensorRT. The details of our development environment follow.

Training Desktop: Ubuntu 16.04 LTS, Python 2.x, CUDA 8.x, cuDNN 5.x, TensorFlow 1.4
Platform Host (PC): Ubuntu 16.04 LTS, Python 2.x, CUDA 9.0, cuDNN 7.1, TensorRT 3.0.2
Platform Target (DPX2): Ubuntu 16.04, Python 2.x, CUDA 9.0, cuDNN 7.1, TensorRT 3.0.2

Since I am using TRT version 3.0.2, the UFF version is 0.2.0.

We received a frozen graph (.pb) file from our internal Applications Group. Thereafter, on the host, we followed the steps described at https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_302/tensorrt-developer-guide/index.html#convert_model_tensorflow. The terminal output of this procedure is shown below.

sagar@Sagar:~$ python
Python 2.7.12 (default, Nov 12 2018, 14:36:49) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import uff
>>> uff_model = uff.from_tensorflow_frozen_model(frozen_file="/home/sagar/backups/Reshape_1_frozen_model.pb", output_nodes=["output/Reshape_1"], preprocessor=None, output_filename="/home/sagar/backups/tmp.uff")
Using output node output/Reshape_1
Converting to UFF graph
Warning: No conversion function registered for layer: Merge yet.
Converting as custom op Merge stage4_0/Gconv1x1/batch_normalization/cond/Merge
name: "stage4_0/Gconv1x1/batch_normalization/cond/Merge"
op: "Merge"
input: "stage4_0/Gconv1x1/batch_normalization/cond/FusedBatchNorm_1"
input: "stage4_0/Gconv1x1/batch_normalization/cond/FusedBatchNorm"
attr {
  key: "N"
  value {
    i: 2
  }
}
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/sagar/.local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/home/sagar/.local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
  File "/home/sagar/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/sagar/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 46, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: u'^stage4_0/Gconv1x1/batch_normalization/cond/switch_t'

My first question is - in what ways can I resolve this error?
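For context on the KeyError itself: in a TensorFlow GraphDef, an input name prefixed with `^` denotes a control dependency, and a `:N` suffix selects an output slot. The traceback shows the converter looking up `^stage4_0/Gconv1x1/batch_normalization/cond/switch_t` verbatim, so this UFF version apparently does not strip the prefix before the node-table lookup. A minimal sketch of the normalization (the helper name is hypothetical, purely for illustration):

```python
# Hypothetical helper illustrating how TensorFlow encodes node inputs:
# a leading "^" marks a control-dependency edge, and a ":N" suffix selects
# an output slot. A converter must strip both before looking up the node
# in its name table -- the KeyError above suggests UFF 0.2.0 does not.
def normalize_input_name(inp_name):
    if inp_name.startswith("^"):       # control-dependency edge
        inp_name = inp_name[1:]
    return inp_name.split(":")[0]      # drop the output-slot index

print(normalize_input_name("^stage4_0/Gconv1x1/batch_normalization/cond/switch_t"))
# -> stage4_0/Gconv1x1/batch_normalization/cond/switch_t
```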

Reading through many posts on this forum, I see that others have reported similar issues and were advised to upgrade their TensorRT/UFF version. Suppose I upgrade the UFF version on our Platform Host PC, and suppose I also succeed in converting the .pb file to a .uff file with the newer UFF. The problem is that I then have to parse this UFF file on the Platform Target DPX2, where the UFF version is fixed at 0.2.0 (since it is part of the DriveInstall package for automotive use).

This brings me to my second question - is it possible to create a UFF file with UFF v0.5.x and then parse it with UFF v0.2.0?

Thank you.


Currently, the “Merge” layer is NOT supported by UFF. For a list of supported layers, please reference: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#layers

Also reference how to add custom layers: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending
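As a quick sanity check before (or instead of) a full conversion attempt, you can diff the op types in the frozen graph against the supported-layer list. The sketch below hard-codes a partial, illustrative `SUPPORTED` set; the authoritative list is in the documentation linked above, and in practice the graph's op set would come from iterating `graph_def.node` with TensorFlow.

```python
# Sketch: flag op types in a frozen graph that UFF cannot convert.
# In practice the op set comes from the loaded GraphDef, e.g.:
#   ops_in_graph = {node.op for node in graph_def.node}
# SUPPORTED below is a partial subset, for illustration only -- consult
# the TensorRT supported-layers documentation for the real list.
SUPPORTED = {"Conv2D", "BiasAdd", "Relu", "MaxPool", "Reshape",
             "FusedBatchNorm", "ConcatV2", "Identity"}

def unsupported_ops(graph_ops, supported=SUPPORTED):
    """Return the set of op types with no UFF conversion function."""
    return set(graph_ops) - set(supported)

ops_in_graph = {"Conv2D", "FusedBatchNorm", "Merge", "Switch", "Relu"}
print(sorted(unsupported_ops(ops_in_graph)))
# -> ['Merge', 'Switch']
```

Note that the `Merge`/`Switch` ops in this graph come from the `batch_normalization` training conditional (visible as `cond/...` in the node names); re-freezing the graph with the training flag hard-coded to False typically folds that conditional away, removing these ops before conversion.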