TensorFlow 1.2 to TensorRT 3.0 workflow doesn't work if there are custom layers or input channels

Our goal: convert a TensorFlow model to TensorRT + INT8, hoping to improve performance and throughput.
Our approach: we converted our model to TensorFlow 1.2 and followed the steps in the “TensorRT 3 User Guide” (DU-08602-001_v3.0 | November 2017), section “2.3.2. TensorFlow™ Workflow”, to import the model into TensorRT 3.0.
Our Platform: Tesla P4
Our Steps:

  1. We installed TensorRT 3.0 and uff-converter-tf following the “TensorRT Installation Guide” (DU-08731-001_v3.0 | September 2017), section “3.1. Debian Installation”.
  2. A colleague froze the TensorFlow model into a “pb” file, and I ran the convert_to_uff.py command-line tool to convert the “pb” into a “uff” file.
  3. The “pb”-to-“uff” conversion failed at first, so I patched the convert_to_uff.py command-line tool, modifying the converter.py and graph.py files so the tool could generate a “uff” file.
  4. Even after patching the convert_to_uff tool and obtaining the “uff” file, the UffParser in our clone of the TensorRT sampleUffMNIST sample code still failed to parse it.
  5. We created a test model that reproduces all the problems in our real model, to help NVIDIA reproduce the issue and hopefully make the TensorFlow-to-TensorRT workflow more robust.
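Conceptually, the patch we made to the converter amounts to a fallback path: when a node's op is not in the converter's table, emit a generic custom-op node instead of failing. A minimal pure-Python sketch of that idea, assuming a made-up op table and node format (this is an illustration, not the real uff converter internals):

```python
# Sketch of the fallback idea patched into the converter: unknown ops become
# a generic placeholder node instead of aborting the conversion.
# SUPPORTED_OPS and the (op, name) tuples are hypothetical illustrations.

SUPPORTED_OPS = {"Conv2D", "Relu", "MaxPool", "MatMul", "Add"}

def convert_node(op, name):
    """Return a (converted_op, name) pair; ops with no known conversion
    fall back to a generic custom-op placeholder."""
    if op in SUPPORTED_OPS:
        return (op, name)
    # Fallback: keep the graph convertible by emitting a placeholder.
    return ("_CustomOp", name)

graph = [("Conv2D", "conv1"), ("MySpecialOp", "sp1"), ("Relu", "relu1")]
converted = [convert_node(op, name) for op, name in graph]
print(converted)
# [('Conv2D', 'conv1'), ('_CustomOp', 'sp1'), ('Relu', 'relu1')]
```

The conversion then succeeds, but the parser on the other side must also understand the placeholder node, which is exactly where our workflow broke down.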
    Our Conclusions:
  6. The convert_to_uff tool has code to handle custom nodes (layers) and recognizes the operation keywords, but this functionality seems to be missing from the UffParser in the TensorRT library.
  7. All of these operations and channels are supported by TensorFlow. Even setting the custom-operation problem aside, recovering the channel information in a TensorRT plugin will be hard, because the plugin's input and output formats need to fit TensorRT's tensor format.
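Before even running the converter, the gap between the two frameworks can be estimated by diffing the set of ops used in the graph against TensorRT's supported-op list. A hedged sketch follows; both op lists below are illustrative placeholders, and the real table is the “Supported TensorFlow Operations” section of the TensorRT documentation:

```python
# Sketch: list the ops a (simplified) graph uses that have no TensorRT
# equivalent. TRT_SUPPORTED and model_ops are illustrative placeholders,
# not the official supported-operations table.

TRT_SUPPORTED = {"Placeholder", "Conv2D", "BiasAdd", "Relu", "MaxPool",
                 "MatMul", "Softmax"}

def unsupported_ops(graph_ops):
    """Return the sorted list of ops in the graph that TensorRT lacks."""
    return sorted(set(graph_ops) - TRT_SUPPORTED)

model_ops = ["Placeholder", "Conv2D", "Relu", "SpaceToDepth", "MyCustomOp"]
print(unsupported_ops(model_ops))
# ['MyCustomOp', 'SpaceToDepth']
```

Each op this reports would need either a supported-op substitute in the TensorFlow model or a TensorRT plugin implementation.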
    Our Attachments:
    test.py: the TensorFlow model definition code, written in Python.
    test.pb: the model exported from the TensorFlow code.
    test.pb.txt: the exported model in text format.
    test.uff: the uff file generated with the patched convert_to_uff tool.
    converter.py: the patched converter.py file; its original path is /usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py. Use a diff tool to view the changes.
    graph.py: the patched graph.py file; its original path is /usr/lib/python2.7/dist-packages/uff/model/graph.py.
    sampleUffVAPD.cpp: adapted from sampleUffMNIST.cpp; it just loads the test.uff file. You need to copy the other files from the sampleUffMNIST sample to compile it.
    UFFParser.log: the output log from running the program compiled from sampleUffVAPD.cpp.
    Some things that may be worth trying:
  8. Our model uses 7 operations that exist in TensorFlow but are not supported by TensorRT. For each of them, which operation in “Supported TensorFlow Operations” is the most likely one to transform it into?
  9. Is it possible for a TensorRT plugin to read the channel info encoded in the node name after the ‘:’ symbol?
  10. Is it feasible to patch the UffParser to generate custom operations and layers for TensorRT automatically, since we use custom operations deeply and heavily?
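On point 9: in TensorFlow, a tensor name is the producing op's name followed by ‘:’ and an output index (e.g. “conv1/BiasAdd:0”), and a bare op name implies output 0. A minimal sketch of recovering that index from a node-input string, independent of any TensorRT API:

```python
def split_tensor_name(tensor_name):
    """Split a TensorFlow tensor name 'op_name:index' into the op name and
    the output index; a bare op name implies output 0."""
    if ":" in tensor_name:
        op_name, index = tensor_name.rsplit(":", 1)
        return op_name, int(index)
    return tensor_name, 0

print(split_tensor_name("conv1/BiasAdd:1"))  # ('conv1/BiasAdd', 1)
print(split_tensor_name("input"))            # ('input', 0)
```

Whether a TensorRT plugin can act on that index is a separate question, but parsing it out of the serialized graph is straightforward.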
    All attachments can be retrieved by contacting Rocky Tang, Ming Li, or Joey Zhang at NVIDIA. We present these to focus the discussion on our problem and to find a solution faster; we really want to see the TensorRT + INT8 performance!
    We need advice. Thank you!

pb2uff.zip (7.42 KB)

I uploaded the zip file containing all the data and source code files, using the method Rocky described.


The Python-based uff parser doesn’t support the plugin API.
We will check your source code and follow up with further suggestions.



TensorRT 3 doesn’t provide a plugin interface for the uff parser.
Currently, the plugin API is only available for Caffe users.

We are sorry for such a late reply.
And thank you for your feedback.

We got it, thanks.