Converting frozen inference graph of Deeplab model into UFF

Dear all,
I would like to run an Xception_71 model from the DeepLab Model Zoo [1], trained on the Cityscapes dataset, on my PX2 (AutoChauffeur, Ubuntu 16.04, AArch64) using TensorRT. To do that, I obviously need to convert its frozen inference graph into a UFF file; as far as I understand (please correct me if I am wrong), I do not have to do that on the PX2 itself but can use a different machine, since UFF files are not platform- or GPU-specific. The downloaded model consists of three files:

  • frozen_inference_graph.pb
  • model.ckpt.data-00000-of-00001
  • model.ckpt.index

My question is whether I just need to run the UFF converter on frozen_inference_graph.pb as described in the SDK documentation [2], i.e. run

convert-to-uff frozen_inference_graph.pb

to get my UFF file, or whether I am missing something essential here.

Many thanks in advance!

[1] https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md
[2] https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#import_tf_python

Hi,

Yes, you can use the “convert-to-uff” script to convert the frozen model to UFF; alternatively, you can use the UFF converter API.
Please refer to the link below for further details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/uff.html
Supported UFF operators:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html
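For example, you can first list the graph nodes to find the output node name and then run the converter on the frozen graph. The invocation below is only a sketch: <output_node_name> is a placeholder, and the exact flags may vary between UFF versions, so please check convert-to-uff -h.

convert-to-uff frozen_inference_graph.pb -l

convert-to-uff frozen_inference_graph.pb -o frozen_inference_graph.uff -O <output_node_name>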

Another alternative is to convert your model to ONNX using tf2onnx and then convert it to a TensorRT engine using the ONNX parser. Any layers that are not supported need to be replaced by a custom plugin.
https://github.com/onnx/tensorflow-onnx
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md
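A typical tf2onnx invocation on a frozen graph looks roughly like the line below. The node names are placeholders and the opset is just an example; please verify the current options with python -m tf2onnx.convert --help.

python -m tf2onnx.convert --graphdef frozen_inference_graph.pb --inputs <input_node>:0 --outputs <output_node>:0 --opset 11 --output model.onnx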

If you are using TRT 7, please note that the Caffe parser and the UFF parser are deprecated in TensorRT 7.
Please plan to migrate your workflow to tf2onnx, keras2onnx or TensorFlow-TensorRT (TF-TRT) for deployment.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-release-notes/tensorrt-7.html#rel_7-0-0

Thanks

Hi SunilJB,
Thanks a lot! Could you also give some advice on how to use the resulting UFF file for inference on the PX2? Since I need to deploy it on an automotive platform with the PX2, I can only use the C++ API, so I am thinking about modifying one of the samples from https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html
Thanks a lot in advance!

Hi,

Since I don’t have much experience with the PX2, I might not be able to help you with PX2-related queries, but feel free to reach out to our experts on the “Drive PX2” forum.

Regarding the UFF file: once the UFF model file is generated, you need to use the UFF parser to build an optimized TRT engine; a minimal C++ sketch is included after the sample links below.
You can refer to the links below for examples specific to UFF.
https://github.com/NVIDIA/TensorRT/tree/release/7.0/samples/opensource/sampleUffSSD
https://github.com/NVIDIA/TensorRT/tree/release/7.0/samples/opensource/sampleUffPluginV2Ext
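As a rough illustration, engine creation with the UFF parser and the C++ API looks like the sketch below. It follows the pattern of the samples above; the input/output node names, input dimensions and workspace size are assumptions you will need to adapt to your DeepLab export (you can list the node names with convert-to-uff -l).

#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Create builder, network and UFF parser.
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // ASSUMPTION: node names and dimensions are typical for a DeepLab export;
    // verify them for your own frozen graph.
    parser->registerInput("ImageTensor", nvinfer1::Dims3(3, 513, 513),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("SemanticPredictions");

    if (!parser->parse("frozen_inference_graph.uff", *network,
                       nvinfer1::DataType::kFLOAT))
    {
        std::cerr << "Failed to parse UFF file" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28); // 256 MB scratch space (example value)

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine)
    {
        std::cerr << "Failed to build engine" << std::endl;
        return 1;
    }

    // Serialize the engine so it can be reloaded on the target at runtime.
    nvinfer1::IHostMemory* serialized = engine->serialize();
    // ... write serialized->data(), serialized->size() to a plan file ...

    serialized->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}

At runtime you deserialize the plan file (or use the engine you just built), create an execution context, copy the input image to device memory, run execute()/enqueue() with the input/output bindings and copy the result back; the samples linked above show this inference part as well.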

Supported UFF operators:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html

You can also try tf2onnx + ONNX parser as an alternative.
https://github.com/onnx/tensorflow-onnx

Thanks