Tensorflow saved_model.pb to UFF

I have a Tensorflow saved model (e.g. one of the models available here).

The workflow I am looking at requires having a UFF version of the file to run in TensorRT.

The examples available in /usr/src/tensorrt/samples/python show ways of converting from a Tensorflow frozen_model with uff.from_tensorflow_frozen_model.
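For reference, that frozen-model path looks roughly like this (a sketch only; the filename and output node name below are placeholders):

import uff

# Convert a frozen TensorFlow graph to UFF and write it to disk.
# "frozen_inference_graph.pb" and the "NMS" output node are placeholders.
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["NMS"],
    output_filename="model.uff")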

You can also load a saved_model.pb using TF-TensorRT, as shown in this example: https://github.com/tensorflow/tensorrt/blob/master/tftrt/examples/object_detection/object_detection.py
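For context, that TF-TRT route looks roughly like this (a sketch using the TrtGraphConverter API from TF 1.14+; the paths and precision mode are placeholders):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Build a TF-TRT optimized graph directly from a saved_model directory.
# The directory paths and precision mode are placeholders.
converter = trt.TrtGraphConverter(
    input_saved_model_dir="ssd_model/saved_model",
    precision_mode="FP16")
converter.convert()
converter.save("ssd_model/trt_saved_model")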

However, this latter approach doesn't generate a UFF file, and all of my existing workflows require one.

Is there a way to get a UFF file from a Tensorflow saved_model?

Hi,

To generate a UFF file, please use the command below:

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o [output].uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py

config.py is model dependent, and you may need to update it for a customized model.
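For reference, config.py is a graphsurgeon preprocessor script. A rough sketch of its structure (the node names, ops and parameters below are illustrative only, not the exact values from sampleUffSSD):

import graphsurgeon as gs

# Define a TensorRT plugin node and map a TensorFlow namespace onto it.
# The plugin name, op and parameters here are illustrative placeholders.
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    numClasses=91,
    topK=100)

namespace_plugin_map = {"Postprocessor": NMS}

def preprocess(dynamic_graph):
    # convert_to_uff.py invokes this hook when a config is passed with -p.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)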

If you are looking for an API sample to convert the .pb file, please check the following GitHub:

Thanks.

Hi there,

I have found how to convert to UFF from a frozen_model; the issue is how to convert to UFF from a saved_model.

This workflow only works for frozen_models; if I pass a saved_model to it, it fails with:

  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 993, in _DecodeUnknownField
    raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.

One approach could be using Tensorflow to turn the saved_model into a frozen_model. However, the approaches I've tried haven't worked.
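For illustration, the kind of approach I mean is a TF1-style freeze like this sketch (the export dir and output node names are the ones for my detector):

import tensorflow.compat.v1 as tf

# Load the saved_model into a TF1 session, then fold the variables into
# constants so the graph can be written out as a single frozen .pb file.
# The export dir and output node names are specific to my model.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "saved_model/")
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(),
        ["num_detections", "detection_classes",
         "detection_boxes", "detection_scores"])
with tf.gfile.GFile("frozen_inference_graph.pb", "wb") as f:
    f.write(frozen.SerializeToString())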

Since TF-TensorRT can load saved_models directly, is there a workflow that does this for UFF?

Freezing graphs is a notoriously tricky process in Tensorflow when starting from a saved_model.pb. Here's an example from a publicly available model: http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8.tar.gz

Running freeze_graph for this model fails:

python3 -m tensorflow.python.tools.freeze_graph --input_saved_model_dir ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8//saved_model/ --output_node_names 'num_detections','detection_classes','detection_boxes','detection_scores'

This fails even though the output tensor names are verified as correct with saved_model_cli show --all --dir ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model/.
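Another route I've been looking at is the TF2 convert_variables_to_constants_v2 helper, sketched below, though I don't know whether the resulting graph is something the UFF parser can digest:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the TF2 saved_model, take its serving signature, and fold the
# variables into constants so the graph can be written as a frozen .pb.
loaded = tf.saved_model.load(
    "ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model")
infer = loaded.signatures["serving_default"]
frozen_func = convert_variables_to_constants_v2(infer)
tf.io.write_graph(frozen_func.graph.as_graph_def(), ".",
                  "frozen_inference_graph.pb", as_text=False)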

Do you have any suggestions on how to either freeze this model, or convert the saved model directly to UFF?

Hi,

The UFF converter needs a frozen .pb file as input.
If you want to use a non-frozen model, another workflow is to convert the model into an ONNX-based format.

ONNX is also one of TensorRT's supported formats, and you can find the converter below:
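As an example of that route (a sketch only; this assumes the tf2onnx converter and trtexec, and the paths and opset value are placeholders):

# Convert the saved_model directly to ONNX.
python3 -m tf2onnx.convert --saved-model ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model --output model.onnx --opset 11

# Build a TensorRT engine from the ONNX file with trtexec.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt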

Thanks.

Thanks for the help!

I was looking at the ONNX converter in parallel, and I think it would have done the job had it not been for the unsupported operation.

I am moving to TF-TRT for now.

Please also check the following suggestion on the topic above.

Thanks.