I have a TensorFlow saved_model (e.g. one of the models available here).
The workflow I am looking at requires a UFF version of the model to run it in TensorRT.
Following the examples in /usr/src/tensorrt/samples/python, there is a way to convert a TensorFlow frozen model with uff.from_tensorflow_frozen_model.
I have found how to convert to UFF from a frozen model; the issue is how to convert to UFF from a saved_model.
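For reference, the frozen-model conversion I have working looks roughly like this (the .pb path and the output node name are placeholders, not the real names from the model above):

import uff

# Convert a frozen GraphDef (.pb) into a .uff file.
# "NMS" stands in for the graph's actual output node name(s).
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["NMS"],
    output_filename="model.uff")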
This workflow only works for frozen models; if I pass a saved_model to it, it fails with:
File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 993, in _DecodeUnknownField
raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.
One approach could be using TensorFlow to turn the saved_model into a frozen model. However, the approaches I’ve tried haven’t worked.
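For reference, the kind of freezing approach I have been trying looks roughly like this (a TF2-style sketch; paths and signatures may need adjusting):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the TF2 saved_model and grab its serving signature.
saved_model_dir = "ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model"
model = tf.saved_model.load(saved_model_dir)
concrete_func = model.signatures["serving_default"]

# Inline the variables as constants and write the frozen GraphDef to disk.
frozen_func = convert_variables_to_constants_v2(concrete_func)
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=".",
                  name="frozen_graph.pb",
                  as_text=False)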
Since TF-TensorRT can load saved_models directly, is there a workflow that does this for UFF?
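For context, the TF-TRT workflow I am referring to takes the saved_model directory as-is, roughly:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# TF-TRT consumes the saved_model directly, with no freezing or UFF step.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model")
converter.convert()
converter.save("trt_saved_model")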
Coming back to freezing: the attempts fail even when we verify that the output tensor names are correct with saved_model_cli show --all --dir ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model/.
Do you have any suggestions on how to either freeze this model, or convert the saved model directly to UFF?
The UFF converter needs a frozen .pb file as input.
If you want to use a non-frozen model, another workflow is to convert the model into the ONNX format.
ONNX is also one of the formats supported by TensorRT, and you can find the converter below:
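For example, assuming the converter meant here is tf2onnx (the usual TensorFlow-to-ONNX exporter, not named explicitly above), a saved_model can typically be exported without freezing:

python -m tf2onnx.convert \
    --saved-model ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model \
    --output model.onnx \
    --opset 11

The resulting model.onnx can then be consumed by TensorRT's ONNX parser (for example via trtexec --onnx=model.onnx).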