Converting TensorFlow .pb models to TensorRT models to run on the Nano

Hi! First time poster.
We have been building and deploying our deep learning object detection models with TensorFlow and running them on our NVIDIA GPUs for some time now. I have purchased a Jetson Nano and have been experimenting with it. I was able to follow all of the tutorials and have it detecting well with the standard SSD models.

I would like to convert my frozen_inference_graph.pb models to run on the Nano (.uff). I have installed TensorRT as well as the uff-converter-tf Python script from here:

I have searched for some time trying to find a step-by-step guide on how to do this. Could anyone provide a little insight?
I have read through

UFF Converter — NVIDIA TensorRT Standard Python API Documentation 8.4.3 documentation

but cannot find the path to the Python program and am unsure which parameters to pass other than my .pb file.
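From the API documentation it looks like the converter can also be driven straight from Python, so I was picturing something roughly like this (untested sketch; the paths are placeholders and the output node names are guesses based on the TensorFlow object detection API, so they would need to match my actual graph):

import uff

# Untested sketch based on the UFF Python API docs.
# The .pb path, output node names, and .uff path are placeholders.
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["detection_boxes", "detection_scores",
                  "detection_classes", "num_detections"],
    output_filename="frozen_inference_graph.uff",
)

Is that the right idea, or is there a command-line script I should be calling instead?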
Thanks!

Hi,

You can find a sample at /usr/src/tensorrt/samples/python/uff_ssd/.
Please follow the README file for the detailed steps.
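Roughly, and assuming the sample here matches the public README, the flow is:

$ cd /usr/src/tensorrt/samples/python/uff_ssd/
$ python3 detect_objects.py [test/image/path]

The first run should download the SSD model, convert it to UFF, and build the TensorRT engine before running detection, so it takes a while.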

Thanks.

Thanks for this. I managed to work through quite a few errors and am now held up here. I don't see a logical error message to troubleshoot:

python3 detect_objects.py /home/pal/Documents/img1.jpg

/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/pal/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/_utils.py:2: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.

WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/DynamicGraph.py:4: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

An error occurred when running the script.

Hi,

Do you see any other error messages?
The log you shared contains only warnings.

Thanks.

It returned no error message and no output. Is this the proper way to convert my .pb model to .uff format so I can run it on my Nano? Since I am stuck here, I found a second approach on Stack Overflow that may work as well:
https://stackoverflow.com/questions/59345600/converting-tensorflow-frozen-graph-to-uff-for-tensorrt-inference

This also looks promising; I just need to know where to run these from:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/uff.html

Just looking for a good solution to do this so I can continue testing.
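One thing I know I will need either way is the exact output node names to pass to the converter. For anyone else following along, this is the kind of snippet I have been using to list them from a frozen graph (TF 1.x; the .pb path is a placeholder):

import tensorflow as tf

# List every node in the frozen graph so the output nodes can be
# identified and passed to the UFF converter.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)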

Update: I am getting close by following this doc:

I managed to find the converter located in:
/usr/lib/python3.6/dist-packages/uff/bin

I tried running the command on multiple saved models and got the same error. To make sure this wasn’t an issue with my models I downloaded one from the model zoo. Same issue:
python3 convert_to_uff.py --input-file /home/pal/Documents/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb

output:
Converting Preprocessor/map/while/TensorArrayReadV3/Enter as custom op: Enter
Traceback (most recent call last):
  File "convert_to_uff.py", line 96, in <module>
    main()
  File "convert_to_uff.py", line 92, in main
    debug_mode=args.debug
  File "../../uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "../../uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
    debug_mode=debug_mode)
  File "../../uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "../../uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "../../uff/converters/tensorflow/converter.py", line 41, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "../../uff/converters/tensorflow/converter.py", line 222, in parse_tf_attrs
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "../../uff/converters/tensorflow/converter.py", line 222, in <dictcomp>
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "../../uff/converters/tensorflow/converter.py", line 218, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "../../uff/converters/tensorflow/converter.py", line 190, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "../../uff/converters/tensorflow/converter.py", line 103, in convert_tf2numpy_dtype
    return tf.as_dtype(dtype).as_numpy_dtype
  File "/home/pal/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py", line 126, in as_numpy_dtype
    return _TF_TO_NP[self._type_enum]
KeyError: 20

Any idea what is causing the KeyError: 20?
Running TensorFlow 1.14 on Ubuntu Desktop 16.04.
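For reference, dtype enum 20 in TensorFlow's types.proto is DT_RESOURCE, so my guess is the converter is choking on a node with a resource-typed attribute (the Enter/TensorArray ops in the Preprocessor loop look like candidates), which has no numpy equivalent in _TF_TO_NP. An untested snippet to confirm which nodes carry that dtype:

import tensorflow as tf
from tensorflow.core.framework import types_pb2

# Untested: flag nodes whose attributes use DT_RESOURCE (enum value 20),
# the dtype the UFF converter cannot map to a numpy dtype.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    for attr_name, attr in node.attr.items():
        if attr.WhichOneof("value") == "type" and attr.type == types_pb2.DT_RESOURCE:
            print(node.op, node.name, attr_name)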

Hi,

Sorry for the late update.
Could you try running this command to see if it helps?

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py [pb/file/path] -o [output/file/name] -O [output/layer/name] -p /usr/src/tensorrt/samples/sampleUffSSD/config.py
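The -p flag points the converter at a preprocessing script. The config.py shipped with sampleUffSSD defines a preprocess(dynamic_graph) function that replaces the TensorFlow subgraphs UFF cannot parse (such as the preprocessing loop and the postprocessing/NMS stage) with plugin or placeholder nodes. As a heavily stripped-down illustration only (the names and shape below are placeholders, not the real sample):

import graphsurgeon as gs
import tensorflow as tf

# Illustration only: placeholder shape and namespace mapping.
# The real config.py maps all of the SSD-specific subgraphs to plugins.
Input = gs.create_node("Input", op="Placeholder",
                       dtype=tf.float32, shape=[1, 3, 300, 300])

namespace_plugin_map = {
    "Preprocessor": Input,  # collapse the preprocessing loop into the input
}

def preprocess(dynamic_graph):
    # Swap unsupported namespaces for the nodes defined above.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)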

Thanks.

Thanks for the update. I tried the command, passing my parameters:

sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py /home/pal/deeppress/dp/trained_models/RiverAi_X1/frozen_inference_graph.pb -o /home/pal/deeppress/dp/trained_models/RiverAi_X1/riverai.uff -O /home/pal/deeppress/dp/trained_models/RiverAi_X1/cards.pbtxt -p /usr/src/tensorrt/samples/sampleUffSSD/config.py

and the output was:

  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 105, in from_tensorflow
    pre = importlib.import_module(os.path.splitext(os.path.basename(preprocessor))[0])
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'config'

Please let me know if this looks obvious. I am just trying to run a MobileNetV2 model. I am using the newest version of JetPack on my Nano, and it runs TensorFlow 2.0 now. The problem is I don't think Nanos are really meant to be used in this fashion; I have to convert my models for TensorRT.
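Digging into that traceback, from_tensorflow imports the preprocessor by basename only (importlib.import_module on os.path.basename(preprocessor)), so it looks like the directory containing config.py may never land on sys.path. If that reading is right, running the command from the directory that holds config.py might sidestep the error. Untested:

$ cd /usr/src/tensorrt/samples/sampleUffSSD
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py /home/pal/deeppress/dp/trained_models/RiverAi_X1/frozen_inference_graph.pb -o riverai.uff -p config.py

(I also suspect -O expects an output node name rather than my .pbtxt label file, but one problem at a time.)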

Hi,

Could you try to install config and run the command again?

$ sudo pip3 install config

Thanks.