I encountered "ImportError: No module named uff" when I converted a frozen graph to a TensorRT engine with JetPack 3.3 without manually installing UFF.
I got a different error when I converted the frozen graph with JetPack 3.3 after manually installing TensorRT 4 and uff.
The error message is below.
However, I successfully converted mobilenet_v1_0p25_128.plan with JetPack 3.3 and a manually installed TensorRT 3 and uff.
How exactly should I run TensorFlow with TensorRT under JetPack 3.3 on the TX2?
Error message:
nvidia@tegra-ubuntu:~/tf_to_trt_image_classification$ python scripts/convert_plan.py data/frozen_graphs/mobilenet_v1_0p25_128.pb data/plans/mobilenet_v1_0p25_128.plan input 128 128 MobilenetV1/Logits/SpatialSqueeze 1 0 float
Traceback (most recent call last):
  File "scripts/convert_plan.py", line 71, in <module>
    data_type
  File "scripts/convert_plan.py", line 22, in frozenToPlan
    text=False,
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 41, in from_tensorflow
    https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensorRT Python API' link""".format(err))
ImportError: ERROR: Failed to import module (No module named graphsurgeon)
Please make sure you have graphsurgeon installed.
For installation instructions, see: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensorRT Python API' link
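
From the traceback, uff itself imports but graphsurgeon is missing. Presumably it can be installed from the same tarball as the uff wheel, along these lines (assuming TensorRT-4.0.1.6 ships a graphsurgeon wheel next to the uff one; the exact wheel filename below is a guess):

$ sudo pip install TensorRT-4.0.1.6/graphsurgeon/graphsurgeon-0.2.0-py2.py3-none-any.whl  # wheel filename is a guess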
When I extracted TensorRT-4.0.1.6, I got the folder TensorRT-4.0.1.6, not TensorRT-3.0.4.
I followed the steps you described. However, I got two different results at step 5.
The conversion to a TensorRT engine succeeded when I installed the uff wheel from TensorRT-3.0.4:
$ sudo pip install TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl
The conversion failed when I installed the uff wheel from TensorRT-4.0.1.6:
$ sudo pip install TensorRT-4.0.1.6/uff/uff-0.2.0-py2.py3-none-any.whl
The error message:
nvidia@tegra-ubuntu:~/tf_to_trt_image_classification$ python scripts/convert_plan.py data/frozen_graphs/mobilenet_v1_0p25_128.pb data/plans/mobilenet_v1_0p25_128.plan input 128 128 MobilenetV1/Logits/SpatialSqueeze 1 0 float
Traceback (most recent call last):
  File "scripts/convert_plan.py", line 71, in <module>
    data_type
  File "scripts/convert_plan.py", line 22, in frozenToPlan
    text=False,
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 41, in from_tensorflow
    https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensorRT Python API' link""".format(err))
ImportError: ERROR: Failed to import module (No module named graphsurgeon)
Please make sure you have graphsurgeon installed.
For installation instructions, see: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/#python and click on the 'TensorRT Python API' link
nvidia@tegra-ubuntu:~/tf_to_trt_image_classification$
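
To see what pip actually installed in each case, a quick check like the following may help (pip show and a module's __file__ attribute are standard pip/Python, nothing TensorRT-specific):

$ pip show uff graphsurgeon
$ python -c "import uff; print(uff.__file__)"  # shows which uff package gets imported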
I am using the TensorRT-3.0.4 uff package along with the recommended CUDA (9.0), cuDNN (7.0), and TensorFlow (1.12.0) versions. However, I am having trouble writing the uff model with the "convert_to_uff.py" script at "/usr/local/lib/python2.7/dist-packages/uff/bin". I have tried every combination of options to dump the uff model, but none has been successful.
Could you please help me out and let me know how to save the uff model converted by this script?
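
For reference, this is the kind of call I would expect to dump the model, based on the from_tensorflow_frozen_model call visible in the tracebacks above (the output_filename keyword is my assumption for how the converter writes the model to disk, and the paths/node name are taken from the MobileNet example in this thread, so substitute your own):

import uff

# Sketch: convert a frozen TensorFlow graph to UFF and write the .uff file
# to disk. output_filename is assumed to make the converter save the model
# instead of only returning it in memory.
uff.from_tensorflow_frozen_model(
    "data/frozen_graphs/mobilenet_v1_0p25_128.pb",
    ["MobilenetV1/Logits/SpatialSqueeze"],
    output_filename="data/frozen_graphs/mobilenet_v1_0p25_128.uff",
    text=False,
)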