I found the script convert_to_uff.py (in dist-packages), but when I try to use it:
python3 convert_to_uff.py frozen_inference_graph.pb -o output.uff
It tells me:
Traceback (most recent call last):
File "convert_to_uff.py", line 96, in <module>
main()
File "convert_to_uff.py", line 92, in main
debug_mode=args.debug
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
uff_graph, input_replacements, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 41, in convert_layer
fields = cls.parse_tf_attrs(tf_node.attr)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 222, in parse_tf_attrs
return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 222, in <dictcomp>
return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 218, in parse_tf_attr_value
return cls.convert_tf2uff_field(code, val)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 190, in convert_tf2uff_field
return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 103, in convert_tf2numpy_dtype
return tf.as_dtype(dtype).as_numpy_dtype
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py", line 126, in as_numpy_dtype
return _TF_TO_NP[self._type_enum]
KeyError: 20
Any ideas? Am I using the wrong syntax?
I understand some operations are not yet convertible from TensorFlow to TRT, but I am converting an ssd_inception_v2 network to TRT, which I know is supported since it is one of the networks in the examples.
Hi,
It looks like some arguments are missing from your command.
Please use something similar to this:
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py [your/pb/file] -o [output/uff/name] -O [output/layer/name] -p /usr/src/tensorrt/samples/sampleUffSSD/config.py
Or you can check this tutorial for the ssd_inception_v2 network directly:
https://github.com/AastaNV/TRT_object_detection
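For reference, the config.py passed via -p is a graphsurgeon preprocessing script. A minimal sketch of its structure (the real file shipped with sampleUffSSD defines the full set of plugin nodes and parameters; the mappings below are illustrative only):

import graphsurgeon as gs
import tensorflow as tf

# Replace the graph input with a plain placeholder of the expected CHW shape.
Input = gs.create_node("Input", op="Placeholder", dtype=tf.float32, shape=[1, 3, 300, 300])

# Map an unsupported TensorFlow op onto a TensorRT plugin op.
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT")

# Which graph namespaces collapse into which replacement nodes (illustrative).
namespace_plugin_map = {
    "image_tensor": Input,
    "concat": concat_box_loc,
}

def preprocess(dynamic_graph):
    # convert_to_uff.py calls this hook before conversion.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)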
Thanks.
Using output node /home/neural-networks/models/convertToTRT/testTRT.layer
Converting to UFF graph
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 96, in <module>
main()
File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 92, in main
debug_mode=args.debug
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
uff_graph, input_replacements, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 62, in convert_tf2uff_node
raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: /home/neural-networks/models/convertToTRT/testTRT.layer was not found in the graph. Please use the -l option to list nodes in the graph.
I am confused… It sounds like it is looking for the file it is supposed to output, testTRT.layer.
This was my input:
sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o /home/neural-networks/models/convertToTRT/testTRT.uff -O /home/neural-networks/models/convertToTRT/testTRT.layer -p /home/TensorRT-7.0.0.11/samples/sampleUffSSD/config.py
If I skip the layer part:
sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o /home/neural-networks/models/convertToTRT/testTRT.uff -p /home/TensorRT-7.0.0.11/samples/sampleUffSSD/config.py
it gives me a .uff file.
Unfortunately, running this with the detectnet example gives me:
:~/jetson-inference/python/examples$ ./detectnet-camera.py --camera /dev/video0 --network=raccoonnet
jetson.inference.__init__.py
jetson.inference -- initializing Python 2.7 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 2.7 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 2.7 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 2.7 binding initialization
jetson.inference -- PyTensorNet_New()
jetson.inference -- PyDetectNet_Init()
jetson.inference -- detectNet loading network using argv command line params
jetson.inference -- detectNet.__init__() argv[0] = './detectnet-camera.py'
jetson.inference -- detectNet.__init__() argv[1] = '--camera'
jetson.inference -- detectNet.__init__() argv[2] = '/dev/video0'
jetson.inference -- detectNet.__init__() argv[3] = '--network=raccoonnet'
detectNet -- loading detection network model from:
-- model networks/SSD-Raccoonnet/ssd_raccoon.uff
-- input_blob 'Input'
-- output_blob 'NMS'
-- output_count 'NMS_1'
-- class_labels networks/SSD-Raccoonnet/ssd_raccoon_labels.txt
-- threshold 0.500000
-- batch_size 1
[TRT] TensorRT version 6.0.1
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /usr/local/bin/networks/SSD-Raccoonnet/ssd_raccoon.uff.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /usr/bin/ /usr/local/bin/networks/SSD-Raccoonnet/ssd_raccoon.uff
[TRT] UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TRT] failed to parse UFF model '/usr/local/bin/networks/SSD-Raccoonnet/ssd_raccoon.uff'
[TRT] device GPU, failed to load networks/SSD-Raccoonnet/ssd_raccoon.uff
detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load built-in network 'raccoonnet'
PyTensorNet_Dealloc()
Traceback (most recent call last):
File "./detectnet-camera.py", line 49, in <module>
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
If I take the ".engine" file from the mobilenet network and put it into my network's /data/networks/SSD-NETWORK/ folder, the program starts, but it uses mobilenet's network instead of mine. And when I generated my uff, it did not produce a .engine file.
Is it possible that the issue is at line 54 (Unsupported operation _FusedBatchNormV3)? If so, it is odd that only that one had problems, considering there are also FusedBatchNormV1, 2, and 4 operations in the graph.pbtxt file.
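A quick way to check which FusedBatchNorm variants are actually in the frozen graph (just a sketch using the standard GraphDef protobuf API):

import collections
import tensorflow as tf

# Count each FusedBatchNorm variant present in the frozen graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

print(collections.Counter(node.op for node in graph_def.node
                          if "FusedBatchNorm" in node.op))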
Hi,
The error in comment #4 shows that the output layer was not specified correctly: the -O option expects the name of an output node inside the graph, not a file path.
For the ssd_inception_v2 model, the output layer name should be NMS.
Ex.
sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p config.py
You can also check our README for more information:
/usr/src/tensorrt/samples/sampleUffSSD/README.md
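If you are unsure which node names exist in your graph, the converter itself can list them with the -l option (the same option suggested in the error message above):
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -l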
Thanks.
Thanks. Unfortunately, there are still two problems. First, the converter still complains about the network:
~/convertToTRT$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py
2020-01-17 07:53:40.140960: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
Loading frozen_inference_graph.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:227: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
input: "Cast"
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
dim {
size: 3
}
dim {
size: 300
}
dim {
size: 300
}
}
}
}
]
=========================================
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.
Warning: No conversion function registered for layer: Cast yet.
Converting Cast as custom op: Cast
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
etc…
Second, the inference code still fails with the same error:
[TRT] UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TRT] failed to parse UFF model '/usr/local/bin/networks/SSD-Raccoonnet/ssd_raccoon.uff'
[TRT] device GPU, failed to load networks/SSD-Raccoonnet/ssd_raccoon.uff
detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load built-in network 'raccoonnet'
PyTensorNet_Dealloc()
Traceback (most recent call last):
File "./detectnet-camera.py", line 49, in <module>
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
Also, it seems weird that the converter complains about TF 1.14 when TF 2 doesn't seem to be officially supported on Xavier (at least I was unable to find an official installation guide). Installing TF 2 via:
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 tensorflow-gpu==2
causes problems right away:
~/convertToTRT$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py
2020-01-17 11:19:57.588473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 18, in <module>
from tensorflow import GraphDef
ImportError: cannot import name 'GraphDef'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 18, in <module>
import uff
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/__init__.py", line 2, in <module>
from uff.converters.tensorflow.conversion_helpers import from_tensorflow # noqa
File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 23, in <module>
https://www.tensorflow.org/install/""".format(err))
ImportError: ERROR: Failed to import module (cannot import name 'GraphDef')
Please make sure you have TensorFlow installed.
For installation instructions, see:
https://www.tensorflow.org/install/
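(This import failure makes sense: TF 2.x removed the top-level tf.GraphDef symbol that the UFF converter imports directly; it now lives under tf.compat.v1. A quick check, as a sketch:)

import tensorflow as tf

print(tf.__version__)
# Under TF 2.x the top-level symbol is gone and raises AttributeError:
#   tf.GraphDef
# The protobuf class is still reachable through the compat module:
print(tf.compat.v1.GraphDef)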
OK, so the problem isn't TF 1.14. It seems to have something to do with the converter not knowing what a batch normalization is…
Hi,
FusedBatchNormV3 is a new operation in TensorFlow and doesn’t have TensorRT support yet.
A possible workaround is to convert the model into ONNX format first.
Could you check if this comment works for you?
https://devtalk.nvidia.com/default/topic/1066445/tensorrt/tensorrt-6-0-1-tensorflow-1-14-no-conversion-function-registered-for-layer-fusedbatchnormv3-yet/post/5403567/#5403567
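For example, with the tf2onnx package (a sketch; the input/output tensor names below are the TensorFlow Object Detection API defaults, so please verify them against your own graph):
$ pip3 install tf2onnx
$ python3 -m tf2onnx.convert --graphdef frozen_inference_graph.pb --output model.onnx --inputs image_tensor:0 --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0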
Thanks.