@dusty_nv, yes, parser.register_input("Placeholder", (1, 28, 28), 0) works. But I don't know how to use that function, because I cannot find the register_input() method in the Python API of TensorRT 3.0.
The Python API documentation is located in the python/doc directory of the tarball download. It includes the package references for the inference engine, utilities, and parsers. The register_input() function takes the name of the input layer as the first parameter, in addition to the input tensor dimensions. Replace "Placeholder" with the name of your input layer, typically called "data" in network architectures like AlexNet/GoogLeNet/ResNet.
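For reference, here is a minimal sketch of that flow with the TensorRT 3.0 Python API (the input name "data", output name "out", and file name "frozen_graph.pb" are placeholders; the exact logger/utility names may differ slightly in your build):

import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

# Convert the frozen TensorFlow graph to UFF in memory
# ("frozen_graph.pb" and "out" are placeholders).
uff_model = uff.from_tensorflow_frozen_model("frozen_graph.pb", ["out"])

parser = uffparser.create_uff_parser()
# Name of the input node, its CHW dimensions, and the input-order flag.
parser.register_input("data", (1, 28, 28), 0)
parser.register_output("out")

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
# Max batch size 1, max workspace size 1 << 20 bytes.
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)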
Hi, there seems to be a minor typo in the user guide (TensorRT-3-User-Guide.pdf, DU-08602-001_v3.0) in section 2.3.2.2.3, in the convert-to-uff arguments: it should be --input-file, with a - instead of a _.
Nice example in 2.3.2.1.1, Training a Model in TensorFlow. It would be nice to have a simple end-to-end Keras example too.
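(For what it's worth, a rough sketch of the Keras-to-frozen-graph step such an example would need, assuming standalone Keras on the TF 1.x backend; the file names are placeholders:)

import tensorflow as tf
from keras import backend as K
from keras.models import load_model

K.set_learning_phase(0)                  # inference mode
model = load_model("my_keras_model.h5")  # placeholder file name

sess = K.get_session()
output_name = model.output.op.name
# Fold variables into constants so the graph can be frozen.
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [output_name])
tf.train.write_graph(frozen, ".", "frozen_graph.pb", as_text=False)
# frozen_graph.pb can then be passed to convert-to-uff.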
Hi, I had the same problem with parser.register_input("Placeholder", (1, 28, 28)) and it's solved.
But a new error message showed up; here it is:
[TensorRT] ERROR: UFFParser: Parser error: x: Invalid number of Dimensions 0
[TensorRT] ERROR: Failed to parse UFF model stream
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py", line 127, in uff_to_trt_engine
    assert(parser.parse(stream, network, datatype))
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yan/PycharmProjects/Python36/MNIST_CNN/test.py", line 34, in <module>
    1 << 20)
  File "/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py", line 135, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 127 in statement assert(parser.parse(stream, network, datatype))
Any suggestions on this?
Thank you for reporting this; we will submit the change for the TensorRT 3.0 GA release.
Hi, does njsong's suggestion work for you? I noticed Python version 3.5 from your path; do you get the same error with Python 2.7?
Section 2.3.2.1 refers to a .whl file that should have been provided with the installation. "tensorrt-3.0.0_EA-cp27-cp27mu-linux_x86_64.whl" was not included in the download link (neither the Debian nor the tar version). Where can I get this?
@phojjat, search for *.whl files in the tar file.
@nikosR, I searched it but nothing showed up. Could you please tell me the exact location so I can confirm further?
Thanks!
Sure!
tar -zxvf TensorRT-3.0.0.Ubuntu-16.04.3.cuda-8.0.x86_64.tar.gz | grep whl
TensorRT-3.0.0/uff/uff-0.1.0rc0-py2.py3-none-any.whl
TensorRT-3.0.0/python/tensorrt-3.0.0-cp35-cp35m-linux_x86_64.whl
TensorRT-3.0.0/python/tensorrt-3.0.0-cp27-cp27mu-linux_x86_64.whl
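From there you can install the wheel that matches your Python interpreter, for example (the pip executable is an assumption; adjust to your environment):
$ pip2 install TensorRT-3.0.0/python/tensorrt-3.0.0-cp27-cp27mu-linux_x86_64.whl
$ pip3 install TensorRT-3.0.0/python/tensorrt-3.0.0-cp35-cp35m-linux_x86_64.whl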
Thank you! However:
'TensorRT-3.0.0/uff/' is empty.
'TensorRT-3.0.0/python/' only has 'data' and 'doc' folders.
Below are the instructions I followed for installation; the verification checked out, but the above problems remain.
TensorRT 3 RC Release Notes
TensorRT 3 RC for Jetpack 3.x running Ubuntu 16.04 and CUDA 8 tar package
TensorRT 3 RC for Jetpack 3.x running Ubuntu 16.04 and CUDA 8 debian package
Installation Guide for tar packages
Installation:
$ tar xzvf TensorRT-3.0.0.Ubuntu-16.04.3.cuda-8.0.aarch64.tar.gz
$ ls TensorRT-3.0.0
bin data doc include lib python samples targets TensorRT-Release-Notes.pdf uff
Installation Guide for deb packages
Installation:
$ sudo dpkg -i nv-tensorrt-repo-ubuntu1604-rc-cuda8.0-trt3.0-20170922_3.0.0-1_arm64.deb
$ sudo apt-get update
$ sudo apt-get install tensorrt
Verification:
ubuntu@tegra-ubuntu:~$ sudo dpkg -l | grep TensorRT
ii libnvinfer-dev 4.0.0-1+cuda8.0 arm64 TensorRT development libraries and headers
ii libnvinfer-samples 4.0.0-1+cuda8.0 arm64 TensorRT samples and documentation
ii libnvinfer4 4.0.0-1+cuda8.0 arm64 TensorRT runtime libraries
ii tensorrt 3.0.0-1+cuda8.0 arm64 Meta package of TensorRT
@nikosR, I should mention that I am trying to install this on a Jetson TX2, but the above issues persist.
tar -zxvf TensorRT-3.0.0.Ubuntu-16.04.3.cuda-8.0.x86_64.tar.gz | grep whl
tar (child): TensorRT-3.0.0.Ubuntu-16.04.3.cuda-8.0.x86_64.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I am trying to convert a model created by TensorFlow to UFF format; the model is for object detection.
I run
"convert-to-uff tensorflow --input-file frozen_inference_graph.pb -l"
frozen_inference_graph.pb is my object detection model created by TensorFlow.
The command lists the following messages:
12717 Identity: "detection_boxes"
12718 Identity: "detection_scores"
12719 Identity: "detection_classes"
12720 Identity: "num_detections"
then I run
"convert-to-uff tensorflow -o frozen_inference_graph.uff --input-file frozen_inference_graph.pb -O detection_classes"
but I get the following message:
======================================
Loading frozen_inference_graph.pb
Using output node detection_classes
Converting to UFF graph
Warning: No conversion function registered for layer: Identity yet.
Converting as custom op Identity detection_classes
name: "detection_classes"
op: "Identity"
input: "add"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting as custom op TensorArrayGatherV3 SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_2/TensorArrayGatherV3
name: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_2/TensorArrayGatherV3"
op: "TensorArrayGatherV3"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_6"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_2/range"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Exit_3"
attr {
  key: "_class"
  value {
    list {
      s: "loc:@SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_6"
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "element_shape"
  value {
    shape {
      dim {
        size: 300
      }
    }
  }
}
Traceback (most recent call last):
  File "/home/cvml/anaconda2/envs/tensorRT/bin/convert-to-uff", line 11, in <module>
    sys.exit(main())
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/bin/convert_to_uff.py", line 104, in main
    output_filename=args.output
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/home/cvml/anaconda2/envs/tensorRT/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 161, in convert_tf2uff_field
    if shp.unknown_rank:
AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'unknown_rank'
=======================================================================
I am wondering if anyone can help me solve this error?
Thanks
njsong, I had a similar problem when trying to import a fine-tuned Inception model. I'm guessing that you have a map_fn somewhere in your model, which TensorRT can't convert. I would recommend that you print out a pbtxt of your graph and search for "unknown_rank" in it. That should help you pinpoint where it's coming from. Once you've found it, you should find a way to remove the layer/operation.
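(A quick sketch of dumping the frozen graph to pbtxt for that search, assuming a TF 1.x frozen graph; the file names are placeholders:)

import tensorflow as tf
from tensorflow.python.platform import gfile

# Load the frozen GraphDef ("frozen_inference_graph.pb" is a placeholder).
graph_def = tf.GraphDef()
with gfile.FastGFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Write a human-readable graph.pbtxt you can grep for "unknown_rank".
tf.train.write_graph(graph_def, ".", "graph.pbtxt", as_text=True)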
njsong's fix for register_input (adding a zero argument at the end) also worked for me. But I'm having the same problem as yanadsl:
[TensorRT] ERROR: UFFParser: Parser error: input_image: Invalid number of Dimensions 0
[TensorRT] ERROR: Failed to parse UFF model stream
Traceback (most recent call last):
  File "/home/chris/.local/lib/python2.7/site-packages/tensorrt/utils/_utils.py", line 127, in uff_to_trt_engine
    assert(parser.parse(stream, network, datatype))
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dumb_test.py", line 22, in <module>
    1 << 20)
  File "/home/chris/.local/lib/python2.7/site-packages/tensorrt/utils/_utils.py", line 135, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 127 in statement assert(parser.parse(stream, network, datatype))
I installed cuDNN v7 on my desktop and tried to run an example of converting a TensorFlow model to TensorRT, tf_to_trt.py.
I get the following error message:
Loaded runtime CuDNN library: 7001 (compatibility version 7000) but source was compiled with 6021 (compatibility version 6000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
It seems TensorFlow needs cuDNN v6, but TensorRT only supports cuDNN v7.
Any help?
Thanks
njsong, I have the same problem as you:
AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'unknown_rank'
I think this error is raised from this piece of code in /usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py:
elif code == 'shape':
    shp = val.dim
    if shp.unknown_rank:
=========================
and I checked the TF graphs (frozen using both TF 1.2 and 1.3); in the "shape" attribute there is NO unknown_rank field at all…
Have you solved the problem?
Not yet. I checked the pbtxt file of the model; there are a lot of "shape: unknown_rank" occurrences in this file.
It seems hard to remove them.
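(For reference, a rough way to see which nodes actually carry an unknown_rank shape, assuming a TF 1.x frozen graph; the file name is a placeholder:)

from __future__ import print_function
import tensorflow as tf

graph_def = tf.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every node whose shape-typed attribute has unknown_rank set;
# these are usually the map_fn / TensorArray ops mentioned above.
for node in graph_def.node:
    for key, attr in node.attr.items():
        if attr.HasField("shape") and attr.shape.unknown_rank:
            print(node.op, node.name, key)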
See this post; the TensorRT Python API isn't available yet on Jetson in the RC. Those files are included in the PC version of the RC.