PB to UFF conversion error

Description

I tried to convert a .pb model to UFF, but I got this error:

Traceback (most recent call last):
  File "pb_to_uff.py", line 14, in <module>
    text = True) # If set to True, the converter will also write out a human readable UFF file.
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
    debug_mode=debug_mode)
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 47, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter_functions.py", line 33, in convert_const
    array = tf2uff.convert_tf2numpy_const_node(tf_node)
  File "/home/hadoop-hdp/.local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 141, in convert_tf2numpy_const_node
    array = np.frombuffer(data, dtype=np_dtype)
ValueError: cannot create an OBJECT array from memory buffer

Is anything wrong?

Environment

TensorRT Version: 7.0
GPU Type: T4
Nvidia Driver Version: 410.79
CUDA Version: 10.0
CUDNN Version: 7.6.4
Operating System + Version: Centos 7
Python Version (if applicable): 2.7
TensorFlow Version (if applicable): 1.15

Hi,

It doesn’t seem to be a TRT issue; please refer to the link below:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html
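The error in the traceback can be reproduced with NumPy alone: `np.frombuffer` cannot build an array of dtype `object` from a raw buffer, because object dtypes have no fixed binary layout. This is what the converter ends up attempting when a Const node's TensorFlow dtype maps to a NumPy object dtype (e.g. a string tensor). A minimal sketch:

```python
import numpy as np

data = b"\x00\x01\x02\x03"

# Reading a fixed-size numeric dtype from a buffer works fine.
arr = np.frombuffer(data, dtype=np.uint8)
print(arr)  # [0 1 2 3]

# But an object dtype has no fixed binary layout, so NumPy refuses:
try:
    np.frombuffer(data, dtype=object)
except ValueError as e:
    print(e)  # cannot create an OBJECT array from memory buffer
```

So the fix is not on the NumPy side: the model contains a constant whose dtype the UFF converter cannot map to a plain numeric NumPy dtype.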

Also, the Caffe parser and UFF parser are deprecated in TensorRT 7. I recommend using the ONNX parser instead:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-release-notes/tensorrt-7.html#rel_7-0-0

Thanks

I tried to convert my .pb model to ONNX, but the ONNX model’s structure is very different from my .pb model. So can I only convert to ONNX if I want to use TRT (not TF-TRT) for my TensorFlow model?

If I understood your question correctly: you don’t need to convert to an ONNX model if you are planning to use TF-TRT.
Only if you want a pure TRT engine do you need to follow the workflow below:
pb -> ONNX -> TRT
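As a sketch of that workflow, assuming the `tf2onnx` package is installed — the file names and the tensor names `input:0` / `output:0` are placeholders for your actual graph:

```shell
# Step 1: convert the frozen TensorFlow graph to ONNX with tf2onnx.
# Replace model.pb, input:0 and output:0 with your actual file and
# tensor names (placeholders here).
python -m tf2onnx.convert \
    --input model.pb \
    --inputs input:0 \
    --outputs output:0 \
    --output model.onnx

# Step 2: build a TensorRT engine from the ONNX model with trtexec
# (shipped with TensorRT).
trtexec --onnx=model.onnx --saveEngine=model.trt
```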

Thanks