Cannot convert .pb to UFF

Hi, all.
I tried to use the convert-to-uff command to convert a .pb file into .uff.
However, when I ran $ convert-to-uff model/3/saved_model.pb, I got the following error message.

2020-07-24 12:54:59.419041: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Loading model/3/saved_model.pb
Traceback (most recent call last):
  File "/usr/local/bin/convert-to-uff", line 144, in <module>
main()
  File "/usr/local/bin/convert-to-uff", line 139, in main
debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 275, in from_tensorflow_frozen_model
graphdef.ParseFromString(frozen_pb.read())
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/message.py", line 199, in ParseFromString
return self.MergeFromString(serialized)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1134, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 738, in DecodeField
if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 717, in DecodeRepeatedField
if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 872, in DecodeMap
if submsg._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1188, in InternalParse
buffer, new_pos, wire_type)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 973, in _DecodeUnknownField
(data, pos) = _DecodeUnknownFieldSet(buffer, pos)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 952, in _DecodeUnknownFieldSet
(data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 977, in _DecodeUnknownField
raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.

The protobuf version is:

Name: protobuf
Version: 3.12.2
Summary: Protocol Buffers
Home-page: https://developers.google.com/protocol-buffers/
Author: None
Author-email: None
License: 3-Clause BSD License
Location: /usr/local/lib/python3.6/dist-packages
Requires: setuptools, six
Required-by: tensorflow-metadata, tensorflow-hub, tensorflow-datasets, googleapis-common-protos, tensorflow, tensorboard, uff

The version of TensorRT is:

ii  graphsurgeon-tf                               7.1.3-1+cuda10.2                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                7.1.0-1+cuda10.2                                 arm64        TensorRT binaries
ii  libnvinfer-dev                                7.1.0-1+cuda10.2                                 arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                7.1.0-1+cuda10.2                                 all          TensorRT documentation
ii  libnvinfer-plugin-dev                         7.1.0-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-plugin7                            7.1.0-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-samples                            7.1.0-1+cuda10.2                                 all          TensorRT samples
ii  libnvinfer7                                   7.1.0-1+cuda10.2                                 arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                          7.1.0-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvonnxparsers7                             7.1.0-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                              7.1.0-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  libnvparsers7                                 7.1.0-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt                 7.1.0.16-1+cuda10.2                              arm64        Jetpack TensorRT CSV file
ii  python-libnvinfer                             7.1.0-1+cuda10.2                                 arm64        Python bindings for TensorRT
ii  python-libnvinfer-dev                         7.1.0-1+cuda10.2                                 arm64        Python development package for TensorRT
ii  python3-libnvinfer                            7.1.0-1+cuda10.2                                 arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        7.1.0-1+cuda10.2                                 arm64        Python 3 development package for TensorRT
ii  tensorrt                                      7.1.0.16-1+cuda10.2                              arm64        Meta package of TensorRT
ii  uff-converter-tf                              7.1.3-1+cuda10.2                                 arm64        UFF converter for TensorRT package

I made the .pb file using the following API:

import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds

tf.disable_v2_behavior()
...
tf.saved_model.save(model, 'model/3')

Thanks in advance.

Hi,

May I know how you set up your device?

It seems that the TensorRT package versions are mixed in your environment, and this may lead to errors during serialization/deserialization.

ii  graphsurgeon-tf                               7.1.3-1+cuda10.2                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                7.1.0-1+cuda10.2                                 arm64        TensorRT binaries

Would you mind reflashing your device and installing all the libraries from JetPack 4.4 GA first?
The versions should all be v7.1.3, as shown below.

$ dpkg -l | grep TensorRT
ii  graphsurgeon-tf                               7.1.3-1+cuda10.2                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                7.1.3-1+cuda10.2                                 arm64        TensorRT binaries
ii  libnvinfer-dev                                7.1.3-1+cuda10.2                                 arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                7.1.3-1+cuda10.2                                 all          TensorRT documentation
ii  libnvinfer-plugin-dev                         7.1.3-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-plugin7                            7.1.3-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-samples                            7.1.3-1+cuda10.2                                 all          TensorRT samples
ii  libnvinfer7                                   7.1.3-1+cuda10.2                                 arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                          7.1.3-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvonnxparsers7                             7.1.3-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                              7.1.3-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  libnvparsers7                                 7.1.3-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt                 7.1.3.0-1+cuda10.2                               arm64        Jetpack TensorRT CSV file
ii  python-libnvinfer                             7.1.3-1+cuda10.2                                 arm64        Python bindings for TensorRT
ii  python-libnvinfer-dev                         7.1.3-1+cuda10.2                                 arm64        Python development package for TensorRT
ii  python3-libnvinfer                            7.1.3-1+cuda10.2                                 arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        7.1.3-1+cuda10.2                                 arm64        Python 3 development package for TensorRT
ii  tensorrt                                      7.1.3.0-1+cuda10.2                               arm64        Meta package of TensorRT
ii  uff-converter-tf                              7.1.3-1+cuda10.2                                 arm64        UFF converter for TensorRT package

Thanks.

Thank you for your reply.
I reflashed the Xavier using JetPack 4.4, and my environment is now:

nvidia@xavier:/usr/lib/python2.7/dist-packages/uff/bin$ dpkg -l | grep TensorRT
ii  graphsurgeon-tf                               7.1.0-1+cuda10.2                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                7.1.0-1+cuda10.2                                 arm64        TensorRT binaries
ii  libnvinfer-dev                                7.1.0-1+cuda10.2                                 arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                7.1.0-1+cuda10.2                                 all          TensorRT documentation
ii  libnvinfer-plugin-dev                         7.1.0-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-plugin7                            7.1.0-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-samples                            7.1.0-1+cuda10.2                                 all          TensorRT samples
ii  libnvinfer7                                   7.1.0-1+cuda10.2                                 arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                          7.1.0-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvonnxparsers7                             7.1.0-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                              7.1.0-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  libnvparsers7                                 7.1.0-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt                 7.1.0.16-1+cuda10.2                              arm64        Jetpack TensorRT CSV file
ii  python-libnvinfer                             7.1.0-1+cuda10.2                                 arm64        Python bindings for TensorRT
ii  python-libnvinfer-dev                         7.1.0-1+cuda10.2                                 arm64        Python development package for TensorRT
ii  python3-libnvinfer                            7.1.0-1+cuda10.2                                 arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        7.1.0-1+cuda10.2                                 arm64        Python 3 development package for TensorRT
ii  tensorrt                                      7.1.0.16-1+cuda10.2                              arm64        Meta package of TensorRT
ii  uff-converter-tf                              7.1.0-1+cuda10.2                                 arm64        UFF converter for TensorRT package

nvidia@xavier:~/transfer-learning/tutorial1$ pip show protobuf
Name: protobuf
Version: 3.12.2
Summary: Protocol Buffers
Home-page: https://developers.google.com/protocol-buffers/
Author: None
Author-email: None
License: 3-Clause BSD License
Location: /usr/local/lib/python3.6/dist-packages
Requires: six, setuptools
Required-by: tensorflow-metadata, tensorflow-datasets, googleapis-common-protos, tensorflow, tensorboard, uff

However, I still faced the same error message:

nvidia@xavier:~/transfer-learning/tutorial1$ python3 convert.py
2020-07-27 05:50:59.275108: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Traceback (most recent call last):
  File "convert.py", line 9, in <module>
    uff_model = uff.from_tensorflow_frozen_model("model/3/saved_model.pb",["prediction_layer/Dense"],output_filename='model.uff')
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 228, in from_tensorflow_frozen_model
    graphdef.ParseFromString(frozen_pb.read())
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/message.py", line 199, in ParseFromString
    return self.MergeFromString(serialized)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1134, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 738, in DecodeField
    if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 717, in DecodeRepeatedField
    if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 872, in DecodeMap
    if submsg._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1188, in InternalParse
    buffer, new_pos, wire_type)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 973, in _DecodeUnknownField
    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 952, in _DecodeUnknownFieldSet
    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 977, in _DecodeUnknownField
    raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.

The following two scripts are what I used for saving the model and converting the .pb to .uff, respectively.

main.py

import os
import pickle
import numpy as np
import matplotlib.pyplot as plt

import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds

tf.disable_v2_behavior()
tf.enable_eager_execution()

IMG_SIZE = 160 # All images will be resized to 160 x 160
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000

def prYellow(skk): print("\033[93m {}\033[00m".format(skk))

def format_example(image, label):
	image = tf.cast(image, tf.float32)
	image = (image / 127.5) - 1
	image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
	return image, label

if __name__ == "__main__":
	tfds.disable_progress_bar()

	(raw_train, raw_validation, raw_test), metadata = tfds.load(
	    'cats_vs_dogs',
	    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
	    with_info=True,
	    as_supervised=True,
	)

	prYellow(raw_train)
	prYellow(raw_validation)
	prYellow(raw_test)

	get_label_name = metadata.features['label'].int2str

	train = raw_train.map(format_example)
	validation = raw_validation.map(format_example)
	test = raw_test.map(format_example)

	train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
	validation_batches = validation.batch(BATCH_SIZE)
	test_batches = test.batch(BATCH_SIZE)

	for image_batch, label_batch in train_batches.take(1):
		prYellow(image_batch.shape)

	IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)

	# Create the base model from the pre-trained model MobileNet V2
	base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
												  include_top=False,
												  weights='imagenet')
	# This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features
	feature_batch = base_model(image_batch)
	prYellow(feature_batch.shape)

	# Feature extraction
	# You will freeze the convolutional base created in the previous step and use it as a feature extractor
	# Freeze the convolutional base
	# Freezing prevents the weights in a given layer from being updated during training
	# MobileNet V2 has many layers, so setting the entire model's training flag to False will freeze all the layers
	base_model.trainable = False
	base_model.summary()

	# Add a classification head
	global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
	feature_batch_average = global_average_layer(feature_batch)
	prYellow(feature_batch_average.shape)

	prediction_layer = tf.keras.layers.Dense(1)
	prediction_batch = prediction_layer(feature_batch_average)
	prYellow(prediction_batch.shape)

	# Stack the feature extractor and these two layers using a tf.keras.Sequential model:
	model = tf.keras.Sequential([
		base_model,
		global_average_layer,
		prediction_layer
	])

	# Compile the model
	base_learning_rate = 0.0001
	model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
				  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
				  metrics=['accuracy'])
	model.summary()

	prYellow(model.trainable_variables)

	# Train the model
	initial_epochs = 1
	validation_steps = 20
	loss0, accuracy0 = model.evaluate(validation_batches, steps=validation_steps)
	prYellow("initial loss: {:.2f}".format(loss0))
	prYellow("initial accuracy: {:.2f}".format(accuracy0))

	with tf.device("/gpu:0"):
		history = model.fit(train_batches, 
							epochs=initial_epochs,
							validation_data=validation_batches)

	# Save the weights
	model.save_weights("model/3/cp.ckpt")
	tf.saved_model.save(model, 'model/3')

convert.py

import tensorflow as tf
import tensorrt as trt
import uff

# Load your newly created TensorFlow frozen model and convert it to UFF
tf.gfile = tf.io.gfile
uff_model = uff.from_tensorflow_frozen_model("model/3/saved_model.pb",["prediction_layer/Dense"],output_filename='model.uff')

How can I solve it?
Any help would be greatly appreciated.
Thanks in advance.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you also share the saved_model.pb with us?
We want to reproduce this issue in our environment first.

Thanks.

The issue here is that you are trying to convert a saved_model to UFF, not a frozen model. To use the uff.from_tensorflow_frozen_model function, you first need to convert your saved_model into a frozen graph, as sketched below.
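Here is a minimal sketch of that conversion, assuming the model was saved with the default 'serve' tag (as main.py above does) and using a placeholder output node name; print the node list first to find the real one. This follows the TF1-style freezing path, so depending on how the model was saved you may need TF2's convert_variables_to_constants_v2 utility instead.

import tensorflow.compat.v1 as tf
import uff

tf.disable_v2_behavior()

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel directory (not the .pb file itself) into the session
    tf.saved_model.loader.load(sess, ['serve'], 'model/3')

    graph_def = sess.graph.as_graph_def()
    # Print the node names so you can pick the real output node
    print([n.name for n in graph_def.node])

    # Freeze: fold the variables into constants so only a plain GraphDef remains
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, graph_def, ['YOUR_OUTPUT_NODE'])  # placeholder name, replace it

# The frozen GraphDef can then be passed straight to the UFF converter
uff_model = uff.from_tensorflow(frozen_graph,
                                output_nodes=['YOUR_OUTPUT_NODE'],
                                output_filename='model.uff')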