Object Detection on Jetson Nano

I followed the steps at https://medium.com/swlh/how-to-run-tensorflow-object-detection-model-on-jetson-nano-8f8c6d4352e8 to generate TF-TRT models for the Jetson Nano. I am able to generate the TRT models in Google Colab, but I am having trouble deploying them on the Nano.
While loading the TensorRT graph with the following code:

import tensorflow as tf

def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.FastGFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

# The TensorRT inference graph file downloaded from Colab or your local machine.
pb_fname = "./model/trt_graph.pb"
trt_graph = get_frozen_graph(pb_fname)

input_names = ['image_tensor']

# Create session and load graph
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
tf_sess = tf.Session(config=tf_config)
tf.import_graph_def(trt_graph, name='')

I get the following error:

InvalidArgumentError                      Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict, producer_op_list)
    425         results = c_api.TF_GraphImportGraphDefWithResults(
--> 426             graph._c_graph, serialized, options)  # pylint: disable=protected-access
    427         results = c_api_util.ScopedTFImportGraphDefResults(results)

InvalidArgumentError: NodeDef mentions attr 'half_pixel_centers' not in Op<name=ResizeBilinear; signature=images:T, size:int32 -> resized_images:float; attr=T:type,allowed=[DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE]; attr=align_corners:bool,default=false>; NodeDef: {{node Preprocessor/ResizeImage/resize/ResizeBilinear}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-4-380436e20a8b> in <module>
      3 tf_config.gpu_options.allow_growth = True
      4 tf_sess = tf.Session(config=tf_config)
----> 5 tf.import_graph_def(trt_graph, name='')

~/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    505                 'in a future version' if date is None else ('after %s' % date),
    506                 instructions)
--> 507       return func(*args, **kwargs)
    508 
    509     doc = _add_deprecated_arg_notice_to_docstring(

~/.local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict, producer_op_list)
    428       except errors.InvalidArgumentError as e:
    429         # Convert to ValueError for backwards compatibility.
--> 430         raise ValueError(str(e))
    431 
    432     # Create _DefinedFunctions for any imported functions.

ValueError: NodeDef mentions attr 'half_pixel_centers' not in Op<name=ResizeBilinear; signature=images:T, size:int32 -> resized_images:float; attr=T:type,allowed=[DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE]; attr=align_corners:bool,default=false>; NodeDef: {{node Preprocessor/ResizeImage/resize/ResizeBilinear}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

I searched Stack Overflow for this GraphDef issue. The usual suggestion is to downgrade TensorFlow, but since NVIDIA provides the TensorFlow build for the Jetson Nano, we can't do that.

Is there a workaround for this issue?

Hi,

InvalidArgumentError: NodeDef mentions attr 'half_pixel_centers' not in Op<name=ResizeBilinear; signature=images:T, size:int32 -> resized_images:float; attr=T:type,allowed=[DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE]; attr=align_corners:bool,default=false>; NodeDef: {{node Preprocessor/ResizeImage/resize/ResizeBilinear}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

This looks like a compatibility issue.
A workaround is to regenerate the .pb model with the same TensorFlow version as the one on the Nano.
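
For reference, a minimal sketch of that regeneration step, assuming the frozen SSD-style detection graph and the standard Object Detection API output node names from the linked article (the paths are placeholders); this uses the TF-TRT API as it lives under tensorflow.contrib in TF 1.13:

import tensorflow as tf
from tensorflow.contrib import tensorrt as trt  # TF-TRT lives under contrib in TF 1.13

# Load the frozen detection graph exported by the Object Detection API.
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

output_names = ['detection_boxes', 'detection_scores',
                'detection_classes', 'num_detections']

# Replace TensorRT-compatible subgraphs with TRT engine ops.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',       # the Nano's GPU benefits from FP16
    minimum_segment_size=50)

# Serialize the converted graph; this is the file to copy to the Nano.
with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())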

Thanks.

@AastaLLL, is this because standard TensorFlow is not compatible with the Jetson Nano, and the Nano has its own TensorFlow build?

Will it work if I install the Jetson Nano's version of TensorFlow on a host PC, regenerate the .pb model there, and then deploy it on the Nano?

Hi,
Since the Jetson Nano has TensorFlow 1.13.1, I installed the same version and was able to generate the trt_graph.pb file using the Colab notebook:

import tensorflow as tf

def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.FastGFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

# The TensorRT inference graph file downloaded from Colab or your local machine.
pb_fname = "/home/ageye01/Desktop/trt_graph_1.13.1.pb"
trt_graph = get_frozen_graph(pb_fname)

input_names = ['image_tensor']

# Create session and load graph
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
tf_sess = tf.Session(config=tf_config)
tf.import_graph_def(trt_graph, name='')

tf_input = tf_sess.graph.get_tensor_by_name(input_names[0] + ':0')
tf_scores = tf_sess.graph.get_tensor_by_name('detection_scores:0')
tf_boxes = tf_sess.graph.get_tensor_by_name('detection_boxes:0')
tf_classes = tf_sess.graph.get_tensor_by_name('detection_classes:0')
tf_num_detections = tf_sess.graph.get_tensor_by_name('num_detections:0')

Now, while loading the trt_graph.pb on the Nano, I get the following error:

XXXXX01@XXXXX01-desktop:~$ python3 Desktop/test.py 
WARNING:tensorflow:From Desktop/test.py:5: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
Traceback (most recent call last):
  File "Desktop/test.py", line 12, in <module>
    trt_graph = get_frozen_graph(pb_fname)
  File "Desktop/test.py", line 7, in get_frozen_graph
    graph_def.ParseFromString(f.read())
  File "/home/ageye01/.local/lib/python3.6/site-packages/google/protobuf/message.py", line 187, in ParseFromString
    return self.MergeFromString(serialized)
  File "/home/ageye01/.local/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1124, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/home/ageye01/.local/lib/python3.6/site-packages/google/protobuf/internal/python_message.py", line 1189, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/home/ageye01/.local/lib/python3.6/site-packages/google/protobuf/internal/decoder.py", line 700, in DecodeRepeatedField
    raise _DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.

Please help me fix this error.

@dinesh.cse31, TensorFlow 1.13.1 doesn't have TensorRT in it; how did you manage to overcome that problem?

I also faced the same problem.

Using this Colab notebook link: that environment already comes with TensorFlow 1.14.0, so I uninstalled it, installed 1.13.1, and was then able to import TensorRT.
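
If it helps, this is the sanity check I'd run in a Colab cell before converting; the pip lines are notebook shell commands (hence the leading '!'), and 1.13.1 is the version JetPack ships on the Nano:

# In a Colab cell: swap the preinstalled TensorFlow for the Nano's version.
# !pip uninstall -y tensorflow
# !pip install tensorflow-gpu==1.13.1

import tensorflow as tf
from tensorflow.contrib import tensorrt as trt  # should import cleanly on 1.13.1 GPU builds

# This must print the same version as on the Nano; otherwise the generated
# GraphDef may carry op attributes the Nano's runtime doesn't know about.
print(tf.__version__)  # expected: 1.13.1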

Hi,

May I know which TensorFlow version you used to generate the .pb file?
You should be able to avoid this issue by regenerating the model with v1.13.1.

Thanks.

This looks like a protobuf issue.

Please note that there are compatibility issues across different TensorFlow versions. This may force you to regenerate the .pb file rather than trying to parse the existing one.
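
In particular, protobuf's "Truncated message." usually means the file on disk is incomplete, e.g. a download from Colab that was cut short. Before regenerating, it may be worth ruling out a corrupted transfer; a minimal sketch that reports size and MD5 so the two copies can be compared (the path is a placeholder):

import hashlib
import os

def describe_file(path):
    """Print size and MD5 so the Colab copy and the Nano copy can be compared."""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            md5.update(chunk)
    print('%s: %d bytes, md5 %s' % (path, os.path.getsize(path), md5.hexdigest()))

describe_file('trt_graph.pb')  # run on both Colab and the Nano; outputs must match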

Thanks.