[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time

Description

[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "main.py", line 43, in
buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'

How to fix it?

Environment

TensorRT Version: 5.0
GPU Type: Jetson TX2
Nvidia Driver Version:
CUDA Version: 10.0
CUDNN Version: 7.3.1
Operating System + Version: JetPack 4.2 + Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.14.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Based on the error, it seems that a reshape operation in your model doesn't satisfy the following condition:
"-1 specifies that the dimension should be automatically deduced - this can only be used at most once in any given shape."
Please refer to the link below for more details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html#reshape
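As a quick illustration of the rule, NumPy enforces the same "at most one -1" constraint on reshape, so you can reproduce the ambiguity outside of TensorRT (NumPy is used here purely for demonstration; it is not part of the conversion pipeline):

```python
import numpy as np

a = np.arange(12)

# One -1 is fine: the missing dimension is deduced (12 / 3 = 4).
print(a.reshape(3, -1).shape)  # (3, 4)

# Two -1s are ambiguous, so the reshape is rejected,
# just as the UFF parser rejects it in your model.
try:
    a.reshape(-1, -1)
except ValueError as e:
    print(e)
```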

Thanks

Is something wrong with the training, or with how I make the frozen inference graph?

What should I do to fix that error?

Hi,

Can you share the model and script file so that we can help better?

Thanks

I use ssd_mobilenet_v2 model.

research/object_detection/legacy/train.py : https://github.com/tensorflow/models/blob/master/research/object_detection/legacy/train.py

research/object_detection/export_inference_graph.py : https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py
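For SSD models exported from the TF Object Detection API, the usual workaround is to preprocess the frozen graph with graphsurgeon so that the unsupported postprocessing subgraph (which contains the multi-`-1` reshapes) is collapsed into TensorRT plugin nodes before UFF conversion. The sketch below is modeled on the `sampleUffSSD` preprocessing config shipped with TensorRT; all node names, shapes, and plugin parameters are assumptions for a 300x300 ssd_mobilenet_v2 and will likely need adjusting for your exact graph (`inputOrder` in particular depends on how your concat nodes are ordered):

```python
# config.py -- graph-preprocessing sketch for convert-to-uff, based on
# the TensorRT sampleUffSSD config. Names/shapes/parameters below are
# assumptions for ssd_mobilenet_v2 at 300x300; verify against your graph.
import graphsurgeon as gs
import tensorflow as tf

# Replace the dynamic input pipeline with a static placeholder.
Input = gs.create_node("Input", op="Placeholder",
                       dtype=tf.float32, shape=[1, 3, 300, 300])

# Map unsupported TF subgraphs onto TensorRT plugin ops.
PriorBox = gs.create_plugin_node("GridAnchor", op="GridAnchor_TRT",
    numLayers=6, minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node("NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0, backgroundLabelId=0,
    confidenceThreshold=1e-8, nmsThreshold=0.6, topK=100, keepTopK=100,
    numClasses=91, inputOrder=[0, 2, 1], confSigmoid=1, isNormalized=1)
concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc",
                                       op="FlattenConcat_TRT")
concat_box_conf = gs.create_plugin_node("concat_box_conf",
                                        op="FlattenConcat_TRT")

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the namespaces above into single plugin nodes so the
    # UFF parser never sees the problematic BoxPredictor reshapes.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop the original graph outputs; NMS becomes the new output.
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
```

With a config like this you would then convert with something like `convert-to-uff frozen_inference_graph.pb -O NMS -p config.py`, again per the sampleUffSSD workflow.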

Can you share the "frozen_inference_graph.pb" model file that you are using for TRT conversion?

Thanks

Do I have to upload the file here? I don’t know what ‘share frozen_inference_graph.pb’ means.

Yes, could you please share your model file?

Thanks