ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more

Hello there.
When I run the Jetson Nano example at /usr/src/tensorrt/samples/python/uff_ssd, inference works fine with the provided ssd_inception_v2_coco_2017_11_17 model. But when I train the same model (ssd_inception_v2) with the Google object detection API, copy the .pb and other files to /usr/src/tensorrt/samples/python/uff_ssd/workspace/models, and run 'python3 detect_objects.py image/image1.jpg', I get the following error:
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
Building TensorRT engine. This may take few minutes.
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "detect_objects_beifen.py", line 237, in <module>
    main()
  File "detect_objects_beifen.py", line 188, in main
    batch_size=parsed['max_batch_size'])
  File "/usr/src/tensorrt/samples/python/uff_ssd/utils/inference.py", line 73, in __init__
    engine_utils.save_engine(self.trt_engine, trt_engine_path)
  File "/usr/src/tensorrt/samples/python/uff_ssd/utils/engine.py", line 83, in save_engine
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
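For context, the AttributeError at the end is a secondary failure: the UFF parse failed, so no engine was ever built and save_engine received None. A minimal guard (a hypothetical helper, not part of the sample) would surface the real failure instead:

```python
def save_engine_safely(engine, path):
    """Serialize a TensorRT engine to disk, failing loudly if the build failed.

    Hypothetical guard: when the UFF parser rejects the graph, the engine
    build returns None, and calling serialize() on it raises the confusing
    AttributeError seen in the traceback above.
    """
    if engine is None:
        raise RuntimeError(
            "Engine build failed -- fix the UFF parser errors above "
            "(e.g. the Reshape with more than one -1 dimension) first."
        )
    with open(path, "wb") as f:
        f.write(engine.serialize())
```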

The command I use to train under the Google object detection API is:
python3 legacy/train.py --train_dir=voc/ssd_inception_work/train_dir/ --pipeline_config_path=voc/ssd_inception_work/ssd_inception_v2_coco.config
The command used to export the .pb file is:
python3 export_inference_graph.py --pipeline_config_path voc/ssd_inception_work/ssd_inception_v2_coco.config --trained_checkpoint_prefix voc/ssd_inception_work/train_dir/model.ckpt-5058 --output_directory voc/ssd_inception_work/export/ --input_type image_tensor

Is there any way to solve this error?

The model.py I use is the one from the example. Its model_to_uff function looks like this:

def model_to_uff(model_path, output_uff_path, silent=False):
    """Takes frozen .pb graph, converts it to .uff and saves it to file.

    Args:
        model_path (str): .pb model path
        output_uff_path (str): .uff path where the UFF file will be saved
        silent (bool): if True, writes progress messages to stdout
    """
    dynamic_graph = gs.DynamicGraph(model_path)
    dynamic_graph = ssd_unsupported_nodes_to_plugin_nodes(dynamic_graph)

    uff.from_tensorflow(
        dynamic_graph.as_graph_def(),
        [ModelData.OUTPUT_NAME],
        output_filename=output_uff_path,
        text=True
    )

where ModelData.OUTPUT_NAME is "NMS".
For my own trained model, I analyzed the input and output nodes with a tool; its output looks like this:
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo/utils/summarize_graph.py --input_model ./frozen_inference_graph.pb
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
6 output(s) detected:
detection_boxes
detection_scores
detection_classes
num_detections
raw_detection_boxes
raw_detection_scores
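One visible difference from the stock model is the pair of raw_detection_* outputs: newer versions of the object_detection exporter add them, which suggests the graph was exported by a different tool version than the one the sample targets, so internal node names may have drifted too. A quick comparison of the two output lists (illustrative values copied from the summarize_graph output in this post):

```python
# Output nodes reported for the retrained model (from summarize_graph above).
retrained_outputs = [
    "detection_boxes", "detection_scores", "detection_classes",
    "num_detections", "raw_detection_boxes", "raw_detection_scores",
]
# Final detection outputs of the stock ssd_inception_v2_coco_2017_11_17 model.
stock_outputs = [
    "detection_boxes", "detection_scores", "num_detections", "detection_classes",
]

# Outputs present only in the retrained graph -- a hint that it was exported
# with a newer object_detection exporter than the sample was written for.
extra = sorted(set(retrained_outputs) - set(stock_outputs))
print(extra)  # ['raw_detection_boxes', 'raw_detection_scores']
```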

Is the Reshape error in the message caused by my input shape being unsuitable?
The information above comes from Intel's summarize_graph.py script. What should I do?
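As an aside, the parser error is not about the input shape being dynamic; it complains about a Reshape op that specifies -1 for more than one dimension. A single -1 means "infer this dimension from the total size", so two of them are ambiguous. NumPy enforces the same rule (this is just an illustration of the restriction, not the UFF parser itself):

```python
import numpy as np

a = np.zeros((2, 3, 4))

# One -1 is fine: the missing dimension is inferred from the total size.
print(a.reshape(2, -1).shape)  # (2, 12)

# Two -1 entries are ambiguous, so NumPy rejects them -- the same restriction
# the UFF parser enforces on BoxPredictor_0/Reshape.
try:
    a.reshape(-1, -1)
except ValueError as err:
    print("rejected:", err)
```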

For the model shipped with the example, the same tool prints the following nodes:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo/utils/summarize_graph.py --input_model ./frozen_inference_graph.pb
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
16 output(s) detected:
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/switch_t
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/switch_f
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/switch_t
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/switch_f
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/cond/switch_t
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/cond/switch_f
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/switch_t
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/switch_f
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/cond/switch_t
Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_3/cond/switch_f
detection_boxes
detection_scores
num_detections
detection_classes

Hi,

It looks like you re-filed the topic from issue 1055548: https://devtalk.nvidia.com/default/topic/1055548
Let's use this topic to track the status going forward.

It looks like your model changed after retraining,
so you will need to update the layer names in model.py.
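That check — which node names hard-coded in model.py no longer exist in the retrained graph — can be sketched like this (a hypothetical helper; take the real lists from your model.py and from the summarize_graph.py output for your frozen_inference_graph.pb):

```python
def find_missing_nodes(expected_names, graph_node_names):
    """Return the names model.py refers to that are absent from the graph.

    Hypothetical helper: compare the node names hard-coded in the sample's
    model.py against the names actually present in the retrained frozen graph.
    """
    present = set(graph_node_names)
    return [name for name in expected_names if name not in present]

# Illustrative values only, not taken from any real graph.
expected = ["image_tensor", "Postprocessor", "BoxPredictor_0/Reshape"]
graph = ["image_tensor", "Postprocessor_1", "BoxPredictor_0/Reshape_1"]
print(find_missing_nodes(expected, graph))  # ['Postprocessor', 'BoxPredictor_0/Reshape']
```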

Would you mind sharing the model with us so we can check it for you?
Thanks.