Generate trt engine error "UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimen...

Hello there.
When I run the Jetson Nano sample at /usr/src/tensorrt/samples/python/uff_ssd, inference works normally with the provided ssd_inception_v2_coco_2017_11_17 model. However, when I train the same ssd_inception_v2 model with the Google object detection API, copy the .pb and other files to /usr/src/tensorrt/samples/python/uff_ssd/workspace/models, and run "python3 image/image1.jpg", I get the following error:
[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
Building TensorRT engine. This may take few minutes.
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "", line 237, in
File "", line 188, in main
File "/usr/src/tensorrt/samples/python/uff_ssd/utils/", line 73, in __init__
engine_utils.save_engine(self.trt_engine, trt_engine_path)
File "/usr/src/tensorrt/samples/python/uff_ssd/utils/", line 83, in save_engine
buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'

The command I use to train with the Google object detection API is:
"python3 legacy/ --train_dir=voc/ssd_inception_work/train_dir/ --pipeline_config_path=voc/ssd_inception_work/ssd_inception_v2_coco.config"
The command used to export the frozen .pb file is:
"python3 --pipeline_config_path voc/ssd_inception_work/ssd_inception_v2_coco.config --trained_checkpoint_prefix voc/ssd_inception_work/train_dir/model.ckpt-5058 --output_directory voc/ssd_inception_work/export/ --input_type image_tensor"

Is there any way to solve this error?


Could you check the /usr/src/tensorrt/samples/python/uff_ssd/utils/ file first?

This file describes the process of converting the .pb file into .uff and is model-dependent.
You may need to update it if the model name or architecture has changed.
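For context, the conversion script in the stock uff_ssd sample replaces the TensorFlow subgraphs that the UFF parser cannot handle (anchor generation, post-processing, the dynamic-shape preprocessor) with TensorRT plugin nodes, using graphsurgeon's collapse_namespaces. The sketch below only illustrates that mapping as plain dicts; the function name make_namespace_plugin_map is hypothetical, and in the real sample each entry is built with gs.create_plugin_node(...) instead. The plugin op names (GridAnchor_TRT, NMS_TRT, Input) and the fixed 300x300 input shape match the stock COCO-trained ssd_inception_v2 settings, and numClasses is typically the first thing that must change for a retrained model:

```python
# Sketch (not the actual sample code) of the namespace -> plugin mapping
# that the uff_ssd conversion step performs with graphsurgeon before
# calling the UFF converter.  In the real sample each entry is built
# with gs.create_plugin_node(...) and applied via
# dynamic_graph.collapse_namespaces(namespace_plugin_map).

def make_namespace_plugin_map(num_classes=91, input_dims=(3, 300, 300)):
    """Describe which TF namespaces get replaced by which TRT plugins."""
    return {
        # Anchor generation is not UFF-parsable -> GridAnchor_TRT plugin
        "MultipleGridAnchorGenerator": {"op": "GridAnchor_TRT"},
        # TF post-processing -> NMS_TRT plugin; its output node is "NMS"
        "Postprocessor": {
            "op": "NMS_TRT",
            "numClasses": num_classes,  # must match your training config
        },
        # The preprocessor is replaced by a fixed-shape Input node;
        # UFF cannot carry the (-1, -1, -1, 3) shape of image_tensor
        "Preprocessor": {
            "op": "Input",
            "dtype": "float32",
            "shape": (1,) + tuple(input_dims),
        },
    }

plugin_map = make_namespace_plugin_map()
print(plugin_map["Preprocessor"]["shape"])  # (1, 3, 300, 300)
```

If any of these replacements do not fire (for example because node names changed between object-detection API versions), the raw TF Reshape nodes reach the UFF parser and produce exactly the "-1 dimension specified more than 1 time" error above.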


Hi AastaLLL,
The "" I used is the one from the example. The model_to_uff function looks like this:
def model_to_uff(model_path, output_uff_path, silent=False):
    """Takes frozen .pb graph, converts it to .uff and saves it to file.

        model_path (str): .pb model path
        output_uff_path (str): .uff path where the UFF file will be saved
        silent (bool): if True, writes progress messages to stdout
    """
    dynamic_graph = gs.DynamicGraph(model_path)
    dynamic_graph = ssd_unsupported_nodes_to_plugin_nodes(dynamic_graph)

    uff.from_tensorflow(
        dynamic_graph.as_graph_def(),
        [ModelData.OUTPUT_NAME],
        output_filename=output_uff_path,
        text=True)


where ModelData.OUTPUT_NAME is "NMS".
For my own trained model, I analyzed the input and output nodes with a tool, and the output looks like this:
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo/utils/ --input_model ./frozen_inference_graph.pb
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
6 output(s) detected:

Is the Reshape error mentioned in the message caused by my input shape not being suitable?
The information above comes from an Intel script. What should I do?
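As a side note on the error text itself: a reshape may leave at most one dimension unspecified (-1), since that single dimension is inferred from the total element count; two unknowns are ambiguous. NumPy enforces the same rule, which is a minimal way to see why the UFF parser rejects BoxPredictor_0/Reshape when more than one dimension is unknown at conversion time:

```python
import numpy as np

a = np.zeros((4, 6))

# One -1 is fine: it is inferred from the remaining dimensions.
print(a.reshape(-1, 3).shape)  # (8, 3)

# Two -1 dimensions are ambiguous and rejected, analogous to the
# "-1 dimension specified more than 1 time" UFF parser error.
try:
    a.reshape(-1, -1)
except ValueError as e:
    print("rejected:", e)
```

This is why the sample replaces the dynamically shaped image_tensor input with a fixed-shape Input node before conversion: with the shape pinned down, each Reshape has at most one dimension left to infer.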

Duplicate of topic 1055654.
Please check the status in that topic: