Problem with onnx2trt for Mobilenetv2 model

def build_engine():
    """Takes an ONNX file and creates a TensorRT engine to run inference with."""
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(flag) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28  # 256 MiB
        builder.max_batch_size = 1
        # Parse model file
        print('Loading ONNX file from path {}...'.format(onnx_file_path))
        with open(onnx_file_path, 'rb') as model:
            print('Beginning ONNX file parsing')
            parser.parse(model.read())
        print('Completed parsing of ONNX file')
        print('Building an engine from file {}; this may take a while...'.format(onnx_file_path))
        engine = builder.build_cuda_engine(network)
        print('Completed creating Engine')
        with open(engine_file_path, 'wb') as f:
            f.write(engine.serialize())
        return engine
Hello, I am using this script to convert an ONNX model into a TensorRT engine for an SSD MobileNet model.
Then I get the following error:

Beginning ONNX file parsing
[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
Completed parsing of ONNX file
Building an engine from file ssd_mobilenet_v2_coco_2018_03_29/model_edit.onnx; this may take a while...
[TensorRT] ERROR: Network must have at least one output
Completed creating Engine
Traceback (most recent call last):
File "", line 87, in
engine = get_engine(onnx_file_path, engine_file_path)
File "", line 82, in get_engine
return build_engine()
File "", line 78, in build_engine
AttributeError: 'NoneType' object has no attribute 'serialize'

What is wrong?
I have tried the suggestions from many related articles, but none of them worked.
Could you please give me detailed help directly here?
I would appreciate it.
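For what it's worth, the final 'NoneType' error usually just means build_cuda_engine() returned None because the parse or build step failed earlier. In the TensorRT Python API, OnnxParser.parse() returns False on failure rather than raising, and the errors can then be read back through num_errors and get_error(i). The sketch below shows that pattern; collect_parse_errors is a hypothetical helper, and FakeParser is a purely illustrative stand-in with the same attribute shape as trt.OnnxParser (no GPU or TensorRT install needed to run it):

```python
def collect_parse_errors(parser):
    """Gather error messages from a parser exposing num_errors/get_error(i),
    the same attribute shape as TensorRT's trt.OnnxParser."""
    return [str(parser.get_error(i)) for i in range(parser.num_errors)]

# Illustrative stand-in for trt.OnnxParser; parse() returning False (not an
# exception) is how the real parser signals failure, which is easy to miss.
class FakeParser:
    num_errors = 1

    def parse(self, data):
        return False

    def get_error(self, i):
        return "Parameter check failed at addInput"

parser = FakeParser()
if not parser.parse(b""):
    for msg in collect_parse_errors(parser):
        print("Parser error:", msg)
```

With the real parser, checking the return value of parser.parse(model.read()) this way surfaces the addInput failure at parse time instead of letting the script continue to the build step with an incomplete network.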

Hi @akulov.eugen,
Can you please share your onnx model, so that we can try this on our end?


I would really appreciate your help with this.
I want to be certain about it.

I need a full guide on how to run my model, because it is a customized ssd-mobilenet-v2-fn model, not the default one.
In addition, even the default model can't be converted, unfortunately.
I am new to this, so please help.

Hi @akulov.eugen,
Can you please share your ONNX model as well?


I used the following command to generate the ONNX model.

python -m tf2onnx.convert --input frozen_inference_graph.pb --inputs image_tensor:0 --outputs detection_boxes:0,detection_classes:0,detection_scores:0,num_detections:0 --output model.onnx --opset 11

The resulting ONNX model is attached.

Hello, can you let me know whether it works on your side?

Hello, how is it going?