TensorRT builder.build_serialized_network silently returns None

Hello,

I have an ONNX model file that I want to optimize with TensorRT.

I do the following:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def Construction_Engine_ONNX(fichier):
    builder = trt.Builder(TRT_LOGGER)
    config = builder.create_builder_config()
    builder.max_batch_size = 1  # Max BS = 1
    config.max_workspace_size = 1000000000  # 1GB
    config.set_flag(trt.BuilderFlag.TF32)  # TF32
    config.set_flag(trt.BuilderFlag.STRICT_TYPES)

    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(EXPLICIT_BATCH)

    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(fichier, 'rb') as model:
        if not parser.parse(model.read()):
            return None
        else:
            return builder.build_serialized_network(network, config)

engine = Construction_Engine_ONNX("file.onnx")
serialized_engine = engine.serialize()

I get the error: 'NoneType' object has no attribute 'serialize'
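For context, a sketch of what usually resolves this (assuming the TensorRT 8.x Python API): `build_serialized_network` returns an `IHostMemory` that is already serialized, so no `.serialize()` call is needed, and a `None` return means the build or parse failed. Surfacing the parser errors instead of returning `None` silently makes the failure visible. The function name and workspace default below are illustrative, not from the original post:

```python
# Hedged sketch (assumes the TensorRT 8.x Python API): report parser errors
# instead of silently returning None, and treat the result of
# build_serialized_network as already-serialized bytes (an IHostMemory).

def build_serialized_onnx(onnx_path, workspace_bytes=1 << 30):
    import tensorrt as trt  # imported inside so the sketch reads standalone

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    config = builder.create_builder_config()
    config.max_workspace_size = workspace_bytes

    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Print every parser error before bailing out
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        # The builder logs the reason at VERBOSE level; fail loudly here
        raise RuntimeError("Engine build failed; check the verbose log")
    return serialized  # already serialized: write to disk, or deserialize


# To get a usable engine, deserialize the returned bytes, e.g.:
#   runtime = trt.Runtime(trt.Logger())
#   engine = runtime.deserialize_cuda_engine(bytes(build_serialized_onnx("file.onnx")))
```

The key difference from the original snippet is that a failed parse prints its errors, and a `None` result from the builder raises instead of flowing into a later `.serialize()` call.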

With trtexec it works.

If I use builder.build_engine(network, config) instead of build_serialized_network, I get:

/usr/lib/python3/dist-packages/ipykernel_launcher.py:30: DeprecationWarning: Use build_serialized_network instead.

Can you help me please? Thanks a lot!

Link to ONNX file : Regression_Resnet18_saved_model.onnx - Google Drive

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the below snippet:

check_model.py

import onnx

filename = "yourONNXmodel.onnx"  # replace with the path to your model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
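For step 2, a minimal invocation might look like the following (a sketch; exact flags depend on your trtexec version, and file.onnx stands for the model from the question):

```shell
# Build the engine from the ONNX model with verbose logging;
# --onnx and --verbose are standard trtexec flags.
trtexec --onnx=file.onnx --verbose
```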
Thanks!