Build engine from ONNX failed

Description

I use a Docker container built from tensorflow/tensorflow:2.5.0-gpu, and installed TensorRT myself inside this container. Building an engine from my ONNX model fails, and I can't find any helpful information in the error message. I have uploaded my ONNX file here; could anyone help me build the engine (any environment is fine), or tell me why the build fails? Thanks!

Environment

TensorRT Version: 8.2.1.8
GPU Type: RTX 2080
Nvidia Driver Version: 465.19.01
CUDA Version: 11.2.152
CUDNN Version: 8.1.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): tensorflow/tensorflow:2.5.0-gpu

Relevant Files

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, using_half=True, dynamic_input=False):
    # The ONNX parser in TensorRT 8.x requires an explicit-batch network.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(explicit_batch) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        config = builder.create_builder_config()
        config.max_workspace_size = 2 << 30  # 2 GiB
        if using_half:
            config.set_flag(trt.BuilderFlag.FP16)
        with open(onnx_path, 'rb') as model:
            if not parser.parse(model.read()):
                print('error: failed to parse onnx model')
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return None
        # Note: dynamic_input is accepted but not handled here; dynamic shapes
        # also need an optimization profile (see the sketch below).
        engine = builder.build_engine(network, config)
        return engine

engine = build_engine(r'./model.onnx', False, True)
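For reference, since dynamic_input is never actually used above: with an explicit-batch network, dynamic input shapes require an optimization profile attached to the builder config before building. A minimal sketch, assuming a single input named "input" with a dynamic batch dimension (the input name and the min/opt/max shapes are placeholders, not taken from the original model):

# Hypothetical sketch: "input" and the shape triples below are assumptions;
# replace them with your model's real input name and shape ranges.
profile = builder.create_optimization_profile()
profile.set_shape("input",
                  (1, 3, 224, 224),   # min shape
                  (4, 3, 224, 224),   # opt shape
                  (8, 3, 224, 224))   # max shape
config.add_optimization_profile(profile)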

Update

Solved the problem temporarily by using the TensorRT NGC container (21.11-py3); I have since removed the ONNX model.
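For anyone landing here, that container can be started roughly like this (the image tag matches the one above; adjust it as needed):

docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:21.11-py3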


Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# usage: python check_model.py model.onnx
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging (an example invocation is sketched below).
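A typical invocation, assuming the model file is named model.onnx (the filename and the workspace size are illustrative):

trtexec --onnx=model.onnx --workspace=2048 --verbose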
Thanks!