Segmentation fault when using TensorRT to compile a model

Description

Compiling the attached sub-model results in a segmentation fault.

Environment

TensorRT Version : 8.4.1.5
NVIDIA GPU : RTX 3080Ti
NVIDIA Driver Version : 510
CUDA Version : 11.6
CUDNN Version : 8.4.1
Operating System : Ubuntu 20.04
Python Version (if applicable) : 3.8

Relevant Files

model.onnx (942 Bytes)

Steps To Reproduce

import onnx
import tensorrt as trt

onnx_model = onnx.load("model.onnx")
# check_model returns None; it raises onnx.checker.ValidationError if the model is invalid
onnx.checker.check_model(onnx_model, full_check=True)

builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
parser = trt.OnnxParser(network, trt.Logger(trt.Logger.WARNING))
assert parser.parse(onnx_model.SerializeToString())
engine = builder.build_engine(network, config)
# ...

Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command, e.g. trtexec --onnx=model.onnx --verbose

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!