DeepStream ONNX model error

```
0:00:00.443491425 1484 0x561082a6af30 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1642> [UID = 1]: Trying to create engine from model files

Input filename: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/huinengtong/cross.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.1
Domain:
Model version: 0
Doc string:

ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:00.918675080 1484 0x561082a6af30 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1662> [UID = 1]: build engine file failed
```

How can I adapt the ONNX model so that it parses?

config:

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/huinengtong/cross.onnx
batch-size=1
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1

network-type=0
cluster-mode=2
model-color-format=0
maintain-aspect-ratio=1

[class-attrs-all]
nms-iou-threshold=0.3
pre-cluster-threshold=0.2
```
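As a side note on the config above: `net-scale-factor` is the per-pixel multiplier nvinfer applies before inference, and the value used here is approximately 1/255, which maps 8-bit pixel values into [0, 1]. A quick check:

```python
# net-scale-factor in the [property] group rescales 8-bit pixels.
# The config value 0.0039215697906911373 is (approximately) 1/255,
# i.e. pixel values are normalized into the [0, 1] range.
scale = 1.0 / 255.0
print(scale)  # close to the configured 0.0039215697906911373
```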

**• Hardware Platform (Jetson / GPU)** 2080 Ti
• DeepStream Version 5.0 Preview
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7
**• NVIDIA GPU Driver Version (valid for GPU only)** 440

Hi,

It looks like you are using a customized ONNX model.
Could you first check whether the model is fully supported by TensorRT?

```
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model/path]
```

Thanks.