Engine generated with onnx2trt reports an error when used in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 4.5.1
• JetPack Version (valid for Jetson only): JetPack 4.5.1
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): The engine file generated by onnx2trt cannot run on DeepStream
I trained the model with the YOLOv8 framework, exported it with YOLOv8's export code to convert the .pt model to ONNX format, and then used onnx2trt to generate the engine file. Finally, I loaded the engine file in DeepStream, which reported an error.
Running the command "deepstream-test5-app -c sourcetest1.txt" produces the following output:
Unknown or legacy key specified 'symmetric-padding' for group [property]
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:08.784969216 11321 0x31ff6070 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/rongtong/model/202405/yolov8/best.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x1280x1280
1 OUTPUT kFLOAT output0 11x33600

0:00:08.785277312 11321 0x31ff6070 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/rongtong/model/202405/yolov8/best.engine
0:00:08.801270752 11321 0x31ff6070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/rongtong/model/202405/yolov8/config_infer_primary.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
Thu May 23 09:21:15 2024
**PERF: 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 279
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 279
** INFO: <bus_callback:167>: Pipeline running

NvMMLiteOpen : Block : BlockType = 8
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
Segmentation fault (core dumped)
yolov8n.zip (5.7 MB)
DeepStream did load the engine file, but it crashed while pulling the video stream. How should I adjust the code so that the engine file can be used by DeepStream to pull video streams and perform object detection and recognition?
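As a sanity check on the engine info printed in the log above, the `output0` shape of 11x33600 is exactly what a standard YOLOv8 detection head produces for a 1280x1280 input: 33600 is the total number of predictions across the three detection strides (8, 16, 32), and the first dimension is 4 box coordinates plus one score per class, which would imply 7 classes (the class count is inferred here, not stated in the post):

```python
# Check that output0's shape 11x33600 matches a YOLOv8 head at 1280x1280.
input_size = 1280
strides = (8, 16, 32)  # YOLOv8 predicts at three feature-map scales

# One prediction per cell of each stride's feature map:
# 160*160 + 80*80 + 40*40 = 25600 + 6400 + 1600 = 33600
num_preds = sum((input_size // s) ** 2 for s in strides)
print(num_preds)  # 33600

# Each prediction is 4 box coordinates + one score per class,
# so a channel dimension of 11 implies 11 - 4 = 7 classes.
channels = 11
num_classes = channels - 4
print(num_classes)  # 7
```

So the engine itself deserializes with a plausible YOLOv8 layout; the crash happens later, in the decode/streaming part of the pipeline.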

You can try converting *.pt to *.onnx, and then let DeepStream convert the model format automatically.

If your model contains no operators that TensorRT does not support, the conversion will usually succeed.
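Concretely, the nvinfer config would point at the ONNX file instead of a prebuilt engine, so nvinfer builds and serializes the engine itself on first run. A minimal sketch of the relevant `[property]` keys, with paths, class count, and parser settings as illustrative values following the DeepStream-Yolo sample:

```
[property]
gpu-id=0
# Point nvinfer at the ONNX model; the engine is built and
# cached automatically on the first run (paths illustrative).
onnx-file=best.onnx
model-engine-file=best.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=7
# Custom YOLO output parser from the DeepStream-Yolo project
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```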

Here is a similar project you can refer to.

I have tried the YOLOv8 conversion code in DeepStream-Yolo, but the following error occurred. That is why I tried using the onnx2trt command to produce the engine instead.

onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

ERROR: [TRT]: Repeated layer name: /0/model.22/Split_1 (layers must have distinct names)
ERROR: [TRT]: Network validation failed.
Building engine failed

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.285129120 8701 0x26bb2c70 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
0:00:05.285209568 8701 0x26bb2c70 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 1]: build backend context failed
0:00:05.285511392 8701 0x26bb2c70 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 1]: generate backend failed, check config file settings
0:00:05.285632448 8701 0x26bb2c70 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:05.285671680 8701 0x26bb2c70 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start:<primary_gie> error: Config file path: /home/rongtong/model/202405/yolov8/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:1451: Failed to set pipeline to PAUSED
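The fatal error in that log is the duplicated layer name (`/0/model.22/Split_1`): the ONNX parser in TensorRT 7.1 requires every layer name to be unique, and the graph produced by a recent YOLOv8 export violates that on this old parser. (The INT64-to-INT32 warnings above it are usually harmless.) The practical fix is re-exporting with DeepStream-Yolo's export script or moving to a newer TensorRT; purely to illustrate the uniqueness rule being enforced, deduplicating names looks like this (a toy sketch, not a patch for the actual engine build):

```python
def dedupe_layer_names(names):
    """Give every layer a distinct name by suffixing repeats,
    mirroring the uniqueness rule TensorRT enforces."""
    counts = {}
    unique = []
    for name in names:
        n = counts.get(name, 0)
        counts[name] = n + 1
        unique.append(name if n == 0 else f"{name}_{n}")
    return unique

layers = ["/0/model.22/Split", "/0/model.22/Split_1", "/0/model.22/Split_1"]
print(dedupe_layer_names(layers))
# -> ['/0/model.22/Split', '/0/model.22/Split_1', '/0/model.22/Split_1_1']
```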

I have tried your yolov8n.pt with DeepStream-Yolo and ultralytics, and it works fine.

I noticed you are using a very old version. Since you are on a Xavier NX, you can upgrade to DS-6.3.

You can use SDK Manager to flash the device:

https://docs.nvidia.com/sdk-manager/index.html

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.