TRT Fails to parse ONNX Model (yolov8 Segmentation)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): NVIDIA Jetson Xavier NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi

I tried to serialize a YOLOv8 segmentation network, but it fails.

I converted the YOLOv8 model following the instructions from GitHub - marcoslucianops/DeepStream-Yolo-Seg: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models.
When trying to generate the TensorRT engine, it fails with the following error.

I'd also like to mention that I have used many YOLOv8 object-detection models and never had an issue while serializing.
The problem only occurs with the segmentation model.

nvidia@ubuntu:/etc/stereo_camera/dnn/DeepStream-Yolo-Seg$ deepstream-app -c deepstream_app_config.txt 

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /etc/villiger/gripper/seg_dnn/yolov8s-seg.onnx_b1_gpu0_fp32.engine open error
0:00:03.498454304  7679   0x7f300022c0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/etc/villiger/gripper/seg_dnn/yolov8s-seg.onnx_b1_gpu0_fp32.engine failed
0:00:03.521018368  7679   0x7f300022c0 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/etc/villiger/gripper/seg_dnn/yolov8s-seg.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.521485568  7679   0x7f300022c0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 449 [RoiAlign -> "/1/RoiAlign_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.22/proto/cv3/act/Mul_output_0"
input: "/1/Reshape_1_output_0"
input: "/1/Gather_1_output_0"
output: "/1/RoiAlign_output_0"
name: "/1/RoiAlign"
op_type: "RoiAlign"
attribute {
  name: "coordinate_transformation_mode"
  s: "half_pixel"
  type: STRING
}
attribute {
  name: "mode"
  s: "avg"
  type: STRING
}
attribute {
  name: "output_height"
  i: 320
  type: INT
}
attribute {
  name: "output_width"
  i: 320
  type: INT
}
attribute {
  name: "sampling_ratio"
  i: 0
  type: INT
}
attribute {
  name: "spatial_scale"
  f: 0.25
  type: FLOAT
}

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:04.630773120  7679   0x7f300022c0 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.656896928  7679   0x7f300022c0 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.657157920  7679   0x7f300022c0 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.657713088  7679   0x7f300022c0 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:04.657768064  7679   0x7f300022c0 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /etc/stereo_camera/dnn/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /etc/stereo_camera/dnn/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
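For reference, the decisive line in the log above is the `importFallbackPluginImporter` assertion: the ONNX parser does not natively support the `RoiAlign` node, falls back to looking up a TensorRT plugin of that name, and finds none registered in this TensorRT build. A small sketch (assuming only the log text quoted above) of pulling the failing node index and op type out of such a parser error:

```python
import re

# Excerpt copied from the TensorRT build log above.
log = (
    'ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 449 '
    '[RoiAlign -> "/1/RoiAlign_output_0"]:'
)

# The ONNX parser reports failures in a fixed format:
#   "While parsing node number <N> [<OpType> -> ...]"
match = re.search(r'parsing node number (\d+) \[(\w+)', log)
if match:
    node_number, op_type = int(match.group(1)), match.group(2)
    print(f"Unsupported op: {op_type} (node {node_number})")
```

So the engine build is not failing on weights or config, but on a single op type that this TensorRT version cannot import.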

The topic was previously discussed here, but unfortunately was never solved.

The moderator suggested updating the JetPack version:

Hi,
TensorRT 8.2 has been released for a while.
Would you mind giving JetPack 5 a try?
Thanks.

But TensorRT 8.2 was already in use, in fact 8.2.1:

[09/28/2023-16:24:31] [I] TensorRT version: 8.2.1

Meaning, JetPack 5.1.2

Unfortunately I was not able to follow up on this back then because of work commitments, but I'd like to solve it now.

I appreciate any help with this topic.

Cheers, Mike

I have tried on my Jetson Orin with JetPack 6.0, and it works normally. Maybe your JetPack version is too low. You can check with the owner of the project to see which version he is using for this model.
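If the difference really is the software version, it likely comes down to plugin availability: newer TensorRT releases (as shipped with recent JetPack versions) register an ROIAlign plugin that the ONNX parser can fall back to, while TensorRT 8.2 does not. A quick way to see what the installed TensorRT actually registers is to query the plugin registry. This is a sketch only: it is guarded so it degrades gracefully when the `tensorrt` Python bindings are not installed, and the exact plugin name may differ between TensorRT versions.

```python
def roialign_plugin_names():
    """Return registered TensorRT plugin names containing 'roialign'
    (case-insensitive), or None if the tensorrt bindings are missing."""
    try:
        import tensorrt as trt
    except ImportError:
        return None
    logger = trt.Logger(trt.Logger.WARNING)
    # Ensure the standard nvinfer plugins are registered first.
    trt.init_libnvinfer_plugins(logger, "")
    registry = trt.get_plugin_registry()
    return [c.name for c in registry.plugin_creator_list
            if "roialign" in c.name.lower()]

if __name__ == "__main__":
    names = roialign_plugin_names()
    print("tensorrt not installed" if names is None else names)
```

An empty list on the Xavier NX would confirm that the `RoiAlign` fallback has nothing to bind to on that TensorRT version.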

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks