TensorRT YOLOv8 with DeepStream Python

• Hardware Platform (Jetson / GPU) Dual Nvidia A2
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5
• NVIDIA GPU Driver Version (valid for GPU only) 535.154.05
• Issue Type( questions, new requirements, bugs) questions & bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description

Hello there, I want to replace PeopleNet as the PGIE in my system with YOLOv8 as a TensorRT engine. I started out by exporting the original YOLOv8 torch model from the official Ultralytics repo to a dynamic ONNX version using this code:

from ultralytics import YOLO


model = YOLO('yolov8s.pt')  # load an official model
# model = YOLO('path/to/best.pt')  # load a custom model

# Predict with the model
# results = model('https://ultralytics.com/images/bus.jpg', save= True)  # predict on an image

model.export(format="onnx", dynamic=True, imgsz=(640, 640))

Then I used the following command to build the TensorRT engine:

trtexec --saveEngine=./yolov8_s.engine  --fp16 --onnx=./yolov8s.onnx --minShapes=images:1x3x640x640 --optShapes=images:64x3x640x640 --maxShapes=images:64x3x640x640 --shapes=images:64x3x640x640 --workspace=10000

When running the DeepStream pipeline I'm facing this error:

ERROR: infer_postprocess.cpp:623 Could not find output coverage layer for parsing objects
ERROR: infer_postprocess.cpp:1078 Failed to parse bboxes
ERROR: infer_postprocess.cpp:388 detection parsing output tensor data failed, uid:1, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:275 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
0:00:03.940816100     1 0x7f3e20001d80 WARN           nvinferserver gstnvinferserver.cpp:581:gst_nvinfer_server_push_buffer:<primary-inference> error: inference failed with unique-id:1

After some investigation I can tell that my configs need to be adjusted; the problem is that I don't know how to adjust them to work properly with DeepStream 6.3. Please let me know what needs to be modified.
The Triton config file is:

name: "yolov8"
platform: "tensorrt_plan"
max_batch_size: 64
default_model_filename: "yolov8_s.engine"
input [
  {
    name: "images"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ 84, 8400]
  }
]
instance_group [
  {
    kind: KIND_GPU
    count: 1
    gpus: 0
  }
]

And the DeepStream nvinferserver config:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 64
  backend {
    triton {
      model_name: "yolov8"
      version: -1
      model_repo {
        root: "/opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo"
        strict_model_config: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "images"
    maintain_aspect_ratio: 1
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    symmetric_padding: 1
    normalize {
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "/opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/labels_peoplenet.txt"
    detection {
      num_detected_classes: 80
      per_class_params {
        key: 0
        value { pre_threshold: 0.25 }
      }
      nms {
        confidence_threshold:0.2
        topk:100
        iou_threshold:0.45
      }
    }
  }

  extra {
    copy_input_to_host_buffers: false
    output_buffer_pool_size: 128
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 0
}
output_control {
  output_tensor_meta: true
}

Please correct the Triton config. Especially, please check if the names and dims are correct.

Can you please clarify what exactly to do? The names are set as produced in the model summary when exporting, and the dimensions are as stated in the official Ultralytics repo.
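
As a quick sanity check (a minimal sketch, not part of the original thread), the exported ONNX file can be inspected directly to confirm the input/output tensor names and shapes that the Triton config has to match; the filename assumes the yolov8s.onnx produced above:

# Print input/output names and shapes of the exported ONNX model.
import onnx

model = onnx.load("yolov8s.onnx")

def tensor_shape(value_info):
    # Dynamic axes appear as dim_param strings (e.g. "batch"), static ones as ints.
    return [d.dim_param or d.dim_value for d in value_info.type.tensor_type.shape.dim]

for inp in model.graph.input:
    print("input :", inp.name, tensor_shape(inp))
for out in model.graph.output:
    print("output:", out.name, tensor_shape(out))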

The nvinferserver plugin is open source. You can find this error in /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinferserver/infer_postprocess.cpp. It happens because custom_parse_bbox_func is not set in postprocess. Please refer to this YOLO Triton sample.
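
For reference, here is a minimal sketch of how the postprocess and custom_lib sections look once a custom parser is set. The function name NvDsInferParseYolo and the library path below are assumptions (a DeepStream-Yolo style parser, as referenced later in this thread); use whatever symbol name and .so path your build actually produces:

infer_config {
  # ... keep the rest of your existing infer_config ...
  postprocess {
    labelfile_path: "labels_coco80.txt"   # assumption: an 80-line COCO label file, not the PeopleNet labels
    detection {
      num_detected_classes: 80
      custom_parse_bbox_func: "NvDsInferParseYolo"   # symbol exported by the custom parser library
      nms {
        confidence_threshold: 0.25
        topk: 100
        iou_threshold: 0.45
      }
    }
  }
  custom_lib {
    path: "/path/to/libnvdsinfer_custom_impl_Yolo.so"  # assumption: library built from the YOLO parser repo
  }
}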

@fanzh Okay, I've checked and it turns out I did need to make a lot of adjustments. I came across this guide and this repo that everyone is using to integrate YOLOv8 with DeepStream.

I followed all the instructions and managed to eliminate the error mentioned above. However, there's a new problem now: the pipeline unexpectedly stops when I run it, without any errors or issues. It just stops right before it's supposed to start processing the frames and then retries after 30 seconds, since we have a retry feature that triggers when the system stops. I'll attach the two config files and the log files for two different runs, one with PeopleNet and one with YOLO. Please take a look and let me know if there's something else to be adjusted or changed.
config_infer_primary_pplnet.txt (1.2 KB)
config_infer_primary_yoloV8.txt (1.4 KB)
config_pplne.pbtxt.txt (1.4 KB)
config_yolov8.pbtxt.txt (553 Bytes)
log_yolo.txt (13.5 KB)
log_pplnet.txt (237.1 KB)


Please refer to this YOLOv8 Triton sample.

Okay, I will take a look at it, see what I can find, and then get back to you.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

No worries, we had some other issues that needed attention first, so I had to put this on hold for a while, but I'm back looking into it now.

After checking this ticket and, through it, this one (Nvinfer's results are different from nvinferserver - #16 by Fiona.Chen), unfortunately nothing helped me solve my issue. As I clarified before, the issue is not that the model produces incorrect results, as in the ticket mentioned there; the problem is that the system stops unexpectedly without any errors or apparent reason.

Can you try to replicate the behavior and let me know, please? I already shared the configs above in case you need them; let me know if you need anything else from me.

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

  1. You can compare your configuration files with the files in the topic above.
  2. Please narrow down this issue. For example, you can add a probe function on nvinferserver's sink pad to check if nvinferserver gets data (see the sketch after this list), and you can add a log in NvDsInferParseYolo to check if this function is called.
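
A minimal sketch of the suggested probe, assuming a DeepStream Python pipeline where pgie is the nvinferserver element (the variable names here are illustrative):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def pgie_sink_pad_probe(pad, info, user_data):
    # Fires for every buffer reaching the nvinferserver sink pad,
    # which confirms whether the element is actually receiving data.
    buf = info.get_buffer()
    if buf is None:
        print("nvinferserver sink pad: probe called with no buffer")
    else:
        print(f"nvinferserver sink pad: got buffer, pts={buf.pts}")
    return Gst.PadProbeReturn.OK

# After the pipeline is built and linked:
# sink_pad = pgie.get_static_pad("sink")
# sink_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_sink_pad_probe, None)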

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.