DeepStream YOLOv8 Triton Server: loading the model plan

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** NVIDIA RTX A4000
**• DeepStream Version** 6.1.1 (Docker image)
**• NVIDIA GPU Driver Version (valid for GPU only)** 12.0
I loaded the YOLOv8 model into Triton Server in TensorRT plan format (tensorrt_plan), and I am trying to run inference with DeepStream 6.1.1, but the detections are not correct.

config_infer_triton_yolov8.txt (1.0 KB)
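The attachment itself is not inlined in the thread. As a point of reference only, a minimal nvinferserver configuration for this kind of setup typically looks like the sketch below; the repository path, label file, class count, thresholds, and parser/library names are assumptions, not the attachment's actual contents.

```
# Hypothetical config_infer_triton_yolov8.txt sketch (nvinferserver,
# protobuf text format). Paths, class count, and function names are
# assumptions; align them with your actual model repository and parser.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "pretrain_yolov8"
      version: -1
      model_repo {
        root: "./triton_model_repo"   # assumed repository layout
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    normalize { scale_factor: 0.0039215686 }  # 1/255 for YOLO-style input
  }
  postprocess {
    labelfile_path: "labels.txt"       # assumed label file
    detection {
      num_detected_classes: 80         # assumed COCO class count
      custom_parse_bbox_func: "NvDsInferParseYolo"  # parser from DeepStream-Yolo
      nms { confidence_threshold: 0.25 iou_threshold: 0.45 topk: 300 }
    }
  }
  custom_lib {
    path: "./libnvdsinfer_custom_impl_Yolo.so"  # built from DeepStream-Yolo
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```

A mismatch between `preprocess` here (color format, scale factor, aspect-ratio handling) and what the model was trained with is a common cause of wrong detections.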

{
  "name": "pretrain_yolov8",
  "versions": ["1"],
  "platform": "tensorrt_plan",
  "inputs": [
    {
      "name": "input",
      "datatype": "FP32",
      "shape": [-1, 3, 640, 640]
    }
  ],
  "outputs": [
    {
      "name": "boxes",
      "datatype": "FP32",
      "shape": [-1, 8400, 4]
    },
    {
      "name": "scores",
      "datatype": "FP32",
      "shape": [-1, 8400, 1]
    },
    {
      "name": "classes",
      "datatype": "FP32",
      "shape": [-1, 8400, 1]
    }
  ]
}
This is the model's input and output format (the Triton model metadata); a matching config.pbtxt sketch follows below.
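For reference, a Triton model configuration (config.pbtxt) consistent with that metadata would look roughly like the sketch below. `max_batch_size` and the instance group are assumptions; note that when `max_batch_size` is set, the batch dimension (the `-1` above) is implicit and omitted from `dims`.

```
# Hypothetical config.pbtxt sketch matching the metadata above.
name: "pretrain_yolov8"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "boxes"
    data_type: TYPE_FP32
    dims: [ 8400, 4 ]
  },
  {
    name: "scores"
    data_type: TYPE_FP32
    dims: [ 8400, 1 ]
  },
  {
    name: "classes"
    data_type: TYPE_FP32
    dims: [ 8400, 1 ]
  }
]
instance_group [ { kind: KIND_GPU count: 1 } ]
```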
For output parsing I am using GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models; a parser sketch follows below.
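The DeepStream-Yolo repository ships its own parser (`NvDsInferParseYolo`). Purely to illustrate how the boxes/scores/classes layout above is typically consumed, here is a minimal C++ sketch of a custom bbox-parsing function; the function name, the xyxy box-coordinate convention, and the clamping logic are assumptions, not the repository's exact code.

```cpp
// Minimal sketch of a custom bbox parser for the boxes/scores/classes
// layout above. Assumes boxes are [x1, y1, x2, y2] in network-input pixels.
#include <algorithm>
#include <cstring>
#include <vector>

#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYoloV8(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    // Locate the three output tensors by the names from the metadata above.
    const NvDsInferLayerInfo* boxes = nullptr;
    const NvDsInferLayerInfo* scores = nullptr;
    const NvDsInferLayerInfo* classes = nullptr;
    for (auto const& layer : outputLayersInfo) {
        if (std::strcmp(layer.layerName, "boxes") == 0) boxes = &layer;
        else if (std::strcmp(layer.layerName, "scores") == 0) scores = &layer;
        else if (std::strcmp(layer.layerName, "classes") == 0) classes = &layer;
    }
    if (!boxes || !scores || !classes) return false;

    const float* b = static_cast<const float*>(boxes->buffer);
    const float* s = static_cast<const float*>(scores->buffer);
    const float* c = static_cast<const float*>(classes->buffer);
    const int numAnchors = boxes->inferDims.d[0];  // 8400 for 640x640 YOLOv8

    for (int i = 0; i < numAnchors; ++i) {
        const float score = s[i];
        const int classId = static_cast<int>(c[i]);
        if (classId < 0 || classId >= (int)detectionParams.numClassesConfigured)
            continue;
        if (score < detectionParams.perClassPreclusterThreshold[classId])
            continue;

        // Clamp to the network input and convert xyxy -> left/top/width/height.
        const float x1 = std::max(b[i * 4 + 0], 0.0f);
        const float y1 = std::max(b[i * 4 + 1], 0.0f);
        const float x2 = std::min(b[i * 4 + 2], (float)networkInfo.width);
        const float y2 = std::min(b[i * 4 + 3], (float)networkInfo.height);

        NvDsInferParseObjectInfo obj;
        obj.left = x1;
        obj.top = y1;
        obj.width = x2 - x1;
        obj.height = y2 - y1;
        obj.detectionConfidence = score;
        obj.classId = classId;
        if (obj.width > 0 && obj.height > 0) objectList.push_back(obj);
    }
    return true;
}

// Validate that the exported symbol matches the expected parser prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);
```

If the model's boxes were exported in a different convention (e.g. center-x, center-y, width, height), this is exactly the kind of mismatch that produces misplaced detections.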

Did you modify the DeepStream-Yolo code? Are you using your own YOLOv8 model or the model from this link?

Please refer to this topic.

Thank you, the problem has been resolved.
