Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** NVIDIA RTX A4000
**• DeepStream Version** 6.1.1 (docker images)
**• NVIDIA GPU Driver Version (valid for GPU only)** 12.0
I loaded the YOLOv8 model into Triton Inference Server in TensorRT plan format (tensorrt_plan) and am trying to run inference with DeepStream 6.1.1, but the detections are not correct.
config_infer_triton_yolov8.txt (1.0 KB)
```json
{
  "name": "pretrain_yolov8",
  "versions": ["1"],
  "platform": "tensorrt_plan",
  "inputs": [
    {
      "name": "input",
      "datatype": "FP32",
      "shape": [-1, 3, 640, 640]
    }
  ],
  "outputs": [
    {
      "name": "boxes",
      "datatype": "FP32",
      "shape": [-1, 8400, 4]
    },
    {
      "name": "scores",
      "datatype": "FP32",
      "shape": [-1, 8400, 1]
    },
    {
      "name": "classes",
      "datatype": "FP32",
      "shape": [-1, 8400, 1]
    }
  ]
}
```
The above is the model's input and output format as reported by Triton.
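The attached config file is not shown inline; for context, a Gst-nvinferserver config for a Triton model like this one would look roughly as below. This is only a sketch: the paths, scale factor, and class count are illustrative assumptions, and the exact fields should be checked against the attached `config_infer_triton_yolov8.txt`.

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "pretrain_yolov8"
      version: -1
      model_repo {
        root: "/path/to/model_repo"   # assumption: local model repository
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize { scale_factor: 0.0039215686 }  # 1/255, typical for YOLO
  }
  postprocess {
    labelfile_path: "labels.txt"
    detection {
      num_detected_classes: 80              # assumption: COCO classes
      custom_parse_bbox_func: "NvDsInferParseYolo"
    }
  }
  custom_lib {
    path: "/path/to/libnvdsinfer_custom_impl_Yolo.so"  # assumption
  }
}
```

If the output tensor names or layout expected by the custom parser do not match the `boxes` / `scores` / `classes` tensors above, the detections will come out wrong even though inference itself succeeds.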
The bounding-box parser I am using is from GitHub - marcoslucianops/DeepStream-Yolo (NVIDIA DeepStream SDK implementation for YOLO models).
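To check whether the model outputs themselves are sane before blaming the DeepStream parser, the post-processing for these three tensors can be sketched in Python. The function name and the assumption that `boxes` holds `[x1, y1, x2, y2]` in input-image coordinates are mine, not taken from the DeepStream-Yolo parser:

```python
import numpy as np

def parse_yolov8_outputs(boxes, scores, classes, conf_threshold=0.25):
    """Filter raw Triton outputs by confidence.

    boxes:   (8400, 4) array, assumed [x1, y1, x2, y2] per candidate
    scores:  (8400, 1) confidence per candidate
    classes: (8400, 1) class index per candidate
    Returns a list of (box, score, class_id) tuples above the threshold.
    """
    scores = scores.reshape(-1)
    classes = classes.reshape(-1).astype(int)
    keep = np.flatnonzero(scores >= conf_threshold)
    return [
        (boxes[i].tolist(), float(scores[i]), int(classes[i]))
        for i in keep
    ]

# Tiny synthetic example: three candidates, one above the threshold
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 2, 2]], dtype=np.float32)
scores = np.array([[0.9], [0.1], [0.2]], dtype=np.float32)
classes = np.array([[0], [1], [2]], dtype=np.float32)
dets = parse_yolov8_outputs(boxes, scores, classes)
print(len(dets))  # 1 detection survives the threshold
```

Running this against a direct Triton client request (outside DeepStream) and comparing the boxes with what nvinferserver draws would show whether the problem is in the model/engine or in the parsing/preprocessing configuration.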
