Frustrating Problem with Custom YOLOv8 on DeepStream

• Hardware Platform dGPU
• DeepStream Version 6.3
• TensorRT Version
• NVIDIA GPU Driver Version 535.129.03
• Question

I come here with a frustrating problem while working on my thesis project. I'm new to the DeepStream SDK, but I wanted to build a DeepStream Python-bindings USB-camera pipeline that detects face masks using my custom YOLOv8 model, converted to ONNX and then to a TensorRT engine.

My problem is that the DeepStream app detects everything it can, from lamps to mugs, all with the same class assigned and really high confidence (0.99-1).
I thought it was a problem with my model, but after running inference with this repo:
GitHub - triple-Mu/YOLOv8-TensorRT: YOLOv8 using TensorRT accelerate! and with the Roboflow inference system, I discovered the model works perfectly fine.

Here are inputs and outputs from Netron:
name: images
tensor: float32[1,3,640,640]

name: num_dets
tensor: int32[1,1]

name: bboxes
tensor: float32[1,100,4]

name: scores
tensor: float32[1,100]

name: labels
tensor: int32[1,100]

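For reference, the parsing logic these end-to-end (NMS-included) outputs require can be sketched in Python/NumPy. The actual parser in this thread is C++; the function name below is mine, and the key point is that only the first `num_dets` entries of the padded tensors are valid detections, so reading all 100 slots produces exactly the kind of random high-confidence boxes described here:

```python
import numpy as np

def parse_nms_outputs(num_dets, bboxes, scores, labels, conf_threshold=0.25):
    """Parse end-to-end YOLOv8 output tensors (shapes as in the Netron dump):
      num_dets: int32[1,1], bboxes: float32[1,100,4],
      scores: float32[1,100], labels: int32[1,100]
    Only the first num_dets entries are valid; the rest is padding.
    Returns a list of (class_id, score, (x1, y1, x2, y2)) tuples.
    """
    n = int(num_dets[0, 0])
    dets = []
    for i in range(n):  # never iterate past num_dets
        if scores[0, i] < conf_threshold:
            continue
        x1, y1, x2, y2 = bboxes[0, i]
        dets.append((int(labels[0, i]), float(scores[0, i]),
                     (float(x1), float(y1), float(x2), float(y2))))
    return dets
```

A C++ custom parser for nvinfer needs to apply the same rule before filling its `NvDsInferObjectDetectionInfo` list.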
For parsing I use a custom C++ parser. Pipeline:
v4l2src → nvvideoconvert → mux → nvinfer → nvvideoconvert → nvosd → video-renderer
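In a DeepStream Python app, a pipeline like the one above can be assembled from a gst-launch-style description string and handed to `Gst.parse_launch()`. The sketch below only composes that string; the device, caps, sink element, and config filename are placeholders I chose, not values from this thread:

```python
def build_pipeline_description(device="/dev/video0",
                               config="config_infer_yolov8.txt",
                               width=1280, height=720):
    """Compose a gst-launch-style description of the pipeline above.

    The result can be passed to Gst.parse_launch() in a DeepStream
    Python app. Device, caps, and the config filename are placeholders.
    """
    return (
        # muxer -> inference -> OSD -> display branch
        f"nvstreammux name=mux batch-size=1 width={width} height={height} ! "
        f"nvinfer config-file-path={config} ! "
        "nvvideoconvert ! nvdsosd ! nveglglessink "
        # camera source branch feeding the muxer's request pad
        f"v4l2src device={device} ! nvvideoconvert ! "
        "video/x-raw(memory:NVMM) ! mux.sink_0"
    )
```

Note that nvstreammux links via a request pad (`mux.sink_0`), which is why the source branch is written after the display branch in the description.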



# paths and func names
# onnx-file=../models/onnx/best.onnx

# output layers of YOLOv8


# use 0 for FP32, 1 for INT8, and 2 for FP16 precision.

# YOLOv8 has a specific number of classes it can detect, so update this to the correct number.

# gie-unique-id should be unique for each nvinfer element in the pipeline


Where could those detected objects be coming from? Thanks for your help.

Do you mean all objects have the correct bounding boxes but the wrong labels? Are all the labels the same?

What do you mean by this question? The nvinfer plugin does the inference and postprocessing, then outputs these objects.

No, the bboxes appearing on the screen are close to random; they land on objects all around the room, pointing at edges of doors, the lamp, etc.

I've created another inference app, which is not connected to DeepStream or my custom parser, and the results are similar. I've also tried testing whether the problem is on the model side, and I still get multiple bounding boxes jumping around the screen.

I think it might be something in the PyTorch → ONNX → TensorRT conversion. I'm trying to change the outputs to [bboxes, conf, class_id], but maybe that is not working correctly.

I'm attaching a screenshot of the output.

(The "unknown" label appears because I did not update labels.txt after changing the model.)

I had a similar problem when I was running YOLOv8 in Triton. The point was that the preprocessing and postprocessing need to match the Ultralytics sources exactly. Try it; it helped me.
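The preprocessing point above usually comes down to letterboxing: Ultralytics resizes with the aspect ratio preserved and pads the rest, rather than stretching to 640×640. A NumPy-only sketch of that step (Ultralytics itself uses `cv2.resize` with linear interpolation; nearest-neighbour is used here to stay dependency-free):

```python
import numpy as np

def letterbox(img, new_shape=(640, 640), color=114):
    """Resize keeping aspect ratio, then pad to new_shape, as Ultralytics does.

    img: HxWxC uint8 array. Returns (padded image, scale ratio, (dw, dh) pad).
    """
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)   # scale to fit inside target
    nh, nw = int(round(h * r)), int(round(w * r))
    # nearest-neighbour resize via index mapping (cv2-free approximation)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    dh, dw = (new_shape[0] - nh) / 2, (new_shape[1] - nw) / 2
    out = np.full((new_shape[0], new_shape[1], img.shape[2]), color,
                  dtype=img.dtype)
    top, left = int(round(dh - 0.1)), int(round(dw - 0.1))
    out[top:top + nh, left:left + nw] = resized   # paste onto gray canvas
    return out, r, (dw, dh)
```

The same ratio and padding must then be undone on the output boxes; a mismatch here produces boxes in the wrong places, much like the symptom in this thread.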

Thanks for the response.
I managed to solve my problem by changing the outputs of my YOLOv8 model with GraphSurgeon and changing the resolution of the source. It works perfectly now :)

Glad to know you fixed it, thanks for the update! Is this still a DeepStream issue that needs support? Thanks!