• Hardware Platform dGPU
• DeepStream Version 6.3
• TensorRT Version 188.8.131.52
• NVIDIA GPU Driver Version 535.129.03
I come here with a frustrating problem while working on my thesis project. I'm new to the DeepStream SDK, but I wanted to build a DeepStream Python-bindings USB-camera pipeline that detects face masks using my custom YOLOv8 model, converted to ONNX and then to a TensorRT engine.
My problem is that the DeepStream app detects everything it can, from lamps to mugs, all of which get assigned the same class with really high confidence (0.99-1.0).
I thought it was a problem with my model, but after running inference with this repo:
GitHub - triple-Mu/YOLOv8-TensorRT: YOLOv8 using TensorRT accelerate! and with the Roboflow inference system, I discovered the model works perfectly fine.
Here are the model's inputs and outputs from Netron:
For parsing I use a custom C++ parser. Pipeline:
v4l2src → nvvideoconvert → mux → nvinfer → nvvideoconvert → nvosd → video-renderer
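For reference, here is a minimal Python sketch of the decode logic my custom parser is supposed to implement. It assumes the output layout of the triple-Mu export (EfficientNMS plugin: `num_dets`, `bboxes` as x1/y1/x2/y2, `scores`, `labels`); the function name and the flat-list buffers are just illustrative, not the actual C++ code.

```python
# Hypothetical sketch of the decode step a NvDsInferParseCustomYoloV8
# parser must perform, assuming the EfficientNMS output layout:
#   num_dets: [1] int, bboxes: [max_det*4] float (x1,y1,x2,y2),
#   scores: [max_det] float, labels: [max_det] int

def parse_yolov8_trt_outputs(num_dets, bboxes, scores, labels,
                             conf_threshold=0.6):
    """Decode flat EfficientNMS buffers into a list of detections."""
    detections = []
    n = int(num_dets[0])  # only the first n slots hold valid detections
    for i in range(n):
        score = float(scores[i])
        if score < conf_threshold:
            continue
        x1, y1, x2, y2 = bboxes[4 * i: 4 * i + 4]
        detections.append({
            "class_id": int(labels[i]),
            "confidence": score,
            # DeepStream's NvDsInferObjectDetectionInfo wants
            # left/top/width/height, not corner coordinates
            "left": x1, "top": y1,
            "width": x2 - x1, "height": y2 - y1,
        })
    return detections

# Example: 2 valid slots in a max_det=4 buffer; only the 0.95-score
# detection survives the 0.6 threshold
dets = parse_yolov8_trt_outputs(
    num_dets=[2],
    bboxes=[10, 20, 110, 220,  30, 40, 90, 140,  0, 0, 0, 0,  0, 0, 0, 0],
    scores=[0.95, 0.40, 0.0, 0.0],
    labels=[0, 2, 0, 0],
)
print(dets)
```

If the C++ parser instead reads all `max_det` slots (ignoring `num_dets`), or treats `bboxes` as center/width/height, garbage boxes with stale buffer contents would appear, which may be related to what I'm seeing.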
My nvinfer config:

```
[property]
gpu-id=0
net-scale-factor=1
# paths and func names
onnx-file=../models/onnx/best.onnx
model-engine-file=../models/trt_engines/best.engine
labelfile-path=../models/labels.txt
custom-lib-path=../src/nvdsinfer_custom_impl_YoloV8/libnvdsinfer_custom_impl_YoloV8.so
parse-bbox-func-name=NvDsInferParseCustomYoloV8
# output layers of YOLOv8
output-blob-names=num_dets;bboxes;scores;labels
batch-size=1
interval=0
# use 0 for FP32, 1 for INT8, and 2 for FP16 precision
network-mode=2
# YOLOv8 has a specific number of classes it can detect, so update this to the correct number
num-detected-classes=3
# gie-unique-id should be unique for each nvinfer element in the pipeline
gie-unique-id=1

[class-attrs-all]
pre-cluster-threshold=0.6
group-threshold=1
```

Where could those detected objects be coming from? Thanks for your help.