Unable to detect objects with custom single-class model in DeepStream pipeline

Hello NVIDIA DeepStream community,

I am working on a deployment using an NVIDIA Jetson platform and the DeepStream SDK, and I’m encountering an issue where my model loads but produces no detections in the pipeline. Below are the full details of my setup and what I have done so far. Any help or pointers would be greatly appreciated.

Hardware Platform (Jetson / GPU):
Jetson Orin Nano Super

DeepStream Version:
DeepStream SDK 7.1

JetPack Version (valid for Jetson only):
JetPack / L4T R36 (JetPack 6.x)

TensorRT Version:
TensorRT 10.3 (CUDA 12.6)

Issue Type:
Question / troubleshooting (model loads but no object detections)

How to reproduce the issue:

1. I trained a custom object-detection model (a single class) using Ultralytics YOLOv8.

2. I exported the model to ONNX, then built a TensorRT engine.

3. I set up the config files (config_infer_primary_…txt and deepstream_app_config.txt) to point to the engine, included num-detected-classes=1, set output-blob-names=output0, and used parse-bbox-func-name=NvDsInferParseYolo.

4. On running deepstream-app -c …, the engine loads successfully (as seen in the logs), but no bounding boxes ever appear on the display, and I never see objectList size > 0 in the logs.

5. I can verify with a separate ONNX inference script that the model does produce valid bounding boxes on test images.

6. I have tried a very low threshold (pre-cluster-threshold=0.01), changed model-color-format, reduced the streammux resolution, and verified file paths and permissions, but there are still no detections.
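For reference, here is a trimmed version of my config_infer_primary file. The file names and paths below are placeholders rather than my actual ones; only the keys already mentioned above are the ones I am unsure about:

```ini
[property]
gpu-id=0
# Placeholder file names; substitute the real ONNX/engine/label paths.
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=1
gie-unique-id=1
output-blob-names=output0
parse-bbox-func-name=NvDsInferParseYolo
# Placeholder path; the library name depends on how the custom parser was built.
custom-lib-path=./libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
# Deliberately low while debugging the "no detections" issue.
pre-cluster-threshold=0.01
```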

Requirement details / question:
I would appreciate help with one or more of the following:

1. Confirming whether the parse-bbox function name (NvDsInferParseYolo) is correct for a YOLOv8 model whose output has shape 1×5×N (4 box coordinates plus a score, single class).

2. Guidance on setting output-blob-names correctly when the TensorRT engine binding shows output0.

3. Advice on rebuilding or selecting the correct “DeepStream-Yolo” parser library (custom lib .so) compatible with the YOLOv8 single-class format on Jetson with CUDA 12.6.

4. Any known pitfalls when deploying a single-class YOLOv8 detection model in DeepStream on Jetson, especially for this “no detections” scenario.
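To make the output layout concrete, this is roughly what my standalone verification script does to decode the raw 1×5×N output (simplified sketch: synthetic data, no NMS, no letterbox rescaling, threshold value is illustrative):

```python
import numpy as np

def decode_yolov8_single_class(output, conf_thres=0.25):
    """Decode a single-class YOLOv8 head of shape (1, 5, N).

    Rows of the 5-axis are (cx, cy, w, h, score). Returns an (M, 5)
    array of (x1, y1, x2, y2, score) above conf_thres. NMS omitted.
    """
    preds = output[0].T                 # (N, 5): one prediction per row
    preds = preds[preds[:, 4] >= conf_thres]
    cx, cy, w, h, score = preds.T
    return np.stack([cx - w / 2, cy - h / 2,
                     cx + w / 2, cy + h / 2, score], axis=1)

# Synthetic check: 3 candidate boxes, one below threshold.
out = np.zeros((1, 5, 3), dtype=np.float32)
out[0, :, 0] = [100, 100, 50, 40, 0.90]
out[0, :, 1] = [200, 200, 30, 30, 0.10]   # filtered out
out[0, :, 2] = [320, 240, 80, 60, 0.55]
boxes = decode_yolov8_single_class(out, conf_thres=0.25)
print(boxes.shape)  # (2, 5)
```

This decodes fine offline, which is why I suspect the mismatch is between this layout and what NvDsInferParseYolo expects.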

Thank you in advance for your support.

The DeepStream nvinfer configuration must be aligned with your model and its training parameters, and the postprocessing must match the model's output format. Please consult the author of the Ultralytics YOLOv8 model for the details of its preprocessing and postprocessing.

The only sample we can provide is deepstream_tools/yolo_deepstream/deepstream_yolo at DS_7.1 · NVIDIA-AI-IOT/deepstream_tools

Hi Fiona,
Thanks a lot for your guidance on the DeepStream nvinfer configuration. I followed the example in the “yolo_deepstream/deepstream_yolo” repo you referenced, and it made all the difference. I admit I struggled quite a bit, especially when I initially skipped the transpose added by the append_transpose_yolov8_v9.py step, but it is now up and running. Thanks again for your help!
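For anyone else landing here: the transpose step swaps the last two axes of the model output so the parser gets one prediction per row. The actual script edits the ONNX graph itself; this NumPy sketch (with an assumed N=8400 for a 640×640 input) only illustrates the shape change:

```python
import numpy as np

# YOLOv8 exports a (1, 5, N) head: the 5-axis holds (cx, cy, w, h, score).
raw = np.random.rand(1, 5, 8400).astype(np.float32)

# The DeepStream sample's parser expects (1, N, 5): one prediction per row.
transposed = raw.transpose(0, 2, 1)
print(transposed.shape)  # (1, 8400, 5)
```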