How to use a custom object detector (i.e. nvinfer) in ds-example

I am using a YOLOv5-SPP model. I converted it to TensorRT using this repo and have also done the bounding-box parsing, but DeepStream inference still gives wrong/strange output. My code is here, you can take a look. An alternative would be to call my infer function inside ds-example to process a buffer and attach the detected-object metadata.

Any suggestions would be appreciated.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7
• NVIDIA GPU Driver Version (valid for GPU only)


There is a guide on deploying YOLOv4 to DeepStream that could serve as a reference for you.

However, the way to parse the model output for YOLOv5 may be very different. Here are some hints that may be helpful.

Hint1: Contents of output may be different

In the guide for YOLOv4, the output contains two types of information: 1) x1, y1, x2, y2 of the bounding boxes and 2) the confidence of each bounding box for every class.
The YOLOv5 output may include different types of information. For example, it may include three types of data: bounding boxes, location confidences, and class confidences.

Hint2: Shape of output may be different

[batch_size, num_boxes, 1, 4] and [batch_size, num_boxes, num_classes] are the output shapes of YOLOv4. But the YOLOv5 output may be [batch_size, num_boxes, 5 + num_classes], or it may be split into two tensors: [batch_size, num_boxes, 5] and [batch_size, num_boxes, num_classes].
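To make the shape difference concrete, here is a minimal sketch of indexing into a flattened `[batch_size, num_boxes, 5 + num_classes]` buffer, the way a custom parser receives a layer as a flat float array. The struct and helper names are illustrative assumptions, not part of the actual YOLOv5 export.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical view of one row of a [batch, num_boxes, 5 + num_classes] tensor.
struct Detection {
    float cx, cy, w, h, obj;   // box center, size, objectness (assumed layout)
    const float* classScores;  // pointer to the num_classes class scores
};

// Return the box at boxIdx from a flattened single-batch output buffer.
inline Detection boxAt(const float* out, std::size_t numClasses, std::size_t boxIdx)
{
    const std::size_t stride = 5 + numClasses;   // floats per box
    const float* p = out + boxIdx * stride;
    return Detection{p[0], p[1], p[2], p[3], p[4], p + 5};
}
```

The same indexing idea applies if the output is split into two tensors; only the per-box stride changes.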

Hint3: Values of output may be different

All coordinates (x1, y1, x2, y2) from YOLOv4 are normalized to the range [0.0, 1.0], but the coordinates from YOLOv5 may not be normalized.
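If the coordinates do come out normalized, they need to be scaled back to the network input resolution before being attached as metadata. A small sketch (the clamping to the image bounds is an assumption added as a safety net):

```cpp
#include <algorithm>
#include <cassert>

// Scale a normalized coordinate in [0, 1] to pixels for a given dimension,
// clamping to the valid range in case the model emits slightly out-of-range values.
inline float denorm(float v, float dim)
{
    return std::min(std::max(v * dim, 0.0f), dim);
}
```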

Hint4: Bounding box format may be different

The bounding box for YOLOv4 is [x1, y1, x2, y2], where (x1, y1) is the top-left corner of the box and (x2, y2) is the bottom-right corner.
But a bounding box from YOLOv5 could instead be [x-center, y-center, width, height].
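Converting between the two formats is a one-liner per corner. A minimal sketch, assuming the center format described above:

```cpp
#include <cassert>

struct BoxXYXY { float x1, y1, x2, y2; };

// Convert [x-center, y-center, width, height] to corner form [x1, y1, x2, y2],
// which is what corner-based parsing code (as in the YOLOv4 guide) expects.
inline BoxXYXY centerToCorner(float cx, float cy, float w, float h)
{
    return BoxXYXY{cx - w / 2.0f, cy - h / 2.0f,
                   cx + w / 2.0f, cy + h / 2.0f};
}
```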


@hidayat.rhman Did you manage to run YOLOv5 in DeepStream successfully? I am working on the same issue, but DeepStream cannot load the engine file converted with tensorrtx. Could you share some of your experience? Thanks

Hi @xh2261.

Well, I was able to run it, but for certain reasons I am unable to share my code. Basically, there is some issue in the YOLOv5 detection kernel that I could not track down, and it gives wrong results. So instead of calling the detection kernel (.cu file), I parsed the output of the following layers: conv26, conv22, conv18

ITensor* inputTensors_yolo[] = {conv26->getOutput(0), conv22->getOutput(0), conv18->getOutput(0)};

and in nvdsparser I parsed the output of these three layers and called the detection kernel manually.
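For anyone trying the same route: "calling the detection kernel manually" in the parser means applying the YOLOv5 decode to the raw head outputs yourself. Below is a hedged sketch of decoding one anchor at one grid cell, using YOLOv5's published decode (sigmoid-scaled offsets and squared sigmoid box sizes). The anchor sizes, strides, and memory layout here are illustrative assumptions; they are not taken from the repo above.

```cpp
#include <cassert>
#include <cmath>

static inline float sigmoidf(float x) { return 1.0f / (1.0f + std::exp(-x)); }

struct DecodedBox { float cx, cy, w, h, conf; };

// Decode one anchor at one grid cell of a YOLOv5 head (e.g. the conv18/conv22/
// conv26 outputs). `raw` points at the 5 + num_classes floats for this cell.
DecodedBox decodeCell(const float* raw, int gridX, int gridY,
                      float anchorW, float anchorH, float stride)
{
    DecodedBox b;
    // YOLOv5 decode: offsets in (-0.5, 1.5) around the cell, sizes in (0, 4)*anchor.
    b.cx = (sigmoidf(raw[0]) * 2.0f - 0.5f + gridX) * stride;
    b.cy = (sigmoidf(raw[1]) * 2.0f - 0.5f + gridY) * stride;
    float sw = sigmoidf(raw[2]) * 2.0f;
    float sh = sigmoidf(raw[3]) * 2.0f;
    b.w = sw * sw * anchorW;
    b.h = sh * sh * anchorH;
    b.conf = sigmoidf(raw[4]);  // objectness; multiply by class score downstream
    return b;
}
```

Note this differs from the YOLOv3/v4 decode (which uses exp() for box sizes), which is one more reason a v4 parser gives wrong boxes on v5 outputs.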


I found a workaround after a few days.

I have updated my repo to describe how to use YOLOv5 in DeepStream 5.0.


Very useful: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 5.1 configuration for YOLO models