YOLOv5 with Deepstream 5.1 having errors with batch-size > 1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU RTX 2060
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) N/A
• TensorRT Version 7.2.2
• NVIDIA GPU Driver Version (valid for GPU only) 460.32/CUDA11.2
• Issue Type (questions, new requirements, bugs)

Hi! I have converted YOLOv5 (https://github.com/ultralytics/yolov5) to ONNX and wrote a custom parser to parse the output.

When I set the model batch size to 1, the output I get looks fine -

When I increase the batch size, I don’t get detections for certain streams -

I am not sure what could cause this issue, since the function set via parse-bbox-func-name is called per stream, per frame.
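
Roughly, my parser has this shape (a simplified sketch rather than my exact code — the function name, the layer index, and the output layout are placeholders; the prototype and the CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE macro come from nvdsinfer_custom_impl.h in DeepStream 5.1):

#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseYoloV5(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    // Called once per frame of the batch; outputLayersInfo is expected to
    // already point at this frame's slice of the output tensor.
    const NvDsInferLayerInfo &layer = outputLayersInfo[0];
    const float *data = reinterpret_cast<const float *>(layer.buffer);

    // ... decode rows (x, y, w, h, objectness, class scores) into objectList ...
    (void)data; (void)networkInfo; (void)detectionParams;
    return true;
}

// Fails to compile if the prototype doesn't match what gst-nvinfer expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseYoloV5);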

It seems the images with an even batch index get bounding boxes drawn. Could you try other batch sizes, e.g. 4 or 6, to see if the same pattern holds?
If it does, I suspect the issue is in the post-processing.

Thanks!

Thanks for responding so quickly, @mchi!

Yes, it follows some odd pattern that changes with the batch size.

Batch Size 3 -

Batch Size 4 -

Also, I am using deepstream-app for this, so sometimes when I pause and resume, the set of streams being processed shifts.

On further investigation, the buffer I read with

float *out = (float *)outputLayersInfo[OutputLayerIndex].buffer;

contains no results for the streams that produce no detections. So the issue must be between the tensor output of YOLOv5 and my custom parser.
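
To confirm this, I dump the first few values of each frame's output buffer from inside the parser, roughly like this (a small sketch; dumpLayerHead is a debugging helper I made up, not DeepStream API):

#include <cstdio>
#include "nvdsinfer.h"

// Print the first few floats of one frame's output tensor slice, so I can see
// whether the streams with "missing" detections get plausible values or junk.
static void dumpLayerHead(const NvDsInferLayerInfo &layer, int count = 8)
{
    const float *data = reinterpret_cast<const float *>(layer.buffer);
    std::printf("%s:", layer.layerName);
    for (int i = 0; i < count; ++i)
        std::printf(" %f", data[i]);
    std::printf("\n");
}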

I tried writing my own engine creation function and set it using engine-create-func-name. I also tried enabling explicit batch like this:

const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = builder->createNetworkV2(explicitBatch);

but I had the same result.
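
For completeness, the full build I tried looks roughly like this (a simplified sketch; the input tensor name "images" and the 640x640 shape come from the default ultralytics export and are assumptions — the ONNX batch dimension also has to be dynamic for the optimization profile to have any effect):

#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

nvinfer1::ICudaEngine *buildYoloV5Engine(const char *onnxPath, int maxBatch)
{
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(explicitBatch);

    // Parse the ONNX model exported from the ultralytics repo.
    auto parser = nvonnxparser::createParser(*network, gLogger);
    parser->parseFromFile(onnxPath, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30); // 1 GiB

    // With explicit batch, the batch size comes from an optimization profile,
    // so the ONNX input needs a dynamic (-1) batch dimension.
    auto profile = builder->createOptimizationProfile();
    profile->setDimensions("images", nvinfer1::OptProfileSelector::kMIN,
                           nvinfer1::Dims4{1, 3, 640, 640});
    profile->setDimensions("images", nvinfer1::OptProfileSelector::kOPT,
                           nvinfer1::Dims4{maxBatch, 3, 640, 640});
    profile->setDimensions("images", nvinfer1::OptProfileSelector::kMAX,
                           nvinfer1::Dims4{maxBatch, 3, 640, 640});
    config->addOptimizationProfile(profile);

    return builder->buildEngineWithConfig(*network, *config);
}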

Is there a way I could debug this and get more information?

What do you mean by “don’t have an output”? Do you mean the buffer pointer is NULL, or that all the data values in the buffer are zero?

Junk data, which after post-processing and NMS yields no objects.

could you share the DS config files?

Attaching the model config and the deepstream-app config. My actual config is modified slightly for our product, but it’s more or less the same.

config_infer_primary_yolov5s.txt (641 Bytes)
demo.json (12.5 KB)

There is no such issue with this YOLOv4 sample from yolov4_deepstream.
https://drive.google.com/drive/folders/1LGxz_-xMqP-N8gk2rubcrELwBRfBVRNZ?usp=sharing

The instructions to run it are:

  1. untar deepstream_yolov4.tgz under /opt/nvidia/deepstream/deepstream/sources/
  2. $ deepstream-app -c deepstream_app_config_yoloV4.txt
  3. check the output of yolov4.mp4

It has a very similar primary nvinfer config.
So I think you can make small changes to config_infer_primary_yoloV4.txt and nvdsinfer_custom_impl_Yolo to run your YOLOv5 and check whether the issue is caused by the config.
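
For example, the yolov5 version of the nvinfer config would keep the same structure, roughly like this (a sketch; the property names are standard gst-nvinfer keys, while the file names, parser function name, and class count are placeholders to adjust to your model):

[property]
gpu-id=0
onnx-file=yolov5s.onnx
batch-size=4
network-mode=2
num-detected-classes=80
parse-bbox-func-name=NvDsInferParseYoloV5
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so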

Thanks for the solution @mchi, I will try this. However, I have solved my issue by getting rid of ONNX and converting the .pt directly to a TensorRT engine using IPluginV2Layer.
