YOLOv5 - NVIDIA DeepStream and NVIDIA Triton Inference Server

Description

Reference: https://developer.nvidia.com/blog/deploying-models-from-tensorflow-model-zoo-using-deepstream-and-triton-inference-server/

NMS returns no objects when I use YOLOv5 with DeepStream and the Triton Inference Server. I suspect the problem is in my config, possibly at the preprocessing step, but I have no idea how to fix it. Please help.

Environment

TensorRT Version: 7.2.2.3
GPU Type: GTX 2080
Nvidia Driver Version:
CUDA Version: 11.1
CUDNN Version: 8
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/deepstream:5.1-21.02-triton

Relevant Files

GitHub: ultralytics/yolov5 (YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite)
I modified the input shape of the model after converting it to ONNX.

model_yolov5s.onnx (27.7 MB)
config.pbtxt (510 Bytes)
labels.txt (620 Bytes)
deepstream_app_config_yolov5.txt (3.7 KB)
config_infer_primary_yolov5.txt (1.2 KB)

Steps To Reproduce

And NMS has no objects.

Hi,
We recommend raising this query in the Triton forum for better assistance.

Thanks!


Why did you set max_batch_size: 0?

I reset it to 1 in deepstream_config_yolov5.txt.
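For context: in Triton, `max_batch_size: 0` means the model does not use Triton's implicit batching and `dims` must spell out the full tensor shape including the batch dimension; a value of 1 or more lets Triton manage the batch dimension itself. A hypothetical minimal config.pbtxt sketch, assuming the default YOLOv5s export tensor names (`images` in, `output` out) and a 640x640 input; verify the actual names and shapes with a tool such as Netron before using it:

```
name: "yolov5s"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 25200, 85 ]
  }
]
```

Note that `max_batch_size: 1` only works if the ONNX model was exported with a dynamic (or removable) batch axis; with a hard-coded batch of 1 in the graph, `max_batch_size: 0` plus full dims is the valid form.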

I found my post-processing didn't match the YOLOv5 output; I am still working on it.
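For reference, the raw output of a typical YOLOv5 ONNX export (with the Detect layer included, so sigmoid is already applied) is shaped (1, 25200, 85): per candidate box, (cx, cy, w, h, objectness) followed by 80 class scores. A minimal NumPy sketch of the decoding plus class-agnostic NMS, written here as an assumption about that layout rather than the exact code used in the thread:

```python
import numpy as np

def xywh2xyxy(x):
    # (cx, cy, w, h) -> (x1, y1, x2, y2)
    y = x.copy()
    y[:, 0] = x[:, 0] - x[:, 2] / 2
    y[:, 1] = x[:, 1] - x[:, 3] / 2
    y[:, 2] = x[:, 0] + x[:, 2] / 2
    y[:, 3] = x[:, 1] + x[:, 3] / 2
    return y

def nms(boxes, scores, iou_thres=0.45):
    # greedy NMS: keep highest-scoring box, drop overlapping boxes
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou <= iou_thres]
    return keep

def postprocess(pred, conf_thres=0.25, iou_thres=0.45):
    # pred: (num_boxes, 5 + num_classes) raw output for one image
    pred = pred[pred[:, 4] > conf_thres]          # objectness filter
    if not len(pred):
        return np.empty((0, 6))
    scores = pred[:, 4:5] * pred[:, 5:]           # confidence = obj * class prob
    cls = scores.argmax(1)
    conf = scores.max(1)
    boxes = xywh2xyxy(pred[:, :4])
    keep = nms(boxes, conf, iou_thres)
    # rows of (x1, y1, x2, y2, confidence, class_id)
    return np.concatenate([boxes[keep], conf[keep, None], cls[keep, None]], 1)
```

If the thresholds or the obj-times-class multiplication are missing on the DeepStream side, every candidate falls below the NMS cut and "no objects" is exactly the symptom you see.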

Same issue here. Could you share the solution? The config.txt or the post-processing code? Thanks


nvdsparsebbox_Yolo.cpp (5.5 KB)

Here it is. Feel free to ask if anything is confusing.


Thanks

I was trying to run this ONNX model and the config you shared on the Triton server. Can you guide me on how to write the preprocess function for it in image_client.py?
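The YOLOv5 preprocess is: letterbox the image to 640x640 (preserve aspect ratio, pad with gray 114), convert BGR to RGB, move channels first, scale to [0, 1] as float32, and add a batch dimension. A self-contained NumPy sketch under those assumptions; the nearest-neighbour resize here stands in for `cv2.resize`, which a real client would use:

```python
import numpy as np

def letterbox_nn(img, new_shape=640, pad_value=114):
    # aspect-preserving resize + padding; nearest-neighbour stand-in for cv2.resize
    h, w = img.shape[:2]
    r = min(new_shape / h, new_shape / w)
    nh, nw = round(h * r), round(w * r)
    ys = np.clip((np.arange(nh) / r).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / r).astype(int), 0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.full((new_shape, new_shape, 3), pad_value, dtype=img.dtype)
    top = (new_shape - nh) // 2
    left = (new_shape - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

def preprocess(bgr_img, size=640):
    img = letterbox_nn(bgr_img, size)
    img = img[:, :, ::-1]                 # BGR -> RGB
    img = img.transpose(2, 0, 1)          # HWC -> CHW
    img = img.astype(np.float32) / 255.0  # scale to [0, 1]
    return img[None]                      # add batch dim -> (1, 3, size, size)
```

Inside image_client.py you would call this on the decoded image and send the resulting (1, 3, 640, 640) FP32 tensor as the `images` input (assuming the default export's input name). Remember to keep the letterbox scale and padding so you can map the output boxes back to the original image.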