Run YOLOv5 in DeepStream with an .engine file generated by an alternative method

Description

Hi. The official YOLOv5 documentation describes how to export a PyTorch model (.pt) into different formats for deployment (e.g. Jetson inference).
It says to follow DeepStream-Yolo from Marcos Luciano in order to convert the PyTorch weights (.pt) into .cfg and .wts files readable by DeepStream. Furthermore, the engine file to be generated is defined in config_infer_primary.txt.
Going back to the YOLOv5 exporter, it is also possible to export the model directly to a TensorRT engine file.
My question is: how should config_infer_primary.txt be configured in this case, since there is no custom-network-config (.cfg path) nor model-file (.wts path), just an engine file?
I tried commenting out those lines, but it still does not work.
I also tried leaving them uncommented and setting the path to my engine file, but DeepStream overwrites it.
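For reference, the DeepStream-Yolo conversion step mentioned above looks roughly like this (repository path and script flags written from memory, so please check that project's README):

git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cp DeepStream-Yolo/utils/gen_wts_yoloV5.py yolov5/
cd yolov5
python3 gen_wts_yoloV5.py -w yolov5n.pt

That script is what generates the .cfg and .wts files that custom-network-config and model-file point to.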

Environment

TensorRT Version: 8.0.1.6
GPU Type: Jetson Xavier NX
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04.6 LTS
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):

Relevant Files


Steps To Reproduce

Clone the YOLOv5 repo, install all requirements, and export a pretrained model to a TensorRT engine file:

git clone https://github.com/ultralytics/yolov5.git
cd yolov5; pip install -r requirements.txt
python export.py --weights yolov5n.pt --include engine --data data/coco.yaml --img 540 960 --batch-size 4 --device 0 --int8 --dynamic --nms --topk-all 100 --iou-thres 0.45 --conf-thres 0.25

config_infer_primary.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=yolov5s.cfg
#model-file=yolov5s.wts
model-engine-file=model_b4_gpu0_int8.engine
int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
symmetric-padding=1

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

Hi,

We are moving this post to the DeepStream forum to get better help.

Thank you.

Why don’t you convert the PyTorch model to ONNX?
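For example, something along these lines with the YOLOv5 export script should produce an ONNX file (flags taken from the YOLOv5 repository; please verify against the version you are using):

python export.py --weights yolov5n.pt --include onnx --img 540 960 --dynamic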

That is not true. There are “custom-network-config” and “model-file” parameters in the gst-nvinfer configuration. Please refer to the document “Gst-nvinfer — DeepStream 6.3 Release documentation”.

There are YOLOv2 and YOLOv3 sample models showing how to configure .cfg and .wts files with a customized model parser in /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo.

We suggest you convert the PyTorch model to an ONNX model, which can be deployed with DeepStream directly without any customized model parser.
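As a rough sketch (not a verified configuration), the [property] section could then reference the ONNX file via the onnx-file property from the gst-nvinfer documentation and let DeepStream build and cache the engine itself:

[property]
onnx-file=yolov5n.onnx
# illustrative name; DeepStream generates the engine if this file does not exist
model-engine-file=model_b4_gpu0_fp16.engine
batch-size=4
network-mode=2
# keep the remaining properties (labels, num-detected-classes, cluster-mode, etc.) as in your original config;
# whether the output parser lines are still needed depends on the model's output layers, see below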

There are also some third-party YOLO DeepStream deployment samples: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Thank you for your reply.
With an ONNX model, when you say there is no need for a customized model parser, does that mean parse-bbox-func-name can be commented out? What about custom-lib-path?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

“parse-bbox-func-name” is not the model parser; it is the model output parser. If your model's output layers need a customized parser, you will need this parameter. “custom-lib-path” is the full path of the binary library that implements the customized model output parser. Please refer to the sample in /opt/nvidia/deepstream/deepstream/sources/objectDetector_SSD.
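For example, if the exported ONNX model still produces the raw YOLO output layers, the two output parser lines from the original config would stay in place:

parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so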

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.