Unable to use yolov5s in deepstream pipeline

Description

I am unable to use YOLOv5s in a DeepStream pipeline.

Environment

TensorRT Version: 8.2.5
GPU Type: Tesla T4
Nvidia Driver Version: 510.47.03
CUDA Version: 11.6
CUDNN Version:
Operating System + Version: Ubuntu
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12.1
Baremetal or Container (if container which image + tag): deepstream:6.1-devel

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Reference: "Getting started with custom NVIDIA Deepstream 6.0 pipelines in Python" by Jules Talloen (ML6team)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  1. git clone https://github.com/ml6team/deepstream-python (NVIDIA Deepstream 6.1 Python boilerplate)

  2. install libraries:
    sudo apt install git-lfs
    cd deepstream-python
    git lfs install
    git lfs pull

  3. docker build -t deepstream .

  4. docker run -it --gpus all -v ~/deepstream-python/output:/app/output -v ~/deepstream-python/deepstream/app/:/app/app/ --entrypoint bash deepstream:latest

  5. Edit the config to point at the YOLOv5 files:
    vi configs/pgies/pgie.txt
    replace the model paths with:
    model-file=/opt/anpr/data/yolov5m/yolov5s.wts
    proto-file=/opt/anpr/data/yolov5m/yolov5s.cfg

  6. python3 run.py file:///opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264

  • Exact steps/commands to run your repro
    Mentioned above

  • Full traceback of errors encountered

INFO:app.pipeline.Pipeline:Creating Pipeline
INFO:app.pipeline.Pipeline:Creating Source bin
INFO:app.pipeline.Pipeline:Creating URI decode bin
INFO:app.pipeline.Pipeline:Creating Stream mux
INFO:app.pipeline.Pipeline:Creating PGIE
INFO:app.pipeline.Pipeline:Creating Tracker
INFO:app.pipeline.Pipeline:Creating Converter 1
INFO:app.pipeline.Pipeline:Creating Caps filter 1
INFO:app.pipeline.Pipeline:Creating Tiler
INFO:app.pipeline.Pipeline:Creating Converter 2
INFO:app.pipeline.Pipeline:Creating OSD
INFO:app.pipeline.Pipeline:Creating Queue 1
INFO:app.pipeline.Pipeline:Creating Converter 3
INFO:app.pipeline.Pipeline:Creating Caps filter 2
INFO:app.pipeline.Pipeline:Creating Encoder
INFO:app.pipeline.Pipeline:Creating Parser
INFO:app.pipeline.Pipeline:Creating Container
INFO:app.pipeline.Pipeline:Creating Sink
INFO:app.pipeline.Pipeline:Linking elements in the Pipeline: source-bin-00 → stream-muxer → primary-inference → tracker → convertor1 → capsfilter1 → nvtiler → convertor2 → onscreendisplay → queue1 → mp4-sink-bin
INFO:app.pipeline.Pipeline:Starting pipeline
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: [TRT]: CaffeParser: Could not parse binary model file
ERROR: [TRT]: CaffeParser: Could not parse model file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:153 Failed while parsing caffe network: /opt/anpr/data/yolov5m/yolov5s.cfg
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: /app/app/…/configs/pgies/pgie.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
INFO:app.pipeline.Pipeline:Exiting pipeline
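For context, the traceback shows nvinfer falling back to the Caffe parser: model-file and proto-file are Caffe-specific keys, so pointing them at a .wts/.cfg pair makes TensorRT try to parse YOLOv5 weights as a Caffe model, which is exactly the "CaffeParser: Could not parse model file" failure above. A sketch of the kind of [property] block that community YOLOv5 integrations (e.g. marcoslucianops/DeepStream-Yolo) use instead — the custom library path and function names below are assumptions taken from that project, not from this repo:

```ini
[property]
# .cfg/.wts go through a custom engine builder, not the Caffe parser
custom-network-config=/opt/anpr/data/yolov5m/yolov5s.cfg
model-file=/opt/anpr/data/yolov5m/yolov5s.wts
# hypothetical paths/names, taken from the DeepStream-Yolo project
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
parse-bbox-func-name=NvDsInferParseYolo
network-mode=2
num-detected-classes=80
```

The key point is that without a custom-lib-path/engine builder, nvinfer has no way to interpret Darknet-style .cfg/.wts files and defaults to whichever built-in parser the config keys imply.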

Hi,

This looks like a Deepstream related issue. We will move this post to the Deepstream forum.

Thanks!

Ok

Where and how did you get the model files? Please ask the person who provided the model for help.

There is a default ResNet model which works fine because it has a prototxt and is in Caffe format. But I want to run YOLOv5 in either .pt or .engine format in this DeepStream pipeline.
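As an aside, the mismatch described above can be caught before the pipeline even starts: DeepStream pgie configs are plain INI files, so a few lines of Python can flag Caffe-specific keys that point at non-Caffe files. A minimal illustrative sketch (the key names are real nvinfer config keys; the checker itself is not part of this repo):

```python
import configparser

# Keys that steer nvinfer toward the Caffe parser; pointing them at
# YOLO .wts/.cfg files causes "CaffeParser: Could not parse model file".
CAFFE_KEYS = {"model-file", "proto-file"}

def check_pgie(path):
    """Return a list of warnings about parser/file-format mismatches."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    props = cfg["property"] if cfg.has_section("property") else {}
    if "custom-lib-path" in props:
        # A custom engine builder may legitimately consume non-Caffe files.
        return []
    warnings = []
    for key in CAFFE_KEYS & set(props):
        value = props[key]
        # The Caffe parser expects .caffemodel / .prototxt inputs.
        if not value.endswith((".caffemodel", ".prototxt")):
            warnings.append(
                f"{key}={value}: Caffe-specific key points at a non-Caffe "
                "file; YOLOv5 .cfg/.wts needs a custom engine builder"
            )
    return warnings
```

Running this against the pgie.txt edited in step 5 would flag both model-file and proto-file, which matches the failure in the log.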

TAO has provided a YOLOv5 model: GitHub - NVIDIA-AI-IOT/yolov5_gpu_optimization (YOLOv5 GPU optimization sample)

And this model can work with DeepStream: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream)
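If you go the deepstream_tao_apps route, the nvinfer config deploys a serialized TensorRT engine directly instead of parsing weight files, which sidesteps the Caffe parser entirely. A hedged sketch — the engine path is the one this thread has been using, while the parser function and library names are taken from the deepstream_tao_apps samples and may differ by release:

```ini
[property]
# deploy a prebuilt engine; no model-file/proto-file needed
model-engine-file=/opt/anpr/data/yolov5m/yolov5s.engine
network-type=0
num-detected-classes=80
# custom output parser shipped with deepstream_tao_apps (names may vary by release)
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser_tao.so
```

Note that an .engine file is specific to the TensorRT version and GPU it was built on, so it should be generated inside the same container (TensorRT 8.2.5, Tesla T4) that runs the pipeline.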