Description
I am unable to use YOLOv5s in a DeepStream pipeline.
Environment
TensorRT Version: 8.2.5
GPU Type: Tesla T4
Nvidia Driver Version: 510.47.03
CUDA Version: 11.6
CUDNN Version:
Operating System + Version: Ubuntu
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12.1
Baremetal or Container (if container which image + tag): deepstream:6.1-devel
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Reference: Getting started with custom NVIDIA Deepstream 6.0 pipelines in Python (Jules Talloen, ML6team)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro:
  1. Clone the boilerplate repo (NVIDIA Deepstream 6.1 Python boilerplate):
     git clone https://github.com/ml6team/deepstream-python
  2. Install the libraries and pull the LFS assets:
     sudo apt install git-lfs
     cd deepstream-python
     git lfs install
     git lfs pull
  3. Build the image:
     docker build -t deepstream .
  4. Start the container:
     docker run -it --gpus all -v ~/deepstream-python/output:/app/output -v ~/deepstream-python/deepstream/app/:/app/app/ --entrypoint bash deepstream:latest
  5. Edit the PGIE config (vi configs/pgies/pgie.txt) and replace the model lines with:
     model-file=/opt/anpr/data/yolov5m/yolov5s.wts
     proto-file=/opt/anpr/data/yolov5m/yolov5s.cfg
     (a sketch of the edited section follows this list)
  6. Run the pipeline:
     python3 run.py file:///opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264
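  For context, the relevant part of configs/pgies/pgie.txt after step 5 looks roughly like the sketch below. Only the model-file and proto-file lines are my actual edit; the surrounding keys are illustrative nvinfer [property] settings, not the exact boilerplate contents.

     [property]
     gpu-id=0
     # the two paths below are the edit from step 5; all other keys here are illustrative
     model-file=/opt/anpr/data/yolov5m/yolov5s.wts
     proto-file=/opt/anpr/data/yolov5m/yolov5s.cfg
     batch-size=1
     # 0 = FP32, 1 = INT8, 2 = FP16
     network-mode=0
     num-detected-classes=80
     gie-unique-id=1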
- Exact steps/commands to run your repro:
  Mentioned above (step 6).
- Full traceback of errors encountered:
INFO:app.pipeline.Pipeline:Creating Pipeline
INFO:app.pipeline.Pipeline:Creating Source bin
INFO:app.pipeline.Pipeline:Creating URI decode bin
INFO:app.pipeline.Pipeline:Creating Stream mux
INFO:app.pipeline.Pipeline:Creating PGIE
INFO:app.pipeline.Pipeline:Creating Tracker
INFO:app.pipeline.Pipeline:Creating Converter 1
INFO:app.pipeline.Pipeline:Creating Caps filter 1
INFO:app.pipeline.Pipeline:Creating Tiler
INFO:app.pipeline.Pipeline:Creating Converter 2
INFO:app.pipeline.Pipeline:Creating OSD
INFO:app.pipeline.Pipeline:Creating Queue 1
INFO:app.pipeline.Pipeline:Creating Converter 3
INFO:app.pipeline.Pipeline:Creating Caps filter 2
INFO:app.pipeline.Pipeline:Creating Encoder
INFO:app.pipeline.Pipeline:Creating Parser
INFO:app.pipeline.Pipeline:Creating Container
INFO:app.pipeline.Pipeline:Creating Sink
INFO:app.pipeline.Pipeline:Linking elements in the Pipeline: source-bin-00 → stream-muxer → primary-inference → tracker → convertor1 → capsfilter1 → nvtiler → convertor2 → onscreendisplay → queue1 → mp4-sink-bin
INFO:app.pipeline.Pipeline:Starting pipeline
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: [TRT]: CaffeParser: Could not parse binary model file
ERROR: [TRT]: CaffeParser: Could not parse model file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:153 Failed while parsing caffe network: /opt/anpr/data/yolov5m/yolov5s.cfg
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: /app/app/…/configs/pgies/pgie.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
INFO:app.pipeline.Pipeline:Exiting pipeline