Using YOLOv4 with gst-nvdspreprocess

Hello,

I’m experimenting with the new gst-nvdspreprocess plugin: I’d like to apply yolov4-tiny to a set of ROIs in a video with a batch size greater than 1, for performance.

When I use the demo pipeline below, YOLO inference doesn’t happen: the pipeline runs and I see the video and the ROIs, but no detection boxes from YOLO.

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/ds-app/yolo/config_infer_primary_yoloV4_d.txt input-tensor-meta=1 batch-size=7 ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

The pipeline output shows a warning (see the attached output.log for details):

nvinfer gstnvinfer.cpp:1903:gst_nvinfer_process_tensor_input:<nvinfer0> warning: nvinfer could not find input layer with name = input_1

My YOLO TensorRT engine was generated by converting the Darknet model to ONNX and then to TensorRT.
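For reference, the final ONNX-to-TensorRT step typically looks like the command below (an illustrative sketch, not necessarily my exact invocation; the file names are placeholders):

# build a TensorRT engine from the exported ONNX model (FP16 works well on the Nano)
/usr/src/tensorrt/bin/trtexec --onnx=yolov4_tiny.onnx --saveEngine=yolov4_tiny.engine --fp16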

I’ve uploaded my model and configuration files here for review: YOLO - Google Drive

Thank you for all the help!!

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1.6

Sorry for the late response. Is this still an issue you need support with? Thanks

Yes, I’m still stuck on the issue. I’ve uploaded the model and all the config files, so it should be easy to reproduce. Any suggestions? Thank you

Hi @dartagnan64b,
Sorry for the delay!

As pointed out in the log, nvinfer can’t find the input layer “input_1”, which is specified by “tensor-name=input_1” in /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt.
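For reference, this is roughly what the relevant part of config_preprocess.txt looks like (a sketch: the key names come from the nvdspreprocess sample/docs, the values here are illustrative, so check your own copy):

[property]
enable=1
# tensor-name must match an actual input layer of the engine nvinfer loads
tensor-name=input_1
# batch;channels;height;width of the tensor handed to nvinfer
network-input-shape=8;3;416;416

The sample sets tensor-name=input_1 because that is the input layer of the sample model; your yolov4-tiny engine almost certainly uses a different name.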

Thanks, but I am not sure how to resolve that. “input_1” is defined in the NVIDIA sample configuration and may not exist in my YOLO model (following the tutorial steps, I never came across any input layer names). Any suggestion on how to resolve the issue? If not, is there a tutorial or doc to follow to generate a multi-batch tensor and feed it into YOLO with gst-nvdspreprocess?

Thank you

You need to replace it with the input layer name of your yolov4 model.
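If you are not sure what that name is, you can read it out of the exported ONNX model, e.g. with the onnx Python package (a sketch; the filename yolov4_tiny.onnx is a placeholder for your model):

# print the input tensor names of the exported ONNX model
python3 -c "import onnx; m = onnx.load('yolov4_tiny.onnx'); print([i.name for i in m.graph.input])"

Whatever name this prints is what tensor-name in config_preprocess.txt should be set to.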

The guide is Gst-nvdspreprocess (Alpha) — DeepStream 6.3 Release documentation
or you could refer to GitHub - NVIDIA-AI-IOT/yolo_deepstream: yolo model qat and deploy with deepstream&tensorrt.

Hi @dartagnan64b,
I created a yolov4 sample - deepstream_yolov4_with_nvdspreprocess.tgz - Google Drive; you can give it a try.

  1. download deepstream_yolov4_with_nvdspreprocess.tgz
  2. untar it in a DeepStream docker, e.g. the DeepStream 6.0 devel docker
  3. cd deepstream_yolov4_with_nvdspreprocess/
  4. ./nvdspreprocess_cmd.sh

// content of nvdspreprocess_cmd.sh is:

#!/bin/bash
make -C nvdsinfer_custom_impl_Yolo

gst-launch-1.0 -e -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file=config_preprocess.txt ! nvinfer config-file-path=config_infer_primary_yoloV4.txt input-tensor-meta=1 batch-size=2 ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! tee name=t \
        ! queue ! nvv4l2h265enc ! h265parse ! filesink location=file.h265 t. \
        ! queue ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=false
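One thing to note when you adapt this to your multi-ROI use case: nvdspreprocess forms the batch that nvinfer consumes from the ROIs, so the first dimension of network-input-shape in config_preprocess.txt has to cover the number of ROIs you declare, and nvinfer’s batch-size should match it. A rough sketch of the relevant entries (key names are from the nvdspreprocess documentation, values are illustrative for two ROIs on one 1920x1080 source):

[property]
tensor-name=<your model's input layer name>
network-input-shape=2;3;416;416

[group-0]
src-ids=0
process-on-roi=1
# two ROIs, given as left;top;width;height pairs
roi-params-src-0=0;0;960;540;960;540;960;540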

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.