YOLO-Pose Demo Accelerated with DeepStream and TensorRT (Pose Estimation with DeepStream Python Bindings)

Hello everyone

I deployed a customized pose estimation model (YOLO-Pose, based on the YOLOv8-Pose code) on Jetson and accelerated it with DeepStream + TensorRT. Feel free to refer to it and share better acceleration suggestions!

Environment

TensorRT Version : 8.5.2
GPU Type : Jetson AGX Xavier / AGX Orin
Nvidia Driver Version :
CUDA Version : 11.4.315
CUDNN Version : 8.6.0.166
Operating System + Version : L4T 35.2.1 (JetPack 5.1)
Python Version (if applicable) : Python 3.8.10
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) : 1.12.0a0+2c916ef.nv22.3
Baremetal or Container (if container which image + tag) :

Relevant Files

https://github.com/YunghuiHsu/deepstream-yolo-pose

Steps To Reproduce

Environment setting: please refer to https://github.com/YunghuiHsu/deepstream-yolo-pose

Download the repository

git clone https://github.com/YunghuiHsu/deepstream-yolo-pose.git

To run the app with default settings:


  • NVInfer with RTSP inputs

    python3 deepstream_YOLOv8-Pose_rtsp.py \
        -i rtsp://sample_1.mp4 \
           rtsp://sample_2.mp4 \
           rtsp://sample_N.mp4
    
  • e.g., loop with local file inputs

    python3 deepstream_YOLOv8-Pose_rtsp.py \
        -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4 \
        --file-loop
    
  • Default RTSP streaming location:

Note:

  1. -g/--pgie : selects the inference plugin; nvinfer is used by default (choices: ['nvinfer', 'nvinferserver']).
  2. -config/--config-file : needs to be provided for custom models.
  3. --file-loop : loops the input files after EOS.
  4. --conf-thres : object confidence threshold.
  5. --iou-thres : IoU threshold for NMS.
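For intuition, here is a rough sketch of how --conf-thres and --iou-thres are typically applied in YOLO-style post-processing (a minimal NumPy illustration under those assumptions, not necessarily this repository's exact implementation):

    import numpy as np

    def filter_and_nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
        # boxes: (N, 4) [x1, y1, x2, y2]; scores: (N,) object confidences
        keep_conf = scores >= conf_thres                   # --conf-thres drops low-confidence detections
        boxes, scores = boxes[keep_conf], scores[keep_conf]
        order = np.argsort(scores)[::-1]                   # process highest-confidence boxes first
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            # IoU of the current box against the remaining candidates
            xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
            iou = inter / (areas[i] + areas[order[1:]] - inter + 1e-9)
            order = order[1:][iou < iou_thres]             # --iou-thres suppresses heavily overlapping boxes
        return np.flatnonzero(keep_conf)[keep]             # indices into the original arrays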

This sample app is derived from NVIDIA-AI-IOT/deepstream_python_apps/apps and adds customization features.

  • Includes the following:

    • Accepts multiple sources

    • Dynamic-batch model (YOLO-Pose)

    • Accepts RTSP streams as input and serves the inference output as an RTSP stream (see the RTSP output sketch after this list)

    • NVInfer GPU inference engine

    • NVInferserver GPU inference engine (not yet tested)

    • MultiObjectTracker (NVTracker)

    • Automatically adjusts to the input and output tensor shapes of the loaded model (NvDsInferTensorMeta)

    • Extracts the stream metadata and image data from the batched buffer of Gst-nvinfer (see the tensor-meta probe sketch after this list)

      source : deepstream-imagedata-multistream
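For reference, RTSP output in the DeepStream Python apps is usually served by pairing a udpsink at the end of the pipeline with a small GstRtspServer instance. A minimal sketch is below; the UDP port 5400, service port 8554 and the /ds-test mount point follow the NVIDIA sample apps and are assumptions here, not necessarily this repository's exact values:

    import gi
    gi.require_version('Gst', '1.0')
    gi.require_version('GstRtspServer', '1.0')
    from gi.repository import Gst, GstRtspServer

    Gst.init(None)

    # RTSP server that re-streams the encoded output the pipeline sends to a local UDP port
    # (the DeepStream pipeline itself ends in: ... ! nvv4l2h264enc ! rtph264pay ! udpsink port=5400)
    server = GstRtspServer.RTSPServer.new()
    server.props.service = "8554"                 # assumed RTSP port, as in the NVIDIA samples
    server.attach(None)                           # a running GLib.MainLoop is required to serve clients

    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch(
        '( udpsrc name=pay0 port=5400 buffer-size=524288 '
        'caps="application/x-rtp, media=video, clock-rate=90000, '
        'encoding-name=(string)H264, payload=96" )'
    )
    factory.set_shared(True)
    server.get_mount_points().add_factory("/ds-test", factory)   # assumed mount point

Clients could then open rtsp://<Jetson IP>:8554/ds-test, e.g. with VLC.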
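And here is a rough sketch of how the raw output tensors (NvDsInferTensorMeta) can be read from a pad probe on the nvinfer source pad, in the style of the deepstream_python_apps samples. It assumes output-tensor-meta=1 is set in the nvinfer config; the function name and the commented output shape are illustrative, not the repository's actual code:

    import ctypes
    import numpy as np
    import pyds
    from gi.repository import Gst

    def pgie_src_pad_buffer_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            l_user = frame_meta.frame_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        # layer.buffer holds the raw output tensor of this layer
                        ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
                        # the real shape comes from the model's output dims; (8400, 56) is only a placeholder
                        # out = np.ctypeslib.as_array(ptr, shape=(8400, 56))
                        # ... decode boxes / keypoints, then apply --conf-thres and NMS ...
                try:
                    l_user = l_user.next
                except StopIteration:
                    break
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK

Such a probe would be attached with pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0).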

Thanks for sharing this with the community!
