• Hardware Platform (Jetson / GPU) = Jetson Xavier NX
• DeepStream Version = 6.1.1
• JetPack Version (valid for Jetson only) = 35.1.0
• TensorRT Version = 126.96.36.199
• CUDA Version (valid for GPU only) = 11.4
• Issue Type( questions, new requirements, bugs) = Question
I am using the Python bindings of DeepStream to perform detection and tracking on a video.
For detection, I am using a custom YOLOv5 model, and for tracking, I am using the DeepStream tracker plugin.
(I am following this repo: NVIDIA DeepStream SDK 6.1 / 6.0.1 / 6.0 configuration for YOLO-v5 & YOLO-v7 models · GitHub to perform detection and tracking with the DeepStream pipeline.)
I can observe seamless detection of vehicles with their corresponding tracking IDs on the display window, as shown below.
My goal: stream the output displayed on this OSD window to an RTSP server.
To achieve this,
I am currently sending the inferenced output frames (i.e., ndarrays) to Redis, then receiving and decoding them on the other end.
However, converting each ndarray to a list is slow, and it is causing lag in the inferencing.
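As a side note on the current Redis path: the per-element `tolist()` conversion can be avoided entirely by shipping the raw frame buffer with `ndarray.tobytes()` and rebuilding it with `np.frombuffer()` on the receiving side. A minimal sketch (the frame shape here is an assumed 720p RGBA frame, as returned by `pyds.get_nvds_buf_surface()`; the helper names are made up for illustration):

```python
import numpy as np

def pack_frame(frame: np.ndarray) -> bytes:
    # Raw buffer copy; far faster than frame.tolist() + re-encoding
    return frame.tobytes()

def unpack_frame(buf: bytes, shape, dtype=np.uint8) -> np.ndarray:
    # Reconstruct the ndarray from raw bytes; shape/dtype must be
    # known on the receiving side (send them once out-of-band)
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

# Example: a dummy 720p RGBA frame
frame = np.zeros((720, 1280, 4), dtype=np.uint8)
frame[0, 0] = [1, 2, 3, 4]

buf = pack_frame(frame)
restored = unpack_frame(buf, frame.shape)
assert np.array_equal(frame, restored)
```

The bytes object can be set as a Redis value directly, so no list conversion is needed; this only reduces the lag, though, and does not remove the Redis dependency itself.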
Also, because the inferencing in the Python bindings runs in a thread, the Redis client has to be initialized inside that thread.
I am looking for a way to stream my inference output to an RTSP server directly, removing the dependency on Redis.
Looking forward to some insights on this.
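For reference, the Redis-free approach used in NVIDIA's deepstream-test1-rtsp-out Python sample keeps the frames inside the GStreamer pipeline: an encode branch is added after nvdsosd (nvvideoconvert → nvv4l2h264enc → rtph264pay → udpsink), and a GstRtspServer mounts that UDP stream as an RTSP endpoint. A rough sketch of the sink branch as a gst-launch fragment (the bitrate, port, and mount point are illustrative, assuming H.264 output on the Jetson):

```shell
# Sink branch after nvdsosd: encode the OSD output and push it as RTP/UDP.
# A GstRtspServer factory with the launch string
#   "( udpsrc port=5400 caps=\"application/x-rtp,media=video,encoding-name=H264,payload=96\" name=pay0 )"
# then serves it at rtsp://<jetson-ip>:8554/ds-test
... ! nvdsosd ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! \
    nvv4l2h264enc bitrate=4000000 ! rtph264pay name=pay0 pt=96 ! \
    udpsink host=127.0.0.1 port=5400 sync=false
```

Since the frames never leave the pipeline, there is no ndarray serialization and no per-thread Redis client to manage.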