RTSP Delay while running application using OpenCV 3.4.1

Hi,

I am working on a face detection application where I use the following,

Server - HPE ProLiant EL1000 M510 Server
OS - Ubuntu 18.04 LTS Server Edition, 64-bit
GPU - NVIDIA Tesla T4
OpenCV - 3.4.1
NVIDIA driver - 418.39
CUDA - 10.1
cuDNN - 7.6.5.32
TensorFlow - 1.40
Redis - 3.3.11
Dlib - 19.18.0
Caffe - 1.0.0

When I run the application with a Logitech 930e USB webcam, faces are detected with no noticeable delay. However, when I run an RTSP stream from an IP camera on the EL1000 with the NVIDIA Tesla T4 GPU, the displayed video lags by 4-5 seconds.

In another case, running the same application with the same RTSP stream on a different machine with an NVIDIA GTX 980 GPU, the delay is only 1-2 seconds.

Can anyone suggest an appropriate solution for the RTSP streaming delay on the EL1000 with the T4 card?
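
For reference, the stream is consumed with plain OpenCV; below is a minimal sketch of the shape of the capture loop (the URL is a placeholder and the detection step is omitted):

```python
import cv2

# Placeholder RTSP address - not the actual camera URL.
RTSP_URL = "rtsp://user:password@192.168.1.100:554/stream1"

cap = cv2.VideoCapture(RTSP_URL)  # FFMPEG backend is the default for URLs on Linux
while cap.isOpened():
    ok, frame = cap.read()        # blocks until the next decoded frame arrives
    if not ok:
        break
    # ... face detection runs on `frame` here ...
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```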

Hi,
In the application, do you use the DeepStream SDK or pure OpenCV? We have optimizations in the DeepStream SDK; ideally it should give better performance than pure OpenCV.
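
If it is pure OpenCV, it is also worth confirming which video I/O backends the OpenCV build on each machine actually has; a quick sketch (assuming the Python cv2 bindings are installed):

```python
import cv2

# Print only the video I/O related lines of the build configuration,
# e.g. whether FFMPEG or GStreamer support was compiled in.
for line in cv2.getBuildInformation().splitlines():
    if any(key in line for key in ("Video I/O", "FFMPEG", "GStreamer", "v4l")):
        print(line.strip())
```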

In our application we use pure OpenCV, version 3.4.1. There is much more delay on the EL1000 and DL385 Gen10 machines than on the other machines.

Hi,
In pure OpenCV, video decoding runs on the CPU. We would suggest you try the DeepStream SDK.
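
If you need to stay on pure OpenCV for now, a large part of the delay is often frames queuing up in the capture buffer while detection runs. A minimal sketch of a common mitigation, keeping the buffer small and discarding stale frames in a background reader thread (the URL is a placeholder; CAP_PROP_BUFFERSIZE is only honored by some backends):

```python
import threading
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.100:554/stream1"  # placeholder

class LatestFrameReader:
    """Continuously reads frames and keeps only the most recent one."""

    def __init__(self, url):
        self.cap = cv2.VideoCapture(url, cv2.CAP_FFMPEG)
        # Hint to keep the internal queue short; not all backends honor it.
        self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame  # older frames are simply overwritten

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def stop(self):
        self.running = False
        self.cap.release()

reader = LatestFrameReader(RTSP_URL)
while True:
    frame = reader.read()
    if frame is None:
        continue
    # ... run face detection on `frame` here ...
    cv2.imshow("rtsp", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
reader.stop()
cv2.destroyAllWindows()
```

Alternatively, if your OpenCV build has GStreamer support and the DeepStream GStreamer plugins are installed on the T4 host, decoding can be offloaded to the GPU by passing a pipeline string to VideoCapture. The element names, caps, and URL below are assumptions for an H.264 camera; adjust them to your DeepStream version and stream:

```python
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.100:554/stream1"  # placeholder

# Decode on the GPU with nvv4l2decoder, convert to BGR for OpenCV, and
# drop old frames at the appsink so only the latest one is delivered.
pipeline = (
    "rtspsrc location={} latency=0 ! "
    "rtph264depay ! h264parse ! "
    "nvv4l2decoder ! nvvideoconvert ! "
    "video/x-raw, format=BGRx ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink drop=true max-buffers=1"
).format(RTSP_URL)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run face detection on `frame` here ...
    cv2.imshow("rtsp-nvdec", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```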