Metadata extraction after reading a GStreamer pipeline with OpenCV

I am using the following pipeline to read video and then perform image processing and selection of specific tracking targets with OpenCV in Python. The video is transmitted from a modified deepstream_test_2 Python app to a custom Python script that reads it with OpenCV, since deepstream_python_apps does not support frame extraction yet; otherwise the whole process could have been done inside a probe function.
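For context, the metadata side of my modified app follows the standard deepstream_test_2 probe pattern and pushes the bounding boxes out over a second UDP socket. The sketch below is simplified, and the metadata port (5300) and JSON layout are only illustrative, not my exact code:

import json
import socket

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

META_SOCK = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
META_ADDR = ("127.0.0.1", 5300)  # illustrative metadata port, separate from the 5200 video port

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        boxes = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            boxes.append({
                "id": obj_meta.object_id,
                "class": obj_meta.class_id,
                "bbox": [rect.left, rect.top, rect.width, rect.height],
            })
            l_obj = l_obj.next

        # Include the frame number so the receiver at least knows which frame the boxes belong to.
        payload = {"frame": frame_meta.frame_num, "objects": boxes}
        META_SOCK.sendto(json.dumps(payload).encode("utf-8"), META_ADDR)
        l_frame = l_frame.next

    return Gst.PadProbeReturn.OK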

cap = cv2.VideoCapture('udpsrc port=5200 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ! queue ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink', cv2.CAP_GSTREAMER)
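The frames are then pulled with a standard read loop (simplified; the actual image processing and target-selection code is omitted):

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # image processing and tracking-target selection on `frame` happens here
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()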

Currently I am transmitting the metadata over a separate UDP port from the modified deepstream_test_2 Python app, but the bounding boxes arrive a few frames out of sync, since OpenCV is not reading the frames in real time (there is an initial delay of 0.8-1 seconds) and, as far as I have checked, changing the capture buffer size in OpenCV (CAP_PROP_BUFFERSIZE, which would help keep processing real-time) is only supported by the DC1394 v2.x video backend.
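On the receiving side the metadata currently arrives on its own UDP socket, roughly like this (again, port 5300 and the JSON format are only illustrative):

import json
import socket

meta_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
meta_sock.bind(("0.0.0.0", 5300))  # assumed metadata port
meta_sock.setblocking(False)

def poll_metadata():
    """Drain all metadata packets that have arrived so far."""
    packets = []
    while True:
        try:
            data, _ = meta_sock.recvfrom(65535)
        except BlockingIOError:
            return packets
        packets.append(json.loads(data.decode("utf-8")))

Because the decoded frame only comes out of cap.read() 0.8-1 seconds later, the boxes returned by poll_metadata() belong to frames that OpenCV has not delivered yet, which is the mismatch I am trying to resolve.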

I would like to know whether there is a way to extract the metadata from the pipeline itself after reading it with OpenCV in Python, since OpenCV does not expose this feature.

Have you checked https://devtalk.nvidia.com/default/topic/1066912/deepstream-sdk/deepstream-now-supports-python-/?
Also, would you mind sharing the modified deepstream_test_2 Python app with me for further checking?

Hi zararyounis,

Is this still an issue that needs support?
Are there any results you can share?