How to use OpenCV with GStreamer support on an x86 platform?

I want to use OpenCV with GStreamer support on an x86 platform with a Tesla T4 GPU. I built OpenCV with GStreamer from source and installed all the GStreamer dependencies on the system.

Here is my code:

import cv2
import time

uri = "rtsp://10.168.1.202:8554/test"
# cap = "rtspsrc location=%s latency=0 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink" % uri
cap = "rtspsrc location=%s ! application/x-rtp, media=video ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"  % uri


cap = cv2.VideoCapture(cap, cv2.CAP_GSTREAMER)
frame_num = 0
tic = time.time()
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_num += 1
toc = time.time()
print("Decode FPS: %.2f" % (frame_num / (toc - tic)))

cap.release()


However, it fails to run:

[ WARN:0] global /opt/opencv-4.4.0/modules/videoio/src/cap_gstreamer.cpp (713) open OpenCV | GStreamer warning: Error opening bin: no element "nvv4l2decoder"
[ WARN:0] global /opt/opencv-4.4.0/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
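Before debugging the pipeline string itself, it can help to confirm that the OpenCV build actually has GStreamer enabled. A minimal sketch using the standard `cv2.getBuildInformation()` API (the `GStreamer:` line format below matches OpenCV 4.x build dumps):

```python
# Sketch: check whether an OpenCV build reports GStreamer support.
# cv2.getBuildInformation() is a standard OpenCV API; the "GStreamer:" line
# format assumed here matches OpenCV 4.x output.

def gstreamer_enabled(build_info: str) -> bool:
    """Return True if the build-information dump reports GStreamer: YES."""
    for line in build_info.splitlines():
        if line.strip().startswith("GStreamer"):
            return "YES" in line
    return False

try:
    import cv2
    print("GStreamer support:", gstreamer_enabled(cv2.getBuildInformation()))
except ImportError:
    pass  # cv2 is not installed in this environment
```

If this prints `False`, no pipeline string will work and the build flags need fixing first.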

I built OpenCV and its dependencies as follows (Dockerfile):

RUN apt-get update && apt-get install -y libgstreamer-plugins-base1.0-dev gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav 

RUN apt-get install -y libgstreamer1.0 libgstreamer1.0-dev libgstreamer-plugins-bad1.0-0 libgstreamer-plugins-base1.0-0 libgstrtspserver-1.0-0 libjansson4 \
    build-essential cmake pkg-config vim wget \
    libjpeg8-dev libtiff5-dev libjasper-dev libpng12-dev libgtk-3-dev \
    libsm6 libxrender1 libxext-dev libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*

# I have already downloaded the OpenCV source tree next to the Dockerfile.
ADD opencv-4.4.0/ /opt/opencv-4.4.0/

RUN cd /opt/opencv-4.4.0 && mkdir build && cd build/ && \
    cmake   -D CMAKE_BUILD_TYPE=RELEASE \
    -D PYTHON_DEFAULT_EXECUTABLE=$(python3 -c "import sys; print(sys.executable)")   \
    -D PYTHON3_EXECUTABLE=$(python3 -c "import sys; print(sys.executable)")   \
    -D PYTHON3_NUMPY_INCLUDE_DIRS=$(python3 -c "import numpy; print (numpy.get_include())") \
    -D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
    -D CMAKE_CXX_FLAGS="-std=c++11" \
    -D CUDA_NVCC_FLAGS="--compiler-options '-std=c++03'" \
    -D WITH_GSTREAMER=ON \
    -D WITH_GSTREAMER_0_10=OFF .. && \
    make -j20 && make install && ldconfig

I didn't get a chance to reproduce with your steps, but you may refer to IP Camera RTSP + GSTREAMER + C++ (Stream Latency 4-5 seconds) - #4 by mchi first.

Thanks. I am 100% sure it would work on the Jetson platform; I have tried the same pipeline there before. But this time I want to run it on x86, and some say the NVIDIA GStreamer plugins only work on ARM machines?

DeepStream provides DS plugins, e.g. nvv4l2decoder, which have the same names and properties on Jetson/ARM and x86.
But some NVIDIA plugins, e.g. nvvidconv, are not DS plugins and only work on Jetson.

Here is the reference - DeepStream SDK FAQ - #15 by Fiona.Chen about DS plugin and non-DS plugin
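Based on that distinction, a pipeline meant to run on both Jetson and x86 would use DS plugins only, swapping the Jetson-only nvvidconv for nvvideoconvert. A minimal sketch that just assembles the string (the caps chain follows the pipeline earlier in the thread; whether this exact chain opens on a given build is something to verify locally):

```python
# Sketch: build a decode pipeline out of DeepStream (DS) plugins only, so the
# same string can work on Jetson and x86. nvvidconv (Jetson-only) is replaced
# by nvvideoconvert (a DS plugin available on both platforms). The BGRx/BGR
# caps chain mirrors the pipeline quoted earlier in this thread.

def build_ds_pipeline(uri: str, latency_ms: int = 0) -> str:
    """Assemble a GStreamer pipeline string for cv2.VideoCapture(CAP_GSTREAMER)."""
    return (
        f"rtspsrc location={uri} latency={latency_ms} ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvideoconvert ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

print(build_ds_pipeline("rtsp://10.168.1.202:8554/test"))
```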

I found a possible solution: tensorrt_demos/camera.py at 32ce1c39ae74c9cf0b93e491ecefbe3453a855c8 · jkjung-avt/tensorrt_demos · GitHub

Instead of nvv4l2decoder, using avdec_h264 makes this pipeline run successfully.

uri = "rtsp://192.168.1.202:8554/test"
cap = "rtspsrc location=%s ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink" % uri

I wonder whether this hurts performance, @mchi, since it doesn't use the NVIDIA decoder plugin.

I tested the decode performance with this script:

import sys
import time

import cv2

uri = "rtsp://192.168.1.202:8554/test"
pipeline = "rtspsrc location=%s ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink" % uri

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    sys.exit("Failed to open camera!")

frame_num = 0
tic = time.time()
while frame_num < 1000:
    ret, frame = cap.read()
    if not ret:
        break
    frame_num += 1
toc = time.time()
print("Decode FPS: %.2f" % (frame_num / (toc - tic)))

cap.release()

The decoding FPS is 23.96, almost the same as the camera's FPS; CPU utilization is about 50% (htop).

avdec_h264 uses SW/CPU for decoding, and videoconvert also uses the CPU for conversion.
So this pipeline runs entirely on the CPU.

Why can't I use the following?

rtspsrc location=%s ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! appsink

I'm afraid it doesn't work:

root@2a13c2648369:/opt/nvidia/deepstream/deepstream-5.0# python3 test_decode.py 

(python3:15540): GStreamer-CRITICAL **: 05:58:01.239: gst_mini_object_unref: assertion 'mini_object != NULL' failed

The test script gets stuck here and appears frozen; even Ctrl+C cannot quit the program.

I am already using the deepstream-5.0.1 Docker container, which is the x86 version.

I really need help running OpenCV with GStreamer on an x86 machine.

I tried the command below; it works on my side:

$ gst-launch-1.0 -e rtspsrc location=rtsp://10.19.225.234/media/video1 ! decodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=I420" ! queue ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=test.mp4

The command below also runs without error:
$ gst-launch-1.0 -e rtspsrc location=rtsp://10.19.225.234/media/video1 ! decodebin ! nvvideoconvert nvbuf-memory-type=3 ! fakesink

But it failed with the log "VideoCapture not opened" after I integrated it into OpenCV. The same code works on Jetson.
Will check and get back.

#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    std::string pipe = "rtspsrc location=rtsp://10.19.225.234/media/video1 ! decodebin ! nvvideoconvert nvbuf-memory-type=3 ! appsink";

    VideoCapture cap(pipe, CAP_GSTREAMER);

    if (!cap.isOpened()) {
        cerr <<"VideoCapture not opened"<<endl;
        exit(-1);
    }

    while (true) {
        Mat frame;

        cap.read(frame);

        imwrite("receiver.png", frame);

        getchar();
    }

    return 0;
}
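One possible explanation for "VideoCapture not opened": OpenCV's appsink consumes system-memory BGR buffers, while nvvideoconvert with nvbuf-memory-type=3 keeps frames in NVMM/CUDA memory. Forcing a copy to system memory and an explicit BGR conversion is one thing to try; whether it resolves the failure on this exact setup is an assumption to verify. A sketch that just builds the candidate string:

```python
# Sketch of a possible fix to try: keep decodebin and nvvideoconvert, but end
# the pipeline with system-memory BGR caps, since OpenCV's appsink cannot
# consume NVMM buffers. Whether this resolves "VideoCapture not opened" on
# this particular setup has not been verified.

def appsink_pipeline(uri: str) -> str:
    """Candidate pipeline ending in system-memory BGR for cv2's appsink."""
    return (
        f"rtspsrc location={uri} ! decodebin ! nvvideoconvert ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

print(appsink_pipeline("rtsp://10.19.225.234/media/video1"))
```

The string can then be passed to `cv2.VideoCapture(..., cv2.CAP_GSTREAMER)` exactly as in the C++ snippet above.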