Python GStreamer pipeline with appsink and filesink

Hello,

I am trying to display the camera image and record the video at the same time in Python,
with H.264 recording at 120 FPS.

I have the following working pipeline on the command line:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 \
! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=120/1, format=NV12' \
! tee name=t \
	t.  ! queue \
		! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream' ! filesink location=test.h264 \
	t. ! queue ! nvvidconv ! xvimagesink

Now, I am trying to set up a cv2.VideoCapture based on this pipeline.

My pipeline string looks like this:

pipeline_str = """
nvarguscamerasrc sensor-id=0
! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=120/1, format=NV12'
! tee name=t
    t.  ! queue
        ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream' ! filesink location=test.h264

    t. ! queue name='display_queue'
            ! nvvidconv 
            ! video/x-raw, width=(int)640, height=(int)360, format=(string)BGRx 
            ! videoconvert ! video/x-raw, format=(string)BGR 
            ! appsink
    """

# This call hangs the application
cap = cv2.VideoCapture(pipeline_str, cv2.CAP_GSTREAMER)

Could someone help me?

Many thanks in advance!

Not sure, but you may try to remove the first ‘t.’.
The tee plugin has its own output, and in your case it has no sink.
You would use ‘t.’ only for the 2nd, 3rd, … sub-pipelines.

Thanks for your answer, but removing the initial ‘t.’ does not solve the issue.

Does 30 fps work?

I do not know. I will test it tomorrow.

Hi, did you succeed in doing it?
In my experience, I could not get appsink and filesink working at the same time using OpenCV VideoCapture.
Let me know if I’m wrong!

Any news about this? I’m facing the same issue.

This is with the first ‘t.’:

(python:20850): GStreamer-WARNING **: 15:13:32.731: Trying to link elements t and queue0 that don't share a common ancestor: queue0 hasn't been added to a bin or pipeline, and t is in pipeline0
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: syntax error
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Traceback (most recent call last):
  File "app/yolocam/test_camset.py", line 66, in <module>
    assert cap.isOpened(), 'Failed to open '
AssertionError: Failed to open 

This is without the first ‘t.’:

(python:20864): GStreamer-CRITICAL **: 15:13:54.275: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: syntax error

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

EDIT:

this works:

camSet = ("nvarguscamerasrc sensor-id=0 ! "
        "video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, framerate=(fraction)30/1 ! tee name=t "
        "t. ! queue ! omxh265enc ! matroskamux ! "
        "filesink location=test.mkv "
        "t. ! queue ! nvvidconv ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink")

Hmm, this was a long time ago. I’m not sure I remember exactly how I handled it.

However, I remember that OpenCV for Python 3 was not built with GStreamer support (while I think the Python 2 build did have GStreamer support).

I did something like this to compile OpenCV for Python 3 with GStreamer support:

git clone https://github.com/opencv/opencv.git && git clone https://github.com/opencv/opencv_contrib.git
cd opencv  && git checkout 4.1.1 && rm -r .git && cd ..
cd opencv_contrib && git checkout 4.1.1 && rm -r .git && cd ..

cd opencv && mkdir build && cd build && cmake -D CMAKE_BUILD_TYPE=RELEASE \ 
  -D WITH_CUDA=ON  \ 
  -D WITH_CUDNN=ON \ 
  -D WITH_CUBLAS=ON \
  -D WITH_LIBV4L=ON \
  -D BUILD_opencv_python3=ON \
  -D BUILD_opencv_python2=OFF \
  -D BUILD_opencv_java=OFF \
  -D WITH_GSTREAMER=ON \
  -D BUILD_TESTS=OFF \
  -D BUILD_PERF_TESTS=OFF \
  -D BUILD_EXAMPLES=OFF \
      -D CUDA_TOOLKIT_ROOT_DIR= /usr/local/cuda-10.2 \
  -D OPENCV_EXTRA_MODULES_PATH= ../../opencv_contrib/modules \
  -D PYTHON_EXECUTABLE=/usr/bin/python3 \
  -D CMAKE_INSTALL_PREFIX=/usr/local .. && \
make -j6 && make install

(beware, this was a long time ago).
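
After installing, a quick sanity check (not part of the original instructions) is to verify that Python picks up the new build and that GStreamer support is actually enabled:

import cv2

print(cv2.__version__)           # should report 4.1.1 after the rebuild
info = cv2.getBuildInformation()
# the Video I/O section should contain a line like "GStreamer: YES"
print([line.strip() for line in info.splitlines() if 'GStreamer' in line])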

I’m trying to get something similar working. I think the fundamental issue is that you need to set “emit-signals=True” and somehow attach a callback to the “new-sample” signal. I’ve not been able to get this working so far for a parsed pipeline.
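
In case it helps, here is a minimal sketch of what I mean, using the GStreamer Python bindings (PyGObject) directly instead of cv2.VideoCapture; the videotestsrc pipeline and frame size are just placeholders, not the real camera pipeline:

import numpy as np
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# placeholder pipeline; emit-signals=True makes the appsink fire 'new-sample'
pipeline = Gst.parse_launch(
    "videotestsrc ! video/x-raw,format=BGR,width=640,height=360 "
    "! appsink name=sink emit-signals=True max-buffers=1 drop=True")

def on_new_sample(sink):
    sample = sink.emit('pull-sample')              # Gst.Sample
    buf = sample.get_buffer()
    caps = sample.get_caps().get_structure(0)
    w, h = caps.get_value('width'), caps.get_value('height')
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if ok:
        frame = np.frombuffer(mapinfo.data, dtype=np.uint8).reshape(h, w, 3)
        # ... process / display the BGR frame here, before unmapping ...
        buf.unmap(mapinfo)
    return Gst.FlowReturn.OK

appsink = pipeline.get_by_name('sink')
appsink.connect('new-sample', on_new_sample)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)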

Here is my pipeline.
G_STREAM_TO_DISC = (
    "v4l2src device=/dev/video0 ! video/x-raw, width=(int){}, height=(int){}, format=(string){}, framerate=(fraction)60/1 "
    "! timeoverlay ! nvvidconv ! video/x-raw(memory:NVMM), width=(int){}, height=(int){}, format=I420 ! tee name=t "
    "t. ! queue ! nvjpegenc ! multifilesink location={}/{}.jpg "
    "t. ! queue ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR, width=(int){}, height=(int){} ! appsink")

This works. The curly brackets are filled in with str.format(), of course…
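
For illustration, filling the placeholders might look like this before opening the capture (all the concrete values below: sizes, format, and output directory, are made-up examples, not from the original post):

import cv2

pipeline = G_STREAM_TO_DISC.format(
    1920, 1080, 'UYVY',            # v4l2src caps: width, height, format (assumed values)
    1920, 1080,                    # NVMM caps after nvvidconv (assumed values)
    '/tmp/frames', 'frame_%05d',   # multifilesink location: directory and file pattern (assumed)
    960, 540)                      # appsink BGR width and height (assumed values)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)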
