How to filesink and appsink simultaneously in OpenCV GStreamer (GStreamer pipeline included)

Hi,
I’m developing a C++-based multi-V4L2-camera application on a Jetson AGX Xavier.

Current Status

[1 V4L2, 2 V4L2 ... N V4L2 ] >-----[App]-----> [Output]

It requires the original video from each of the N cameras, plus one processed output video.

  1. Original video appsink pipeline
    gstream_elements = "v4l2src device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)BGRx, width=(int)3840, height=(int)2160 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"

    The camera input is read like this:
    cv::VideoCapture(gstream_elements, cv::CAP_GSTREAMER)

  2. Original video encoding pipeline & processed output video encoding pipeline
    gstream_elements = "appsrc ! video/x-raw, format=(string)BGR ! queue ! videoconvert ! video/x-raw, format=(string)I420 ! queue ! nvvidconv ! video/x-raw(memory:NVMM), width=3840, height=2160 ! queue ! omxh264enc qp-range=15,30:5,20:-1,-1 ! queue ! mpegtsmux ! hlssink max-files=0 playlist-length=0 target-duration=4 playlist-location=0playlist.m3u8 location=0segment%05d.ts "

    Frames are written like this:
    cv::VideoWriter(gstream_elements, cv::CAP_GSTREAMER, 0, m_fps, cv::Size(3840, 2160), true)

Issue

  1. The current separated pipelines show HIGH CPU USAGE, so I want to integrate appsink and filesink into one pipeline, but it doesn’t seem to work. From searching the web, this may be because OpenCV’s VideoCapture cannot do both jobs…
    Is there any other way?

    gstream_elements = "v4l2src device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160 ! nvvidconv ! tee name=t ! queue ! video/x-raw(memory:NVMM), format=(string)I420, width=(int)3840, height=(int)2160 ! omxh264enc qp-range=20,30:20,30:-1,-1 ! queue ! mpegtsmux ! hlssink max-files=0 playlist-length=0 target-duration=4 playlist-location=0playlist.m3u8 location=0segment%05d.ts t. ! queue ! video/x-raw(memory:NVMM), format=(string)BGRx, width=(int)3840, height=(int)2160 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"

    cv::VideoCapture(gstream_elements, cv::CAP_GSTREAMER)

  2. Some say ipcpipeline can be used here, but it fails to get ipcpipelinesrc.

    video_capture_gstream_elements = "v4l2src device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160 ! nvvidconv ! tee name=t ! queue ! ipcpipelinesink t. ! queue ! video/x-raw(memory:NVMM), format=(string)BGRx, width=(int)3840, height=(int)2160 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"

    video_writer_gstream_elements = "ipcpipelinesrc ! video/x-raw(memory:NVMM), format=(string)I420, width=(int)3840, height=(int)2160 ! omxh264enc qp-range=20,30:20,30:-1,-1 ! queue ! mpegtsmux ! hlssink max-files=0 playlist-length=0 target-duration=4 playlist-location=0playlist.m3u8 location=0segment%05d.ts "

Has anybody succeeded in running appsink and filesink at the same time?

Thank you.

Hi,
Please go to the OpenCV forum for further suggestions. We usually run a single appsink to get the buffer in BGR format. This use case is a bit complicated and we don’t have enough experience to give suggestions, so it would be better to go to the forum. Users there may have more experience developing complex use cases.


Thank you.
I tested with ‘tee’ and it works fine.
However, under high computing load, part of the pipeline stops…

Hi,
Some CPU loading is expected due to the BGR format. Please take a look at

You may set sync=0 on the sinks and check if the pipeline can run without getting stuck.
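
For example, in the tee pipeline, sync=0 would go on each sink element, not just the last one (fragment; other element settings as in the original post):

```
... ! hlssink sync=0 max-files=0 playlist-length=0 target-duration=4 ...
... ! appsink sync=0
```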

Thank you DaneLLL.
I will focus on reducing the CPU load.

I added sync=0 at the end of the pipeline, but which part should I check when it runs?

By the way, could you check whether this pipeline is optimal for v4l2src to appsink (OpenCV)?

std::string gstream_elements = "v4l2src device=/dev/video0 "
"! video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160 "
"! nvvidconv ! video/x-raw(memory:NVMM), format=(string)BGRx, width=(int)3840, height=(int)2160 "
"! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink sync=0 ";

Hi,
Since your source outputs UYVY format, you may try nvv4l2camerasrc:

import sys
import cv2

def read_cam():
    cap = cv2.VideoCapture("nvv4l2camerasrc ! video/x-raw(memory:NVMM),width=3840,height=2160,framerate=15/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink sync=0 ")
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                break
            cv2.imshow('demo', img)
            cv2.waitKey(10)
        cap.release()
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    read_cam()

The caps video/x-raw(memory:NVMM),width=3840,height=2160,framerate=15/1 are for an E-Con CU135 USB camera (capability is 4Kp15); you need to change them per your source. Running with OpenCV takes a certain amount of CPU, so please run sudo nvpmodel -m 0 and sudo jetson_clocks to get maximum performance. You can run sudo tegrastats to check system status.

Thank you.
the Jetson that I’m using does not have nvv4l2camerasrc.
It seems some packages were not installed properly. I will try again after reinstalling GStreamer.

>>gst-inspect-1.0 nvv4l2camerasrc
No such element or plugin 'nvv4l2camerasrc'

Hi,
The plugin was added in JP4.4 (r32.4.3) and JP4.4.1 (r32.4.4). If you use another version, your pipeline looks OK. Or you may try

v4l2src device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160 ! videoconvert ! video/x-raw, format=BGR ! appsink sync=0 

Since OpenCV uses CPU buffers, you may not need to copy CPU buffers to NVMM buffers and then copy them back to CPU buffers.
