OpenCV VideoWriter to GStreamer appsrc

Hi everyone! First off, long-time creeper and first-time poster on here. Thank you everyone for posting here; I use this forum every day to learn how to work with the Nano.

Secondly (mainly), I’m trying to alter the code in the JetsonHacks dual_camera.py file. I want to send the stitched-together frames to the H.264 encoder and then to a udpsink. I’m able to open the cameras and receive frames just fine; I just can’t send the frames out for processing.

This is how I open the VideoWriter (note the space after each `!`, otherwise the concatenated strings run the element names together):

out = cv2.VideoWriter('appsrc ! '
    'omxh264enc control-rate=2 bitrate=4000000 ! '
    'video/x-h264, stream-format=byte-stream ! '
    'rtph264pay mtu=1400 ! '
    'udpsink host=192.168.0.110 port=5000 sync=false async=false',
    0, 60, (540, 960*2))  # *2 because this uses 2 frames hstack'd together

this is how the images are meant to be received:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false async=false -e

and this is the output for the python script I’m getting:

nvbuf_utils: Could not get EGL display connection
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 3
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: PowerService: requested_clock_Hz=6048000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 1
Camera mode = 3
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module appsrc0 reported: Internal data stream error.
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline

I’m not getting anything on the receiver side.

Any help would be appreciated!

Hi,
There is a reference of udp streaming:

Please check if you can run it successfully.

I can’t. I’m running the Nano headless, so I have to SSH in and then redirect the output streams to my laptop. Right now I am able to run:

[server/nano]
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! video/x-h264, stream-format=byte-stream ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

[client/laptop]
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false async=false -e

and get frames.

That works well, but since I have the newer Nano with the second CSI input, I want to get both cameras rather than just one. So I looked at https://github.com/JetsonHacksNano/CSI-Camera/blob/master/dual_camera.py to figure out how to get a second input. This does most of what I need; I just changed a vstack to an hstack. It would work well, but since this is a network connection I’m SSH’ing over, it’s not compressing the X11 windows well enough. It appears I’m only getting 1 Hz frame rates, and the frames look unusable.
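As a side note, the vstack-to-hstack change just swaps which axis the two frames are joined along. A quick numpy sketch (with synthetic zero frames standing in for the camera images) shows the resulting shapes:

```python
import numpy as np

# Two synthetic 720p BGR frames standing in for the two camera images
left = np.zeros((720, 1280, 3), dtype=np.uint8)
right = np.zeros((720, 1280, 3), dtype=np.uint8)

stacked_v = np.vstack((left, right))  # one above the other: heights add
stacked_h = np.hstack((left, right))  # side by side: widths add

print(stacked_v.shape)  # (1440, 1280, 3)
print(stacked_h.shape)  # (720, 2560, 3)
```

Whatever size the stacked frame ends up being is the size the downstream encoder has to be told about.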

To counter that, I figured I would add back the H.264 compression part of the original pipeline so the network doesn’t get bogged down transferring whole uncompressed frames. I’m trying to do this by writing frames to the VideoWriter with:

out = cv2.VideoWriter('appsrc ! '
    'omxh264enc control-rate=2 bitrate=4000000 ! '
    'video/x-h264, stream-format=byte-stream ! '
    'rtph264pay mtu=1400 ! '
    'udpsink host=192.168.0.110 port=5000 sync=false async=false',
    0, 60, (540, 960*2))  # *2 because this uses 2 frames hstack'd together

However, I am not able to get frames on the receiver, and I see this error:

[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module appsrc0 reported: Internal data stream error.
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline

I’ve tried Googling and searching on here, but I can’t find anything that does this, and am stuck at this point.

Hi,
There is a post about OpenCV+RTMP:


You may check if it helps your usecase.

Also, we have the nvcompositor plugin. You may consider using it to composite the two sources into one frame and then send that to the encoder.
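For a vertical stack of same-sized tiles, the `sink_N::xpos/ypos/width/height` pad properties follow a simple pattern. A hypothetical helper that generates them (the function and its layout convention are my own, not part of nvcompositor) might look like:

```python
def compositor_sink_props(num_tiles, width, height):
    """Build nvcompositor sink pad properties for tiles stacked vertically."""
    parts = []
    for i in range(num_tiles):
        parts.append(
            "sink_{i}::xpos=0 sink_{i}::ypos={y} "
            "sink_{i}::width={w} sink_{i}::height={h}".format(
                i=i, y=i * height, w=width, h=height  # each tile starts below the last
            )
        )
    return " ".join(parts)

print(compositor_sink_props(2, 1920, 1080))
```

Generating the properties this way keeps the ypos offsets and tile heights consistent, which is easy to get wrong when typing a long gst-launch command by hand.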

Okay, I tried doing the raw command for it using nvcompositor as you suggested, but now I just keep getting "WARNING: erroneous pipeline: syntax error". There is no other output to give hints.

What should be happening is a compositor is made with 2 areas, one frame on top of the other. This combined frame is sent to the H.264 encoder. Camera 0 is sent to the sink_0 spot and camera 1 goes to the sink_1 spot.

gst-launch-1.0 nvcompositor name=comp \
  sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 \
  sink_1::xpos=0 sink_1::ypos=1080 sink_1::width=1920 sink_1::height=1024 ! \
  nvv4l2h264enc bitrate=4000000 insert-sps-pps=true ! \
  rtph264pay mtu=1400 ! \
  udpsink host=192.168.0.106 port=5000 \
  nvarguscamerasrc sensor-id=0 sensor-mode=3 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! \
  nvvidconv flip-method=0 ! \
  comp. \
  nvarguscamerasrc sensor-id=1 sensor-mode=3 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! \
  nvvidconv flip-method=0 ! \
  comp.

Hi,
The source pad of nvcompositor only supports RGBA, so you need to run it like:
… ! nvcompositor ! nvvidconv ! nvv4l2h264enc ! …

Another reference of using nvcompositor:

I tried the example you sent me. I’m now getting

nvbuf_utils: Could not get EGL display connection

WARNING: erroneous pipeline: could not link nvarguscamerasrc0 to nvvconv1, nvarguscamerasrc0 can’t handle caps video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)RGBA, framerate=(fraction)60/1

This seems really strange, because this camera can handle 720p at 60 fps, and besides, I’m not looking for a display.

Here’s the new pipeline with the alterations from your example:

gst-launch-1.0 nvcompositor name=comp \
  sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1280 sink_0::height=720 \
  sink_1::xpos=0 sink_1::ypos=720 sink_1::width=1280 sink_1::height=1024 ! \
  nvvidconv ! \
  nvv4l2h264enc bitrate=4000000 insert-sps-pps=true ! \
  rtph264pay mtu=1400 ! \
  udpsink host=192.168.0.106 port=5000 \
  nvarguscamerasrc sensor-id=0 sensor-mode=3 ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, format=RGBA, framerate=60/1' ! \
  nvvidconv flip-method=0 ! \
  comp.sink_0 \
  nvarguscamerasrc sensor-id=1 sensor-mode=3 ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, format=RGBA, framerate=60/1' ! \
  nvvidconv flip-method=0 ! \
  comp.sink_1

Also, thank you for all the help you have given me so far.

GOT IT!!! It didn’t like the RGBA format.

gst-launch-1.0 nvcompositor name=comp \
  sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1280 sink_0::height=720 \
  sink_1::xpos=0 sink_1::ypos=720 sink_1::width=1280 sink_1::height=720 ! \
  nvvidconv ! \
  nvv4l2h264enc bitrate=16000000 insert-sps-pps=true ! \
  rtph264pay mtu=1400 ! \
  udpsink host=192.168.0.106 port=5000 \
  nvarguscamerasrc sensor-id=0 sensor-mode=3 ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=60/1' ! \
  nvvidconv flip-method=0 ! \
  comp.sink_0 \
  nvarguscamerasrc sensor-id=1 sensor-mode=3 ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=60/1' ! \
  nvvidconv flip-method=0 ! \
  comp.sink_1

Now I’m going to try to put this in a VideoWriter in OpenCV.


Okay, so this is failing in Python now. I reduced it to just one camera without the compositor; I figure when I add that back in, it will look like one big frame either way.

def gstreamer_pipeline_in():
    return (
        "nvarguscamerasrc sensor-id=0 sensor-mode=3 ! "
        "video/x-raw(memory:NVMM), "
        "  width=1280, "
        "  height=720, "
        "  format=NV12, "
        "  framerate=60/1 ! "
        "nvvidconv flip-method=0 ! "
        "videoconvert ! "
        "video/x-raw, format=BGR ! "
        "appsink"
    )

def gstreamer_pipeline_out():
    return (
        "appsrc ! "
        "nvv4l2h264enc bitrate=16000000 insert-sps-pps=true ! "
        "rtph264pay mtu=1400 ! "
        "udpsink host=192.168.0.106 port=5000 "
    )

out = cv2.VideoWriter(gstreamer_pipeline_out(), 0, 60, (1280,720*2))

out.write(image)
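As an aside, one easy slip when concatenating pipeline fragments like the ones above is a missing space around `!` (adjacent string literals are joined with no separator). A small convenience helper of my own (not an OpenCV or GStreamer API) sidesteps that:

```python
def build_pipeline(*elements):
    """Join GStreamer element descriptions with ' ! ' so spacing can't be wrong."""
    return " ! ".join(e.strip() for e in elements)

# Assemble the same appsrc-to-udpsink pipeline from its elements
pipeline = build_pipeline(
    "appsrc",
    "nvv4l2h264enc bitrate=16000000 insert-sps-pps=true",
    "rtph264pay mtu=1400",
    "udpsink host=192.168.0.106 port=5000",
)
print(pipeline)
```

The resulting string can be passed straight to cv2.VideoWriter, and each element can be edited on its own line without worrying about separators.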

What is your frame format in OpenCV? If it is BGR/RGB, you need to convert it to I420 or NV12 for the nvv4l2h264enc encoder; videoconvert would do that. I’d also suggest setting config-interval to 1 on rtph264pay if streaming through UDP:

def gstreamer_pipeline_out():
return (
"appsrc ! "
"videoconvert ! "
"nvv4l2h264enc bitrate=16000000 insert-sps-pps=true ! "
"h264parse ! "
"rtph264pay config-interval=1 ! "
"udpsink host=192.168.0.106 port=5000 "
)

Yeah, OpenCV is supposed to give me RGB. I put your suggestion in, but I still get the:

[ WARN:0] global /PATH_TO_OPENCV/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline.

You may try adding queue after appsrc:

def gstreamer_pipeline_out():
    return (
        "appsrc ! "
        "queue ! "
        "videoconvert ! "
        "nvv4l2h264enc bitrate=16000000 insert-sps-pps=true ! "
        "h264parse ! "
        "rtph264pay config-interval=1 ! "
        "udpsink host=192.168.0.106 port=5000 "
    )

Also check if the writer is opened:

if not out.isOpened():
    print('VideoWriter not opened')
    exit(0)

Be also sure that the frame you’re trying to push is 1280 x 1440.
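The size mismatch is easy to catch up front: `frame.shape` is (height, width, channels), while cv2.VideoWriter takes its frame size as (width, height), so the tuple to pass is the shape reversed. A numpy-only sketch of the check (no camera needed, the zero frame stands in for a vstack of two 720p images):

```python
import numpy as np

# Synthetic frame standing in for a vstack of two 1280x720 camera frames
frame = np.zeros((1440, 1280, 3), dtype=np.uint8)

h, w = frame.shape[:2]       # numpy order: (height, width)
writer_size = (w, h)         # cv2.VideoWriter order: (width, height)

print(writer_size)  # (1280, 1440)
```

Asserting that each frame's `(shape[1], shape[0])` equals the size given to the writer before calling `write()` turns the silent buffer-push failure into an obvious error.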

I have this pipeline working on Xavier with R32.3.1:

"appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! omxh264enc ! video/x-h264, stream-format=byte-stream ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=192.168.0.106 port=5000"

Note that nvv4l2h264enc has problems in this case, and leads to much lower framerate.

This can be displayed with:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp ! queue ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! videoconvert ! fpsdisplaysink video-sink=xvimagesink
1 Like

Here is the data type that is coming out of camera.read():
(Pdb) p right_image.shape
(720, 1280, 3)

And the type is a numpy array:
(Pdb) p type(right_image)
<class 'numpy.ndarray'>

queue does not help either.

If the image you are pushing is 1280x720 as your capture, then your VideoWriter has an extra “*2”. Try changing to

out = cv2.VideoWriter(gstreamer_pipeline_out(), 0, 60, (1280,720))

Wow, haha. When you look at code for so long, you forget a basic thing like that.

Also, thank you so much for helping me get this part going; I really appreciate it. I’ll post the full script on here once I get it working, for people to template off of.

Okay, with that I still get the warnings below. I’ll try your known working pipeline next to see if I can get that working; perhaps I did a bad install.

CONSUMER: Producer has connected; continuing.

[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

Opening in BLOCKING MODE

[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module appsrc0 reported: Internal data stream error.

[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline

[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline

[ WARN:0] global /home/reed/opencv/modules/videoio/src/cap_gstreamer.cpp (1663) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline

Note that I have not tried this from Python, only from the C++ API.
You should first try to display your final image from OpenCV with cv2.imshow() followed by cv2.waitKey(-1). When you can display it, then try the pipeline I’ve provided and post errors if any.

That is the magic pipeline! I had one slight issue of my Mac being issued a new IP address at x.x.x.107, but it works and frames are coming through!!!

I’ll clean the code up and post it tonight along with the receiver pipeline

Thank you @Honey_Patouceul and @DaneLLL for helping with this


Here is code for others to start working on streaming camera apps with the Jetson Nano and OpenCV in Python, sending the video off remotely.

#! /usr/bin/python3

# MIT License
# Copyright (c) 2019,2020 JetsonHacks
# See license
# A very simple code snippet
# Using a CSI camera (such as the Raspberry Pi Version 2) connected to a
# NVIDIA Jetson Nano Developer Kit (Rev B01) using OpenCV
# Drivers for the camera and OpenCV are included in the base image in JetPack 4.3+

# This script reads the camera stream in its own thread, as reading
# sequentially causes noticeable lag, and streams the frames over UDP
# as H.264/RTP instead of displaying them in a window.

import cv2
import threading
import numpy as np

# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
# Flip the image by setting the flip_method (most common values: 0 and 2)

cam = None

class CSI_Camera:

    def __init__ (self) :
        # Initialize instance variables
        # OpenCV video capture element
        self.video_capture = None
        # The last captured image from the camera
        self.frame = None
        self.grabbed = False
        # The thread where the video capture runs
        self.read_thread = None
        self.read_lock = threading.Lock()
        self.running = False


    def open(self, gstreamer_pipeline_string):
        try:
            self.video_capture = cv2.VideoCapture(
                gstreamer_pipeline_string, cv2.CAP_GSTREAMER
            )
            
        except RuntimeError:
            self.video_capture = None
            print("Unable to open camera")
            print("Pipeline: " + gstreamer_pipeline_string)
            return
        # Grab the first frame to start the video capturing
        self.grabbed, self.frame = self.video_capture.read()

    def start(self):
        if self.running:
            print('Video capturing is already running')
            return None
        # create a thread to read the camera image
        if self.video_capture is not None:
            self.running=True
            self.read_thread = threading.Thread(target=self.updateCamera)
            self.read_thread.start()
        return self

    def stop(self):
        self.running=False
        self.read_thread.join()

    def updateCamera(self):
        # This is the thread to read images from the camera
        while self.running:
            try:
                grabbed, frame = self.video_capture.read()
                with self.read_lock:
                    self.grabbed=grabbed
                    self.frame=frame
            except RuntimeError:
                print("Could not read image from camera")
        # FIX ME - stop and cleanup thread
        # Something bad happened
        

    def read(self):
        with self.read_lock:
            frame = self.frame.copy()
            grabbed=self.grabbed
        return grabbed, frame

    def release(self):
        if self.video_capture is not None:
            self.video_capture.release()
            self.video_capture = None
        # Now kill the thread
        if self.read_thread is not None:
            self.read_thread.join()


# The frame rate of the CSI camera on the Nano is set through the gstreamer pipeline.
# Here we directly select sensor_mode 3 (1280x720, 59.9999 fps)
def gstreamer_pipeline():
    return (
        "nvarguscamerasrc sensor-id=1 sensor-mode=3 ! "
        "video/x-raw(memory:NVMM), "
        "  width=1280, "
        "  height=720, "
        "  format=NV12, "
        "  framerate=60/1 ! "
        "nvvidconv flip-method=0 ! "
        "videoconvert ! "
        "video/x-raw, format=BGR ! "
        " appsink"
    )

def gstreamer_pipeline_out():
    return (
        "appsrc ! "
        "video/x-raw, format=BGR ! "
        "queue ! "
        "videoconvert ! "
        "video/x-raw, format=BGRx ! "
        "nvvidconv ! "
        "omxh264enc ! "
        "video/x-h264, stream-format=byte-stream ! "
        "h264parse ! "
        "rtph264pay pt=96 config-interval=1 ! "
        "udpsink host=192.168.0.110 port=5000"
    )

def start_cameras():
    cam = CSI_Camera()
    cam.open(gstreamer_pipeline())
    cam.start()

    if not cam.video_capture.isOpened():
        print("Unable to open any cameras")
        raise SystemExit(1)

    out = cv2.VideoWriter(gstreamer_pipeline_out(), 0, 60, (1280, 720))

    if not out.isOpened():
        print('VideoWriter not opened')
        raise SystemExit(1)

    try:
        while True:
            _, frame = cam.read()
            img = cv2.blur(frame, (3, 3))
            #img = cv2.medianBlur(img, 9)
            out.write(img)
    finally:
        cam.stop()
        cam.release()

if __name__ == "__main__":
    start_cameras()

Use this to receive it:

gst-launch-1.0 udpsrc port=5000 ! \
application/x-rtp,encoding-name=H264,payload=96 ! \
rtph264depay ! \
h264parse ! \
queue ! \
avdec_h264 ! \
autovideosink sync=false async=false -e