Applying a GStreamer pipeline in OpenCV

I am trying to use OpenCV to preprocess real-time video from my camera on a Jetson Nano. After that, I want to use cv2.VideoWriter() to send the processed stream on to DeepStream. Here is my code:

import numpy as np
import cv2
from multiprocessing import Process

def send():
    cap_send = cv2.VideoCapture(0)
    # NOTE: the original pipeline left the udpsink host empty, which makes it
    # fail; host=127.0.0.1 is assumed here for local testing.
    out_send = cv2.VideoWriter(
        'appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 '
        'speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port=5000',
        cv2.CAP_GSTREAMER, 0, 20, (320, 240), True)

    if not cap_send.isOpened() or not out_send.isOpened():
        print('VideoCapture or VideoWriter not opened')
        return

    while True:
        ret, frame = cap_send.read()
        if not ret:
            print('empty frame')
            break

        # The writer was opened for 320x240 frames; resize to match,
        # since most webcams capture at 640x480 by default.
        frame = cv2.resize(frame, (320, 240))
        out_send.write(frame)

        cv2.imshow('send', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap_send.release()
    out_send.release()


def receive():
    cap_receive = cv2.VideoCapture(
        'udpsrc port=5000 caps="application/x-rtp, media=(string)video, '
        'clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" '
        '! rtph264depay ! decodebin ! videoconvert ! appsink',
        cv2.CAP_GSTREAMER)

    if not cap_receive.isOpened():
        print('VideoCapture not opened')
        return

    while True:
        ret, frame = cap_receive.read()
        if not ret:
            print('empty frame')
            break

        cv2.imshow('receive', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap_receive.release()


if __name__ == '__main__':
    s = Process(target=send)
    r = Process(target=receive)
    s.start()
    r.start()
    s.join()
    r.join()
    cv2.destroyAllWindows()


I only get the "send" window, so it seems something goes wrong with out_send. I also get this in the terminal:

NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NVMEDIA: NVMEDIABufferProcessing: 1503: NvMediaParserParse Unsupported Codec
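(One thing worth checking first, as a side note: a pipeline-based VideoCapture or VideoWriter will silently fail to open if the installed OpenCV build was compiled without GStreamer support. A quick sketch to verify the build:)

    import cv2

    # Print the GStreamer line from the OpenCV build summary. If it says "NO",
    # cv2.VideoWriter(..., cv2.CAP_GSTREAMER, ...) can never open a pipeline.
    for line in cv2.getBuildInformation().splitlines():
        if 'GStreamer' in line:
            print(line.strip())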

Does anyone know where the problem is?
Thanks in advance!!!

What is it you are trying to accomplish? What is the end-to-end pipeline you want to create, so we can help you better?

From the info given, it sounds like you would be better off using the DeepStream elements to capture and process your video, rather than using OpenCV to run the overall process, which is much slower. If you need to do any specific OpenCV processing on each frame, you can use the dsexample element within the pipeline.

We have implementations in the source group:

Please take a look and check whether you can use an existing implementation. If none can be applied directly, you would need to implement appsrc in the source group. The source-group file is


Hello, thanks for your answer. In fact, we are trying to capture real-time video with a camera and add a logo to it using OpenCV (Python). After processing, we want to send the processed video to another machine over UDP (or any other protocol that would work). We want to do all of this using only OpenCV in Python, meaning we would use the OpenCV function cv2.VideoWriter (if that is the right one) to send the processed video. However, we don't know how to send the frames with GStreamer from Python (combining GStreamer with OpenCV in Python). We have written a test code in the question above, but it doesn't run correctly. Could you please check the error, or give us a working example? Thank you!
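(For the logo-overlay step specifically, here is a minimal sketch of blending a logo into a corner of a frame with plain OpenCV. The arrays here are hypothetical stand-ins; in practice the frame comes from cap.read() and the logo from cv2.imread().)

    import numpy as np
    import cv2

    # Hypothetical stand-ins: a black 320x240 frame and a white 48x48 logo.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    logo = np.full((48, 48, 3), 255, dtype=np.uint8)

    # Blend the logo into the top-left corner at 50% opacity.
    h, w = logo.shape[:2]
    roi = frame[0:h, 0:w]
    frame[0:h, 0:w] = cv2.addWeighted(roi, 0.5, logo, 0.5, 0.0)

In the real script this would run on each frame returned by cap_send.read(), just before out_send.write(frame).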

Looks like your source is a camera source. We have type=1: Camera (V4L2) and type=5: Camera (CSI, Jetson only). Please check whether you can adapt your code to use these two source types. For putting a logo on the frames, you can use the NvBufSurface APIs to access the frames and add the logo. Please check


There is a sample of calling OpenCV APIs in the dsexample plugin: