[Gstreamer] nvvidconv, BGR as INPUT

Hello,
according to:
https://developer.download.nvidia.com/embedded/L4T/r32_Release_v1.0/Docs/Accelerated_GStreamer_User_Guide.pdf

nvvidconv allows BGRx as input but not BGR. Have you considered adding this format?
That way it would not be necessary to use videoconvert, which runs on the CPU.

I am asking because my application does the following:

  1. capture a frame from RTSP and convert it to BGR
  2. detect objects
  3. edit the frame with OpenCV
  4. write the frame to nvdrmvideosink

In step 4 I use this code (Python):

send_gst = "appsrc ! video/x-raw, format=(string)BGR ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! nvdrmvideosink -e"
out_send = cv2.VideoWriter(send_gst, 0, int(stream_fps), (opt.width, opt.height))

and I need to use videoconvert instead of using nvvidconv directly to convert the frame to I420.
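
For completeness, here is a minimal sketch of how I drive that writer; the credentials, frame size and FPS are placeholder values, and the detection/editing steps are only stubbed out:

import cv2

width, height, stream_fps = 1280, 720, 25  # placeholder values

# capture: HW decode + nvvidconv to BGRx, then videoconvert packs 3-channel BGR for appsink
cap = cv2.VideoCapture("rtspsrc location=rtsp://user:pwd@192.168.0.202:554/ latency=500 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")

# writer: BGR has to go through videoconvert (CPU) before nvvidconv can take it
send_gst = "appsrc ! video/x-raw, format=(string)BGR ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! nvdrmvideosink"
out_send = cv2.VideoWriter(send_gst, 0, int(stream_fps), (width, height))

while cap.isOpened():
    ret, frame = cap.read()                      # step 1: BGR frame from the RTSP capture
    if not ret:
        break
    # steps 2-3: object detection and OpenCV drawing on `frame` go here
    frame = cv2.resize(frame, (width, height))   # frames must match the writer size
    out_send.write(frame)                        # step 4: push the edited frame to nvdrmvideosink

cap.release()
out_send.release()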

Hi,

It is a limitation of the hardware VIC engine. You have to use the videoconvert plugin in this use case.

We have DS4.0.1 for deep learning use cases. You may also check it; you may be able to apply it to your use case.

I’m trying to find a solution; unfortunately DeepStream is not an option. I need to work in Python.

I’m using this pipeline:

rtspsrc location=rtsp://admin:Password@192.168.0.202:554/ latency=500 ! queue ! rtph264depay ! queue ! h264parse ! omxh264dec !   nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=200000000 ! appsink sync=true

It works, but I would like to avoid using videoconvert because it runs on the CPU instead of the GPU.
In theory it is possible to manage RGBA frames in Python, but if I change my pipeline in this way:

rtspsrc location=rtsp://admin:Password@192.168.0.202:554/ latency=500 ! queue ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=RGBA ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=200000000 ! appsink sync=true

I get an error:
Trying to dispose element capsfilter21, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

Hi,
We have tried the following Python code on TX2 / r32.1 / OpenCV 3.4.1:

import sys
import cv2

def read_cam():
    # Capture NV12 from the camera; nvvidconv converts it to I420 on the HW VIC engine
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)I420 ! appsink")
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                break
            # I420 -> BGR conversion is done by OpenCV on the CPU
            img2 = cv2.cvtColor(img, cv2.COLOR_YUV2BGR_I420)
            cv2.imshow('demo', img2)
            if cv2.waitKey(10) == 27:  # exit on Esc
                break
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    read_cam()

It may help your use case. FYR.


But in this way…

img2 = cv2.cvtColor(img, cv2.COLOR_YUV2BGR_I420)

isn't that done by the CPU rather than the GPU?

Am I wrong?

Hi,
It is done on the CPU, since the nvvidconv plugin does not support BGR.

Another way is to run

cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)RGBA ! appsink")

And do

img2 = cv2.cvtColor(img, cv2.COLOR_RGBA2BGR)

However, cv2.cvtColor(img, cv2.COLOR_RGBA2BGR) is still done on the CPU.
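
Putting the two pieces together, a minimal sketch of this alternative, assuming your OpenCV build's GStreamer backend accepts RGBA frames from appsink (the camera settings are the same as in the earlier example):

import cv2

# nvvidconv converts NV12 to RGBA on the VIC engine; appsink hands the RGBA buffer to OpenCV
cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)RGBA ! appsink")

while cap.isOpened():
    ret_val, img = cap.read()
    if not ret_val:
        break
    # drop the alpha channel and reorder to BGR; this step still runs on the CPU
    img2 = cv2.cvtColor(img, cv2.COLOR_RGBA2BGR)
    cv2.imshow('demo', img2)
    if cv2.waitKey(10) == 27:  # exit on Esc
        break

cap.release()
cv2.destroyAllWindows()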

hi all,
thanks for this discussion and the information on how to get GStreamer on the Jetson Nano into OpenCV.
But I am stuck on the right pipeline for an RTSP stream. Can anybody kindly give me a hint?

I adopted DaneLLL's last suggestion for my RTSP stream, but it fails to grab:

cap = cv2.VideoCapture('rtspsrc location=rtspt://user.pwd@192.168.14.83/Streaming/Channels/101 ! decodebin ! nvvidconv ! video/x-raw,format=RGBA ! appsink sync=0' )

I got both of these variants working (with the same Python code):

cap = cv2.VideoCapture('rtspsrc location=rtspt://usr:pwd@192.168.14.83/Streaming/Channels/101 ! decodebin ! nvvidconv ! video/x-raw,format=I420,width=1061,height=600 ! videoconvert ! appsink sync=0' )

cap = cv2.VideoCapture('rtspsrc location=rtspt://usr:pwd@192.168.14.83/Streaming/Channels/101 ! decodebin ! nvvidconv ! video/x-raw,format=BGRx,width=1061,height=600 ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink sync=0' )

I simply want to grab the RTSP stream (which works well with GStreamer in OpenCV) and feed the frames into TensorRT (which I installed from the dusty-nv GitHub and which works fine).

On my first tries with some pipelines I got grayscale images, and with RTSP neither of DaneLLL's suggested conversions above (cv2.cvtColor(img, cv2.COLOR_YUV2BGR_I420) and cv2.cvtColor(img, cv2.COLOR_RGBA2BGR)) comes into play, since nvvidconv with format=RGBA fails on the RTSP stream.

Maybe a little hint on how to get the leanest and/or fastest GStreamer-accelerated pipeline for an RTSP source into OpenCV?

Thanks, all suggestions greatly appreciated.


An OpenCV application is only able to grab 1- or 3-channel frames from VideoCapture.
The first working case gets I420 frames; you may have to convert them into BGR with:

cv2.cvtColor(img, cv2.COLOR_YUV2BGR_I420)

In the second case, you use the nvvidconv HW engine for converting into RGBx and then videoconvert for removing the extra 4th byte and packing into BGR, ready for most OpenCV algorithms.

You would probably not bother with RGBA conversion for this case.

Note that the conversion from YUV to BGR in OpenCV may not be better than nvvidconv+videoconvert in GStreamer. You may benchmark your case to be sure.
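
If it helps, here is a minimal sketch of that second case for an RTSP source, as a starting point (the URL, size and the processing step are placeholders):

import cv2

# rtspsrc + decodebin, nvvidconv (HW) to BGRx, videoconvert (CPU) packs 3-channel BGR for appsink
pipeline = "rtspsrc location=rtspt://usr:pwd@192.168.14.83/Streaming/Channels/101 ! decodebin ! nvvidconv ! video/x-raw,format=BGRx,width=1061,height=600 ! videoconvert ! video/x-raw,format=BGR ! appsink sync=0"
cap = cv2.VideoCapture(pipeline)   # on newer OpenCV you can also pass cv2.CAP_GSTREAMER explicitly

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # `frame` is already BGR, ready for most OpenCV algorithms or TensorRT preprocessing
    cv2.imshow("rtsp", frame)
    if cv2.waitKey(1) == 27:   # exit on Esc
        break

cap.release()
cv2.destroyAllWindows()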