I want to adapt OpenCV code to a GStreamer pipeline on the Jetson.

I have completed video streaming between the Jetson TX1 and my PC.

Now I have OpenCV code for video processing,

so I want to adapt that OpenCV code to the GStreamer pipeline on the Jetson.

The following is the PC-Jetson TX1 streaming setup and the OpenCV code.

CLIENT_IP=10.100.0.70

gst-launch-1.0 nvcamerasrc fpsRange="30 30" intent=3 ! nvvidconv flip-method=6 \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! \
omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

// PC code

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! xvimagesink sync=false async=false -e

// OpenCV code

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img, img_gray;

    cv::VideoCapture input(0);

    for (;;)
    {
        if (!input.read(img))
            break;

        // Frames delivered by VideoCapture are BGR, so convert BGR to gray
        cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);

        cv::imshow("img", img);
        cv::imshow("gray", img_gray);
        char c = cv::waitKey(30);

        if (c == 27)
            break;
    }

    return 0;
}

You first have to get the sources, then configure, build, and install your own OpenCV library, because opencv4tegra doesn't provide GStreamer support.

You may follow http://docs.opencv.org/3.2.0/d6/d15/tutorial_building_tegra_cuda.html and check the configuration for TX1, changing

-DWITH_GSTREAMER=OFF \

to

-DWITH_GSTREAMER=ON \
Some info may also be found in http://dev.t7.ai/jetson/opencv/.

You should also change CMAKE_INSTALL_PREFIX, for example to /usr/local/opencv-3.2.0 if you're building version 3.2.0.

Then compile… it may take about an hour. Be sure to build on a disk with at least 4 GB free.
Then install.

At that point I suggest you have a look at this thread for capturing frames with GStreamer, converting them to BGR, and sending them into an OpenCV application: https://devtalk.nvidia.com/default/topic/1001696/jetson-tx1/failed-to-open-tx1-on-board-camera/post/5117370/#5117370.

Then you may use the class cv::VideoWriter to send your transformed frames from your application into the second part of the pipeline, which will encode them and send them over UDP. Be sure to add a space at the end of that pipeline string: because of a bug, OpenCV may otherwise think it is a file path and look for an extension after the dot of the IP address.
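For illustration, here is a minimal C++ sketch of that idea, assuming OpenCV 3.x built with GStreamer support; the encoder elements, host IP, resolution and framerate are placeholders taken from the pipelines above, to be adapted to your setup (note the trailing space at the end of the pipeline string):

#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <string>

int main()
{
    // appsrc pipeline: each frame written by OpenCV is encoded and streamed over UDP.
    // The trailing space before the closing quote works around the file-path bug mentioned above.
    std::string pipeline =
        "appsrc ! videoconvert ! omxh264enc control-rate=2 bitrate=4000000 ! "
        "h264parse ! rtph264pay mtu=1400 ! udpsink host=10.100.0.70 port=5000 ";

    cv::VideoWriter writer(pipeline, cv::CAP_GSTREAMER, 0, 30.0, cv::Size(1920, 1080), true);
    if (!writer.isOpened())
        return -1;

    cv::Mat frame(1080, 1920, CV_8UC3, cv::Scalar(0, 255, 0)); // dummy BGR frame
    writer.write(frame); // pushed into appsrc, encoded and sent over UDP
    return 0;
}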

Be aware that the framerate may be low depending on your processing, as it will get frames in CPU memory.

Thank you for your answer.

But I have a question.

I can't understand the following part of your answer:

'Then you may use the class cv::VideoWriter to send your transformed frames from your application into the second part of the pipeline, which will encode them and send them over UDP. Be sure to add a space at the end of that pipeline string: because of a bug, OpenCV may otherwise think it is a file path and look for an extension after the dot of the IP address.'

That URL is about real-time monitoring from the PC to the Jetson TX1, so can I adapt your method to it?

What I mean is that currently you have a single pipeline on the Jetson that goes:
nvcamerasrc → nvvidconv → omxh264enc → h264parse → rtph264pay → udpsink

That should be split after nvvidconv into 2 parts:

First part: frame acquisition with:
nvcamerasrc → nvvidconv → (add videoconvert here for BGR conversion) → appsink
This pipeline will be opened as a cv::VideoCapture, with OpenCV acting as its appsink; once it is open, you will be able to read frames into a cv::Mat, as shown in the example mentioned above.

Second part: processed frame encoding and sending: appsrc → (maybe something required for conversion from BGR to a sink format suitable for omxh264enc →) omxh264enc → h264parse → rtph264pay → udpsink
You will therefore also create a cv::VideoWriter from this second part of your pipeline, so that each processed frame you write into it is automatically encoded and sent into the UDP stream.

Then just do your processing with OpenCV in your application, in a loop that reads each frame from the VideoCapture and writes the processed frame into the VideoWriter.
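Putting both parts together, a rough C++ sketch of the full loop could look like this; it is only a sketch, assuming OpenCV 3.2 or later built with GStreamer support, and the caps, resolution, bitrate and destination IP are assumptions based on the pipelines earlier in the thread, so adapt them to your case:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <string>

int main()
{
    // First part: camera -> nvvidconv -> BGR -> appsink (OpenCV reads from here).
    std::string capPipe =
        "nvcamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, format=I420, framerate=30/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";

    // Second part: appsrc (OpenCV writes here) -> encode -> RTP -> UDP.
    // Note the trailing space at the end of the string.
    std::string outPipe =
        "appsrc ! videoconvert ! omxh264enc control-rate=2 bitrate=4000000 ! "
        "h264parse ! rtph264pay mtu=1400 ! udpsink host=10.100.0.70 port=5000 ";

    cv::VideoCapture cap(capPipe, cv::CAP_GSTREAMER);
    cv::VideoWriter out(outPipe, cv::CAP_GSTREAMER, 0, 30.0, cv::Size(1920, 1080), true);
    if (!cap.isOpened() || !out.isOpened())
        return -1;

    cv::Mat frame, gray, processed;
    for (;;)
    {
        if (!cap.read(frame))                               // BGR frame from the appsink
            break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);      // example processing
        cv::cvtColor(gray, processed, cv::COLOR_GRAY2BGR);  // writer expects 3-channel BGR
        out.write(processed);                               // encoded and streamed by the second pipeline
    }
    return 0;
}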

You should pay attention to the supported capabilities of each element with gst-inspect-1.0; each plugin describes its src and sink capabilities. Note the memory spaces such as memory:NVMM… if required, nvvidconv can copy from one to the other (say to/from CPU memory, i.e. a sink/src capability of video/x-raw with no memory space mentioned) if you provide the caps before and after nvvidconv.

I would advise prototyping both of your split pipelines with gst-launch-1.0, using filesink instead of appsink and saving the frames in BGR format, and later, for the second part of the split pipeline, using filesrc instead of appsrc to send the recorded file. Caps have to be single-quoted between elements for gst-launch, while in the strings for OpenCV they should not be, as in the example above.

When both work, you should be able to see the caps that gst-launch linked between each element (you may have to add the -v option to gst-launch for that, not sure). Adding these caps explicitly between the elements in your OpenCV pipeline may help if it fails.

I'm having trouble recreating what @Honey_Patouceul describes above: taking a single GStreamer pipeline and breaking it up, using filesink/filesrc instead of appsink/appsrc.

Here’s my pipeline that captures from a camera and saves it as a .mkv file:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)2592, height=(int)1944, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! videoconvert ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh264enc ! matroskamux ! filesink location=video1.mkv

I tried to make it as close to the “split” pipelines in OpenCV as possible (converting to BGR and back). It works, records the camera fine – all good. I then split it up into the following two pipelines:

Camera Pipeline

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)2592, height=(int)1944, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! filesink location=video.raw

Encode Pipeline

gst-launch-1.0 filesrc location=video.raw ! 'video/x-raw, format=(string)BGR, width=(int)2592, height=(int)1944, framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh264enc ! matroskamux ! filesink location=video.mkv

The Camera Pipeline runs with no errors, but I see this repeated warning when reading from the file:
WARNING: from element /GstPipeline:pipeline0/GstVideoConvert:videoconvert0: Internal GStreamer error: code not implemented. Please file a bug at http://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer.
Additional debug info:
gstvideofilter.c(293): gst_video_filter_transform (): /GstPipeline:pipeline0/GstVideoConvert:videoconvert0:
invalid video buffer received

Any help would be appreciated.

I'm doing this exercise because I can't seem to record the camera stream correctly from OpenCV using GStreamer in VideoWriter. The following code saves a .mkv that I can play back, but it doesn't look right (screen tearing (?) on every row).

gst_dst = "appsrc ! videoconvert ! omxh264enc ! matroskamux ! filesink location=test.mkv "
writer = cv2.VideoWriter(gst_dst, cv2.CAP_GSTREAMER, 0, 20.0, (1280, 720), True)

Note: I can preview the frames fine, and I’ve confirmed it’s in RGB format with the right shape. I can also encode fine using FFMPEG, but I prefer to use gstreamer if possible.

Possibly the blocksize is not correctly set.
You may add plugin videoparse between filesrc and videoconvert.

gst-inspect-1.0 videoparse

Be aware that raw format can fill your eMMC very quickly, so I would advise recording only on external storage.

Yep, I added the videoparse plugin and it works. Here's my Encode Pipeline for the curious:

gst-launch-1.0 filesrc location=video.raw ! videoparse format=bgr width=2592 height=1944 framerate=30/1 ! videoconvert ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)I420' ! omxh264enc ! matroskamux ! filesink location=video.mkv

Thank you for the help @Honey_Patouceul!

I'd further suggest using the 1080p format at first:

#Record 10s from onboard camera in 1080p 30fps and save into BGR format file (be sure you have more than 2GB available before with 'df -H .'):
gst-launch-1.0 -v nvarguscamerasrc timeout=10 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR' ! queue ! filesink location=video.bgr

#Play from BGR file into your window manager:
gst-launch-1.0 -v filesrc location=video.bgr ! videoparse format=16 width=1920 height=1080 framerate=30/1 ! 'video/x-raw, format=BGR' ! videoconvert ! xvimagesink

#Play from BGR file, encode it into H265 and put into matroska video container file:
gst-launch-1.0 -v filesrc location=video.bgr ! videoparse format=16 width=1920 height=1080 framerate=30/1 ! 'video/x-raw, format=BGR' ! videoconvert ! 'video/x-raw, format=BGRx' ! nvvidconv ! omxh265enc bitrate=80000000 ! h265parse ! matroskamux ! filesink location=video.mkv

[EDIT: For 10s capture to bgr file, be aware of this.]