From video-viewer to opencv

On my Xavier AGX (JP 32.5.1), I’ve got a long-running process that works with OpenCV in C++. This process always operates on the latest raw image frame. Now I need to switch from analyzing video files to IP cameras as fast as possible. I learned I had to use GStreamer to accelerate decoding of the camera’s IP input. I started playing with video-viewer. I’ve made it work with help, but now I’d like to take those parameters and insert them into cv2.VideoCapture.

I’ve described this error in the continuation of this issue:

CONTENT:
Hello
I am using another camera from our collection, and now it is getting frames, but with bad quality (blurry zones / huge pixels). I’ve tried the default encoding and MPEG-4 too.

The other issue is that if I pass gst-launch-1.0 the same parameters you use, the video is recorded but there is no log line for each received frame:

Original video-viewer log:

(video-viewer:2624): GStreamer-CRITICAL **: 13:14:57.868: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed

(video-viewer:2624): GStreamer-CRITICAL **: 13:14:57.868: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed

(video-viewer:2624): GStreamer-CRITICAL **: 13:14:57.868: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed

(video-viewer:2624): GStreamer-CRITICAL **: 13:14:57.868: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(video-viewer:2624): GStreamer-CRITICAL **: 13:14:57.868: gst_structure_get_string: assertion 'structure != NULL' failed

(video-viewer:2624): GStreamer-CRITICAL **: 13:14:57.868: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Allocating new output: 3072x1728 (x 15), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 3072, nFrameHeight = 1728 
[gstreamer] gstDecoder -- onPreroll()
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> manager
[gstreamer] gstreamer changed state from READY to PAUSED ==> manager
[gstreamer] gstreamer changed state from NULL to READY ==> rtpssrcdemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpssrcdemux1
[gstreamer] gstreamer changed state from NULL to READY ==> rtpsession1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpsession1
[gstreamer] gstreamer changed state from NULL to READY ==> funnel2
[gstreamer] gstreamer changed state from READY to PAUSED ==> funnel2
[gstreamer] gstreamer changed state from NULL to READY ==> funnel3
[gstreamer] gstreamer changed state from READY to PAUSED ==> funnel3
[gstreamer] gstreamer changed state from NULL to READY ==> rtpstorage1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpstorage1
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> udpsink2
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsink2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsink2
[gstreamer] gstreamer changed state from NULL to READY ==> fakesrc1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> fakesrc1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> fakesrc1
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpssrcdemux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpstorage1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpsession1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> funnel2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> funnel3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> manager
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc2
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc3
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from NULL to READY ==> rtpptdemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpptdemux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpptdemux1
[gstreamer] gstreamer changed state from NULL to READY ==> rtpjitterbuffer1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpjitterbuffer1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpjitterbuffer1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ \(Main\ Profile\)";
[gstreamer] gstBufferManager -- map buffer size was less than max size (1008 vs 7962624)
[gstreamer] gstBufferManager recieve caps:  video/x-raw(memory:NVMM), format=(string)NV12, width=(int)3072, height=(int)1728, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)0/1
[gstreamer] gstBufferManager -- recieved first frame, codec=h264 format=nv12 width=3072 height=1728 size=7962624
[gstreamer] gstBufferManager -- recieved 3072x1728 frame (7962624 bytes)
[gstreamer] gstBufferManager -- recieved NVMM memory
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstBufferManager -- recieved 3072x1728 frame (7962624 bytes)
[gstreamer] gstBufferManager -- recieved 3072x1728 frame (7962624 bytes)
nvbuf_utils: dmabuf_fd 1071 mapped entry NOT found
nvbuf_utils: NvReleaseFd Failed... Exiting...
[gstreamer] gstBufferManager -- recieved 3072x1728 frame (7962624 bytes)
nvbuf_utils: dmabuf_fd 1073 mapped entry NOT found


....

video-viewer:  captured 106 frames (3072 x 1728)
[gstreamer] gstEncoder -- appsrc requesting data (4096 bytes)
^Creceived SIGINT
[gstreamer] gstBufferManager -- recieved 3072x1728 frame (7962624 bytes)
video-viewer:  captured 107 frames (3072 x 1728)
video-viewer:  shutting down...
[gstreamer] gstDecoder -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstDecoder -- onPreroll()
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> h264parse1
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> rtph264depay1
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> queue0
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from PLAYING to PAUSED ==> pipeline0
[gstreamer] gstEncoder -- appsrc requesting data (4096 bytes)
[gstreamer] gstreamer message progress ==> rtspsrc0
[gstreamer] gstDecoder -- pipeline stopped
[gstreamer] gstEncoder -- shutting down pipeline, sending EOS
[gstreamer] gstEncoder -- appsrc requesting data (4096 bytes)
[gstreamer] gstEncoder -- transitioning pipeline to GST_STATE_NULL
[gstreamer] gstEncoder -- pipeline stopped
^Creceived SIGINT

OK, that is correct. Now I take these parameters and insert them into gst-launch-1.0:

root@263a942a2f44:~/repos/jetson-utils/build/aarch64/bin# gst-launch-1.0 -e rtspsrc location=rtsp://X:X@192.168.121.214 latency=2000 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12" ! omxh264enc bitrate=4000000 ! video/x-h264 !  h264parse ! qtmux ! filesink location=lolo.mp4
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://X:X@192.168.121.214
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request

(gst-launch-1.0:2667): GStreamer-CRITICAL **: 13:22:09.925: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:2667): GStreamer-CRITICAL **: 13:22:09.925: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:2667): GStreamer-CRITICAL **: 13:22:09.925: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:2667): GStreamer-CRITICAL **: 13:22:09.925: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:2667): GStreamer-CRITICAL **: 13:22:09.926: gst_structure_get_string: assertion 'structure != NULL' failed

(gst-launch-1.0:2667): GStreamer-CRITICAL **: 13:22:09.926: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Allocating new output: 3072x1728 (x 15), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 3072, nFrameHeight = 1728 
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 40 
^Chandling interrupt.
Interrupt: Stopping pipeline ...
EOS on shutdown enabled -- Forcing EOS on the pipeline
Waiting for EOS...
Got EOS from element "pipeline0".
EOS received - stopping pipeline...
Execution ended after 0:00:21.485239917
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

As you can see, I can’t tell how frames are being received, but the video is recorded as expected. However, I then want to process frames in OpenCV as they are received… so I use:

Camera input: rtspsrc location=rtsp://X:X@192.168.121.214 latency=2000 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12" ! appsink drop=true sync=false
Thread: rtspsrc location=rtsp://X:X@192.168.121.214 latency=2000 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12" ! appsink drop=true sync=false started
 data->gstreamer: 1
stream_mode: 1800
Listening to tcp://0.0.0.0:8887 waiting for COMPSs ack to start

(edge:2591): GStreamer-CRITICAL **: 13:07:41.567: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed

(edge:2591): GLib-GObject-WARNING **: 13:07:41.573: invalid cast from 'GstAppSink' to 'GstBin'

(edge:2591): GStreamer-CRITICAL **: 13:07:41.573: gst_bin_iterate_elements: assertion 'GST_IS_BIN (bin)' failed

(edge:2591): GStreamer-CRITICAL **: 13:07:41.573: gst_iterator_next: assertion 'it != NULL' failed

(edge:2591): GStreamer-CRITICAL **: 13:07:41.573: gst_iterator_free: assertion 'it != NULL' failed
[ WARN:0@9.260] global /root/repos/opencv/modules/videoio/src/cap_gstreamer.cpp (1226) open OpenCV | GStreamer warning: cannot find appsink in manual pipeline
[ WARN:0@9.260] global /root/repos/opencv/modules/videoio/src/cap_gstreamer.cpp (862) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

And that is all I get, nothing else at all; I can’t use Ctrl+C and I have to kill the process from top.

The same happens if I pass cv2.VideoCapture the parameters you use in video-viewer.

This is the code where OpenCV actually stops, as you can check in the log:

 std::cout<<"Thread: "<<data->input<< " started" <<std::endl;
 auto stream_mode = data->gstreamer ? cv::CAP_GSTREAMER : cv::CAP_FFMPEG;
 std::cout<<" data->gstreamer: "<<  data->gstreamer <<std::endl;
 std::cout<<"stream_mode: "<<stream_mode <<std::endl;
 cv::VideoCapture cap(data->input, stream_mode);
 std::cout<<"openCV RIGHT!: "<<stream_mode <<std::endl;

Isn’t the same pipeline valid for appsink? I mean, video-viewer used this:

[gstreamer] rtspsrc location=rtsp://elasticva:F0r-V1d30@192.168.121.181 latency=2000 ! queue ! rtph264depay ! h264parse ! omxh264dec ! video/x-raw(memory:NVMM) ! appsink name=mysink
[video]  created gstDecoder from rtsp:/XXXX...

Hi @masip85, you can see my follow-up to your GitHub issue here: https://github.com/dusty-nv/jetson-utils/issues/118#issuecomment-1086228365

Perhaps others from the community here may be able to better help with OpenCV.

Another option you could consider is to use jetson.utils.videoSource to capture the video (since that seems to be working OK for you), and then jetson.utils.cudaToNumpy() to convert it to a NumPy array so that cv2 can use it:

Well, in my case I don’t use Python; I use OpenCV in C++. You’re saying this is an OpenCV problem, then? In that case: in this forum I’ve found lots of different options to address this with Jetson-specific solutions, but I think most of the examples, libraries, or instructions are outdated.

And this is a typical task, so I’d like an updated way of doing it, just as one was given years ago.

On Fri, 1 Apr 2022 at 21:00, dusty_nv via NVIDIA Developer Forums <nvidia@discoursemail.com> wrote:

With OpenCV C++ it should be even easier, because you can directly use the pointer to construct a cv::Mat object like shown here:
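
A minimal sketch of that idea, assuming the jetson-utils C++ API (videoSource::Create(), the templated Capture() call, and IsStreaming()); the RGB-to-BGR swap is needed because videoSource delivers RGB while OpenCV expects BGR:

#include "videoSource.h"          // jetson-utils
#include <opencv2/opencv.hpp>
#include <cuda_runtime.h>

int main()
{
    // Open the RTSP stream with the same URI that video-viewer used.
    videoSource* input = videoSource::Create("rtsp://X:X@192.168.121.214");
    if (!input)
        return 1;

    while (input->IsStreaming())
    {
        // Capture the next frame as uchar3 (RGB8); on Jetson the buffer is in
        // mapped (zero-copy) memory, so the CPU can read it directly.
        uchar3* image = NULL;
        if (!input->Capture(&image, 1000))   // 1000 ms timeout
            continue;

        // Make sure the CUDA colorspace conversion has finished before the CPU reads it.
        cudaDeviceSynchronize();

        // Wrap the pointer in a cv::Mat without copying, then swap RGB -> BGR.
        cv::Mat rgb(input->GetHeight(), input->GetWidth(), CV_8UC3, image);
        cv::Mat bgr;
        cv::cvtColor(rgb, bgr, cv::COLOR_RGB2BGR);

        // ... run the OpenCV processing on bgr ...
    }

    delete input;
    return 0;
}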

OK, so in that link I convert to the OpenCV format after receiving the NV12 format, right?

Otherwise, I can use nvvidconv (inside the GStreamer pipeline instead of in my code) to convert to BGRx, as explained here:

And after that, also in the pipeline, convert to OpenCV’s BGR format:

rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=true sync=false

Is this an effective way to do it? What would be the fastest way to obtain frames?
It’s also hard to understand how the latency option affects me with a live source, or which decoder is faster: nvv4l2decoder or omxh264dec?

Hi,
We have deprecated the OMX plugins, so please use nvv4l2decoder.

Since OpenCV processes frame data in BGR format, and BGR is not supported by the hardware components in the Jetson chip, we need to use the CPU to copy BGRx from the NVMM buffer to a CPU buffer and then convert it to BGR. This takes significant CPU usage. Your pipeline is optimal for getting frame data in BGR.
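
Following that advice, here is a sketch of the capture side with nvv4l2decoder swapped in for the deprecated omxh264dec (the URI is the placeholder one from earlier in the thread). One common gotcha when moving a pipeline from the shell into code: the quotation marks around the caps, which the shell would normally strip, must be dropped inside the cv::VideoCapture string:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // nvv4l2decoder replaces the deprecated omxh264dec; nvvidconv moves the
    // frame out of NVMM into system memory as BGRx, and videoconvert does the
    // final BGRx -> BGR step on the CPU.
    std::string pipeline =
        "rtspsrc location=rtsp://X:X@192.168.121.214 latency=2000 ! "
        "queue ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink drop=true sync=false";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
    {
        std::cerr << "failed to open pipeline" << std::endl;
        return 1;
    }

    cv::Mat frame;  // 8-bit BGR in CPU memory, ready for OpenCV
    while (cap.read(frame))
    {
        // ... process the frame ...
    }
    return 0;
}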

For further enhancement you may check if you can use GpuMat and apply CUDA filters. Here is a patch for reference:
LibArgus EGLStream to nvivafilter - #14 by DaneLLL
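
As an illustration of that suggestion, a minimal sketch assuming OpenCV was built with CUDA support (the opencv_cudaimgproc and opencv_cudafilters modules); the Gaussian blur is just a stand-in for whatever filtering the application needs:

#include <opencv2/opencv.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudafilters.hpp>

int main()
{
    // Stand-in for a frame captured from the pipeline above.
    cv::Mat frame = cv::imread("frame.png");

    // Upload to the GPU, convert to grayscale and blur there,
    // and download only the final result.
    cv::cuda::GpuMat d_bgr, d_gray, d_blur;
    d_bgr.upload(frame);
    cv::cuda::cvtColor(d_bgr, d_gray, cv::COLOR_BGR2GRAY);

    cv::Ptr<cv::cuda::Filter> gauss =
        cv::cuda::createGaussianFilter(CV_8UC1, CV_8UC1, cv::Size(5, 5), 1.5);
    gauss->apply(d_gray, d_blur);

    cv::Mat result;
    d_blur.download(result);
    return 0;
}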

We also have VPI functions:
VPI - Vision Programming Interface: Main Page
You can check whether the OpenCV functions you need are supported in VPI. If so, you may consider using VPI instead of OpenCV.

