hello
I need to capture a UDP H.264 stream (1920x1080, 30 fps) and display it. I decode the stream with the hardware accelerator in two ways.
The first way is to use accelerated GStreamer directly, and it works very well; CPU utilization is about 50%.
gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! "application/x-rtp, media=video, encoding-name=H264" ! rtph264depay ! queue ! h264parse ! omxh264dec enable-low-outbuffer=1 ! nv3dsink -e
The second way is to use OpenCV (the Nano default 3.3 and also 4.0.0) VideoCapture reading from an accelerated GStreamer appsink. CPU utilization goes up to 150% and the picture breaks up.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    const char* gst = "gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! application/x-rtp, media=video, encoding-name=H264 ! "
                      "rtph264depay ! queue ! h264parse ! omxh264dec enable-low-outbuffer=1 ! videocrop bottom=8 ! "
                      "videoconvert ! appsink";
    cv::VideoCapture cap(gst, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return -1;
    }
    unsigned int width  = cap.get(cv::CAP_PROP_FRAME_WIDTH);
    unsigned int height = cap.get(cv::CAP_PROP_FRAME_HEIGHT);
    unsigned int fps    = cap.get(cv::CAP_PROP_FPS);
    unsigned int pixels = width * height;
    std::cout << " Frame size : " << width << " x " << height << ", " << pixels << " Pixels " << fps << " FPS" << std::endl;
    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in;
    while (true)
    {
        if (!cap.read(frame_in)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }
        cv::imshow("MyCameraPreview", frame_in);
        cv::waitKey(33); // let imshow draw
    }
    cap.release();
    return 0;
}
Hi,
Using OpenCV VideoCapture with a GStreamer appsink means fetching CPU buffers, so copying the NVMM buffers to CPU buffers takes CPU load.
You may use the nvivafilter plugin instead. Please refer to
https://devtalk.nvidia.com/default/topic/1022543/jetson-tx2/gstreamer-nvmm-lt-gt-opencv-gpumat/post/5311027/#5311027
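For reference, a rough sketch of the idea behind that link: nvivafilter passes each decoded frame to a custom library as an EGLImage that is still in NVMM, and it can be mapped into a cv::cuda::GpuMat without copying to a CPU buffer. The callback name below follows the nvsample_cudaprocess sample shipped with L4T; the exact entry points and registration should be checked against the sample in your L4T release.
// Sketch of the GPU callback used by an nvivafilter customer library
// (modeled on the L4T nvsample_cudaprocess sample; check that sample
// for the real registration struct and entry-point names).
#include <EGL/egl.h>
#include <EGL/eglext.h>            // EGLImageKHR
#include <cuda.h>
#include <cudaEGL.h>               // CUDA <-> EGL interop on Jetson
#include <opencv2/core/cuda.hpp>

// Called for every frame; 'image' wraps the decoded RGBA surface in NVMM.
void gpu_process(EGLImageKHR image, void ** /*usrptr*/)
{
    CUgraphicsResource resource = nullptr;
    CUeglFrame eglFrame;

    // Map the EGLImage into CUDA without any CPU copy.
    if (cuGraphicsEGLRegisterImage(&resource, image,
                                   CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE) != CUDA_SUCCESS)
        return;
    if (cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0) == CUDA_SUCCESS)
    {
        // Wrap the pitched RGBA plane in a GpuMat header (still no copy).
        cv::cuda::GpuMat frame(eglFrame.height, eglFrame.width, CV_8UC4,
                               eglFrame.frame.pPitch[0], eglFrame.pitch);
        // ... run cv::cuda::* processing on 'frame' here ...
        cuCtxSynchronize();        // finish GPU work before unmapping
    }
    cuGraphicsUnregisterResource(resource);
}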
Hi DaneLLL
Thank you for the reply.
Do the nvivafilter plugin and nvoverlaysink work with VideoCapture?
I have tried nvivafilter → nvoverlaysink and nvivafilter → appsink; neither works.
const char* gst = "gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! application/x-rtp, media=video, encoding-name=H264 !"
"rtph264depay ! queue ! h264parse ! omxh264dec enable-low-outbuffer=1 !"
"nvivafilter cuda-process=true customer-lib-name=\"libnvsample_cudaprocess.so\" !"
"'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvoverlaysink
(Main:5618): GStreamer-CRITICAL **: 14:25:50.304: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed
(Main:5618): GStreamer-CRITICAL **: 14:25:50.304: gst_element_link_pads_filtered: assertion ‘GST_IS_BIN (parent)’ failed
(Main:5618): GLib-GObject-WARNING **: 14:25:50.305: invalid cast from ‘GstNvOverlaySink-nvoverlaysink’ to ‘GstBin’
(Main:5618): GStreamer-CRITICAL **: 14:25:50.305: gst_bin_iterate_elements: assertion ‘GST_IS_BIN (bin)’ failed
(Main:5618): GStreamer-CRITICAL **: 14:25:50.305: gst_iterator_next: assertion ‘it != NULL’ failed
(Main:5618): GStreamer-CRITICAL **: 14:25:50.305: gst_iterator_free: assertion ‘it != NULL’ failed
(Main:5618): GStreamer-CRITICAL **: 14:25:50.305: gst_element_get_state: assertion ‘GST_IS_ELEMENT (element)’ failed
Failed to open camera.
const char* gst = "gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! application/x-rtp, media=video, encoding-name=H264 !"
"rtph264depay ! queue ! h264parse ! omxh264dec enable-low-outbuffer=1 !"
"nvivafilter cuda-process=true customer-lib-name=\"libnvsample_cudaprocess.so\" !"
"'video/x-raw(memory:NVMM), format=(string)RGBA' ! appsink"
(Main:8932): GStreamer-CRITICAL **: 15:08:18.571: gst_element_make_from_uri: assertion ‘gst_uri_is_valid (uri)’ failed
(Main:8932): GStreamer-CRITICAL **: 15:08:18.572: gst_element_link_pads_filtered: assertion ‘GST_IS_BIN (parent)’ failed
(Main:8932): GLib-GObject-WARNING **: 15:08:18.573: invalid cast from ‘GstAppSink’ to ‘GstBin’
(Main:8932): GStreamer-CRITICAL **: 15:08:18.573: gst_bin_iterate_elements: assertion ‘GST_IS_BIN (bin)’ failed
(Main:8932): GStreamer-CRITICAL **: 15:08:18.573: gst_iterator_next: assertion ‘it != NULL’ failed
(Main:8932): GStreamer-CRITICAL **: 15:08:18.573: gst_iterator_free: assertion ‘it != NULL’ failed
(Main:8932): GStreamer-CRITICAL **: 15:08:18.573: gst_element_get_state: assertion ‘GST_IS_ELEMENT (element)’ failed
Do you mean I should give up on OpenCV?
regards
Hi,
No. If you must use VideoCapture, you have to use appsink and accept the higher CPU load. You can run ‘sudo jetson_clocks’ to run at maximum CPU clocks and ‘sudo tegrastats’ to check runtime system status.
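As an aside, the copy and conversion cost often comes down if the colour conversion is done by the hardware converter instead of purely in software: a common pattern on Jetson is to let nvvidconv output BGRx into CPU memory and have videoconvert only strip the alpha channel. A minimal sketch, assuming OpenCV is built with GStreamer support and reusing the same multicast address and port as above:
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Decode on the hardware decoder, convert NVMM NV12 -> BGRx on the
    // hardware converter (nvvidconv), then only BGRx -> BGR in software.
    const char* gst =
        "udpsrc multicast-group=224.1.1.5 port=40010 ! "
        "application/x-rtp, media=video, encoding-name=H264 ! "
        "rtph264depay ! queue ! h264parse ! omxh264dec ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=true";

    cv::VideoCapture cap(gst, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open stream." << std::endl;
        return -1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("preview", frame);
        if (cv::waitKey(1) == 27)   // ESC to quit
            break;
    }
    cap.release();
    return 0;
}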
Hi DaneLLL
Thank you.
I need to render multiple pictures from different RTP streams, and I am trying to use the nvoverlaysink overlay properties to lay out the display.
I get the iv_renderer.overlay.yuv420 error below.
Is the overlay setting wrong?
gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! "application/x-rtp, media=video, encoding-name=H264" ! rtph264depay ! queue ! h264parse ! omxh264dec enable-low-outbuffer=1 ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvoverlaysink overlay-x=0 overlay-y=0 overlay-w=1920 overlay-h=1080 overlay=2 -e &
gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! "application/x-rtp, media=video, encoding-name=H264" ! rtph264depay ! queue ! h264parse ! omxh264dec enable-low-outbuffer=1 ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvoverlaysink overlay-x=0 overlay-y=1080 overlay-w=1920 overlay-h=1080 overlay=2 -e
NvxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552
regards
Please check the [VIDEO PLAYBACK WITH GSTREAMER-1.0] section in
https://developer.nvidia.com/embedded/dlc/l4t-accelerated-gstreamer-guide-32-1
Also, this topic is for discussing OpenCV. For clarity, please make a new post for different issues next time.
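For what it's worth, one way to sketch the two-stream layout without nvivafilter is to send the decoder's NVMM output straight to nvoverlaysink and give each pipeline its own overlay plane. This assumes the display exposes at least two overlays, and the second port number here is only a placeholder for the second stream:
gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40010 ! "application/x-rtp, media=video, encoding-name=H264" ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvoverlaysink overlay=1 overlay-x=0 overlay-y=0 overlay-w=1920 overlay-h=1080 -e &
gst-launch-1.0 udpsrc multicast-group=224.1.1.5 port=40011 ! "application/x-rtp, media=video, encoding-name=H264" ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvoverlaysink overlay=2 overlay-x=0 overlay-y=1080 overlay-w=1920 overlay-h=1080 -e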