nvarguscamerasrc OpenCV (Solved)

If the quality is lost in encoding on the sender side, there is nothing (reasonable) you can do to retrieve the lost quality on the receiver side.
So, depending on your available resources for encoding and the available bandwidth between sender and receiver, you may adjust the bitrate and check image quality.
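
For example, something like this could be a starting point (a sketch based on the server pipeline used later in this thread; bitrate is in bit/s, and 8000000 is only an illustrative value to tune against your bandwidth):

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12 ! omxh265enc bitrate=8000000 ! rtph265pay name=pay0 pt=96 config-interval=1"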

But a dramatic difference is observed on the same Xavier device between calls from C++ OpenCV:
one with nvarguscamerasrc - that is somewhat fine,
another with a terminal gstreamer rtsp receiver - that is also somewhat fine,
and the worst case is running the receiver from OpenCV C++ with rtspsrc.
As I understand it, the two last methods use the same transceiver but different parameters in the receiving code; perhaps the issue is with parameters absent from the code.
For example:
At Xavier 1
I run the rtsp gstreamer streaming.
When I run the gstreamer receiving command from the same Xavier terminal - it plays well.
But when I am using the opencv gstreamer rtsp receiving code - the quality degrades dramatically.
Probably I shall approach it in some other way.

You may try to run tegrastats and see if something has become a bottleneck.
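
For example (run it in another terminal while the pipeline is active; saturated CPU cores, or high GR3D (GPU) and EMC (memory bandwidth) figures, would point at the bottleneck):

sudo tegrastats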

If using videoconvert, you may try outputting BGR format directly before appsink, instead of using cvtColor in opencv.
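
A minimal sketch of such a receiving pipeline (assuming the rtsp://127.0.0.1:8554/test stream used in this thread; note that nvvidconv cannot output 3-channel BGR itself, so videoconvert performs the final BGRx to BGR step and appsink then delivers frames OpenCV can use without cvtColor):

VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink");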

You may also try to set caps video/x-raw, format=I420 (or NV12) as output of decodebin; if supported, you may remove videoconvert (although if it has nothing to do, it shouldn't use many resources).

You may also check the caps used at each stage with gst-launch using -v, and specify these caps in the opencv gstreamer pipeline.
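
For example (using the test mountpoint from this thread; with -v, gst-launch prints the negotiated caps of each pad, which can then be copied into the opencv pipeline string):

gst-launch-1.0 -v rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! fakesink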

Shall it be as listed below for the devkit's onboard Omnivision sensor?

cvtColor(frame, bgr, CV_YUV2BGR_I420);

It depends more on the caps before appsink than on the sensor. If you are using nvarguscamerasrc and nvvidconv outputting NV12, then it would be cv::COLOR_YUV2BGR_NV12.

Could you point out how to integrate it into the code?
It seems that an extra library will be required, such as:

#include <opencv2/viz/types.hpp>

and as per the stackoverflow thread, it would need opencv built with

WITH_VTK=ON

as per https://docs.opencv.org/3.4/d4/dba/classcv_1_1viz_1_1Color.html
However, without the extra library it will probably be something like:
cvtColor(frame, bgr, CV_YUV2BGR_NV12);
Thanks!

Try this pipeline:

VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test  latency=30 ! decodebin ! video/x-raw,format=NV12 ! appsink");

and convert with:

cvtColor(frame, bgr, cv::YUV2BGR_NV12);

g++ asks to change cv::YUV2BGR_NV12 -> COLOR_YUV2BGR_NV12
What works is:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink");

  if (!cap.isOpened())
  {
    cout << "Failed to open camera." << endl;
    return -1;
  }

  for (;;)
  {
    Mat frame;
    cap >> frame;
    Mat bgr;
    cvtColor(frame, bgr, COLOR_YUV2BGR_NV12);
    imshow("original", bgr);
    waitKey(1);
  }

  cap.release();
  return 0;
}

It works with a latency of about a blink at 1920x1280,
but when I increase the resolution to the highest, it seems to freeze every second or so.

You may try to increase kernel memory for sockets.
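
A sketch of what that could look like (26214400 bytes is the 25MB figure mentioned in the EDIT below; net.core.rmem_max bounds the receive buffer a UDP socket may request):

sudo sysctl -w net.core.rmem_max=26214400
sudo sysctl -w net.core.rmem_default=26214400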

[EDIT: I’ve just verified now on TX2 R28.2.0 and opencv-3.4.0, MAXN mode and max clocks, and having increased UDP socket memory to 25MB:
Server:

./test-launch "( nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)2592, height=(int)1944, format=(string)I420, framerate=(fraction)30/1 ! omxh265enc bitrate=80000000 ! video/x-h265 ! rtph265pay name=pay0 pt=96 )"

Opencv client:

#include <iostream>

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    const char* gst =  	"rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=(string)video ! "
			"decodebin ! "
			"nvvidconv ! video/x-raw, format=NV12 ! "
			"appsink";

    cv::VideoCapture cap(gst);
    if(!cap.isOpened()) {
	std::cout<<"Failed to open camera."<<std::endl;
	return (-1);
    }
    
    unsigned int width  = cap.get(cv::CAP_PROP_FRAME_WIDTH); 
    unsigned int height = cap.get(cv::CAP_PROP_FRAME_HEIGHT); 
    unsigned int fps    = cap.get(cv::CAP_PROP_FPS);
    unsigned int pixels = width*height;
    std::cout <<" Frame size : "<<width<<" x "<<height<<", "<<pixels<<" Pixels "<<fps<<" FPS"<<std::endl;

    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in, frame_rgb;

    while(1)
    {
    	if (!cap.read(frame_in)) {
		std::cout<<"Capture read error"<<std::endl;
		break;
	}
	else  {
		cv::cvtColor(frame_in, frame_rgb, cv::COLOR_YUV2BGR_NV12);
		cv::imshow("MyCameraPreview",frame_rgb);
		cv::waitKey(1); 
	}	
    }

    cap.release();

    return 0;

}

]

Thank you for your response.
I can observe a situation where the rtsp gstreamer stream got normalized and seems to have a rather high quality image, as can be seen from the screenshot.

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://10.0.0.3:8554/test  ! 'application/x-rtp, media=video' ! decodebin ! videoconvert ! ximagesink

However, when I am recording to a file from gstreamer with the same parameters, the quality seems dramatically reduced.

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=2592, height=1944, format=NV12, framerate=30/1' ! omxh264enc ! 'video/x-h264,stream-format=byte-stream' ! filesink location="test2.h264" -e
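
In case it helps, a sketch of the same recording pipeline with an explicit encoder bitrate (80 Mb/s here mirrors the TX2 example above; the omx encoders default to a much lower bitrate, which may explain the reduced quality):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=2592, height=1944, format=NV12, framerate=30/1' ! omxh264enc bitrate=80000000 ! 'video/x-h264,stream-format=byte-stream' ! filesink location="test2.h264" -e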

Video from the same device seems to have noise or artifacts.

The central bottom camera output comes from the TX2 and has a timestamp.
Is there a way to add a timestamp to the Xavier stream? It could probably help sort out missed frames. However, it would need the timestamp to come from the source, while with the TX2 it is undetermined whether it comes from the source or is placed by the player.
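
For reference, a sketch of burning the timestamp in on the source side with the stock clockoverlay element (timeoverlay would stamp the stream's running time instead; the overlay is placed after nvvidconv, where buffers are in CPU memory):

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12 ! clockoverlay halignment=right valignment=bottom ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
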
To sum it all up:
could the ZED camera probably bring better quality for recording an event than the onboard CSI sensor [Omnivision]?

It turned out that rtsp streaming to the opencv receiver works on the Jetson Nano precisely the same way as it does on the Xavier, and even with a higher resolution compared with the default onboard devkit camera, as the RPi v2 camera supports a higher resolution.

Hi Guys,
Has anyone solved, or does anyone have an idea on, how to incorporate gstreamer rtsp pipeline input into the opencv aruco module?
I have found a ready solution that allows doing so, but it seems a cumbersome approach.
Thanks

What has been managed so far is to grab the first image from the rtsp camera with the aruco_simple example.

./aruco_simple "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink"

The opencv_contrib version seems cumbersome to implement: trying to get the Xavier camera calibration

./example_aruco_calibrate_camera ~/calib.txt -w=5 -h=7 -l=100 -s=10 -d=10 --ci:2

(example_aruco_calibrate_camera:10617): GStreamer-CRITICAL **: 06:41:15.663: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

Another example that works:

./aruco_test "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink"

https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/samples/calibrate_camera.cpp

aruco_simple.cpp (5.58 KB)
aruco_test.cpp (17.3 KB)

I gave up on passing the network rtsp stream to aruco.
It just won't work with my level of skills at the moment.
However, as aruco works with a local camera, probably the code below will work to mount the network stream as a local camera.

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:

/home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Exception :Could not open video

And when I try to read from video2, gstreamer fails:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=RGB, width=640, height=480, framerate=30/1' ! queue ! videoconvert ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.

However, when I start it with the approach below, it works:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
Gtk-Message: 15:11:53.522: Failed to load module "canberra-gtk-module"

(in:7950): Gtk-CRITICAL **: 15:11:53.555: IA__gtk_window_resize: assertion 'width > 0' failed
VIDEOIO ERROR: V4L2: getting property #1 is not supported
Frame:-1
Time detection=55.546 milliseconds nmarkers=0 images resolution=[640 x 480]
VIDEOIO ERROR: V4L2: getting property #1 is not supported
Frame:-1

What might be the issue with the former approach?
Is it because of the mess with formats NV12 - RGB - I420?
What would the correct line for loopbacking a network stream to /dev/video2 look like, and how can I check that it plays with gstreamer?
Thanks
Update: what I have managed to run is:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=I420, width=640, height=480, framerate=30/1' ! queue ! videoconvert ! xvimagesink

On the other hand what works is

./aruco_test live:2

with a local v4l2loopback device generated with the sequence below:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

but it fails when executed with the stream generated by the sequence below:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV

It seems to require the format to be passed as RGB, and doesn't seem to work with rtsp where NV12 or I420 are used. However, it works with the former example where /dev/video2 is created with RGB. Probably a wrong format conversion somewhere is the cause.

You may try to add caps before v4l2sink, to ensure BGR format.

identity drop-allocation=1 is for a buffer problem with v4l2sink (in older gstreamer releases, a common but sometimes failing workaround was using tee before v4l2sink).

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=BGR,framerate=30/1 ! v4l2sink device=/dev/video2

Thank you for your response!
Do the following two commands work together on your side if you execute them?

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1" 
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! v4l2sink device=/dev/video2

it seems to throw:

ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH265Dec-omxh265dec:omxh265dec-omxh265dec0: Internal data stream error.

Steps to reproduce the error:
I. setting up the aruco sample:

wget https://sourceforge.net/projects/aruco/files/3.1.0/aruco-3.1.0.zip/download
unzip download
cd aruco-3.1.0
mkdir build
cd build
cmake ..
make -j4
cd utils
./aruco_test live:2

II. setting up a loopback that will mount a network rtsp gstreamer stream as a local camera /dev/video2
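
(One prerequisite not shown here: the v4l2loopback kernel module has to be loaded first to create the device; video_nr=2 below is only to match the /dev/video2 used in this thread.)

sudo modprobe v4l2loopback video_nr=2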

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1" 
stream ready at rtsp://127.0.0.1:8554/test

And in the lines below there seems to be some issue: the first somehow starts and allows a default generic application to read from it, but won't let the aruco code read from it, and the latter won't start, throwing an error:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! v4l2sink device=/dev/video2
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH265Dec-omxh265dec:omxh265dec-omxh265dec0: Internal data stream error.

Otherwise,

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
in that case, some gstreamer-related exception is thrown when I start the aruco_test code

III.
Additionally, there is evidence that the aruco code processes a v4l2loopback-mounted camera from CSI [which is different from rtsp, as it uses nvarguscamerasrc instead]

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

at least it opens the camera in this case and displays the video and edges:

./aruco_test live:2
Opening camera index 2
...

What could be the cause of simple_opencv in cpp and in python working on the Xaviers, but failing when I build opencv of the same version with the same parameters on the host PC and run simple_opencv (or the python equivalent) against a network rtsp stream, saying something like:

python simple_python.py 
Unable to query number of channels
capture filed
./simple_opencv 
VIDEOIO ERROR: V4L: device uridecodebin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! videoconvert ! videoscale ! appsink: Unable to query number of channels
Not good, open camera failed

However, a line of the type below works and plays the stream:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue ! decodebin ! videoconvert ! xvimagesink

UPD:
resolved with
upgrade to opencv 4.1
UPD: It seems that simple_sample will need to be rebuilt for opencv 4.1:

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --libs opencv)
/tmp/ccSutWH9.o: In function `main':
simple_opencv.cpp:(.text+0x55): undefined reference to `cv::VideoCapture::VideoCapture(cv::String const&)'
simple_opencv.cpp:(.text+0x1b8): undefined reference to `cv::imshow(cv::String const&, cv::_InputArray const&)'
/tmp/ccSutWH9.o: In function `cv::String::String(char const*)':
simple_opencv.cpp:(.text._ZN2cv6StringC2EPKc[_ZN2cv6StringC5EPKc]+0x54): undefined reference to `cv::String::allocate(unsigned long)'
/tmp/ccSutWH9.o: In function `cv::String::~String()':
simple_opencv.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()'
/tmp/ccSutWH9.o: In function `cv::String::operator=(cv::String const&)':
simple_opencv.cpp:(.text._ZN2cv6StringaSERKS0_[_ZN2cv6StringaSERKS0_]+0x28): undefined reference to `cv::String::deallocate()'
collect2: error: ld returned 1 exit status
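
These undefined references usually indicate a link problem rather than a code problem: the linker is resolving against an older OpenCV while the headers come from 4.1. OpenCV 4.x names its pkg-config file opencv4 instead of opencv (assuming the 4.1 build generated it), so the build line would become:

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv4)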