nvarguscamerasrc OpenCV (Solved)

Try this pipeline:

VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test  latency=30 ! decodebin ! video/x-raw,format=NV12 ! appsink");

and convert with:

cvtColor(frame, bgr, cv::YUV2BGR_NV12);

g++ asks to change cv::YUV2BGR_NV12 to cv::COLOR_YUV2BGR_NV12.
What works is:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink");

 if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for(;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, COLOR_YUV2BGR_NV12);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
}

It works with barely a blink of latency at 1920x1280, but when I increase the resolution to the maximum, it seems to freeze every second or so.

You may try to increase kernel socket buffer memory.
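As a rough sanity check on why bigger socket buffers help (the 80 Mbps figure matches the encoder setting used in the verified setup below; the default buffer size is an assumption, check `sysctl net.core.rmem_default` on your board):

```python
# Rough arithmetic motivating larger UDP receive buffers for high-bitrate RTP.
# 80 Mbps matches the omxh265enc bitrate used later in this thread; the
# ~208 KiB default receive buffer is an assumption (verify on your kernel).
bitrate_bps = 80_000_000           # encoder bitrate (bits per second)
fps = 30
bytes_per_frame = bitrate_bps / fps / 8
default_rmem = 208 * 1024          # assumed default socket receive buffer (bytes)
suggested_rmem = 25 * 1024 * 1024  # 25 MB, as used in the verified setup below

print(f"average encoded frame: {bytes_per_frame / 1024:.0f} KiB")
print(f"frames fitting in default buffer: {default_rmem / bytes_per_frame:.2f}")
print(f"frames fitting in 25 MB buffer:   {suggested_rmem / bytes_per_frame:.0f}")
```

A single average frame already exceeds the assumed default buffer, so a burst of packets can be dropped before the application reads them.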

[EDIT: I’ve just verified now on TX2 R28.2.0 and opencv-3.4.0, MAXN mode and max clocks, and having increased UDP socket memory to 25MB:
Server:

./test-launch "( nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)2592, height=(int)1944, format=(string)I420, framerate=(fraction)30/1 ! omxh265enc bitrate=80000000 ! video/x-h265 ! rtph265pay name=pay0 pt=96 )"

Opencv client:

#include <iostream>

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    const char* gst =  	"rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=(string)video ! "
			"decodebin ! "
			"nvvidconv ! video/x-raw, format=NV12 ! "
			"appsink";

    cv::VideoCapture cap(gst);
    if(!cap.isOpened()) {
        std::cout<<"Failed to open camera."<<std::endl;
        return (-1);
    }
    
    unsigned int width  = cap.get(cv::CAP_PROP_FRAME_WIDTH); 
    unsigned int height = cap.get(cv::CAP_PROP_FRAME_HEIGHT); 
    unsigned int fps    = cap.get(cv::CAP_PROP_FPS);
    unsigned int pixels = width*height;
    std::cout <<" Frame size : "<<width<<" x "<<height<<", "<<pixels<<" Pixels "<<fps<<" FPS"<<std::endl;

    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in, frame_rgb;

    while(1)
    {
        if (!cap.read(frame_in)) {
            std::cout<<"Capture read error"<<std::endl;
            break;
        }
        else {
            cv::cvtColor(frame_in, frame_rgb, cv::COLOR_YUV2BGR_NV12);
            cv::imshow("MyCameraPreview",frame_rgb);
            cv::waitKey(1);
        }
    }

    cap.release();

    return 0;

}

]
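For reference, without a final videoconvert the appsink above hands OpenCV a single-channel buffer of height*3/2 rows: the full-resolution Y plane followed by a half-height plane of interleaved U/V bytes, which is why COLOR_YUV2BGR_NV12 is needed. A minimal numpy sketch of that layout (synthetic data, no camera required):

```python
import numpy as np

# NV12 layout: a W x H luma (Y) plane, then W x H/2 bytes of interleaved
# U/V samples. This mirrors what appsink hands cv::VideoCapture as a
# (H*3/2) x W single-channel Mat.
width, height = 1920, 1080
nv12 = np.zeros((height * 3 // 2, width), dtype=np.uint8)
nv12[:height] = 128   # Y plane: mid gray
nv12[height:] = 128   # UV plane: neutral chroma

y_plane = nv12[:height]                                        # 1080 x 1920
uv_plane = nv12[height:].reshape(height // 2, width // 2, 2)   # (U,V) pairs

print(y_plane.shape, uv_plane.shape)
print("bytes per frame:", nv12.nbytes)  # 1.5 bytes per pixel
```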

Thank you for your response.
I can observe that the RTSP GStreamer stream has normalized and seems to produce a rather high-quality image, as can be seen from the screenshot.

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://10.0.0.3:8554/test  ! 'application/x-rtp, media=video' ! decodebin ! videoconvert ! ximagesink

However, when I record to a file from GStreamer with the same parameters, the quality seems dramatically reduced.

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=2592, height=1944, format=NV12, framerate=30/1' ! omxh264enc ! 'video/x-h264,stream-format=byte-stream' ! filesink location="test2.h264" -e

Video from the same device seems to have noise or artifacts.
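One plausible cause of the quality drop (an assumption; check `gst-inspect-1.0 omxh264enc` for the actual default on your release) is that the recording pipeline leaves the encoder at its default bitrate, while the RTSP server above sets bitrate=80000000 explicitly. At 2592x1944@30 a low default leaves very few bits per pixel:

```python
# Bits-per-pixel comparison between an assumed ~4 Mbps encoder default and
# the explicit 80 Mbps used in the RTSP server command earlier in the thread.
width, height, fps = 2592, 1944, 30
pixels_per_second = width * height * fps

assumed_default_bps = 4_000_000   # assumption: typical omxh264enc default
explicit_bps = 80_000_000         # bitrate=80000000 from the server pipeline

print(f"default : {assumed_default_bps / pixels_per_second:.3f} bits/pixel")
print(f"explicit: {explicit_bps / pixels_per_second:.3f} bits/pixel")
```

If that is the cause, adding an explicit bitrate=80000000 to omxh264enc in the filesink pipeline should restore the quality.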




The central bottom camera output comes from the TX2 and has a timestamp.
Is there a way to add a timestamp to the Xavier stream? It could probably help sort out missed frames. However, the timestamp would need to come from the source, while with the TX2 it is undetermined whether it comes from the source or is added by the player.
To sum it all up:
could a ZED camera perhaps bring better quality for recording an event than the onboard CSI sensor [Omnivision]?

It turned out that RTSP streaming to an OpenCV receiver works on the Jetson Nano exactly the same way as it does on the Xavier, and even at a higher resolution than the default onboard devkit camera, since the RPi v2 camera supports higher resolutions.

Hi Guys,
Has anyone solved, or does anyone have an idea of, how to feed a GStreamer RTSP pipeline into the OpenCV ArUco module?
I have found a ready-made solution that allows this, but it seems a cumbersome approach.
Thanks

What has been managed so far is taking the first image from the RTSP camera with the aruco_simple example.

./aruco_simple "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink"

The opencv_contrib version seems cumbersome to implement; trying to get Xavier camera calibration:

./example_aruco_calibrate_camera ~/calib.txt -w=5 -h=7 -l=100 -s=10 -d=10 --ci:2

(example_aruco_calibrate_camera:10617): GStreamer-CRITICAL **: 06:41:15.663: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

Another example that works:

./aruco_test "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink"

https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/samples/calibrate_camera.cpp


I gave up passing the network RTSP stream to ArUco.
It just won’t work with my level of skill at the moment.
However, since ArUco works with a local camera, the code below will probably work to mount the network stream as a local camera.

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:

/home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Exception :Could not open video

And when I try to read from video2, GStreamer fails:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=RGB, width=640, height=480, framerate=30/1' ! queue ! videoconvert ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.

However, when I start it with the approach below, it works:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
Gtk-Message: 15:11:53.522: Failed to load module "canberra-gtk-module"

(in:7950): Gtk-CRITICAL **: 15:11:53.555: IA__gtk_window_resize: assertion 'width > 0' failed
VIDEOIO ERROR: V4L2: getting property #1 is not supported
Frame:-1
Time detection=55.546 milliseconds nmarkers=0 images resolution=[640 x 480]
VIDEOIO ERROR: V4L2: getting property #1 is not supported
Frame:-1

What might be the issue with the former approach?
Is it because of the mess with formats (NV12 - RGB - I420)?
What would the correct line for loopbacking a network stream to /dev/video2 look like, and how can I check that it plays with GStreamer?
Thanks
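On the format question: NV12 and I420 are both 12-bits-per-pixel planar YUV (same size, different plane layout), while RGB is 24 bits per pixel, so each conversion stage changes the buffer size the consumer must expect:

```python
# Frame sizes at 640x480 for the three formats appearing in these pipelines.
width, height = 640, 480
sizes = {
    "NV12": width * height * 3 // 2,  # planar Y + interleaved UV
    "I420": width * height * 3 // 2,  # planar Y + U + V
    "RGB":  width * height * 3,       # packed, 3 bytes per pixel
}
for fmt, nbytes in sizes.items():
    print(f"{fmt}: {nbytes} bytes/frame")
```

A reader expecting one of these sizes will reject or misinterpret a buffer in another format, which matches the "Pixel format of incoming image is unsupported" errors above.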
Update: what I have managed to run is:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=I420, width=640, height=480, framerate=30/1' ! queue ! videoconvert ! xvimagesink

On the other hand what works is

./aruco_test live:2

with local v4l2loopback generated with sequence below:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

but it fails executing with sequence generated with the code below:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV

It seems to require the format to be passed as RGB, and doesn’t seem to work with RTSP where NV12 or I420 are used. However, it works with the former example where /dev/video2 is created with RGB. A wrong format conversion somewhere is probably the cause.

You may try to add caps before v4l2sink, to ensure BGR format.

identity drop-allocation=1 is for a buffer problem with v4l2sink (in older gstreamer releases, a common but sometimes failing workaround was using tee before v4l2sink).

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=BGR,framerate=30/1 ! v4l2sink device=/dev/video2

Thank you for your response!
Do the following two commands work together on your side if you execute them?

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1" 
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! v4l2sink device=/dev/video2

it seems to throw:

ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH265Dec-omxh265dec:omxh265dec-omxh265dec0: Internal data stream error.

Steps to reproduce the error:
I. setting up the aruco sample:

wget https://sourceforge.net/projects/aruco/files/3.1.0/aruco-3.1.0.zip/download
unzip download
cd aruco-3.1.0
mkdir build
cd build
cmake ..
make -j4
cd utils
./aruco_test live:2

II. setting up a loopback that mounts a network RTSP GStreamer stream as a local camera, /dev/video2

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1" 
stream ready at rtsp://127.0.0.1:8554/test

And there seems to be some issue in the lines below: the first line starts and allows a default generic application to read from the device, but won’t let the ArUco code read from it. The latter line won’t start, throwing an error:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! v4l2sink device=/dev/video2
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH265Dec-omxh265dec:omxh265dec-omxh265dec0: Internal data stream error.

Otherwise,

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
in that case a GStreamer-related exception is thrown when I start the aruco_test code

III.
Additionally, there is evidence that the ArUco code processes a v4l2loopback-mounted camera from CSI [this differs from the RTSP case, as it uses nvarguscamerasrc directly]:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

At least in this case it opens the camera and displays the video and edges:

./aruco_test live:2
Opening camera index 2
...

What could be the cause that simple_opencv in C++ and in Python works on the Xaviers, but when I build OpenCV of the same version with the same parameters on the host PC and run simple_opencv on a network RTSP stream (or the Python equivalent), it fails, saying something like:

python simple_python.py 
Unable to query number of channels
capture filed
./simple_opencv 
VIDEOIO ERROR: V4L: device uridecodebin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! videoconvert ! videoscale ! appsink: Unable to query number of channels
Not good, open camera failed

However, a line of the type below works and plays the stream:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue ! decodebin ! videoconvert ! xvimagesink

UPD:
resolved with an upgrade to OpenCV 4.1.
UPD: It seems that simple_sample will need to be rewritten for OpenCV 4.1:

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --libs opencv)
/tmp/ccSutWH9.o: In function `main':
simple_opencv.cpp:(.text+0x55): undefined reference to `cv::VideoCapture::VideoCapture(cv::String const&)'
simple_opencv.cpp:(.text+0x1b8): undefined reference to `cv::imshow(cv::String const&, cv::_InputArray const&)'
/tmp/ccSutWH9.o: In function `cv::String::String(char const*)':
simple_opencv.cpp:(.text._ZN2cv6StringC2EPKc[_ZN2cv6StringC5EPKc]+0x54): undefined reference to `cv::String::allocate(unsigned long)'
/tmp/ccSutWH9.o: In function `cv::String::~String()':
simple_opencv.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()'
/tmp/ccSutWH9.o: In function `cv::String::operator=(cv::String const&)':
simple_opencv.cpp:(.text._ZN2cv6StringaSERKS0_[_ZN2cv6StringaSERKS0_]+0x28): undefined reference to `cv::String::deallocate()'
collect2: error: ld returned 1 exit status

Another concern:
How can I reduce the delay?
When the stream is played on the Xavier and also on another device via GStreamer, there is a 2-3 second delay at the receiver.
I understand that for udpsink we increased the socket buffers, but is there anything that can be done to reduce the delay with the RTSP method?
Thanks

For the compilation error, you would add --cflags to the pkg-config command in order to get the include paths to the headers; --libs only gives the libraries for linking. Be sure that

pkg-config --cflags --libs opencv

returns the right paths for the expected version. You should see -lopencv_videoio and -lopencv_imgproc in the libs. If not, have a look at /usr/lib/pkgconfig and check whether opencv.pc is the expected version. If not, you may link to the one installed with your new OpenCV build. Note there is an option when configuring the OpenCV build for pkg-config support.

About latency, you would try setting the latency property of rtspsrc (the default is 2000 ms):

rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! queue ! ...

Thank you for your response.
Include paths to the headers like this?

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv)

It seems that there are neither -lopencv_videoio nor -lopencv_imgproc in the libs, nor an opencv.pc in /usr/lib/pkgconfig.
reference found: https://github.com/opencv/opencv/issues/13154
I can see two solutions: either use cmake instead of g++, or recompile OpenCV with the parameter:

-D OPENCV_GENERATE_PKGCONFIG=YES

The result doesn’t appear to create an opencv.pc file in the specified location, but rather comes up with:

pkg-config --cflags opencv4
-I/usr/local/include/opencv4/opencv -I/usr/local/include/opencv4
g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv4)
simple_opencv.cpp: In function ‘int main(int, char**)’:
simple_opencv.cpp:22:28: error: ‘CV_YUV2BGR_I420’ was not declared in this scope
       cvtColor(frame, bgr, CV_YUV2BGR_I420);

However, it builds with:

cvtColor(frame, bgr, COLOR_YUV2BGR_NV12);

Now it builds, but when I run it, it connects to the stream but does not pop up a window with the image.

Thank you for pointing out the latency parameter!

works

import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def read_cam():
     cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
     if cap.isOpened():
         cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
         while True:
             ret_val, img = cap.read()
             cv2.imshow('demo',img)
             if cv2.waitKey(1) == ord('q'):
                  break
     else:
         print ("camera open failed")

     cv2.destroyAllWindows()


if __name__ == '__main__':
     print(cv2.getBuildInformation())
     Gst.debug_set_active(True)
     Gst.debug_set_default_threshold(0)
     read_cam()

Credits to @Honey_Patouceul for the code below, which works with the latest JetPack 4.4 and the latest OpenCV 4.3.

#include <iostream>
#include <signal.h>

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>


static cv::VideoCapture *capPtr=NULL;
void my_handler(int s){
           printf("Caught signal %d\n",s);
       if(capPtr)
        capPtr->release();
           exit(1); 
}

int main()
{
    /* Install handler for catching Ctrl-C and close camera so that Argus keeps ok */
    struct sigaction sigIntHandler;
    sigIntHandler.sa_handler = my_handler;
    sigemptyset(&sigIntHandler.sa_mask);
    sigIntHandler.sa_flags = 0;
    sigaction(SIGINT, &sigIntHandler, NULL);


    const char* gst =  "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=640, height=480 !  nvvidconv ! video/x-raw,format=I420 ! appsink";
    capPtr = new cv::VideoCapture(gst, cv::CAP_GSTREAMER);
    if(!capPtr->isOpened()) {
        std::cout<<"Failed to open camera."<<std::endl;
        return (-1);
    }

    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in;
    while(1)
    {
        if (!capPtr->read(frame_in)) {
            std::cout<<"Capture read error"<<std::endl;
            break;
        }
        else {
            cv::cvtColor(frame_in, frame_in, cv::COLOR_YUV2BGR_I420);
            cv::imshow("MyCameraPreview",frame_in);
            if((char)cv::waitKey(1) == (char)27)
                break;
        }
    }

    capPtr->release();
    delete capPtr;
    return 0;
}

can be compiled e.g. with
g++ -std=c++11 -Wall -I/usr/include/opencv4/ -I/usr/local/cuda/targets/aarch64-linux/include simple_video.cpp -L/usr/lib -lopencv_core -lopencv_imgproc -lopencv_video -lopencv_videoio -lopencv_highgui -o simple_video

works:

#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
    VideoCapture cap;
    // open the default camera, use something different from 0 otherwise;
    // Check VideoCapture documentation.
    if(!cap.open(2))
        return 0;
    for(;;)
    {
        Mat frame;
        cap >> frame;
        if( frame.empty() ) break; // end of video stream
        imshow("this is you, smile! :)", frame);
        if( waitKey(10) == 27 ) break; // stop capturing by pressing ESC
    }
    // the camera will be closed automatically upon exit
    // cap.close();
    return 0;
}

given that the line below is executed prior to running the binary:
gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

g++ -w vid.cpp -o vid $(pkg-config --cflags --libs opencv4)

@Andrey1984
Although v4l2loopback can be convenient in some cases, it is an expensive solution in terms of CPU usage.
If your OpenCV version has GStreamer support, it is therefore better to avoid v4l2loopback for reading the onboard camera.
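A sketch of the direct route (the pipeline string mirrors the working Python example earlier in the thread; CAP_GSTREAMER assumes your OpenCV build has GStreamer support):

```python
# Build the camera pipeline that OpenCV can open directly, skipping
# v4l2loopback. nvvidconv converts to BGRx on the GPU, then videoconvert
# produces the BGR frames OpenCV uses natively.
def camera_pipeline(width=1920, height=1080, fps=30):
    return (
        f"nvarguscamerasrc ! video/x-raw(memory:NVMM), width={width}, "
        f"height={height}, format=NV12, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

pipeline = camera_pipeline()
print(pipeline)
# Opening it (requires OpenCV built with GStreamer):
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```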

Hi,
@Andrey1984 @Honey_Patouceul
I’m doing this to create a sink for my virtual device:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video3

I have 2 queries:

  1. I have 2 devices, namely /dev/video3 and /dev/video4. video3 works perfectly with this command, but video4 does not show camera output in my Jetson Nano browser even though it is running in the background. Why is this happening?
    To create the virtual devices I’m using this command:
    sudo modprobe v4l2loopback devices=2 video_nr=3,4 max_buffers=2 exclusive_caps=1 card_label="VirtualCam,opencv"

  2. How can I change the command in case I have 2 or more sinks?
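For the second question, a common pattern (a sketch only, not tested here) is to split the stream with tee and feed one queue ! v4l2sink branch per loopback device:

```python
# Build a pipeline feeding N v4l2loopback devices from one camera via tee.
# Device numbers follow the modprobe example above (video_nr=3,4).
def multi_sink_pipeline(devices=(3, 4)):
    src = (
        "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, "
        "width=1920, height=1080, framerate=30/1 ! nvvidconv ! "
        "video/x-raw, width=640, height=480, format=I420, framerate=30/1 ! "
        "videoconvert ! identity drop-allocation=1 ! "
        "video/x-raw, width=640, height=480, format=RGB, framerate=30/1 ! "
        "tee name=t"
    )
    branches = " ".join(
        f"t. ! queue ! v4l2sink device=/dev/video{n}" for n in devices
    )
    return f"{src} {branches}"

print(multi_sink_pipeline())
```

Each branch needs its own queue so one slow sink does not stall the others.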