nvarguscamerasrc OpenCV (Solved)

Hi guys,
Has anyone solved, or does anyone have an idea of, how to feed a GStreamer RTSP pipeline into the OpenCV ArUco module?
I have found some ready-made solutions that allow this, but they seem a cumbersome approach.
Thanks

References:

https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/samples/detect_markers.cpp

What I have managed so far is to grab the first image from the RTSP camera with the aruco_simple example.

./aruco_simple "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink"
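
For reference, the sample presumably passes this string on to cv::VideoCapture. A minimal standalone sketch of the same idea (assuming an OpenCV build with GStreamer support; the explicit BGRx/BGR caps before appsink are my addition, since OpenCV expects BGR frames from appsink):

// Sketch: open the RTSP stream through OpenCV's GStreamer backend.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    const char* pipeline =
        "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! "
        "decodebin ! nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink";
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open RTSP pipeline" << std::endl;
        return -1;
    }
    cv::Mat frame;
    if (cap.read(frame))
        std::cout << "First frame: " << frame.cols << "x" << frame.rows << std::endl;
    return 0;
}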

The opencv_contrib version seems cumbersome to implement; here I am trying to get the Xavier camera calibration:

./example_aruco_calibrate_camera ~/calib.txt -w=5 -h=7 -l=100 -s=10 -d=10 --ci:2

(example_aruco_calibrate_camera:10617): GStreamer-CRITICAL **: 06:41:15.663: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

Another example that works:

./aruco_test "rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink"

https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/samples/calibrate_camera.cpp

aruco_simple.cpp (5.58 KB)
aruco_test.cpp (17.3 KB)

I gave up on passing the network RTSP stream to ArUco directly.
It just won’t work with my level of skills at the moment.
However, since ArUco works with a local camera, the commands below will probably work to mount the network stream as a local camera.

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:

/home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Exception :Could not open video

And when I try to read from /dev/video2, GStreamer fails:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=RGB, width=640, height=480, framerate=30/1' ! queue ! videoconvert ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.

However, when I create the loopback feed with the approach below, it works:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
Gtk-Message: 15:11:53.522: Failed to load module "canberra-gtk-module"

(in:7950): Gtk-CRITICAL **: 15:11:53.555: IA__gtk_window_resize: assertion 'width > 0' failed
VIDEOIO ERROR: V4L2: getting property #1 is not supported
Frame:-1
Time detection=55.546 milliseconds nmarkers=0 images resolution=[640 x 480]
VIDEOIO ERROR: V4L2: getting property #1 is not supported
Frame:-1

What might be the issue with the former approach?
Is it because of the mess with formats (NV12 / RGB / I420)?
What would the correct line for loopbacking a network stream to /dev/video2 look like, and how can I check that it plays with GStreamer?
Thanks
Update: what I have managed to run is:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=I420, width=640, height=480, framerate=30/1' ! queue ! videoconvert ! xvimagesink

On the other hand, what works is

./aruco_test live:2

with a local v4l2loopback device fed by the pipeline below:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

but it fails when the loopback is fed from the RTSP stream generated with the commands below:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
./aruco_test live:2
Opening camera index 2
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV

It seems to require the format to be passed as RGB, and it doesn’t seem to work with RTSP, where NV12 or I420 are used. However, it works with the former example, where /dev/video2 is created with RGB. A wrong format conversion somewhere is probably the cause.

You may try to add caps before v4l2sink, to ensure BGR format.

identity drop-allocation=1 is for a buffer problem with v4l2sink (in older gstreamer releases, a common but sometimes failing workaround was using tee before v4l2sink).

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=BGR,framerate=30/1 ! v4l2sink device=/dev/video2
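
To check what OpenCV then receives from the loopback device, a quick sketch (assuming an OpenCV version recent enough to take an apiPreference argument, so the V4L2 backend is selected explicitly):

// Sketch: probe the frames OpenCV gets from /dev/video2.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(2, cv::CAP_V4L2);  // index 2 -> /dev/video2
    if (!cap.isOpened()) {
        std::cerr << "open failed" << std::endl;
        return -1;
    }
    cv::Mat frame;
    if (cap.read(frame))
        std::cout << frame.cols << "x" << frame.rows
                  << ", channels=" << frame.channels() << std::endl;
    return 0;
}

If this prints a sensible size with 3 channels, the loopback side should be fine and the remaining problem is on the consumer side.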

Thank you for your response!
Do the following two commands work together on your side if you execute them?

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1" 
stream ready at rtsp://127.0.0.1:8554/test
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! v4l2sink device=/dev/video2

It seems to throw:

ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH265Dec-omxh265dec:omxh265dec-omxh265dec0: Internal data stream error.

Steps to reproduce the error:
I. setting up the aruco sample:

wget https://sourceforge.net/projects/aruco/files/3.1.0/aruco-3.1.0.zip/download
unzip download
cd aruco-3.1.0
mkdir build
cd build
cmake ..
make -j4
cd utils
./aruco_test live:2

II. setting up a loopback that will mount a network RTSP GStreamer stream as the local camera /dev/video2

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1" 
stream ready at rtsp://127.0.0.1:8554/test

There seems to be some issue with the lines below: the first line starts, and a default generic application can read from the device, but the ArUco code cannot. And the latter line won’t start at all, throwing the error below:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! identity drop-allocation=1 ! video/x-raw,width=640,height=480,format=RGB,framerate=30/1 ! v4l2sink device=/dev/video2
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstOMXH265Dec-omxh265dec:omxh265dec-omxh265dec0: Internal data stream error.

Otherwise,

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue !  decodebin ! videoconvert ! v4l2sink device=/dev/video2
In that case, a GStreamer-related exception is thrown when I start the aruco_test code.

III.
Additionally, there is evidence that the ArUco code can process a v4l2loopback-mounted camera fed from CSI (which is different from RTSP, as it uses nvarguscamerasrc directly):

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

At least in this case it opens the camera and displays the video and edges:

./aruco_test live:2
Opening camera index 2
...

What could be the cause of simple_opencv (in C++ and in Python) working on the Xaviers, while the same OpenCV version, built with the same parameters on the host PC, fails for a network RTSP stream (or its Python equivalent), saying something like:

python simple_python.py 
Unable to query number of channels
capture filed
./simple_opencv 
VIDEOIO ERROR: V4L: device uridecodebin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! videoconvert ! videoscale ! appsink: Unable to query number of channels
Not good, open camera failed

However, a line of the type below works and plays the stream:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue ! decodebin ! videoconvert ! xvimagesink
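
One thing I noticed: the "VIDEOIO ERROR: V4L: device uridecodebin ..." message above suggests the capture fell back to the V4L backend instead of GStreamer, so forcing the backend explicitly might help (a sketch, assuming the host OpenCV build has GStreamer support compiled in; the BGR caps before appsink are my addition):

// Sketch: select the GStreamer backend explicitly so the pipeline string
// is not handed to the V4L backend.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(
        "uridecodebin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov ! "
        "videoconvert ! videoscale ! video/x-raw, format=BGR ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "open failed" << std::endl;
        return -1;
    }
    cv::Mat frame;
    while (cap.read(frame) && !frame.empty()) {
        cv::imshow("rtsp", frame);
        if (cv::waitKey(1) == 27)  // ESC quits
            break;
    }
    return 0;
}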

UPD: resolved by upgrading to OpenCV 4.1.
UPD: it seems that simple_opencv will need to be rewritten for OpenCV 4.1:

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --libs opencv)
/tmp/ccSutWH9.o: In function `main':
simple_opencv.cpp:(.text+0x55): undefined reference to `cv::VideoCapture::VideoCapture(cv::String const&)'
simple_opencv.cpp:(.text+0x1b8): undefined reference to `cv::imshow(cv::String const&, cv::_InputArray const&)'
/tmp/ccSutWH9.o: In function `cv::String::String(char const*)':
simple_opencv.cpp:(.text._ZN2cv6StringC2EPKc[_ZN2cv6StringC5EPKc]+0x54): undefined reference to `cv::String::allocate(unsigned long)'
/tmp/ccSutWH9.o: In function `cv::String::~String()':
simple_opencv.cpp:(.text._ZN2cv6StringD2Ev[_ZN2cv6StringD5Ev]+0x14): undefined reference to `cv::String::deallocate()'
/tmp/ccSutWH9.o: In function `cv::String::operator=(cv::String const&)':
simple_opencv.cpp:(.text._ZN2cv6StringaSERKS0_[_ZN2cv6StringaSERKS0_]+0x28): undefined reference to `cv::String::deallocate()'
collect2: error: ld returned 1 exit status

Another concern: how can the delay be reduced?
When the stream is produced on the Xavier and played on another device via GStreamer, there is a 2-3 second delay at the receiver.
I understand that for udpsink we were increasing socket buffers, but is there anything that can be done to reduce the delay with the RTSP method?
Thanks

For the compilation error, you would add --cflags to the pkg-config command in order to get the include paths to the headers; --libs only gives libraries for linking. Be sure that

pkg-config --cflags --libs opencv

returns the right paths for the expected version. You should see -lopencv_videoio and -lopencv_imgproc in the libs. If not, have a look at /usr/lib/pkgconfig and check whether opencv.pc belongs to the expected version. If not, you may link to the one installed with your new OpenCV build. Note there is an option for generating the pkg-config file when configuring the OpenCV build.

About latency, you could try setting the latency property of rtspsrc (the default is 2000 ms):

rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! queue ! ...
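
If the stream is consumed through OpenCV rather than gst-launch, the same property can go inside the pipeline string (a sketch, assuming a GStreamer-enabled OpenCV build; the BGR caps before appsink are my addition):

#include <opencv2/opencv.hpp>

int main()
{
    // rtspsrc latency lowered from the 2000 ms default to 0.
    cv::VideoCapture cap(
        "rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! queue ! "
        "decodebin ! videoconvert ! video/x-raw, format=BGR ! appsink",
        cv::CAP_GSTREAMER);
    return cap.isOpened() ? 0 : -1;
}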

Thank you for your response.
Include paths to the headers like this?

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv)

It seems that there is neither -lopencv_videoio nor -lopencv_imgproc in the libs, and no opencv.pc in /usr/lib/pkgconfig.
Reference found: https://github.com/opencv/opencv/issues/13154
I can see two solutions: either use cmake instead of g++, or recompile OpenCV with the parameter:

-D OPENCV_GENERATE_PKGCONFIG=YES

The result doesn’t appear to be an opencv.pc file in the expected location, but rather an opencv4 package:

pkg-config --cflags opencv4
-I/usr/local/include/opencv4/opencv -I/usr/local/include/opencv4
g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv4)
simple_opencv.cpp: In function ‘int main(int, char**)’:
simple_opencv.cpp:22:28: error: ‘CV_YUV2BGR_I420’ was not declared in this scope
       cvtColor(frame, bgr, CV_YUV2BGR_I420);

However, it builds with:

cvtColor(frame, bgr, COLOR_YUV2BGR_NV12);

Now it builds, but when I run it, it connects to the stream yet does not pop up a window with the image.
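
For what it’s worth, the direct OpenCV 4 replacement for the removed CV_YUV2BGR_I420 constant is cv::COLOR_YUV2BGR_I420; whether the I420 or NV12 variant is correct depends on the caps before appsink (a wrong choice typically gives wrong colors rather than no window). A small helper to illustrate (the function name is mine):

#include <opencv2/opencv.hpp>

// OpenCV 4 dropped the C-style CV_* constants; use the cv::COLOR_* names
// and pick the one matching the appsink caps format.
cv::Mat to_bgr(const cv::Mat& yuv, bool nv12)
{
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, nv12 ? cv::COLOR_YUV2BGR_NV12
                                : cv::COLOR_YUV2BGR_I420);
    return bgr;
}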

Thank you for pointing out the latency parameter!

This works:

import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def read_cam():
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                break
            cv2.imshow('demo', img)
            if cv2.waitKey(1) == ord('q'):
                break
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    print(cv2.getBuildInformation())
    Gst.debug_set_active(True)
    Gst.debug_set_default_threshold(0)
    read_cam()

Credits to @Honey_Patouceul, whose code below works with the latest JetPack 4.4 and the latest OpenCV 4.3.

#include <iostream>
#include <cstdio>
#include <signal.h>

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>


static cv::VideoCapture *capPtr=NULL;
void my_handler(int s){
    printf("Caught signal %d\n", s);
    if(capPtr)
        capPtr->release();
    exit(1);
}

int main()
{
    /* Install handler for catching Ctrl-C and close camera so that Argus keeps ok */
    struct sigaction sigIntHandler;
    sigIntHandler.sa_handler = my_handler;
    sigemptyset(&sigIntHandler.sa_mask);
    sigIntHandler.sa_flags = 0;
    sigaction(SIGINT, &sigIntHandler, NULL);


    const char* gst =  "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=640, height=480 !  nvvidconv ! video/x-raw,format=I420 ! appsink";
    capPtr = new cv::VideoCapture(gst, cv::CAP_GSTREAMER);
    if(!capPtr->isOpened()) {
        std::cout<<"Failed to open camera."<<std::endl;
        return (-1);
    }

    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in;
    while(1)
    {
        if (!capPtr->read(frame_in)) {
            std::cout<<"Capture read error"<<std::endl;
            break;
        }
        else {
            cv::cvtColor(frame_in, frame_in, cv::COLOR_YUV2BGR_I420);
            cv::imshow("MyCameraPreview", frame_in);
            if((char)cv::waitKey(1) == (char)27)
                break;
        }
    }

    capPtr->release();
    delete capPtr;
    return 0;
}

It can be compiled e.g. with:
g++ -std=c++11 -Wall -I/usr/include/opencv4/ -I/usr/local/cuda/targets/aarch64-linux/include simple_video.cpp -L/usr/lib -lopencv_core -lopencv_imgproc -lopencv_video -lopencv_videoio -lopencv_highgui -o simple_video

This works:

#include "opencv2/opencv.hpp"
using namespace cv;

int main(int argc, char** argv)
{
    VideoCapture cap;
    // open the default camera, use something different from 0 otherwise;
    // Check VideoCapture documentation.
    if(!cap.open(2))
        return 0;
    for(;;)
    {
        Mat frame;
        cap >> frame;
        if( frame.empty() ) break; // end of video stream
        imshow("this is you, smile! :)", frame);
        if( waitKey(10) == 27 ) break; // stop capturing by pressing ESC
    }
    // the camera will be closed automatically upon exit
    // cap.close();
    return 0;
}

given that the line below is executed prior to running the binary:
gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

g++ -w vid.cpp -o vid $(pkg-config --cflags --libs opencv4)

@Andrey1984
Although v4l2loopback can be a convenient approach in some cases, it is an expensive solution in terms of CPU usage.
If your OpenCV version has GStreamer support, it is therefore better to avoid v4l2loopback for reading the onboard camera.
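
For example, the C++ equivalent of the Python snippet above would read the camera directly (a sketch, assuming a GStreamer-enabled OpenCV build):

// Sketch: read the CSI camera directly through OpenCV's GStreamer backend,
// avoiding the v4l2loopback round trip.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(
        "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, "
        "format=NV12, framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return -1;
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("csi", frame);
        if (cv::waitKey(1) == 'q')
            break;
    }
    return 0;
}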

Hi,
@Andrey1984 @Honey_Patouceul
I’m doing this to create the sink for my virtual device:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video3

I have 2 queries:

  1. I have 2 devices, namely /dev/video3 and /dev/video4. video3 works perfectly with this command, but when it comes to video4 it does not show the camera output in my Jetson Nano browser, even though it is running in the background. Why is this happening?
    To create virtual devices I’m using this command:
    sudo modprobe v4l2loopback devices=2 video_nr=3,4 max_buffers=2 exclusive_caps=1 card_label="VirtualCam,opencv"

  2. How can I change it in case I have 2 or more sinks?

Just in case someone has a similar error message: I believe I got that problem because I was running out of resources.

What is this test-launch file everyone’s talking about? I can’t find it on my system at all.

It is an example from libgstrtspserver.
Be sure to install with:

sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev

Then you can get test-launch.c from the gst-rtsp-server examples (be sure to use the 1.14 branch) and build it with:

gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)

Got it.

This is working, but I’m unable to access the stream from any of the LAN computers. Error:

mpv "rtsp://192.168.31.165:8554"
[ffmpeg/demuxer] rtsp: method DESCRIBE failed: 404 Not Found
[lavf] avformat_open_input() failed
Failed to recognize file format.


Exiting... (Errors when loading file)

Though if I type the same command on the Jetson Nano, mpv "rtsp://192.168.31.165:8554", it plays the stream on the Nano.

You may check that no firewall blocks port 8554 from your Jetson IP.