nvarguscamerasrc OpenCV (Solved)

Another concern: how can the delay be reduced?
When the stream is played on the Xavier and also on another device via GStreamer (or some other player), there appears to be a 2-3 second delay at the receiver.
I understand that for udpsink we were increasing the socket buffer sizes, but is there anything that can be done to reduce the delay with the RTSP method?
Thanks

For the compilation error, you would add --cflags to the pkg-config command in order to get the include paths to the headers; --libs only gives the libraries for linking. Be sure that

pkg-config --cflags --libs opencv

returns the right paths for the expected version. You should see -lopencv_videoio and -lopencv_imgproc in the libs. If not, have a look at /usr/lib/pkgconfig and check whether opencv.pc matches the expected version. If it doesn't, you may link to the one installed with your new OpenCV build. Note that there is an option when configuring an OpenCV build for generating the pkg-config file.
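
As a quick check (a hedged example; the module name may be opencv or opencv4 depending on how your build was installed), you can list the link flags one per line and filter for the OpenCV libraries:

pkg-config --libs opencv | tr ' ' '\n' | grep opencv_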

About latency, you would try setting the latency property of rtspsrc (the default is 2000 ms):

rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! queue ! ...
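
For example, a complete playback pipeline on the receiver could look like this (a sketch; decodebin autoplugs the depayloader and whatever decoder is available):

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! queue ! decodebin ! videoconvert ! autovideosink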

Thank you for your response.
Include the path to the headers like this?

g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv)

It seems that there is neither -lopencv_videoio nor -lopencv_imgproc in the libs, nor an opencv.pc in /usr/lib/pkgconfig.
Reference found: https://github.com/opencv/opencv/issues/13154
I can see two solutions: either use cmake instead of g++, or recompile OpenCV with the parameter:

-D OPENCV_GENERATE_PKGCONFIG=YES
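
In context, this flag is passed at the cmake configure step, for example (a sketch; the other configure options are elided and depend on your build):

cmake -D CMAKE_BUILD_TYPE=Release -D OPENCV_GENERATE_PKGCONFIG=YES -D CMAKE_INSTALL_PREFIX=/usr/local ..

Note that for OpenCV 4 this generates opencv4.pc rather than opencv.pc, so pkg-config is queried with the module name opencv4.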

The result doesn't appear to be an opencv.pc file in the specified location, but rather:

pkg-config --cflags opencv4
-I/usr/local/include/opencv4/opencv -I/usr/local/include/opencv4
g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --cflags --libs opencv4)
simple_opencv.cpp: In function ‘int main(int, char**)’:
simple_opencv.cpp:22:28: error: ‘CV_YUV2BGR_I420’ was not declared in this scope
       cvtColor(frame, bgr, CV_YUV2BGR_I420);

However, it builds with:

cvtColor(frame, bgr, COLOR_YUV2BGR_NV12);

Now it builds (in OpenCV 4 the legacy CV_* conversion codes are not declared by default; the cv::COLOR_* constants must be used instead), but when I run it, it connects to the stream yet does not pop up a window with the image.

Thank you for pointing out the latency parameter!

This works:

import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def read_cam():
    # Open the onboard camera through a GStreamer pipeline
    # (requires OpenCV built with GStreamer support)
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER)
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                print("frame capture failed")
                break
            cv2.imshow('demo', img)
            if cv2.waitKey(1) == ord('q'):
                break
        cap.release()
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
     print(cv2.getBuildInformation())
     Gst.debug_set_active(True)
     Gst.debug_set_default_threshold(0)
     read_cam()

Credits to @Honey_Patouceul for the code, which works with the latest JetPack 4.4 and the latest OpenCV 4.3.

#include <iostream>
#include <signal.h>

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>


static cv::VideoCapture *capPtr = NULL;

void my_handler(int s)
{
    printf("Caught signal %d\n", s);
    if (capPtr)
        capPtr->release();
    exit(1);
}

int main()
{
    /* Install handler for catching Ctrl-C and close camera so that Argus keeps ok */
    struct sigaction sigIntHandler;
    sigIntHandler.sa_handler = my_handler;
    sigemptyset(&sigIntHandler.sa_mask);
    sigIntHandler.sa_flags = 0;
    sigaction(SIGINT, &sigIntHandler, NULL);

    /* Capture I420 frames from the onboard camera through GStreamer */
    const char* gst = "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=640, height=480 ! nvvidconv ! video/x-raw,format=I420 ! appsink";
    capPtr = new cv::VideoCapture(gst, cv::CAP_GSTREAMER);
    if (!capPtr->isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return (-1);
    }

    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in;
    while (1)
    {
        if (!capPtr->read(frame_in)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }
        else {
            /* The pipeline delivers I420, so convert to BGR for display */
            cv::cvtColor(frame_in, frame_in, cv::COLOR_YUV2BGR_I420);
            cv::imshow("MyCameraPreview", frame_in);
            if ((char)cv::waitKey(1) == (char)27) /* ESC quits */
                break;
        }
    }

    capPtr->release();
    delete capPtr;
    return 0;
}

It can be compiled, e.g., with:
g++ -std=c++11 -Wall -I/usr/include/opencv4/ -I/usr/local/cuda/targets/aarch64-linux/include simple_video.cpp -L/usr/lib -lopencv_core -lopencv_imgproc -lopencv_video -lopencv_videoio -lopencv_highgui -o simple_video

This also works:

#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    VideoCapture cap;
    // open the default camera, use something different from 0 otherwise;
    // check the VideoCapture documentation.
    if (!cap.open(2))
        return 0;
    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break; // end of video stream
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break; // stop capturing by pressing ESC
    }
    // the camera will be closed automatically upon exit
    // cap.release();
    return 0;
}

given that the line below is executed prior to running the binary:
gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video2

g++ -w vid.cpp -o vid $(pkg-config --cflags --libs opencv4)
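
This assumes a v4l2loopback device already exists at /dev/video2; it can be created with something like the following (a sketch mirroring the modprobe command shown later in this thread):

sudo modprobe v4l2loopback devices=1 video_nr=2 exclusive_caps=1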

@Andrey1984
Although v4l2loopback can be convenient in some cases, it is an expensive solution in terms of CPU usage.
If your OpenCV version has GStreamer support, it is therefore better to avoid v4l2loopback for reading the onboard camera.

Hi,
@Andrey1984 @Honey_Patouceul
I’m doing this to create sink for my virtual device:

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video3

I have 2 queries:

  1. I have 2 devices, namely /dev/video3 and /dev/video4. video3 works perfectly with this command, but when it comes to video4, it does not show the camera output in the browser on my Jetson Nano even though the pipeline is running in the background. Why is this happening?
    To create the virtual devices I'm using this command:
    sudo modprobe v4l2loopback devices=2 video_nr=3,4 max_buffers=2 exclusive_caps=1 card_label="VirtualCam,opencv"

  2. How can I change the pipeline in case I have 2 or more sinks? (See the sketch below.)
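
One possible approach for feeding both loopback devices from the single camera is to split the stream with tee, giving each branch its own queue so one sink cannot stall the other (an untested sketch based on the pipeline above):

gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! nvvidconv ! 'video/x-raw, width=640, height=480, format=I420, framerate=30/1' ! tee name=t \
  t. ! queue ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video3 \
  t. ! queue ! videoconvert ! identity drop-allocation=1 ! 'video/x-raw, width=640, height=480, format=RGB, framerate=30/1' ! v4l2sink device=/dev/video4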

Just in case someone has a similar error message: I believe I got that problem because I was running out of resources.

What is this test-launch file everyone is talking about? I can't find it on my system at all.

It is an example from libgstrtspserver.
Be sure to install with:

sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev

Then you can get the test-launch.c example from the gst-rtsp-server sources (be sure to use the 1.14 branch) and build with:

gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
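
Once built, test-launch takes a gst-launch-style pipeline description whose payloader is named pay0 and serves it at rtsp://<jetson-ip>:8554/test. For instance (a sketch assuming the Jetson hardware H.264 encoder):

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"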

Got it.

This is working, but I'm unable to access the stream on any of the LAN computers. Error:

mpv "rtsp://192.168.31.165:8554"
[ffmpeg/demuxer] rtsp: method DESCRIBE failed: 404 Not Found
[lavf] avformat_open_input() failed
Failed to recognize file format.


Exiting... (Errors when loading file)

Though if I type the same command on the Jetson Nano, mpv "rtsp://192.168.31.165:8554", it plays the stream on the Nano.

You may check that no firewall blocks port 8554 from your Jetson IP.