nvarguscamerasrc OpenCV (Solved)

Hi guodebby,
For Xavier, it should be

CUDA_ARCH_BIN="<b>7.2</b>"

Do you change CUDA_ARCH_BIN to 7.2?

Please follow the steps below:
1. Do not install OpenCV 3.3.1 via JetPack; it is installed by default, so please un-check OpenCV 3.3.1.
2. Get the script: https://github.com/AastaNV/JEP/blob/master/script/install_opencv3.4.0_Xavier.sh
3. Modify CUDA_ARCH_BIN in the script:

CUDA_ARCH_BIN="7.2"

4. Execute the script:

$ mkdir OpenCV
$ ./install_opencv3.4.0_Xavier.sh OpenCV
$ sudo ldconfig -v

5. Build and run the sample:

$ g++ -o simple_opencv -Wall -std=c++11 simple_opencv.cpp $(pkg-config --libs opencv)
$ export DISPLAY=:0
$ ./simple_opencv

simple_opencv.cpp (611 Bytes)

Did anyone verify if gstreamer works with OpenCV 3.4.3?

We have verified 3.4.0. Ideally it should work fine for 3.4.3. Other users may share their experience.

I tested it with 3.4.3 and it doesn’t work. It gives me the same error as above, i.e. ‘gst_is_element’ failed.

Your pipeline is wrong for nvarguscamerasrc. You may see this with gst-launch:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink
WARNING: erroneous pipeline: could not link nvarguscamerasrc0 to nvvconv0, nvarguscamerasrc0 can't handle caps video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1

However, using 1080p resolution @30 fps in NV12 format, this works:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink

So you would just change your example to:

VideoCapture cap("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");

If you need to resize, nvvidconv will probably be able to do it, just specify wanted resolution in caps after nvvidconv and test with gst-launch (quoting caps).

For the installation script: if you add make -j8 (or make -j$(nproc)) instead of just make, the build runs dramatically faster.
On the other hand, are there any ideas on how to get the scripted installation to work with Python cv2 and cv2.dnn?
Thanks

What am I missing for localhost gstreamer stream playing?

#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! nvvidconv ! appsink");
  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for (;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_I420);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
  return 0;
}
./simple_opencv
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/opencv-3.4.0/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Failed to open camera.

In another terminal:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
stream ready at rtsp://127.0.0.1:8554/test
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
int main(void)
{
    cv::VideoCapture cap("rtsprc location=rtsp://127.0.0.1:8554/test ! videoconvert ! videoscale ! appsink");

    if( !cap.isOpened() )
    {
        std::cout << "Not good, open camera failed" << std::endl;
        return 0;
    }

    cv::Mat frame;
    while(true)
    {
        cap >> frame;
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    return 0;
}

references:
https://devtalk.nvidia.com/default/topic/1031294/jetson-tx1/opencv-videocapture-failed-in-capture-rtsp-video-stream/post/5247561/#5247561
https://devtalk.nvidia.com/default/topic/1007962/jetson-tx2/that-proceesing-to-open-ip-camera-with-gstreamer-and-opencv-only-display-a-still-picture-how-to-solve-it-/post/5149913/#5149913
https://devtalk.nvidia.com/default/topic/1004914/gstreamer-pipeline-failed-to-open-ip-camera-in-cv-videocapture-function-/

I’d suggest to try using videoconvert instead of nvvidconv.

What worked is:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, width=640, height=480, format=NV12, framerate=30/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"
#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! videoconvert ! appsink");
  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for (;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_I420);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
  return 0;
}
./simple_opencv
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
Allocating new output: 640x480 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 640, nFrameHeight = 480 
Gtk-Message: 00:03:21.734: Failed to load module "canberra-gtk-module"
---> NVMEDIA: Video-conferencing detected !!!!!!!!!

Thank you for pointing that out!

The next challenge seems to be improving the quality of the video.

You may set a higher bitrate. Check this post.

Hi Honey_Patouceul,
Thank you for sharing the link.
Do you mean something like

#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test latency=30 ! decodebin ! videoconvert ! omxh265enc bitrate=50000000 ! h265parse ! omxh265dec ! nvoverlaysink ! appsink");
  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for (;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_I420);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
  return 0;
}

and could it remove the green artifacts and get a picture more similar to the output of

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! queue ! decodebin ! videoconvert ! xvimagesink

Thanks
That one doesn’t seem to open a pop-up window.
I will try further combinations until the picture is better than in the screenshots attached to previous posts.

./simple_opencv
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
Allocating new output: 640x480 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 640, nFrameHeight = 480 
Framerate set to : 0 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 
NVMEDIA: H265 : Profile : 1 
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
Allocating new output: 640x480 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 640, nFrameHeight = 480

The terminal execution of the gstreamer sequence seems fine. The quality issue arises when the rtsp stream is processed by simple_opencv.
I mean that the line below returns fine video:

VideoCapture cap("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720,format=NV12, framerate=30/1 ! nvvidconv ! video/x-raw,format=I420 ! appsink");

but the line below seems to miss some parameters needed to return the same quality video:

VideoCapture cap("rtspsrc location=rtsp://127.0.0.1:8554/test  latency=30 ! decodebin !  videoconvert ! appsink ");

I meant to adjust the bitrate for omxh265enc in the sender, i.e. in the pipeline passed to test-launch.
The default bitrate being very low, the encoding loses much quality.

You may improve using a higher bitrate. It won’t be exactly the same as original, but may be enough depending on what you intend to do.

Thank you for pointing that out.
However, it seems that I have no arguments to add to the receiver below, and thus I will approach the transmitter parameters instead.

rtspsrc location=rtsp://127.0.0.1:8554/test  latency=30 ! decodebin !  videoconvert ! appsink

At this point I just wanted to play in OpenCV with a CSI stream from a remote Jetson. But then I intended to process a remote CSI camera stream with OpenCV, e.g. combine the streams of two cameras into one, or build a kind of 360° view in the case of more cameras.

If the quality is lost in encoding at the sender side, there is nothing (reasonable) you can do to retrieve the lost quality at the receiver side.
So, depending on your available resources for encoding and your available bandwidth between sender and receiver, you may adjust the bitrate and check image quality.

But a dramatic difference is observed on the same Xavier device between three cases:
the C++ OpenCV capture with nvarguscamerasrc, which is somewhat fine;
the terminal gstreamer rtsp [as receiver], which is also somewhat fine;
and the worst case, running the receiver from C++ OpenCV with rtspsrc.
I understand the last two methods use the same transmitter but different parameters in the receiving code; perhaps the issue might be missing parameters in the code.
For example:
At Xavier 1 I run the rtsp gstreamer streaming.
When I run the gstreamer receiving command from the same Xavier's terminal, it plays well.
But when I use the OpenCV gstreamer rtsp receiving code, the quality degrades dramatically.
Probably I shall approach it some other way.

You may try to run tegrastats and see if something has become a bottleneck.

If using videoconvert, you may try requesting BGR format directly before appsink, instead of using cvtColor in OpenCV.

You may also try to set caps video/x-raw, format=I420 (or NV12) at the output of decodebin; if supported, you may remove videoconvert (although if it has nothing to do, it shouldn’t use many resources).

You may also check the caps used at each stage with gst-launch using -v and specify these caps for the opencv gstreamer pipeline.

Should it be as listed below for the devkit's onboard OmniVision sensor?

cvtColor(frame, bgr, CV_YUV2BGR_I420);

It depends more on the caps before appsink than on the sensor. If you are using nvarguscamerasrc and nvvidconv outputting NV12, then it would be cv::COLOR_YUV2BGR_NV12.