JetPack 4.2 + OpenCV 3.4.6: GStreamer-CRITICAL: assertion 'GST_IS_ELEMENT (element)' failed

Hello,

I want to capture video from the camera and use a GStreamer pipeline in OpenCV. The camera's output pixel format is 8-bit UYVY. It works fine when using gst-launch-1.0 directly:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! xvimagesink -ev

Then I built OpenCV 3.4.6 with GStreamer support and used VideoCapture to fetch the video.
When I use OpenCV's COLOR_YUV2BGR_UYVY, the video displays on the screen, but the CPU usage is high:

string pipeline = "v4l2src device=/dev/video0 ! 
    video/x-raw, width=(int)1920, height=(int)1080, 
    format=(string)UYVY, framerate=(fraction)30/1 ! appsink";
    VideoCapture cap(pipeline,CAP_GSTREAMER);
    // View video
    Mat frame;
    while (1) {
        cap >> frame;  // Get a new frame from camera
	Mat bgr;
	cvtColor(frame, bgr, COLOR_YUV2BGR_UYVY);
        // Display frame
        imshow("Display window", bgr);
        waitKey(1); //needed to show frame
    }

Then I tried to use nvvidconv to convert UYVY to BGR. The code looks like this:

string pipeline = "-v v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 
    ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)BGR ! appsink";
    VideoCapture cap(pipeline,CAP_GSTREAMER);
        // View video
    Mat frame;
    while (1) {
        cap >> frame;  // Get a new frame from camera
        // Display frame
        imshow("Display window", frame);
        waitKey(1); //needed to show frame
    }

The following error occurred:

(test_nvvidconv:18774): GStreamer-CRITICAL **: 20:28:25.106: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

Is this a GStreamer problem?
Thank you very much.

Hi,
It is a hardware limitation. Please check
https://devtalk.nvidia.com/default/topic/1064944/jetson-nano/-gstreamer-nvvidconv-bgr-as-input/post/5392836/#5392836

You may check whether you can set io-mode=2. We have seen better performance with it in some cases:
https://devtalk.nvidia.com/default/topic/1065092/deepstream-sdk/very-slow-framerate-on-mjpeg-using-deepstream-4-0-on-gtx-1080/post/5393827/#5393827
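
For example (io-mode=2 selects mmap I/O; this is just the caps from your first post with the property added):

v4l2src device=/dev/video0 io-mode=2 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 ! appsink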

Also, run 'sudo nvpmodel -m 0' and 'sudo jetson_clocks' to run the TX2 at maximum performance.

Thanks DaneLLL,

I follow your suggestions in
https://devtalk.nvidia.com/default/topic/1064944/jetson-nano/-gstreamer-nvvidconv-bgr-as-input/post/5392836/#5392836
The OpenCV code is as follows:

VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)RGBA ! appsink", CAP_GSTREAMER)

It reports some errors.

(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.697: gst_mini_object_copy: assertion 'mini_object != NULL' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.698: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.698: gst_structure_copy: assertion 'structure != NULL' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.699: gst_caps_append_structure_full: assertion 'GST_IS_CAPS (caps)' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.700: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.700: gst_structure_copy: assertion 'structure != NULL' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.701: gst_caps_append_structure_full: assertion 'GST_IS_CAPS (caps)' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.701: gst_mini_object_unref: assertion 'mini_object != NULL' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:31.702: gst_mini_object_ref: assertion 'mini_object != NULL' failed
(test_nvvidconv:10153): GStreamer-CRITICAL **: 15:36:32.147: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed

But when I pipe the stream to nvoverlaysink, it works well:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvoverlaysink

That confuses me. Could you please provide more help?

Hi,
OpenCV only accepts CPU buffers in appsink. You may try

v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)RGBA ! nvvidconv ! video/x-raw ! appsink

And run

img2 = cv2.cvtColor(img, cv2.COLOR_RGBA2BGR)

It is similar to
https://devtalk.nvidia.com/default/topic/1064944/jetson-nano/-gstreamer-nvvidconv-bgr-as-input/post/5397443/#5397443
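
A minimal C++ sketch putting this suggestion together (caps taken from your earlier posts; it assumes appsink ends up receiving a 4-channel RGBA CPU buffer):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
  // The second nvvidconv copies the NVMM RGBA buffer back to CPU memory,
  // so that appsink receives the regular CPU buffer OpenCV requires
  VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)RGBA ! nvvidconv ! video/x-raw ! appsink", CAP_GSTREAMER);
  if (!cap.isOpened())
    return -1;
  Mat frame, bgr;
  for (;;)
    {
      cap >> frame;  // assumed to be a 4-channel RGBA frame
      if (frame.empty())
        break;
      cvtColor(frame, bgr, COLOR_RGBA2BGR);
      imshow("Display window", bgr);
      waitKey(1);
    }
  return 0;
}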

Thanks DaneLLL,

OpenCV VideoCapture can fetch frames from the GStreamer pipeline using

v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)RGBA ! nvvidconv ! video/x-raw ! appsink

But cap >> frame does not return a frame with the same size as the input, so the cvtColor call fails.

VideoCapture cap(pipeline, CAP_GSTREAMER);
Mat frame;
while (1) {
    cap >> frame;  // Get a new frame from camera
    Mat bgr;
    cvtColor(frame, bgr, COLOR_RGBA2BGR);
}

The input image is 1920 wide and 1080 high. However, the captured frame is 1920 wide and 1620 high. Note that 1620 = 1080 × 1.5, the height of a planar YUV (I420) buffer, so it seems appsink did not actually negotiate RGBA.
Looking forward to your reply.

Hi,
It is not a good idea to use nvvidconv with v4l2src. Please simply run:

#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/types_c.h>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,format=UYVY,framerate=30/1 ! appsink");

  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for(;;)
    {
      Mat frame;
      cap >> frame;
      Mat bgr;
      cvtColor(frame, bgr, CV_YUV2BGR_UYVY);
      imshow("original", bgr);
      waitKey(1);
    }

  cap.release();
  return 0;
}

The suggestion in #4 is not a good one, as it involves many extra memcpy() calls.

Hi DaneLLL,

I have tried this before, and it does work. However, I checked the CPU usage of the program: one core reaches 90% usage and the others are around 30%. That is too heavy a load for a 1080p30 input.

So I wonder: is there a more efficient way to capture UYVY-format video?

Thanks.

AFAIK, OpenCV up to version 4.0.0 doesn't support 4-channel formats in VideoCapture, so RGBA may not be a good solution for your case.

You could use nvvidconv to convert into BGRx and then videoconvert into BGR, as sketched below. You could also do the YUV-to-BGR conversion in OpenCV, but those conversions are not available in cv::cuda, so they would run on the CPU and be somewhat slower than the videoconvert approach, which can run on a different core with a queue between videoconvert and appsink. Either way, this limits the achievable resolutions and framerates.
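
A hedged sketch of that pipeline (caps taken from the earlier posts; the first nvvidconv does the UYVY-to-BGRx conversion in hardware, the second copies NVMM back to CPU memory, and the queue lets videoconvert and appsink run in separate threads):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
  // videoconvert only has to drop the fourth channel (BGRx -> BGR),
  // producing the 3-channel BGR buffer OpenCV's appsink accepts directly
  VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)UYVY, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)BGRx ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! queue ! appsink", CAP_GSTREAMER);
  if (!cap.isOpened())
    return -1;
  Mat frame;
  for (;;)
    {
      cap >> frame;  // frame is already 3-channel BGR, no cvtColor needed
      if (frame.empty())
        break;
      imshow("BGRx + videoconvert", frame);
      waitKey(1);
    }
  return 0;
}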

However, for your case you may have a look at this article. You would have to build the helper lib. It was written for older OpenCV 3 and L4T versions, but it shouldn't be too hard to adapt to your case.

Side note: using a GStreamer pipeline like

v4l2src device=/dev/video0 ! appsink

would probably just add GStreamer overhead for nothing. If you need no conversion, you could simply use the V4L2 API instead. For /dev/video0, you would use:

VideoCapture cap(0);
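
A minimal sketch of that direct approach (assuming the camera is /dev/video0 and OpenCV was built with V4L2 support; the V4L2 backend converts supported formats such as UYVY to BGR internally):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
  // Plain V4L2 capture through OpenCV, no GStreamer pipeline involved
  VideoCapture cap(0);
  cap.set(CAP_PROP_FRAME_WIDTH, 1920);
  cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
  if (!cap.isOpened())
    return -1;
  Mat frame;
  for (;;)
    {
      cap >> frame;  // the backend hands back a BGR frame
      if (frame.empty())
        break;
      imshow("v4l2 direct", frame);
      waitKey(1);
    }
  return 0;
}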

Hi,
An optimal solution is to use the NvBuffer APIs in tegra_multimedia_api. For USB cameras, the reference sample is 12_camera_v4l2_cuda, and you can apply this patch to map RGBA into cv::Mat.

There is also a patch for mapping RGBA into cv::cuda::GpuMat.
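
For context, the essence of such a patch is wrapping an already-mapped RGBA buffer in a cv::Mat header without copying pixels. A hypothetical sketch (the pointer, pitch, and size are assumed to come from the NvBuffer mapping done in the 12_camera_v4l2_cuda sample):

#include <opencv2/opencv.hpp>

// Hypothetical illustration: wrap a hardware buffer that has already been
// mapped to CPU-visible memory in a cv::Mat, with no pixel copy.
// 'mapped_ptr' and 'pitch' would come from the NvBuffer mapping calls
// used in the sample; they are not defined here.
cv::Mat wrapRGBA(void* mapped_ptr, int width, int height, size_t pitch)
{
  // CV_8UC4 matches the RGBA layout; 'pitch' is the row stride in bytes
  return cv::Mat(height, width, CV_8UC4, mapped_ptr, pitch);
}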