Set camera decoder in OpenCV on Jetson Nano

I have trouble opening the camera with cv::VideoCapture(gst_str, cv::CAP_GSTREAMER):

If I use nvjpegdec
On the command line:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480 ! nvjpegdec ! xvimagesink

It works well, and the fps is about 50. But if I use it in OpenCV:

std::string gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! nvjpegdec ! appsink";

the error is:

Bus error (core dumped)

If I use jpegdec
On the command line:

gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=1280,height=480 ! jpegdec ! xvimagesink

It also works well, and the fps is about 50, too. But if I use it in OpenCV:

std::string gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegdec ! appsink";

the error is:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

If I use nvv4l2decoder
On the command line:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink

It can work but the pictures it shows are black and white.
In OpenCV:

std::string gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! appsink";

This time there is no error, and the fps is about 50, but the pictures are black and white. The command-line output is:

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 277
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 277
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=86, duration=-1

Camera information

470-W10DG:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Index         : 0
    Type          : Video Capture
    Pixel Format  : 'MJPEG' (compressed)
    Name          : Motion-JPEG
		Size: Discrete 640x240
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 960x960
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1264x960
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x960
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x480
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 2560x720
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 2560x960
			Interval: Discrete 0.017s (60.000 fps)
    Index         : 1
    Type          : Video Capture
    Pixel Format  : 'YUYV'
    Name          : YUYV 4:2:2
		Size: Discrete 640x240
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.200s (5.000 fps)
		Size: Discrete 960x960
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 1264x960
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 1280x960
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 1280x480
			Interval: Discrete 0.100s (10.000 fps)
		Size: Discrete 2560x720
			Interval: Discrete 0.200s (5.000 fps)
		Size: Discrete 2560x960
			Interval: Discrete 0.200s (5.000 fps)

More information:
JetPack version: 4.4.1 [L4T 32.4.4]
OpenCV: 4.1.1, compiled with CUDA: NO

How do I capture MJPG in OpenCV on a Jetson Nano?

Hi,
Due to a conflict between libjpeg.so and libnvjpeg.so, we don't support this case. Please refer to
OpenCV with libnvjpeg - #5 by DaneLLL

Do you mean I cannot use the 'MJPG' format in OpenCV on the Nano?

How can I get 'MJPG'-format data from a USB camera on the Jetson Nano and use it in my program?

Hi,
Please use the jpegdec plugin and try this string:

std::string gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegdec ! videoconvert ! video/x-raw,format=BGR ! appsink";

Thanks a lot. It works now.

Hi,
This string should also work:

std::string gst_str = "v4l2src io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=BGR ! appsink";

You may give it a try.

Hi, I have tried this string

std::string gst_str = "v4l2src io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=BGR ! appsink";

But it returned an error; the related warnings were:

nvbuf_utils: Invalid memsize=0
NvBufferCreateEx with memtag 5376 failed
[ WARN:0] global /home/nvidia/host/build_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvv4l2decoder0 reported: Failed to allocate required memory.

And I found that if I use jpegdec + videoconvert to get camera data in OpenCV:

std::string gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegdec ! videoconvert ! video/x-raw,format=BGR ! appsink";
cv::VideoCapture cap(gst_str, cv::CAP_GSTREAMER);

and display by:

cv::Mat img;
while (true) {
    cap.read(img);
    cv::imshow("video", img);
    cv::waitKey(1);
}

It works, but the program seems to take up far more CPU than capturing and displaying the video from the command line:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegdec ! xvimagesink

I use jtop to check CPU usage.
In OpenCV, the usage looks like:

CPU1    66%
CPU2    100%
CPU3    63%
CPU4    61%

From the command line, the usage is:

CPU1   42%
CPU2   23%
CPU3   25%
CPU4   21%

To measure the CPU cost of reading and decoding the data alone, I commented out cv::imshow() in OpenCV:

while (true) {
    cap.read(img);
    // cv::imshow("video", img);
    cv::waitKey(1);
}

And the CPU usage looks like:

CPU1   100%
CPU2   32%
CPU3   31%
CPU4   41%

From the command line:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg,width=1280,height=480,framerate=60/1 ! jpegdec ! videoconvert ! video/x-raw,format=BGR ! appsink

The CPU usage is about:

CPU1  21%
CPU2  15%
CPU3  100%
CPU4  13%

So I find that it is the videoconvert ! video/x-raw,format=BGR stage that costs a lot of CPU.
My question is: is there a way to read the camera data, decode it, and convert it to an OpenCV-compatible format that uses less CPU?

Hi,

No. Since the BGR format is not supported by the hardware blocks on Jetson platforms, a software converter is needed for the conversion, which consumes a certain amount of CPU.
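(Editor's note: a commonly suggested way to reduce that cost is to let nvvidconv produce BGRx on the hardware, so that videoconvert only has to drop the fourth channel instead of doing a full colour-space conversion. This is an untested sketch, not a measured result, and it assumes the nvv4l2decoder memory-allocation error seen earlier does not occur on your setup:)

```
std::string gst_str = "v4l2src device=/dev/video0 io-mode=2 ! "
                      "image/jpeg,width=1280,height=480,framerate=60/1 ! "
                      "jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! "
                      "video/x-raw,format=BGRx ! videoconvert ! "
                      "video/x-raw,format=BGR ! appsink";
```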

Thanks