Bus error with gstreamer and opencv


I am trying to use a GStreamer pipeline for hardware decoding, since my USB camera outputs MJPEG. Decoding the JPEG on the CPU causes lag and makes the video stream fall behind real time, even though the clocks are already set to MAXN mode, meaning all 6 cores run at 2 GHz. I would therefore like to use the hardware decoder for real-time purposes.

I can definitely acquire frames by using gst-launch-1.0. Here is the command.

gst-launch-1.0 v4l2src device=/dev/video1 io-mode=2 ! image/jpeg, width=1920, height=1080, framerate=30/1 ! jpegparse ! nvjpegdec ! videoconvert ! nvoverlaysink sync=false

However, when I try to implement the pipeline in OpenCV, a bus error shows up. Here is the code.

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/core/mat.hpp>
#include <string>
#include <iostream>

std::string pipeline = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1920, height=1080, framerate=30/1 ! jpegparse ! nvjpegdec ! videoconvert ! appsink sync=false";
cv::VideoCapture cap;
cv::Mat frame;

int main(void)
{
	std::cout << "Opening" << std::endl;
	cap.open(pipeline, cv::CAP_GSTREAMER);
	if (!cap.isOpened()) {
		std::cerr << "Failed to open pipeline" << std::endl;
		return -1;
	}
	std::cout << "Open Successfully" << std::endl;
	while (1) {
		cap.read(frame);
	}
	return 0;
}

When I switch nvjpegdec to jpegdec, the pipeline works, but with lag.

In the gst-launch command it opens /dev/video1, but in the OpenCV code it opens /dev/video0. Probably there is a typo.

Also please check if it works with fakesink:

$ gst-launch-1.0 v4l2src device=/dev/video1 io-mode=2 ! image/jpeg, width=1920, height=1080, framerate=30/1 ! jpegparse ! nvjpegdec ! video/x-raw ! videoconvert ! video/x-raw,format=BGR ! fakesink sync=false

If it works with fakesink, it should work fine by simply replacing fakesink with appsink in the OpenCV code.
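To make the appsink variant easier to adapt, the pipeline string can be assembled from its parameters before being handed to cv::VideoCapture. This is a sketch; the helper name make_pipeline is hypothetical, and forcing BGR caps before appsink is an assumption based on OpenCV expecting BGR-ordered cv::Mat data:

```cpp
#include <cassert>
#include <string>

// Build the GStreamer pipeline string for OpenCV's appsink.
// The "format=BGR" caps before appsink match the layout OpenCV expects.
std::string make_pipeline(const std::string& device, int width, int height, int fps) {
    return "v4l2src device=" + device + " io-mode=2"
           " ! image/jpeg, width=" + std::to_string(width) +
           ", height=" + std::to_string(height) +
           ", framerate=" + std::to_string(fps) + "/1"
           " ! jpegparse ! nvjpegdec ! video/x-raw"
           " ! videoconvert ! video/x-raw, format=BGR"
           " ! appsink sync=false";
}
```

The result would then be opened with cv::VideoCapture cap(make_pipeline("/dev/video1", 1920, 1080, 30), cv::CAP_GSTREAMER);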

For more information, please share your release version ($ head -1 /etc/nv_tegra_release).

Hello DaneLLL, the JetPack version is 4.3. Also, I realized that there is a typo in the OpenCV pipeline string: it should be /dev/video1, which refers to the USB camera.

Also, I tried the multimedia sample 12. It works fine; however, I cannot find a way to use the Multimedia API outside the sample code. I expect to build the code in a CMake environment.

12_camera_v4l2_cuda is built with a Makefile. Please check.
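For moving the sample into a CMake project, a minimal CMakeLists.txt could look roughly like the sketch below. The paths are assumptions for a JetPack 4.x install (Multimedia API headers under /usr/src/tegra_multimedia_api, Tegra libraries under /usr/lib/aarch64-linux-gnu/tegra) and may need adjusting:

```cmake
# Hypothetical CMakeLists.txt sketch; paths are assumptions for JetPack 4.x.
cmake_minimum_required(VERSION 3.13)
project(mm_api_demo)

add_executable(mm_api_demo main.cpp)

# Multimedia API headers ship as source under /usr/src on the Jetson.
target_include_directories(mm_api_demo PRIVATE
    /usr/src/tegra_multimedia_api/include)

# nvbuf_utils and friends live in the Tegra library directory.
target_link_directories(mm_api_demo PRIVATE
    /usr/lib/aarch64-linux-gnu/tegra)
target_link_libraries(mm_api_demo nvbuf_utils v4l2 pthread)
```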


I will definitely look into that. However, is there any solution for the bus error, since I would prefer to use the pipeline in OpenCV? The issue is the nvjpegdec plugin.

Actually, I think it might be libjpeg conflicting with libnvjpeg. I tried to use libnvjpeg as the JPEG library for OpenCV; however, building from source shows an error:

[ 58%] Built target opencv_calib3d
[ 58%] Building CXX object modules/dnn/CMakeFiles/opencv_dnn.dir/src/torch/THFile.cpp.o
[ 58%] Building CXX object modules/dnn/CMakeFiles/opencv_dnn.dir/src/torch/THGeneral.cpp.o
[ 58%] Building CXX object modules/dnn/CMakeFiles/opencv_dnn.dir/src/torch/torch_importer.cpp.o
[ 58%] Building CXX object modules/dnn/CMakeFiles/opencv_dnn.dir/opencl_kernels_dnn.cpp.o
[ 58%] Linking CXX shared library ../../lib/libopencv_dnn.so
[ 58%] Built target opencv_dnn
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2

Is there any modified OpenCV that I can use with Jetson hardware acceleration?


You are not able to use JPEG hardware decoding in OpenCV because libjpeg and libnvjpeg cannot co-exist. When running a gstreamer pipeline in OpenCV, please use the jpegdec plugin, or have v4l2src output another format such as UYVY or YUYV. Please follow this to list all formats the source supports.
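The two workarounds above can be written as pipeline strings along these lines. These are sketches: the YUYV resolution and framerate (1280x720 here) are placeholders that must match a mode the camera actually advertises, which you can check with v4l2-ctl --list-formats-ext:

```cpp
#include <cassert>
#include <string>

// Software JPEG decode: avoids the libjpeg/libnvjpeg conflict at the cost of CPU load.
const std::string jpegdec_pipeline =
    "v4l2src device=/dev/video1 io-mode=2"
    " ! image/jpeg, width=1920, height=1080, framerate=30/1"
    " ! jpegparse ! jpegdec ! videoconvert ! appsink sync=false";

// Raw capture: skip JPEG entirely if the camera also offers YUYV.
// Note GStreamer names the YUYV fourcc "YUY2" in its caps.
const std::string yuyv_pipeline =
    "v4l2src device=/dev/video1"
    " ! video/x-raw, format=YUY2, width=1280, height=720, framerate=30/1"
    " ! videoconvert ! video/x-raw, format=BGR ! appsink sync=false";
```

Either string is passed to cv::VideoCapture with the cv::CAP_GSTREAMER flag, as in the code earlier in the thread.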

Thank you for the information. By the way, does the Jetson Multimedia API have a shared library that I can use to combine V4L2 and CUDA?

Please check cuda_postprocess() in 12_camera_v4l2_cuda. It demonstrates CUDA processing through the NvBuffer APIs. The APIs are defined in nvbuf_utils.h.