Sony camera module cannot be opened with OpenCV on Xavier

I tested with the following C++ code:

std::string gst_str = "v4l2src device=/dev/video0 ! video/x-raw, width=1280, height=720, format=(string)UYVY, framerate=(fraction)30/1 ! videoconvert ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";

cv::VideoCapture cap(gst_str, cv::CAP_GSTREAMER);

if (!cap.isOpened()) {
    std::cout << "Failed to open camera." << std::endl;
    return -1;
}

It always prints “Failed to open camera.”

I’m using JetPack 4.1.1 and a GMSL camera module with the Sony ISX016.

Please advise what could be wrong.

Additional information:

  1. The following terminal command works:
gst-launch-1.0 -v v4l2src device="/dev/video0" ! "video/x-raw, width=1280,height=720, format=(string)UYVY" ! nvvidconv ! "video/x-raw(memory:NVMM), width=640, height=360" ! nvvidconv ! xvimagesink
  2. Replacing “v4l2src device=/dev/video0” with “nvarguscamerasrc” in the above C++ code and testing with the TX2 onboard camera also works.

Hi,
Please confirm the camera supports 1280x720p30 UYVY:

$ v4l2-ctl -d /dev/video0 --list-formats-ext

Also, JetPack 4.1.1 is a developer preview. We suggest you upgrade to the latest version.

Hi DaneLLL,
Sorry, by the time I saw your reply, my Xavier kit appeared to be corrupted and could not boot up; it hangs at “Started update utmp about system runlevel changes”.

Yes, my camera does support 1280x720@30 UYVY. I suspect this issue is related to the V4L2 driver; I’m not sure whether it works properly. The reason is that when I change one line of the code to

cv::VideoCapture cap(0);

The camera can be opened! But it can only capture video for several seconds before the window turns green, which then causes the system to hang.

This symptom looks similar to the link below (for a Raspberry Pi web camera), which was resolved by installing the proper V4L2 driver. But I’m not sure whether this is the case here, because my cameras connect over the MIPI port via an adapter board with Maxim SerDes chips (6 cameras in total).

https://www.raspberrypi.org/forums/viewtopic.php?t=126358

I am still using JetPack 4.1.1 because the camera module driver does not yet support newer JetPack versions. Does OpenCV work well with JetPack 4.1.1?

Hi,
We have verified the following sample on r32.2.1/Xavier.

#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/types_c.h>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,format=UYVY,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink");

  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for (;;)
    {
      Mat frame;
      cap >> frame;
      if (frame.empty())
        break;
      imshow("original", frame);
      waitKey(1);
    }

  cap.release();
  return 0;
}
$ g++ -o simple_opencv -Wall -std=c++11 simple_opencv_v4l2.cpp -I/usr/local/include/opencv4/ -L/usr/local/lib/ -lopencv_core -lopencv_highgui -lopencv_imgproc -lopencv_videoio

Script for installing OpenCV:

You may check whether it works on r31.1. Upgrading to an r32 release is still required once you go to the production stage.

Hi DaneLLL,
It works after upgrading. Thanks!

Hi DaneLLL,

I’m using r32.2.1/Xavier. I expanded your code to support 5 CSI cameras on the MIPI port, and it works. But the problem is that the display frame rate is quite slow, around 0.5 FPS; we need at least 15 FPS. Is this a limitation of OpenCV? DeepStream 4 looks OK.

I have 3 more questions for your help:
(1) In your code, you use the following to capture camera video, which uses the default apiPreference:

VideoCapture cap("v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,format=UYVY,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink");

If I change the apiPreference to cv::CAP_GSTREAMER, will it help speed up the frame rate?

(2) I tried to pass cv::CAP_GSTREAMER to VideoCapture, but compilation failed. What could be the reason? (Does it require OpenCV to be built with GStreamer support?)

(3) Are there any other ways to increase the frame rate (e.g., by adding GPU/CUDA support)?

Thanks,
Andrew

Hi,
OpenCV is a CPU-based implementation. One more thing you can try is to execute ‘sudo jetson_clocks’ to run all CPUs at max clocks.

We suggest you try tegra_multimedia_api. You may start with 12_camera_v4l2_cuda, which is the sample for v4l2 capture. Furthermore, you may refer to
https://devtalk.nvidia.com/default/topic/1047563/jetson-tx2/libargus-eglstream-to-nvivafilter/post/5319890/#5319890
which demonstrates how to use the cv::cuda APIs. The sample should also be valid on r32.2.1 + OpenCV 4.1.1.

Hi DaneLLL,

I tried to run the 12_camera_v4l2_cuda command; it shows a black window for a while and then exits.
Do I need to add some source code, e.g. the gst pipeline string for the camera?

nvidia@nvidia-desktop:/usr/src/tegra_multimedia_api/samples/12_camera_v4l2_cuda$ ./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f UYVY -n 30 -c
[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 640 height 480
WARN: request_camera_buff(): (line:378) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:378) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:378) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:378) Camera v4l2 buf length is not expected
----------- Element = renderer0 -----------
Total Profiling time = 0
Average FPS = 0
Total units processed = 0
Num. of late units = 0
-------------------------------------
App run was successful

Another issue, which could be a known one: with OpenCV 4.1.1 and GStreamer 1.4.5, when opening more than one camera, the system crashes after several minutes. It gives this warning when a camera is opened:

[ WARN:0] global /home/nvidia/opencv4.1.1/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[ WARN:0] global /home/nvidia/opencv4.1.1/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

Hi,
Since your camera supports 1280x720p30 UYVY, please run

12_camera_v4l2_cuda$ ./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f UYVY

Hi DaneLLL,
I see, thanks. It works now.
Let me try to work with multi cameras based on this code. Any advice is appreciated!

Hi DaneLLL,

I have several questions for your help. My basic goal is to capture video from multiple CSI cameras (with V4L2), then retrieve and convert the captured frame data to ROS (OpenCV Mat format).

(1) Is it possible to use C++ code based on your example code mentioned above? In my test with multiple cameras, the frame rate is slow and the system eventually crashes. Is that normal?

(2) I also tested the 12_camera_v4l2_cuda code with one camera, and it’s OK. Is there an easy way to make it support multiple cameras?

(3) DeepStream 4 is a good tool for capturing from multiple cameras. Would you please recommend a method to extract the video frame data from it for several streams? I found that deepstream-test1-app can do a similar job, but it supports only one H.264 input stream.

I need a proven solution as my customer needs it urgently. Please help, thanks.

Andrew

Hi,

(1) Does it work if you run

$ ./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f UYVY & ./camera_v4l2_cuda -d /dev/video1 -s 1280x720 -f UYVY

(2) You have to customize the code to be multi-threaded. Each thread opens one camera through the v4l2 interfaces:

ctx->cam_fdX = open("/dev/videoX", O_RDWR);
ioctl(ctx->cam_fdX, VIDIOC_S_FMT, &fmt);
ioctl(ctx->cam_fdX, VIDIOC_G_FMT, &fmt);
...

(3) You can begin with deepstream-app. Below is a config file for running four USB cameras, FYR.
https://devtalk.nvidia.com/default/topic/1058334/deepstream-sdk/deepstream4-jetson-nano-multiple-webcams-issue/post/5390583/#5390583


Hi DaneLLL,

Yes, it works. Launching and then exiting with Ctrl+C, the messages are as follows (some of the messages were added by me earlier). As the display window cannot be moved, I’m not sure whether there are one or two windows.

From the messages below, if I’m not wrong, we can see that before camera1 is opened, start_capture for camera0 is stopped and stop_stream for camera0 is executed. Does this mean only one camera is open at any time?

I believe that, to make multiple cameras work at the same time, modifying start_capture() for multi-threaded processing could be necessary, but it does not look simple. I could be wrong; please advise.

BTW, I also tried running this code in different terminals to open 6 cameras with 6 windows working at the same time, and it’s OK.

$ ./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f UYVY & ./camera_v4l2_cuda -d /dev/video1 -s 1280x720 -f UYVY
[1] 10634
GST stream number is 2 

Now device video ... 0

device video0
Try to open camera /dev/video0 ......
GST stream number is 2 

Now device video ... 0

device video0
Try to open camera /dev/video0 ......
Opend camera /dev/video0

Opend camera /dev/video0

[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 1280 height 720
[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 1280 height 720

init_components OK!

WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected

init_components OK!

ERROR: request_camera_buff(): (line:405) Failed to request v4l2 buffers: Device or resource busy (16)
ERROR: prepare_buffers(): (line:568) Failed to set up camera buff

start_stream OK!

Failed to prepare v4l2 buffsApp run failed
^CQuit due to exit command from user!
----------- Element = renderer0 -----------
Total Profiling time = 16.3269
Average FPS = 30.0119
Total units processed = 491
Num. of late units = 0
-------------------------------------
start_capture OK!

stop_stream OK!
Now device video ... 1

device video1
Try to open camera /dev/video1 ......
Opend camera /dev/video1

[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 1280 height 720

init_components OK!

WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:428) Camera v4l2 buf length is not expected

start_stream OK!

----------- Element = renderer0 -----------
Total Profiling time = 0
Average FPS = 0
Total units processed = 0
Num. of late units = 0
-------------------------------------
start_capture OK!

stop_stream OK!
App run was successful
[1]+  Exit 255                ./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f UYVY

Hi,
I don’t see existing code for running multiple v4l2 cameras in one process. You may do the integration yourself.

Also, you can modify the code to configure x_offset and y_offset in NvEglRenderer to show the windows at different positions.
https://docs.nvidia.com/jetson/archives/l4t-multimedia-archived/l4t-multimedia-322/classNvEglRenderer.html#a215b566918c6e9a8674f9fb6bdb06187

Thanks DaneLLL, let me try it…

Hi DaneLLL,
Regarding DeepStream, which can work with our 6 CSI cameras: to retrieve the frame data of the 6 video streams, I refer to
https://devtalk.nvidia.com/default/topic/1060956/deepstream-sdk/access-frame-pointer-in-deepstream-app/post/5375214/#5375214

I believe this should be able to get what I want, but I have not been successful yet.

Someone said that with DeepStream we need to convert the MIPI data to RTSP first, and then get the frame data with some delay.
I’d like to confirm with you: if we use deepstream-app to get frame data from 6 CSI cameras, is it necessary to convert MIPI to RTSP in advance? If yes, how do we convert it? If not, is the code published in the above-mentioned link enough?

Hi,
Your cameras are v4l2 sources. This is supported in deepstream-app:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_config.3.2.html%23wwpID0E0QB0HA

Please refer to the config file that launches 4 USB cameras (also v4l2src).
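For reference, a single camera source group in a deepstream-app config might look like the sketch below. This is an assumption-laden example, not taken from the thread: the key names follow the DeepStream source-group configuration (type=1 selects CameraV4L2), and the values match the 1280x720p30 cameras discussed here. One [sourceN] group would be added per camera.

```ini
[source0]
enable=1
# type=1 selects CameraV4L2 in deepstream-app
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# index N of /dev/videoN
camera-v4l2-dev-node=0
```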