I just started with OpenCV on the Nano today. I am still finding my way around the Nano.
I can compile simple OpenCV programs.
I added the Raspberry Pi NoIR Camera V2.
This is the error I receive:
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/OpenCV/modules/videoio/src/cap_gstreamer.cpp, line 887 VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:
/home/nvidia/build_opencv/OpenCV/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer.
What didn’t I do?
Thanks for any help,
UPDATE: I found this command and ran it from a terminal, and I can see the camera's video output, so the camera is working, just not in my OpenCV app:
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw, width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
Do I need additional code for the OpenCV on the Nano that is not needed elsewhere?
I receive the error, that I posted previously, directly after calling: cv::VideoCapture camera(0);
Can you confirm the Pi Noir 2 camera works, both on a Pi, and outside of OpenCV on the Nano through other example camera applications?
In python3, can you please paste the output of:
>>> import cv2
It does work on my Nano. I ran it using the code in my UPDATE post.
Sorry, but I am not familiar with Python.
Do you have anything I can run from terminal mode?
I found code on GitHub that passes a string specifying the camera's pipeline and parameters instead of just using “0” as the camera index, and it worked.
If you use a USB cam, you may just use its V4L index, such as 1 if it is /dev/video1. Whether that works depends on what formats your camera provides (see v4l2-ctl -d /dev/video1 --list-formats-ext), what your OpenCV version accepts as input, and what it can convert to. It may also be possible to use a gstreamer pipeline for conversion if your OpenCV build has gstreamer support.
A CSI camera providing Bayer frames usually goes through the ISP for debayering, as nvarguscamerasrc does, and delivers YUV frames (such as NV12 or I420) into NVMM memory. In that case it is usually mandatory to use a gstreamer pipeline with nvvidconv to copy the frames into CPU memory and send them to appsink.
One option is to use nvvidconv to output BGRx and then videoconvert to provide BGR to appsink.
You may alternatively read I420 or NV12 frames directly with a recent version of OpenCV, but if you then need to convert to RGB it may not be faster than videoconvert (as of today, no CUDA conversion for this is available in OpenCV, AFAIK).