How do I capture an image (using the Raspberry Pi v2.0 camera) in OpenCV on the Nano?

Hi,

How do I capture an image (using the Raspberry Pi v2.0 camera) in OpenCV on the Nano?
(Raw, so 8 bits each for R, G, B, 24 bits/pixel total; 10 bits would be better, but well…)
The code I tried is below but doesn't work.
If I don't specify the capture (camera) it gives an error, so I looked around and found something, but if I configure it like in the code below it still gives an error.

Any help would be appreciated; just a simple piece of code that captures an image.
Do I need to install additional components? If so, please let me know.

Thanks!

#include <opencv2/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.hpp>

#include "opencv2/objdetect.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>

using namespace std;
using namespace cv;

int main()
{
VideoCapture cap;

//! from here:
// https://devtalk.nvidia.com/default/topic/1025356/how-to-capture-and-display-camera-video-with-python-on-jetson-tx2/

cap = VideoCapture("udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER);

// open the default camera, use something different from 0 otherwise;
// Check VideoCapture documentation.
if(!cap.open(0))
    return 0;

for(;;)
  {
  Mat frame;
  cap >> frame;
  if( frame.empty() ) 
    break; // end of video stream

  imshow("Hello captured image :)", frame);

  if (waitKey(10) == 27)
    break; // stop capturing by pressing ESC 
  }

// the camera will be closed automatically upon exit
// cap.close();
return 0;

}

The error:

/home/nano/cv-hello/hello.cpp: In function ‘int main()’:
/home/nano/cv-hello/hello.cpp:17:21: error: expected primary-expression before ‘(’ token
cap = VideoCapture(“udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! video/x-raw, format=(string)BGR ! appsink”, cv.CAP_GSTREAMER);
^
/home/nano/cv-hello/hello.cpp:17:195: error: expected primary-expression before ‘.’ token
4dec ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv.CAP_GSTREAMER);
^
CMakeFiles/cv_hello.dir/build.make:62: recipe for target ‘CMakeFiles/cv_hello.dir/hello.cpp.o’ failed
make[2]: *** [CMakeFiles/cv_hello.dir/hello.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target ‘CMakeFiles/cv_hello.dir/all’ failed
make[1]: *** [CMakeFiles/cv_hello.dir/all] Error 2
Makefile:83: recipe for target ‘all’ failed
make: *** [all] Error 2
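The compile error, by the way, comes from using the Python-style cv2 prefix in C++: cv2 is the Python module name, and in C++ the constant lives in the cv namespace, accessed with :: rather than a dot. A corrected construction would look like the sketch below (note that the two-argument constructor taking an API preference requires OpenCV 3.4 or later, and that the later cap.open(0) call should be dropped, since it would reopen the default camera instead of the pipeline):

// cv::CAP_GSTREAMER, not cv2.CAP_GSTREAMER (cv2 is Python-only)
cv::VideoCapture cap("udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv::CAP_GSTREAMER);
if (!cap.isOpened())   // do not call cap.open(0) afterwards
    return 0;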

You may try this:

// simple_gst_capture.cpp

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

int main()
{
     // std::cout << cv::getBuildInformation() << std::endl; 

     const char* gst = "nvarguscamerasrc  ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)30/1 ! \
			nvvidconv         ! video/x-raw,              format=(string)BGRx ! \
			videoconvert      ! video/x-raw,              format=(string)BGR  ! \
			appsink";

    cv::VideoCapture cap(gst);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return -1;
    }

    unsigned int width  = cap.get(cv::CAP_PROP_FRAME_WIDTH);
    unsigned int height = cap.get(cv::CAP_PROP_FRAME_HEIGHT);
    unsigned int fps    = cap.get(cv::CAP_PROP_FPS);
    unsigned int pixels = width * height;
    std::cout << " Frame size : " << width << " x " << height << ", " << pixels << " pixels, " << fps << " FPS" << std::endl;

    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in;

    while (1)
    {
        if (!cap.read(frame_in)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }

        cv::imshow("MyCameraPreview", frame_in);
        cv::waitKey(1);
    }

    cap.release();
    return 0;
}
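Since the original question was about grabbing a single still image, the waitKey(1) call inside the loop above could be extended to save a frame on demand. A small untested sketch (the 's' key and the filename are my own example choices):

// Replace the plain cv::waitKey(1) in the loop with:
int key = cv::waitKey(1);
if (key == 's')                            // press 's' to save a still image
    cv::imwrite("capture.png", frame_in);  // example filename
else if (key == 27)                        // ESC quits the loop
    break;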

Build with:

g++ -std=c++11 -Wall -I<path_to_your_opencv_headers> simple_gst_capture.cpp -L<path_to_your_opencv_libs> -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_gst_capture
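If you would rather build with CMake, a minimal CMakeLists.txt could look like this sketch (the project and file names are just examples):

cmake_minimum_required(VERSION 3.5)
project(simple_gst_capture)

# find_package locates the installed OpenCV headers and libraries
find_package(OpenCV REQUIRED)

add_executable(simple_gst_capture simple_gst_capture.cpp)
target_include_directories(simple_gst_capture PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(simple_gst_capture ${OpenCV_LIBS})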

Argus is another option for accessing the MIPI CSI camera instead of GStreamer; you can give it a try. You can find it in the Multimedia API package.

Thanks for the samples, guys, I get images!
Much appreciated!

How do you turn off the auto exposure on the Raspberry Pi v2.0 camera?

Using the Raspberry Pi v2.0 camera, I tried to turn off the auto exposure but wasn't successful.
I tried via the pipeline and via the cap properties, but neither seemed to work.

Is there a tutorial on the GStreamer pipeline parameters, or on what does/doesn't work with the OpenCV CAP_xxx properties?

I tried this (it didn't work):

const char* gst = "nvarguscamerasrc  ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)120/1 ! \
			nvvidconv         ! video/x-raw,              format=(string)BGRx ! \
			videoconvert      ! video/x-raw,              format=(string)BGR  ! \
			appsink, auto-exposure=0, exposure-time=.05";

And I tried this (it doesn't work either):

cap.set(cv::CAP_PROP_EXPOSURE, (double) 15000);  //! between 13000 and 683709000

Also, I'm new to the Nano/Linux in general, so I used CMake (default settings). I get images, so that worked, but would compiler options matter?

Regards,
bjorn
PS: I tried to reply before but kept getting error 15; now it's clear that it has to do with the content.

Setting exposure the way you're trying may only work with the V4L API; for a GStreamer pipeline you would instead set properties on the camera source plugin.
Here you are using nvarguscamerasrc to control the camera, so you can check the available options with:

gst-inspect-1.0 nvarguscamerasrc

Then you can test options with gst-launch, using xvimagesink for display (it requires an X server; if working remotely, log in with ssh -Y), for example:

gst-launch-1.0 nvarguscamerasrc wbmode=0 awblock=true gainrange="8 8" ispdigitalgainrange="4 4" exposuretimerange="5000000 5000000" aelock=true ! nvvidconv ! xvimagesink

When you have your expected settings, just add these options after nvarguscamerasrc in the GStreamer pipeline in your OpenCV code, for example:

const char* gst = "nvarguscamerasrc wbmode=0 awblock=true gainrange=\"8 8\" ispdigitalgainrange=\"4 4\" exposuretimerange=\"5000000 5000000\" aelock=true ! video/x-raw(memory:NVMM), ...

Using a pipeline similar to the ones listed above, I receive camera images. The image capture time is ~1 ms, but the latency of the pipeline is extreme, ~2 s! The latency is a show stopper. Any suggestions would be greatly appreciated.

Hi Rich,

I don't have lag.
The code is below (I had to cut out some parts I was testing, so this exact code is untested, but the idea works fine), thanks to the help of HP.
Integration time is 5 ms, framerate 120 fps at 640x480.

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv/cv.h>

using cv::Vec3b;


//******************************************************************************************************
int main()
//******************************************************************************************************
{
double dblR, dblG, dblB;
long lClipCountR, lClipCountG, lClipCountB;
double dblRGBSum, dblRGBAvg;
int nKeepGoing = 1;

//works!!   const char* gst = "nvarguscamerasrc wbmode=0 awblock=true gainrange=\"8 8\" ispdigitalgainrange=\"4 4\" exposuretimerange=\"50000000 50000000\" aelock=true ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)120/1 ! \
			nvvidconv         ! video/x-raw,              format=(string)BGRx ! \
			videoconvert      ! video/x-raw,              format=(string)BGR  ! \
			appsink";
   const char* gst = "nvarguscamerasrc wbmode=0 awblock=true gainrange=\"1 1\" ispdigitalgainrange=\"2 2\" exposuretimerange=\"50000000 50000000\" aelock=true ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)120/1 ! \
			nvvidconv         ! video/x-raw,              format=(string)BGRx ! \
			videoconvert      ! video/x-raw,              format=(string)BGR  ! \
			appsink";


    cv::VideoCapture cap(gst);
    if (!cap.isOpened())
    {
        std::cout << "Failed to open camera." << std::endl;
        return -1;
    }



    unsigned int width  = cap.get(cv::CAP_PROP_FRAME_WIDTH); 
    unsigned int height = cap.get(cv::CAP_PROP_FRAME_HEIGHT); 
    unsigned int fps    = cap.get(cv::CAP_PROP_FPS);
    unsigned int pixels = width*height;
    std::cout <<" Frame size : "<<width<<" x "<<height<<", "<<pixels<<" Pixels "<<fps<<" FPS"<<std::endl;


    cv::namedWindow("MyCameraPreview", cv::WINDOW_AUTOSIZE);
    cv::Mat frame_in;

    while (nKeepGoing > 0)
    {
        if (!cap.read(frame_in))
        {
            std::cout << "Capture read error" << std::endl;
            break;
        }
        else
        {
            cv::imshow("MyCameraPreview", frame_in);  //! show the frame

            if (cv::waitKey(1) >= 0)   //! any key press stops capturing
                nKeepGoing = 0;
        }
    } //! while (nKeepGoing > 0)



    cap.release();
    return 0;
}
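Regarding the ~2 s latency Rich reported: one common tweak (a suggestion I have not verified on the Nano) is to let appsink discard stale frames instead of queueing them, by ending the pipeline with

appsink drop=true max-buffers=1

so that OpenCV always reads the most recent frame.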

Thanks for the information. Actually, I'm new to using the Jetson Nano. I also need to capture an image (of text), then save it in a variable to later feed to an OCR model.
The thing is, I need to capture the image whenever the user wants to capture it.
Can you please let me know your ideas on how I can do that using the Jetson Nano and the R Pi camera v2?

Any help is much appreciated:)

Please refer to https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/ to see if it can help.
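For the capture-on-demand part, a minimal untested sketch (reusing the nvarguscamerasrc pipeline from the samples above; the key choice and filename are my own examples) would be to preview frames and copy one into a cv::Mat when the user presses a key; that Mat is the variable you can feed to your OCR model:

#include <opencv2/opencv.hpp>

int main()
{
    // Same kind of pipeline as in the earlier samples.
    const char* gst = "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)480, framerate=(fraction)30/1 ! \
                       nvvidconv ! video/x-raw, format=(string)BGRx ! \
                       videoconvert ! video/x-raw, format=(string)BGR ! appsink";

    cv::VideoCapture cap(gst);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, captured;           // 'captured' is the variable to hand to OCR
    while (cap.read(frame))
    {
        cv::imshow("Preview", frame);
        int key = cv::waitKey(1);
        if (key == 'c')                // user requests a capture
        {
            captured = frame.clone();  // deep copy, safe to keep after the loop
            break;
        }
        if (key == 27)                 // ESC quits without capturing
            break;
    }
    cap.release();

    if (!captured.empty())
        cv::imwrite("ocr_input.png", captured);  // or pass 'captured' to the OCR model

    return 0;
}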