Feeding NV12 into OpenCV

Hi,

I’m trying to measure the latency of my camera by pointing it at a bright LED and measuring the time between the LED turning on and an output sent via GPIO. The GPIO output is triggered when a pixel in the center of the camera matrix goes above a brightness threshold.

Currently, my code only works if I convert NV12 to BGR. Is there a way to feed NV12 directly to “VideoCapture”?

For reference, here’s my code:

#include <opencv2/opencv.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <JetsonGPIO.h>
#include <chrono>
#include <thread>

using namespace cv;
using namespace std;

const int Led = 7;
const int Output = 15;

std::string gstreamer_pipeline(int capture_width, int capture_height, int framerate) {
    return "nvarguscamerasrc exposuretimerange='37000 37000' aelock=true"
           " ispdigitalgainrange='1 1' gainrange='1 1'"
           " aeantibanding=0 wbmode=0 ee-mode=0 tnr-mode=0"
           " ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) +
           ", height=(int)" + std::to_string(capture_height) +
           ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) +
           "/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert"
           " ! video/x-raw, format=(string)BGR ! appsink";
}




int main()
{
    GPIO::setmode(GPIO::BOARD);
    GPIO::setup(Led, GPIO::OUT, GPIO::LOW);
    GPIO::setup(Output, GPIO::OUT, GPIO::LOW);

    int capture_width = 1080;
    int capture_height = 1440;
    int framerate = 60;

    std::string pipeline = gstreamer_pipeline(capture_width, capture_height, framerate);
    std::cout << "Using pipeline: \n\t" << pipeline << "\n";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return -1;
    }

    cv::Mat img;
    while (true)
    {
        if (!cap.read(img)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }

        GPIO::output(Led, GPIO::HIGH);
        cv::Vec3b pixel = img.at<cv::Vec3b>(720, 540);  // center pixel (row, col)
        if (pixel.val[2] >= 10) {                       // red channel over threshold
            GPIO::output(Output, GPIO::HIGH);
            break;
        }
        // else cout << "Threshold not passed " << pixel.val[2] << endl;
    }

    GPIO::output(Led, GPIO::LOW);
    GPIO::output(Output, GPIO::LOW);
    cout << "GPIO CLEANED" << endl;
    cap.release();
    cv::destroyAllWindows();
    return 0;
}

Hi,
Please refer to this Python sample:

It sends I420 to appsink, but NV12 should work too; it is listed in the OpenCV source:
opencv/cap_gstreamer.cpp at master · opencv/opencv · GitHub

    // we support 11 types of data:
    //     video/x-raw, format=BGR   -> 8bit, 3 channels
    //     video/x-raw, format=GRAY8 -> 8bit, 1 channel
    //     video/x-raw, format=UYVY  -> 8bit, 2 channel
    //     video/x-raw, format=YUY2  -> 8bit, 2 channel
    //     video/x-raw, format=YVYU  -> 8bit, 2 channel
    //     video/x-raw, format=NV12  -> 8bit, 1 channel (height is 1.5x larger than true height)
    //     video/x-raw, format=NV21  -> 8bit, 1 channel (height is 1.5x larger than true height)
    //     video/x-raw, format=YV12  -> 8bit, 1 channel (height is 1.5x larger than true height)
    //     video/x-raw, format=I420  -> 8bit, 1 channel (height is 1.5x larger than true height)
    //     video/x-bayer             -> 8bit, 1 channel
    //     image/jpeg                -> 8bit, mjpeg: buffer_size x 1 x 1

You can try modifying it to NV12 and doing the NV12-to-BGR conversion yourself.

Hi,

I’m using OpenCV in C++, but I think I can try this pipeline. I’ve tried one similar to yours above:

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1 ! appsink

And that pipeline works from the terminal, but not in OpenCV. Let me try the pipeline you gave, and I will report back with good or bad news.

OpenCV’s appsink cannot read from NVMM memory. Use nvvidconv to copy the frames into standard CPU memory:

const char* gst = "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=NV12 ! appsink";
cv::VideoCapture cap(gst, cv::CAP_GSTREAMER);

You would receive a one-channel Mat (with height 1.5× the true height, as noted above). If you need BGR for processing, cvtColor in OpenCV may perform a bit worse than videoconvert in the GStreamer pipeline.
NV12 would make sense if you just need luminance, but in that case it is simpler to have nvvidconv convert to GRAY8, so in OpenCV you get a monochrome image directly:

const char* gst =  "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1 ! nvvidconv ! video/x-raw, format=GRAY8 ! appsink"; 

Hi, I have a question regarding your last paragraph. Do you mean that converting NV12 to BGR with OpenCV’s cvtColor is slower than doing the conversion with nvvidconv?