capture video from usb2.0 camera with opencv3.2+gstreamer on TX2

Hi everyone,
I’m working on a project that needs to capture video from a USB 2.0 camera on a TX2. My environment is Ubuntu L4T 28.1 on the TX2, with OpenCV 3.2 compiled with GStreamer support.
The camera outputs 1920x1080 at 30 fps, encoded as MJPEG.
Following the instructions in “Jetson_TX2_Accelerated_GStreamer_User_Guide.pdf” (https://developer.nvidia.com/embedded/dlc/l4t-tx2-accelerated-gstreamer-guide-28-1), I can successfully watch the video using either of the following GStreamer pipelines in a terminal:

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420" ! nveglglessink
gst-launch-1.0 v4l2src device="/dev/video0" ! nvjpegdec ! nveglglessink

Based on those pipelines, I wrote my code as shown below:

#include <iostream>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
	// GStreamer pipeline: capture raw I420 from the camera and convert to BGR for OpenCV
	const char *videosrc = "v4l2src device=/dev/video0 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! videoconvert ! appsink";
	cv::VideoCapture cap;
	cap.open(videosrc);
	if (!cap.isOpened())
	{
		std::cout << "OPEN CAM ERROR\n";
		return 1;
	}

	cv::Mat nowMat;
	while (true)
	{
		if (!cap.read(nowMat))
		{
			break;
		}

		// some code for processing nowMat

		cv::imshow("Hello", nowMat);
		int ch = cv::waitKey(1);
		if (ch == 'q') break;
	}
	cap.release();
	return 0;
}

Because my processing of nowMat is somewhat complex, the video stutters. I can also see that the CPUs on the TX2 are under heavy load while the GPU is not.
So, is there any way to capture and decode the video stream on the GPU in my project, for example by modifying the string passed to cap.open()?
Since my processing code is based on OpenCV 3.2, I don’t want to rewrite it in another framework.
Thanks

Hi hzpbanana,

I would suggest you follow the MMAPI samples, but it sounds like you only want an OpenCV framework here.

Hi WayneWWW,
The subject of my project is face detection; the code that processes nowMat does the detection and was written by my partner, so I only want an OpenCV framework.

As you said, if I follow the MMAPI samples, that means doing my job in MMAPI. Is there any way to transfer the frame data into Mat.data in OpenCV?

PS: For processing nowMat, I have already used multithreading.
Thanks

Hi hzpbanana,

From your description and sample, I only see camera capture from v4l2src; there is no decode step. nowMat is a CPU buffer, so I don’t see where you would improve things through the GPU.

Our MMAPI samples read the camera source and put it directly into GPU memory for later processing.

Hi WayneWWW,
Thank you for your suggestion about MMAPI; I will prepare to work on it.

Also, I previously used an IP camera, and the string passed to VideoCapture::open() was:

const char *videosrc = "rtspsrc location=rtsp://192.168.1.168/main latency=60 ! decodebin ! nvvidconv ! video/x-raw,format=(string)BGRx ! videoconvert ! video/x-raw,format=(string)BGR ! appsink";

With that pipeline, I found that the GPU carries a small load; I think that is because “nvvidconv” does the conversion on the GPU. The video also plays more smoothly than the video from the USB camera. What is your opinion on this?

You may also try OpenCV 3.3 to eliminate the ‘videoconvert’ step:
https://devtalk.nvidia.com/default/topic/1024245/jetson-tx2/opencv-3-3-and-integrated-camera-problems-/post/5210735/#5210735