Get frame in GpuMat instead of Mat - OpenCV 3.4.2 - v4l2 - Jetson TX2

Hi qperret,
On OpenCV 3.4.2, you can send UYVY to appsink.

The pipeline is nvcamerasrc ! nvvidconv ! appsink, and the conversion is cvtColor(frame, bgr, CV_YUV2BGR_I420).

For your case, it should be v4l2src ! appsink and cvtColor(frame, bgr, CV_YUV2BGR_UYVY).

Hi DaneLLL,

Thanks for your reply.
Can you give me a full pipeline with v4l2src, nvivafilter and appsink, please?
I’m not sure how to make it.


You may use a pipeline like this for VideoCapture in your CPU application:

const char* gst = "v4l2src device=/dev/video10 ! video/x-raw, format=UYVY, width=640, height=480, framerate=30/1 ! "
		  "nvvidconv ! video/x-raw(memory:NVMM) ! "
		  "nvivafilter customer-lib-name=<path_to_your_lib> cuda-process=true ! video/x-raw(memory:NVMM), format=RGBA, framerate=30/1 ! "
		  "nvvidconv ! video/x-raw, format=BGRx ! "
		  "videoconvert ! video/x-raw, format=BGR ! "
		  "appsink";
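As a sketch, the pipeline string can be assembled in a small helper (buildPipeline is a hypothetical name; <path_to_your_lib> remains a placeholder you must fill in, and your OpenCV must be built with GStreamer support):

```cpp
#include <string>

// Hypothetical helper: assemble the GStreamer pipeline string for cv::VideoCapture.
// The nvivafilter library path is passed in; the caps match the pipeline above.
std::string buildPipeline(const std::string& libPath) {
    return "v4l2src device=/dev/video10 ! "
           "video/x-raw, format=UYVY, width=640, height=480, framerate=30/1 ! "
           "nvvidconv ! video/x-raw(memory:NVMM) ! "
           "nvivafilter customer-lib-name=" + libPath + " cuda-process=true ! "
           "video/x-raw(memory:NVMM), format=RGBA ! "
           "nvvidconv ! video/x-raw, format=BGRx ! "
           "videoconvert ! video/x-raw, format=BGR ! "
           "appsink";  // appsink hands the BGR frames to VideoCapture::read()
}
```

You would then open it with cv::VideoCapture cap(buildPipeline("<path_to_your_lib>"), cv::CAP_GSTREAMER); and read BGR frames with cap.read(frame).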

Without nvivafilter, you can also get frames from your camera in UYVY format into your CPU application (appsink).
You would allocate a frame in unified memory, convert the received UYVY camera frame into your preferred format in that unified-memory frame, launch GPU processing that writes its output into another unified-memory frame, then continue processing on the CPU… but I doubt it will be faster than nvivafilter.
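To make that unified-memory flow concrete, here is a rough sketch (it only builds on a Jetson with the CUDA toolkit and OpenCV installed; process_gpu is a hypothetical kernel standing in for your GPU step):

```cpp
#include <opencv2/opencv.hpp>
#include <cuda_runtime.h>

int main() {
    const int W = 640, H = 480;

    // Allocate two BGR frames in unified (managed) memory,
    // visible to both the CPU and the GPU without explicit copies.
    void *bgrBuf = nullptr, *outBuf = nullptr;
    cudaMallocManaged(&bgrBuf, W * H * 3);
    cudaMallocManaged(&outBuf, W * H * 3);

    // Wrap the unified buffers in cv::Mat headers (no data copy).
    cv::Mat bgr(H, W, CV_8UC3, bgrBuf);
    cv::Mat out(H, W, CV_8UC3, outBuf);

    cv::VideoCapture cap(
        "v4l2src ! video/x-raw, format=UYVY, width=640, height=480 ! appsink");

    cv::Mat frame;
    while (cap.read(frame)) {
        // CPU: convert the captured UYVY frame into the unified-memory BGR frame.
        cv::cvtColor(frame, bgr, cv::COLOR_YUV2BGR_UYVY);

        // GPU: hypothetical kernel launch writing into outBuf, e.g.
        // process_gpu<<<grid, block>>>((unsigned char*)bgrBuf,
        //                              (unsigned char*)outBuf, W, H);
        cudaDeviceSynchronize();  // wait before touching outBuf on the CPU

        // CPU: further processing on `out` here.
    }
    cudaFree(bgrBuf);
    cudaFree(outBuf);
    return 0;
}
```

The synchronize call matters: with managed memory the CPU must not read the output buffer while the kernel may still be writing it.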


I tried your pipeline but the program runs at 8 FPS, so I increased the framerate to 80/1 and it works as before, but it never exceeds 30 FPS.
Is it possible to read a video at more than 30 FPS on a Jetson TX2?

Hi qperret,
Please try the attached sample. It is modified from the earlier sample; two lines are changed:

VideoCapture cap("v4l2src device=/dev/video1 ! video/x-raw,width=1920,height=1080,format=UYVY ! appsink");
cvtColor(frame, bgr, CV_YUV2BGR_UYVY);
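For intuition about what CV_YUV2BGR_UYVY does: UYVY stores a [U, Y0, V, Y1] byte quartet per pair of pixels (the two pixels share one U/V sample), and the conversion applies a YUV-to-BGR transform per pixel. A self-contained sketch using common BT.601 integer coefficients (an approximation; OpenCV's exact coefficients and rounding may differ):

```cpp
#include <algorithm>

struct BGR { unsigned char b, g, r; };

// Convert one YUV sample to BGR with BT.601 integer coefficients
// (an approximation of what cvtColor does internally).
BGR yuvToBgr(int y, int u, int v) {
    int c = y - 16, d = u - 128, e = v - 128;
    auto clamp8 = [](int x) {
        return (unsigned char)std::max(0, std::min(255, x));
    };
    return { clamp8((298 * c + 516 * d + 128) >> 8),           // B
             clamp8((298 * c - 100 * d - 208 * e + 128) >> 8), // G
             clamp8((298 * c + 409 * e + 128) >> 8) };         // R
}

// In UYVY, each 4-byte group [U, Y0, V, Y1] encodes two pixels
// sharing the same chroma (U, V).
void uyvyPairToBgr(const unsigned char uyvy[4], BGR out[2]) {
    out[0] = yuvToBgr(uyvy[1], uyvy[0], uyvy[2]);
    out[1] = yuvToBgr(uyvy[3], uyvy[0], uyvy[2]);
}
```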

simple_opencv.cpp (568 Bytes)