Platform: Jetson Xavier NX
JetPack: 4.4
CUDA: 10.2
OpenCV: 4.1.0
GStreamer: 1.14.5
I’m using GStreamer and OpenCV (C++) to build a video pipeline: GStreamer reads data from the camera and hands each frame to OpenCV through the appsink element.
I’m trying to deliver frames from the camera directly into a cv::cuda::GpuMat (GPU memory) without first copying each frame into a cv::Mat (CPU memory), using the GStreamer API.
At first I extracted frames to cv::Mat using cv::VideoCapture with the following arguments:
- "nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)4000, height=(int)3000, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
- cv::VideoCaptureAPIs::CAP_GSTREAMER
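For reference, here is a minimal sketch of my current (slow) approach. The pipeline string is the one above; the buildPipeline helper is just something I wrote for this example, and the program obviously needs a Jetson with the CSI camera attached to actually run:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>
#include <iostream>
#include <string>

// Hypothetical helper: assembles the nvarguscamerasrc pipeline shown above.
static std::string buildPipeline(int sensorId, int width, int height, int fps) {
    return "nvarguscamerasrc sensor_id=" + std::to_string(sensorId) +
           " ! video/x-raw(memory:NVMM), format=(string)NV12"
           ", width=(int)" + std::to_string(width) +
           ", height=(int)" + std::to_string(height) +
           ", framerate=(fraction)" + std::to_string(fps) + "/1" +
           " ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx"
           " ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

int main() {
    cv::VideoCapture cap(buildPipeline(0, 4000, 3000, 30),
                         cv::VideoCaptureAPIs::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open camera pipeline\n";
        return 1;
    }

    cv::Mat frame;          // CPU memory
    cv::cuda::GpuMat gpuFrame;  // GPU memory
    while (cap.read(frame)) {
        gpuFrame.upload(frame); // extra host-to-device copy I want to eliminate
        // ... CUDA image processing on gpuFrame ...
    }
    return 0;
}
```

The upload() call is the copy I am trying to avoid: the frame already lives in NVMM (GPU-accessible) memory before videoconvert moves it to system memory, so it ends up crossing the CPU/GPU boundary twice per frame.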
Copying each frame into CPU memory and then back to the GPU for accelerated processing takes too long.
Instead, I would like each frame to land in a cv::cuda::GpuMat first, and then run my image-processing methods on it directly.
Is there any way to pass frames directly to cv::cuda::GpuMat using GStreamer API?