How do I use a GStreamer pipeline to extract frames into CUDA's GpuMat?

Platform: Jetson Xavier NX
Jetpack: 4.4
CUDA: 10.2
OpenCV: 4.1.0
GStreamer: 1.14.5

I’m using GStreamer and OpenCV (C++) to build a video pipeline: GStreamer reads data from the camera and passes it to OpenCV through the appsink element (GStreamer plugin).
Using the GStreamer API, I’m trying to extract frames from the camera directly into a cv::cuda::GpuMat (GPU memory), without first copying each frame into a cv::Mat (CPU memory).

At first I extracted frames into a cv::Mat using cv::VideoCapture with the following arguments:

  1. " nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), format=(string)NV12, width=(int)4000, height=(int)3000, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink".
  2. cv::VideoCaptureAPIs::CAP_GSTREAMER.
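For reference, here is roughly what that first version looks like: a minimal sketch assuming OpenCV was built with GStreamer and CUDA support (the pipeline string is the same as above, only shortened for readability):

    // Sketch of the current CPU path: frames land in a cv::Mat and then have
    // to be uploaded to the GPU.
    #include <opencv2/opencv.hpp>
    #include <opencv2/core/cuda.hpp>
    #include <string>

    int main() {
        std::string pipeline =
            "nvarguscamerasrc sensor_id=0 ! "
            "video/x-raw(memory:NVMM), format=NV12, width=4000, height=3000, framerate=30/1 ! "
            "nvvidconv flip-method=2 ! video/x-raw, format=BGRx ! "
            "videoconvert ! video/x-raw, format=BGR ! appsink";

        cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
        if (!cap.isOpened()) return -1;

        cv::Mat frame;            // CPU memory
        cv::cuda::GpuMat d_frame; // GPU memory
        while (cap.read(frame)) {
            d_frame.upload(frame); // extra CPU -> GPU copy on every frame
            // ... CUDA image processing on d_frame ...
        }
        return 0;
    }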

Copying each frame to CPU memory and then uploading it back to the GPU for accelerated image processing takes too long.
Therefore, I want to extract each frame into a cv::cuda::GpuMat first and then run my image processing methods on it.

Is there any way to pass frames directly to a cv::cuda::GpuMat using the GStreamer API?

You may find some ways to do that in this topic.

I saw that topic, but I don’t think it helps me: jetson-utils’ videoSource seems to transfer frames to CPU memory immediately after receiving them, which forces me to re-upload each frame to GPU memory, and that takes too much time.
I need to extract the frames straight into a cv::cuda::GpuMat so that I copy a frame to CPU memory only once, after I have finished running my image processing methods on that GpuMat.
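To make the intended flow concrete, here is a minimal sketch of what the processing side should look like once the frame is already in a GpuMat; the cvtColor / Gaussian-filter steps are only placeholders for my actual methods:

    #include <opencv2/core/cuda.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <opencv2/cudafilters.hpp>

    // All processing stays on the GPU; the result is downloaded to CPU memory
    // exactly once, at the very end.
    cv::Mat processOnGpu(const cv::cuda::GpuMat& d_frame)  // d_frame: BGR frame already on the GPU
    {
        cv::cuda::GpuMat d_gray, d_blurred;
        cv::cuda::cvtColor(d_frame, d_gray, cv::COLOR_BGR2GRAY);   // placeholder step
        cv::Ptr<cv::cuda::Filter> gauss = cv::cuda::createGaussianFilter(
            d_gray.type(), d_gray.type(), cv::Size(5, 5), 1.5);    // placeholder step
        gauss->apply(d_gray, d_blurred);

        cv::Mat result;
        d_blurred.download(result);  // the single GPU -> CPU copy
        return result;
    }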

Is it just your impression, or did you try it and actually see a reduced framerate?

Also note that if jetson-utils doesn’t fit your case, the post above in the same topic mentions two other methods: nvivafilter, or a probe on the nvvidconv src pad.
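For illustration, here is a rough sketch of one zero-copy variant: an appsink receiving buffers that stay in NVMM, with the dmabuf mapped into CUDA through EGL so that a cv::cuda::GpuMat can wrap the frame without any host copy. The nvbuf_utils / CUDA EGL interop calls below follow the JetPack 4.x Multimedia API headers; treat the exact signatures and caps as assumptions to verify against your JetPack version, and note that all error handling is omitted:

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include <opencv2/core/cuda.hpp>
    #include <EGL/egl.h>
    #include <cuda.h>
    #include <cuda_runtime.h>
    #include <cudaEGL.h>
    #include "nvbuf_utils.h"

    int main(int argc, char** argv)
    {
        gst_init(&argc, &argv);

        // Keep the frames in NVMM all the way to appsink; nvvidconv converts
        // NV12 -> RGBA inside NVMM so a single pitched plane can be wrapped later.
        const char* desc =
            "nvarguscamerasrc sensor_id=0 ! "
            "video/x-raw(memory:NVMM), format=NV12, width=4000, height=3000, framerate=30/1 ! "
            "nvvidconv flip-method=2 ! video/x-raw(memory:NVMM), format=RGBA ! "
            "appsink name=sink";
        GstElement* pipeline = gst_parse_launch(desc, nullptr);
        GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        EGLDisplay egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(egl_display, nullptr, nullptr);
        cudaFree(0);  // make sure a CUDA context is current before the driver-API calls

        while (true) {
            GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
            if (!sample) break;  // EOS or error
            GstBuffer* buffer = gst_sample_get_buffer(sample);

            GstMapInfo map;
            gst_buffer_map(buffer, &map, GST_MAP_READ);

            // For an NVMM buffer, the mapped data describes a dmabuf, not raw pixels.
            int dmabuf_fd = -1;
            ExtractFdFromNvBuffer((void*)map.data, &dmabuf_fd);

            // dmabuf -> EGLImage -> CUDA: the frame never leaves device memory.
            EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);
            CUgraphicsResource resource = nullptr;
            cuGraphicsEGLRegisterImage(&resource, egl_image,
                                       CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
            CUeglFrame egl_frame;
            cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);

            // Wrap plane 0 (RGBA) in a GpuMat without copying.
            cv::cuda::GpuMat d_frame(egl_frame.height, egl_frame.width, CV_8UC4,
                                     egl_frame.frame.pPitch[0], egl_frame.pitch);

            // ... run your CUDA image processing on d_frame here ...
            cuCtxSynchronize();

            cuGraphicsUnregisterResource(resource);
            NvDestroyEGLImage(egl_display, egl_image);
            gst_buffer_unmap(buffer, &map);
            gst_sample_unref(sample);
        }

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(sink);
        gst_object_unref(pipeline);
        return 0;
    }

You would build this against the GStreamer, EGL, CUDA and nvbuf_utils libraries shipped with JetPack; nvivafilter and the nvvidconv src pad probe follow a similar dmabuf-to-CUDA mapping idea, just hooked in at a different point of the pipeline.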
