Built OpenCV with CUDA / GPU support. Need to display via GStreamer in Python.

Hi everyone,

I’m trying to get the Nano’s GPU to do the work for facial recognition with OpenCV via Python, GStreamer, and CUDA.

Using Jetson Nano and Raspberry Pi Camera v2

Steps:

  1. Performed a fresh install and updated to the latest software on my Nano as of 11/29.
  2. Ran https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.1.1_Jetson.sh (replacing 4.1.1 with 4.1.2). This script claims to enable CUDA support in OpenCV; a quick way to verify that is sketched right after this list.
  3. Downloaded https://github.com/JetsonHacksNano/CSI-Camera/blob/master/face_detect.py to use as a simple test.
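
As a quick sanity check that the build from step 2 actually has the CUDA modules (this is my own verification idea, not part of the script):

import cv2

# Look for "CUDA: YES" in the build configuration dump.
print(cv2.getBuildInformation())

# Reports the number of usable CUDA devices (1 on a Nano);
# a CPU-only build returns 0.
print(cv2.cuda.getCudaEnabledDeviceCount())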

When I run face_detect.py, a camera window shows up, but at a terrible framerate (I assume the processing is running on the CPU). This warning is printed in the terminal:
[ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

The script then errors out when a face is detected:
Traceback (most recent call last):
  File "face_detect.py", line 97, in <module>
    face_detect()
  File "face_detect.py", line 78, in face_detect
    eyes = eye_cascade.detectMultiScale(roi_gray)
cv2.error: OpenCV(4.1.2) /tmp/build_opencv/opencv/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'

The error is presumably because OpenCV is built to look at GPU output / memory, while the CPU is currently producing the output. I’m thinking all I need to change is my GStreamer pipeline, but I’m not sure how to make it go entirely through the GPU and hand the output to Python. It looks like appsink is required but is CPU-only?
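
As a side note, that particular !empty() assertion usually means the cascade classifier has no model loaded (for example the XML file was not found). A minimal check, using a hypothetical cascade path, looks like this:

import cv2

# Hypothetical path; face_detect.py defines its own cascade file locations.
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

# If the XML could not be loaded, the classifier is empty and
# detectMultiScale() raises the (-215) !empty() assertion.
if eye_cascade.empty():
    print('eye cascade failed to load - check the XML path')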

Here’s my current “working” GStreamer pipeline string in Python:

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=21/1 ! nvvidconv flip-method=2 ! video/x-raw, width=820,height=616, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
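
For reference, that string gets handed to OpenCV roughly like this (a minimal sketch of how face_detect.py uses it; the exact variable names differ):

import cv2

pipeline = (
    'nvarguscamerasrc ! '
    'video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=21/1 ! '
    'nvvidconv flip-method=2 ! '
    'video/x-raw, width=820, height=616, format=(string)BGRx ! '
    'videoconvert ! video/x-raw, format=(string)BGR ! appsink'
)

# CAP_GSTREAMER tells OpenCV to interpret the string as a GStreamer pipeline.
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()   # frame arrives as a CPU-side BGR numpy array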

By contrast, when I run this command, I get a GStreamer window using the GPU at a good framerate:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM)' ! nvivafilter customer-lib-name=libnvsample_cudaprocess.so cuda-process=true ! 'video/x-raw(memory:NVMM),format=(string)RGBA' ! nvoverlaysink

How can I get the GPU GStreamer pipeline working with Python and OpenCV?

Any help would be appreciated!

Hi,
We can leverage cv::cuda::GpuMat in GStreamer + OpenCV. Here is a reference sample:
https://devtalk.nvidia.com/default/topic/1066465/jetson-nano/nano-not-using-gpu-with-gstreamer-python-slow-fps-dropped-frames/post/5403975/#5403975
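
Roughly, that idea looks like this in Python (a sketch assuming an OpenCV build with the CUDA modules enabled; the cv2.cuda calls are the standard bindings, not code taken from the linked sample). Capture still arrives over appsink as a CPU BGR frame, but per-frame processing such as the colour conversion can be pushed to the GPU through GpuMat:

import cv2

# The same appsink pipeline string as in the question (CPU BGR output).
pipeline = ('nvarguscamerasrc ! '
            'video/x-raw(memory:NVMM), width=3280, height=2464, format=(string)NV12, framerate=21/1 ! '
            'nvvidconv flip-method=2 ! '
            'video/x-raw, width=820, height=616, format=(string)BGRx ! '
            'videoconvert ! video/x-raw, format=(string)BGR ! appsink')

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
gpu_frame = cv2.cuda_GpuMat()

while True:
    ok, frame = cap.read()                     # CPU BGR buffer from appsink
    if not ok:
        break
    gpu_frame.upload(frame)                    # copy the frame into GPU memory
    gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)   # runs on the GPU
    gray = gpu_gray.download()                 # copy back for the CPU Haar cascades
    # ... run the face/eye detectMultiScale() on gray as before ...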

However, in Python, OpenCV only accepts CPU buffers in BGR format (through appsink), so that path cannot be fully optimized.
The optimal performance comes from using NVMM buffers from source to sink in the GStreamer pipeline.