I am running a Jetson Nano with GStreamer 1.x, OpenCV 4.x, and Python 3.6.8. My goal is to display video from a Raspberry Pi Camera v2 at 1080p/30 fps on the Jetson Nano's display with low latency. Currently, I am feeding a GStreamer pipeline to OpenCV with the following call:
cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12 ! nvvidconv flip-method=0 ! video/x-raw,width=1920,height=1080,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)
When I run this pipeline, the video is choppy and the FPS is low. In the system monitor I see all CPU cores at >90% load while displaying the 1080p video, and as far as I understand, a significant part of the bottleneck is the software videoconvert element doing the BGRx -> BGR conversion on the CPU.
When I run the following pipeline in a terminal, I get smooth 1080p/30 fps with the CPU cores at ~25% load, so I know hardware acceleration can be used for displaying video from the RPi cam; I am just not sure how to get similar speed/FPS when reading the GStreamer video through OpenCV.
$ gst-launch-1.0 nvarguscamerasrc ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12' ! \
    nvvidconv flip-method=0 ! \
    'video/x-raw,width=1920,height=1080' ! \
    nvvidconv ! nvegltransform ! nveglglessink -e
Is there any way to make OpenCV's GStreamer capture use hardware acceleration so the first pipeline runs as fast as the second? Thanks