For a project I want to stream video from an MP1110m-vc camera (USB interface) to a Jetson Nano, and then process it with OpenCV in Python.
I am currently using the following GStreamer pipeline with OpenCV:
"v4l2src device=/dev/video2 ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)YUY2, framerate=(fraction)30/1
! videoconvert ! video/x-raw, format=(string)BGR ! appsink max-buffers=1 drop=true"
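For context, this is roughly how I hand the pipeline to OpenCV. The helper below just rebuilds the same pipeline string (the function name and parameters are my own, for illustration); it is passed to `cv2.VideoCapture` with the `cv2.CAP_GSTREAMER` backend, which assumes OpenCV was built with GStreamer support:

```python
def gst_pipeline(device="/dev/video2", width=1920, height=1080, fps=30):
    """Build the V4L2 capture pipeline string used in the question.

    Usage (requires OpenCV built with GStreamer support):
        import cv2
        cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
        ok, frame = cap.read()  # frame is a BGR numpy array
    """
    return (
        # Grab YUY2 frames from the camera's V4L2 node
        f"v4l2src device={device} ! "
        f"video/x-raw, width=(int){width}, height=(int){height}, "
        f"format=(string)YUY2, framerate=(fraction){fps}/1 ! "
        # CPU colorspace conversion to the BGR layout OpenCV expects
        "videoconvert ! video/x-raw, format=(string)BGR ! "
        # Keep only the newest frame so OpenCV never falls behind
        "appsink max-buffers=1 drop=true"
    )
```

The `videoconvert` element here runs on the CPU, which I suspect accounts for much of the load.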
While this works, it uses around 125% CPU just for streaming 1080p video to OpenCV and displaying it.
Is there a way to optimise this, or a more efficient way to do it?