I’m trying to use VideoCapture and VideoWriter to stream a camera feed using Python. I need to capture each frame so I can do some basic processing before sending it to the writer. However, I noticed that my pipeline has a significant 1-2 ms delay compared to the pipeline I run in the terminal. I think it’s because I have more video conversion steps in the Python pipeline, but I’m not sure whether I can optimize/shorten the pipeline.
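For reference, this is roughly the structure I’m working with: a minimal sketch, assuming a Jetson-style nvarguscamerasrc camera and an H.264-over-UDP sink. The element names, caps, resolution, and host/port below are placeholders, not my exact pipelines.

```python
import cv2

# Hypothetical capture pipeline: camera -> NVMM -> BGRx -> BGR for OpenCV.
capture_pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
)

# Hypothetical write pipeline: BGR frames -> hardware H.264 encode -> RTP/UDP.
write_pipeline = (
    "appsrc ! videoconvert ! nvvidconv ! "
    "nvv4l2h264enc ! h264parse ! rtph264pay ! "
    "udpsink host=127.0.0.1 port=5000"
)

cap = cv2.VideoCapture(capture_pipeline, cv2.CAP_GSTREAMER)
writer = cv2.VideoWriter(write_pipeline, cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... basic per-frame processing on `frame` goes here ...
    writer.write(frame)

cap.release()
writer.release()
```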
When I try that, I get: [ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1631) writeFrame OpenCV | GStreamer warning: cvWriteFrame() needs images with depth = IPL_DEPTH_8U and nChannels = 3.
And I get the same error: [ WARN:0] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1631) writeFrame OpenCV | GStreamer warning: cvWriteFrame() needs images with depth = IPL_DEPTH_8U and nChannels = 3.
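From the warning text, the writer only accepts 8-bit, 3-channel frames, so whatever the processing step outputs (grayscale, BGRA/BGRx, or a non-uint8 dtype) needs to be coerced back to BGR uint8 before the write. A small sketch of that conversion, assuming the frame is a NumPy array as returned by OpenCV (the helper name is mine):

```python
import cv2
import numpy as np

def to_writer_format(frame):
    """Coerce a frame to the 8-bit, 3-channel BGR layout the GStreamer writer expects."""
    if frame.dtype != np.uint8:
        # e.g. 16-bit or float frames -> clip/scale down to 8-bit
        frame = cv2.convertScaleAbs(frame)
    if frame.ndim == 2:
        # single-channel (grayscale) -> 3-channel BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    elif frame.shape[2] == 4:
        # 4-channel (BGRA/BGRx) -> 3-channel BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
    return frame

# usage before writing:
# writer.write(to_writer_format(frame))
```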
I will try both of those, but so far there isn’t much noticeable difference; I believe the bottleneck is still the videoconvert element. The nvpmodel command puts my system in 2-core 15 W mode. Would changing the power mode to either 20 W or 10 W make any difference?