I have an RPi V2 CSI camera. I would like to query the camera every 1-2 seconds for a single image at low resolution, then rapidly start a high-resolution video stream based on the image content.
I was initially using GStreamer through OpenCV in Python, but this approach does not work for my use case: the frame rate and resolution are fixed as tuples per sensor mode, stream setup takes 2+ seconds, and reading images from the stream at a lower frame rate leads to synchronization issues.
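For reference, this is roughly what I was doing (the `gst_pipeline` helper and its default values are just my sketch; the width/height/fps have to match one of the fixed sensor modes the driver advertises):

```python
# Rough sketch of my current approach: build a nvarguscamerasrc pipeline
# string at a fixed sensor mode and read it through OpenCV's GStreamer
# backend. Helper name and defaults are mine, not from any official doc.
def gst_pipeline(width=1280, height=720, fps=30):
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1, format=NV12 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
    )

if __name__ == "__main__":
    import cv2
    # Opening the capture alone takes 2+ seconds on my board, which is
    # exactly the latency I'm trying to avoid when switching modes.
    cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    ok, frame = cap.read()
    cap.release()
```

Switching between the low-resolution polling mode and the high-resolution stream means tearing this capture down and rebuilding it, which is where the 2+ second setup cost bites.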
Based on my reading, I think using libargus directly and writing some Python bindings for it would be the best way forward. This is probably trivial, yet I cannot find documentation anywhere; where can I find the libargus headers / source? I currently have JetPack 4.4 flashed. Do I really need to reflash with the L4T Multimedia API image? If I do, won't I lose all the benefits of JP 4.4, like cuDNN 8, TensorRT 7.1.3, CUDA 10.2, etc.?
Sorry if these are simple questions; I am a recent CS graduate, not a dedicated embedded developer.