I have a USB camera and an application that processes the image data and records it into video.
The camera is See3CAM_CU20 from e-con systems.
I can’t give the exact acquisition pipeline since I haven’t managed to make one work yet, but basically: get a frame from the camera (frame format is UYVY), convert it to RGB, resize it, drop half of the frames, and use appsink to get the frame data into the application.
The pipeline would be something like v4l2src ! (UYVY) ! videoconvert ! (RGB) ! appsink, but I’m not sure.
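As a sketch, the acquisition side could look like the following gst-launch command. The device path, resolutions, and framerates are assumptions (check what the camera actually offers with `v4l2-ctl --list-formats-ext`); videorate with drop-only=true does the "drop half the frames" step, and fakesink stands in for the appsink used in the application:

```shell
# Hypothetical capture pipeline sketch -- device, sizes and rates are guesses.
gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! "video/x-raw,format=UYVY,width=1920,height=1080,framerate=60/1" \
  ! videorate drop-only=true ! "video/x-raw,framerate=30/1" \
  ! videoscale ! videoconvert \
  ! "video/x-raw,format=RGB,width=960,height=540" \
  ! fakesink
```

Dropping frames with videorate before videoconvert means the UYVY-to-RGB conversion only runs on the frames that are kept, which should reduce CPU load.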
After the frame data (in RGB) has been post-processed, the application outputs an image, converts it back to a format that omxh264enc can accept (on the CPU for now) so the hardware video encoder can be used, and then saves it as a file.
The recording pipeline is appsrc ! (RGB) ! videoconvert ! omxh264enc ! h264parse ! qtmux ! filesink.
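The encode side can be tested standalone before wiring up appsrc; here videotestsrc stands in for appsrc, and the caps (size, framerate) are assumptions that must match whatever the application actually pushes. omxh264enc typically wants a YUV format such as I420 rather than RGB, hence the videoconvert step:

```shell
# Hypothetical recording pipeline sketch; videotestsrc stands in for appsrc.
gst-launch-1.0 -e videotestsrc num-buffers=300 \
  ! "video/x-raw,format=RGB,width=960,height=540,framerate=30/1" \
  ! videoconvert ! "video/x-raw,format=I420" \
  ! omxh264enc ! h264parse ! qtmux ! filesink location=out.mp4
```

The -e flag sends EOS on Ctrl+C so qtmux can finalize the MP4 file properly.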
The overall pipeline should be optimized as much as possible: it should not use much CPU, and acquisition latency should be minimal.
I can’t use CUDA, as the GPU is already fully used by other parts of the application.
It seems a CSI camera can use the ISP to accelerate some parts of acquisition, e.g. pixel format conversion, but I’m not sure whether a USB camera can do the same.
Is it possible to use any other hardware accelerators in my case?