In my application I want to use CSI cameras to capture frames via the Argus API. However, these cameras are fired by a hardware trigger to synchronize them with other data streams. As a result, there can be relatively long delays between frames (> 1 min), and I also don't want to give up per-frame handling of the captured images.
I currently use a CUDA Kernel to convert the NvBuffer containing a YUV image to the kLINEAR input binding containing an RGB image.
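For context, the per-pixel work of that kernel is essentially a YUV→RGB color-space conversion. A minimal sketch of the math in plain C++ is below; the BT.601 full-range coefficients are an assumption, since the correct coefficients depend on the color space the ISP actually outputs, and the function name `yuv_to_rgb` is just illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clamp a float to the displayable 0..255 byte range.
static uint8_t clamp8(float v) {
    return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, v)));
}

// Assumed BT.601 full-range YUV -> RGB conversion; verify the
// coefficients against the color space your pipeline produces.
void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                uint8_t& r, uint8_t& g, uint8_t& b) {
    float fy = static_cast<float>(y);
    float fu = static_cast<float>(u) - 128.0f;  // center chroma at 0
    float fv = static_cast<float>(v) - 128.0f;
    r = clamp8(fy + 1.402f * fv);
    g = clamp8(fy - 0.344136f * fu - 0.714136f * fv);
    b = clamp8(fy + 1.772f * fu);
}
```

In the actual CUDA kernel this body runs once per thread, with each thread mapped to one output pixel and the chroma plane sampled at half resolution for NV12-style layouts.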
So, is there a (more) efficient way to do this without the overhead and abstractions of the GStreamer/DeepStream libraries?
Or is it possible to offload this operation to a hardware accelerator such as the PVA or VIC?
I have three cameras that together should take a panoramic picture. All cameras are fired by a hardware trigger generated by an external component (to synchronize the cameras and the generated images with other data streams).
The three cameras should use the same configuration (white balance, exposure, gain, CCM, gamma, …) for each capture.
My current solution manages all three cameras in one Argus CaptureSession to ensure that the captures are taken with the same settings. I then convert the images into ARGB32 format (using NvBufferTransform) and use a CUDA kernel to copy the ARGB32 frame into the (linear) binding used for inference.
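The per-pixel work of that copy kernel can be sketched in plain C++ as a deinterleave from packed 4-byte pixels into a planar float CHW tensor. The byte order (B, G, R, A), the planar-float target layout, and the 1/255 normalization are all assumptions about the inference binding, and `argb32_to_planar` is a hypothetical name:

```cpp
#include <cstdint>

// Hypothetical sketch of the copy kernel's per-pixel work:
// deinterleave packed 4-byte pixels (assumed order B, G, R, A)
// into a planar float CHW tensor normalized to 0..1. Verify the
// byte order and normalization against your actual binding.
void argb32_to_planar(const uint8_t* argb, float* chw,
                      int width, int height) {
    const int plane = width * height;
    for (int i = 0; i < plane; ++i) {
        chw[0 * plane + i] = argb[4 * i + 2] / 255.0f;  // R plane
        chw[1 * plane + i] = argb[4 * i + 1] / 255.0f;  // G plane
        chw[2 * plane + i] = argb[4 * i + 0] / 255.0f;  // B plane
    }
}
```

In the CUDA version the loop body becomes the kernel body, with `i` derived from the block/thread indices; the memory-bound nature of this copy is part of why offloading it to the VIC/PVA looks attractive.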