Is there a better / more efficient way to convert an (Argus) NvBuffer into a TensorRT input binding than manually copying the image from one memory region to the other?
In my application I want to use CSI cameras to capture frames via the Argus API. However, these cameras are triggered by a hardware trigger to sync them up with other data streams. Therefore, I can get relatively long delays between frames (> 1 min), and I also don't really want to give up the per-frame handling of the captured images.
I currently use a CUDA kernel to convert the NvBuffer containing a YUV image into the kLINEAR input binding containing an RGB image.
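For reference, this is roughly what that kernel does; a minimal sketch assuming the NvBuffer is mapped as pitch-linear NV12 (the plane pointers, pitches, and BT.601 coefficients are illustrative, and whatever normalization the network expects is omitted):

```cpp
// Sketch only: pitch-linear NV12 (Y plane + interleaved UV plane) to a
// planar float RGB tensor, as a kLINEAR TensorRT binding expects.
// Plane pointers/pitches would come from mapping the NvBuffer; BT.601
// full-range coefficients are assumed, and per-network normalization
// (scaling, mean subtraction) is left out.
__global__ void nv12ToRgbPlanar(const unsigned char* yPlane, int yPitch,
                                const unsigned char* uvPlane, int uvPitch,
                                float* rgb, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    float Y = yPlane[y * yPitch + x];
    int uv = (y / 2) * uvPitch + (x / 2) * 2;  // 4:2:0 chroma subsampling
    float U = uvPlane[uv]     - 128.0f;
    float V = uvPlane[uv + 1] - 128.0f;

    int plane = width * height;
    rgb[0 * plane + y * width + x] = Y + 1.402f * V;              // R
    rgb[1 * plane + y * width + x] = Y - 0.344f * U - 0.714f * V; // G
    rgb[2 * plane + y * width + x] = Y + 1.772f * U;              // B
}
```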
So is there a more efficient way to do this without the overhead and abstractions of the GStreamer/DeepStream libraries?
Or is it possible to offload this operation to a hardware accelerator like the PVA or the VIC?
I would need to control three cameras in one “capture session” (so they use the same capture parameters) and to have per-frame handling of the incoming data.
Is this possible with GStreamer?
And is there documentation somewhere for the nvarguscamerasrc plugin, which I think I would need to use if I switch to GStreamer?
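To make it concrete, this is the kind of setup I imagine on the GStreamer side; a rough sketch using gst_parse_launch with an appsink for per-frame handling. The caps and the resolution are assumptions on my part, and whether several nvarguscamerasrc instances can share one set of capture parameters is exactly what I can't tell from the available material:

```cpp
// Sketch only: one sensor through nvarguscamerasrc into an appsink, so each
// frame can still be handled individually. Caps/resolution are illustrative.
#include <gst/gst.h>

int main(int argc, char** argv)
{
    gst_init(&argc, &argv);

    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "nvarguscamerasrc sensor-id=0 ! "
        "video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12 ! "
        "nvvidconv ! video/x-raw,format=RGBA ! "
        "appsink name=sink",
        &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // ... pull frames from the appsink (gst_app_sink_pull_sample) and hand
    // them to the inference stage, one sample per capture ...
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```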
If this behavior is not achievable with GStreamer, is there a better way of copying the data?
I have three cameras which should take a panoramic picture together. All cameras are triggered by a hardware trigger generated by an external component (to sync up the cameras and the generated images with other data streams).
The three cameras should use the same configuration (white balance, exposure, gain, CCM, gamma, …) for a capture.
My current solution manages all three cameras in one Argus CaptureSession to ensure that the captures are taken with the same settings. I then convert the images to the ARGB32 format (using NvBufferTransform) and use a CUDA kernel to copy the ARGB32 frames into the (linear) binding used for inference.
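For completeness, the shared session is created by handing all three devices to one createCaptureSession call; a minimal sketch (device discovery and error handling trimmed):

```cpp
// Sketch only: one Argus CaptureSession driving all three sensors, so that a
// single Request (and therefore one set of capture parameters) applies to
// every camera.
#include <Argus/Argus.h>
#include <vector>

using namespace Argus;

CaptureSession* createSharedSession(CameraProvider* provider,
                                    const std::vector<CameraDevice*>& cameras)
{
    ICameraProvider* iProvider = interface_cast<ICameraProvider>(provider);
    if (!iProvider)
        return nullptr;

    // Passing all three devices creates one multi-sensor session instead of
    // three independent ones.
    return iProvider->createCaptureSession(cameras);
}
```

And the conversion step, sketched with the nvbuf_utils API; the destination buffer parameters are illustrative. As far as I know, NvBufferTransform is executed on the VIC, which is partly why I'm asking whether the rest of the copy could also be offloaded:

```cpp
// Sketch only: convert a captured NvBuffer (dmabuf fd from Argus/EGLStream)
// to ARGB32 with NvBufferTransform. Resolution and layout are illustrative.
#include "nvbuf_utils.h"

int convertToArgb(int srcFd, int width, int height)
{
    NvBufferCreateParams createParams = {};
    createParams.width = width;
    createParams.height = height;
    createParams.colorFormat = NvBufferColorFormat_ARGB32;
    createParams.layout = NvBufferLayout_Pitch;
    createParams.payloadType = NvBufferPayload_SurfArray;
    createParams.nvbuf_tag = NvBufferTag_NONE;

    int dstFd = -1;
    if (NvBufferCreateEx(&dstFd, &createParams) != 0)
        return -1;

    NvBufferTransformParams transformParams = {};
    transformParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transformParams.transform_filter = NvBufferTransform_Filter_Smart;

    // VIC-backed format conversion: YUV source -> ARGB32 destination.
    if (NvBufferTransform(srcFd, dstFd, &transformParams) != 0) {
        NvBufferDestroy(dstFd);
        return -1;
    }
    return dstFd;  // caller passes this to the CUDA copy kernel
}
```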