We have an application that connects to multiple types of cameras and ultimately feeds the captured images to a TensorRT model. Currently one camera is a USB-based industrial camera, and the vendor only supplies a CPU SDK that outputs an OpenCV Mat. We also have cameras that can be streamed with the Argus library, which produces EGLImageKHR images. Ultimately we want to streamline all image preprocessing with CUDA (map the EGLImageKHR into CUDA memory and preprocess there) for efficiency, so it would be best if we could also convert the Mat into an EGLImageKHR to enable this.
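For reference, the EGLImageKHR side of this pipeline can be mapped into CUDA with the CUDA-EGL interop API that ships on Jetson. This is only a sketch of that mapping step, assuming a valid EGLImageKHR already produced by Argus and a pitch-linear frame layout; error handling and the unregister call are reduced to the minimum:

```cpp
// Sketch: map an Argus-produced EGLImageKHR into CUDA device memory
// (no copy) using the CUDA-EGL interop API.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda_runtime.h>
#include <cuda_egl_interop.h>

void* mapEglImageToCuda(EGLImageKHR eglImage, size_t* pitch,
                        cudaGraphicsResource_t* resOut) {
    cudaGraphicsResource_t res = nullptr;
    // Register the EGL image with CUDA; this does not copy pixel data.
    if (cudaGraphicsEGLRegisterImage(&res, eglImage,
            cudaGraphicsRegisterFlagsNone) != cudaSuccess)
        return nullptr;
    cudaEglFrame frame;
    if (cudaGraphicsResourceGetMappedEglFrame(&frame, res, 0, 0)
            != cudaSuccess)
        return nullptr;
    // For a pitch-linear frame, plane 0 yields a device pointer that
    // CUDA preprocessing kernels can read directly.
    *pitch = frame.frame.pPitch[0].pitch;
    *resOut = res;  // caller calls cudaGraphicsUnregisterResource(res) when done
    return frame.frame.pPitch[0].ptr;
}
```

Going the other direction (Mat → EGLImageKHR) is the part I have not found a supported path for, which is what this question is about.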
Another possible route is to convert the EGLImageKHR into an OpenCV GpuMat and do all the preprocessing with CUDA-enabled OpenCV. But for this route I haven't found any example showing how to feed an OpenCV GpuMat directly into a TensorRT model without a lot of data copying, so its efficiency would likely end up worse than the first option. If you can provide an example of doing this efficiently, this could also be a valid option. Thanks.
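My understanding is that a GpuMat's data pointer is already device memory, so in principle it could be handed to TensorRT as a binding with no copy, as long as the buffer is densely packed. A minimal sketch of what I imagine, with engine/context setup omitted and `preprocessed`/`outputDevPtr` as placeholder names; please correct me if this is not how it should be done:

```cpp
// Sketch (untested): pass a GpuMat's device pointer straight to TensorRT.
#include <opencv2/core/cuda.hpp>
#include <NvInfer.h>

void infer(nvinfer1::IExecutionContext* context,
           cv::cuda::GpuMat& preprocessed,  // CHW float data, already on GPU
           void* outputDevPtr, cudaStream_t stream) {
    // GpuMat rows are pitch-padded by default, while TensorRT expects a
    // densely packed tensor; allocate with cv::cuda::createContinuous()
    // (or check here) so no repacking copy is needed.
    CV_Assert(preprocessed.isContinuous());
    void* bindings[] = { preprocessed.data, outputDevPtr };
    context->enqueueV2(bindings, stream, nullptr);
}
```

If something like this is valid, the remaining question is whether the EGLImageKHR → GpuMat step can also be done without a copy.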
The buffer is allocated by the camera driver. The OpenCV Mat is initialized from the driver's buffer pointer and then cloned, because downstream processing is multithreaded and the driver buffer could incur race conditions if we didn't copy it out.