We have an application that needs to connect to several types of cameras and ultimately feed the captured images to a TensorRT model. One of them is a USB industrial camera whose vendor only supplies a CPU SDK that outputs an OpenCV Mat. We also have cameras that can be streamed with the Argus library, which produces EGLImageKHR handles. We want to streamline all the image preprocessing in CUDA for efficiency (convert the EGLImageKHR into CUDA-accessible images and then preprocess on the GPU), so it would be best if we could also convert the Mat into an EGLImageKHR to put both camera sources on the same path.
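For context, the EGL-to-CUDA side would follow the usual CUDA-EGL interop pattern from the jetson_multimedia_api samples. This is only a sketch: it assumes cuInit() has been called, a CUDA context is current on the calling thread, and the frame is pitch-linear; error checking is omitted.

```cpp
#include <cuda.h>
#include <cudaEGL.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Sketch: map an Argus-produced EGLImageKHR into a CUDA-accessible frame.
// Assumes a CUDA context is current on this thread.
CUeglFrame mapEglImageToCuda(EGLImageKHR image, CUgraphicsResource* resource)
{
    CUeglFrame frame = {};
    // Register the EGL image with CUDA so its memory becomes visible to the GPU.
    cuGraphicsEGLRegisterImage(resource, image, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    // Fetch the mapped frame (index 0, mip level 0).
    cuGraphicsResourceGetMappedEglFrame(&frame, *resource, 0, 0);
    // For pitch-linear frames, frame.frame.pPitch[0] is a device pointer that
    // CUDA preprocessing kernels can read directly (the row stride is frame.pitch).
    return frame;
}
// Release with cuGraphicsUnregisterResource(*resource) once the frame is done.
```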
Another possible route is to convert the EGLImageKHR to an OpenCV GpuMat and then do all the preprocessing with OpenCV compiled with CUDA. However, I haven't found any example showing how to feed an OpenCV GpuMat directly into a TensorRT model without a lot of data copying, so I expect this option to end up less efficient than the first one. If you can provide an example of doing this efficiently, it would also be a valid option. Thanks.
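For reference, this is the kind of zero-copy handoff I am hoping exists. As far as I can tell, a GpuMat's data pointer is ordinary device memory, so something like the following should work with the bindings-based enqueueV2 API, assuming the engine has exactly one input and one output binding and the GpuMat is continuous with the dtype/layout the engine expects (those assumptions are mine, not verified):

```cpp
#include <cuda_runtime.h>
#include <opencv2/core/cuda.hpp>
#include <NvInfer.h>

// Sketch: hand a cv::cuda::GpuMat to TensorRT without any extra copies.
// Assumes `context` comes from a deserialized engine with two bindings
// (input first, output second) and that `input` already matches the
// engine's expected format (e.g. CV_32FC3 after preprocessing).
void inferFromGpuMat(nvinfer1::IExecutionContext* context,
                     const cv::cuda::GpuMat& input,
                     void* outputDevicePtr,
                     cudaStream_t stream)
{
    // GpuMat rows may be padded; TensorRT expects a dense buffer, so allocate
    // with cv::cuda::createContinuous() upstream and verify here.
    CV_Assert(input.isContinuous());

    // input.data is already a device pointer, so it can be used as a TensorRT
    // binding directly; no host round trip, no cudaMemcpy.
    void* bindings[] = { input.data, outputDevicePtr };
    context->enqueueV2(bindings, stream, nullptr);
}
```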
Hi, we currently have a lot of legacy code, so migrating to DeepStream is not possible at this moment. I'm wondering whether we can achieve the above without DeepStream.
The buffer is created by the camera driver. The OpenCV image is initialized from the driver's buffer pointer and then cloned, because we use multithreading for downstream processing and reading the driver buffer in place can incur race conditions if we don't copy the data out.
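Roughly, the capture path looks like this (a simplified sketch; the names and dimensions are illustrative):

```cpp
#include <opencv2/core.hpp>

// Sketch of the pattern described above: wrap the driver's buffer without
// copying, then clone so downstream threads never touch memory the driver
// may overwrite or recycle.
cv::Mat captureFrame(void* driverBuffer, int height, int width)
{
    // Header over the driver buffer; no pixel data is copied here.
    cv::Mat view(height, width, CV_8UC3, driverBuffer);
    // Deep copy into OpenCV-owned memory so the driver can reuse its buffer.
    return view.clone();
}
```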
Not sure if this is the information you want to know.
To make a host buffer directly accessible to the GPU, the CPU memory needs to be non-pageable (pinned).
Since the OpenCV buffer is allocated by the camera driver, there is no way to guarantee this.
So please use cudaMemcpy to copy the data from the cv::Mat into a GPU buffer.
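A minimal sketch of that copy, assuming the Mat is continuous (a cloned Mat is) and the destination was allocated with cudaMalloc:

```cpp
#include <cuda_runtime.h>
#include <opencv2/core.hpp>

// Sketch: stage the cv::Mat pixels into a device buffer with cudaMemcpy,
// then run all further preprocessing on the GPU.
void uploadMat(const cv::Mat& frame, void* deviceBuffer)
{
    CV_Assert(frame.isContinuous());
    const size_t bytes = frame.total() * frame.elemSize();
    cudaMemcpy(deviceBuffer, frame.data, bytes, cudaMemcpyHostToDevice);
}
```

If OpenCV is built with CUDA, cv::cuda::GpuMat::upload() performs the equivalent copy.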
Below is our document on the different Jetson memory types, for your reference: