Is there an example of converting an OpenCV Mat to an EGLImageKHR? Or of feeding an OpenCV GpuMat to TensorRT without a lot of data copying?

We have an application that needs to connect to multiple types of cameras and ultimately feed the captured images to a TensorRT model. Currently, one camera is a USB-based industrial camera, and the vendor only supplies a CPU SDK that outputs OpenCV Mat images. Apart from this, we also have cameras that can be streamed with the Argus library, which produces EGLImageKHR images. Ultimately we want to streamline all the image preprocessing by leveraging CUDA (converting the EGLImageKHR into CUDA images and then doing the preprocessing) for efficiency. So it would be ideal if we could convert the Mat into an EGLImageKHR to enable this.

Another possible route is to convert the EGLImageKHR to an OpenCV GpuMat and then do all the preprocessing with OpenCV compiled with CUDA. But for this route, I haven't found any example showing how to feed an OpenCV GpuMat directly into a TensorRT model without a lot of data copying, so this option would end up less efficient than the first one. If you can provide an example of doing this efficiently, though, it could also be a valid option. Thanks.


If you want to apply TensorRT for inference, we recommend trying our DeepStream SDK.
It supports CSI or USB cameras as input directly.

Additionally, for OpenCV input, we also have an example in the topic below:


Hi, we currently have a lot of legacy code, and migrating to DeepStream at this moment is not possible. I'm wondering whether we can achieve the above without DeepStream.


We'd just like to know more about the data format first.

Could the USB camera write into pre-allocated memory wrapped in an OpenCV image?
Or does it need a buffer created by the driver itself?

Also, is the OpenCV buffer reused, or is a new one allocated for each frame?


The buffer is created by the driver. The OpenCV image is initialized from the driver's buffer pointer and then cloned, because we use multithreading for downstream processing, and the driver buffer can incur race conditions if we don't copy the data out.

Hi, are there any updates on this? Thanks.


Not sure if this is the information you want to know.

To make a buffer directly accessible to the GPU, the CPU memory needs to be non-pageable (pinned).
Since the OpenCV buffer is created by the camera driver, there is no way to guarantee this.

So please use cudaMemcpy to copy the data from the cv::Mat into the GPU buffer.
Below is our document on Jetson's different memory types for your reference:


Thanks, I'll give it a try first and let you know; it may take me some time.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.