I am using Libargus to capture camera frames into an EGLStream, and I am getting an EGLStream::IFrame. How can I convert it to an EGLImageKHR so that I can wrap it in a VPIImage and use VPI to convert the color space?
I am trying to convert the image from YUV to RGB using VPI.
We would suggest calling createNvBuffer() and copyToNvBuffer() to get the frame data into an NvBuffer, which you can then wrap into a VPI image. Please check
VPI - Vision Programming Interface: NvBuffer Interoperability
Does this result in additional copy overhead from the existing memory?
If yes, is there a way to avoid it? We want to avoid unnecessary buffer copies.
There is a buffer copy, but it goes through the hardware VIC engine rather than the CPU, so it does not consume CPU cycles.
Hi @DaneLLL ,
Thank you for the info. Just one more question: what is the latency involved in this? Any rough figure is good enough, e.g. for a 1080p RGBA image, what would the number be?
We don’t have existing data. You can add code to profile createNvBuffer()/copyToNvBuffer() through gettimeofday(), like:
// pseudo code
For maximum throughput, please refer to this post on running the hardware converter at its maximum clock:
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL