My application needs to render some images into an OpenGL ES framebuffer.
Then we need to read the rendered image back from the OpenGL ES framebuffer and send it to a video encoder.
Currently, we first create a DMABuf (which will later be fed to the video encoder),
mmap it to a CPU virtual address,
and then use glReadPixels() to copy the data from the GLES framebuffer into this mmapped virtual address.
We measured that glReadPixels() takes about 15 ms per frame, so we would like to improve this process and reduce the latency.
Is it possible to make GLES render directly into a DMABuf, or into a mmapped address / EGLImage of a DMABuf, on TX2/Linux?
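
For reference, here is the kind of zero-copy path I am hoping for: a sketch (not tested on TX2, and the extension entry points would need to be loaded via eglGetProcAddress) that imports the DMABuf fd as an EGLImage via EGL_EXT_image_dma_buf_import, binds it to a GLES texture with GL_OES_EGL_image, and attaches that texture to an FBO, so rendering lands directly in the encoder's buffer with no glReadPixels. The function name, parameters, and the RGBA8/DRM_FORMAT_ABGR8888 pairing are my assumptions, not a confirmed TX2 recipe.

```c
/* Sketch: render directly into a dmabuf by importing it as an EGLImage.
 * Assumes: a current EGL display/context, a dmabuf fd (e.g. from the
 * Jetson NvBuffer APIs), and driver support for
 * EGL_EXT_image_dma_buf_import and GL_OES_EGL_image.
 */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm/drm_fourcc.h>

/* eglCreateImageKHR / glEGLImageTargetTexture2DOES must be resolved
 * with eglGetProcAddress() at startup; omitted here for brevity. */

GLuint fbo_from_dmabuf(EGLDisplay dpy, int dmabuf_fd,
                       int width, int height, int pitch)
{
    const EGLint attrs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_ABGR8888, /* RGBA8, assumed */
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
        EGL_NONE
    };
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT,
                                          (EGLClientBuffer)NULL, attrs);
    if (image == EGL_NO_IMAGE_KHR)
        return 0; /* import not supported for this fd/format */

    /* Bind the EGLImage to a texture and use it as the FBO color target. */
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;
    return fbo; /* draw calls now write straight into the dmabuf */
}
```

After rendering into this FBO I would presumably still need glFinish() (or an EGL fence) before handing the fd to the encoder, so the GPU has actually finished writing. Is this import path supported by the TX2 driver, or is there a preferred NVMM/EGLStream mechanism instead?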