OpenGL format GL_RGB/GL_RGBA and NvBuffer format NvBufferColorFormat_ARGB32 question

My project is on AGX, and the JetPack version is:
nvidia@localhost:~$ head -n 1 /etc/nv_tegra_release

R32 (release), REVISION: 4.3, GCID: 21589087, BOARD: t186ref, EABI: aarch64, DATE: Fri Jun 26 04:34:27 UTC 2020

My project uses OpenGL to do a 360-degree 3D surround view from 4 cameras. I need to composite the original 4 camera frames with the OpenGL 360-degree output, but the NvBufferComposite API needs an NvBuffer, which the OpenGL API cannot provide; also, the NvBuffer color format is NvBufferColorFormat_ARGB32, while the glReadPixels format is GL_RGBA.
This requires both a color format conversion and a GPU-to-CPU copy. Is there an optimal solution for transferring data from OpenGL to a DMABUF FD?


Hi,
Please refer to this sample:
Trying to process with OpenGL an EGLImage created from a dmabuf_fd - #9 by DaneLLL
Trying to process with OpenGL an EGLImage created from a dmabuf_fd - #12 by DaneLLL
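
In short, those posts wrap an NvBuffer in an EGLImage and let OpenGL render into it directly, so no glReadPixels is needed. A minimal sketch of that flow, assuming the EGL/GLES context is already set up (the helper name create_dmabuf_render_target is illustrative and error checks are omitted):

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES3/gl3.h>
#include <GLES2/gl2ext.h>
#include "nvbuf_utils.h"

// GL_OES_EGL_image entry point, resolved through EGL.
static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC pglEGLImageTargetTexture2DOES =
    (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
        eglGetProcAddress("glEGLImageTargetTexture2DOES");

// Create an NvBuffer, wrap it in an EGLImage, back a GL texture with it,
// and attach the texture to an FBO so the stitching pass renders straight
// into the dmabuf. Returns the dmabuf fd (or -1 on failure).
int create_dmabuf_render_target(EGLDisplay disp, int width, int height,
                                GLuint *fbo_out, EGLImageKHR *image_out)
{
    int dmabuf_fd = -1;
    if (NvBufferCreate(&dmabuf_fd, width, height,
                       NvBufferLayout_Pitch,
                       NvBufferColorFormat_ARGB32) != 0)
        return -1;

    // Wrap the dmabuf in an EGLImage so GL can address it.
    EGLImageKHR image = NvEGLImageFromFd(disp, dmabuf_fd);

    // Bind a texture to the EGLImage.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    pglEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);

    // Attach the texture to an FBO: rendering to this FBO fills the NvBuffer.
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    *fbo_out = fbo;
    *image_out = image;
    return dmabuf_fd; // usable as a source fd for NvBufferComposite()
}
```

After drawing into this FBO, call glFinish() (or use a fence) before handing the fd to NvBufferComposite(), so the render has completed when the composite reads the buffer.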

Thanks DaneLLL, I read that topic, but I still have a question. My project is a 360-degree circular stitching of 4 cameras: 8 VAOs, textures of GL_TEXTURE_EXTERNAL_OES, and 4 camera textures of GL_TEXTURE_2D. It already calls NvEGLImageFromFd on the 4 cameras' dma_fd_input, so the input side is covered, but I do not know how to get the OpenGL output into a dmabuf_output. Can you explain the operation in more detail?

Hi,
If each camera source is put in an individual NvBuffer, a possible solution is to call NvBufferComposite() to composite the 4 sources into a single video buffer.
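
A minimal sketch of that composite step, modeled on the 13_multi_camera sample; the 1920x1080 source size and the 2x2 destination layout are assumptions for illustration:

```cpp
#include <string.h>
#include "nvbuf_utils.h"

// Composite 4 camera NvBuffers into one destination NvBuffer as a 2x2 grid.
// src_fds holds the 4 camera dmabuf fds; dst_fd must be a 3840x2160 buffer
// if each source is 1920x1080.
int composite_four_cameras(int src_fds[4], int dst_fd)
{
    NvBufferCompositeParams params;
    memset(&params, 0, sizeof(params));

    params.composite_flag = NVBUFFER_COMPOSITE;
    params.input_buf_count = 4;

    for (int i = 0; i < 4; i++)
    {
        // Take each source frame in full...
        params.src_comp_rect[i].top = 0;
        params.src_comp_rect[i].left = 0;
        params.src_comp_rect[i].width = 1920;
        params.src_comp_rect[i].height = 1080;

        // ...and place it in one quadrant of the destination.
        params.dst_comp_rect[i].top = (i / 2) * 1080;
        params.dst_comp_rect[i].left = (i % 2) * 1920;
        params.dst_comp_rect[i].width = 1920;
        params.dst_comp_rect[i].height = 1080;
    }

    return NvBufferComposite(src_fds, dst_fd, &params);
}
```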

Yes, the original code is from the i.MX surround-view system from NXP; it was ported to TX2.
Retrieving the fused data with glReadPixels consumes too many resources in OpenGL, so I want to go directly through the dma_fd path, but the topic is not very clear to me. What should I do for the DMA-in/DMA-out mode involving the cameras? Trying to process with OpenGL an EGLImage created from a dmabuf_fd - #9 by DaneLLL
Does the NVIDIA TX2 (AGX) have a similar 3D panoramic program? Can VRWorks 360 Stitch be used on the TX2?

Hi,
This is the only way of hooking OpenGL/EGL up with NvBuffer. If it does not work in your use case, a possible solution is to create an NvBuffer, get a CUDA pointer to it, and implement CUDA code to copy the frame data into the NvBuffer.
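
A rough sketch of that CUDA path, using the EGLImage interop from cudaEGL.h; it assumes a CUDA driver-API context is already current and the NvBuffer is pitch-linear RGBA, and the helper name and arguments are hypothetical:

```cpp
#include <string.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>
#include "nvbuf_utils.h"

// Map an NvBuffer into CUDA through an EGLImage, then copy device-side
// RGBA data into it with cuMemcpy2D. src_dev points at the source pixels,
// src_pitch is their row stride in bytes.
int copy_into_nvbuffer(EGLDisplay disp, int dmabuf_fd,
                       CUdeviceptr src_dev, size_t src_pitch,
                       size_t width_bytes, size_t height)
{
    EGLImageKHR image = NvEGLImageFromFd(disp, dmabuf_fd);

    CUgraphicsResource resource;
    cuGraphicsEGLRegisterImage(&resource, image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    // For a pitch-linear RGBA NvBuffer, plane 0 holds all the pixels.
    CUeglFrame frame;
    cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0);

    CUDA_MEMCPY2D copy;
    memset(&copy, 0, sizeof(copy));
    copy.srcMemoryType = CU_MEMORYTYPE_DEVICE;
    copy.srcDevice     = src_dev;
    copy.srcPitch      = src_pitch;
    copy.dstMemoryType = CU_MEMORYTYPE_DEVICE;
    copy.dstDevice     = (CUdeviceptr)frame.frame.pPitch[0];
    copy.dstPitch      = frame.pitch;
    copy.WidthInBytes  = width_bytes;
    copy.Height        = height;
    cuMemcpy2D(&copy);
    cuCtxSynchronize();

    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(disp, image);
    return 0;
}
```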

Thank you for your reply!
Can you provide a simple pseudocode example?

Hi,
There is no existing code for this use case. You would need to do the implementation yourself.

For a demonstration of NvBufferComposite(), there is sample code in

/usr/src/jetson_multimedia_api/samples/13_multi_camera