Hello,
I’d like to integrate the Holoviz-rendered video into our Qt-based GUI. Qt (QOpenGL) provides support for OpenGL, but I was wondering how exactly to pass the off-screen rendered image (including overlays) from the Holoviz operator to a custom operator and map the GXF framebuffer to the OpenGL framebuffer.
Are there any recommendations on how to achieve this? Help is much appreciated.
Hi,
you can configure the Holoviz operator to render in headless mode (set the headless parameter to true) and enable render buffer output: Visualization - NVIDIA Docs. This will output an RGBA GXF VideoBuffer in GPU (CUDA) memory.
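In Python that configuration could look roughly like this (an untested sketch, not from this thread; the width/height and tensor spec are placeholder values, the relevant parameters are headless and enable_render_buffer_output):

```python
from holoscan.operators import HolovizOp

visualizer = HolovizOp(
    self,
    name="holoviz",
    headless=True,                      # render off-screen, no window is created
    enable_render_buffer_output=True,   # emit the rendered frame on the "render_buffer_output" port
    width=854,                          # placeholder size, match your source resolution
    height=480,
    tensors=[dict(name="", type="color")],  # placeholder spec for the incoming video tensor
)
```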
Create an operator using Qt and QtOpenGL, create an OpenGL texture, register it with CUDA (CUDA Runtime API :: CUDA Toolkit Documentation), and copy the VideoBuffer from CUDA memory to the OpenGL texture. Then draw the texture with Qt's OpenGL support.
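As a rough sketch of the CUDA↔OpenGL copy in Python using PyCUDA's GL interop (everything below is an assumption, not code from this thread: it presumes a current QOpenGLContext, an active PyCUDA context created with pycuda.gl, and that you already have the device pointer and size of the render buffer):

```python
import pycuda.driver as cuda
import pycuda.gl as cuda_gl
from OpenGL import GL

def copy_render_buffer_to_texture(texture_id, src_dev_ptr, width, height):
    """Copy a device-resident RGBA frame (4 bytes/pixel) into a GL texture."""
    # Register the GL texture with CUDA (in real code, register once and cache it).
    reg = cuda_gl.RegisteredImage(int(texture_id), GL.GL_TEXTURE_2D,
                                  cuda_gl.graphics_map_flags.WRITE_DISCARD)
    mapping = reg.map()
    array = mapping.array(0, 0)          # CUDA array backing mip level 0
    copy = cuda.Memcpy2D()
    copy.set_src_device(src_dev_ptr)     # device pointer of the Holoviz render buffer
    copy.set_dst_array(array)
    copy.width_in_bytes = copy.src_pitch = copy.dst_pitch = width * 4
    copy.height = height
    copy(aligned=False)
    mapping.unmap()
    reg.unregister()
```

After the copy, the texture can be drawn as a textured quad in the widget's paintGL().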
Thanks for the rapid response.
I was trying exactly what you described in your answer, but I always run into the same error. Below is an example based on holoviz_geometry.py.
Hi, unfortunately Holoviz outputs VideoBuffer objects instead of Tensor objects, and VideoBuffer is not yet supported by the Python API of the Holoscan SDK.
For this to work, the VideoBuffer needs to be converted to a Tensor, which is what FormatConverterOp does. Add the format converter between Holoviz and the WriteToOpenGLBufferOP like this:
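Roughly like the following sketch (the gl_writer instance and its "in" port name stand in for your custom operator; "source_video" and "tensor" are FormatConverterOp's input/output ports, and keeping in_dtype == out_dtype means only the VideoBuffer-to-Tensor conversion is performed):

```python
from holoscan.operators import FormatConverterOp
from holoscan.resources import UnboundedAllocator

converter = FormatConverterOp(
    self,
    name="converter",
    pool=UnboundedAllocator(self, name="pool"),
    in_dtype="rgba8888",   # Holoviz render buffer is RGBA
    out_dtype="rgba8888",  # keep the pixel format, only convert VideoBuffer -> Tensor
)

# Holoviz render buffer -> format converter -> your Qt/OpenGL operator
self.add_flow(visualizer, converter, {("render_buffer_output", "source_video")})
self.add_flow(converter, gl_writer, {("tensor", "in")})  # "in" is a placeholder for your operator's input port
```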