GStreamer: GLMemory -> NVMM conversion

Hi,

Since the functionality offered by the proprietary NVIDIA GStreamer plugins is quite limited, we would like to combine them with the GStreamer OpenGL plugins.

Since we have to keep latency as low as possible, we need to avoid unnecessary copies, so a direct conversion from GLMemory to NVMM would be necessary.

What we imagine is something like this:

gst-launch-1.0 gltestsrc ! 'video/x-raw(memory:GLMemory), width=1920, height=1080' ! nvglmem2nvmm ! 'video/x-raw(memory:NVMM), width=1920, height=1080' ! nvoverlaysink

where nvglmem2nvmm converts from GLMemory to NVMM memory. The Multimedia API allows building a plugin that converts from NVMM to GLMemory, but we cannot see how to do the conversion the other way round with the provided API functions.
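
For reference, the direction that does work is straightforward: an NVMM buffer is exposed as a dmabuf fd, which can be wrapped as an EGLImage and bound to a GL texture. A minimal sketch of that path, assuming the NvEGLImageFromFd/NvDestroyEGLImage helpers from nvbuf_utils.h and the GL_OES_EGL_image extension:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include "nvbuf_utils.h"

/* Wrap an NVMM buffer (dmabuf fd) as an external GL texture. */
static GLuint texture_from_nvmm_fd(EGLDisplay display, int dmabuf_fd)
{
    EGLImageKHR image = NvEGLImageFromFd(display, dmabuf_fd);
    if (image == EGL_NO_IMAGE_KHR)
        return 0;

    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image);

    /* Call NvDestroyEGLImage(display, image) once the texture is no longer needed. */
    return tex;
}

Going in the other direction, i.e. getting a GL-rendered frame into an NVMM buffer, is the part we cannot find in the API.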

Is there any possibility that you could provide such a converter plugin?

We are working on Jetson TX2 using L4T 28.2.1.

Thank you.

Hi,
Could you share more detail about your use case/pipeline? It looks like you can simply do the rendering via the EGL APIs.
The conversion should only be required if you need the TX2 to do HW encoding/decoding.

Hi DaneLLL,

Thank you for the hint, but using the EGL API is not sufficient for our use case. For example, we have a stereo pipeline that uses the glstereomix element from GStreamer, which outputs GLMemory and is not compatible with the EGL API.

Since there is already some latency that cannot be avoided, the conversion to NVMM memory after the stereo mixer must add as little overhead as possible. We have plenty of other use cases that would also benefit from such a conversion.
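
To make the use case more concrete, the kind of pipeline we have in mind looks roughly like this, where nvglmem2nvmm is still the hypothetical converter and the camera sources and caps are just placeholders:

gst-launch-1.0 glstereomix name=mix ! 'video/x-raw(memory:GLMemory)' ! nvglmem2nvmm ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink \
  v4l2src device=/dev/video0 ! glupload ! glcolorconvert ! mix. \
  v4l2src device=/dev/video1 ! glupload ! glcolorconvert ! mix.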

I think such a plugin would be beneficial for anybody who seriously considers using the Jetson board with GStreamer for performance-critical applications.

Thank you.

Hi,
It is still not clear to us why you need the conversion. If you use glstereomix, it should be good enough to use glimagesink instead of nvoverlaysink.
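
For example, something along these lines keeps the frames in GLMemory all the way to the display (a minimal sketch with a test source in place of your stereo inputs):

gst-launch-1.0 gltestsrc ! 'video/x-raw(memory:GLMemory), width=1920, height=1080' ! glimagesink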

You may also consider using tegra_multimedia_api.
https://developer.nvidia.com/embedded/dlc/l4t-multimedia-api-reference-28-2-ga

Hi,

You are right; if that were the only use case, that would be an option. But I have a complex pipeline system that also requires encoding, for example.

I want to create overlays with OpenGL, convert the result back to the NVMM memory format, and feed it into the encoder. I looked at the Multimedia API but cannot see how to solve my problem with it.
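
Right now the only route I see goes through system memory, roughly like the sketch below, which is exactly the extra copying we want to avoid (gloverlay only stands in for our actual overlay rendering):

gst-launch-1.0 gltestsrc ! 'video/x-raw(memory:GLMemory), width=1920, height=1080' ! \
  gloverlay location=logo.png ! gldownload ! videoconvert ! nvvidconv ! \
  'video/x-raw(memory:NVMM), format=I420' ! omxh264enc ! h264parse ! matroskamux ! \
  filesink location=out.mkv

A GLMemory-to-NVMM element would replace the gldownload ! videoconvert ! nvvidconv part with a zero-copy handover.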

Hi,
Please install the tegra_multimedia_api samples via JetPack and refer to tegra_multimedia_api/include/nvbuf_utils.h

The demo code is in multiple samples as shown below:

// Create an EGLImage from the dmabuf fd
ctx->egl_image = NvEGLImageFromFd(ctx->egl_display, buffer->planes[0].fd);
if (ctx->egl_image == NULL)
{
    fprintf(stderr, "Error while mapping dmabuf fd (0x%X) to EGLImage\n",
            buffer->planes[0].fd);
    return false;
}

// Run the processing algorithm on the EGLImage on the GPU
HandleEGLImage(&ctx->egl_image);

// Destroy the EGLImage
NvDestroyEGLImage(ctx->egl_display, ctx->egl_image);
ctx->egl_image = NULL;

Each frame is exposed as an EGLImage, and you can do processing on it.
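
In the samples, HandleEGLImage() maps the EGLImage into CUDA through the EGL interop driver API and runs a kernel on the frame. A condensed sketch of that flow, with the kernel launch omitted (names from cuda.h/cudaEGL.h; a CUDA driver API context must already be current):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>

/* Map an EGLImage into CUDA and access the frame in device memory. */
static void process_egl_image(EGLImageKHR egl_image)
{
    CUgraphicsResource resource = NULL;
    CUeglFrame frame;

    if (cuGraphicsEGLRegisterImage(&resource, egl_image,
            CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE) != CUDA_SUCCESS)
        return;

    if (cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0) == CUDA_SUCCESS)
    {
        /* For pitch-linear frames, frame.frame.pPitch[0] points to the first
           plane in device memory; a CUDA kernel would be launched on it here. */
    }

    cuCtxSynchronize();
    cuGraphicsUnregisterResource(resource);
}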

Hi,

Thanks for the hint, but what we need is a conversion from GLMemory to NVMM, as in the pipeline I gave in my first post. I do not see how to get there from the Multimedia API examples.

We managed to convert from EGLImage to GLMemory so that we can use the default GStreamer elements. What is missing is a way to go back to NVMM memory.

As I said before, we are running many tasks in parallel where such a converter plugin would be needed for display, encoding, and decoding.

We want to use nvoverlaysink for low-latency rendering, which requires the NVMM memory format.

Hi,
We have suggested possible solutions based on the current release. Hope you can give them a try.

Your request is for a new feature. Please contact sales to review business opportunities.

Hello, I have a similar problem. The reason I cannot use glimagesink is that I cannot get a fullscreen display like I get from nvoverlaysink. How can I do that?

Hi,

Please start a new topic and describe your issue in detail. Thanks.