Problem with modifying the GStreamer pipeline in the gstVideoencode sample code in the Tegra Multimedia API

Hi everyone,

I am trying to modify the pipeline in the gstVideoencode example in the Tegra Multimedia API.

The modified pipeline is:

"nveglstreamsrc display=(pointer_to_display) eglstream=(pointer_to_egl_stream) ! "video/x-raw(memory:NVMM), format=I420, width=1920, height=1080, framerate=30/1" ! queue ! nvvidconv ! nvoverlaysink"

The above pipeline works perfectly, and I am able to get the preview.

When I add an extra caps filter between nvvidconv and nvoverlaysink, as below,

"nveglstreamsrc display=(pointer_to_display) eglstream=(pointer_to_egl_stream) ! "video/x-raw(memory:NVMM), format=I420, width=1920, height=1080, framerate=30/1" ! queue ! nvvidconv ! "video/x-raw, format=I420, width=1920, height=1080, framerate=30/1" ! nvoverlaysink"

No error is shown, but the pipeline doesn’t work. Is there any limitation on using nveglstreamsrc?


The code for the modified pipeline which works:

bool initialize(EGLStreamKHR eglStream, Argus::Size2D<uint32_t> resolution,
                int32_t framerate, int32_t bitrate,
                const char* encoder, const char* muxer, const char* output)
{

    // Initialize GStreamer.
    gst_init(NULL, NULL);

    // Create pipeline.
    m_pipeline = gst_pipeline_new("video_pipeline");
    if (!m_pipeline)
        ORIGINATE_ERROR("Failed to create video pipeline");

    // Create EGLStream video source.
    GstElement *videoSource = gst_element_factory_make("nveglstreamsrc", NULL);
    if (!videoSource)
        ORIGINATE_ERROR("Failed to create EGLStream video source");
    if (!gst_bin_add(GST_BIN(m_pipeline), videoSource))
    {
        gst_object_unref(videoSource);
        ORIGINATE_ERROR("Failed to add video source to pipeline");
    }

    g_object_set(G_OBJECT(videoSource), "display", g_display.get(), NULL);
    g_object_set(G_OBJECT(videoSource), "eglstream", eglStream, NULL);


    GstElement *vidconv = gst_element_factory_make("nvvidconv", NULL);
    if (!vidconv)
        ORIGINATE_ERROR("Failed to create nvvideo converter");
    if (!gst_bin_add(GST_BIN(m_pipeline), vidconv))
    {
        gst_object_unref(vidconv);
        ORIGINATE_ERROR("Failed to add nvvideo converter to pipeline");
    }


    GstElement *queue1 = gst_element_factory_make("queue", NULL);
    if (!queue1)
        ORIGINATE_ERROR("Failed to create queue 1");
    if (!gst_bin_add(GST_BIN(m_pipeline), queue1))
    {
        gst_object_unref(queue1);
        ORIGINATE_ERROR("Failed to add queue 1 to pipeline");
    }

    GstElement *queue2 = gst_element_factory_make("queue", NULL);
    if (!queue2)
        ORIGINATE_ERROR("Failed to create queue 2");
    if (!gst_bin_add(GST_BIN(m_pipeline), queue2))
    {
        gst_object_unref(queue2);
        ORIGINATE_ERROR("Failed to add queue 2 to pipeline");
    }


    GstElement *overlaysink = gst_element_factory_make("nvoverlaysink", NULL);
    if (!overlaysink)
        ORIGINATE_ERROR("Failed to create overlay sink");
    if (!gst_bin_add(GST_BIN(m_pipeline), overlaysink))
    {
        gst_object_unref(overlaysink);
        ORIGINATE_ERROR("Failed to add overlay sink to pipeline");
    }


    // Create caps filter to describe EGLStream image format.
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
                                        "format", G_TYPE_STRING, "I420",
                                        "width", G_TYPE_INT, 1920,
                                        "height", G_TYPE_INT, 1080,
                                        "framerate", GST_TYPE_FRACTION, framerate, 1,
                                        NULL);
    if (!caps)
        ORIGINATE_ERROR("Failed to create caps");


    GstCapsFeatures *features = gst_caps_features_new("memory:NVMM", NULL);
    if (!features)
    {
        gst_caps_unref(caps);
        ORIGINATE_ERROR("Failed to create caps feature");
    }
    gst_caps_set_features(caps, 0, features);

    // Link EGLStream source to queue via caps filter.
    if (!gst_element_link_filtered(videoSource, queue1, caps))
    {
        gst_caps_unref(caps);
        ORIGINATE_ERROR("Failed to link EGLStream source to queue 1");
    }


    if (!gst_element_link(queue1, vidconv))
        ORIGINATE_ERROR("Failed to link queue 1 to video converter");

    if (!gst_element_link(vidconv, overlaysink))
        ORIGINATE_ERROR("Failed to link video converter to overlay sink");

    gst_caps_unref(caps);

    return true;
}

The code for the modified pipeline which does not work (only an extra caps filter is added):

bool initialize(EGLStreamKHR eglStream, Argus::Size2D<uint32_t> resolution,
                int32_t framerate, int32_t bitrate,
                const char* encoder, const char* muxer, const char* output)
{

    // Initialize GStreamer.
    gst_init(NULL, NULL);

    // Create pipeline.
    m_pipeline = gst_pipeline_new("video_pipeline");
    if (!m_pipeline)
        ORIGINATE_ERROR("Failed to create video pipeline");

    // Create EGLStream video source.
    GstElement *videoSource = gst_element_factory_make("nveglstreamsrc", NULL);
    if (!videoSource)
        ORIGINATE_ERROR("Failed to create EGLStream video source");
    if (!gst_bin_add(GST_BIN(m_pipeline), videoSource))
    {
        gst_object_unref(videoSource);
        ORIGINATE_ERROR("Failed to add video source to pipeline");
    }

    g_object_set(G_OBJECT(videoSource), "display", g_display.get(), NULL);
    g_object_set(G_OBJECT(videoSource), "eglstream", eglStream, NULL);


    GstElement *vidconv = gst_element_factory_make("nvvidconv", NULL);
    if (!vidconv)
        ORIGINATE_ERROR("Failed to create nvvideo converter");
    if (!gst_bin_add(GST_BIN(m_pipeline), vidconv))
    {
        gst_object_unref(vidconv);
        ORIGINATE_ERROR("Failed to add nvvideo converter to pipeline");
    }


    GstElement *queue1 = gst_element_factory_make("queue", NULL);
    if (!queue1)
        ORIGINATE_ERROR("Failed to create queue 1");
    if (!gst_bin_add(GST_BIN(m_pipeline), queue1))
    {
        gst_object_unref(queue1);
        ORIGINATE_ERROR("Failed to add queue 1 to pipeline");
    }

    GstElement *queue2 = gst_element_factory_make("queue", NULL);
    if (!queue2)
        ORIGINATE_ERROR("Failed to create queue 2");
    if (!gst_bin_add(GST_BIN(m_pipeline), queue2))
    {
        gst_object_unref(queue2);
        ORIGINATE_ERROR("Failed to add queue 2 to pipeline");
    }


    GstElement *overlaysink = gst_element_factory_make("nvoverlaysink", NULL);
    if (!overlaysink)
        ORIGINATE_ERROR("Failed to create overlay sink");
    if (!gst_bin_add(GST_BIN(m_pipeline), overlaysink))
    {
        gst_object_unref(overlaysink);
        ORIGINATE_ERROR("Failed to add overlay sink to pipeline");
    }


    // Create caps filter to describe EGLStream image format.
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
                                        "format", G_TYPE_STRING, "I420",
                                        "width", G_TYPE_INT, 1920,
                                        "height", G_TYPE_INT, 1080,
                                        "framerate", GST_TYPE_FRACTION, framerate, 1,
                                        NULL);
    if (!caps)
        ORIGINATE_ERROR("Failed to create caps");

    GstCaps *caps1= gst_caps_new_simple("video/x-raw",
                                        "format", G_TYPE_STRING, "I420",
                                        "width", G_TYPE_INT, 1920,
                                        "height", G_TYPE_INT, 1080,
                                        "framerate", GST_TYPE_FRACTION, framerate, 1,
                                        NULL);
    if (!caps1)
        ORIGINATE_ERROR("Failed to create caps 1");


    GstCapsFeatures *features = gst_caps_features_new("memory:NVMM", NULL);
    if (!features)
    {
        gst_caps_unref(caps);
        ORIGINATE_ERROR("Failed to create caps feature");
    }
    gst_caps_set_features(caps, 0, features);

    // Link EGLStream source to queue via caps filter.
    if (!gst_element_link_filtered(videoSource, queue1, caps))
    {
        gst_caps_unref(caps);
        ORIGINATE_ERROR("Failed to link EGLStream source to queue 1");
    }


    if (!gst_element_link(queue1, vidconv))
        ORIGINATE_ERROR("Failed to link queue 1 to video converter");

    if (!gst_element_link_filtered(vidconv, queue2, caps1))
    {
        gst_caps_unref(caps1);
        ORIGINATE_ERROR("Failed to link video converter to queue 2");
    }

    if (!gst_element_link(queue2, overlaysink))
        ORIGINATE_ERROR("Failed to link queue 2 to overlay sink");

    gst_caps_unref(caps);
    gst_caps_unref(caps1);

    return true;
}

Please try

"nveglstreamsrc display=(pointer_to_display) eglstream=(pointer_to_egl_stream) ! "video/x-raw(memory:NVMM), format=I420, width=1920, height=1080, framerate=30/1" ! queue ! nvvidconv ! "<b>video/x-raw(memory::NVMM)</b>, format=I420, width=1920, height=1080, framerate=30/1" ! nvoverlaysink"

Thanks for your reply.

I am able to get the preview by adding (memory:NVMM) to the caps.

But I have another use case: converting the buffer from hardware memory to software memory so that I can use it with v4l2sink, filesink, etc.

Any suggestions on how to convert the buffer from hardware memory to software memory for use with v4l2sink and filesink?

The pipeline shown below does not save the correct frame:

"nveglstreamsrc display=(pointer_to_display) eglstream=(pointer_to_egl_stream) num-buffers=1 ! "video/x-raw(memory:NVMM), format=I420, width=1920, height=1080, framerate=30/1" ! queue ! nvvidconv ! "video/x-raw(memory::NVMM), format=I420, width=1920, height=1080, framerate=30/1" ! filesink location=testimage"

On running the above pipeline, the size of testimage is only 808 bytes. (Its actual size should be 1920 × 1080 × 1.5 = 3110400 bytes.)
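For reference, the expected size of one raw I420 frame follows from the plane layout: a full-resolution Y plane plus quarter-resolution U and V planes, i.e. 1.5 bytes per pixel. A quick check:

```c
/* Size in bytes of one raw I420 (YUV 4:2:0 planar) frame:
 * the Y plane is width*height bytes, and the U and V planes are each
 * (width/2)*(height/2) bytes, so the total is width*height*3/2. */
static long i420_frame_size(long width, long height)
{
    return width * height * 3 / 2;
}
```

For 1920x1080 this gives 3110400 bytes, which is the size a correctly saved frame in testimage should have.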

The pipeline shown below does not save the frame at all:

"nveglstreamsrc display=(pointer_to_display) eglstream=(pointer_to_egl_stream) num-buffers=1 ! "video/x-raw(memory:NVMM), format=I420, width=1920, height=1080, framerate=30/1" ! queue ! nvvidconv ! "video/x-raw, format=I420, width=1920, height=1080, framerate=30/1" ! filesink location=testimage"

On running the above pipeline, the size of testimage is 0.

Any suggestions?

Hi Arun,
Please try

"nveglstreamsrc display=(pointer_to_display) eglstream=(pointer_to_egl_stream) num-buffers=1 ! "video/x-raw(memory:NVMM), format=<b>NV12</b>, width=1920, height=1080, framerate=30/1" ! queue ! nvvidconv ! "video/x-raw, format=I420, width=1920, height=1080, framerate=30/1" ! filesink location=testimage"

Thanks DaneLLL,

After changing the pipeline as you suggested, I am able to get filesink and v4l2sink working.

I am very curious to know why the pipeline worked after that change. Can you please explain?

Thanks in advance…

Hi Arun,
Argus output format is NV12. We will correct it in the sample.