GStreamer pipeline halts when dynamically linking a bin containing nvv4l2h264enc

Hello,

• Hardware Platform: Jetson Xavier NX
• JetPack Version: 5.0.2

I have a PLAYING pipeline to which I’m trying to link a recording bin containing nvv4l2h264enc. The problem is that this immediately halts the pipeline for no apparent reason. If I replace the encoder with x264enc, the linking succeeds and the pipeline keeps PLAYING.

Additionally, if the bin is linked before the pipeline enters PLAYING, then everything works just fine.

I’ve been in touch with the GStreamer community through their IRC channel; they state the obvious: plugins must handle being linked in any scenario.

The real use case is rather complex (it involves enabling/disabling video recording at runtime), but here is a very simple test case that fully demonstrates the issue. As written it will halt; all the other scenarios can be checked by commenting out combinations of the defines:

#include <sstream>
#include <gst/gst.h>

#define NVV4L2H264ENC
#define LATE_LINK

static gboolean bus_call(GstBus *bus, GstMessage *msg, gpointer data) {

    GMainLoop *loop = (GMainLoop *)data;

    switch (GST_MESSAGE_TYPE (msg)) {

        case GST_MESSAGE_EOS:
            GST_ERROR ("End of stream\n");
            g_main_loop_quit(loop);
            break;

        case GST_MESSAGE_ERROR: {
            gchar  *debug;
            GError *error;

            gst_message_parse_error(msg, &error, &debug);
            g_free(debug);

            GST_ERROR ("Error: %s\n", error->message);
            g_error_free(error);

            g_main_loop_quit(loop);
            break;
        }

        default:
            break;
    }

    return TRUE;
}

int main (int argc, char *argv[])
{
    std::ostringstream launch_stream;
    GMainLoop *loop;
    GstElement *pipeline, *recording_bin, *input_tee;
    GstBus *bus;
    guint bus_watch_id;
    GError *err = nullptr;

    gst_init (&argc, &argv);
    loop = g_main_loop_new (NULL, FALSE);

    launch_stream
        << "nvv4l2camerasrc ! video/x-raw(memory:NVMM),format=UYVY,width=1280,height=720,framerate=50/1 ! "
        << "videorate max-rate=25 drop-only=true ! "
        << "tee name=input_tee ! "
        << "queue ! "
        << "nvvidconv ! "
        << "nvv4l2h264enc control-rate=constant_bitrate bitrate=3500000 iframeinterval=500 insert-sps-pps=true profile=Main num-B-Frames=0 ratecontrol-enable=true preset-level=UltraFastPreset EnableTwopassCBR=false maxperf-enable=true ! "
        << "h264parse ! "
        << "mpegtsmux alignment=7 ! "
        << "queue ! "
        << "udpsink sync=false host=224.0.0.1 port=5004";

    pipeline = gst_parse_launch(launch_stream.str().c_str(), &err);

    if (err) {
        GST_ERROR ("Failed to parse pipeline: %s\n", err->message);
        g_error_free(err);
        return -1;
    }

    launch_stream.str("");
    launch_stream.clear();
    launch_stream
        << "queue ! "
        << "nvvidconv ! "
#ifdef NVV4L2H264ENC
        << "nvv4l2h264enc bitrate=1500000 ! "
#else
        << "x264enc bitrate=1500 ! "
#endif
        << "h264parse ! "
        << "mpegtsmux ! "
        << "filesink location=/tmp/test.ts"; //also tried async=false

    input_tee = gst_bin_get_by_name(GST_BIN(pipeline), "input_tee");

    recording_bin = gst_parse_bin_from_description(launch_stream.str().c_str(), true, &err);

    if (err) {
        GST_ERROR ("Failed to parse recording bin: %s\n", err->message);
        g_error_free(err);
        return -1;
    }

#ifndef LATE_LINK
    gst_bin_add(GST_BIN(pipeline), recording_bin);
    gst_element_link(input_tee, recording_bin);
    gst_element_sync_state_with_parent(recording_bin);
#endif

    bus = gst_pipeline_get_bus(GST_PIPELINE (pipeline));
    bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
    gst_object_unref(bus);

    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    if (gst_element_get_state (pipeline, NULL, NULL, -1) == GST_STATE_CHANGE_FAILURE) {
        GST_ERROR ("Failed to go into PLAYING state");
        return -1;
    }

#ifdef LATE_LINK
    g_usleep(2000000); //wait 2 seconds

    gst_bin_add(GST_BIN(pipeline), recording_bin);
    gst_element_link(input_tee, recording_bin);
    gst_element_sync_state_with_parent(recording_bin);
#endif

    g_main_loop_run (loop);

    g_source_remove (bus_watch_id);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);

    return 0;
}

Here is a level-4 runtime log: gst-log.txt (115.1 KB). I’ll appreciate any help; it would be great if a relevant Nvidia engineer could reproduce this issue.

Thank you!

Hi,
This use-case is not verified, so it is possible it does not work. We suggest sending the frame data to an appsink, so that you can decide when to begin encoding and create another pipeline. Please check this sample:
Starvation (?) of gstreamer threads - #12 by DaneLLL

Or you can check the source code of gst-v4l2 and customize the nvv4l2h264enc plugin to handle this use-case. For further suggestions, it would need other users to share their experience.

Hello @DaneLLL, thank you for your reply.

Well, nvv4l2h264enc is an Nvidia GStreamer plugin, so it must follow the GStreamer guidelines for plugins, and it certainly must not halt or crash a pipeline, which is about the worst thing that can happen.

May I ask what you mean by “This use-case is not verified”? If you check the GStreamer documentation, this use-case is widely used; GStreamer was designed for exactly these cases, which is why it is so powerful and why Nvidia chose it as the infrastructure for video processing, DeepStream, etc. Companies choose Nvidia hardware for its performance, which must be guaranteed by the backing software.

I’ll be grateful if you can forward that minimal test-case for reproduction, debugging and hopefully resolution.

Regarding your suggestion to use an appsink, would this require a buffer copy? I guess it would, since the content of the passed frames will change as they travel through the main pipeline, so there is a performance hit with this workaround. Am I right?

Thank you!

Hi,
We will check with our teams and see if we can support the use-case in a later release. For now, please send the buffers to an appsink and create another pipeline for video encoding. You can send NvBufSurface via appsrc to the encoder without a buffer copy. Please refer to
NvUtils NvBufSurface to gstreamer - #5 by DaneLLL

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.