Customized Deepstream app crashes gst_memory_get_sizes: assertion 'mem != NULL' failed

Hello,
I am running the DeepStream 5.1 development Docker image on an RTX 2080 Ti. My application reads from 20 sources; the pipeline is as follows:

uridecodebin → nvstreammux → nvinfer → nvtracker → nvdsanalytics → nvvideoconvert → nvmultistreamtiler → nvdsosd → nveglglessink

The application keeps crashing with this message:

GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
[2:59 PM] deepstream_cpp exited with code 139

Any idea how to fix it?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

  • RTX 2080 Ti
  • DeepStream 5.1 development Docker image
  • Locally I don’t have DeepStream installed; I have TensorRT version 7.1.3
  • I have CUDA 11.1 locally
  • Driver version: 470.57.02
  • My application reads from 20 sources and is based on deepstream-test4-app; the pipeline is as follows:

uridecodebin → nvstreammux → nvinfer → nvtracker → nvdsanalytics → nvvideoconvert → nvmultistreamtiler → nvdsosd → nveglglessink

  • The issue is that the pipeline keeps crashing with this error message:
GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
[2:59 PM] deepstream_cpp exited with code 139

Will fewer cameras (e.g. 4 cameras) give the correct result?

I didn’t try that. If I gave the impression that the pipeline is not running at all, I wasn’t clear: the pipeline runs for at least 3-4 hours with the 20 sources and then crashes with this message. The time before crashing is unpredictable; sometimes it stays up for 8-10 hours, sometimes it crashes after 2-3 hours.

Have you monitored the memory usage of the system while running the case? The error you show is a basic error in the GStreamer core. Nobody can tell you the reason from just this piece of log.

Can you run the case with our deepstream-app sample?

Yes, I have been recording the RAM usage and it is stable; it doesn’t increase over time.
And yes, I can run the deepstream-app sample because I am using the development image.
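For reference, the recording can be done with a simple loop. This is a sketch of my own, not from the thread: `deepstream_cpp` is the process name taken from the crash log above, and `pgrep`/`ps` are assumed to be available in the container.

```shell
# Sketch: append a timestamped RSS sample (in kB) for the DeepStream process
# once per minute, for as long as the process is alive.
log_rss() {
    echo "$(date +%s) $(ps -o rss= -p "$1" | tr -d ' ')"
}
while pid=$(pgrep -x deepstream_cpp); do
    log_rss "$pid" >> rss_log.txt
    sleep 60
done
```

A flat RSS curve in `rss_log.txt` over several hours would support the claim that there is no steady leak in host memory (GPU memory would still need `nvidia-smi` to check).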

So the failure can be reproduced with deepstream-app? If so, please post your deepstream-app configuration file and other related configuration files for us to reproduce the problem.

I ran deepstream-app with this config file, changing the URIs to my own RTSP stream URLs, and it did not crash.
config.txt (10.0 KB)

I have a question:
is it essential to add a queue after each plugin when reading from multiple sources?
Could this be what is causing the problem?

Hello, I ran deepstream-test3-app with the 20 sources and it crashed with this message:

gst_buffer_get_sizes_range: assertion 'GST_IS_BUFFER (buffer)' failed

knowing that my code for reading multiple sources is the same as the code in deepstream-test3-app,

but I wrap the pipeline construction and startup in a class called visionPipeline. This class has one public method called start() and a private method called _make_pipeline().

This is the code for _make_pipeline():

GstElement * visionPipeline::_make_pipeline()
{
    GstElement *pipeline = NULL,
                *sink = NULL, *pgie = NULL, *nvtracker = NULL, 
                *nvdsanalytics = NULL, *tiler = NULL, *nvvidconv = NULL,
                *nvosd = NULL, *nvstreammux = NULL;
    GstElement *queue = NULL, *queue2 = NULL, *queue3 = NULL, *queue4 = NULL, 
                *queue5 = NULL, *queue6 = NULL, *queue7 = NULL;
    guint tiler_rows, tiler_columns;
    bool live_source = false;

    long width = 1280;
    long height = 720;
    std::cout<<"Display off status: "<<_display_off<<std::endl;

    pipeline = gst_pipeline_new("dstest4-pipeline");
    nvstreammux = gst_element_factory_make("nvstreammux", "nvstreammux");

    if (!pipeline || !nvstreammux)
    {
        g_printerr("(Line=%d) One element could not be created. Exiting.\n", __LINE__);
        exit(-1);
    }

    /* Add the muxer only after the NULL check above */
    gst_bin_add(GST_BIN(pipeline), nvstreammux);

    for(size_t i = 0; i < _num_sources; i++)
    {
        std::size_t found = std::string(_streams[i]).find("rtsp");
        if(found!=std::string::npos)
        {
            std::cout << ">> found live source" << std::endl;
            live_source = true;
            break;
        }
    }

    for(size_t i = 0; i < _num_sources; i++)
    {
        GstPad *sinkpad, *srcpad;
        gchar pad_name[16] = {};
        GstElement *source_bin = _create_source_bin(i , _streams[i]);
        if (!source_bin)
        {
            g_printerr("Failed to create source bin. Exiting.\n");
            exit(-1);
        }
        gst_bin_add(GST_BIN(pipeline), source_bin);

        g_snprintf(pad_name, 15, "sink_%u", (unsigned int)i);
        sinkpad = gst_element_get_request_pad(nvstreammux, pad_name);
        if (!sinkpad)
        {
            g_printerr("nvStreammux request sink pad failed. Exiting.\n");
            exit(-1);
        }

        srcpad = gst_element_get_static_pad(source_bin, "src");
        if (!srcpad)
        {
            g_printerr("Failed to get src pad of source bin. Exiting.\n");
            exit(-1);
        }

        if (gst_pad_link(srcpad, sinkpad) != GST_PAD_LINK_OK)
        {
            g_printerr("Failed to link source bin to stream muxer. Exiting.\n");
            exit(-1);
        }

        gst_object_unref(srcpad);
        gst_object_unref(sinkpad);
    }

    /* Use nvinfer to run inferencing on decoder's output,
    * behaviour of inferencing is set through config file */
    pgie = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
    if (!pgie)
    {
        g_printerr("nvinfer could not be created. Exiting.\n");
        exit(-1);
    }

    nvtracker = gst_element_factory_make("nvtracker", "tracker");

    if (!nvtracker)
    {
        g_printerr("nvtracker could not be created. Exiting.\n");
        exit(-1);
    }

    nvdsanalytics = gst_element_factory_make("nvdsanalytics", "nvdsanalytics");
    if (!nvdsanalytics)
    {
        g_printerr("nvdsanalytics could not be created. Exiting.\n");
        exit(-1);
    }

    tiler = gst_element_factory_make("nvmultistreamtiler", "nvtiler");
    if (!tiler)
    {
        g_printerr("nvmultistreamtiler could not be created. Exiting.\n");
        exit(-1);
    }
    std::cout<<"Tiler made\n";

    /* Use convertor to convert from NV12 to RGBA as required by nvosd */
    nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");
    if (!nvvidconv)
    {
        g_printerr("nvvideoconvert could not be created. Exiting.\n");
        exit(-1);
    }

    nvosd = gst_element_factory_make("nvdsosd", "nv-onscreendisplay");
    if (!nvosd)
    {
        g_printerr("nvosd could not be created. Exiting.\n");
        exit(-1);
    }
    std::cout<<"osd made\n";

    /* Create queues */
    queue = gst_element_factory_make("queue", "nvtee-que1");
    queue2 = gst_element_factory_make("queue", "nvtee-que2");
    queue3 = gst_element_factory_make("queue", "nvtee-que3");
    queue4 = gst_element_factory_make("queue", "nvtee-que4");
    queue5 = gst_element_factory_make("queue", "nvtee-que5");
    queue6 = gst_element_factory_make("queue", "nvtee-que6");
    queue7 = gst_element_factory_make("queue", "nvtee-que7");
    if (!queue || !queue2 || !queue3 || !queue4 || !queue5 || !queue6 || !queue7)
    {
        g_printerr("queue could not be created. Exiting.\n");
        exit(-1);
    }

   
    std::cout<<"Creating nveglglessink\n";
    sink = gst_element_factory_make("nveglglessink", "nvvideo-renderer");
    if(!sink)
    {
        g_printerr("sink could not be created. Exiting.\n");
         exit(-1);
    }
    std::cout<<"eglsink made\n";


    g_object_set(G_OBJECT(nvstreammux), "batch-size", _num_sources, NULL);

    g_object_set(G_OBJECT(nvstreammux), "width", _muxer_width, "height",
                _muxer_height,
                "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, "live-source", live_source, NULL);
    

    /* Set all the necessary properties of the nvinfer element,
    * the necessary ones are : */
    g_object_set(G_OBJECT(pgie),
                "config-file-path", strdup(_pgie_config_path.c_str()), NULL);
    g_object_set(G_OBJECT(pgie), "batch-size", _num_sources, NULL);

    if (!_set_tracker_properties(nvtracker))
    {
        g_printerr("Failed to set tracker properties. Exiting.\n");
        exit(-1);
    }
    g_object_set(G_OBJECT(nvtracker), "display-tracking-id", TRUE, NULL);

    /* Configure the nvdsanalytics element for using the particular analytics config file*/
    g_object_set(G_OBJECT(nvdsanalytics),
                "config-file", strdup(_analytics_config_path.c_str()),
                NULL);

    tiler_rows = (guint)sqrt(_num_sources);
    tiler_columns = (guint)ceil(1.0 * _num_sources / tiler_rows);
    /* we set the tiler properties here */
    g_object_set(G_OBJECT(tiler), "rows", tiler_rows, "columns", tiler_columns, 
                                "width", width, "height", height, NULL);
    g_object_set(G_OBJECT(nvosd), "display-text", TRUE, NULL);
    g_object_set(G_OBJECT(nvosd), "process-mode", 1, NULL);
    g_object_set(G_OBJECT(sink), "qos", 0, NULL);
    g_object_set(G_OBJECT(sink), "sync", false, NULL);
    

    gst_bin_add_many(GST_BIN(pipeline), queue,
                    pgie, queue2, nvtracker, queue3, nvdsanalytics,
                    queue4 , nvvidconv, queue5, tiler, queue6, nvosd, queue7, sink, NULL);
    std::cout<<"Render elements added to pipeline\n";

    if (!gst_element_link_many(nvstreammux, queue, pgie, queue2, nvtracker, 
                                queue3, nvdsanalytics, queue4, nvvidconv, 
                                queue5, tiler, queue6, nvosd, queue7, sink, NULL))
    {
        g_printerr("Elements could not be linked. Exiting.\n");
        exit(-1);
    }

    return pipeline;
}

And this is the code for the start() method:

void visionPipeline::start()
{
    GMainLoop *loop = NULL;
    GstElement *pipeline = NULL;
    GstBus *bus = NULL;
    guint bus_watch_id;

    loop = g_main_loop_new (NULL, FALSE);

    pipeline = _make_pipeline();

    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
    gst_object_unref (bus);

    std::cout<<"Now Playing: "<<std::endl;
    for(size_t i = 0; i < _streams.size(); i++)
    {
        std::cout<<_streams[i]<<std::endl;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_print("Running...\n");

    g_main_loop_run (loop);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    g_print("Deleting pipeline\n");
    gst_object_unref(GST_OBJECT(pipeline));
    g_source_remove (bus_watch_id);
    g_main_loop_unref (loop);
    exit(0);
}

_streams is a std::vector<char*> that holds the URIs, and _num_sources is the size of the _streams vector.

This piece of code keeps crashing, giving the error that I posted:

GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
[2:59 PM] deepstream_cpp exited with code 139

My driver version is 470.57.02.

Do I have a problem with this piece of code? And is this error message related to this issue: Deepstream-app crash with nvbufsurface: NvBufSurfaceSysToHWCopy error - #24 by marmikshah?

You can use the DeepStream SDK FAQ (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) to identify the memory leak.
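Exit code 139 is 128 + 11, i.e. the process died from SIGSEGV. Independently of the FAQ, a generic way to gather more context before the next crash (a sketch of my own; the log path is arbitrary):

```shell
# Raise GStreamer logging and allow a core dump before reproducing the crash.
export GST_DEBUG=3                  # WARNING level for all GStreamer categories
export GST_DEBUG_FILE=/tmp/gst.log  # send the debug log to a file instead of stderr
ulimit -c unlimited                 # permit a core file when the SIGSEGV hits
# then launch the application as usual and open the core file in gdb afterwards
```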

Hello @Fiona.Chen, deepstream-test3-app is crashing with the same error.

Do you mean the original deepstream-test3-app failed? What kind of sources are you using?

@Fiona.Chen Yes, that’s what I mean: the original deepstream-test3-app crashes, but deepstream-app doesn’t. I am using RTSP sources generated from video files by another GStreamer pipeline.

What kind of sources are you using? Local files, rtsp streams or csi camera?

@Fiona.Chen
They are generated from local video files.