Deepstream python app crashes with "gst_memory_get_sizes: assertion 'mem != NULL' failed"

The following error happens after running the app for a few hours:

GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
GStreamer-CRITICAL **: 22:04:01.099: gst_memory_get_sizes: assertion 'mem != NULL' failed
deepstream_cpp exited with code 139
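Exit code 139 is the shell's way of reporting death by signal: codes above 128 encode 128 + the signal number, so 139 means signal 11, i.e. a segmentation fault in native code. A quick way to decode it:

```python
import signal

# A shell exit code above 128 encodes a fatal signal: code = 128 + signo.
code = 139
print(signal.Signals(code - 128).name)  # SIGSEGV: the process segfaulted
```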

Debugging Info

My pipeline is created using the following _make_pipeline() function:

def _make_pipeline(self, pipeline):
        streammux = self._make_element("nvstreammux", "stream-muxer")
        pipeline.add(streammux)
        is_live = False
        for i, uri_name in enumerate(self.source_inputs):
            print(f"Creating source_bin {i}", flush=True)
            if uri_name.startswith("rtsp://"):
                is_live = True

            source_bin = self._create_source_bin(i, uri_name)
            if not source_bin:
                sys.stderr.write("Unable to create source bin\n")

            pipeline.add(source_bin)
            sinkpad = streammux.get_request_pad(f"sink_{i}")
            if not sinkpad:
                sys.stderr.write("Unable to get streammux sink pad\n")
            srcpad = source_bin.get_static_pad("src")
            if not srcpad:
                sys.stderr.write("Unable to get source bin src pad\n")
            srcpad.link(sinkpad)
        
        q1 = self._make_element("queue", "queue1")
        q2 = self._make_element("queue", "queue2")
        pgie = self._make_element("nvinfer", "primary-inference")
        nvtracker = self._make_element("nvtracker", "tracker")
        nvdsanalytics = self._make_element("nvdsanalytics", "nvdsanalytics")
        tee = self._make_element("tee", "tee")
        tiler = self._make_element("nvmultistreamtiler", "nvtiler")
        nvvidconv = self._make_element("nvvideoconvert", "convertor")
        nvosd = self._make_element("nvdsosd", "onscreendisplay")
        sink = self._make_element("nveglglessink", "nvvideo-renderer")
        appsink = self._make_element("appsink", "appsink")
        
        # setting elements properties
        streammux.set_property('width', self.muxer_shape[0])
        streammux.set_property('height', self.muxer_shape[1])
        streammux.set_property('batch-size', len(self.source_inputs))
        streammux.set_property('batched-push-timeout', 40000)  # in microseconds (40 ms)
        if is_live:
            print("At least one of the sources is live", flush=True)
            streammux.set_property('live-source', 1)
        
        pgie.set_property('config-file-path', self.pgie_config_path)
        pgie_batch_size = pgie.get_property("batch-size")
        if pgie_batch_size != len(self.source_inputs):
            print(f"WARNING: Overriding infer-config batch-size {pgie_batch_size} "
                  f"with number of sources {len(self.source_inputs)}", flush=True)
            pgie.set_property("batch-size", len(self.source_inputs))
        
        self._set_tracker_properties(nvtracker)
        
        nvdsanalytics.set_property("config-file", self.analytics_config_path)
        tiler_rows = int(math.sqrt(len(self.source_inputs)))
        tiler_columns = int(math.ceil(len(self.source_inputs) / tiler_rows))
        tiler.set_property("rows", tiler_rows)
        tiler.set_property("columns", tiler_columns)
        tiler.set_property("width", 1280)
        tiler.set_property("height", 720)
        sink.set_property("qos", 0)
        sink.set_property("sync", False)
        nvosd.set_property('process-mode', 2)
        nvosd.set_property('display-text', 1)
        appsink.set_property("emit-signals", True)
        appsink.set_property("async", True)
        appsink.set_property("sync", False)
        
        # adding elements to pipeline; order doesn't matter.
        pipeline.add(q1)
        pipeline.add(q2)
        pipeline.add(pgie)
        pipeline.add(nvtracker)
        pipeline.add(nvdsanalytics)
        pipeline.add(tee)
        pipeline.add(tiler)
        pipeline.add(nvvidconv)
        pipeline.add(nvosd)
        pipeline.add(sink)
        pipeline.add(appsink)
        
        print("Linking elements in the Pipeline \n", flush=True)
        # streammux > pgie > nvtracker > analytics > nvvidconv > tee > q1 > tiler > nvosd > sink
        #                                                               |
        #                                                               `>>> q2 > appsink
        streammux.link(pgie)
        pgie.link(nvtracker)
        nvtracker.link(nvdsanalytics)
        nvdsanalytics.link(nvvidconv)
        nvvidconv.link(tee)
        # linking tee
        tee_srcpad_renderer = tee.get_request_pad("src_0")
        tee_srcpad_appsink = tee.get_request_pad("src_1")
        if not tee_srcpad_renderer or not tee_srcpad_appsink:
            sys.stderr.write("Unable to get tee src pads\n")
        
        q1_sinkpad = q1.get_static_pad("sink")
        q2_sinkpad = q2.get_static_pad("sink")
        if not q1_sinkpad or not q2_sinkpad:
            sys.stderr.write("Unable to get queue sink pads\n")
        
        tee_srcpad_renderer.link(q1_sinkpad)
        tee_srcpad_appsink.link(q2_sinkpad)
        #
        q1.link(tiler)
        tiler.link(nvosd)
        nvosd.link(sink)
        #
        q2.link(appsink)
        
        # attaching cb function to appsink samples
        appsink.connect("new-sample", self._appsink_cb, None)
        
        return pipeline
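For 20 sources, the rows/columns arithmetic above produces a 4x5 tiler grid. The same computation as a standalone sketch:

```python
import math

def tiler_layout(n_sources):
    # Same rows/columns arithmetic as in _make_pipeline() above.
    rows = int(math.sqrt(n_sources))
    columns = int(math.ceil(n_sources / rows))
    return rows, columns

print(tiler_layout(20))  # (4, 5): a 4x5 grid fits all 20 tiles
```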

Other Info

I ran the app with 20 RTSP sources.
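For context, source_inputs is just a list of 20 RTSP URIs (the names below are placeholders, not the real streams), and the is_live detection in _make_pipeline() reduces to an any() over that list:

```python
# Hypothetical stand-in for the 20 RTSP sources used in this run.
source_inputs = [f"rtsp://127.0.0.1/video{i}" for i in range(1, 21)]

# Equivalent of the is_live detection inside _make_pipeline().
is_live = any(uri.startswith("rtsp://") for uri in source_inputs)
print(len(source_inputs), is_live)  # 20 True
```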

Important Note

This has also occurred with the deepstream-test3 app and the same 20 RTSP sources.
However, deepstream-app with the same 20 RTSP sources ran for days without raising any errors.


Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
5.1
• TensorRT Version
I am using the DeepStream devel Docker image.
• NVIDIA GPU Driver Version (valid for GPU only)
465.19.01
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Please help me with this issue ASAP; it's urgent.

Sorry for the late response; we will investigate soon.


Do you mean the C/C++ sample deepstream-test3 will fail with 20 RTSP sources?

Is there a complete log for the failure with deepstream-test3?

Yes, this is what I mean.
We don’t have any logs though.
I ran the app using the nvcr.io/nvidia/deepstream:5.1-21.02-devel Docker image with the following command

sudo docker run --gpus all --rm -it --network=host --ipc=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix deepstream_app

which starts the deepstream_app image that I built on top of the nvcr.io/nvidia/deepstream:5.1-21.02-devel image, with extra steps to build the app and copy in a config file listing the 20 RTSP sources.

How can we reproduce your failure with deepstream-test3?
Your GPU type?

GPU: 2080 TI
I navigated to the test3 app folder inside the Docker image, built the app with CUDA_VER=11.1 make, and then ran the following command with 20 RTSP streams.

./deepstream-test3-app rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 .....

The issue appears after a long time (several hours) of running the app.