nvstreammux_alpha consumes memory very quickly

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
V100 32G
• DeepStream Version
5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
7.2.3
• NVIDIA GPU Driver Version (valid for GPU only)
460
• Issue Type( questions, new requirements, bugs)
Question / possible bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I first used the code below to request all of the nvstreammux sink pads (and, symmetrically, the nvstreamdemux src pads) up front.

for (guint i = 0; i < MAX_NUM_STREAMS; i++)
{
    gchar mux_sink_pad_name[32];
    g_snprintf(mux_sink_pad_name, 31, "sink_%u", i);
    /* Request the sink pad; the pad stays on the element, so only
       our own reference is dropped here. */
    GstPad *mux_sink_pad = gst_element_get_request_pad(this->app_context_.main_bin.nvstreammux, mux_sink_pad_name);
    gst_object_unref(mux_sink_pad);
}

for (guint i = 0; i < MAX_NUM_STREAMS; i++)
{
    gchar demux_src_pad_name[32];
    g_snprintf(demux_src_pad_name, 31, "src_%u", i);
    /* Same pattern for the demuxer src pads. */
    GstPad *demux_src_pad = gst_element_get_request_pad(this->app_context_.main_bin.nvstreamdemux, demux_src_pad_name);
    gst_object_unref(demux_src_pad);
}
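
Requesting all the pads ahead of time works because a request pad stays on the element until gst_element_release_request_pad is called, so it can later be retrieved by name with gst_element_get_static_pad.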

When a new stream arrives, I dynamically link the source bin's src pad to the matching nvstreammux sink pad, retrieving the already-requested pad with gst_element_get_static_pad, as sketched below.
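
A minimal sketch of that linking step; source_bin (an element with a ghost "src" pad) and the slot index slot are placeholders for the real objects in my app:

/* Hypothetical linking step: source_bin and slot stand in for the real objects. */
gchar name[32];
g_snprintf(name, sizeof(name), "sink_%u", slot);

/* Both pads already exist: the mux pad was requested at startup,
   so gst_element_get_static_pad can find it by name. */
GstPad *src_pad = gst_element_get_static_pad(source_bin, "src");
GstPad *sink_pad = gst_element_get_static_pad(this->app_context_.main_bin.nvstreammux, name);

if (gst_pad_link(src_pad, sink_pad) != GST_PAD_LINK_OK)
    g_printerr("Failed to link source bin to %s\n", name);

gst_object_unref(src_pad);
gst_object_unref(sink_pad);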

I tested three configurations:

• MAX_NUM_STREAMS = 1: after the pads were linked, memory usage grew very quickly, at roughly 300 MB/s.
• MAX_NUM_STREAMS = 4, with all 4 streams added and linked: behavior was normal, no leak or excessive memory use.
• MAX_NUM_STREAMS = 4, with fewer than 4 streams added and linked: memory again grew very quickly, just like the first case.

While adding a stream, the following warning sometimes appeared:

GStreamer-WARNING **: 20:19:11.825: gstpad.c:5203:store_sticky_event:nvstreamdemux0:src_3 Sticky event misordering, got ‘segment’ before ‘caps’
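
As far as I understand, GStreamer expects sticky events in the order stream-start, caps, segment, so this looks like src_3 received a segment event before any caps were set on it, which may be related to the pads that were requested but never linked.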

I also ran valgrind to locate the memory problem. The relevant loss records are below.

==78925== 20,209,104 bytes in 421,023 blocks are indirectly lost in loss record 7,825 of 7,836
==78925== at 0x4C33B25: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==78925== by 0x4E8FC30: g_malloc0 (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x5B0DDF2: nvds_create_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0DCB3: nvds_create_user_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0B610: nvds_create_batch_meta (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x11DD9021: gst_nvstreammux_src_push_loop(void*) (gstnvstreammux.cpp:966)
==78925== by 0x5200278: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1405.0)
==78925== by 0x4EB2C6F: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EB22A4: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x56EB6DA: start_thread (pthread_create.c:463)
==78925==
==78925== 20,211,480 bytes in 842,145 blocks are indirectly lost in loss record 7,826 of 7,836
==78925== at 0x4C31B0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==78925== by 0x4E8FBD8: g_malloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EA7A85: g_slice_alloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4E85E25: g_list_prepend (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x5B0DE3A: nvds_create_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0DB13: nvds_create_obj_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0B58C: nvds_create_batch_meta (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x11DD9021: gst_nvstreammux_src_push_loop(void*) (gstnvstreammux.cpp:966)
==78925== by 0x5200278: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1405.0)
==78925== by 0x4EB2C6F: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EB22A4: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x56EB6DA: start_thread (pthread_create.c:463)
==78925==
==78925== 20,696,832 bytes in 53,895 blocks are indirectly lost in loss record 7,827 of 7,836
==78925== at 0x4C31B0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==78925== by 0x11DDB7D0: GstBatchBufferWrapper::GstBatchBufferWrapper(_GstNvStreamMux*, unsigned int, bool) (gstnvstreammux_impl.h:97)
==78925== by 0x11DD9003: gst_nvstreammux_src_push_loop(void*) (gstnvstreammux.cpp:965)
==78925== by 0x5200278: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1405.0)
==78925== by 0x4EB2C6F: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EB22A4: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x56EB6DA: start_thread (pthread_create.c:463)
==78925==
==78925== 23,578,352 bytes in 421,042 blocks are indirectly lost in loss record 7,828 of 7,836
==78925== at 0x4C33B25: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==78925== by 0x4E8FC30: g_malloc0 (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x5B0DDF2: nvds_create_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0DB9F: nvds_create_classifier_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0B5B8: nvds_create_batch_meta (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x11DD9021: gst_nvstreammux_src_push_loop(void*) (gstnvstreammux.cpp:966)
==78925== by 0x5200278: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1405.0)
==78925== by 0x4EB2C6F: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EB22A4: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x56EB6DA: start_thread (pthread_create.c:463)
==78925==
==78925== 30,436,056 (10,584 direct, 30,425,472 indirect) bytes in 441 blocks are definitely lost in loss record 7,829 of 7,836
==78925== at 0x4C31B0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==78925== by 0x4E8FBD8: g_malloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EA7A85: g_slice_alloc (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4E85E25: g_list_prepend (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x5B0DE3A: nvds_create_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0D9FB: nvds_create_frame_meta_pool (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x5B0B560: nvds_create_batch_meta (in /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_meta.so)
==78925== by 0x11DD9021: gst_nvstreammux_src_push_loop(void*) (gstnvstreammux.cpp:966)
==78925== by 0x5200278: ??? (in /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0.1405.0)
==78925== by 0x4EB2C6F: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x4EB22A4: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.5600.4)
==78925== by 0x56EB6DA: start_thread (pthread_create.c:463)
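
Every large loss record traces back to nvds_create_batch_meta called from gst_nvstreammux_src_push_loop (gstnvstreammux.cpp:966), so it looks like the batch meta pools allocated for each pushed batch are never freed.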

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Do you mean there is a memory leak in some cases? Can you upload your test app?

Sorry, this is a company-internal project, so I cannot share the app, but I did confirm that nvstreammux_alpha causes the memory problem.
I switched back to the old nvstreammux by setting the environment variable "USE_NEW_NVSTREAMMUX" to "no".
After that, the memory usage was normal.
Could you help look into this problem? Thanks

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Can you provide the test application to reproduce the problem?