Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01    Driver Version: 440.33.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208…    On   | 00000000:01:00.0  On |                  N/A |
| 37%   66C    P2   154W / 250W |   6938MiB / 10997MiB |     94%      Default |
+-------------------------------+----------------------+----------------------+
• DeepStream Version
DeepStream 4.0.2
• TensorRT Version
TensorRT 6.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
440.33.01
I added a runtime source add/delete function to deepstream-app. However, after several rounds of adding and deleting (6 sources added and deleted, repeated for 6 rounds), the program crashed.
The error information is shown below:
Cuda failure: status=2
Error(-1) in buffer allocation
** (deepstream-app:11288): CRITICAL **: 15:09:24.333: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
ERROR from src_bin_muxer: Failed to allocate the buffers inside the Nvstreammux output pool
Debug info: gstnvstreammux.c(566): gst_nvstreammux_alloc_output_buffers (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
Quitting
While the program is running, GPU memory usage also keeps increasing until it is exhausted. It seems the corresponding streammux buffers are not released after a source is deleted. Since streammux is not open source, I can't dig any deeper. I have implemented all the functions in:
which follow the logic of your runtime add/delete example on GitHub.
Could you please help find out where the problem is?
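For reference, my source-release logic follows the sequence from that example. A simplified sketch is below; the function name and arguments here are illustrative, not my exact code. The key steps are stopping the source bin, releasing the nvstreammux request sink pad (skipping this is a known way to leak the muxer's per-source buffers), and removing the bin from the pipeline:

```c
#include <gst/gst.h>

/* Release one source at runtime. `source_id` matches the "sink_%u"
 * request pad that was obtained from nvstreammux when the source
 * was added. */
static void
stop_release_source (GstElement *pipeline, GstElement *streammux,
                     GstElement *source_bin, guint source_id)
{
  gchar pad_name[16];
  GstPad *sinkpad;

  /* Stop the source bin first so no more buffers are pushed. */
  gst_element_set_state (source_bin, GST_STATE_NULL);

  /* Look up and release the muxer's request sink pad for this
   * source; otherwise nvstreammux keeps its buffers alive. */
  g_snprintf (pad_name, sizeof (pad_name), "sink_%u", source_id);
  sinkpad = gst_element_get_static_pad (streammux, pad_name);
  gst_pad_send_event (sinkpad, gst_event_new_flush_stop (FALSE));
  gst_element_release_request_pad (streammux, sinkpad);
  gst_object_unref (sinkpad);

  /* Finally remove the source bin from the pipeline. */
  gst_bin_remove (GST_BIN (pipeline), source_bin);
}
```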