Kmalloc-128 memory continues to rise

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson AGX Orin
• DeepStream Version: DeepStream 6.4
• JetPack Version: 6.0
• TensorRT Version: 8.6.2.3

I ran my DeepStream program for two days and found that memory usage kept rising continuously. When I investigated, I found that kmalloc-128 had eaten 33-35 GiB.

root@tegra-ubuntu:~# slabtop -o 

Active / Total Objects (% used) : 288815179 / 290684968 (99.4%) 
Active / Total Slabs (% used) : 9023712 / 9023712 (100.0%) 
Active / Total Caches (% used) : 108 / 145 (74.5%) 
Active / Total Size (% used) : 36643683.43K / 36840916.80K (99.5%) 
Minimum / Average / Maximum Object : 0.01K / 0.13K / 9.77K

     OBJS    ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
282144192 282143979  99%    0.12K 8817006       32 35268024K kmalloc-128
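
As a sanity check, the numbers are consistent: 282144192 objects × 128 bytes = 36,114,456,576 bytes = 35268024K, i.e. roughly 33.6 GiB, which matches the 33-35 GiB mentioned above.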

Later on, I watched /proc/slabinfo and saw the object count increasing continuously:

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283515432 283515616    128   32    1 : tunables    0    0    0 : slabdata 8859863 8859863      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283515942 283516000    128   32    1 : tunables    0    0    0 : slabdata 8859875 8859875      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283516223 283516256    128   32    1 : tunables    0    0    0 : slabdata 8859883 8859883      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283516542 283516704    128   32    1 : tunables    0    0    0 : slabdata 8859897 8859897      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283516920 283517120    128   32    1 : tunables    0    0    0 : slabdata 8859910 8859910      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283517341 283517568    128   32    1 : tunables    0    0    0 : slabdata 8859924 8859924      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283517459 283517664    128   32    1 : tunables    0    0    0 : slabdata 8859927 8859927      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283517751 283517824    128   32    1 : tunables    0    0    0 : slabdata 8859932 8859932      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283517971 283518240    128   32    1 : tunables    0    0    0 : slabdata 8859945 8859945      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283518320 283518336    128   32    1 : tunables    0    0    0 : slabdata 8859948 8859948      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283518866 283519104    128   32    1 : tunables    0    0    0 : slabdata 8859972 8859972      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283519412 283519552    128   32    1 : tunables    0    0    0 : slabdata 8859986 8859986      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283520402 283520576    128   32    1 : tunables    0    0    0 : slabdata 8860018 8860018      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283520555 283520768    128   32    1 : tunables    0    0    0 : slabdata 8860024 8860024      0

root@tegra-ubuntu:/sys/kernel/debug/tracing# grep -A0 kmalloc-128 /proc/slabinfo
kmalloc-128       283520953 283521088    128   32    1 : tunables    0    0    0 : slabdata 8860034 8860034      0
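
For anyone reproducing this, a simple loop like the one below (a sketch; the 60-second interval and log path are arbitrary choices) records the trend instead of re-running grep by hand:

# log the kmalloc-128 line once a minute with a timestamp (run as root)
while true; do
    echo "$(date '+%F %T') $(grep '^kmalloc-128 ' /proc/slabinfo)" >> /tmp/kmalloc128.log
    sleep 60
done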

How should I solve this?

Which sample program are you running, Python or native? Have you used Valgrind to check for memory leaks in your application?

The kmalloc-128 figure alone is not enough to determine whether this is a memory leak or memory fragmentation.
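
Note that since kmalloc-128 is a kernel-side slab cache, the kernel's kmemleak detector can also help attribute the allocations, if it is available (this assumes a kernel built with CONFIG_DEBUG_KMEMLEAK, which may not be enabled on a stock JetPack kernel):

# scan for suspected kernel leaks and dump their stack traces
mount -t debugfs none /sys/kernel/debug 2>/dev/null
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak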

On the application side, please use Valgrind to check for memory leaks:

# python
PYTHONMALLOC=malloc valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --suppressions=/usr/lib/valgrind/python3.supp python3 app.py > leak2.log 2>&1

# native
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all your_app > leak1.log 2>&1
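
After the run finishes, the summary at the end of the log is the first thing to check, e.g.:

# show the leak summary section of the log
grep -A 8 "LEAK SUMMARY" leak1.log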

@junshengy

I am using the deepstream-test3 sample source code.

My modified pipeline:

 nvurisrcbin→streammux→queue1→nvinfer→queue2→fakesink

When I removed the nvinfer element:

 nvurisrcbin→streammux→queue1→fakesink

Everything is normal, and the kmalloc-128 count described above stops increasing.
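
For reference, both variants can be reproduced from the command line with something like the following (the URI, resolution, and nvinfer config path are placeholders, not my actual settings):

# with nvinfer (kmalloc-128 keeps growing)
gst-launch-1.0 nvurisrcbin uri=file:///path/to/video.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! nvinfer config-file-path=/path/to/config_infer.txt ! queue ! fakesink

# without nvinfer (stable)
gst-launch-1.0 nvurisrcbin uri=file:///path/to/video.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! fakesink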

Next, I will run the commands above and post the output.

Please refer to this topic, or directly upgrade to JetPack 6.1 and DeepStream 7.1 on Orin.

@junshengy
This is the log output from running your command:

@junshengy
The main problem now is that I cannot determine whether this is a version issue. I need your help checking the logs above to see whether they indicate a memory leak.

Only the "definitely lost" memory needs to be checked, and there are no obvious memory leaks at the application layer.

Please test on a newer JetPack version to see whether the leak is caused by the driver.
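
To confirm which L4T/JetPack release is actually installed before and after upgrading (standard checks on Jetson images):

cat /etc/nv_tegra_release
dpkg -l | grep nvidia-jetpack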

==765== LEAK SUMMARY:
==765==    definitely lost: 17,348 bytes in 25 blocks
==765==    indirectly lost: 556 bytes in 20 blocks
==765==      possibly lost: 75,453,452 bytes in 1,699 blocks
==765==    still reachable: 324,736,600 bytes in 665,049 blocks
==765==                       of which reachable via heuristic:
==765==                         stdstring          : 4,621,806 bytes in 70,402 blocks
==765==                         newarray           : 12,440 bytes in 16 blocks
==765==                         multipleinheritance: 2,608 bytes in 2 blocks
==765==         suppressed: 0 bytes in 0 blocks