MemAvailable decreasing on JetPack 6.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson
• DeepStream Version : 7.0
• JetPack Version (valid for Jetson only) : 6.0
• TensorRT Version : 8.6.2
• Issue Type( questions, new requirements, bugs) : bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I ran deepstream-test5-app on JetPack 6.0 twice, once with the resnet18_trafficcamnet model and once with the yolov3 model.
While deepstream-test5-app was running, I also logged the memory behavior with /proc/meminfo and the ps command.
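
(The raw values come from commands along these lines; the actual logging scripts are attached below.)

grep MemAvailable /proc/meminfo
ps -o pid,rss,vsz,comm -p "$(pgrep -f deepstream-test5-app | head -n 1)"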

While running deepstream-test5-app with resnet18_trafficcamnet, MemAvailable in /proc/meminfo decreased by about 175 MB in 1 hour.
Over the same hour, the RSS reported by ps for deepstream-test5-app increased by only about 17 KB.
resnet18_trafficcamnet_logs.zip (106.5 KB)

Likewise, while running deepstream-test5-app with yolov3, MemAvailable in /proc/meminfo decreased by about 459 MB in 1 hour.
Again, the RSS reported by ps increased by only about 17 KB over the same hour.
yolov3_logs.zip (20.9 KB)

(While deepstream-test5-app is running, no processes other than the memory logging scripts are running.)

  1. Why is MemAvailable decreasing?
  2. Why do the RSS of deepstream-test5-app and MemAvailable change by such different amounts?

Commands to log memory

# Edit the directory paths in the crontab file, then install it
cat crontab | crontab
nohup bash ./ps_5sec.sh psinfo.log &

log_scripts.zip (1.3 KB)

Excluding the date entries, the MemAvailable value is at position 3 in meminfo.log.
Excluding the date entries, the RSS value is at position 6 in psinfo.log.
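
(The actual scripts are in log_scripts.zip; a minimal sketch of the ps logging loop, under my assumptions about the log format, could look like this:)

#!/bin/bash
# ps_5sec.sh (sketch) -- append a timestamped ps sample every 5 seconds.
# Usage: nohup bash ./ps_5sec.sh psinfo.log &
LOG="${1:-psinfo.log}"
while true; do
    date >> "$LOG"
    # in ps aux output, RSS is the 6th column
    ps aux | grep '[d]eepstream-test5-app' >> "$LOG"
    sleep 5
done
# meminfo.log is produced analogously via cron, e.g.:
#   date >> meminfo.log; grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo >> meminfo.log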


Container used when running deepstream-test5-app with the resnet18_trafficcamnet model

nvcr.io/nvidia/deepstream:7.0-samples-multiarch

Commands used when running deepstream-test5-app with the resnet18_trafficcamnet model

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/configs/
# edit the configuration file to use RTSP sources
vi test5_config_file_src_infer.txt
/opt/nvidia/deepstream/deepstream/bin/deepstream-test5-app -t -c test5_config_file_src_infer.txt

test5_config_file_src_infer.txt used when running deepstream-test5-app with the resnet18_trafficcamnet model

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2018-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0 # change to disable
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4 # change to RTSP
# change URI
uri=rtsp://<IP Address>:554/test.mpeg4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4 # change to RTSP
# change URI
uri=rtsp://<IP Address>:554/test.mpeg4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1 # change to fakesink
sync=0 # change to disable
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0 # change to disable
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2 # change to 2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=2 # change to 2
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=0 # change to disable
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
gpu-id=0
display-tracking-id=1

[tests]
file-loop=0

Container used when running deepstream-test5-app with the yolov3 model

nvcr.io/nvidia/deepstream:7.0-triton-multiarch

Commands used when running deepstream-test5-app with the yolov3 model

cd /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo
export CUDA_VER=12.2
make -C nvdsinfer_custom_impl_Yolo
bash ./prebuild.sh
# edit the configuration file to use RTSP sources
vi deepstream_app_config_yoloV3.txt
/opt/nvidia/deepstream/deepstream/bin/deepstream-test5-app -t -c deepstream_app_config_yoloV3.txt
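
(For reference, config_infer_primary_yoloV3.txt wires in the custom parser library built above. From memory of the shipped objectDetector_Yolo sample, the relevant keys look roughly like this; the file in the container is authoritative:)

parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet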

deepstream_app_config_yoloV3.txt used when running deepstream-test5-app with the yolov3 model

####################################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
####################################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0 # change to disable
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4 # change to RTSP
# change URI
uri=rtsp://<IP Address>:554/test.mpeg4
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1] # add source1 group
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4 # change to RTSP
# change URI
uri=rtsp://<IP Address>:554/test.mpeg4
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1 # change to fakesink
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2 # change to 2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=labels.txt
batch-size=2 # change to 2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt

[tracker]
enable=0 # change to disable
# For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
gpu-id=0
display-tracking-id=1

[tests]
file-loop=0

When I run deepstream-test5-app with yolov3 and 3 sources, the MemAvailable decrease grows from about 459 MB to about 650 MB in 1 hour.
3sources_yolov3_logs.zip (82.1 KB)

Increasing the number of sources also appears to make MemAvailable decrease faster.

I am checking.

Please refer to this topic.

  1. Could you provide more logs from nvmemstat? Thanks! It can record the memory usage at intervals.
  2. Could you provide valgrind logs using this method? Valgrind can give memory-leak details. For example, see the commands below.
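
(Assumed invocations; adjust the paths and process name to your setup.)

# nvmemstat: sample process memory usage at intervals
sudo python3 nvmemstat.py -p deepstream-test5-app > nvmemstat.log

# valgrind: collect memory-leak details for the app
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=definite \
    --log-file=valgrind.log \
    /opt/nvidia/deepstream/deepstream/bin/deepstream-test5-app -t -c <config file>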

These files are the valgrind and nvmemstat logs from running deepstream-test5-app with yolov3.
nvmemstat.log (1.2 MB)
valgrind.log (129.7 KB)

In addition, I am running with the patch from the link below applied.

When I run deepstream-test5-app with a file source, MemAvailable no longer decreases,
but when I run it with an RTSP source, the MemAvailable decrease is not fixed.
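
(For reference, the only difference between the two runs is the [source] group, roughly as below; the file URI is illustrative.)

[source0]
enable=1
# file-source run (no MemAvailable decrease): type=3 MultiURI with a local file
#type=3
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
# RTSP run (MemAvailable keeps decreasing): type=4 with the camera URI
type=4
uri=rtsp://<IP Address>:554/test.mpeg4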

  1. About the valgrind.log you shared: the “definitely lost” leak is only about 93 KB, which is minimal. How long did you test?
  2. About the fix above, this is the original topic; the user there confirmed the fix works.
  3. About the RTSP test issue, it seems related to the RTSP source. Could you check whether running the following command still shows the same issue? Thanks!
gst-launch-1.0 rtspsrc location=rtsp://xxx  latency=100 ! rtph264depay ! fakesink
  1. I tested for about 2 hours.
  2. I checked.
    The fixes above have been applied, but MemAvailable still decreases.
  3. I tested with the above command,
    but the same issue did not clearly occur.
    There seem to be periods when MemAvailable stays constant and moments when it decreases.
    rtspsrc.zip (223.2 KB)
  1. The only difference is the source part. You can use this FAQ to dump the two pipelines, and you can use a gst-launch command line to simulate the application. Please refer to this pipeline:
gst-launch-1.0 rtspsrc latency=100 location=rtsp://admin:aIlab1234@192.168.1.101/h264/ch1/main/av_stream ! rtph264depay ! queue ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer ! fakesink
  1. Please simplify the pipeline to narrow down this issue. For example,
    please check whether the “rtspsource+fakesink” pipeline still runs into the issue.
  2. If the “rtspsource+fakesink” pipeline still runs into the issue: the only NV plugin in the RTSP source is nvv4l2decoder, so to narrow this down further, please check whether a software decoder still runs into the same issue. Please refer to the following command:
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! avdec_h264 ! fakesink
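
(For comparison, the hardware-decode counterpart of that file pipeline would be something like the following; the difference under test is avdec_h264 vs. nvv4l2decoder.)

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! fakesink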

I have found that memory leaks occur in deepstream-test5-app when RTSP reconnections happen.
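
(For context, RTSP reconnection in deepstream-test5-app is controlled per source group via the rtsp-reconnect-interval-sec key; a source group set up to reconnect would look roughly like this, with illustrative values.)

[source0]
enable=1
type=4
uri=rtsp://<IP Address>:554/test.mpeg4
# force a reconnection if no data is received for this many seconds (0 = disabled)
rtsp-reconnect-interval-sec=10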

Is the problem in this topic resolved in DeepStream 7.0?

Deepstream-app may cause mem leak with source reconnecting - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks! The issue in the above link was fixed; DS 7.0 does not have this issue. You can use valgrind to verify.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.