What are the detailed specifications of the properties of the new nvstreammux configuration file?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : dGPU, Jetson
• DeepStream Version : 6.0, 6.0.1, 6.1.1
• JetPack Version (valid for Jetson only) : 4.6 (DS 6.0), 5.0.2 (DS 6.1.1)
• TensorRT Version : 8.0.1 (DS 6.0.1), 8.4.1 (DS 6.1.1)
• NVIDIA GPU Driver Version (valid for GPU only) : 525.147.05
• Issue Type (questions, new requirements, bugs) : questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)

I am using the new nvstreammux to run inference on file or live sources with different frame rates, but I am not getting the desired behavior.
For example, when inferring two sources, one at 30 FPS and the other at 15 FPS, I expect the 30 FPS source to be inferred at 30 FPS and produce a 30 FPS output video, and the 15 FPS source to be inferred at 15 FPS and produce a 15 FPS output video.
What are the detailed specifications of the following properties that can be set in the [property] group in the new nvstreammux configuration file?

  • overall-max-fps-n
  • overall-max-fps-d
  • overall-min-fps-n
  • overall-min-fps-d
  • max-same-source-frames
  • max-fps-control

I could not understand the new nvstreammux documentation.

Please refer to this FAQ for how to set the parameters reasonably.

Thank you for sharing the URL.
I followed the FAQ and ran deepstream-test5-app with 15 FPS and 30 FPS video files as input, but found that frames drop in the output video.
Following the documentation below, I set max-latency to a value greater than the frame interval (1/FPS) of the slowest stream, but frames still drop.
Why do the frames drop?
https://docs.nvidia.com/metropolis/deepstream/6.0/dev-guide/text/DS_plugin_gst-nvstreammux2.html#important-tuning-parameters
https://docs.nvidia.com/metropolis/deepstream/6.0/dev-guide/text/DS_plugin_gst-nvstreammux2.html#observing-video-and-or-audio-stutter-low-framerate
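
For reference, this is how I computed the max-latency value (in nanoseconds) from the slowest stream, here 15 FPS:

```shell
# max-latency is specified in nanoseconds; the slowest stream here is 15 FPS,
# so one frame interval is 1/15 s.
FPS_SLOWEST=15
NS_PER_SEC=1000000000
FRAME_INTERVAL_NS=$((NS_PER_SEC / FPS_SLOWEST))  # integer division gives 66666666
echo "$FRAME_INTERVAL_NS"
```

The config below rounds this up to 66666667 so that max-latency is strictly greater than one frame interval.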

I run these commands in the Docker container.

command

cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/configs
mkdir /output
cp path/to/sample_1080p_h264_15fps.mp4 /output/ # copy sample_1080p_h264_15fps.mp4
vi test5_config_file_src_infer.txt # edit
vi config_streammux.txt # make nvstreammux configuration file
USE_NEW_NVSTREAMMUX=yes deepstream-test5-app -c test5_config_file_src_infer.txt

test5_config_file_src_infer.txt

################################################################################
# Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0 # change to disable
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2 # change to type 2
uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[source1] # change to 15FPS video's type and URI
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file:///output/sample_1080p_h264_15fps.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=0 # change to disable
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0 # change to disable
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2] # change to enable
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1
sync=1
bitrate=2000000
output-file=/output/out0.mp4
source-id=0

[sink3]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=1
gpu-id=0
nvbuf-memory-type=0

[sink4] # add group for 15FPS stream output
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1
sync=1
bitrate=2000000
output-file=/output/out1.mp4
source-id=1

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
config-file=./config_streammux.txt # add config-file
sync-inputs=1 # change to enable
max-latency=66666667 # add max-latency

[primary-gie]
enable=1
gpu-id=0
batch-size=2
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=0

config_streammux.txt

[property]
algorithm-type=1
batch-size=2
overall-max-fps-n=120
overall-max-fps-d=1
overall-min-fps-n=30
overall-min-fps-d=1
max-same-source-frames=2
adaptive-batching=1
max-fps-control=1

[source-config-0]
max-fps-n=120
max-fps-d=1
min-fps-n=5
min-fps-d=1
priority=0
max-num-frames-per-batch=2

[source-config-1]
max-fps-n=120
max-fps-d=1
min-fps-n=5
min-fps-d=1
priority=0
max-num-frames-per-batch=1
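
As a back-of-envelope sanity check on the values above (this assumes, as I read the docs, that overall-max-fps-n/overall-max-fps-d caps the muxer's total output frame rate; that reading is not confirmed):

```shell
# Hypothetical sanity check: if overall-max-fps caps the total muxed fps,
# the combined input rate of both sources should fit under the cap.
SRC0_FPS=30
SRC1_FPS=15
OVERALL_MAX_FPS=120   # overall-max-fps-n / overall-max-fps-d
TOTAL_FPS=$((SRC0_FPS + SRC1_FPS))
echo "total input fps: $TOTAL_FPS (cap: $OVERALL_MAX_FPS)"
```

Under that reading, 45 total input fps is well under the 120 fps cap, so the cap itself should not cause drops.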
  1. Did the fps drop at the beginning or after a while? Please refer to this link for performance improvement.
  2. To narrow down this issue, you can disable the OSD, pgie, and tracker first to check whether the fps is fine, to see whether it is an nvstreammux issue or a performance issue.
  1. The fps dropped at the beginning.
  2. When I disabled OSD and pgie, the fps drops.
    When I disabled OSD and tracker, the fps does not drop.
    When I disabled pgie and tracker, the fps does not drop.
    When I disabled OSD, pgie, and tracker, the fps does not drop.
    So the fps drops whenever the tracker is enabled.

Thanks for sharing!

  1. Do you mean the fps drops only when the tracker is enabled? What is the device model? When the fps drops, is the CPU/GPU utilization at 100%?
  2. Please refer to this link for latency checking. Using this method, please check which element consumes too much time.
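
A sketch of how to enable that latency logging (the environment variable names are taken from the DeepStream FAQ; please double-check them against your DeepStream version):

```shell
# Enable frame and per-component latency measurement in deepstream-app
# based applications (env var names per the DeepStream FAQ).
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
echo "latency measurement enabled: $NVDS_ENABLE_LATENCY_MEASUREMENT"
# Then run the app as before:
#   USE_NEW_NVSTREAMMUX=yes deepstream-test5-app -c test5_config_file_src_infer.txt
```

The resulting log prints latency per component per frame, so the element that consumes the most time (e.g. the tracker) should stand out.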