Same pipeline, run twice, different results!

Hi Nvidia,

We are a company working on traffic detection (car/truck) from video streams on highways using Jetson embedded hardware.
Recently, we ran into a serious issue with DeepStream 4 that we have been trying to solve for months, without success.

We are processing 2 local videos (of different lengths) in a DeepStream pipeline, doing detection and tracking with a customized YOLOv3-tiny model and the KLT tracker.

We are using interval=9 (in the [primary-gie] group) and gie-kitti-output-dir (in the [application] group) to save frame-based detection results to one output file with the following format (this example is from the second video's detection results; a small checking sketch follows the sample):

#Frame_num class 0.0 0 0.0 left top width height 0.0 0.0 0.0 0.0 0.0 0.0 0.0

008360 car 0.0 0 0.0 1310.00 512.00 1430.00 572.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008590 car 0.0 0 0.0 1615.00 544.00 1748.00 622.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008600 car 0.0 0 0.0 1453.00 526.00 1573.00 590.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008610 car 0.0 0 0.0 1320.00 503.00 1430.00 563.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008630 truck 0.0 0 0.0 627.00 530.00 973.00 806.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008640 truck 0.0 0 0.0 710.00 530.00 1070.00 843.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008650 truck 0.0 0 0.0 756.00 549.00 1199.00 895.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008720 car 0.0 0 0.0 923.00 586.00 1047.00 669.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008730 car 0.0 0 0.0 1029.00 623.00 1185.00 719.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
008740 car 0.0 0 0.0 1176.00 673.00 1356.00 783.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
009091 truck 0.0 0 0.0 770.00 516.00 1111.00 783.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
009092 truck 0.0 0 0.0 775.00 521.00 1135.00 788.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
009110 truck 0.0 0 0.0 978.00 567.00 1490.00 927.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
009111 truck 0.0 0 0.0 992.00 567.00 1518.00 936.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
009148 car 0.0 0 0.0 876.00 576.00 996.00 659.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0
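
(As a rough sketch of how we sanity-check this dump: the script below is hypothetical, not part of deepstream-app, and the expectation that every logged frame number is a multiple of 10 when interval=9 is our own assumption.)

```python
# check_kitti_frames.py -- hypothetical helper, not part of deepstream-app.
# Reads a KITTI-style dump like the sample above and flags frame numbers
# that are not multiples of 10, which is what we expect with interval=9
# (inference on every 10th frame).
import sys

def unexpected_frames(path):
    bad = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2 or not parts[0].isdigit():
                continue  # skip header / malformed lines
            frame_num = int(parts[0])
            if frame_num % 10 != 0:
                bad.append((frame_num, parts[1]))
    return bad

if __name__ == "__main__":
    for frame_num, label in unexpected_frames(sys.argv[1]):
        print("unexpected frame %06d (%s)" % (frame_num, label))
```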


As you can see, frame_num should always be divisible by 10 (since interval=9), and this holds for the first 9000 frames of each video. After that, the first video finishes and deepstream-app starts running detection on 2 consecutive frames of the second video.
If I use 6 local videos at the same time with tracking enabled and run the app twice, it gives me different results each time, which is strange.

In my opinion, it comes from the streammux plugin (batch-size parameter), but since it is not open source I couldn't dig deeper and check it. Is there any way we can fix it?

Thanks

https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.03.html

"The muxer pushes the batch downstream when the batch is filled or the batch formation timeout batched-push-timeout is reached. The timeout starts running when the first buffer for a new batch is collected."
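
For reference, these are ordinary element properties on nvstreammux if the pipeline is built programmatically rather than through deepstream-app; a minimal sketch (the values are only examples mirroring a 2-source setup):

```python
# Sketch: setting the [streammux] knobs directly on an nvstreammux element.
# Only the muxer is shown; sources, inference and sinks are omitted.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 2)                # one slot per source
streammux.set_property("batched-push-timeout", 40000)  # usec to wait before pushing a partial batch
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("live-source", 0)               # file sources, not live
```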

Thanks for the quick reply,

I tried batched-push-timeout=100000 and batched-push-timeout=-1, but I still have the same issue: when the first stream finishes, the detection results for the second stream get messed up.

This is my config file:

################################################################################
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#kitti-track-output-dir=metadata
kitti-track-output-dir=metadata/counting/2cams_004
gie-kitti-output-dir=metadata/detection/2cams_004

[tiled-display]
enable=1
rows=1
columns=2
#width=1280
#height=720
#width=1300
#height=500
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
#uri=rtsp://192.168.1.122:8554/test
uri=file://./videos_for_manual_counting/2748_2017-07-28_1335_D_FFT_Trim_5min.mp4
#uri=file://./videos_for_manual_counting/2754_2017-08-10_0027_N_FFT_Trim_1.mp4
#uri=file://./videos_for_manual_counting/2779_2017-10-12_1023_D_FFT_Trim_2.mp4
#uri=file://./videos_for_manual_counting/2779_2017-08-10_0134_N_FFT_Trim_2.mp4
#uri=file://./videos_for_manual_counting/2785_2017-08-09_2230_N_FFT_Trim_7.mp4
#uri=file://./videos_for_manual_counting/2793_2017-09-30_1335_D_Split_Trim_2_FFT_9min.mp4
num-sources=1
camera-width=1920
camera-height=1080
gpu-id=0
#drop-frame-interval=2

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file://./videos_for_manual_counting/2754_2017-08-10_0027_N_FFT_Trim_1.mp4
num-sources=1
camera-width=1920
camera-height=1080
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source2]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file://./videos_for_manual_counting/2779_2017-10-12_1023_D_FFT_Trim_2.mp4
num-sources=1
camera-width=1920
camera-height=1080
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source3]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file://./videos_for_manual_counting/2779_2017-08-10_0134_N_FFT_Trim_2.mp4
num-sources=1
camera-width=1920
camera-height=1080
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source4]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file://./videos_for_manual_counting/2785_2017-08-09_2230_N_FFT_Trim_7.mp4
num-sources=1
camera-width=1920
camera-height=1080
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source5]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file://./videos_for_manual_counting/2793_2017-09-30_1335_D_Split_Trim_2_FFT_9min.mp4
num-sources=1
camera-width=1920
camera-height=1080
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#iframeinterval=10

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#iframeinterval=10

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=3
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
container=1
codec=1
output-file=metadata/augmented/2748_004.mp4

[sink3]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
qos=1
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000

#set below properties in case of RTSPStreaming

rtsp-port=8554
udp-port=5400
nvbuf-memory-type=0
#width=100
#height=100
#iframeinterval=30

[sink4]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=1
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1

[sink5]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
#msg-conv-config=config_brokers/cfg_msgconv.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_kafka_proto.so
#msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_edge_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=192.168.1.118;9092;NanoTopic001
#msg-broker-conn-str=172.22.18.99;9092;NanoTopic001
#msg-broker-conn-str=10.50.7.187;9092;NanoTopic001
topic=NanoTopic001
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt
#msg-broker-config=config_brokers/cfg_azure.txt

[osd]
enable=1
gpu-id=0
process-mode=2
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
#batched-push-timeout=40000
#batched-push-timeout=100000
batched-push-timeout=-1

## Set muxer output width and height

width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
batch-size=2
interval=9
#Required by the app for OSD, not a plugin property, specify a color for each class
bbox-border-color0=0.5;0.5;0.5;1
bbox-border-color1=0;1;1;1

## Fill the bbox with a background color, specify a color for each class

#bbox-bg-color0=0;1;0;1
#bbox-bg-color1=1;1;1;1
gie-unique-id=1
nvbuf-memory-type=0
#config-file=models/custom_yolov3_20191014/config_infer_primary_custom_yoloV3.txt
#config-file=models/custom_yolov3_20191029/config_infer_primary_custom_yoloV3.txt
#config-file=models/custom_yolov3_20191201/config_infer_primary_custom_yoloV3.txt
#config-file=models/custom_yolov3_tiny_Hend_3/config_infer_primary_custom_yoloV3_tiny.txt
#config-file=models/custom_yolov3_tiny_2754_N_FFT_Trim_2/config_infer_primary_custom_yoloV3_tiny.txt
#config-file=models/custom_yolov3_tiny_2748_D_FFT_trim/config_infer_primary_custom_yoloV3_tiny.txt
#config-file=models/custom_yolov3_tiny_20191020/config_infer_primary_custom_yoloV3_tiny.txt
config-file=models/custom_yolov3_tiny_20191209/config_infer_primary_custom_yoloV3_tiny.txt

[tracker]
enable=1
tracker-width=600
tracker-height=300
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#required ll-config-file for DCF/IOU only
#ll-config-file=./config_trackers/dcf_config.yml
#ll-config-file=./config_trackers/iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

[tests]
file-loop=0

##drop-frame-interval / skip-frames check nvv4l2decoder
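
If it helps narrow things down, a pad probe that logs source_id and frame_num for every slot of each batch would show exactly how frames from the two streams are interleaved after the first one ends. The sketch below uses the DeepStream Python bindings (pyds) and leaves the choice of pad to the reader; it is only an illustration, since our app is the config-driven deepstream-app:

```python
# Sketch: probe to attach to a sink pad downstream of nvstreammux
# (e.g. the tiler's sink pad) to log which source each batch slot holds.
import pyds
from gi.repository import Gst

def batch_meta_probe(pad, info, user_data):
    buf = info.get_buffer()
    if not buf:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # One line per batch slot: which source, which frame, how many objects.
        print("source_id=%d frame_num=%d objects=%d"
              % (frame_meta.source_id, frame_meta.frame_num, frame_meta.num_obj_meta))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```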

We will check this scenario of long and short streams for consistency.

There may be a bug in this case; if so, we will fix it in the next release.