DeepStream 4.0.2 sample config outputs a 0-byte mp4 file

When I run deepstream-app with a DeepStream 4.0.2 sample config, it outputs a 0-byte mp4 file.
Does anyone know the cause of this?

Command

$ docker run --runtime=nvidia --rm -ti -e NVIDIA_DRIVER_CAPABILITIES=video,compute,utility -e NVIDIA_VISIBLE_DEVICES=all {image name} /bin/bash
$ cd /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app
$ deepstream-app -c /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt
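The symptom can be confirmed after deepstream-app exits by checking the size of the output container (a minimal sketch; the empty `out.mp4` below is created as a stand-in for the file the app actually produced):

```shell
# Stand-in for the empty file that deepstream-app wrote (sink1 output-file=out.mp4).
: > out.mp4

# An mp4 with zero bytes means the muxer never received encoded data.
size=$(stat -c%s out.mp4)
if [ "$size" -eq 0 ]; then
  echo "out.mp4 is empty"
else
  echo "out.mp4 has $size bytes"
fi
```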

stdout

(deepstream-app:142): GLib-GObject-WARNING **: 07:47:51.571: g_object_set_is_valid_property: object class 'avenc_mpeg4' has no property named 'iframeinterval'

(deepstream-app:142): GLib-GObject-WARNING **: 07:47:51.571: g_object_set_is_valid_property: object class 'avenc_mpeg4' has no property named 'bufapi-version'
Creating LL OSD context new
0:00:04.300828808   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:useEngineFile(): Failed to read from model engine file
0:00:04.300873623   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:initialize(): Trying to create engine from model files
0:00:04.301591172   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:04.301618604   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:generateTRTModel(): FP16 not supported by platform. Using FP32 mode.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:00:07.889892240   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_fp32.engine
0:00:08.001897443   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:useEngineFile(): Failed to read from model engine file
0:00:08.001918561   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:initialize(): Trying to create engine from model files
0:00:08.002635885   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:08.002660235   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:generateTRTModel(): FP16 not supported by platform. Using FP32 mode.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:00:10.529467189   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_fp32.engine
0:00:10.599488489   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:useEngineFile(): Failed to read from model engine file
0:00:10.599509380   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:initialize(): Trying to create engine from model files
0:00:10.600211365   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:10.600235199   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:generateTRTModel(): FP16 not supported by platform. Using FP32 mode.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:00:12.881696614   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_fp32.engine
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:00:12.932568596   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:12.932598733   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:12.933307262   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:12.933332222   142 0x5639f2306b90 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): FP16 not supported by platform. Using FP32 mode.
0:00:14.866328903   142 0x5639f2306b90 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_fp32.engine

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)     FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)
**PERF: 0.00 (0.00)     0.00 (0.00)     0.00 (0.00)     0.00 (0.00)
** INFO: <bus_callback:189>: Pipeline ready

**PERF: 0.00 (0.00)     0.00 (0.00)     0.00 (0.00)     0.00 (0.00)
**PERF: 0.00 (0.00)     0.00 (0.00)     0.00 (0.00)     0.00 (0.00)

Environment

  • Sample config: source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt

  • DeepStreamSDK 4.0.2

  • CUDA Driver Version: 10.1

  • CUDA Runtime Version: 10.1

  • TensorRT Version: 6.0

  • cuDNN Version: 7.6

  • libNVWarp360 Version: 2.0.0d

  • NVIDIA GPU Driver Version (valid for GPU only): 410.79

  • Hardware Platform (Jetson / GPU): Tesla K80

  • Running inside an NVIDIA Docker container

source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=4
gpu-id=1
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=3
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0
gpu-id=1

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
gpu-id=1

[osd]
enable=1
gpu-id=1
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=1
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_int8.engine
batch-size=4
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tracker]
enable=1
tracker-width=640
tracker-height=368
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=1
#enable-batch-process applicable to DCF only
enable-batch-process=1

[secondary-gie0]
enable=1
model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine
gpu-id=1
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt

[secondary-gie1]
enable=1
model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine
batch-size=16
gpu-id=1
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carcolor.txt

[secondary-gie2]
enable=1
model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_int8.engine
batch-size=16
gpu-id=1
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carmake.txt

[tests]
file-loop=0

@ISNA

I can run deepstream-app successfully with your customized source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt on DeepStream 5.0 GA.

The only change I made to the config was replacing the deepstream-4.0 paths with deepstream-5.0.
Would it be possible for you to upgrade to DeepStream 5.0 GA?
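For reference, the single path change described above can be sketched like this. The here-doc stands in for the relevant line of a copy of the sample config (my_config.txt is a hypothetical file name):

```shell
# Stand-in for a copy of the sample config; only the path-bearing line matters here.
cat > my_config.txt <<'EOF'
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
EOF

# Rewrite every deepstream-4.0 install path to deepstream-5.0 (GNU sed in-place edit).
sed -i 's|/deepstream-4.0/|/deepstream-5.0/|g' my_config.txt
cat my_config.txt
```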

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)    FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)
**PERF:  0.00 (0.00)    0.00 (0.00)     0.00 (0.00)     0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

KLT Tracker Init
KLT Tracker Init
KLT Tracker Init
KLT Tracker Init
**PERF:  99.25 (99.04)  99.25 (99.04)   99.25 (99.04)   99.25 (99.04)
**PERF:  100.07 (99.63) 100.07 (99.63)  100.07 (99.63)  100.07 (99.63)
** INFO: <bus_callback:204>: Received EOS. Exiting ...

Quitting
App run successful

Thank you for your response. I realized the NVIDIA driver was too old for DeepStream 4.0.2.
After I updated to NVIDIA driver 440, it worked.
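For anyone else who hits this: a quick way to compare the installed driver's major version against a minimum is a small shell helper (a sketch; 418 is assumed here as the floor for DeepStream 4.0.2, so check the release notes for your exact version):

```shell
# Assumed minimum driver major version for DeepStream 4.0.2 on dGPU.
min_driver=418

# On a live system the installed version comes from:
#   nvidia-smi --query-gpu=driver_version --format=csv,noheader
check_driver() {
  major=$(echo "$1" | cut -d. -f1)   # keep only the major version number
  if [ "$major" -ge "$min_driver" ]; then
    echo "driver $1: ok"
  else
    echo "driver $1: too old for DeepStream 4.0.2"
  fi
}

check_driver "410.79"     # the driver from the original environment
check_driver "440.33.01"  # a driver from the 440 series that resolved it
```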