deepstream-app is not working in the DeepStream Docker container… what can I do?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU, GTX 1660 Ti
• DeepStream Version: Docker 5.0.1-20.09-triton
• OS: Ubuntu 18.04
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I have been using deepstream-app in Docker (the 5.0.1-20.09-triton image).

But… suddenly, deepstream-app has stopped working since yesterday…

Things I have tried:

  1. Pulled the Docker image again (5.0.1-20.09-triton); rough commands below

  2. Reinstalled Docker

  3. Reinstalled Ubuntu (Ubuntu 18.04)
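
Roughly what I ran for step 1 (the mount and display options here are just my usual setup, nothing specific to this issue):

docker pull nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  -w /opt/nvidia/deepstream/deepstream-5.0 \
  nvcr.io/nvidia/deepstream:5.0.1-20.09-triton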

But!..
Still the same symptom.

What can I do??? Please help.

[Screenshot: deepstream-app run capture]

deepstream_app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=./kitti_data
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://ID:password@000.000.000.000:000/Streaming/Channels/101
num-sources=1
gpu-id=0
cudadec-memtype=0
#camera-fps-n=30
#camera-fps-d=1
camera-fps-n=15
camera-fps-d=1
camera-width=1280
camera-height=720

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#1=h264 2=h265
codec=1
rtsp-port=101
udp-port=5400
bitrate=1000000

[osd]
enable=1
gpu-id=0
border-width=3
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
model-engine-file=yolov4_1_3_416_416_fp16.engine
labelfile-path=labels.txt
#batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;1;0;1
bbox-border-color1=1;1;0;1
bbox-border-color2=0;1;0;1
interval=1
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV4.txt

[tracker]
enable=1
tracker-width=512
tracker-height=320
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
display-tracking-id=1
#compute-hw=0
#enable-batch-process=1
#enable-past-frame=1
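
Inside the container, I launch the app with this config like so (the config path is just wherever I keep the file):

deepstream-app -c deepstream_app_config.txt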

config_infer_primary_yoloV4.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
model-engine-file=yolov4_1_3_416_416_fp16.engine
labelfile-path=labels.txt
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
gie-unique-id=1
network-type=0
##0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
nms-iou-threshold=0.7
pre-cluster-threshold=0.5
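
For reference, everything these configs point to (the engine file, labels file, custom parser library, and the infer config itself) is referenced by relative path, so as far as I understand it all has to sit next to the config files. A quick sanity check from the directory containing the configs (file names exactly as written above):

ls -l yolov4_1_3_416_416_fp16.engine labels.txt libnvdsinfer_custom_impl_Yolo.so config_infer_primary_yoloV4.txt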

Sorry for the late response. Is this still an issue you need support with?

Thank you for your answer, even though it's late ^0^

I panicked… because when I tried it again after seeing your answer, deepstream-app worked.

But I didn't actually fix anything; it just started working again.

Glad to know it’s working!