Blank (black) screen while running deepstream-app for trafficcamnet in samples/configs/tlt_pretrained_models

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Tesla T4
• DeepStream Version
DS 5.1
• TensorRT Version
7.2.1.6
• NVIDIA GPU Driver Version (valid for GPU only)
460.32.03
• Issue Type( questions, new requirements, bugs)
While running the following commands, a black screen appears instead of the video file referenced in the config:

cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models/
deepstream-app -c deepstream_app_source1_trafficcamnet.txt

This is my config file:

cat deepstream_app_source1_trafficcamnet.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../../streams/sample_720p.mp4
gpu-id=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
#Set muxer output width and height
width=1920
height=1080

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial

[primary-gie]
enable=1
gpu-id=0
#Modify as necessary
model-engine-file=../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=config_infer_primary_trafficcamnet.txt

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
#set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[tracker]
enable=1
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
ll-config-file=../deepstream-app/tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1

[tests]
file-loop=1
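
Since the inference engine loads fine on this setup, a common workaround on a headless T4 is to bypass the display path entirely: switch sink0 from the EGL sink (type=2) to a File sink (type=3) and inspect the written file. A minimal sketch of that change, reusing the encoder settings already present in sink1 (the output-file name below is my own placeholder, not from the shipped config):

```
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
output-file=out_trafficcamnet.mp4
source-id=0
gpu-id=0
```

If the written file contains the expected detections, the model and pipeline are fine and the problem is isolated to the display/EGL path.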

How can I fix this black screen issue? I have already tried the same procedure over xrdp, but the output is still blank.

Hey, can you run the deepstream-test1-app and see whether it gives correct output?

Output while running the deepstream-test1-app

Steps followed:

  1. cd apps/deepstream-test1/

  2. Set the CUDA version (CUDA_VER) in the Makefile and run make

  3. ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264
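
For reference, step 2 can also be done without editing the Makefile by passing CUDA_VER on the command line (11.1 is an assumption matching DeepStream 5.1 with TensorRT 7.2; substitute your installed version):

```shell
# Build deepstream-test1 against the installed CUDA toolkit.
# CUDA_VER=11.1 is an assumption for a DS 5.1 / TRT 7.2 setup.
cd /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1
CUDA_VER=11.1 make
```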

    glueck@glueck-ProLiant-DL385-Gen10-Plus:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1$ ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264
    Now playing: /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264
    libEGL warning: DRI2: failed to authenticate
    0:00:01.193757875 27619 0x5632d56c6390 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
    INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
    0 INPUT kFLOAT input_1 3x368x640
    1 OUTPUT kFLOAT conv2d_bbox 16x23x40
    2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

    0:00:01.193837062 27619 0x5632d56c6390 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
    0:00:01.194704598 27619 0x5632d56c6390 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
    Running…
    cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
    Frame Number = 0 Number of objects = 6 Vehicle Count = 4 Person Count = 2
    0:00:01.402278729 27619 0x5632d515cd40 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: Internal data stream error.
    0:00:01.402291342 27619 0x5632d515cd40 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
    ERROR from element primary-nvinference-engine: Internal data stream error.
    Error details: gstnvinfer.cpp(1984): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
    streaming stopped, reason not-negotiated (-4)
    Returned, stopping playback
    Frame Number = 1 Number of objects = 6 Vehicle Count = 4 Person Count = 2
    Frame Number = 2 Number of objects = 6 Vehicle Count = 4 Person Count = 2
    Frame Number = 3 Number of objects = 6 Vehicle Count = 4 Person Count = 2
    Frame Number = 4 Number of objects = 5 Vehicle Count = 3 Person Count = 2
    Frame Number = 5 Number of objects = 6 Vehicle Count = 4 Person Count = 2
    Frame Number = 6 Number of objects = 6 Vehicle Count = 4 Person Count = 2
    Deleting pipeline

Another video:

./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4

is stuck at "Running…":

glueck@glueck-ProLiant-DL385-Gen10-Plus:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1$ ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4
Now playing: /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4
libEGL warning: DRI2: failed to authenticate
0:00:01.206095714 28596 0x5593396def90 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.206175844 28596 0x5593396def90 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:01.207062686 28596 0x5593396def90 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running…
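
A likely reason the .mp4 run hangs: deepstream-test1 builds a fixed filesrc → h264parse → decoder pipeline, so it only accepts raw H.264 elementary streams, not MP4 containers. A sketch of extracting a compatible stream first, assuming ffmpeg is installed (the /tmp output path is my own choice):

```shell
# deepstream-test1 expects an H.264 elementary stream, not an MP4 container.
# Extract the raw stream from the sample clip without re-encoding:
ffmpeg -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 \
       -c:v copy -bsf:v h264_mp4toannexb -f h264 /tmp/sample_720p.h264
./deepstream-test1-app /tmp/sample_720p.h264
```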

Can you help me with this? The deepstream-test1-app shows the same result.

Could you try export DISPLAY=:0 (or :1) and check whether the issue persists?

@bcao I tried exporting the display as you suggested in the previous comment, but the same black screen still appears. This is running on a T4 card, whereas the same TLT model works on a Jetson TX2 and the display appears.

Can you please help me fix this issue?

Can you try running xhost + before running the app?
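
For reference, the usual sequence when an EGL sink is launched from a remote session looks roughly like this (the display number :0 is an assumption; it depends on which X server actually owns the T4):

```shell
# Run on the machine that owns the physical X server, as the logged-in user:
xhost +            # allow other local clients to connect (loosens X access control)

# In the shell that launches the app:
export DISPLAY=:0  # point at the X server attached to the GPU (assumed :0)
deepstream-app -c deepstream_app_source1_trafficcamnet.txt
```

Note that a remote-desktop session like xrdp typically runs its own virtual X server without GPU/EGL access, which would also explain a black window even when the pipeline itself runs.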