Is Tiled Display Necessary?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1

Hi,
I am using deepstream-occupancy-analytics. For the purposes of my application I would like to disable tiled-display (meaning that I don't want to display the live streams on my Jetson Nano) while still getting counting results from the 6 live streams I am running. Would this be possible with tiled-display off?
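Concretely, the change I mean is keeping the analytics group enabled for the counts while turning the tiler off, roughly like this (just a sketch; the group names follow the test5-style config I post further down):

[tiled-display]
enable=0

[nvds-analytics]
enable=1
config-file=config_nvdsanalytics.txt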

I tried it, but in the end I got a live stream with the muxer width and height. That was on my second try; on the first try I got no stream at all, only the following message:

Thanks,


Can you use fakesink if you want to disable the display?
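For example, a minimal fake sink group in a test5-style config could look like the sketch below (the type values follow the comments already present in your config; the sync and source-id values here are only placeholders):

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=1
sync=0
source-id=0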

I did as recommended. The following is my test5 config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl #saves the output from primary detector in a modified KITTI metadata format
#kitti-track-output-dir=/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics/KITTY

[tiled-display]
enable=0
rows=1
columns=1
width=320
height=240
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

##start sources

##end sources

[sink0]
enable=1
type=2
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=100000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=resnet.mp4
source-id=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=1
#sync=0 #new
#qos=1 #new
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0 #was 0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=localhost;9092;quickstart-events
topic=quickstart-events
#Optional:
msg-broker-config=…/…/deepstream-test4/cfg_kafka.txt
#msg-broker-comp-id=2 #new; can be changed since this is the default
#disable-msgconv=1 #also new
#source-id=1

[sink2]
enable=1
type=1
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0 #was 0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1

[message-converter]
enable=0 #was 0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0

# Name of library having custom implementation.
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_msgconv.so

# Id of component in case only selected message to parse.
#msg-conv-comp-id=2 #was commented

# Configure this group to enable cloud message consumer.

[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
conn-str=localhost;9092
config-file=…/…/deepstream-test4/cfg_kafka.txt #was commented
subscribe-topic-list=quickstart-events

# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=10
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=1

## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
#bbox-border-color1=0;1;1;1
#bbox-border-color2=0;1;1;1
#bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_int8.engine
labelfile-path=peoplenet/labels.txt
config-file=config_infer_primary_peoplenet.txt
#infer-raw-output-dir=…/…/…/…/…/samples/primary_detector_raw_output/

[tracker]
enable=1
tracker-width=1216
tracker-height=960
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
#ll-config-file required for DCF/IOU only
ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

[nvds-analytics]
enable=1
config-file=config_nvdsanalytics.txt

[tests]
file-loop=0

And below is the output I received now that sink1's type is fakesink. Any idea why the Enter/Exit messages stop and I only get PERF messages?

Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_int8.engine open error
0:00:18.909972568 18201 0x55bd460c00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_int8.engine failed
0:00:18.915102258 18201 0x55bd460c00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:18.915159291 18201 0x55bd460c00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine opened error
0:02:41.087677753 18201 0x55bd460c00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 3x34x60

0:02:41.145372444 18201 0x55bd460c00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/config_infer_primary_peoplenet.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

**PERF: FPS 0 (Avg)
Wed Dec 8 16:02:49 2021
**PERF: 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running

~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvdcf/src/modules/NvDCF/NvDCF.cpp, NvDCF() @line 670]: !!![WARNING] Can’t open config file (/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-initial/config/tracker_config.yml). Will go ahead with default values
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvdcf/src/modules/NvDCF/NvDCF.cpp, NvDCF() @line 682]: !!![WARNING] Invalid low-level config file is provided. Will go ahead with default values
[NvDCF] Initialized
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Enter: 0, Exit: 0
Wed Dec 8 16:02:54 2021
**PERF: 28.77 (7.33)
Wed Dec 8 16:02:59 2021
**PERF: 0.00 (1.45)
Wed Dec 8 16:03:04 2021
**PERF: 0.00 (0.80)
Wed Dec 8 16:03:09 2021
**PERF: 0.00 (0.55)
Wed Dec 8 16:03:14 2021
**PERF: 0.00 (0.42)
Wed Dec 8 16:03:19 2021
**PERF: 0.00 (0.34)
Wed Dec 8 16:03:24 2021
**PERF: 0.00 (0.29)
Wed Dec 8 16:03:29 2021
**PERF: 0.00 (0.25)
Wed Dec 8 16:03:34 2021
**PERF: 0.00 (0.22)
Wed Dec 8 16:03:39 2021
**PERF: 0.00 (0.19)
Wed Dec 8 16:03:44 2021
**PERF: 0.00 (0.18)
Wed Dec 8 16:03:49 2021
**PERF: 0.00 (0.16)
Wed Dec 8 16:03:54 2021
**PERF: 0.00 (0.15)
Wed Dec 8 16:03:59 2021
**PERF: 0.00 (0.14)
Wed Dec 8 16:04:04 2021
**PERF: 0.00 (0.13)
Wed Dec 8 16:04:09 2021
**PERF: 0.00 (0.12)
Wed Dec 8 16:04:14 2021
**PERF: 0.00 (0.11)
Wed Dec 8 16:04:19 2021
**PERF: 0.00 (0.10)
Wed Dec 8 16:04:24 2021
**PERF: 0.00 (0.10)

**PERF: FPS 0 (Avg)
Wed Dec 8 16:04:29 2021
**PERF: 0.00 (0.09)
Wed Dec 8 16:04:34 2021
**PERF: 0.00 (0.09)
Wed Dec 8 16:04:39 2021
**PERF: 0.00 (0.08)
Wed Dec 8 16:04:44 2021
**PERF: 0.00 (0.08)
Wed Dec 8 16:04:49 2021
**PERF: 0.00 (0.08)
Wed Dec 8 16:04:54 2021
**PERF: 0.00 (0.07)
Wed Dec 8 16:04:59 2021
**PERF: 0.00 (0.07)
Wed Dec 8 16:05:04 2021
**PERF: 0.00 (0.07)
Wed Dec 8 16:05:09 2021
**PERF: 0.00 (0.07)
ERROR from dsanalytics_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:dsanalytics_bin/GstQueue:dsanalytics_queue:
streaming stopped, reason error (-5)
Quitting
ERROR from osd_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:osd_bin/GstQueue:osd_queue:
streaming stopped, reason error (-5)
[NvDCF] De-initialized
App run failed

Thanks. I apologize if this is still unclear.

Hi @kesong
This is still an issue in my app. I would really appreciate the help, thanks.
Also, can you please explain how fakesink affects the disabling of the display?

Tremendous thanks

fakesink will discard the video frames without sending them to the display.
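A quick way to see this behaviour outside of deepstream-app is a standalone GStreamer pipeline (illustrative only, not part of the occupancy-analytics app):

#videotestsrc generates frames; fakesink consumes and drops them, so nothing is rendered on screen
gst-launch-1.0 videotestsrc num-buffers=100 ! fakesink sync=false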

