Run Deepstream-test5 with AWS IoT

I want to set up deepstream-test5 to run and send information to AWS IoT Core using a Jetson Nano 2GB.
I used this link to get test4 working, but I can't get test5 to run: it keeps failing, and when I try with RTSP I run into the same issue. I need help getting messages to the cloud from an RTSP camera.
Here is the link I made use of - Link

My config file -

Help: How do I configure this to work with an RTSP camera and AWS IoT Core?

Moving to DeepStream forum.

Hi @bvgohmslf, do you mean that test4 works but test5 does not? Could you attach your DeepStream version and detailed deployment steps?

DS 6.0.1 on Jetson Nano 2GB.

Config files for both test4 and test5, respectively:

test5_config_file_src_infer_aws.txt

################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=test
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=0

I want to see that data is sent to AWS IoT Core. What do I need to change?
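For context, here is a sketch of how I think the [sink1] broker settings should look once filled in for AWS IoT Core, following the <host>;<port>;<topic> placeholder in the template (the endpoint is the device data endpoint from my AWS IoT console, 8883 is MQTT over TLS, and cfg_aws.txt points at my device certificate, private key and root CA):

```
[sink1]
enable=1
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
# <host>;<port>;<topic> for AWS IoT Core: my device data endpoint, MQTT over TLS on 8883
msg-broker-conn-str=al22o691usnhz-ats.iot.eu-west-1.amazonaws.com;8883;test
topic=test
# certificate, private key and root CA paths are referenced from cfg_aws.txt
msg-broker-config=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt
```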

test5_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

################################################################################
# Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=2
width=640
height=480
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.0.121:8554/
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# smart record cache size in seconds
#smart-rec-cache
# default duration of recording in seconds.
#smart-rec-default-duration
# duration of recording in seconds.
# this will override default value.
#smart-rec-duration
# seconds before the current time to start recording.
#smart-rec-start-time
# value in seconds to dump video stream.
#smart-rec-interval

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.0.121:8554/
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# smart record cache size in seconds
#smart-rec-cache

[sink0]
enable=1
Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
#(0): Create payload using NvdsEventMsgMeta
#(1): New Api to create payload using NvDsFrameMeta
msg-conv-msg2p-new-api=0
#Frame interval at which payload is generated
msg-conv-frame-interval=30
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=your/greengrass/response/topic
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt
#new-api=0
#(0) Use message adapter library api's
#(1) Use new msgbroker library api's

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
conn-str=<host>;<port>
config-file=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/


[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt
labelfile-path=../../../../../samples/models/Secondary_VehicleTypes/labels.txt
model-engine-file=../../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine

[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carcolor.txt
labelfile-path=../../../../../samples/models/Secondary_CarColor/labels.txt
model-engine-file=../../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine

[secondary-gie2]
enable=1
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carmake.txt
labelfile-path=../../../../../samples/models/Secondary_CarMake/labels.txt
model-engine-file=../../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine

[tests]
file-loop=0

I need to send data to AWS IoT Core using RTSP streaming. What do I need to change to make this work?

DeepStream test4 does not support an RTSP source at the moment.
For deepstream-test5, you should change the [message-consumer0] group to point to your own server.
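For example, a minimal sketch of that group with your own endpoint (assuming the certificate, private key and root CA paths in cfg_aws.txt are already valid):

```
[message-consumer0]
enable=1
proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
# your own AWS IoT device data endpoint and the MQTT-over-TLS port
conn-str=<your-ats-endpoint>;8883
config-file=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt
# topic(s) the app should subscribe to for cloud-to-device messages
subscribe-topic-list=<your-topic>
```

If you only need to publish from the device to the cloud, you can instead keep enable=0 in [message-consumer0].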

Error:

```
./deepstream-test5-app -c configs/test5_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
** WARN: <parse_sink:1637>: Unknown key 'Type - 1' for group [sink0]
** ERROR: <start_cloud_to_device_messaging:150>: Failed to connect to broker.
** ERROR: <create_pipeline:1307>: Failed to create message consumer
** ERROR: <create_pipeline:1326>: create_pipeline failed
** ERROR: main:1423: Failed to create pipeline
Quitting
App run failed
```
Setup (the config file I am currently running):

################################################################################
# Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=2
width=640
height=480
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.0.121:8554/
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# smart record cache size in seconds
#smart-rec-cache
# default duration of recording in seconds.
#smart-rec-default-duration
# duration of recording in seconds.
# this will override default value.
#smart-rec-duration
# seconds before the current time to start recording.
#smart-rec-start-time
# value in seconds to dump video stream.
#smart-rec-interval

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.0.121:8554/
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# smart record cache size in seconds
#smart-rec-cache

[sink0]
enable=2
Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
#(0): Create payload using NvdsEventMsgMeta
#(1): New Api to create payload using NvDsFrameMeta
msg-conv-msg2p-new-api=0
#Frame interval at which payload is generated
msg-conv-frame-interval=30
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=your/greengrass/response/topic
#Optional:
msg-broker-config=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt
#new-api=0
#(0) Use message adapter library api's
#(1) Use new msgbroker library api's

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=1
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=1
proto-lib=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
conn-str=al22o691usnhz-ats.iot.eu-west-1.amazonaws.com;8883
config-file=/opt/nvidia/deepstream/deepstream-6.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt
subscribe-topic-list=test
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=0
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/


[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt
labelfile-path=../../../../../samples/models/Secondary_VehicleTypes/labels.txt
model-engine-file=../../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine

[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carcolor.txt
labelfile-path=../../../../../samples/models/Secondary_CarColor/labels.txt
model-engine-file=../../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine

[secondary-gie2]
enable=1
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carmake.txt
labelfile-path=../../../../../samples/models/Secondary_CarMake/labels.txt
model-engine-file=../../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine

[tests]
file-loop=0

There are some issues in your config file.
1. Please set enable=0 for [sink0]. (enable=2 is not a valid value, and the 'Type - 1=...' comment line is missing its leading '#', which is what causes the 'Unknown key' warning.)

2. Please set the correct <host>;<port>;<topic> connection string in msg-broker-conn-str for [sink1].
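For reference, a sketch of [sink0] with those fixes applied is below. Note also that the fatal 'Failed to connect to broker' and 'Failed to create message consumer' errors come from the [message-consumer0] group: if you only need device-to-cloud publishing, set enable=0 there as well; otherwise the endpoint, port, and the certificates referenced by cfg_aws.txt must all be valid.

```
[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
```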

I understand the host and port, but I don't know where to get the topic from on AWS. Where is this located in AWS IoT Core or Greengrass?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Regarding where to get the topic from on AWS, you can file an issue directly on the project: https://github.com/awslabs/aws-iot-core-integration-with-nvidia-deepstream/issues.
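As a general note on the topic question (not specific to this adaptor): in AWS IoT Core an MQTT topic is not a pre-created resource you look up; it is any string you choose, provided the device's IoT policy allows publishing to it. You can check that messages arrive by subscribing to the same string in the MQTT test client of the AWS IoT console, for example:

```
# DeepStream side (device -> cloud), in [sink1]
topic=test
# AWS IoT console, MQTT test client: subscribe to "test" (or "test/#" for a topic hierarchy)
```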

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.