DeepStream test5 app is not running on Ubuntu with a GeForce RTX 2060

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
5.0
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
NVIDIA-SMI 440.40 Driver Version: 440.40 CUDA Version: 10.2 GeForce RTX 2060
• Issue Type( questions, new requirements, bugs)
question
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I am not able to run the test5 app on Ubuntu:
sudo ./deepstream-test5-app -c configs/test5_config_file_src_infer_aws.txt -t --tiledtext

(gst-plugin-scanner:9199): GStreamer-WARNING **: 15:16:07.968: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory
** ERROR: main:1433: Failed to set pipeline to PAUSED
Quitting
ERROR from sink_sub_bin_sink2: Could not initialize supporting library.
Debug info: gstnvmsgbroker.c(303): gst_nvmsgbroker_start (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
unable to open shared library
ERROR from sink_sub_bin_sink2: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Debug info: gstbasesink.c(5265): gst_base_sink_change_state (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
Failed to start
App run failed
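
Note: the GStreamer warning about libnvdsgst_inferserver.so comes from the Triton inference plugin, which needs libtrtserver.so; this config does not use nvinferserver, so that warning should be harmless. The fatal error is the msgbroker sink failing to open its shared library. As a quick check (a sketch, using the proto-library path from the config below and assuming the library was built there), ldd will show which of its dependencies is missing:

# Any dependency printed as "not found" is what the broker sink cannot open.
ldd /opt/nvidia/deepstream/deepstream-5.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so | grep "not found"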
===================Config file ==========================

# Copyright (c) 2018-2020 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=;;
topic=desktop_msg
#Optional:
msg-broker-config= /opt/nvidia/deepstream/deepstream-5.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

## sink type = 6 by default creates msg converter + broker.
## To use multiple brokers use this group for converter and use
## sink type = 6 with disable-msgconv = 1

[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0

# Name of library having custom implementation.
#msg-conv-msg2p-lib=
# Id of component in case only selected message to parse.
#msg-conv-comp-id=

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
conn-str=;
config-file=
subscribe-topic-list=;;

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=4

## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
tracker-width=600
tracker-height=288
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

[tests]
file-loop=0

Regards

Which protocol adapter are you using? Please make sure you install all the dependencies following the README under ***_protocol_adaptor/.

I am using the MQTT protocol for AWS IoT Core.

Did you install the dependencies following the README under ***_protocol_adaptor/?
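
If the adaptor links against the AWS IoT Device SDK, libnvds_aws_proto.so may depend on SDK shared objects that are not on the default loader path. A minimal sketch, assuming the SDK .so files ended up next to the adaptor (adjust the directory to wherever your build put them); note that sudo resets the environment and strips LD_LIBRARY_PATH, so run the app as your own user for this test:

# Hypothetical location of the SDK shared objects; adjust to your build output.
export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream-5.0/sources/libs/aws_protocol_adaptor/device_client:$LD_LIBRARY_PATH
./deepstream-test5-app -c configs/test5_config_file_src_infer_aws.txt -t --tiledtext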

yes I did

There is no aws_protocol_adaptor under sources/libs. Is this your own adapter?

I built the AWS IoT client using GitHub - awslabs/aws-iot-core-integration-with-nvidia-deepstream.
I skipped step 2 because it was optional, and accepted all defaults.

Regards

The aws_protocol_adaptor directory is also there.

Regards

It's written by users outside NVIDIA. You can debug it yourself, or you can reach out to the people who developed the adapter.
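
To get more detail than "unable to open shared library", raising the GStreamer debug level may surface the underlying dlopen error, e.g.:

# Higher debug levels usually log which shared object failed to load and why.
GST_DEBUG=3 ./deepstream-test5-app -c configs/test5_config_file_src_infer_aws.txt -t --tiledtext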