Why is my IP camera not working when I enable MsgConvBroker in my config file?

Please provide complete information as applicable to your setup.

• Jetson Nano 4GB
• DeepStream 4.0.2
• JetPack Version 4.3-b134
• TensorRT Version 6.0.1.10-1+cuda10.0

IP Camera: Dahua

Why can't I get the video stream from my IP camera when I enable type=6 in the [sink] group?
I'm following this tutorial to connect the DeepStream SDK to the cloud via IoT Central.
It seems that the IP camera source crashes when I enable MsgConvBroker in the config file. I have run the config file with [sink] type=5 and type=4 and the RTSP stream of my IP camera as the source, and it displays the video fine with the object detection inference running, but once I enable the type=6 sink it doesn't display anything and doesn't send the telemetry data to the cloud either.
This is my config file:

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=0
perf-measurement-interval-sec=5
gie-kitti-output-dir=/tmp/deepstream-detections-output-dir

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720

[source0]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:12345678W@192.168.1.108:554/cam/realmonitor?channel=1/subtype=1

[source2]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=

[source3]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=Overlay 6=MsgConvBroker
type=5
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=Overlay 6=MsgConvBroker
type=6
msg-conv-config=./msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_edge_proto.so
topic=mytopic
#Optional:
#msg-broker-config=./cfg_azure.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0

[sink3]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary_nano.txt

[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

IoT Edge shows that all four containerized modules are running, and when I run
$ iotedge logs -f NVIDIADeepStreamSDK
to see its logs, it shows this:

Error from src_elem0: could not read from resource. 
Debug info: gstrtspsrc.c(5917): gst_rtsp_src_receive_response (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: Could not receive message. (Timeout while waiting for server response) 

and

watch_source_status (null)
Reset sources pipeline reset_source_pipeline 0x7f86c22080

You can also see the error message in these screenshots:

I suspect the problem is my IP camera (I hope not). Any suggestions to make this work?

We will try to reproduce the problem first.

Hi,
I did not see your edge module running in the pictures you attached. Did you follow sources/libs/azure_protocol_adaptor/module_client/README to set it up? To verify that the RTSP camera is working, you can try VLC player to see if it's functioning properly.
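For example (just a sketch; the URI below is copied from your config above, so substitute your own camera's RTSP URI and credentials), you could check the stream outside of DeepStream with:

$ vlc "rtsp://admin:12345678W@192.168.1.108:554/cam/realmonitor?channel=1/subtype=1"
$ gst-launch-1.0 playbin uri="rtsp://admin:12345678W@192.168.1.108:554/cam/realmonitor?channel=1/subtype=1"

If neither of these can play the stream, the issue is with the camera or the network rather than with DeepStream.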

I was following this tutorial; it seems to be the same as sources/libs/azure_protocol_adaptor/module_client/README.
These are the running modules:

Since the problem seems to be with the NVIDIADeepStreamSDK module, I tried running my config file separately with deepstream-test5. It runs fine with my IP camera until I enable type=6 in the [sink] group of my config file, at which point I get this error:

(deepstream-test5-app:5104): GLib-CRITICAL **: 18:32:44.567: g_strrstr: assertion 'haystack != NULL' failed
Error: Time:Sat Jan 23 18:32:44 2021 File:/home/nvidia/azure/azure-iot-sdk-c/iothub_client/src/iothub_client_core_ll.c Func:retrieve_edge_environment_variabes Line:177 Environment IOTEDGE_AUTHSCHEME not set
Error: Time:Sat Jan 23 18:32:44 2021 File:/home/nvidia/azure/azure-iot-sdk-c/iothub_client/src/iothub_client_core_ll.c Func:IoTHubClientCore_LL_CreateFromEnvironment Line:1186 retrieve_edge_environment_variabes failed
Error: Time:Sat Jan 23 18:32:44 2021 File:/home/nvidia/azure/azure-iot-sdk-c/iothub_client/src/iothub_client_core.c Func:create_iothub_instance Line:924 Failure creating iothub handle
ERROR: iotHubModuleClientHandle is NULL! connect failed
** ERROR: <main:1123>: Failed to set pipeline to PAUSED
Quitting
ERROR from sink_sub_bin_sink2: Could not configure supporting library.
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmsgbroker/gstnvmsgbroker.c(332): gst_nvmsgbroker_start (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
unable to connect to broker library
ERROR from sink_sub_bin_sink2: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Debug info: gstbasesink.c(5265): gst_base_sink_change_state (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
Failed to start
App run failed

I'm also using VLC to test it; it seems that the problem is actually not specific to my camera.

Let's clarify first: the module client and the device client are different. The device client sends messages directly from the device to the cloud, while the module client goes through the Azure IoT Edge runtime.

I tried running my config file separately with deepstream-test5. It runs fine with my IP camera until I enable type=6 in the [sink] group of my config file, at which point I get this error:

→ This is using the device client; please do not mix the two up. For the device client, you need to follow sources/libs/azure_protocol_adaptor/device_client/README.
For details, you can see the document: Gst-nvmsgbroker — DeepStream 6.3 Release documentation
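Roughly speaking (this is only a sketch; the library path below matches the DeepStream 4.0 install location used above, and the connection string values are placeholders from your own IoT Hub, so check the device_client README for the exact format): when deepstream-test5 is run directly on the host, the edge/module client library (libnvds_azure_edge_proto.so) cannot connect because IoT Edge runtime variables such as IOTEDGE_AUTHSCHEME are only set inside the edge container, which is exactly the error in your log. For the device client you would instead point the sink at libnvds_azure_proto.so and pass a msg-broker-config file containing the device connection string:

[sink1]
enable=1
type=6
msg-conv-config=./msgconv_sample_config.txt
msg-conv-payload-type=1
# Device client library, not the edge/module client one
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_proto.so
# Holds the IoT Hub device connection string
msg-broker-config=./cfg_azure.txt
topic=mytopic

and cfg_azure.txt would contain something like:

[message-broker]
connection_str = HostName=<your-hub>.azure-devices.net;DeviceId=<your-device-id>;SharedAccessKey=<your-key>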