DeepStream Test5 app: Azure and message broker issue

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GeForce GTX 1050 Ti
• DeepStream Version: 5.0.1
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 450.80.02
• Issue Type (questions, new requirements, bugs): bugs

I have DeepStream 5.0.1 installed on a host machine running Ubuntu 18.04. I can run the test5 app with the EglSink type and see the output on screen, but when I try the MsgConvBroker sink, it gives me the errors below. I have created a local Docker image tagged localhost:5000/desktop_ds_test5:latest and added the same in the Azure portal under the respective device's module settings. When I run the test5 app with the Azure config, I get the errors below. The nvmsgbroker and iotedge libs are in place, and I have verified that /etc/iotedge/config.yaml is properly formatted.
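For reference, the MsgConvBroker sink section for the edge-module path follows this general shape. This is only a sketch based on the stock test5 defaults: the msg-conv-config filename, lib path, and topic name are placeholders and may differ per install.

```ini
# Sketch of an Azure edge-module sink for deepstream-test5
# (stock DeepStream 5.0.1 paths; adjust to your install).
[sink1]
enable=1
# type=6 selects the MsgConvBroker (message converter + broker) sink
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
# Edge-module adaptor: credentials come from the IoT Edge runtime
# environment, so no msg-broker-conn-str is set here.
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_azure_edge_proto.so
topic=mytopic
```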

(deepstream-test5-app:19793): GLib-CRITICAL **: 16:18:26.857: g_strrstr: assertion 'haystack != NULL' failed
    Error: Time:Thu Oct 29 16:18:26 2020 File:/home/arg/azure/azure-iot-sdk-c/iothub_client/src/iothub_client_core_ll.c Func:retrieve_edge_environment_variabes Line:177 Environment IOTEDGE_AUTHSCHEME not set
    Error: Time:Thu Oct 29 16:18:26 2020 File:/home/arg/azure/azure-iot-sdk-c/iothub_client/src/iothub_client_core_ll.c Func:IoTHubClientCore_LL_CreateFromEnvironment Line:1186 retrieve_edge_environment_variabes failed
    Error: Time:Thu Oct 29 16:18:26 2020 File:/home/arg/azure/azure-iot-sdk-c/iothub_client/src/iothub_client_core.c Func:create_iothub_instance Line:924 Failure creating iothub handle
    ERROR: iotHubModuleClientHandle is NULL! connect failed
    ** ERROR: <main:1451>: Failed to set pipeline to PAUSED
    Quitting
    ERROR from sink_sub_bin_sink2: Could not configure supporting library.
    Debug info: gstnvmsgbroker.c(388): legacy_gst_nvmsgbroker_start (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
    unable to connect to broker library
    ERROR from sink_sub_bin_sink2: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
    Debug info: gstbasesink.c(5265): gst_base_sink_change_state (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
    Failed to start
    App run failed

nvmsgbroker lib:

archana@archana-H310M-DS2:/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream$ ls
            libgstnvvideo4linux2.so  libnvdsgst_dsanalytics.so  libnvdsgst_inferserver.so  libnvdsgst_msgbroker.so    libnvdsgst_multistreamtiler.so  libnvdsgst_osd.so
            libgstnvvideoconvert.so  libnvdsgst_dsexample.so    libnvdsgst_infer.so        libnvdsgst_msgconv.so      libnvdsgst_of.so                libnvdsgst_segvisual.so
            libnvdsgst_dewarper.so   libnvdsgst_eglglessink.so  libnvdsgst_jpegdec.so      libnvdsgst_multistream.so  libnvdsgst_ofvisual.so          libnvdsgst_tracker.so

IoTEdge libs:

archana@archana-H310M-DS2:/opt/nvidia/deepstream/deepstream-5.0/lib$ ls
                gst-plugins               libnvds_amqp_proto.so        libnvds_dsanalytics.so                  libnvdsinfer_custom_impl_Yolo.so  libnvds_logger.so     libnvds_nvtxhelper.so        libnvvpi.so.0
                libcuvidv4l2.so           libnvds_azure_edge_proto.so  libnvdsgst_bufferpool.so                libnvds_infercustomparser.so      libnvds_meta.so       libnvds_opticalflow_dgpu.so  libnvvpi.so.0.3.0
                libiothub_client.so       libnvds_azure_proto.so       libnvdsgst_helper.so                    libnvds_infer_server.so           libnvds_mot_iou.so    libnvds_osd.so               libv4l
                libiothub_client.so.1     libnvds_batch_jpegenc.so     libnvdsgst_meta.so                      libnvds_infer.so                  libnvds_mot_klt.so    libnvds_tracker.so           pkg-config
                libnvbuf_fdmap.so         libnvdsbufferpool.so         libnvdsgst_smartrecord.so               libnvds_inferutils.so             libnvds_msgbroker.so  libnvds_utils.so             pyds.so
                libnvbufsurface.so        libnvds_csvparser.so         libnvdsinfer_custom_impl_fasterRCNN.so  libnvds_kafka_proto.so            libnvds_msgconv.so    libnvv4l2.so                 setup.py
                libnvbufsurftransform.so  libnvds_dewarper.so          libnvdsinfer_custom_impl_ssd.so         libnvds_lljpegdec.so              libnvds_nvdcf.so      libnvv4lconvert.so

Here is the output from sudo iotedge check:

Configuration checks
--------------------
√ config.yaml is well-formed - OK
√ config.yaml has well-formed connection string - OK
√ container engine is installed and functional - OK
√ config.yaml has correct hostname - OK
√ config.yaml has correct URIs for daemon mgmt endpoint - OK
√ latest security daemon - OK
√ host time is close to real time - OK
√ container time is close to host time - OK
‼ DNS server - Warning
    Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
    Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
    You can ignore this warning if you are setting DNS server per module in the Edge deployment.
‼ production readiness: certificates - Warning
    The Edge device is using self-signed automatically-generated development certificates.
    They will expire in 72 days (at 2021-01-10 10:48:43 UTC) causing module-to-module and downstream device communication to fail on an active deployment.
    After the certs have expired, restarting the IoT Edge daemon will trigger it to generate new development certs.
    Please consider using production certificates instead. See https://aka.ms/iotedge-prod-checklist-certs for best practices.
‼ production readiness: container engine - Warning
    Device is not using a production-supported container engine (moby-engine).
    Please see https://aka.ms/iotedge-prod-checklist-moby for details.
‼ production readiness: logs policy - Warning
    Container engine is not configured to rotate module logs which may cause it run out of disk space.
    Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
    You can ignore this warning if you are setting log policy per module in the Edge deployment.
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
    The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
    Data might be lost if the module is deleted or updated.
    Please see https://aka.ms/iotedge-storage-host for best practices.
‼ production readiness: Edge Hub's storage directory is persisted on the host filesystem - Warning
    The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
    Data might be lost if the module is deleted or updated.
    Please see https://aka.ms/iotedge-storage-host for best practices.

Connectivity checks
-------------------
√ host can connect to and perform TLS handshake with IoT Hub AMQP port - OK
√ host can connect to and perform TLS handshake with IoT Hub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with IoT Hub MQTT port - OK
√ container on the default network can connect to IoT Hub AMQP port - OK
√ container on the default network can connect to IoT Hub HTTPS / WebSockets port - OK
√ container on the default network can connect to IoT Hub MQTT port - OK
√ container on the IoT Edge module network can connect to IoT Hub AMQP port - OK
√ container on the IoT Edge module network can connect to IoT Hub HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to IoT Hub MQTT port - OK

17 check(s) succeeded.
6 check(s) raised warnings. Re-run with --verbose for more details.

Any updates to this issue?

@Fiona.Chen As you suggested, I created this new topic, but there has been no reply to it. I need to implement this in one of our projects and have been waiting for a reply since last week.

Sorry for the late reply.
Did you deploy the edge module on the Azure portal? See:
sources/libs/azure_protocol_adaptor/module_client/README

@Amycao Thanks for replying. Yes, I have deployed the module on Azure as per the README in module_client, and I also referred to the test5 app's README. The same setup worked previously with DeepStream 5.0 in an x86 Docker image with a stored sample video. But since last week I have been using the DeepStream 5.0.1 Docker image and have installed the same version on the host machine as well. Both now give the same error.
My Docker image is deployed at localhost:5000/desktop_ds_test5:latest.

I will try to run with the 5.0.1 Docker image and update you if there is progress. Thanks for your patience.

@Amycao Did you get a chance to check this issue?

Sorry for the late reply.
We could not reproduce the issue you are seeing with the 5.0.1 Docker image.
Have you fixed this issue?
If not, when you hit the error, can you run the command "systemctl status iotedge" to check which part went wrong?

I figured it out. You can reproduce it like this:

  1. I created a first Docker image from the 5.0.1 image as a base for Azure IoT; it has all the required libs at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream.
  2. I then ran this container, changed a few configuration settings inside it, and saved it with 'docker commit …', either under the same name and tag or a new one.
  3. When I tried to run this newly saved image, a few libs such as msgbroker were missing inside it.

To work around this, we build a new image every time we change the config file, because images saved with the docker commit command are missing libs. We have never hit this commit issue with other Docker images.
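One way to avoid docker commit entirely is to bake the config change into a derived image with a Dockerfile, so the base image's libs are never touched. A minimal sketch; the image tag is the one from this thread, and the in-container config path is an assumption based on the default DeepStream 5.0 layout:

```dockerfile
# Hypothetical Dockerfile: derive a new image from the working base
# instead of mutating a running container with `docker commit`.
FROM localhost:5000/desktop_ds_test5:latest

# Overlay only the edited config file; all DeepStream libs remain
# exactly as they were in the base image.
# (Destination path assumed from the stock DeepStream 5.0 layout.)
COPY configs/azure_config.txt /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5/configs/azure_config.txt
```

Rebuilding is then a single `docker build -t localhost:5000/desktop_ds_test5:latest .` after each config edit.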

This new image works well when invoked by IoT Edge; but if we try to run it directly, e.g. deepstream-test5-app -c ./configs/azure_config.txt, it gives the same errors as in the first post.

Yes, for the edge module you should run it through iotedge; that is how it is meant to be invoked. For the device client, you can run the test5 sample directly as you mentioned.

Which port and topic should I specify for a simple test5 trial with Azure in the config sink, together with the device connection string?

<port>;<topic>
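To make that concrete, here is a hedged sketch of a device-client sink section. The proto-lib path is the stock DeepStream 5.0.1 location, the connection-string fields come from the Azure portal for your device, and all angle-bracket values are placeholders:

```ini
# Sketch of an Azure device-client sink for deepstream-test5
# (run directly, without the IoT Edge runtime).
[sink1]
enable=1
# type=6 selects the MsgConvBroker sink
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
# Device-client adaptor (note: not the edge-module libnvds_azure_edge_proto.so)
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_azure_proto.so
# Full device connection string from the Azure portal
msg-broker-conn-str=HostName=<my-hub>.azure-devices.net;DeviceId=<device_id>;SharedAccessKey=<key>
topic=<topic-name>
```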