libnvds_azure_edge_proto.so vs libnvds_azure_proto.so

• Hardware Platform (Jetson / GPU): RTX 3050
• DeepStream Version: 6.2
• TensorRT Version: 8.5.3
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): bug

Hi,

What is the difference between libnvds_azure_edge_proto.so and libnvds_azure_proto.so? I’ve tried using libnvds_azure_edge_proto.so, but I get the error below. libnvds_azure_proto.so seems to work…

2023-04-03 15:58:38.704 ERROR extensions/nvdsbase/nvds_scheduler.cpp@312: Error from /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker: Could not configure supporting library.
2023-04-03 15:58:38.704 ERROR extensions/nvdsbase/nvds_scheduler.cpp@314: Debug info:
gstnvmsgbroker.cpp(401): legacy_gst_nvmsgbroker_start (): /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker:
unable to connect to broker library
2023-04-03 15:58:38.704 ERROR extensions/nvdsbase/nvds_scheduler.cpp@312: Error from /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
2023-04-03 15:58:38.704 ERROR extensions/nvdsbase/nvds_scheduler.cpp@314: Debug info:
gstbasesink.c(5367): gst_base_sink_change_state (): /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker:
Failed to start
2023-04-03 15:58:38.705 ERROR extensions/nvdsbase/nvds_scheduler.cpp@184: Failed to set GStreamer pipeline to PLAYING
2023-04-03 15:58:38.710 ERROR extensions/nvdsbase/nvds_scheduler.cpp@349: Error from /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker: Could not configure supporting library.
2023-04-03 15:58:38.710 ERROR extensions/nvdsbase/nvds_scheduler.cpp@351: Debug info:
gstnvmsgbroker.cpp(401): legacy_gst_nvmsgbroker_start (): /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker:
unable to connect to broker library
2023-04-03 15:58:38.710 ERROR extensions/nvdsbase/nvds_scheduler.cpp@349: Error from /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
2023-04-03 15:58:38.710 ERROR extensions/nvdsbase/nvds_scheduler.cpp@351: Debug info:
gstbasesink.c(5367): gst_base_sink_change_state (): /GstPipeline:NvDsScheduler-Pipeline/GstDsNvMsgBrokerBin:Cloud Message Converter and Broker/Cloud Message Converter and Broker3/GstNvMsgBroker:nvmsgbrokersinkbin-nvmsgbroker:
Failed to start


Error: Time:Mon Apr  3 15:58:38 2023 File:/home/unnik/work/deepstream/iot/azure-iot-sdk-c/iothub_client/src/iothub_client_core_ll.c Func:retrieve_edge_environment_variabes Line:177 Environment IOTEDGE_AUTHSCHEME not set
Error: Time:Mon Apr  3 15:58:38 2023 File:/home/unnik/work/deepstream/iot/azure-iot-sdk-c/iothub_client/src/iothub_client_core_ll.c Func:IoTHubClientCore_LL_CreateFromEnvironment Line:1186 retrieve_edge_environment_variabes failed
Error: Time:Mon Apr  3 15:58:38 2023 File:/home/unnik/work/deepstream/iot/azure-iot-sdk-c/iothub_client/src/iothub_client_core.c Func:create_iothub_instance Line:924 Failure creating iothub handle
ERROR: iotHubModuleClientHandle is NULL! connect failed
Returned, stopping playback
Deleting pipeline
2023-04-03 15:58:38.711 INFO  gxf/gxe/gxe.cpp@278: Deinitializing...
2023-04-03 15:58:38.711 INFO  gxf/gxe/gxe.cpp@285: Destroying context
2023-04-03 15:58:38.712 INFO  gxf/gxe/gxe.cpp@291: Context destroyed.
*******************************************************************
End One.yaml
*******************************************************************
[INFO] Graph installation dir

One.yaml (7.7 KB)

Attached - My Graph Definition
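
One observation on the log: the "Environment IOTEDGE_AUTHSCHEME not set" error comes from IoTHubClientCore_LL_CreateFromEnvironment, which suggests the edge library builds its connection from environment variables that the IoT Edge runtime injects into a deployed module container, rather than from a connection string. A minimal sketch to check for them from inside the container (the variable list follows the azure-iot-sdk-c edge environment lookup and is indicative, not exhaustive):

import os

# Variables the Azure IoT Edge runtime injects into a module container.
# libnvds_azure_edge_proto.so (the module client) is created "from
# environment", so without these it fails as in the log above.
EDGE_VARS = [
    "IOTEDGE_AUTHSCHEME",
    "IOTEDGE_DEVICEID",
    "IOTEDGE_MODULEID",
    "IOTEDGE_IOTHUBHOSTNAME",
    "IOTEDGE_WORKLOADURI",
    "IOTEDGE_APIVERSION",
]

missing = [v for v in EDGE_VARS if not os.environ.get(v)]
if missing:
    print("Not an IoT Edge module environment; missing:", ", ".join(missing))
else:
    print("IoT Edge module environment variables are present.")

If these are missing, the process is not running as a deployed IoT Edge module, which would match the device-client library (libnvds_azure_proto.so) working while the module-client one fails.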

Also, can the proto library be the cause of the output being in the wrong format? See below…

Current pipeline output format:

Source 1: Frame Number = 17627 Total objects = 0 [ Person:0 ]
Source 0: Frame Number = 17641 Total objects = 0 [ Bicycle:0 Car:0 Person:0 Roadsign:0 ]
Source 1: Frame Number = 17628 Total objects = 0 [ Person:0 ]
Source 0: Frame Number = 17642 Total objects = 0 [ Bicycle:0 Car:0 Person:0 Roadsign:0 ]
Source 1: Frame Number = 17629 Total objects = 0 [ Person:0 ]
Source 0: Frame Number = 17643 Total objects = 0 [ Bicycle:0 Car:0 Person:0 Roadsign:0 ]
Source 1: Frame Number = 17630 Total objects = 0 [ Person:0 ]
Source 0: Frame Number = 17644 Total objects = 0 [ Bicycle:0 Car:0 Person:0 Roadsign:0 ]
Source 1: Frame Number = 17631 Total objects = 0 [ Person:0 ]
Source 0: Frame Number = 17645 Total objects = 0 [ Bicycle:0 Car:0 Person:0 Roadsign:0 ]
Source 1: Frame Number = 17632 Total objects = 0 [ Person:0 ]
Source 0: Frame Number = 17646 Total objects = 0 [ Bicycle:0 Car:0 Person:0 Roadsign:0 ]
Source 1: Frame Number = 17633 Total objects = 0 [ Person:0 ]

Expected Format:

[
{
    "version": "4.0",
    "id": 0,
    "@timestamp": "1970-01-01T00:00:00.000Z",
    "sensorId": "Yard",
    "objects": [
        "1|274.667|139.412|474.667|322.941|Car"
    ]
},
{ 
    "version": "4.0", 
    "id": 26, 
    "@timestamp": "2020-05-03T20:47:27.435Z", 
    "sensorId": "Yard", 
    "objects": [ 
        "2|274.667|137.647|484|324.706|Car" 
    ] 
}, 

{ 
    "version": "4.0", 
    "id": 27, 
    "@timestamp": "2020-05-03T20:47:27.483Z", 
    "sensorId": "Yard", 
    "objects": [ 
        "2|276|137.647|486.667|326.471|Car" 
    ] 
}, 

{ 
    "version": "4.0", 
    "id": 28, 
    "@timestamp": "2020-05-03T20:47:27.498Z", 
    "sensorId": "Yard", 
    "objects": [ 
        "2|276|137.647|488|326.471|Car" 
    ] 
},

Please refer to our Guide: Azure MQTT Protocol Adapter Libraries

Hi,
I have seen the documentation, but it does not resolve the issue…

1.) The document indicates that the library automatically fetches the connection string from /etc/iotedge/config.yaml. It seems that aziot actually creates its config files in /etc/aziot/. I made a copy of the aziot directory and renamed it iotedge, but I am still getting the above error.
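
For comparison, the device-client library (libnvds_azure_proto.so) takes its connection string directly, either through the msg-broker-conn-str parameter or through an adapter config file (if your version exposes a msg-broker-config parameter). A sketch along the lines of the cfg_azure.txt sample shipped with DeepStream, with placeholder values; verify the field names against your install:

[message-broker]
connection_str = HostName=<my-hub>.azure-devices.net;DeviceId=<my-device>;SharedAccessKey=<my-key>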

Output of iotedge check:

Configuration checks (aziot-identity-service)

√ keyd configuration is well-formed - OK
√ certd configuration is well-formed - OK
√ tpmd configuration is well-formed - OK
√ identityd configuration is well-formed - OK
√ daemon configurations up-to-date with config.toml - OK
√ identityd config toml file specifies a valid hostname - OK
√ aziot-identity-service package is up-to-date - OK
√ host time is close to reference time - OK
√ preloaded certificates are valid - OK
√ keyd is running - OK
√ certd is running - OK
√ identityd is running - OK
√ read all preloaded certificates from the Certificates Service - OK
√ read all preloaded key pairs from the Keys Service - OK
√ check all EST server URLs utilize HTTPS - OK
√ ensure all preloaded certificates match preloaded private keys with the same ID - OK

Connectivity checks (aziot-identity-service)

√ host can connect to and perform TLS handshake with iothub AMQP port - OK
√ host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with iothub MQTT port - OK

Configuration checks

√ aziot-edged configuration is well-formed - OK
√ configuration up-to-date with config.toml - OK
√ container engine is installed and functional - OK
√ configuration has correct URIs for daemon mgmt endpoint - OK
√ aziot-edge package is up-to-date - OK
√ container time is close to host time - OK
‼ DNS server - Warning
Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
You can ignore this warning if you are setting DNS server per module in the Edge deployment.
caused by: Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
Please see Troubleshoot Azure IoT Edge common errors | Microsoft Learn for best practices.
You can ignore this warning if you are setting DNS server per module in the Edge deployment.
‼ production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see Prepare to deploy your solution in production - Azure IoT Edge | Microsoft Learn for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
caused by: Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see Prepare to deploy your solution in production - Azure IoT Edge | Microsoft Learn for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
‼ production readiness: Edge Agent’s storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see Prepare to deploy your solution in production - Azure IoT Edge | Microsoft Learn for best practices.
caused by: The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see Prepare to deploy your solution in production - Azure IoT Edge | Microsoft Learn for best practices.
‼ production readiness: Edge Hub’s storage directory is persisted on the host filesystem - Warning
The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see Prepare to deploy your solution in production - Azure IoT Edge | Microsoft Learn for best practices.
caused by: The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see Prepare to deploy your solution in production - Azure IoT Edge | Microsoft Learn for best practices.
√ Agent image is valid and can be pulled from upstream - OK
√ proxy settings are consistent in aziot-edged, aziot-identityd, moby daemon and config.toml - OK

Connectivity checks

√ container on the default network can connect to upstream AMQP port - OK
√ container on the default network can connect to upstream HTTPS / WebSockets port - OK
√ container on the default network can connect to upstream MQTT port - OK
skipping because of not required in this configuration
√ container on the IoT Edge module network can connect to upstream AMQP port - OK
√ container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to upstream MQTT port - OK
skipping because of not required in this configuration
31 check(s) succeeded.
4 check(s) raised warnings.
2 check(s) were skipped due to errors from other checks.

1. The Azure device client adapter library is named libnvds_azure_proto.so. The Azure module client adapter library is named libnvds_azure_edge_proto.so.
2. The msgbroker is open source; please refer to the code in sources\libs\azure_protocol_adaptor. You can debug it yourself first.
3. Could you reproduce your problem with our demo app, sources\apps\sample_apps\deepstream-test4? If we can reproduce the problem in our environment, it will be faster to analyze.

Just for reference, this is a Graph Composer project… So I’m trying the reference graph deepstream-test4, but no luck so far…

I’m getting the same error with the Graph Composer deepstream-test4 graph and libnvds_azure_edge_proto.so…

I was able to run a non-Graph-Composer project using libnvds_azure_edge_proto.so, so the problem seems to be related to Graph Composer?

Maybe we can backtrack for a bit…

1.) When I use libnvds_azure_proto.so rather than libnvds_azure_edge_proto.so, it works, but the output from the container is not in JSON format as expected…

So my message broker receives the output in JSON, but the container output is in the format below, which I also need in JSON format…

Source 0: Frame Number = 1417 Total objects = 10 [ Bicycle:0 Car:7 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1418 Total objects = 9 [ Bicycle:0 Car:6 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1419 Total objects = 9 [ Bicycle:0 Car:6 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1420 Total objects = 9 [ Bicycle:0 Car:6 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1421 Total objects = 7 [ Bicycle:0 Car:5 Person:2 Roadsign:0 ]
Source 0: Frame Number = 1422 Total objects = 7 [ Bicycle:0 Car:5 Person:2 Roadsign:0 ]
Source 0: Frame Number = 1423 Total objects = 9 [ Bicycle:0 Car:7 Person:2 Roadsign:0 ]
Source 0: Frame Number = 1424 Total objects = 7 [ Bicycle:0 Car:5 Person:2 Roadsign:0 ]
Source 0: Frame Number = 1425 Total objects = 9 [ Bicycle:0 Car:7 Person:2 Roadsign:0 ]
Source 0: Frame Number = 1426 Total objects = 8 [ Bicycle:0 Car:5 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1427 Total objects = 7 [ Bicycle:0 Car:4 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1428 Total objects = 9 [ Bicycle:0 Car:5 Person:4 Roadsign:0 ]
Source 0: Frame Number = 1429 Total objects = 7 [ Bicycle:0 Car:5 Person:2 Roadsign:0 ]
Source 0: Frame Number = 1430 Total objects = 9 [ Bicycle:0 Car:6 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1431 Total objects = 8 [ Bicycle:0 Car:5 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1432 Total objects = 8 [ Bicycle:0 Car:5 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1433 Total objects = 12 [ Bicycle:0 Car:9 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1434 Total objects = 11 [ Bicycle:0 Car:8 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1435 Total objects = 11 [ Bicycle:0 Car:8 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1436 Total objects = 11 [ Bicycle:0 Car:8 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1437 Total objects = 10 [ Bicycle:0 Car:7 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1438 Total objects = 9 [ Bicycle:0 Car:6 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1439 Total objects = 10 [ Bicycle:0 Car:7 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1440 Total objects = 10 [ Bicycle:0 Car:7 Person:3 Roadsign:0 ]
Source 0: Frame Number = 1441 Total objects = 11 [ Bicycle:0 Car:8 Person:3 Roadsign:0 ]
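
(Those per-frame "Source N: Frame Number = …" lines are printed to the console by the app itself, typically from a sink-pad probe that walks the frame metadata; the C samples do the equivalent in their OSD sink-pad probes. The broker JSON is produced separately by the message converter, so the choice of proto library should not change these prints. A rough sketch of such a probe, assuming the DeepStream Python bindings (pyds), just to show where the text comes from:)

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds  # DeepStream Python bindings (assumed installed)

def osd_sink_pad_buffer_probe(pad, info, u_data):
    """Walk batch metadata and print a per-frame object summary.

    This mirrors the console lines above; the JSON payload sent to the
    broker is built independently by nvmsgconv/nvmsgbroker."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        counts = {}
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            counts[obj_meta.obj_label] = counts.get(obj_meta.obj_label, 0) + 1
            l_obj = l_obj.next
        summary = " ".join(f"{k}:{v}" for k, v in sorted(counts.items()))
        print(f"Source {frame_meta.pad_index}: Frame Number = {frame_meta.frame_num} "
              f"Total objects = {frame_meta.num_obj_meta} [ {summary} ]")
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK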

In your One.yaml, the msgconverter extension uses the /opt/nvidia/deepstream/deepstream/reference_graphs/WatchOne//msgconv_config.txt configuration. Can you share the configuration file?

Hi Fiona,

Attached as requested.

Thanks!

msgconv_config.txt (741 Bytes)
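
(For readers without the attachment: a msgconv configuration typically maps each sensor to static metadata along these lines. The field names follow the dstest4_msgconv_config.txt sample; the values here are placeholders, not the actual attached file.)

[sensor0]
enable=1
type=Camera
id=Yard
description=Yard camera
location=45.293701;-75.830391;48.155
coordinate=5.2;10.1;11.2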

Please refer to the deepstream-test4 sample graph.
Your settings are:

components:
- name: Cloud Message Converter and Broker3
  parameters:
    in: Static Data Input4
    msg-broker-conn-str: HostName=watchnvidiaiothub.azure-devices.net;DeviceId=jetson_composerpc_symeteric;SharedAccessKey=B7ykcTwXVjDVWlsvp0XHFcR5iGH80YDRerfS7el2boc=
    msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_azure_proto.so
    msg-conv-config: /opt/nvidia/deepstream/deepstream/reference_graphs/WatchOne//msgconv_config.txt
    msg-conv-payload-type: 1
    sync: false
  type: nvidia::deepstream::NvDsMsgConvBroker

Please set “msg-conv-payload-type” to 0, as in the sample.
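
For reference, msg-conv-payload-type selects the converter's output schema: 0 is the full DeepStream schema and 1 is the minimal schema, so the suggested change is a one-line edit to the parameter block above:

    msg-conv-payload-type: 0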

Hi! Thanks for the response. I’m able to use different payload types with non-Graph-Composer DeepStream apps, so for now I think we will stick with those, as we are facing a few issues with Graph Composer.
