Message broker to send GPS

Hi Andrey,
I am not familiar with serial programming; you can start with serial programming tutorials:
https://www.cmrr.umn.edu/~strupp/serial.html

@Amycao
Hi, Thank you for your response.
However, it doesn’t help with getting serial data delivered by DeepStream.
How could we send data from the serial output to Azure IoT with DeepStream?

Regarding sending data from the serial output to Azure IoT with DeepStream, please see the comments below.

@Amycao
Thank you for following up.
However, we need a working sample that we could use. Could you share a working sample, please?
Could you illustrate how to actually send some messages using the approach referenced in the previous messages?

dynamic, as it gets updated over time

The “nvmsgconv” plugin uses NVDS_EVENT_MSG_META type metadata from the buffer
and generates the “DeepStream Schema” payload in JSON format. Static properties of the schema are read from a configuration file in the form of key-value pairs.
Check dstest4_msgconv_config.txt for reference. The generated payload is attached to the buffer as NVDS_META_PAYLOAD type metadata.
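The static-properties file uses a simple INI-style layout. A fragment in the spirit of dstest4_msgconv_config.txt (all values below are illustrative, not taken from a real deployment):

```ini
[sensor0]
enable=1
type=Camera
id=CAMERA_ID
location=45.293701;-75.830391;48.155747
description=Entrance camera
coordinate=5.2;10.1;11.2
```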

The “nvmsgbroker” plugin extracts NVDS_META_PAYLOAD type metadata from the buffer and sends that payload to the server using the APIs in the nvmsgbroker C library.

Could you provide an example of how to send a simple message with it?
For example, a message that contains only the single word “test”?

You can refer to the test samples sources/libs/amqp_protocol_adaptor/test_amqp_proto_async.c and test_amqp_proto_sync.c, which demonstrate how to connect and send “hello world” messages using the AMQP protocol adapter.

@Amycao could you provide an example of how to send “hello world” with the Azure IoT protocol via DeepStream, please?

Under sources/libs/azure_protocol_adaptor/device_client/

@amycao
Thank you for your response.
Exactly.
The README at that location lists:

To run test program:
--------------------
 make -f Makefile.test
 ./test_azure_proto_async <path_to_libnvds_azure_proto.so>
 ./test_azure_proto_sync <path_to_libnvds_azure_proto.so>

It is indeed possible to run it with:

./test_azure_proto_sync /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_azure_proto.so
Adapter protocol=AZURE_DEVICE_CLIENT , version=2.0
connection signature string queried=
In sample prog: connect success
Azure: connect Success

But there is a limitation: “Note: Max length of the custom_msg_property string is limited to 512 bytes”

Moreover, once a simple “hello world” works, how do we use this particular Azure adapter to continuously send input from /dev/ttyACM0, which is text that keeps being appended? How should we approach this at all?
Also, this example doesn’t address the video stream content.
Could we somehow send both “hello world” and the stream content to Azure MQTT, either as a stream or as events?

If we could send “hello world” with it, that would be a step forward anyway; but where do we specify the message? On the command line in the terminal?
By implementation, it reads the message from a .cfg text file.
How could we pass messages from outside of the azure_cfg file?

The test sample demonstrates how to connect to the broker and how to send a message. For your use case, you need to do some customization. Do you want to send GPS data, stream contents, or both? Does your case involve inference? Please give more details.

@Amycao The effort is to shift from the approach listed below to the NVIDIA DeepStream ecosystem for sending to Azure IoT:

Our current implementation has an independent process which reads the GPS coordinates (20 times per second).

It has 2 responsibilities:

- Send all coordinates to another process that selects the image to run the inference on (aka the broker). Hence all analyzed images get geo-located. The frequency can be up to 6 times per second.
- Send a "heartbeat" to Azure to track the location of the device. The frequency is once every 20 seconds or so.

The communication between the GPS reader and the broker is done via ZMQ.
The communication between the GPS and Azure is done via Azure IoT.

from devs

Do the inference server and the ZMQ broker run on the same server?

Hi @Amycao
Thank you for following up:

Regarding the video streams: we do not need to send the video itself; just the detections, or the streams themselves, need to be transferred over MQTT.
Do the inference server and the ZMQ broker run on the same server?

yes, they are on the same server

Regarding the video streams: we do not need to send the video itself; just the detections, or the streams themselves, need to be transferred over MQTT.

But what is the difference between the video and the streams?

@Amycao to clarify - only the detections need to be passed, if that simplifies things. However, I had tried passing the full video via MQTT, and that works somehow.
@Amycao could you expand on how to implement the shift from ZMQ to DeepStream Azure IoT?

Send all coordinates to another process that selects the image to run the inference on (aka the broker). Hence all analyzed images get geo-located. The frequency can be up to 6 times per second.

We do not support inference in nvinfer based on GPS coordinate info. You need to do some customization of the nvinfer plugin.

Is any serial output transmission supported within the DeepStream framework?
How exactly is it possible to implement the customization of the nvinfer plugin?

See the text marked in bold font.