• Hardware Platform (Jetson / GPU)
Jetson Xavier NX
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
Hi, I am trying to count the number of entry/exit in a building.
I succeeded in running deepstream occupancy analytics.
Here I have a question: what do I need to do to receive the messages (counts and other info) on a cloud server? I am fine with using Kafka or MQTT. Could anyone tell me how to set this up?
Thanks in advance.
We recommend raising this query in the Triton Inference Server GitHub issues section.
@NVES, thank you for your reply.
I read the reference on GitHub and I understand that Triton Inference Server is for executing inference. However, what I want to do is run the inference (DeepStream) on the Jetson and send the data to a server via Kafka or MQTT. I would like to know how to do that.
We are moving this post to the Deepstream forum to get better help.
Hello @spolisetty, do you mean which software/service should be used after receiving the message from MQTT/Kafka? Basically, DeepStream publishes (produces) messages via MQTT or Kafka, and you need your own subscriber (consumer) to handle the message and run the downstream service logic (or simply store it). Please note that this forum is focused on DeepStream, and the server side is not our strength.
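To illustrate the subscriber (consumer) side mentioned above, here is a minimal Python sketch. The payload field names (`sensor_id`, `entry_count`, `exit_count`) are assumptions for illustration; the actual JSON schema depends on the msgconv configuration of the occupancy-analytics app. The receive loop shown in comments assumes the third-party `kafka-python` client.

```python
import json

# Hypothetical example payload; the actual schema produced by the
# deepstream-occupancy-analytics app depends on its nvmsgconv config.
SAMPLE_MSG = b'{"sensor_id": "cam0", "entry_count": 12, "exit_count": 9}'

def handle_message(raw: bytes) -> dict:
    """Decode one message published by nvmsgbroker and extract counts."""
    msg = json.loads(raw)
    entries = msg.get("entry_count", 0)
    exits = msg.get("exit_count", 0)
    return {
        "sensor": msg.get("sensor_id"),
        "entries": entries,
        "exits": exits,
        # current occupancy = cumulative entries minus cumulative exits
        "occupancy": entries - exits,
    }

# In a real deployment the raw bytes would come from a Kafka client,
# e.g. with kafka-python (an assumption, not part of DeepStream):
#   from kafka import KafkaConsumer
#   consumer = KafkaConsumer("deepstream-topic", bootstrap_servers="broker:9092")
#   for record in consumer:
#       print(handle_message(record.value))
print(handle_message(SAMPLE_MSG))
```

From here the consumer can forward the counts to a database or dashboard; that part is ordinary server-side code and independent of DeepStream.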
The sample deepstream-occupancy-analytics app is based on the basic DeepStream sample deepstream-test5 app. DeepStream Reference Application - deepstream-test5 app — DeepStream 6.1.1 Release documentation
You can also refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/README for the nvmsgbroker configurations.
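As a rough sketch of what such a configuration looks like, a message-broker sink section in a test5-style config file has this shape; the broker host and topic name below are placeholders, and the exact keys should be checked against the test4/test5 READMEs for your DeepStream version:

```ini
# Hedged sketch of a message-broker sink for a deepstream-test5-style config.
[sink1]
enable=1
# type=6 selects the message-broker (nvmsgbroker) sink
type=6
# schema/payload conversion settings
msg-conv-config=dstest5_msgconv_sample_config.txt
# Kafka protocol adaptor shipped with DeepStream
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
# <host>;<port> of your broker (placeholder values)
msg-broker-conn-str=<kafka-host>;9092
topic=occupancy-events
```

Swapping the proto-lib and connection string is how the same app targets MQTT, Azure, or AMQP brokers instead of Kafka.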
@yingliu @Fiona.Chen, Thank you very much. I am referring to the document but still struggling. I am going to keep trying.
I finally succeeded in sending messages using Azure by following /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/README.
Thank you very much.