Inference Output to External MCU

Hi all, I know the question I am asking could have a number of correct answers, but I just need a little guidance. This post itself might be better off in the software section, so mods can decide. Anyhow, I am looking to send the results of an inference framework (e.g., Jetson-Inference or DeepStream) as short text messages to an external MCU, over something like an XBee radio, for example.

I have tried to get my head around how to move forward and considered a number of possibilities. The first was to deploy ROS on the Nano and use the deep learning nodes that have been developed. I thought I could output inference results as plain text over serial on the Nano using the ROS publish/subscribe framework.
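A minimal sketch of that publish/subscribe idea, using the ROS 1 Python client library: inference results are turned into one short text line each and published as `std_msgs/String`, so a separate serial-bridge node could subscribe and forward them. The topic name, node name, and message layout here are assumptions for illustration, not anything from a specific package.

```python
# Sketch: republish inference results as short plain-text ROS messages.
# Topic/node names and the "label confidence" line format are assumptions.

def result_to_text(label, confidence):
    # One short ASCII line per detection, easy to parse on a small MCU
    return "%s %.2f" % (label, confidence)

def publish_results(results):
    import rospy                      # ROS 1 Python client library
    from std_msgs.msg import String

    pub = rospy.Publisher("inference_text", String, queue_size=10)
    rospy.init_node("inference_to_text")
    rate = rospy.Rate(10)             # throttle to 10 messages per second
    for label, conf in results:
        pub.publish(String(data=result_to_text(label, conf)))
        rate.sleep()

if __name__ == "__main__":
    # The formatter can be exercised without a ROS master running
    print(result_to_text("person", 0.934))
```

A subscriber node (or any script holding the serial port) would then only need to append a newline and write each message out, keeping the inference side and the radio side decoupled.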

Another possibility might be to use an external library like pySerial, but I am not very familiar with it.
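For what it's worth, the pySerial route can be quite small. The sketch below packs one detection into a newline-terminated ASCII line and writes it to a serial port; the port name (`/dev/ttyUSB0`), baud rate, and message format are assumptions, since they depend on how the XBee is attached and configured.

```python
# Sketch: send one inference result as a short text line over a serial port.
# Port, baud rate, and the "label,confidence\n" format are assumptions.

def format_inference_message(label, confidence):
    """Pack one detection into a newline-terminated ASCII line."""
    return "{},{:.2f}\n".format(label, confidence).encode("ascii")

def send_inference_message(label, confidence, port="/dev/ttyUSB0", baud=9600):
    # pySerial import kept local so the formatter is usable without hardware
    import serial
    with serial.Serial(port, baud, timeout=1) as ser:
        ser.write(format_inference_message(label, confidence))

if __name__ == "__main__":
    # The formatter alone can be checked without a serial device attached
    print(format_inference_message("person", 0.934))
```

Keeping the formatting separate from the write means the same line format can later be reused from a ROS node or a DeepStream probe callback without touching the serial code.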

Does anyone perhaps have a simpler approach?

Hi,

This depends on how you want to maintain your system.

Passing the result as plain text is easy and flexible, but using an external library might be more stable.
For your reference, DeepStream supports sending out the output buffer or metadata through several APIs.
Here is a tutorial for your reference:
https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.15.html%23wwpID0E0OJ0HA

Thanks.

I believe this is exactly what I was looking for; the SDK is impressive in its completeness.

Do appreciate the pointer, AastaLL. Thank you.