I have a high-level question that I have not been able to find a concrete answer to in the DeepStream docs or on the forums. I have a pipeline using nvinfer, and I would like to send its inference data to a cloud MQTT broker.
Using Amazon’s DeepStream/IoT Core integration I am able to do so successfully. However, I am using AWS Greengrass to deploy my application, and Greengrass already runs an MQTT client to my IoT Core backend as part of its execution. So I don’t want to create a second client on the same device, which is what nvmsgbroker would do; instead I would prefer to handle the inference data messages from my application directly.
So what I am stuck on is whether I should still use nvmsgconv to generate the payloads I send over MQTT, or whether it is preferable to probe the results of nvinfer directly (as in the sketch below) if I am not sending them to nvmsgbroker. I am wondering if there is something I am missing here.
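For concreteness, this is roughly what I mean by probing directly. It is a minimal sketch using the Python bindings (pyds); the payload shape is just something I made up for illustration, not the nvmsgconv schema:

```python
# Minimal sketch: probe the src pad of nvinfer and serialize detections.
import json

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds


def infer_src_pad_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        detections = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            detections.append({
                "label": obj_meta.obj_label,
                "confidence": obj_meta.confidence,
                "bbox": [obj_meta.rect_params.left, obj_meta.rect_params.top,
                         obj_meta.rect_params.width, obj_meta.rect_params.height],
            })
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        payload = json.dumps({"frame": frame_meta.frame_num,
                              "objects": detections})
        # Placeholder: hand the payload to the MQTT publisher here.
        print(payload)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK


# Attached to the inference element (named "pgie" here) like so:
# pgie.get_static_pad("src").add_probe(
#     Gst.PadProbeType.BUFFER, infer_src_pad_probe, 0)
```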
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and any other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
I am using a Xavier NX on JetPack 5.1, DeepStream 6.2, and TensorRT 8.4.1. However, I don’t think this information is needed to answer my question, as it is fairly high level and version-agnostic beyond requiring a DeepStream release recent enough to include these elements.
Yeah, that is what I am doing at the moment. It seems to work just fine; I wanted to verify that there were no pipeline or performance benefits to nvmsgconv compared to using a simple probe function.
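In case it helps anyone who lands here: since the Greengrass nucleus already holds the connection to IoT Core, I publish through its IPC interface rather than opening a second MQTT client. A rough sketch of what I am doing, assuming Greengrass v2 and the awsiotsdk Python package (the topic name is a placeholder, and the component needs an accessControl policy granting the aws.greengrass.ipc.mqttproxy operation on that topic):

```python
# Sketch: publish through the Greengrass nucleus instead of opening a
# second MQTT connection. The nucleus relays the message over its
# existing IoT Core connection.
from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2
from awsiot.greengrasscoreipc.model import QOS

ipc_client = GreengrassCoreIPCClientV2()


def publish_inference(payload: str):
    ipc_client.publish_to_iot_core(
        topic_name="dt/inference/results",  # placeholder topic
        qos=QOS.AT_LEAST_ONCE,
        payload=payload.encode("utf-8"),
    )
```

I call publish_inference() from the probe function in place of the print() above, so the only MQTT connection on the device remains the one Greengrass already maintains.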