I have a scenario: when I add a camera, I need to control which AI functions are enabled for that camera, choose which queues to use for inference, and finally publish a message through MQTT.
I don't know how to design this with DeepStream.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line, and any other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or sample application it concerns, and a description of the function.)
Can you provide the information above?
What do you mean by "I need to control the AI function enabled by the camera"? What are the models used to do? Could you provide a media pipeline or graph describing the requirements? Thanks!
For inference and sending MQTT messages, please refer to the ready-made sample deepstream-test5.
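As a rough illustration (not the sample's exact code), the message tail in a deepstream-test5-style pipeline is an nvmsgconv element feeding an nvmsgbroker sink. The config file path, proto library path, broker address, and topic below are placeholder assumptions; replace them with values for your setup:

```c
#include <gst/gst.h>

/* Minimal sketch: attach an MQTT message tail after the inference stage.
 * All paths and connection values are placeholders for illustration. */
static void
add_mqtt_tail (GstElement *pipeline, GstElement *upstream)
{
  GstElement *msgconv   = gst_element_factory_make ("nvmsgconv", "msgconv");
  GstElement *msgbroker = gst_element_factory_make ("nvmsgbroker", "msgbroker");

  /* ASSUMPTIONS: config path, proto lib path, broker address, and topic
   * must be adapted to your installation and broker. */
  g_object_set (msgconv, "config", "msgconv_config.txt", NULL);
  g_object_set (msgbroker,
      "proto-lib", "/opt/nvidia/deepstream/deepstream/lib/libnvds_mqtt_proto.so",
      "conn-str", "localhost;1883",
      "topic", "camera/events",
      NULL);

  gst_bin_add_many (GST_BIN (pipeline), msgconv, msgbroker, NULL);
  /* upstream is the element after inference (e.g. one branch of a tee). */
  gst_element_link_many (upstream, msgconv, msgbroker, NULL);
}
```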
I’m sorry I didn’t explain clearly. I’m wondering if we can use custom source caps to send data to different streammux instances. Each streammux would then be followed by a different inference pipeline, allowing us to build a dynamic and multifunctional pipeline.
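Roughly like this sketch, where each AI function has its own nvstreammux + nvinfer branch and a newly added camera is linked to the branch for its enabled function (element and variable names here are hypothetical, just to describe the idea):

```c
#include <gst/gst.h>

/* One branch per AI function: its own nvstreammux feeding its own nvinfer. */
typedef struct {
  GstElement *streammux; /* nvstreammux for this AI function */
  GstElement *pgie;      /* nvinfer running this function's model */
} InferBranch;

/* Link a camera bin's src pad to a free sink pad of the chosen branch. */
static gboolean
attach_camera (GstElement *camera_bin, InferBranch *branch, guint cam_id)
{
  gchar   *pad_name = g_strdup_printf ("sink_%u", cam_id);
  GstPad  *sinkpad  = gst_element_get_request_pad (branch->streammux, pad_name);
  GstPad  *srcpad   = gst_element_get_static_pad (camera_bin, "src");
  gboolean linked   = sinkpad && srcpad &&
                      gst_pad_link (srcpad, sinkpad) == GST_PAD_LINK_OK;

  /* The streammux holds the request pad until it is released. */
  if (sinkpad) gst_object_unref (sinkpad);
  if (srcpad)  gst_object_unref (srcpad);
  g_free (pad_name);
  return linked;
}
```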
nvstreammux already supports dynamically adding and deleting sources; please refer to the runtime_source_add_delete sample. Feel free to ask if there are more questions.
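The delete side of that sample boils down to stopping the source bin, releasing its streammux request pad, and removing it from the pipeline. A condensed sketch of that pattern (error handling omitted):

```c
#include <gst/gst.h>

/* Condensed from the delete path of runtime_source_add_delete:
 * stop the source bin, release its streammux pad, drop it from the bin. */
static void
remove_camera (GstElement *pipeline, GstElement *camera_bin,
               GstElement *streammux, guint cam_id)
{
  gchar  *pad_name = g_strdup_printf ("sink_%u", cam_id);
  GstPad *sinkpad  = gst_element_get_static_pad (streammux, pad_name);

  gst_element_set_state (camera_bin, GST_STATE_NULL);
  gst_pad_send_event (sinkpad, gst_event_new_flush_stop (FALSE));
  gst_element_release_request_pad (streammux, sinkpad);
  gst_object_unref (sinkpad);
  gst_bin_remove (GST_BIN (pipeline), camera_bin);
  g_free (pad_name);
}
```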
Okay, you’re right, thank you
Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!