How does the DeepStream SDK process AI inference results (metadata) to be sent according to the ONVIF Analytics Service Specification?

Hello,

How does the DeepStream SDK process AI inference results (metadata) so they can be sent according to the ONVIF Analytics Service Specification?

What documentation should I look at to see how the DeepStream SDK handles that part?

Thank you.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)



Hello,

• Hardware Platform (Jetson / GPU)
Jetson Nano/ Maxwell
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.4.1
• TensorRT Version
7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ?
just questions
• Requirement details
How does the DeepStream SDK transmit AI metadata according to the ONVIF Analytics Service Specification when forwarding inference results?
Is there any document or material I can refer to on this?


Thank you.
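For context while waiting for an official answer: as far as I can tell, DeepStream does not emit ONVIF Analytics XML out of the box; its Gst-nvmsgconv/Gst-nvmsgbroker plugins serialize metadata to a JSON payload for message brokers, so an ONVIF-style output would have to be built from the inference metadata yourself. Below is a minimal, hedged sketch of that mapping. It assumes you have already extracted detections (e.g. from `NvDsObjectMeta` in a pad probe) into plain Python dicts, and the `tt:Frame`/`tt:Object`/`tt:BoundingBox` element names follow my reading of the ONVIF scene description schema; verify them against the actual specification before relying on this.

```python
import xml.etree.ElementTree as ET

# ONVIF common schema namespace (assumption: per the ONVIF scene
# description schema; check the spec for the version you target).
TT = "http://www.onvif.org/ver10/schema"
ET.register_namespace("tt", TT)


def detections_to_onvif_frame(utc_time, detections):
    """Serialize a list of detections into an ONVIF-style tt:Frame.

    Each detection is a dict like:
        {"id": 1, "label": "Human", "bbox": (left, top, right, bottom)}
    The dict layout is illustrative, not a DeepStream structure; in a
    real pipeline you would fill it from NvDsObjectMeta fields.
    """
    frame = ET.Element(f"{{{TT}}}Frame", {"UtcTime": utc_time})
    for det in detections:
        obj = ET.SubElement(frame, f"{{{TT}}}Object",
                            {"ObjectId": str(det["id"])})
        appearance = ET.SubElement(obj, f"{{{TT}}}Appearance")
        shape = ET.SubElement(appearance, f"{{{TT}}}Shape")
        left, top, right, bottom = det["bbox"]
        ET.SubElement(shape, f"{{{TT}}}BoundingBox",
                      {"left": str(left), "top": str(top),
                       "right": str(right), "bottom": str(bottom)})
        cls = ET.SubElement(appearance, f"{{{TT}}}Class")
        type_el = ET.SubElement(cls, f"{{{TT}}}Type")
        type_el.text = det["label"]
    return frame


frame = detections_to_onvif_frame(
    "2021-01-01T00:00:00Z",
    [{"id": 1, "label": "Human", "bbox": (0.1, 0.2, 0.4, 0.9)}],
)
print(ET.tostring(frame, encoding="unicode"))
```

The resulting XML fragment would then be delivered through an ONVIF event/metadata transport (e.g. the metadata stream or a Pull-Point subscription), which DeepStream itself does not provide; that layer would sit alongside the pipeline.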

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Can you share more about the ONVIF Analytics Service Specification? Why does DeepStream need it? What is the benefit?


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.