I am working on a traffic analytics project using DeepStream on a Jetson Nano 2GB. I want to run my code using the deepstream-app (config files), not the Python code that uses GStreamer. But I cannot find a good way to extract all the metadata and then easily pull information out of it, e.g., how many cars or trucks passed down one lane. Can anyone suggest a solution other than additional services like Kafka?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration-file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and the function description.)
Hardware: Jetson Nano 2GB
I just installed DeepStream with the Python bindings. As described above, I want to run my code using the deepstream-app rather than the Python/GStreamer code, and I need a good way to extract the metadata and derive information from it, such as how many cars or trucks passed down one lane, without additional services like Kafka.
Can you refer to /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test1/ for parsing metadata?
Yes, I can certainly use that, but I want to run DeepStream using config files, and there is also no proper documentation on building a Python DeepStream app for custom models or applications. Can you recommend any good sources for learning how to build a DeepStream pipeline in Python?
Can the sample below help?
I have seen these examples, but I cannot fully understand the code, as the documentation is quite poor.
Are you familiar with buffer probes in GStreamer? You can attach a buffer probe to an element after your inference component, and you should be able to access all the metadata that you need.
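For illustration, here is a minimal sketch of the per-lane counting logic such a probe could run once it has pulled each object's class ID and bounding-box center out of the metadata. The class IDs, lane boundaries, and helper names below are hypothetical (check your own model's label file and frame width), not part of the DeepStream API itself:

```python
from collections import Counter

# Hypothetical class IDs — check the label file of your own detector.
PGIE_CLASS_ID_CAR = 0
PGIE_CLASS_ID_TRUCK = 2

# Hypothetical lane boundaries as pixel x-ranges for a 1280-wide frame.
LANES = {"lane_1": (0, 640), "lane_2": (640, 1280)}

def lane_of(x_center):
    """Return the lane whose x-range contains the bbox center, or None."""
    for name, (left, right) in LANES.items():
        if left <= x_center < right:
            return name
    return None

def count_per_lane(detections):
    """detections: iterable of (class_id, x_center) tuples. In a real probe
    you would read these from NvDsObjectMeta.class_id and rect_params."""
    counts = Counter()
    for class_id, x_center in detections:
        lane = lane_of(x_center)
        if lane is None:
            continue
        if class_id == PGIE_CLASS_ID_CAR:
            counts[(lane, "car")] += 1
        elif class_id == PGIE_CLASS_ID_TRUCK:
            counts[(lane, "truck")] += 1
    return counts

# Example: two cars in lane 1, one truck in lane 2.
print(count_per_lane([(0, 100), (0, 500), (2, 900)]))
```

You could keep such a `Counter` as module-level state and update it on every buffer the probe sees, then print or log the totals periodically, with no message broker involved.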
How about you try to run this [python file](https://github.com/NVIDIA-AI-
You should be able to get all the object metadata that you need in this function.
This function uses some Deepstream data structures that are defined here:
NvDsObjectMeta — Deepstream Python ##DeepStream_VERSION## documentation
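To show the iteration pattern without a Jetson at hand, here is an untested stand-in: plain dataclasses mimicking a small subset of the real `NvDsObjectMeta` fields (`class_id`, `obj_label`, `object_id`, `rect_params`). In a real probe you would obtain these by casting list nodes with `pyds.NvDsObjectMeta.cast(l_obj.data)`; everything below is illustrative only:

```python
from collections import Counter
from dataclasses import dataclass

# Stand-ins for a small subset of the real pyds structures.
@dataclass
class RectParams:
    left: float
    top: float
    width: float
    height: float

@dataclass
class ObjectMeta:
    class_id: int       # index into the detector's label file
    obj_label: str      # human-readable label string
    object_id: int      # tracker-assigned ID, if a tracker is in the pipeline
    rect_params: RectParams

def tally_labels(obj_meta_list):
    """Walk one frame's object-meta list and count detections per label,
    mirroring what a pad probe does with frame_meta.obj_meta_list."""
    counts = Counter()
    for obj in obj_meta_list:
        counts[obj.obj_label] += 1
    return counts

frame_objs = [
    ObjectMeta(0, "car", 1, RectParams(100, 200, 80, 40)),
    ObjectMeta(0, "car", 2, RectParams(400, 210, 85, 42)),
    ObjectMeta(2, "truck", 3, RectParams(900, 190, 120, 60)),
]
print(tally_labels(frame_objs))  # Counter({'car': 2, 'truck': 1})
```

Counting by `object_id` instead of raw detections is how you would avoid counting the same tracked vehicle in every frame.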
Yes, I am familiar with the probe functions. They were not easy to understand; I have been working with DeepStream for about six months, and only now does the code make sense. So you should improve the examples or provide detailed API usage docs. The Graph Composer is pretty good, actually. Thanks for the help.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.