I replaced the model in deepstream_python_apps test3 with my YOLOv5 model and it ran successfully. How do I save the detected-object data from the video to a TXT file in real time? In other words, how do I get the real-time detection results?
NVAPI is not the correct section for this query. I have moved this topic to the DeepStream SDK section.
NVAPI Forum Moderator
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and a description of the function.)
Please refer to deepstream-test1. You can get the real-time detection data in a probe callback function such as osd_sink_pad_buffer_probe. Here is the link: deepstream_python_apps/deepstream_test_1.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
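To illustrate the TXT-saving part, here is a minimal sketch of appending one line per detected object to a file each frame. In a real probe you would obtain the frame number, class id, confidence, and bounding box from the pyds metadata (pyds.NvDsFrameMeta / pyds.NvDsObjectMeta, as iterated in osd_sink_pad_buffer_probe of deepstream_test_1.py); here they are plain Python stand-ins so the example is self-contained, and the file path and field order are my own assumptions, not a DeepStream convention.

```python
# Sketch: append per-frame detection records to a TXT file in real time.
# In an actual DeepStream probe, the values below would come from
# pyds.NvDsFrameMeta / pyds.NvDsObjectMeta; here they are plain dicts
# so the logic runs without the SDK installed.

def write_detections(path, frame_number, detections):
    """Append one line per object: frame, class id, confidence, bbox (left top width height)."""
    with open(path, "a") as f:  # "a" = append mode, so the file grows as frames arrive
        for det in detections:
            f.write("{frame} {cls} {conf:.2f} {left:.1f} {top:.1f} {w:.1f} {h:.1f}\n".format(
                frame=frame_number,
                cls=det["class_id"],
                conf=det["confidence"],
                left=det["left"],
                top=det["top"],
                w=det["width"],
                h=det["height"],
            ))

# Example call with made-up values for two objects in frame 0:
write_detections("detections.txt", 0, [
    {"class_id": 2, "confidence": 0.91, "left": 10.0, "top": 20.0, "width": 50.0, "height": 80.0},
    {"class_id": 0, "confidence": 0.77, "left": 100.0, "top": 40.0, "width": 30.0, "height": 60.0},
])
```

Inside the probe, you would call a helper like this once per buffer, passing `frame_meta.frame_num` and the values read from each object's `rect_params`; because the file is opened in append mode, the TXT output is available on disk while the pipeline is still running.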
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.