– Specs:
TensorFlow 1.15.0, Windows 10, CUDA 10, MobileNet v2 custom model (TensorFlow Object Detection API)
JetPack 4.3
– Problem:
I converted the TensorFlow frozen_inference_graph.pb to UFF format. I want to run live detection at 20+ FPS and extract the following information from each detection: the object label (car, person, etc.) and the bounding-box x and y coordinates, and then send this data over the UART protocol to another microprocessor.
How can I extract this information from the live detection?
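For the TensorRT/UFF path, the SSD NMS plugin typically emits a flat array of 7 floats per detection: [image_id, label_id, confidence, xmin, ymin, xmax, ymax], with coordinates normalized to 0–1. Below is a minimal sketch of parsing that output and framing it for UART. The label map, the binary frame layout (0xAA header byte, detection count, then label + four uint16 fields), and the serial port name are all assumptions for illustration, not part of the original model or any fixed protocol:

```python
import struct

# Hypothetical label map for the custom model -- replace with your own.
LABELS = {1: "person", 3: "car"}

def parse_detections(nms_output, conf_threshold=0.5, img_w=640, img_h=480):
    """Turn the flat NMS output (7 floats per detection) into
    (label, x, y, w, h) tuples in pixel coordinates."""
    detections = []
    for i in range(0, len(nms_output), 7):
        _image_id, label_id, conf, xmin, ymin, xmax, ymax = nms_output[i:i + 7]
        if conf < conf_threshold:
            continue
        x = round(xmin * img_w)
        y = round(ymin * img_h)
        w = round((xmax - xmin) * img_w)
        h = round((ymax - ymin) * img_h)
        detections.append((LABELS.get(int(label_id), "unknown"), x, y, w, h))
    return detections

def pack_uart_frame(detections):
    """Pack detections into a simple binary frame (assumed layout):
    0xAA header, count byte, then per detection one label byte plus
    four little-endian uint16 values (x, y, w, h)."""
    name_to_id = {"unknown": 0, "person": 1, "car": 3}
    frame = bytearray([0xAA, len(detections)])
    for label, x, y, w, h in detections:
        frame += struct.pack("<BHHHH", name_to_id[label], x, y, w, h)
    return bytes(frame)

# Sending is then one pyserial call (port name is an assumption;
# on a Jetson the 40-pin header UART is often /dev/ttyTHS1):
#   import serial
#   ser = serial.Serial("/dev/ttyTHS1", 115200)
#   ser.write(pack_uart_frame(dets))
```

The fixed-width binary frame keeps the microprocessor side trivial to parse; a text format (e.g. one CSV line per detection) works just as well if human readability matters more than bandwidth.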
Hi,
Are IoT protocols an option for you?
If yes, please check our Deepstream SDK sample below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_test5.html
Thanks.
How do I extract this information in the first place? Let's hold off on the IoT part for now.
You may find sample code in Python Sample Apps and Bindings Source Details — DeepStream 6.1 Release documentation (nvidia.com) to see how it works.
For DeepStream-related issues, please open a topic at Latest Intelligent Video Analytics/DeepStream SDK topics - NVIDIA Developer Forums.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.