How to deploy a TAO Toolkit DetectNet_v2 model for real-time inference?
I am able to run the default notebook for DetectNet_v2, but I am not sure how to build a real-time inference application (without DeepStream) using the generated models, because all the deployment APIs shown involve reading and writing data from a directory.
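For anyone with the same question: the exported `.engine` file can be driven directly from Python with `tensorrt` + `pycuda`, feeding frames from memory instead of a directory. Below is a minimal sketch, not an official API: the TensorRT calls follow the TRT 8.x Python bindings, the input size 960×544 is just an example (use your model's actual input shape), and `preprocess` uses a crude NumPy resize to stay dependency-free where a real pipeline would use `cv2.resize`.

```python
import numpy as np

def preprocess(frame_bgr, width=960, height=544):
    """Convert a BGR uint8 frame to the planar float batch (1, 3, H, W)
    DetectNet_v2 expects, with pixel values scaled to [0, 1]."""
    h, w = frame_bgr.shape[:2]
    # nearest-neighbour resize via index maps (cv2.resize in practice)
    rows = np.arange(height) * h // height
    cols = np.arange(width) * w // width
    resized = frame_bgr[rows][:, cols]
    # BGR -> RGB, HWC -> CHW, scale to [0, 1]
    chw = resized[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[np.newaxis]

def infer_stream(engine_path, frames):
    """Run frames straight through a serialized TensorRT engine,
    yielding raw output tensors -- no intermediate files.
    Requires tensorrt and pycuda at run time (TRT 8.x style API)."""
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # one host/device buffer pair per binding, allocated once and reused
    host_bufs, dev_bufs = [], []
    for i in range(engine.num_bindings):
        shape = tuple(engine.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host = np.empty(shape, dtype=dtype)
        host_bufs.append(host)
        dev_bufs.append(cuda.mem_alloc(host.nbytes))

    for frame in frames:
        host_bufs[0][...] = preprocess(frame)       # binding 0: input
        cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
        context.execute_v2([int(d) for d in dev_bufs])
        for i in range(1, engine.num_bindings):     # copy outputs back
            cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
        yield [host_bufs[i].copy() for i in range(1, engine.num_bindings)]
```

The output tensors (coverage and bbox grids) still need the postprocessing/NMS step discussed in the linked thread before you get usable detections.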
Thanks! Run PeopleNet with tensorrt - #21 by carlos.alvarez worked for me. However, I am a bit confused about the parameter “box_norm = 35.0” in his code. Can you explain what it means?
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Refer to DetectNet_v2 - NVIDIA Docs
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.