How to deploy a TAO-generated DetectNet_v2 model using the TensorRT runtime?

How to deploy a TAO Toolkit DetectNet_v2 model for real-time inferencing?

I am able to run the default notebook for DetectNet_v2, but I am not sure how to build a real-time inferencing application (without DeepStream) using the generated models, because all the deployment APIs shown involve reading and writing data from a directory.

For DetectNet_v2 inference without DeepStream, the official option is GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton.

Or you can also leverage this forum topic by another user: Run PeopleNet with tensorrt - #21 by carlos.alvarez
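
For illustration, here is a minimal sketch of driving a DetectNet_v2 engine directly with the TensorRT Python runtime and pycuda, along the same lines as the linked topic. It assumes a fixed-shape serialized engine file (the name "detectnet_v2.engine" is a placeholder), the TensorRT 8.x binding-based API, and that binding 0 is the image input; it is not the official sample.

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine exported by tao-converter / tao deploy.
# "detectnet_v2.engine" is a placeholder path.
with open("detectnet_v2.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate host/device buffers for every binding (input image plus the
# coverage and bbox output tensors). Assumes a fixed-shape engine.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def infer(image_chw: np.ndarray) -> list:
    """Run one frame (CHW float32, already resized and normalized) and
    return the raw output tensors; bbox decoding and NMS happen separately.
    Assumes binding 0 is the input."""
    np.copyto(host_bufs[0], image_chw.ravel())
    stream = cuda.Stream()
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    outputs = []
    for h, d in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(h, d, stream)
        outputs.append(h)
    stream.synchronize()
    return outputs
```

For real-time use you would call infer() once per captured frame, reusing the buffers allocated above rather than reallocating per call.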

Thanks, Run PeopleNet with tensorrt - #21 by carlos.alvarez worked for me. However, I am a bit confused about the parameter “box_norm = 35.0” in his code. Can you explain what it means?
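
For reference, as I understand the linked code: box_norm is the fixed constant by which DetectNet_v2 divides its bounding-box regression targets during training (the targets are corner offsets from each grid-cell center), so at inference you multiply the raw outputs back by box_norm to recover pixel coordinates; 35.0 is the value used for DetectNet_v2/PeopleNet in that code and in tao-toolkit-triton-apps. A sketch of the decode step, treating stride=16, offset=0.5, and box_norm=35.0 as model-config values rather than universal constants:

```python
import numpy as np

stride, offset, box_norm = 16, 0.5, 35.0  # values from the linked code

def decode_bboxes(bbox_out: np.ndarray) -> np.ndarray:
    """bbox_out: (4, grid_h, grid_w) raw regression output.
    Returns (grid_h, grid_w, 4) boxes as x1, y1, x2, y2 in pixels."""
    _, grid_h, grid_w = bbox_out.shape
    # Grid-cell centers, pre-divided by box_norm as in the original code.
    cx = (np.arange(grid_w) * stride + offset) / box_norm
    cy = (np.arange(grid_h) * stride + offset) / box_norm
    # Undo the training-time normalization: scale offsets back by box_norm.
    x1 = (cx[None, :] - bbox_out[0]) * box_norm  # left edge
    y1 = (cy[:, None] - bbox_out[1]) * box_norm  # top edge
    x2 = (cx[None, :] + bbox_out[2]) * box_norm  # right edge
    y2 = (cy[:, None] + bbox_out[3]) * box_norm  # bottom edge
    return np.stack([x1, y1, x2, y2], axis=-1)
```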

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Refer to DetectNet_v2 - NVIDIA Docs