Detectnet_v2 Inference without Image Output

When we run “tao-deploy detectnet_v2 inference”, it produces two types of output: annotated images (in the “images_annotated” directory) and bbox label files.
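For context, the label files appear to be KITTI-format text files, one per image, with one detection per line. A typical line looks like this (values illustrative; the trailing number is the confidence score):

```
car 0.00 0 0.00 634.21 174.83 763.40 252.61 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.89
```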

Saving the image files takes a long time compared to the inference itself. I don’t actually need the annotated images, so I would like to skip them and save the time spent writing them. I only need the label files so I can process them in later steps.

Is there a way to achieve this with “detectnet_v2 inference”? If not, which “tao-deploy” alternatives would you recommend? Thanks in advance.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Officially, you can have a look at GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton, configure the etlt model in it, and then modify https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/ae6b5ec41c3a9651957c4dddfc262a43f47e263c/tao_triton/python/postprocessing/detectnet_processor.py#L106 so that it saves labels only.
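As a rough illustration (not the exact code from that file), the label-only change amounts to something like the sketch below: write a KITTI-format label file per image and skip the image annotation step entirely. The `detections` tuple layout here is a hypothetical stand-in for whatever the processor actually yields; adapt the field names accordingly.

```python
# Minimal sketch, assuming detections come in as
# (class_name, confidence, x1, y1, x2, y2) tuples per image.
import os

def save_kitti_labels(image_name, detections, output_dir="labels"):
    """Write one KITTI-format label file per image; no annotated image output."""
    os.makedirs(output_dir, exist_ok=True)
    label_path = os.path.join(output_dir, os.path.splitext(image_name)[0] + ".txt")
    with open(label_path, "w") as f:
        for class_name, confidence, x1, y1, x2, y2 in detections:
            # KITTI fields: class, truncation, occlusion, alpha,
            # bbox (x1 y1 x2 y2), 3D dims/location/rotation (zeroed here), score.
            f.write(
                f"{class_name} 0.00 0 0.00 "
                f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
                f"0.00 0.00 0.00 0.00 0.00 0.00 0.00 {confidence:.3f}\n"
            )
```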

Alternatively, you can run inference in a standalone script. For example, see Run PeopleNet with tensorrt - #21 by carlos.alvarez
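If you go the standalone route, a minimal TensorRT inference skeleton looks roughly like the following. This is a sketch, not the code from the linked thread: the engine path, a fixed-shape (non-dynamic) engine, and the TensorRT 8.x binding API are all assumptions, and DetectNet_v2’s coverage/bbox grid decoding into final boxes is omitted.

```python
# Hedged sketch: load a serialized TensorRT engine and run one batch through it
# (TensorRT 8.x Python API with pycuda). Postprocessing of the raw outputs into
# bboxes is model-specific and not shown.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, batch):
    """Run one preprocessed batch (numpy array) and return raw output arrays."""
    with engine.create_execution_context() as context:
        bindings, dev_bufs, host_outs = [], [], []
        for i in range(engine.num_bindings):
            shape = context.get_binding_shape(i)  # assumes a fixed-shape engine
            dtype = trt.nptype(engine.get_binding_dtype(i))
            size = int(np.prod(shape))
            dev = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
            bindings.append(int(dev))
            dev_bufs.append(dev)
            if engine.binding_is_input(i):
                cuda.memcpy_htod(dev, np.ascontiguousarray(batch.astype(dtype)))
            else:
                host_outs.append((i, np.empty(size, dtype=dtype)))
        context.execute_v2(bindings)
        outputs = []
        for idx, host in host_outs:
            cuda.memcpy_dtoh(host, dev_bufs[idx])
            outputs.append(host)
        return outputs
```

From there you would decode the outputs into boxes and write label files only, e.g. with a helper like the `save_kitti_labels` sketch above.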
