Using the NVIDIA TAO ecosystem to infer on a single image

• Hardware: Turing
• Network Type: LPRNet
• TAO Toolkit Version: 3.22.05

I have succeeded in creating my own version of LPRNet using transfer learning. I have also managed to export the network to a TensorRT engine using tao-converter.

I need to infer efficiently on single images (execution time matters). Ideally, I want an application to which I pass a variable containing an image, or a path to an image, and get the inference result back.

Now, I understand that it is possible to use this .engine file to build a standalone TensorRT application in Python, but I wonder what the proper way to do it is within the TAO ecosystem. As far as I understand:

  1. Running “!tao lprnet evaluate” on a file sitting on disk is the wrong way, as the Docker container is launched from scratch on every call and the initialization time is terrible.
  2. DeepStream is designed for video streams rather than single images.

Can you please point me in the right direction?
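For reference, here is roughly what I mean by a standalone application. This is only a sketch under assumptions I have not verified: TensorRT 8.x with pycuda, a single input binding (index 0) and a single output binding (index 1), and the default LPRNet input layout of 3×48×96.

```python
import numpy as np

# Assumed LPRNet input layout (C, H, W); adjust to your exported model.
INPUT_SHAPE = (3, 48, 96)

def preprocess(image):
    """Convert an H x W x 3 uint8 image to a 1 x C x H x W float32 batch."""
    chw = image.transpose(2, 0, 1).astype(np.float32) / 255.0
    return np.ascontiguousarray(chw[None, ...])

def infer(engine_path, image):
    """Run one image through a serialized TensorRT engine (sketch)."""
    # Imported lazily so the sketch can be read without a GPU environment.
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 -- initializes a CUDA context
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # Assumes binding 0 is the input and binding 1 the only output.
    h_in = preprocess(image)
    h_out = np.empty(tuple(engine.get_binding_shape(1)),
                     dtype=trt.nptype(engine.get_binding_dtype(1)))
    d_in = cuda.mem_alloc(h_in.nbytes)
    d_out = cuda.mem_alloc(h_out.nbytes)

    cuda.memcpy_htod(d_in, h_in)
    context.execute_v2([int(d_in), int(d_out)])
    cuda.memcpy_dtoh(h_out, d_out)
    return h_out
```

In a real service I would of course deserialize the engine once at startup and reuse the execution context across requests; doing it per call, as above, would reintroduce the initialization cost I am trying to avoid.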

You can refer to GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton
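With the models served through Triton, a single-image request from Python could look roughly like the sketch below. The server URL, model name, and tensor names here are placeholders, not values taken from the repo; check them against your deployment's model metadata.

```python
import numpy as np

def classify_plate(image, url="localhost:8000", model="lprnet_tao"):
    """Send one image to a Triton server over HTTP (sketch).

    `image` is an H x W x 3 uint8 array. Requires `tritonclient[http]`.
    """
    # Imported lazily so the sketch can be read without the package installed.
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url=url)

    # Assumed preprocessing and tensor names -- verify with
    # client.get_model_metadata(model) for your deployment.
    batch = image.transpose(2, 0, 1).astype(np.float32)[None, ...] / 255.0
    inp = httpclient.InferInput("input_tensor", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    out = httpclient.InferRequestedOutput("output_tensor")

    result = client.infer(model, inputs=[inp], outputs=[out])
    return result.as_numpy("output_tensor")
```

Since the server holds the deserialized engine in memory, each request pays only the network round-trip plus inference time, which avoids the per-call initialization cost you described.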

Thank you, it seems that’s exactly what I was looking for! I will investigate it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.