Running inference with a TAO-generated engine in Python without DeepStream

Please provide the following information when requesting support.

• Hardware: Jetson TX1
• JetPack: 4.4
• TensorRT: 7.x
• Network type: YOLOv4 (object detection)
• TLT version: 3.21.08
• Training spec file: (if you have one, please share here)
• Question: I want to run inference with a TAO-generated model in Python without using DeepStream. Is that possible?

Yes, it is possible. You can deploy the .etlt model directly with the Triton app: GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton
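For a quick check from Python once the model is served, a Triton HTTP client call looks roughly like the sketch below. This is not taken from the repo: the model name "yolov4_tao", the input tensor name "Input", and the 3x384x1248 input shape are assumptions; check the config.pbtxt in your Triton model repository for the actual values.

```python
# Minimal Triton HTTP client sketch (model/tensor names and shape are assumptions).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy preprocessed batch; real preprocessing is shown in tao_client.py.
batch = np.zeros((1, 3, 384, 1248), dtype=np.float32)

infer_input = httpclient.InferInput("Input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name="yolov4_tao", inputs=[infer_input])

# Output tensor names depend on the exported model; list whatever came back.
for out in response.get_response()["outputs"]:
    print(out["name"], response.as_numpy(out["name"]).shape)
```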

You can also refer to the preprocessing and postprocessing code in
https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/entrypoints/tao_client.py and
https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/postprocessing/yolov3_postprocessor.py
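If you want to skip Triton as well and run the TensorRT engine straight from Python, a rough sketch with the TensorRT Python API and pycuda is below. The engine path, static input shape, and binding order are assumptions; the real preprocessing and YOLO box decoding should follow the tao_client.py and yolov3_postprocessor.py files linked above.

```python
# Sketch: run a TAO-exported TensorRT engine directly in Python (no DeepStream/Triton).
# Engine path, binding order, and static shapes are assumptions.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates and manages a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("yolov4.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate one host/device buffer per binding (inputs and outputs).
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host_buf = cuda.pagelocked_empty(size, dtype)
    dev_buf = cuda.mem_alloc(host_buf.nbytes)
    host_bufs.append(host_buf)
    dev_bufs.append(dev_buf)
    bindings.append(int(dev_buf))

# Preprocess the image the same way the TAO spec does (resize, NCHW layout,
# normalization) and copy it into the input binding (assumed to be binding 0).
image = np.zeros(host_bufs[0].shape, dtype=host_bufs[0].dtype)  # placeholder
np.copyto(host_bufs[0], image)

cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
# execute_v2 assumes an explicit-batch, static-shape engine; for an
# implicit-batch engine use context.execute(batch_size, bindings) instead.
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])

# host_bufs[1:] now hold the raw network outputs; decode them with the
# YOLO postprocessing referenced above (yolov3_postprocessor.py).
```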

You can also search the forum and refer to related topics.
For example,
