Inference of a TAO-generated engine in Python without DeepStream

Please provide the following information when requesting support.

• Hardware Jetson TX1
• Jetpack 4.4
• TensorRT 7.X
• Network Type (Yolo_v4) Object Detection
• TLT Version (3.21.08)
• Training spec file (if you have one, please share it here)
• Question: I want to run inference with a TAO-generated model in Python without using DeepStream. Is that possible?

It is possible. You can directly deploy the .etlt model with the Triton app: GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton

You can also refer to the preprocessing and postprocessing code in
tao-toolkit-triton-apps/tao_client.py at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub and
tao-toolkit-triton-apps/yolov3_postprocessor.py at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub
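To give a sense of what that postprocessing involves, here is a minimal sketch of YOLO-style postprocessing (confidence filtering followed by greedy non-maximum suppression) in plain Python. The box format, threshold values, and function names are illustrative assumptions for this sketch, not the exact logic of `yolov3_postprocessor.py`; consult the linked script for the real decoding of the network's output tensors.

```python
# Sketch of YOLO-style postprocessing: confidence filtering + greedy NMS.
# Assumed (hypothetical) box format: (x1, y1, x2, y2, score) tuples.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def postprocess(detections, conf_threshold=0.3, iou_threshold=0.5):
    """detections: list of (x1, y1, x2, y2, score) candidates from the network.
    Returns the boxes that survive confidence filtering and NMS."""
    boxes = [d for d in detections if d[4] >= conf_threshold]
    boxes.sort(key=lambda d: d[4], reverse=True)  # highest score first
    kept = []
    for box in boxes:
        # Keep the box only if it does not overlap a higher-scoring kept box.
        if all(iou(box[:4], k[:4]) < iou_threshold for k in kept):
            kept.append(box)
    return kept
```

In the real pipeline the raw engine output must first be decoded into such boxes (grid offsets, anchors, sigmoid activations), which is exactly what the linked postprocessor handles before a step like the one above.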

In addition, you can search the forum for related topics on running TAO engines in standalone Python.
