How can I run a TAO yolov4_tiny model on the Triton server?
Please refer to the yolov3 model in the Triton server sample app.
The processing is similar.
Can you share the document link?
I have loaded the yolov4_tiny model on Triton and the model is up and running; now I need to run inference.
For example, I am able to run classification inference using “image_client.py”. Is there anything similar for object detection?
Refer to the yolov3 part in GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton.
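For anyone following along: before sending an image to a TAO YOLO model on Triton, it has to be resized and converted into the NCHW float32 layout the model expects. Below is a minimal NumPy-only preprocessing sketch, assuming an input shape of 3x384x1248 (the dimensions and normalization are assumptions; verify them against your exported model's spec in config.pbtxt). The actual request would then be sent with a Triton client, as the yolov3 sample in tao-toolkit-triton-apps does.

```python
import numpy as np

def preprocess_frame(frame, target_hw=(384, 1248)):
    """Convert an HWC uint8 frame into an NCHW float32 batch tensor.

    The target height/width and the [0, 1] scaling are assumptions;
    check your model's config.pbtxt for the real input spec.
    """
    h, w = target_hw
    src_h, src_w = frame.shape[:2]
    # Nearest-neighbour resize with pure NumPy (no cv2 dependency).
    rows = np.arange(h) * src_h // h
    cols = np.arange(w) * src_w // w
    resized = frame[rows][:, cols]
    # HWC -> CHW, scale to [0, 1], then add a batch dimension.
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[np.newaxis, ...]

# Example: a dummy 720p frame becomes a (1, 3, 384, 1248) tensor.
dummy = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
batch = preprocess_frame(dummy)
```

The resulting array can be wrapped in a Triton `InferInput` (named per your model's input tensor) and sent with the gRPC or HTTP client, following the detection post-processing in the yolov3 sample.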