YOLOv4-tiny inferencing on Triton


Is it possible to run YOLOv4-tiny models on Triton Inference Server?

Currently, you can refer to the existing YOLOv3 example.
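Since the thread does not specify a backend, here is a minimal sketch of what a Triton model configuration for a YOLOv4-tiny model could look like, assuming the model has been exported as a TensorRT engine with a 416x416 input. The model name, tensor names, and output dims are assumptions and must be matched to your actual exported engine:

```
name: "yolov4_tiny"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"          # assumed tensor name; check your exported engine
    data_type: TYPE_FP32
    dims: [ 3, 416, 416 ]
  }
]
output [
  {
    name: "output"         # assumed tensor name and shape
    data_type: TYPE_FP32
    dims: [ -1, -1 ]       # depends on the export (e.g. raw grids vs. decoded boxes)
  }
]
```

This `config.pbtxt` would sit next to the engine file in the Triton model repository, e.g. `models/yolov4_tiny/config.pbtxt` and `models/yolov4_tiny/1/model.plan`.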

Does that mean I can run YOLOv4-tiny models?

Yes. The preprocessing and postprocessing are the same as for YOLOv3.
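To illustrate the shared preprocessing, here is a minimal sketch of the usual YOLOv3/YOLOv4 letterbox pipeline: resize keeping aspect ratio, pad to a square with gray, scale to [0, 1], and rearrange to NCHW for the server. The 416x416 input size and the pure-NumPy nearest-neighbor resize are assumptions for a self-contained example; in practice you would use `cv2.resize` and the input size of your exported model:

```python
import numpy as np

def letterbox(image, size=416):
    """Resize an HWC uint8 image to size x size, preserving aspect
    ratio and padding with gray (128), as YOLOv3/v4 typically expects."""
    h, w = image.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resize in pure NumPy (cv2.resize is the usual choice).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]
    canvas = np.full((size, size, 3), 128, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

def preprocess(image, size=416):
    """HWC uint8 -> NCHW float32 in [0, 1], the tensor layout a Triton
    client would send for a YOLO-family model."""
    x = letterbox(image, size).astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))    # HWC -> CHW
    return np.expand_dims(x, axis=0)  # add batch dimension
```

The resulting array is what you would pass as the input tensor via the `tritonclient` HTTP or gRPC client; postprocessing (decoding grid outputs and applying NMS) likewise follows the YOLOv3 example.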
