Classification accuracy dropped significantly with Triton server

  1. Could you use the latest docker image to run TAO inference again and check the result?
    $ docker run --runtime=nvidia -it nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5 /bin/bash
    Then, inside the container:
    # classification_tf1 inference xxx

More info can be found in Image Classification (TF1) - NVIDIA Docs

You can also run standalone inference; please refer to Inferring resnet18 classification etlt model with python - #10 by jazeel.jk and Inferring resnet18 classification etlt model with python - #41 by Morganh
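For standalone inference, an accuracy drop is very often a preprocessing mismatch between training and deployment. Below is a minimal, dependency-free sketch of the input preparation step, assuming caffe-style preprocessing (BGR channel order, per-channel mean subtraction, no extra scaling, NCHW layout), which is common for TF1 classification models. Verify the resize size, channel order, and mean values against the preprocessing section of your own training spec before using this.

```python
import numpy as np

def preprocess(image_bgr: np.ndarray,
               size=(224, 224),
               mean=(103.939, 116.779, 123.68)) -> np.ndarray:
    """Turn one BGR uint8 image (H, W, 3) into a float32 NCHW batch.

    Assumption: caffe-mode preprocessing (mean subtraction, no scale).
    Check your training spec; a mismatch here silently wrecks accuracy.
    """
    h, w = size
    # Nearest-neighbour resize via plain numpy indexing to keep the
    # sketch dependency-free; use cv2.resize in real code.
    ys = (np.arange(h) * image_bgr.shape[0] / h).astype(int)
    xs = (np.arange(w) * image_bgr.shape[1] / w).astype(int)
    resized = image_bgr[ys][:, xs].astype(np.float32)
    resized -= np.asarray(mean, dtype=np.float32)  # per-channel mean
    chw = resized.transpose(2, 0, 1)               # HWC -> CHW
    return chw[np.newaxis]                         # add batch dimension

# Usage with a dummy image
dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape, batch.dtype)  # (1, 3, 224, 224) float32
```

Feed the resulting array to your engine (or send it as the Triton request input); the exact same preprocessing must be used on both paths when comparing accuracy.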

  2. For Triton inference, please refer to Tao-converted .plan model running in triton-server turned to bad accurate - #47 by Morganh
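When serving the converted .plan model, also double-check the model configuration: wrong tensor names, dims, or layout in config.pbtxt can degrade results without an obvious error. A hedged sketch of what such a config might look like is below; the tensor names "input_1" and "predictions/Softmax" and the dims are assumptions for illustration and must match your exported model (inspect the engine with polygraphy or trtexec to confirm).

```
name: "classification_tf1"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "input_1"              # assumed name; confirm against your engine
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 224, 224 ]        # must match the training input size
  }
]
output [
  {
    name: "predictions/Softmax"  # assumed name; confirm against your engine
    data_type: TYPE_FP32
    dims: [ 2 ]                  # number of classes; adjust to your model
  }
]
```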