Generating a Triton Inference Server configuration for a TensorRT-exported TAO classification (ResNet) model

For classification inference in Triton, please refer to the classification section of NVIDIA-AI-IOT/tao-toolkit-triton-apps (sample app code for deploying TAO Toolkit trained models to Triton):
https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/docs/configuring_the_client.md#classification
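As a rough illustration, a minimal config.pbtxt for a TAO classification ResNet engine could look like the sketch below. The model name, tensor names (input_1, predictions/Softmax), dims, and class count are assumptions that must match your own exported model, so verify them against your engine before using this:

```
name: "tao_classification"        # hypothetical model name; must match the model directory
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "input_1"               # assumed input tensor name; check your exported model
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 224, 224 ]         # C, H, W of the exported ResNet; adjust to your model
  }
]
output [
  {
    name: "predictions/Softmax"   # assumed output tensor name; check your exported model
    data_type: TYPE_FP32
    dims: [ 10, 1, 1 ]            # number of classes; adjust to your model
    label_filename: "labels.txt"  # optional: one class label per line
  }
]
```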

Refer to TAO unet input and output tensor shapes and order - #3 by Morganh
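If you want to verify the tensor names, shapes, and order of your own engine, a minimal sketch along these lines (assuming the TensorRT 8.5+ Python API and a local model.plan; older TensorRT versions use the binding-based API instead) prints each I/O tensor:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the serialized engine (model.plan) so its I/O tensors can be listed.
with open("model.plan", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    mode = engine.get_tensor_mode(name)   # TensorIOMode.INPUT or TensorIOMode.OUTPUT
    shape = engine.get_tensor_shape(name)
    dtype = engine.get_tensor_dtype(name)
    print(f"{mode.name:6s} {name}: shape={tuple(shape)}, dtype={dtype}")
```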

Yes, you can rename it.

It is needed to generate the TensorRT engine (i.e., model.plan).
See https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/scripts/download_and_convert.sh#L30
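The linked script runs tao-converter to build model.plan from the exported .etlt file. As a sketch only, the invocation looks roughly like the following; the encoding key, input dims, output node name, and paths are placeholders you must replace with your own model's values:

```
tao-converter resnet18_classification.etlt \
  -k $YOUR_ENCODING_KEY \
  -d 3,224,224 \
  -o predictions/Softmax \
  -m 16 \
  -t fp16 \
  -e model_repository/tao_classification/1/model.plan
```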