I note that for Image Classification in TLT, there is an “enable_center_crop” parameter in the eval_config (NVIDIA TAO Documentation). The evaluation accuracy I get differs depending on whether enable_center_crop is set to True or False, so I assume some “center cropping” is applied to the images before evaluation. My question is: when deploying my image classification model in Triton (after exporting it and converting it to a TensorRT engine file), how should I pre-process my images to reproduce this “center cropping”?
• Hardware (T4/V100/Xavier/Nano/etc): -
• Network Type: Classification
• TLT Version: 3.0
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.): -
Yes, I have deployed my model on Triton and it works. However, I need to know how to properly pre-process my images before sending them to Triton for inference, so that I get the same results as “classification evaluate”, which applies its own pre-processing.
So my question is: how should I pre-process my images the same way “classification evaluate” does? More specifically, what change to the pre-processing do I need to make to match “enable_center_crop”?
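For reference, this is the kind of client-side pre-processing I have in mind. I am not certain of the exact resize ratio or crop size that “classification evaluate” uses internally, so the function name, the 256→224 sizes, and the crop placement below are all assumptions on my part — just a minimal NumPy sketch of a symmetric center crop:

```python
import numpy as np

def center_crop(image, crop_h, crop_w):
    """Return the central crop_h x crop_w region of an HWC image array.

    Assumes the image is at least crop_h x crop_w; the crop is centered
    by splitting the excess rows/columns evenly on both sides.
    """
    h, w = image.shape[:2]
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return image[top:top + crop_h, left:left + crop_w]

# Hypothetical example: a 256x256 image cropped to a 224x224 model input.
# (Whether the image should first be resized to 256, and to what exact
# size, is exactly what I am asking about.)
img = np.zeros((256, 256, 3), dtype=np.uint8)
crop = center_crop(img, 224, 224)
# crop has shape (224, 224, 3), taken starting at row/col (256-224)//2 = 16
```

If someone can confirm the actual resize-then-crop ratio that the eval pipeline uses, I can adapt this accordingly before sending the tensor to Triton.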