"enable_center_crop" in eval_config for Image Classification and how to pre-process image to also enable center crop for inference in Triton?

Hello,

I note that for Image Classification in TLT there is an “enable_center_crop” parameter in the eval_config (NVIDIA TAO Documentation). The evaluation accuracy I get with enable_center_crop set to False is different from the accuracy with it set to True, so I assume some “center cropping” is applied to the images before they are used for evaluation. My question is: when deploying my image classification model in Triton (after exporting it and converting it to a TensorRT engine file), how should I pre-process my images to also “enable center cropping”?

• Hardware (T4/V100/Xavier/Nano/etc): -
• Network Type: Classification
• TLT Version: 3.0
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.): -

Firstly, please deploy your model via GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton, to check whether it works.
For “enable_center_crop”, there should be some modification in pre-processing.

Yes, I have deployed my model on Triton and it works. However, I need to know how to properly pre-process my images before sending them to Triton for inference, so that I get the same results as the way “classification evaluate” pre-processes the images.

So my question is: how should I pre-process my images in the same way that “classification evaluate” does? More specifically, what pre-processing modification do I have to make for “enable_center_crop”?

For “enable_center_crop”, we first resize the image to (target_width + CROP_PADDING, target_height + CROP_PADDING), keeping the aspect ratio, and then center-crop it to the target size.

CROP_PADDING is 32 pixels.

For example, if the original image has a 640x480 resolution, i.e.,
ori_img: 640x480
target: 224x224

Keeping the aspect ratio, ori_img is resized to 341x256 (the shorter side goes to 224 + 32 = 256).

The left_corner = int(round(341/2)) - int(round(224/2)) = 170 - 112 = 58
The top_corner = int(round(256/2)) - int(round(224/2)) = 128 - 112 = 16

Then crop this 341x256 image to 224x224.
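
If it helps, below is a minimal sketch of that pre-processing in Python with PIL. The helper name resize_and_center_crop is my own, and it assumes the resize scales the image so that the padded target (target + CROP_PADDING) just fits while keeping the aspect ratio; under that assumption it reproduces the 640x480 -> 341x256 -> (58, 16, 282, 240) numbers above.

from PIL import Image

CROP_PADDING = 32  # padding used by enable_center_crop, per the explanation above

def resize_and_center_crop(image, target_w=224, target_h=224):
    """Resize so the padded target fits (keeping aspect ratio), then center-crop."""
    scale = max((target_w + CROP_PADDING) / image.width,
                (target_h + CROP_PADDING) / image.height)
    new_w = int(round(image.width * scale))
    new_h = int(round(image.height * scale))
    # Image.ANTIALIAS is the LANCZOS filter; use Image.LANCZOS on newer Pillow.
    image = image.resize((new_w, new_h), Image.ANTIALIAS)
    left = new_w // 2 - target_w // 2
    top = new_h // 2 - target_h // 2
    return image.crop((left, top, left + target_w, top + target_h))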

You can modify

def as_numpy(self, image):
    """Return a numpy array."""
    image = image.resize((self.w, self.h), Image.ANTIALIAS)

to

def as_numpy(self, image):
    """Return a numpy array."""
    # Resize the 640x480 input to 341x256 (keeping the aspect ratio),
    # then center-crop the middle 224x224 region.
    image = image.resize((341, 256), Image.ANTIALIAS)
    image = image.crop((58, 16, 282, 240))
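
Note that 341x256 and the crop box (58, 16, 282, 240) correspond to the 640x480 example above. If your input images have a different resolution, compute the resized size and the crop corners in the same way (for instance with a helper like the sketch earlier in this thread), so that the final crop is still self.w x self.h.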
