Inference on images of different size than training

If I train a model on images of size 2000x1000 and then use this model to infer on images of size 1000x500, would TAO resize the images from 1000x500 to 2000x1000 during inference, or would it simply pad them with zeros?

I am asking in the context of FasterRCNN, but I assume the same would apply to other networks as well.

Please refer to https://github.com/NVIDIA/tao_tensorflow1_backend/blob/c7a3926ddddf3911842e057620bceb45bb5303cc/nvidia_tao_tf1/cv/faster_rcnn/utils/utils.py#L68 — it resizes the image to the training resolution and normalizes it; it does not zero-pad.
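To illustrate the behavior, here is a minimal sketch of that kind of preprocessing: the inference image is resized to the training dimensions (stretched, not zero-padded) and then mean-normalized. This is not the TAO code itself — the `preprocess` function, the nearest-neighbor resize, and the mean/scale values are all illustrative assumptions; see the linked `utils.py` for the actual implementation.

```python
import numpy as np

def preprocess(image, target_h=1000, target_w=2000,
               mean=(103.939, 116.779, 123.68), scale=1.0):
    """Illustrative stand-in for TAO-style preprocessing: resize to the
    training resolution (nearest-neighbor here, for simplicity), subtract
    a per-channel mean, and scale. Note the image is resized, not padded.
    mean/scale are example values, not TAO's exact configuration."""
    h, w = image.shape[:2]
    # Nearest-neighbor index maps from target grid back to source pixels
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    resized = image[rows][:, cols].astype(np.float32)
    normalized = (resized - np.array(mean, dtype=np.float32)) * scale
    return np.transpose(normalized, (2, 0, 1))  # HWC -> CHW for the network

# A 1000x500 input is stretched to the 2000x1000 training resolution.
small = np.zeros((500, 1000, 3), dtype=np.uint8)
blob = preprocess(small)
print(blob.shape)  # (3, 1000, 2000)
```

So a 1000x500 image is simply stretched to 2000x1000 before being fed to the network; the aspect-ratio change is the same at train and inference time as long as your inputs share the training aspect ratio.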

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.