When will INT8 mode be supported for DINO?

Continuing the discussion from DINO-FAN_base INT8 model is not faster than FP16 model:

Hello, we planned to use the DINO model instead of the YOLOR model, but the FP16 inference time is high. We planned to convert the model to INT8 mode, but found that it is not available for DINO. Will INT8 mode for DINO be supported in the future, and if not, what do you suggest to improve the inference speed?

Please reply.
Thank you.

C. Meenambika

I will sync internally on the feature request for INT8. To improve the inference speed, you can select a smaller backbone, for example resnet_50, fan_tiny, or gcvit_xxtiny.

More can be found in DINO - NVIDIA Docs and Overview - NVIDIA Docs
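For reference, the backbone is selected in the `model` section of the DINO training spec. The snippet below is a minimal sketch assuming the spec layout described in the TAO DINO documentation; the exact keys and backbone strings should be verified against your TAO version.

```yaml
model:
  # Smaller backbones trade some accuracy for faster inference.
  # Assumed option strings (verify against your TAO release):
  # resnet_50, fan_tiny, gcvit_xxtiny.
  backbone: fan_tiny
```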


Hi, I have a few doubts. You have stated that INT8 is not supported for DINO, so what does the INT8 entry on the model card here, DINO | NVIDIA NGC, refer to? Any assistance regarding INT8 mode is highly appreciated.

Thank you

C. Meenambika

Good catch. That is a typo in the model card. Currently, INT8 is not supported for DINO. The source code is at tao_pytorch_backend/nvidia_tao_pytorch/cv/dino at 99e0a38a0d3ac00997c41c7e6ea6f02c6586bf4f · NVIDIA/tao_pytorch_backend · GitHub.
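For context, even at the TensorRT level INT8 is not just a build flag. The sketch below uses the TensorRT Python API (it is not TAO code, only an illustration) to show why: the INT8 flag must be paired with calibration data or a quantization-aware-trained model.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# INT8 is only worth attempting on GPUs with fast INT8 support.
if builder.platform_has_fast_int8:
    config.set_flag(trt.BuilderFlag.INT8)
    # The flag alone is not sufficient: TensorRT also needs per-tensor
    # scales, supplied either by an IInt8Calibrator fed with representative
    # images or by a model trained with Q/DQ (quantization) nodes. It is
    # that calibration/QAT path that TAO does not currently provide for DINO.
```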

Then, what was the precision mode of the model used in this speed comparison?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

It is FP16. Please refer to Overview - NVIDIA Docs.
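For completeness, here is a minimal sketch of how an FP16 TensorRT engine is typically built from an exported ONNX model with the TensorRT Python API. The file names are placeholders, and a model exported with dynamic input shapes would additionally need an optimization profile.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# "dino_model.onnx" is a placeholder for the exported DINO model.
with open("dino_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels

serialized = builder.build_serialized_network(network, config)
with open("dino_fp16.engine", "wb") as f:
    f.write(serialized)
```

The same build can also be done from the command line with `trtexec --onnx=dino_model.onnx --fp16 --saveEngine=dino_fp16.engine`.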

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.