Please provide the following information when requesting support.
• Hardware (T4)
• Network Type (DINO and YOLO)
• How to reproduce the issue?
I am trying to get a small model that can be effectively deployed on Jetson devices, like the YOLO models, but with accuracy close to that of bigger models like DINO.
There is a distillation approach now available for DINO, as shown in the tutorial below. I also saw this code base:
NVIDIA general distillation modules:
NVIDIA dino distillation models
I want to do distillation training using a DINO model as the teacher and a YOLO model as the student, along the lines of the sketch below. Is this currently possible in the TAO pipeline? If not, how could it be implemented inside the TAO repositories? Or is there another way, using external YOLO training code together with the DINO training pipeline in TAO, to achieve this?
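To make the ask concrete, here is a minimal, self-contained sketch of the kind of cross-architecture feature distillation I have in mind. Everything here is illustrative: the toy modules stand in for the real DINO teacher and YOLO student, and none of this is actual TAO code.

```python
# Illustrative sketch of feature distillation between two different
# backbones. The toy conv nets below are stand-ins for a frozen DINO
# teacher and a YOLO-style student so the script runs end to end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTeacher(nn.Module):          # stand-in for a frozen DINO backbone
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(3, 256, 3, stride=2, padding=1)
    def forward(self, x):
        return self.features(x)

class ToyStudent(nn.Module):          # stand-in for a YOLO-style backbone
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(3, 64, 3, stride=2, padding=1)
        # 1x1 projection so student features match the teacher's channels
        self.proj = nn.Conv2d(64, 256, 1)
    def forward(self, x):
        f = self.features(x)
        return f, self.proj(f)

teacher, student = ToyTeacher().eval(), ToyStudent()
for p in teacher.parameters():
    p.requires_grad_(False)           # teacher stays frozen during distillation

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
images = torch.randn(2, 3, 224, 224)  # stand-in for a real data loader batch

with torch.no_grad():
    t_feat = teacher(images)
# s_feat would feed the student's detection head / detection loss in a real run.
s_feat, s_proj = student(images)
# Match spatial size, then penalize the L2 gap between the feature maps.
s_proj = F.interpolate(s_proj, size=t_feat.shape[-2:], mode="bilinear",
                       align_corners=False)
distill_loss = F.mse_loss(s_proj, t_feat)
# In a real run this term would be added to the student's usual detection loss.
distill_loss.backward()
opt.step()
```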
Currently in DINO, you can use DINO + ResNet-50 as the student. Please try to follow the notebook to run distillation. I am afraid this student may not reach the fps you expect compared to YOLOv4.
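For reference, the distillation part of the experiment spec has roughly the following shape. This is a sketch recalled from the TAO 5.x DINO distillation notebook, written as a Python dict that can be dumped to the YAML file the notebook consumes; treat every field name and path as an assumption and verify them against your TAO version.

```python
# Rough shape of the "distill" section of the DINO experiment spec.
# Field names are recalled from the TAO 5.x DINO distillation notebook
# and paths are hypothetical -- verify both against your TAO version.
import yaml

distill_spec = {
    "distill": {
        # Path to the trained DINO teacher checkpoint (hypothetical path).
        "pretrained_teacher_model_path": "/workspace/dino_teacher.pth",
        "teacher": {
            "backbone": "fan_small",   # whatever backbone the teacher used
        },
        # Bind teacher feature maps to student feature maps and penalize
        # their distance; module names and criterion are assumptions.
        "bindings": [
            {
                "teacher_module_name": "model.backbone",
                "student_module_name": "model.backbone",
                "criterion": "L2",
                "weight": 1.0,
            }
        ],
    },
    "model": {
        "backbone": "resnet_50",       # the supported student today
    },
}

with open("distill.yaml", "w") as f:
    yaml.safe_dump(distill_spec, f)
```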
The performance table can be found in Overview - NVIDIA Docs.
More info on YOLOv4 can be found in https://arxiv.org/pdf/2004.10934
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks