I’ve been trying the official TAO sample code, which uses TAO 4.0, on Google Colab, as I still have no access to a suitable machine at the moment.
I’ve also seen the introduction and video clips about TAO 5.0, and I’d like to have a few things clarified.
AI-assisted data annotation
I saw the following video showing this feature.
A few questions I’m wondering about: 1. Do users have to draw the bounding boxes themselves beforehand, or can the bounding boxes be AUTOMATICALLY generated during the process?
I was expecting the input for AI-assisted data annotation to be raw data, i.e., images WITHOUT any annotations. In that case, the images would first be fed into some object detection model such as YOLO to generate the bounding boxes, and then the workflow shown in the video would follow.
If my expectation is incorrect, does that mean users first have to generate the bounding boxes for each image, either by drawing them with another tool such as labelme or by running inference with an object detection model and saving the results, and then convert them into COCO format so that they can be used by the AI-assisted annotation tool? (A rough sketch of the conversion I have in mind is included after question 2 below.)
2. Is it possible to try TAO 5.0 on Google Colab at the moment?
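To make question 1 concrete, here is a minimal, hypothetical sketch of the kind of conversion I have in mind: taking bounding boxes produced by a separate detector (e.g., YOLO) and writing them out as COCO-style JSON that an annotation tool could load. The function name, box format, and category mapping are my own assumptions for illustration, not anything taken from TAO:

```python
# Hypothetical sketch (not TAO code): convert raw detector output into a COCO-style
# annotation file. Box format, category ids, and file layout are assumptions.
import json
from PIL import Image

def detections_to_coco(image_paths, detections, categories, out_path):
    """detections: {image_path: [(x1, y1, x2, y2, category_id), ...]} in pixels."""
    coco = {"images": [], "annotations": [], "categories": []}
    for cat_id, name in categories.items():
        coco["categories"].append({"id": cat_id, "name": name})

    ann_id = 1
    for img_id, path in enumerate(image_paths, start=1):
        width, height = Image.open(path).size
        coco["images"].append(
            {"id": img_id, "file_name": path, "width": width, "height": height}
        )
        for x1, y1, x2, y2, cat_id in detections.get(path, []):
            # COCO boxes are [x, y, width, height] in absolute pixels.
            coco["annotations"].append(
                {
                    "id": ann_id,
                    "image_id": img_id,
                    "category_id": cat_id,
                    "bbox": [x1, y1, x2 - x1, y2 - y1],
                    "area": (x2 - x1) * (y2 - y1),
                    "iscrowd": 0,
                }
            )
            ann_id += 1

    with open(out_path, "w") as f:
        json.dump(coco, f)

# Example usage with made-up values:
# detections_to_coco(
#     ["img_0001.jpg"],
#     {"img_0001.jpg": [(10, 20, 110, 220, 1)]},
#     {1: "person"},
#     "pre_annotations_coco.json",
# )
```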
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Hi,
After checking internally: for the wheel, you need to stick with 4.0.
Additionally, for Colab, only a few changes have been made, such as bug fixes and switching the object detection data from KITTI to a synthetic dataset.
The newer TAO 5.0 models have not been added.
In short, there are no major changes for Colab.
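For reference, pinning the wheel to the 4.0 series in a Colab cell could look like the sketch below. This assumes the wheel in question is the nvidia-tao launcher package on PyPI, and the exact 4.0.x version to pin is an illustrative assumption; please check which versions are actually available.

```
# Minimal Colab cell: keep the TAO wheel on the 4.0 series (package name and
# version pin are assumptions for illustration, not confirmed in this thread).
!pip install "nvidia-tao==4.0.*"
!tao --help  # sanity check that the 4.0 launcher is installed
```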