TAO Toolkit with Yolov4-Tiny and custom pretrained model

Actually, I want to use my custom pretrained model for object detection. In that case, do I need to train my model through the TAO Toolkit first and then use it for transfer learning? (Also, my model will be YOLOv4-tiny, and the table in this link is not clear to me: does it mean that if I want to use YOLOv4-tiny, I need to use Darknet 19/53?)
EDIT:
Here is what I understand:
I will use this Colab link for YOLOv4-tiny object detection.

  1. I need to prepare my dataset in KITTI format.
  2. Then I need to train a model on my other dataset to use as the pretrained model in TAO. It has to be CSPDarkNet 19/53, going by this link (TAO Pretrained Object Detection | NVIDIA NGC), and its output will be .hdf5 (or something else?). I can't use BYOM, because it does not support object detection yet.
  3. In the end, I get my transfer-learned model in YOLOv4-tiny format.

If there is a missing step, or if you have any tips, could you share them?
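For step 1, the KITTI label format can be sketched roughly as below. This is a minimal illustration, not TAO's own tooling: TAO's detection networks read only the class name and bounding-box columns, so the remaining KITTI fields can be zeroed out (the class name "empty_shelf" and the coordinates are made up for the example).

```python
# Minimal sketch: produce one KITTI-format label line per object.
# KITTI columns: type truncated occluded alpha bbox(xmin ymin xmax ymax)
#                dimensions(3) location(3) rotation_y  -> 15 fields total.
# TAO's detectors only use the class name and the bbox; the rest can be 0.
def to_kitti_line(cls_name, xmin, ymin, xmax, ymax):
    return (f"{cls_name} 0.00 0 0.00 "
            f"{xmin:.2f} {ymin:.2f} {xmax:.2f} {ymax:.2f} "
            "0.00 0.00 0.00 0.00 0.00 0.00 0.00")

# One .txt label file per image, e.g. labels/000001.txt
line = to_kitti_line("empty_shelf", 120, 45, 300, 210)
print(line)
```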

May I know, is it a 3rd-party pretrained model which is not related to TAO?
If yes, it is currently not supported, since BYOM only supports 3rd-party UNet and classification models.

The 3rd-party pretrained model will be in YOLOv4-tiny format, and its backbone will be CSPDarkNet 19/53.

Is it a 3rd party model?

Yes, I will create the model myself. In that case, will it be 3rd party?

If it is a 3rd-party model, it is not supported. Currently, only .tlt and NGC .hdf5 files can be used as the pretrained model for the yolo_v4_tiny network.

So do I need to find the model most similar to mine, take its .hdf5 or .tlt file, and train my model from that? Or is there any user example of obtaining one's own model's .hdf5 or .tlt files?

TAO provides .hdf5 pretrained models on ngc.nvidia.com.
Some purpose-built networks also have .tlt-format models.

To get started, I suggest you run the Jupyter notebook. It can be downloaded by following the guide in the TAO Toolkit Quick Start Guide - NVIDIA Docs.

So if I use TensorRT for my model, as you mentioned in this issue, can I then use TAO with my custom pretrained model?
Also, I found an object recognition model on NGC. Maybe it is usable for my empty-shelf detector model, but I am not sure it fits my purpose, because I need an object detection model and this is a recognition model. Could you give any tips about it? And I couldn't find anything about labels: I have 7 different labels in my dataset, but if I use a pretrained model with different labels, do I need to change my labels to match the pretrained model's labels?

For a custom pretrained model, only the UNet and classification networks are compatible. See BYOM for more information: BYOM Converter - NVIDIA Docs.
For other networks, you can train from scratch or use a pretrained model from NGC.

For your case, could you please elaborate? You are going to train an object detection model that detects 7 kinds of objects, right?
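On the label question: in transfer learning, the detection head is re-initialized and retrained on whatever classes you declare in your training spec, so the pretrained model's labels do not constrain yours. You only need your annotation names to be consistent with the class names you declare. A hypothetical sketch of such a mapping (all names below are made up for illustration):

```python
# Hypothetical sketch: map raw annotation names to the class names declared
# in the training spec. The pretrained backbone does not fix the label set.
raw_to_spec = {
    "EmptyShelf": "empty_shelf",
    "Bottle": "bottle",
    # ... one entry per raw label in the dataset (7 in this case)
}

def normalize(label):
    # Fall back to a lowercased name for labels not listed explicitly.
    return raw_to_spec.get(label, label.lower())

print(normalize("EmptyShelf"))  # empty_shelf
print(normalize("Box"))         # box
```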

Actually, I switched to using the NGC pretrained retail object detection model with efficientdet_tf2 for transfer learning in the Colab code. I prepared my dataset, installed the retail .tlt file, and created the specs, but when I try to create TFRecords, bin/bash cannot find the efficientdet_tf2 command, even though `tao -h` lists efficientdet_tf2, so I couldn't continue. Could you give more detail about this? I looked at nvidia-tao-tf2, but it runs only with Python 3.8, while in the nvidia-tao GitHub repo there is a tensorflow folder whose setup_env.sh sets the Python version to 3.6. That's why I cannot use nvidia-tao-tf2, and therefore cannot use efficientdet_tf2. If I'm wrong, what could be the solution?

Can you run the command below successfully?
$ tao efficientdet_tf2

After changing to Python 3.8? I didn't try that yet.

Please refer to the TAO Toolkit Quick Start Guide - NVIDIA Docs
to set up a Python environment using Miniconda and run the TAO launcher.
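After creating the environment, a quick sanity check like the one below can confirm the interpreter version before invoking the launcher. This is a minimal sketch; the 3.6 vs. 3.8 requirements come from the discussion above, so adjust `required` to whatever your TAO version expects.

```python
# Minimal sketch: confirm the active interpreter meets a minimum version
# before running `tao` commands (set `required` to (3, 6) or (3, 8) as needed).
import sys

required = (3, 6)
if sys.version_info[:2] < required:
    raise SystemExit(
        f"Python {required[0]}.{required[1]}+ required, "
        f"found {sys.version_info.major}.{sys.version_info.minor}"
    )
print("Python version OK:", sys.version.split()[0])
```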

Will installing conda in Colab solve the problem? In that case, do I need to change Python 3.8 to 3.6?

Can you run the command below successfully, without any changes?
$ tao efficientdet_tf2

I couldn't run it; bin/bash cannot find efficientdet_tf2. That's why I thought it could be the Python version.

How about
$ tao info --verbose

Can it run successfully?

I ran this part of the Colab notebook (it takes some time to run):

import os
if os.environ["GOOGLE_COLAB"] == "1":
    os.environ["bash_script"] = "setup_env.sh"
else:
    os.environ["bash_script"] = "setup_env_desktop.sh"

!sed -i "s|PATH_TO_COLAB_NOTEBOOKS|$COLAB_NOTEBOOKS_PATH|g" $COLAB_NOTEBOOKS_PATH/tensorflow/$bash_script

!sh $COLAB_NOTEBOOKS_PATH/tensorflow/$bash_script

In its output I got these errors:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests~=2.21.0, but you have requests 2.27.1 which is incompatible.
google-colab 1.0.0 requires six~=1.12.0, but you have six 1.15.0 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
nvidia-tensorflow 1.15.4+nv20.10 requires numpy<1.19.0,>=1.16.0, but you have numpy 1.19.4 which is incompatible.
nvidia-tao 4.0.0 requires idna==2.10, but you have idna 2.7 which is incompatible.
nvidia-tao 4.0.0 requires six==1.15.0, but you have six 1.13.0 which is incompatible.
nvidia-tao 4.0.0 requires tabulate==0.8.7, but you have tabulate 0.7.5 which is incompatible.
nvidia-tao 4.0.0 requires urllib3>=1.26.5, but you have urllib3 1.24.3 which is incompatible.
google-colab 1.0.0 requires ipykernel~=4.6.0, but you have ipykernel 5.5.6 which is incompatible.
google-colab 1.0.0 requires ipython~=5.5.0, but you have ipython 7.16.3 which is incompatible.
google-colab 1.0.0 requires notebook~=5.2.0, but you have notebook 6.4.10 which is incompatible.
google-colab 1.0.0 requires pandas~=0.24.0, but you have pandas 0.25.3 which is incompatible.
google-colab 1.0.0 requires requests~=2.21.0, but you have requests 2.20.1 which is incompatible.
google-colab 1.0.0 requires six~=1.12.0, but you have six 1.13.0 which is incompatible.
google-colab 1.0.0 requires tornado~=4.5.0, but you have tornado 6.1 which is incompatible.

Here is the output:

usage: tao [-h]
           {action_recognition,augment,bpnet,classification_tf1,classification_tf2,converter,deformable_detr,detectnet_v2,dssd,efficientdet_tf1,efficientdet_tf2,emotionnet,faster_rcnn,fpenet,gazenet,gesturenet,heartratenet,intent_slot_classification,lprnet,mask_rcnn,multitask_classification,n_gram,pointpillars,pose_classification,punctuation_and_capitalization,question_answering,re_identification,retinanet,segformer,spectro_gen,speech_to_text,speech_to_text_citrinet,speech_to_text_conformer,ssd,text_classification,token_classification,unet,vocoder,yolo_v3,yolo_v4,yolo_v4_tiny}
           ...
tao: error: invalid choice: 'info' (choose from 'action_recognition', 'augment', 'bpnet', 'classification_tf1', 'classification_tf2', 'converter', 'deformable_detr', 'detectnet_v2', 'dssd', 'efficientdet_tf1', 'efficientdet_tf2', 'emotionnet', 'faster_rcnn', 'fpenet', 'gazenet', 'gesturenet', 'heartratenet', 'intent_slot_classification', 'lprnet', 'mask_rcnn', 'multitask_classification', 'n_gram', 'pointpillars', 'pose_classification', 'punctuation_and_capitalization', 'question_answering', 're_identification', 'retinanet', 'segformer', 'spectro_gen', 'speech_to_text', 'speech_to_text_citrinet', 'speech_to_text_conformer', 'ssd', 'text_classification', 'token_classification', 'unet', 'vocoder', 'yolo_v3', 'yolo_v4', 'yolo_v4_tiny')