What does setting the "preprocess_mode" flag in the specs for tao classification_tf1 do? What is the effect of setting it to "torch", "tf", or "caffe"?

I have a .tltb model generated from a pre-trained ONNX classification model, which took as input images normalized to the range (0, 1) by division by 255. What preprocess_mode and other flags should I set in the specs to replicate its performance after conversion?

Please provide the following information when requesting support.
• Hardware: RTX 3090
• Network Type: classification_tf1
• TLT Version: nvidia-tao 4.0.0

You can refer to keras-applications/imagenet_utils.py in the keras-team/keras-applications repository on GitHub, which implements these three modes.
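For reference, here is a minimal Python sketch of what each mode does, following the logic of _preprocess_numpy_input in that file (simplified; the linked source is authoritative):

```python
import numpy as np

def preprocess_input(x, mode="caffe"):
    """Simplified sketch of keras-applications' _preprocess_numpy_input.

    x: RGB image array with values in [0, 255], channels last.
    """
    x = x.astype("float32")
    if mode == "tf":
        # Scale pixels to [-1, 1], sample-wise.
        x /= 127.5
        x -= 1.0
        return x
    if mode == "torch":
        # Scale to [0, 1], then normalize each channel with the
        # ImageNet mean and standard deviation.
        x /= 255.0
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        return (x - mean) / std
    # mode == "caffe": convert RGB -> BGR, then subtract the ImageNet
    # channel means (in BGR order). Pixels are NOT scaled to [0, 1].
    x = x[..., ::-1]
    mean = np.array([103.939, 116.779, 123.68])
    return x - mean
```

Note that only "torch" mode divides by 255, and it additionally applies the ImageNet mean/std shift afterwards.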


Thanks for the reply. I would also like to ask: is it possible to prune .tltb models directly? I see that the -bm argument is made available in the classification_tf1 notebook here, but I do not understand what should be passed to -m, i.e. the model path argument, which is a compulsory argument of the pruning command.

Also, I observe that if I pass the .tltb model path itself for the -m argument, then I get the error:

And if I instead pass a .tlt model which I generated from the .tltb model after training, I still get an error.

For classification, there are two kinds of notebooks. One is the original notebook without the BYOM feature; the other is with BYOM.
I think you are using the BYOM version.
For BYOM (bring your own model), please refer to BYOM Image Classification. The .tltb is the pretrained model generated by the BYOM converter. The user can then use it to run training, pruning, etc.
If you do not want to use BYOM, please use the notebook without BYOM.
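For context, the .tltb is produced by running the BYOM converter on the ONNX model, along the lines of the sketch below (the model name, paths, and key are placeholders, not values from this thread; check tao_byom --help for the exact arguments):

```shell
# Convert a pre-trained ONNX classification model into a .tltb archive.
# -m: input ONNX model, -r: output/results directory,
# -n: name for the converted model, -k: encryption key
tao_byom -m my_classifier.onnx -r ./byom_results -n my_classifier -k $KEY
```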

I’m using the classification_tf1 notebook present at “getting_started_v4.0.0/notebooks/tao_launcher_starter_kit/classification_tf1/byom_voc/”, so I believe I’m using the correct notebook.
But its pruning command requires me to provide the -m argument (model path) even if I pass the -bm argument, as stated in my earlier comment. The errors above are for that case only. I want to know what I should be passing in the -m parameter, because that is where I face the errors.

There is also another notebook without the BYOM feature:
tao-getting-started_v4.0.0\notebooks\tao_launcher_starter_kit\classification_tf1\tao_voc

For pruning, see BYOM Image Classification or Image Classification.
Did you set the correct key for “-k”?

Hi, I have refined my question a bit more.
I would also like to ask how it is possible to prune .tltb models directly. I see that an argument for the .tltb model path (denoted by -bm) is made available in the classification_tf1 notebook here, but I do not understand what should be passed to -m, i.e. the model path argument, which is a compulsory argument of the pruning command even when you want to prune the .tltb model provided via the -bm argument (the BYOM model path).

I did a few basic experiments to see what should be passed in the -m parameter.
I observe that if I pass the .tltb model path itself for the -m argument, then I get the error:

And if I instead pass a .tlt model which I generated from the .tltb model after training, I still get an error.

NOTE:
I’m using the classification_tf1 notebook present in classification_tf1/byom_voc.
I need to prune my .tltb model using the classification_tf1/byom_voc notebook.

Refer to Mobilenetv2 classification_tf1 pruning, error at pruned layer - #3 by Morganh to generate a new .tltb model.
I can run training and pruning successfully.
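For anyone who hits the same errors: with a valid .tltb, the pruning cell in the BYOM notebook passes the trained .tlt checkpoint to -m and the converter-generated .tltb to -bm, roughly as sketched below (the task name, paths, epoch number, threshold, and key are placeholders for this setup, not verified values):

```shell
# Prune the trained BYOM model:
#   -m  trained .tlt checkpoint produced by the training step
#   -bm the .tltb generated by the BYOM converter
#   -o  output path for the pruned model
tao classification_tf1 prune \
    -m $USER_EXPERIMENT_DIR/output/weights/byom_model_080.tlt \
    -bm $USER_EXPERIMENT_DIR/model/my_classifier.tltb \
    -o $USER_EXPERIMENT_DIR/output/model_pruned.tlt \
    -eq union \
    -pth 0.6 \
    -k $KEY
```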

