TAO TF1 classification export issue

Please provide the following information when requesting support.

• Hardware Xavier
• Network Type Classification
• TLT Version 5.0.0
• Training spec file:

model_config {
  # Model Architecture can be chosen from:
  # ['resnet', 'vgg', 'googlenet', 'alexnet']
  arch: "resnet"
  # for resnet → n_layers can be [10, 18, 50]
  # for vgg → n_layers can be [16, 19]
  n_layers: 18
  use_batch_norm: True
  use_bias: False
  all_projections: False
  use_pooling: True
  retain_head: True
  resize_interpolation_method: BICUBIC
  # if you want to use the pretrained model,
  # image size should be "3,224,224"
  # otherwise, it can be "3, X, Y", where X,Y >= 16
  input_image_size: "3,224,224"
}
train_config {
  train_dataset_path: "/workspace/tao_classification/example_dataset/train"
  val_dataset_path: "/workspace/tao_classification/example_dataset/val"
  pretrained_model_path: "/workspace/tao_classification/resnet_18.hdf5"
  # Only ['sgd', 'adam'] are supported for optimizer
  optimizer {
    sgd {
      lr: 0.01
      decay: 0.0
      momentum: 0.9
      nesterov: False
    }
  }
  batch_size_per_gpu: 50
  n_epochs: 150
  # Number of CPU cores for loading data
  n_workers: 16
  reg_config {
    # regularizer type can be "L1", "L2" or "None".
    type: "L2"
    # if the type is not "None",
    # scope can be either "Conv2D" or "Dense" or both.
    scope: "Conv2D,Dense"
    # 0 < weight decay < 1
    weight_decay: 0.000015
  }
  lr_config {
    cosine {
      learning_rate: 0.04
      soft_start: 0.0
    }
  }
  enable_random_crop: True
  enable_center_crop: True
  enable_color_augmentation: True
  mixup_alpha: 0.2
  label_smoothing: 0.1
  preprocess_mode: "caffe"
  image_mean {
    key: 'b'
    value: 103.9
  }
  image_mean {
    key: 'g'
    value: 116.8
  }
  image_mean {
    key: 'r'
    value: 123.7
  }
}
eval_config {
  eval_dataset_path: "/workspace/tao_classification/example_dataset/test"
  model_path: "/workspace/tao_classification/results/resnet_18.tlt"
  top_k: 3
  batch_size: 256
  n_workers: 8
  enable_center_crop: True
}
• How to reproduce the issue: tao model classification_tf1 export -m /workspace/tao_classification/results/weights/resnet_150.hdf5 -o /workspace/tao_classification/export/final_model -k TLT --classmap_json /workspace/tao_classification/results/weights/classmap.json

Every time I run the above command I get this error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/makenet/scripts/export.py", line 70, in <module>
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/makenet/scripts/export.py", line 66, in main
    raise e
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/makenet/scripts/export.py", line 50, in main
    run_export(Exporter, args=args, backend=backend)
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/common/export/app.py", line 277, in run_export
    exporter = Exporter(model_path, key,
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_tf1/cv/makenet/export/classification_exporter.py", line 95, in __init__
    if os.path.exists(experiment_spec_path):
  File "/usr/lib/python3.8/genericpath.py", line 19, in exists
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
Execution status: FAIL
2023-08-17 11:48:46,735 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.
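The immediate cause is visible in the last frame of the traceback: the exporter calls os.path.exists() on the experiment spec path, which is None when no spec is supplied. A minimal, TAO-free sketch of that failure mode (the variable name mirrors the one in the traceback):

```python
import os

# Stand-in for the spec path the exporter receives when -e is omitted.
experiment_spec_path = None

# os.path.exists() only swallows OSError/ValueError from os.stat(),
# so a None path propagates the same TypeError seen in the log above.
try:
    os.path.exists(experiment_spec_path)
except TypeError as err:
    print(err)
```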

This was raised in an earlier post as well, but adding --classmap_json did not fix the issue for me. Please help.

Please add "-e your_spec_file" to the command line.
Refer to https://github.com/NVIDIA/tao_tutorials/blob/cdbafd28fec9da67fbfc4db9288ec0805076ce29/notebooks/tao_launcher_starter_kit/classification_tf1/tao_voc/classification.ipynb
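For anyone hitting the same error, the full working command then looks like the sketch below. The spec file location is an assumption here, not something stated in the thread; substitute wherever your training spec actually lives:

```shell
# Same export command as above, plus the required -e flag pointing at the
# training spec. The spec path is a placeholder (assumed), not from the thread.
tao model classification_tf1 export \
    -m /workspace/tao_classification/results/weights/resnet_150.hdf5 \
    -o /workspace/tao_classification/export/final_model \
    -k TLT \
    -e /workspace/tao_classification/specs/classification_spec.cfg \
    --classmap_json /workspace/tao_classification/results/weights/classmap.json
```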

Thank you that worked.

The documentation (Image Classification (TF1) - NVIDIA Docs) marks -e as an optional argument. Maybe update the documentation accordingly.

Thanks again.

Yes, we will improve the documentation. Thanks for pointing this out.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.