Multitask_classification export to ONNX inference

Hello,
I have trained a Multitask_classification model from a jupyter notebook in TAO getting started v5.3.0. I have exported the model in onnx format and now I want to run inference using onnxruntime. But when I tried to use the model on the validation set used in the notebook the accuracy was significantly worse (accuracy drop from 74%, 97%, 73% to just 32%, 48%, 47%). Is there any preprocessing that I need to do when running the inference outside of TAO? Or is it possible that exporting to ONNX format reduces accuracy?

• Hardware: RTX A6000
• Network Type: Resnet10
• TAO toolkit version: 5.3.0
model_config {
arch: "resnet"
n_layers: 10

# Setting these parameters to match the template downloaded from NGC.
use_batch_norm: true
all_projections: true
freeze_blocks: 0
input_image_size: "3,80,60"
}
training_config {
batch_size_per_gpu: 256
num_epochs: 100
checkpoint_interval: 1
learning_rate {
soft_start_cosine_annealing_schedule {
min_learning_rate: 1e-6
max_learning_rate: 1e-2
soft_start: 0.1
}
}
regularizer {
type: L1
weight: 9e-5
}
optimizer {
sgd {
momentum: 0.9
nesterov: false
}
}
pretrain_model_path: "/workspace/tao-experiments/multitask_classification/pretrained_resnet10/pretrained_classification_vresnet10/resnet_10.hdf5"
}
dataset_config {
train_csv_path: "/workspace/tao-experiments/data/myntradataset/train.csv"
val_csv_path: "/workspace/tao-experiments/data/myntradataset/val.csv"
image_directory_path: "/workspace/tao-experiments/data/myntradataset/images"
}

Please refer to the preprocessing code in the inference script: tao_tensorflow1_backend/nvidia_tao_tf1/cv/multitask_classification/scripts/inference.py at main · NVIDIA/tao_tensorflow1_backend · GitHub.
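
For reference, a minimal standalone sketch of that preprocessing, assuming the default Keras 'caffe'-mode normalization (RGB-to-BGR flip plus ImageNet mean subtraction, channels-first) that preprocess_input applies; the resize dimensions follow input_image_size: "3,80,60" from the spec above, and the function name is just illustrative:

import numpy as np
from PIL import Image

def tao_style_preprocess(path):
    # Resize to width=60, height=80 to match input_image_size "3,80,60", then go HWC -> CHW.
    image = Image.open(path).convert("RGB").resize((60, 80))
    x = np.array(image).astype(np.float32).transpose(2, 0, 1)
    # Keras 'caffe' mode with channels_first: flip RGB -> BGR, then subtract the ImageNet channel means (BGR order).
    mean = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape(3, 1, 1)
    x = x[::-1, :, :] - mean
    return x[np.newaxis, ...]  # (1, 3, 80, 60) batch for onnxruntime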

Thank you for your answer. I have implemented the preprocessing the same way as shown in the source code, but the ONNX output is still different.
This is my code running ONNX Runtime:

import numpy as np
import onnxruntime as ort
from PIL import Image
from keras.applications.imagenet_utils import preprocess_input  # same helper as TAO's inference.py

ort_sess = ort.InferenceSession("multitask_classification/export/multitask_resnet10_unpruned.onnx")

image = Image.open('data/myntradataset/images/' + "1654.jpg")
image = image.resize((60, 80), Image.ANTIALIAS).convert("RGB")
# HWC -> CHW, then the same preprocess_input call as in TAO's inference script
inference_input = preprocess_input(np.array(image).astype(np.float32).transpose(2, 0, 1))
inference_input = inference_input[np.newaxis, ...]  # add batch dimension

prediction = ort_sess.run(None, {'input_1': inference_input})
print(prediction)

And the output looks like this:
[array([[3.2244731e-02, 1.8993086e-01, 7.2558713e-03, 6.2475264e-02,
5.6349825e-02, 5.8440310e-03, 4.3351520e-04, 4.3483416e-04,
4.1038208e-04, 6.4332151e-01, 1.2992544e-03]], dtype=float32), array([[5.9129968e-02, 4.7413315e-04, 3.8134237e-04, 2.6444846e-04,
2.0592567e-01, 5.0973587e-02, 8.4756932e-05, 8.3556450e-05,
6.8257177e-01, 1.1078640e-04]], dtype=float32), array([[0.31396538, 0.01392357, 0.644959 , 0.02715205]], dtype=float32)]
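
For readability, I also decode the raw arrays with an argmax per task (a quick sketch; prediction is the list returned by ort_sess.run above, and I'm assuming the task order follows the exported model; mapping indices to class names would need class_mapping.json, whose layout isn't shown here):

import numpy as np
# 'prediction' is the list of per-task softmax arrays from ort_sess.run above.
for i, task_scores in enumerate(prediction):
    scores = task_scores[0]  # batch of 1
    idx = int(np.argmax(scores))
    print(f"task {i}: class index {idx}, confidence {scores[idx]:.4f}")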

I have exported the model using the following command:
!tao model multitask_classification export -m $USER_EXPERIMENT_DIR/weights/multitask_cls_resnet10_epoch_020.hdf5 \
-cm $USER_EXPERIMENT_DIR/class_mapping.json \
-o $USER_EXPERIMENT_DIR/export/multitask_resnet10_unpruned.onnx \
--gen_ds_config

And I run TAO inference using this command:
!tao model multitask_classification inference -m $USER_EXPERIMENT_DIR/weights/multitask_cls_resnet10_epoch_020.hdf5 \
-i /workspace/tao-experiments/data/myntradataset/images/1654.jpg \
-cm $USER_EXPERIMENT_DIR/class_mapping.json

But the output is quite different:
Task base_color:
Predictions: [1.0295557e-05 4.0577157e-04 1.9063536e-05 4.1254712e-05 4.3332647e-04
1.7709788e-05 1.8649389e-05 3.0235946e-05 9.4475581e-05 2.0171115e-04
9.9872750e-01]
Class name = White


Task category:
Predictions: [3.0598819e-04 3.0944127e-04 2.4018568e-04 2.6610633e-04 3.3416005e-04
7.6537288e-04 3.4076901e-04 9.9696869e-01 2.9989387e-04 1.6940433e-04]
Class name = Shoes


Task season:
Predictions: [0.1125445 0.01940216 0.85093004 0.0171233 ]
Class name = Summer

Did I make a mistake somewhere in the process? Otherwise, it seems that exporting to ONNX is not working properly.

Please export the ONNX model again by adding --onnx_route tf2onnx to the export command and retry.

When I try that, it returns:

multitask_classification export: error: argument /tasks: invalid choice: 'tf2onnx' (choose from 'train', 'prune', 'inference', 'export', 'evaluate', 'confmat')

And the --onnx_route argument is not described in --help.

Can you share your latest command?

Sure:
!tao model multitask_classification export -m $USER_EXPERIMENT_DIR/weights/multitask_cls_resnet10_epoch_020.hdf5 \
-cm $USER_EXPERIMENT_DIR/class_mapping.json \
-o $USER_EXPERIMENT_DIR/export/multitask_resnet10_unpruned.onnx \
--onnx_route tf2onnx

Please ignore the previous suggestion.

To narrow down, could you use the ONNX file to generate a TensorRT engine? Does that engine give the correct inference result? Please refer to the steps mentioned in the notebook or the TAO docs.
Additionally, you can use the polygraphy tool to run against the ONNX file or the TensorRT engine, as sketched below; both ways can use the same input array.
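
For example (a sketch only; the file names come from this thread, and exact flags can vary across TensorRT/polygraphy versions):

# Build a TensorRT engine from the exported ONNX file
!trtexec --onnx=multitask_resnet10_unpruned.onnx --saveEngine=multitask_resnet10_unpruned.engine
# Run the same (randomly generated) input through both TensorRT and ONNX Runtime and compare the outputs
!polygraphy run multitask_resnet10_unpruned.onnx --trt --onnxrt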

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.