Export classification model using classification_tf1

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc): I used the T4 GPU.
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): I used the classification_tf1 model.
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): Not applicable.
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.) I used this NVIDIA Colab file; we trained, validated, and tested the model, and at the end I tried to export it with this command:

The exported model does not achieve the same accuracy it had during training.

How did you test the accuracy with this exported model?

I loaded the exported model with the following code and ran an inference test; the results are incorrect: it returns the name of another class.
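
It was roughly along these lines (the file names, class order, and preprocessing below are illustrative assumptions, not the exact code from the screenshot):

```python
# Illustrative ONNX Runtime inference check (file names, class order, and
# preprocessing are assumptions; adjust them to match your training setup).
import numpy as np
import onnxruntime as ort
from PIL import Image

MODEL_PATH = "final_model.onnx"        # assumed name of the exported model
IMAGE_PATH = "test_image.jpg"          # assumed test image
CLASS_NAMES = ["class_a", "class_b"]   # assumed order from classmap.json

session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# TAO classification_tf1 models are usually trained with caffe-style
# preprocessing (RGB -> BGR, per-channel mean subtraction, NCHW layout);
# a mismatch here is a common cause of wrong class predictions.
img = Image.open(IMAGE_PATH).convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32)
x = x[:, :, ::-1]                                      # RGB -> BGR
x -= np.array([103.939, 116.779, 123.68], np.float32)  # BGR channel means
x = np.ascontiguousarray(np.transpose(x, (2, 0, 1))[np.newaxis, ...])

scores = session.run(None, {input_name: x})[0]
print("Predicted class:", CLASS_NAMES[int(np.argmax(scores))])
```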

It seems that you are running with onnxruntime; please try to run the TensorRT engine first, as mentioned in the notebook. See tao_tutorials/notebooks/tao_launcher_starter_kit/classification_tf1/tao_voc/classification.ipynb at main · NVIDIA/tao_tutorials · GitHub. Also, the source code for tao-deploy can be found in tao_deploy/nvidia_tao_deploy/cv/classification_tf1 at main · NVIDIA/tao_deploy · GitHub. You can refer to it.
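
As a rough sketch of that step (the file names here are assumptions; the notebook and tao-deploy remain the official path), an engine can be built from the exported ONNX file with the TensorRT 8.6 Python API:

```python
# Sketch: build a TensorRT engine from the exported ONNX file.
# (File names are assumptions; tao-deploy does this officially.)
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("final_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX file")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

engine_bytes = builder.build_serialized_network(network, config)
with open("final_model.engine", "wb") as f:
    f.write(engine_bytes)
```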

The steps and the code in the screenshots only show loading the model. To export it we used:

  • TensorRT version: TensorRT 8.6 GA for Linux x86_64 and CUDA 12.0 and 12.1 TAR Package.
  • TensorFlow version: 1
I am attaching captures of the errors shown when exporting it:

On other occasions it shows this other error:

When using the command in the following screenshot, the model is exported incorrectly:

Here is the Colab we use and the errors it shows:

There is no error in the log. The ONNX file is generated.
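
You can also sanity-check the generated file, for example (assuming it is named final_model.onnx):

```python
# Quick sanity check of the exported ONNX file (file name is an assumption).
import hashlib
import onnx

model = onnx.load("final_model.onnx")
onnx.checker.check_model(model)  # raises if the file is malformed or truncated

# Comparing this digest after copying the file elsewhere rules out a
# corrupted or incomplete transfer.
with open("final_model.onnx", "rb") as f:
    print("SHA256:", hashlib.sha256(f.read()).hexdigest())
```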

But when loading the model in another Colab instance, the accuracy is lost, so either it is not exported correctly or the export is incomplete. How do I export it correctly? Do I need to modify something in the command?

That may not be an issue with the ONNX file itself; it may be related to your code. That is why I asked you to generate a TensorRT engine to test. That is the official way.
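
A minimal sketch of running such an engine with the TensorRT Python API and pycuda (the engine name, binding order, and class count are assumptions; tao-deploy wraps all of this for you):

```python
# Sketch: run the built engine on one preprocessed image.
# (Engine name, binding order, and class count are assumptions.)
import numpy as np
import pycuda.autoinit            # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

NUM_CLASSES = 2                   # hypothetical; match your classmap.json

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)     # keep the runtime alive with the engine
with open("final_model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# The input must use the same preprocessing as training
# (see the onnxruntime snippet earlier in this thread).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = np.empty((1, NUM_CLASSES), dtype=np.float32)

d_in = cuda.mem_alloc(x.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_in, x)
context.execute_v2([int(d_in), int(d_out)])  # assumes input = binding 0
cuda.memcpy_dtoh(out, d_out)
print("Predicted class index:", int(out.argmax()))
```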