Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): I used a T4 GPU.
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): I used the classification_tf1 model.
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): Not applicable.
• Training spec file (if you have one, please share here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.) I used this NVIDIA Colab notebook; we trained, validated, and tested the model, and at the end I tried to export it with this command:
I loaded the exported model with the following code and ran an inference test, but the results are not correct: it returns the name of another class.
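A minimal sketch of this kind of onnxruntime check (the input size, channel order, and mean values below are assumptions that must match the preprocess_mode in the training spec, or the predictions will be wrong):

```python
# Minimal sketch: load the exported ONNX classification model and classify one image.
# Input size 224x224, BGR channel order, and caffe-style mean subtraction are
# ASSUMPTIONS here; they must match the preprocessing used during training.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession("final_model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]  # e.g. name "input_1", shape (1, 3, 224, 224)

img = Image.open("test.jpg").convert("RGB").resize((224, 224))
x = np.array(img, dtype=np.float32)[:, :, ::-1]            # RGB -> BGR (assumption)
x = x - np.array([103.939, 116.779, 123.68], np.float32)   # caffe-style means (assumption)
x = np.ascontiguousarray(x.transpose(2, 0, 1))[None]       # HWC -> NCHW, add batch dim

scores = sess.run(None, {inp.name: x})[0]                  # shape (1, num_classes)
print("predicted class index:", int(np.argmax(scores[0])))
```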
But when I import the model into another Colab instance, the accuracy is lost, as if the model were not exported correctly or completely. How do I export it correctly? Do I need to modify something in the command?
That may not be an issue with the ONNX file itself; it may be related to your code. That is why I ask you to generate a TensorRT engine to test. That is the official way.
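For example, you can build an engine directly from the ONNX file with trtexec (bundled with TensorRT); a minimal sketch, with placeholder file names:

```python
# Minimal sketch: build a TensorRT engine from the exported ONNX file by
# invoking trtexec (ships with TensorRT). File names are placeholders.
import subprocess

subprocess.run(
    [
        "trtexec",
        "--onnx=final_model.onnx",          # the exported ONNX model
        "--saveEngine=final_model.engine",  # where to write the serialized engine
    ],
    check=True,  # raise if the engine build fails
)
```

If the engine produces correct predictions while your own inference code does not, the problem is in your preprocessing/postprocessing rather than in the export.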