Export model in TAO Toolkit

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : Quadro RTX 4000
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Classification (TF1)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) : I don't have it installed
• Training spec file(If have, please share here):
classification_spec.txt (1.0 KB)
classification_retrain_spec.txt (1.0 KB)

• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

  1. When using the tao deploy command to generate a TensorRT engine from an ONNX model for classification:

     The output indicates that gen_trt_engine is not a valid subcommand for the classification task in TAO Toolkit TF1 version 5.0.0.

  2. When converting an ONNX model to a TensorRT engine using the tao converter command:

     The error indicates that tao-converter in TAO Toolkit TF1 5.0.0 doesn’t recognize the --calibration_file option.
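For reference, here is a sketch of how the two conversions are typically invoked in TAO 5.0. The task name classification_tf1 and the file paths below are assumptions, not taken from the post; verify the available subcommands with `tao deploy --help` in your installation.

```shell
# Sketch only -- paths and the classification_tf1 task name are assumptions;
# confirm against your installed TAO version before running.

# 1. tao deploy: in TAO 5.0 the TF1 classification network is usually
#    addressed as classification_tf1, which exposes gen_trt_engine
#    (the bare `classification` task may not).
tao deploy classification_tf1 gen_trt_engine \
    -m /workspace/export/model.onnx \
    -e /workspace/specs/classification_spec.txt \
    --engine_file /workspace/export/model.engine

# 2. trtexec alternative: converts ONNX to a TensorRT engine directly;
#    for INT8 it takes a calibration cache via --calib,
#    not --calibration_file.
trtexec --onnx=/workspace/export/model.onnx \
        --saveEngine=/workspace/export/model.engine \
        --int8 --calib=/workspace/export/calib.cache
```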

For item 1, can you double check? Is there any log you can share? The gen_trt_engine subcommand should be available.