Could not convert Emotionnet from NGC to Triton Plan format

• Hardware: RTX3080
• System: x86_64
• Docker: tao-toolkit-triton-apps/Dockerfile at main (NVIDIA-AI-IOT/tao-toolkit-triton-apps on GitHub)
• How to reproduce the issue?
I followed the instructions here to install the tao-converter docker. It works to convert the sample models in model_repository.
But when I add the EmotionNet model (deployable_v1.0 version from NGC) with a config.pbtxt like this:

name: "emotionmlp_tlt"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input_landmarks:0"
    data_type: TYPE_FP32
    dims: [ 1, 136, 1 ]
  }
]
output [
  {
    name: "softmax/Softmax:0"
    data_type: TYPE_FP32
    dims: [ 6 ]
    label_filename: "labels.txt"
  }
]
parameters [
  {
    key: "target_classes"
    value: {string_value: "Neutral,Happy,Surprise,Squint,Disgust,Scream"}
  }
]
dynamic_batching { }
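As an aside, the `target_classes` parameter above is just a comma-separated string; a client usually splits it to map softmax output indices to emotion names. A minimal sketch in pure Python (the helper name is my own, not part of Triton):

```python
# Sketch: split the "target_classes" string from config.pbtxt into a label
# list. The helper name parse_target_classes is hypothetical.
def parse_target_classes(value: str) -> list[str]:
    """Split the comma-separated target_classes string into labels."""
    return [label.strip() for label in value.split(",")]

labels = parse_target_classes("Neutral,Happy,Surprise,Squint,Disgust,Scream")
print(labels)       # six emotion classes, matching the output dims: [ 6 ]
print(len(labels))  # 6
```

The label count must match the output dimension (6) and the order in labels.txt.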

I enter the docker bash and run this command:

tao-converter /tlt_models/emotionmlp_tlt/model.etlt \
              -k tlt_encode \
              -d 1,136,1 \
              -o softmax/Softmax:0 \
              -t fp32 \
              -m 1 \
              -e /model_repository/emotionmlp_tlt/1/model.plan

But the log shows an ERROR:

[INFO] ----------------------------------------------------------------
[INFO] Input filename: /tmp/fileh7jOp8
[INFO] ONNX IR version: 0.0.0
[INFO] Opset version: 0
[INFO] Producer name:
[INFO] Producer version:
[INFO] Domain:
[INFO] Model version: 0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[INFO] Model has no dynamic shape.
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
/tao_triton/download_and_convert.sh: line 47: 52 Segmentation fault (core dumped) tao-converter /tlt_models/emotionmlp_tlt/model.etlt -k tlt_encode -d 1,136,1 -o softmax/Softmax:0 -t fp32 -m 1 -e /model_repository/emotionmlp_tlt/1/model.plan

Please help me to fix this.
Thank you very much.

Please modify the command:

        tao-converter /tlt_models/emotionmlp_tlt/model.etlt \
                      -k tlt_encode \
                      -t fp32 \
                      -p input_landmarks:0,1x1x136x1,1x1x136x1,2x1x136x1 \
                      -e /model_repository/emotionmlp_tlt/1/model.plan
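For clarity, the `-p` flag supplies an optimization profile as `<tensor_name>,<min_shape>,<opt_shape>,<max_shape>`, with each shape written as dimensions joined by `x`. A small illustrative parser (my own sketch, not part of tao-converter) showing how that argument decomposes:

```python
# Illustrative parser for a tao-converter "-p" optimization-profile argument
# of the form "<tensor_name>,<min>,<opt>,<max>", each shape "NxCxHxW".
# This is a sketch for explanation only, not part of any NVIDIA tool.
def parse_profile(arg: str):
    name, min_s, opt_s, max_s = arg.rsplit(",", 3)
    to_shape = lambda s: tuple(int(d) for d in s.split("x"))
    return name, to_shape(min_s), to_shape(opt_s), to_shape(max_s)

name, mn, opt, mx = parse_profile("input_landmarks:0,1x1x136x1,1x1x136x1,2x1x136x1")
print(name)          # input_landmarks:0
print(mn, opt, mx)   # (1, 1, 136, 1) (1, 1, 136, 1) (2, 1, 136, 1)
```

Here the batch dimension ranges from 1 (min/opt) to 2 (max), which is why no separate `-d`/`-m` flags are needed: the profile carries the input dimensions for the dynamic-shape ONNX inside the .etlt.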

Reference:
The tao_cv_compile.sh inside tao_cv_inference_pipeline_quick_start_vv0.3-ga
https://docs.nvidia.com/tao/tao-toolkit/text/tao_cv_inf_pipeline/requirements_and_installation.html#download-the-tao-toolkit-cv-inference-pipeline-quick-start

Following your command, it results in another error:

Converting the emotionmlp_tlt model
Error: no input dimensions given

Sorry, please modify the key to nvidia_tlt:

   tao-converter /tlt_models/emotionmlp_tlt/model.etlt \
                 -k nvidia_tlt \
                 -t fp32 \
                 -p input_landmarks:0,1x1x136x1,1x1x136x1,2x1x136x1 \
                 -e /model_repository/emotionmlp_tlt/1/model.plan

See NVIDIA NGC

  • Model load key: nvidia_tlt

Finally, it works with your load key. Thank you so much!

Converting the emotionmlp_tlt model
[INFO] ----------------------------------------------------------------
[INFO] Input filename:   /tmp/filetjAklb
[INFO] ONNX IR version:  0.0.5
[INFO] Opset version:    10
[INFO] Producer name:    tf2onnx
[INFO] Producer version: 1.6.3
[INFO] Domain:           
[INFO] Model version:    0
[INFO] Doc string:       
[INFO] ----------------------------------------------------------------
[WARNING] /home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:226: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (-1, 1, 136, 1)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 1, 136, 1) for input: input_landmarks:0
[INFO] Using optimization profile opt shape: (1, 1, 136, 1) for input: input_landmarks:0
[INFO] Using optimization profile max shape: (2, 1, 136, 1) for input: input_landmarks:0
[INFO] Detected 1 inputs and 1 output network tensors.
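With the plan built, a client can post-process the six-way softmax output (`softmax/Softmax:0`) by taking the argmax and indexing into the labels. A minimal sketch in pure Python, assuming the label order from target_classes above:

```python
# Sketch: map a 6-way softmax score vector to an emotion label. The label
# order is assumed to match labels.txt / the target_classes parameter.
LABELS = ["Neutral", "Happy", "Surprise", "Squint", "Disgust", "Scream"]

def predict_emotion(scores):
    """Return the label with the highest softmax score."""
    if len(scores) != len(LABELS):
        raise ValueError(f"expected {len(LABELS)} scores, got {len(scores)}")
    return LABELS[max(range(len(scores)), key=lambda i: scores[i])]

print(predict_emotion([0.05, 0.7, 0.1, 0.05, 0.05, 0.05]))  # Happy
```

In a real deployment the scores would come back from a Triton inference request against the emotionmlp_tlt model rather than a hard-coded list.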