• Hardware: RTX3080
• System: x86_64
• Docker: built from tao-toolkit-triton-apps/Dockerfile (NVIDIA-AI-IOT/tao-toolkit-triton-apps on GitHub)
• How to reproduce the issue?
I followed the instructions here to install the tao-converter Docker. It works for converting the sample models in model_repository.
But when I add the EmotionNet model (deployable_v1.0 from NGC) with a config.pbtxt like this:
```
name: "emotionmlp_tlt"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input_landmarks:0"
    data_type: TYPE_FP32
    dims: [ 1, 136, 1 ]
  }
]
output [
  {
    name: "softmax/Softmax:0"
    data_type: TYPE_FP32
    dims: [ 6 ]
    label_filename: "labels.txt"
  }
]
parameters [
  {
    key: "target_classes"
    value: { string_value: "Neutral,Happy,Surprise,Squint,Disgust,Scream" }
  }
]
dynamic_batching { }
```
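As a quick sanity check on the config (this is my own hypothetical helper, not part of Triton or TAO), the softmax output dim should equal the number of entries in the target_classes parameter and in labels.txt:

```python
# Hypothetical sanity check (not part of Triton or TAO): the softmax
# output dim declared in config.pbtxt should equal the number of
# entries in the target_classes parameter and in labels.txt.
target_classes = "Neutral,Happy,Surprise,Squint,Disgust,Scream".split(",")
softmax_dims = [6]  # dims declared for softmax/Softmax:0 above

assert softmax_dims[0] == len(target_classes)
print(len(target_classes))  # 6 classes, matching the softmax dim
```

So the class count and output dim agree; the config itself looks consistent to me.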
I enter the container's bash shell and run this command:
```
tao-converter /tlt_models/emotionmlp_tlt/model.etlt \
  -k tlt_encode \
  -d 1,136,1 \
  -o softmax/Softmax:0 \
  -t fp32 \
  -m 1 \
  -e /model_repository/emotionmlp_tlt/1/model.plan
```
But the log shows this error:
```
[INFO] ----------------------------------------------------------------
[INFO] Input filename:   /tmp/fileh7jOp8
[INFO] ONNX IR version:  0.0.0
[INFO] Opset version:    0
[INFO] Producer name:
[INFO] Producer version:
[INFO] Domain:
[INFO] Model version:    0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[INFO] Model has no dynamic shape.
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
/tao_triton/download_and_convert.sh: line 47:    52 Segmentation fault (core dumped) tao-converter /tlt_models/emotionmlp_tlt/model.etlt -k tlt_encode -d 1,136,1 -o softmax/Softmax:0 -t fp32 -m 1 -e /model_repository/emotionmlp_tlt/1/model.plan
```
Please help me fix this.
Thank you very much.