EmotionNet conversion to engine file fails

I’m trying to convert the EmotionNet model from an .etlt file to an engine file using tlt-converter. The command I used is as follows:

./tlt-converter ./model.etlt \
  -k nvidia_tlt \
  -t fp16 \
  -e ./fp16.deploy.engine \
  -p Input_1,1x1x136x1,1x1x136x1,2x1x136x1

I got the following error:

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[ERROR] Wrong input name specified in -p, please double check.

It looks like the input name is not right. The input name I used is "input_1", which works for the Gesture model in NGC. So where can I find the right input name for the EmotionNet model? I searched the website and read the docs, but I can’t find the parameter. Please help!

See the download scripts in Requirements and Installation — TAO Toolkit 3.0 documentation:

ngc registry resource download-version "nvidia/tao/tao_cv_inference_pipeline_quick_start:v0.3-ga"

You can find the input name in tlt_cv_inference_pipeline_quick_start_v0.3-ga/scripts/tlt_cv_compile.sh
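If you want to read the name off directly rather than scan the whole script, one option is to grep the compile script for the converter invocation, since the -p argument there carries the expected input tensor name. This is just a sketch: show_converter_call is a hypothetical helper, and the script path is the one from the quick start download above.

```shell
# Hypothetical helper: print any tlt-converter invocation (plus a few lines
# of context) from a given compile script, so you can read the -p input
# name/shape directly off the matching block.
show_converter_call() {
  grep -n -A 6 "tlt-converter" "$1"
}

# Usage (after downloading and unpacking the quick start from NGC):
# show_converter_call tlt_cv_inference_pipeline_quick_start_v0.3-ga/scripts/tlt_cv_compile.sh
```

The -n flag includes line numbers, which makes it easy to jump to the right spot in the script if you then want to copy the full command.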