I’m trying to convert the EmotionNet model from an .etlt file to a TensorRT engine file using tlt-converter. The command I used is as follows:
./tlt-converter ./model.etlt \
  -k nvidia_tlt \
  -t fp16 \
  -e ./fp16.deploy.engine \
  -p Input_1,1x1x136x1,1x1x136x1,2x1x136x1
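For context, my understanding of the -p option (for dynamic-shape models) is that it takes the input tensor name followed by min, opt, and max shapes. The placeholder names below are mine, not from the docs:

```shell
# General form (my reading of the tlt-converter usage text):
#   -p <input_name>,<min_shape>,<opt_shape>,<max_shape>
# e.g. for a 1x1x136x1 landmarks input with max batch size 2:
./tlt-converter ./model.etlt \
  -k nvidia_tlt \
  -t fp16 \
  -e ./fp16.deploy.engine \
  -p <input_name>,1x1x136x1,1x1x136x1,2x1x136x1
```

So the only part I seem to be getting wrong is `<input_name>`.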
I got the following error:
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[ERROR] Wrong input name specified in -p, please double check.
Aborted
It looks like the input name is not right. The input name I used, ‘input_1’, works for the Gesture model on NGC. Where can I find the correct input name for the EmotionNet model? I searched the website and read the docs, but I can’t find this parameter anywhere. Please help!