Hello,
I am using a Jetson Nano 2GB with JetPack 4.6 and DeepStream (DS) 6.0.
I am trying to generate a TensorRT engine for the facial landmarks model and run it in DeepStream.
This is the command I am using:
./tao-converter /home/user/Downloads/deepstream_tao_apps/models/faciallandmark/faciallandmarks.etlt -k nvidia_tlt -p input_face_images:0,1x1x80x80,4x1x80x80,16x1x80x80 -e ./model.plan -t fp16
[INFO] [MemUsageChange] Init CUDA: CPU +230, GPU +3, now: CPU 248, GPU 1924 (MiB)
[INFO] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 248 MiB, GPU 1918 MiB
[INFO] [MemUsageSnapshot] End constructing builder kernel library: CPU 277 MiB, GPU 1921 MiB
[INFO] ----------------------------------------------------------------
[INFO] Input filename: /tmp/filegGPaFQ
[INFO] ONNX IR version: 0.0.5
[INFO] Opset version: 10
[INFO] Producer name: keras2onnx
[INFO] Producer version: 1.8.1
[INFO] Domain: onnxmltools
[INFO] Model version: 0
[INFO] Doc string:
[INFO] ----------------------------------------------------------------
[WARNING] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[ERROR] Wrong input name specified in -p, please double check.
Aborted (core dumped)
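From the log, the failure seems to come from the input tensor name I pass to -p. I am not sure whether the name should carry the TensorFlow-style ":0" suffix. Should the command instead look something like the following (the name input_face_images without the suffix is only my guess; I have not been able to confirm the actual tensor name)?

```shell
# Hypothetical variant: same min/opt/max shapes, but the input tensor name
# without the ":0" suffix. "input_face_images" is an assumption on my part;
# the real tensor name inside the .etlt may differ.
./tao-converter /home/user/Downloads/deepstream_tao_apps/models/faciallandmark/faciallandmarks.etlt \
  -k nvidia_tlt \
  -p input_face_images,1x1x80x80,4x1x80x80,16x1x80x80 \
  -t fp16 \
  -e ./model.plan
```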
Because of this error I am unable to generate model.plan, so I cannot deploy the model on DS 6.0.
Any help with this matter would be appreciated.