NX YOLO tlt-converter error

• Hardware (Jetson NX, TensorRT 7.1.3, CUDA 10.2, cuDNN 8)

R32 (release), REVISION: 4.3, GCID: 21589087, BOARD: t186ref, EABI: aarch64, DATE: Fri Jun 26 04:34:27 UTC 2020

• Network Type (YOLOv4, YOLOv3)
tlt-converter downloaded from here: https://developer.download.nvidia.cn/assets/TLT/Secure/tlt_7.1.zip?yfMzK6M1oinp8h0y1EzQkDv3lbv9HWctJ_Z3zCqWq2IkgNbSBUpVfQMzISE6LupNS-LJ6bFPK-h1BULoKFD1gHnjAuz9JCrfiWQLnXJdgTEI-AvpaTpEEj66n3w

Models from here:

https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip

The nvinfer_plugin used here:

./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,8x3x544x960,16x3x544x960 yolov4_resnet18.etlt
[ERROR] Number of optimization profiles does not match model input node number.
Aborted (core dumped)

I switched to another NX device and it works; I don't know why.

Please compare the JetPack versions.
On GitHub, the libnvinfer_plugin.so.7.1.3 provided in this folder was built with:

Jetson NX
JetPack 4.4 GA (CUDA 10.2, cuDNN v8.0, TensorRT 7.1.3)

Both devices have the same JetPack version:

R32 (release), REVISION: 4.3, GCID: 21589087, BOARD: t186ref, EABI: aarch64, DATE: Fri Jun 26 04:34:27 UTC 2020

Same CUDA 10.2, cuDNN 8.0, TensorRT 7.1.3.

Please double-check the differences between your NX devices, especially the TRT OSS plugin:
$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
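Beyond listing the symlinks, one way to confirm whether the two devices are actually running the same plugin build is to checksum the library from each. A minimal sketch, assuming you have copied `libnvinfer_plugin.so.7.1.3` from both devices to local paths (the paths and the `compare_plugins` helper are hypothetical, not part of any NVIDIA tool):

```shell
# Compare two copies of libnvinfer_plugin by md5 checksum.
# Prints "identical" if the builds match, "differ" otherwise.
compare_plugins() {
    a=$(md5sum "$1" | cut -d' ' -f1)
    b=$(md5sum "$2" | cut -d' ' -f1)
    if [ "$a" = "$b" ]; then
        echo identical
    else
        echo differ
    fi
}

# Example usage with hypothetical paths for copies pulled from each device:
# compare_plugins /tmp/nxA/libnvinfer_plugin.so.7.1.3 /tmp/nxB/libnvinfer_plugin.so.7.1.3
```

If the checksums differ, the device that fails is likely still using the stock plugin: the YOLOv4/YOLOv3 .etlt models need the TRT OSS build of `libnvinfer_plugin`, so rebuilding it from the TensorRT OSS repo for 7.1 and replacing the library under `/usr/lib/aarch64-linux-gnu/` on the failing device would be the usual fix.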