Error while using tlt-infer

I am trying to run inference with the unpruned model that I retrained, but I get the following error whenever I run tlt-infer. The paths are correct, and the model resides in the specified directory.

Command used -
!tlt-infer detectnet_v2 -i $USER_EXPERIMENT_DIR/data/testing/image_2 \
-o $USER_EXPERIMENT_DIR/tlt_infer_testing \
-m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
-cp $SPECS_DIR/detectnet_v2_clusterfile_kitti.json \
-k $KEY \
-lw 3 \
-g 0 \
-bs 64

Output -

Using TensorFlow backend.
2020-01-06 05:45:36,344 [INFO] iva.detectnet_v2.scripts.inference: Overlain images will not be saved in the output path.
2020-01-06 05:45:36,344 [INFO] iva.detectnet_v2.inferencer.build_inferencer: Constructing inferencer
2020-01-06 05:45:36.345145: I tensorflow/core/platform/] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-06 05:45:36.436650: I tensorflow/stream_executor/cuda/] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-06 05:45:36.437045: I tensorflow/compiler/xla/service/] XLA service 0x6b25bf0 executing computations on platform CUDA. Devices:
2020-01-06 05:45:36.437063: I tensorflow/compiler/xla/service/] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-01-06 05:45:36.458594: I tensorflow/core/platform/profile_utils/] CPU Frequency: 4008000000 Hz
2020-01-06 05:45:36.459101: I tensorflow/compiler/xla/service/] XLA service 0x6c41220 executing computations on platform Host. Devices:
2020-01-06 05:45:36.459117: I tensorflow/compiler/xla/service/] StreamExecutor device (0): ,
2020-01-06 05:45:36.459194: I tensorflow/core/common_runtime/gpu/] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 10.22GiB
2020-01-06 05:45:36.459208: I tensorflow/core/common_runtime/gpu/] Adding visible gpu devices: 0
2020-01-06 05:45:36.459649: I tensorflow/core/common_runtime/gpu/] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-06 05:45:36.459661: I tensorflow/core/common_runtime/gpu/] 0
2020-01-06 05:45:36.459667: I tensorflow/core/common_runtime/gpu/] 0: N
2020-01-06 05:45:36.459715: I tensorflow/core/common_runtime/gpu/] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9939 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-01-06 05:45:36,460 [INFO] iva.detectnet_v2.inferencer.tlt_inferencer: Loading model from /workspace/tlt-experiments/experiment_dir_unpruned/weights/resnet18_detector.tlt:
Traceback (most recent call last):
File "/usr/local/bin/tlt-infer", line 8, in
File "./common/", line 35, in main
File "./detectnet_v2/scripts/", line 222, in main
File "./detectnet_v2/scripts/", line 147, in inference_wrapper_batch
File "./detectnet_v2/inferencer/", line 77, in network_init
File "./detectnet_v2/model/", line 96, in model_io
File "./common/", line 154, in decode_to_keras
IOError: Invalid decryption. Unable to open file (File signature not found)

This error most likely results from your key.

Please set $KEY explicitly in the command and retry.
Please also make sure your key is the same one that was used in tlt-train.
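As a quick sanity check, you can hard-code the key in the shell before invoking tlt-infer, which rules out an empty or mismatched environment variable. A minimal sketch — the key value below is a placeholder, and the commented-out command reuses the paths from the original post:

```shell
# Placeholder key for illustration -- replace with the exact key used in tlt-train.
export KEY='YOUR_NGC_KEY'

# Fail fast if the variable is empty; an empty key also produces a decryption error.
if [ -z "$KEY" ]; then
  echo "KEY is not set" >&2
  exit 1
fi
echo "Using key: $KEY"

# Then pass the key explicitly (same paths as in the original command):
# tlt-infer detectnet_v2 -i $USER_EXPERIMENT_DIR/data/testing/image_2 \
#   -o $USER_EXPERIMENT_DIR/tlt_infer_testing \
#   -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
#   -k "$KEY"
```

Quoting "$KEY" also guards against keys containing shell-special characters.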

Also, please refer to the topics below.