IOError: Invalid decryption. Unable to open file (File signature not found) tlt-prune command

As the TLT doc mentions,

-k, --key : User specific encoding key to save or load a .tlt model.

Yes, I understand,

but what is the REASON for giving the $KEY argument?

Hi sdeep038,
The reason is to save or load a .tlt model.

Where does it save to, and where does it load from?
On the cloud?
Suppose I have a custom dataset and I want to train a model; will my dataset be shared on the cloud or not?
Because I gave the key at training time…

Thanks,
Deep

No, your training runs in the Docker container locally, not on the cloud. After tlt-train, a .tlt model is generated locally, and this generation needs the key. The .tlt-format model is saved locally, and your training dataset is not shared at all.
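For example, a minimal sketch of how the key is passed at training time (the spec file name and output path are placeholders; -k is the documented key flag, and the other arguments follow the classification example in the TLT docs for your version):

tlt-train classification -e classification_spec.cfg -r /workspace/output -k $KEY

The encrypted .tlt model then appears under the local output directory given with -r; nothing is uploaded anywhere.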

I’m facing the same problem running the “classification” example that comes with the TLT docker.
I regenerated API keys and also made sure the .tlt file exists, but tlt-evaluate and tlt-infer kept giving me this “IOError: Invalid decryption. Unable to open file (File signature not found)”. tlt-train works, though.

Full error message:
Using TensorFlow backend.
2019-12-10 18:09:01.372936: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-10 18:09:01.449477: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-10 18:09:01.450145: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x71110a0 executing computations on platform CUDA. Devices:
2019-12-10 18:09:01.450171: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1060, Compute Capability 6.1
2019-12-10 18:09:01.451985: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-12-10 18:09:01.452506: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x7179e10 executing computations on platform Host. Devices:
2019-12-10 18:09:01.452558: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-12-10 18:09:01.452953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:01:00.0
totalMemory: 5.94GiB freeMemory: 4.99GiB
2019-12-10 18:09:01.452993: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-12-10 18:09:01.453699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-10 18:09:01.453713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-12-10 18:09:01.453743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-12-10 18:09:01.453930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4805 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-12-10 18:09:01,455 [INFO] iva.makenet.scripts.evaluate: Loading experiment spec at /workspace/tlt-experiments/abel/classification_abel/classification_spec.cfg.
2019-12-10 18:09:01,456 [INFO] iva.makenet.spec_handling.spec_loader: Merging specification from /workspace/tlt-experiments/abel/classification_abel/classification_spec.cfg
Traceback (most recent call last):
File "/usr/local/bin/tlt-evaluate", line 10, in <module>
sys.exit(main())
File "./common/magnet_evaluate.py", line 29, in main
File "./makenet/scripts/evaluate.py", line 155, in main
File "./makenet/scripts/evaluate.py", line 97, in run_evaluate
File "./makenet/utils/helper.py", line 46, in model_io
File "./common/utils.py", line 154, in decode_to_keras
IOError: Invalid decryption. Unable to open file (File signature not found)

Hi bob20001,
Please check the items below; a command sketch follows the list.

  1. The $KEY is really set when you run tlt-train
  2. The $KEY is really set when you run tlt-evaluate or tlt-infer
  3. The key is correct when you run tlt-evaluate or tlt-infer
  4. The key is exactly the same as the one you used for tlt-train
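
For example, a minimal sketch of consistent key usage across stages (spec file names and paths are placeholders; -k is the documented key flag, and the other arguments follow the classification example in the docs):

export KEY='<your NGC API key>'
tlt-train classification -e classification_spec.cfg -r /workspace/output -k $KEY
tlt-evaluate classification -e classification_spec.cfg -k $KEY

If the key given to tlt-evaluate or tlt-infer differs from the one tlt-train used, decryption of the .tlt file fails with exactly this “Invalid decryption” error.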

Hi Morganh and sdeep038, thank you for the information! Now it works well.
My problem was caused by failing to update the path to the newly trained .tlt file; as a result, tlt-infer and tlt-evaluate were still trying to load the model trained with the old key.
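
In other words, the key I passed was fine; the spec still pointed at an old model encrypted with a different key. A quick way to check (the spec path is the one from my log above, and model_path is the field name in my classification spec; yours may differ):

grep model_path /workspace/tlt-experiments/abel/classification_abel/classification_spec.cfg

The path it prints should be the .tlt file produced by the latest tlt-train run.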