How can I know whether my $KEY is correct or not?
See “Get an NGC API key” in the TLT doc at https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#installing_magnet_topic . Make sure your $KEY is your NGC API key.
If you already know your key, you can also type the $KEY explicitly in your command.
I have checked the key; it is correct.
Could you please check that /workspace/output/Faster-rcnn/frcnn_kitti.epoch11.tlt is available?
And how about running against other .tlt files under that folder (/workspace/output/Faster-rcnn)?
Yes, it is available.
Could you please double-check your $KEY? Did you export the correct NGC API key into $KEY?
On my side, if I type a wrong key into the command, I can reproduce the same error as yours.
So there is a high possibility that something is wrong in your $KEY.
Here is an easy way to avoid mistyping $KEY.
If your $KEY is abcdefg, you can type it explicitly in your command as below.
tlt-prune -pm /workspace/output/Faster-rcnn/frcnn_kitti.epoch11.tlt -o /workspace/output -pth 0.7 -k abcdefg
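Alternatively, the key can be exported once per shell session and referenced in every TLT command. This is a sketch assuming the same hypothetical key abcdefg; the `command -v` guard just keeps the snippet harmless outside the TLT container:

```shell
# Export the key once, then reuse it in every command.
export KEY=abcdefg   # hypothetical value; substitute your real NGC API key

# Guard: only invoke tlt-prune where the TLT CLI is actually installed.
if command -v tlt-prune >/dev/null 2>&1; then
    tlt-prune -pm /workspace/output/Faster-rcnn/frcnn_kitti.epoch11.tlt \
              -o /workspace/output -pth 0.7 -k "$KEY"
else
    echo "tlt-prune not found; run this inside the TLT container"
fi
```

This avoids the common failure mode where $KEY is unset or holds a stale value in the current shell.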
Can I create a key multiple times?
Because I forgot the key I used when training the model.
You can log in at https://ngc.nvidia.com/setup/api-key for help.
Click Generate API Key to create your own API Key. If you have forgotten or lost your API Key, you can come back to that page to create a new one at any time.
I am still facing the issue…
Could you attach your frcnn_kitti.epoch11.tlt here?
I will test it on my side with my key.
As we synced offline, please check your NGC key setting in the training spec. Retrain with the new key you got from NGC.
Also please check if other notebooks work well.
Sorry for the late reply.
After using the new key, we are able to prune the model.
But I have one question:
Question: why is the key needed at training time?
Waiting for your reply…
You can search for “$API_KEY” in the TLT doc.
$API_KEY is needed in the training spec, tlt-converter, etc.
I understood, but why did we give $API_KEY at training time?
That’s my question.
I understand that we have to give the API_KEY when downloading a model from the cloud,
but why is the key needed at training time?
As the TLT doc mentions:
-k, --key: user-specific encoding key to save or load a .tlt model.
Yes, I understood,
but what is the REASON for the $KEY argument?
The reason is to save or load a .tlt model.
Where is it saved, and where is it loaded from?
Suppose I have a custom dataset and I want to train a model; will my dataset be shared to the cloud or not?
Because I gave the key at training time…
No, your training runs locally in the docker container, not on the cloud. After tlt-train, a .tlt model is generated locally, and generating it needs the key. The .tlt model is saved locally, and your training dataset is not shared at all.
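To make the “save or load” role of the key concrete, here is a toy illustration using openssl. This is NOT TLT’s real on-disk format or cipher, just the same idea: the key encrypts a local file when saving and decrypts it when loading, and nothing leaves the machine. It assumes openssl 1.1.1+ (for `-pbkdf2`) and the hypothetical keys abcdefg and wrong-key:

```shell
# Toy demo only, NOT TLT's real scheme: a key-encrypted local "model" file.
workdir=$(mktemp -d)
printf 'fake model weights' > "$workdir/model.bin"

# Saving with the training key (conceptually what happens at tlt-train -k abcdefg):
openssl enc -aes-256-cbc -pbkdf2 -k abcdefg \
    -in "$workdir/model.bin" -out "$workdir/model.tlt"

# Loading with the SAME key recovers the weights:
openssl enc -d -aes-256-cbc -pbkdf2 -k abcdefg \
    -in "$workdir/model.tlt" -out "$workdir/good.bin"
cmp -s "$workdir/model.bin" "$workdir/good.bin" && echo "same key: model recovered"

# Loading with a DIFFERENT key fails or yields garbage, like "Invalid decryption":
openssl enc -d -aes-256-cbc -pbkdf2 -k wrong-key \
    -in "$workdir/model.tlt" -out "$workdir/bad.bin" 2>/dev/null || true
cmp -s "$workdir/model.bin" "$workdir/bad.bin" || echo "wrong key: invalid decryption"
```

This is also why the earlier prune error appears when $KEY differs from the key used at training time: the tool simply cannot decrypt the .tlt file.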
I’m facing the same problem running the “classification” example that comes with the TLT docker.
I regenerated API keys and also made sure the .tlt file exists. But tlt-evaluate and tlt-infer keep giving me “IOError: Invalid decryption. Unable to open file (File signature not found)”. tlt-train works, though.
Full error message:
Using TensorFlow backend.
2019-12-10 18:09:01.372936: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-10 18:09:01.449477: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-10 18:09:01.450145: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x71110a0 executing computations on platform CUDA. Devices:
2019-12-10 18:09:01.450171: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1060, Compute Capability 6.1
2019-12-10 18:09:01.451985: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-12-10 18:09:01.452506: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x7179e10 executing computations on platform Host. Devices:
2019-12-10 18:09:01.452558: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-12-10 18:09:01.452953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
totalMemory: 5.94GiB freeMemory: 4.99GiB
2019-12-10 18:09:01.452993: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-12-10 18:09:01.453699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-10 18:09:01.453713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-12-10 18:09:01.453743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-12-10 18:09:01.453930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4805 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-12-10 18:09:01,455 [INFO] iva.makenet.scripts.evaluate: Loading experiment spec at /workspace/tlt-experiments/abel/classification_abel/classification_spec.cfg.
2019-12-10 18:09:01,456 [INFO] iva.makenet.spec_handling.spec_loader: Merging specification from /workspace/tlt-experiments/abel/classification_abel/classification_spec.cfg
Traceback (most recent call last):
File "/usr/local/bin/tlt-evaluate", line 10, in
File "./common/magnet_evaluate.py", line 29, in main
File "./makenet/scripts/evaluate.py", line 155, in main
File "./makenet/scripts/evaluate.py", line 97, in run_evaluate
File "./makenet/utils/helper.py", line 46, in model_io
File "./common/utils.py", line 154, in decode_to_keras
IOError: Invalid decryption. Unable to open file (File signature not found)
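The trailing “File signature not found” part of that error comes from the HDF5 layer: assuming the .tlt decodes to a Keras/HDF5 model (as `decode_to_keras` in the traceback suggests), the decrypted payload should begin with the 8-byte HDF5 magic number, and a wrong key turns it into garbage that fails this check. A small sketch of that signature test, with hypothetical file names:

```shell
# Check whether a file starts with the 8-byte HDF5 magic number
# (0x89 "HDF" CR LF 0x1A LF).
hdf5_signature_ok() {
    [ "$(head -c 8 "$1" | od -An -tx1 | tr -d ' \n')" = "894844460d0a1a0a" ]
}

workdir=$(mktemp -d)
# Hypothetical payloads: one with a valid HDF5 header, one decrypted with a wrong key.
printf '\211HDF\r\n\032\nweights...' > "$workdir/decoded_good.bin"
printf 'garbage from a wrong key'    > "$workdir/decoded_bad.bin"

hdf5_signature_ok "$workdir/decoded_good.bin" && echo "decoded_good.bin: signature found"
hdf5_signature_ok "$workdir/decoded_bad.bin"  || echo "decoded_bad.bin: File signature not found"
```

So this error is consistent with a key mismatch rather than a missing or corrupted file: re-export the exact key used at training time and retry tlt-evaluate and tlt-infer.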