TAO Toolkit 4.0 ActionRecognitionNet training error

Please provide the following information when requesting support.

• Hardware (GTX 1050ti)
• Network Type (ActionRecognitionNet)
• TLT Version (4.0.0-pyt)
• Training spec file (train_rgb_3d_64_i3d.yaml, 949 Bytes)
• How to reproduce the issue? (tao action_recognition train -e /workspace/tao-experiments/specs/train_rgb_3d_64_i3d.yaml -r /workspace/tao-experiments/result -k OW41MDI1N21ra3ZvdHRjMjY3ZzY5aTA0ZWs6YjFkYTM3NDEtMGFmYi00NGFkLTgyYTUtYzA2Yzc1ZWMyZTJi model_config.rgb_pretrained_model_path=/workspace/tao-experiments/models/resnet18_3d_of_hmdb5_32_a100.tlt)

After running the above command, I get the error message below:

_pickle.UnpicklingError: invalid load key, '\xbe'.

Please find below the complete error log:

Error executing job with overrides: ['output_dir=/workspace/tao-experiments/result', 'encryption_key=$KEY', 'model_config.rgb_pretrained_model_path=/workspace/tao-experiments/models/resnet18_3d_of_hmdb5_32_a100.tlt']
An error occurred during Hydra's exception formatting:
AssertionError()
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 252, in run_and_report
    assert mdl is not None
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "</opt/conda/lib/python3.8/site-packages/nvidia_tao_pytorch/cv/action_recognition/scripts/train.py>", line 3, in <module>
  File "<frozen cv.action_recognition.scripts.train>", line 81, in <module>
  File "<frozen cv.super_resolution.scripts.configs.hydra_runner>", line 99, in wrapper
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 377, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 294, in run_and_report
    raise ex
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 378, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 111, in run
    _ = ret.return_value
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "<frozen cv.action_recognition.scripts.train>", line 75, in main
  File "<frozen cv.action_recognition.scripts.train>", line 30, in run_experiment
  File "<frozen cv.action_recognition.model.pl_ar_model>", line 33, in __init__
  File "<frozen cv.action_recognition.model.pl_ar_model>", line 39, in _build_model
  File "<frozen cv.action_recognition.model.build_nn_model>", line 76, in build_ar_model
  File "<frozen cv.action_recognition.model.ar_model>", line 97, in get_basemodel3d
  File "<frozen cv.action_recognition.model.ar_model>", line 31, in load_pretrained_weights
  File "<frozen cv.action_recognition.utils.common_utils>", line 29, in patch_decrypt_checkpoint
  File "<frozen core.checkpoint_encryption>", line 30, in decrypt_checkpoint
_pickle.UnpicklingError: invalid load key, '\xbe'.
Execution status: FAIL
2023-08-31 12:28:25,383 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

Is this .tlt model downloaded from NGC or somewhere else?

It's downloaded from NGC only.

So, please find the key on the NGC website. Maybe the key is “nvidia_tao”. Then please use nvidia_tao instead of your own NGC API key. The pretrained .tlt file is encrypted with the model's own key, so decrypting it with a different key produces exactly this UnpicklingError.
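For example, a sketch of the corrected command, assuming the same spec, result, and model paths as in your original command above, with only the key swapped:

tao action_recognition train -e /workspace/tao-experiments/specs/train_rgb_3d_64_i3d.yaml -r /workspace/tao-experiments/result -k nvidia_tao model_config.rgb_pretrained_model_path=/workspace/tao-experiments/models/resnet18_3d_of_hmdb5_32_a100.tlt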

Thanks, @Morganh. I was using my API key as the encryption key.

I downloaded it using the direct download and get the same error. Should I download it using the NGC CLI instead?

@mhmdsab55
Please create a new topic if needed. To download a pretrained model from NGC, you can find the command on the model card when you click “…”; usually a wget command is available.
Alternatively, you can also use the ngc CLI.
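For example, a sketch using the ngc CLI; the org/model path, version tag, and destination below are assumptions and should be taken from the model card:

ngc registry model download-version "nvidia/tao/actionrecognitionnet:trainable_v1.0" --dest /workspace/tao-experiments/models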
