Hi @rferrandis and @Morganh
I tried with a Python 3.7 environment and TAO==5.0.0, but I am getting the issue below.
!tao model re_identification export \
-e $SPECS_DIR/experiment_market1501.yaml \
-r $RESULTS_DIR/market1501 \
-k $KEY \
export.checkpoint=$RESULTS_DIR/market1501/resnet50_market1501_model.tlt \
export.onnx_file=$RESULTS_DIR/market1501/export/resnet50_market1501_model.onnx
2023-10-05 18:29:30,616 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2023-10-05 18:29:30,660 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 361: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-pyt
2023-10-05 18:29:30,679 [TAO Toolkit] [WARNING] nvidia_tao_cli.components.docker_handler.docker_handler 267:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/smarg/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
2023-10-05 18:29:30,679 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
sys:1: UserWarning:
'experiment_market1501.yaml' is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.
<frozen core.hydra.hydra_runner>:107: UserWarning:
'experiment_market1501.yaml' is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.
/usr/local/lib/python3.8/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
Export results will be saved at: /results/market1501/export
<frozen core.loggers.api_logging>:245: UserWarning: Log file already exists at /results/market1501/export/status.json
Starting Re-identification export
module 'nvidia_tao_pytorch.cv.re_identification.config.default_config' has no attribute 'ReIDTrainConfig'
Error executing job with overrides: ['encryption_key=nvidia_tao', 'export.checkpoint=/results/market1501/resnet50_market1501_model.tlt', 'export.onnx_file=/results/market1501/export/resnet50_market1501_model.onnx']
An error occurred during Hydra's exception formatting:
AssertionError()
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 254, in run_and_report
assert mdl is not None
AssertionError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_pytorch/cv/re_identification/scripts/export.py>", line 3, in <module>
File "<frozen cv.re_identification.scripts.export>", line 150, in <module>
File "<frozen core.hydra.hydra_runner>", line 107, in wrapper
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 389, in _run_hydra
_run_app(
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 452, in _run_app
run_and_report(
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 296, in run_and_report
raise ex
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 213, in run_and_report
return func()
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/utils.py", line 453, in <lambda>
lambda: hydra.run(
File "/usr/local/lib/python3.8/dist-packages/hydra/_internal/hydra.py", line 132, in run
_ = ret.return_value
File "/usr/local/lib/python3.8/dist-packages/hydra/core/utils.py", line 260, in return_value
raise self._return_value
File "/usr/local/lib/python3.8/dist-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "<frozen cv.re_identification.scripts.export>", line 71, in main
File "<frozen cv.re_identification.scripts.export>", line 60, in main
File "<frozen cv.re_identification.scripts.export>", line 123, in run_export
File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/core/saving.py", line 137, in load_from_checkpoint
return _load_from_checkpoint(
File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/core/saving.py", line 158, in _load_from_checkpoint
checkpoint = pl_load(checkpoint_path, map_location=map_location)
File "/usr/local/lib/python3.8/dist-packages/lightning_lite/utilities/cloud_io.py", line 48, in _load
return torch.load(f, map_location=map_location)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 804, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1151, in _load
result = unpickler.load()
File "/usr/lib/python3.8/pickle.py", line 1212, in load
dispatch[key[0]](self)
File "/usr/lib/python3.8/pickle.py", line 1528, in load_global
klass = self.find_class(module, name)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 1144, in find_class
return super().find_class(mod_name, name)
File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/_graveyard/legacy_import_unpickler.py", line 24, in find_class
return super().find_class(new_module, name)
File "/usr/lib/python3.8/pickle.py", line 1583, in find_class
return getattr(sys.modules[module], name)
AttributeError: module 'nvidia_tao_pytorch.cv.re_identification.config.default_config' has no attribute 'ReIDTrainConfig'
Execution status: FAIL
2023-10-05 18:29:44,714 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 337: Stopping container.
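
In case it helps narrow this down, here is a small diagnostic sketch (my own, not part of TAO) that lists the class references stored inside the checkpoint's pickle, so you can see which config classes the .tlt was saved against. It assumes the .tlt is a plain torch zip checkpoint, which the traceback suggests, since torch.load already gets as far as unpickling its data.pkl. The _Stub and _GlobalLister names are placeholders I made up, and the path is the one from the command above. If it runs, I would expect the output to include the old nvidia_tao_pytorch.cv.re_identification.config.default_config.ReIDTrainConfig reference, matching the AttributeError in the log.

# Diagnostic sketch: list the (module, class) references inside the checkpoint pickle
# without importing them. Assumes the .tlt is a standard torch zip archive.
import pickle
import zipfile

CKPT = "/results/market1501/resnet50_market1501_model.tlt"  # path used in the export command


class _Stub:
    """Permissive placeholder returned for every class reference in the pickle."""

    def __init__(self, *args, **kwargs): pass
    def __call__(self, *args, **kwargs): return _Stub()
    def __setitem__(self, key, value): pass
    def __setstate__(self, state): pass
    def append(self, item): pass
    def extend(self, items): pass


class _GlobalLister(pickle.Unpickler):
    """Unpickler that records class references instead of resolving them."""

    def __init__(self, fileobj):
        super().__init__(fileobj)
        self.globals = set()

    def find_class(self, module, name):
        self.globals.add(f"{module}.{name}")
        return _Stub  # never import the real class

    def persistent_load(self, pid):
        return None  # tensor storages are stored via persistent ids; skip them


with zipfile.ZipFile(CKPT) as zf:
    pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
    with zf.open(pkl_name) as fileobj:
        lister = _GlobalLister(fileobj)
        try:
            lister.load()
        except Exception:
            pass  # best effort; we only care about the references collected
        print("\n".join(sorted(lister.globals)))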
Thanks.