Error in TAO-Toolkit while training

I am trying to train ActionRecognitionNet inside the TAO Toolkit container by following the NVIDIA blog for ActionRecognitionNet.

I started the container with the following command on my personal machine:

docker run --net=host -ti -v /var/run/docker.sock:/var/run/docker.sock --gpus=all -e DISPLAY=$DISPLAY nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3

Inside the container, I was able to follow the Jupyter notebook from the blog successfully up to the training step. When I run the following command:

tao action_recognition train \
  -e /workspace/specs/train_rgb_3d_finetune.yaml \
  -r $RESULTS_DIR/rgb_3d_ptm \
  -k $KEY \
  model_config.rgb_pretrained_model_path=$RESULTS_DIR/pretrained/actionrecognitionnet_vtrainable_v1.0/resnet18_3d_rgb_hmdb5_32.tlt \
  model_config.rgb_pretrained_num_classes=5

I get the following error:

Traceback (most recent call last):
  File "/opt/conda/bin/tao", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/tlt/entrypoint/entrypoint.py", line 113, in main
    local_instance.launch_command(
  File "/opt/conda/lib/python3.8/site-packages/tlt/components/instance_handler/local_instance.py", line 296, in launch_command
    docker_logged_in(required_registry=self.task_map[task].docker_registry)
  File "/opt/conda/lib/python3.8/site-packages/tlt/components/instance_handler/utils.py", line 129, in docker_logged_in
    data = load_config_file(docker_config)
  File "/opt/conda/lib/python3.8/site-packages/tlt/components/instance_handler/utils.py", line 64, in load_config_file
    assert os.path.exists(config_path), (
AssertionError: Config path must be a valid unix path. No file found at: /root/.docker/config.json. Did you run docker login?

Note that I can run docker login nvcr.io on my host system, but I cannot do the same inside the container, because when I try I get this error:

root@predator:/workspace/tlt/samples# docker login nvcr.io
bash: docker: command not found
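The traceback shows the launcher looking for Docker credentials at /root/.docker/config.json inside the container. As a sketch of one possible workaround (an assumption on my part, not something I have verified in this setup): since docker login succeeds on the host, the host's credentials file could be bind-mounted into the container at that path when starting it:

```shell
# Sketch (unverified assumption): mount the host's Docker credentials file
# into the container at the path the TAO launcher checks, so its
# docker_logged_in() assertion can find /root/.docker/config.json.
docker run --net=host -ti \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/.docker/config.json:/root/.docker/config.json \
  --gpus=all -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3
```

As the reply below from this thread points out, though, the launcher is not needed at all when you are already inside the container.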

• Hardware Platform (Jetson / GPU): GTX 1050 Ti
• DeepStream Version: 6.0
• NVIDIA GPU Driver Version: 470.42.01
• Issue Type (questions, new requirements, bugs): bug

You did not start training through the tao launcher; you launched the TAO docker container directly, via:

docker run --net=host -ti -v /var/run/docker.sock:/var/run/docker.sock --gpus=all -e DISPLAY=$DISPLAY nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3

Since you are already inside the container, please run action_recognition train directly instead of tao action_recognition train. The tao prefix is only for the launcher running on the host, which pulls and enters the container for you; inside the container, the task entrypoints are invoked directly, which is why the launcher's docker-login check does not apply.
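Concretely, the training command from the question above would be re-run inside the container without the tao prefix, keeping all other arguments the same (this assumes $RESULTS_DIR and $KEY are still set in the container's shell):

```shell
# Inside the container: call the task entrypoint directly, no `tao` launcher prefix.
action_recognition train \
  -e /workspace/specs/train_rgb_3d_finetune.yaml \
  -r $RESULTS_DIR/rgb_3d_ptm \
  -k $KEY \
  model_config.rgb_pretrained_model_path=$RESULTS_DIR/pretrained/actionrecognitionnet_vtrainable_v1.0/resnet18_3d_rgb_hmdb5_32.tlt \
  model_config.rgb_pretrained_num_classes=5
```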

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.