Please provide the following information when requesting support.
• Hardware: Orin Nano
• Network Type: Classification
• TLT Version: 5.0.0
Hi,
When I try to run "tao model classification_tf1 --help", I get the following error:
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tao_mounts.json.
Please note that this will be deprecated going forward.
2023-08-02 01:58:53,329 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2023-08-02 01:58:53,442 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2023-08-02 01:58:53,487 [TAO Toolkit] [INFO] root 98: No mount points were found in the /home/nvidia/.tao_mounts.json file.
2023-08-02 01:58:53,488 [TAO Toolkit] [WARNING] nvidia_tao_cli.components.docker_handler.docker_handler 262:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/nvidia/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
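
For context, I have not created a ~/.tao_mounts.json yet. My understanding from the docs is that a minimal one would look roughly like the sketch below, where the source/destination paths and the 1000:1000 UID:GID are placeholders for illustration, not my actual setup:

{
    "Mounts": [
        {
            "source": "/home/nvidia/tao-experiments",
            "destination": "/workspace/tao-experiments"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}

Please correct me if that format is wrong, though I don't think the missing file is the root cause of the error below.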
2023-08-02 01:58:53,488 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
Docker instantiation failed with error: 500 Server Error: Internal Server Error ("failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime (e.g. specify the --runtime=nvidia flag) instead.: unknown")
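
The last line of the error suggests the launcher is starting the container with the --gpus flag rather than the NVIDIA runtime. My guess (please confirm whether this is the right approach) is that I should make the NVIDIA runtime the default on the Jetson by editing /etc/docker/daemon.json along these lines and then restarting Docker:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

sudo systemctl restart docker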
I know I can't run training on Jetson devices, but I can run inference.
How can I solve this problem?
Let me know if any more information is required.
Thanks