Unable to run TAO Toolkit on a Jetson device

Please provide the following information when requesting support.

• Hardware: Orin Nano
• Network Type: Classification
• TLT Version: 5.0.0

When trying to run "tao model classification_tf1 --help", I get the following error.

~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tao_mounts.json.
Please note that this will be deprecated going forward.
2023-08-02 01:58:53,329 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2023-08-02 01:58:53,442 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
2023-08-02 01:58:53,487 [TAO Toolkit] [INFO] root 98: No mount points were found in the /home/nvidia/.tao_mounts.json file.
2023-08-02 01:58:53,488 [TAO Toolkit] [WARNING] nvidia_tao_cli.components.docker_handler.docker_handler 262:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/nvidia/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
2023-08-02 01:58:53,488 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 275: Printing tty value True
Docker instantiation failed with error: 500 Server Error: Internal Server Error ("failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime (e.g. specify the --runtime=nvidia flag) instead.: unknown")

I know I can't run training on Jetson devices, but I can run inference.
How can I solve this problem?
Let me know if any more information is required.

Yes, for training, please use a dGPU machine or the cloud.

Sorry, this is my first time using TAO Toolkit.
I get the same error when I use "tao model classification_tf1 inference --help".
Can you tell me how to run inference on a Jetson device?

For classification, officially, users can deploy the .etlt file (in TAO 5.0, the .onnx file) with deepstream-app. Please refer to https://docs.nvidia.com/tao/tao-toolkit/text/ds_tao/classification_ds.html
An example is mentioned in
Issue with image classification tutorial and testing with deepstream-app - #21 by Morganh
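
As a sketch, a minimal nvinfer config for deploying a TAO classification model with deepstream-app might look like the fragment below. The file names, input dimensions, and preprocessing values here are placeholders, not values from this thread; verify them against your own training spec and the linked docs.

```ini
[property]
gpu-id=0
# Caffe-style preprocessing (assumption: match your training spec)
net-scale-factor=1.0
offsets=103.939;116.779;123.68
# 1 = BGR input
model-color-format=1
# TAO 5.0 exports .onnx; older versions use tlt-encoded-model/.etlt
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 2 = FP16
network-mode=2
# 1 = classifier
network-type=1
# 1 = run on full frame (primary mode)
process-mode=1
classifier-threshold=0.2
```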

For how to generate a TensorRT engine:
In TAO 5.0, please generate the TensorRT engine based on the .onnx file. Refer to TRTEXEC with Classification TF1/TF2/PyT - NVIDIA Docs.
In 4.0 or earlier versions, please use tao-converter to generate the TensorRT engine based on the .etlt file. Refer to TAO Converter with Classification TF1/TF2 - NVIDIA Docs.
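
For TAO 5.0, a trtexec invocation could look roughly like the sketch below. The file names and the input tensor name/shape are assumptions for illustration; check the actual input name and dimensions in your export log or by inspecting the .onnx file.

```shell
# Sketch: build an FP16 engine from a TAO-exported .onnx classification model.
# "input_1" and 224x224 are placeholders - confirm against your model.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --fp16 \
        --shapes=input_1:1x3x224x224
```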

For classification, there are also the following ways to run inference with a TensorRT engine.
Please see Resnet18 trained with TAO has low accuracy on some classes after exporting to TensorRT and serving with Triton - #2 by Morganh
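
As the linked topic shows, a common cause of poor accuracy at inference time is a preprocessing mismatch between training and deployment. A minimal NumPy sketch of the caffe-style preprocessing commonly used by TF1 classification models is below; the BGR channel order, mean values, and input size are assumptions, so verify them against your own training spec.

```python
import numpy as np

def preprocess(image_rgb: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Caffe-style preprocessing sketch for a TF1 classification model.

    Assumptions (verify against your training spec):
    - input is an HWC uint8 RGB image already resized to `size`
    - the model expects BGR channel order, NCHW layout, and
      per-channel mean subtraction with no extra scaling
    """
    assert image_rgb.shape[:2] == size
    x = image_rgb.astype(np.float32)[:, :, ::-1]           # RGB -> BGR
    x -= np.array([103.939, 116.779, 123.68], np.float32)  # per-channel BGR means
    x = x.transpose(2, 0, 1)                               # HWC -> CHW
    return x[np.newaxis, ...]                              # add batch dimension

# Example: run on a dummy all-zero 224x224 image
dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 224, 224)
```

The resulting (1, 3, H, W) float32 array is what you would copy into the TensorRT engine's input binding (or send to Triton, as in the linked topic).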

