Problem in "Get an NGC account and API key"

Hello,

I just bought a Jetson Orin Nano kit and was following the tutorial (Beginners - NVIDIA Docs). I have a problem with "Get an NGC account and API key". When I go to NGC and click the TAO container, it brings me to "What is TAO Toolkit" (TAO Toolkit | NVIDIA NGC) instead of asking me to "Sign in to access the PULL feature of this repository". Any suggestions?

Hi @G-Tsuan ,
The NGC setup page may have changed. Please refer to the steps below to generate an API key.

Thank you for the reply. Yes, now I can finally move on and start installing the TAO launcher.

The problem is that when I run quickstart_launcher.sh with "--install" and "--upgrade", it fails with "ERROR: nvidia-docker not found." After that, when I run "tao --help", it shows:

"
Traceback (most recent call last):
  File "/home/gq/miniconda3/bin/tao", line 5, in <module>
    from nvidia_tao_cli.entrypoint.tao_launcher import main
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/entrypoint/tao_launcher.py", line 23, in <module>
    from nvidia_tao_cli.components.instance_handler.builder import get_launcher
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/builder.py", line 24, in <module>
    from nvidia_tao_cli.components.instance_handler.local_instance import LocalInstance
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/local_instance.py", line 29, in <module>
    from nvidia_tao_cli.components.docker_handler.docker_handler import (
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/docker_handler/docker_handler.py", line 29, in <module>
    import docker
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/__init__.py", line 2, in <module>
    from .api import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/__init__.py", line 2, in <module>
    from .client import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/client.py", line 8, in <module>
    import websocket
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/__init__.py", line 23, in <module>
    from ._app import WebSocketApp
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_app.py", line 36, in <module>
    from ._core import WebSocket, getdefaulttimeout
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_core.py", line 34, in <module>
    from ._handshake import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_handshake.py", line 30, in <module>
    from ._http import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_http.py", line 33, in <module>
    from ._url import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_url.py", line 27, in <module>
    from six.moves.urllib.parse import urlparse
ModuleNotFoundError: No module named 'six.moves'
"
Apparently there is something wrong here. My installations of the prerequisite software seem fine. What did I do wrong?

Should I open another topic on this problem?

Please install nvidia-docker.

$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo pkill -SIGHUP dockerd
$ sudo systemctl restart docker.service
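Note that apt-key is deprecated on newer Ubuntu releases. The same repository can be registered with a keyring file instead; a sketch of the equivalent setup (the keyring path and the sed edit are a common convention, not an official requirement):

```shell
# Store the repo key in a dedicated keyring instead of the deprecated apt-key store
curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | \
    sudo gpg --dearmor -o /usr/share/keyrings/nvidia-docker-keyring.gpg

# Register the repo, pointing each deb line at that keyring via signed-by
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-docker-keyring.gpg] https://#' | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
```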

Thank you for the reply.

When I followed your instructions and installed nvidia-docker, I got a warning message:

"Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK"

I hope this is OK. When I tried to install the TAO Toolkit using "bash setup/quickstart_launcher.sh --install", it again complained "ERROR: nvidia-docker not found." but seemed to start the installation anyway. When I ran "bash setup/quickstart_launcher.sh --upgrade", it again complained "ERROR: nvidia-docker not found." but also printed "INFO: TAO Toolkit was found." and "ModuleNotFoundError: No module named 'six.moves'", as shown in the attached file.
output.txt (3.5 KB)


When I ran "tao --help", it again complained "No module named 'six.moves'", as shown in the attached file below.

Am I doing something wrong?

Can you share the full log when you run below?

$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo pkill -SIGHUP dockerd
$ sudo systemctl restart docker.service

More, can you run below and share the log?
$ docker run --rm --runtime=nvidia nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt /bin/bash

The screenshot is attached below:

Thank you again for the assistance.

When I ran "docker run --rm --runtime=nvidia nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt /bin/bash", here is what I got:

gq@gq-desktop:~/tao_tutorials$ docker run --rm --runtime=nvidia nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt /bin/bash
Unable to find image 'nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt' locally
5.5.0-pyt: Pulling from nvidia/tao/tao-toolkit
bccd10f490ab: Pulling fs layer
18ea39f449f2: Pulling fs layer
4f4fb700ef54: Pulling fs layer
a2f9ef49a25e: Waiting
1c7f2e233b5f: Waiting
6d00df402cf8: Waiting
fe4f2f514559: Waiting
999f45ff216e: Waiting
74e0ce3e6cbf: Waiting
dfae8e751f33: Waiting
6b6f1f09276f: Waiting
d9e68ca30619: Waiting
02006324a966: Waiting
3d1a32501ee0: Pulling fs layer
d288c7980a63: Pulling fs layer
45713d170b89: Pulling fs layer
f721866d12a2: Waiting
f25f9e364356: Waiting
3d1a32501ee0: Waiting
03c2cb17f080: Waiting
d288c7980a63: Waiting
1311cb24b935: Waiting
7b7af44a7557: Waiting
fbdb7c06a1a5: Waiting
9cc53d5db2fe: Pull complete
7ac39c150553: Pull complete
02c11b868e4d: Pull complete
40bce9133b72: Pull complete
6ebc30604161: Pull complete

Digest: sha256:d0d24bc5608832246ed6f7f768b8dbbe429e0e41c580582a0b89606bb9e752a9
Status: Downloaded newer image for nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /opt/nvidia/nvidia_entrypoint.sh: exec format error

The TAO docker image is expected to run on x86-based machines.

So I cannot run TAO on the Jetson Orin Nano board?

For training, correct: we cannot run training on Jetson devices.
For inference, however, we can run on both dGPU devices and Jetson devices.

The problem is, I cannot even install TAO correctly. When I run "tao --help", the output is not what the tutorial says it should be.

(base) gq@gq-desktop:~$ tao --help
Traceback (most recent call last):
  File "/home/gq/miniconda3/bin/tao", line 5, in <module>
    from nvidia_tao_cli.entrypoint.tao_launcher import main
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/entrypoint/tao_launcher.py", line 23, in <module>
    from nvidia_tao_cli.components.instance_handler.builder import get_launcher
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/builder.py", line 24, in <module>
    from nvidia_tao_cli.components.instance_handler.local_instance import LocalInstance
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/local_instance.py", line 29, in <module>
    from nvidia_tao_cli.components.docker_handler.docker_handler import (
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/docker_handler/docker_handler.py", line 29, in <module>
    import docker
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/__init__.py", line 2, in <module>
    from .api import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/__init__.py", line 2, in <module>
    from .client import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/client.py", line 8, in <module>
    import websocket
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/__init__.py", line 23, in <module>
    from ._app import WebSocketApp
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_app.py", line 36, in <module>
    from ._core import WebSocket, getdefaulttimeout
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_core.py", line 34, in <module>
    from ._handshake import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_handshake.py", line 30, in <module>
    from ._http import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_http.py", line 33, in <module>
    from ._url import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_url.py", line 27, in <module>
    from six.moves.urllib.parse import urlparse
ModuleNotFoundError: No module named 'six.moves'

As such, I cannot run any sample TAO notebook, nor can I download the pretrained TAO models. How can I run inference with a TAO model then?

The steps mentioned in the TAO notebooks are designed to run on a dGPU machine. Currently you get stuck running the tao-launcher; that's expected because you are running on a Jetson device. As mentioned above, you can run the notebooks on your local dGPU machine or on cloud machines (Running TAO in the Cloud - NVIDIA Docs).

To download a pretrained model, you can use the NGC CLI (https://org.ngc.nvidia.com/setup/installers/cli) in the expected version. In your case, you need the "ARM64 Linux" version.
You can also click the pretrained model and copy the "wget" command.
For example,
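a hedged sketch of an NGC CLI download, assuming the PeopleNet model; the version tag is illustrative, so check the model page on NGC for the exact `model:version` string:

```shell
# Download a pretrained model with the NGC CLI into ./models
# (requires `ngc config set` with your API key first; version tag is illustrative)
ngc registry model download-version "nvidia/tao/peoplenet:trainable_v2.6" --dest ./models
```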

After training, you can export the model to an ONNX file. Then run trtexec to generate a TensorRT engine (Optimizing and Profiling with TensorRT - NVIDIA Docs).
You can run inference with DeepStream or with your own standalone code.
Also, we provide steps to run tao-deploy on Jetson devices. See GitHub - NVIDIA/tao_deploy: Package for deploying deep learning models from TAO Toolkit. Please flash JetPack 5.x.
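As a sketch of the trtexec step (the file names are placeholders, and the trtexec binary location varies by JetPack/TensorRT install):

```shell
# Build a TensorRT engine from an exported ONNX model (paths are placeholders)
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16
```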

Thank you very much for your reply. Now I can download the TAO pretrained models.

However, I still have a problem using TAO's pretrained models. I was following the "Hello AI World" tutorial (jetson-inference/docs/detectnet-tao.md at master · dusty-nv/jetson-inference · GitHub) on the Jetson Orin Nano board. Everything went smoothly until I got to "Using TAO Detection Models". I ran the command

$ detectnet --model=peoplenet pedestrians.mp4 pedestrians_peoplenet.mp4

but it failed, with the output in the attached file.
output.txt (7.0 KB)

Originally I thought I needed to install the TAO Toolkit and download the pretrained models. Now that I can finally download these models, I still cannot make it work. I tried copying the downloaded models into the folder jetson-inference/build/aarch64/bin/networks, but it still didn't work.

What I am trying to do is follow the tutorial and make the TAO detection model work on my Jetson Orin Nano board. Any suggestions? Thank you.

This is not the official release from TAO.

Please use this GitHub project instead: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.

Will the authors of the "Hello AI World" tutorial update the material so that we can use and take advantage of the TAO models on the Jetson Orin Nano? It would be great if they did. I assume they are from NVIDIA.

I guess I will just move on at this time and come back to this problem in the future.

Thank you for your help.

I think you can ask questions in the author's GitHub repo.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This thread was the only mention I found of the error "ModuleNotFoundError: No module named 'six.moves'", which I encountered today. If anyone is struggling with it, even on a regular machine (not an edge device): create a conda environment with Python 3.10, not Python 3.12. Python 3.12 seems to be incompatible with the TAO CLI tool.
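A minimal sketch of that workaround, assuming the launcher is installed from PyPI as `nvidia-tao` (the package name used by the quick-start scripts):

```shell
# Pin the environment to Python 3.10; on 3.12 the launcher's
# websocket/six dependency chain fails importing six.moves
conda create -n tao python=3.10 -y
conda activate tao
pip install nvidia-tao
tao --help
```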

Thanks for sharing.