I just bought a Jetson Orin Nano kit and was following the tutorial (Beginners - NVIDIA Docs). I am having a problem with "Get an NGC account and API key". When I go to NGC and click the TAO container, it brings me to "What is TAO Toolkit" (TAO Toolkit | NVIDIA NGC) instead of asking me to "Sign in to access the PULL feature of this repository". Any suggestions?
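For context, the usual way to use an NGC API key for pulling containers is to log Docker in to nvcr.io; a minimal sketch, assuming the key has already been generated under Setup > API Key on ngc.nvidia.com:

  # username is the literal string $oauthtoken; the password is the NGC API key
  docker login nvcr.io
  # optionally configure the NGC CLI with the same key for model downloads later
  ngc config set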
Thank you for the reply. Yes, now I can finally move on and start to install the TAO launcher.
The problem is that, when I run quickstart_launcher.sh with "--install" and "--upgrade", it complains "ERROR: nvidia-docker not found." After that, when I invoke "tao --help", it shows
"
Traceback (most recent call last):
  File "/home/gq/miniconda3/bin/tao", line 5, in <module>
    from nvidia_tao_cli.entrypoint.tao_launcher import main
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/entrypoint/tao_launcher.py", line 23, in <module>
    from nvidia_tao_cli.components.instance_handler.builder import get_launcher
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/builder.py", line 24, in <module>
    from nvidia_tao_cli.components.instance_handler.local_instance import LocalInstance
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/local_instance.py", line 29, in <module>
    from nvidia_tao_cli.components.docker_handler.docker_handler import (
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/docker_handler/docker_handler.py", line 29, in <module>
    import docker
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/__init__.py", line 2, in <module>
    from .api import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/__init__.py", line 2, in <module>
    from .client import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/client.py", line 8, in <module>
    import websocket
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/__init__.py", line 23, in <module>
    from ._app import WebSocketApp
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_app.py", line 36, in <module>
    from ._core import WebSocket, getdefaulttimeout
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_core.py", line 34, in <module>
    from ._handshake import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_handshake.py", line 30, in <module>
    from ._http import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_http.py", line 33, in <module>
    from ._url import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_url.py", line 27, in <module>
    from six.moves.urllib.parse import urlparse
ModuleNotFoundError: No module named 'six.moves'
"
Apparently there is something wrong here. My installations of the prerequisite software seem fine. What did I do wrong?
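As a guess from the traceback alone: six.moves is provided by the third-party "six" package, so installing it into the same Miniconda environment that owns the tao entrypoint may get past this particular import error. A minimal sketch, not a confirmed fix:

  # install six into the environment the tao script runs from (a guess, not a confirmed fix)
  /home/gq/miniconda3/bin/pip install six
  # retry the launcher
  tao --help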
When I followed your instructions and installed nvidia-docker, I got a warning message:
“Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK”
I hope this is OK. When I tried to install the TAO Toolkit using "bash setup/quickstart_launcher.sh --install", it again complained "ERROR: nvidia-docker not found." but seemed to start the installation anyway. When I ran "bash setup/quickstart_launcher.sh --upgrade", it again complained "ERROR: nvidia-docker not found." but also reported "INFO: TAO Toolkit was found." and "ModuleNotFoundError: No module named 'six.moves'", as shown in the attached file. output.txt (3.5 KB)
Digest: sha256:d0d24bc5608832246ed6f7f768b8dbbe429e0e41c580582a0b89606bb9e752a9
Status: Downloaded newer image for nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt
WARNING: The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
exec /opt/nvidia/nvidia_entrypoint.sh: exec format error
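That last line is the key symptom: the tao-toolkit container pulled from NGC is built for linux/amd64, while the Orin's CPU is ARM, so the container's entrypoint cannot execute and fails with "exec format error". A quick way to confirm the mismatch on the device, using the image tag from the log above:

  # the Jetson host reports an ARM architecture
  uname -m    # expected: aarch64
  # the pulled TAO image reports the platform it was built for
  docker image inspect nvcr.io/nvidia/tao/tao-toolkit:5.5.0-pyt --format '{{.Os}}/{{.Architecture}}'    # expected: linux/amd64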
The problem is that I cannot even install TAO correctly: when I run "tao --help", the output is not what it is supposed to be according to the tutorial.
(base) gq@gq-desktop:~$ tao --help
Traceback (most recent call last):
  File "/home/gq/miniconda3/bin/tao", line 5, in <module>
    from nvidia_tao_cli.entrypoint.tao_launcher import main
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/entrypoint/tao_launcher.py", line 23, in <module>
    from nvidia_tao_cli.components.instance_handler.builder import get_launcher
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/builder.py", line 24, in <module>
    from nvidia_tao_cli.components.instance_handler.local_instance import LocalInstance
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/instance_handler/local_instance.py", line 29, in <module>
    from nvidia_tao_cli.components.docker_handler.docker_handler import (
  File "/home/gq/miniconda3/lib/python3.12/site-packages/nvidia_tao_cli/components/docker_handler/docker_handler.py", line 29, in <module>
    import docker
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/__init__.py", line 2, in <module>
    from .api import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/__init__.py", line 2, in <module>
    from .client import APIClient
  File "/home/gq/miniconda3/lib/python3.12/site-packages/docker/api/client.py", line 8, in <module>
    import websocket
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/__init__.py", line 23, in <module>
    from ._app import WebSocketApp
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_app.py", line 36, in <module>
    from ._core import WebSocket, getdefaulttimeout
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_core.py", line 34, in <module>
    from ._handshake import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_handshake.py", line 30, in <module>
    from ._http import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_http.py", line 33, in <module>
    from ._url import *
  File "/home/gq/miniconda3/lib/python3.12/site-packages/websocket/_url.py", line 27, in <module>
    from six.moves.urllib.parse import urlparse
ModuleNotFoundError: No module named 'six.moves'
As such, I cannot run any sample TAO notebook, nor can I download the pre-trained TAO models. How can I run inference with a TAO model then?
The steps mentioned in the TAO notebooks are designed to run on a dGPU machine. Currently, you are stuck at running the tao launcher; that is expected because you are running on a Jetson device. As mentioned above, you can run the notebooks on a local dGPU machine or a cloud machine (Running TAO in the Cloud - NVIDIA Docs).
To download a pretrained model, you can use the NGC CLI (https://org.ngc.nvidia.com/setup/installers/cli) of the expected version. In your case, you need to download the "ARM64 Linux" version.
You can also click the pretrained model and copy the “wget” command.
For example,
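a typical NGC CLI download command looks roughly like the sketch below; the model path and version are placeholders, to be replaced with the values shown on the model's NGC page:

  # <model_name> and <version> are placeholders -- copy the real path from the model's NGC page
  ngc registry model download-version "nvidia/tao/<model_name>:<version>" --dest ./models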
I tried that, but it failed, as shown in the attached file. output.txt (7.0 KB)
Originally I thought I needed to install the TAO Toolkit and download the pre-trained models. Now that I can finally download these models, I still cannot make them work. I tried copying the downloaded models to the folder jetson-inference/build/aarch64/bin/networks, but it still didn't work.
What I am trying to do is follow the tutorial and make a TAO detection model work on my Jetson Orin Nano board. Any suggestions? Thank you.
Will the authors of the "Hello AI World" tutorial update the material so that we can use and take advantage of the TAO models on the Jetson Orin Nano? It would be great if they did. I assume they are from NVIDIA.
I guess I will just move on at this time and come back to this problem in the future.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
This was the only mention I found here of the error "ModuleNotFoundError: No module named 'six.moves'", which I encountered today. If anyone is struggling with it, even on a regular machine (not an edge device), create a conda environment with Python 3.10, not Python 3.12. Python 3.12 seems to be incompatible with the TAO CLI tool.
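A minimal sketch of that workaround, assuming the launcher is installed from PyPI as nvidia-tao and conda is already set up (the environment name is arbitrary):

  # create and activate a fresh Python 3.10 environment
  conda create -n tao_py310 python=3.10 -y
  conda activate tao_py310
  # reinstall the TAO launcher into it (PyPI package name assumed: nvidia-tao)
  pip install nvidia-tao
  # the entrypoint should now import without the six.moves error
  tao --help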