TLT 3.0 on Jetson Nano, can't set org in ngc

I installed TLT 3.0 on my Jetson Nano (JetPack 4.5) and am working through the yolo_v4.ipynb notebook. Since I installed this directly on the Nano, I modified section 2.3 and changed the CLI env to “”.
After this, when I try to run the next command
!ngc registry model list nvidia/tlt_pretrained_object_detection:*
I get the error “Missing org - If apikey is set, org is also required”.
My API key was already set earlier. I tried setting the org by running
ngc configure set --org nvidia
but get “error: unrecognized arguments: --org”
Further running
ngc configure set
does not ask me to set the org after accepting my API key.
Does the arm64 version of the NGC CLI not accept the --org flag?
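In case it helps anyone else stuck here, one possible workaround is a sketch under two assumptions: that the arm64 NGC CLI reads the same ini-style ~/.ngc/config file as the x86 build, and that `org = nvidia` is the key name it expects (both are my guesses, not documented behavior on arm64):

```shell
# Hypothetical workaround: append an org entry to the NGC CLI config
# file by hand, since `ngc configure set` never prompts for one.
# Assumes the arm64 CLI reads the same ini-style ~/.ngc/config file.
CONFIG="${HOME}/.ngc/config"
mkdir -p "${HOME}/.ngc" && touch "$CONFIG"
# Only append if no org line is present already.
grep -q '^org' "$CONFIG" || printf 'org = nvidia\n' >> "$CONFIG"
cat "$CONFIG"
```

Back up the file first in case the arm64 build stores its settings differently.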

I noticed the same issue reported by @adventuredaisy here: TLT 3 jupyter notebooks cant get past a section - #4 by adventuredaisy

Please refer to Troubleshooting Guide — Transfer Learning Toolkit 3.0 documentation


Thank you, using
docker login
ngc config clear
in the notebook fixed the issue.
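For anyone hitting the same error, the sequence looks roughly like this (a sketch: logging in to nvcr.io with the literal username `$oauthtoken` and your NGC API key as the password is NVIDIA's documented registry login; the exact NGC subcommand spelling varies between CLI versions, e.g. `ngc configure set` on older builds):

```shell
# Authenticate Docker against NVIDIA's container registry.
# The username is the literal string "$oauthtoken"; when prompted for
# a password, paste your NGC API key.
docker login nvcr.io --username '$oauthtoken'

# Wipe the stale NGC CLI configuration, then re-run the interactive
# setup and enter the API key again when prompted.
ngc config clear
ngc config set
```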
I guess that doc you linked hasn’t been indexed by Google yet, which is why I couldn’t find it.

Also, I was getting a permission-denied error when attempting to enter my API key as the password for
docker login
since Docker hadn’t been set up to run without sudo, so I had to follow these instructions here as well.

For running without sudo, please see Requirements and Installation — Transfer Learning Toolkit 3.0 documentation

If you have followed the default installation instructions for docker-ce you may need to have sudo access to run docker commands. In order to circumvent this, TLT recommends you to follow these post-installation steps to make sure that the docker commands can be run without sudo.
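Those post-installation steps boil down to Docker's standard "manage Docker as a non-root user" procedure, sketched here from the Docker docs:

```shell
# Create the "docker" group (it may already exist) and add the
# current user to it, so docker commands can run without sudo.
sudo groupadd docker
sudo usermod -aG docker "$USER"

# Log out and back in (or run `newgrp docker` in an interactive shell)
# for the group membership to take effect, then verify:
docker run --rm hello-world
```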


Now I’m starting to get confused. I tried for days to get TLT 2.0 working on my Xavier NX, and I just found out from a moderator in another NVIDIA forum thread that it would only run on x86_64. Two questions: can TLT 2.0 be run on Arm, and can TLT 3.0 be run on Arm?

For training:
The TLT is designed to run on x86 systems with an NVIDIA GPU (e.g., a GPU-powered workstation or a DGX system), or it can be run in any cloud with an NVIDIA GPU.

For inference, models can be deployed on any edge device such as an embedded Jetson platform or in a data center with GPUs like T4 or A100. This page lists recommended system requirements for the installation and use of the TLT.

See Requirements and Installation — Transfer Learning Toolkit 3.0 documentation