Unable to connect to my local location in the workbench

Hello! I am trying to access my location in the workbench, but I keep getting the error: "Trouble Connecting to Your Current Location".
If I inspect the log, there are these errors:

{"level":"info","gitServerUrl":"https://github.com","status":"","time":"2024-08-24T12:03:05+02:00","message":"Detected git server with valid credentials during auto git author configuration."}
{"level":"error","error":"couldn't query NGC for Bearer Token: 401 Unauthorized","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-24T16:30:47+02:00","message":"couldn't query NGC for Bearer Token: 401 Unauthorized"}
{"level":"error","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-25T12:49:48+02:00","message":"Workbench service not reachable."}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:00:19+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:04:56+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:05:27+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:05:50+02:00","message":"NGC personal keys are not supported at this time"}

Does anyone know how to solve this? I am sure I'm using a normal API key from NGC, not a "personal key".

Hi, let me get a better understanding of what is going on.

Are you trying to connect to your local location? Or is this a remote location?

NGC personal keys typically start with nvapi-.... Make sure you have connected the NGC API key (e.g. a long string of alphanumeric characters) and not the personal key.
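
A quick way to sanity-check which kind of key you have on hand (just a sketch; it assumes you have put the key into a shell variable, here called NGC_API_KEY):

case "$NGC_API_KEY" in
  nvapi-*) echo "This looks like a personal key (not currently supported)" ;;
  *) echo "This looks like a legacy NGC API key" ;;
esac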

hey, I am facing this issue:
$ flask run --host=0.0.0.0 --port=8080

 * Serving Flask app 'app.py'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://172.18.0.3:8080
Press CTRL+C to quit

So when I open the IP address in my browser, it's simply not opening on my local machine.

I checked with curl to verify whether it's some other issue, but curl returned the right output. I don't know what the issue is anymore.
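
For reference, the curl check was something along the lines of the following, using one of the addresses printed by Flask above:

curl http://172.18.0.3:8080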

It is my local location, not a remote one.

I get this message:

Trouble Connecting to Your Current Location

We’ve encountered an issue accessing your current location. Ensure your network is active and stable to continue without interruption. For more information view the error log file.

and in the log file:

{"level":"info","gitServerUrl":"https://github.com","status":"","time":"2024-08-24T12:03:05+02:00","message":"Detected git server with valid credentials during auto git author configuration."}
{"level":"error","error":"couldn't query NGC for Bearer Token: 401 Unauthorized","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-24T16:30:47+02:00","message":"couldn't query NGC for Bearer Token: 401 Unauthorized"}
{"level":"error","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-25T12:49:48+02:00","message":"Workbench service not reachable."}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:00:19+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:04:48+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:04:56+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:05:27+02:00","message":"NGC personal keys are not supported at this time"}
{"level":"error","error":"NGC personal keys are not supported at this time","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-30T17:05:50+02:00","message":"NGC personal keys are not supported at this time"}

I used a new standard NGC API key, not a personal one.
As you can see from the dates in the log file, at some point yesterday I tried to use a personal key, just to try everything. But after some tries, I returned to using the standard NGC API key.

I noticed that this log file has not changed since yesterday, no updates at all.

I have done a connection stability test and the results are 36-40 ms ping, with great stability overall.

Is there any way we can try to solve this? I tried to reinstall after deleting the hidden config folder, but it still gives the same problem.
I want to work on the hackathon! <3

I found this thread discussing one of the errors in my log file: {"level":"error","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-08-25T12:49:48+02:00","message":"Workbench service not reachable."}

I am not actually using podman, I am using Docker, so I can't do podman images.
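
I suppose the Docker-side equivalents of those checks would be:

docker images
docker ps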

I created a support bundle:

ai-workbench-support-bundle.zip (180.3 KB)

Hope this can help to solve the problem.

So I tried to read the other log files:

In the workbench.log I found:

{"level":"info","time":"2024-09-01T14:23:37+02:00","message":"starting host introspection"}
{"level":"error","error":"exit status 1","command":"dpkg-query -Wf '${db:Status-Status}\n' \"nvidia-container-toolkit\"","stderr":"dpkg-query: no packages found matching nvidia-container-toolkit\n","stdout":"","time":"2024-09-01T14:23:38+02:00","message":"error when calling bash command"}
{"level":"info","state":{"HostOS":"ubuntu","HostIsWSL2":true,"HostArch":"amd64","HostOSVersion":"22.04","WSL2Info":{"WindowsProduct":"Microsoft Windows 11 Pro","WindowsVersion":"10.0.22631","WindowsWSLDistroName":"NVIDIA-Workbench"},"GitVersion":"2.34.1","GitLFSVersion":"3.5.1","HomebrewVersion":"","PodmanVersion":"","PodmanMachineExists":false,"DockerClientVersion":"27.1.1","DockerServerVersion":"27.1.1","DockerIsRunning":true,"DockerDesktopInstalled":true,"NeedDockerGroup":false,"ContainerToolkitVersion":"","ContainerToolkitCdiIsConfigured":true,"GPUDriverMissing":false,"GPUDevices":[{"Index":0,"UUID":"GPU-a8a5bbdf-989d-96c2-cacd-39c80ebafc09","Name":"NVIDIA GeForce RTX 3080 Laptop GPU","DriverVersion":"560.81"}]},"time":"2024-09-01T14:23:38+02:00","message":"host introspection complete"}
{"level":"info","time":"2024-09-01T14:23:38+02:00","message":"starting host introspection"}
{"level":"error","error":"exit status 1","command":"dpkg-query -Wf '${db:Status-Status}\n' \"nvidia-container-toolkit\"","stderr":"dpkg-query: no packages found matching nvidia-container-toolkit\n","stdout":"","time":"2024-09-01T14:23:39+02:00","message":"error when calling bash command"}
{"level":"info","state":{"HostOS":"ubuntu","HostIsWSL2":true,"HostArch":"amd64","HostOSVersion":"22.04","WSL2Info":{"WindowsProduct":"Microsoft Windows 11 Pro","WindowsVersion":"10.0.22631","WindowsWSLDistroName":"NVIDIA-Workbench"},"GitVersion":"2.34.1","GitLFSVersion":"3.5.1","HomebrewVersion":"","PodmanVersion":"","PodmanMachineExists":false,"DockerClientVersion":"27.1.1","DockerServerVersion":"27.1.1","DockerIsRunning":true,"DockerDesktopInstalled":true,"NeedDockerGroup":false,"ContainerToolkitVersion":"","ContainerToolkitCdiIsConfigured":true,"GPUDriverMissing":false,"GPUDevices":[{"Index":0,"UUID":"GPU-a8a5bbdf-989d-96c2-cacd-39c80ebafc09","Name":"NVIDIA GeForce RTX 3080 Laptop GPU","DriverVersion":"560.81"}]},"time":"2024-09-01T14:23:39+02:00","message":"host introspection complete"}
{"level":"info","version":"0.42.12","builtOn":"Thu Aug 15 00:25:03 UTC 2024","channel":"stable","container.buildtime":"docker","container.runtime":"docker","time":"2024-09-01T14:23:39+02:00","message":"Starting AI Workbench server"}
{"level":"info","time":"2024-09-01T14:23:39+02:00","message":"running project container cleanup"}
{"level":"info","time":"2024-09-01T14:23:39+02:00","message":"project container cleanup complete"}
{"level":"error","error":"exit status 1","stderr":"Error response from daemon: No such container: workbench-proxy\n","time":"2024-09-01T14:23:39+02:00","message":"error stopping traefik container"}
{"level":"warn","error":"error stopping traefik container: exit status 1: Error response from daemon: No such container: workbench-proxy\n","time":"2024-09-01T14:23:39+02:00","message":"An error occurred while trying to clean up proxy container and resources. This is be expected if the proxy was gracefully cleaned up during service shutdown"}
{"level":"info","time":"2024-09-01T14:23:39+02:00","message":"server running. waiting for shutdown signal"}
{"level":"info","time":"2024/09/01 - 14:23:40","status":200,"latency":"761.935µs","client-ip":"127.0.0.1","method":"GET","path":"/v1/version","time":"2024-09-01T14:23:40+02:00","message":"GIN-Request"}
{"level":"info","time":"2024/09/01 - 14:23:40","status":200,"latency":"13.787µs","client-ip":"127.0.0.1","method":"GET","path":"/v1/version","time":"2024-09-01T14:23:40+02:00","message":"GIN-Request"}
{"level":"info","time":"2024/09/01 - 14:23:40","status":200,"latency":"877.093µs","client-ip":"127.0.0.1","method":"POST","path":"/v1/credentials","time":"2024-09-01T14:23:40+02:00","message":"GIN-Request"}
{"level":"info","time":"2024/09/01 - 14:23:44","status":200,"latency":"29.127µs","client-ip":"127.0.0.1","method":"GET","path":"/v1/version","time":"2024-09-01T14:23:44+02:00","message":"GIN-Request"}
{"level":"info","time":"2024-09-01T14:23:50+02:00","message":"Caching base environment metadata in the dataloader"}

And in the traefik.log I found:

time="2024-09-01T11:43:21Z" level=info msg="Traefik version 2.10.7 built on 2023-12-06T15:54:59Z"
time="2024-09-01T11:43:21Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://doc.traefik.io/traefik/contributing/data-collection/\n"
time="2024-09-01T11:43:21Z" level=info msg="Starting provider aggregator aggregator.ProviderAggregator"
time="2024-09-01T11:43:21Z" level=info msg="Starting provider *file.Provider"
time="2024-09-01T11:43:21Z" level=info msg="Starting provider *traefik.Provider"
time="2024-09-01T11:43:21Z" level=info msg="Starting provider *acme.ChallengeTLSALPN"
time="2024-09-01T11:43:49Z" level=info msg="I have to go..."
time="2024-09-01T11:43:49Z" level=info msg="Stopping server gracefully"
time="2024-09-01T11:43:49Z" level=error msg="accept tcp [::]:10000: use of closed network connection" entryPointName=web
time="2024-09-01T11:43:49Z" level=error msg="close tcp [::]:10000: use of closed network connection" entryPointName=web
time="2024-09-01T11:43:49Z" level=info msg="Server stopped"
time="2024-09-01T11:43:49Z" level=info msg="Shutting down"
time="2024-09-01T12:23:40Z" level=info msg="Traefik version 2.10.7 built on 2023-12-06T15:54:59Z"
time="2024-09-01T12:23:40Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://doc.traefik.io/traefik/contributing/data-collection/\n"
time="2024-09-01T12:23:40Z" level=info msg="Starting provider aggregator aggregator.ProviderAggregator"
time="2024-09-01T12:23:40Z" level=info msg="Starting provider *file.Provider"
time="2024-09-01T12:23:40Z" level=info msg="Starting provider *traefik.Provider"
time="2024-09-01T12:23:40Z" level=info msg="Starting provider *acme.ChallengeTLSALPN"

Hi Nic,

Thanks for sending. We will look and get back to you.

Tyler

The logs show that a number of different things were tried, so the install may have gotten into an inconsistent state.

A couple questions:

  • What version of Docker Desktop are you running?
  • What happens if you update to the latest version? Latest version info is here.

Hi mrsubhanshud13, thanks for reaching out! Let’s track this issue in this separate thread.

Hello,
I have tried both the version installed by the Workbench installer and the updated version.

What version are you on? And what output do you see when you run docker ps inside the AI Workbench distro?
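
For example, from PowerShell you should be able to open a shell into the distro and run the check with something like:

wsl -d NVIDIA-Workbench
docker ps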

Hi, just taking a look through the logs, e.g. desktop_app_main.txt. Can you confirm your system language?

wsl --status

Distribuzione predefinita: NVIDIA-Workbench
Versione predefinita: 2

(that is, "Default Distribution" and "Default Version" in Italian)

If your system language is not English, the installer may not be able to detect that WSL is properly installed when it is. Our recommendation is to switch your system language to English and restart the installation process.

This check runs every time you open NVIDIA AI Workbench, so you may need to be in English mode to use AI Workbench.

Hello, I am facing the same issue and I tried running this in my command prompt. It gives me this output; what should I do next?

> Default Distribution: NVIDIA-Workbench
> Default Version: 2

I have reinstalled the Workbench as mentioned in the error message.

This is my error log:
{"level":"info","gitServerUrl":"https://github.com","status":"","time":"2024-09-10T01:35:15+08:00","message":"Detected git server with valid credentials during auto git author configuration."}
{"level":"error","error":"exit status 1","cmd":"/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool","stderr":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","time":"2024-09-10T01:39:23+08:00","message":"RunCommand failed."}
{"level":"error","error":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-09-10T01:39:23+08:00","message":"an error occurred while executing '/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool'"}
{"level":"error","error":"exit status 1","cmd":"/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool","stderr":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","time":"2024-09-10T01:40:58+08:00","message":"RunCommand failed."}
{"level":"error","error":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-09-10T01:40:58+08:00","message":"an error occurred while executing '/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool'"}
{"level":"error","error":"exit status 1","cmd":"/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool","stderr":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","time":"2024-09-10T01:43:56+08:00","message":"RunCommand failed."}
{"level":"error","error":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-09-10T01:43:56+08:00","message":"an error occurred while executing '/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool'"}
{"level":"error","error":"exit status 1","cmd":"/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool","stderr":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","time":"2024-09-10T02:02:56+08:00","message":"RunCommand failed."}
{"level":"error","error":"container tool failed to reach ready state. try again: failed to reach podman, even though it appears to be ready. Verify podman is functioning properly and you can run podman ps from your terminal.: bash: line 1: podman: command not found\n\n\n","isWrapped":false,"isInteractive":false,"engine":"json","detail":"detail","time":"2024-09-10T02:02:56+08:00","message":"an error occurred while executing '/home/workbench/.nvwb/bin/wb-svc -quiet start-container-tool'"}

Thank you

Hi, thanks for reaching out. I believe your podman installation may be in a strange state.

Can you open the NVIDIA-Workbench WSL distro in a terminal and run podman ps? If working properly, it should list out the running containers, if any, on your machine.
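
For example, from a PowerShell prompt:

wsl -d NVIDIA-Workbench
podman ps
# if podman reports "command not found", check whether the binary is on the PATH at all
which podman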

If not, I recall it gives you a command to run to resolve or clean up the state; if so, try running that command and restarting AI Workbench to see if that works for you.

If not, let me know here with any updated logs. Hope this helps!

Hi, thank you. I restarted my device and it works now.

However, I am trying to clone the hybrid RAG example and the build has been stuck at 1/19 for 2 hours now.


I tried cloning my fork and the original repo owned by NVIDIA, but both face the same issue.

This is my output:
#6 sha256:d94e2f7f0510b102387fba62ac6b900827601bd8ef72d67c8291f9b66c71888f 5.24MB / 10.36MB 50.0s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 60.82MB / 136.02MB 50.2s

#6 sha256:d94e2f7f0510b102387fba62ac6b900827601bd8ef72d67c8291f9b66c71888f 6.29MB / 10.36MB 50.2s

#6 sha256:d94e2f7f0510b102387fba62ac6b900827601bd8ef72d67c8291f9b66c71888f 8.39MB / 10.36MB 50.4s

#6 sha256:d94e2f7f0510b102387fba62ac6b900827601bd8ef72d67c8291f9b66c71888f 9.44MB / 10.36MB 50.5s

#6 sha256:d94e2f7f0510b102387fba62ac6b900827601bd8ef72d67c8291f9b66c71888f 10.36MB / 10.36MB 50.5s done

#6 sha256:369a2aaf445d33c31e803510171c98c663143376b3a3cbd34a02e92913517eaf 0B / 1.06GB 50.7s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 69.21MB / 136.02MB 51.0s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 76.55MB / 136.02MB 51.7s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 83.89MB / 136.02MB 52.7s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 91.23MB / 136.02MB 53.6s

#6 sha256:718bd449ab3313959ebb09da77970980ebdf586ddd8fe7403efe3c04a9e4e4a4 440.40MB / 3.66GB 54.4s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 98.57MB / 136.02MB 54.7s

#6 sha256:1a7c71aa66c0078f538f3814f0cda266b9b71e9901c73f76d29bf4910e7a4077 105.91MB / 136.02MB 55.6s

#6 sha256:369a2aaf445d33c31e803510171c98c663143376b3a3cbd34a02e92913517eaf 34.60MB / 1.06GB 55.8s

Is there anything I missed? Or is this related to my local location error from before? Thank you.

This initial step is just pulling the base container down from the container registry. Containers can be quite large, so you need to make sure you have a strong and consistent network connection to download these containers.

This step is functionally equivalent to a docker pull <container_image>, so if running this command outside of AI Workbench is also slow, then you may need to adjust or strengthen your network connection, if possible.
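
For example, a rough way to check whether the slowdown is on the network side rather than the Workbench side (the image name here is just a placeholder; substitute the base image shown in your project's build output):

time docker pull <container_image>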

But as long as you are seeing progress in the logs, the containers should be getting pulled properly, even on a slow connection.