Trying to run this project: GitHub - NVIDIA/workbench-llamafactory (an NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using Llamafactory).
The container build for the project fails with a 403 while pulling the base image from nvcr.io. Normally there is a secrets section in the Environment tab where you can put your keys, but I don't see that for this project, and it isn't declared in .project/spec.yaml either. There is a Hugging Face secret to get access to the Llama model.
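My understanding is that other Workbench example projects declare their secrets in spec.yaml, which then surface in the Environment tab. The snippet below is only a guess at what I expected to find here; the field and variable names are from memory, not from this project:

    environment:
      secrets:
        - variable: HUGGING_FACE_HUB_TOKEN   # guessed name, not taken from this project
          description: Hugging Face token for downloading the gated Llama weights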
The instructions I followed are here: RTX-AI-Toolkit/tutorial-llama3-finetune.md at main · NVIDIA/RTX-AI-Toolkit · GitHub
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Containerfile
#1 transferring dockerfile: 1.56kB done
#1 DONE 0.0s
#2 [auth] nvidia/ai-workbench/python-cuda122:pull token for nvcr.io
#2 DONE 0.0s
#3 [internal] load metadata for nvcr.io/nvidia/ai-workbench/python-cuda122:1.0.3
#3 ERROR: failed to authorize: failed to fetch oauth token: unexpected status from GET request to https://nvcr.io/proxy_auth?scope=repository%3Anvidia%2Fai-workbench%2Fpython-cuda122%3Apull: 403 Forbidden
------
 > [internal] load metadata for nvcr.io/nvidia/ai-workbench/python-cuda122:1.0.3:
--------------------
1 | >>> FROM nvcr.io/nvidia/ai-workbench/python-cuda122:1.0.3
2 |
3 | WORKDIR /opt/project/build/
--------------------
ERROR: failed to solve: nvcr.io/nvidia/ai-workbench/python-cuda122:1.0.3: failed to resolve source metadata for nvcr.io/nvidia/ai-workbench/python-cuda122:1.0.3: failed to authorize: failed to fetch oauth token: unexpected status from GET request to https://nvcr.io/proxy_auth?scope=repository%3Anvidia%2Fai-workbench%2Fpython-cuda122%3Apull: 403 Forbidden
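The failing step is just the metadata fetch for the base image, so I assume (not yet confirmed) that the same 403 would show up with a plain pull outside of Workbench:

    docker pull nvcr.io/nvidia/ai-workbench/python-cuda122:1.0.3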
Is there a CLI way to create the registry credentials on disk so the container build can authenticate to nvcr.io?
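For example, on a plain Docker setup my understanding is that nvcr.io pulls are authorized with an NGC API key via docker login, using the literal string $oauthtoken as the username, and that the credential is stored by default in ~/.docker/config.json. I'm not sure whether the Workbench build would pick that file up, so this is only what I would expect to try (NGC_API_KEY is a placeholder environment variable holding my key):

    # Username for nvcr.io is literally "$oauthtoken"; the password is the NGC API key.
    echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

    # By default this writes the credential to ~/.docker/config.json, which a plain
    # docker build would then use when pulling the base image.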