Hi! This is the support thread for the NIM Anywhere Example Project on GitHub. Feel free to discuss, raise issues, and ask for assistance in this thread.
Please keep discussion in this thread project-related. Any issues with the AI Workbench application itself should be raised as a standalone thread. Thanks!
I found out today that part of the reason the project wasn't working was that I had an out-of-date config.yaml file. I updated it to the latest version from GitHub, updated the config.yaml definitions, and it just worked.
Then it took me a while to realize that although I was running an LLM locally (llm-nim-0), the project wasn't actually using it, because the configured models were all pointed at endpoints deployed in the NVIDIA cloud. One URL change in config.yaml followed by a container restart, and the LLM was running locally with the other two services still on their remote endpoints.
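For anyone making the same switch, here's roughly what that edit looked like for me. This is a sketch from memory: the key names and model IDs below are illustrative assumptions and may not match the current config.yaml schema in the repo, so check the project's own config file for the exact fields.

```yaml
# Illustrative sketch only -- field names and model IDs are assumptions,
# not copied from the repo's current config.yaml.
llm_model:
  name: meta/llama3-8b-instruct              # hypothetical default model name
  # url: https://integrate.api.nvidia.com/v1 # remote NVIDIA cloud endpoint (before)
  url: http://llm-nim-0:8000/v1              # point at the locally running NIM container (after)

embedding_model:
  url: https://integrate.api.nvidia.com/v1   # left on the remote endpoint

reranking_model:
  url: https://integrate.api.nvidia.com/v1   # left on the remote endpoint
```

After saving the change, restart the affected container so it picks up the new endpoint; the other two services keep using the remote APIs untouched.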
The default NIM LLM model requires 20 GB, so a lot of people will probably just stick with the remote APIs while exploring.
I made this image while trying to understand the pieces of the NIM Anywhere project. Yeah, I know it will be out of date as NVIDIA continues to iterate.