[SUPPORT] Workbench Example Project: NIM Anywhere

Hi! This is the support thread for the NIM Anywhere Example Project on GitHub. Feel free to discuss, raise issues, and ask for assistance in this thread.

Please keep discussion in this thread project-related. Any issues with the AI Workbench application itself should be raised as a standalone thread. Thanks!

I found out today that part of the reason the project wasn’t working was that I had an out-of-date config.yaml file. I updated it to the latest from GitHub, updated the config.yaml definitions, and it just worked.

Then it took me a while to realize that although I was running an LLM locally (llm-nim-0), the project wasn’t actually using it because the configured models were all deployed in the NVIDIA cloud. One URL change in config.yaml followed by a container restart, and the LLM was running locally with the other two services still on their remote endpoints.
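For anyone trying to reproduce this, here is a rough sketch of what that one-line change might look like. The key names and endpoint URLs below are illustrative, not the project’s actual schema; check the config.yaml shipped with the current NIM Anywhere release for the real field names.

```yaml
# Hypothetical config.yaml fragment -- key names and URLs are assumptions,
# not the project's actual schema. The idea from the post: point the LLM
# at the local NIM container while leaving the other services on their
# remote NVIDIA cloud endpoints, then restart the container.
llm_model:
  # was a remote NVIDIA cloud endpoint, e.g.:
  # url: "https://integrate.api.nvidia.com/v1"
  url: "http://llm-nim-0:8000/v1"   # local NIM container instead
embedding_model:
  url: "https://integrate.api.nvidia.com/v1"   # left remote
reranking_model:
  url: "https://integrate.api.nvidia.com/v1"   # left remote
```

After saving the change, restarting the chat container picks up the new endpoint; only the LLM moves to the local NIM while the embedding and reranking services stay remote.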

The default NIM LLM model requires 20 GB, so a lot of people will probably just stick with the remote APIs while exploring.

Ah! That’s what’s supposed to happen, but it doesn’t always.

Yes. Things are big, so we try to provide options.

For the moment, I think the best NIM UX is to just self-host them on a dedicated remote and not mess with them locally.

We are still working through the technical approach to this in Projects. Will be great to get your feedback on some updates coming soon.

I made this image while trying to understand the pieces of the NIM Anywhere project. Yeah, I know it will be out of date as NVIDIA continues to iterate.

A short exploration https://youtu.be/05A7oMcx36M

This is great.

Note, it is NVIDIA AI Workbench, not Workstation.

My bad. I thought I caught all those.

Layout change in project spec file causes a dirty file system for Git: the layout of spec.yaml changed in PR 31, and Workbench rewrites that file back in the old format. (Issue #42 · NVIDIA/nim-anywhere · GitHub)