[SUPPORT] Workbench Example Project: Llama 3 Finetune

Hi! This is the support thread for the Llama 3 8B Finetuning Example Project on GitHub. Any major updates we push to the project will be announced here. Further, feel free to discuss, raise issues, and ask for assistance in this thread.

Please keep discussion in this thread project-related. Any issues with the Workbench application should be raised as a standalone thread. Thanks!

(8/26/2024) Updated readme with deeplinking

(10/02) Updated deep link landing page

I have been working on an innovative AI agent that interacts with conceptual realities in its internal framework, treating abstract ideas as real within its own system. This approach allows the AI to engage in recursive self-reasoning and handle complex conceptual modeling in a way that goes beyond traditional AI systems… and I don't know who needs to confirm it, but I feel like I need someone to see it now… but I have no idea who to talk to.

Hi, is there an explanation for the Host Mount Configuration? What should the Source Directory be for a Windows host? I think it is covered in the documentation, but rather ambiguously.

E.g., when cloning or creating the project, a source directory (for a local device) is initialized (something like /host/workbench/nvidia-workbench/…/).
Why not set it by default in the Environment configuration? As it stands, the path has to be remembered and copy-pasted by hand for some reason.

The reason we prompt the user to configure a host mount is to ensure the saved, finetuned model can live on the host machine the project is running on.

These models are often quite large, taking up several GB of space, so keeping them inside the project container can be impractical. Progress is lost, for example, when the container is stopped.

Once mounted to the underlying host machine, however, the notebook auto-saves outputs to the host and it becomes easy to access the results of your finetuning workflow even after your project container is shut down.
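To illustrate why the mount matters, here is a minimal sketch of how a notebook might build its save path on the mounted directory. The mount target path and directory names below are hypothetical, assumed for illustration; the real target is whatever your project's mount configuration specifies, and the source directory is whatever location you pick on your host.

```python
import os

# Hypothetical mount target inside the project container (assumption --
# check your project's Environment tab for the real target path).
# On a Windows host, the *source* side of the mount might look like:
#   C:\Users\<you>\nvidia-workbench-models
MOUNT_TARGET = "/project/models"

def finetune_output_dir(run_name: str) -> str:
    """Build a save path on the host-mounted directory so finetuned
    weights survive container shutdown."""
    return os.path.join(MOUNT_TARGET, run_name)

save_dir = finetune_output_dir("llama3-8b-finetuned")
# A notebook would then pass save_dir to whatever save routine it uses,
# e.g. a Hugging Face model's save_pretrained(save_dir).
print(save_dir)  # /project/models/llama3-8b-finetuned
```

Because `save_dir` resolves onto the host mount, anything written there persists on the host filesystem even after the container stops.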

This is a runtime configuration, and since every system (and user) is different, we prompt the user for their desired location to save the finetuned model files. Ultimately, this is the design choice we made when building this example, but you can also delete the mount from the Environment tab if you would like.

As for messaging, I've updated the mount description with examples to make the desired path clearer for the user. This information already exists in the project README, but agreed, it should also be surfaced to the user while working in AIWB.