[SUPPORT] Workbench Example Project: Mistral Finetune

Hi! This is the support thread for the Mistral Finetune Example Project on GitHub. Any major updates we push to the project will be announced here. Further, feel free to discuss, raise issues, and ask for assistance in this thread.

Please keep discussion in this thread project-related. Any issues with the Workbench application should be raised as a standalone thread. Thanks!

I have installed NVIDIA AI Workbench and am attempting to run the mistral-finetune Jupyter notebook.

The notebook gets stuck on "Kernel Connecting" at Step 3: Load In The Base Model, and then fails with the following error:

ReadTimeoutError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.

The code for that cell is below:

%%capture

# Imports needed for this cell (from the transformers library)
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization with double quantization, so the 7B model fits in GPU memory
bb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

# Downloads the base model weights from the Hugging Face Hub on first run
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bb_config)

I am using Ubuntu 22.04 and NVIDIA AI Workbench 0.28.29-x86_64.
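
For reference, one workaround I am considering is to pre-download the weights outside the notebook cell so the from_pretrained call in Step 3 only has to read from the local Hugging Face cache. This is just a sketch, assuming the standard huggingface_hub snapshot_download API and that the installed version honors the HF_HUB_DOWNLOAD_TIMEOUT environment variable:

import os

# Raise the per-read download timeout; this must be set before huggingface_hub is
# imported, since the library reads the environment variable at import time.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"

from huggingface_hub import snapshot_download

# Fetch the full model repo into the local cache; files already downloaded are skipped.
snapshot_download(repo_id="mistralai/Mistral-7B-v0.1")

If that completes, the from_pretrained call should then load from the cached files instead of hitting cdn-lfs.huggingface.co again.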