[SUPPORT] Workbench Example Project: Phi-3 Finetune

Hi! This is the support thread for the Phi-3 Mini Finetuning Example Project on GitHub. Any major updates we push to the project will be announced here. Further, feel free to discuss, raise issues, and ask for assistance in this thread.

Please keep discussion in this thread project-related. Any issues with the Workbench application should be raised as a standalone thread. Thanks!

(8/26/2024) Updated README with deep linking

(10/02) Updated deep link landing page

Issues running phi-3-mini-finetune on the DGX Spark:

  • the specified version of bitsandbytes won’t load (no such version exists for the Spark)

    • I removed the version pin

    • I reinstalled bitsandbytes without a version number, cleared the cache, and rebuilt

    • the project then built successfully; is that OK?

  • the notebook mostly works on my DGX Spark, but it fails in the third code cell of the LoRA section, which blocks the rest of the notebook from executing:

    AttributeError: 'MatmulLtState' object has no attribute 'memory_efficient_backward'

  • in several places I get the warning:

    NVIDIA GB10 with CUDA capability sm_121 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_80 sm_86 sm_90 compute_90.

Should I be concerned about these, and is the project actually running at full speed on the GPU?
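For anyone debugging the same setup, here is a small diagnostic sketch. It is an assumption-laden helper, not part of the example project: it just prints the installed versions of the relevant packages and checks whether your PyTorch build actually ships a compiled kernel for the GPU's compute capability (the warning above suggests it does not for sm_121). The package names are the usual PyPI ones; `torch` is imported only if present.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(pkgs=("torch", "bitsandbytes", "transformers", "peft")):
    """Report the installed version (or None) for each package."""
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out

def cuda_build_report():
    """Compare the GPU's compute capability with the architectures this
    PyTorch build was compiled for. Returns None if torch/CUDA is absent."""
    try:
        import torch
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    archs = torch.cuda.get_arch_list()         # e.g. ['sm_80', 'sm_86', 'sm_90']
    cap = torch.cuda.get_device_capability(0)  # e.g. (12, 1) for sm_121
    native = f"sm_{cap[0]}{cap[1]}" in archs
    return {"arch_list": archs, "capability": cap, "native_kernel": native}

print(installed_versions())
print(cuda_build_report())
```

If `native_kernel` comes back False, the installed PyTorch wheel was not built for that GPU; kernels may then fail or be JIT-compiled from bundled PTX, which can be noticeably slower. In that case a wheel or container built with support for your architecture would be the thing to look for.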
