LLaMA Factory conflicts with torch version

Hi Team,

I was following the playbook (LLaMA Factory | DGX Spark) to fine-tune an LLM with LLaMA Factory on my DGX Spark. The Docker environment seems to be conflicting with the LLaMA Factory installation (in step 4), and here is the error message:

Apparently, CUDA is not activated.
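A quick sanity check from inside the container (a minimal sketch, assuming Python and torch are importable there) confirms whether the installed torch build can see the GPU:

```python
import torch

# Report the installed torch build and the CUDA toolkit it was compiled against.
print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)  # None means a CPU-only wheel was installed

# False here matches the "CUDA is not activated" symptom above.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If `torch.version.cuda` prints None, the version conflict likely pulled in a CPU-only wheel, which would explain CUDA being unavailable.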

Should I install all the dependent packages manually, or is this an internal bug?

Hi!

I’m having the same issue, and I’m also unable to use the LLaMA Factory WebUI. Is there something I’m missing?
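For what it’s worth, the WebUI (started with `llamafactory-cli webui`) is a Gradio app, so when it runs inside Docker the port has to be published and Gradio has to bind to 0.0.0.0 rather than localhost. A minimal reachability probe from the host, assuming the default Gradio port 7860:

```python
import urllib.request

# Probe the WebUI endpoint (assumes the default Gradio port 7860 and that the
# container was started with something like -p 7860:7860; adjust if your
# playbook maps a different port).
try:
    with urllib.request.urlopen("http://127.0.0.1:7860", timeout=5) as resp:
        print("WebUI reachable: HTTP", resp.status)
except OSError as exc:
    print("WebUI not reachable:", exc)
```

If the probe fails even though the CLI is running, setting `GRADIO_SERVER_NAME=0.0.0.0` before launching is a common fix for Gradio apps inside containers (again, an assumption about this particular setup, not a confirmed fix).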

Thank you very much.

Hey folks,

I have moved this post over to the DGX Spark section of the forums to get better visibility on the issue.

Best,

Aharpster