Is it possible to run QLoRA (4-bit quantization) on a Jetson Nano 4GB?

Please, I would like to know if it is possible to run QLoRA 4-bit quantized fine-tuning (Dettmers et al., 2023) on a Jetson Nano 4GB. If not, which other Jetson module would you recommend for that? Thanks.

Hi @srevandros, I believe bitsandbytes (presumably the dependency your QLoRA fine-tuning relies on) only compiles on JetPack 5+, and there is this fork for making it work on aarch64+iGPU -
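For reference, once bitsandbytes does build on your board, a typical QLoRA-style 4-bit load with Hugging Face transformers looks roughly like this. This is a sketch, not something tested on a Nano; the model name is a placeholder, and you would want a very small model on a 4GB board:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Standard QLoRA-style 4-bit setup: NF4 quantization with double
# quantization, compute in bfloat16 (as in Dettmers et al., 2023).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "some-small-slm" is a placeholder; substitute a small model that
# actually fits the Nano's 4GB of shared CPU/GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    "some-small-slm",
    quantization_config=bnb_config,
    device_map="auto",
)
```

You would then attach LoRA adapters (e.g. with the PEFT library) and train only those, which is the part that keeps the optimizer memory small.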

Even with quantization, it would probably have to be quite a small SLM to stand a chance of training in 4GB memory. For LoRA, I run those on AGX Orin and have heard of it working on Orin NX 16GB. I've also heard of people using Google Colab for QLoRA.


Thank you @dusty_nv. Your contributions and knowledge are really valuable.

I will look into the mentioned fork and related work on this. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.