Hi All,
Is it possible to run inference on any of the GPT-2 models on a Nano, or has anyone managed to? I've been fine-tuning via online services, which obviously I couldn't do on a Nano, but I'd very much like to generate output locally; something like the sketch below is what I have in mind.
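For concreteness, this is roughly the kind of generation script I'd want to run on the device. It's a minimal sketch assuming the Hugging Face transformers library and PyTorch are installed; the "gpt2" name refers to the smallest (124M-parameter) checkpoint, which seems the likeliest fit for the Nano's limited memory:

```python
# Minimal GPT-2 text generation sketch (assumes: pip install torch transformers).
# Using the smallest "gpt2" checkpoint, on the assumption that the larger
# variants won't fit in the Nano's memory.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()  # inference only, no fine-tuning on-device

prompt = "The Jetson Nano is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=50,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If something along these lines (or a TensorRT/onnx route instead) is workable on the Nano, I'd love to hear about it.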
Thanks for any insight or guidance.