GPT-2 Inference possible on a Nano?

Hi All,

Is it possible to run inference with any of the GPT-2 models on a Nano, or has anyone managed to do so? I’ve been fine-tuning via online services, which obviously I couldn’t do on a Nano, but I would very much like to generate output locally.

Thanks for any insight or guidance.


Sorry, we don’t have much experience with the GPT-2 model.
However, some of our users have successfully deployed the model on the Nano: