rally12
September 16, 2023, 5:48am
Hi,
I managed to run an LLM on the Jetson Orin Nano board. Not out of the box, obviously, but with only minimal changes to open-source components.
The model is the phi-1_5 described in arXiv:2309.05463 .
Do you have guidelines or recommendations for writing a blog article about this, with a focus on how to avoid license/EULA and other legal pitfalls?
Nice! I welcome you to join us in this running thread about LLMs on Jetson that we have going over here:
NVIDIA Jetson Orin hardware enables local LLM execution in a small form factor, suitable for running 13B and 70B parameter Llama 2 models.
In this article we will demonstrate how to run variants of the recently released Llama 2 LLM from Meta AI on NVIDIA Jetson hardware. What is amazing is how simple it is to get up and running. Reproducing this content is more or less as easy as spinning up a Docker container that provides all of the required software prerequisites for you. Yo…
You should be able to run quantized llama-2-7b on Orin Nano without much issue. Some more resources that may be of interest to you:
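To see why a quantized 7B model is a comfortable fit for the Orin Nano (8 GB of shared memory on the common developer kit) while an fp16 one is not, a quick back-of-envelope calculation helps. This is a rough sketch of my own, ignoring KV cache and runtime overhead; the helper function is illustrative, not part of any library:

```python
def weight_memory_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights in GiB.

    Ignores KV cache, activations, and framework overhead, so treat
    the result as a lower bound on what the board actually needs.
    """
    return n_params * bits_per_weight / 8 / 1024**3

# Llama-2-7B weight footprint at different precisions
q4 = weight_memory_gib(7e9, 4)     # 4-bit quantized: ~3.3 GiB
fp16 = weight_memory_gib(7e9, 16)  # half precision:  ~13.0 GiB

print(f"7B @ 4-bit: {q4:.1f} GiB")
print(f"7B @ fp16 : {fp16:.1f} GiB")
```

So the 4-bit weights leave a few GiB of headroom on an 8 GB board, while fp16 weights alone exceed it, which is why quantized builds are the way to go on Orin Nano.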
I’m not a lawyer and this is not legal advice, but personally I would recommend using models like Llama 2 that come with a commercial/permissive license.
rally12
September 22, 2023, 4:07pm
Thank you for the warm welcome :)
system
Closed
October 9, 2023, 5:01am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.