Can we fully deploy LLM models on edge devices without losing precision and performance?
AastaLLL
Hi,
To deploy LLM models, we recommend the Jetson Orin series (AGX Orin / Orin NX / Orin Nano) and Thor.
Thanks.
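As context for the "without losing precision" part of the question: edge LLM deployment typically relies on weight quantization (e.g. int8 or 4-bit) to fit memory and bandwidth budgets, which introduces a small, bounded rounding error rather than full precision. This is a minimal plain-Python sketch of symmetric int8 round-trip quantization to illustrate that tradeoff; it is not tied to any specific Jetson toolchain, and the function names are illustrative only.

```python
# Illustrative sketch: symmetric int8 quantization, the basic idea behind
# the precision/memory tradeoff in edge LLM deployment.

def quantize_int8(values):
    """Map floats to int8 codes [-127, 127] using one shared scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(codes, scale):
    """Recover approximate floats from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.02, -1.3, 0.7, 0.0051, 1.27]   # toy "weights"
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)

# Round-trip error is bounded by half a quantization step (scale / 2),
# which is why int8 inference usually stays close to fp16 quality.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert all(-127 <= c <= 127 for c in codes)
assert max_err <= scale / 2 + 1e-12
```

Real deployments use per-channel scales and calibration data, but the core mechanism (and the reason precision is not fully preserved) is the same.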