I am trying to deploy Llama 3.2 Vision 11B on Jetson hardware. I assume an Orin NX 16GB should be powerful enough, but the Jetson AI Lab page says at least a Jetson AGX Orin 32GB is needed. Can you confirm whether the Orin NX 16GB can also be used, and explain why it is not listed under 'What you need: Device'? Thanks.
Hi,
Here are some suggestions for the common issues:
1. Performance
Please run the commands below before benchmarking deep learning use cases:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
2. Installation
Installation guide of deep learning frameworks on Jetson:
- TensorFlow: Installing TensorFlow for Jetson Platform - NVIDIA Docs
- PyTorch: Installing PyTorch for Jetson Platform - NVIDIA Docs
We also have containers that have frameworks preinstalled:
Data Science, Machine Learning, AI, HPC Containers | NVIDIA NGC
3. Tutorial
Getting-started deep learning tutorials:
- Jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson
- TensorRT sample: Jetson/L4T/TRT Customized Example - eLinux.org
4. Report issue
If these suggestions don’t help and you want to report an issue to us, please share the model, the commands/steps, and any customized app so we can reproduce it locally.
Thanks!
Hi,
The model requires more memory, so 32GB is recommended.
Thanks.
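To see why 16GB is tight for an 11B-parameter model, a rough back-of-envelope calculation helps: weight memory is roughly parameter count times bytes per parameter, and on Jetson the CPU and GPU share the same RAM. The sketch below is an assumption-laden estimate, not a measurement; it ignores the KV cache, activations, the vision encoder's buffers, and OS overhead, all of which push real usage higher.

```python
# Rule-of-thumb weight-memory estimate for a large model on a
# shared-memory Jetson board. The 11e9 parameter count is approximate,
# and real usage is higher than this (KV cache, activations, OS).

def weight_gib(params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return params * bytes_per_param / 1024**3

params = 11e9  # Llama 3.2 Vision 11B (approximate)

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: ~{weight_gib(params, bpp):.1f} GiB")
```

At fp16 the weights alone are around 20 GiB, already over a 16GB board's total RAM before any runtime overhead, which is consistent with the 32GB recommendation.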
Thanks. We want to use it in our motorcycle, which is very cost-sensitive. The price of the Orin NX 16GB seems acceptable for this application, but the AGX Orin is too expensive. Since 32GB of memory is recommended, do you think it is possible to use the NX 16GB during development and extend the memory later for mass production? Thanks.
Hi,
The model targets high-resolution input, so it requires more resources.
Is LLaVA 13B an option for you?
The model can work on a 16GB memory environment.
Please find the details below:
Thanks.
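The same kind of estimate shows why a 13B model such as LLaVA 13B can fit in 16GB when quantized. The sketch below uses a hypothetical 4 GiB overhead allowance for KV cache, activations, and system memory on a shared-RAM board; the exact overhead depends on context length and runtime, so treat the numbers as illustrative.

```python
# Back-of-envelope check of why a quantized 13B model can fit a 16 GB
# Jetson. overhead_gib is a hypothetical allowance for KV cache,
# activations, and OS memory; real values vary with context length.

GIB = 1024**3

def fits(params: float, bytes_per_param: float,
         budget_gib: float = 16.0, overhead_gib: float = 4.0) -> bool:
    """True if weights plus the assumed overhead fit in the budget."""
    return params * bytes_per_param / GIB + overhead_gib <= budget_gib

print(fits(13e9, 2.0))  # fp16: ~24.2 GiB weights alone -> does not fit
print(fits(13e9, 0.5))  # int4: ~6.1 GiB weights -> fits with headroom
```

So a 4-bit-quantized 13B model leaves meaningful headroom on a 16GB device, whereas fp16 does not come close.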
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.