Is 16GB the minimum requirement to run VLM

We are trying to set up VLM on our 8GB Jetson Orin Nano.

We encountered an issue where the VLM model fails to load, so we followed the troubleshooting guide and added swap as recommended:
https://docs.nvidia.com/jetson/jps/inference-services/vlm.html#vlm-fails-to-load

After following the guide, our board essentially froze when attempting to load the model. We waited more than 20 minutes, but nothing happened.

We noticed that the example in the guide references the Jetson Orin Nano 16GB version. Is 16GB the minimum requirement, or is there another guide for running VLM on an 8GB Jetson Orin Nano?

Hi,

Based on the document, VLM requires at least 32GB of memory for quantization.

The Orin NX can work by adding additional swap to reach that requirement:
(16GB physical memory + 8GB default swap + 8GB additional swap)
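As a rough sketch, adding an extra 8GB swap file on Linux typically looks like the following. The file path (`/ssd/8GB.swap`) is only an example; place the file on a drive with enough free space (an NVMe SSD is strongly preferable to the SD card for performance):

```shell
# Create an 8GB swap file (path is an example -- adjust to your system)
sudo fallocate -l 8G /ssd/8GB.swap
sudo chmod 600 /ssd/8GB.swap
sudo mkswap /ssd/8GB.swap
sudo swapon /ssd/8GB.swap

# Verify the new swap total
free -h
```

To make the swap persistent across reboots, add a corresponding entry (`/ssd/8GB.swap none swap sw 0 0`) to `/etc/fstab`. Note that with 16GB physical memory this reaches the 32GB total; on an 8GB board even the same amount of swap would fall short of the requirement.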

Thanks.
