We are trying to set up the VLM service on our 8GB Jetson Orin Nano.
We ran into an issue where the VLM model fails to load, so we followed the troubleshooting guide and added swap as recommended:
https://docs.nvidia.com/jetson/jps/inference-services/vlm.html#vlm-fails-to-load
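For reference, this is roughly what we ran to add the swap (adapted from the guide; the swap file size and path are our own choices, so they may differ from the example there):

```bash
# Disable zram so the on-disk swap file is used instead
sudo systemctl disable nvzramconfig

# Create and enable a swap file (size and location chosen by us)
sudo fallocate -l 16G /mnt/16GB.swap
sudo chmod 600 /mnt/16GB.swap
sudo mkswap /mnt/16GB.swap
sudo swapon /mnt/16GB.swap

# Make the swap persistent across reboots
echo "/mnt/16GB.swap none swap sw 0 0" | sudo tee -a /etc/fstab
```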
After adding the swap, the board essentially froze while attempting to load the model. We waited for more than 20 minutes, but nothing happened.
We noticed that the example in the guide references a 16GB Jetson Orin module. Is 16GB the minimum requirement, or is there another guide for running VLM on an 8GB Jetson Orin Nano?