Can't start NanoVLM on Orin Nano 8GB

Hello,

I’m following this tutorial for NanoVLM.

I’ve tried running the VILA1.5-3b and Obsidian-3B models (the ones listed as runnable on the Orin Nano 8GB) using the following command, changing the --model line when running Obsidian-3B:

jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.chat --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32

This is my setup:

Package: nvidia-jetpack
Source: nvidia-jetpack (6.1)
Version: 6.1+b123
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.1+b123), nvidia-jetpack-dev (= 6.1+b123)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.1+b123_arm64.deb
Size: 29312
SHA256: b6475a6108aeabc5b16af7c102162b7c46c36361239fef6293535d05ee2c2929
SHA1: f0984a6272c8f3a70ae14cb2ca6716b8c1a09543
MD5sum: a167745e1d88a8d7597454c8003fa9a4
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

During quantization, the process terminates with this error message:

died with <Signals.SIGKILL: 9>

I’ve run into the same error with both models. Can I still run these VLMs on the Orin Nano? TIA for the help

Hi,

It might be running out of memory.
Please refer to these topics and try mounting swap or disabling the GUI to free up enough memory.
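A SIGKILL during quantization is the typical out-of-memory symptom on the 8GB Orin Nano. As a rough sketch (the swap file path and size here are assumptions; pick values that suit your storage), mounting swap and booting to console instead of the desktop looks like:

```shell
# Optionally disable ZRAM so the swap file is used instead of compressed RAM
sudo systemctl disable nvzramconfig

# Create and enable a swap file (path /ssd/16GB.swap and 16GB size are assumptions)
sudo fallocate -l 16G /ssd/16GB.swap
sudo chmod 600 /ssd/16GB.swap
sudo mkswap /ssd/16GB.swap
sudo swapon /ssd/16GB.swap

# Make the swap file persistent across reboots
echo "/ssd/16GB.swap none swap sw 0 0" | sudo tee -a /etc/fstab

# Boot to console (no GUI) to free several hundred MB of RAM, then reboot
sudo systemctl set-default multi-user.target
sudo reboot
```

To restore the desktop later, run `sudo systemctl set-default graphical.target` and reboot.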

Thanks
