Jetson Orin Nano fails to quantize NanoVLM model

Following this tutorial (NanoVLM - NVIDIA Jetson AI Lab) on my Jetson Orin Nano (128 GB microSD only):

jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.chat --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32

After downloading the model, the process gets stuck at the following message and the board automatically reboots (my guess: a memory shortage):
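To confirm that memory exhaustion is the cause before the reboot, one generic approach is to log memory usage from a second terminal while the quantization runs. This is just a sketch using standard Linux tools (on Jetson, `sudo tegrastats` gives board-specific RAM/swap readings as well); the sample count and interval are illustrative:

```shell
# Sample RAM usage periodically while the quantization job runs in
# another terminal; redirect to a file to keep the last reading
# before an out-of-memory reboot, e.g. `... >> memlog.txt`.
for i in 1 2 3; do
  free -m | awk 'NR==2 {print "MemUsedMB:", $3, "MemAvailMB:", $NF}'
  sleep 1
done
```

If available memory drops toward zero right before the "set new param" step, that points to swap (or a smaller context length) as the fix.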

Start computing and quantizing weights… This may take a while
get old param : 1% 2/197
set new param 0% 1/***

Please help with this problem.

Hi,

It looks like you have hit a similar issue to the topic below:

Please try the suggestion and test it again.

Thanks.

Problem solved by the first suggestion in that reply, but I had to put the swap on the microSD card only (not as slow as I expected). I followed these instructions instead (How to Increase Swap Space on Jetson Modules?).
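For anyone landing here later, the usual way to add swap on a file (rather than a dedicated partition) looks roughly like the sketch below. The path and size are illustrative assumptions, not values from the linked instructions — adjust them to your card's free space, and note that swapping to microSD is slow and wears the card:

```shell
# Create an 8 GB swap file on the microSD root filesystem
# (path /swapfile and size 8G are illustrative; pick your own).
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile        # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify the new swap is active:
free -h
```

With the extra swap in place, the "computing and quantizing weights" step can spill to disk instead of triggering an OOM reboot.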

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.