VILA 1.5 3B on Jetson Orin Nano

Following NanoVLM - NVIDIA Jetson AI Lab

The benchmarks show VILA1.5-3B running on the Jetson Orin Nano.

I am attempting to run the example:

jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.chat --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32

This is on a Jetson Orin Nano Developer Kit (8GB), and I have added 20GB of swap. The command above locks up my device: the terminal output proceeds to the point where the model architecture is printed, then hangs, and the device reboots.

htop shows ~7.15GB/7.44GB RAM used, and only about 2-3GB of the swap.
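For reference, the extra swap was mounted roughly like this. This is a sketch of the usual Jetson swap-file procedure; the path /ssd/20GB.swap is an assumption for illustration, so substitute your own mount point and size:

```shell
# Allocate a 20GB swap file (path is an example; use a drive with space)
sudo fallocate -l 20G /ssd/20GB.swap
sudo chmod 600 /ssd/20GB.swap
sudo mkswap /ssd/20GB.swap
sudo swapon /ssd/20GB.swap

# Make it persistent across reboots
echo "/ssd/20GB.swap none swap sw 0 0" | sudo tee -a /etc/fstab
```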

Looking for help getting this working.

(cc: @dusty_nv )

My screen during lockup:

Hi @Ashis.Ghosh, try pulling the latest nano_llm container image (docker pull dustynv/nano_llm:r36.2.0), and then if your device is still locking up, I would recommend disabling the UI as described here:

Also there are instructions for disabling Z-RAM, if you didn’t already do that when you mounted more swap.
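The two memory-saving steps mentioned above can be sketched as follows. These are the standard Jetson commands for disabling ZRAM and booting to console; run them on the host (not inside the container) and reboot for them to take effect:

```shell
# Disable the ZRAM swap devices that JetPack enables by default,
# so the swap file on disk is used instead
sudo systemctl disable nvzramconfig

# Boot to text console instead of the desktop UI to free RAM
sudo systemctl set-default multi-user.target
sudo reboot

# To restore the desktop later:
#   sudo systemctl set-default graphical.target
```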

That newer nano_llm container image should get you past the current stage, but if the lockup persists, try manually specifying --vision-api=hf on the command line when you start the chat program.
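Putting the suggestion together with the original command, the fallback invocation would look like this (same command as above, with the --vision-api flag appended):

```shell
jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.chat --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32 \
    --vision-api=hf
```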

The updated docker seemed to do the trick! Thanks

