NVIDIA LLaVA VILA models

I am running VILA1.5-3b, but when I run the video query script it fails because the local_llm module is missing. Does anyone know which GitHub repository I should clone to get it? I can't seem to find the right one to make my script work. Thanks

Hi,

You can find some samples on the page below:

Thanks.

Hello, thank you for the reply. I am running the VILA1.5-3b model on the Jetson Orin Nano, and I'm currently trying to use @dusty_nv's code from his Live LLaVA demo, but it does not seem to be working; the local_llm module is not found.

Hi,

For VILA, have you tried the server.py included in this container image: dustynv/vila:r36.4.0-cu128-24.04?
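As a rough sketch (the port here is an assumption, not taken from the container docs), once server.py is running inside the container you can first check that its OpenAI-compatible endpoint is reachable before wiring it into your video script:

```python
# Minimal connectivity check for an OpenAI-compatible VLM server.
# ASSUMPTION: server.py exposes the API at http://localhost:9000/v1;
# adjust the host/port to whatever the container prints at startup.
import requests

BASE_URL = "http://localhost:9000/v1"

resp = requests.get(f"{BASE_URL}/models", timeout=10)
resp.raise_for_status()

# List the model names the server reports, so you know what to pass
# as the "model" field in chat completion requests.
for model in resp.json().get("data", []):
    print(model.get("id"))
```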

We now use the chat.completions microservice interface for LLM/VLM instead of library-level access through NanoLLM, served by projects like vLLM and SGLang that keep up with new VLM releases.
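For example, a minimal client against that chat.completions interface could look like the sketch below. The endpoint URL, model name, and image path are assumptions for illustration; substitute the values your server actually reports.

```python
# Query a VILA model through an OpenAI-compatible chat.completions endpoint.
# ASSUMPTIONS: the server listens on http://localhost:9000/v1 and registers
# the model as "Efficient-Large-Model/VILA1.5-3b"; both values are illustrative.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9000/v1", api_key="not-needed")

# Encode a local video frame (or any image) as a data URL for the request.
with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="Efficient-Large-Model/VILA1.5-3b",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this frame."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
```

The same request shape works with other OpenAI-compatible backends such as vLLM or SGLang, so the client side of your script should not need to change if the serving backend does.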

Thanks.
