Live LLaVA web UI doesn't show the NanoDB web UI

I tried the Live LLaVA tutorial following the instructions below, but it does not show the NanoDB UI.

How can I integrate NanoDB into Live LLaVA properly?

My LLaVA web UI is shown below.

In the same environment, NanoDB works fine.

Also, "chrome://flags/#enable-webrtc-hide-local-ips-with-mdns" is disabled.

I use the following equipment:
Jetson AGX Orin 32GB Devkit / JetPack 6.0 DP / 2.0 TB NVMe SSD / Logicool C615 HD Webcam

Best regards,

Hi @masaki_yamagishi, what is the command line you used to start the VideoQuery agent? Did you give it your path to --nanodb? It would seem that it is not finding it or is creating an empty database. Can you post the terminal log from the program?

It's good that you confirmed NanoDB to be working independently on its own server. Also, the WebRTC settings only impact the live video view, not the NanoDB widget.
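
As a quick way to rule out a missing or empty database, you could check the path inside the container before launching the agent. This is just a rough sketch in Python; the directory layout it assumes is an illustration, not NanoDB's documented on-disk format:

# Minimal sanity check for the --nanodb path (illustrative only; the
# actual on-disk layout of a NanoDB database may differ).
from pathlib import Path

db_path = Path("/data/nanodb/coco/2017")

if not db_path.is_dir():
    print(f"{db_path} not found - the agent may create an empty database here")
else:
    # List whatever files are present so an empty directory is obvious.
    contents = sorted(p.name for p in db_path.iterdir())
    print(f"{db_path} contains {len(contents)} entries: {contents[:5]}")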

Hi, @dusty_nv

Thank you for your reply.

I executed the command below.

jetson-containers run $(autotag nano_llm) \
python3 -m nano_llm.agents.video_query --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32 \
    --video-input /dev/video0 \
    --video-output webrtc://@:8554/output \
    --nanodb /data/nanodb/coco/2017

The output is attached.
live_llava.txt (110.5 KB)
The following error occurred repeatedly:

Traceback (most recent call last):
  File "/opt/NanoLLM/nano_llm/plugin.py", line 201, in run
    self.dispatch(input, **kwargs)
  File "/opt/NanoLLM/nano_llm/plugin.py", line 216, in dispatch
    outputs = self.process(input, **kwargs)
  File "/opt/NanoLLM/nano_llm/plugins/nanodb.py", line 52, in process
    indexes, similarity = self.db.search(input, k=k)
  File "/opt/nanodb/nanodb/nanodb.py", line 61, in search
    embedding = self.embed(query)
  File "/opt/nanodb/nanodb/nanodb.py", line 244, in embed
    raise RuntimeError("nanodb was created without an embedding model")
RuntimeError: nanodb was created without an embedding model

NanoDB was set up and working correctly on its own, so I don't know why these errors occurred…

Best regards,

Ok gotcha @masaki_yamagishi, I realize what is going on now: VILA-2.7B used the same openai/clip-vit-large-patch14-336 vision model that the NanoDB was created with; however, VILA1.5-3B uses a SigLIP vision encoder that was custom-trained, and the embedding dimensions are different.

I will have to do some rework of NanoDB to support arbitrary embedding models, and the database will need to be re-indexed with the particular model the VLM is using (should you want to reuse the embeddings and not have to recalculate them).

In the near term, I will add a flag to the VideoQuery agent to disable reusing the embeddings, so that NanoDB goes back to calculating them with the original CLIP model. Until then, unfortunately, I would go back to using VILA-2.7B if you require the live NanoDB integration, sorry about that.
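
To make the mismatch concrete, here is a rough sketch of the kind of compatibility check involved. The function name and the dimensions below are illustrative assumptions, not NanoDB's actual code:

# Hypothetical illustration of why VILA1.5-3B embeddings cannot be reused
# against an index built with CLIP (names and dimensions are assumptions).

def can_reuse_embeddings(vlm_encoder, vlm_dim, db_encoder, db_dim):
    # Reuse is only safe when both the encoder family and the
    # embedding dimension match what the database was built with.
    return vlm_encoder == db_encoder and vlm_dim == db_dim

# The NanoDB index was built with openai/clip-vit-large-patch14-336,
# while VILA1.5-3B ships a custom-trained SigLIP encoder:
print(can_reuse_embeddings("siglip", 1152, "clip-vit-large-patch14-336", 768))
# False -> the database must be re-indexed with the new encoder, or NanoDB
# must fall back to computing embeddings with the original CLIP model.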


Hi, @dusty_nv

Thank you for your reply.

You mean this issue comes from the difference between the models used by NanoDB and Live LLaVA, and if I want to run Live LLaVA with NanoDB, I should use the Efficient-Large-Model/VILA-2.7b network for now, right?

I tried that with the command below, but the NanoDB UI still did not appear.

jetson-containers run $(autotag nano_llm) \
python3 -m nano_llm.agents.video_query --api=mlc \
    --model Efficient-Large-Model/VILA-2.7b \
    --max-context-len 256 \
    --max-new-tokens 32 \
    --video-input /dev/video0 \
    --video-output webrtc://@:8554/output \
    --nanodb /data/nanodb/coco/2017

The output is attached.
vila_2_7b_nanodb.txt (159.2 KB)
I found the following error occurring repeatedly:

Traceback (most recent call last):
  File "/opt/NanoLLM/nano_llm/plugin.py", line 201, in run
    self.dispatch(input, **kwargs)
  File "/opt/NanoLLM/nano_llm/plugin.py", line 216, in dispatch
    outputs = self.process(input, **kwargs)
  File "/opt/NanoLLM/nano_llm/plugins/nanodb.py", line 52, in process
    indexes, similarity = self.db.search(input, k=k)
  File "/opt/nanodb/nanodb/nanodb.py", line 63, in search
    indexes, distances = self.index.search(embedding, k=k)
  File "/opt/nanodb/nanodb/vector_index.py", line 156, in search
    raise ValueError(f"queries need to use {self.dtype} dtype (was type {queries.dtype})")
ValueError: queries need to use float16 dtype (was type float32)
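
If I understand the error, the vector index stores float16 vectors but the query embedding arrives as float32. A minimal NumPy sketch of what I think the mismatch looks like (the shape and the cast are my assumptions, not NanoDB's actual code):

import numpy as np

# The index was built with half-precision vectors to save memory.
index_dtype = np.float16

# An embedding from the vision encoder typically arrives as float32
# (the 768 dimension here is illustrative).
query = np.random.rand(1, 768).astype(np.float32)

# Casting the query to the index dtype before searching avoids the
# ValueError; float16 loses some precision but matches the stored vectors.
if query.dtype != index_dtype:
    query = query.astype(index_dtype)

assert query.dtype == np.float16  # now safe to pass to the index search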

I will be exhibiting Live LLaVA this May, so if you can, please tell me the next steps to resolve this issue.

I’m sorry to bother you.

Best regards,

Hi @masaki_yamagishi, I made a temporary fix to get it working again with Efficient-Large-Model/VILA1.5-3b and NanoDB - can you try pulling the latest container:

docker pull dustynv/nano_llm:r36.2.0

And then run this command:

jetson-containers run $(autotag nano_llm) \
python3 -m nano_llm.agents.video_query --api=mlc \
    --model Efficient-Large-Model/VILA1.5-3b \
    --max-context-len 256 \
    --max-new-tokens 32 \
    --video-input /dev/video0 \
    --video-output webrtc://@:8554/output \
    --nanodb /data/nanodb/coco/2017

Not the OP, but it works for me. (I had the same issue as the OP before.)


OK great - thanks for confirming it on your end @tokada1

Hi, @dusty_nv

Hooray!

I can run Live LLaVA with NanoDB.

RAG is also working fine.

Thank you for your dedicated support.
I also found the update about this on the Live LLaVA tutorial site.

I'm so glad to contribute to the Jetson developer community.

Best regards,

Awesome @masaki_yamagishi - glad you were both able to get this running! Thanks for your involvement and helping to work through the issue 👍
