04:52:52 | INFO | loading /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026 with MLC
04:52:56 | INFO | NumExpr defaulting to 6 threads.
Traceback (most recent call last):
  File "/home/ailab/Desktop/llmaster/lvlm.py", line 46, in <module>
    model = NanoLLM.from_pretrained(
  File "/opt/NanoLLM/nano_llm/nano_llm.py", line 91, in from_pretrained
    model = MLCModel(model_path, **kwargs)
  File "/opt/NanoLLM/nano_llm/models/mlc.py", line 60, in __init__
    quant = MLCModel.quantize(self.model_path, self.config, method=quantization, max_context_len=max_context_len, **kwargs)
  File "/opt/NanoLLM/nano_llm/models/mlc.py", line 258, in quantize
    os.symlink(model, model_path, target_is_directory=True)
FileNotFoundError: [Errno 2] No such file or directory: '/data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/llm' -> '/data/models/mlc/dist/models/VILA1.5-3b'
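The symlink source the error complains about can be checked directly; a diagnostic sketch, with the paths copied from the traceback above:

```bash
# does the snapshot actually contain the 'llm' subdirectory that MLC tries to link from?
ls -la /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/llm
```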
I tried deleting and re-downloading the model, restarting the container, and other things like copying the files by hand into /data/models/mlc, as shown below:
root@ubuntu:/data/models/mlc/dist/models/VILA1.5-3b# ls
config.json model.safetensors.index.json
generation_config.json special_tokens_map.json
model-00001-of-00002.safetensors tokenizer_config.json
model-00002-of-00002.safetensors tokenizer.model
but only the config.json file is shown in white text; the others are shown in red.
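(In typical ls coloring, red entries often indicate dangling symlinks. A quick way to confirm, assuming the same directory as above:)

```bash
# show symlink targets; dangling links point at paths that no longer exist
ls -l /data/models/mlc/dist/models/VILA1.5-3b
# or list only the broken symlinks
find /data/models/mlc/dist/models/VILA1.5-3b -xtype l
```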
How can I modify the sample code or my Docker setup so that the NanoLLM sample code runs?
It looks like there is no VILA model in your environment:
FileNotFoundError: [Errno 2] No such file or directory: '/data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/llm' -> '/data/models/mlc/dist/models/VILA1.5-3b'
For the VILA model, I don't think I need any access approval before using it.
So I just ran the code in the Docker environment and got the error about the missing file/directory.
I saw that all the model data had been downloaded, and I can see
'/data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/llm'
Hi @jksim1833, sorry for the trouble - I noticed this looks like a path on the host device. Do you have your user's home directory mounted into the container, or are you trying to run it from outside the container?
Inside the container, can you try manually creating the symlink like this, and see if it works?
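For example, a sketch using the paths from your traceback (adjust the snapshot hash to match yours):

```bash
# remove the hand-copied directory first if it exists, so the link can be created at that path
rm -rf /data/models/mlc/dist/models/VILA1.5-3b

# link the MLC models dir to the LLM weights inside the downloaded snapshot
ln -sf /data/models/huggingface/models--Efficient-Large-Model--VILA1.5-3b/snapshots/42d1dda6807cc521ef27674ca2ae157539d17026/llm \
       /data/models/mlc/dist/models/VILA1.5-3b
```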
If you did not start the container with the jetson-containers run command (which automatically mounts --volume jetson-containers/data:/data), you would need to add that mount to your docker run command when starting it manually.
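For reference, a minimal manual docker run sketch with that mount (the image name/tag and the host path are placeholders; substitute your custom image and your jetson-containers checkout):

```bash
docker run -it --rm --runtime nvidia \
  --volume /path/to/jetson-containers/data:/data \
  my-custom-nanollm:latest
```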
Yes, I inherited from the NanoLLM Docker image and built my own custom image, adding some libraries. lvlm.py is just a copy of the example code (Multimodal — NanoLLM 24.7 documentation), and I gave it permission to access local files.
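(For context, a minimal sketch of such a derived image; the base image tag and the extra library here are assumptions - match them to your installed NanoLLM version and your actual dependencies:)

```bash
# hypothetical derived image built on top of the NanoLLM container
cat > Dockerfile <<'EOF'
FROM dustynv/nano_llm:r36.2.0
RUN pip3 install --no-cache-dir opencv-python-headless
EOF
docker build -t my-custom-nanollm:latest .
```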
After changing my docker run command to add the data volume mount, it works well.