I am trying to follow the LLaVA tutorial from LLaVA - NVIDIA Jetson AI Lab on my AGX Orin 32GB devkit, but it returns "ERROR The model could not be loaded because its checkpoint file in .bin/.pt/.safetensors format could not be located."
Detailed log as below:
aaa@aaaadmin:~$ jetson-containers run --workdir=/opt/text-generation-webui $(autotag text-generation-webui) \
sudo docker run --runtime nvidia -it --rm --network host --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/qsiadmin/jetson-containers/data:/data --device /dev/snd --device /dev/bus/usb --device /dev/video0 --device /dev/video1 --device /dev/video2 --device /dev/video3 --device /dev/video4 --device /dev/video5 --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 -v /run/jtop.sock:/run/jtop.sock --workdir=/opt/text-generation-webui dustynv/text-generation-webui:r35.4.1-cp310 python3 server.py --listen --model-dir /data/models/text-generation-webui --model TheBloke_llava-v1.5-13B-GPTQ --multimodal-pipeline llava-v1.5-13b --loader autogptq --disable_exllama --verbose
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:124: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
07:55:09-964020 INFO Starting Text generation web UI
07:55:09-969277 WARNING
You are potentially exposing the web UI to the entire internet without any access password.
You can create one with the "--gradio-auth" flag like this:
--gradio-auth username:password
Make sure to replace username:password with your own.
07:55:09-971591 INFO Loading settings from “settings.yaml”
07:55:09-976041 INFO Loading “TheBloke_llava-v1.5-13B-GPTQ”
07:55:10-022885 ERROR The model could not be loaded because its checkpoint file in .bin/.pt/.safetensors format could not be located.
07:55:10-024882 INFO Loading the extension “multimodal”
07:55:11-719097 INFO LLaVA - Loading CLIP from openai/clip-vit-large-patch14-336 as torch.float16 on cuda:0…
07:55:14-307723 INFO LLaVA - Loading projector from liuhaotian/llava-v1.5-13b as torch.float16 on cuda:0…
07:55:14-722383 INFO LLaVA supporting models loaded, took 3.00 seconds
07:55:14-725033 INFO Multimodal: loaded pipeline llava-v1.5-13b from pipelines/llava (LLaVA_v1_5_13B_Pipeline)
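For reference, the loader looks for the checkpoint under the --model-dir path, so a quick way to see what the container actually finds there is something like the following (my own check, assuming the same image and default mounts as in the log above):

jetson-containers run $(autotag text-generation-webui) \
ls -lh /data/models/text-generation-webui/TheBloke_llava-v1.5-13B-GPTQ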
I first had an issue with the download instruction "jetson-containers run --workdir=/opt/text-generation-webui $(autotag text-generation-webui)
python3 download-model.py --output=/data/models/text-generation-webui
TheBloke/llava-v1.5-13B-GPTQ". It reports the model as downloaded, but I can't find it, as below:
aaa@aaaadmin:~$ jetson-containers run --workdir=/opt/text-generation-webui $(autotag text-generation-webui) \
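Since /data inside the container is mounted from /home/qsiadmin/jetson-containers/data (per the docker command above), I would expect the downloaded files to appear on the host under that tree; something like the following should list any checkpoint files (the subdirectory name is my guess based on the --model argument):

find /home/qsiadmin/jetson-containers/data/models/text-generation-webui \
-name '*.safetensors' -o -name '*.bin' -o -name '*.pt'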
I tried again and upgraded L4T to Jetson Linux R36.3 with JetPack 6.0, but it still fails to run. I can pull the required images, but loading the model still fails. Detailed log as below:
Found compatible container dustynv/stable-diffusion-webui:r36.2.0 (2024-02-02, 8.9GB) - would you like to pull it? [Y/n] Y
dustynv/stable-diffusion-webui:r36.2.0
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
File “/usr/lib/python3.10/threading.py”, line 973, in _bootstrap
self._bootstrap_inner()
File “/usr/lib/python3.10/threading.py”, line 1016, in _bootstrap_inner
self.run()
File “/usr/lib/python3.10/threading.py”, line 953, in run
self._target(*self._args, **self._kwargs)
File “/opt/stable-diffusion-webui/modules/initialize.py”, line 147, in load_model
shared.sd_model # noqa: B018
File “/opt/stable-diffusion-webui/modules/shared_items.py”, line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File “/opt/stable-diffusion-webui/modules/sd_models.py”, line 531, in get_sd_model
load_model()
File “/opt/stable-diffusion-webui/modules/sd_models.py”, line 602, in load_model
checkpoint_info = checkpoint_info or select_checkpoint()
File “/opt/stable-diffusion-webui/modules/sd_models.py”, line 224, in select_checkpoint
raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
file /opt/stable-diffusion-webui/model.ckpt
directory /data/models/stable-diffusion/models/Stable-diffusion
Can't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
Stable diffusion model failed to load
Applying attention optimization: xformers… done.
/opt/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-tensorrt/ui_trt.py:64: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
with gr.Row().style(equal_height=False):
Running on local URL: http://0.0.0.0:7860
Could you share more information on how to change the model to 2.0? Should I just download the model from Hugging Face and place it in /data/models/stable-diffusion/models/Stable-diffusion/?
Thanks.
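In case it helps clarify my question, this is what I was planning to try (just a sketch: the host path assumes /data is mounted from /home/qsiadmin/jetson-containers/data as in the text-generation-webui run above, and I picked v2-1_768-ema-pruned.safetensors from the stabilityai/stable-diffusion-2-1 repo on Hugging Face as the 2.x checkpoint, so please correct me if a different file is expected):

cd /home/qsiadmin/jetson-containers/data/models/stable-diffusion/models/Stable-diffusion
wget https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors

Would the webui then pick it up in the checkpoint dropdown after a restart, or do 2.x models need extra configuration?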