Custom docker nano llm live problem

I inherited the jetson vlm container below
(jetson-containers run -v /home/ailab:/ailab $(autotag nano_llm)
Namespace(packages=['nano_llm'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False))

and committed it as a new container,

sudo docker run -it --network host --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /home/ailab:/ailab -v /home/ailab/Desktop/jetson-containers/data:/data --user root lllm:lm

and ran this command with my newly installed libraries:

python3 -m nano_llm.vision.video --model Efficient-Large-Model/VILA1.5-3b --max-context-len 256 --max-new-tokens 32

but I always get this error:

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/ailab/Desktop/NanoLLM-main/nano_llm/__init__.py", line 2, in <module>
    from .nano_llm import NanoLLM
  File "/ailab/Desktop/NanoLLM-main/nano_llm/nano_llm.py", line 15, in <module>
    from .vision import CLIPVisionModel, TIMMVisionModel, MMProjector
  File "/ailab/Desktop/NanoLLM-main/nano_llm/vision/__init__.py", line 3, in <module>
    from .clip import CLIPVisionModel, TIMMVisionModel
  File "/ailab/Desktop/NanoLLM-main/nano_llm/vision/clip.py", line 3, in <module>
    from clip_trt import CLIPVisionModel, TIMMVisionModel
  File "/opt/clip_trt/clip_trt/__init__.py", line 2, in <module>
    from .text import CLIPTextModel
  File "/opt/clip_trt/clip_trt/text.py", line 10, in <module>
    import torch2trt
  File "/usr/local/lib/python3.10/dist-packages/torch2trt/__init__.py", line 1, in <module>
    from .torch2trt import *
  File "/usr/local/lib/python3.10/dist-packages/torch2trt/torch2trt.py", line 2, in <module>
    import tensorrt as trt
  File "/usr/lib/python3.10/dist-packages/tensorrt/__init__.py", line 67, in <module>
    from .tensorrt import *
ImportError: /usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so: file too short

Do I have to add more run options when I start my new custom docker container?

sudo docker run -it --runtime nvidia --network host --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /home/ailab:/ailab -v /home/ailab/Desktop/jetson-containers/data:/data --user root vlmaster:llmaster

Solved it by adding --runtime nvidia, but can anyone guess why? Or has anyone run into the same situation?

Hi @hisong6029, you need --runtime nvidia to enable the GPU drivers inside the container (like that libnvdla_compiler.so library). On Jetson, driver libraries like that one are bind-mounted from the host by the NVIDIA runtime; without it, the container only sees an empty placeholder file, which is why the import fails with "file too short".
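A quick way to check this from inside the container (the library path is taken from your traceback; treating a non-empty file as "mounted" is just a heuristic):

```shell
# "file too short" means the .so exists but is an empty placeholder.
# With --runtime nvidia the real host library gets bind-mounted in,
# so a non-empty file indicates the runtime is active:
lib=/usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so
if [ -s "$lib" ]; then
    echo "real library mounted: $(stat -c %s "$lib") bytes"
else
    echo "empty or missing stub -> start the container with --runtime nvidia"
fi
```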

You can set it as the default runtime so you don't have to pass --runtime nvidia every time: https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#docker-default-runtime
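For reference, that setting boils down to adding "default-runtime": "nvidia" to /etc/docker/daemon.json. A sketch of the resulting file, assuming the stock nvidia-container-runtime install (restart Docker afterwards with sudo systemctl restart docker):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```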
