Jetson-containers ollama: permission error after JetPack upgrade

I upgraded today from R35 to R36. Now, when I try to run ollama:

jetson-containers run --name ollama $(autotag ollama)

(with or without sudo), it crashes with the error /bin/sh: 1: /start_ollama: Permission denied

Full output:

sudo jetson-containers run --name ollama $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.0  JETPACK_VERSION=6.1  CUDA_VERSION=12.6
-- Finding compatible container image for ['ollama']
dustynv/ollama:0.4.0-r36.4.0
V4L2_DEVICES: 
### DISPLAY environmental variable is already set: ":1"
localuser:root being added to access control list
/ssd/jetson-containers/run.sh: line 307: /tmp/nv_jetson_model: Permission denied
+ docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /ssd/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd --device /dev/bus/usb -e DISPLAY=:1 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 --name ollama dustynv/ollama:0.4.0-r36.4.0
/bin/sh: 1: /start_ollama: Permission denied
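I suspect the execute bit on /start_ollama is missing inside the image; that is only a guess, but /bin/sh typically reports "Permission denied" when a file it is asked to execute lacks +x. One way to check is to list the file with an overridden entrypoint:

sudo docker run --rm --entrypoint ls dustynv/ollama:0.4.0-r36.4.0 -l /start_ollama

If the x bits are missing from the listing, that would explain the crash.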

Hi,

The script runs normally in our JetPack 6.1 environment:

$ jetson-containers run --name ollama $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.0  JETPACK_VERSION=6.1  CUDA_VERSION=12.6
-- Finding compatible container image for ['ollama']

Found compatible container dustynv/ollama:r36.4.0 (2024-09-30, 3.4GB) - would you like to pull it? [Y/n] Y
dustynv/ollama:r36.4.0
V4L2_DEVICES: 
+ sudo docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/nvidia/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 --name ollama dustynv/ollama:r36.4.0
Unable to find image 'dustynv/ollama:r36.4.0' locally
r36.4.0: Pulling from dustynv/ollama
a186900671ab: Pull complete 
8341bb9e50df: Pull complete 
91c93038087e: Pull complete 
f97768af92a0: Pull complete 
bda217e28d3f: Pull complete 
9f53f555f624: Pull complete 
51d7bdf714c7: Pull complete 
f241b34c44b2: Pull complete 
Digest: sha256:c0f0a62dfe3b8a100361f2e5840c0fe1843d28ad76a924dccefc5a1e5b70ee99
Status: Downloaded newer image for dustynv/ollama:r36.4.0

Starting ollama server

Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGVrMCiF2t8AVoD6t0nbXDDyahy2yYhMRuZkRYN6Mkyt

2024/11/20 02:58:52 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/models/ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-20T02:58:52.633Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-11-20T02:58:52.633Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-11-20T02:58:52.634Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.0.0)"
time=2024-11-20T02:58:52.635Z level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama560808631/runners
time=2024-11-20T02:58:54.289Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu]"
time=2024-11-20T02:58:54.290Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-11-20T02:58:54.291Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-11-20T02:58:54.291Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-11-20T02:58:54.291Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-11-20T02:58:54.546Z level=INFO source=types.go:107 msg="inference compute" id=GPU-b59a364f-47e7-573d-8247-9363343019d8 library=cuda variant=jetpack6 compute=8.7 driver=12.6 name=Orin total="61.4 GiB" available="57.8 GiB"

OLLAMA_MODELS /data/models/ollama/models
OLLAMA_LOGS   /data/logs/ollama.log

ollama server is now started, and you can run commands here like 'ollama run llama3'

root@tegra-ubuntu:/#

Based on your error (/tmp/nv_jetson_model: Permission denied), could you check whether you have read/write permission on that file?

$ ll /tmp/nv_jetson_model 
-rw-rw-r-- 1 nvidia nvidia 37 Nov 20 02:54 /tmp/nv_jetson_model
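If the file is owned by a different user than the one launching the script (for example, left over from an earlier run under another account), removing it and letting run.sh recreate it on the next launch may help. This assumes run.sh regenerates the file, which the write error at line 307 suggests:

sudo rm /tmp/nv_jetson_model
jetson-containers run --name ollama $(autotag ollama)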

Thanks.

I’m receiving the same error as @mb306.

I’ve opened an issue on GitHub: Permission issue for /start_ollama in dustynv/ollama:0.4.0-r36.4.0 · Issue #742 · dusty-nv/jetson-containers · GitHub

Another user reported the same issue at: /start_ollama:Permission denied · Issue #745 · dusty-nv/jetson-containers · GitHub

I’ve encountered this error on both the Jetson AGX Xavier 16GB Developer Kit and the Jetson AGX Orin 64GB Developer Kit.

Both units were flashed multiple times to rule out issues with a faulty install.

intelemodel@ubuntu:~$ jetson-containers run --name ollama $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.0  JETPACK_VERSION=6.1  CUDA_VERSION=12.6
-- Finding compatible container image for ['ollama']

Found compatible container dustynv/ollama:0.4.0-r36.4.0 (2024-11-09, 3.3GB) - would you like to pull it? [Y/n] Y
dustynv/ollama:0.4.0-r36.4.0
V4L2_DEVICES:
+ sudo docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/intelemodel/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 --name ollama dustynv/ollama:0.4.0-r36.4.0
[sudo] password for intelemodel:
Unable to find image 'dustynv/ollama:0.4.0-r36.4.0' locally
0.4.0-r36.4.0: Pulling from dustynv/ollama
a186900671ab: Pull complete
8341bb9e50df: Pull complete
91c93038087e: Pull complete
f97768af92a0: Pull complete
102653227cae: Pull complete
52595ca88337: Pull complete
94f1946bedfc: Pull complete
ed8f29c189c4: Pull complete
38046f0c1fe3: Pull complete
b195513fabfb: Pull complete
4a9a90b74561: Pull complete
7a64e4b65531: Pull complete
Digest: sha256:395aef2cc3992b3b5a111cf76bb6573dc14a961d4214623d3bbf5759e5a9f5b2
Status: Downloaded newer image for dustynv/ollama:0.4.0-r36.4.0
/bin/sh: 1: /start_ollama: Permission denied

Verifying permissions on /tmp/nv_jetson_model:

intelemodel@ubuntu:~$ ll /tmp/nv_jetson_model
-rw-rw-r-- 1 intelemodel intelemodel 37 Dec  9 13:26 /tmp/nv_jetson_model

I examined the included Dockerfile and found that no WORKDIR was specified and no non-root user was defined in the build. I’m not sure whether either is related to this permission issue, but more people seem to be hitting it than expected.
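If the actual cause is a missing execute bit on /start_ollama (an assumption on my part; the path comes from the error message), a minimal local workaround would be to derive a patched image that restores the bit:

sudo docker build -t ollama-fixed - <<'EOF'
FROM dustynv/ollama:0.4.0-r36.4.0
RUN chmod +x /start_ollama
EOF
sudo docker run --runtime nvidia -it --rm --network host ollama-fixed

The ollama-fixed tag is just a placeholder name; the proper fix belongs in the upstream image build.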

Hi,
Your solution does not work on the Jetson Orin NX 8GB:

$ jetson-containers run --name ollama $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.0  JETPACK_VERSION=6.1  CUDA_VERSION=12.6
-- Finding compatible container image for ['ollama']
dustynv/ollama:0.4.0-r36.4.0
V4L2_DEVICES: 
+ docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/bosa/workspace/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-7 --device /dev/i2c-9 --name ollama dustynv/ollama:0.4.0-r36.4.0
/bin/sh: 1: /start_ollama: Permission denied

I am getting this error despite granting broader permissions on /tmp/nv_jetson_model:

ll /tmp/nv_jetson_model 
-rw-rw-rw- 1 bosa bosa 58 Dec 12 18:54 /tmp/nv_jetson_model

Do you have another solution for this issue?

The image does not appear to be functioning properly for multiple people, myself included, on both the AGX Xavier Developer Kit and the AGX Orin Developer Kit.

Another user and I have opened similar issues on GitHub.
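Until the image is rebuilt, one workaround worth trying, assuming /start_ollama is a shell script that merely lost its execute bit: start the container with a shell instead of the default command (jetson-containers passes a trailing command through to docker run), then run the script through bash, which does not require the execute bit:

jetson-containers run --name ollama $(autotag ollama) /bin/bash
# inside the container:
bash /start_ollama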
