Jetson-containers ollama permission error after JetPack upgrade

Hi,

The script runs normally in our JetPack 6.1 environment:

$ jetson-containers run --name ollama $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.0  JETPACK_VERSION=6.1  CUDA_VERSION=12.6
-- Finding compatible container image for ['ollama']

Found compatible container dustynv/ollama:r36.4.0 (2024-09-30, 3.4GB) - would you like to pull it? [Y/n] Y
dustynv/ollama:r36.4.0
V4L2_DEVICES: 
+ sudo docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/nvidia/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/bus/usb --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 --name ollama dustynv/ollama:r36.4.0
Unable to find image 'dustynv/ollama:r36.4.0' locally
r36.4.0: Pulling from dustynv/ollama
a186900671ab: Pull complete 
8341bb9e50df: Pull complete 
91c93038087e: Pull complete 
f97768af92a0: Pull complete 
bda217e28d3f: Pull complete 
9f53f555f624: Pull complete 
51d7bdf714c7: Pull complete 
f241b34c44b2: Pull complete 
Digest: sha256:c0f0a62dfe3b8a100361f2e5840c0fe1843d28ad76a924dccefc5a1e5b70ee99
Status: Downloaded newer image for dustynv/ollama:r36.4.0

Starting ollama server

Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGVrMCiF2t8AVoD6t0nbXDDyahy2yYhMRuZkRYN6Mkyt

2024/11/20 02:58:52 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/models/ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-11-20T02:58:52.633Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-11-20T02:58:52.633Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-11-20T02:58:52.634Z level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.0.0)"
time=2024-11-20T02:58:52.635Z level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama560808631/runners
time=2024-11-20T02:58:54.289Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cuda_v12 cpu]"
time=2024-11-20T02:58:54.290Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-11-20T02:58:54.291Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-11-20T02:58:54.291Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-11-20T02:58:54.291Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-11-20T02:58:54.546Z level=INFO source=types.go:107 msg="inference compute" id=GPU-b59a364f-47e7-573d-8247-9363343019d8 library=cuda variant=jetpack6 compute=8.7 driver=12.6 name=Orin total="61.4 GiB" available="57.8 GiB"

OLLAMA_MODELS /data/models/ollama/models
OLLAMA_LOGS   /data/logs/ollama.log

ollama server is now started, and you can run commands here like 'ollama run llama3'

root@tegra-ubuntu:/#
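
As an extra sanity check, you can confirm the server is responding from a second terminal on the host. This is just a minimal sketch using Ollama's standard REST endpoints on the default port 11434 (reachable from the host here because the container runs with --network host); the model name llama3 is only an example and must have been pulled first:

$ curl http://localhost:11434/api/tags
$ curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'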

Based on your error, /tmp/nv_jetson_model: Permission denied, could you check whether you have read/write permission for that file? On our device it looks like this:

$ ll /tmp/nv_jetson_model 
-rw-rw-r-- 1 nvidia nvidia 37 Nov 20 02:54 /tmp/nv_jetson_model
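
If the file on your device is missing those read/write bits or is owned by root (which can happen after an upgrade), one possible fix, assuming the jetson-containers launcher regenerates this file from the device tree on its next run, is to remove it or hand it back to your user:

$ sudo rm /tmp/nv_jetson_model
# or, to keep the file and just fix its ownership and mode:
$ sudo chown $USER:$USER /tmp/nv_jetson_model
$ chmod 664 /tmp/nv_jetson_model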

Thanks.