I finally got around to setting up my Jetson Orin Nano 8GB. Updating everything to the latest JetPack 6.2 took quite a while, so when I finally finished, I thought I could just run Ollama using Docker. Well, not so fast - whenever I try, I get the same error (full logs):
OLLAMA_MODELS /ollama
OLLAMA_LOGS /data/logs/ollama.log
ollama server is now started, and you can run commands here like 'ollama run llama3'
Starting ollama server
2025/10/27 13:27:01 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:1 OLLAMA_MODELS:/ollama OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2025-10-27T13:27:01.430Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2025-10-27T13:27:01.430Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2025-10-27T13:27:01.431Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 5f7b4a5)"
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama1954372172/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama2666259034/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama3258924229/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama3358660065/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama4047954133/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama4260528539/ollama.pid
time=2025-10-27T13:27:01.432Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3755986592/runners
time=2025-10-27T13:27:03.575Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cuda_v12]"
time=2025-10-27T13:27:03.575Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2025-10-27T13:27:03.576Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2025-10-27T13:27:03.577Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
double free or corruption (out)
SIGABRT: abort
PC=0xffff8b19f200 m=4 sigcode=18446744073709551610
signal arrived during cgo execution
(…)
goroutine 43 gp=0x4000500fc0 m=nil [select]:
runtime.gopark(0x4000125f40?, 0x3?, 0x8?, 0x1?, 0x4000125d3a?)
runtime/proc.go:402 +0xc8 fp=0x4000125bd0 sp=0x4000125bb0 pc=0x458d88
runtime.selectgo(0x4000125f40, 0x4000125d34, 0x400051a660?, 0x0, 0x4000507558?, 0x1)
runtime/select.go:327 +0x614 fp=0x4000125ce0 sp=0x4000125bd0 pc=0x46b864
github.com/ollama/ollama/server.(*Scheduler).processCompleted(0x40001149c0, {0x39fe910, 0x4000117040})
github.com/ollama/ollama/server/sched.go:316 +0xa8 fp=0x4000125fa0 sp=0x4000125ce0 pc=0xe21f68
github.com/ollama/ollama/server.(*Scheduler).Run.func2()
github.com/ollama/ollama/server/sched.go:111 +0x28 fp=0x4000125fd0 sp=0x4000125fa0 pc=0xe20e28
runtime.goexit({})
runtime/asm_arm64.s:1222 +0x4 fp=0x4000125fd0 sp=0x4000125fd0 pc=0x48d724
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:110 +0x11c
r0 0x0
r1 0x1b
r2 0x6
r3 0xffff2ffff0e0
r4 0xffff8b60db58
r5 0xffff8b605024
r6 0xa
r7 0x6320726f20656572
r8 0x83
r9 0x0
r10 0xa
r11 0x0
r12 0x6974707572726f63
r13 0x2974756f28206e6f
r14 0x0
r15 0xffff200008e0
r16 0x1
r17 0xffff8b1adbd4
r18 0x0
r19 0x1b
r20 0xffff2ffff0e0
r21 0x6
r22 0xffff2fffd460
r23 0x1
r24 0xffff8b26f7b0
r25 0x20
r26 0x1
r27 0xffff2003aa40
r28 0xffff2fffd758
r29 0xffff2fffd1f0
lr 0xffff8b19f1ec
sp 0xffff2fffd1f0
pc 0xffff8b19f200
fault 0x0
After this crash, the Docker container restarts, and the same trace repeats.
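For completeness, the restart loop itself is easy to confirm with standard Docker commands (nothing Jetson-specific here):

# Container status cycles between "Up ..." and "Restarting"
docker ps -a --filter name=jetson-ollama

# Restart count as tracked by Docker
docker inspect -f '{{.RestartCount}}' jetson-ollama

# Follow the logs across restarts
docker logs -f jetson-ollama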
Here is the docker-compose.yml I use to run it:
volumes:
  ollama_data:
  ollama_logs:

services:
  ollama:
    image: dustynv/ollama:r36.2.0
    container_name: jetson-ollama
    runtime: nvidia
    network_mode: host
    volumes:
      - ollama_data:/ollama
      - ollama_logs:/data/logs
    restart: unless-stopped
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - OLLAMA_HOST=0.0.0.0
      - OLLAMA_MODELS=/ollama
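One thing I noticed: the image tag is r36.2.0, while JetPack 6.2 is a newer L4T release (r36.4.x, I believe), so there may be a host/image version mismatch - I'm not sure whether that matters. In case it helps anyone answering, these are the standard checks for the host version and the NVIDIA runtime, as far as I know:

# L4T release on the host; JetPack 6.2 should report R36 with a 4.x revision, I believe
cat /etc/nv_tegra_release

# Installed JetPack meta-package, if present
dpkg -l | grep nvidia-jetpack

# Confirm Docker has the nvidia runtime the compose file requests
docker info | grep -i nvidia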
Any idea what I should do differently? Thank you!