Orin Nano - Ollama does not run

I finally got around to setting up my Jetson Orin Nano 8GB. Updating everything to the latest JetPack 6.2 took quite a while, so when I finally finished, I thought I could just run Ollama using Docker. Well, not so fast: whenever I try to do so, I get the same errors (full logs):

OLLAMA_MODELS /ollama
OLLAMA_LOGS /data/logs/ollama.log

ollama server is now started, and you can run commands here like 'ollama run llama3'

Starting ollama server

2025/10/27 13:27:01 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:1 OLLAMA_MODELS:/ollama OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2025-10-27T13:27:01.430Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2025-10-27T13:27:01.430Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2025-10-27T13:27:01.431Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 5f7b4a5)"
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama1954372172/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama2666259034/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama3258924229/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama3358660065/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama4047954133/ollama.pid
time=2025-10-27T13:27:01.432Z level=WARN source=assets.go:89 msg="process still running, skipping" pid=24 path=/tmp/ollama4260528539/ollama.pid
time=2025-10-27T13:27:01.432Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3755986592/runners
time=2025-10-27T13:27:03.575Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cuda_v12]"
time=2025-10-27T13:27:03.575Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2025-10-27T13:27:03.576Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2025-10-27T13:27:03.577Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
double free or corruption (out)
SIGABRT: abort
PC=0xffff8b19f200 m=4 sigcode=18446744073709551610
signal arrived during cgo execution

(…)

goroutine 43 gp=0x4000500fc0 m=nil [select]:
runtime.gopark(0x4000125f40?, 0x3?, 0x8?, 0x1?, 0x4000125d3a?)
runtime/proc.go:402 +0xc8 fp=0x4000125bd0 sp=0x4000125bb0 pc=0x458d88
runtime.selectgo(0x4000125f40, 0x4000125d34, 0x400051a660?, 0x0, 0x4000507558?, 0x1)
runtime/select.go:327 +0x614 fp=0x4000125ce0 sp=0x4000125bd0 pc=0x46b864
github.com/ollama/ollama/server.(*Scheduler).processCompleted(0x40001149c0, {0x39fe910, 0x4000117040})
github.com/ollama/ollama/server/sched.go:316 +0xa8 fp=0x4000125fa0 sp=0x4000125ce0 pc=0xe21f68
github.com/ollama/ollama/server.(*Scheduler).Run.func2()
github.com/ollama/ollama/server/sched.go:111 +0x28 fp=0x4000125fd0 sp=0x4000125fa0 pc=0xe20e28
runtime.goexit({})
runtime/asm_arm64.s:1222 +0x4 fp=0x4000125fd0 sp=0x4000125fd0 pc=0x48d724
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:110 +0x11c

r0 0x0
r1 0x1b
r2 0x6
r3 0xffff2ffff0e0
r4 0xffff8b60db58
r5 0xffff8b605024
r6 0xa
r7 0x6320726f20656572
r8 0x83
r9 0x0
r10 0xa
r11 0x0
r12 0x6974707572726f63
r13 0x2974756f28206e6f
r14 0x0
r15 0xffff200008e0
r16 0x1
r17 0xffff8b1adbd4
r18 0x0
r19 0x1b
r20 0xffff2ffff0e0
r21 0x6
r22 0xffff2fffd460
r23 0x1
r24 0xffff8b26f7b0
r25 0x20
r26 0x1
r27 0xffff2003aa40
r28 0xffff2fffd758
r29 0xffff2fffd1f0
lr 0xffff8b19f1ec
sp 0xffff2fffd1f0
pc 0xffff8b19f200
fault 0x0

after which the Docker container restarts.
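For reference, the restart loop can be confirmed from the host with something like this (jetson-ollama is the container name from my compose file below):

docker inspect -f '{{.RestartCount}} {{.State.ExitCode}}' jetson-ollama   # restart count + last exit code
docker logs --tail 100 jetson-ollama                                      # last output before the crash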

Here is the docker-compose file I use to run it:

volumes:
  ollama_data:
  ollama_logs:

services:
  ollama:
    image: dustynv/ollama:r36.2.0
    container_name: jetson-ollama
    runtime: nvidia
    network_mode: host
    volumes:
      - ollama_data:/ollama
      - ollama_logs:/data/logs
    restart: unless-stopped
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - OLLAMA_HOST=0.0.0.0
      - OLLAMA_MODELS=/ollama
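For completeness, the nvidia runtime referenced above is normally registered in /etc/docker/daemon.json on JetPack. A rough sketch of what checking it should show (the exact contents on your system may differ):

cat /etc/docker/daemon.json
# typical JetPack output:
# {
#     "runtimes": {
#         "nvidia": {
#             "path": "nvidia-container-runtime",
#             "runtimeArgs": []
#         }
#     }
# }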

Any idea what I should do differently? Thank you!

*** Please note that this reply is generated by LLM automatically ***

Based on the provided information, it seems you’re experiencing a crash when running Ollama in Docker on your Jetson Orin Nano 8GB. Let’s work through some troubleshooting steps.

First, let’s make sure that the latest JetPack 6.2 is actually installed on your Jetson Orin Nano 8GB. You can verify this by checking the L4T/JetPack version on the device.
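For example, from a terminal on the device (these files and packages are standard on JetPack, though the output format varies between releases):

cat /etc/nv_tegra_release    # L4T release; R36.x corresponds to JetPack 6
dpkg -l nvidia-jetpack       # version of the JetPack meta-package, if installed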

To run Ollama using Docker, you need to ensure that Docker is properly installed and configured on your Jetson Orin Nano 8GB. You can check if Docker is installed by running the command docker --version in the terminal.
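For example (a quick sketch; the nvidia entry should appear among the registered runtimes):

docker --version
docker info --format '{{json .Runtimes}}'   # should include "nvidia"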

If Docker is not installed, you can install it by following the instructions in the “How to get docker-ce on Jetson AGX Orin Developer kit?” topic.

Once Docker is installed, you can try running Ollama with the docker-compose file you provided. If you’re still experiencing issues, check the Docker logs for errors.
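For example, with the docker-compose file from your post (run from its directory):

docker compose up -d
docker compose logs -f ollama   # follow the server output, including across restarts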

In the “Container torch_tensorrt not working” topic, it’s mentioned that there are issues with running certain containers on the Jetson AGX Orin. It’s possible that you’re experiencing similar issues with Ollama.

To troubleshoot the issue, you can try the following:

  1. Check the Docker logs for errors.
  2. Verify that the container is properly configured and that the necessary GPU dependencies are visible inside it (see the sketch after this list).
  3. Try running Ollama with a different Docker command or configuration.
  4. Check for known issues with running Ollama on the Jetson Orin Nano 8GB.
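As a concrete sketch of step 2, you can check whether the CUDA libraries are visible inside the container (this overrides the image’s entrypoint so only the check runs; adjust the tag to yours):

docker run --rm --runtime nvidia --entrypoint /bin/bash \
  dustynv/ollama:r36.2.0 -c 'ldconfig -p | grep -iE "libcuda|libcudart" || echo "no CUDA libraries visible"'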

In comparison to other Jetson devices, the Jetson Orin Nano 8GB has a different architecture and configuration. Therefore, it’s possible that certain containers or applications may not work properly on this device.

In the “Jetson Orin Nano restarts while running building text-generation-webui” topic, it’s mentioned that the Jetson Orin Nano restarts while running certain applications. It’s possible that you’re experiencing similar issues with Ollama.

To resolve this issue, you can try the following:

  1. Check the system logs for errors or warnings, such as out-of-memory kills or GPU driver faults (see the sketch after this list).
  2. Verify that the system is properly configured and that the necessary dependencies are installed.
  3. Try running Ollama with a different configuration or command.
  4. Check for known issues with running Ollama on the Jetson Orin Nano 8GB.
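As a sketch of step 1, on the host (standard Linux tools; the exact messages vary):

sudo dmesg | grep -iE 'oom|killed process|nvgpu'    # OOM kills or GPU driver faults
journalctl -b -p warning --no-pager | tail -n 50    # recent warnings/errors from this boot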

In summary: check the Docker logs, verify the container configuration, and look for known issues with running Ollama on the Jetson Orin Nano 8GB. If the problem persists, ask the community or NVIDIA support for help.

*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***

Hi,

Could you try our container?

https://hub.docker.com/r/dustynv/ollama/tags
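For example (a sketch; pick the tag that matches your L4T release from the page above, r36.x for JetPack 6):

docker run --runtime nvidia -it --rm --network=host \
  -v ollama:/ollama -e OLLAMA_MODELS=/ollama \
  dustynv/ollama:r36.2.0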

Thanks.

Is this still an issue? Do you have any results you can share?