Installed Ollama container from jetson-containers, error: no compatible GPUs found

Hi,
On my AGX Xavier, I installed the Ollama container using @dusty_nv 's jetson-containers:
https://hub.docker.com/r/dustynv/ollama#user-content-images

Started the server using
docker run --runtime nvidia -it --rm --network=host -v ~/ollama:/ollama -e OLLAMA_MODELS=/ollama dustynv/ollama:r36.2.0

I see the following errors in /data/logs/ollama.log: unable to load libcuda.so.1.1. But when I checked, the libcuda.so.1.1 files are present in the respective directories. Any idea what the issue could be? Here are the logs:

root@drishtic-xavier:~# cat /data/logs/ollama.log | grep GPU
2025/02/27 23:51:50 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/ollama OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2025-02-27T23:51:52.059Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2025-02-27T23:51:52.066Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/local/cuda/compat/libcuda.so.1.1 error="Unable to load /usr/local/cuda/compat/libcuda.so.1.1 library to query for Nvidia GPUs: libnvrm_gpu.so: cannot open shared object file: No such file or directory"
time=2025-02-27T23:51:52.067Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1 error="Unable to load /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1 library to query for Nvidia GPUs: libnvrm_gpu.so: cannot open shared object file: No such file or directory"
time=2025-02-27T23:51:52.067Z level=INFO source=gpu.go:568 msg="unable to load cuda driver library" library=/usr/lib/aarch64-linux-gnu/libcuda.so.1.1 error="Unable to load /usr/lib/aarch64-linux-gnu/libcuda.so.1.1 library to query for Nvidia GPUs: libnvrm_gpu.so: cannot open shared object file: No such file or directory"
time=2025-02-27T23:51:52.078Z level=INFO source=gpu.go:347 msg="no compatible GPUs were discovered"
root@drishtic-xavier:~# ls -al /usr/local/cuda/compat/libcuda
libcudadebugger.so.1 libcuda.so libcuda.so.1 libcuda.so.1.1
root@drishtic-xavier:~# ls -al /usr/local/cuda/compat/libcuda.so.1.1
-rw-r--r-- 1 root root 29497400 Aug 16 2023 /usr/local/cuda/compat/libcuda.so.1.1
root@drishtic-xavier:~# ls -al /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1
-rw-r--r-- 1 root root 15870592 Oct 28 2020 /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1
root@drishtic-xavier:~# ls -al /usr/lib/aarch64-linux-gnu/libcuda.so.1.1
-rw-r--r-- 1 root root 15870592 Oct 28 2020 /usr/lib/aarch64-linux-gnu/libcuda.so.1.1
root@drishtic-xavier:~#
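
Since the files exist but still fail to load, the "libnvrm_gpu.so: cannot open shared object file" part of the error points at a missing dependency rather than a missing libcuda. A quick check (assuming ldd is available inside the container) is to list the library's dependencies and look for anything unresolved:

ldd /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1 | grep "not found"

libnvrm_gpu.so is one of the Tegra driver libraries that the NVIDIA container runtime normally mounts in from the host, so if it shows up as "not found" here, the container and the host driver stack presumably don't match.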

I was able to install and run deepseek-r1:7b and Open WebUI, but as expected, it is running very slowly.
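
A quick way to confirm whether inference is actually using the GPU is to watch utilization on the host while a response is being generated, e.g. with tegrastats (part of L4T):

sudo tegrastats

If the GR3D_FREQ field stays at 0% during generation, the model is running on the CPU.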

Any idea how to get Ollama to identify the GPUs?

Thanks,
-B


Hi,

r36.2.0 doesn’t support Xavier.
Please use the version that matches your board's BSP.

For example, please try dustynv/ollama:r35.4.1 for JetPack 5.1.x.
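
If you are unsure which BSP you are running, the L4T release file on the host shows it (assuming a standard JetPack install):

cat /etc/nv_tegra_release

The R35/R36 release number reported there should match the rXX.x.x tag of the container image you pull.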

Thanks.


Thanks. Let me check this version and get back to you.

-B

Thank you, that worked. I am no longer getting any GPU errors. But I am not able to run DeepSeek. When I tried to run
ollama run deepseek-r1:1.5b
ollama run deepseek-r1:7b

it downloaded the models and I got the message prompt, but then I get no response to my messages. It worked earlier on r36.2.0, although on the CPU, not the GPU.
I tried mistral; it runs without any issues, and the response is pretty quick.
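
One thing I may try to narrow this down: the server config in the log above shows an OLLAMA_DEBUG flag, so restarting the container with debug logging enabled should give more detail about what happens when the deepseek model loads (same command as before, with one extra -e flag):

docker run --runtime nvidia -it --rm --network=host -v ~/ollama:/ollama -e OLLAMA_MODELS=/ollama -e OLLAMA_DEBUG=1 dustynv/ollama:r35.4.1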

Thanks,
-B

@dusty_nv - Did you have any luck running DeepSeek on dustynv/ollama:r35.4.1? Or do I need to wait for dustynv/ollama:r36.2.0?

Appreciate your response.

-B