Ollama unable to detect gpu on JetPack 6.1

I’ve flashed my AGX Orin 64GB Dev kit with JetPack 6.1.

After running sudo apt dist-upgrade and sudo apt install nvidia-jetpack, I also installed jetson-containers: jetson-containers/docs/setup.md at master · dusty-nv/jetson-containers · GitHub
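For context, the setup from that page boils down to roughly the following (the clone location is just an example, not the exact path I used):

git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh   # installs dependencies and puts the jetson-containers/autotag helpers on the PATH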

Then I wanted to test Ollama from ollama - NVIDIA Jetson AI Lab, but the standard command fails:

user@agx:~$ jetson-containers run --name ollama $(autotag ollama)
Namespace(packages=['ollama'], prefer=['local', 'registry', 'build'], disable=[''], user='dustynv', output='/tmp/autotag', quiet=False, verbose=False)
-- L4T_VERSION=36.4.0  JETPACK_VERSION=6.1  CUDA_VERSION=12.6
-- Finding compatible container image for ['ollama']
dustynv/ollama:r36.2.0
V4L2_DEVICES:
csi_indexes:
basename: unrecognized option '--name'
Try 'basename --help' for more information.
/home/user/code/jetson-containers/run.sh: line 200: /tmp/nv_jetson_model: Is a directory
+ docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/user/code/jetson-containers/data:/data --device /dev/snd --device /dev/bus/usb --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-3 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-6 --device /dev/i2c-7 --device /dev/i2c-8 --device /dev/i2c-9 -v /run/jtop.sock:/run/jtop.sock --name my_jetson_container__20241010_092119 --name ollama dustynv/ollama:r36.2.0

Starting ollama server

Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Your new public key is:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMfki8dNQrhoIsbk/eJArsLhy5HMN8JkHoQdZrKnKTvq

2024/10/10 13:21:19 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/data/models/ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-10-10T13:21:19.911Z level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-10-10T13:21:19.911Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-10T13:21:19.912Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 5f7b4a5)"
time=2024-10-10T13:21:19.912Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2078835510/runners
time=2024-10-10T13:21:21.586Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cuda_v12]"
time=2024-10-10T13:21:21.586Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-10-10T13:21:21.587Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
time=2024-10-10T13:21:21.587Z level=WARN source=gpu.go:669 msg="unable to locate gpu dependency libraries"
double free or corruption (out)
SIGABRT: abort
PC=0xffff993cf200 m=3 sigcode=18446744073709551610
signal arrived during cgo execution

I was previously able to run Ollama with this setup on earlier JetPack versions, so I'm not sure why it is now unable to locate the GPU dependency libraries.
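In case it is relevant, Docker's runtime configuration can be checked like this (a generic sanity check for a standard JetPack Docker setup, not something specific to this error):

cat /etc/docker/daemon.json     # should list "nvidia" under runtimes (ideally also as "default-runtime")
docker info | grep -i runtime   # shows which runtimes Docker actually registered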

Additional information:

user@agx:~$ cat /etc/nv_tegra_release
# R36 (release), REVISION: 4.0, GCID: 37537400, BOARD: generic, EABI: aarch64, DATE: Fri Sep 13 04:36:44 UTC 2024
# KERNEL_VARIANT: oot
TARGET_USERSPACE_LIB_DIR=nvidia
TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia

user@agx:~$ dpkg -l | grep -i 'jetpack'
ii  nvidia-jetpack                               6.1+b123                                          arm64        NVIDIA Jetpack Meta Package
ii  nvidia-jetpack-dev                           6.1+b123                                          arm64        NVIDIA Jetpack dev Meta Package
ii  nvidia-jetpack-runtime                       6.1+b123                                          arm64        NVIDIA Jetpack runtime Meta Package

user@agx:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:14:07_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0

I’ve checked whether other jetson-containers can access the GPU; I pulled the PyTorch container:

jetson-containers run $(autotag l4t-pytorch)

Once inside the container I can successfully use torch.cuda like so:

root@agx:/# python3 -c "import torch; print(torch.cuda.device_count()); print(torch.cuda.get_device_name(0)); print(torch.cuda.get_device_capability(0)); print(torch.version.cuda)"
1
Orin
(8, 7)
12.6

Hi @nekton, can you try running the dustynv/ollama:r36.4.0 container image instead?
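For example, pulling and running that tag explicitly (rather than relying on autotag) would look something like:

docker pull dustynv/ollama:r36.4.0
jetson-containers run --name ollama dustynv/ollama:r36.4.0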


I updated to the latest JetPack and the same CUDA 12.6 release, and I get the same error.

Hi @dusty_nv, I encountered the same issue as @nav-intel, but with dustynv/ollama:r36.4.0 everything works fine. I’m using a Jetson Orin 64GB.

@dusty_nv Hi Dusty, I did a complete re-install of JetPack 6.1 because I had replaced a 1TB drive with a 2TB drive (space, the final frontier ;) ), and I thought something I had done might have broken Ollama. But the result was the same: dustynv/ollama:r36.3.0 fails on JetPack 6.1 on a fresh install, just as before. However, dustynv/ollama:r36.4.0 works. I have been trying to find the file to update so that jetson-containers run --name ollama $(autotag ollama) runs dustynv/ollama:r36.4.0 by default, but I haven’t found it yet - can you point me to the right place? Thanks, Hillary

Hello @nav-intel,
if you pull ollama:r36.4.0 using

docker pull dustynv/ollama:r36.4.0

then once the image is downloaded, jetson-containers run --name ollama $(autotag ollama) will default to r36.4.0.

As an aside, I have just updated my system to a 2TB drive and JetPack 6.1 and had a problem with jetson-copilot not working, but after updating the ollama and jetson-copilot images to r36.4.0 everything started to work.
Hope this helps!
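You can also list which ollama tags are already present locally, which (as far as I can tell) is what autotag prefers over the registry:

docker images dustynv/ollama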

@paulrrh thanks for the helpful reply, which set me on the right path to the solution. I had already tried docker pull dustynv/ollama:r36.4.0 and, running it with the same flags jetson-containers uses, r36.4.0 ran fine. But when I used jetson-containers run --name ollama $(autotag ollama) it defaulted to r36.3.0 and failed. I tried docker rm dustynv/ollama:r36.3.0 and got the error that there is no such container. The solution was to use docker rmi dustynv/ollama:r36.3.0, which deleted the r36.3.0 image, and now jetson-containers run --name ollama $(autotag ollama) works as expected and runs r36.4.0.
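For anyone else landing here, the fix boils down to something like this (adjust the stale tag to whatever docker images shows on your system):

docker images dustynv/ollama                             # see which tags are cached locally
docker rmi dustynv/ollama:r36.3.0                        # docker rm removes containers; rmi removes the stale image
docker pull dustynv/ollama:r36.4.0                       # fetch the image matching the current L4T release
jetson-containers run --name ollama $(autotag ollama)    # autotag now resolves to the local r36.4.0 image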

