Introducing Ollama Support for Jetson Devices

@tokada1 yes, you need to set LD_LIBRARY_PATH=/usr/lib/cuda/lib64 (or include that directory in your library path if you already have one set), because the bundled libraries don’t seem to work on Jetson devices; we haven’t quite worked that problem out yet.
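
For example, before starting the server on the host (a minimal sketch; the exact CUDA path varies by JetPack release):

```sh
# Put the system CUDA libraries ahead of the bundled ones, then start the server.
# The path differs by release (e.g. /usr/local/cuda-12.2/lib64 on JP6).
export LD_LIBRARY_PATH=/usr/lib/cuda/lib64:$LD_LIBRARY_PATH
ollama serve
```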

I am getting the same error, and my LD_LIBRARY_PATH is already set to /usr/local/cuda-12.2/lib64.

Yeah, I had to build ollama from source (rough steps sketched below) to make it work on the JP6 GA system. I guess you need to build it for the specific CUDA (major?) version on the system?
@remy415 have you tried it on JP6 GA? If it works for you, can you comment here?
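
For reference, the source build I did was roughly this (the steps change between ollama releases, so check the development docs in the repo; Go, cmake, and gcc are assumed to be installed):

```sh
# Rough sketch of a source build on Jetson (steps vary by ollama release)
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # compiles the bundled llama.cpp against the local CUDA toolkit
go build .
./ollama serve
```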

@tokada1 if I build it on JP6, it works. There’s some kind of incompatibility with JP6 that hasn’t been worked out on the Ollama side. I haven’t had much time to poke around and see what the issue is, but I know dhiltgen on the Ollama GitHub has been working on it too. If you find anything, please let me know.

@tokada1 I rebuilt the ollama container for R36.3: dustynv/ollama:r36.3.0

Other R36.2 container images that I tried on R36.3 were compatible (I did not try this one yet).
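
Something like this should start it (a sketch; the flags and mounts may differ on your setup):

```sh
# Using the jetson-containers helper, which adds the recommended docker flags
jetson-containers run dustynv/ollama:r36.3.0

# Or plain docker with the NVIDIA runtime; the model-dir mount is an assumption
docker run --runtime nvidia -it --rm --network host \
  -v ~/ollama:/ollama \
  dustynv/ollama:r36.3.0
```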

Thanks for the new image. :)
IIRC, R36.2 image worked fine on the JP6 GA system.
I’m sure you know this, but just to be clear: the problem I have is when I run Ollama on the host directly, not using Docker.
The pre-built binary is built against CUDA 11, but I thought it should work with a CUDA 12 driver as long as the binary includes the v11 runtime.
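
As a quick first-pass check of which runtime is actually being picked up (ollama may dlopen its bundled libraries at runtime, so ldd won’t necessarily show them):

```sh
# Which CUDA runtime libraries can the dynamic loader currently resolve?
ldconfig -p | grep -E 'libcudart|libcublas'

# What does the installed binary link against? (binary path is an assumption)
ldd /usr/local/bin/ollama | grep -i cuda
```
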
Anyway, this is not a jetson-container issue. ;)

I found that 35.4.1 works fine on my Jetson Xavier NX. I was unable to get 36.2.0 working, as it seems to require PyTorch 2.3, which I couldn’t find a way to install.