@Dusty_nv has anyone managed to get Ollama running with llama3.2-vision yet?

I get a core dump using the current jetson-containers Ollama container:
root@ubuntu:/# ollama run llama3.2-vision
pulling manifest
pulling 11f274007f09… 100% ▕██████████████████████████████▏ 6.0 GB
pulling ece5e659647a… 100% ▕██████████████████████████████▏ 1.9 GB
pulling 715415638c9c… 100% ▕██████████████████████████████▏ 269 B
pulling 0b4284c1f870… 100% ▕██████████████████████████████▏ 7.7 KB
pulling fefc914e46e6… 100% ▕██████████████████████████████▏ 32 B
pulling fbd313562bb7… 100% ▕██████████████████████████████▏ 572 B
verifying sha256 digest
writing manifest
success
Error: llama runner process has terminated: signal: aborted (core dumped)

Hi,

Which JetPack version and which Ollama container are you using?
Please use dustynv/ollama:r36.3.0 for JetPack 6.0 and dustynv/ollama:r36.4.0 for JetPack 6.1.

https://hub.docker.com/r/dustynv/ollama/tags
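For anyone landing on this thread, a typical way to launch one of these containers looks roughly like the following. The `--runtime nvidia`, host networking, and named-volume flags are the usual jetson-containers conventions rather than something stated in this thread, so adjust them for your setup:

```shell
# Pull and start the container matching your JetPack release,
# e.g. the r36.4.0 tag for JetPack 6.1 as noted above.
docker run --runtime nvidia -it --rm \
  --network=host \
  -v ollama:/root/.ollama \
  dustynv/ollama:r36.4.0

# Then, inside the container:
#   ollama run llama3.2-vision
```

The named volume keeps downloaded model weights across container restarts, which matters with multi-gigabyte pulls like the ones above.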

Thanks.

I am running dustynv/ollama:r36.4.0 on JetPack 6.1.
I think there are two possible explanations. Either the latest container doesn't ship a recent enough Ollama build (you need Ollama v0.4 or later to run Llama3.2-Vision and Llama3.2-Vision-Instruct), and I can't tell which version is in the container because it reports version "0.0.0.0"; or, if it is working for other people, my install is broken.
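On the "0.0.0.0" version string: two standard ways to check which Ollama build is actually running are the CLI's version flag and the server's HTTP version endpoint (11434 is Ollama's default port; this is a general note, not something confirmed in this thread):

```shell
# Ask the CLI directly
ollama -v

# Or query the running server's version endpoint
curl http://localhost:11434/api/version
```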

I checked again: I am definitely running dustynv/ollama:r36.4.0, and I get the same result (core dumped) when trying to run the Llama3.2-Vision models. However, llama3.2:latest (non-vision) runs fine, and Qwen2.5:32B-Coder runs great (surprisingly fast). So I don't think the container is broken; I think it just needs a later version of Ollama.

Hi,

Could you give 0.4.2 a try?
One of our users reports that version works without issue.

Thanks.

@nav-intel Here is a newer version with Ollama 0.5.1: dustynv/ollama:0.5.1-r36.4.0

@dusty_nv Hey, thanks Dusty! That's great. I am managing to run Ollama natively with the correct CUDA libs, and I am also messing around with Open WebUI, but I need to get the correct Jetson and CUDA libs into the Docker Compose file for Open WebUI. I am having fun, though. The only thing is, it's distracting me from building the Marine Navigation AI platform that's supposed to be our main focus.
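A minimal Compose sketch for wiring Open WebUI to an Ollama container with GPU access might look like this. The image tags, volume name, and `OLLAMA_BASE_URL` variable are assumptions drawn from the respective projects' conventions, not from this thread, so treat it as a starting point only:

```yaml
# docker-compose.yaml sketch (assumed names and tags, adjust as needed)
services:
  ollama:
    image: dustynv/ollama:0.5.1-r36.4.0
    runtime: nvidia        # NVIDIA container runtime for GPU/CUDA access
    network_mode: host     # Ollama serves on port 11434 by default
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    network_mode: host     # Open WebUI serves on port 8080 by default
    environment:
      - OLLAMA_BASE_URL=http://localhost:11434

volumes:
  ollama:
```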
What I have noticed is something peculiar about the Llama3.2-Vision models. Nemotron:70B runs fine, no issues, drawing up to 57 watts total. Llama3.3:70B runs fine, drawing up to 54 watts total, with no alerts. The LLaVA vision models draw 45 watts with no issues. But llama3.2:11b-vision hits 47 watts and triggers an over-current alert, and all the other Llama vision models do the same, while nothing else does. Odd!?

But the Jetson platform is awesome!
Cheers H.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.