Ollama with nemotron-mini

Hi all!
I would like to use Ollama along with nemotron-mini.
I decided to work with the nemotron-mini:4b-instruct-q4_K_M model pulled from Ollama. I can see that it sometimes has issues with tool handling, especially when multiple tools are defined. That wasn’t the case with other models I have tried. It seems that the output JSON is missing the arguments part, and this is why Ollama doesn’t handle the tool call properly.
I’m also using ollama-python, with the following example as a base solution: Ollama Python library 0.4 with function calling improvements · Ollama Blog. Due to memory constraints (8 GB Orin Nano), I can’t use a larger model.
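As a stopgap on the client side, you can tolerate the malformed tool calls instead of letting them crash the dispatch step. Below is a minimal sketch of that idea; `add_two_numbers` stands in for the example tool from the blog post, and `dispatch_tool_call` is a hypothetical helper (not part of ollama-python) that defaults a missing `arguments` field to `{}` and accepts arguments delivered as a JSON string:

```python
import json

def add_two_numbers(a: int, b: int) -> int:
    """Example tool, as in the Ollama blog post."""
    return a + b

# Hypothetical registry mapping tool names to callables.
TOOLS = {"add_two_numbers": add_two_numbers}

def dispatch_tool_call(call: dict):
    """Run one tool call from a model response, tolerating the
    missing-'arguments' failure mode seen with nemotron-mini."""
    fn = call.get("function", call)   # some responses nest the call under "function"
    name = fn.get("name")
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    args = fn.get("arguments") or {}  # default to {} when arguments are absent
    if isinstance(args, str):         # arguments may arrive as a JSON string
        args = json.loads(args)
    return TOOLS[name](**args)

# A well-formed call, and a degraded one with stringified arguments:
print(dispatch_tool_call({"function": {"name": "add_two_numbers",
                                       "arguments": {"a": 2, "b": 3}}}))  # 5
print(dispatch_tool_call({"name": "add_two_numbers",
                          "arguments": '{"a": 10, "b": 4}'}))             # 14
```

This doesn’t fix the model’s output, but it keeps the loop alive long enough to retry or fall back when the JSON is incomplete.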

Could you please guide me with the required/recommended steps to proceed?
Thank you!

Hi,

Have you followed the doc to optimize your memory usage?

Thanks

Thank you for your response.
I already have the GUI disabled and 16 GB of swap enabled.

What I had in mind is rather a customized prompt or template. Without an adjusted prompt, Ollama doesn’t handle tool calling properly: it often fails to extract the tool name and its arguments. Could you point me to some docs or tips to follow for proper tool calling with Ollama and Nemotron-mini?
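One way to experiment with the prompt without touching the client code is an Ollama Modelfile that layers a system prompt on top of the pulled model. This is only a sketch, assuming a system-level instruction helps the model emit complete tool-call JSON; the directives (`FROM`, `SYSTEM`, `PARAMETER`) are standard Modelfile keywords:

```
FROM nemotron-mini:4b-instruct-q4_K_M

# Hypothetical system prompt nudging the model toward complete tool-call JSON.
SYSTEM """When you decide to call a tool, reply with a single JSON object
containing both a "name" field and an "arguments" field. If the tool takes
no parameters, set "arguments" to {}."""

# Lower temperature tends to make structured output more consistent.
PARAMETER temperature 0
```

Then build and use the variant with `ollama create nemotron-tools -f Modelfile` and point your ollama-python calls at `nemotron-tools`. Whether this fully fixes the missing-arguments issue will depend on the model itself.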