Why does OpenClaw call the Ollama API, but the model doesn't remember my previous question? How do I set up Ollama?

curl -fsSL https://ollama.com/install.sh | sh
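After the install script finishes, it helps to verify that the Ollama server is actually listening before wiring up any client. A minimal check, assuming the default port 11434 (the model name in the comment is just an example):

```shell
# Quick post-install check: is the Ollama server answering on its
# default port (11434)?
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  OLLAMA_UP=yes
  echo "Ollama server is up"
  # ollama pull glm-4.7-flash:bf16   # then pull the model you plan to configure
else
  OLLAMA_UP=no
  echo "Ollama server not reachable; start it with: ollama serve"
fi
```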

Can you provide more details on your issue? Logs and commands you run would be much appreciated.


OLLAMA INSTALL

OPENCLAW

```json
"ollama": {
  "baseUrl": "http://192.168.2.94:11434/v1",
  "apiKey": "ollama-local",
  "api": "openai-completions",
  "models": [
    {
      "id": "glm-4.7-flash:bf16",
      "name": "GLM-4.7 Flash BF16",
      "reasoning": false,
      "input": ["text"],
      "cost": {
        "input": 0,
        "output": 0,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 131072,
      "maxTokens": 8192
    }
  ]
}
```
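With a config like that in place, you can sanity-check the OpenAI-compatible endpoint directly with curl, independent of OpenClaw. The IP, port, key, and model id below are copied from the config above; adjust them for your network.

```shell
# Hit the OpenAI-compatible chat endpoint that the config points at.
# Values (host, key, model id) come from the config above.
RESP=$(curl -s --max-time 5 http://192.168.2.94:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama-local" \
  -d '{"model": "glm-4.7-flash:bf16", "messages": [{"role": "user", "content": "Say hello"}]}' \
  || true)
RESP=${RESP:-"endpoint not reachable"}
echo "$RESP"
```

If this returns a JSON completion but OpenClaw still misbehaves, the problem is on the client side rather than in Ollama.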

Respectfully, using OpenClaw is generally beyond the scope of this forum, and you are unlikely to get much handholding. Setting up Ollama is straightforward with this playbook: Open WebUI with Ollama | DGX Spark

There is nothing specific to Spark about openclaw. Try doing a Google search for the proper syntax for openclaw.json.


Surprisingly there is a dedicated guide for this.

OP - this is all you should need.

Anything beyond that you’d be on your own.


Wow! Did not know.

You could/should run llama.cpp or vLLM instead and point "baseUrl": "http://localhost:1234/v1" at your local IP … it's more pure, more honest ;)
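For the llama.cpp route, the `llama-server` binary exposes an OpenAI-compatible API under `/v1` on whatever port you choose. A sketch, where the model path and port are assumptions you would adjust to your setup:

```shell
# Sketch: serve a GGUF model with llama.cpp's llama-server, which exposes
# an OpenAI-compatible API under /v1. Model path and port are examples.
if command -v llama-server >/dev/null 2>&1; then
  llama-server -m ./models/glm-4.7-flash.bf16.gguf --host 0.0.0.0 --port 1234 &
  STATUS="started llama-server on port 1234 (pid $!)"
else
  STATUS="llama-server not found; build it from the llama.cpp repository"
fi
echo "$STATUS"
```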

@hongde
Have you enabled the following settings? When I use Ollama with glm-4.7, it retains the conversation context. You might also try updating to the latest version of OpenClaw to see if that helps.
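The "no memory" symptom usually comes down to the chat completions API being stateless: the server keeps no conversation state between calls, so the client (OpenClaw, or anything else) must resend the full message history with every request. A minimal Python sketch of that pattern, reusing the endpoint and model id from the config earlier in the thread (the helper name is hypothetical):

```python
import json

BASE_URL = "http://192.168.2.94:11434/v1"  # from the OpenClaw config above
MODEL = "glm-4.7-flash:bf16"

def build_request(history, user_message):
    """Append the new user turn and build the request payload.

    `history` is the full list of prior {"role", "content"} messages.
    Forgetting to resend it is exactly what makes the model look amnesic.
    """
    history = history + [{"role": "user", "content": user_message}]
    payload = {"model": MODEL, "messages": history}
    return history, payload

history = []
history, payload = build_request(history, "What is Ollama?")
# A real client would now POST `payload` to BASE_URL + "/chat/completions"
# and append the assistant's reply to `history` before the next turn:
history.append({"role": "assistant", "content": "Ollama is a local LLM runtime."})
history, payload = build_request(history, "Repeat my previous question.")
print(json.dumps(payload, indent=2))
```

Note that the second payload carries all three prior messages; if a client sends only the latest user turn, the model has nothing to "remember" from.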

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.