vLLM Compatibility Problem with GPT OSS 120B and OpenClaw via spark-vllm-docker

"models": {
  "providers": {
    "vllm": {
      "baseUrl": "http://192.168.178.39:8888/v1",
      "apiKey": "dummy",
      "api": "openai-completions",
      "models": [
        {
          "id": "vllm/Qwen/Qwen3-Coder-Next-FP8",
          "name": "Qwen3 Coder Next",
          "api": "openai-completions",
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          },
          "contextWindow": 262144,
          "maxTokens": 8196
        }
      ]
    }
  }
},
"agents": {
  "defaults": {
    "model": {
      "primary": "vllm/Qwen/Qwen3-Coder-Next-FP8"
    }
  }
}

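As a quick sanity check, the fragment above can be parsed as JSON once the curly quotes are replaced with straight quotes and the trailing braces are closed. A minimal sketch (the wrapping top-level braces are an assumption, since the fragment is part of a larger config file):

```python
import json

# The provider/agent fragment from above, wrapped in top-level braces so it
# parses as a standalone JSON document.
config_text = """
{
  "models": {
    "providers": {
      "vllm": {
        "baseUrl": "http://192.168.178.39:8888/v1",
        "apiKey": "dummy",
        "api": "openai-completions",
        "models": [
          {
            "id": "vllm/Qwen/Qwen3-Coder-Next-FP8",
            "name": "Qwen3 Coder Next",
            "api": "openai-completions",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 262144,
            "maxTokens": 8196
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {"primary": "vllm/Qwen/Qwen3-Coder-Next-FP8"}
    }
  }
}
"""

# json.loads raises an error if smart quotes or unbalanced braces slipped in.
config = json.loads(config_text)
model = config["models"]["providers"]["vllm"]["models"][0]

# The default agent model should reference an id that actually exists
# under the provider's model list.
assert config["agents"]["defaults"]["model"]["primary"] == model["id"]
print(model["id"])
```

This catches the most common failure mode with pasted configs: word-processor quotes (“ ”) that make the file invalid JSON before the client ever talks to vLLM.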
We tested this setup and found it acceptable for testing. We still get the same strict errors, but it works fine.