Gemma 4 Models - which vLLM version? Any PRs spotted?

I did see a reference to these flags in another post, but they are using llama-server with reasoning: Guide: Gemma 4 31B on DGX Spark via NemoClaw — Dual-Model Setup Guide - #2 by Digital_David

But when I try to use them with vLLM via spark-vllm-docker, I run into an error: "Error: Missing parameter in recipe command: '"enable_thinking"'"
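
If these flags correspond to the enable_thinking chat-template kwarg (I'm guessing from the error text), my understanding is that vLLM's OpenAI-compatible server accepts chat_template_kwargs per request, so as a workaround I tried roughly the sketch below instead of putting the flag in the recipe command. The model name here is just a placeholder, not necessarily what the container serves:

```python
# Workaround sketch: pass enable_thinking per request through vLLM's
# OpenAI-compatible endpoint rather than via the spark-vllm-docker recipe.
# Assumes the server is already running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="gemma-4-31b",  # placeholder; use the model name the server reports
    messages=[{"role": "user", "content": "Hello"}],
    # vLLM forwards chat_template_kwargs to the chat template when rendering
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(resp.choices[0].message.content)
```

That at least avoids the recipe parser entirely, but I'm not sure it's the intended way to enable these flags on this setup.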

Is there some method needed to enable these, or have the fixes been merged into vLLM?