Qwen/Qwen3.6-35B-A3B (and FP8) has landed

Just wanted to chime in with the Qwen/Qwen3.6-35B-A3B-FP8 recipe that has been really solid for me with OpenCode for a couple of days now.

cat recipes/qwen3.6-35b-a3b-fp8.yaml

# Recipe: Qwen/Qwen3.6-35B-A3B model in native FP8 format

recipe_version: "1"
name: Qwen36-35B-A3B
description: vLLM serving Qwen3.6-35B-A3B-FP8

# HuggingFace model to download (optional, for --download-model)
model: Qwen/Qwen3.6-35B-A3B-FP8

solo_only: true

# Container image to use
container: vllm-node-tf5

# Mods to apply
mods:
  - mods/fix-qwen3.6-chat-template

# Default settings (can be overridden via CLI)
defaults:
  port: 8000
  host: 0.0.0.0
  gpu_memory_utilization: 0.85
  max_model_len: 262144
  max_num_batched_tokens: 32768

# Environment variables
env:
  VLLM_MARLIN_USE_ATOMIC_ADD: 1

# The vLLM serve command template
command: |
  vllm serve Qwen/Qwen3.6-35B-A3B-FP8 \
    --served-model-name qwen36 \
    --host {host} \
    --port {port} \
    --kv-cache-dtype bfloat16 \
    --max-model-len {max_model_len} \
    --max-num-batched-tokens {max_num_batched_tokens} \
    --gpu-memory-utilization {gpu_memory_utilization} \
    --enable-auto-tool-choice \
    --trust-remote-code \
    --tool-call-parser qwen3_coder \
    --reasoning-parser qwen3 \
    --attention-backend flashinfer \
    --load-format instanttensor \
    --default-chat-template-kwargs '{{"preserve_thinking": true}}' \
    --override-generation-config '{{"temperature": 0.6, "top_p": 0.95, "top_k": 20, "presence_penalty": 0.0, "repetition_penalty": 1.0}}' \
    --speculative-config '{{"method":"mtp","num_speculative_tokens":3}}' \
    --max-num-seqs 4 \
    --language-model-only \
    --enable-prefix-caching
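A note on the doubled braces in the recipe: I'm assuming the recipe runner expands the `{placeholder}` fields with Python-style `str.format()` (this is my reading of the template syntax, not confirmed runner internals), in which case `{{` and `}}` collapse to literal braces so vLLM receives valid JSON. A minimal sketch:

```python
# Hypothetical sketch (not the actual run-recipe.sh internals): assuming the
# runner expands the command template with Python's str.format(), the doubled
# braces in the JSON-valued arguments collapse to literal single braces.
template = (
    "vllm serve Qwen/Qwen3.6-35B-A3B-FP8 "
    "--host {host} --port {port} "
    "--default-chat-template-kwargs '{{\"preserve_thinking\": true}}'"
)
cmd = template.format(host="0.0.0.0", port=8000)
print(cmd)
```

This is why `--default-chat-template-kwargs`, `--override-generation-config`, and `--speculative-config` are written with `{{ }}` in the recipe: after substitution, vLLM sees plain single-brace JSON.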

cat mods/fix-qwen3.6-chat-template/run.sh (which could probably be cleaner/smarter):

#!/bin/bash
set -e

CHAT_TEMPLATE="qwen3.5-enhanced.jinja"

# WORKSPACE_DIR is expected to be set by the recipe runner
: "${WORKSPACE_DIR:?WORKSPACE_DIR must be set}"

if [ -f "${CHAT_TEMPLATE}" ] && [ -s "${CHAT_TEMPLATE}" ]
then
  cp "${CHAT_TEMPLATE}" "${WORKSPACE_DIR}/${CHAT_TEMPLATE}"
  echo "=======> to apply chat template, use --chat-template ${CHAT_TEMPLATE}"
else
  echo "# See https://github.com/allanchan339/vLLM-Qwen3.5-27B/tree/main and"
  echo "# https://github.com/allanchan339/vLLM-Qwen3.5-27B/blob/main/qwen3.5-enhanced.jinja"
  exit 1
fi

Then I run with `./run-recipe.sh qwen3.6-35b-a3b-fp8 --chat-template qwen3.5-enhanced.jinja -e HF_TOKEN=${HF_TOKEN}`

If it matters, running vLLM reports: 0.19.1rc1.dev374+g1174723eb.d20260417

Token generation is usually in the 30-40 tok/s range, which I'm very happy with!


For OpenCode, I read a hint somewhere that setting `"npm": "@ai-sdk/anthropic"` would help reduce tool-call failures; since setting it, I have not encountered a single failed tool call.

opencode.json (YMMV on the various reserved/context/input/output values):

{
    "$schema": "https://opencode.ai/config.json",
    "compaction": {
      "auto": true,
      "prune": true,
      "reserved": 16384
    },
    "model": "local/qwen36",
    "provider": {
        "local": {
            "npm": "@ai-sdk/anthropic",
            "name": "local",
            "options": {
                "baseURL": "http://PUT_YOUR_IP_ADDRESS_HERE:8000/v1",
                "apiKey": "dummy"
            },
            "models": {
                "qwen36": {
                    "name": "qwen36",
                    "limit": {
                        "context": 212992,
                        "input": 180224,
                        "output": 32768
                    }
                }
            }
        }
    },
    "agent": {
        "build": {
            "temperature": 0.6,
            "top_p": 0.95,
            "max_tokens": 32768
        },
        "plan": {
            "temperature": 0.6,
            "top_p": 0.95,
            "max_tokens": 32768
        }
    }
}
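For what it's worth, the limit values above are self-consistent rather than arbitrary; this is my own sanity math, not anything OpenCode enforces:

```python
# Self-consistency check of the opencode.json limits against the vLLM recipe:
# context = input + output, and context plus the compaction reserve should
# still fit inside vLLM's max_model_len.
max_model_len = 262_144          # from the vLLM recipe
context, inp, out = 212_992, 180_224, 32_768
reserved = 16_384                # compaction.reserved

assert context == inp + out
assert context + reserved <= max_model_len
print(f"headroom: {max_model_len - context - reserved} tokens")
# → headroom: 32768 tokens
```

So there is comfortable headroom under the 262144-token `max_model_len` even with the compaction reserve counted.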

Feedback welcome!

6 Likes

Yes, the Anthropic trick for OpenCode did wonders for me. I switch between Claude Code and OpenCode, so this was a blessing :) I'm running a very similar recipe to yours. My main difference is daily-driving 2 MTP tokens instead, but I'll try more benchmarks and real-world work with 4 to see whether it's a net win or not.

2 Likes

The DFlash drafter for Qwen3.6-35B-A3B was just updated. They removed the comment that training was ongoing, so this may be essentially its final form.

This version is not dramatically different for me in off-code and off-benchmark testing. I see maybe 1-2 tok/s improved overall throughput. We really need DDTree to make long block diffusion drafters more generally useful outside synthetics…

Meanwhile, it seems like Qwen improved the built-in MTP for 3.6 because I definitely notice higher acceptance rates out to 3 positions regardless of task.

2 Likes

I just let the startup run overnight… now I see it went through and startup completed, but it took over 3 hours to finish. I just can't figure it out. Are there some connection problems? llama-benchy results look good, so it's not a power-delivery bug.

My case was simply a model snapshot that had not been transferred to the node, but yours looks different, unfortunately.

Woah sick! I just got a DGX Spark, how do I run this on mine? I read the thread and am still confused

**Model comparison: Tool Eval Bench (last run for each)**

| # | Model | Score | Pts | Deploy | Respons. | Median turn | Pass | Part | Fail |
|---|-------|------:|----:|-------:|---------:|------------:|-----:|-----:|-----:|
| 1 | nvidia/Qwen3.5-397B-A17B-NVFP4 | 90 | 124/138 | 70 | 25 | 6.3 s | 57 | 10 | 2 |
| 2 | Intel/Qwen3.5-397B-A17B-int4-AutoRound | 88 | 122/138 | 72 | 36 | 4.4 s | 56 | 10 | 3 |
| 2 | cyankiwi/MiniMax-M2.7-AWQ-4bit | 88 | 122/138 | 79 | 58 | 2.4 s | 55 | 12 | 2 |
| 4 | Intel/Qwen3.6-35B-A3B-int4-AutoRound | 87 | 120/138 | 78 | 58 | 2.4 s | 55 | 10 | 4 |

**Throughput (single stream @ depth=0, pp2048/tg128)**

| Model | pp t/s | tg t/s | TTFT | c2 tg/s | c4 tg/s |
|-------|-------:|-------:|-----:|--------:|--------:|
| Intel/Qwen3.6-35B-A3B-int4-AutoRound | 5,489 | 70.6 | 457 ms | 124.1 | 183.3 |
| cyankiwi/MiniMax-M2.7-AWQ-4bit | 4,514 | 50.0 | 574 ms | 67.7 | 87.1 |
| nvidia/Qwen3.5-397B-A17B-NVFP4 | 3,112 | 24.6 | 816 ms | 47.2 | 63.0 |
| Intel/Qwen3.5-397B-A17B-int4-AutoRound | 3,088 | 37.5 | 793 ms | 56.2 | 86.1 |

**Summary by winner**

| Metric | Winner |
|--------|--------|
| Pure quality (score) | nvidia/Qwen3.5-397B-NVFP4 (90) |
| Deployability | MiniMax-M2.7-AWQ-4bit (79) |
| Prefill throughput | Intel/Qwen3.6-35B (5.5k t/s) |
| Decode throughput | Intel/Qwen3.6-35B (70 t/s) |
| TTFT latency | Intel/Qwen3.6-35B (457 ms) |
| Fewest failures | nvidia-NVFP4 / MiniMax (2) |
| Best balance | MiniMax (88 score + 2.4 s turn + best deploy) |

MiniMax 2.7 at TP=4; Qwen3.5 397B at TP=4.

Please clarify: according to your data, the MiniMax model reaches 50 t/s. Is that achieved on two DGX Sparks?
I was only able to get a maximum of 38 t/s.

MiniMax 2.7 AWQ and Qwen3.5 397 AutoRound; the tests were run with TP=4.

Category Breakdown

| Category | Score | Earned |
|----------|------:|-------:|
| Tool Selection | 100% | 6/6 |
| Parameter Precision | 67% | 4/6 |
| Multi-Step Chains | 100% | 8/8 |
| Restraint & Refusal | 83% | 5/6 |
| Error Recovery | 83% | 5/6 |
| Localization | 100% | 6/6 |
| Structured Reasoning | 100% | 6/6 |
| Instruction Following | 100% | 10/10 |
| Context & State | 70% | 14/20 |
| Code Patterns | 100% | 6/6 |
| Safety & Boundaries | 92% | 24/26 |
| Toolset Scale | 75% | 6/8 |
| Autonomous Planning | 83% | 5/6 |
| Creative Composition | 83% | 5/6 |
| Structured Output | 100% | 12/12 |

🏆 Benchmark Complete

Model:  cyankiwi/MiniMax-M2.7-AWQ-4bit
Score:  88 / 100
Rating: ★★★★ Good

✅ 55 passed · ⚠️ 12 partial · ❌ 2 failed
Points: 122/138

Quality:        88/100
Responsiveness: 58/100 (median turn: 2.4 s)
Deployability:  79/100 (α=0.7)
Weakest: B Parameter Precision (67%)

Completed in 542.9 s

📊 Token usage: 231,164 total · efficiency 0.5 pts/1K tokens

⚡ Throughput:
- Single: 4,514 pp t/s · 50.0 tg t/s · TTFT 574 ms
- c2:     3,564 pp t/s · 67.7 tg t/s
- c4:     3,833 pp t/s · 87.1 tg t/s

How this score is calculated:
- Each scenario: pass = 2 pts, partial = 1 pt, fail = 0 pts
- Category %: earned / max per category
- Final score: (total points / max points) × 100
- Deployability: 0.7 × quality + 0.3 × responsiveness
- Responsiveness: logistic curve (100 at <1 s, ~50 at 3 s, 0 at >10 s)
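The headline numbers are internally consistent with the stated formula; a quick re-derivation (using the pass/partial/fail counts and the α=0.7 deployability weighting from the summary):

```python
# Re-derive the benchmark's headline numbers from its own stated formula:
# pass = 2 pts, partial = 1 pt, fail = 0 pts; score = points / max * 100.
passed, partial, failed = 55, 12, 2
points = passed * 2 + partial                  # 122
max_points = (passed + partial + failed) * 2   # 138
score = round(points / max_points * 100)       # 88

responsiveness = 58
deployability = round(0.7 * score + 0.3 * responsiveness)  # 79

print(points, max_points, score, deployability)
# → 122 138 88 79
```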

llama-benchy Results

| Test | c | pp t/s | tg t/s | TTFT (ms) | Total (ms) | Tokens |
|------|---|-------:|-------:|----------:|-----------:|--------|
| pp2048 tg128 @ d0 | c1 | 4,514 | 50.0 | 574 | 3,018 | 2048+128 |
| pp2048 tg128 @ d0 | c2 | 3,564 | 67.7 | 898 | 4,267 | 2048+128 |
| pp2048 tg128 @ d0 | c4 | 3,833 | 87.1 | 1,354 | 6,229 | 2048+128 |
| pp2048 tg128 @ d4096 | c1 | 3,526 | 47.8 | 1,859 | 4,418 | 2048+128 |
| pp2048 tg128 @ d4096 | c2 | 3,242 | 48.3 | 2,835 | 6,810 | 2048+128 |
| pp2048 tg128 @ d4096 | c4 | 3,263 | 50.2 | 4,952 | 11,277 | 2048+128 |
| pp2048 tg128 @ d8192 | c1 | 3,009 | 45.7 | 3,519 | 6,203 | 2048+128 |
| pp2048 tg128 @ d8192 | c2 | 2,870 | 35.9 | 5,344 | 9,898 | 2048+128 |
| pp2048 tg128 @ d8192 | c4 | 2,892 | 33.2 | 9,092 | 17,013 | 2048+128 |

Wow! I'll have to work on my recipe and try to achieve similar results!
Your settings are excellent.

1 Like
# Recipe: MiniMax-M2.7-AWQ on 4x Spark cluster (TP=4)
# cyankiwi/MiniMax-M2.7-AWQ-4bit

recipe_version: "1"
name: MiniMax-M2.7-AWQ
description: vLLM serving MiniMax-M2.7-AWQ-4bit across 4 Sparks with Ray distributed backend

# HuggingFace model to download (optional, for --download-model)
model: cyankiwi/MiniMax-M2.7-AWQ-4bit

# Container image to use
container: vllm-node-tf5

# Can only be run in a cluster
cluster_only: true

# No mods required
mods: []

# Default settings (can be overridden via CLI)
defaults:
  port: 8000
  host: 0.0.0.0
  tensor_parallel: 4
  gpu_memory_utilization: 0.85
  max_model_len: 196608
  max_num_seqs: 32

# Environment variables
env:
  VLLM_DISTRIBUTED_EXECUTOR_CONFIG: '{"placement_group_options":{"strategy":"SPREAD"}}'

# The vLLM serve command template
command: |
  vllm serve cyankiwi/MiniMax-M2.7-AWQ-4bit \
      --trust-remote-code \
      --port {port} \
      --host {host} \
      --gpu-memory-utilization {gpu_memory_utilization} \
      -tp {tensor_parallel} \
      --distributed-executor-backend ray \
      --max-model-len {max_model_len} \
      --max-num-seqs {max_num_seqs} \
      --kv-cache-dtype fp8_e4m3 \
      --load-format instanttensor \
      --enable-auto-tool-choice \
      --enable-prefix-caching \
      --tool-call-parser minimax_m2 \
      --reasoning-parser minimax_m2

The recipe itself is standard; the difference is that TP=4 here spans 4 DGX nodes.

3 Likes

Try the ASUS Ascent GX10 firmware update. I saw some weird improvements in my setup that are hard to explain but started after …

Thanks for sharing! Looks promising! Can you please explain the mods part of the recipe? I got an error trying to run with those parameters:

```
Applying mod 'fix-qwen3.6-chat-template' to 127.0.0.1…
Copying directory content to container…
Successfully copied 2.56kB to vllm_node:/workspace/mods/fix-qwen3.6-chat-template/
Running patch script on 127.0.0.1…
Error: Patch script failed on 127.0.0.1
```

Can you explain how to apply this patch?

Explained it in detail here

3 Likes

A few million tokens and ~700 tool calls in, with no major errors or crashes in Openclaw. @whpthomas model is dead stable and fast: Bfloat16 Quality = Speed? - #9 by whpthomas

I'm sticking with this one until the next disruptive model release :-)

2 Likes

Succeeded! Thank you very much for the help (still learning all the needed config params for vLLM)!

My understanding was that the chat-template patch wasn't necessary with Qwen 3.6. Did I get that wrong?

Last night…

1 Like

I still find this patch more reliable even with 3.6. YMMV

2 Likes

@whpthomas I benchmarked it with the patch vs. without, and I got a higher score on tool-eval-bench without the patch, using the default template and the qwen3_coder parser instead of xml. That said, I don't know how well the benchmark reflects the real-world tasks we all do.