An abnormal delay of 300 milliseconds was detected.
While stress testing vLLM at a load of 30 QPS, we observed an anomalous delay of about 300 ms, which occurred more frequently as the QPS increased. In Nsight, one thread was at 100% utilization, but its Python stack was empty.
In the experiment, the input length was 40 tokens and the output length was 20 tokens.
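For reference, the load pattern can be reproduced with a simple open-loop client against vLLM's OpenAI-compatible server. This is only a minimal sketch of how we drive the test, assuming a local server on port 8000 and the standard /v1/completions endpoint; the model name, prompt, and duration below are placeholders, not the exact values from the experiment.

```python
# Minimal open-loop load generator sketch (assumed setup, not the exact script used).
import asyncio
import time

import aiohttp

URL = "http://localhost:8000/v1/completions"  # assumed OpenAI-compatible endpoint
MODEL = "your-model"                          # placeholder model name
QPS = 30                                      # request rate used in the test
DURATION_S = 60                               # assumed test duration
PROMPT = "hello " * 40                        # roughly 40 input tokens
MAX_TOKENS = 20                               # 20 output tokens


async def one_request(session: aiohttp.ClientSession, latencies: list[float]) -> None:
    payload = {"model": MODEL, "prompt": PROMPT, "max_tokens": MAX_TOKENS}
    start = time.perf_counter()
    async with session.post(URL, json=payload) as resp:
        await resp.json()
    latencies.append(time.perf_counter() - start)


async def main() -> None:
    latencies: list[float] = []
    async with aiohttp.ClientSession() as session:
        tasks = []
        end = time.perf_counter() + DURATION_S
        while time.perf_counter() < end:
            # Open-loop arrivals: issue a new request every 1/QPS seconds
            # regardless of whether earlier requests have finished.
            tasks.append(asyncio.create_task(one_request(session, latencies)))
            await asyncio.sleep(1 / QPS)
        await asyncio.gather(*tasks)
    latencies.sort()
    p50 = latencies[len(latencies) // 2]
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    print(f"requests={len(latencies)} p50={p50 * 1000:.1f}ms p99={p99 * 1000:.1f}ms")


if __name__ == "__main__":
    asyncio.run(main())
```

With this kind of client, the 300 ms outliers show up in the tail latencies (p99) as the request rate grows.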
This situation is very puzzling. For more detailed experimental data, please see [Performance]: Why does VLLM perform worse than TGI in Speculative decoding? · Issue #7540 · vllm-project/vllm · GitHub.
How can we determine where this problem occurs and resolve it?