Comparison of benchmark results before and after optimization:
| Benchmark Case | Baseline (vLLM without any optimizations) | Optimized |
| --- | --- | --- |
| ShareGPT Profile | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 | Total TPS: 11768.22 (+2.09%)<br>Mean TPOT (ms): 15.85 |
| Throughput Profile | Total TPS: 29822.16<br>Mean TPOT (ms): 34.63 | Total TPS: 34470.14 (+15.59%)<br>Mean TPOT (ms): 36.97 |
| Long Context Profile | Total TPS: 32416.84<br>Mean TPOT (ms): 10.21 | Total TPS: 32658.73 (+0.75%)<br>Mean TPOT (ms): 9.97 |
| Generation Heavy Profile | Total TPS: 2973.42<br>Mean TPOT (ms): 0.33 | Total TPS: 3025.11 (+1.74%)<br>Mean TPOT (ms): 0.12 |
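The percentage deltas in the comparison are plain relative changes against the baseline. As a quick sanity check on the Throughput Profile row (a sketch; the numbers are taken from the table above):

```shell
# Relative TPS change for the Throughput Profile row:
# (optimized - baseline) / baseline * 100
awk 'BEGIN {
  baseline = 29822.16; optimized = 34470.14
  printf "+%.2f%%\n", (optimized - baseline) / baseline * 100
}'
```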
Note
Our benchmark tests do not cover all possible optimization combinations. For example, we select the inference engine that performs best under its default configuration as the starting point for further tuning. This pruning approach yields a local optimum, which may not be the global optimum.
There are other optimization methods that depend on specific user scenarios, such as max batch size, scheduler configuration, extended KV cache, and CUDA graphs. The conclusions in this document can serve as a starting point for more targeted optimization.
The tests are conducted on specific hardware and software setups. Advances in the inference engine may lead to new conclusions.
Although quantization may impact accuracy, FP8 quantization achieves less than a 1% accuracy drop for most models; see the evaluation results for more details. We therefore highly recommend FP8 quantization for low-latency serving scenarios.
Speculative decoding can significantly reduce latency for low-concurrency requests. However, the acceleration effect may vary depending on the data distribution of different benchmark datasets and the choice of draft models. For example, the chosen draft model here is trained on English data, which may lead to suboptimal performance on other languages.
If we have missed anything, or if new changes warrant an update, please let us know.
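The tuning knobs mentioned in the notes above can be combined on a single command line. A minimal sketch, assuming vLLM's CLI flag names (which may differ between versions); the draft model is left as a placeholder, not a recommendation:

```shell
# Sketch combining scheduler limits, prefix caching, and FP8 quantization.
# Flag names assume vLLM's CLI and may differ between versions.
vllm serve Qwen/Qwen3.5-9B \
  --max-num-seqs 512 \
  --max-num-batched-tokens 32768 \
  --enable-prefix-caching \
  --quantization fp8

# Speculative decoding variant (the draft model is a hypothetical placeholder):
vllm serve Qwen/Qwen3.5-9B \
  --speculative-config '{"model": "<draft-model>", "num_speculative_tokens": 3}'
```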
Experimental Setup
- Model: Qwen/Qwen3.5-9B
- Hardware: NVIDIA H100 80GB HBM3
- Engine Versions: vLLM v0.17.1, SGLang v0.5.9
Benchmark Method
All benchmark tests in this document were executed with GPUStack's one-click benchmark workflow for serving workloads. GPUStack's benchmark implementation is built on top of guidellm via the wrapper project benchmark-runner, and it handles model deployment, benchmark job submission, and result collection for the configurations listed below.
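For readers who want to reproduce a run outside GPUStack, an equivalent direct guidellm invocation might look like the following. This is a hypothetical sketch: the flag names and rate-type value are assumptions about guidellm's CLI and may differ by version.

```shell
# Hypothetical direct guidellm run matching the 512-concurrency,
# 1000-request setup used below (flag names are assumptions and
# may differ by guidellm version).
guidellm benchmark \
  --target "http://localhost:8000" \
  --rate-type concurrent \
  --rate 512 \
  --max-requests 1000 \
  --data "prompt_tokens=512,output_tokens=256"
```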
SGLang, default configuration:

```
============ Serving Benchmark Result ============
Successful requests: 1000
Maximum request concurrency: 512
Benchmark duration (s): 96.01
Total input tokens: 342058
Total generated tokens: 281412
Request throughput (req/s): 10.42
Output token throughput (tok/s): 2933.60
Peak output token throughput (tok/s): 92831.68
Peak concurrent requests: 512.00
Total Token throughput (tok/s): 6499.41
----------------------Latency---------------------
Mean Latency(s): 39.08
Median Latency(s): 41.18
P99 Latency(s): 74.40
---------------Time to First Token----------------
Mean TTFT (ms): 26217.15
Median TTFT (ms): 34401.34
P95 TTFT (ms): N/A
P99 TTFT (ms): 41101.27
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 138.86
Median TPOT (ms): 112.19
P95 TPOT (ms): N/A
P99 TPOT (ms): 591.70
---------------Inter-token Latency----------------
Mean ITL (ms): 45.86
Median ITL (ms): 49.31
P95 ITL (ms): N/A
P99 ITL (ms): 59.71
==================================================
```
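For reading these reports: TPOT (time per output token) excludes the first token, whose latency is captured separately as TTFT. A minimal sketch of the per-request arithmetic, with illustrative numbers rather than values from the benchmark:

```shell
# Per-request TPOT excluding the first token:
# (end_to_end_latency - TTFT) / (output_tokens - 1)
awk 'BEGIN {
  latency_ms = 5000; ttft_ms = 800; output_tokens = 101
  printf "TPOT %.2f ms\n", (latency_ms - ttft_ms) / (output_tokens - 1)
}'
```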
Summary: vLLM Total TPS = 11527.14, SGLang Total TPS = 6499.41. vLLM is faster by 5027.72 tok/s (77.36%); Mean TPOT = 15.12 ms vs 138.86 ms, reduced by 123.74 ms (89.11%).
vLLM, max num seqs 2048:

```
============ Serving Benchmark Result ============
Successful requests: 1000
Maximum request concurrency: 512
Benchmark duration (s): 55.28
Total input tokens: 342058
Total generated tokens: 281412
Request throughput (req/s): 18.09
Output token throughput (tok/s): 5140.19
Peak output token throughput (tok/s): 23553347.13
Peak concurrent requests: 512.00
Total Token throughput (tok/s): 11388.13
----------------------Latency---------------------
Mean Latency(s): 24.34
Median Latency(s): 22.16
P95 Latency(s): 48.86
P99 Latency(s): 51.63
---------------Time to First Token----------------
Mean TTFT (ms): 1853.85
Median TTFT (ms): 1166.25
P95 TTFT (ms): 5543.84
P99 TTFT (ms): 6126.03
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 16.91
Median TPOT (ms): 4.35
P95 TPOT (ms): 100.86
P99 TPOT (ms): 129.26
---------------Inter-token Latency----------------
Mean ITL (ms): 10.36
Median ITL (ms): 0.00
P95 ITL (ms): 94.30
P99 ITL (ms): 118.29
==================================================
```
Summary: Seqs 1024 Total TPS = 11532.64, Seqs 2048 Total TPS = 11388.13. Seqs 1024 is faster by 144.51 tok/s (1.27%); Mean TPOT = 16.87 ms vs 16.91 ms, reduced by 0.04 ms (0.25%).
vLLM, max batched tokens 32k:

```
============ Serving Benchmark Result ============
Successful requests: 1000
Maximum request concurrency: 512
Benchmark duration (s): 54.20
Total input tokens: 342058
Total generated tokens: 281412
Request throughput (req/s): 18.45
Output token throughput (tok/s): 5268.65
Peak output token throughput (tok/s): 29985425.61
Peak concurrent requests: 512.00
Total Token throughput (tok/s): 11672.72
----------------------Latency---------------------
Mean Latency(s): 23.72
Median Latency(s): 21.21
P99 Latency(s): 50.44
---------------Time to First Token----------------
Mean TTFT (ms): 1970.35
Median TTFT (ms): 1344.71
P95 TTFT (ms): N/A
P99 TTFT (ms): 5206.15
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 16.53
Median TPOT (ms): 5.04
P95 TPOT (ms): N/A
P99 TPOT (ms): 125.98
---------------Inter-token Latency----------------
Mean ITL (ms): 9.56
Median ITL (ms): 0.00
P95 ITL (ms): N/A
P99 ITL (ms): 111.59
==================================================
```
Summary: 32k Total TPS = 11672.72, 24k Total TPS = 11512.90. 32k is faster by 159.82 tok/s (1.39%); Mean TPOT = 16.53 ms vs 15.83 ms, increased by 0.70 ms (4.43% slower).
vLLM, max num seqs 512:

```
============ Serving Benchmark Result ============
Successful requests: 1000
Maximum request concurrency: 512
Benchmark duration (s): 53.72
Total input tokens: 342058
Total generated tokens: 281412
Request throughput (req/s): 18.62
Output token throughput (tok/s): 5311.75
Peak output token throughput (tok/s): 23159852.52
Peak concurrent requests: 512.00
Total Token throughput (tok/s): 11768.22
----------------------Latency---------------------
Mean Latency(s): 23.50
Median Latency(s): 21.13
P99 Latency(s): 50.04
---------------Time to First Token----------------
Mean TTFT (ms): 1826.53
Median TTFT (ms): 1004.67
P95 TTFT (ms): N/A
P99 TTFT (ms): 5582.79
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 15.85
Median TPOT (ms): 4.49
P95 TPOT (ms): N/A
P99 TPOT (ms): 123.40
---------------Inter-token Latency----------------
Mean ITL (ms): 9.39
Median ITL (ms): 0.00
P95 ITL (ms): N/A
P99 ITL (ms): 112.55
==================================================
```
Summary: Max Num Seqs 512 Total TPS = 11768.22, Max Num Seqs 256 Total TPS = 11019.18. Max Num Seqs 512 is faster by 749.05 tok/s (6.80%); Mean TPOT = 15.85 ms vs 44.58 ms, reduced by 28.73 ms (64.45%).
vLLM, max batched tokens 48k and max num seqs 768:

```
============ Serving Benchmark Result ============
Successful requests: 1000
Maximum request concurrency: 512
Benchmark duration (s): 54.07
Total input tokens: 342058
Total generated tokens: 281412
Request throughput (req/s): 18.49
Output token throughput (tok/s): 5248.67
Peak output token throughput (tok/s): 36318641.50
Peak concurrent requests: 512.00
Total Token throughput (tok/s): 11628.47
----------------------Latency---------------------
Mean Latency(s): 23.72
Median Latency(s): 21.75
P95 Latency(s): N/A
P99 Latency(s): 50.48
---------------Time to First Token----------------
Mean TTFT (ms): 1817.87
Median TTFT (ms): 1821.59
P95 TTFT (ms): N/A
P99 TTFT (ms): 5041.22
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 15.65
Median TPOT (ms): 4.57
P95 TPOT (ms): N/A
P99 TPOT (ms): 123.59
---------------Inter-token Latency----------------
Mean ITL (ms): 9.23
Median ITL (ms): 0.00
P95 ITL (ms): N/A
P99 ITL (ms): 114.09
==================================================
```
Summary: Batch Token 48k and Seqs 768 Total TPS = 11628.47, Batch Token 32k and Seqs 512 Total TPS = 11333.89. Batch Token 48k and Seqs 768 is faster by 294.57 tok/s (2.60%); Mean TPOT = 15.65 ms vs 16.39 ms, reduced by 0.74 ms (4.51%).
Summary of Optimization Options

| Optimization Option | Optimized | Baseline |
| --- | --- | --- |
| Choosing the Inference Engine | Total TPS: 11527.14 (+0.00%)<br>Mean TPOT (ms): 15.12 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Prefix Cache | Total TPS: 11224.70 (-2.62%)<br>Mean TPOT (ms): 16.66 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Speculative Decoding | Total TPS: 15059.89 (Success rate: 51.5%, optimization skipped)<br>Mean TPOT (ms): 42.50 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Performance Mode | Total TPS: 11496.39 (-0.27%)<br>Mean TPOT (ms): 16.41 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Max Num Seqs | Total TPS: 11532.64 (+0.05%)<br>Mean TPOT (ms): 16.87 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Max Batched Tokens && Performance Mode | Total TPS: 11672.72 (+1.26%)<br>Mean TPOT (ms): 16.53 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Max Num Seqs && Performance Mode | Total TPS: 11768.22 (+2.09%)<br>Mean TPOT (ms): 15.85 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
| Max Num Seqs && Max Batched Tokens | Total TPS: 11333.89 (-1.68%)<br>Mean TPOT (ms): 16.39 | Total TPS: 11527.14<br>Mean TPOT (ms): 15.12 |
Max Num Seqs && Max Batched Tokens && Performance Mode