
Overview


GPUStack is an open-source GPU cluster manager designed for efficient AI model deployment. It runs models on your own GPU hardware by selecting the best-suited inference engine, scheduling GPU resources, analyzing model architectures, and automatically configuring deployment parameters.

The following figure shows how GPUStack delivers improved inference throughput over the unoptimized vLLM baseline:

[Figure: A100 throughput comparison, GPUStack vs. the unoptimized vLLM baseline]

For detailed benchmarking methods and results, visit our Inference Performance Lab.

Tested Inference Engines, GPUs, and Models

GPUStack uses a plug-in architecture that makes it easy to add new AI models, inference engines, and GPU hardware. We work closely with partners and the open-source community to test and optimize emerging models across different inference engines and GPUs. Below is the current list of tested inference engines, GPUs, and models, which will continue to expand over time.

Tested Inference Engines:

  • vLLM
  • SGLang
  • TensorRT-LLM
  • MindIE

Tested GPUs:

  • NVIDIA A100
  • NVIDIA H100/H200
  • Ascend 910B

Tuned Models:

  • Qwen3
  • gpt-oss
  • GLM-4.5-Air
  • GLM-4.5/4.6
  • DeepSeek-R1

Architecture

GPUStack enables development teams, IT organizations, and service providers to deliver Model-as-a-Service at scale. It supports industry-standard APIs for LLM, voice, image, and video models. The platform includes built-in user authentication and access control, real-time monitoring of GPU performance and utilization, and detailed metering of token usage and API request rates.
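For LLM serving, the industry-standard API in practice is the OpenAI API. The sketch below shows a chat completion against a GPUStack deployment using the official openai Python client. This is a minimal illustration, not an official example: the server address, the /v1 base path, the API key, and the model name qwen3 are all placeholders, and the exact endpoint path may differ between GPUStack versions.

    # A minimal sketch, not an official example. The base URL path, API key,
    # and model name are assumptions; check your server's API documentation.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://your-gpustack-server/v1",  # assumed OpenAI-compatible path
        api_key="YOUR_GPUSTACK_API_KEY",
    )

    response = client.chat.completions.create(
        model="qwen3",  # placeholder: use the name of a model you deployed
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

Because the interface is OpenAI-compatible, existing tooling built against that client, including client.models.list() for enumerating deployed models, should work unchanged.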

The figure below illustrates how a single GPUStack server can manage multiple GPU clusters across both on-premises and cloud environments. The GPUStack scheduler allocates GPUs to maximize resource utilization and selects the appropriate inference engines for optimal performance. Administrators also gain full visibility into system health and metrics through integrated Grafana and Prometheus dashboards.

[Figure: GPUStack v2 architecture]

GPUStack provides a powerful framework for deploying AI models. Its core features include:

  • Multi-Cluster GPU Management. Manages GPU clusters across multiple environments, including on-premises servers, Kubernetes clusters, and cloud providers.
  • Pluggable Inference Engines. Automatically configures high-performance inference engines such as vLLM, SGLang, and TensorRT-LLM. You can also add custom inference engines as needed.
  • Performance-Optimized Configurations. Offers pre-tuned modes for low latency or high throughput. GPUStack supports extended KV cache systems such as LMCache and HiCache to reduce time to first token (TTFT), and includes built-in support for speculative decoding methods such as EAGLE3, MTP, and n-grams.
  • Enterprise-Grade Operations. Supports automated failure recovery, load balancing, monitoring, authentication, and access control (see the metrics sketch after this list).
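To make the monitoring and metering features above concrete, here is a minimal sketch of reading raw Prometheus-format metrics from a deployment. The /metrics path and the GPU-related filter are assumptions, not documented GPUStack endpoints; the integrated Grafana and Prometheus dashboards consume this same kind of data.

    # A minimal sketch, assuming the server exposes Prometheus-format metrics.
    # The URL path is hypothetical; check your deployment for the actual
    # scrape endpoint configured for Prometheus.
    import requests

    METRICS_URL = "http://your-gpustack-server/metrics"  # hypothetical path

    body = requests.get(METRICS_URL, timeout=10).text
    # Prometheus text format: one "metric_name{labels} value" sample per line,
    # with "#" lines carrying HELP/TYPE metadata.
    for line in body.splitlines():
        if line.startswith("#"):
            continue
        if "gpu" in line.lower():  # crude filter for GPU-related samples
            print(line)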