Inference Backends

GPUStack supports the following inference backends:

  • llama-box
  • vLLM
  • vox-box
  • Ascend MindIE (Experimental)

When users deploy a model, the backend is selected automatically based on the following criteria:

  • If the model is a GGUF model, llama-box is used.
  • If the model is a known Text-to-Speech or Speech-to-Text model, vox-box is used.
  • Otherwise, vLLM is used.
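
The selection rule can be summarized with the sketch below. This is illustrative Python only; the ModelInfo structure and its fields are hypothetical stand-ins, not GPUStack internals:

```python
# Illustrative sketch of the automatic backend selection described above.
# ModelInfo and its fields are hypothetical, not GPUStack code.
from dataclasses import dataclass

@dataclass
class ModelInfo:
    format: str    # e.g. "gguf" or "safetensors"
    category: str  # e.g. "llm", "text-to-speech", "speech-to-text"

def select_backend(model: ModelInfo) -> str:
    if model.format == "gguf":
        return "llama-box"
    if model.category in ("text-to-speech", "speech-to-text"):
        return "vox-box"
    return "vllm"

print(select_backend(ModelInfo(format="gguf", category="llm")))         # llama-box
print(select_backend(ModelInfo(format="safetensors", category="llm")))  # vllm
```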

llama-box

llama-box is an LM inference server based on llama.cpp and stable-diffusion.cpp.

Supported Platforms

The llama-box backend supports Linux, macOS, and Windows (CPU inference only on the Windows ARM architecture).

Supported Models

Supported Features

Allow CPU Offloading

After enabling CPU offloading, GPUStack prioritizes loading as many layers as possible onto the GPU to optimize performance. If GPU resources are limited, some layers will be offloaded to the CPU, with full CPU inference used only when no GPU is available.

Allow Distributed Inference Across Workers

Enable distributed inference across multiple workers. The primary Model Instance communicates with backend instances on one or more other workers, offloading computation tasks to them.

Multimodal Language Models

Llama-box supports the following multimodal language models. When using a vision language model, image inputs are supported in the chat completion API.

  • Qwen2-VL
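
Once a vision language model is deployed, images can be passed to the OpenAI-compatible chat completion API. The sketch below uses the official openai Python client; the base URL, API key, and model name are placeholders for your own deployment:

```python
# Hedged example: send an image to a deployed vision language model through
# GPUStack's OpenAI-compatible chat completion API. Replace base_url,
# api_key, and model with the values from your own deployment.
from openai import OpenAI

client = OpenAI(base_url="http://your-gpustack-server/v1", api_key="your-api-key")

response = client.chat.completions.create(
    model="qwen2-vl",  # the name you gave the deployed model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```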

Note

When deploying a vision language model, GPUStack downloads and uses the multimodal projector file matching the pattern *mmproj*.gguf by default. If multiple files match the pattern, GPUStack selects the one with the highest precision (e.g., f32 over f16). If the default pattern does not match the projector file, or you want to use a specific one, you can customize the multimodal projector file by setting the --mmproj parameter in the model configuration. You can specify the relative path to the projector file in the model source; this syntax acts as shorthand, and GPUStack will download the file from the source and normalize the path when using it.

Parameters Reference

See the full list of supported parameters for llama-box here.

vLLM

vLLM is a high-throughput, memory-efficient inference engine for LLMs and a popular choice for running LLMs in production. It seamlessly supports most state-of-the-art open-source models, including Transformer-like LLMs (e.g., Llama), Mixture-of-Experts LLMs (e.g., Mixtral), embedding models (e.g., E5-Mistral), and multimodal LLMs (e.g., LLaVA).

By default, GPUStack estimates the VRAM requirement for the model instance based on the model's metadata. You can customize the parameters to fit your needs. The following vLLM parameters might be useful:

  • --gpu-memory-utilization (default: 0.9): The fraction of GPU memory to use for the model instance.
  • --max-model-len: Model context length. For large-context models, GPUStack automatically sets this parameter to 8192 to simplify deployment, especially in resource-constrained environments. You can customize it to fit your needs.
  • --tensor-parallel-size: Number of tensor parallel replicas. By default, GPUStack sets this parameter based on the available GPU resources and the model's estimated memory requirement. You can customize it to fit your needs.
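
To illustrate what these knobs control, the sketch below sets the same options through vLLM's offline Python API. GPUStack itself passes them to the vLLM server as command-line flags (e.g., --gpu-memory-utilization 0.8); the model name and values here are placeholders:

```python
# Illustration only: the same options GPUStack passes to the vLLM server,
# shown as keyword arguments of vLLM's offline API so their meaning is clear.
# Requires a GPU host with vLLM installed; model and values are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    gpu_memory_utilization=0.8,   # fraction of each GPU's VRAM vLLM may claim
    max_model_len=8192,           # context length; lower it to reduce VRAM use
    tensor_parallel_size=2,       # shard the model across two GPUs
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```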

For more details, please refer to the vLLM documentation.

Supported Platforms

The vLLM backend works on AMD64 Linux.

Note

  1. When users install GPUStack on amd64 Linux using the installation script, vLLM is automatically installed.
  2. When users deploy a model using the vLLM backend, GPUStack sets the worker label selectors to {"os": "linux", "arch": "amd64"} by default to ensure the model instance is scheduled to appropriate workers. You can customize the worker label selectors in the model configuration.

Supported Models

Please refer to the vLLM documentation for supported models.

Supported Features

Multimodal Language Models

vLLM supports the multimodal language models listed here. When users deploy a vision language model using the vLLM backend, image inputs are supported in the chat completion API.

Distributed Inference Across Workers (Experimental)

vLLM supports distributed inference across multiple workers using Ray. You can enable a Ray cluster in GPUStack by using the --enable-ray start parameter, allowing vLLM to run distributed inference across multiple workers.

Known Limitations

  1. The GPUStack server and all participating workers must run on Linux and use the same version of Python, which is a requirement of Ray.
  2. Model files must be accessible at the same path on all participating workers. You must either use a shared file system or download the model files to the same path on all participating workers.
  3. Each worker can only be assigned to one distributed vLLM model instance at a time.

Auto-scheduling is supported with the following conditions:

  • Participating workers have the same number of GPUs.
  • All GPUs in the participating workers satisfy the gpu_memory_utilization (default: 0.9) requirement.
  • The number of attention heads is evenly divisible by the total number of GPUs.
  • The total VRAM that can be claimed across the selected GPUs is greater than the model's estimated VRAM claim.

If the above conditions are not met, the model instance will not be scheduled automatically. However, you can manually schedule it by selecting the desired workers/GPUs in the model configuration.
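
A minimal sketch of these checks is shown below. The Worker and Model structures and their field names are hypothetical and only illustrate the conditions; they are not GPUStack internals:

```python
# Hypothetical sketch of the auto-scheduling conditions listed above.
from dataclasses import dataclass

@dataclass
class Worker:
    gpu_count: int
    gpu_free_vram: list[int]   # free VRAM per GPU, in bytes
    gpu_total_vram: list[int]  # total VRAM per GPU, in bytes

@dataclass
class Model:
    num_attention_heads: int
    estimated_vram_claim: int  # bytes

def can_auto_schedule(workers: list[Worker], model: Model,
                      gpu_memory_utilization: float = 0.9) -> bool:
    total_gpus = sum(w.gpu_count for w in workers)
    # 1. All participating workers have the same number of GPUs.
    same_gpu_count = len({w.gpu_count for w in workers}) == 1
    # 2. Every GPU can provide the requested gpu_memory_utilization fraction.
    utilization_ok = all(
        free >= total * gpu_memory_utilization
        for w in workers
        for free, total in zip(w.gpu_free_vram, w.gpu_total_vram)
    )
    # 3. Attention heads are evenly divisible by the total GPU count.
    heads_divisible = model.num_attention_heads % total_gpus == 0
    # 4. The total claimable VRAM exceeds the estimated VRAM claim.
    total_claim = sum(
        int(total * gpu_memory_utilization)
        for w in workers
        for total in w.gpu_total_vram
    )
    enough_vram = total_claim > model.estimated_vram_claim
    return same_gpu_count and utilization_ok and heads_divisible and enough_vram
```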

Parameters Reference

See the full list of supported parameters for vLLM here.

vox-box

vox-box is an inference engine designed for deploying text-to-speech and speech-to-text models. It also provides an API that is fully compatible with the OpenAI audio API.
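
For example, a deployed vox-box model can be called through GPUStack's OpenAI-compatible audio endpoints with the official openai Python client. The base URL, API key, voice, and model names below are placeholders for your own deployment:

```python
# Hedged example: text-to-speech and speech-to-text through GPUStack's
# OpenAI-compatible audio API. Replace base_url, api_key, model names,
# and the voice with the values from your own deployment.
from openai import OpenAI

client = OpenAI(base_url="http://your-gpustack-server/v1", api_key="your-api-key")

# Text-to-speech with a deployed CosyVoice model (model/voice are placeholders).
speech = client.audio.speech.create(
    model="cosyvoice-300m-sft",
    voice="default",
    input="Hello from GPUStack.",
)
speech.write_to_file("hello.mp3")

# Speech-to-text with a deployed Faster-Whisper model (model name is a placeholder).
with open("hello.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(model="faster-whisper-medium", file=f)
print(transcript.text)
```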

Supported Platforms

The vox-box backend supports Linux, macOS and Windows platforms.

Note

  1. To use NVIDIA GPUs, ensure the required NVIDIA libraries are installed on the workers.
  2. When users install GPUStack on Linux, macOS and Windows using the installation script, vox-box is automatically installed.
  3. CosyVoice models are natively supported on Linux AMD64 and macOS. They are not supported on Linux ARM or Windows.

Supported Models

| Model | Type | Link | Supported Platforms |
|-------|------|------|---------------------|
| Faster-whisper-large-v3 | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-large-v2 | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-large-v1 | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-medium | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-medium.en | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-small | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-small.en | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-distil-whisper-large-v3 | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-distil-whisper-large-v2 | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-distil-whisper-medium.en | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-tiny | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| Faster-whisper-tiny.en | speech-to-text | Hugging Face, ModelScope | Linux, macOS, Windows |
| CosyVoice-300M-Instruct | text-to-speech | Hugging Face, ModelScope | Linux (ARM not supported), macOS, Windows (not supported) |
| CosyVoice-300M-SFT | text-to-speech | Hugging Face, ModelScope | Linux (ARM not supported), macOS, Windows (not supported) |
| CosyVoice-300M | text-to-speech | Hugging Face, ModelScope | Linux (ARM not supported), macOS, Windows (not supported) |
| CosyVoice-300M-25Hz | text-to-speech | ModelScope | Linux (ARM not supported), macOS, Windows (not supported) |
| CosyVoice2-0.5B | text-to-speech | Hugging Face, ModelScope | Linux (ARM not supported), macOS, Windows (not supported) |

Supported Features

Allow GPU/CPU Offloading

vox-box supports deploying models to NVIDIA GPUs. If GPU resources are insufficient, it will automatically deploy the models to the CPU.

Ascend MindIE (Experimental)

Ascend MindIE is a high-performance inference service on Ascend hardware.

Supported Platforms

The Ascend MindIE backend works on Linux platforms only, including ARM64 and x86_64 architectures.

Supported Models

Ascend MindIE supports various models listed here.

Within GPUStack, large language models (LLMs) and multimodal language models (VLMs) are supported. However, embedding models and multimodal generation models are not supported yet.

Supported Features

Ascend MindIE provides a variety of features, outlined here.

At present, GPUStack supports a subset of these capabilities, including Quantization, Mixture of Experts (MoE), Prefix Caching, Function Calling, Multimodal Understanding, and Multi-head Latent Attention (MLA).

Note

  1. Quantization requires specific quantized weights and adjustments to the model's config.json. Please follow the reference guide to prepare the correct weights.
  2. For the Multimodal Understanding feature, some versions of the Ascend MindIE API are incompatible with the OpenAI API; please track this issue for more support.
  3. The Extending Context Size feature is a work in progress; please track this issue for more details.
  4. Some features are mutually exclusive, so take care when combining them.

Parameters Reference

| Parameter | Default | Description |
|-----------|---------|-------------|
| --trust-remote-code | | Trust remote code (for model loading). |
| --npu-memory-fraction | 0.9 | Fraction of NPU memory to be used for the model executor (0 to 1). Example: 0.5 means 50% memory utilization. |
| --max-link-num | 1000 | Maximum number of parallel requests. |
| --max-seq-len | 8192 | Model context length. If unspecified, it will be derived from the model config. |
| --max-input-token-len | | Maximum input token length. If unspecified, it will be derived from --max-seq-len. |
| --truncation | | Truncate the input token length when it exceeds the minimum of --max-input-token-len and --max-seq-len - 1. |
| --cpu-mem-size | 5 | CPU swap space size in GiB. If unspecified, the default value will be used. |
| --cache-block-size | 128 | KV cache block size. Must be a power of 2. |
| --max-batch-size | 200 | Maximum number of requests batched during the decode stage. |
| --max-prefill-batch-size | 50 | Maximum number of requests batched during the prefill stage. Must be less than --max-batch-size. |
| --max-preempt-count | 0 | Maximum number of preempted requests allowed during decoding. Must be less than --max-batch-size. |
| --max-queue-delay-microseconds | 5000 | Maximum queue wait time in microseconds. |
| --prefill-time-ms-per-req | 150 | Estimated prefill time per request (ms). Used to decide between the prefill and decode stages. |
| --prefill-policy-type | 0 | Prefill stage strategy: 0 = FCFS (First Come First Serve), 1 = STATE (same as FCFS), 2 = PRIORITY (priority queue), 3 = MLFQ (Multi-Level Feedback Queue). |
| --decode-time-ms-per-req | 50 | Estimated decode time per request (ms). Used with --prefill-time-ms-per-req for batch selection. |
| --decode-policy-type | 0 | Decode stage strategy: 0 = FCFS, 1 = STATE (prioritize preempted or swapped requests), 2 = PRIORITY, 3 = MLFQ. |
| --support-select-batch | | Enable batch selection. Determines execution priority based on --prefill-time-ms-per-req and --decode-time-ms-per-req. |
| --enable-prefix-caching | | Enable prefix caching. Use --no-enable-prefix-caching to disable explicitly. |
| --enforce-eager | | Emit operators in eager mode. |
| --metrics | | Expose metrics at the /metrics endpoint. |
| --log-level | Info | Log level for MindIE. Options: Verbose, Info, Warning, Warn, Error, Debug. |
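
As an illustration, a MindIE deployment might combine a few of these flags as backend parameters in the model configuration. The flag values below are assumptions for a hypothetical setup; adjust them to your hardware and model:

```python
# Hypothetical illustration of combining a few MindIE flags as backend
# parameters for a GPUStack model deployment. Values and the "--flag=value"
# style are assumptions for this sketch, not a recommended configuration.
backend_parameters = [
    "--npu-memory-fraction=0.9",   # use up to 90% of each NPU's memory
    "--max-seq-len=16384",         # raise the context length above the 8192 default
    "--max-batch-size=128",        # cap decode-stage batching
    "--enable-prefix-caching",     # reuse KV cache for shared prompt prefixes
    "--trust-remote-code",
]
print(" ".join(backend_parameters))
```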