Quickstart
This guide walks you through running GPUStack on self-hosted GPU servers. To use cloud GPUs or to integrate with an existing Kubernetes cluster, see the relevant tutorials.
Prerequisites
- A node with at least one NVIDIA GPU. For other GPU types, please check the guidelines in the GPUStack UI when adding a worker, or refer to the Installation documentation for more details.
- Ensure the NVIDIA driver, Docker, and the NVIDIA Container Toolkit are installed on the worker node (see the quick checks after this list).
- (Optional) A CPU node for hosting the GPUStack server. The GPUStack server does not require a GPU and can run on a CPU-only machine. Docker must be installed. Docker Desktop (for Windows and macOS) is also supported. If no dedicated CPU node is available, the GPUStack server can be installed on the same machine as a GPU worker node.
- Only Linux is supported for GPUStack worker nodes; macOS is not. If you use Windows, use WSL2 and avoid Docker Desktop.
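A quick way to sanity-check these prerequisites on the worker node (the CUDA image tag below is only an example; any available tag works):
# Verify the NVIDIA driver is loaded and the GPU is visible
nvidia-smi
# Verify Docker is installed and running
sudo docker info
# Verify the NVIDIA Container Toolkit by running a GPU-enabled container
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi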
Install GPUStack
Run the following command to install and start the GPUStack server using Docker:
sudo docker run -d --name gpustack \
--restart unless-stopped \
-p 80:80 \
--volume gpustack-data:/var/lib/gpustack \
gpustack/gpustack
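If port 80 is already taken on the host, you can map a different host port instead. A sketch using port 9090 (remember to use that port in the URLs later in this guide):
sudo docker run -d --name gpustack \
  --restart unless-stopped \
  -p 9090:80 \
  --volume gpustack-data:/var/lib/gpustack \
  gpustack/gpustack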
Alternative: Use Quay Container Registry Mirror
If you cannot pull images from Docker Hub or the download is very slow, you can use our Quay Container Registry mirror by pointing your registry to quay.io:
sudo docker run -d --name gpustack \
--restart unless-stopped \
-p 80:80 \
--volume gpustack-data:/var/lib/gpustack \
quay.io/gpustack/gpustack \
--system-default-container-registry quay.io
Check the GPUStack startup logs:
sudo docker logs -f gpustack
After GPUStack starts, run the following command to get the default admin password:
sudo docker exec gpustack cat /var/lib/gpustack/initial_admin_password
Open your browser and navigate to http://your_host_ip to access the GPUStack UI. Use the default username admin and the password you retrieved above to log in.
Set Up a GPU Cluster
- On the GPUStack UI, navigate to the Clusters page.
- Click the Add Cluster button.
- Select Docker as the cluster provider.
- Fill in the Name and Description fields for the new cluster, then click the Save button.
- Follow the UI guidelines to configure the new worker node. You will need to run a Docker command on the worker node to connect it to the GPUStack server. The command will look similar to the following:
sudo docker run -d --name gpustack-worker \
--restart=unless-stopped \
--privileged \
--network=host \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume gpustack-data:/var/lib/gpustack \
--runtime nvidia \
gpustack/gpustack \
--server-url http://your_gpustack_server_url \
--token your_worker_token \
--advertise-address 192.168.1.2
- Execute the command on the worker node to connect it to the GPUStack server.
- After the worker node connects successfully, it will appear on the Workers page in the GPUStack UI.
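If the worker does not appear, check the worker container's logs on the worker node (assuming the container name gpustack-worker from the command above):
sudo docker logs -f gpustack-worker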
Deploy a Model
- Navigate to the Catalog page in the GPUStack UI.
- Select the Qwen3 0.6B model from the list of available models.
- After the deployment compatibility checks pass, click the Save button to deploy the model.
- GPUStack will start downloading the model files and deploying the model. When the deployment status shows Running, the model has been deployed successfully.
Note
GPUStack uses containers to run models. The first deployment may take some time while the model files and container images are downloaded. You can click View Logs in the UI to monitor the deployment progress.
- Click Playground - Chat in the navigation menu and check that the model qwen3-0.6b is selected in the top-right Model dropdown. Now you can chat with the model in the UI playground.
Use the model via API
- Hover over the user avatar and navigate to the API Keys page, then click the New API Key button.
- Fill in the Name and click the Save button.
- Copy the generated API key and save it somewhere safe. Note that it is shown only once, at creation time.
- You can now use the API key to access the OpenAI-compatible API endpoints provided by GPUStack. For example, with curl:
# Replace `your_api_key` and `your_gpustack_server_url`
# with your actual API key and GPUStack server URL.
export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GPUSTACK_API_KEY" \
-d '{
"model": "qwen3-0.6b",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Tell me a joke."
}
],
"stream": true
}'
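To confirm that the key works and to see which deployments are exposed, you can also query the standard OpenAI-compatible model list endpoint (a sketch, reusing the variables from above):
curl http://your_gpustack_server_url/v1/models \
  -H "Authorization: Bearer $GPUSTACK_API_KEY"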
Cleanup
When you are done with the deployed model, go to the Deployments page in the GPUStack UI and delete it to free up resources.
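If you want to remove GPUStack entirely, the containers and data volume created in this guide can be removed with Docker. Treat this as a full teardown, since removing the volume deletes all GPUStack data:
# On the server node
sudo docker rm -f gpustack
# On each worker node
sudo docker rm -f gpustack-worker
# Optional: remove the data volume (this deletes all GPUStack data)
sudo docker volume rm gpustack-data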


