Integrating with OpenClaw
OpenClaw is a personal AI assistant that runs on local devices and supports interaction through common channels such as Telegram, WhatsApp, Slack, and Discord.
Deploy the Model
Refer to the Model Deployment section of the GPUStack documentation to deploy the model.
API Access Info
- Log in to the GPUStack Web UI
- Navigate to the Routes page
- From the menu on the right side of the target model, select API Access Info
Record the following information (if an API Key has not been created yet, follow the on-page instructions to create one):
- Access URL
- Model Name
- API Key
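GPUStack serves models through an OpenAI-compatible API, so the three values above are enough to form a valid request. The snippet below is a minimal sketch of how they fit together; it only assembles the request rather than sending it, and the URL, key, and model name are hypothetical placeholders for the values you recorded:

```python
import json

# Placeholders — substitute the values recorded from API Access Info.
access_url = "http://gpustack.example.com/v1"  # Access URL (hypothetical)
api_key = "sk-example"                         # API Key (hypothetical)
model_name = "my-model"                        # Model Name (hypothetical)

# A chat completion is a POST to <Access URL>/chat/completions
# with the API Key as a Bearer token.
endpoint = f"{access_url}/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
body = {
    "model": model_name,
    "messages": [{"role": "user", "content": "Hello"}],
}

print(endpoint)
print(json.dumps(body, indent=2))
```

Sending this payload (for example with `curl` or any HTTP client) should return a chat completion if the three values are correct.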
Install OpenClaw
Follow the official OpenClaw documentation to complete the installation: https://docs.openclaw.ai/install
Configure GPUStack in OpenClaw
- Start the interactive configuration wizard:
openclaw onboard --install-daemon
- In the Model / Auth Provider selection, choose Custom Provider
- Fill in the information provided by GPUStack as prompted:
  - API Base URL: the Access URL
  - API Key: the API Key
  - Model ID: the Model Name
After completing these steps, OpenClaw will route inference requests for the configured model through GPUStack.
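Before finishing the wizard, it can help to sanity-check the three values you are about to enter. The sketch below validates their basic shape (absolute URL, non-empty key and model ID); the field names and values are hypothetical, not part of OpenClaw's own configuration format:

```python
from urllib.parse import urlparse

# Hypothetical values — replace with those from GPUStack's API Access Info.
settings = {
    "api_base_url": "http://gpustack.example.com/v1",
    "api_key": "sk-example",
    "model_id": "my-model",
}

def preflight(cfg: dict) -> list:
    """Return a list of problems with the provider settings; empty if OK."""
    problems = []
    url = urlparse(cfg["api_base_url"])
    if url.scheme not in ("http", "https") or not url.netloc:
        problems.append("API Base URL must be an absolute http(s) URL")
    if not cfg["api_key"]:
        problems.append("API Key is empty")
    if not cfg["model_id"]:
        problems.append("Model ID is empty")
    return problems

print(preflight(settings))  # an empty list means the values look usable
```

A check like this catches the most common mistake, pasting a bare hostname instead of the full Access URL, before any request is made.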
Configure Channels
Follow the official OpenClaw documentation to configure the desired communication channel: https://docs.openclaw.ai/channels