OpenClaw

Model Configuration Guide

OpenClaw supports multiple AI model providers, all configured under the `models` field in `openclaw.json`.


1. Built-in Providers at a Glance

| Provider | Model family | Proxy needed? |
| --- | --- | --- |
| anthropic | Claude | Proxy required in mainland China |
| openai | GPT | Proxy required in mainland China |
| openrouter | Multi-model aggregator | Proxy required in mainland China |
| deepseek | DeepSeek | Direct connection from mainland China |
| moonshot | Kimi | Direct connection from mainland China |
| minimax | MiniMax | Direct connection from mainland China |
| ollama | Local models | None needed |

2. Configuration Examples for Chinese Providers

Qwen (Alibaba Cloud Bailian)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "qwen": {
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "apiKey": "${QWEN_API_KEY}",
        "api": "openai-completions",
        "models": [
          { "id": "qwen-max", "name": "Qwen Max", "contextWindow": 32000, "maxTokens": 8000 },
          { "id": "qwen-plus", "name": "Qwen Plus", "contextWindow": 131072, "maxTokens": 8000 }
        ]
      }
    }
  }
}
```
```bash
openclaw env set QWEN_API_KEY sk-xxx
```

Doubao (Volcano Engine)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "doubao": {
        "baseUrl": "https://ark.cn-beijing.volces.com/api/v3",
        "apiKey": "${DOUBAO_API_KEY}",
        "api": "openai-completions",
        "models": [
          { "id": "doubao-pro-32k", "name": "Doubao Pro 32k", "contextWindow": 32000, "maxTokens": 4096 },
          { "id": "doubao-pro-4k", "name": "Doubao Pro 4k", "contextWindow": 4000, "maxTokens": 2048 }
        ]
      }
    }
  }
}
```
```bash
openclaw env set DOUBAO_API_KEY xxx
```

Moonshot (Kimi)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "moonshot": {
        "baseUrl": "https://api.moonshot.cn/v1",
        "apiKey": "${MOONSHOT_API_KEY}",
        "api": "openai-completions",
        "models": [
          { "id": "moonshot-v1-8k", "name": "Kimi 8k", "contextWindow": 8000, "maxTokens": 4096 },
          { "id": "moonshot-v1-128k", "name": "Kimi 128k", "contextWindow": 128000, "maxTokens": 8192 }
        ]
      }
    }
  }
}
```
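As with the Qwen and Doubao examples above, the `${MOONSHOT_API_KEY}` placeholder needs the key stored in the environment first (the key format shown is a placeholder):

```bash
openclaw env set MOONSHOT_API_KEY sk-xxx
```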

DeepSeek

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com/v1",
        "apiKey": "${DEEPSEEK_API_KEY}",
        "api": "openai-completions",
        "models": [
          { "id": "deepseek-chat", "name": "DeepSeek V3", "contextWindow": 64000, "maxTokens": 8192 },
          { "id": "deepseek-reasoner", "name": "DeepSeek R1", "contextWindow": 64000, "maxTokens": 8192 }
        ]
      }
    }
  }
}
```
Note on the `api` field: most Chinese services speak the OpenAI-compatible Chat Completions protocol, so use `openai-completions`. Use `openai-responses` only for OpenAI's native Responses API; do not mix the two.
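For contrast, a provider targeting OpenAI's native Responses API would set `api` to `openai-responses`. A minimal sketch, assuming the built-in `openai` provider accepts the same fields as the examples above (the model entry is illustrative):

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "openai": {
        "baseUrl": "https://api.openai.com/v1",
        "apiKey": "${OPENAI_API_KEY}",
        "api": "openai-responses",
        "models": [
          { "id": "gpt-4o", "name": "GPT-4o", "contextWindow": 128000, "maxTokens": 16384 }
        ]
      }
    }
  }
}
```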

3. Aliases, Primary Model, and Fallback Chain

```json
{
  "agents": {
    "defaults": {
      "models": {
        "anthropic/claude-sonnet-4-5": { "alias": "sonnet" },
        "deepseek/deepseek-chat": { "alias": "ds" },
        "qwen/qwen-max": { "alias": "qwen" },
        "doubao/doubao-pro-32k": { "alias": "db" }
      },
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": ["deepseek/deepseek-chat", "qwen/qwen-max"]
      }
    }
  }
}
```

Switch quickly mid-conversation:

```bash
/model ds      # switch to DeepSeek (saves cost)
/model qwen    # switch to Qwen
/model sonnet  # switch back to Claude
```

4. Usage & Cost Tracking

```bash
openclaw status     # show the model currently in use
openclaw dashboard  # detailed token usage statistics
```

To make it easy to verify which model produced each reply, prefix responses with the model name:

```json
{ "messages": { "responsePrefix": "[{model}] " } }
```

5. Model Selection Recommendations

| Scenario | Recommended model | Why |
| --- | --- | --- |
| Everyday chat / writing | claude-sonnet-4-5 | Good quality, fast |
| Complex reasoning / code | claude-opus-4-6 | Strongest results |
| High-frequency automation | deepseek-chat or Doubao | Low cost, direct access from China |
| Long-document processing | moonshot-v1-128k | 128k context window |
| Offline / on-premises | ollama local models | Data never leaves the machine |
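For the offline case, the `ollama` provider from the built-in list can be configured the same way as the providers above. A hedged sketch: the `baseUrl` assumes Ollama's default local OpenAI-compatible endpoint, the `apiKey` is a dummy value (local Ollama does not check it), and the model id is illustrative — adjust to whatever models you have pulled:

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434/v1",
        "apiKey": "ollama",
        "api": "openai-completions",
        "models": [
          { "id": "qwen2.5:14b", "name": "Qwen2.5 14B (local)", "contextWindow": 32000, "maxTokens": 8192 }
        ]
      }
    }
  }
}
```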