Codex CLI / Codex App
Codex supports two wire protocols: responses and
chat. First write the config content into
~/.codex/config.toml (Windows:
%USERPROFILE%\.codex\config.toml), then set
XAI_API_KEY in your shell and run the matching
command. The responses example below is based on a
real working config.toml template and omits
project-specific [projects."..."] entries.
Order: copy Option A or B into the config file first, then copy
the launch commands for your shell.
# Option A (responses)
model_provider = "xai"
model = "gpt-5.4"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_verbosity = "medium"
model_context_window = 1050000
model_auto_compact_token_limit = 945000
tool_output_token_limit = 6000
approval_policy = "never"
sandbox_mode = "danger-full-access"

[model_providers.xai]
name = "xai"
base_url = ""
wire_api = "responses"
requires_openai_auth = false
env_key = "XAI_API_KEY"

# Option B (chat)
approval_policy = "never"
sandbox_mode = "danger-full-access"

[model_providers.xai]
name = "xai"
base_url = ""
env_key = "XAI_API_KEY"
wire_api = "chat"
requires_openai_auth = false

[profiles.minimax]
model = "MiniMax-M2.5"
model_provider = "xai"
# Linux / macOS (bash/zsh)
export XAI_API_KEY="sk-Xvs..."
# Option A (responses)
codex
# Option B (chat)
codex --profile minimax

:: Windows (cmd)
set XAI_API_KEY=sk-Xvs...
:: Option A (responses)
codex
:: Option B (chat)
codex --profile minimax

# Windows (PowerShell)
$env:XAI_API_KEY="sk-Xvs..."
# Option A (responses)
codex
# Option B (chat)
codex --profile minimax
Verify with: codex (responses) or codex --profile minimax (chat)
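Before launching, a quick preflight can catch the two most common failures: a missing config file and an unset key. A minimal sketch for Linux / macOS, using only the paths from this section; `codex_preflight` is a helper name introduced here for illustration.

```shell
# Preflight for the Codex setup above: check that the config file exists
# and XAI_API_KEY is set before launching `codex`. Returns 0 when both hold.
codex_preflight() {
  config="${1:-$HOME/.codex/config.toml}"
  [ -f "$config" ] || { echo "missing config: $config"; return 1; }
  [ -n "${XAI_API_KEY:-}" ] || { echo "XAI_API_KEY is not set"; return 1; }
  echo "preflight ok"
}

# Demonstration against a throwaway config file in a temp directory.
tmp="$(mktemp -d)"
printf 'model_provider = "xai"\n' > "$tmp/config.toml"
export XAI_API_KEY="sk-example"   # placeholder key for illustration
result="$(codex_preflight "$tmp/config.toml")"
echo "$result"
```

On a real setup, call `codex_preflight` with no argument so it checks `~/.codex/config.toml` directly.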
Claude Code (gpt-5.4)
Claude Code integration is primarily environment-variable based.
The following examples map the Claude model defaults to
gpt-5.4 using environment variables.
Order: copy the environment variables for your shell first, then
run claude.
# Linux / macOS (bash/zsh)
export XAI_API_KEY="sk-Xvs..."
export ANTHROPIC_AUTH_TOKEN="$XAI_API_KEY"
export ANTHROPIC_BASE_URL=""
# Optional: custom default model mapping for Claude families (not required)
export ANTHROPIC_DEFAULT_OPUS_MODEL="gpt-5.4"
export ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5.4"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="gpt-5.4"

:: Windows (cmd)
set XAI_API_KEY=sk-Xvs...
set ANTHROPIC_AUTH_TOKEN=%XAI_API_KEY%
set ANTHROPIC_BASE_URL=
:: Optional: custom default model mapping for Claude families (not required)
set ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-5.4
set ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-5.4
set ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-5.4

# Windows (PowerShell)
$env:XAI_API_KEY="sk-Xvs..."
$env:ANTHROPIC_AUTH_TOKEN=$env:XAI_API_KEY
$env:ANTHROPIC_BASE_URL=""
# Optional: custom default model mapping for Claude families (not required)
$env:ANTHROPIC_DEFAULT_OPUS_MODEL="gpt-5.4"
$env:ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5.4"
$env:ANTHROPIC_DEFAULT_HAIKU_MODEL="gpt-5.4"
Verify with: claude
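A quick way to confirm the mapping took effect is to compare the variables before running claude. A minimal sketch for Linux / macOS using only the variable names exported above; the key value is a placeholder.

```shell
# Confirm the Claude Code variable mapping from the section above:
# ANTHROPIC_AUTH_TOKEN should carry the same value as XAI_API_KEY.
export XAI_API_KEY="sk-example"            # placeholder key for illustration
export ANTHROPIC_AUTH_TOKEN="$XAI_API_KEY"

if [ -n "$XAI_API_KEY" ] && [ "$ANTHROPIC_AUTH_TOKEN" = "$XAI_API_KEY" ]; then
  status="mapping ok"
else
  status="mapping mismatch"
fi
echo "$status"
```

If this prints "mapping mismatch", re-run the exports in the same shell session you launch claude from; variables set in another terminal are not inherited.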
OpenCode (Responses: gpt-5.4 / Chat: MiniMax-M2.5)
OpenCode should use the global config file
~/.config/opencode/opencode.jsonc (Windows:
%USERPROFILE%\.config\opencode\opencode.jsonc).
First write either Profile A or Profile B into the config file,
then set XAI_API_KEY in your shell and run the
verification command.
Order: choose the API profile first (A = Responses, B = Chat),
write it to the config file, then copy the shell commands for
your OS.
Profile A (Responses API):
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-5.4",
  "small_model": "openai/gpt-5.4",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "",
        "apiKey": "{env:XAI_API_KEY}"
      }
    }
  }
}

Profile B (Chat API):
{
  "$schema": "https://opencode.ai/config.json",
  "model": "xai-chat/MiniMax-M2.5",
  "small_model": "xai-chat/MiniMax-M2.5",
  "provider": {
    "xai-chat": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "",
        "apiKey": "{env:XAI_API_KEY}"
      },
      "models": {
        "MiniMax-M2.5": {}
      }
    }
  }
}
# Linux / macOS (bash/zsh)
export XAI_API_KEY="sk-Xvs..."
opencode debug config
opencode run "hello"

:: Windows (cmd)
set XAI_API_KEY=sk-Xvs...
opencode debug config
opencode run "hello"

# Windows (PowerShell)
$env:XAI_API_KEY="sk-Xvs..."
opencode debug config
opencode run "hello"
Direct API check (responses):
curl /responses \
  -H "Authorization: Bearer ${XAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "input": "Explain the purpose of a microservice gateway in one sentence"
  }'

Direct API check (chat):
curl /chat/completions \
  -H "Authorization: Bearer ${XAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Explain the purpose of a microservice gateway in one sentence"}]
  }'
Verify with: opencode debug config (config) and opencode run "hello" (request)
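Independently of opencode debug config, a plain-shell look at the global file shows which model is selected. A minimal sketch; it assumes only the opencode.jsonc structure shown above, demonstrated against a throwaway copy in a temp directory.

```shell
# Print the top-level "model" entry from an OpenCode config. The grep
# pattern requires the quote before "model", so "small_model" is skipped.
tmp="$(mktemp -d)"
cat > "$tmp/opencode.jsonc" <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-5.4",
  "small_model": "openai/gpt-5.4"
}
EOF

models="$(grep -o '"model": "[^"]*"' "$tmp/opencode.jsonc")"
echo "$models"
```

Run the same grep against `~/.config/opencode/opencode.jsonc` to confirm which profile (A or B) is active before invoking opencode.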
OpenClaw
OpenClaw can connect to the OpenAI API and the Claude API, and can
also be extended to the OpenAI Responses API. XAI Router supports
the OpenAI and Claude APIs by default; the recommended setup is
api = "openai-responses". Config path:
~/.openclaw/openclaw.json on Linux / macOS, and
%USERPROFILE%\.openclaw\openclaw.json on Windows.
Order: write one of the JSON configs below to the config file,
then set XAI_API_KEY for your shell, then run the
verification command.
Option A (api = "openai-responses", gpt-5.4):
{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/gpt-5.4" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "xairouter": {
        "baseUrl": "",
        "apiKey": "${XAI_API_KEY}",
        "api": "openai-responses",
        "models": [{ "id": "gpt-5.4", "name": "gpt-5.4" }]
      }
    }
  }
}

Option B (api = "anthropic-messages", MiniMax-M2.5):
{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/MiniMax-M2.5" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "xairouter": {
        "baseUrl": "",
        "apiKey": "${XAI_API_KEY}",
        "api": "anthropic-messages",
        "models": [{ "id": "MiniMax-M2.5", "name": "MiniMax-M2.5" }]
      }
    }
  }
}

Option C (api = "openai-completions", MiniMax-M2.5):
{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/MiniMax-M2.5" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "xairouter": {
        "baseUrl": "",
        "apiKey": "${XAI_API_KEY}",
        "api": "openai-completions",
        "models": [{ "id": "MiniMax-M2.5", "name": "MiniMax-M2.5" }]
      }
    }
  }
}
# Linux / macOS (bash/zsh)
export XAI_API_KEY="sk-Xvs..."

:: Windows (cmd)
set XAI_API_KEY=sk-Xvs...

# Windows (PowerShell)
$env:XAI_API_KEY="sk-Xvs..."
Verify with: openclaw models status
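Since openclaw.json is plain JSON (no comments allowed, unlike OpenCode's .jsonc), it can be checked with python3's stdlib json.tool before running the verify command. A minimal sketch, demonstrated against a throwaway copy of the Option A structure in a temp directory; on a real setup, point it at ~/.openclaw/openclaw.json.

```shell
# Validate an OpenClaw config as plain JSON before launching. A stray
# trailing comma or // comment would make this report "invalid JSON".
tmp="$(mktemp -d)"
cat > "$tmp/openclaw.json" <<'EOF'
{
  "agents": { "defaults": { "model": { "primary": "xairouter/gpt-5.4" } } }
}
EOF

if python3 -m json.tool "$tmp/openclaw.json" > /dev/null 2>&1; then
  verdict="valid JSON"
else
  verdict="invalid JSON"
fi
echo "$verdict"
```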