π—π€πˆ π—π€ππˆ

π»π‘œπ‘™π‘œπ‘”π‘Ÿπ‘Žπ‘β„Žπ‘–π‘ Β· 𝑆𝑒𝑏-π‘–π‘›π‘“π‘œπ‘Ÿπ‘šπ‘Žπ‘‘π‘–π‘œπ‘› πΆπ‘œπ‘›π‘π‘’π‘π‘‘ 𝐷𝑒𝑠𝑖𝑔𝑛

Don't have an account? Sign up · User Manual

Activation code and API key will be sent to this email. (May be delayed or filtered as spam.)

Verification Code Sent!

Please click the activation link in your email to complete registration.

API Base URL:

The API Key will also be included in the activation email. Please keep it safe.

May be delayed or in spam.

Already have an account? Login now

Β© 2022-2026 XABC Labs

π—π€πˆ π—π€ππˆ

Unlicensed
Sub-account tools are disabled for this account. To manage sub-accounts, please use a pay-as-you-go account with a balance greater than the sub-account threshold, or contact an admin.

Today's real-time usage statistics

You can query data from the past 3 years

Total Cost

$0.00

Total Requests

0

Total Tokens

0

Images

0

Input Tokens

0

Output Tokens

0

Cache Tokens

0

Model Cost Distribution

Model Request Distribution

Model Usage Details

Model | Requests | Input Tokens | Output Tokens | Cache Tokens | Images | Cost | Percentage

Daily Usage Statistics

Date | Requests | Cost | Tokens | Images | Primary Model

Account Balance Query

Supported format: XAI API Keys starting with sk-Xvs...
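For a client-side sanity check before submitting a query, the stated key format can be verified up front. This is a minimal sketch: only the documented "sk-Xvs" prefix is checked, since the key's length and character set are not specified.

```python
def looks_like_xai_key(key: str) -> bool:
    """Loose sanity check for the stated key format (keys start with
    "sk-Xvs"). Length and charset are not documented, so only the
    prefix is verified and the key must extend past it."""
    return (
        isinstance(key, str)
        and key.startswith("sk-Xvs")
        and len(key) > len("sk-Xvs")
    )

print(looks_like_xai_key("sk-Xvs1234abcd"))  # True
print(looks_like_xai_key("sk-proj-abcd"))    # False
```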

Query Results

🌐 Supported AI Service Providers

✨ The XAI platform is compatible with virtually all major AI providers and model ecosystems, supporting unified integration and flexible switching.

🧭 How It Works

One entrypoint handles auth, routing, and normalization before reaching model providers.

XAI Router
[XAI Router architecture diagram] XAI Router sits between clients and providers to apply policies, map models, and return consistent responses.
1. Single entrypoint: use one XAI API key and a unified base_url.

2. Smart routing: policies, model mapping, rate limits, and observability live in the router.

3. Provider fan-out: requests go to OpenAI, Claude, Gemini, and more, and responses are normalized on the way back.
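From the client's side, the steps above reduce to one key and one base URL serving different wire protocols, with the router doing the fan-out. This sketch only builds request descriptions; the gateway URL and key are placeholders, since the real values are issued with your account.

```python
GATEWAY = "https://gateway.example/v1"  # placeholder; use your issued base URL
API_KEY = "sk-Xvs..."                   # placeholder key

def build_request(model: str, prompt: str) -> dict:
    """Describe the HTTP call a client would make through the router.
    The router decides, per model name, which upstream provider is hit;
    the client only ever sees one endpoint family and one credential."""
    if model.startswith("claude"):
        # Anthropic-style wire protocol
        return {
            "url": f"{GATEWAY}/messages",
            "headers": {"x-api-key": API_KEY, "anthropic-version": "2023-06-01"},
            "body": {
                "model": model,
                "max_tokens": 128,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
    # OpenAI-style wire protocol (default)
    return {
        "url": f"{GATEWAY}/chat/completions",
        "headers": {"Authorization": f"Bearer {API_KEY}"},
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

The same credential flows through both branches; only the path and auth header style change with the protocol.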

πŸ’» SDK Examples

OpenAI SDK Example

import os
from openai import OpenAI

XAI_API_KEY = os.getenv("XAI_API_KEY")
client = OpenAI(
    api_key=XAI_API_KEY,
    base_url="",
)

completion = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {"role": "system", "content": "You are AI"},
        {"role": "user", "content": "What is the meaning of life, the universe, and everything?"},
    ],
)

print(completion.choices[0].message)

Anthropic SDK Example

import os
from anthropic import Anthropic

XAI_API_KEY = os.getenv("XAI_API_KEY")
client = Anthropic(
    api_key=XAI_API_KEY,
    base_url="",
)
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=128,
    system="You are AI.",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life, the universe, and everything?",
        },
    ],
)
print(message.content)

πŸ§ͺ cURL Examples

OpenAI /chat/completions

curl /chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -d '{
    "model": "gpt-5.2",
    "messages": [
      {
        "role": "developer",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'

Anthropic /messages

curl /v1/messages \
  -H 'Content-Type: application/json' \
  -H 'anthropic-version: 2023-06-01' \
  -H "X-Api-Key: $XAI_API_KEY" \
  -d '{
    "max_tokens": 1024,
    "messages": [
      {
        "content": "Hello, world",
        "role": "user"
      }
    ],
    "model": "claude-sonnet-4-5-20250929"
  }'

Create New Sub-account

Username must be unique and cannot duplicate existing accounts
Email address must be unique and cannot duplicate existing accounts

API Call Examples

cURL

Python

JavaScript
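The tabs above hold the app's concrete snippets. As a stand-in, here is a hypothetical Python sketch: the path /api/subaccounts and the payload field names are illustrative assumptions, not the documented API, so copy the real snippet from the cURL/Python/JavaScript tabs when integrating.

```python
import json

def build_create_subaccount_request(base_url, api_key, username, email):
    """Hypothetical sketch only: the endpoint path and payload fields
    are assumptions for illustration. Per the notes above, username
    and email must each be unique across existing accounts."""
    return {
        "method": "POST",
        "url": f"{base_url}/api/subaccounts",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"username": username, "email": email}),
    }
```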

Fund/Deduct Sub-account

Support query by username or user ID

Must be > 0 and <= 365 days; defaults to 365 when empty
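The validity-window rule above is simple enough to mirror client-side; this is a small sketch of exactly that rule (the helper name is my own, not part of the API):

```python
def normalize_expiry_days(raw):
    """Apply the stated rule: the value must be > 0 and <= 365 days,
    and an empty value defaults to 365."""
    if raw is None or raw == "":
        return 365
    days = int(raw)
    if not 0 < days <= 365:
        raise ValueError("expiry days must be > 0 and <= 365")
    return days

print(normalize_expiry_days(""))   # 365
print(normalize_expiry_days(30))   # 30
```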

API Call Examples

cURL

Python

JavaScript

Top Up Your Account

WeChat top-ups sync automatically after payment, and repeated recharges add subscription months.

Monthly: quickly enable target models
Addon: subscriber add-on pack for use beyond your plan's daily quota

Available only to subscribers that meet the balance threshold. Addon credits expire after 31 days and can be purchased multiple times.

WeChat Pay

The generated QR code will be displayed here

Scan with WeChat to Pay

Pay-as-you-go supports the full API catalog. If you pick a plan, add-on, or remote service, the amount will lock to that product's price.

Current Order

Order
Amount
Credit Granted
Status
Awaiting Payment

Value-Added Services

For manually delivered services such as OpenClaw remote installation. After payment, please contact the WeChat support shown on this page to confirm the installation.

Remote Service: OpenClaw remote installation

Customer Support

Need help with a recharge, invoice, or issue? Scan to add WeChat support and mention β€œXAI”.

  • Hours: Weekdays 10:00 - 19:00 (Beijing time)
  • Have your order number or account email ready for faster assistance

The support QR code will appear here

Support WeChat QR code

Recharge History

No recharge records

Service Orders

No service orders

Configuration Guide

Unified setup notes for Codex CLI / Codex App / Claude Code / OpenCode / OpenClaw.

Need help?

Recharge / Invoice / Setup Support

Scan to contact support when setup or recharge gets stuck. Mention "XAI" for faster help.

The support QR code will appear here

Support WeChat QR code
Codex CLI / App Claude Code OpenCode OpenClaw

Unified prerequisite: use the gateway base URL (the OpenAI-compatible endpoint uses ), and configure XAI_API_KEY.

Codex CLI / Codex App

Codex supports two wire protocols: responses and chat. First write the config content into ~/.codex/config.toml (Windows: %USERPROFILE%\.codex\config.toml), then set XAI_API_KEY in your shell and run the matching command. The responses example below is based on a real working config.toml template and omits project-specific [projects."..."] entries.

Order: copy Option A or B into the config file first, then copy the launch commands for your shell.

Option A: put this in ~/.codex/config.toml (wire_api = "responses")

model_provider = "xai"
model = "gpt-5.4"
model_reasoning_effort = "xhigh"
plan_mode_reasoning_effort = "xhigh"
model_reasoning_summary = "none"
model_verbosity = "medium"
model_context_window = 1050000
model_auto_compact_token_limit = 945000
tool_output_token_limit = 6000
approval_policy = "never"
sandbox_mode = "danger-full-access"

[model_providers.xai]
name = "xai"
base_url = ""
wire_api = "responses"
requires_openai_auth = false
env_key = "XAI_API_KEY"

Option B: put this in ~/.codex/config.toml (wire_api = "chat")

approval_policy = "never"
sandbox_mode = "danger-full-access"

[model_providers.xai]
name = "xai"
base_url = ""
env_key = "XAI_API_KEY"
wire_api = "chat"
requires_openai_auth = false

[profiles.minimax]
model = "MiniMax-M2.5"
model_provider = "xai"

Linux / macOS (set key and launch)

export XAI_API_KEY="sk-Xvs..."

# Option A (responses)
codex

# Option B (chat)
codex --profile minimax

Windows CMD (set key and launch)

set XAI_API_KEY=sk-Xvs...

:: Option A (responses)
codex

:: Option B (chat)
codex --profile minimax

Windows PowerShell (set key and launch)

$env:XAI_API_KEY="sk-Xvs..."

# Option A (responses)
codex

# Option B (chat)
codex --profile minimax

Verify with: codex (responses) or codex --profile minimax (chat)

Claude Code (gpt-5.4)

Claude Code integration is primarily environment-variable based. The following examples map Claude defaults to gpt-5.4 using .

Order: copy the environment variables for your shell first, then run claude.

Environment variables (Linux / macOS)

export XAI_API_KEY="sk-Xvs..."
export ANTHROPIC_AUTH_TOKEN="$XAI_API_KEY"
export ANTHROPIC_BASE_URL=""
# Optional: custom default model mapping for Claude families (not required)
export ANTHROPIC_DEFAULT_OPUS_MODEL="gpt-5.4"
export ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5.4"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="gpt-5.4"

Environment variables (Windows CMD)

set XAI_API_KEY=sk-Xvs...
set ANTHROPIC_AUTH_TOKEN=%XAI_API_KEY%
set ANTHROPIC_BASE_URL=
:: Optional: custom default model mapping for Claude families (not required)
set ANTHROPIC_DEFAULT_OPUS_MODEL=gpt-5.4
set ANTHROPIC_DEFAULT_SONNET_MODEL=gpt-5.4
set ANTHROPIC_DEFAULT_HAIKU_MODEL=gpt-5.4

Environment variables (Windows PowerShell)

$env:XAI_API_KEY="sk-Xvs..."
$env:ANTHROPIC_AUTH_TOKEN=$env:XAI_API_KEY
$env:ANTHROPIC_BASE_URL=""
# Optional: custom default model mapping for Claude families (not required)
$env:ANTHROPIC_DEFAULT_OPUS_MODEL="gpt-5.4"
$env:ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5.4"
$env:ANTHROPIC_DEFAULT_HAIKU_MODEL="gpt-5.4"

Launch and verify

claude

Verify with: claude

OpenCode (Responses: gpt-5.4 / Chat: MiniMax-M2.5)

OpenCode should use the global config file ~/.config/opencode/opencode.jsonc (Windows: %USERPROFILE%\.config\opencode\opencode.jsonc). First write either Profile A or Profile B into the config file, then set XAI_API_KEY in your shell and run the verification command.

Order: choose the API profile first (A = Responses, B = Chat), write it to the config file, then copy the shell commands for your OS.

Profile A: put this in opencode.jsonc (Responses API)

{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-5.4",
  "small_model": "openai/gpt-5.4",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "",
        "apiKey": "{env:XAI_API_KEY}"
      }
    }
  }
}

Profile B: put this in opencode.jsonc (Chat API)

{
  "$schema": "https://opencode.ai/config.json",
  "model": "xai-chat/MiniMax-M2.5",
  "small_model": "xai-chat/MiniMax-M2.5",
  "provider": {
    "xai-chat": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "",
        "apiKey": "{env:XAI_API_KEY}"
      },
      "models": {
        "MiniMax-M2.5": {}
      }
    }
  }
}

Linux / macOS (set key and verify)

export XAI_API_KEY="sk-Xvs..."
opencode debug config
opencode run "hello"

Windows CMD (set key and verify)

set XAI_API_KEY=sk-Xvs...
opencode debug config
opencode run "hello"

Windows PowerShell (set key and verify)

$env:XAI_API_KEY="sk-Xvs..."
opencode debug config
opencode run "hello"

Request example A: Responses API (gpt-5.4)

curl /responses \
  -H "Authorization: Bearer ${XAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model":"gpt-5.4",
    "input":"Explain the purpose of a microservice gateway in one sentence"
  }'

Request example B: Chat API (MiniMax-M2.5)

curl /chat/completions \
  -H "Authorization: Bearer ${XAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "model":"MiniMax-M2.5",
    "messages":[{"role":"user","content":"Explain the purpose of a microservice gateway in one sentence"}]
  }'

Verify with: opencode debug config (config) and opencode run "hello" (request)

OpenClaw

OpenClaw can connect to the OpenAI API and the Claude API, and can also be extended to the OpenAI Responses API. XAI Router supports the OpenAI API and the Claude API by default; the recommended setup is api = "openai-responses". Config path: ~/.openclaw/openclaw.json on Linux / macOS, and %USERPROFILE%\.openclaw\openclaw.json on Windows.

Order: write one of the JSON configs below to the config file, then set XAI_API_KEY for your shell, then run the verification command.

Mode 1: OpenAI Responses API compatible (recommended, api = "openai-responses")

{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/gpt-5.4" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "xairouter": {
        "baseUrl": "",
        "apiKey": "${XAI_API_KEY}",
        "api": "openai-responses",
        "models": [{ "id": "gpt-5.4", "name": "gpt-5.4" }]
      }
    }
  }
}

Mode 2: Claude API compatible (api = "anthropic-messages")

{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/MiniMax-M2.5" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "xairouter": {
        "baseUrl": "",
        "apiKey": "${XAI_API_KEY}",
        "api": "anthropic-messages",
        "models": [{ "id": "MiniMax-M2.5", "name": "MiniMax-M2.5" }]
      }
    }
  }
}

Mode 3: OpenAI Chat API compatible (api = "openai-completions")

{
  "agents": {
    "defaults": {
      "model": { "primary": "xairouter/MiniMax-M2.5" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "xairouter": {
        "baseUrl": "",
        "apiKey": "${XAI_API_KEY}",
        "api": "openai-completions",
        "models": [{ "id": "MiniMax-M2.5", "name": "MiniMax-M2.5" }]
      }
    }
  }
}

Linux / macOS (set key)

export XAI_API_KEY="sk-Xvs..."

Windows CMD (set key)

set XAI_API_KEY=sk-Xvs...

Windows PowerShell (set key)

$env:XAI_API_KEY="sk-Xvs..."

Verify command

openclaw models status

Verify with: openclaw models status

View Sub-account Information

Leave empty to display all sub-account information

API Call Examples

cURL

Python

JavaScript

Update Sub-account Information

Enter the username or user ID of the sub-account to update

Basic Information

Quota Settings

Rate Limits

Model-Specific Limits

Quick fill
Supports JSON; you can also enter "*" or "=" to clear all, or "-gpt-5, -gpt-5-nano" to remove specific models. Each model can have rpm, rph, rpd, tpm, tph, tpd limits. Leave empty to keep current settings.
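As an illustration of the JSON shape described above (the model names and numeric limits here are made-up examples), a quick-fill value might look like:

```json
{
  "gpt-5": { "rpm": 60, "tpm": 200000, "rpd": 5000 },
  "MiniMax-M2.5": { "rph": 1200, "tpd": 20000000 }
}
```

Entering "*" (or "=") instead clears all model-specific limits, while "-gpt-5, -gpt-5-nano" removes just those two entries and leaves the rest unchanged.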

Access Control

Model mapping is incremental: use "*" or "=" to clear, "-key" to remove.
Leave blank to keep the current setting. To fall back to the system default QR code, clear it in the drawer and save.

API Call Examples

cURL

Python

JavaScript

Sub-account List

0 sub-accounts

No sub-accounts yet

This account has no sub-accounts

Page 1 / 1 Β· Total 0

Delete Sub-account

This action cannot be undone. The sub-account Key will be immediately invalidated.

Please carefully confirm the sub-account identifier to delete

API Call Examples

cURL

Python

JavaScript

Sub-account Insights


Failed to load billing data

Please verify network access and API permissions, then retry

Total Cost

$0.00

Total Requests

0

Total Tokens

0

Total Images

0

Input Tokens

0

Output Tokens

0

Cache Tokens

0

Model Cost Distribution

Model Request Distribution

Cost Trend Analysis

Daily Trend

Model Spend & Requests

Summarize request share and cost structure to pinpoint primary cost drivers.

Model | Requests | Prompt | Completion | Spend | Request Share | Spend Share
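The two share columns in this table are plain ratios against the period totals; a minimal sketch of that computation (the sample figures are made up):

```python
def add_shares(rows):
    """Compute request-share and spend-share columns for model usage rows.
    Each input row is (model, requests, spend); each share is that row's
    fraction of the corresponding total."""
    total_req = sum(r[1] for r in rows) or 1
    total_spend = sum(r[2] for r in rows) or 1.0
    return [
        {
            "model": model,
            "requests": req,
            "spend": spend,
            "request_share": req / total_req,
            "spend_share": spend / total_spend,
        }
        for model, req, spend in rows
    ]

rows = [("gpt-5", 750, 12.0), ("claude-sonnet", 250, 8.0)]
for r in add_shares(rows):
    print(r["model"], f'{r["request_share"]:.0%}', f'{r["spend_share"]:.0%}')
# gpt-5 75% 60%
# claude-sonnet 25% 40%
```

A model can dominate request share while another dominates spend share, which is exactly the mismatch this table is meant to surface.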

Sub-account Overview

Compare sub-account request and spend distribution to flag anomalies quickly.

User | Requests | Input Tokens | Output Tokens | Cache Tokens | Images | Highest-Cost Model | Most Requested Model | Spend | Spend Share | Request Share

Daily Timeline

Review daily totals, model share, and cache hits across the selected range.

Activity Logs

Time | Action | Target | Details | IP


No activity logs available

Operation Result

πŸ”‘ Secret Key

Please keep this key secure; it will be used for sub-account API authentication.

πŸ’‘ Tip: Please notify the sub-account owner to check their email for the API key

Sub-account Editor

Sub-account

Basic Info

Usage Limits

Rate Limits

Models & Access

Model mapping is incremental: use "*" or "=" to clear, "-key" to remove.
Supports JSON; use "*" or "=" to clear, or "-gpt-5" to remove a model.

Other

Clear and save to fall back to the system default support QR code.

Confirm Action