Integrations with Agents
This chapter explains, from an end-user perspective, how to integrate tokenfor.me into popular agents and tools. We focus on configuration patterns and concrete examples.
Universal rule:
- Base host: see the "API base URLs" section under "API Keys"
- Common paths:
  - OpenAI / Anthropic compatible APIs: /v1
  - Gemini compatible APIs: /v1beta
- API Key / Token: a key created in the tokenfor.me console (one key per provider group)
If a tool supports OpenAI/Anthropic-style APIs, it can usually work with tokenfor.me by changing the base URL and key.
Generic HTTP template
Most HTTP-based LLM clients follow a similar pattern:
```
POST <API_BASE_URL>/v1/chat/completions   # see "API base URLs" under "API Keys" for the actual endpoint
Authorization: Bearer <YOUR_API_KEY>
Content-Type: application/json

{
  "model": "gpt-4",  // model name must match what tokenfor.me supports
  "messages": [
    { "role": "user", "content": "Hello, testing tokenfor.me." }
  ]
}
```

The following sections show how this maps to specific tools.
Codex example
Example only – exact configuration may differ in your Codex version.
Assume Codex reads configuration from a config.json file:
```
{
  "apiBase": "<API_BASE_URL>",  // see "API base URLs" under "API Keys" for the actual endpoint
  "apiKey": "YOUR_API_KEY",
  "model": "gpt-4"
}
```

Steps:
- Open Codex's config file or settings UI.
- Change the original OpenAI apiBase to the endpoint described in the "API base URLs" section under "API Keys".
- Set apiKey to the key you created in tokenfor.me.
- Save the configuration.
- Start a conversation in Codex; if you receive normal model responses, integration is successful.
Claude Code CLI example
Assume Claude Code CLI supports environment variables or a config file.
- Environment variable example:

```bash
export CLAUDE_API_BASE="<API_BASE_URL>"  # see "API base URLs" under "API Keys"
export CLAUDE_API_KEY="YOUR_API_KEY"
```

- Config file example (config.toml):

```toml
api_base = "<API_BASE_URL>"  # see "API base URLs" under "API Keys"
api_key = "YOUR_API_KEY"
model = "claude-3-opus"  # use a model enabled in tokenfor.me
```

Then use the CLI as usual. All calls will be routed via tokenfor.me.
OpenClaw examples
If you are already using OpenClaw, you can simply add three tokenfor.me providers (GPT, Claude, Gemini) and point their baseUrl to tokenfor.me, without changing your existing workflows. Configure each AI vendor with its own key.
- GPT (OpenAI-compatible)
```
{
  "models": {
    "providers": {
      "tokenforme-gpt": {
        "baseUrl": "<API_BASE_URL>/v1",  // see "API base URLs" under "API Keys" for the actual endpoint
        "apiKey": "sk-your-key",
        "models": [
          {
            "id": "gpt-5",
            "name": "gpt-5",
            "api": "openai-responses",
            "reasoning": true,
            "input": ["text"]
          }
        ]
      }
    }
  }
}
```

- Claude (Anthropic-compatible)
```
{
  "models": {
    "providers": {
      "tokenforme-claude": {
        "baseUrl": "<API_BASE_URL>/v1",  // see "API base URLs" under "API Keys" for the actual endpoint
        "apiKey": "sk-your-key",
        "models": [
          {
            "id": "claude-sonnet-4-6",
            "name": "claude-sonnet-4-6",
            "api": "anthropic-messages",
            "reasoning": true,
            "input": ["text"]
          }
        ]
      }
    }
  }
}
```

- Gemini (Google Generative AI-compatible)
```
{
  "models": {
    "providers": {
      "tokenforme-gemini": {
        "baseUrl": "<API_BASE_URL>/v1beta",  // see "API base URLs" under "API Keys" for the actual endpoint
        "apiKey": "sk-your-key",
        "api": "google-generative-ai",
        "models": [
          {
            "id": "gemini-3.1-flash-image-preview",
            "name": "gemini-3.1-flash-image-preview",
            "api": "google-generative-ai",
            "reasoning": false,
            "input": ["text", "image"]
          }
        ],
        "authHeader": true,
        "request": {
          "headers": {
            "Authorization": "Bearer ${models.providers.tokenforme-gemini.apiKey}"
          }
        }
      }
    }
  }
}
```
Antigravity / Qoder / Sursor examples
For these tools, the pattern is similar:
- Open the settings panel for Models or API.
- Set the base URL or endpoint to the value described in the "API base URLs" section under "API Keys".
- Paste your tokenfor.me API key into the key/token field.
- Select or type a model name that is enabled in tokenfor.me.
- Save and run a simple test message.
If the tool provides a Test Connection button, use it to verify connectivity before heavy usage.
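If a tool has no such button, you can run the equivalent check by hand. Below is a minimal sketch in Python (standard library only); the base URL and key are placeholders, and the `opener` parameter is injectable purely so the function can be exercised without a live network.

```python
import json
import urllib.error
import urllib.request

# Placeholders only - replace with the real endpoint and key.
API_BASE_URL = "https://example.invalid"  # see "API base URLs" under "API Keys"
API_KEY = "YOUR_API_KEY"

def test_connection(opener=urllib.request.urlopen) -> str:
    """Send the minimal request from the generic template and report the outcome."""
    body = json.dumps({
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "ping"}],
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE_URL}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    try:
        with opener(req, timeout=10) as resp:
            return f"OK: HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        return f"Server rejected the request: HTTP {e.code}"
    except urllib.error.URLError as e:
        return f"Could not reach the endpoint: {e.reason}"

# print(test_connection())  # run this after filling in real values
```

An "OK" result confirms connectivity and authentication in one shot; the two error branches correspond to the authentication and connection issues covered in the troubleshooting section below.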
Debugging and troubleshooting
- Authentication errors:
- Double-check your API key and ensure the key is enabled in the console.
- Connection or timeout issues:
- Make sure the base URL matches the endpoint described in the "API base URLs" section under "API Keys".
- Check local network and proxy/firewall settings.
- Model not found:
- Confirm the model is enabled for your key in tokenfor.me.
- Use the exact model identifier shown in the console or documentation.
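The checklist above can be condensed into a small helper for debugging scripts. The status-code mapping below follows common HTTP conventions and is an assumption, not documented tokenfor.me behavior; always check the response body and the console for the authoritative cause.

```python
def diagnose(status: int) -> str:
    """Map an HTTP status code to the most likely fix from the checklist above."""
    if status in (401, 403):
        return "Authentication error: check the API key and that it is enabled in the console."
    if status == 404:
        return "Model or path not found: verify the base URL and the exact model identifier."
    if status >= 500:
        return "Server-side error: retry later, or check the service status."
    return f"Unexpected status {status}: inspect the response body for details."

print(diagnose(401))
```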