This repository was archived by the owner on Feb 23, 2026. It is now read-only.
Merged
Changes from all commits (32 commits)
84a19da
security: remove unused hono dependency and update override to v4.11.8
iam-brain Feb 6, 2026
4806546
feat: implement dynamic model instructions and gpt-5.3-codex support
iam-brain Feb 7, 2026
184b2fd
Align model resolution with server catalogs and per-account access, a…
iam-brain Feb 7, 2026
db00396
feat: harden model validation and cache-backed personalities
iam-brain Feb 7, 2026
4a3f7b6
fix: allow base gpt-5.x model slugs
iam-brain Feb 7, 2026
85de0fe
feat: enforce server catalog model validation
iam-brain Feb 7, 2026
27ae491
feat: add hard-stop config options
iam-brain Feb 7, 2026
5dd611c
feat: add synthetic error response helper
iam-brain Feb 7, 2026
e5663e9
feat: hard-stop on all-accounts blocked
iam-brain Feb 7, 2026
c96ae21
feat: hard-stop on unsupported models
iam-brain Feb 7, 2026
1cdcb86
docs: explain hard-stop error behavior
iam-brain Feb 7, 2026
0f4b3fb
test: seed model catalog cache in fetch orchestrator
iam-brain Feb 7, 2026
010d92a
docs: align README and multi-account tooling
iam-brain Feb 7, 2026
5f3e0a2
docs: refresh architecture for current pipeline
iam-brain Feb 7, 2026
616fc49
docs: prune legacy troubleshooting references
iam-brain Feb 7, 2026
f308f6d
docs: clarify catalog validation and safety
iam-brain Feb 7, 2026
1b1a54a
docs: expand plugin configuration reference
iam-brain Feb 7, 2026
08e582f
docs: add hybrid selection details
iam-brain Feb 7, 2026
07b38e0
docs: clarify catalog validation in architecture
iam-brain Feb 7, 2026
efb184c
docs: update privacy cache details
iam-brain Feb 7, 2026
04e54c7
docs: sync config flow and fields
iam-brain Feb 7, 2026
a30741e
docs: note model catalog cache failures
iam-brain Feb 7, 2026
eec642a
docs: tidy configuration env list
iam-brain Feb 7, 2026
68e56c0
fix: redact prompt_cache_key in request logs
iam-brain Feb 7, 2026
88ec3e1
fix: drop invalid model catalog caches
iam-brain Feb 7, 2026
6dcd761
refactor: remove legacy codexMode flag
iam-brain Feb 7, 2026
95b2c86
docs: update changelog for catalog safety
iam-brain Feb 7, 2026
ee3f3ec
fix: restore strict model catalog validation
iam-brain Feb 7, 2026
c3b9d0b
test: align codex model catalog expectations
iam-brain Feb 7, 2026
26c1ad2
fix: hard-stop on model catalog errors
iam-brain Feb 7, 2026
1f1ce82
test: seed codex instruction cache
iam-brain Feb 7, 2026
3b0bda6
test: align instruction cache path
iam-brain Feb 7, 2026
2 changes: 2 additions & 0 deletions .gitignore
@@ -2,8 +2,10 @@ node_modules/
bun.lockb
pnpm-lock.yaml
dist/
coverage/
.worktrees/
docs/plans/
docs/progress/
docs/research/
.DS_Store
.history/
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,20 @@

All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

## [Unreleased]

### Added
- **Dynamic model discovery**: authoritative `/backend-api/codex/models` catalog with per-account cache and strict allowlist.
- **Personality caching**: seeds Friendly/Pragmatic defaults from runtime model metadata when available.

### Changed
- **Logging safety**: request logs redact `prompt_cache_key` when request logging is enabled.
- **Catalog cache hygiene**: invalid `codex-models-cache-<hash>.json` files are deleted on read.
- **Config surface**: removed legacy `codexMode` flag (no longer supported).

### Docs
- Refresh configuration, architecture, and troubleshooting to match hard-stop and catalog behavior.

## [4.6.0] - 2026-02-04

**Quarantine + Multi-Account Reliability release**: safer storage handling, clearer recovery, and
48 changes: 29 additions & 19 deletions README.md
@@ -1,7 +1,7 @@
![Image 1: opencode-openai-codex-auth](assets/readme-hero.svg)


**This project is now EOL and no further developments will be made. A complete rewrite, based on the current native implementation of OpenAI's OAuth in Opencode, is now underway and will be available at [https://github.com/iam-brain/opencode-openai-multi](https://github.com/iam-brain/opencode-openai-multi) when complete.**
**Maintenance fork:** This project continues to receive hardening and compatibility updates while a full rewrite (based on OpenCode's native OAuth) is underway at [https://github.com/iam-brain/opencode-openai-multi](https://github.com/iam-brain/opencode-openai-multi).


Fork maintained by [iam-brain](https://github.com/iam-brain).
@@ -33,12 +33,12 @@ npx -y opencode-openai-codex-multi-auth@latest
Then:
```bash
opencode auth login
opencode run "write hello world to test.txt" --model=openai/gpt-5.2 --variant=medium
opencode run "write hello world to test.txt" --model=openai/gpt-5.3-codex --variant=medium
```
Legacy OpenCode (v1.0.209 and below):
```bash
npx -y opencode-openai-codex-multi-auth@latest --legacy
opencode run "write hello world to test.txt" --model=openai/gpt-5.2-medium
opencode run "write hello world to test.txt" --model=openai/gpt-5.3-codex-medium
```
Uninstall:
```bash
@@ -57,6 +57,7 @@ opencode auth login

---
## 📦 Models
- **gpt-5.3-codex** (low/medium/high/xhigh)
- **gpt-5.2** (none/low/medium/high/xhigh)
- **gpt-5.2-codex** (low/medium/high/xhigh)
- **gpt-5.1-codex-max** (low/medium/high/xhigh)
@@ -68,34 +69,36 @@ opencode auth login
- Modern (OpenCode v1.0.210+): `config/opencode-modern.json`
- Legacy (OpenCode v1.0.209 and below): `config/opencode-legacy.json`
- Installer template source: latest GitHub release → GitHub `main` → bundled static template fallback
- Runtime model metadata source: Codex `/backend-api/codex/models` → local cache → GitHub `models.json` (release/main) → static template defaults
- Runtime model metadata source: Codex `/backend-api/codex/models` → per-account local cache (server-derived). Requests fail closed if the catalog is unavailable.

Minimal configs are not supported for GPT‑5.x; use the full configs above.

Personality is supported for all current and future models via `options.personality`:
Personality is configured in `~/.config/opencode/openai-codex-auth-config.json` via `custom_settings`:

```json
{
"provider": {
"openai": {
"options": {
"personality": "friendly"
},
"models": {
"gpt-5.3-codex": {
"options": {
"personality": "pragmatic"
}
"custom_settings": {
"options": {
"personality": "Idiot"
},
"models": {
"gpt-5.3-codex": {
"options": {
"personality": "pragmatic"
}
}
}
}
}
```

Accepted values: `none`, `friendly`, `pragmatic` (case-insensitive).
Personality descriptions come from:
- Project-local `.opencode/Personalities/*.md`
- Global `~/.config/opencode/Personalities/*.md`

Legacy note: `codexMode` is deprecated and now a no-op.
The filename (case-insensitive) defines the key (e.g., `Idiot.md`), and the file contents are used verbatim.

Built-ins: `none`, `default` (uses model runtime defaults), `friendly`, `pragmatic` (fallback if unset). Any other key requires a matching personality file.
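Concretely, the file-based lookup described above can be exercised by dropping a Markdown file into the global personalities directory (the `Reviewer` key and its contents here are illustrative examples, not shipped defaults):

```shell
# Create the global personalities directory documented above.
PERSONA_DIR="$HOME/.config/opencode/Personalities"
mkdir -p "$PERSONA_DIR"

# The filename (minus .md, case-insensitive) becomes the personality key.
cat > "$PERSONA_DIR/Reviewer.md" <<'EOF'
You are a meticulous code reviewer. Prefer small, verifiable changes.
EOF
```

Setting `"personality": "reviewer"` in `custom_settings` would then resolve to this file; any key that is neither a built-in nor backed by a matching file is rejected.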
---
## ⌨️ Slash Commands (TUI)
In the OpenCode TUI, you can use these commands to manage your accounts and monitor usage:
@@ -105,23 +108,30 @@
| `/codex-status` | Shows current rate limits (5h/Weekly), credits, and account status (percent left). |
| `/codex-switch-accounts <index>` | Switch the active account by its 1-based index from the status list. |
| `/codex-toggle-account <index>` | Enable or disable an account by its 1-based index (prevents auto-selection). |
| `/codex-remove-account <index>` | Remove an account by its 1-based index. |

---
## ✅ Features
- ChatGPT Plus/Pro OAuth authentication (official flow)
- 22 model presets across GPT‑5.2 / GPT‑5.2 Codex / GPT‑5.1 families
- Model presets across GPT‑5.3 Codex / GPT‑5.2 / GPT‑5.2 Codex / GPT‑5.1 families
- Variant system support (v1.0.210+) + legacy presets
- Multimodal input enabled for all models
- Usage‑aware errors + automatic token refresh
- Online-first template/model metadata resolution with resilient fallbacks
- Authoritative model catalog validation (`/codex/models`) with per-account cache
- Multi-account support with sticky selection + PID offset (great for parallel agents)
- Account enable/disable management (via `opencode auth login` manage)
- Hard-stop safety loops for unavailable accounts and unsupported models
- Strict account identity matching (`accountId` + `email` + `plan`)
- Hybrid account selection strategy (health score + token bucket + LRU bias)
- Optional round-robin account rotation (maximum throughput)
- OpenCode TUI toasts + `codex-status` / `codex-switch-accounts` tools
- **Authoritative Codex Status**: Real-time rate limit monitoring (5h/Weekly) with ASCII status bars
---
## 🛡️ Safety & Reliability
- Hard-stop safety gate for all-accounts rate-limit/auth-failure loops
- Strict model allowlist from `/backend-api/codex/models` (per-account cached)
- Synthetic error responses that surface the exact failure reason
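As a sketch of the last bullet: a synthetic error response can reuse the familiar OpenAI error envelope so clients surface the exact hard-stop reason instead of retrying silently. The helper name and field values below are assumptions for illustration, not the plugin's actual API:

```typescript
// Hypothetical shape mirroring OpenAI's error envelope; not the plugin's real API.
interface SyntheticError {
  error: { message: string; type: string; code: string };
}

// Build a response body that names the exact hard-stop reason.
function buildSyntheticError(code: string, message: string): SyntheticError {
  return { error: { message, type: "hard_stop", code } };
}

const body = buildSyntheticError(
  "all_accounts_blocked",
  "All accounts are rate-limited or in auth-failure cooldown.",
);
console.log(JSON.stringify(body));
```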
---
## 📚 Docs
- Getting Started: `docs/getting-started.md`
- Configuration: `docs/configuration.md`
69 changes: 64 additions & 5 deletions assets/openai-codex-auth-config.schema.json
@@ -9,11 +9,48 @@
"type": "string",
"description": "JSON schema reference for editor autocompletion"
},
"codexMode": {
"type": "boolean",
"default": false,
"deprecated": true,
"description": "Deprecated legacy field. Bridge mode has been removed and this flag is now a no-op."
"custom_settings": {
"type": "object",
"description": "Override provider options (including personality) without editing opencode.json.",
"properties": {
"options": {
"type": "object",
"description": "Global OpenAI provider option overrides.",
"properties": {
"personality": {
"type": "string",
"description": "Personality key (built-ins: none/default/friendly/pragmatic or a custom .md file name)."
}
},
"additionalProperties": true
},
"models": {
"type": "object",
"description": "Per-model overrides keyed by model id.",
"additionalProperties": {
"type": "object",
"properties": {
"options": {
"type": "object",
"properties": {
"personality": {
"type": "string",
"description": "Personality key override for this model."
}
},
"additionalProperties": true
},
"variants": {
"type": "object",
"description": "Per-variant overrides keyed by reasoning effort.",
"additionalProperties": true
}
},
"additionalProperties": true
}
}
},
"additionalProperties": true
},
"accountSelectionStrategy": {
"type": "string",
@@ -121,6 +158,28 @@
"minimum": 0,
"default": 1,
"description": "Maximum number of all-accounts wait cycles."
},
"hardStopMaxWaitMs": {
"type": "number",
"minimum": 0,
"default": 10000,
"description": "Maximum wait (ms) before returning a hard-stop error when no accounts are available."
},
"hardStopOnUnknownModel": {
"type": "boolean",
"default": true,
"description": "Return a hard-stop error when the requested model is not in the server catalog."
},
"hardStopOnAllAuthFailed": {
"type": "boolean",
"default": true,
"description": "Return a hard-stop error when all accounts are in auth-failure cooldown."
},
"hardStopMaxConsecutiveFailures": {
"type": "number",
"minimum": 0,
"default": 5,
"description": "Maximum consecutive failures before returning a hard-stop error."
}
}
}
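Put together, the hard-stop fields defined in this schema could appear in `~/.config/opencode/openai-codex-auth-config.json` like this (the values shown are simply the schema defaults):

```json
{
  "hardStopMaxWaitMs": 10000,
  "hardStopOnUnknownModel": true,
  "hardStopOnAllAuthFailed": true,
  "hardStopMaxConsecutiveFailures": 5
}
```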
13 changes: 7 additions & 6 deletions config/README.md
@@ -35,7 +35,7 @@
| `opencode-legacy.json` | 6 | Separate model entries | 20 individual model definitions |

Both configs provide:
- ✅ All supported GPT 5.2/5.1 variants: gpt-5.2, gpt-5.2-codex, gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
- ✅ All supported GPT 5.x variants: gpt-5.3-codex, gpt-5.2, gpt-5.2-codex, gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
- ✅ Proper reasoning effort settings for each variant (including `xhigh` for Codex Max/5.2)
- ✅ Context limits (272k context / 128k output for all Codex families)
- ✅ Required options: `store: false`, `include: ["reasoning.encrypted_content"]`
@@ -68,12 +68,12 @@
3. **Run opencode**:
```bash
# Modern config (v1.0.210+):
opencode run "task" --model=openai/gpt-5.2 --variant=medium
opencode run "task" --model=openai/gpt-5.2 --variant=high
opencode run "task" --model=openai/gpt-5.3-codex --variant=medium
opencode run "task" --model=openai/gpt-5.3-codex --variant=high

# Legacy config:
opencode run "task" --model=openai/gpt-5.2-medium
opencode run "task" --model=openai/gpt-5.2-high
opencode run "task" --model=openai/gpt-5.3-codex-medium
opencode run "task" --model=openai/gpt-5.3-codex-high
```

> **⚠️ Important**: Use the config file appropriate for your OpenCode version. Using the modern config with an older OpenCode version (v1.0.209 or below) will not work correctly.
@@ -84,14 +84,15 @@

Both configs provide access to the same model families:

- **gpt-5.3-codex** (low/medium/high/xhigh) - Primary Codex model
- **gpt-5.2** (none/low/medium/high/xhigh) - Latest GPT 5.2 model with full reasoning support
- **gpt-5.2-codex** (low/medium/high/xhigh) - GPT 5.2 Codex presets
- **gpt-5.1-codex-max** (low/medium/high/xhigh) - Codex Max presets
- **gpt-5.1-codex** (low/medium/high) - Codex model presets
- **gpt-5.1-codex-mini** (medium/high) - Codex mini tier presets
- **gpt-5.1** (none/low/medium/high) - General-purpose reasoning presets

All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5.1 High (OAuth)", etc.
All appear in the opencode model selector as "GPT 5.3 Codex Low (OAuth)", "GPT 5.2 High (OAuth)", etc.

## Configuration Options

2 changes: 1 addition & 1 deletion config/minimal-opencode.json
@@ -8,5 +8,5 @@
}
}
},
"model": "openai/gpt-5-codex"
"model": "openai/gpt-5.3-codex"
}
100 changes: 100 additions & 0 deletions config/opencode-legacy.json
@@ -140,6 +140,106 @@
"store": false
}
},
"gpt-5.3-codex-low": {
"name": "GPT 5.3 Codex Low (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"modalities": {
"input": [
"text",
"image"
],
"output": [
"text"
]
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.3-codex-medium": {
"name": "GPT 5.3 Codex Medium (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"modalities": {
"input": [
"text",
"image"
],
"output": [
"text"
]
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.3-codex-high": {
"name": "GPT 5.3 Codex High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"modalities": {
"input": [
"text",
"image"
],
"output": [
"text"
]
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.3-codex-xhigh": {
"name": "GPT 5.3 Codex Extra High (OAuth)",
"limit": {
"context": 272000,
"output": 128000
},
"modalities": {
"input": [
"text",
"image"
],
"output": [
"text"
]
},
"options": {
"reasoningEffort": "xhigh",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.2-codex-low": {
"name": "GPT 5.2 Codex Low (OAuth)",
"limit": {