From 63ac93ba84224e018209c09fcea8e5e0d49ca008 Mon Sep 17 00:00:00 2001
From: Krishna Chandra
Date: Thu, 12 Feb 2026 19:20:00 +0530
Subject: [PATCH 1/3] docs: update how to configure portkey on codex

---
 integrations/libraries/codex.mdx | 79 +++++++++++++++++---------------
 1 file changed, 43 insertions(+), 36 deletions(-)

diff --git a/integrations/libraries/codex.mdx b/integrations/libraries/codex.mdx
index 42762ea8..e0c4e365 100644
--- a/integrations/libraries/codex.mdx
+++ b/integrations/libraries/codex.mdx
@@ -1,22 +1,22 @@
 ---
-title: 'OpenAI Codex CLI'
-description: 'Add usage tracking, cost controls, and security guardrails to your Codex CLI deployment'
+title: 'OpenAI Codex'
+description: 'Add usage tracking, cost controls, and security guardrails to Codex with Portkey'
 ---
 
-OpenAI Codex CLI is a lightweight coding agent that runs in your terminal. Add Portkey to get:
+**Codex** is OpenAI’s coding agent that runs in the terminal, IDE extension, and CLI. It uses a shared config (user-level `~/.codex/config.toml` and optional project-level `.codex/config.toml`) to set the default model, approval policies, sandbox settings, and provider details. Add Portkey to get:
 
-- **1600+ LLMs** through one interface - switch providers instantly
-- **Observability** - track costs, tokens, and latency for every request
-- **Reliability** - automatic fallbacks, retries, and caching
-- **Governance** - budget limits, usage tracking, and team access controls
+- **1600+ LLMs** through one interface — switch providers by changing the model in config
+- **Observability** — track costs, tokens, and latency for every request
+- **Reliability** — automatic fallbacks, retries, and caching
+- **Governance** — budget limits, usage tracking, and team access controls
 
-This guide shows how to configure Codex CLI with Portkey in under 5 minutes.
+This guide shows how to configure Codex with Portkey in a few minutes.
 
 For enterprise deployments across teams, see [Enterprise Governance](#3-enterprise-governance).
 
-# 1. Setup
+## 1. Setup
 
@@ -40,25 +40,23 @@ Go to [API Keys](https://app.portkey.ai/api-keys) and generate your Portkey API
 
-# 2. Configure Codex CLI
-
-Create or edit `~/.codex/config.json`:
-
-```json
-{
-  "provider": "portkey",
-  "model": "@openai-prod/gpt-4o",
-  "providers": {
-    "portkey": {
-      "name": "Portkey",
-      "baseURL": "https://api.portkey.ai/v1",
-      "envKey": "PORTKEY_API_KEY"
-    }
-  }
-}
+## 2. Configure Portkey in Codex
+
+Codex reads configuration from **TOML** files. User-level config lives at `~/.codex/config.toml`; you can override per project with `.codex/config.toml` in the repo (see [Config basics](https://developers.openai.com/codex/config-basic) for precedence).
+
+Create or edit `~/.codex/config.toml` and add Portkey as the provider. Use `model_provider` (the provider id from `model_providers`) and define Portkey under `[model_providers.portkey]` with `base_url` and `env_key` per the [Config reference](https://developers.openai.com/codex/config-reference):
+
+```toml
+model_provider = "portkey"
+model = "@openai-prod/gpt-4o"
+
+[model_providers.portkey]
+name = "Portkey"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY"
 ```
 
-Set your environment variable:
+Set the API key in your environment:
 
 ```shell
 export PORTKEY_API_KEY="your-portkey-api-key"
@@ -68,28 +66,37 @@ export PORTKEY_API_KEY="your-portkey-api-key"
 ```
 
 Add to `~/.zshrc` or `~/.bashrc` for persistence.
 
-Test your integration:
+Test the integration:
 
 ```shell
 codex "explain this repository to me"
 ```
 
-Done! 
 Monitor usage in the [Portkey Dashboard](https://app.portkey.ai/dashboard).
 
-## Switch Providers
+## 3. Change providers and models
 
-Change models by updating the `model` field in your config:
+Codex uses the `model` value in `config.toml` to decide which model to call. With Portkey, the model is a **virtual key** in the form `@<provider-slug>/<model-name>`. Change providers or models by updating `model` in your config.
+
+**Examples:**
+
+```toml
+# OpenAI
+model = "@openai-prod/gpt-4o"
+
+# Anthropic
+model = "@anthropic-prod/claude-3-5-sonnet-20241022"
+
+# Google
+model = "@google-prod/gemini-2.0-flash-exp"
 ```
-@anthropic-prod/claude-3-5-sonnet-20241022
-@openai-prod/gpt-4o
-@google-prod/gemini-2.0-flash-exp
-```
+
+Use the provider slugs you created in the [Model Catalog](https://app.portkey.ai/model-catalog). After editing `~/.codex/config.toml` (or `.codex/config.toml` in the project), the next Codex run uses the new model.
 
-**Want fallbacks, load balancing, or caching?** Create a [Portkey Config](/product/ai-gateway/configs) and attach it to your API key. See [Enterprise Governance](#3-enterprise-governance) for examples.
+**Fallbacks, load balancing, or caching?** Create a [Portkey Config](/product/ai-gateway/configs), attach it to your API key, and set `model` to the config’s virtual model. See [Enterprise Governance](#3-enterprise-governance) for examples.
 
 import AdvancedFeatures from '/snippets/portkey-advanced-features.mdx';
 
- 
\ No newline at end of file
+ 

From ba6ca411bac4412472112bffe5cb697b86d128d2 Mon Sep 17 00:00:00 2001
From: Krishna Chandra
Date: Fri, 13 Feb 2026 13:45:07 +0530
Subject: [PATCH 2/3] docs: add provider protocol options while configuring
 portkey on codex

---
 integrations/libraries/codex.mdx | 60 ++++++++++++++++++++++++++++----
 1 file changed, 53 insertions(+), 7 deletions(-)

diff --git a/integrations/libraries/codex.mdx b/integrations/libraries/codex.mdx
index e0c4e365..a49e380d 100644
--- a/integrations/libraries/codex.mdx
+++ b/integrations/libraries/codex.mdx
@@ -54,6 +54,7 @@ model = "@openai-prod/gpt-4o"
 name = "Portkey"
 base_url = "https://api.portkey.ai/v1"
 env_key = "PORTKEY_API_KEY"
+# wire_api = "chat" # optional: "chat" (default) or "responses"
 ```
 
 Set the API key in your environment:
@@ -78,20 +79,65 @@ Monitor usage in the [Portkey Dashboard](https://app.portkey.ai/dashboard).
 
 Codex uses the `model` value in `config.toml` to decide which model to call. With Portkey, the model is a **virtual key** in the form `@<provider-slug>/<model-name>`. Change providers or models by updating `model` in your config.
 
-**Examples:**
+Example:
 
 ```toml
-# OpenAI
 model = "@openai-prod/gpt-4o"
+```
+
+In the [Model Catalog](https://app.portkey.ai/model-catalog), use the copy button next to a provider or virtual model to copy its virtual key, then paste it into `model`. After editing `~/.codex/config.toml` (or `.codex/config.toml` in the project), the next Codex run uses the new model.
+
+### Provider and model options
+
+Tune the Portkey provider and the active model with these options in `config.toml`. All keys follow the [Codex config reference](https://developers.openai.com/codex/config-reference).
+
+**Provider protocol (`wire_api`)**
+Under `[model_providers.portkey]`, set `wire_api` to choose which API protocol Codex uses when talking to the provider:
+
+| Value | Description |
+| ----------- | ----------- |
+| `"chat"` | Chat Completions API (default if omitted). Use for standard chat/completion models. |
+| `"responses"` | [Responses API](https://developers.openai.com/docs/guides/responses). Use for models that support structured reasoning and tool use via the Responses API. |
+
+Example with protocol and optional retry/timeout tuning:
 
-# Anthropic
-model = "@anthropic-prod/claude-3-5-sonnet-20241022"
+```toml
+[model_providers.portkey]
+name = "Portkey"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY"
+wire_api = "responses"
+# request_max_retries = 4
+# stream_idle_timeout_ms = 300000
+```
 
-# Google
-model = "@google-prod/gemini-2.0-flash-exp"
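+
+If you flip between the two protocols often, one optional pattern (an illustration, not something Codex or Portkey requires) is to keep one provider entry per protocol and switch `model_provider`:
+
+```toml
+# Hypothetical entries; the names "portkey-chat" and "portkey-responses" are arbitrary
+model_provider = "portkey-responses"  # switch to "portkey-chat" for Chat Completions
+model = "@openai-prod/gpt-4o"
+
+[model_providers.portkey-chat]
+name = "Portkey (Chat Completions)"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY"
+wire_api = "chat"
+
+[model_providers.portkey-responses]
+name = "Portkey (Responses)"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY"
+wire_api = "responses"
+```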
+
+**Reasoning and behavior (top-level)**
+These apply to the current session model; they are especially relevant when using reasoning-capable models or the Responses API:
+
+| Key | Values | Description |
+| --- | ------ | ----------- |
+| `model_reasoning_effort` | `minimal`, `low`, `medium`, `high`, `xhigh` | How much reasoning effort the model uses (Responses API). Higher values can improve quality and latency. `xhigh` is model-dependent. |
+| `model_reasoning_summary` | `auto`, `concise`, `detailed`, `none` | How much reasoning summary to include or to disable summaries. |
+| `personality` | `none`, `friendly`, `pragmatic` | Default communication style for models that support it. Overridable per thread or via `/personality` in-session. |
+
+Example:
+
+```toml
+model_provider = "portkey"
+model = "@openai-prod/gpt-4o"
+
+model_reasoning_effort = "high"
+model_reasoning_summary = "concise"
+personality = "pragmatic"
+
+[model_providers.portkey]
+name = "Portkey"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY"
+wire_api = "chat"
 ```
 
-Use the provider slugs you created in the [Model Catalog](https://app.portkey.ai/model-catalog). After editing `~/.codex/config.toml` (or `.codex/config.toml` in the project), the next Codex run uses the new model.
+Use `wire_api = "responses"` when your Portkey virtual model is backed by a Responses API–capable model; pair it with `model_reasoning_effort` and `model_reasoning_summary` as needed.
 
 **Fallbacks, load balancing, or caching?** Create a [Portkey Config](/product/ai-gateway/configs), attach it to your API key, and set `model` to the config’s virtual model. See [Enterprise Governance](#3-enterprise-governance) for examples.
 
 import AdvancedFeatures from '/snippets/portkey-advanced-features.mdx';

From d80094e25797327bae205bfca49561554fde606e Mon Sep 17 00:00:00 2001
From: Krishna Chandra
Date: Fri, 13 Feb 2026 15:01:59 +0530
Subject: [PATCH 3/3] docs: made the docs clear and concise, added more model
 capabilities etc

---
 integrations/libraries/codex.mdx | 69 +++++++++++++++++++++++---------
 1 file changed, 50 insertions(+), 19 deletions(-)

diff --git a/integrations/libraries/codex.mdx b/integrations/libraries/codex.mdx
index a49e380d..8efa1e17 100644
--- a/integrations/libraries/codex.mdx
+++ b/integrations/libraries/codex.mdx
@@ -3,14 +3,14 @@
 title: 'OpenAI Codex'
 description: 'Add usage tracking, cost controls, and security guardrails to Codex with Portkey'
 ---
 
-**Codex** is OpenAI’s coding agent that runs in the terminal, IDE extension, and CLI. It uses a shared config (user-level `~/.codex/config.toml` and optional project-level `.codex/config.toml`) to set the default model, approval policies, sandbox settings, and provider details. Add Portkey to get:
+**Codex** is OpenAI’s coding agent, available as a CLI in the terminal and as an IDE extension. It uses a shared config (user-level `~/.codex/config.toml` and optional project-level `.codex/config.toml`) for the default model, approval policies, sandbox settings, and provider details. Add Portkey to get:
 
-- **1600+ LLMs** through one interface — switch providers by changing the model in config
+- **1600+ LLMs** through one interface — switch providers by updating `model` in config
 - **Observability** — track costs, tokens, and latency for every request
 - **Reliability** — automatic fallbacks, retries, and caching
 - **Governance** — budget limits, usage tracking, and team access controls
 
-This guide shows how to configure Codex with Portkey in a few minutes.
+Configure Codex with Portkey in a few minutes.
 
 For enterprise deployments across teams, see [Enterprise Governance](#3-enterprise-governance).
@@ -28,7 +28,7 @@ Go to [Model Catalog](https://app.portkey.ai/model-catalog) → **Add Provider**
 
-Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like `openai-prod`.
+Select a provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like `openai-prod`.
 
@@ -42,9 +42,9 @@ Go to [API Keys](https://app.portkey.ai/api-keys) and generate your Portkey API
 
 ## 2. Configure Portkey in Codex
 
-Codex reads configuration from **TOML** files. User-level config lives at `~/.codex/config.toml`; you can override per project with `.codex/config.toml` in the repo (see [Config basics](https://developers.openai.com/codex/config-basic) for precedence).
+Codex loads config from `~/.codex/config.toml` (overridable with `.codex/config.toml` in a repo; see [Config basics](https://developers.openai.com/codex/config-basic) for precedence).
 
-Create or edit `~/.codex/config.toml` and add Portkey as the provider. Use `model_provider` (the provider id from `model_providers`) and define Portkey under `[model_providers.portkey]` with `base_url` and `env_key` per the [Config reference](https://developers.openai.com/codex/config-reference):
+Add Portkey as the provider by setting `model_provider` and defining `[model_providers.portkey]` with `base_url` and `env_key` (see [Config reference](https://developers.openai.com/codex/config-reference)):
 
 ```toml
 model_provider = "portkey"
 model = "@openai-prod/gpt-4o"
 
 [model_providers.portkey]
 name = "Portkey"
 base_url = "https://api.portkey.ai/v1"
 env_key = "PORTKEY_API_KEY"
 # wire_api = "chat" # optional: "chat" (default) or "responses"
 ```
 
+### Multiple model providers
+
+Define multiple entries under `model_providers` to switch between environments or backends by changing `model_provider`:
+
+```toml
+model_provider = "portkey-prod"
+model = "@openai-prod/gpt-4o"
+
+[model_providers.portkey-prod]
+name = "Portkey (prod)"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY"
+
+[model_providers.portkey-dev]
+name = "Portkey (dev)"
+base_url = "https://api.portkey.ai/v1"
+env_key = "PORTKEY_API_KEY_DEV"
+```
+
+Use `.codex/config.toml` in a repository to override `model_provider` and `model` for that project while keeping the shared `~/.codex/config.toml` as the default.
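+
+As a sketch with illustrative values, a minimal project-level `.codex/config.toml` that pins one repository to a different virtual model:
+
+```toml
+# .codex/config.toml at the repo root: overrides ~/.codex/config.toml for this project only
+model_provider = "portkey"
+model = "@anthropic-prod/claude-3-5-sonnet-20241022"
+```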
 
-Set the API key in your environment:
+Set `PORTKEY_API_KEY` in the environment:
 
 ```shell
-export PORTKEY_API_KEY="your-portkey-api-key"
+export PORTKEY_API_KEY="<your-portkey-api-key>"
 ```
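+
+To persist the key across sessions, append the export to your shell profile. A quick sketch for zsh (use `~/.bashrc` for bash):
+
+```shell
+# Persist the key for future terminal sessions
+echo 'export PORTKEY_API_KEY="<your-portkey-api-key>"' >> ~/.zshrc
+source ~/.zshrc
+```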
 
@@ -75,9 +96,9 @@ codex "explain this repository to me"
 
 Monitor usage in the [Portkey Dashboard](https://app.portkey.ai/dashboard).
 
-## 3. Change providers and models
+## 3. Using Codex with 1600+ Models
 
-Codex uses the `model` value in `config.toml` to decide which model to call. With Portkey, the model is a **virtual key** in the form `@<provider-slug>/<model-name>`. Change providers or models by updating `model` in your config.
+Codex uses the `model` value in `config.toml` to decide which model to call. With Portkey, set `model` to a [Model Catalog](/product/model-catalog) slug in the form `@<provider-slug>/<model-name>`. Change providers or models by updating `model` in the config.
 
 Example:
 
 ```toml
 model = "@openai-prod/gpt-4o"
@@ -85,11 +106,9 @@ model = "@openai-prod/gpt-4o"
 ```
 
-In the [Model Catalog](https://app.portkey.ai/model-catalog), use the copy button next to a provider or virtual model to copy its virtual key, then paste it into `model`. After editing `~/.codex/config.toml` (or `.codex/config.toml` in the project), the next Codex run uses the new model.
-
-### Provider and model options
+### Using Responses API
 
-Tune the Portkey provider and the active model with these options in `config.toml`. All keys follow the [Codex config reference](https://developers.openai.com/codex/config-reference).
+Codex supports OpenAI's Responses API natively. Configure `config.toml` with keys from the [Codex config reference](https://developers.openai.com/codex/config-reference).
 
 **Provider protocol (`wire_api`)**
 Under `[model_providers.portkey]`, set `wire_api` to choose which API protocol Codex uses when talking to the provider:
@@ -111,14 +130,23 @@ wire_api = "responses"
 # request_max_retries = 4
 # stream_idle_timeout_ms = 300000
 ```
 
-**Reasoning and behavior (top-level)**
-These apply to the current session model; they are especially relevant when using reasoning-capable models or the Responses API:
+### Adding Model Capabilities
+
+**Reasoning, output, and tools (top-level)**
+These top-level keys apply to the current session model and control reasoning, output, and tool behavior:
 
 | Key | Values | Description |
 | --- | ------ | ----------- |
-| `model_reasoning_effort` | `minimal`, `low`, `medium`, `high`, `xhigh` | How much reasoning effort the model uses (Responses API). Higher values can improve quality and latency. `xhigh` is model-dependent. |
-| `model_reasoning_summary` | `auto`, `concise`, `detailed`, `none` | How much reasoning summary to include or to disable summaries. |
+| `model_reasoning_effort` | `minimal`, `low`, `medium`, `high`, `xhigh` | How much reasoning effort the model uses (Responses API). Higher values can improve quality; `xhigh` is model-dependent. |
+| `model_reasoning_summary` | `auto`, `concise`, `detailed`, `none` | How much reasoning summary to include or whether to disable summaries. |
 | `personality` | `none`, `friendly`, `pragmatic` | Default communication style for models that support it. Overridable per thread or via `/personality` in-session. |
+| `temperature` | `0`–`2` (for example `0.1`) | Sampling temperature. Lower values make outputs more deterministic; `0.1` is a good default for coding and tooling. |
+| `max_output_tokens` | Integer (for example `8192`) | Maximum number of tokens in the response. Prevents runaway output; the upper bound is model-dependent. |
+| `parallel_tool_calls` | `true` / `false` | Allow the model to call multiple tools in parallel when tools are configured. |
+| `tool_choice` | `"auto"`, `"required"`, `"none"` | Control whether the model decides when to call tools (`"auto"`), must call a tool (`"required"`), or never calls tools (`"none"`). |
 
 Example:
 
 ```toml
 model_provider = "portkey"
 model = "@openai-prod/gpt-4o"
 
 model_reasoning_effort = "high"
 model_reasoning_summary = "concise"
 personality = "pragmatic"
+temperature = 0.1
+max_output_tokens = 8192
+parallel_tool_calls = true
+tool_choice = "auto"
 
 [model_providers.portkey]
 name = "Portkey"
 base_url = "https://api.portkey.ai/v1"
 env_key = "PORTKEY_API_KEY"
 wire_api = "chat"
 ```
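+
+For one-off runs, such keys can usually also be set from the command line instead of editing the file, assuming your Codex version supports the `-c`/`--config` override flag (check `codex --help`):
+
+```shell
+# One-off override without touching config.toml (assumes -c/--config support)
+codex -c model_reasoning_effort="high" "profile this module and suggest optimizations"
+```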
 
-Use `wire_api = "responses"` when your Portkey virtual model is backed by a Responses API–capable model; pair it with `model_reasoning_effort` and `model_reasoning_summary` as needed.
 
 **Fallbacks, load balancing, or caching?** Create a [Portkey Config](/product/ai-gateway/configs), attach it to your API key, and set `model` to the config’s virtual model. See [Enterprise Governance](#3-enterprise-governance) for examples.
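+
+As an illustration (a sketch; see the Configs documentation for the authoritative schema), a Portkey Config that falls back from OpenAI to Anthropic could look like this:
+
+```json
+{
+  "strategy": { "mode": "fallback" },
+  "targets": [
+    { "override_params": { "model": "@openai-prod/gpt-4o" } },
+    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
+  ]
+}
+```
+
+Attach the saved config to the Portkey API key that Codex uses, and every Codex request inherits the fallback behavior with no change to `config.toml`.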