
fix(providers): support Azure OpenAI chat completions #404

Open

Yong-yuan-X wants to merge 3 commits into rohitg00:main from Yong-yuan-X:fix/374-azure-openai

fix(providers): support Azure OpenAI chat completions#404
Yong-yuan-X wants to merge 3 commits into
rohitg00:mainfrom
Yong-yuan-X:fix/374-azure-openai

Conversation

Yong-yuan-X commented on May 15, 2026

Fixes #374

Summary

  • Add an OpenAI-compatible LLM provider for compression and summarization.
  • Detect Azure-shaped OPENAI_BASE_URL values and call /chat/completions without adding /v1.
  • Add api-version for Azure OpenAI requests, defaulting to 2024-10-21.
  • Use api-key auth for Azure key-based requests.
  • Keep non-Azure OpenAI-compatible endpoints using Authorization: Bearer.
  • Avoid duplicating /v1 or /chat/completions when users provide a base URL that already includes those path segments.
  • Document AZURE_OPENAI_API_VERSION in README and .env.example.
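
The URL rules in the bullets above can be sketched as a small helper. This is an illustrative reconstruction, not the PR's actual implementation; the function and constant names are assumptions, and only the behavior described in the summary (Azure detection, no `/v1` for Azure, `api-version` default `2024-10-21`, no duplicated path segments) is taken from the PR.

```typescript
// Hypothetical sketch of the base-URL handling described in this PR.
const AZURE_PATTERN = /\.openai\.azure\.com/i;
const DEFAULT_API_VERSION = "2024-10-21";

function isAzureBaseUrl(baseUrl: string): boolean {
  return AZURE_PATTERN.test(baseUrl);
}

function buildRequestUrl(baseUrl: string, apiVersion = DEFAULT_API_VERSION): string {
  // Normalize: strip trailing slashes so the path checks below are uniform.
  let url = baseUrl.replace(/\/+$/, "");
  if (isAzureBaseUrl(url)) {
    // Azure-shaped URLs: no /v1 segment; api-version goes in the query string.
    if (!url.endsWith("/chat/completions")) {
      url += "/chat/completions";
    }
    return `${url}?api-version=${apiVersion}`;
  }
  // Non-Azure OpenAI-compatible endpoints: ensure exactly one
  // /v1/chat/completions without duplicating segments the user already gave.
  if (!url.endsWith("/chat/completions")) {
    if (!url.endsWith("/v1")) {
      url += "/v1";
    }
    url += "/chat/completions";
  }
  return url;
}
```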

Verification

  • npm.cmd test -- openai-provider.test.ts
  • npx.cmd tsdown

Summary by CodeRabbit

  • New Features

    • Added OpenAI provider support, including Azure OpenAI deployment compatibility and broader OpenAI‑compatible provider support.
  • Documentation

    • Updated README and .env example to show OpenAI options (API key, model, base URL, Azure API version) and clarified provider table.
  • Tests

    • Added tests covering OpenAI provider behavior, including request timeout handling.

Signed-off-by: Yong-yuan-X <2463436064@qq.com>
vercel Bot commented May 15, 2026

@Yong-yuan-X is attempting to deploy a commit to rohitg00's projects Team on Vercel.

A member of the Team first needs to authorize it.

coderabbitai Bot commented May 15, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a5574945-44e7-44e6-82d2-be5de555051e

📥 Commits

Reviewing files that changed from the base of the PR and between 08de94d and f5f0215.

📒 Files selected for processing (2)
  • src/providers/openai.ts
  • test/openai-provider.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • test/openai-provider.test.ts
  • src/providers/openai.ts

📝 Walkthrough

Walkthrough

Adds an OpenAI-compatible LLM provider with Azure deployment detection and auth, wires OPENAI_API_KEY into provider detection and factory, extends ProviderType and fallback allowlist, updates README and .env.example, and adds comprehensive provider tests including a 30s request timeout/abort test.

Changes

OpenAI Provider Implementation

- **Provider type contract** (`src/types.ts`): The `ProviderType` union now includes the `"openai"` literal.
- **OpenAIProvider class** (`src/providers/openai.ts`): New exported class implementing `MemoryProvider`. The constructor resolves the base URL (env or default) and detects Azure. URL normalization covers the Azure `/chat/completions` + `api-version` handling and the non-Azure `/v1/chat/completions` rules; header selection uses `api-key` for Azure and `Authorization: Bearer` otherwise; the JSON body carries system/user messages and `max_tokens` (`model` is included only for non-Azure). A shared `call()` uses `AbortController` with a timeout, reports HTTP errors, parses JSON, and extracts the first choice's message content.
- **Configuration detection and factory wiring** (`src/config.ts`, `src/providers/index.ts`, `src/functions/summarize.ts`): `detectProvider()` now recognizes `OPENAI_API_KEY` (with default model `gpt-4o-mini` and optional `OPENAI_BASE_URL`) before other keys. `detectLlmProviderKind()` treats `OPENAI_API_KEY` as `"llm"`. `VALID_PROVIDERS` accepts `"openai"`. `createBaseProvider()` adds an `"openai"` branch constructing `OpenAIProvider`. User-facing messages are updated to list `OPENAI_API_KEY`.
- **Test coverage** (`test/openai-provider.test.ts`): The Vitest suite adds/extends tests for default OpenAI requests (Bearer auth, `/v1/chat/completions`, `model` + `max_tokens`), Azure deployment URL construction with `api-version` and `api-key` auth (no `model`), base URL normalization (avoiding duplicate `/v1` or `/chat/completions`), the Azure `api-version` default, and a timeout/abort test asserting a 30s `AbortController` timeout.
- **Configuration documentation** (`.env.example`, `README.md`): Adds an "OpenAI-compatible" provider row and a commented OpenAI env block (`OPENAI_API_KEY`, `OPENAI_MODEL`, `OPENAI_BASE_URL`, `AZURE_OPENAI_API_VERSION`); duplicate OPENAI env lines are removed from elsewhere in the template.
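
The header-selection rule from the walkthrough (api-key for Azure, Bearer otherwise) can be sketched as follows. The helper name and shape are illustrative assumptions; only the two auth schemes come from the PR description.

```typescript
// Hypothetical sketch of the auth-header branch described in the walkthrough.
function buildHeaders(apiKey: string, isAzure: boolean): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (isAzure) {
    // Azure OpenAI key-based auth uses a dedicated api-key header.
    headers["api-key"] = apiKey;
  } else {
    // All other OpenAI-compatible endpoints use standard Bearer auth.
    headers["Authorization"] = `Bearer ${apiKey}`;
  }
  return headers;
}
```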

Sequence Diagram

```mermaid
sequenceDiagram
  participant Caller
  participant OpenAIProvider
  participant URLBuilder
  participant HeaderBuilder
  participant BodyBuilder
  participant OpenAI_API
  Caller->>OpenAIProvider: compress(systemPrompt, userPrompt)
  OpenAIProvider->>URLBuilder: buildRequestUrl()
  alt isAzure detected
    URLBuilder-->>OpenAIProvider: {deploymentURL}/chat/completions?api-version=...
  else OpenAI compatible
    URLBuilder-->>OpenAIProvider: https://api.openai.com/v1/chat/completions
  end
  OpenAIProvider->>HeaderBuilder: buildHeaders()
  alt isAzure detected
    HeaderBuilder-->>OpenAIProvider: api-key: <key>
  else OpenAI compatible
    HeaderBuilder-->>OpenAIProvider: Authorization: Bearer <key>
  end
  OpenAIProvider->>BodyBuilder: buildRequestBody()
  alt isAzure detected
    BodyBuilder-->>OpenAIProvider: {messages, max_tokens}
  else OpenAI compatible
    BodyBuilder-->>OpenAIProvider: {model, messages, max_tokens}
  end
  OpenAIProvider->>OpenAI_API: POST with URL, headers, body (with timeout)
  OpenAI_API-->>OpenAIProvider: {choices:[{message:{content}}]}
  OpenAIProvider-->>Caller: extracted content string
```
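
The body-building branch in the diagram (Azure deployments encode the model in the URL, so the body omits `model`) can be sketched like this. The field names follow the Chat Completions API; the helper itself and its defaults are illustrative assumptions.

```typescript
// Hypothetical sketch of the request-body branch from the sequence diagram.
interface ChatBody {
  model?: string;
  messages: { role: "system" | "user"; content: string }[];
  max_tokens: number;
}

function buildRequestBody(
  systemPrompt: string,
  userPrompt: string,
  model: string,
  isAzure: boolean,
  maxTokens = 1024,
): ChatBody {
  const body: ChatBody = {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    max_tokens: maxTokens,
  };
  if (!isAzure) {
    // Non-Azure endpoints need the model in the body; Azure puts it in the URL.
    body.model = model;
  }
  return body;
}
```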

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Poem

🐰 OpenAI hops in with Azure in sight,

Headers switch paths — api-key or Bearer by right.
Deployments get versions, requests time out on cue,
Tests and docs updated to show what is new.
A rabbit-approved provider, succinct and true.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title 'fix(providers): support Azure OpenAI chat completions' accurately summarizes the main change: adding Azure OpenAI support to the OpenAI provider implementation. |
| Linked Issues check | ✅ Passed | All coding requirements from #374 are met: Azure endpoint detection, api-version query parameter handling with the proper default (2024-10-21), api-key header support for Azure, Bearer auth for non-Azure endpoints, and proper path handling without /v1 duplication. |
| Out of Scope Changes check | ✅ Passed | All changes directly support Azure OpenAI provider integration: config detection, environment variables, documentation updates, test coverage, and provider implementation, with no unrelated modifications. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

ESLint skipped: no ESLint configuration detected in root package.json. To enable, add eslint to devDependencies.



coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/providers/openai.ts`:
- Around line 97-101: The outbound fetch call that creates `response` (calling
this.buildRequestUrl(), this.buildHeaders(), this.buildBody()) needs an
AbortController-based timeout so stalled upstream/network requests don't block;
create an AbortController, pass controller.signal to fetch, start a setTimeout
that calls controller.abort() after a configured timeout, and ensure you clear
that timeout in a finally block so it doesn't leak; update the fetch invocation
to include the signal and handle the abort error path as appropriate in the
surrounding method.
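
A minimal sketch of the fix the reviewer asks for, assuming a standalone wrapper rather than the PR's actual code: abort the fetch after a timeout and always clear the timer in `finally` so it never leaks. The function name is illustrative; the 30s default matches the timeout the test suite asserts.

```typescript
// Hypothetical sketch of an AbortController-based fetch timeout.
async function fetchWithTimeout(
  url: string,
  init: RequestInit,
  timeoutMs = 30_000,
): Promise<Response> {
  const controller = new AbortController();
  // Abort the in-flight request once the deadline passes.
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    // Runs on success, error, and abort alike, so the timer never leaks.
    clearTimeout(timer);
  }
}
```

A caller would catch the abort error (`AbortError` in the standard fetch implementation) and surface it as a timeout failure.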
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d6f97af1-2ab4-4489-b7e5-076a9a2f3ac5

📥 Commits

Reviewing files that changed from the base of the PR and between 372c6a6 and 08de94d.

📒 Files selected for processing (8)
  • .env.example
  • README.md
  • src/config.ts
  • src/functions/summarize.ts
  • src/providers/index.ts
  • src/providers/openai.ts
  • src/types.ts
  • test/openai-provider.test.ts

Comment thread on src/providers/openai.ts (outdated)


Development

Successfully merging this pull request may close these issues.

OpenAI provider: detect Azure-shaped OPENAI_BASE_URL and append api-version

1 participant