
fix(providers): unify OPENAI_TIMEOUT_MS + AGENTMEMORY_LLM_TIMEOUT_MS (#446)#453

Merged
rohitg00 merged 2 commits into main from fix/446-timeout-env-unify on May 17, 2026

Conversation

@rohitg00 (Owner) commented May 17, 2026

Closes #446.

Summary

v0.9.17 shipped OPENAI_TIMEOUT_MS scoped to the OpenAI LLM provider (inline AbortController, default 60s). PR #379 then shipped AGENTMEMORY_LLM_TIMEOUT_MS in the shared src/providers/_fetch.ts helper used by every other raw-fetch provider (Gemini, OpenRouter, MiniMax, OpenAI/Cohere/Voyage/OpenRouter embedding).

Two env vars controlling the same timeout under different names is a recipe for ops confusion. This PR unifies them under the global name while preserving exact v0.9.17 behaviour for existing users.

Resolution

The OpenAI LLM provider now resolves its outbound-fetch timeout in this order:

  1. OPENAI_TIMEOUT_MS — OpenAI-scoped alias, takes precedence (back-compat)
  2. AGENTMEMORY_LLM_TIMEOUT_MS — global fall-back across providers
  3. 60_000 ms default

OpenAIProvider.call() now routes through the shared fetchWithTimeout helper from src/providers/_fetch.ts, dropping ~30 lines of duplicate AbortController + clearTimeout plumbing. The shared helper already owns the abort + cleanup path used by every other raw-fetch provider.
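The resolution order above can be sketched as a small helper. This is an illustrative sketch only: resolveTimeout() and parsePositiveInt() do exist in src/providers/openai.ts per the walkthrough below, but their exact bodies are not shown in this page and may differ.

```typescript
// Illustrative sketch of the precedence resolver; the real
// resolveTimeout()/parsePositiveInt() in src/providers/openai.ts
// may differ in detail.
const DEFAULT_TIMEOUT_MS = 60_000;

// Accept only strictly positive, digits-only values (after trimming);
// anything else falls through to the next source in the chain.
function parsePositiveInt(raw: string | undefined): number | undefined {
  if (raw === undefined) return undefined;
  const trimmed = raw.trim();
  if (!/^\d+$/.test(trimmed)) return undefined;
  const n = Number.parseInt(trimmed, 10);
  return n > 0 ? n : undefined;
}

function resolveTimeout(env: Record<string, string | undefined>): number {
  return (
    parsePositiveInt(env.OPENAI_TIMEOUT_MS) ??          // 1. scoped alias wins
    parsePositiveInt(env.AGENTMEMORY_LLM_TIMEOUT_MS) ?? // 2. global fallback
    DEFAULT_TIMEOUT_MS                                  // 3. 60s default
  );
}
```

Passing the env object in explicitly (rather than reading process.env inside) keeps the precedence logic trivially testable.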

Behaviour

  • Existing users with OPENAI_TIMEOUT_MS=... keep the exact v0.9.17 behaviour.
  • New users setting AGENTMEMORY_LLM_TIMEOUT_MS=... get the OpenAI LLM path covered too.
  • Both set → OPENAI_TIMEOUT_MS wins (locked by an explicit precedence test).
  • Neither set → 60s default (locked by an explicit default test).

Docs

  • .env.example documents AGENTMEMORY_LLM_TIMEOUT_MS as the canonical name.
  • README.md keeps a one-line note that OPENAI_TIMEOUT_MS is the OpenAI-scoped alias and recommends the global form for new configs.

Files

  • src/providers/openai.ts — refactor to use fetchWithTimeout, add precedence resolver, update JSDoc.
  • test/fetch-timeout.test.ts — 4 new precedence tests under OpenAIProvider timeout env precedence (#446).
  • README.md — clarify both env vars and the precedence rule.
  • .env.example — document AGENTMEMORY_LLM_TIMEOUT_MS as the canonical name.

Test plan

  • npm test — 991 tests pass (was 987 + 4 new).
  • npm run build — clean.
  • All 4 new precedence assertions pass (OPENAI_TIMEOUT_MS alone, AGENTMEMORY_LLM_TIMEOUT_MS alone, both set, neither set).
  • Existing 11 timeout tests continue to pass.

Summary by CodeRabbit

  • New Features

    • Added a global LLM/embedding timeout env var and an OpenAI-scoped timeout alias; OpenAI alias takes precedence, default 60,000ms when unset or invalid.
  • Documentation

    • Clarified timeout configuration, precedence rules, and back-compat guidance for OpenAI timeout alias.
  • Tests

    • Added tests validating precedence, defaults, and handling of malformed timeout values.

Review Change Stack

…446)

v0.9.17 shipped OPENAI_TIMEOUT_MS scoped to the OpenAI LLM provider
(inline AbortController, default 60s). PR #379 then shipped
AGENTMEMORY_LLM_TIMEOUT_MS in the shared src/providers/_fetch.ts helper
used by every other raw-fetch provider (Gemini, OpenRouter, MiniMax,
OpenAI/Cohere/Voyage/OpenRouter embedding).

Two env vars, same value, different names — ops confusion.

Unify on the global name while keeping back-compat:

  1. OPENAI_TIMEOUT_MS         — OpenAI-scoped alias, takes precedence
  2. AGENTMEMORY_LLM_TIMEOUT_MS — global fall-back across providers
  3. 60 000 ms default

The OpenAI LLM provider now routes through the shared fetchWithTimeout
helper, dropping ~30 lines of duplicate AbortController + clearTimeout
plumbing. Existing users with OPENAI_TIMEOUT_MS set keep the exact
v0.9.17 behaviour; new users setting AGENTMEMORY_LLM_TIMEOUT_MS get
the OpenAI LLM path covered too.

README + .env.example now document AGENTMEMORY_LLM_TIMEOUT_MS as the
canonical name and note OPENAI_TIMEOUT_MS as the OpenAI-scoped alias.

4 new precedence tests in test/fetch-timeout.test.ts cover all four
env-var combinations.

vercel Bot commented May 17, 2026

The latest updates on your projects:

Project: agentmemory · Deployment: Ready · Actions: Preview, Comment · Updated (UTC): May 17, 2026 11:16am


coderabbitai Bot commented May 17, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1479b9e2-ddc4-4c6e-893d-8f55dc5b7787

📥 Commits

Reviewing files that changed from the base of the PR and between fbf149e and d110836.

📒 Files selected for processing (2)
  • src/providers/openai.ts
  • test/fetch-timeout.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/providers/openai.ts

📝 Walkthrough

Walkthrough

This PR unifies LLM provider timeout configuration by introducing AGENTMEMORY_LLM_TIMEOUT_MS as the canonical timeout variable across providers, while maintaining back-compat with OPENAI_TIMEOUT_MS in the OpenAI provider (where it takes precedence). OpenAIProvider is refactored to use the shared fetchWithTimeout helper and implements explicit precedence resolution logic with new helper functions.

Changes

OpenAI provider timeout unification with precedence

Changes by layer/file:

  • Timeout configuration and documentation (.env.example, README.md): Introduces AGENTMEMORY_LLM_TIMEOUT_MS as the canonical timeout configuration for outbound LLM/embedding requests; documents OPENAI_TIMEOUT_MS as an OpenAI-scoped back-compat alias that takes precedence, with explicit precedence notes in the README.
  • OpenAI provider timeout refactor with precedence resolution (src/providers/openai.ts): OpenAIProvider imports and uses the shared fetchWithTimeout helper, replacing local AbortController/setTimeout logic. The constructor now resolves the timeout via a new resolveTimeout() function implementing the precedence OPENAI_TIMEOUT_MS → AGENTMEMORY_LLM_TIMEOUT_MS → 60s default; call() uses the helper and detects AbortError with updated messaging; a new parsePositiveInt() helper is added for validation.
  • Regression test suite for timeout precedence (test/fetch-timeout.test.ts): New test cases verify that OPENAI_TIMEOUT_MS triggers an abort for the OpenAI provider, that AGENTMEMORY_LLM_TIMEOUT_MS works as a fallback when it is unset, that OPENAI_TIMEOUT_MS takes precedence when both are set, and that the default 60_000 ms is used when neither env var is present; malformed OPENAI_TIMEOUT_MS handling is also tested.
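The abort path in such tests can be exercised without a network by stubbing the request with a promise that only settles when its signal aborts. A minimal sketch follows; all names here (fetchWithTimeout, hangingFetch) are stand-ins, not the repo's actual helpers or test code.

```typescript
// Illustrative sketch of an abort-path timeout test; names are
// hypothetical, not the repo's actual helpers.
async function fetchWithTimeout(
  doFetch: (signal: AbortSignal) => Promise<string>,
  timeoutMs: number,
): Promise<string> {
  const controller = new AbortController();
  // Abort the in-flight request once the budget is exhausted.
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await doFetch(controller.signal);
  } finally {
    clearTimeout(timer); // always clean up the timer
  }
}

// A "server" that never answers: it settles only when the signal aborts.
function hangingFetch(signal: AbortSignal): Promise<string> {
  return new Promise((_resolve, reject) => {
    signal.addEventListener("abort", () =>
      reject(Object.assign(new Error("request timed out"), { name: "AbortError" })),
    );
  });
}

// Test shape: a tiny timeout against a hanging request must surface
// as an AbortError instead of hanging the suite.
async function expectAbort(): Promise<string> {
  try {
    await fetchWithTimeout(hangingFetch, 10);
    return "no timeout";
  } catch (err) {
    return (err as Error).name;
  }
}
```

In the real suite the provider's call() would be exercised against a stubbed fetch; the sketch only shows the abort mechanics being asserted.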

Sequence Diagram(s)

sequenceDiagram
  participant Constructor
  participant resolveTimeout
  participant Environment
  participant call
  participant fetchWithTimeout

  Constructor->>resolveTimeout: initialize timeoutMs
  resolveTimeout->>Environment: read OPENAI_TIMEOUT_MS
  Environment-->>resolveTimeout: value or unset
  resolveTimeout->>Environment: read AGENTMEMORY_LLM_TIMEOUT_MS
  Environment-->>resolveTimeout: value or unset
  resolveTimeout-->>Constructor: return timeoutMs (or default 60000)
  call->>fetchWithTimeout: POST request with timeoutMs
  fetchWithTimeout-->>call: response or AbortError
  call->>call: catch AbortError, rethrow timeout error mentioning env vars

Possibly related PRs

  • rohitg00/agentmemory#379: Introduced shared fetchWithTimeout helper and AGENTMEMORY_LLM_TIMEOUT_MS across embedding/LLM providers; directly related to this refactor.
  • rohitg00/agentmemory#307: Previously added OpenAI provider timeout handling with OPENAI_TIMEOUT_MS; this PR preserves that alias for back-compat.

Poem

🐰 I nibble at env vars late at night,
One timeout to keep requests light,
AGENTMEMORY leads the burrowed way,
OPENAI's old name still has its sway,
Hops of milliseconds, snug and right. 🥕

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 33.33%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title accurately summarizes the main change: unifying two timeout environment variables (OPENAI_TIMEOUT_MS and AGENTMEMORY_LLM_TIMEOUT_MS) with a clear reference to the issue being fixed.
  • Linked Issues check (✅ Passed): The PR implements all coding requirements from issue #446: unified timeout precedence in OpenAIProvider, refactoring to the shared fetchWithTimeout helper, comprehensive test coverage for precedence, and documentation updates.
  • Out of Scope Changes check (✅ Passed): All changes are directly aligned with issue #446 objectives: timeout env var unification, precedence resolution, refactoring to the shared helper, test coverage, and documentation updates. No extraneous modifications detected.



Warning

There were issues while running some tools:

🔧 ESLint skipped: no ESLint configuration was detected in the root package.json. To enable it, add eslint to devDependencies.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/providers/openai.ts (1)

116-122: ⚡ Quick win

Remove WHAT-style inline comments in the changed blocks.

These comments describe implementation mechanics rather than intent; prefer clear naming/docblocks and keep call-site comments minimal.

As per coding guidelines, "Do not use code comments explaining WHAT — use clear naming instead".

Also applies to: 167-171

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/providers/openai.ts` around lines 116 - 122, Remove the WHAT-style inline
comments within the fetch block around the fetchWithTimeout call and the OPENAI
timeout logic; instead add a short docblock at the top of the
providers/openai.ts module (or rename local helpers) that documents the intent
and precedence: that fetchWithTimeout owns AbortController/cleanup and that
OPENAI_TIMEOUT_MS falls back to AGENTMEMORY_LLM_TIMEOUT_MS and then 60s. Update
or rename any local helper variables/functions if needed to make the mechanics
obvious (e.g., fetchWithTimeout, OPENAI_TIMEOUT_MS, AGENTMEMORY_LLM_TIMEOUT_MS)
and delete the inline explanatory comment near the `let response: Response;`
declaration (and the similar block at lines 167-171) so call-site comments are
minimal and intent is captured by naming/docblock.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/providers/openai.ts`:
- Around line 184-188: The current parsePositiveInt uses parseInt which allows
trailing non-digits; change parsePositiveInt to validate the raw string strictly
(e.g., trim it and ensure it matches a digits-only pattern like /^\d+$/) before
converting, then parse with Number or parseInt and return the positive integer
or undefined; update the logic in parsePositiveInt to reject malformed values
(e.g., "30ms") so OPENAI_TIMEOUT_MS and similar envs only accept strictly
numeric positive integers.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3b806b87-505f-41e3-827f-8458ad2706ef

📥 Commits

Reviewing files that changed from the base of the PR and between c93c715 and fbf149e.

📒 Files selected for processing (4)
  • .env.example
  • README.md
  • src/providers/openai.ts
  • test/fetch-timeout.test.ts

Comment thread src/providers/openai.ts
CodeRabbit caught parseInt("30ms", 10) silently returning 30 in the
timeout-resolve path. Real users hitting this would think they had
bound the call to 30 ms when the malformed value should have been
rejected outright.

parsePositiveInt now rejects anything that isn't pure digits via
/^\d+$/ (after trim). parseInt's lenience on trailing units /
underscores / signs is gone — those fall back to the 60s default
instead of masquerading as an aggressive bound.

New regression test covers "30ms", "1_000", "60s", "30abc", "-30",
"0". Whitespace padding (e.g. " 30 ") is still accepted — that's
normal env-var handling.

992/992 tests pass on the worktree.
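The failure mode and the fix described in the thread can be seen side by side in a small sketch; this is illustrative code, not the repo's actual implementation.

```typescript
// parseInt is lenient: it stops at the first non-digit, so "30ms"
// silently becomes 30. This is the bug CodeRabbit flagged.
const lenient = Number.parseInt("30ms", 10); // → 30

// Strict parser per the thread: digits-only after trim, so unit
// suffixes, underscores, and signs fall through to the 60s default
// instead of masquerading as an aggressive bound.
function parsePositiveIntStrict(raw: string): number | undefined {
  const trimmed = raw.trim();
  if (!/^\d+$/.test(trimmed)) return undefined; // rejects "30ms", "1_000", "-30"
  const n = Number.parseInt(trimmed, 10);
  return n > 0 ? n : undefined; // rejects "0"
}

// The regression cases from the thread:
const cases = ["30ms", "1_000", "60s", "30abc", "-30", "0", " 30 ", "60000"];
const results = cases.map((c) => parsePositiveIntStrict(c));
// → [undefined, undefined, undefined, undefined, undefined, undefined, 30, 60000]
```

Note that whitespace padding (" 30 ") still parses, matching the thread's point that trimming is normal env-var handling.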
@rohitg00 rohitg00 merged commit 49377db into main May 17, 2026
5 checks passed
@rohitg00 rohitg00 deleted the fix/446-timeout-env-unify branch May 17, 2026 11:41
@rohitg00 rohitg00 mentioned this pull request May 17, 2026

Development

Successfully merging this pull request may close these issues.

Unify timeout env vars: OPENAI_TIMEOUT_MS + AGENTMEMORY_LLM_TIMEOUT_MS

1 participant