
feat(cli): add --batch-size parameter to prevent context leaking#2055

Open
Hellnight2005 wants to merge 12 commits into lingodotdev:main from Hellnight2005:fix/1733-batch-size

Conversation

@Hellnight2005

@Hellnight2005 Hellnight2005 commented Mar 24, 2026

Summary

Implements the --batch-size parameter for CLI commands to control the number of translation strings sent to the LLM per request, preventing translator hints from leaking between unrelated keys.

Changes

  • Added --batch-size <number> flag to the run and i18n CLI commands.
  • Updated createBasicTranslator in src/cli/processor/basic.ts to split inputs into configurably sized chunks and process them sequentially.
  • Updated createExplicitLocalizer in src/cli/localizer/explicit.ts to process batches correctly and merge the chunked responses.
  • Fixed a bug causing the pseudo-localizer to crash by implementing the missing validateSettings method.
  • Handled edge cases for JSON.parse exceptions gracefully using jsonrepair in the AI feedback loop.
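The chunking behavior described above can be pictured with a small sketch. This is illustrative only: the helper name chunkPayload and its signature are simplified stand-ins for the PR's actual extractPayloadChunks, which also enforces a content-size limit.

```typescript
// Simplified sketch of payload chunking by key count.
// The real utility also caps chunks by word count; this version
// honors only the item-count limit for illustration.
function chunkPayload(
  payload: Record<string, string>,
  batchSize?: number,
): Record<string, string>[] {
  const entries = Object.entries(payload);
  // When no batch size is given, keep everything in a single chunk.
  const effective =
    batchSize && batchSize > 0 ? batchSize : entries.length || 1;

  const chunks: Record<string, string>[] = [];
  let current: Record<string, string> = {};
  let count = 0;

  for (const [key, value] of entries) {
    current[key] = value;
    count++;
    if (count >= effective) {
      chunks.push(current);
      current = {};
      count = 0;
    }
  }
  if (count > 0) chunks.push(current);
  return chunks;
}

const payload = { hello: "Hello", bye: "Goodbye", ok: "OK" };
console.log(chunkPayload(payload, 1).length); // 3 chunks: full isolation
console.log(chunkPayload(payload).length); // 1 chunk: default single batch
```

With batchSize set to 1, each key travels in its own request, which is what prevents hint bleed between unrelated keys.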

Testing

Business logic tests added:

  • Verified that the chunking logic isolates each input when batchSize is set to 1.
  • Verified that the default behavior places smaller files into a single batch without introducing unnecessary chunk boundaries.
  • All tests pass locally (verified the 846 unit tests in packages/cli succeed).

Visuals

Required for UI/UX changes:

  • [N/A] Before/after screenshots attached (CLI backend behavior only)
  • [N/A] Video demo for interactions (< 30s)

Checklist

  • Changeset added (if version bump needed)
  • Tests cover business logic (not just happy path)
  • No breaking changes (or documented below)

Closes #1733

Summary by CodeRabbit

  • New Features

    • Added --batch-size CLI option to run and i18n commands, allowing users to customize translation batch processing with flexible batch sizes between 1 and 250
    • Implemented intelligent chunked processing during AI-driven localization workflows with real-time progress reporting for each batch being processed
  • Tests

    • Added comprehensive test coverage for batch processing behavior and correctness

@coderabbitai
Contributor

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

A --batch-size CLI parameter is added to the run and i18n commands, validated to the 1–250 range, to control how many translation strings are processed per AI model request. The batch size is threaded through localizer and processor initialization, and the chunking logic is extracted into a reusable utility that splits translation data accordingly.

Changes

Cohort / File(s) / Summary:

  • Changeset (.changeset/add-batch-size.md): Documents a minor version bump for the new --batch-size parameter, which prevents context leaking.
  • CLI Command Definitions (packages/cli/src/cli/cmd/run/index.ts, packages/cli/src/cli/cmd/i18n.ts): Added the --batch-size <number> CLI option with parseInt conversion; the option is validated and passed downstream to the processor.
  • CLI Type Schemas (packages/cli/src/cli/cmd/run/_types.ts): Added an optional batchSize?: number property to flagsSchema with Zod validation constraining values to 1–250.
  • Localizer Initialization (packages/cli/src/cli/cmd/run/setup.ts, packages/cli/src/cli/localizer/index.ts): Updated createLocalizer and related calls to accept and forward the optional batchSize parameter through the initialization chain.
  • Localizer Implementation (packages/cli/src/cli/localizer/explicit.ts): Refactored createExplicitLocalizer to accept batchSize, split processableData into chunks via extractPayloadChunks, iterate per chunk with individual AI requests, and invoke the onProgress callback after each chunk. Response parsing and per-chunk hint filtering are integrated into the loop.
  • Localizer Pseudo Provider (packages/cli/src/cli/localizer/pseudo.ts): Added a validateSettings method to the returned localizer object, returning { valid: true }.
  • Processor Integration (packages/cli/src/cli/processor/index.ts): Updated the createProcessor signature to accept an optional batchSize and merge it into provider settings before translator instantiation.
  • Processor Implementation (packages/cli/src/cli/processor/basic.ts): Added batchSize?: number to ModelSettings; delegated chunking to the extractPayloadChunks utility and adjusted progress reporting to reflect post-chunk completion.
  • Chunking Utility (packages/cli/src/cli/utils/chunk.ts): New module exporting extractPayloadChunks (splits the payload into chunks respecting item count and content-size limits, with a configurable batch size defaulting to 25) and countWordsInRecord (a recursive word-count helper).
  • Batch Processing Tests (packages/cli/src/cli/processor/basic.spec.ts): New test suite validating createBasicTranslator batching behavior: confirms single-chunk requests by default, multiple chunks when exceeding limits, and correct call counts for specified batch sizes (1 or 2).
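The recursive word-count helper mentioned for chunk.ts might look roughly like this. It is an illustrative reimplementation under assumed semantics, not the shipped countWordsInRecord:

```typescript
// Illustrative recursive word counter over a nested translation record.
// Strings contribute their whitespace-separated word count; arrays and
// objects are traversed recursively; other values contribute nothing.
function countWords(value: unknown): number {
  if (typeof value === "string") {
    return value.split(/\s+/).filter(Boolean).length;
  }
  if (Array.isArray(value)) {
    return value.reduce((sum: number, item) => sum + countWords(item), 0);
  }
  if (value !== null && typeof value === "object") {
    return Object.values(value).reduce(
      (sum: number, item) => sum + countWords(item),
      0,
    );
  }
  return 0;
}

const record = {
  greeting: "Hello there",
  nested: { farewell: "Goodbye now friend", items: ["one", "two words"] },
};
console.log(countWords(record)); // 8
```

A count like this lets the chunker cap a batch not only by number of keys but also by how much text it would push into a single model request.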

Sequence Diagram

sequenceDiagram
    participant CLI as CLI Command
    participant Proc as Processor
    participant Loc as Localizer
    participant Chunk as Chunk Utility
    participant AI as AI Model

    CLI->>Proc: createProcessor(..., batchSize)
    Proc->>Loc: createLocalizer(..., batchSize)
    Loc->>Loc: Initialize with batchSize

    CLI->>Proc: Execute translation
    Proc->>Loc: localize(input, onProgress)

    Loc->>Chunk: extractPayloadChunks(data, batchSize)
    Chunk-->>Loc: chunks[]

    loop For each chunk
        Loc->>AI: generateText(chunk + hints)
        AI-->>Loc: response
        Loc->>Loc: parseResponse & merge result
        Loc->>Loc: onProgress(percent, chunk, result)
    end

    Loc-->>Proc: merged translations
    Proc-->>CLI: complete

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • feat: vNext migration #2035 — Modifies the same CLI localizer and processor command surface; both extend createLocalizer and createProcessor signatures in related ways.

Suggested reviewers

  • vrcprl
  • cherkanovart

Poem

🐰 With batches small, no hints will stray,
Each chunk processed in its own way,
No context leaks from far and wide—
Just whiskers twitching, full of pride! 🌕✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 25.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Out of Scope Changes check: ❓ Inconclusive. Most changes are in scope, but the description mentions implementing the missing validateSettings method in the pseudo-localizer and handling JSON.parse exceptions with jsonrepair, neither of which is explicitly mentioned in issue #1733. Resolution: clarify whether the validateSettings bug fix and jsonrepair error handling are separate issues or were discovered during this work; document whether they are critical dependencies for the batch-size feature.

✅ Passed checks (3 passed)

  • Title check: ✅ Passed. The title accurately describes the main change: adding a --batch-size parameter to prevent context leaking, the primary feature introduced in this PR.
  • Description check: ✅ Passed. The description covers all required template sections, including summary, changes, testing, and checklist, with specific details about the implementation and test coverage.
  • Linked Issues check: ✅ Passed. The PR fully implements the requirements from issue #1733: it adds a configurable --batch-size parameter to the CLI commands, supports a batch size of 1 for strict isolation, defaults to reasonable batching, and includes comprehensive tests validating chunking behavior.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
packages/cli/src/cli/processor/basic.ts (1)

114-115: Minor optimization: move effectiveBatchSize calculation outside the loop.

The effectiveBatchSize value doesn't change per iteration since it only depends on batchSize and payloadEntries.length. Computing it once before the loop would be cleaner.

♻️ Proposed refactor
+  const effectiveBatchSize =
+    batchSize && batchSize > 0 ? batchSize : payloadEntries.length || 1;
+
   const payloadEntries = Object.entries(payload);
   for (let i = 0; i < payloadEntries.length; i++) {
     const [key, value] = payloadEntries[i];
     currentChunk[key] = value;
     currentChunkItemCount++;

     const currentChunkSize = countWordsInRecord(currentChunk);
-    const effectiveBatchSize =
-      batchSize && batchSize > 0 ? batchSize : payloadEntries.length || 1;
     if (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/processor/basic.ts` around lines 114 - 115, The
calculation of effectiveBatchSize inside the loop is redundant; compute it once
before iterating. Move the expression const effectiveBatchSize = batchSize &&
batchSize > 0 ? batchSize : payloadEntries.length || 1; out of the loop that
processes payloadEntries so the loop uses the precomputed effectiveBatchSize
value instead of recalculating it each iteration.
packages/cli/src/cli/cmd/i18n.ts (1)

93-97: Consider aligning the description with the run command.

The run command's --batch-size description mentions "(not applicable when using lingo.dev provider)", but this description omits that detail. While the i18n command is deprecated, consistency would help users understand the limitation.

📝 Suggested description update
   .option(
     "--batch-size <number>",
-    "Number of translations to process in a single batch",
+    "Number of translations to process in a single batch (not applicable when using lingo.dev provider)",
     parseInt,
   )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/cmd/i18n.ts` around lines 93 - 97, Update the
"--batch-size <number>" option description in the i18n command to match the run
command by appending the limitation text "(not applicable when using lingo.dev
provider)"; locate the .option call for "--batch-size <number>" in the i18n
command (packages/cli/src/cli/cmd/i18n.ts) and change the second argument string
to include the parenthetical note so the help text is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/cli/src/cli/localizer/explicit.ts`:
- Around line 296-317: The current branch that handles string responses to
populate finalResult silently drops data when result?.data is a truthy
non-object string that lacks "{" and "}" (so index/lastIndex are -1); update the
else branch after computing index/lastIndex to handle that case by either
logging a warning and including the snippet or throwing an Error so translations
aren't silently lost: modify the block that references result.data, index,
lastIndex, jsonrepair and finalResult to add an else that calls
processLogger.warn or console.warn with a clear message and the first 200 chars
of result.data (or throw new Error(`Unparsable response: ...`)) and ensure any
thrown error includes the original result/data for debugging; keep the existing
try/catch for jsonrepair/JSON.parse as-is.
- Around line 286-291: The error messages incorrectly mention "Lingo.dev" while
this code path handles BYOK providers (e.g., OpenAI, Anthropic); update the
console.error and thrown Error (the lines using console.error(...`Response
snippet: ${snippet}`) and throw new Error(...`Failed to parse response from
Lingo.dev: ${e2}`)) to reference the actual configured provider (or a generic
"configured provider"/providerName variable) and include the existing error (e2)
and snippet variables so the message reads like "Failed to parse response from
configured provider <providerName>: <error> (Snippet: <snippet>)".
- Around line 337-340: The function signature for extractPayloadChunks
incorrectly types the payload parameter as Record<string, string> even though
LocalizerData.processableData contains arrays, nested objects and plural forms;
change the parameter type to Record<string, any> (or a more specific union
matching LocalizerData.processableData) so it aligns with countWordsInRecord's
handling of nested structures and avoids type errors; update the
extractPayloadChunks declaration and any related references to use the broader
type while keeping the internal logic unchanged.

In `@packages/cli/src/cli/processor/basic.ts`:
- Around line 92-96: The JSDoc for extractPayloadChunks is stale — it claims
"default: 25" but the implementation uses payloadEntries.length when batchSize
is omitted (so all keys are processed in one batch). Update the function's JSDoc
to accurately describe the default behavior: that if batchSize is not provided,
the function will use the full payload length (i.e., create a single batch
containing all keys), and keep references to the parameters payload and
batchSize in the description.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7aaddc5f-6b94-4510-b8df-8802fb6bf23e

📥 Commits

Reviewing files that changed from the base of the PR and between 1c0dee4 and c916e41.

📒 Files selected for processing (12)
  • .changeset/add-batch-size.md
  • packages/cli/src/cli/cmd/i18n.ts
  • packages/cli/src/cli/cmd/run/_types.ts
  • packages/cli/src/cli/cmd/run/index.ts
  • packages/cli/src/cli/cmd/run/setup.ts
  • packages/cli/src/cli/localizer/explicit.ts
  • packages/cli/src/cli/localizer/index.ts
  • packages/cli/src/cli/localizer/pseudo.ts
  • packages/cli/src/cli/processor/basic.spec.ts
  • packages/cli/src/cli/processor/basic.ts
  • packages/cli/src/cli/processor/index.ts
  • packages/react/src/client/index.ts

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/cli/src/cli/processor/basic.ts (1)

98-127: ⚠️ Potential issue | 🟠 Major

--batch-size still counts top-level objects, not logical translation entries.

This helper now explicitly accepts nested payloads, but it still batches on Object.entries(payload). A namespace like { common: { save: "...", cancel: "..." } } is treated as one item, so --batch-size 1 can still send multiple keys together and miss the per-key isolation this flag is meant to provide. Split on logical translation entries instead of only top-level properties.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/processor/basic.ts` around lines 98 - 127,
extractPayloadChunks currently counts only top-level keys so nested translation
entries (e.g., {common:{save:...,cancel:...}}) are treated as one item; change
extractPayloadChunks to iterate logical translation entries instead: first
flatten the payload into an array of entries representing each translation key
path and value (e.g., ["common.save", value] or path array), then build chunks
by adding one logical entry at a time and reconstructing the nested object shape
when pushing currentChunk; keep the same batching logic (use countWordsInRecord
to measure size and effectiveBatchSize for item counting) but increment
currentChunkItemCount per logical entry rather than per top-level property, and
add a small helper (or inline logic) to set nested values into currentChunk when
adding an entry so output chunks preserve the original nested structure.
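The key-path flattening this comment asks for could look roughly like the following sketch. The helper names flattenEntries and setAtPath are hypothetical; the PR does not necessarily implement it this way.

```typescript
// Hypothetical sketch: flatten a nested payload into leaf entries so that
// batching operates on logical translation keys, not top-level namespaces.
// Arrays are treated as leaves here for simplicity.
function flattenEntries(
  payload: Record<string, unknown>,
  prefix = "",
): [string, string][] {
  const entries: [string, string][] = [];
  for (const [key, value] of Object.entries(payload)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      entries.push(...flattenEntries(value as Record<string, unknown>, path));
    } else {
      entries.push([path, String(value)]);
    }
  }
  return entries;
}

// Rebuild the nested shape from dotted key paths when emitting a chunk.
function setAtPath(target: Record<string, any>, path: string, value: string) {
  const parts = path.split(".");
  let node = target;
  for (const part of parts.slice(0, -1)) {
    node = node[part] ??= {};
  }
  node[parts[parts.length - 1]] = value;
}

const payload = { common: { save: "Save", cancel: "Cancel" } };
const leaves = flattenEntries(payload);
console.log(leaves.length); // 2 — "--batch-size 1" can now isolate each key
```

Batching over the flattened leaves and reconstructing each chunk with setAtPath would give the per-key isolation the flag promises, even for namespaced files.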
🧹 Nitpick comments (1)
packages/cli/src/cli/processor/basic.ts (1)

5-8: Keep batchSize out of the AI SDK call settings.

batchSize is internal batching state, but the same settings object is still reused later for generateText(...). That leaks a CLI-only flag into provider request options and makes this path depend on adapters tolerating unknown fields. Split batchSize from the actual model settings before building the request.

Suggested change
 export function createBasicTranslator(
   model: LanguageModel,
   systemPrompt: string,
   settings: ModelSettings = {},
 ) {
+  const { batchSize, ...modelSettings } = settings;
+
   return async (input: LocalizerInput, onProgress: LocalizerProgressFn) => {
     const chunks = extractPayloadChunks(
       input.processableData,
-      settings.batchSize,
+      batchSize,
     );
@@
     const response = await generateText({
       model,
-      ...settings,
+      ...modelSettings,
       messages: [

Also applies to: 16-18

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/processor/basic.ts` around lines 5 - 8, The
ModelSettings type currently includes batchSize which is internal and is being
passed through into provider calls; update the code so you split out batchSize
from the settings object before building SDK request options: keep ModelSettings
as-is for typing if desired but when preparing the call (where you pass settings
into generateText / the provider request) destructure like `const { batchSize,
...modelSettings } = settings` and use modelSettings for the AI SDK/generateText
call and batchSize only for local batching logic; ensure any place that forwards
`settings` to the provider (e.g., the generateText invocation) uses the cleaned
modelSettings to avoid leaking the CLI-only flag.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/cli/src/cli/localizer/explicit.ts`:
- Around line 199-202: The current chunking leaks sibling hints because
extractPayloadChunks splits only on top-level entries so chunkHints carries
entire top-level subtrees; update the chunking logic (extractPayloadChunks and
the code that builds chunkHints) to operate at the leaf/key-path level: when
batching, enumerate leaf key paths from processableData and create chunks of
those full paths respecting params.batchSize, and when building chunkHints
prune/copy only the hint nodes that correspond to the exact leaf key paths
included in that chunk (not the entire top-level subtree). Ensure functions
referenced (extractPayloadChunks, any chunkHints builder code that consumes
processableData) use key-path traversal rather than top-level splitting so
--batch-size 1 isolates sibling keys and prevents hint bleed.
- Around line 278-324: The code currently only inspects result.data and drops
top-level parsed payloads; change the logic to first normalize the parsed output
into a single variable (e.g., rawData = result?.data ?? result) and then run the
existing object/string handling against rawData instead of result.data so plain
objects or top-level JSON strings aren't ignored; update all subsequent
references (the object branch, string branch, error messages and places using
result.data) to use rawData, keeping parseModelResponse, finalResult and
jsonrepair behavior intact and preserving the same error/snippet reporting that
includes params.id.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0194db5f-69f6-420b-9b1d-0f02f2f07e99

📥 Commits

Reviewing files that changed from the base of the PR and between c916e41 and 7d2b6ef.

📒 Files selected for processing (2)
  • packages/cli/src/cli/localizer/explicit.ts
  • packages/cli/src/cli/processor/basic.ts

Comment on lines +278 to +324
let result: any;
try {
  result = parseModelResponse(response.text);
} catch (e2) {
  const snippet =
    response.text.length > 500
      ? `${response.text.slice(0, 500)}…`
      : response.text;
  console.error(
    `Failed to parse response from ${params.id}. Response snippet: ${snippet}`,
  );
  throw new Error(
    `Failed to parse response from ${params.id}: ${e2} (Snippet: ${snippet})`,
  );
}
let finalResult: Record<string, any> = {};

// Handle both object and string responses
if (typeof result?.data === "object" && result.data !== null) {
  finalResult = result.data;
} else if (result?.data) {
  // Handle string responses - extract and repair JSON
  const index = result.data.indexOf("{");
  const lastIndex = result.data.lastIndexOf("}");
  if (index !== -1 && lastIndex !== -1) {
    try {
      const trimmed = result.data.slice(index, lastIndex + 1);
      const repaired = jsonrepair(trimmed);
      const parsed = JSON.parse(repaired);
      finalResult = parsed.data || parsed || {};
    } catch (e) {
      console.error(
        `Failed to parse nested JSON response. Snippet: ${result.data.slice(0, 100)}...`,
      );
      throw new Error(
        `Failed to parse nested JSON response: ${e} (Snippet: ${result.data.slice(0, 100)}...)`,
      );
    }
  } else {
    console.error(
      `Unexpected response format - no JSON object found. Snippet: ${String(result.data).slice(0, 100)}...`,
    );
    throw new Error(
      `Unexpected response format from ${params.id} - no JSON object found in response`,
    );
  }
}
Contributor


⚠️ Potential issue | 🟠 Major

Handle top-level parsed payloads before assuming a data wrapper.

This branch only succeeds when the parsed response is stored under result.data. If parseModelResponse() returns a plain object or a top-level JSON string, finalResult stays {} and the whole chunk is silently dropped. Normalize result itself into rawData first, then run the existing object/string handling on that value.

Suggested change
-        let finalResult: Record<string, any> = {};
-
-        // Handle both object and string responses
-        if (typeof result?.data === "object" && result.data !== null) {
-          finalResult = result.data;
-        } else if (result?.data) {
+        let finalResult: Record<string, any> = {};
+        const rawData =
+          typeof result === "object" && result !== null && "data" in result
+            ? result.data
+            : result;
+
+        if (typeof rawData === "object" && rawData !== null) {
+          finalResult = rawData as Record<string, any>;
+        } else if (typeof rawData === "string" && rawData) {
           // Handle string responses - extract and repair JSON
-          const index = result.data.indexOf("{");
-          const lastIndex = result.data.lastIndexOf("}");
+          const index = rawData.indexOf("{");
+          const lastIndex = rawData.lastIndexOf("}");
           if (index !== -1 && lastIndex !== -1) {
             try {
-              const trimmed = result.data.slice(index, lastIndex + 1);
+              const trimmed = rawData.slice(index, lastIndex + 1);
               const repaired = jsonrepair(trimmed);
               const parsed = JSON.parse(repaired);
               finalResult = parsed.data || parsed || {};
             } catch (e) {
               console.error(
-                `Failed to parse nested JSON response. Snippet: ${result.data.slice(0, 100)}...`,
+                `Failed to parse nested JSON response. Snippet: ${rawData.slice(0, 100)}...`,
               );
               throw new Error(
-                `Failed to parse nested JSON response: ${e} (Snippet: ${result.data.slice(0, 100)}...)`,
+                `Failed to parse nested JSON response: ${e} (Snippet: ${rawData.slice(0, 100)}...)`,
               );
             }
           } else {
             console.error(
-              `Unexpected response format - no JSON object found. Snippet: ${String(result.data).slice(0, 100)}...`,
+              `Unexpected response format - no JSON object found. Snippet: ${String(rawData).slice(0, 100)}...`,
             );
             throw new Error(
               `Unexpected response format from ${params.id} - no JSON object found in response`,
             );
           }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/localizer/explicit.ts` around lines 278 - 324, The code
currently only inspects result.data and drops top-level parsed payloads; change
the logic to first normalize the parsed output into a single variable (e.g.,
rawData = result?.data ?? result) and then run the existing object/string
handling against rawData instead of result.data so plain objects or top-level
JSON strings aren't ignored; update all subsequent references (the object
branch, string branch, error messages and places using result.data) to use
rawData, keeping parseModelResponse, finalResult and jsonrepair behavior intact
and preserving the same error/snippet reporting that includes params.id.

currentChunkItemCount++;

const currentChunkSize = countWordsInRecord(currentChunk);
const effectiveBatchSize =
Contributor


Now, when --batch-size isn’t provided, the effective batch size defaults to the total number of keys. I would keep 25 as the fallback instead of payloadEntries.length.
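The reviewer's suggested fallback can be sketched as follows (resolveBatchSize is a hypothetical helper name, not code from the PR):

```typescript
// Sketch of the reviewer's suggestion: fall back to a fixed default of 25
// when --batch-size is omitted, instead of batching the whole payload at once.
const DEFAULT_BATCH_SIZE = 25;

function resolveBatchSize(batchSize?: number): number {
  return batchSize && batchSize > 0 ? batchSize : DEFAULT_BATCH_SIZE;
}

console.log(resolveBatchSize(10)); // 10
console.log(resolveBatchSize()); // 25
```

A fixed default keeps large files from being sent as one oversized request when the flag is omitted.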

export * from "./locale-switcher";
export * from "./attribute-component";
export * from "./locale";
export { getLocaleFromCookies, setLocaleInCookies } from "./utils";
Contributor


This doesn't seem related to batch size

debounce: z.number().positive().prefault(5000), // 5 seconds default
sound: z.boolean().optional(),
pseudo: z.boolean().optional(),
batchSize: z.number().min(1).optional(),
Contributor


Could you please add a maximum value of 250?

file: Z.array(Z.string()).optional(),
interactive: Z.boolean().prefault(false),
debug: Z.boolean().prefault(false),
batchSize: Z.number().min(1).optional(),
Contributor


Same as run: use a maximum value of 250.

const provider = isPseudo ? "pseudo" : ctx.config?.provider;
const engineId = ctx.config?.engineId;
ctx.localizer = createLocalizer(provider, engineId, ctx.flags.apiKey);
ctx.localizer = createLocalizer(
Contributor


Please use prettier to match the project formatting.

Author


I looked into this and tried reformatting it. I also cross-verified it (including with AI and Prettier), and it seems the line is being split due to the printWidth: 80 rule in the project config.

* @param batchSize - Max number of keys per chunk (default: 25)
* @returns An array of payload chunks
*/
function extractPayloadChunks(
Contributor


extractPayloadChunks and countWordsInRecord are now copy-pasted in two places - packages/cli/src/cli/processor/basic.ts and at the bottom of packages/cli/src/cli/localizer/explicit.ts. Please extract them into a shared utility so we don't maintain two copies.

Author


I removed the duplicate implementations from processor/basic.ts and localizer/explicit.ts, extracted them into a shared utility (packages/cli/src/cli/utils/chunk.ts), and imported that utility back into both places, so there is now only one copy to maintain.

// Handle both object and string responses
if (typeof result?.data === "object" && result.data !== null) {
finalResult = result.data;
} else if (result?.data) {
Contributor


What are the problems with the current repair logic? This looks overly complex for what it does.

Author

The original repair logic was added to handle inconsistencies across different LLM providers. In some cases, models return the data field as double-stringified JSON (e.g. {"data": "{\"message\": \"Hola\"}"}) instead of an object.

The manual extraction (indexOf('{') + jsonrepair) was intended to safely recover that payload and prevent crashes during batch localization.

That said, I agree the implementation was overly complex. I’ve simplified it by removing the manual slicing and reusing parseModelResponse() recursively, which handles the double-stringification more cleanly with less code.
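To illustrate the double-stringification case, here is a minimal, self-contained sketch; `parseModelResponse` below is a simplified stand-in for the CLI's helper, not its actual implementation:

```typescript
// Sketch: unwrap a possibly double-stringified `data` payload by re-parsing
// recursively. `parseModelResponse` is a stand-in name for illustration only.
function parseModelResponse(raw: unknown): Record<string, any> {
  if (typeof raw === "string") {
    // A string payload is JSON that still needs (another round of) parsing.
    return parseModelResponse(JSON.parse(raw));
  }
  if (raw !== null && typeof raw === "object") {
    const data = (raw as any).data;
    // If `data` is itself a string, the model double-stringified the payload.
    if (typeof data === "string") {
      return parseModelResponse(data);
    }
    return data ?? (raw as Record<string, any>);
  }
  throw new Error("Unexpected model response shape");
}
```

The recursion handles both `{"data": {"message": "Hola"}}` and `{"data": "{\"message\": \"Hola\"}"}` through the same code path, which is what makes it shorter than manual `indexOf('{')` slicing.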

Contributor

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
packages/cli/src/cli/cmd/run/index.ts (1)

126-130: Improve --batch-size parsing to reject malformed input at CLI level.

Currently, parseInt(val) on line 129 silently coerces malformed input: "2abc" → 2 and "1.9" → 1. While downstream schema validation (flagsSchema.parse()) catches invalid values, early CLI-level validation provides immediate feedback. Since interactive-commander supports error throwing in option parser callbacks, reject non-integer values directly:

Proposed change
   .option(
     "--batch-size <number>",
     "Number of translations to process in a single batch (not applicable when using lingo.dev provider)",
-    (val: string) => parseInt(val),
+    (val: string) => {
+      const parsed = Number(val);
+      if (!Number.isInteger(parsed) || parsed < 1) {
+        throw new Error("batch-size must be a positive integer");
+      }
+      return parsed;
+    },
   )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/cmd/run/index.ts` around lines 126 - 130, The CLI option
parser for "--batch-size <number>" currently uses parseInt(val) which silently
accepts malformed input like "2abc" or "1.9"; update the option parser callback
for the "--batch-size" option to perform strict integer validation (e.g., verify
the string is all digits or that Number(val) is finite and Number.isInteger) and
throw an Error when the input is not a valid integer so the commander CLI
reports the problem immediately (this is the code path where parseInt(val) is
used; downstream validation still occurs in flagsSchema.parse()).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9e8b37e1-dff1-42c8-8c32-3218aa929879

📥 Commits

Reviewing files that changed from the base of the PR and between 7d2b6ef and e20377f.

📒 Files selected for processing (2)
  • packages/cli/src/cli/cmd/i18n.ts
  • packages/cli/src/cli/cmd/run/index.ts

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/cli/src/cli/cmd/i18n.ts (1)

444-449: ⚠️ Potential issue | 🟡 Minor

Avoid overriding provider settings.batchSize with undefined.

Line 448 always includes batchSize, even when not set. In packages/cli/src/cli/processor/index.ts:15-31, this can override provider settings with undefined.

🔧 Proposed fix
                 let processPayload = createProcessor(i18nConfig!.provider, {
                   apiKey: settings.auth.apiKey,
                   apiUrl: settings.auth.apiUrl,
                   engineId: i18nConfig!.engineId,
-                  batchSize: flags.batchSize,
+                  ...(flags.batchSize !== undefined
+                    ? { batchSize: flags.batchSize }
+                    : {}),
                 });
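The hazard described here is plain JavaScript object-spread semantics and can be reproduced outside the CLI entirely; all names in this snippet are illustrative:

```typescript
// Why `batchSize: flags.batchSize` is unsafe when the flag is unset:
// an explicit `undefined` property overrides a spread-in default,
// while an absent key does not. Names below are illustrative only.
const providerDefaults = { batchSize: 25, model: "demo" };

// Explicit `undefined` key clobbers the default...
const clobbered = { ...providerDefaults, batchSize: undefined };

// ...while conditionally spreading keeps the default intact.
const flags: { batchSize?: number } = {};
const safe = {
  ...providerDefaults,
  ...(flags.batchSize !== undefined ? { batchSize: flags.batchSize } : {}),
};
```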
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/cmd/i18n.ts` around lines 444 - 449, The current call to
createProcessor(i18nConfig!.provider, { ... }) always sets batchSize:
flags.batchSize which can overwrite provider defaults with undefined; update the
call so the options object only includes batchSize when flags.batchSize is
defined (e.g., build the options object and conditionally add batchSize if
flags.batchSize !== undefined) so createProcessor receives no batchSize key when
the flag isn't set; look for createProcessor, flags.batchSize and
i18nConfig!.provider to modify this call.
🧹 Nitpick comments (1)
packages/cli/src/cli/utils/chunk.ts (1)

19-30: Split before overflow so the 250-word target is actually respected.

The current check runs after Line 19 adds the new entry, so the emitted chunk can exceed the ideal word limit. Pre-checking before append keeps chunk sizes closer to the intended threshold.

♻️ Refactor sketch
-    currentChunk[key] = value;
-    currentChunkItemCount++;
-
-    const currentChunkSize = countWordsInRecord(currentChunk);
+    const entrySize = countWordsInRecord({ [key]: value });
+    const currentChunkSize = countWordsInRecord(currentChunk);
     const effectiveBatchSize = batchSize ?? 25;
+    if (
+      currentChunkItemCount > 0 &&
+      (currentChunkSize + entrySize > idealBatchItemSize ||
+        currentChunkItemCount >= effectiveBatchSize)
+    ) {
+      result.push(currentChunk);
+      currentChunk = {};
+      currentChunkItemCount = 0;
+    }
+    currentChunk[key] = value;
+    currentChunkItemCount++;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/utils/chunk.ts` around lines 19 - 30, The chunking logic
currently appends the new entry to currentChunk before measuring size, which
allows emitted chunks to exceed idealBatchItemSize; modify the flow in the
function that builds result (use symbols currentChunk, countWordsInRecord,
idealBatchItemSize, batchSize, currentChunkItemCount, payloadEntries, i) to
check whether adding the next record would push currentChunk over
idealBatchItemSize or effectiveBatchSize and, if so, push the existing
currentChunk to result and reset it before adding the new entry; ensure you
still handle the case where currentChunk is empty but a single record itself
exceeds idealBatchItemSize by allowing that record into its own chunk and
incrementing counts accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: be6382fd-bf16-41b5-b60a-3fcf16fe838a

📥 Commits

Reviewing files that changed from the base of the PR and between e20377f and 6a251f2.

📒 Files selected for processing (12)
  • .changeset/add-batch-size.md
  • packages/cli/src/cli/cmd/i18n.ts
  • packages/cli/src/cli/cmd/run/_types.ts
  • packages/cli/src/cli/cmd/run/index.ts
  • packages/cli/src/cli/cmd/run/setup.ts
  • packages/cli/src/cli/localizer/explicit.ts
  • packages/cli/src/cli/localizer/index.ts
  • packages/cli/src/cli/localizer/pseudo.ts
  • packages/cli/src/cli/processor/basic.spec.ts
  • packages/cli/src/cli/processor/basic.ts
  • packages/cli/src/cli/processor/index.ts
  • packages/cli/src/cli/utils/chunk.ts
✅ Files skipped from review due to trivial changes (2)
  • .changeset/add-batch-size.md
  • packages/cli/src/cli/processor/basic.spec.ts
🚧 Files skipped from review as they are similar to previous changes (7)
  • packages/cli/src/cli/cmd/run/setup.ts
  • packages/cli/src/cli/localizer/pseudo.ts
  • packages/cli/src/cli/localizer/index.ts
  • packages/cli/src/cli/cmd/run/_types.ts
  • packages/cli/src/cli/processor/index.ts
  • packages/cli/src/cli/processor/basic.ts
  • packages/cli/src/cli/localizer/explicit.ts

Comment on lines +93 to +97
.option(
"--batch-size <number>",
"Number of translations to process in a single batch",
parseInt,
)
Contributor

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

interactive-commander option parser callback signature: does the parser receive (value, previousValue), and is passing JavaScript parseInt directly considered unsafe because of its second-argument radix behavior?

💡 Result:

No, the option parser (argParser) callback in interactive-commander receives only a single value parameter, not (value, previousValue). This is shown in official documentation examples where .argParser(Number) and .argParser((value) => { ... }) are used, with the lambda taking one argument. Yes, passing JavaScript parseInt directly is considered unsafe in Commander.js option processing (which interactive-commander extends and is compatible with). The parser receives (value, previousValue) for repeatable options, so parseInt(value, previousValue) treats previousValue as radix, often causing NaN (e.g., radix=1). Multiple GitHub issues confirm this: defaults become NaN (Issue #523), and wrappers like v => parseInt(v) are recommended (Issue #1007). MDN notes parseInt requires explicit radix (10) for safety.


Use a dedicated numeric parser and add integer validation for --batch-size.

Line 96 passes parseInt directly as the parser callback. When interactive-commander extends Commander.js behavior, repeatable options pass both value and previousValue to the callback, causing parseInt(value, previousValue) to treat the previous value as a radix argument, often producing unexpected results. Use an explicit wrapper: (value) => Number.parseInt(value, 10).

Additionally, since batchSize represents a count and must be a whole number, add .int() validation to the schema at line 671.

🔧 Proposed fix
   .option(
     "--batch-size <number>",
     "Number of translations to process in a single batch",
-    parseInt,
+    (value: string) => Number.parseInt(value, 10),
   )
-    batchSize: Z.number().min(1).max(250).optional(),
+    batchSize: Z.number().int().min(1).max(250).optional(),

Also applies to: 671-672
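The radix pitfall itself is plain JavaScript behavior and easy to reproduce outside Commander; the snippet below is illustrative, not code from this repo:

```typescript
// Commander-style parsers for repeatable options are invoked as
// fn(value, previousValue), so passing `parseInt` directly means the
// previous value arrives as the radix argument.
const calls: Array<[string, number | undefined]> = [
  ["1", undefined], // first occurrence: no previous value yet
  ["10", 1],        // second occurrence: previousValue (1) becomes the radix
];

// What passing `parseInt` directly amounts to:
const unsafe = calls.map(([value, prev]) => parseInt(value, prev));
// unsafe[1] is NaN because a radix of 1 is invalid.

// Wrapping the parser pins the radix, so every occurrence parses correctly:
const safe = calls.map(([value]) => Number.parseInt(value, 10));
```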

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/cmd/i18n.ts` around lines 93 - 97, Replace the direct
use of parseInt in the .option("--batch-size <number>", ...) call with a
dedicated wrapper like (v) => Number.parseInt(v, 10) to avoid Commander passing
a previousValue as a radix; also update the validation schema for batchSize to
enforce an integer (add .int() to the batchSize schema entry) so batchSize is
validated as a whole-number count.

Comment on lines +23 to +27
const effectiveBatchSize = batchSize && batchSize > 0 ? batchSize : 25;
if (
currentChunkSize > idealBatchItemSize ||
currentChunkItemCount >= effectiveBatchSize ||
i === payloadEntries.length - 1
Contributor

⚠️ Potential issue | 🟠 Major

Reject invalid batchSize instead of silently falling back to 25.

Current logic masks invalid values and makes misconfiguration hard to detect. This utility should enforce the same contract as CLI input (1..250) when a value is provided.

🔧 Proposed fix
 export function extractPayloadChunks(
   payload: Record<string, any>,
   batchSize?: number,
 ): Record<string, any>[] {
   const idealBatchItemSize = 250;
+  if (
+    batchSize !== undefined &&
+    (!Number.isInteger(batchSize) || batchSize < 1 || batchSize > 250)
+  ) {
+    throw new RangeError("batchSize must be an integer between 1 and 250");
+  }
   const result: Record<string, any>[] = [];
   let currentChunk: Record<string, any> = {};
   let currentChunkItemCount = 0;
@@
-    const effectiveBatchSize = batchSize && batchSize > 0 ? batchSize : 25;
+    const effectiveBatchSize = batchSize ?? 25;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/cli/utils/chunk.ts` around lines 23 - 27, The code currently
silently falls back to 25 for invalid batchSize; instead validate batchSize when
provided and reject invalid values. In the chunking logic (where
effectiveBatchSize, batchSize, currentChunkSize, currentChunkItemCount,
idealBatchItemSize, i, and payloadEntries are used) check if batchSize is
defined and if so enforce it is an integer within 1..250; if it is out of range
or not an integer throw an error (or return a rejected Promise) so callers get
immediate feedback; only default to 25 when batchSize is entirely
omitted/undefined. Ensure the validation happens before computing
effectiveBatchSize and before the loop that uses currentChunkItemCount and i.



Development

Successfully merging this pull request may close these issues.

Add batch size parameter to prevent context leaking of translator notes/hints

2 participants