feat(cli): add --batch-size parameter to prevent context leaking #2055
Hellnight2005 wants to merge 12 commits into lingodotdev:main from
Conversation
📝 Walkthrough

Changes

Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI Command
    participant Proc as Processor
    participant Loc as Localizer
    participant Chunk as Chunk Utility
    participant AI as AI Model
    CLI->>Proc: createProcessor(..., batchSize)
    Proc->>Loc: createLocalizer(..., batchSize)
    Loc->>Loc: Initialize with batchSize
    CLI->>Proc: Execute translation
    Proc->>Loc: localize(input, onProgress)
    Loc->>Chunk: extractPayloadChunks(data, batchSize)
    Chunk-->>Loc: chunks[]
    loop For each chunk
        Loc->>AI: generateText(chunk + hints)
        AI-->>Loc: response
        Loc->>Loc: parseResponse & merge result
        Loc->>Loc: onProgress(percent, chunk, result)
    end
    Loc-->>Proc: merged translations
    Proc-->>CLI: complete
```
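The chunked loop in the diagram can be sketched as a minimal TypeScript driver. This is an illustration only — `localizeInChunks` and `translateChunk` are assumed names standing in for the localizer internals and the AI call, not the PR's actual code:

```typescript
// Illustrative sketch of the chunked-localization loop from the diagram.
// `translateChunk` stands in for generateText + response parsing.
type Chunk = Record<string, string>;
type ProgressFn = (percent: number, chunk: Chunk, partial: Chunk) => void;

async function localizeInChunks(
  chunks: Chunk[],
  translateChunk: (chunk: Chunk) => Promise<Chunk>,
  onProgress: ProgressFn,
): Promise<Chunk> {
  const merged: Chunk = {};
  for (let i = 0; i < chunks.length; i++) {
    const result = await translateChunk(chunks[i]); // one AI round-trip per chunk
    Object.assign(merged, result); // merge partial result into the final payload
    onProgress(Math.round(((i + 1) / chunks.length) * 100), chunks[i], result);
  }
  return merged;
}
```

The key property — and the point of `--batch-size` — is that each AI call sees only one chunk, so keys from other chunks can't leak into each other's context.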
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 4
🧹 Nitpick comments (2)
packages/cli/src/cli/processor/basic.ts (1)
114-115: Minor optimization: move `effectiveBatchSize` calculation outside the loop.

The `effectiveBatchSize` value doesn't change per iteration since it only depends on `batchSize` and `payloadEntries.length`. Computing it once before the loop would be cleaner.

♻️ Proposed refactor
```diff
+  const effectiveBatchSize =
+    batchSize && batchSize > 0 ? batchSize : payloadEntries.length || 1;
+
   const payloadEntries = Object.entries(payload);
   for (let i = 0; i < payloadEntries.length; i++) {
     const [key, value] = payloadEntries[i];
     currentChunk[key] = value;
     currentChunkItemCount++;

     const currentChunkSize = countWordsInRecord(currentChunk);
-    const effectiveBatchSize =
-      batchSize && batchSize > 0 ? batchSize : payloadEntries.length || 1;

     if (
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/processor/basic.ts` around lines 114 - 115, the calculation of effectiveBatchSize inside the loop is redundant; compute it once before iterating. Move the expression `const effectiveBatchSize = batchSize && batchSize > 0 ? batchSize : payloadEntries.length || 1;` out of the loop that processes payloadEntries so the loop uses the precomputed effectiveBatchSize value instead of recalculating it each iteration.

packages/cli/src/cli/cmd/i18n.ts (1)
93-97: Consider aligning the description with the `run` command.

The `run` command's `--batch-size` description mentions "(not applicable when using lingo.dev provider)", but this description omits that detail. While the `i18n` command is deprecated, consistency would help users understand the limitation.

📝 Suggested description update
```diff
 .option(
   "--batch-size <number>",
-  "Number of translations to process in a single batch",
+  "Number of translations to process in a single batch (not applicable when using lingo.dev provider)",
   parseInt,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/cmd/i18n.ts` around lines 93 - 97, Update the "--batch-size <number>" option description in the i18n command to match the run command by appending the limitation text "(not applicable when using lingo.dev provider)"; locate the .option call for "--batch-size <number>" in the i18n command (packages/cli/src/cli/cmd/i18n.ts) and change the second argument string to include the parenthetical note so the help text is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli/src/cli/localizer/explicit.ts`:
- Around line 296-317: The current branch that handles string responses to
populate finalResult silently drops data when result?.data is a truthy
non-object string that lacks "{" and "}" (so index/lastIndex are -1); update the
else branch after computing index/lastIndex to handle that case by either
logging a warning and including the snippet or throwing an Error so translations
aren't silently lost: modify the block that references result.data, index,
lastIndex, jsonrepair and finalResult to add an else that calls
processLogger.warn or console.warn with a clear message and the first 200 chars
of result.data (or throw new Error(`Unparsable response: ...`)) and ensure any
thrown error includes the original result/data for debugging; keep the existing
try/catch for jsonrepair/JSON.parse as-is.
- Around line 286-291: The error messages incorrectly mention "Lingo.dev" while
this code path handles BYOK providers (e.g., OpenAI, Anthropic); update the
console.error and thrown Error (the lines using console.error(...`Response
snippet: ${snippet}`) and throw new Error(...`Failed to parse response from
Lingo.dev: ${e2}`)) to reference the actual configured provider (or a generic
"configured provider"/providerName variable) and include the existing error (e2)
and snippet variables so the message reads like "Failed to parse response from
configured provider <providerName>: <error> (Snippet: <snippet>)".
- Around line 337-340: The function signature for extractPayloadChunks
incorrectly types the payload parameter as Record<string, string> even though
LocalizerData.processableData contains arrays, nested objects and plural forms;
change the parameter type to Record<string, any> (or a more specific union
matching LocalizerData.processableData) so it aligns with countWordsInRecord's
handling of nested structures and avoids type errors; update the
extractPayloadChunks declaration and any related references to use the broader
type while keeping the internal logic unchanged.
In `@packages/cli/src/cli/processor/basic.ts`:
- Around line 92-96: The JSDoc for extractPayloadChunks is stale — it claims
"default: 25" but the implementation uses payloadEntries.length when batchSize
is omitted (so all keys are processed in one batch). Update the function's JSDoc
to accurately describe the default behavior: that if batchSize is not provided,
the function will use the full payload length (i.e., create a single batch
containing all keys), and keep references to the parameters payload and
batchSize in the description.
---
Nitpick comments:
In `@packages/cli/src/cli/cmd/i18n.ts`:
- Around line 93-97: Update the "--batch-size <number>" option description in
the i18n command to match the run command by appending the limitation text "(not
applicable when using lingo.dev provider)"; locate the .option call for
"--batch-size <number>" in the i18n command (packages/cli/src/cli/cmd/i18n.ts)
and change the second argument string to include the parenthetical note so the
help text is consistent.
In `@packages/cli/src/cli/processor/basic.ts`:
- Around line 114-115: The calculation of effectiveBatchSize inside the loop is
redundant; compute it once before iterating. Move the expression const
effectiveBatchSize = batchSize && batchSize > 0 ? batchSize :
payloadEntries.length || 1; out of the loop that processes payloadEntries so the
loop uses the precomputed effectiveBatchSize value instead of recalculating it
each iteration.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7aaddc5f-6b94-4510-b8df-8802fb6bf23e
📒 Files selected for processing (12)
- .changeset/add-batch-size.md
- packages/cli/src/cli/cmd/i18n.ts
- packages/cli/src/cli/cmd/run/_types.ts
- packages/cli/src/cli/cmd/run/index.ts
- packages/cli/src/cli/cmd/run/setup.ts
- packages/cli/src/cli/localizer/explicit.ts
- packages/cli/src/cli/localizer/index.ts
- packages/cli/src/cli/localizer/pseudo.ts
- packages/cli/src/cli/processor/basic.spec.ts
- packages/cli/src/cli/processor/basic.ts
- packages/cli/src/cli/processor/index.ts
- packages/react/src/client/index.ts
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/cli/src/cli/processor/basic.ts (1)
98-127: ⚠️ Potential issue | 🟠 Major

`--batch-size` still counts top-level objects, not logical translation entries.

This helper now explicitly accepts nested payloads, but it still batches on `Object.entries(payload)`. A namespace like `{ common: { save: "...", cancel: "..." } }` is treated as one item, so `--batch-size 1` can still send multiple keys together and miss the per-key isolation this flag is meant to provide. Split on logical translation entries instead of only top-level properties.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/processor/basic.ts` around lines 98 - 127, extractPayloadChunks currently counts only top-level keys so nested translation entries (e.g., {common:{save:...,cancel:...}}) are treated as one item; change extractPayloadChunks to iterate logical translation entries instead: first flatten the payload into an array of entries representing each translation key path and value (e.g., ["common.save", value] or path array), then build chunks by adding one logical entry at a time and reconstructing the nested object shape when pushing currentChunk; keep the same batching logic (use countWordsInRecord to measure size and effectiveBatchSize for item counting) but increment currentChunkItemCount per logical entry rather than per top-level property, and add a small helper (or inline logic) to set nested values into currentChunk when adding an entry so output chunks preserve the original nested structure.
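The leaf-path batching that this prompt describes could look roughly like the following sketch. It is not the PR's code — `flatten`, `setNested`, and `chunkByLeaf` are hypothetical helper names used to illustrate the idea of chunking per logical translation entry while preserving nested structure:

```typescript
// Sketch: chunk a nested payload by leaf entries so `--batch-size 1`
// really isolates sibling keys. All helper names are assumptions.
type Payload = Record<string, any>;

// Enumerate every leaf key path with its value, e.g. [["common","save"], "Save"].
function flatten(obj: Payload, prefix: string[] = []): [string[], any][] {
  return Object.entries(obj).flatMap(([k, v]) =>
    v !== null && typeof v === "object" && !Array.isArray(v)
      ? flatten(v, [...prefix, k])
      : [[[...prefix, k], v] as [string[], any]],
  );
}

// Write a value back into a nested object at the given key path.
function setNested(target: Payload, path: string[], value: any): void {
  let node = target;
  for (const key of path.slice(0, -1)) {
    node = node[key] ??= {};
  }
  node[path[path.length - 1]] = value;
}

// Build chunks of at most `batchSize` leaf entries, rebuilding nesting per chunk.
function chunkByLeaf(payload: Payload, batchSize = 25): Payload[] {
  const chunks: Payload[] = [];
  let current: Payload = {};
  let count = 0;
  for (const [path, value] of flatten(payload)) {
    setNested(current, path, value);
    if (++count >= batchSize) {
      chunks.push(current);
      current = {};
      count = 0;
    }
  }
  if (count > 0) chunks.push(current);
  return chunks;
}
```

With this, `{ common: { save, cancel } }` at batch size 1 yields two chunks, each containing exactly one key under its original namespace.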
🧹 Nitpick comments (1)
packages/cli/src/cli/processor/basic.ts (1)
5-8: Keep `batchSize` out of the AI SDK call settings.

`batchSize` is internal batching state, but the same `settings` object is still reused later for `generateText(...)`. That leaks a CLI-only flag into provider request options and makes this path depend on adapters tolerating unknown fields. Split `batchSize` from the actual model settings before building the request.

Suggested change

```diff
 export function createBasicTranslator(
   model: LanguageModel,
   systemPrompt: string,
   settings: ModelSettings = {},
 ) {
+  const { batchSize, ...modelSettings } = settings;
+
   return async (input: LocalizerInput, onProgress: LocalizerProgressFn) => {
     const chunks = extractPayloadChunks(
       input.processableData,
-      settings.batchSize,
+      batchSize,
     );
@@
     const response = await generateText({
       model,
-      ...settings,
+      ...modelSettings,
       messages: [
```

Also applies to: 16-18
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/processor/basic.ts` around lines 5 - 8, The ModelSettings type currently includes batchSize which is internal and is being passed through into provider calls; update the code so you split out batchSize from the settings object before building SDK request options: keep ModelSettings as-is for typing if desired but when preparing the call (where you pass settings into generateText / the provider request) destructure like `const { batchSize, ...modelSettings } = settings` and use modelSettings for the AI SDK/generateText call and batchSize only for local batching logic; ensure any place that forwards `settings` to the provider (e.g., the generateText invocation) uses the cleaned modelSettings to avoid leaking the CLI-only flag.
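Isolated from the surrounding code, the separation this prompt asks for is a single rest-destructure. A minimal sketch — the `ModelSettings` shape here is an assumption for illustration:

```typescript
// Sketch: peel the CLI-only batchSize off before settings are forwarded
// to the AI SDK, so provider requests never see the internal flag.
interface ModelSettings {
  temperature?: number;
  batchSize?: number; // internal batching knob, not a provider option
}

function splitSettings(settings: ModelSettings) {
  const { batchSize, ...modelSettings } = settings;
  return { batchSize, modelSettings };
}
```

`modelSettings` is what gets spread into the `generateText` call; `batchSize` stays local to the chunking logic.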
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli/src/cli/localizer/explicit.ts`:
- Around line 199-202: The current chunking leaks sibling hints because
extractPayloadChunks splits only on top-level entries so chunkHints carries
entire top-level subtrees; update the chunking logic (extractPayloadChunks and
the code that builds chunkHints) to operate at the leaf/key-path level: when
batching, enumerate leaf key paths from processableData and create chunks of
those full paths respecting params.batchSize, and when building chunkHints
prune/copy only the hint nodes that correspond to the exact leaf key paths
included in that chunk (not the entire top-level subtree). Ensure functions
referenced (extractPayloadChunks, any chunkHints builder code that consumes
processableData) use key-path traversal rather than top-level splitting so
--batch-size 1 isolates sibling keys and prevents hint bleed.
- Around line 278-324: The code currently only inspects result.data and drops
top-level parsed payloads; change the logic to first normalize the parsed output
into a single variable (e.g., rawData = result?.data ?? result) and then run the
existing object/string handling against rawData instead of result.data so plain
objects or top-level JSON strings aren't ignored; update all subsequent
references (the object branch, string branch, error messages and places using
result.data) to use rawData, keeping parseModelResponse, finalResult and
jsonrepair behavior intact and preserving the same error/snippet reporting that
includes params.id.
---
Outside diff comments:
In `@packages/cli/src/cli/processor/basic.ts`:
- Around line 98-127: extractPayloadChunks currently counts only top-level keys
so nested translation entries (e.g., {common:{save:...,cancel:...}}) are treated
as one item; change extractPayloadChunks to iterate logical translation entries
instead: first flatten the payload into an array of entries representing each
translation key path and value (e.g., ["common.save", value] or path array),
then build chunks by adding one logical entry at a time and reconstructing the
nested object shape when pushing currentChunk; keep the same batching logic (use
countWordsInRecord to measure size and effectiveBatchSize for item counting) but
increment currentChunkItemCount per logical entry rather than per top-level
property, and add a small helper (or inline logic) to set nested values into
currentChunk when adding an entry so output chunks preserve the original nested
structure.
---
Nitpick comments:
In `@packages/cli/src/cli/processor/basic.ts`:
- Around line 5-8: The ModelSettings type currently includes batchSize which is
internal and is being passed through into provider calls; update the code so you
split out batchSize from the settings object before building SDK request
options: keep ModelSettings as-is for typing if desired but when preparing the
call (where you pass settings into generateText / the provider request)
destructure like `const { batchSize, ...modelSettings } = settings` and use
modelSettings for the AI SDK/generateText call and batchSize only for local
batching logic; ensure any place that forwards `settings` to the provider (e.g.,
the generateText invocation) uses the cleaned modelSettings to avoid leaking the
CLI-only flag.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0194db5f-69f6-420b-9b1d-0f02f2f07e99
📒 Files selected for processing (2)
- packages/cli/src/cli/localizer/explicit.ts
- packages/cli/src/cli/processor/basic.ts
```diff
 let result: any;
 try {
   result = parseModelResponse(response.text);
 } catch (e2) {
   const snippet =
     response.text.length > 500
       ? `${response.text.slice(0, 500)}…`
       : response.text;
   console.error(
     `Failed to parse response from ${params.id}. Response snippet: ${snippet}`,
   );
   throw new Error(
     `Failed to parse response from ${params.id}: ${e2} (Snippet: ${snippet})`,
   );
 }
+let finalResult: Record<string, any> = {};

-// Handle both object and string responses
-if (typeof result.data === "object" && result.data !== null) {
-  return result.data;
+// Handle both object and string responses
+if (typeof result?.data === "object" && result.data !== null) {
+  finalResult = result.data;
+} else if (result?.data) {
+  // Handle string responses - extract and repair JSON
+  const index = result.data.indexOf("{");
+  const lastIndex = result.data.lastIndexOf("}");
+  if (index !== -1 && lastIndex !== -1) {
+    try {
+      const trimmed = result.data.slice(index, lastIndex + 1);
+      const repaired = jsonrepair(trimmed);
+      const parsed = JSON.parse(repaired);
+      finalResult = parsed.data || parsed || {};
+    } catch (e) {
+      console.error(
+        `Failed to parse nested JSON response. Snippet: ${result.data.slice(0, 100)}...`,
+      );
+      throw new Error(
+        `Failed to parse nested JSON response: ${e} (Snippet: ${result.data.slice(0, 100)}...)`,
+      );
+    }
+  } else {
+    console.error(
+      `Unexpected response format - no JSON object found. Snippet: ${String(result.data).slice(0, 100)}...`,
+    );
+    throw new Error(
+      `Unexpected response format from ${params.id} - no JSON object found in response`,
+    );
+  }
+}
```
Handle top-level parsed payloads before assuming a data wrapper.
This branch only succeeds when the parsed response is stored under result.data. If parseModelResponse() returns a plain object or a top-level JSON string, finalResult stays {} and the whole chunk is silently dropped. Normalize result itself into rawData first, then run the existing object/string handling on that value.
Suggested change

```diff
-  let finalResult: Record<string, any> = {};
-
-  // Handle both object and string responses
-  if (typeof result?.data === "object" && result.data !== null) {
-    finalResult = result.data;
-  } else if (result?.data) {
+  let finalResult: Record<string, any> = {};
+  const rawData =
+    typeof result === "object" && result !== null && "data" in result
+      ? result.data
+      : result;
+
+  if (typeof rawData === "object" && rawData !== null) {
+    finalResult = rawData as Record<string, any>;
+  } else if (typeof rawData === "string" && rawData) {
     // Handle string responses - extract and repair JSON
-    const index = result.data.indexOf("{");
-    const lastIndex = result.data.lastIndexOf("}");
+    const index = rawData.indexOf("{");
+    const lastIndex = rawData.lastIndexOf("}");
     if (index !== -1 && lastIndex !== -1) {
       try {
-        const trimmed = result.data.slice(index, lastIndex + 1);
+        const trimmed = rawData.slice(index, lastIndex + 1);
         const repaired = jsonrepair(trimmed);
         const parsed = JSON.parse(repaired);
         finalResult = parsed.data || parsed || {};
       } catch (e) {
         console.error(
-          `Failed to parse nested JSON response. Snippet: ${result.data.slice(0, 100)}...`,
+          `Failed to parse nested JSON response. Snippet: ${rawData.slice(0, 100)}...`,
         );
         throw new Error(
-          `Failed to parse nested JSON response: ${e} (Snippet: ${result.data.slice(0, 100)}...)`,
+          `Failed to parse nested JSON response: ${e} (Snippet: ${rawData.slice(0, 100)}...)`,
         );
       }
     } else {
       console.error(
-        `Unexpected response format - no JSON object found. Snippet: ${String(result.data).slice(0, 100)}...`,
+        `Unexpected response format - no JSON object found. Snippet: ${String(rawData).slice(0, 100)}...`,
       );
       throw new Error(
         `Unexpected response format from ${params.id} - no JSON object found in response`,
       );
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/localizer/explicit.ts` around lines 278 - 324, The code
currently only inspects result.data and drops top-level parsed payloads; change
the logic to first normalize the parsed output into a single variable (e.g.,
rawData = result?.data ?? result) and then run the existing object/string
handling against rawData instead of result.data so plain objects or top-level
JSON strings aren't ignored; update all subsequent references (the object
branch, string branch, error messages and places using result.data) to use
rawData, keeping parseModelResponse, finalResult and jsonrepair behavior intact
and preserving the same error/snippet reporting that includes params.id.
```ts
currentChunkItemCount++;

const currentChunkSize = countWordsInRecord(currentChunk);
const effectiveBatchSize =
```
Now, when --batch-size isn’t provided, the effective batch size defaults to the total number of keys. I would keep 25 as the fallback instead of payloadEntries.length.
packages/react/src/client/index.ts
Outdated
```ts
export * from "./locale-switcher";
export * from "./attribute-component";
export * from "./locale";
export { getLocaleFromCookies, setLocaleInCookies } from "./utils";
```
This doesn't seem related to batch size
```ts
debounce: z.number().positive().prefault(5000), // 5 seconds default
sound: z.boolean().optional(),
pseudo: z.boolean().optional(),
batchSize: z.number().min(1).optional(),
```
Could you please add a maximum value of 250?
packages/cli/src/cli/cmd/i18n.ts
Outdated
```ts
file: Z.array(Z.string()).optional(),
interactive: Z.boolean().prefault(false),
debug: Z.boolean().prefault(false),
batchSize: Z.number().min(1).optional(),
```
Same as run: use a maximum value of 250.
```ts
const provider = isPseudo ? "pseudo" : ctx.config?.provider;
const engineId = ctx.config?.engineId;
ctx.localizer = createLocalizer(provider, engineId, ctx.flags.apiKey);
ctx.localizer = createLocalizer(
```
Please use prettier to match the project formatting.
I looked into this and tried reformatting it. I also cross-verified it (including with AI and Prettier), and it seems the line is being split due to the printWidth: 80 rule in the project config.
```ts
 * @param batchSize - Max number of keys per chunk (default: 25)
 * @returns An array of payload chunks
 */
function extractPayloadChunks(
```
extractPayloadChunks and countWordsInRecord are now copy-pasted in two places - packages/cli/src/cli/processor/basic.ts and at the bottom of packages/cli/src/cli/localizer/explicit.ts. Please extract them into a shared utility so we don't maintain two copies.
I've removed the duplicate implementations from processor/basic.ts and localizer/explicit.ts.
I extracted them into a shared utility (packages/cli/src/cli/utils/chunk.ts) and imported them back into both places, so we only maintain them in one place.
```ts
// Handle both object and string responses
if (typeof result?.data === "object" && result.data !== null) {
  finalResult = result.data;
} else if (result?.data) {
```
What are the problems with the current repair logic? This looks overly complex for what it does.
The original repair logic was added to handle inconsistencies across different LLM providers. In some cases, models return the data field as double-stringified JSON (e.g. {"data": "{\"message\": \"Hola\"}"}) instead of an object.
The manual extraction (indexOf('{') + jsonrepair) was intended to safely recover that payload and prevent crashes during batch localization.
That said, I agree the implementation was overly complex. I’ve simplified it by removing the manual slicing and reusing parseModelResponse() recursively, which handles the double-stringification more cleanly with less code.
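For illustration, the double-stringified case described above can be handled by re-parsing the `data` field when it is itself a JSON string. This is a standalone sketch of the idea — `parseDeep` is a hypothetical helper, not the project's actual `parseModelResponse`:

```typescript
// Sketch: resolve {"data": "{\"message\": \"Hola\"}"} into a real object
// by parsing `data` a second time when it arrives as a JSON string.
function parseDeep(text: string): any {
  const parsed = JSON.parse(text);
  if (parsed && typeof parsed.data === "string") {
    try {
      return { ...parsed, data: JSON.parse(parsed.data) };
    } catch {
      return parsed; // data was a plain string, not nested JSON
    }
  }
  return parsed;
}
```

The try/catch keeps plain string values (non-JSON) intact, so only genuinely double-stringified payloads get the second parse.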
…loss (CodeRabbit review)
- Fallback to 25 instead of total keys for batch size
- Add Zod max(250) validation for batchSize flag
- Extract duplicate JSON chunking repair logic to utils
- Revert unintended export from react/src/client/index.ts
- Format modified files with Prettier
🧹 Nitpick comments (1)
packages/cli/src/cli/cmd/run/index.ts (1)
126-130: Improve `--batch-size` parsing to reject malformed input at CLI level.

Currently, `parseInt(val)` on line 129 silently coerces malformed input: `"2abc"` → `2` and `"1.9"` → `1`. While downstream schema validation (`flagsSchema.parse()`) catches invalid values, early CLI-level validation provides immediate feedback. Since interactive-commander supports error throwing in option parser callbacks, reject non-integer values directly:

Proposed change

```diff
 .option(
   "--batch-size <number>",
   "Number of translations to process in a single batch (not applicable when using lingo.dev provider)",
-  (val: string) => parseInt(val),
+  (val: string) => {
+    const parsed = Number(val);
+    if (!Number.isInteger(parsed) || parsed < 1) {
+      throw new Error("batch-size must be a positive integer");
+    }
+    return parsed;
+  },
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/cmd/run/index.ts` around lines 126 - 130, The CLI option parser for "--batch-size <number>" currently uses parseInt(val) which silently accepts malformed input like "2abc" or "1.9"; update the option parser callback for the "--batch-size" option to perform strict integer validation (e.g., verify the string is all digits or that Number(val) is finite and Number.isInteger) and throw an Error when the input is not a valid integer so the commander CLI reports the problem immediately (this is the code path where parseInt(val) is used; downstream validation still occurs in flagsSchema.parse()).
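Extracted as a standalone helper, the strict parsing behavior suggested in this comment behaves as follows — an illustrative sketch, with `parseBatchSize` as a hypothetical name:

```typescript
// Sketch of the strict --batch-size parser: rejects "2abc", "1.9", and "0"
// instead of letting parseInt silently coerce them.
function parseBatchSize(val: string): number {
  const parsed = Number(val);
  if (!Number.isInteger(parsed) || parsed < 1) {
    throw new Error("batch-size must be a positive integer");
  }
  return parsed;
}
```

`Number("2abc")` is `NaN` and `Number("1.9")` is `1.9`, so both fail the `Number.isInteger` check, whereas `parseInt` would have accepted them.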
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/cli/src/cli/cmd/run/index.ts`:
- Around line 126-130: The CLI option parser for "--batch-size <number>"
currently uses parseInt(val) which silently accepts malformed input like "2abc"
or "1.9"; update the option parser callback for the "--batch-size" option to
perform strict integer validation (e.g., verify the string is all digits or that
Number(val) is finite and Number.isInteger) and throw an Error when the input is
not a valid integer so the commander CLI reports the problem immediately (this
is the code path where parseInt(val) is used; downstream validation still occurs
in flagsSchema.parse()).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9e8b37e1-dff1-42c8-8c32-3218aa929879
📒 Files selected for processing (2)
packages/cli/src/cli/cmd/i18n.tspackages/cli/src/cli/cmd/run/index.ts
e20377f to 6a251f2 — Compare
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/cli/src/cli/cmd/i18n.ts (1)
444-449: ⚠️ Potential issue | 🟡 Minor

Avoid overriding provider `settings.batchSize` with `undefined`.

Line 448 always includes `batchSize`, even when not set. In `packages/cli/src/cli/processor/index.ts:15-31`, this can override provider settings with `undefined`.

🔧 Proposed fix
```diff
 let processPayload = createProcessor(i18nConfig!.provider, {
   apiKey: settings.auth.apiKey,
   apiUrl: settings.auth.apiUrl,
   engineId: i18nConfig!.engineId,
-  batchSize: flags.batchSize,
+  ...(flags.batchSize !== undefined
+    ? { batchSize: flags.batchSize }
+    : {}),
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/cmd/i18n.ts` around lines 444 - 449, The current call to createProcessor(i18nConfig!.provider, { ... }) always sets batchSize: flags.batchSize which can overwrite provider defaults with undefined; update the call so the options object only includes batchSize when flags.batchSize is defined (e.g., build the options object and conditionally add batchSize if flags.batchSize !== undefined) so createProcessor receives no batchSize key when the flag isn't set; look for createProcessor, flags.batchSize and i18nConfig!.provider to modify this call.
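The conditional-spread pattern this fix relies on can be shown in isolation. A minimal sketch — `buildOptions` is a hypothetical helper used only to demonstrate the behavior:

```typescript
// Sketch: only include batchSize in the options object when the flag is set,
// so an absent flag can't overwrite a provider default with undefined.
function buildOptions(base: Record<string, any>, batchSize?: number) {
  return {
    ...base,
    ...(batchSize !== undefined ? { batchSize } : {}),
  };
}
```

The difference from `{ ...base, batchSize }` is that the `batchSize` key is entirely absent (not present with value `undefined`) when the flag wasn't passed, which matters for downstream merges.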
🧹 Nitpick comments (1)
packages/cli/src/cli/utils/chunk.ts (1)
19-30: Split before overflow so the 250-word target is actually respected.

The current check runs after line 19 adds the new entry, so the emitted chunk can exceed the ideal word limit. Pre-checking before append keeps chunk sizes closer to the intended threshold.
♻️ Refactor sketch
```diff
-  currentChunk[key] = value;
-  currentChunkItemCount++;
-
-  const currentChunkSize = countWordsInRecord(currentChunk);
+  const entrySize = countWordsInRecord({ [key]: value });
+  const currentChunkSize = countWordsInRecord(currentChunk);
   const effectiveBatchSize = batchSize ?? 25;
+  if (
+    currentChunkItemCount > 0 &&
+    (currentChunkSize + entrySize > idealBatchItemSize ||
+      currentChunkItemCount >= effectiveBatchSize)
+  ) {
+    result.push(currentChunk);
+    currentChunk = {};
+    currentChunkItemCount = 0;
+  }
+  currentChunk[key] = value;
+  currentChunkItemCount++;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/cli/utils/chunk.ts` around lines 19 - 30, The chunking logic currently appends the new entry to currentChunk before measuring size, which allows emitted chunks to exceed idealBatchItemSize; modify the flow in the function that builds result (use symbols currentChunk, countWordsInRecord, idealBatchItemSize, batchSize, currentChunkItemCount, payloadEntries, i) to check whether adding the next record would push currentChunk over idealBatchItemSize or effectiveBatchSize and, if so, push the existing currentChunk to result and reset it before adding the new entry; ensure you still handle the case where currentChunk is empty but a single record itself exceeds idealBatchItemSize by allowing that record into its own chunk and incrementing counts accordingly.
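A runnable illustration of the pre-check flow described in this prompt. The `countWords` helper here is a naive whitespace count standing in for `countWordsInRecord`, and `chunkWithBudget` is a hypothetical name — this is a sketch of the technique, not the utility's actual code:

```typescript
// Sketch: decide whether to close the current chunk *before* adding the next
// entry, so chunks stay at or under the word budget unless a single entry
// alone exceeds it.
function countWords(record: Record<string, string>): number {
  return Object.values(record)
    .flatMap((v) => v.split(/\s+/).filter(Boolean))
    .length;
}

function chunkWithBudget(
  payload: Record<string, string>,
  batchSize = 25,
  wordBudget = 250,
): Record<string, string>[] {
  const result: Record<string, string>[] = [];
  let current: Record<string, string> = {};
  let count = 0;
  for (const [key, value] of Object.entries(payload)) {
    const entryWords = countWords({ [key]: value });
    // Pre-check: would adding this entry blow the budget or the item cap?
    const wouldOverflow =
      count > 0 &&
      (countWords(current) + entryWords > wordBudget || count >= batchSize);
    if (wouldOverflow) {
      result.push(current);
      current = {};
      count = 0;
    }
    current[key] = value; // an oversized lone entry still gets its own chunk
    count++;
  }
  if (count > 0) result.push(current);
  return result;
}
```

The `count > 0` guard is what handles the edge case the prompt calls out: a single entry larger than the budget is still admitted into its own chunk rather than looping forever.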
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli/src/cli/cmd/i18n.ts`:
- Around line 93-97: Replace the direct use of parseInt in the
.option("--batch-size <number>", ...) call with a dedicated wrapper like (v) =>
Number.parseInt(v, 10) to avoid Commander passing a previousValue as a radix;
also update the validation schema for batchSize to enforce an integer (add
.int() to the batchSize schema entry) so batchSize is validated as a
whole-number count.
In `@packages/cli/src/cli/utils/chunk.ts`:
- Around line 23-27: The code currently silently falls back to 25 for invalid
batchSize; instead validate batchSize when provided and reject invalid values.
In the chunking logic (where effectiveBatchSize, batchSize, currentChunkSize,
currentChunkItemCount, idealBatchItemSize, i, and payloadEntries are used) check
if batchSize is defined and if so enforce it is an integer within 1..250; if it
is out of range or not an integer throw an error (or return a rejected Promise)
so callers get immediate feedback; only default to 25 when batchSize is entirely
omitted/undefined. Ensure the validation happens before computing
effectiveBatchSize and before the loop that uses currentChunkItemCount and i.
---
Outside diff comments:
In `@packages/cli/src/cli/cmd/i18n.ts`:
- Around line 444-449: The current call to createProcessor(i18nConfig!.provider,
{ ... }) always sets batchSize: flags.batchSize which can overwrite provider
defaults with undefined; update the call so the options object only includes
batchSize when flags.batchSize is defined (e.g., build the options object and
conditionally add batchSize if flags.batchSize !== undefined) so createProcessor
receives no batchSize key when the flag isn't set; look for createProcessor,
flags.batchSize and i18nConfig!.provider to modify this call.
---
Nitpick comments:
In `@packages/cli/src/cli/utils/chunk.ts`:
- Around line 19-30: The chunking logic currently appends the new entry to
currentChunk before measuring size, which allows emitted chunks to exceed
idealBatchItemSize; modify the flow in the function that builds result (use
symbols currentChunk, countWordsInRecord, idealBatchItemSize, batchSize,
currentChunkItemCount, payloadEntries, i) to check whether adding the next
record would push currentChunk over idealBatchItemSize or effectiveBatchSize
and, if so, push the existing currentChunk to result and reset it before adding
the new entry; ensure you still handle the case where currentChunk is empty but
a single record itself exceeds idealBatchItemSize by allowing that record into
its own chunk and incrementing counts accordingly.
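For reference, the look-before-append flow this prompt describes can be sketched as a standalone function (the names `countWords` and `extractChunks` are illustrative stand-ins, not the repo's actual `chunk.ts` implementation):

```typescript
type Payload = Record<string, string>;

// Rough word count across all values of a chunk (stand-in for countWordsInRecord).
function countWords(record: Payload): number {
  return Object.values(record)
    .join(" ")
    .split(/\s+/)
    .filter(Boolean).length;
}

// Check whether adding the next entry would overflow BEFORE adding it, so
// emitted chunks never exceed the limits — except a single oversized record,
// which is still allowed to occupy its own chunk.
function extractChunks(
  payload: Payload,
  batchSize = 25,
  idealBatchItemSize = 250,
): Payload[] {
  const result: Payload[] = [];
  let current: Payload = {};
  let count = 0;
  for (const [key, value] of Object.entries(payload)) {
    const wouldOverflow =
      count > 0 &&
      (countWords({ ...current, [key]: value }) > idealBatchItemSize ||
        count + 1 > batchSize);
    if (wouldOverflow) {
      result.push(current);
      current = {};
      count = 0;
    }
    current[key] = value;
    count++;
  }
  if (count > 0) result.push(current);
  return result;
}
```

Because the overflow check runs before the append, no emitted chunk can exceed the item limit it was measured against.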
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: be6382fd-bf16-41b5-b60a-3fcf16fe838a
📒 Files selected for processing (12)
- .changeset/add-batch-size.md
- packages/cli/src/cli/cmd/i18n.ts
- packages/cli/src/cli/cmd/run/_types.ts
- packages/cli/src/cli/cmd/run/index.ts
- packages/cli/src/cli/cmd/run/setup.ts
- packages/cli/src/cli/localizer/explicit.ts
- packages/cli/src/cli/localizer/index.ts
- packages/cli/src/cli/localizer/pseudo.ts
- packages/cli/src/cli/processor/basic.spec.ts
- packages/cli/src/cli/processor/basic.ts
- packages/cli/src/cli/processor/index.ts
- packages/cli/src/cli/utils/chunk.ts
✅ Files skipped from review due to trivial changes (2)
- .changeset/add-batch-size.md
- packages/cli/src/cli/processor/basic.spec.ts
🚧 Files skipped from review as they are similar to previous changes (7)
- packages/cli/src/cli/cmd/run/setup.ts
- packages/cli/src/cli/localizer/pseudo.ts
- packages/cli/src/cli/localizer/index.ts
- packages/cli/src/cli/cmd/run/_types.ts
- packages/cli/src/cli/processor/index.ts
- packages/cli/src/cli/processor/basic.ts
- packages/cli/src/cli/localizer/explicit.ts
```ts
.option(
  "--batch-size <number>",
  "Number of translations to process in a single batch",
  parseInt,
)
```
🧩 Analysis chain
🌐 Web query:
interactive-commander option parser callback signature: does the parser receive (value, previousValue), and is passing JavaScript parseInt directly considered unsafe because of its second-argument radix behavior?
💡 Result:
No, the option parser (argParser) callback in interactive-commander receives only a single value parameter, not (value, previousValue). This is shown in official documentation examples where .argParser(Number) and .argParser((value) => { ... }) are used, with the lambda taking one argument. Yes, passing JavaScript parseInt directly is considered unsafe in Commander.js option processing (which interactive-commander extends and is compatible with). The parser receives (value, previousValue) for repeatable options, so parseInt(value, previousValue) treats previousValue as radix, often causing NaN (e.g., radix=1). Multiple GitHub issues confirm this: defaults become NaN (Issue #523), and wrappers like v => parseInt(v) are recommended (Issue #1007). MDN notes parseInt requires explicit radix (10) for safety.
Citations:
- 1: https://www.npmjs.com/package/interactive-commander
- 2: https://registry.npmjs.org/interactive-commander
- 3: https://github.com/fardjad/node-interactive-commander
- 4: option() default value doesn't work with parseInt tj/commander.js#523
- 5: Howto: option with int value and default ? tj/commander.js#1007
- 6: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt
- 7: https://www.jsdocs.io/package/commander
Use a dedicated numeric parser and add integer validation for --batch-size.
Line 96 passes `parseInt` directly as the parser callback. Because interactive-commander extends Commander.js, repeatable options invoke the callback as `(value, previousValue)`, so `parseInt(value, previousValue)` treats the previous value as a radix argument and often produces `NaN`. Use an explicit wrapper: `(value) => Number.parseInt(value, 10)`.
Additionally, since batchSize represents a count and must be a whole number, add .int() validation to the schema at line 671.
🔧 Proposed fix

```diff
 .option(
   "--batch-size <number>",
   "Number of translations to process in a single batch",
-  parseInt,
+  (value: string) => Number.parseInt(value, 10),
 )
```

```diff
- batchSize: Z.number().min(1).max(250).optional(),
+ batchSize: Z.number().int().min(1).max(250).optional(),
```

Also applies to: 671-672
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/cmd/i18n.ts` around lines 93 - 97, Replace the direct
use of parseInt in the .option("--batch-size <number>", ...) call with a
dedicated wrapper like (v) => Number.parseInt(v, 10) to avoid Commander passing
a previousValue as a radix; also update the validation schema for batchSize to
enforce an integer (add .int() to the batchSize schema entry) so batchSize is
validated as a whole-number count.
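As a quick illustration of the radix pitfall (the `unsafe`/`safe` names below are illustrative, not from the PR):

```typescript
// Commander invokes an option parser as parser(value, previousValue) for
// repeatable options; passing parseInt directly makes previousValue the radix.
const unsafe = (value: string, previous?: number) => parseInt(value, previous);

const first = unsafe("8"); // parseInt("8", undefined) -> 8
const second = unsafe("8", first); // parseInt("8", 8) -> NaN ("8" is not an octal digit)

// Safe wrapper: pin the radix to 10 and ignore any extra arguments.
const safe = (value: string) => Number.parseInt(value, 10);
```

Repeating the flag, e.g. `--batch-size 8 --batch-size 8`, would hit exactly the `second` case above.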
```ts
const effectiveBatchSize = batchSize && batchSize > 0 ? batchSize : 25;
if (
  currentChunkSize > idealBatchItemSize ||
  currentChunkItemCount >= effectiveBatchSize ||
  i === payloadEntries.length - 1
```
Reject invalid batchSize instead of silently falling back to 25.
Current logic masks invalid values and makes misconfiguration hard to detect. This utility should enforce the same contract as CLI input (1..250) when a value is provided.
🔧 Proposed fix

```diff
 export function extractPayloadChunks(
   payload: Record<string, any>,
   batchSize?: number,
 ): Record<string, any>[] {
   const idealBatchItemSize = 250;
+  if (
+    batchSize !== undefined &&
+    (!Number.isInteger(batchSize) || batchSize < 1 || batchSize > 250)
+  ) {
+    throw new RangeError("batchSize must be an integer between 1 and 250");
+  }
   const result: Record<string, any>[] = [];
   let currentChunk: Record<string, any> = {};
   let currentChunkItemCount = 0;
@@
-  const effectiveBatchSize = batchSize && batchSize > 0 ? batchSize : 25;
+  const effectiveBatchSize = batchSize ?? 25;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/cli/utils/chunk.ts` around lines 23 - 27, The code currently
silently falls back to 25 for invalid batchSize; instead validate batchSize when
provided and reject invalid values. In the chunking logic (where
effectiveBatchSize, batchSize, currentChunkSize, currentChunkItemCount,
idealBatchItemSize, i, and payloadEntries are used) check if batchSize is
defined and if so enforce it is an integer within 1..250; if it is out of range
or not an integer throw an error (or return a rejected Promise) so callers get
immediate feedback; only default to 25 when batchSize is entirely
omitted/undefined. Ensure the validation happens before computing
effectiveBatchSize and before the loop that uses currentChunkItemCount and i.
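The fail-fast contract described above can be sketched as a small standalone helper (the name `assertValidBatchSize` is hypothetical; this is a sketch of the suggested behavior, not the merged code):

```typescript
// Return the effective batch size: default to 25 only when the value is
// omitted entirely; any provided value must be an integer in 1..250 or we throw.
function assertValidBatchSize(batchSize?: number): number {
  if (batchSize === undefined) return 25;
  if (!Number.isInteger(batchSize) || batchSize < 1 || batchSize > 250) {
    throw new RangeError("batchSize must be an integer between 1 and 250");
  }
  return batchSize;
}
```

The key design choice is that invalid input throws immediately instead of being silently coerced, so a misconfigured flag surfaces at startup rather than as oddly sized batches.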
Summary
Implements the
`--batch-size` parameter for CLI commands to control the number of translation strings sent to the LLM per request, preventing context leaking of translator hints between unrelated keys.

Changes
- Adds a `--batch-size <number>` flag to the `run` and `i18n` CLI commands.
- Updates `createBasicTranslator` in `src/cli/processor/basic.ts` to break down and process inputs into configurable chunks.
- Handles `JSON.parse` exceptions gracefully using `jsonrepair` in the AI feedback loop.

Testing
Business logic tests added:
- Verifies chunking behavior when `batchSize` is strictly set to 1.
- No regressions (all existing tests in `packages/cli` succeed).
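The `jsonrepair` fallback mentioned under Changes follows a parse-repair-reparse pattern, sketched here dependency-free (the trivial `stripTrailingCommas` stand-in replaces the real `jsonrepair` package; both names in this sketch are illustrative):

```typescript
// Try a strict parse first; on failure, repair the model output and retry.
// If the repaired text still fails, the error propagates to the caller's
// feedback loop.
function parseWithRepair(
  raw: string,
  repair: (text: string) => string,
): unknown {
  try {
    return JSON.parse(raw);
  } catch {
    return JSON.parse(repair(raw));
  }
}

// Toy repair step standing in for jsonrepair: drop trailing commas.
const stripTrailingCommas = (text: string) =>
  text.replace(/,\s*([}\]])/g, "$1");
```

This keeps the happy path on a plain `JSON.parse` and only pays the repair cost when the LLM emits malformed JSON.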
Required for UI/UX changes:
Checklist
Closes #1733
Summary by CodeRabbit
New Features
- Added a `--batch-size` CLI option to the `run` and `i18n` commands, allowing users to customize translation batch processing with flexible batch sizes between 1 and 250.

Tests