feat: Add Incremental Entity Provider and Global API Pagination Support #140
InduwaraSMPN wants to merge 61 commits
Conversation
Introduces generic helpers and refactors API calls to use cursor-based pagination for all list endpoints, improving scalability and reliability for large data sets. Updates OpenAPI schema and client types to support metadata-driven pagination, replaces page/size parameters with limit/continue, and implements a default max page size for efficiency. Enhances error handling for paginated requests and prevents infinite loops. Refactors several backend services and entity providers to leverage the new pagination utilities, ensuring consistent resource synchronization across organizations, projects, and components. Motivated by the need for reliable handling of large deployments and to align with upstream OpenChoreo API changes.
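The cursor-based flow described here can be sketched as a generic helper. This is a minimal illustration only; the names `fetchAllResources`, `PageResult`, and `DEFAULT_PAGE_LIMIT` are assumptions for the sketch, not the PR's actual exports.

```typescript
interface PageResult<T> {
  items: T[];
  metadata?: { hasMore?: boolean; continue?: string };
}

const DEFAULT_PAGE_LIMIT = 100; // the PR caps page size via a shared constant

// Drains a cursor-paginated list endpoint, guarding against infinite loops
// by refusing to follow a continue token that has already been seen.
async function fetchAllResources<T>(
  fetchPage: (limit: number, continueToken?: string) => Promise<PageResult<T>>,
  limit: number = DEFAULT_PAGE_LIMIT,
): Promise<T[]> {
  const all: T[] = [];
  const seenTokens = new Set<string>();
  let token: string | undefined;

  for (;;) {
    const page = await fetchPage(limit, token);
    all.push(...page.items);

    const next = page.metadata?.continue;
    if (!page.metadata?.hasMore || !next) break;
    if (seenTokens.has(next)) {
      throw new Error('Pagination token repeated; aborting to avoid an infinite loop');
    }
    seenTokens.add(next);
    token = next;
  }
  return all;
}
```

Callers pass a page-fetching closure bound to a specific endpoint, so the loop, token bookkeeping, and loop-prevention live in one place.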
Introduces a new incremental ingestion backend module supporting burst-based, cursor-driven processing for large-scale OpenChoreo deployments. Enables efficient, fault-tolerant, and memory-conscious catalog updates by fetching entities in resumable batches with database-persisted state and management APIs. Improves scalability, observability, and operational control over catalog ingestion. Relates to large dataset handling and platform scalability needs.
Prevents migration failures by ensuring PostgreSQL CREATE INDEX CONCURRENTLY commands run outside transaction blocks, as required by the database. Improves migration reliability for performance optimizations.
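A sketch of how a knex migration can opt out of the per-migration transaction wrapper, which is what PostgreSQL requires for CREATE INDEX CONCURRENTLY. The table and index names here are assumptions, not the PR's actual schema.

```javascript
// Illustrative knex-style migration. In a real migration file these
// properties would be assigned to module.exports.
const migration = {
  up: async knex => {
    // CREATE INDEX CONCURRENTLY is rejected by PostgreSQL inside a
    // transaction block, so it relies on the wrapper being disabled below.
    await knex.raw(
      'CREATE INDEX CONCURRENTLY IF NOT EXISTS ingestions_provider_idx ON ingestions (provider_name)',
    );
  },
  down: async knex => {
    await knex.raw('DROP INDEX CONCURRENTLY IF EXISTS ingestions_provider_idx');
  },
  // Tell knex not to wrap this migration in a transaction.
  config: { transaction: false },
};
```

Without `config: { transaction: false }`, knex wraps each migration in a transaction by default and the statement fails with "CREATE INDEX CONCURRENTLY cannot run inside a transaction block".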
Updates test cases to reflect revised tag logic, ensuring tags now include 'openchoreo', the component name, and workload type, rather than inferring tags from component name parts. Cleans up unused imports for clarity.
Aligns tests with changes to CI/CD configuration, including renaming UI fields, updating workflow selection logic, and switching boolean widget expectations. Improves cursor expiration handling assertions and mocks for increased clarity and accuracy. Reflects recent logic changes to section generation and parameter spreading, ensuring tests accurately validate intended behaviors.
Refactors test and source files for improved readability, including consistent formatting in function arguments and object initializations. Enhances error handling by making log messages for expired pagination tokens more descriptive, aiding in debugging and operational clarity.
Introduces a new backend module and service factory to support immediate delta mutations in the catalog, enabling real-time entity ingestion for OpenChoreo without relying on legacy scheduled providers. Updates configuration example and dependencies to facilitate large-scale local testing and integration with scaffolder actions.
Replaces hardcoded page limit with a shared constant to ensure consistency and easier maintenance. Reflects updated OpenAPI schema cap and avoids potential request errors from exceeding limits.
Aligns the default pagination limit with standard system values, potentially improving consistency and compatibility with related systems.
Increases the maximum allowed value for the limit parameter from 500 to 512 in both the API specification and documentation. Aligns configuration with backend capabilities and clarifies usage for clients.
Adds mocked headers with content-length values to test HTTP responses to better simulate real-world scenarios and enable more accurate testing of code that depends on response headers.
Cleans up duplicate and unreachable error handling logic to streamline control flow and improve maintainability. Reduces noise by eliminating repeated or obsolete try/catch and entity transformation code. Ensures only necessary error checks remain, making the codebase easier to comprehend and maintain.
Improves trait selection performance and scalability by introducing cursor-based pagination to the trait-fetching endpoint and UI, allowing users to load large trait lists efficiently without overloading the frontend. Updates the backend to support pagination parameters and metadata. Enhances incremental ingestion configuration with options for concurrent requests and batch delay, and adds logic to avoid duplicate processing of organizations when cursors expire. Strengthens config validation and documentation for large-scale operations. Warns about potential memory issues in generic pagination utilities, encouraging chunked processing for large datasets.
Prevents runaway memory usage and unresponsive behavior by adding a configurable timeout and AbortSignal support to the pagination utility. Replaces the fixed page count safeguard with time-based limits, improving flexibility and reliability for large or slow data sources.
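The time-based safeguard and AbortSignal support described above might look roughly like this. The names `paginateWithTimeout` and `PaginateOptions` are assumptions for the sketch, not the utility's actual signature.

```typescript
interface PaginateOptions {
  timeoutMs?: number;   // replaces the old fixed max-page-count guard
  signal?: AbortSignal; // lets callers cancel long-running traversals
}

async function paginateWithTimeout<T>(
  fetchPage: (token?: string) => Promise<{ items: T[]; continue?: string }>,
  { timeoutMs = 30_000, signal }: PaginateOptions = {},
): Promise<T[]> {
  const deadline = Date.now() + timeoutMs;
  const all: T[] = [];
  let token: string | undefined;

  do {
    if (signal?.aborted) throw new Error('Pagination aborted by caller');
    if (Date.now() > deadline) {
      throw new Error(`Pagination exceeded ${timeoutMs}ms time budget`);
    }
    const page = await fetchPage(token);
    all.push(...page.items);
    token = page.continue;
  } while (token);

  return all;
}
```

A time budget adapts better than a fixed page count: a slow source with few pages and a fast source with many pages both stay bounded by wall-clock time, and the `AbortSignal` lets a shutting-down scheduler cancel cleanly.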
This commit updates the test suite for catalog backend modules with:

- Increased jest timeout from 60s to 120s for database/provider tests
- Added conditional CI skipping to prevent test timeouts
- Updated cursor expectations to match implementation (reordered properties, new sets)
- Modified tag splitting logic in template converter tests
- Changed component name picker to EntityNamePicker in UI tests
- Updated boolean widget expectations for enableBackup property

These changes ensure tests run reliably in CI environments while accurately reflecting current component behavior.
Replace config.d.ts with config.ts, moving detailed zod schema definitions and exports out of the type declaration file for better separation of concerns and clearer runtime imports.
Force-pushed from 4e7a898 to 3449126.
Update multiple Babel packages including core, generator, parser, and related dependencies. This includes upgrades to @babel/core@7.28.5, @babel/generator@7.28.5, @babel/parser@7.28.5, @babel/compat-data@7.28.5, and various helper packages. Also updates the resolve package from 1.22.8 to 1.22.11. These updates provide the latest bug fixes and improvements to the JavaScript compilation toolchain.
📝 Walkthrough

This PR introduces cursor-based incremental catalog ingestion for OpenChoreo with a new backend plugin, refactors existing services to use centralized pagination utilities, updates the OpenAPI contract to support pagination, and enhances the frontend trait-loading UI with pagination support.

Changes

- Incremental Catalog Ingestion Plugin
- Pagination Infrastructure & Service Refactoring
- Frontend Integration & Configuration
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant TraitField as Trait Field (UI)
    participant Backend as OpenChoreo Backend
    participant APIClient as OpenChoreo API
    Note over Client,APIClient: Initial Load (limit=100, no cursor)
    TraitField->>Backend: GET /traits?limit=100
    Backend->>APIClient: GET /orgs/{org}/traits?limit=100
    APIClient-->>Backend: { items: [...], metadata: { hasMore: true, continue: "token123" } }
    Backend-->>TraitField: { items: [...], hasMore: true, continueToken: "token123" }
    TraitField->>TraitField: Render dropdown + "Load more" button
    Note over Client,APIClient: User Clicks "Load More"
    Client->>TraitField: handleLoadMoreTraits()
    TraitField->>Backend: GET /traits?limit=100&continue=token123
    Backend->>APIClient: GET /orgs/{org}/traits?limit=100&continue=token123
    APIClient-->>Backend: { items: [...], metadata: { hasMore: false, continue: null } }
    Backend-->>TraitField: { items: [...], hasMore: false }
    TraitField->>TraitField: Append items, hide "Load more"
```
```mermaid
sequenceDiagram
    participant Scheduler
    participant Engine as Ingestion Engine
    participant DB as Database Manager
    participant Provider as Entity Provider
    participant API as OpenChoreo API
    participant Catalog as Catalog
    Note over Scheduler,Catalog: Burst Cycle Start
    Scheduler->>Engine: taskFn()
    Engine->>DB: getCurrentIngestionRecord()
    DB-->>Engine: { nextAction: 'ingest', ... }
    Engine->>Engine: handleNextAction() → ingestOneBurst()
    Note over Provider,API: Cursor-Based Traversal (orgs → projects → components)
    loop Until done or timeout
        Engine->>Provider: next(context, cursor?)
        Provider->>API: GET /orgs?limit=100&continue=cursor
        API-->>Provider: { items: [org1, org2], metadata: { hasMore, continue } }
        Provider->>Provider: Translate to Domain entities
        Provider-->>Engine: { entities: [...], cursor: { phase, ... }, done: false }
        Engine->>Engine: mark(entities, cursor)
        Engine->>DB: createMark(markRecord)
        DB->>Catalog: applyMutation(added: entities)
    end
    Note over Engine,DB: Final Burst Page (done:true)
    Engine->>DB: computeRemoved()
    DB-->>Engine: { removed: [...] }
    Engine->>Catalog: applyMutation(added, removed)
    Engine->>DB: setProviderResting()
```
Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

The PR introduces a large, intricate incremental ingestion subsystem with 25+ new files spanning database transactions, cursor pagination, state machines, entity translation, and API routing. Multiple interdependent layers require separate reasoning (database schema design, transaction error handling, cursor state tracking, phase-based traversal logic, removal computation, event integration). Simultaneously, refactoring across 7 existing services to use a new pagination utility demands consistency checks. Complex control flows (burst/rest/backoff cycling, HTTP 410 retry logic, batch processing with concurrency caps, deduplication across reset boundaries) and heterogeneous changes (schemas, migrations, OpenAPI contract, component translation logic) compound the effort.
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/openchoreo-client-node/openapi/openchoreo-api.yaml (1)
2829-2900: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Trait pagination still advertises the old response contract.

These two operations now accept `limit`/`continue`, but their responses still expose top-level `totalCount`, `page`, and `pageSize` instead of `data.metadata`, and they never narrow `items` to `ComponentTraitResponse`. That leaves the trait endpoints out of sync with the new `ListResponse` shape used everywhere else.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/openchoreo-client-node/openapi/openchoreo-api.yaml` around lines 2829 - 2900, The response schemas for the trait list and updateComponentTraits operations still expose top-level totalCount/page/pageSize and don't narrow items to ComponentTraitResponse; update both responses so "data" follows the centralized ListResponse shape (i.e., move pagination into data.metadata and remove top-level totalCount/page/pageSize) and ensure data.items is narrowed to components/schemas/ComponentTraitResponse — do this by replacing the current allOf block under each 200 response (which currently merges APIResponse and an object with top-level totalCount/page/pageSize) with an allOf that merges APIResponse and an object where "data" is either $ref: '#/components/schemas/ListResponse' with an inline override to set data.properties.items.items.$ref to '#/components/schemas/ComponentTraitResponse' (or an equivalent allOf that composes ListResponse and then constrains items), and remove the stray top-level pagination properties; apply the same change to the list operation and the updateComponentTraits operation.plugins/catalog-backend-module-openchoreo/src/provider/OpenChoreoEntityProvider.ts (1)
227-251: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Critical: Variable `compData` used outside its scope.

The `compData` variable is declared inside the try block (lines 229-240) but is referenced after the catch block (lines 248-251). This will cause a `ReferenceError` at runtime because `compData` is not accessible outside its declaring block.

The try-catch structure appears to be incomplete - the component processing logic should be inside the try block, not after it.
Proposed fix
```diff
   // Get components for the project and create Component entities
   try {
     const {
       data: compData,
       error: compError,
       response: compResponse,
     } = await client.GET(
       '/orgs/{orgName}/projects/{projectName}/components',
       {
         params: {
           path: { orgName: org.name!, projectName: project.name! },
         },
       },
     );
-  } catch (error) {
-    this.logger.warn(
-      `Failed to fetch components for project ${project.name}: ${error}`,
-    );
-    continue;
-  }
-  const components =
-    compData?.success && compData?.data?.items
-      ? compData.data.items
-      : [];
+    if (compError || !compResponse.ok) {
+      this.logger.warn(
+        `Failed to fetch components for project ${project.name}: ${compResponse.status}`,
+      );
+      continue;
+    }
+
+    const components =
+      compData?.success && compData?.data?.items
+        ? compData.data.items
+        : [];
+    // ... rest of component processing logic ...
+  } catch (error) {
+    this.logger.warn(
+      `Failed to fetch components for project ${project.name}: ${error}`,
+    );
+    continue;
+  }
```
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo/src/provider/OpenChoreoEntityProvider.ts` around lines 227 - 251, The code uses compData outside the try block causing a ReferenceError; to fix, either declare compData (and compError/compResponse if needed) before the try or move the component-processing logic (the calculation of components and any usage of compData) inside the try that wraps the client.GET call; update the block around client.GET in OpenChoreoEntityProvider (where client.GET is called and this.logger.warn is used) so that compData is in scope when computing components and handling results, and ensure the catch still logs the error via this.logger.warn and continues.
🧹 Nitpick comments (23)
plugins/catalog-backend-module-openchoreo-incremental/migrations/20221116073152_init.js (1)
63-65: ⚡ Quick win

`last_error` is defined as `string` (VARCHAR 255) but expanded to `TEXT` in a subsequent migration; define it as `text` directly to avoid the two-step.

Error messages and stack traces routinely exceed 255 characters. While a follow-up migration (20240110000003_expand_last_error_field.js) widens the column to `TEXT`, defining it correctly upfront removes the dependency and eliminates the risk of silent truncation on MySQL (or a constraint error on PostgreSQL) if the expand migration ever fails to run or is applied out of order.

♻️ Proposed fix
```diff
-      table
-        .string('last_error')
-        .comment('records any error that occurred in the previous burst attempt');
+      table
+        .text('last_error')
+        .comment('records any error that occurred in the previous burst attempt');
```
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/migrations/20221116073152_init.js` around lines 63 - 65, The migration defines the column last_error using table.string('last_error') which creates a VARCHAR(255); change it to table.text('last_error') in the migration (migration 20221116073152_init.js where last_error is declared) so the column is created as TEXT initially and avoids a later expansion migration; update the table.comment(...) call to remain unchanged and ensure any rollback/alter logic (if present) matches the TEXT type.plugins/openchoreo-common/src/utils/pagination.ts (1)
71-86: 💤 Low value

Consider consolidating the duplicate token-advancement checks.

The check at lines 72-79 validates that the API's `continue` token differs from the token we just sent. The check at lines 81-86 compares `continueToken` (current request) against `previousToken` (previous request), but given the code flow where `previousToken` is assigned after these checks (line 89), this second check would only trigger in edge cases not covered by the first. Both produce identical error messages, making debugging harder if either fires.

If you intend to detect cyclic tokens (e.g., A → B → A), a `Set<string>` of seen tokens would be more robust. Otherwise, consider removing the second check or differentiating the error messages.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/openchoreo-common/src/utils/pagination.ts` around lines 71 - 86, Duplicate token-advancement checks should be consolidated and made robust: introduce a Set<string> (e.g., seenTokens) in the pagination routine and before sending/accepting tokens, check if the incoming response.metadata?.continue or continueToken is already in seenTokens and throw a clear cyclic-token error (e.g., "Pagination token repeated - cyclic pagination detected"); also keep a simpler check that response.metadata?.continue equals the token we just sent (continueToken) but use a distinct message (e.g., "API returned same continue token we supplied"); remove the redundant previousToken === continueToken check or replace it by adding the previousToken into seenTokens when advancing so repeated tokens are reliably detected via the Set; update references to response.metadata?.continue, continueToken, previousToken, and ensure seenTokens is updated each iteration.plugins/openchoreo-backend/src/services/EnvironmentService/EnvironmentInfoService.ts (1)
216-224: 💤 Low value

Redundant type cast at line 224.

`environmentsList` is already cast to `ModelsEnvironment[]` at line 216. The second cast at line 224 is unnecessary.

```diff
-    const environments = environmentsList as ModelsEnvironment[];
+    const environments = environmentsList;
```
Verify each finding against the current code and only fix it if needed. In `@plugins/openchoreo-backend/src/services/EnvironmentService/EnvironmentInfoService.ts` around lines 216 - 224, The variable environmentsList is already typed as ModelsEnvironment[] when assigned, so remove the redundant cast when assigning to environments; update the code in EnvironmentInfoService.ts to either assign environments = environmentsList (no cast) or simply use environmentsList directly where environments is used, ensuring references to environmentsList remain typed as ModelsEnvironment[] and no duplicate cast remains.plugins/catalog-backend-module-openchoreo-incremental/src/config.d.ts (1)
21-21: 💤 Low value

Unused import in declaration file.

The `import { z } from 'zod'` is not used in this declaration file since only `import('zod').ZodTypeAny` is referenced.

🧹 Remove unused import

```diff
-import { z } from 'zod';
+// Zod types are referenced via import('zod').ZodTypeAny
```
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/config.d.ts` at line 21, The declaration file contains an unused import "import { z } from 'zod'" which should be removed; edit the declaration file to delete the unused import statement and keep using the inline type reference import('zod').ZodTypeAny where needed (no other code changes), ensuring only active references remain and there are no lingering unused symbols like z in the file.plugins/catalog-backend-module-openchoreo/src/converters/CtdToTemplateConverter.ts (1)
325-353: 💤 Low value

Redundant property constraint inside conditional `then` block.

Lines 330 and 345 include `useBuiltInCI: { const: true/false }` inside the `then.properties`, which duplicates the condition in the `if` block. This is unnecessary since the `if` condition already constrains the value.

🧹 Remove redundant constraints
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo/src/converters/CtdToTemplateConverter.ts` around lines 325 - 353, The schema's conditional branches under the allOf array redundantly repeat the useBuiltInCI const constraint inside the then.properties (currently alongside workflow_name/workflow_parameters and external_ci_note); remove the duplicate useBuiltInCI: { const: true } and useBuiltInCI: { const: false } entries from the then.blocks so the if conditions remain the single source of that constraint and only include the branch-specific properties (workflow_name, workflow_parameters, external_ci_note) in the then.properties.packages/app/src/scaffolder/TraitsField/TraitsFieldExtension.tsx (2)
177-222: ⚡ Quick win

Consider extracting shared fetch logic to reduce duplication.

The `handleLoadMoreTraits` function duplicates most of the logic from `fetchTraits` inside `useEffect`. Both define the same `extractOrgName` helper and perform identical API calls with similar error handling.

♻️ Proposed refactor to extract shared logic
```diff
+  // Shared helper - extract outside useEffect
+  const extractOrgName = (fullOrgName: string): string => {
+    const parts = fullOrgName.split('/');
+    return parts[parts.length - 1];
+  };
+
+  // Shared fetch function
+  const fetchTraitsPage = async (cursor?: string): Promise<{
+    items: TraitListItem[];
+    hasMore: boolean;
+    nextCursor?: string;
+  } | null> => {
+    if (!organizationName) return null;
+
+    const baseUrl = await discoveryApi.getBaseUrl('openchoreo');
+    const orgName = extractOrgName(organizationName);
+    const url = new URL(`${baseUrl}/traits`);
+    url.searchParams.set('organizationName', orgName);
+    url.searchParams.set('limit', '100');
+    if (cursor) {
+      url.searchParams.set('continue', cursor);
+    }
+
+    const response = await fetchApi.fetch(url.toString());
+    if (!response.ok) {
+      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
+    }
+
+    const result = await response.json();
+    if (result.success) {
+      return {
+        items: result.data.items || [],
+        hasMore: result.data.metadata?.hasMore === true,
+        nextCursor: result.data.metadata?.continue,
+      };
+    }
+    return null;
+  };
```
Verify each finding against the current code and only fix it if needed. In `@packages/app/src/scaffolder/TraitsField/TraitsFieldExtension.tsx` around lines 177 - 222, handleLoadMoreTraits duplicates the fetch logic from the fetchTraits defined in the useEffect (including extractOrgName, URL construction, fetch and error handling); extract that shared behavior into a single helper (e.g., fetchTraitsPage or fetchTraitsFromApi) that accepts parameters like organizationName, limit, and continueToken, reuse extractOrgName inside that helper, and then call it from both fetchTraits (useEffect) and handleLoadMoreTraits to set state (setAvailableTraits, setHasMoreTraits, setContinueToken, setError, setLoadingMoreTraits) based on the returned result or thrown error.
355-372: 💤 Low value

MenuItem onClick inside Select may have unexpected UX.

Using `onClick` on a `MenuItem` inside a `Select` is non-standard. When clicked, the `Select` may also try to treat it as a selection (triggering `onChange`), though the empty value prevents actual selection. Consider placing the "Load more" button outside the dropdown or using a different pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/app/src/scaffolder/TraitsField/TraitsFieldExtension.tsx` around lines 355 - 372, The MenuItem inside the Select should not trigger the Select's selection behavior; update the "Load more traits" control so clicks don't bubble into the Select - replace the current MenuItem onClick usage by preventing the selection event and invoking handleLoadMoreTraits directly (for example change the MenuItem to a non-selectable container/component and call handleLoadMoreTraits from an onMouseDown handler that does event.stopPropagation() and event.preventDefault(), or render a ListSubheader/div with an explicit button that calls handleLoadMoreTraits), keep the loadingMoreTraits/CircularProgress UI and the hasMoreTraits/loadingTraits guards, and ensure the symbol names handleLoadMoreTraits and loadingMoreTraits remain used so behavior is unchanged.plugins/catalog-backend-module-openchoreo-incremental/src/utils/ConfigValidator.ts (2)
160-166: 💤 Low value

Magic number in chunk size validation.

The estimate of `burstLength * 10` (10 entities/second) is arbitrary. Consider documenting this assumption or making it configurable.

📝 Add explanatory comment
```diff
   // Validate chunk size vs burst length
-  const maxEntitiesPerBurst = incremental.burstLength * 10; // Rough estimate
+  // Estimate ~10 entities processed per second based on typical API latency
+  const ESTIMATED_ENTITIES_PER_SECOND = 10;
+  const maxEntitiesPerBurst = incremental.burstLength * ESTIMATED_ENTITIES_PER_SECOND;
```
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/utils/ConfigValidator.ts` around lines 160 - 166, The validation uses a hardcoded rate estimate (maxEntitiesPerBurst = incremental.burstLength * 10) which is a magic number; update ConfigValidator (the chunkSize vs burstLength check referencing incremental.chunkSize, incremental.burstLength, and maxEntitiesPerBurst) to either (a) extract the “10 entities/sec” value into a named constant with a descriptive comment or (b) make it a configurable parameter (e.g., rateEstimate or entitiesPerSecond) that’s read from config and validated, and update the logger.warn message to mention the configurable rate assumption so the rationale is documented for future readers/operators.
118-118: ⚡ Quick win

TODO: Backoff array parsing not implemented.

The backoff configuration is hardcoded to `undefined` with a TODO comment. This means users cannot configure custom backoff intervals via the config file.

Do you want me to generate the implementation for parsing the backoff array from configuration, or would you prefer to track this as a separate issue?
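A hypothetical shape for that parsing, under the assumption that the raw config value arrives as a plain array; the plugin's real code would read it through Backstage's Config API rather than a bare `unknown` value.

```typescript
// Parses a configured backoff array into validated positive numbers,
// preserving the previous default (undefined) when the key is absent.
function parseBackoff(raw: unknown): number[] | undefined {
  if (raw === undefined) return undefined;
  if (!Array.isArray(raw)) {
    throw new Error('backoff must be an array');
  }
  return raw.map((v, i) => {
    const n = typeof v === 'string' ? Number(v) : v;
    if (typeof n !== 'number' || !Number.isFinite(n) || n <= 0) {
      throw new Error(
        `backoff[${i}] must be a positive number, got ${JSON.stringify(v)}`,
      );
    }
    return n;
  });
}
```

Accepting numeric strings mirrors how duration-like values often arrive from YAML; invalid entries fail fast with an index in the message so operators can locate the bad element.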
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/utils/ConfigValidator.ts` at line 118, The backoff field is stubbed to undefined in ConfigValidator.ts; update the validator (e.g., in the ConfigValidator class and its validateConfig / validateRetryPolicy function) to read the backoff value from the incoming config, ensure it's an array, validate each element is a finite positive number (or numeric string coercible to number), convert/normalize items to numbers, and assign that array to the backoff property (or push a validation error if invalid). Make sure to preserve existing error reporting flow (use the same errors collection/throwing behavior in ConfigValidator) and keep the previous default (undefined) when backoff is absent.plugins/catalog-backend-module-openchoreo-incremental/src/utils/ApiErrorHandler.ts (1)
133-141: ⚖️ Poor tradeoff

HTTP status detection via string matching is fragile.

The retry logic detects HTTP status codes by checking whether `message.includes('http 429')`. This relies on the error message format being consistent (e.g., "HTTP 429" vs "http 429" vs "429"). Consider accepting an HTTP status code parameter or checking for a `status` property on the error object.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/utils/ApiErrorHandler.ts` around lines 133 - 141, The current retry detection in ApiErrorHandler relies on case-sensitive substring matches of the error message (e.g., message.includes('http 429')), which is fragile; update the logic in the function handling retries (referenced by ApiErrorHandler and its retry decision block) to first check for a numeric status property on the error object (e.g., error.status or error.response?.status) and use that for 429/502/503/504 checks, then fall back to a robust parsing of the message (case-insensitive and matching whole status codes via regex like /\b429\b/) only if no status property exists; ensure you update any callers to pass a status when available.plugins/catalog-backend-module-openchoreo/src/provider/OpenChoreoEntityProvider.ts (2)
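For illustration, status-first detection with a message fallback could look like the following; the error shape is an assumption here, since the plugin's actual error type may differ.

```typescript
const RETRYABLE_STATUSES = new Set([429, 502, 503, 504]);

// Prefers a structured status property; falls back to case-insensitive,
// whole-number matching in the message only when no status is available.
function isRetryable(error: unknown): boolean {
  const status =
    (error as { status?: number }).status ??
    (error as { response?: { status?: number } }).response?.status;
  if (typeof status === 'number') return RETRYABLE_STATUSES.has(status);

  const message = error instanceof Error ? error.message : String(error);
  // \b boundaries prevent false positives like "id 5020" matching 502.
  return /\b(429|502|503|504)\b/.test(message);
}
```

Keeping the retryable set in one place also makes it easy to extend (e.g., adding 408) without touching the detection logic.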
530-563: 💤 Low value

Inconsistent 410 handling across fetch methods.

`fetchAllOrganizations` includes explicit 410 Gone handling with a warning log (lines 506-510), but `fetchAllEnvironments` and other fetch methods don't have this handling. For consistency, either all methods should handle 410 similarly, or none should if `fetchAllResources` handles it internally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo/src/provider/OpenChoreoEntityProvider.ts` around lines 530 - 563, fetchAllEnvironments currently throws on any non-OK response but lacks the 410 Gone special-case handling used in fetchAllOrganizations; update fetchAllEnvironments to detect response.status === 410 and call the same warning behavior as fetchAllOrganizations (log a warning and return an empty list/stop iteration) instead of throwing, mirroring the logic used there so all fetch* methods behave consistently with fetchAllResources expectations.
644-678: ⚡ Quick win
`fetchAllComponents` method is defined but not used.

The `fetchAllComponents` private method is implemented but never called. The `run()` method still fetches components inline (lines 228-240) without using this paginated helper. This appears to be an incomplete refactoring - either use this method for consistency with other resources, or remove it to avoid dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo/src/provider/OpenChoreoEntityProvider.ts` around lines 644 - 678, The private helper fetchAllComponents is implemented but unused; update run() to use it (or remove it). Locate the run method and replace the inline paginated component fetch logic with a call to await this.fetchAllComponents(client, orgName, projectName), then map or flatten the returned ModelsComponent[] items as the code expects (ensure any existing variables that used the inline result now use the helper's return value), or if you prefer to remove dead code, delete the fetchAllComponents method entirely; reference symbols: fetchAllComponents and run.

plugins/catalog-backend-module-openchoreo-incremental/src/providers/OpenChoreoIncrementalEntityProvider.ts (1)
489-497: ⚡ Quick win

Direct mutation of cursor parameter.
Lines 490-493 directly mutate the `cursor` parameter (`cursor.projectQueue = ...`, `cursor.projectApiCursor = undefined`). This is a side effect that can cause issues if the caller retains a reference to the original cursor. Consider creating a new cursor object with the updated values instead.
Proposed fix
```diff
- // Clear the existing project queue for this org and rebuild it
- cursor.projectQueue = cursor.projectQueue.filter(
-   p => p.org !== currentOrg,
- );
- cursor.projectApiCursor = undefined;
+ // Create updated cursor without mutating the original
+ const updatedProjectQueue = cursor.projectQueue.filter(
+   p => p.org !== currentOrg,
+ );
+ // Use the updated queue in the return statement instead
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/providers/OpenChoreoIncrementalEntityProvider.ts` around lines 489 - 497, The code currently mutates the incoming cursor (cursor.projectQueue = ..., cursor.projectApiCursor = undefined); instead, create a new cursor object based on the existing one (e.g., newCursor = { ...cursor, projectQueue: cursor.projectQueue.filter(p => p.org !== currentOrg), projectApiCursor: undefined }) and use that newCursor for subsequent logic (and where you previously used cursor); update references in this block (including any use of restartResult.data/response/error that follow) to use the new cursor variable so the original cursor parameter is not mutated.

plugins/catalog-backend-module-openchoreo-incremental/dev/index.ts (1)
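The non-mutating cursor update from the finding above can be sketched like this. The cursor shape is simplified from the provider's actual type; the function name is illustrative.

```typescript
// Simplified cursor shape for illustration only.
interface Cursor {
  projectQueue: Array<{ org: string; project: string }>;
  projectApiCursor?: string;
}

// Build a fresh cursor with the org's projects cleared, leaving the caller's
// reference untouched.
export function withOrgProjectsCleared(
  cursor: Cursor,
  currentOrg: string,
): Cursor {
  return {
    ...cursor,
    projectQueue: cursor.projectQueue.filter(p => p.org !== currentOrg),
    projectApiCursor: undefined,
  };
}
```

Because the spread copies every other field, callers that kept a reference to the old cursor (for retries or logging) still see a consistent snapshot.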
43-49: 💤 Low value

Replace `logger: any` with the concrete `LoggerService` type.

`LoggerService` is already available from the `@backstage/backend-plugin-api` import on lines 23-26, so no new import is needed.

♻️ Proposed fix
```diff
 import {
   coreServices,
   createBackendModule,
+  LoggerService,
 } from '@backstage/backend-plugin-api';

 // ...

   async init({
     logger,
     providers,
   }: {
-    logger: any;
+    logger: LoggerService;
     providers: OpenChoreoIncrementalProviderExtensionPoint;
   }) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/dev/index.ts` around lines 43 - 49, The init method's parameter currently types logger as any; change the parameter type to LoggerService (from `@backstage/backend-plugin-api` already imported) so the signature becomes init({ logger, providers }: { logger: LoggerService; providers: OpenChoreoIncrementalProviderExtensionPoint }) — update the parameter type only, preserving the function name init and the providers type.

plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.test.ts (2)
91-94: ⚡ Quick win

Awaiting `waitForReady()` directly is more reliable than inspecting a side-effect flag.

The current pattern sets `resolved` via a `.then()` callback on `waitForReady()` and then immediately checks the flag after `await wrapped2.connect()`. Whether the callback has already executed depends on the depth of the internal promise chain inside `connect()`: a race across microtask boundaries that can make this assertion flaky.

♻️ Proposed fix
```diff
- let resolved = false;
- providers.waitForReady().then(() => {
-   resolved = true;
- });
+ const readyPromise = providers.waitForReady();
  // ... connect providers ...
  await wrapped2.connect({} as any);
- expect(resolved).toBe(true);
+ await expect(readyPromise).resolves.toBeUndefined();
```

Also applies to: 110-112
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.test.ts` around lines 91 - 94, Replace the flaky side-effect check by awaiting the promise returned by providers.waitForReady() directly: instead of setting a resolved flag in providers.waitForReady().then(...) and asserting the flag after await wrapped2.connect(), call await providers.waitForReady() after await wrapped2.connect() to ensure the readiness promise has settled; apply the same change for the other occurrence that mirrors lines 110-112 so both tests await providers.waitForReady() rather than relying on a boolean set in a .then() handler.
30-30: ⚡ Quick win

SQLite tests are also silently skipped in CI.
`describe.skip` is used for the whole suite when `process.env.CI` is truthy. SQLite tests don't require external infrastructure and could safely run in CI, so the blanket skip leaves a coverage gap for the SQLite code path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.test.ts` at line 30, The test suite is being globally skipped in CI via describeOrSkip which hides SQLite-only tests; change the logic around the describeOrSkip symbol in WrapperProviders.test.ts so CI does not skip suites that exercise the SQLite code path (e.g., only apply describe.skip when CI AND the suite requires external infra, or add a separate flag to allow SQLite in CI). Update the suite selection so SQLite tests run unskipped in CI while still skipping suites that need external services.

plugins/catalog-backend-module-openchoreo-incremental/src/database/migrations/20240110000003_expand_last_error_field.ts (1)
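The narrower CI skip from the SQLite finding can be sketched as a pure decision function. The database-id naming is an assumption for illustration, not the test file's actual constants.

```typescript
// Skip a suite in CI only when it needs external infrastructure; SQLite
// suites keep running so their code path stays covered.
export function shouldSkipInCi(databaseId: string, isCi: boolean): boolean {
  const needsExternalInfra = databaseId.startsWith('POSTGRES');
  return isCi && needsExternalInfra;
}

// Usage inside the suite: pick describe or describe.skip per database.
// const describeOrSkip = shouldSkipInCi(db, !!process.env.CI)
//   ? describe.skip
//   : describe;
```

Factoring the condition out of the `describe` call also makes the skip policy unit-testable on its own.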
24-37: ⚡ Quick win

Explicitly chain `.nullable()` before `.alter()` for cross-engine safety.

Knex's `.alter()` behavior differs per dialect: PostgreSQL only changes the type, but MySQL rewrites the entire column definition with whatever constraints Knex generates. Without an explicit `.nullable()`, the generated DDL depends on Knex's type defaults, which can vary between versions. Both `up` and `down` should be hardened:

🛡️ Proposed fix
```diff
 export async function up(knex: Knex): Promise<void> {
   await knex.schema.alterTable('ingestions', table => {
-    table.text('last_error').alter();
+    table.text('last_error').nullable().alter();
   });
 }

 export async function down(knex: Knex): Promise<void> {
   await knex.schema.alterTable('ingestions', table => {
-    table.string('last_error', 255).alter();
+    table.string('last_error', 255).nullable().alter();
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/database/migrations/20240110000003_expand_last_error_field.ts` around lines 24 - 37, The migration should explicitly set nullable before calling alter to avoid dialect-specific defaults: in the up function change the alter call on the 'last_error' column within the alterTable callback in export async function up(knex: Knex) to mark the text column as nullable before calling .alter(), and in the down function do the same on the string('last_error', 255) alteration; locate the table.text('last_error') and table.string('last_error', 255) calls and chain .nullable() immediately before .alter() so both export async function up and export async function down explicitly specify nullability.

plugins/catalog-backend-module-openchoreo-incremental/src/module/catalogModuleIncrementalIngestionEntityProvider.ts (1)
54-80: ⚡ Quick win

JSDoc example contains incomplete syntax.
The example code block has incomplete property values (`burstInterval:,`, `burstLength:,`, `restLength: ,`) and empty method bodies. This will confuse developers trying to use the extension point.

📝 Suggested fix for the example
````diff
 * @example
 *
 * ```ts
 backend.add(createBackendModule({
   pluginId: 'catalog',
   moduleId: 'my-openchoreo-incremental-provider',
   register(env) {
     env.registerInit({
       deps: {
         extension: openchoreoIncrementalProvidersExtensionPoint,
       },
       async init({ extension }) {
         extension.addProvider({
           options: {
-            burstInterval:,
-            burstLength:,
-            restLength: ,
+            burstInterval: { seconds: 30 },
+            burstLength: { seconds: 10 },
+            restLength: { minutes: 30 },
           },
           provider: {
-            next(context, cursor) {
-            },
+            getProviderName() { return 'my-provider'; },
+            async next(context, cursor) {
+              // Return entities and cursor
+              return { done: false, entities: [], cursor: newCursor };
+            },
           },
         });
       },
     });
   },
 }))
 * ```
````
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/module/catalogModuleIncrementalIngestionEntityProvider.ts` around lines 54 - 80, The JSDoc example has incomplete option values and empty provider methods; update the example inside the comment where extension.addProvider is used to supply realistic option objects for burstInterval, burstLength, and restLength (e.g., duration objects), and implement the provider shape by adding getProviderName() and a working async next(context, cursor) that returns a result object like { done, entities, cursor } instead of empty bodies so consumers can see the expected return contract for openchoreoIncrementalProvidersExtensionPoint and extension.addProvider.

plugins/catalog-backend-module-openchoreo-incremental/src/database/OpenChoreoIncrementalIngestionDatabaseManager.ts (2)
1232-1240: 💤 Low value

Verbose logging inside batch insert loop.
Logging at `info` level for each batch during `createMarkEntities` could generate significant log volume during large ingestions. Consider using `debug` level or logging only a summary after all batches complete.

♻️ Suggested change
```diff
-      this.logger.info(
+      this.logger.debug(
         `Batch ${
           Math.floor(i / MARK_ENTITY_INSERT_BATCH_SIZE) + 1
         }/${Math.ceil(
           newRefs.length / MARK_ENTITY_INSERT_BATCH_SIZE,
         )} completed: inserted ${
           chunk.length
         } entities for mark ${markId}`,
       );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/database/OpenChoreoIncrementalIngestionDatabaseManager.ts` around lines 1232 - 1240, The current per-batch info logging in createMarkEntities (using this.logger.info with MARK_ENTITY_INSERT_BATCH_SIZE and markId) is too verbose for large ingestions; change those per-batch logs to this.logger.debug (or remove them) and instead emit a single summary info log after all batches complete that reports total inserted entities for the markId and number of batches processed; update any tests/consumers that expect the per-batch info accordingly.
768-777: ⚡ Quick win

O(n²) complexity in orphan detection loop.
The `removed.find()` call inside the loop creates O(n²) complexity. For large catalogs with many orphaned entities, this could become slow. Consider using a Set for deduplication.

♻️ Suggested optimization
```diff
+    const removedSet = new Set<string>();
     for (const entity of filteredCatalogEntities) {
       if (!currentEntityRefs.has(entity.entity_ref)) {
-        if (!removed.find(e => e.entityRef === entity.entity_ref)) {
+        if (!removedSet.has(entity.entity_ref)) {
           this.logger.info(
             `computeRemoved: Found orphaned catalog entity ${entity.entity_ref} not in current or previous ingestion, marking for removal`,
           );
           removed.push({ entityRef: entity.entity_ref });
+          removedSet.add(entity.entity_ref);
         }
       }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/database/OpenChoreoIncrementalIngestionDatabaseManager.ts` around lines 768 - 777, Loop uses removed.find(...) causing O(n²); replace that dedupe with a Set: create a Set (e.g., removedRefs) that tracks already-marked entity refs, check currentEntityRefs.has(entity.entity_ref) and if not and removedRefs.has(entity.entity_ref) is false then log and push to removed and add to removedRefs; update references to the removed entries (entityRef) to match the Set contents and keep the rest of computeRemoved logic unchanged (look for filteredCatalogEntities, currentEntityRefs, removed).

plugins/catalog-backend-module-openchoreo-incremental/src/router/routes.ts (1)
219-223: 💤 Low value

DELETE endpoint succeeds silently for non-existent providers.
Unlike other endpoints that return 404 for unknown providers, this endpoint always returns success. Consider validating the provider exists before purging, or at minimum returning metadata indicating whether any records were actually deleted.
♻️ Suggested improvement
```diff
     router.delete('/incremental/providers/:provider', async (req, res) => {
       const { provider } = req.params;
+      const providers = await this.manager.listProviders();
+      if (!providers.includes(provider)) {
+        res.status(404).json({
+          success: false,
+          error: `Provider '${provider}' not found`,
+        });
+        return;
+      }
       const result = await this.manager.purgeAndResetProvider(provider);
       res.json({ success: true, data: result });
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/router/routes.ts` around lines 219 - 223, The DELETE handler at router.delete('/incremental/providers/:provider') always returns success even for unknown providers; update the handler to first validate the provider exists (e.g., call a lookup method on this.manager such as getProvider or hasProvider) and if not found return a 404, otherwise call this.manager.purgeAndResetProvider(provider) and include metadata in the response (e.g., deletedCount/affectedRows or a boolean indicating anything was deleted) so callers can distinguish no-op vs successful purge; if purgeAndResetProvider already returns deletion info, surface that instead of unconditionally { success: true }.

plugins/catalog-backend-module-openchoreo-incremental/src/engine/OpenChoreoIncrementalIngestionEngine.ts (1)
146-158: ⚡ Quick win

String-based error type detection is fragile.
Comparing `(error as Error).message === 'CANCEL'` relies on exact string matching. If the error message changes or includes additional context, this check will fail silently and treat cancellations as regular errors (triggering backoff instead of cancel flow). Consider using a custom error class or error code property instead.
♻️ Suggested approach
```ts
// In a separate errors.ts or types.ts file:
export class IngestionCancelledError extends Error {
  readonly code = 'INGESTION_CANCELLED';
  constructor(message = 'CANCEL') {
    super(message);
    this.name = 'IngestionCancelledError';
  }
}

// Then in the catch block:
if (error instanceof IngestionCancelledError) {
  // handle cancellation
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/engine/OpenChoreoIncrementalIngestionEngine.ts` around lines 146 - 158, Replace fragile string matching of cancellation in the catch block by introducing a custom error class (e.g., IngestionCancelledError) with a distinct code (e.g., 'INGESTION_CANCELLED'), update any places that currently signal cancellation to throw that class instead of throwing an Error with message 'CANCEL', and change the catch logic in OpenChoreoIncrementalIngestionEngine (the try/catch around ingestionId handling) to use `error instanceof IngestionCancelledError` to call this.options.logger.info and await this.manager.setProviderCanceling(ingestionId, error.message); leaving all other errors to the existing backoff/error path.

plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.ts (1)
90-98: 💤 Low value
`adminRouter()` creates a new database manager on each call.

Each invocation of `adminRouter()` instantiates a new `OpenChoreoIncrementalIngestionDatabaseManager`. While this works correctly, if `adminRouter()` is called multiple times (even accidentally), it creates redundant manager instances. Consider caching the router or manager.

♻️ Suggested improvement
```diff
 export class WrapperProviders {
   private migrate: Promise<void> | undefined;
   private numberOfProvidersToConnect = 0;
   private readonly readySignal = createDeferred();
+  private cachedRouter: express.Router | undefined;

   // ...

   adminRouter(): express.Router {
+    if (this.cachedRouter) {
+      return this.cachedRouter;
+    }
-    return new IncrementalProviderRouter(
+    this.cachedRouter = new IncrementalProviderRouter(
       new OpenChoreoIncrementalIngestionDatabaseManager({
         client: this.options.client,
         logger: this.options.logger,
       }),
       this.options.logger,
     ).createRouter();
+    return this.cachedRouter;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.ts` around lines 90 - 98, The adminRouter() method currently constructs a new OpenChoreoIncrementalIngestionDatabaseManager and IncrementalProviderRouter on every call (via IncrementalProviderRouter(...).createRouter()), leading to redundant manager instances; change the class to cache and reuse a single instance (either store a private field like cachedManager or cachedRouter) and return that cached router on subsequent adminRouter() calls, creating the manager/router lazily on first call using the same this.options.client and this.options.logger so existing callers of adminRouter(), OpenChoreoIncrementalIngestionDatabaseManager, IncrementalProviderRouter, and createRouter continue to work but without repeated instantiation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b3ea51a9-a80e-45b6-bdf1-9aff78d064bc
⛔ Files ignored due to path filters (4)
- packages/openchoreo-client-node/src/generated/openchoreo/component-type.ts is excluded by !**/generated/**
- packages/openchoreo-client-node/src/generated/openchoreo/index.ts is excluded by !**/generated/**
- packages/openchoreo-client-node/src/generated/openchoreo/types.ts is excluded by !**/generated/**
- yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (57)
- app-config.local.yaml.example
- app-config.production.yaml
- app-config.yaml
- backstage-diff.md
- package.json
- packages/app/src/scaffolder/TraitsField/TraitsFieldExtension.tsx
- packages/backend/package.json
- packages/backend/src/index.ts
- packages/openchoreo-client-node/openapi/openchoreo-api.yaml
- plugins/catalog-backend-module-openchoreo-incremental/.eslintrc.js
- plugins/catalog-backend-module-openchoreo-incremental/README.md
- plugins/catalog-backend-module-openchoreo-incremental/dev/index.ts
- plugins/catalog-backend-module-openchoreo-incremental/migrations/20221116073152_init.js
- plugins/catalog-backend-module-openchoreo-incremental/migrations/20240110000003_expand_last_error_field.js
- plugins/catalog-backend-module-openchoreo-incremental/package.json
- plugins/catalog-backend-module-openchoreo-incremental/src/config.d.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/config.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/OpenChoreoIncrementalIngestionDatabaseManager.test.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/OpenChoreoIncrementalIngestionDatabaseManager.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/errors.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/migrations.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/migrations/20240110000001_add_performance_indexes.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/migrations/20240110000003_expand_last_error_field.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/database/tables.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/engine/OpenChoreoIncrementalIngestionEngine.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/index.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.test.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module/WrapperProviders.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module/catalogModuleIncrementalIngestionEntityProvider.test.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module/catalogModuleIncrementalIngestionEntityProvider.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module/index.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/module/openchoreoIncrementalProviderModule.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/openchoreoImmediateCatalogIncremental.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/providers/OpenChoreoIncrementalEntityProvider.test.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/providers/OpenChoreoIncrementalEntityProvider.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/providers/componentBatchProcessor.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/providers/entityTranslator.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/router/routes.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/types.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/utils/ApiErrorHandler.ts
- plugins/catalog-backend-module-openchoreo-incremental/src/utils/ConfigValidator.ts
- plugins/catalog-backend-module-openchoreo/src/converters/CtdToTemplateConverter.test.ts
- plugins/catalog-backend-module-openchoreo/src/converters/CtdToTemplateConverter.ts
- plugins/catalog-backend-module-openchoreo/src/provider/OpenChoreoEntityProvider.ts
- plugins/openchoreo-backend/src/router.ts
- plugins/openchoreo-backend/src/services/BuildService/BuildInfoService.ts
- plugins/openchoreo-backend/src/services/CellDiagramService/CellDiagramInfoService.ts
- plugins/openchoreo-backend/src/services/DashboardService/DashboardInfoService.ts
- plugins/openchoreo-backend/src/services/EnvironmentService/EnvironmentInfoService.ts
- plugins/openchoreo-backend/src/services/TraitService/TraitInfoService.ts
- plugins/openchoreo-common/package.json
- plugins/openchoreo-common/src/constants.ts
- plugins/openchoreo-common/src/index.ts
- plugins/openchoreo-common/src/utils/pagination.ts
- plugins/platform-engineer-core-backend/package.json
- plugins/platform-engineer-core-backend/src/services/PlatformEnvironmentService.ts
```diff
   "lint-staged": {
     "*.{js,jsx,ts,tsx,mjs,cjs}": [
-      "eslint --fix",
-      "prettier --write"
+      "yarn backstage-cli package lint --fix",
+      "yarn prettier --write"
     ],
     "*.{json,md}": [
-      "prettier --write"
+      "yarn prettier --write"
     ]
```
backstage-cli package lint is designed for directories and runs in the package CWD — calling it from the monorepo root with file paths forwarded by lint-staged may silently apply the wrong ESLint config.
The backstage-cli package lint signature is [directories...], not individual file paths. lint-staged always passes a list of all staged files as arguments to the task, so the actual invocation becomes yarn backstage-cli package lint --fix plugins/foo/src/bar.ts from the monorepo root. When called from the root, process.cwd() is the monorepo root, and the CLI may resolve the ESLint config from the root rather than from each staged file's own plugin directory. This means the newly added plugins/catalog-backend-module-openchoreo-incremental/.eslintrc.js (and any other per-plugin ESLint configs) would be bypassed during pre-commit hooks.
The lint-staged recommended pattern for monorepos is to install lint-staged at the root but add separate configuration files in each package, so that tasks run in the correct working directory for each package.
Consider either:

- Using `backstage-cli repo lint --fix` at the root (which is monorepo-aware), or
- Adding per-package `lint-staged` config files to each plugin.
🔧 Option 1: Use the repo-scoped lint command (simplest fix)
```diff
 "lint-staged": {
   "*.{js,jsx,ts,tsx,mjs,cjs}": [
-    "yarn backstage-cli package lint --fix",
+    "yarn backstage-cli repo lint --fix",
     "yarn prettier --write"
   ],
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
"lint-staged": {
  "*.{js,jsx,ts,tsx,mjs,cjs}": [
    "yarn backstage-cli repo lint --fix",
    "yarn prettier --write"
  ],
  "*.{json,md}": [
    "yarn prettier --write"
  ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 53 - 60, The lint-staged config currently invokes
"backstage-cli package lint --fix" from the monorepo root which passes file
paths and causes ESLint to resolve from the root instead of each package; to
fix, update the lint-staged task(s) (the entries under "lint-staged" that call
"backstage-cli package lint --fix") to either call the repo-aware command "yarn
backstage-cli repo lint --fix" so linting runs with correct per-package
resolution, or implement per-package lint-staged configs (add a lint-staged
section or .lintstagedrc to each plugin/package) so "backstage-cli package lint
--fix" runs with the package CWD. Ensure you update the command strings that
currently reference "backstage-cli package lint --fix" accordingly.
🧩 Analysis chain
🌐 Web query:
Does `@backstage/cli` version 0.34.3's `package lint` subcommand accept and forward extra file path arguments to ESLint, suitable for use with lint-staged from a monorepo root?
💡 Result:
Yes, the @backstage/cli version 0.34.3 package lint subcommand accepts extra file path arguments and forwards them to ESLint. This makes it suitable for use with lint-staged from a monorepo root. The official documentation shows the usage as backstage-cli package lint [options] [directories...] Lint a package, explicitly indicating that directory/file paths can be provided as positional arguments after options. It also states: "default to linting the entire directory if no specific files are listed," confirming that when paths are provided, they are used instead, consistent with standard ESLint behavior where extra arguments are file paths passed to ESLint. In a monorepo setup, lint-staged at the root can invoke backstage-cli package lint --cwd packages/my-package (using lint-staged's cwd and path-relative features), where staged files are relative to the package directory. The CLI changes cwd to the package and runs ESLint on those specific files. The Backstage monorepo itself uses yarn lint-staged in its husky pre-commit hook, supporting this workflow. No breaking changes affecting this behavior appear in changelogs up to recent versions (e.g., 0.36.0 exists, confirming 0.34.3 is stable).
Citations:
- 1: https://backstage.io/docs/tooling/cli/commands/
- 2: https://github.com/backstage/backstage/blob/master/packages/cli/cli-report.md
- 3: https://backstage.io/docs/tooling/package-metadata/
- 4: https://www.npmjs.com/package/@backstage/cli
- 5: https://github.com/backstage/backstage/blob/master/.husky/pre-commit
Add --cwd option to specify the package directory when running lint-staged from monorepo root.
The backstage-cli package lint command accepts file paths from lint-staged; however, without the --cwd option pointing to the specific package directory, the command runs from the monorepo root and may lint against the wrong (root) ESLint config. Update the configuration to:
```json
"lint-staged": {
  "*.{js,jsx,ts,tsx,mjs,cjs}": [
    "yarn backstage-cli package lint --cwd . --fix",
    "yarn prettier --write"
  ],
```
For workspace packages referenced elsewhere, use the appropriate --cwd path to their package directory.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 53 - 60, Update the lint-staged commands so the
backstage-cli package lint runs from the package directory by adding the --cwd
option; specifically modify the "*.{js,jsx,ts,tsx,mjs,cjs}" array entry that
currently runs "yarn backstage-cli package lint --fix" to include "--cwd .
--fix" (i.e. "yarn backstage-cli package lint --cwd . --fix") so ESLint resolves
the package’s config, and for any workspace packages referenced elsewhere use
the appropriate "--cwd <package-dir>" value for those package-specific
lint-staged entries.
```yaml
  parameters:
    limitParam:
      name: limit
      in: query
      required: false
      description: Maximum number of items to return per page
      schema:
        type: integer
        minimum: 0
        maximum: 512
        default: 100
    continueParam:
      name: continue
      in: query
      required: false
      schema:
        type: string
      description: Opaque continuation token from previous response

  # Authorization types
```
components.parameters is opened before the schema section is finished.
From SubjectType onward, these definitions are now nested under components.parameters instead of components.schemas. That makes later refs like #/components/schemas/SubjectType invalid and will break spec validation / client generation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/openchoreo-client-node/openapi/openchoreo-api.yaml` around lines
1419 - 1438, The OpenAPI fragment accidentally closes into components.parameters
before the schemas block, causing definitions like SubjectType to be nested
under components.parameters instead of components.schemas; move the parameters
block (limitParam and continueParam) so that components.parameters is a sibling
of components.schemas (not opened before it), then ensure the SubjectType and
subsequent type definitions are placed under components.schemas so refs like
`#/components/schemas/SubjectType` resolve correctly.
```js
exports.down = async function down(knex) {
  await knex.schema.alterTable('ingestions', table => {
    // Revert back to VARCHAR(255)
    // Note: This may truncate existing error messages longer than 255 characters
    table.string('last_error', 255).alter();
```
Make the rollback explicit for oversized last_error values.
Changing the column back to VARCHAR(255) here will fail once any row contains more than 255 characters; PostgreSQL will not truncate those values for you. The current down path becomes non-runnable precisely after this migration starts being useful.
Possible fix
```diff
 exports.down = async function down(knex) {
-  await knex.schema.alterTable('ingestions', table => {
-    // Revert back to VARCHAR(255)
-    // Note: This may truncate existing error messages longer than 255 characters
-    table.string('last_error', 255).alter();
-  });
+  await knex.raw(`
+    ALTER TABLE ingestions
+    ALTER COLUMN last_error TYPE VARCHAR(255)
+    USING LEFT(last_error, 255)
+  `);
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
exports.down = async function down(knex) {
  await knex.raw(`
    ALTER TABLE ingestions
    ALTER COLUMN last_error TYPE VARCHAR(255)
    USING SUBSTRING(last_error, 1, 255)
  `);
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@plugins/catalog-backend-module-openchoreo-incremental/migrations/20240110000003_expand_last_error_field.js`
around lines 38 - 42, The down migration must truncate any values >255 before
altering the column to avoid PostgreSQL errors; in exports.down (the down
function) run a pre-alter UPDATE to shorten oversized values (e.g. via knex.raw
with LEFT/substring or UPDATE ingestions SET last_error = LEFT(last_error, 255)
WHERE char_length(last_error) > 255), then call
knex.schema.alterTable('ingestions', table => table.string('last_error',
255).alter()); ensure the truncate UPDATE runs in the same migration before the
alter.
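The truncation semantics of the `USING LEFT(last_error, 255)` cast can be sketched in TypeScript (illustrative only; the real work happens inside PostgreSQL during the type change):

```typescript
// Illustrates what PostgreSQL's USING LEFT(last_error, 255) clause does to each
// row during the type change: values at or under the limit pass through, longer
// values are cut to the first 255 characters, and NULLs stay NULL.
type Row = { last_error: string | null };

function revertLastError(rows: Row[], limit = 255): Row[] {
  return rows.map(row => ({
    last_error: row.last_error === null ? null : row.last_error.slice(0, limit),
  }));
}

const sample: Row[] = [
  { last_error: 'x'.repeat(300) }, // would make a plain ALTER TYPE fail
  { last_error: 'short message' },
  { last_error: null },
];

const reverted = revertLastError(sample);
console.log(reverted.map(r => (r.last_error === null ? null : r.last_error.length)));
// [255, 13, null]
```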
```json
    "postpack": "backstage-cli package postpack"
  },
  "dependencies": {
    "@backstage/backend-defaults": "^0.12.1",
```
Publishing dev/ breaks consumers: it imports a devDependency and inflates runtime deps.
dev/index.ts imports mockServices from @backstage/backend-test-utils, which is only listed under devDependencies. Any consumer who installs the published package and attempts to run or import these dev files will get a runtime "Cannot find module" error.
Additionally, @backstage/backend-defaults appears in dependencies solely to support dev/index.ts. If dev/ is removed from the published artifact, this can move to devDependencies, removing unnecessary weight from every consumer's dependency graph.
🛠️ Proposed fix

```diff
 "dependencies": {
-  "@backstage/backend-defaults": "^0.12.1",
   "@backstage/backend-plugin-api": "^1.3.0",
   ...
 },
 "devDependencies": {
+  "@backstage/backend-defaults": "^0.12.1",
   "@backstage/backend-test-utils": "^1.3.1",
   "@backstage/cli": "^0.32.0",
   ...
 },
 "files": [
   "dist",
-  "migrations/**/*.{js,d.ts}",
-  "dev/**/*.{ts,js}"
+  "migrations/**/*.{js,d.ts}"
 ]
```

Also applies to: 67-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/catalog-backend-module-openchoreo-incremental/package.json` at line
40, Package is publishing dev/ which imports dev-only libs; move
"@backstage/backend-defaults" out of "dependencies" into "devDependencies" and
prevent dev/ from being included in the published artifact (so consumers don't
get runtime imports like dev/index.ts -> `@backstage/backend-test-utils`). Update
package.json: remove dev/ from the "files" array (or add dev/ to .npmignore) so
dev/index.ts and any dev-only modules are not packaged, and relocate
"@backstage/backend-defaults" to devDependencies; verify dev/index.ts’s import
of mockServices (from `@backstage/backend-test-utils`) remains only in
devDependencies.
```markdown
## Database Migrations

The module includes automatic database migrations to create the necessary tables for state persistence:

- `openchoreo_incremental_ingestion_state` - Stores cursor state and ingestion metadata
- `openchoreo_incremental_entity_refs` - Tracks entity references for staleness detection
```
Database Migrations section documents incorrect table names.
The README states the module creates openchoreo_incremental_ingestion_state and openchoreo_incremental_entity_refs, but the actual migration schema (confirmed by the OpenChoreoIncrementalIngestionDatabaseManager.test.ts fixture fields such as ingestion_id) creates ingestions, ingestion_marks, and ingestion_mark_entities. Operators using these names for debugging or manual queries will hit "table not found" errors.
📝 Corrected section

```diff
 ## Database Migrations
 The module includes automatic database migrations to create the necessary tables for state persistence:
-- `openchoreo_incremental_ingestion_state` - Stores cursor state and ingestion metadata
-- `openchoreo_incremental_entity_refs` - Tracks entity references for staleness detection
+- `ingestions` - Stores cursor state and ingestion metadata
+- `ingestion_marks` - Tracks pagination checkpoints per ingestion cycle
+- `ingestion_mark_entities` - Tracks entity references for staleness detection
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/catalog-backend-module-openchoreo-incremental/README.md` around lines
139 - 145, The README's "Database Migrations" section lists incorrect table
names; update it to document the actual migration tables used by the code and
tests—replace `openchoreo_incremental_ingestion_state` and
`openchoreo_incremental_entity_refs` with the real tables `ingestions`,
`ingestion_marks`, and `ingestion_mark_entities` (as reflected by fixtures and
fields like `ingestion_id` in
OpenChoreoIncrementalIngestionDatabaseManager.test.ts); ensure descriptions
match the schema purpose (e.g., ingestions = cursor/metadata, ingestion_marks =
marks per ingestion, ingestion_mark_entities = tracked entities).
```ts
  processedOrgs?: Set<string>;
  processedProjects?: Set<string>;
  processedComponents?: Set<string>;
}
```
Set<string> in cursor state may not serialize correctly.
The cursor interface includes processedOrgs, processedProjects, and processedComponents as Set<string> types. If this cursor is serialized to JSON (e.g., for database persistence between ingestion runs), Sets will serialize as empty objects {}, losing all tracked state.
Consider using arrays instead, or implement custom serialization/deserialization for the cursor.
Proposed fix using arrays

```diff
 interface CursorTraversalCursor {
   // ...
-  processedOrgs?: Set<string>;
-  processedProjects?: Set<string>;
-  processedComponents?: Set<string>;
+  processedOrgs?: string[];
+  processedProjects?: string[];
+  processedComponents?: string[];
 }

 // Then in usage, convert to Set for O(1) lookups:
 // const processedOrgsSet = new Set(cursor.processedOrgs || []);
 // ...
 // cursor.processedOrgs = Array.from(updatedProcessedOrgs);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
  processedOrgs?: string[];
  processedProjects?: string[];
  processedComponents?: string[];
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@plugins/catalog-backend-module-openchoreo-incremental/src/providers/OpenChoreoIncrementalEntityProvider.ts`
around lines 47 - 50, The cursor fields processedOrgs, processedProjects, and
processedComponents are typed as Set<string> which will not JSON-serialize
correctly; change their types to string[] in the cursor interface and update any
code that reads/writes those fields (e.g., places that iterate, add, or check
membership) to use array operations (includes/push/filter) or convert between
Set and array at boundaries (new Set(array) when you need fast membership checks
and use Array.from(set) before persisting). Ensure any cursor load/save paths
reconstruct arrays so serialized JSON preserves state.
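The serialization pitfall is easy to reproduce in isolation; this standalone sketch shows why arrays survive a JSON round trip while Sets do not:

```typescript
// A Set serializes to '{}' under JSON.stringify, so persisting a cursor that
// contains Sets silently drops all tracked entity names between ingestion runs.
const asSet = { processedOrgs: new Set(['org-a', 'org-b']) };
console.log(JSON.stringify(asSet)); // {"processedOrgs":{}}

// Storing arrays in the cursor keeps the data, and a Set can be rebuilt at the
// boundary wherever O(1) membership checks are needed.
const cursor = { processedOrgs: ['org-a', 'org-b'] };
const restored = JSON.parse(JSON.stringify(cursor)) as typeof cursor;
const lookup = new Set(restored.processedOrgs);
console.log(lookup.has('org-a')); // true
console.log(lookup.size); // 2
```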
```ts
  // Restart from the beginning without cursor
  return await this.nextCursorMode(context, undefined);
}
```
Top-level 410 handler discards all progress, contradicting phase-level handlers.
The top-level error handler (lines 140-161) catches 410 errors and restarts with undefined cursor, discarding all progress including the processedOrgs, processedProjects, and processedComponents sets.
However, the phase-level 410 handlers (e.g., lines 320-353) carefully preserve these sets to avoid duplicate entity emission. This inconsistency means some 410 scenarios will cause duplicate entities while others won't.
Consider preserving the tracking sets in the top-level handler as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@plugins/catalog-backend-module-openchoreo-incremental/src/providers/OpenChoreoIncrementalEntityProvider.ts`
around lines 159 - 161, The top-level 410 handler in nextCursorMode currently
restarts by calling nextCursorMode(context, undefined) and thus drops the
processedOrgs, processedProjects, and processedComponents tracking sets; update
that handler to preserve and pass the existing tracking sets into the restart so
progress isn't lost—locate the 410 handling block in the nextCursorMode method
and ensure it reuses the current
processedOrgs/processedProjects/processedComponents (or forwards them via
context/state) when calling nextCursorMode again instead of restarting with only
an undefined cursor.
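One way to preserve progress across a 410-triggered restart (a sketch; the cursor shape is simplified and the helper name is hypothetical):

```typescript
// Simplified cursor shape for illustration; the real interface lives in
// OpenChoreoIncrementalEntityProvider.ts.
interface Cursor {
  continueToken?: string;
  processedOrgs?: string[];
  processedProjects?: string[];
  processedComponents?: string[];
}

// On a 410 Gone, drop only the expired continue token and keep the tracking
// lists, so already-emitted entities are not emitted again after the restart.
function resetCursorKeepProgress(current: Cursor | undefined): Cursor {
  return {
    continueToken: undefined,
    processedOrgs: current?.processedOrgs ?? [],
    processedProjects: current?.processedProjects ?? [],
    processedComponents: current?.processedComponents ?? [],
  };
}

const stale: Cursor = { continueToken: 'expired', processedOrgs: ['org-a'] };
const fresh = resetCursorKeepProgress(stale);
console.log(fresh.continueToken); // undefined
console.log(fresh.processedOrgs); // [ 'org-a' ]
```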
```ts
// Booleans now use default checkbox/switch widgets; no explicit radio widget
expect(props.enableBackup['ui:widget']).toBe('radio');
```
Test assertion contradicts the comment.
The comment states "Booleans now use default checkbox/switch widgets; no explicit radio widget" but the assertion on line 262 still expects 'radio'. Either the comment is incorrect, or the assertion needs to be updated to reflect the actual implementation change (e.g., removing the assertion or checking for undefined).
Proposed fix if booleans no longer use radio widget

```diff
-// Booleans now use default checkbox/switch widgets; no explicit radio widget
-expect(props.enableBackup['ui:widget']).toBe('radio');
+// Booleans now use default checkbox/switch widgets; no explicit radio widget
+expect(props.enableBackup['ui:widget']).toBeUndefined();
```

Or if radio is still expected, fix the comment:

```diff
-// Booleans now use default checkbox/switch widgets; no explicit radio widget
+// Booleans use radio widget for explicit true/false selection
 expect(props.enableBackup['ui:widget']).toBe('radio');
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Booleans now use default checkbox/switch widgets; no explicit radio widget
expect(props.enableBackup['ui:widget']).toBeUndefined();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@plugins/catalog-backend-module-openchoreo/src/converters/CtdToTemplateConverter.test.ts`
around lines 261 - 262, The test's comment says booleans use default
checkbox/switch widgets but the assertion on props.enableBackup['ui:widget']
still expects 'radio'; update the assertion in CtdToTemplateConverter.test.ts to
reflect the implementation change by replacing
expect(props.enableBackup['ui:widget']).toBe('radio') with an assertion that the
widget is not set (e.g.,
expect(props.enableBackup['ui:widget']).toBeUndefined()) so the test matches the
comment and current behavior of booleans.
```ts
type TraitListResponse = OpenChoreoComponents['schemas']['APIResponse'] & {
  data?: OpenChoreoComponents['schemas']['ListResponse'] & {
    data?: {
      items?: OpenChoreoComponents['schemas']['TraitResponse'][];
    };
  };
```
Type definition missing metadata field.
TraitListResponse (lines 12-16) doesn't include metadata in its data type, but the return statement at lines 94-95 assigns metadata: data.data.metadata. The as TraitListResponse cast masks this inconsistency.
Consider updating the type to include metadata, similar to ComponentTraitListResponse:
Proposed fix

```diff
 type TraitListResponse = OpenChoreoComponents['schemas']['APIResponse'] & {
-  data?: {
+  data?: OpenChoreoComponents['schemas']['ListResponse'] & {
     items?: OpenChoreoComponents['schemas']['TraitResponse'][];
   };
 };
```

Also applies to: 91-97
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/openchoreo-backend/src/services/TraitService/TraitInfoService.ts`
around lines 12 - 16, The TraitListResponse type is missing the metadata field
under its data property which conflicts with the return in TraitInfoService
(where you set metadata: data.data.metadata); update TraitListResponse to mirror
ComponentTraitListResponse by adding a metadata?: { total?: number; page?:
number; size?: number } (or the appropriate metadata shape used across the
service) under data, then ensure the service return still casts to
TraitListResponse; adjust any uses of TraitListResponse accordingly so the type
matches the actual returned object (refer to TraitListResponse type and the
return that references data.data.metadata).
```ts
/**
 * Default page limit for API pagination requests.
 * Set to 500 to align with the API's MaxPageLimit for optimal performance,
 * reducing the number of HTTP requests needed for large deployments.
 */
export const DEFAULT_PAGE_LIMIT = 512;
```
JSDoc says 500 but the constant value is 512.
The comment reads "Set to 500 to align with the API's MaxPageLimit" while the actual value is 512. Either the value or the comment needs to be corrected — if 512 is intentional, update the doc; if the API's MaxPageLimit is actually 500, the constant should be 500.
📝 Proposed fix (if 512 is intentional)

```diff
-/**
- * Default page limit for API pagination requests.
- * Set to 500 to align with the API's MaxPageLimit for optimal performance,
- * reducing the number of HTTP requests needed for large deployments.
- */
-export const DEFAULT_PAGE_LIMIT = 512;
+/**
+ * Default page limit for API pagination requests.
+ * Set to 512 for optimal performance, reducing the number of HTTP requests
+ * needed for large deployments.
+ */
+export const DEFAULT_PAGE_LIMIT = 512;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
/**
 * Default page limit for API pagination requests.
 * Set to 512 for optimal performance, reducing the number of HTTP requests
 * needed for large deployments.
 */
export const DEFAULT_PAGE_LIMIT = 512;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/openchoreo-common/src/constants.ts` around lines 34 - 39, The JSDoc
for DEFAULT_PAGE_LIMIT is inconsistent with its value (doc says 500, constant is
512); confirm the API's actual MaxPageLimit and either update the constant or
the comment: if the API limit is 512, change the comment to reference 512 and
why (aligns with API MaxPageLimit), otherwise set export const
DEFAULT_PAGE_LIMIT = 500 (and update the JSDoc to 500) so the code and
documentation match; reference the symbol DEFAULT_PAGE_LIMIT when making the
change.
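A minimal sketch of the cursor-based pagination loop such a limit feeds into (the `Page` type and `fetchPage` callback are hypothetical stand-ins for the generated client; the loop guard mirrors the PR's infinite-loop prevention):

```typescript
const DEFAULT_PAGE_LIMIT = 512; // mirrors the constant discussed above

interface Page<T> {
  items: T[];
  // `continue` token returned by the server; absent on the last page
  continueToken?: string;
}

// Drains all pages of a list endpoint. `fetchPage` stands in for a generated
// client call that takes `limit` and `continue` query parameters.
async function fetchAll<T>(
  fetchPage: (limit: number, continueToken?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let continueToken: string | undefined;
  let guard = 0;
  do {
    const page = await fetchPage(DEFAULT_PAGE_LIMIT, continueToken);
    all.push(...page.items);
    continueToken = page.continueToken;
    if (++guard > 10_000) throw new Error('pagination did not terminate'); // infinite-loop guard
  } while (continueToken);
  return all;
}

// Usage against an in-memory fake server holding 1200 items.
const data = Array.from({ length: 1200 }, (_, i) => i);
const fake = async (limit: number, token?: string): Promise<Page<number>> => {
  const start = token ? Number(token) : 0;
  const items = data.slice(start, start + limit);
  const next = start + limit < data.length ? String(start + limit) : undefined;
  return { items, continueToken: next };
};

fetchAll(fake).then(all => console.log(all.length)); // 1200
```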
Purpose
This PR addresses scalability and performance limitations when syncing large numbers of entities from OpenChoreo. It introduces a new Incremental Entity Provider to handle large datasets efficiently via cursor-based pagination and updates the entire plugin ecosystem to support the new paginated API endpoints.
Goals
Approach
1. New Incremental Backend Module (catalog-backend-module-openchoreo-incremental)
- `ingestions`, `ingestion_marks`, and `ingestion_mark_entities` tables for persisting cursor state and tracking processed entities.
- `OpenChoreoIncrementalIngestionEngine`, which manages the ingestion lifecycle (Burst -> Interstitial -> Rest) and handles backoff strategies for errors.
- `OpenChoreoIncrementalEntityProvider` to traverse resources in a specific order (Organizations -> Projects -> Components) using the new API `continue` tokens.
- Management endpoints (`/incremental/...`) to monitor health, trigger runs, and reset provider state.

2. API Client & Common Utilities
- `openapi/openchoreo-api.yaml` and generated types updated to support `limit` and `continue` query parameters on all list endpoints.
- `ResponseMetadata` (resourceVersion, hasMore).
- A `fetchAllResources` utility in `openchoreo-common` to standardize pagination logic across legacy and new services.
- Handling of `410 Gone` errors to automatically reset cursors if tokens expire.

3. Existing Backend Refactoring
- `OpenChoreoEntityProvider` and various info services (`EnvironmentInfoService`, `TraitInfoService`, etc.) refactored to use the new `fetchAllResources` utility, ensuring they work with the updated paginated API.
- `CtdToTemplateConverter` updated to improve tag inference, UI widget selection (e.g., using `radio` for booleans), and conditional CI/CD setup sections.

4. Frontend Updates
- `TraitsFieldExtension.tsx` updated to support "Load More" functionality for retrieving traits incrementally in the Scaffolder.

5. Configuration
- `openchoreo.incremental` settings in `app-config.yaml` to control burst length, intervals, and chunk sizes.

User stories
Release note
- `@openchoreo/plugin-catalog-backend-module-openchoreo-incremental` for scalable, cursor-based entity ingestion.
- `Component Template` generation logic (better tag inference and UI widgets).

Documentation
- `README.md` in `plugins/catalog-backend-module-openchoreo-incremental` detailing configuration and architecture.
- `app-config.yaml` examples with comments explaining how to enable the incremental provider.

Training
N/A
Certification
N/A
Marketing
N/A
Automation tests
- `OpenChoreoIncrementalEntityProvider.test.ts`: Verifies cursor traversal, phase transitions (Org -> Project -> Component), and 410 error recovery.
- `OpenChoreoIncrementalIngestionDatabaseManager.test.ts`: Verifies database persistence for cursors and marks.
- `CtdToTemplateConverter.test.ts`: Updated tests to reflect changes in template generation (tags, CI setup).
- `dev/index.ts` in the new module for local testing with a dummy provider.

Security checks
Samples
N/A
Related PRs
N/A
Migrations (if applicable)
- New migrations in the `catalog-backend-module-openchoreo-incremental` plugin:
  - `20221116073152_init.js`: Creates initial ingestion tables.
  - `20240110000001_add_performance_indexes.ts`: Adds indexes for performance.
  - `20240110000003_expand_last_error_field.ts`: Expands error logging column size.
- Update `app-config.yaml` to configure `openchoreo.incremental` and register the new module in `packages/backend/src/index.ts`.

Test environment
Learning
- Adopted the `incremental-ingestion` backend pattern (inspired by the official Backstage incremental provider) to handle large datasets efficiently.

Summary by CodeRabbit
Release Notes
New Features
Improvements
Configuration