refactor: Validations for ecosystem details exist #1575
pranalidhanavade wants to merge 6 commits into main from
Conversation
📝 Walkthrough

Adds Keycloak configuration and client-registration integrations, extends the JWT payload with ecosystem_access, shifts ecosystem role checks to a JWT-based guard, propagates ecosystem access from the payload, updates the ecosystem service to sync Keycloak attributes, and adds scripts/URL helpers for Keycloak admin operations.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Client
    participant APIGateway as API Gateway
    participant JWT as JwtStrategy
    participant Guard as EcosystemRolesGuard
    Client->>APIGateway: Request (includes ecosystemId)
    APIGateway->>JWT: Validate token -> return JwtPayload (includes ecosystem_access)
    APIGateway->>Guard: Provide request + payload
    Guard->>Guard: Extract ecosystemId (param/query/body), trim, validate UUID
    Guard->>Guard: Check payload.ecosystem_access for ecosystemId in lead or member lists
    alt found
        Guard->>APIGateway: Attach selectedEcosystem to user -> Allow
        APIGateway->>Client: 200 / proceed
    else not found
        Guard->>Client: 403 Forbidden
    end
```

```mermaid
sequenceDiagram
    participant EcosystemService
    participant ClientReg as ClientRegistrationService
    participant TokenSvc as PlatformMgmtToken
    participant Keycloak
    EcosystemService->>ClientReg: createEcosystem() / inviteMember()
    ClientReg->>TokenSvc: getPlatformManagementToken()
    TokenSvc->>Keycloak: client_credentials -> mgmt token
    Keycloak->>ClientReg: return token
    ClientReg->>Keycloak: update user/service-account ecosystem_access attribute (lead/member)
    Keycloak->>ClientReg: update confirmation
    ClientReg->>EcosystemService: callback / complete
```
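The membership check in the first diagram can be sketched as a small helper. This is a hedged illustration only: the payload shape (`ecosystem_access` keyed by org id, with `lead`/`member` ecosystem-id lists) follows the structures quoted later in this review, and `hasEcosystemAccess` is an illustrative name, not a function from the PR.

```typescript
// Sketch of the guard's "check payload.ecosystem_access for ecosystemId
// in lead or member lists" step. Shapes mirror the interfaces quoted in
// this review; the helper name is hypothetical.

interface EcosystemAccessEntry {
  ecosystem_role?: {
    lead?: string[];
    member?: string[];
  };
}

type EcosystemAccessMap = Record<string, EcosystemAccessEntry>;

function hasEcosystemAccess(
  access: EcosystemAccessMap | undefined,
  ecosystemId: string
): 'lead' | 'member' | null {
  if (!access) return null;
  // Scan every org entry in the payload for the requested ecosystem id.
  for (const entry of Object.values(access)) {
    if (entry.ecosystem_role?.lead?.includes(ecosystemId)) return 'lead';
    if (entry.ecosystem_role?.member?.includes(ecosystemId)) return 'member';
  }
  return null;
}
```

A guard built on this would allow the request when the result is non-null and throw a 403 otherwise, matching the `alt found / else not found` branches above.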
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/ecosystem/src/ecosystem.service.ts (1)
Lines 343-348: ⚠️ Potential issue | 🟡 Minor — Duplicate condition detected.

Line 347 contains a redundant condition: `checkUser.status === Invitation.PENDING && checkUser.status === Invitation.PENDING`. This appears to be a copy-paste error.

🐛 Proposed fix

```diff
 if (
   checkUser?.ecosystemId === ecosystemId &&
   checkUser.invitedOrg === orgId &&
   checkUser.status === Invitation.PENDING &&
-  checkUser.status === Invitation.PENDING &&
   checkUser.type === InviteType.MEMBER
 ) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ecosystem/src/ecosystem.service.ts` around lines 343 - 348, The conditional block checking checkUser currently repeats the same status check twice; remove the duplicate `checkUser.status === Invitation.PENDING` so the if reads: checkUser?.ecosystemId === ecosystemId && checkUser.invitedOrg === orgId && checkUser.status === Invitation.PENDING && checkUser.type === InviteType.MEMBER (locate the check near the checkUser variable in the ecosystem service and update that if-statement).
🧹 Nitpick comments (4)
libs/keycloak-url/src/keycloak-url.service.ts (1)
Lines 78-92: Two protocol-mapper URL methods are duplicates; keep one source of truth.

`GetClientProtocolMappersURL` and `GetClientProtocolMappersByIdURL` currently return the same path, which makes it easy for the two to drift apart over time.

🔧 Proposed refactor (delegate one to the other)

```diff
 async GetClientProtocolMappersURL(realm: string, clientIdpId: string): Promise<string> {
-  return `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/clients/${clientIdpId}/protocol-mappers/models`;
+  return this.GetClientProtocolMappersByIdURL(realm, clientIdpId);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/keycloak-url/src/keycloak-url.service.ts` around lines 78 - 92, Get rid of the duplicated URL logic by keeping a single implementation and delegating the other to it: remove the duplicate body from GetClientProtocolMappersByIdURL and have it call/return GetClientProtocolMappersURL(realm, clientIdpId) (or vice versa), so only one place constructs `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/clients/${clientIdpId}/protocol-mappers/models`; update the method signatures accordingly and ensure both methods remain exported/available for callers.

scripts/cleanup-ecosystem-mappers.ts (1)
Lines 12-15: Validate required environment variables before any network call.

Fail fast with a clear message when required KEYCLOAK_* env vars are missing.

🔧 Proposed guard clause

```diff
 const { KEYCLOAK_DOMAIN } = process.env;
 const { KEYCLOAK_REALM } = process.env;
 const { KEYCLOAK_MANAGEMENT_CLIENT_ID } = process.env;
 const { KEYCLOAK_MANAGEMENT_CLIENT_SECRET } = process.env;
+
+for (const [name, value] of Object.entries({
+  KEYCLOAK_DOMAIN,
+  KEYCLOAK_REALM,
+  KEYCLOAK_MANAGEMENT_CLIENT_ID,
+  KEYCLOAK_MANAGEMENT_CLIENT_SECRET
+})) {
+  if (!value) {
+    throw new Error(`Missing required environment variable: ${name}`);
+  }
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/cleanup-ecosystem-mappers.ts` around lines 12 - 15, Add a guard clause at the top of the script that validates the four required env vars (KEYCLOAK_DOMAIN, KEYCLOAK_REALM, KEYCLOAK_MANAGEMENT_CLIENT_ID, KEYCLOAK_MANAGEMENT_CLIENT_SECRET) before any network call and exit immediately with a clear error message if any are missing; specifically check each of those process.env values, build a helpful message listing the missing keys, log it (or throw) and stop execution so downstream code that uses these variables cannot proceed with undefined values.

apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts (1)
Lines 10-15: Consider reusing types from jwt-payload.interface.ts.

The `EcosystemAccessEntry` interface duplicates the structure of the `EcosystemAccess` and `EcosystemRole` interfaces already defined in `jwt-payload.interface.ts`. Consider importing and reusing those types for consistency.

♻️ Proposed refactor

```diff
+import { EcosystemAccess } from './jwt-payload.interface';
 import { validate as isValidUUID } from 'uuid';

-interface EcosystemAccessEntry {
-  ecosystem_role?: {
-    lead?: string[];
-    member?: string[];
-  };
-}
+// Use EcosystemAccess from jwt-payload.interface.ts instead
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts` around lines 10 - 15, Replace the locally declared EcosystemAccessEntry interface with the shared types from jwt-payload.interface.ts: import and use the existing EcosystemAccess and/or EcosystemRole types instead of duplicating structure; update any references to EcosystemAccessEntry within ecosystem-roles.guard.ts (e.g., function params, type annotations) to use the imported types so the guard reuses the canonical JWT payload types.

apps/ecosystem/src/ecosystem.service.ts (1)
Lines 205-207: Remove commented-out dead code.

This code duplicates a check already performed at lines 188-192. Leaving commented code creates confusion about intent. Either remove it entirely or restore it with a clear purpose.

🧹 Proposed fix

```diff
-  // if (!invitation) {
-  //   throw new ForbiddenException(ResponseMessages.ecosystem.error.invitationRequired);
-  // }
   const ecosystem = await this.prisma.$transaction(async (tx) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ecosystem/src/ecosystem.service.ts` around lines 205 - 207, Remove the commented-out duplicate invitation check in ecosystem.service.ts (the block that reads "// if (!invitation) { throw new ForbiddenException(ResponseMessages.ecosystem.error.invitationRequired); }"); since the same validation is already performed earlier (around the existing invitation variable usage), either delete this dead code entirely or, if the intent differs, restore it with a clear comment and unique behavior inside the same method where "invitation" is referenced so there is no duplicate/ambiguous check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts`:
- Around line 83-96: Replace the current logic that grabs the first entry from
ecosystemAccessValues and instead directly look up the requested ecosystem using
user?.ecosystem_access?.[ecosystemId]; check that entry’s ecosystem_role.lead
and ecosystem_role.member lists to determine access (instead of using
ecosystemAccessValues and ecosystemEntry). Update the failure branch that
currently throws ForbiddenException with
ResponseMessages.ecosystem?.error?.ecosystemNotFound to throw a clearer
permission error (e.g., "Access denied to ecosystem" or a dedicated
ResponseMessages.ecosystem?.error?.accessDenied) when hasAccess is false. Ensure
the new lookup still throws the original "Ecosystem not found" message only when
no entry exists for the requested ecosystemId.
- Around line 76-100: The guard currently validates ecosystemId but then
continues and ultimately denies when orgId is missing; update the guard (in the
canActivate method of ecosystem-roles.guard.ts) so that after successful
ecosystem validation (where you set user.selectedEcosystem) it returns true for
requests that require only ecosystem-level roles (e.g.,
OrgRoles.ECOSYSTEM_LEAD/ECOSYSTEM_MEMBER) or otherwise conditionally skips the
orgId requirement; specifically detect the required roles for the route (the
same logic that checks role requirements) and if they are ecosystem-scoped,
short-circuit and return true once ecosystem access is confirmed, otherwise fall
through to the existing orgId checks.
In `@apps/ecosystem/src/ecosystem.service.ts`:
- Around line 240-254: The current try/catch in updateKeycloakEcosystemLeads
swallows errors from clientRegistrationService.updateUserEcosystemLeads and
updateOrgServiceAccountEcosystem, causing silent Keycloak-sync failures; change
it to (a) log the full error with context (include user.id, orgId, ecosystemId
and error.stack/message) and (b) surface the failure by either rethrowing the
error or returning a structured result (e.g., boolean or enum) indicating
partial success so callers can handle rollback or retry; update callers to
expect the new behavior if you choose to return a status. Ensure references:
updateKeycloakEcosystemLeads,
clientRegistrationService.updateUserEcosystemLeads,
updateOrgServiceAccountEcosystem, and this.logger are updated accordingly.
In `@libs/client-registration/src/client-registration.service.ts`:
- Around line 804-812: The logger calls in updateUserEcosystemAccess that print
raw attributes and parsed ecosystemAccess (e.g.,
this.logger.log(`[updateUserEcosystemAccess] Current attributes:
${JSON.stringify(attributes)}`) and the parsed ecosystem_access log) expose
sensitive user data; change these to redact or summarize sensitive fields (e.g.,
log only keys, counts, or masked values) before logging and replace
JSON.stringify(attributes) / JSON.stringify(ecosystemAccess) with the sanitized
summary. Apply the same redaction/summarization approach to all other logger.log
usages that print full user attributes or payloads in the file (the other
occurrences noted in the review) so no raw identity/attribute data is emitted.
- Around line 796-862: updateUserEcosystemAccess currently does a
read-modify-write (httpGet -> mutate attributes.ecosystem_access -> httpPut)
which is race-prone and can overwrite concurrent updates; change this to a
conflict-safe approach by serializing updates per user or by adding optimistic
concurrency + retry-and-merge: obtain a user-specific lock/mutex (or use
Keycloak user version/ETag if available), re-fetch latest currentUser inside the
lock, parse and merge attributes.ecosystem_access (ensure orgId/ecosystem_role
arrays are created), add ecosystemId to role if missing, then persist with
httpPut and finally release the lock; implement retry with a small backoff if an
update conflict is detected; apply the same pattern for the other similar block
(lines referenced 968-1025) so concurrent requests for the same keycloakUserId
are serialized or safely merged.
- Around line 807-816: In updateUserEcosystemAccess, do not initialize
ecosystemAccess = {} and continue when
JSON.parse(attributes.ecosystem_access[0]) throws; instead detect the parse
failure, log a clear warning/error including the raw string, and abort the
update (return or throw) to avoid overwriting existing data; refer to the
variables ecosystemAccess and attributes.ecosystem_access and the
updateUserEcosystemAccess function to locate where to change the catch branch so
it stops the flow rather than setting an empty object.
In `@libs/keycloak-config/src/keycloak-config.service.ts`:
- Around line 257-259: The current user fetch builds usersUrl with `?max=100`
and only calls `this.commonService.httpGet` once (in the block that uses
`usersUrl`, `realm`, and `token`), which misses additional pages when realms
have >100 users; change the logic in the method that constructs `usersUrl`
(referencing `usersUrl`, `this.commonService.httpGet`, and `this.getAuthHeader`)
to paginate: loop requesting pages using Keycloak pagination params (e.g., add
`first` and `max`/pageSize query params), increment `first` by the page size
after each successful `httpGet`, append/aggregate returned users into a single
array, and stop when a page returns fewer users than the page size (or an empty
array), then proceed with verification using the aggregated list.
- Around line 29-34: The catch block in keycloak-config.service.ts currently
logs errors in the Keycloak initialization (the catch in the async initializer
or the method that calls Keycloak setup) but swallows them; update the catch to
fail fast by rethrowing the error or terminating startup when Keycloak setup
fails (e.g., throw error or call process.exit(1)) so the app cannot boot with a
broken auth config, or implement an explicit opt-out using a env flag (e.g.,
FAIL_ON_KEYCLOAK_ERROR or SKIP_KEYCLOAK_ON_ERROR) checked in the same block to
decide whether to rethrow/exit or only log; target the catch surrounding the
Keycloak setup in KeycloakConfigService (the method performing initialization)
and ensure the logger still records the details before rethrowing/exiting.
- Around line 177-185: The implicit-arrow-linebreak lint errors are caused by
line breaks inside the arrow callbacks used for computing targetClients and
excludedClients; fix by reformatting the arrow functions for clients.filter so
the arrow (=>) and its body are on the same line or break after the whole arrow
expression, e.g. adjust the callbacks used in targetClients and excludedClients
(the filter callbacks that reference client.clientId, client.bearerOnly, and
systemClients.includes) so they satisfy implicit-arrow-linebreak formatting
rules.
In `@libs/prisma-service/prisma/seed.ts`:
- Line 700: Remove the hardcoded Keycloak admin password value ('Password@1')
used in the seeding object and instead read the credential from a runtime
secret/environment variable (e.g., process.env.KEYCLOAK_ADMIN_PASSWORD); update
the seeding logic that assigns the password property (the object with password:
'Password@1') to either throw or abort if the env var is missing, or set it to
an empty/placeholder value so no real secret is checked into source control, and
ensure any helper function that creates the admin (e.g., seedKeycloakAdmin or
the admin object in the seed routine) uses the env var rather than the literal.
In `@scripts/cleanup-ecosystem-mappers.ts`:
- Around line 107-108: The current single-page fetch using clientsUrl and
request('GET', clientsUrl, token) only retrieves up to max items and misses
remaining entries; change both client and user fetches to paginate until no more
results by repeatedly calling request('GET', ...) with a page offset (or page
param) and aggregating into the clients (and users) arrays (the variables
clients and the analogous users fetch at the other occurrence), adjusting the
URL construction (clientsUrl) to include start/first/offset + max/pageSize and
stop when an empty page or fewer-than-max items is returned; ensure the same
pagination logic is applied to the other fetch block referenced (lines 142-143).
- Around line 35-64: The HTTP request currently created via lib.request
(variable req) can hang indefinitely; add a request timeout and ensure the
promise rejects with an explicit timeout error: define a timeoutMs (e.g.,
30000), call req.setTimeout(timeoutMs, () => req.destroy(new Error(`Request
timed out after ${timeoutMs}ms: ${method} ${url}`))); and also create a manual
timer (const timer = setTimeout(...)) or clear any timers on successful response
end/error to avoid leaks; update the req.on('error', ...) and the res.on('end',
...) handlers to clear the timer and propagate the explicit timeout error via
reject so the surrounding promise always resolves or rejects within timeoutMs.
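Two of the prompts above (the keycloak-config user fetch and the cleanup script's client/user fetches) call for the same `first`/`max` pagination loop against the Keycloak admin API. A minimal sketch of that loop, with the page fetcher injected so the loop itself is testable — `fetchAll` is an illustrative name, and the real service's fetcher would be an awaited `httpGet` rather than a synchronous callback:

```typescript
// Generic first/max pagination loop as suggested for the Keycloak admin
// "users" and "clients" fetches. fetchPage stands in for the real HTTP
// call; in the service it would be async with the same structure.

type FetchPage<T> = (first: number, max: number) => T[];

function fetchAll<T>(fetchPage: FetchPage<T>, pageSize = 100): T[] {
  const all: T[] = [];
  let first = 0;
  for (;;) {
    const page = fetchPage(first, pageSize);
    all.push(...page);
    // A short or empty page means there are no more results.
    if (page.length < pageSize) break;
    first += pageSize;
  }
  return all;
}
```

Stopping on a short page (rather than an exact count) matches the termination condition both prompts describe.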
---
Outside diff comments:
In `@apps/ecosystem/src/ecosystem.service.ts`:
- Around line 343-348: The conditional block checking checkUser currently
repeats the same status check twice; remove the duplicate `checkUser.status ===
Invitation.PENDING` so the if reads: checkUser?.ecosystemId === ecosystemId &&
checkUser.invitedOrg === orgId && checkUser.status === Invitation.PENDING &&
checkUser.type === InviteType.MEMBER (locate the check near the checkUser
variable in the ecosystem service and update that if-statement).
---
Nitpick comments:
In `@apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts`:
- Around line 10-15: Replace the locally declared EcosystemAccessEntry interface
with the shared types from jwt-payload.interface.ts: import and use the existing
EcosystemAccess and/or EcosystemRole types instead of duplicating structure;
update any references to EcosystemAccessEntry within ecosystem-roles.guard.ts
(e.g., function params, type annotations) to use the imported types so the guard
reuses the canonical JWT payload types.
In `@apps/ecosystem/src/ecosystem.service.ts`:
- Around line 205-207: Remove the commented-out duplicate invitation check in
ecosystem.service.ts (the block that reads "// if (!invitation) { throw new
ForbiddenException(ResponseMessages.ecosystem.error.invitationRequired); }");
since the same validation is already performed earlier (around the existing
invitation variable usage), either delete this dead code entirely or, if the
intent differs, restore it with a clear comment and unique behavior inside the
same method where "invitation" is referenced so there is no duplicate/ambiguous
check.
In `@libs/keycloak-url/src/keycloak-url.service.ts`:
- Around line 78-92: Get rid of the duplicated URL logic by keeping a single
implementation and delegating the other to it: remove the duplicate body from
GetClientProtocolMappersByIdURL and have it call/return
GetClientProtocolMappersURL(realm, clientIdpId) (or vice versa), so only one
place constructs
`${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/clients/${clientIdpId}/protocol-mappers/models`;
update the method signatures accordingly and ensure both methods remain
exported/available for callers.
In `@scripts/cleanup-ecosystem-mappers.ts`:
- Around line 12-15: Add a guard clause at the top of the script that validates
the four required env vars (KEYCLOAK_DOMAIN, KEYCLOAK_REALM,
KEYCLOAK_MANAGEMENT_CLIENT_ID, KEYCLOAK_MANAGEMENT_CLIENT_SECRET) before any
network call and exit immediately with a clear error message if any are missing;
specifically check each of those process.env values, build a helpful message
listing the missing keys, log it (or throw) and stop execution so downstream
code that uses these variables cannot proceed with undefined values.
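Two items in the lists above (the cleanup script's missing-env guard and the hardcoded seed password) reduce to the same pattern: read credentials from the environment and fail fast when any are absent. A hedged sketch — `requireEnv` is an illustrative helper name, not code from the PR:

```typescript
// Fail-fast environment validation, as suggested for the cleanup script
// and the seed routine. Throws one error listing every missing key, and
// returns the resolved values so callers never touch process.env directly.

function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env
): Record<string, string> {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((name) => [name, env[name] as string]));
}
```

The seed routine would then read `KEYCLOAK_ADMIN_PASSWORD` through this helper instead of committing a literal.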
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- apps/api-gateway/src/app.module.ts
- apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts
- apps/api-gateway/src/authz/jwt-payload.interface.ts
- apps/api-gateway/src/authz/jwt.strategy.ts
- apps/api-gateway/src/ecosystem/ecosystem.controller.ts
- apps/api-gateway/src/ecosystem/intent/intent.controller.ts
- apps/ecosystem/src/ecosystem.module.ts
- apps/ecosystem/src/ecosystem.service.ts
- libs/client-registration/src/client-registration.module.ts
- libs/client-registration/src/client-registration.service.ts
- libs/keycloak-config/src/index.ts
- libs/keycloak-config/src/keycloak-config.module.ts
- libs/keycloak-config/src/keycloak-config.service.ts
- libs/keycloak-url/src/keycloak-url.service.ts
- libs/prisma-service/prisma/seed.ts
- scripts/cleanup-ecosystem-mappers.ts
- tsconfig.json
```typescript
try {
  const token = await this.clientRegistrationService.getPlatformManagementToken();

  // Update user attributes in Keycloak with ecosystem lead info
  if (user.keycloakUserId) {
    await this.clientRegistrationService.updateUserEcosystemLeads(user.keycloakUserId, orgId, ecosystemId, token);
    this.logger.log(`Successfully updated Keycloak ecosystem leads for user ${user.id}, ecosystem ${ecosystemId}`);
  } else {
    this.logger.warn(`User ${user.id} does not have a keycloakUserId, skipping user attribute update`);
  }

  await this.updateOrgServiceAccountEcosystem(orgId, ecosystemId, 'lead', token);
} catch (error) {
  this.logger.error(`Failed to update Keycloak ecosystem leads: ${error.message}`);
}
```
Silent failure on Keycloak sync may cause inconsistent state.
The updateKeycloakEcosystemLeads method catches all errors and only logs them. If Keycloak sync fails, the ecosystem is created in the database but the user's Keycloak attributes are not updated, leading to an inconsistent state where the user cannot access the ecosystem.
Consider whether this should throw or at least return a status indicating partial success. At minimum, consider logging at error level with more context.
💡 Suggested improvement

```diff
 } catch (error) {
-  this.logger.error(`Failed to update Keycloak ecosystem leads: ${error.message}`);
+  this.logger.error(`Failed to update Keycloak ecosystem leads for user ${user.id}, ecosystem ${ecosystemId}: ${error.message}`, error.stack);
+  // Consider: throw error; or return { success: false, error } for caller to handle
 }
```
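The "return a status indicating partial success" option mentioned above can be sketched as a small wrapper. This is illustrative only — `SyncResult` and `withSyncResult` are hypothetical names, not types or functions from the codebase:

```typescript
// Illustrative structured-result pattern for the Keycloak sync: instead
// of swallowing the error, the caller learns about partial failure and
// can retry, roll back the DB write, or surface an error response.
// SyncResult and withSyncResult are hypothetical names.

interface SyncResult {
  success: boolean;
  error?: string;
}

async function withSyncResult(op: () => Promise<void>): Promise<SyncResult> {
  try {
    await op();
    return { success: true };
  } catch (error) {
    // The failure is captured, not hidden; logging would happen here too.
    return { success: false, error: (error as Error).message };
  }
}
```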
```typescript
const currentUser = await this.commonService.httpGet(userUrl, this.getAuthHeader(token));
this.logger.log(`[updateUserEcosystemAccess] Current user fetched: ${currentUser ? 'success' : 'null'}`);

if (!currentUser) {
  throw new NotFoundException(`User not found in Keycloak: ${keycloakUserId}`);
}

const attributes = currentUser.attributes || {};
this.logger.log(`[updateUserEcosystemAccess] Current attributes: ${JSON.stringify(attributes)}`);

let ecosystemAccess: Record<string, { ecosystem_role: { lead: string[]; member: string[] } }> = {};
if (attributes.ecosystem_access && attributes.ecosystem_access[0]) {
  try {
    ecosystemAccess = JSON.parse(attributes.ecosystem_access[0]);
    this.logger.log(
      `[updateUserEcosystemAccess] Parsed existing ecosystem_access: ${JSON.stringify(ecosystemAccess)}`
    );
  } catch (error) {
    this.logger.warn(`[updateUserEcosystemAccess] Failed to parse existing ecosystem_access, initializing empty`);
    ecosystemAccess = {};
  }
}

if (!ecosystemAccess[orgId]) {
  ecosystemAccess[orgId] = {
    ecosystem_role: {
      lead: [],
      member: []
    }
  };
  this.logger.log(`[updateUserEcosystemAccess] Initialized ecosystem_access for org ${orgId}`);
}

if (!ecosystemAccess[orgId].ecosystem_role) {
  ecosystemAccess[orgId].ecosystem_role = { lead: [], member: [] };
}
if (!ecosystemAccess[orgId].ecosystem_role.lead) {
  ecosystemAccess[orgId].ecosystem_role.lead = [];
}
if (!ecosystemAccess[orgId].ecosystem_role.member) {
  ecosystemAccess[orgId].ecosystem_role.member = [];
}

const existingEcosystems = ecosystemAccess[orgId].ecosystem_role[role];
this.logger.log(`[updateUserEcosystemAccess] Existing ${role} ecosystems: ${JSON.stringify(existingEcosystems)}`);

if (!existingEcosystems.includes(ecosystemId)) {
  existingEcosystems.push(ecosystemId);
  this.logger.log(`[updateUserEcosystemAccess] Added ecosystem ${ecosystemId} to ${role}`);
} else {
  this.logger.log(`[updateUserEcosystemAccess] Ecosystem ${ecosystemId} already exists in ${role}`);
}

ecosystemAccess[orgId].ecosystem_role[role] = existingEcosystems;
this.logger.log(`[updateUserEcosystemAccess] Updated ecosystem_access: ${JSON.stringify(ecosystemAccess)}`);

attributes.ecosystem_access = [JSON.stringify(ecosystemAccess)];

// Update user in Keycloak
const updatePayload = {
  ...currentUser,
  attributes
};
this.logger.log(`[updateUserEcosystemAccess] Sending update to Keycloak...`, updatePayload);

await this.commonService.httpPut(userUrl, updatePayload, this.getAuthHeader(token));
```
Read-modify-write updates are race-prone and can lose ecosystem assignments.
Both methods fetch the user, mutate local state, then PUT the full representation. Concurrent requests for the same user can overwrite each other.
Use a conflict-safe update strategy (e.g., serialized per-user updates + retry-on-conflict merge) before persisting ecosystem_access.
Also applies to: 968-1025
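The serialize-per-user strategy this comment suggests can be sketched with a promise chain keyed by Keycloak user id, so read-modify-write cycles for the same user never interleave. `userLocks` and `runSerialized` are illustrative names, and a production version would add the retry-and-merge step the comment describes:

```typescript
// Sketch of "serialize updates per user": each keycloakUserId gets a
// promise chain, and a new update only starts once the previous one for
// that user has settled. Names are hypothetical, not from the PR.

const userLocks = new Map<string, Promise<unknown>>();

function runSerialized<T>(userId: string, task: () => Promise<T>): Promise<T> {
  const prev = userLocks.get(userId) ?? Promise.resolve();
  // Run the task whether the previous one resolved or rejected.
  const next = prev.then(task, task);
  // Keep the chain alive even if this task fails.
  userLocks.set(userId, next.catch(() => undefined));
  return next;
}
```

Inside the serialized task, the service would re-fetch the latest user, merge `ecosystem_access`, and PUT — so a concurrent assignment can no longer be overwritten.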
```typescript
this.logger.log(`[updateUserEcosystemAccess] Current attributes: ${JSON.stringify(attributes)}`);

let ecosystemAccess: Record<string, { ecosystem_role: { lead: string[]; member: string[] } }> = {};
if (attributes.ecosystem_access && attributes.ecosystem_access[0]) {
  try {
    ecosystemAccess = JSON.parse(attributes.ecosystem_access[0]);
    this.logger.log(
      `[updateUserEcosystemAccess] Parsed existing ecosystem_access: ${JSON.stringify(ecosystemAccess)}`
    );
```
Avoid logging raw user attributes and full update payloads.
These logs include sensitive identity/attribute data and should be redacted or summarized.
🔧 Proposed fix (safe, minimal logs)

```diff
-  this.logger.log(`[updateUserEcosystemAccess] Current attributes: ${JSON.stringify(attributes)}`);
+  this.logger.debug(`[updateUserEcosystemAccess] Attributes loaded for user ${keycloakUserId}`);
@@
-  this.logger.log(`[updateUserEcosystemAccess] Updated ecosystem_access: ${JSON.stringify(ecosystemAccess)}`);
+  this.logger.debug(`[updateUserEcosystemAccess] ecosystem_access updated for org ${orgId}`);
@@
-  this.logger.log(`[updateUserEcosystemAccess] Sending update to Keycloak...`, updatePayload);
+  this.logger.debug(`[updateUserEcosystemAccess] Sending update to Keycloak for user ${keycloakUserId}`);
@@
-  this.logger.log(`[removeUserEcosystemAccess] Current attributes: ${JSON.stringify(attributes)}`);
+  this.logger.debug(`[removeUserEcosystemAccess] Attributes loaded for user ${keycloakUserId}`);
@@
-  this.logger.log(`[removeUserEcosystemAccess] Sending update to Keycloak...`);
+  this.logger.debug(`[removeUserEcosystemAccess] Sending update to Keycloak for user ${keycloakUserId}`);
```

Also applies to: 850-859, 975-983, 1022-1028
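The redaction suggested here can take the form of a small summarizer that logs attribute keys and value counts instead of raw contents. A minimal sketch — `summarizeAttributes` is a hypothetical helper, not code from the PR:

```typescript
// Log-safe summary of Keycloak user attributes, per the redaction
// suggestion: emit only keys and value counts, never the raw values.
// summarizeAttributes is an illustrative name.

type UserAttributes = Record<string, string[]>;

function summarizeAttributes(attributes: UserAttributes): string {
  const parts = Object.entries(attributes).map(([key, values]) => `${key}(${values.length})`);
  return parts.length > 0 ? parts.join(', ') : '<none>';
}
```

A call site would then log `this.logger.debug(\`Attributes: ${summarizeAttributes(attributes)}\`)` rather than `JSON.stringify(attributes)`.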
```typescript
const targetClients = clients.filter(
  (client: { clientId: string; bearerOnly: boolean }) =>
    !client.clientId.startsWith('realm-') && !client.bearerOnly && !systemClients.includes(client.clientId)
);

const excludedClients = clients.filter(
  (client: { clientId: string; bearerOnly: boolean }) =>
    client.clientId.startsWith('realm-') || client.bearerOnly || systemClients.includes(client.clientId)
);
```
ESLint issue in implicit arrow linebreak formatting.
Static analysis flags this block; it should be reformatted to satisfy implicit-arrow-linebreak at Line 179 and Line 184.
🔧 Proposed lint-safe formatting
- const targetClients = clients.filter(
- (client: { clientId: string; bearerOnly: boolean }) =>
- !client.clientId.startsWith('realm-') && !client.bearerOnly && !systemClients.includes(client.clientId)
- );
+ const targetClients = clients.filter((client: { clientId: string; bearerOnly: boolean }) =>
+ !client.clientId.startsWith('realm-') && !client.bearerOnly && !systemClients.includes(client.clientId)
+ );
- const excludedClients = clients.filter(
- (client: { clientId: string; bearerOnly: boolean }) =>
- client.clientId.startsWith('realm-') || client.bearerOnly || systemClients.includes(client.clientId)
- );
+ const excludedClients = clients.filter((client: { clientId: string; bearerOnly: boolean }) =>
+ client.clientId.startsWith('realm-') || client.bearerOnly || systemClients.includes(client.clientId)
🧰 Tools
🪛 ESLint
[error] 179-179: Expected no linebreak before this expression.
(implicit-arrow-linebreak)
[error] 184-184: Expected no linebreak before this expression.
(implicit-arrow-linebreak)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@libs/keycloak-config/src/keycloak-config.service.ts` around lines 177 - 185,
The implicit-arrow-linebreak lint errors are caused by line breaks inside the
arrow callbacks used for computing targetClients and excludedClients; fix by
reformatting the arrow functions for clients.filter so the arrow (=>) and its
body are on the same line or break after the whole arrow expression, e.g. adjust
the callbacks used in targetClients and excludedClients (the filter callbacks
that reference client.clientId, client.bearerOnly, and systemClients.includes)
so they satisfy implicit-arrow-linebreak formatting rules.
const usersUrl = `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/users?max=100`;
this.logger.log(`Fetching users from: ${usersUrl}`);
const users = await this.commonService.httpGet(usersUrl, this.getAuthHeader(token));
User verification only checks one page (max=100).
The audit can report false confidence in realms with more than 100 users.
🔧 Proposed fix (paginate users in verification step)
- const usersUrl = `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/users?max=100`;
- this.logger.log(`Fetching users from: ${usersUrl}`);
- const users = await this.commonService.httpGet(usersUrl, this.getAuthHeader(token));
+ const users: any[] = [];
+ const pageSize = 100;
+ let first = 0;
+ while (true) {
+ const usersUrl = `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/users?first=${first}&max=${pageSize}`;
+ this.logger.log(`Fetching users from: ${usersUrl}`);
+ const page = await this.commonService.httpGet(usersUrl, this.getAuthHeader(token));
+ if (!Array.isArray(page) || 0 === page.length) {
+ break;
+ }
+ users.push(...page);
+ if (page.length < pageSize) {
+ break;
+ }
+ first += page.length;
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@libs/keycloak-config/src/keycloak-config.service.ts` around lines 257 - 259,
The current user fetch builds usersUrl with `?max=100` and only calls
`this.commonService.httpGet` once (in the block that uses `usersUrl`, `realm`,
and `token`), which misses additional pages when realms have >100 users; change
the logic in the method that constructs `usersUrl` (referencing `usersUrl`,
`this.commonService.httpGet`, and `this.getAuthHeader`) to paginate: loop
requesting pages using Keycloak pagination params (e.g., add `first` and
`max`/pageSize query params), increment `first` by the page size after each
successful `httpGet`, append/aggregate returned users into a single array, and
stop when a page returns fewer users than the page size (or an empty array),
then proceed with verification using the aggregated list.
  firstName: cachedConfig.platformName,
  lastName: cachedConfig.platformName,
- password: decryptedPassword.toString(CryptoJS.enc.Utf8)
+ password: 'Password@1'
Remove hardcoded Keycloak admin password immediately.
Line 700 hardcodes a predictable credential ('Password@1') for a privileged account. This is a release-blocking security issue. Load it from environment/runtime secret instead.
🔐 Proposed fix
export async function createKeycloakUser(): Promise<void> {
logger.log(`✅ Creating keycloak user for platform admin`);
const { platformAdminData } = JSON.parse(configData);
- if (!platformAdminData?.password) {
- throw new Error('platformAdminData password is missing from credebl-master-table.json');
- }
if (!cachedConfig) {
throw new Error('failed to load platform config data from db');
}
const {
KEYCLOAK_DOMAIN,
KEYCLOAK_REALM,
PLATFORM_ADMIN_KEYCLOAK_ID,
PLATFORM_ADMIN_KEYCLOAK_SECRET,
- CRYPTO_PRIVATE_KEY
+ CRYPTO_PRIVATE_KEY,
+ PLATFORM_ADMIN_USER_PASSWORD
} = process.env;
if (
!KEYCLOAK_DOMAIN ||
!KEYCLOAK_REALM ||
!PLATFORM_ADMIN_KEYCLOAK_ID ||
!PLATFORM_ADMIN_KEYCLOAK_SECRET ||
- !CRYPTO_PRIVATE_KEY
+ !CRYPTO_PRIVATE_KEY ||
+ !PLATFORM_ADMIN_USER_PASSWORD
) {
throw new Error(
- 'Missing required environment variables for either PLATFORM_ADMIN_USER_PASSWORD or KEYCLOAK_DOMAIN or KEYCLOAK_REALM or PLATFORM_ADMIN_KEYCLOAK_ID or PLATFORM_ADMIN_KEYCLOAK_SECRET or CRYPTO_PRIVATE_KEY'
+ 'Missing required environment variables for PLATFORM_ADMIN_USER_PASSWORD, KEYCLOAK_DOMAIN, KEYCLOAK_REALM, PLATFORM_ADMIN_KEYCLOAK_ID, PLATFORM_ADMIN_KEYCLOAK_SECRET, or CRYPTO_PRIVATE_KEY'
);
}
const token = await getKeycloakToken();
const user = {
username: cachedConfig.platformEmail,
email: cachedConfig.platformEmail,
firstName: cachedConfig.platformName,
lastName: cachedConfig.platformName,
- password: 'Password@1'
+ password: PLATFORM_ADMIN_USER_PASSWORD
};

Based on learnings: seed data keeps sensitive fields empty and populates them from .env at runtime.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@libs/prisma-service/prisma/seed.ts` at line 700, Remove the hardcoded
Keycloak admin password value ('Password@1') used in the seeding object and
instead read the credential from a runtime secret/environment variable (e.g.,
process.env.KEYCLOAK_ADMIN_PASSWORD); update the seeding logic that assigns the
password property (the object with password: 'Password@1') to either throw or
abort if the env var is missing, or set it to an empty/placeholder value so no
real secret is checked into source control, and ensure any helper function that
creates the admin (e.g., seedKeycloakAdmin or the admin object in the seed
routine) uses the env var rather than the literal.
const req = lib.request(
  {
    hostname: parsed.hostname,
    port: parsed.port,
    path: parsed.pathname + parsed.search,
    method,
    headers
  },
  (res) => {
    let data = '';
    res.on('data', (chunk) => (data += chunk));
    res.on('end', () => {
      if (400 <= res.statusCode) {
        reject(new Error(`${method} ${url} returned ${res.statusCode}: ${data}`));
        return;
      }
      try {
        resolve(data ? JSON.parse(data) : null);
      } catch {
        resolve(data);
      }
    });
  }
);
req.on('error', reject);
if (body) {
  req.write(body);
}
req.end();
});
HTTP requests can hang indefinitely without timeout controls.
A stuck Keycloak/network call will block the script forever.
🔧 Proposed fix (request timeout + explicit timeout error)
- const req = lib.request(
+ const req = lib.request(
{
hostname: parsed.hostname,
port: parsed.port,
path: parsed.pathname + parsed.search,
method,
- headers
+ headers,
+ timeout: 15000
},
(res) => {
let data = '';
res.on('data', (chunk) => (data += chunk));
res.on('end', () => {
@@
}
);
+ req.setTimeout(15000, () => {
+ req.destroy(new Error(`${method} ${url} timed out after 15000ms`));
+ });
req.on('error', reject);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const req = lib.request( | |
| { | |
| hostname: parsed.hostname, | |
| port: parsed.port, | |
| path: parsed.pathname + parsed.search, | |
| method, | |
| headers | |
| }, | |
| (res) => { | |
| let data = ''; | |
| res.on('data', (chunk) => (data += chunk)); | |
| res.on('end', () => { | |
| if (400 <= res.statusCode) { | |
| reject(new Error(`${method} ${url} returned ${res.statusCode}: ${data}`)); | |
| return; | |
| } | |
| try { | |
| resolve(data ? JSON.parse(data) : null); | |
| } catch { | |
| resolve(data); | |
| } | |
| }); | |
| } | |
| ); | |
| req.on('error', reject); | |
| if (body) { | |
| req.write(body); | |
| } | |
| req.end(); | |
| }); | |
| const req = lib.request( | |
| { | |
| hostname: parsed.hostname, | |
| port: parsed.port, | |
| path: parsed.pathname + parsed.search, | |
| method, | |
| headers, | |
| timeout: 15000 | |
| }, | |
| (res) => { | |
| let data = ''; | |
| res.on('data', (chunk) => (data += chunk)); | |
| res.on('end', () => { | |
| if (400 <= res.statusCode) { | |
| reject(new Error(`${method} ${url} returned ${res.statusCode}: ${data}`)); | |
| return; | |
| } | |
| try { | |
| resolve(data ? JSON.parse(data) : null); | |
| } catch { | |
| resolve(data); | |
| } | |
| }); | |
| } | |
| ); | |
| req.setTimeout(15000, () => { | |
| req.destroy(new Error(`${method} ${url} timed out after 15000ms`)); | |
| }); | |
| req.on('error', reject); | |
| if (body) { | |
| req.write(body); | |
| } | |
| req.end(); | |
| }); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/cleanup-ecosystem-mappers.ts` around lines 35 - 64, The HTTP request
currently created via lib.request (variable req) can hang indefinitely; add a
request timeout and ensure the promise rejects with an explicit timeout error:
define a timeoutMs (e.g., 30000), call req.setTimeout(timeoutMs, () =>
req.destroy(new Error(`Request timed out after ${timeoutMs}ms: ${method}
${url}`))); and also create a manual timer (const timer = setTimeout(...)) or
clear any timers on successful response end/error to avoid leaks; update the
req.on('error', ...) and the res.on('end', ...) handlers to clear the timer and
propagate the explicit timeout error via reject so the surrounding promise
always resolves or rejects within timeoutMs.
const clientsUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/clients?max=1000`;
const clients = await request('GET', clientsUrl, token);
Cleanup is incomplete in larger realms due to single-page fetches.
The script states “all clients/users” but only requests one page (max=1000 / max=500). Entries beyond that are skipped.
🔧 Proposed fix (paginate until exhausted)
+async function fetchAll(urlBase: string, token: string, pageSize = 200): Promise<any[]> {
+ const all: any[] = [];
+ let first = 0;
+ while (true) {
+ const sep = urlBase.includes('?') ? '&' : '?';
+ const page = await request('GET', `${urlBase}${sep}first=${first}&max=${pageSize}`, token);
+ if (!Array.isArray(page) || page.length === 0) {
+ break;
+ }
+ all.push(...page);
+ if (page.length < pageSize) {
+ break;
+ }
+ first += page.length;
+ }
+ return all;
+}
+
- const clientsUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/clients?max=1000`;
- const clients = await request('GET', clientsUrl, token);
+ const clientsUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/clients`;
+ const clients = await fetchAll(clientsUrl, token);
...
- const usersUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/users?max=500`;
- const users = await request('GET', usersUrl, token);
+ const usersUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/users`;
+ const users = await fetchAll(usersUrl, token);Also applies to: 142-143
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/cleanup-ecosystem-mappers.ts` around lines 107 - 108, The current
single-page fetch using clientsUrl and request('GET', clientsUrl, token) only
retrieves up to max items and misses remaining entries; change both client and
user fetches to paginate until no more results by repeatedly calling
request('GET', ...) with a page offset (or page param) and aggregating into the
clients (and users) arrays (the variables clients and the analogous users fetch
at the other occurrence), adjusting the URL construction (clientsUrl) to include
start/first/offset + max/pageSize and stop when an empty page or fewer-than-max
items is returned; ensure the same pagination logic is applied to the other
fetch block referenced (lines 142-143).
Force-pushed: 643108d to 900fe8a
Actionable comments posted: 4
♻️ Duplicate comments (9)
libs/client-registration/src/client-registration.service.ts (3)
807-816: ⚠️ Potential issue | 🟠 Major
Parse-failure path still resets `ecosystemAccess` to empty object.
If existing `ecosystem_access` JSON is malformed, continuing with `{}` risks overwriting previously stored assignments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/client-registration/src/client-registration.service.ts` around lines 807 - 816, The parse-failure branch in updateUserEcosystemAccess currently sets ecosystemAccess = {} which risks overwriting stored assignments; instead, on JSON.parse failure of attributes.ecosystem_access[0] do not initialize an empty object—either set ecosystemAccess to null/undefined and abort/skip the update logic that would overwrite existing ecosystem assignments, or surface the parse error so the caller can handle it; update the catch in updateUserEcosystemAccess to log the error (include error details) and return/skip merging rather than assigning {} to ecosystemAccess.
804-812: ⚠️ Potential issue | 🟠 Major
Sensitive attribute/payload logging is still present.
Raw attributes, parsed `ecosystem_access`, and outbound payload logging can leak identity/authorization data to logs.
Also applies to: 850-859, 975-983, 1022-1023
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/client-registration/src/client-registration.service.ts` around lines 804 - 812, Avoid logging sensitive attribute payloads in updateUserEcosystemAccess: remove or redact raw attributes, parsed ecosystemAccess, and any outbound payload logs (e.g., the this.logger.log calls that print attributes, ecosystemAccess, and payload) and replace them with non-sensitive messages (e.g., log only that parsing succeeded/failed or the number of entries). Ensure parsing errors still log an error without including sensitive content by using generic messages and include only safe context (function name updateUserEcosystemAccess and whether parse succeeded/failed). Also apply the same redaction/remove change to the other logger.log calls that print these values elsewhere in this file (the similar blocks around attributes/ecosystem_access handling).
796-862: ⚠️ Potential issue | 🟠 Major
Read-modify-write updates are still race-prone (lost updates risk).
Both update/remove flows still fetch user → mutate local JSON → put full user document, which is vulnerable under concurrent writes.
Also applies to: 968-1025
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/client-registration/src/client-registration.service.ts` around lines 796 - 862, The code in updateUserEcosystemAccess does a read-modify-write of the whole user (commonService.httpGet → mutate attributes → commonService.httpPut userUrl), which is race-prone; change it to a safe partial/atomic update: either call Keycloak's attribute-only update (use the Keycloak Admin partial/attributes endpoint instead of PUTting the full user) or implement a CAS-style retry loop where you fetch the latest attributes, merge the ecosystem_access change, attempt the update, and on conflict (or if the attributes changed) re-fetch and retry a few times; update references to updateUserEcosystemAccess, commonService.httpGet/commonService.httpPut, userUrl, getAuthHeader and attributes.ecosystem_access to use the partial update or retry/compare-and-swap flow.
libs/keycloak-config/src/keycloak-config.service.ts (3)
29-34: ⚠️ Potential issue | 🟠 Major
Initialization failure is still swallowed in `onModuleInit`.
This remains unresolved: startup logs an error but continues execution after configuration failure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/keycloak-config/src/keycloak-config.service.ts` around lines 29 - 34, The onModuleInit catch block in keycloak-config.service.ts currently logs the failure but swallows the error, allowing the app to continue; modify the catch in onModuleInit (the block catching "error" where this.logger.error(...) is called) to rethrow the error or call process.exit(1) after logging so initialization failure stops startup (e.g., throw error or process.exit with a clear logged message) to ensure the service does not continue running in a bad state.
257-259: ⚠️ Potential issue | 🟠 Major
User verification still checks only first page (`max=100`).
This can miss users in larger realms and produce incomplete verification results.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/keycloak-config/src/keycloak-config.service.ts` around lines 257 - 259, The user listing only fetches a single page (max=100) and can miss users; update the method that builds usersUrl and calls this.commonService.httpGet (where users is assigned) to implement pagination: set pageSize (e.g., 100) and loop increasing the "first" offset parameter (or follow next links if available) appending results until the returned page length is less than pageSize; aggregate into a single users array, use this.getAuthHeader(token) for each request, and keep the this.logger.log message but log each paginated fetch or final total for visibility.
177-185: ⚠️ Potential issue | 🟡 Minor
ESLint `implicit-arrow-linebreak` is still present.
This formatting issue at Line 179 and Line 184 appears unchanged.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/keycloak-config/src/keycloak-config.service.ts` around lines 177 - 185, Adjust the arrow function formatting for the targetClients and excludedClients filters to fix the implicit-arrow-linebreak lint error: ensure the arrow (=>) is on the same line as the parameter list (or reformat to use a block-bodied arrow function with braces) for the callbacks used in clients.filter, i.e. the anonymous functions passed to clients.filter that reference client.clientId and client.bearerOnly should have their parameter list and => on the same line so the expressions for targetClients and excludedClients comply with the ESLint rule.
apps/ecosystem/src/ecosystem.service.ts (1)
236-307: ⚠️ Potential issue | 🟠 Major
Do not swallow Keycloak sync failures in helper methods.
`updateKeycloakEcosystemLeads`, `updateKeycloakEcosystemMember`, and `updateOrgServiceAccountEcosystem` currently log and continue, which can silently leave DB state and Keycloak state out of sync.
🔧 Proposed fix pattern (apply to all three helpers)
private async updateKeycloakEcosystemLeads(
  @@
): Promise<void> {
  try {
    @@
    await this.updateOrgServiceAccountEcosystem(orgId, ecosystemId, 'lead', token);
  } catch (error) {
-   this.logger.error(`Failed to update Keycloak ecosystem leads: ${error.message}`);
+   const err = error instanceof Error ? error : new Error(String(error));
+   this.logger.error(
+     `Failed to update Keycloak ecosystem leads for user ${user.id}, org ${orgId}, ecosystem ${ecosystemId}: ${err.message}`,
+     err.stack
+   );
+   throw err;
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ecosystem/src/ecosystem.service.ts` around lines 236 - 307, The three helper methods updateKeycloakEcosystemLeads, updateKeycloakEcosystemMember, and updateOrgServiceAccountEcosystem are catching errors and only logging them (swallowing failures); change each to either remove the try/catch or rethrow the caught error after logging so failures propagate to the caller (e.g., catch (error) { this.logger.error(..., error); throw error; }), include the full error object/details in the log, and ensure any callers of these methods handle the propagated errors appropriately.
scripts/cleanup-ecosystem-mappers.ts (2)
35-64: ⚠️ Potential issue | 🟠 Major
Add request timeout handling to avoid indefinite hangs.
`http`/`https.request` currently has no timeout guard, so a stuck Keycloak/network call can block the script forever.
🔧 Proposed fix
function request(method: string, url: string, token?: string, body?: string): Promise<any> {
  return new Promise((resolve, reject) => {
+   const timeoutMs = 30000;
    const parsed = new URL(url);
    const isHttps = 'https:' === parsed.protocol;
    const lib = isHttps ? https : http;
    @@
    const req = lib.request(
    @@
    );
-   req.on('error', reject);
+   const timer = setTimeout(() => {
+     req.destroy(new Error(`Request timed out after ${timeoutMs}ms: ${method} ${url}`));
+   }, timeoutMs);
+
+   req.on('error', (err) => {
+     clearTimeout(timer);
+     reject(err);
+   });
+
+   req.on('response', (res) => {
+     res.on('end', () => clearTimeout(timer));
+     res.on('error', () => clearTimeout(timer));
+   });
+
    if (body) {
      req.write(body);
    }
    req.end();
  });
}
#!/bin/bash
# Verify timeout protections in request helper
rg -n "lib\\.request|req\\.setTimeout|timeoutMs|setTimeout\\(" scripts/cleanup-ecosystem-mappers.ts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/cleanup-ecosystem-mappers.ts` around lines 35 - 64, The HTTP request lacks a timeout so lib.request calls (the req created in the callback handling parsed, method, url, body) can hang indefinitely; add a timeoutMs constant (or parameter) and call req.setTimeout(timeoutMs, () => { req.destroy(new Error(`Request timed out after ${timeoutMs}ms`)); reject(new Error(...)); }) and also listen for the 'timeout'/'error' events to ensure the promise is rejected and the socket/req is cleaned up (use req.destroy or req.abort) when the timeout fires; ensure you don't double-reject by guarding that resolve/reject are only called once.
107-108: ⚠️ Potential issue | 🟠 Major
Paginate clients/users fetches; single-page reads miss records.
Current `max=1000` and `max=500` fetches skip entries in larger realms, so cleanup is incomplete.
🔧 Proposed fix
+async function fetchAll(urlBase: string, token: string, pageSize = 200): Promise<any[]> {
+  const all: any[] = [];
+  let first = 0;
+  while (true) {
+    const sep = urlBase.includes('?') ? '&' : '?';
+    const page = await request('GET', `${urlBase}${sep}first=${first}&max=${pageSize}`, token);
+    if (!Array.isArray(page) || page.length === 0) break;
+    all.push(...page);
+    if (page.length < pageSize) break;
+    first += page.length;
+  }
+  return all;
+}
@@
- const clientsUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/clients?max=1000`;
- const clients = await request('GET', clientsUrl, token);
+ const clientsUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/clients`;
+ const clients = await fetchAll(clientsUrl, token);
@@
- const usersUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/users?max=500`;
- const users = await request('GET', usersUrl, token);
+ const usersUrl = `${KEYCLOAK_DOMAIN}admin/realms/${KEYCLOAK_REALM}/users`;
+ const users = await fetchAll(usersUrl, token);
Also applies to: 142-143
🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/cleanup-ecosystem-mappers.ts` around lines 107 - 108, The clients and users fetches currently use a single GET with max=1000/500 which misses records; update the logic around clientsUrl/clients and the analogous usersUrl/users fetch at the later block to paginate: perform repeated request('GET', ...) calls using Keycloak pagination parameters (e.g., add/update query params like first=<offset> and max=<pageSize>), increment offset by pageSize until a request returns fewer than pageSize items, and aggregate results into the clients/users arrays before proceeding with cleanup; ensure you keep using the same request function and tokens (token, KEYCLOAK_DOMAIN, KEYCLOAK_REALM) and stop when no more items are returned.
🧹 Nitpick comments (2)
libs/keycloak-url/src/keycloak-url.service.ts (1)
78-92: Deduplicate protocol-mapper URL helpers.
`GetClientProtocolMappersURL` and `GetClientProtocolMappersByIdURL` currently build the same URL. Keeping both with duplicated logic invites drift.
♻️ Suggested simplification
async GetClientProtocolMappersByIdURL(realm: string, clientIdpId: string): Promise<string> {
-  return `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/clients/${clientIdpId}/protocol-mappers/models`;
+  return this.GetClientProtocolMappersURL(realm, clientIdpId);
}
🤖 Prompt for AI Agents
async GetClientProtocolMappersByIdURL(realm: string, clientIdpId: string): Promise<string> { - return `${process.env.KEYCLOAK_DOMAIN}admin/realms/${realm}/clients/${clientIdpId}/protocol-mappers/models`; + return this.GetClientProtocolMappersURL(realm, clientIdpId); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/keycloak-url/src/keycloak-url.service.ts` around lines 78 - 92, GetClientProtocolMappersURL and GetClientProtocolMappersByIdURL produce identical URLs so remove the duplication by keeping a single helper (e.g., GetClientProtocolMappersURL) and delete the other, or implement GetClientProtocolMappersByIdURL to delegate to GetClientProtocolMappersURL; update all call sites to use the single method name (referencing GetClientProtocolMappersURL and GetClientProtocolMappersByIdURL in the diff) and ensure the chosen method signature (realm: string, clientIdpId: string) and returned string format remain unchanged.
apps/ecosystem/src/ecosystem.service.ts (1)
206-208: Remove commented-out dead code.
These commented lines add noise and should be dropped now that the new validation flow is in place.
🔧 Proposed cleanup
- // if (!invitation) {
- //   throw new ForbiddenException(ResponseMessages.ecosystem.error.invitationRequired);
- // }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/ecosystem/src/ecosystem.service.ts` around lines 206 - 208, Remove the commented-out dead code in ecosystem.service.ts: delete the three-line block that checks `invitation` and would throw `ForbiddenException(ResponseMessages.ecosystem.error.invitationRequired)` (the commented `if (!invitation) { ... }` lines), since the new validation flow makes them obsolete and they only add noise.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts`:
- Around line 131-134: In ecosystem-roles.guard.ts replace the debug placeholder
strings in the ForbiddenException calls (the '1111111' and '222222' message
arguments) with the standard, actionable message from
ResponseMessages.errorMessages.forbidden; update both occurrences where
ForbiddenException is constructed (including the instance that also sets cause:
new Error('error')) so they pass the canonical forbidden message and preserve
the existing metadata (cause and description) in the exception.
- Around line 92-105: The guard currently stops at membership check (using
ecosystemAccess and hasAccess) and unconditionally returns true after setting
user.selectedEcosystem, which bypasses enforcement of requiredRoles; update the
logic in the guard (the block around hasAccess, user.selectedEcosystem and the
final return) to, after verifying membership, evaluate the requiredRoles for the
request and ensure the user has one of those roles (distinguish lead vs member
using entry.ecosystem_role.lead and .member from EcosystemRoleGroup), throwing
ForbiddenException if none match, and only return true when both membership and
requiredRoles checks pass.
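The membership-then-role check described above can be sketched as follows; the `ecosystem_access` claim shape and the function name are assumptions for illustration, not the guard's actual code:

```typescript
// Sketch: membership check first, then role check, over a JWT ecosystem_access claim.
// The claim shape ({ lead, member } arrays of ecosystem IDs) is an assumption.
type EcosystemAccessClaim = { lead: string[]; member: string[] };

function checkEcosystemRole(
  access: EcosystemAccessClaim,
  ecosystemId: string,
  requiredRoles: Array<'lead' | 'member'>
): boolean {
  const isLead = access.lead.includes(ecosystemId);
  const isMember = access.member.includes(ecosystemId);
  if (!isLead && !isMember) {
    return false; // not part of the ecosystem at all -> 403
  }
  if (0 === requiredRoles.length) {
    return true; // membership-only route (no role metadata set)
  }
  return requiredRoles.some((role) => ('lead' === role ? isLead : isMember));
}
```

The guard would throw `ForbiddenException` when this returns false and attach `selectedEcosystem` only when both checks pass.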
In `@apps/api-gateway/src/ecosystem/ecosystem.controller.ts`:
- Line 111: The EcosystemRolesGuard is being applied on the route but there is
no role metadata provided, so the guard will short-circuit; either add the
appropriate `@Roles`(...) decorator with the required role(s) to the same
controller method (or controller class) to enable authorization checks, or
remove EcosystemRolesGuard from the `@UseGuards`(...) list if no role-based
restriction is intended—look for the controller method annotated with
`@UseGuards`(AuthGuard('jwt'), EcosystemRolesGuard) and update it to include
`@Roles`('RequiredRole') or drop EcosystemRolesGuard accordingly.
In `@scripts/cleanup-ecosystem-mappers.ts`:
- Around line 12-16: Add a fail-fast check immediately after reading the
Keycloak env vars (KEYCLOAK_DOMAIN, KEYCLOAK_REALM,
KEYCLOAK_MANAGEMENT_CLIENT_ID, KEYCLOAK_MANAGEMENT_CLIENT_SECRET) to verify none
are empty/undefined; if any are missing, log a clear error listing the missing
variables (via console.error/processLogger.error) and exit the process with
non‑zero status (process.exit(1)) so URL construction and auth do not proceed
with invalid config.
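The fail-fast check described in the last item could look like this minimal sketch; the `findMissingEnv` helper is illustrative, while the variable names come from the script:

```typescript
// Illustrative fail-fast check: collect missing Keycloak env vars before any URL is built.
function findMissingEnv(required: string[], env: Record<string, string | undefined>): string[] {
  return required.filter((name) => !env[name]);
}

const REQUIRED_KEYCLOAK_ENV = [
  'KEYCLOAK_DOMAIN',
  'KEYCLOAK_REALM',
  'KEYCLOAK_MANAGEMENT_CLIENT_ID',
  'KEYCLOAK_MANAGEMENT_CLIENT_SECRET'
];

// At script start-up (sketch):
// const missing = findMissingEnv(REQUIRED_KEYCLOAK_ENV, process.env);
// if (missing.length > 0) {
//   console.error(`Missing required env vars: ${missing.join(', ')}`);
//   process.exit(1);
// }
```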
---
Duplicate comments:
In `@apps/ecosystem/src/ecosystem.service.ts`:
- Around line 236-307: The three helper methods updateKeycloakEcosystemLeads,
updateKeycloakEcosystemMember, and updateOrgServiceAccountEcosystem are catching
errors and only logging them (swallowing failures); change each to either remove
the try/catch or rethrow the caught error after logging so failures propagate to
the caller (e.g., catch (error) { this.logger.error(..., error); throw error;
}), include the full error object/details in the log, and ensure any callers of
these methods handle the propagated errors appropriately.
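The log-and-rethrow shape asked for above, shown in isolation (a hypothetical wrapper, not the service's actual code):

```typescript
// Hypothetical wrapper showing the log-and-rethrow shape for Keycloak sync helpers.
async function syncWithRethrow<T>(
  label: string,
  logError: (message: string, stack?: string) => void,
  op: () => Promise<T>
): Promise<T> {
  try {
    return await op();
  } catch (error) {
    const err = error instanceof Error ? error : new Error(String(error));
    logError(`${label} failed: ${err.message}`, err.stack);
    throw err; // propagate so the caller can roll back or surface the failure
  }
}
```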
In `@libs/client-registration/src/client-registration.service.ts`:
- Around line 807-816: The parse-failure branch in updateUserEcosystemAccess
currently sets ecosystemAccess = {} which risks overwriting stored assignments;
instead, on JSON.parse failure of attributes.ecosystem_access[0] do not
initialize an empty object—either set ecosystemAccess to null/undefined and
abort/skip the update logic that would overwrite existing ecosystem assignments,
or surface the parse error so the caller can handle it; update the catch in
updateUserEcosystemAccess to log the error (include error details) and
return/skip merging rather than assigning {} to ecosystemAccess.
- Around line 804-812: Avoid logging sensitive attribute payloads in
updateUserEcosystemAccess: remove or redact raw attributes, parsed
ecosystemAccess, and any outbound payload logs (e.g., the this.logger.log calls
that print attributes, ecosystemAccess, and payload) and replace them with
non-sensitive messages (e.g., log only that parsing succeeded/failed or the
number of entries). Ensure parsing errors still log an error without including
sensitive content by using generic messages and include only safe context
(function name updateUserEcosystemAccess and whether parse succeeded/failed).
Also apply the same redaction/remove change to the other logger.log calls that
print these values elsewhere in this file (the similar blocks around
attributes/ecosystem_access handling).
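A small sketch of the redaction idea: log only safe context (function name, outcome, entry count), never the raw payload. The message format here is an assumption, not the service's actual log text:

```typescript
// Build a log message that reveals only non-sensitive context.
function logEcosystemAccessParse(parsed: Record<string, unknown> | null): string {
  if (parsed === null) {
    return 'updateUserEcosystemAccess: parse failed';
  }
  return `updateUserEcosystemAccess: parse succeeded (${Object.keys(parsed).length} entries)`;
}
```

Applying the same rule to the outbound-payload logs means replacing `this.logger.log(payload)` calls with messages like these, so attribute contents never land in log storage.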
- Around line 796-862: The code in updateUserEcosystemAccess does a
read-modify-write of the whole user (commonService.httpGet → mutate attributes →
commonService.httpPut userUrl), which is race-prone; change it to a safe
partial/atomic update: either call Keycloak's attribute-only update (use the
Keycloak Admin partial/attributes endpoint instead of PUTting the full user) or
implement a CAS-style retry loop where you fetch the latest attributes, merge
the ecosystem_access change, attempt the update, and on conflict (or if the
attributes changed) re-fetch and retry a few times; update references to
updateUserEcosystemAccess, commonService.httpGet/commonService.httpPut, userUrl,
getAuthHeader and attributes.ecosystem_access to use the partial update or
retry/compare-and-swap flow.
In `@libs/keycloak-config/src/keycloak-config.service.ts`:
- Around line 29-34: The onModuleInit catch block in keycloak-config.service.ts
currently logs the failure but swallows the error, allowing the app to continue;
modify the catch in onModuleInit (the block catching "error" where
this.logger.error(...) is called) to rethrow the error or call process.exit(1)
after logging so initialization failure stops startup (e.g., throw error or
process.exit with a clear logged message) to ensure the service does not
continue running in a bad state.
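The fail-fast shape can be sketched without pulling in NestJS; the method name matches the `OnModuleInit` hook, but the failing body is simulated:

```typescript
class KeycloakConfigService {
  async onModuleInit(): Promise<void> {
    try {
      await this.configureKeycloak();
    } catch (error) {
      // Log with context, then rethrow so Nest aborts bootstrap instead of
      // running with an unconfigured realm (process.exit(1) is the alternative).
      console.error('Keycloak configuration failed; aborting startup', error);
      throw error;
    }
  }

  private async configureKeycloak(): Promise<void> {
    throw new Error('realm unreachable'); // simulated failure for the sketch
  }
}
```

When `onModuleInit` rethrows, `NestFactory.create` rejects and the process exits with a clear error, which is exactly the behavior the comment asks for.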
- Around line 257-259: The user listing only fetches a single page (max=100) and
can miss users; update the method that builds usersUrl and calls
this.commonService.httpGet (where users is assigned) to implement pagination:
set pageSize (e.g., 100) and loop increasing the "first" offset parameter (or
follow next links if available) appending results until the returned page length
is less than pageSize; aggregate into a single users array, use
this.getAuthHeader(token) for each request, and keep the this.logger.log message
but log each paginated fetch or final total for visibility.
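The pagination loop described above can be sketched as follows. `httpGet` is a hypothetical stand-in for `this.commonService.httpGet`, and the `first`/`max` query parameters are Keycloak's standard paging controls on the users endpoint:

```typescript
// Fetch all users by walking pages until a short page signals the end.
async function fetchAllUsers(
  httpGet: (url: string) => Promise<Array<{ id: string }>>,
  baseUrl: string,
  pageSize = 100
): Promise<Array<{ id: string }>> {
  const users: Array<{ id: string }> = [];
  for (let first = 0; ; first += pageSize) {
    const page = await httpGet(`${baseUrl}?first=${first}&max=${pageSize}`);
    users.push(...page);
    if (page.length < pageSize) {
      break; // a page shorter than pageSize is the last one
    }
  }
  return users;
}
```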
- Around line 177-185: Adjust the arrow function formatting for the
targetClients and excludedClients filters to fix the implicit-arrow-linebreak
lint error: ensure the arrow (=>) is on the same line as the parameter list (or
reformat to use a block-bodied arrow function with braces) for the callbacks
used in clients.filter, i.e. the anonymous functions passed to clients.filter
that reference client.clientId and client.bearerOnly should have their parameter
list and => on the same line so the expressions for targetClients and
excludedClients comply with the ESLint rule.
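A compliant shape for the two filters might look like this; the `KcClient` type and sample data are illustrative, only the formatting pattern is the point:

```typescript
interface KcClient {
  clientId: string;
  bearerOnly?: boolean;
}

const clients: KcClient[] = [
  { clientId: 'platform-admin' },
  { clientId: 'internal-api', bearerOnly: true }
];

// Keeping `=>` on the same line as the parameter list (or using a block body)
// satisfies the implicit-arrow-linebreak rule.
const targetClients = clients.filter((client) => {
  return !client.bearerOnly;
});
const excludedClients = clients.filter((client) => {
  return Boolean(client.bearerOnly);
});
```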
In `@scripts/cleanup-ecosystem-mappers.ts`:
- Around line 35-64: The HTTP request lacks a timeout so lib.request calls (the
req created in the callback handling parsed, method, url, body) can hang
indefinitely; add a timeoutMs constant (or parameter) and call
req.setTimeout(timeoutMs, () => { req.destroy(new Error(`Request timed out after
${timeoutMs}ms`)); reject(new Error(...)); }) and also listen for the
'timeout'/'error' events to ensure the promise is rejected and the socket/req is
cleaned up (use req.destroy or req.abort) when the timeout fires; ensure you
don't double-reject by guarding that resolve/reject are only called once.
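A sketch of the timeout-plus-single-settle guard, reduced to a plain `http.get` so it stays self-contained (the script's real helper also handles https, methods, and bodies):

```typescript
import * as http from 'http';

// GET a URL with a hard timeout; the `settled` flag prevents double-resolution.
function requestWithTimeout(url: string, timeoutMs: number): Promise<string> {
  return new Promise((resolve, reject) => {
    let settled = false;
    const once = (fn: () => void): void => {
      if (!settled) {
        settled = true;
        fn();
      }
    };
    const req = http.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => {
        body += chunk;
      });
      res.on('end', () => once(() => resolve(body)));
    });
    req.setTimeout(timeoutMs, () => {
      // destroy() tears down the socket and emits 'error' with this reason,
      // so the rejection flows through the single 'error' handler below.
      req.destroy(new Error(`Request timed out after ${timeoutMs}ms`));
    });
    req.on('error', (err) => once(() => reject(err)));
  });
}
```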
- Around line 107-108: The clients and users fetches currently use a single GET
with max=1000/500 which misses records; update the logic around
clientsUrl/clients and the analogous usersUrl/users fetch at the later block to
paginate: perform repeated request('GET', ...) calls using Keycloak pagination
parameters (e.g., add/update query params like first=<offset> and
max=<pageSize>), increment offset by pageSize until a request returns fewer than
pageSize items, and aggregate results into the clients/users arrays before
proceeding with cleanup; ensure you keep using the same request function and
tokens (token, KEYCLOAK_DOMAIN, KEYCLOAK_REALM) and stop when no more items are
returned.
---
Nitpick comments:
In `@apps/ecosystem/src/ecosystem.service.ts`:
- Around line 206-208: Remove the commented-out dead code in
ecosystem.service.ts: delete the three-line block that checks `invitation` and
would throw
`ForbiddenException(ResponseMessages.ecosystem.error.invitationRequired)` (the
commented `if (!invitation) { ... }` lines), since the new validation flow makes
them obsolete and they only add noise.
In `@libs/keycloak-url/src/keycloak-url.service.ts`:
- Around line 78-92: GetClientProtocolMappersURL and
GetClientProtocolMappersByIdURL produce identical URLs so remove the duplication
by keeping a single helper (e.g., GetClientProtocolMappersURL) and delete the
other, or implement GetClientProtocolMappersByIdURL to delegate to
GetClientProtocolMappersURL; update all call sites to use the single method name
(referencing GetClientProtocolMappersURL and GetClientProtocolMappersByIdURL in
the diff) and ensure the chosen method signature (realm: string, clientIdpId:
string) and returned string format remain unchanged.
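The delegation option can be sketched like this; the base URL and path format are assumptions for illustration, not the service's actual values:

```typescript
class KeycloakUrlService {
  private readonly keycloakDomain = 'https://keycloak.example.com'; // assumed base URL

  GetClientProtocolMappersURL(realm: string, clientIdpId: string): string {
    return `${this.keycloakDomain}/admin/realms/${realm}/clients/${clientIdpId}/protocol-mappers/models`;
  }

  /** @deprecated Use GetClientProtocolMappersURL; kept so call sites can migrate gradually. */
  GetClientProtocolMappersByIdURL(realm: string, clientIdpId: string): string {
    return this.GetClientProtocolMappersURL(realm, clientIdpId);
  }
}
```

Delegating keeps both signatures valid during migration; once all call sites use the canonical name, the deprecated wrapper can be deleted.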
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- apps/api-gateway/src/app.module.ts
- apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts
- apps/api-gateway/src/authz/jwt-payload.interface.ts
- apps/api-gateway/src/authz/jwt.strategy.ts
- apps/api-gateway/src/ecosystem/ecosystem.controller.ts
- apps/api-gateway/src/ecosystem/intent/intent.controller.ts
- apps/ecosystem/src/ecosystem.module.ts
- apps/ecosystem/src/ecosystem.service.ts
- libs/client-registration/src/client-registration.module.ts
- libs/client-registration/src/client-registration.service.ts
- libs/keycloak-config/src/index.ts
- libs/keycloak-config/src/keycloak-config.module.ts
- libs/keycloak-config/src/keycloak-config.service.ts
- libs/keycloak-url/src/keycloak-url.service.ts
- libs/prisma-service/prisma/seed.ts
- scripts/cleanup-ecosystem-mappers.ts
- tsconfig.json
🚧 Files skipped from review as they are similar to previous changes (7)
- libs/prisma-service/prisma/seed.ts
- apps/api-gateway/src/ecosystem/intent/intent.controller.ts
- libs/keycloak-config/src/keycloak-config.module.ts
- libs/keycloak-config/src/index.ts
- tsconfig.json
- apps/api-gateway/src/authz/jwt.strategy.ts
- apps/ecosystem/src/ecosystem.module.ts
const hasAccess = Object.values(ecosystemAccess).some((entry: EcosystemRoleGroup) => {
  const leadList = entry?.ecosystem_role?.lead ?? [];
  const memberList = entry?.ecosystem_role?.member ?? [];
  return leadList.includes(ecosystemId) || memberList.includes(ecosystemId);
});

if (!hasAccess) {
  throw new ForbiddenException(
    ResponseMessages.ecosystem?.error?.ecosystemNotFound || 'User does not have access to this ecosystem'
  );
}

user.selectedEcosystem = ecosystemId;
return true;
Critical auth bypass in ecosystem path (return true ignores required roles).
After hasAccess succeeds, the guard returns true without enforcing requiredRoles. This lets any user with ecosystem membership pass endpoints that require stronger org roles, and it also treats lead/member as equivalent.
🔧 Suggested fix
- const hasAccess = Object.values(ecosystemAccess).some((entry: EcosystemRoleGroup) => {
- const leadList = entry?.ecosystem_role?.lead ?? [];
- const memberList = entry?.ecosystem_role?.member ?? [];
- return leadList.includes(ecosystemId) || memberList.includes(ecosystemId);
- });
+ const roleMatch = Object.values(ecosystemAccess).some((entry: EcosystemRoleGroup) => {
+ const leadList = entry?.ecosystem_role?.lead ?? [];
+ const memberList = entry?.ecosystem_role?.member ?? [];
+
+ const leadOk =
+ !requiredRolesNames.includes(OrgRoles.ECOSYSTEM_LEAD) || leadList.includes(ecosystemId);
+ const memberOk =
+ !requiredRolesNames.includes(OrgRoles.ECOSYSTEM_MEMBER) ||
+ leadList.includes(ecosystemId) ||
+ memberList.includes(ecosystemId);
+
+ return leadOk && memberOk;
+ });
- if (!hasAccess) {
+ if (!roleMatch) {
throw new ForbiddenException(
ResponseMessages.ecosystem?.error?.ecosystemNotFound || 'User does not have access to this ecosystem'
);
}
user.selectedEcosystem = ecosystemId;
- return true;
+ return true;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts` around lines 92 -
105, The guard currently stops at membership check (using ecosystemAccess and
hasAccess) and unconditionally returns true after setting
user.selectedEcosystem, which bypasses enforcement of requiredRoles; update the
logic in the guard (the block around hasAccess, user.selectedEcosystem and the
final return) to, after verifying membership, evaluate the requiredRoles for the
request and ensure the user has one of those roles (distinguish lead vs member
using entry.ecosystem_role.lead and .member from EcosystemRoleGroup), throwing
ForbiddenException if none match, and only return true when both membership and
requiredRoles checks pass.
throw new ForbiddenException('1111111', {
  cause: new Error('error'),
  description: ResponseMessages.errorMessages.forbidden
});
Replace debug placeholder forbidden messages before merge.
Line 131 and Line 166 currently throw with '1111111' / '222222'. These are non-actionable and should use standard response messages.
Also applies to: 166-169
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api-gateway/src/authz/guards/ecosystem-roles.guard.ts` around lines 131
- 134, In ecosystem-roles.guard.ts replace the debug placeholder strings in the
ForbiddenException calls (the '1111111' and '222222' message arguments) with the
standard, actionable message from ResponseMessages.errorMessages.forbidden;
update both occurrences where ForbiddenException is constructed (including the
instance that also sets cause: new Error('error')) so they pass the canonical
forbidden message and preserve the existing metadata (cause and description) in
the exception.
const { KEYCLOAK_DOMAIN } = process.env;
const { KEYCLOAK_REALM } = process.env;
const { KEYCLOAK_MANAGEMENT_CLIENT_ID } = process.env;
const { KEYCLOAK_MANAGEMENT_CLIENT_SECRET } = process.env;
Add fail-fast validation for required Keycloak env vars.
When these values are missing, URL construction and auth fail later with less actionable errors.
🔧 Proposed fix
dotenv.config();
+const requiredEnv = [
+ 'KEYCLOAK_DOMAIN',
+ 'KEYCLOAK_REALM',
+ 'KEYCLOAK_MANAGEMENT_CLIENT_ID',
+ 'KEYCLOAK_MANAGEMENT_CLIENT_SECRET'
+] as const;
+
+const missingEnv = requiredEnv.filter((key) => !process.env[key]);
+if (missingEnv.length > 0) {
+ throw new Error(`Missing required env vars: ${missingEnv.join(', ')}`);
+}
+
const { KEYCLOAK_DOMAIN } = process.env;
const { KEYCLOAK_REALM } = process.env;
const { KEYCLOAK_MANAGEMENT_CLIENT_ID } = process.env;
const { KEYCLOAK_MANAGEMENT_CLIENT_SECRET } = process.env;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const requiredEnv = [
  'KEYCLOAK_DOMAIN',
  'KEYCLOAK_REALM',
  'KEYCLOAK_MANAGEMENT_CLIENT_ID',
  'KEYCLOAK_MANAGEMENT_CLIENT_SECRET'
] as const;

const missingEnv = requiredEnv.filter((key) => !process.env[key]);
if (missingEnv.length > 0) {
  throw new Error(`Missing required env vars: ${missingEnv.join(', ')}`);
}

const { KEYCLOAK_DOMAIN } = process.env;
const { KEYCLOAK_REALM } = process.env;
const { KEYCLOAK_MANAGEMENT_CLIENT_ID } = process.env;
const { KEYCLOAK_MANAGEMENT_CLIENT_SECRET } = process.env;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/cleanup-ecosystem-mappers.ts` around lines 12 - 16, Add a fail-fast
check immediately after reading the Keycloak env vars (KEYCLOAK_DOMAIN,
KEYCLOAK_REALM, KEYCLOAK_MANAGEMENT_CLIENT_ID,
KEYCLOAK_MANAGEMENT_CLIENT_SECRET) to verify none are empty/undefined; if any
are missing, log a clear error listing the missing variables (via
console.error/processLogger.error) and exit the process with non‑zero status
(process.exit(1)) so URL construction and auth do not proceed with invalid
config.
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: pranalidhanavade <pranali.dhanavade@ayanworks.com>
Force-pushed from b9ee7d4 to a27bc21
A new PR with the same changes has been raised (#1578). Closing this.


What
Summary by CodeRabbit
New Features
Improvements