Conversation
Signed-off-by: Lukasz Gryglicki <lgryglicki@cncf.io>
Assisted by [OpenAI](https://platform.openai.com/)
Assisted by [GitHub Copilot](https://github.com/features/copilot)
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings, and use the review commands to resume.
Walkthrough
Adds a complete Go-based CLA Legacy Backend: HTTP handlers, middleware, AWS/DynamoDB stores, GitHub/Salesforce/LF integrations, contract templates and PDF generation, Serverless deployment manifests, build tooling, and new CI/CD workflows for build/test/lint/security for the legacy service.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Server as Legacy API Server
    participant GitHub as GitHub OAuth
    participant Users as Users Service
    participant DynamoDB
    Client->>Server: POST /v2/github_oauth/sign
    Server->>Server: store session metadata / state
    Server->>GitHub: Redirect user to OAuth authorize URL
    Client->>GitHub: User consents
    Client->>Server: GET /v2/github_oauth/callback?code&state
    Server->>GitHub: POST /login/oauth/access_token (exchange code)
    GitHub-->>Server: access_token
    Server->>GitHub: GET /user & /user/emails
    GitHub-->>Server: user info + emails
    Server->>Users: Lookup or create user with GitHub data
    Users->>DynamoDB: read/write user record
    DynamoDB-->>Users: user id
    Users-->>Server: user object
    Server->>DynamoDB: persist signature/metadata
    Server-->>Client: Redirect to CLA console (signed)
```
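The code-for-token exchange step in the diagram above can be sketched roughly as follows. This is a minimal illustration only: the endpoint and form fields follow GitHub's documented OAuth web flow, and `buildTokenExchange` is a hypothetical helper, not a function from this PR.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildTokenExchange prepares the POST /login/oauth/access_token request
// from the diagram: GitHub exchanges the temporary "code" for an
// access_token. Hypothetical sketch, not code from the PR.
func buildTokenExchange(clientID, clientSecret, code string) (endpoint string, form url.Values) {
	endpoint = "https://github.com/login/oauth/access_token"
	form = url.Values{
		"client_id":     {clientID},
		"client_secret": {clientSecret},
		"code":          {code},
	}
	return endpoint, form
}

func main() {
	endpoint, form := buildTokenExchange("my-client-id", "my-secret", "abc123")
	// In the real flow the server POSTs this form (with an
	// "Accept: application/json" header) and reads access_token from
	// the response before calling GET /user and /user/emails.
	fmt.Println(endpoint)
	fmt.Println(form.Encode())
}
```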
```mermaid
sequenceDiagram
    participant Client
    participant Server as API Server
    participant Contracts as Contract Renderer
    participant Templates as Embedded FS
    participant DocRaptor
    Client->>Server: POST /v1/signature/render?template=cncf&type=corporate
    Server->>Contracts: RenderHTML(template, type, version, legalEntity, preamble)
    Contracts->>Templates: read templates.json and HTML file
    Templates-->>Contracts: template HTML + tab metadata
    Contracts-->>Server: rendered HTML
    Server->>DocRaptor: GeneratePDF(html)
    DocRaptor-->>Server: PDF bytes
    Server-->>Client: PDF response (application/pdf)
```
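The render-then-PDF flow above can be sketched as a small pipeline. The interfaces and names here (`Renderer`, `PDFGenerator`, `renderSignaturePDF`) are hypothetical stand-ins for the PR's contracts renderer and DocRaptor client, shown only to illustrate the control flow:

```go
package main

import "fmt"

// Renderer and PDFGenerator are hypothetical stand-ins for the PR's
// contracts renderer and DocRaptor client; the names are illustrative.
type Renderer interface {
	RenderHTML(template, docType string) (string, error)
}

type PDFGenerator interface {
	GeneratePDF(html string) ([]byte, error)
}

// renderSignaturePDF mirrors the diagram: render the contract template
// to HTML, then pass that HTML to the PDF service and return the bytes
// that would be served as application/pdf.
func renderSignaturePDF(r Renderer, p PDFGenerator, template, docType string) ([]byte, error) {
	html, err := r.RenderHTML(template, docType)
	if err != nil {
		return nil, fmt.Errorf("render %s/%s: %w", template, docType, err)
	}
	return p.GeneratePDF(html)
}

// Fake implementations so the sketch runs without network access.
type fakeRenderer struct{}

func (fakeRenderer) RenderHTML(t, d string) (string, error) {
	return "<html>" + t + "/" + d + "</html>", nil
}

type fakePDF struct{}

func (fakePDF) GeneratePDF(h string) ([]byte, error) {
	return []byte("%PDF " + h), nil
}

func main() {
	out, err := renderSignaturePDF(fakeRenderer{}, fakePDF{}, "cncf", "corporate")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```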
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pull request overview
This PR introduces a new cla-backend-legacy Go module intended to replace the legacy EasyCLA Python v1/v2 backend, including DynamoDB access layers, middleware, contract template rendering, and CI/CD wiring for build/deploy.
Changes:
- Added a full Go-based legacy backend module (`cla-backend-legacy`) with stores, middleware, legacy integrations, and contract template rendering.
- Added/duplicated runtime resources (HTML contract templates + CLA badge assets) under `cla-backend-legacy/resources` and embedded templates under `internal/contracts/templates`.
- Updated GitHub Actions workflows to build/test/lint (and deploy in dev/prod workflows) the new legacy backend.
Reviewed changes
Copilot reviewed 90 out of 98 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| cla-backend-legacy/resources/tungsten-fabric-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/tungsten-fabric-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/tekton-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/tekton-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/openvdb-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/openvdb-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/opencolorio-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/opencolorio-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/openbmc-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/openbmc-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/onap-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/onap-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/cncf-individual-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/cncf-corporate-cla.html | Adds legacy CLA HTML resource template. |
| cla-backend-legacy/resources/cla-unsigned.svg | Adds “CLA Not Signed” badge asset. |
| cla-backend-legacy/resources/cla-signed.svg | Adds “CLA Signed” badge asset. |
| cla-backend-legacy/resources/cla-signed.png | Adds “CLA Signed” raster badge asset. |
| cla-backend-legacy/resources/cla-notsigned.png | Adds “CLA Not Signed” raster badge asset. |
| cla-backend-legacy/package.json | Adds Serverless-related Node tooling for deployment. |
| cla-backend-legacy/internal/store/user_permissions.go | Adds DynamoDB store for user-permissions with add/remove project operations. |
| cla-backend-legacy/internal/store/signatures.go | Adds DynamoDB access patterns for signatures and key GSIs. |
| cla-backend-legacy/internal/store/project_cla_groups.go | Adds DynamoDB queries for projects-cla-groups mappings. |
| cla-backend-legacy/internal/store/kv_store.go | Adds DynamoDB-backed KV store used for sessions. |
| cla-backend-legacy/internal/store/gitlab_orgs.go | Adds DynamoDB scan-based lookup for GitLab org mappings. |
| cla-backend-legacy/internal/store/github_orgs.go | Adds DynamoDB access patterns for GitHub org records. |
| cla-backend-legacy/internal/store/gerrit_instances.go | Adds DynamoDB access patterns for Gerrit instance records. |
| cla-backend-legacy/internal/store/events.go | Adds DynamoDB access patterns for audit/event records. |
| cla-backend-legacy/internal/store/dynamo_conv_reverse.go | Adds interface-map → DynamoDB AttributeValue conversion helpers. |
| cla-backend-legacy/internal/store/dynamo_conv.go | Adds DynamoDB AttributeValue → JSON-friendly conversion helpers. |
| cla-backend-legacy/internal/store/dynamo.go | Adds DynamoDB client + stage/table naming helpers. |
| cla-backend-legacy/internal/store/company_invites.go | Adds DynamoDB write helper for company invites. |
| cla-backend-legacy/internal/store/companies.go | Adds DynamoDB access patterns for companies. |
| cla-backend-legacy/internal/store/ccla_allowlist_requests.go | Adds DynamoDB write helper for CCLA allowlist requests. |
| cla-backend-legacy/internal/server/server.go | Adds the main HTTP handler wiring (router + telemetry). |
| cla-backend-legacy/internal/respond/respond.go | Adds JSON response helpers and legacy-compatible 404/405 payloads. |
| cla-backend-legacy/internal/pdf/docraptor.go | Adds DocRaptor integration to generate PDFs from HTML. |
| cla-backend-legacy/internal/middleware/session.go | Adds DynamoDB-backed server-side session middleware. |
| cla-backend-legacy/internal/middleware/request_log.go | Adds legacy-style request logging with E2E markers. |
| cla-backend-legacy/internal/middleware/cors.go | Adds CORS middleware mirroring legacy Python headers. |
| cla-backend-legacy/internal/logging/logging.go | Adds a minimal logging wrapper with LOG_LEVEL support. |
| cla-backend-legacy/internal/legacy/lfgroup/lfgroup.go | Adds LF Group client for validating/updating LDAP groups. |
| cla-backend-legacy/internal/legacy/github/webhook.go | Adds GitHub webhook signature validation helper (sha1 header). |
| cla-backend-legacy/internal/legacy/github/service.go | Adds minimal GitHub helper service (namespace/org checks). |
| cla-backend-legacy/internal/legacy/github/pull_request.go | Adds GitHub PR HTML URL lookup via installation token. |
| cla-backend-legacy/internal/legacy/github/cache.go | Adds in-memory TTL cache mirroring Python GitHub cache behavior. |
| cla-backend-legacy/internal/featureflags/flags.go | Adds env/stage-based feature flag helper with caching. |
| cla-backend-legacy/internal/email/sns.go | Adds SNS-based email event publisher matching legacy payload format. |
| cla-backend-legacy/internal/email/ses.go | Adds SES-based email sender. |
| cla-backend-legacy/internal/email/email.go | Adds email service selector driven by EMAIL_SERVICE env var. |
| cla-backend-legacy/internal/email/aws.go | Adds AWS client constructors for SES/SNS with region resolution. |
| cla-backend-legacy/internal/contracts/types.go | Adds types for template tab configuration. |
| cla-backend-legacy/internal/contracts/templates/tungsten-fabric-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/tungsten-fabric-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/tekton-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/tekton-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/openvdb-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/openvdb-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/opencolorio-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/opencolorio-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/openbmc-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/openbmc-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/onap-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/onap-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/cncf-individual-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/cncf-corporate-cla.html | Adds embedded contract template HTML. |
| cla-backend-legacy/internal/contracts/templates/templates.json | Adds embedded template registry (files + DocuSign tab layouts). |
| cla-backend-legacy/internal/contracts/contracts.go | Adds embedded template loading + HTML rendering + tab accessors. |
| cla-backend-legacy/internal/config/ssm.go | Adds SSM parameter lookup helper with caching. |
| cla-backend-legacy/go.mod | Adds Go module definition + dependencies for legacy backend. |
| cla-backend-legacy/env.json | Adds env.json placeholder for Serverless variable resolution. |
| cla-backend-legacy/cmd/legacy-api/main.go | Adds Lambda entrypoint (API Gateway proxy adapter). |
| cla-backend-legacy/cmd/legacy-api-local/main.go | Adds local HTTP server entrypoint for development. |
| cla-backend-legacy/README.md | Adds documentation for building/deploying/running the legacy backend. |
| cla-backend-legacy/Makefile | Adds build/lint targets for lambda + local binaries. |
| cla-backend-legacy/.golangci.yml | Adds golangci-lint configuration tuned for “legacy compatibility”. |
| .gitignore | Ignores bin/* outputs including legacy backend binaries. |
| .github/workflows/deploy-prod.yml | Adds build/test/lint + deploy steps for cla-backend-legacy in prod workflow. |
| .github/workflows/deploy-dev.yml | Adds build/test/lint + deploy steps for cla-backend-legacy in dev workflow (and introduces an additional legacy deploy job). |
| .github/workflows/cla-backend-legacy-deploy-prod.yml | Adds standalone workflow to build/test/lint/deploy legacy backend to prod on changes. |
| .github/workflows/cla-backend-legacy-deploy-dev.yml | Adds standalone workflow to build/test/lint/deploy legacy backend to dev on changes. |
| .github/workflows/build-pr.yml | Adds build/test/lint steps for cla-backend-legacy in PR workflow. |
cla-backend-legacy/internal/contracts/templates/openvdb-individual-cla.html
- Resolved yarn.lock conflict by taking latest dev version with security updates
- All Go backend migration changes preserved

Assisted by [OpenAI](https://platform.openai.com/)
Assisted by [GitHub Copilot](https://github.com/features/copilot)
Actionable comments posted: 6
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟡 Minor comments (25)
cla-backend-legacy/internal/featureflags/flags.go-9-12 (1)
9-12: ⚠️ Potential issue | 🟡 Minor — Cache key does not account for default parameters.
The cache is keyed solely by `envVar`, but `EnabledByEnvOrStage` also depends on `defaultNonProd` and `defaultProd`. If the same environment variable is queried with different default values across different call sites, the first call's result is cached and returned for all subsequent calls, ignoring the new defaults. This may be intentional (assuming the same env var always has consistent defaults), but if not, consider either:
- Including the defaults in the cache key.
- Documenting the constraint that each `envVar` must always be called with the same defaults.

Also applies to: 33-35, 54-55
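A minimal sketch of the first option — composing the key from the env var plus both defaults, so distinct defaults get distinct cache entries. `flagCacheKey` is an illustrative name, not the PR's actual helper:

```go
package main

import (
	"fmt"
	"strconv"
)

// flagCacheKey scopes a cached feature-flag value by its default
// parameters, so call sites that pass different defaults for the same
// env var get distinct cache entries instead of sharing the first
// cached result. Names are illustrative only.
func flagCacheKey(envVar string, defaultNonProd, defaultProd bool) string {
	return envVar + ":" + strconv.FormatBool(defaultNonProd) + ":" + strconv.FormatBool(defaultProd)
}

func main() {
	fmt.Println(flagCacheKey("GH_OAUTH_FLAG", true, false))
	// Different defaults yield a different key, unlike keying on envVar alone.
	fmt.Println(flagCacheKey("GH_OAUTH_FLAG", false, false))
}
```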
cla-backend-legacy/internal/legacy/github/cache.go-198-204 (1)
198-204: ⚠️ Potential issue | 🟡 Minor — Cache key delimiter collision risk.
The cache keys use `|` as a separator. If any input value contains `|`, different inputs can produce the same key, causing cache collisions. For example:
- `projectID="a|b"`, `githubID="c"` → key `"a|b|c|..."`
- `projectID="a"`, `githubID="b|c"` → key `"a|b|c|..."`

While GitHub IDs are numeric and usernames have restrictions that make this unlikely, emails can theoretically contain `|` in the local part. Consider using a delimiter that is explicitly disallowed in all fields (such as the null byte `\x00`), or escaping the delimiter in values.

cla-backend-legacy/README.md-169-169 (1)
169-169: ⚠️ Potential issue | 🟡 Minor — Fix hyphenation in compound adjective.
When "production ready" is used as a compound adjective before a noun, it should be hyphenated as "production-ready."
✏️ Proposed fix
```diff
-The codebase is production ready. Before deploying:
+The codebase is production-ready. Before deploying:
```

cla-backend-legacy/internal/auth/auth0.go-214-224 (1)
214-224: ⚠️ Potential issue | 🟡 Minor — Missing HTTP status code check before decoding response.
If Auth0 returns a non-2xx status (e.g., 404, 500), the code will attempt to decode the error body as JWKS, likely resulting in a confusing "unable to fetch well known jwks" error.
🛠️ Proposed fix
```diff
 resp, err := v.httpClient.Do(req)
 if err != nil {
 	return nil, &AuthError{Response: "unable to fetch well known jwks"}
 }
 defer resp.Body.Close()
+if resp.StatusCode != http.StatusOK {
+	return nil, &AuthError{Response: "unable to fetch well known jwks"}
+}
+
 var jwks jwksResponse
 if err := json.NewDecoder(resp.Body).Decode(&jwks); err != nil {
```

cla-backend-legacy/internal/legacy/salesforce/service.go-148-149 (1)
148-149: ⚠️ Potential issue | 🟡 Minor — Inconsistent Authorization header value casing.
`GetProjects` uses lowercase `"bearer"` (line 149) while `GetProject` uses `"Bearer"` (line 209). While the comments indicate Python parity, this inconsistency could cause confusion during maintenance. Per OAuth2 RFC 6750, the canonical form is `"Bearer"` (capitalized). Consider standardizing on the RFC-compliant form unless the downstream service explicitly requires different casing.
Also applies to: 208-209
cla-backend-legacy/internal/config/ssm.go-67-70 (1)
67-70: ⚠️ Potential issue | 🟡 Minor — Dead code: `stage` will never be empty.
`store.StageFromEnv()` already returns `"dev"` as a default when `STAGE` is empty (see `internal/store/dynamo.go` lines 15-21). The error condition on lines 68-70 will never trigger. Either remove the dead code or, if you want to enforce explicit `STAGE` configuration for SSM lookups, read the environment variable directly here.

Option 1: Remove dead code

```diff
 func GetEnvOrSSM(ctx context.Context, envName, ssmBaseKey string) (string, error) {
 	if v := strings.TrimSpace(os.Getenv(envName)); v != "" {
 		return v, nil
 	}
 	stage := store.StageFromEnv()
-	if stage == "" {
-		return "", fmt.Errorf("%s not set and STAGE is empty; cannot resolve SSM parameter", envName)
-	}
 	paramName := fmt.Sprintf("/%s-%s", strings.TrimPrefix(ssmBaseKey, "/"), stage)
 	return GetSSMParameter(ctx, paramName)
 }
```

Option 2: Read STAGE directly if explicit stage is required

```diff
 func GetEnvOrSSM(ctx context.Context, envName, ssmBaseKey string) (string, error) {
 	if v := strings.TrimSpace(os.Getenv(envName)); v != "" {
 		return v, nil
 	}
-	stage := store.StageFromEnv()
+	stage := strings.TrimSpace(os.Getenv("STAGE"))
 	if stage == "" {
 		return "", fmt.Errorf("%s not set and STAGE is empty; cannot resolve SSM parameter", envName)
 	}
 	paramName := fmt.Sprintf("/%s-%s", strings.TrimPrefix(ssmBaseKey, "/"), stage)
 	return GetSSMParameter(ctx, paramName)
 }
```

cla-backend-legacy/internal/legacy/lfgroup/lfgroup.go-106-106 (1)
106-106: ⚠️ Potential issue | 🟡 Minor — Use canonical "Bearer" capitalization for Authorization header.
The Authorization header uses `"bearer"` (lowercase), but per RFC 6750 examples and RFC 7235, the canonical form is `"Bearer"` (capital B). While HTTP auth schemes are matched case-insensitively, using the standard capitalization improves consistency with RFC specifications and common OAuth implementations. The same applies at line 148 in `AddUserToGroup`.

🔧 Suggested fix

```diff
-	req.Header.Set("Authorization", "bearer "+tok)
+	req.Header.Set("Authorization", "Bearer "+tok)
```

Apply the same fix at line 148.

cla-backend-legacy/serverless.yml-521-529 (1)
521-529: ⚠️ Potential issue | 🟡 Minor — Add a matching `DEFAULT_5XX` gateway response.
Only API Gateway-generated 4xx responses get CORS headers right now, so gateway-side 5xxs will still show up to browsers as opaque CORS failures.

Suggested patch

```diff
 GatewayResponse:
   Type: AWS::ApiGateway::GatewayResponse
   Properties:
     ResponseParameters:
       gatewayresponse.header.Access-Control-Allow-Origin: '''*'''
       gatewayresponse.header.Access-Control-Allow-Headers: '''*'''
     ResponseType: DEFAULT_4XX
     RestApiId:
       Ref: ApiGatewayRestApi
+GatewayResponse5XX:
+  Type: AWS::ApiGateway::GatewayResponse
+  Properties:
+    ResponseParameters:
+      gatewayresponse.header.Access-Control-Allow-Origin: '''*'''
+      gatewayresponse.header.Access-Control-Allow-Headers: '''*'''
+    ResponseType: DEFAULT_5XX
+    RestApiId:
+      Ref: ApiGatewayRestApi
```

cla-backend-legacy/internal/contracts/templates/tungsten-fabric-corporate-cla.html-1-2 (1)
1-2: ⚠️ Potential issue | 🟡 Minor — Add the document doctype.
Without `<!DOCTYPE html>`, this template can render in quirks mode, which is risky for a legal document/PDF flow.

💡 Suggested fix

```diff
+<!DOCTYPE html>
 <html>
```

cla-backend-legacy/resources/tungsten-fabric-individual-cla.html-1-2 (1)
1-2: ⚠️ Potential issue | 🟡 Minor — Add the document doctype.
This file has the same quirks-mode risk as the template version if it is opened directly or fed to an HTML renderer.

💡 Suggested fix

```diff
+<!DOCTYPE html>
 <html>
```

cla-backend-legacy/resources/openvdb-individual-cla.html-1-15 (1)
1-15: ⚠️ Potential issue | 🟡 Minor — Fix the malformed HTML wrapper.
The template is missing a doctype declaration and ends with `<html>` instead of `</html>`, forcing renderers to recover from invalid markup.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
@@
-<html>
+</html>
```

cla-backend-legacy/internal/contracts/templates/opencolorio-individual-cla.html-1-16 (1)
1-16: ⚠️ Potential issue | 🟡 Minor — Fix the malformed HTML wrapper.
The template is missing a doctype declaration and ends with `<html>` instead of `</html>`. While browsers can recover from this, the markup should be valid.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
@@
-<html>
+</html>
```

cla-backend-legacy/internal/contracts/templates/openvdb-individual-cla.html-1-15 (1)
1-15: ⚠️ Potential issue | 🟡 Minor — Fix the malformed HTML wrapper.
This file has no doctype and closes the root element with `<html>` on line 15 instead of `</html>`. Please fix both so the template renders in standards mode.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
@@
 </body>
-<html>
+</html>
```

cla-backend-legacy/resources/opencolorio-individual-cla.html-1-16 (1)
1-16: ⚠️ Potential issue | 🟡 Minor — Fix the HTML document boundaries.
This template is missing `<!DOCTYPE html>` and the root element is closed with `<html>` on line 16 instead of `</html>`. That leaves the document malformed and can change print/PDF rendering.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
 <h3 class="legal-entity-name" style="text-align: center">
 OpenColorIO Project a Series of LF Projects, LLC<br />
 {{document_type}} Contributor License Agreement ("Agreement") v{{major_version}}.{{minor_version}}
 </h3>
@@
 </body>
-<html>
+</html>
```

cla-backend-legacy/internal/contracts/templates/openvdb-corporate-cla.html-1-21 (1)
1-21: ⚠️ Potential issue | 🟡 Minor — Fix the malformed HTML wrapper.
This file is missing `<!DOCTYPE html>`, and the root tag is not properly closed on line 21. That should be `</html>`.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
@@
 </body>
-<html>
+</html>
```

cla-backend-legacy/internal/contracts/templates/opencolorio-corporate-cla.html-1-23 (1)
1-23: ⚠️ Potential issue | 🟡 Minor — Fix the malformed HTML wrapper.
This template is missing `<!DOCTYPE html>` and line 23 uses `<html>` instead of `</html>`. Please correct both before relying on it for print/PDF generation.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
@@
 </body>
-<html>
+</html>
```

cla-backend-legacy/internal/contracts/templates/onap-individual-cla.html-1-30 (1)
1-30: ⚠️ Potential issue | 🟡 Minor — Add the HTML doctype.
Without `<!DOCTYPE html>`, browsers/renderers may fall back to quirks mode, which is risky for legal document layout.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
```

cla-backend-legacy/internal/contracts/templates/tekton-individual-cla.html-1-30 (1)
1-30: ⚠️ Potential issue | 🟡 Minor

Add the HTML doctype.

Please add `<!DOCTYPE html>` before `<html>` so this template renders in standards mode consistently.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/contracts/templates/tekton-individual-cla.html` around lines 1 - 30, The template is missing the HTML doctype so browsers may render in quirks mode; open the tekton-individual-cla.html template and insert the HTML5 doctype declaration immediately before the existing `<html>` element (i.e., add `<!DOCTYPE html>` right above the `<html>` tag) so the document renders in standards mode.

cla-backend-legacy/internal/contracts/templates/openbmc-corporate-cla.html-1-64 (1)
1-64: ⚠️ Potential issue | 🟡 Minor

Add the HTML doctype.

These templates are intended for formal document rendering, so standards mode matters. Please add `<!DOCTYPE html>` at the top.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
 <body>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/contracts/templates/openbmc-corporate-cla.html` around lines 1 - 64, Insert the HTML5 doctype declaration as the very first line of the openbmc-corporate-cla.html template, i.e., place the doctype before the existing `<html>` root tag so the document renders in standards mode.

cla-backend-legacy/internal/contracts/templates/cncf-individual-cla.html-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Add DOCTYPE and fix invalid HTML closing tag.

The file is missing a DOCTYPE declaration and line 5 contains an invalid `</br />` tag. Add DOCTYPE to prevent quirks mode, and correct the closing tag syntax.

Suggested fix

```diff
+<!DOCTYPE html>
 <html>
- A project of The Linux Foundation</br />
+ A project of The Linux Foundation<br />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/contracts/templates/cncf-individual-cla.html` at line 1, Add an HTML5 doctype declaration at the very top (`<!DOCTYPE html>`) so the document doesn't render in quirks mode, and remove the invalid closing tag `</br />` (replace it with a proper self-closing or standalone break tag like `<br>` or `<br />`) inside the HTML file; specifically update the top of the document before the `<html>` tag and correct any occurrences of `</br />` to valid `<br>` syntax.

cla-backend-legacy/resources/tekton-individual-cla.html-21-21 (1)
21-21: ⚠️ Potential issue | 🟡 Minor

Minor typo: double space in legal text.

There's an extra space in "Contribution(s)  with the Work" (two spaces between "Contribution(s)" and "with").

Proposed fix

```diff
- <p>3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to the Project and to recipients of software distributed by the Project a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s)  with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.</p>
+ <p>3. Grant of Patent License. Subject to the terms and conditions of this Agreement, You hereby grant to the Project and to recipients of software distributed by the Project a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.</p>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/resources/tekton-individual-cla.html` at line 21, Fix the double space in the legal paragraph text by replacing "Contribution(s)  with the Work" with "Contribution(s) with the Work" in the paragraph content that begins "3. Grant of Patent License..." so there is only a single space between "Contribution(s)" and "with" in the HTML element containing that sentence.

cla-backend-legacy/resources/cncf-corporate-cla.html-1-37 (1)
1-37: ⚠️ Potential issue | 🟡 Minor

Consolidate duplicate file and add DOCTYPE declaration.

This file is identical to `cla-backend-legacy/internal/contracts/templates/cncf-corporate-cla.html`. Remove this copy to eliminate maintenance overhead. Additionally, add `<!DOCTYPE html>` at the beginning before the `<html>` tag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/resources/cncf-corporate-cla.html` around lines 1 - 37, This file is a duplicate copy; delete this redundant HTML file to consolidate the template into the single canonical template and avoid maintenance drift, then in the remaining canonical template add the missing DOCTYPE by inserting `<!DOCTYPE html>` immediately before the existing `<html>` tag (ensure the change is applied to the template that contains the same "Cloud Native Computing Foundation" header and Schedule A content so only one authoritative copy remains).

cla-backend-legacy/resources/cncf-individual-cla.html-5-5 (1)
5-5: ⚠️ Potential issue | 🟡 Minor

Invalid HTML: `</br />` should be `<br />`.

Line 5 contains `</br />`, which is invalid HTML syntax. The `<` before the slash creates a malformed closing tag. This could cause rendering issues in browsers or PDF generators.

🐛 Proposed fix

```diff
- A project of The Linux Foundation</br />
+ A project of The Linux Foundation<br />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/resources/cncf-individual-cla.html` at line 5, Replace the invalid closing break tag `</br />` used in the HTML snippet (the line containing "A project of The Linux Foundation</br />") with a proper self-closing break tag `<br />` so the markup is valid and will render correctly; locate the string `A project of The Linux Foundation</br />` and change only the `</br />` to `<br />`.

cla-backend-legacy/internal/store/projects.go-53-67 (1)
53-67: ⚠️ Potential issue | 🟡 Minor

Trim and short-circuit blank external IDs before querying.

`QueryByNameLower` already normalizes string input, but this helper sends raw whitespace through unchanged. That turns copy-pasted IDs with stray spaces into false misses and needless remote calls.

💡 Suggested change

```diff
 func (s *ProjectsStore) QueryByExternalID(ctx context.Context, externalID string) ([]map[string]types.AttributeValue, error) {
 	if s == nil || s.client == nil {
 		return nil, nil
 	}
+	externalID = strings.TrimSpace(externalID)
+	if externalID == "" {
+		return nil, nil
+	}
 	items := make([]map[string]types.AttributeValue, 0, 4)
 	var startKey map[string]types.AttributeValue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/projects.go` around lines 53 - 67, The QueryByExternalID method should trim whitespace and short-circuit empty external IDs to avoid useless DynamoDB calls: in ProjectsStore.QueryByExternalID call strings.TrimSpace on the incoming externalID, and if the result is empty return nil, nil immediately; then use the trimmed value in the QueryInput ExpressionAttributeValues (ensure you update the reference where ":eid" is set) so stray spaces don't produce false misses or unnecessary requests.

cla-backend-legacy/internal/store/users.go-55-70 (1)
55-70: ⚠️ Potential issue | 🟡 Minor

Trim and short-circuit blank LF usernames before querying.

The sibling string lookups normalize input first. Here, a copied username with leading/trailing spaces becomes a false miss and still burns a remote call.

💡 Suggested change

```diff
 func (s *UsersStore) QueryByLFUsername(ctx context.Context, username string) ([]map[string]types.AttributeValue, error) {
 	if s == nil || s.client == nil {
 		return nil, nil
 	}
+	username = strings.TrimSpace(username)
+	if username == "" {
+		return nil, nil
+	}
 	items := make([]map[string]types.AttributeValue, 0, 1)
 	var startKey map[string]types.AttributeValue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/users.go` around lines 55 - 70, QueryByLFUsername should trim and short-circuit blank LF usernames to avoid unnecessary remote queries: in the function QueryByLFUsername, normalize the input by calling strings.TrimSpace(username) (and ensure strings is imported), then if the trimmed username is empty return nil, nil immediately instead of proceeding to call s.client.Query; this mirrors the sibling lookups' behavior and prevents false misses and unnecessary network calls.
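Both store findings above reduce to the same two-step normalization: trim, then short-circuit on empty. A minimal sketch of that shared shape (`normalizeKey` is a hypothetical helper name, not something in the PR):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeKey trims surrounding whitespace and reports whether the
// result is non-empty, so lookups can bail out before issuing a query.
func normalizeKey(raw string) (string, bool) {
	k := strings.TrimSpace(raw)
	return k, k != ""
}

func main() {
	for _, in := range []string{"  proj-123  ", "   ", ""} {
		k, ok := normalizeKey(in)
		fmt.Printf("%q -> %q ok=%v\n", in, k, ok)
	}
}
```

Hanging one such helper off both `QueryByExternalID` and `QueryByLFUsername` would keep the trim/short-circuit behavior consistent across all the string lookups.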
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f6db833b-5c81-4a40-a569-a30434fe55fd
⛔ Files ignored due to path filters (6)
- cla-backend-legacy/go.sum is excluded by !**/*.sum
- cla-backend-legacy/resources/cla-notsigned.png is excluded by !**/*.png
- cla-backend-legacy/resources/cla-signed.png is excluded by !**/*.png
- cla-backend-legacy/resources/cla-signed.svg is excluded by !**/*.svg
- cla-backend-legacy/resources/cla-unsigned.svg is excluded by !**/*.svg
- tests/functional/yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (92)
.github/workflows/build-pr.yml, .github/workflows/cla-backend-legacy-deploy-dev.yml, .github/workflows/cla-backend-legacy-deploy-prod.yml, .github/workflows/deploy-dev.yml, .github/workflows/deploy-prod.yml, .gitignore, cla-backend-legacy/.golangci.yml, cla-backend-legacy/Makefile, cla-backend-legacy/README.md, cla-backend-legacy/cmd/legacy-api-local/main.go, cla-backend-legacy/cmd/legacy-api/main.go, cla-backend-legacy/env.json, cla-backend-legacy/go.mod, cla-backend-legacy/internal/api/github_oauth.go, cla-backend-legacy/internal/api/handlers.go, cla-backend-legacy/internal/api/router.go, cla-backend-legacy/internal/auth/auth0.go, cla-backend-legacy/internal/config/ssm.go, cla-backend-legacy/internal/contracts/contracts.go, cla-backend-legacy/internal/contracts/templates/cncf-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/cncf-individual-cla.html, cla-backend-legacy/internal/contracts/templates/onap-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/onap-individual-cla.html, cla-backend-legacy/internal/contracts/templates/openbmc-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/openbmc-individual-cla.html, cla-backend-legacy/internal/contracts/templates/opencolorio-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/opencolorio-individual-cla.html, cla-backend-legacy/internal/contracts/templates/openvdb-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/openvdb-individual-cla.html, cla-backend-legacy/internal/contracts/templates/tekton-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/tekton-individual-cla.html, cla-backend-legacy/internal/contracts/templates/templates.json, cla-backend-legacy/internal/contracts/templates/tungsten-fabric-corporate-cla.html, cla-backend-legacy/internal/contracts/templates/tungsten-fabric-individual-cla.html, cla-backend-legacy/internal/contracts/types.go, cla-backend-legacy/internal/email/aws.go, cla-backend-legacy/internal/email/email.go, cla-backend-legacy/internal/email/ses.go, cla-backend-legacy/internal/email/sns.go, cla-backend-legacy/internal/featureflags/flags.go, cla-backend-legacy/internal/legacy/github/app_installation.go, cla-backend-legacy/internal/legacy/github/cache.go, cla-backend-legacy/internal/legacy/github/oauth_app.go, cla-backend-legacy/internal/legacy/github/pull_request.go, cla-backend-legacy/internal/legacy/github/service.go, cla-backend-legacy/internal/legacy/github/webhook.go, cla-backend-legacy/internal/legacy/lfgroup/lfgroup.go, cla-backend-legacy/internal/legacy/salesforce/service.go, cla-backend-legacy/internal/legacy/userservice/userservice.go, cla-backend-legacy/internal/legacyproxy/proxy.go, cla-backend-legacy/internal/logging/logging.go, cla-backend-legacy/internal/middleware/cors.go, cla-backend-legacy/internal/middleware/request_log.go, cla-backend-legacy/internal/middleware/session.go, cla-backend-legacy/internal/pdf/docraptor.go, cla-backend-legacy/internal/respond/respond.go, cla-backend-legacy/internal/server/server.go, cla-backend-legacy/internal/store/ccla_allowlist_requests.go, cla-backend-legacy/internal/store/companies.go, cla-backend-legacy/internal/store/company_invites.go, cla-backend-legacy/internal/store/dynamo.go, cla-backend-legacy/internal/store/dynamo_conv.go, cla-backend-legacy/internal/store/dynamo_conv_reverse.go, cla-backend-legacy/internal/store/events.go, cla-backend-legacy/internal/store/gerrit_instances.go, cla-backend-legacy/internal/store/github_orgs.go, cla-backend-legacy/internal/store/gitlab_orgs.go, cla-backend-legacy/internal/store/kv_store.go, cla-backend-legacy/internal/store/project_cla_groups.go, cla-backend-legacy/internal/store/projects.go, cla-backend-legacy/internal/store/repositories.go, cla-backend-legacy/internal/store/signatures.go, cla-backend-legacy/internal/store/user_permissions.go, cla-backend-legacy/internal/store/users.go, cla-backend-legacy/internal/telemetry/datadog_otlp.go, cla-backend-legacy/package.json, cla-backend-legacy/resources/LF Group Operations.postman_collection.json, cla-backend-legacy/resources/cncf-corporate-cla.html, cla-backend-legacy/resources/cncf-individual-cla.html, cla-backend-legacy/resources/onap-corporate-cla.html, cla-backend-legacy/resources/onap-individual-cla.html, cla-backend-legacy/resources/openbmc-corporate-cla.html, cla-backend-legacy/resources/openbmc-individual-cla.html, cla-backend-legacy/resources/opencolorio-corporate-cla.html, cla-backend-legacy/resources/opencolorio-individual-cla.html, cla-backend-legacy/resources/openvdb-corporate-cla.html, cla-backend-legacy/resources/openvdb-individual-cla.html, cla-backend-legacy/resources/tekton-corporate-cla.html, cla-backend-legacy/resources/tekton-individual-cla.html, cla-backend-legacy/resources/tungsten-fabric-corporate-cla.html, cla-backend-legacy/resources/tungsten-fabric-individual-cla.html, cla-backend-legacy/serverless.yml
Signed-off-by: Lukasz Gryglicki <lgryglicki@cncf.io> Assisted by [OpenAI](https://platform.openai.com/) Assisted by [GitHub Copilot](https://github.com/features/copilot)
Actionable comments posted: 4
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (20)
cla-backend-legacy/internal/featureflags/flags.go-36-57 (1)
36-57: ⚠️ Potential issue | 🟠 Major

Cache key doesn't account for default parameters.

The cache uses only `envVar` as the key, but the computed result also depends on `defaultNonProd` and `defaultProd` when the environment variable is unset or unparseable. If the same `envVar` is called with different defaults from different call sites, the first caller's defaults will be cached and reused incorrectly for subsequent callers. For example:

```go
EnabledByEnvOrStage("MY_FLAG", true, false) // caches true (dev stage)
EnabledByEnvOrStage("MY_FLAG", false, true) // returns cached true, ignoring new defaults
```

Consider either:

- Including defaults in the cache key
- Documenting that each `envVar` must always be called with consistent defaults
- Defining flag defaults in a single registry
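The first option is easy to exercise in isolation. A runnable sketch with the defaults folded into the key (the `cacheKey` helper name and the standalone cache map here are illustrative, not the PR's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	cacheMu sync.Mutex
	cache   = map[string]bool{}
)

// cacheKey folds both defaults into the key, so two call sites using
// the same env var with different defaults never share a cached value.
func cacheKey(envVar string, defaultNonProd, defaultProd bool) string {
	return fmt.Sprintf("%s:%v:%v", envVar, defaultNonProd, defaultProd)
}

func main() {
	a := cacheKey("MY_FLAG", true, false)
	b := cacheKey("MY_FLAG", false, true)
	cacheMu.Lock()
	cache[a] = true
	cache[b] = false
	cacheMu.Unlock()
	fmt.Println(a != b, cache[a], cache[b])
}
```

With distinct keys, the second caller computes its own result instead of inheriting the first caller's cached answer.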
🔧 Option 1: Include defaults in cache key
```diff
 import (
+	"fmt"
 	"os"
 	"strings"
 	"sync"
 )

 func EnabledByEnvOrStage(envVar string, defaultNonProd bool, defaultProd bool) bool {
 	cacheMu.Lock()
 	defer cacheMu.Unlock()
+	cacheKey := fmt.Sprintf("%s:%v:%v", envVar, defaultNonProd, defaultProd)
-	if v, ok := cache[envVar]; ok {
+	if v, ok := cache[cacheKey]; ok {
 		return v
 	}
 	raw := os.Getenv(envVar)
 	if raw != "" {
 		if parsed, ok := parseBoolish(raw); ok {
-			cache[envVar] = parsed
+			cache[cacheKey] = parsed
 			return parsed
 		}
 	}
 	stage := strings.TrimSpace(strings.ToLower(os.Getenv("STAGE")))
 	if stage == "" {
 		stage = "dev"
 	}
 	isProd := stage == "prod"
 	enabled := defaultNonProd
 	if isProd {
 		enabled = defaultProd
 	}
-	cache[envVar] = enabled
+	cache[cacheKey] = enabled
 	return enabled
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/featureflags/flags.go` around lines 36 - 57, The cache key only uses envVar but the computed value also depends on defaultNonProd and defaultProd, so change the caching to include the defaults in the key (e.g., build a composite key from envVar + strconv.FormatBool(defaultNonProd) + ":" + strconv.FormatBool(defaultProd)) inside the EnabledByEnvOrStage function (or wherever this logic lives), update any cache lookup and assignment to use that composite key instead of just envVar, and ensure parseBoolish and env-based short-circuiting still use the same composite key for both reads and writes so callers with different defaults won't get the wrong cached value.

.github/workflows/cla-backend-legacy-deploy-dev.yml-100-112 (1)
100-112: ⚠️ Potential issue | 🟠 Major

Make the service check fail on a bad deployment.

Both health probes are wrapped in `|| echo ...`, so this workflow stays green even when neither endpoint is reachable. That removes the only post-deploy gate and can mark a broken dev rollout as successful. If failures are still temporarily acceptable, gate that with an explicit flag instead of swallowing them unconditionally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/cla-backend-legacy-deploy-dev.yml around lines 100 - 112, The health-check step "EasyCLA Legacy Backend Service Check" currently swallows failures by appending "|| echo ..." to the curl calls for v1_url and v2_url, so make the job fail when endpoints are unreachable by removing those fallbacks and letting curl --fail return a non-zero exit; if temporary acceptance is required, gate the swallowing behind an explicit environment flag (e.g., ALLOW_HEALTH_FAILURES) and conditionally run the permissive behavior only when that flag is set; update the logic that calls curl against v1_url and v2_url accordingly so failed probes cause the workflow to fail unless ALLOW_HEALTH_FAILURES=true.

cla-backend-legacy/internal/api/github_oauth.go-328-356 (1)
328-356: ⚠️ Potential issue | 🟠 Major

Guard the repo/project stores before redirect resolution.

This flow still dereferences `h.repos` and `h.projects` unconditionally. The rest of the file already treats missing dependencies as configuration errors, so a partially wired deployment will panic here instead of returning a JSON 500.

Suggested patch

```diff
 func (h *Handlers) githubRedirectToConsole(ctx context.Context, installationID, repositoryExternalID, pullRequestID, originURL string, sess middleware.Session, w http.ResponseWriter, r *http.Request) {
+	if h.repos == nil || h.projects == nil {
+		respond.JSON(w, http.StatusInternalServerError, map[string]any{"errors": "repository/project stores not configured"})
+		return
+	}
 	// Resolve repository by external id.
 	repoItem, ok, err := h.repos.GetByExternalIDAndType(ctx, repositoryExternalID, "github")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/api/github_oauth.go` around lines 328 - 356, githubRedirectToConsole currently dereferences h.repos and h.projects without nil checks; add guards at the start of githubRedirectToConsole to verify h.repos and h.projects are non-nil and if either is nil respond with respond.JSON(w, http.StatusInternalServerError, ...) containing an appropriate error message (e.g., "repositories store not configured" or "projects store not configured") so the handler fails gracefully instead of panicking; update the checks near uses of h.repos.GetByExternalIDAndType and h.projects.GetByID to assume the stores exist after the guard and keep existing error/ok handling intact.

cla-backend-legacy/internal/api/github_oauth.go-71-77 (1)
71-77: ⚠️ Potential issue | 🟠 Major

Don't pass a relative callback URL into GitHub OAuth.

When `CLA_API_BASE` is unset, this falls back to `/v2/github/installation`, which is passed to `githublegacy.BuildOAuthAuthorizeURL()` as the `redirect_uri` parameter. GitHub OAuth requires `redirect_uri` to be an absolute URL with a valid host and port; relative paths are rejected. Return `""` instead when the base is missing; GitHub's OAuth spec allows omitting the parameter, in which case it will use the app's configured callback URL.

Suggested patch

```diff
 func (h *Handlers) githubCallbackURL() string {
 	base := strings.TrimRight(strings.TrimSpace(os.Getenv("CLA_API_BASE")), "/")
 	if base == "" {
-		return "/v2/github/installation"
+		return ""
 	}
 	return base + "/v2/github/installation"
 }
```

Also applies to: 96-97, 109-110
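The patched fallback behavior can be exercised standalone; this sketch takes the base as a parameter instead of reading `CLA_API_BASE`, so the helper name and signature are simplified stand-ins for the handler method:

```go
package main

import (
	"fmt"
	"strings"
)

// callbackURL builds an absolute OAuth callback URL, or returns "" when
// no base is configured so the redirect_uri parameter can be omitted and
// GitHub falls back to the app's registered callback URL.
func callbackURL(base string) string {
	base = strings.TrimRight(strings.TrimSpace(base), "/")
	if base == "" {
		return ""
	}
	return base + "/v2/github/installation"
}

func main() {
	fmt.Println(callbackURL("https://api.example.org/")) // absolute URL, trailing slash trimmed
	fmt.Printf("%q\n", callbackURL("  "))                // empty: omit redirect_uri entirely
}
```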
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/api/github_oauth.go` around lines 71 - 77, The githubCallbackURL function currently returns a relative path when CLA_API_BASE is unset, which gets passed as redirect_uri to githublegacy.BuildOAuthAuthorizeURL and causes GitHub to reject it; change githubCallbackURL to return an empty string ("") when the trimmed CLA_API_BASE is empty so the redirect_uri parameter is omitted (GitHub will use the app-configured callback). Update the same pattern in the other similar helpers referenced (the ones around the other github oauth callback usages) so they all return "" instead of a relative path when CLA_API_BASE is missing.

cla-backend-legacy/internal/middleware/cors.go-97-105 (1)
97-105: ⚠️ Potential issue | 🟠 Major

Don't return `*` on credentialed CORS responses.

`Access-Control-Allow-Credentials` is always `"true"`, but the allow-all branches still emit `Access-Control-Allow-Origin: *`. Browsers reject that combination per the CORS spec, breaking cookie/session-based cross-origin requests. Echo the request `Origin` whenever credentials are enabled and include `Vary: Origin`.

Suggested patch

```diff
-	} else if origin == "" && isOriginAllowed("*") {
-		// Non-browser clients.
-		w.Header().Set("Access-Control-Allow-Origin", "*")
-	} else if origin != "" && isOriginAllowed("*") {
-		// Backwards compatible default: allow all.
-		w.Header().Set("Access-Control-Allow-Origin", "*")
+	} else if origin != "" && isOriginAllowed("*") {
+		// Credentialed CORS responses must echo the requesting origin.
+		w.Header().Set("Access-Control-Allow-Origin", origin)
+		w.Header().Add("Vary", "Origin")
+	} else if origin == "" && isOriginAllowed("*") {
+		// Non-browser clients.
+		w.Header().Set("Access-Control-Allow-Origin", "*")
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/middleware/cors.go` around lines 97 - 105, The code sets Access-Control-Allow-Credentials to "true" while emitting Access-Control-Allow-Origin: "*" (see the origin variable, isOriginAllowed("*"), and w.Header().Set("Access-Control-Allow-Origin", "*")), which violates the CORS spec; instead, when credentials are enabled echo the incoming Origin value (use origin) into Access-Control-Allow-Origin rather than "*" and add a Vary: Origin header (w.Header().Set("Vary", "Origin")) so caches respect per-origin responses; update the branches that currently set "*" to set the request origin and add the Vary header whenever Access-Control-Allow-Credentials is set to "true".

cla-backend-legacy/internal/middleware/session.go-89-96 (1)
89-96: ⚠️ Potential issue | 🟠 Major

Harden the session cookie before shipping.

`cla-sid` is the bearer token for the server-side session, but it is sent without `Secure` or `SameSite`. That leaves it eligible for plaintext transport on HTTP and for cross-site sends. Please set `Secure` outside local dev and add at least `SameSite=Lax`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/middleware/session.go` around lines 89 - 96, The session cookie for "cla-sid" is too permissive: set the Secure flag when not running in local dev (e.g., determine production by config or by checking request TLS) and add SameSite: http.SameSiteLaxMode to the cookie; update the http.SetCookie call that constructs the cookie (the block creating &http.Cookie{Name:"cla-sid", ...}) to use Secure: true in non-local environments and SameSite: http.SameSiteLaxMode while keeping HttpOnly and reasonable MaxAge.

cla-backend-legacy/internal/middleware/session.go-104-109 (1)
104-109: ⚠️ Potential issue | 🟠 Major

Put a timeout on the post-response session write.

Persisting with `context.Background()` means a slow DynamoDB call can outlive request cancellation and keep this handler goroutine stuck indefinitely. Use a short bounded context here instead.

Possible fix

```diff
 if kv != nil {
 	if b, err := json.Marshal(sess); err == nil {
 		// Keep store TTL aligned with KVStore default (45 minutes).
-		_ = kv.SetWithTTL(context.Background(), sid, string(b), 45*time.Minute)
+		persistCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
+		defer cancel()
+		_ = kv.SetWithTTL(persistCtx, sid, string(b), 45*time.Minute)
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/middleware/session.go` around lines 104 - 109, The session persistence currently uses context.Background() which can hang forever; replace it with a short bounded context (e.g., ctx, cancel := context.WithTimeout(context.Background(), 1-2*time.Second); defer cancel()) and pass that ctx into kv.SetWithTTL so the DynamoDB write is canceled quickly if it stalls; keep the current best-effort semantics (ignore/assign the error as before) and ensure cancel() is called to release resources; locate where kv.SetWithTTL is invoked with sid and sess to make the change.

cla-backend-legacy/internal/pdf/docraptor.go-87-95 (1)
87-95: ⚠️ Potential issue | 🟠 Major

Bound the DocRaptor response size before reading it.

`io.ReadAll` on a third-party response makes this path unbounded in memory. One unusually large PDF or upstream error payload can take the process down. Either stream to a writer or enforce a hard limit here.

Possible fix

```diff
+const maxDocRaptorResponseSize = 20 << 20 // 20 MiB
+
-	body, err := io.ReadAll(resp.Body)
+	body, err := io.ReadAll(io.LimitReader(resp.Body, maxDocRaptorResponseSize+1))
 	if err != nil {
 		return nil, fmt.Errorf("read docraptor response: %w", err)
 	}
+	if len(body) > maxDocRaptorResponseSize {
+		return nil, fmt.Errorf("docraptor response exceeds %d bytes", maxDocRaptorResponseSize)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/pdf/docraptor.go` around lines 87 - 95, The current use of io.ReadAll(resp.Body) is unbounded and can exhaust memory; change the read to a size-limited read (e.g., use io.LimitReader or io.Copy with a size cap) so responses larger than a safe maximum (choose a constant like maxDocraptorBytes) are rejected with a clear error. Update the block that currently calls io.ReadAll on resp.Body and the subsequent handling of body/resp.StatusCode to read at most maxDocraptorBytes+1 bytes, detect when the limit is exceeded and return an error like "docraptor: response too large", and otherwise continue returning the truncated body only when within the limit. Ensure the check still preserves the response body content for non-2xx errors but bounded by the same limit.

cla-backend-legacy/internal/store/company_invites.go-35-45 (1)
35-45: ⚠️ Potential issue | 🟠 Major

Add a `ConditionExpression` to prevent overwriting existing invitations.

The file comment explicitly states this store is for create-only operations, but `PutItem` without a `ConditionExpression` will silently overwrite any existing row with the same primary key. This makes retries or ID collisions destructive instead of failing cleanly. DynamoDB requires `attribute_not_exists(...)` as a conditional write to prevent that overwrite behavior.

Suggested fix

```diff
 	_, err := s.client.PutItem(ctx, &dynamodb.PutItemInput{
-		TableName: aws.String(s.table),
-		Item:      item,
+		TableName:           aws.String(s.table),
+		Item:                item,
+		ConditionExpression: aws.String("attribute_not_exists(company_invite_id)"),
 	})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/company_invites.go` around lines 35 - 45, The PutItem method in CompanyInvitesStore currently writes without a condition and can overwrite existing invites; update the dynamodb.PutItemInput in PutItem to include a ConditionExpression like aws.String("attribute_not_exists(<PK>)") (replace <PK> with your table's partition key attribute name, e.g., "id" or "invite_id") so the write fails if the key already exists, and propagate the ConditionalCheckFailedException (do not swallow the error) so callers can detect create-only failures; modify the call in PutItem (the s.client.PutItem invocation) to set ConditionExpression and any required ExpressionAttributeNames/Values accordingly.

cla-backend-legacy/internal/store/kv_store.go-34-58 (1)
34-58: ⚠️ Potential issue | 🟠 Major

Filter expired rows in `Get` instead of relying on DynamoDB TTL cleanup.

`SetWithTTL` writes an `expire` timestamp (line 73), and `SessionMiddleware` depends on that for 45-minute session expiry (line 109), but `Get` returns the item even when `expire` is in the past. DynamoDB TTL deletion is asynchronous and can take up to ~2 days, so expired items remain readable by `GetItem` until background cleanup runs. This allows sessions to persist well past their intended 45-minute timeout.

Possible fix

```diff
 	if out.Item == nil {
 		return "", false, nil
 	}
+	if exp, ok := out.Item["expire"].(*types.AttributeValueMemberN); ok {
+		if ts, err := strconv.ParseInt(exp.Value, 10, 64); err == nil && time.Now().Unix() >= ts {
+			return "", false, nil
+		}
+	}
 	av, ok := out.Item["value"].(*types.AttributeValueMemberS)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/kv_store.go` around lines 34 - 58, Get currently returns items regardless of the stored TTL; update KVStore.Get to read the "expire" attribute from out.Item (the same timestamp written by SetWithTTL) and compare it to time.Now().Unix(); if the expire value exists and is <= current time, treat the item as not found (return "", false, nil) and optionally issue an async DeleteItem to remove the stale key; make sure to handle both AttributeValueMemberN/AttributeValueMemberS representations and avoid panics when the attribute is missing or malformed.

cla-backend-legacy/internal/store/dynamo.go-45-57 (1)
45-57:⚠️ Potential issue | 🟠 MajorThe function contradicts its own comment by overriding standard AWS region resolution.
The function's documentation claims it relies on standard `AWS_REGION`/`AWS_DEFAULT_REGION` behavior, but the code prevents the SDK from using those sources. When the three checked environment variables are empty, it hardcodes `us-east-1` and passes `config.WithRegion(region)`, which overrides the SDK's default region resolution chain. This means deployments relying on `AWS_DEFAULT_REGION` or shared config will silently use `us-east-1` instead.

To fix this, add `AWS_DEFAULT_REGION` to the fallback chain and only pass `config.WithRegion` if a region was explicitly set via one of your custom variables, allowing the SDK to resolve the region normally when none are provided.

Suggested approach
```diff
 func NewDynamoDBClientFromEnv(ctx context.Context) (*dynamodb.Client, error) {
 	region := strings.TrimSpace(os.Getenv("DYNAMODB_AWS_REGION"))
 	if region == "" {
 		region = strings.TrimSpace(os.Getenv("AWS_REGION"))
 	}
 	if region == "" {
+		region = strings.TrimSpace(os.Getenv("AWS_DEFAULT_REGION"))
+	}
+	if region == "" {
 		region = strings.TrimSpace(os.Getenv("REGION"))
 	}
-	if region == "" {
-		region = "us-east-1"
-	}
-
-	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion(region))
+	opts := []func(*config.LoadOptions) error{}
+	if region != "" {
+		opts = append(opts, config.WithRegion(region))
+	}
+	cfg, err := config.LoadDefaultConfig(ctx, opts...)
 	if err != nil {
 		return nil, err
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/dynamo.go` around lines 45 - 57, The current region resolution overrides the SDK's defaults: instead of unconditionally setting region from DYNAMODB_AWS_REGION/AWS_REGION/REGION (and defaulting to "us-east-1") and always calling config.LoadDefaultConfig with config.WithRegion(region), change logic to include AWS_DEFAULT_REGION in the fallback chain and only pass config.WithRegion to config.LoadDefaultConfig when a region was explicitly found from your env checks (e.g., DYNAMODB_AWS_REGION, AWS_REGION, REGION, AWS_DEFAULT_REGION); if no explicit region is found, call config.LoadDefaultConfig(ctx) without config.WithRegion so the SDK's normal resolution (shared config, profile, metadata) can apply. Ensure you update the region variable usage and the conditional around config.WithRegion accordingly.

cla-backend-legacy/internal/store/users.go-240-257 (1)
240-257: ⚠️ Potential issue | 🟠 Major

QueryByCompanyID is missing pagination and may return incomplete results.
Unlike other scan/query methods in this file, `QueryByCompanyID` performs only a single `Scan` call without handling `LastEvaluatedKey`. If there are more items than fit in a single page (1MB limit), the results will be incomplete.

🐛 Proposed fix to add pagination
```diff
 func (s *UsersStore) QueryByCompanyID(ctx context.Context, companyID string) ([]map[string]types.AttributeValue, error) {
 	if s == nil || s.client == nil {
 		return nil, nil
 	}
-	// Scan for users with this company ID - no index available for user_company_id
-	out, err := s.client.Scan(ctx, &dynamodb.ScanInput{
-		TableName:        aws.String(s.table),
-		FilterExpression: aws.String("user_company_id = :companyID"),
-		ExpressionAttributeValues: map[string]types.AttributeValue{
-			":companyID": &types.AttributeValueMemberS{Value: companyID},
-		},
-	})
-	if err != nil {
-		return nil, err
+	items := make([]map[string]types.AttributeValue, 0)
+	var startKey map[string]types.AttributeValue
+	for {
+		out, err := s.client.Scan(ctx, &dynamodb.ScanInput{
+			TableName:        aws.String(s.table),
+			FilterExpression: aws.String("user_company_id = :companyID"),
+			ExpressionAttributeValues: map[string]types.AttributeValue{
+				":companyID": &types.AttributeValueMemberS{Value: companyID},
+			},
+			ExclusiveStartKey: startKey,
+		})
+		if err != nil {
+			return nil, err
+		}
+		items = append(items, out.Items...)
+		if len(out.LastEvaluatedKey) == 0 {
+			break
+		}
+		startKey = out.LastEvaluatedKey
 	}
-	return out.Items, nil
+	if len(items) == 0 {
+		return nil, nil
+	}
+	return items, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/users.go` around lines 240 - 257, QueryByCompanyID currently does a single s.client.Scan call and can miss results past the 1MB page; change QueryByCompanyID to paginate by looping until LastEvaluatedKey is nil: call s.client.Scan with ScanInput (including TableName, FilterExpression and ExpressionAttributeValues), collect and append out.Items into a result slice, and if out.LastEvaluatedKey != nil set ExclusiveStartKey for the next Scan iteration; return the aggregated items and propagate any error from s.client.Scan. Ensure you still handle s == nil or s.client == nil and preserve the original FilterExpression and attribute value for ":companyID".

cla-backend-legacy/internal/store/repositories.go-40-43 (1)
40-43: ⚠️ Potential issue | 🟠 Major

Don't swallow repository-store misconfiguration.
Returning nil/zero values here makes an unavailable table/client look like an empty repository set, and `PutItem`/`DeleteByID` can report success without ever reaching DynamoDB. Please surface a consistent "store not configured" error instead of treating this as a successful no-op.

Also applies to: 61-62, 75-76, 88-90, 131-133, 161-163, 190-192, 219-221
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/repositories.go` around lines 40 - 43, Instead of returning nil/zero values when the RepositoriesStore or its client is nil, return a consistent sentinel error (e.g., ErrStoreNotConfigured or storeNotConfiguredErr) from all repository store methods (including GetByID, PutItem, DeleteByID and the other methods flagged in the review) so callers can distinguish "store not configured" from empty results; add the exported or package-level error value and update each method to check "if s == nil || s.client == nil { return <appropriate zero values>, storeNotConfiguredErr }" (use the correct return tuple shape per method) and ensure PutItem/DeleteByID do not report success when the store is unavailable.

cla-backend-legacy/internal/store/signatures.go-32-35 (1)
32-35: ⚠️ Potential issue | 🟠 Major

Fail fast when the signatures store is unconfigured.
These guards turn a broken DynamoDB dependency into empty reads and successful no-op writes. That makes misconfiguration indistinguishable from "not found" and can silently drop `DeleteByID`/`PutItem`; please return an explicit configuration error here, like `UserPermissionsStore` does in the same package.

Also applies to: 53-55, 87-89, 120-122, 152-166
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/signatures.go` around lines 32 - 35, Current nil checks in SignaturesStore methods (e.g., SignaturesStore.GetByID, DeleteByID, PutItem and other read/write methods guarding s == nil || s.client == nil) silently return zero values instead of signalling misconfiguration; change these guards to return an explicit configuration error (mirror the pattern used by UserPermissionsStore) — i.e., detect when SignaturesStore or SignaturesStore.client is nil and return a descriptive error (e.g., "signatures store not configured" or the same exported error used by UserPermissionsStore) along with the appropriate zero-value returns for each method so callers can fail fast on misconfiguration.

cla-backend-legacy/internal/api/router.go-41-42 (1)
41-42: ⚠️ Potential issue | 🟠 Major

Use method-specific routes for the Salesforce GET handlers.
Lines 41-42 use `HandleFunc`, which matches every HTTP method. That bypasses the router's 405 handling and sends POST/PUT/DELETE to these GET handlers instead.

Suggested fix
```diff
-	r.HandleFunc("/salesforce/projects", h.SalesforceGetProjectsV1)
-	r.HandleFunc("/salesforce/project", h.SalesforceGetProjectV1)
+	r.Get("/salesforce/projects", h.SalesforceGetProjectsV1)
+	r.Get("/salesforce/project", h.SalesforceGetProjectV1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/api/router.go` around lines 41 - 42, The routes for the Salesforce GET handlers currently use r.HandleFunc which matches all HTTP methods; change them to method-specific registration so only GET is routed to h.SalesforceGetProjectsV1 and h.SalesforceGetProjectV1 (e.g. use the router's Methods("GET") chaining or r.Methods("GET").HandleFunc(...) syntax) so non-GET requests trigger the router's 405 handling instead of being dispatched to these handlers.

cla-backend-legacy/internal/legacy/github/app_installation.go-169-175 (1)
169-175: ⚠️ Potential issue | 🟠 Major

Check `io.ReadAll` errors before parsing GitHub responses.

Lines 169 and 217 discard the read error from `io.ReadAll()` and continue processing. Per the Go `io.Reader` contract, `ReadAll` may return partial bytes together with a non-nil error, especially on network/IO failures. Ignoring that error causes partial data to be used for status-code error messages and JSON unmarshaling, masking transport failures as parsing errors or silently truncating repository lists.

Suggested fix
```diff
-	b, _ := io.ReadAll(resp.Body)
+	b, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return "", fmt.Errorf("read github access token response: %w", err)
+	}
 	if resp.StatusCode < 200 || resp.StatusCode > 299 {
 		return "", fmt.Errorf("github access token request failed: status=%d body=%s", resp.StatusCode, string(b))
 	}
@@
-	b, _ := io.ReadAll(resp.Body)
+	b, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return nil, fmt.Errorf("read github repositories response: %w", err)
+	}
 	resp.Body.Close()
 	if resp.StatusCode < 200 || resp.StatusCode > 299 {
 		return nil, fmt.Errorf("github list installation repositories failed: status=%d body=%s", resp.StatusCode, string(b))
 	}
```

Also applies to: 217–223
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/legacy/github/app_installation.go` around lines 169 - 175, The code currently ignores the error from io.ReadAll(resp.Body) before using the returned bytes for status checks and json.Unmarshal (see usages around io.ReadAll, resp.Body, installationTokenResponse and json.Unmarshal); update both occurrences to capture the read error (e.g., b, err := io.ReadAll(resp.Body)), check if err != nil and return a descriptive error (including the read error and optionally the partial body) before proceeding to status-code handling or json.Unmarshal, so transport/read failures aren’t masked as JSON/parsing errors.

cla-backend-legacy/internal/auth/auth0.go-129-133 (1)
129-133: ⚠️ Potential issue | 🟠 Major

Keep the default algorithm local instead of mutating `v`.

Lines 131-133 write shared validator state on the request path. Reusing one `Auth0Validator` across concurrent handlers will race here even though the value is always `"RS256"`.

🐛 Proposed fix
```diff
-	if v.Algorithm == "" {
-		v.Algorithm = "RS256"
-	}
+	algorithm := v.Algorithm
+	if algorithm == "" {
+		algorithm = "RS256"
+	}
```

```diff
-	if alg := t.Method.Alg(); alg != v.Algorithm {
+	if alg := t.Method.Alg(); alg != algorithm {
 		return nil, fmt.Errorf("unexpected signing method: %s", alg)
 	}
```

Also applies to: 149-150
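Why the local copy matters: after the fix, concurrent requests only ever read the shared validator, so there is no write for the race detector to flag. A minimal sketch, where `Validator` is a stand-in for `Auth0Validator`:

```go
package main

import (
	"fmt"
	"sync"
)

// Validator mimics a validator value shared across handlers.
type Validator struct {
	Algorithm string
}

// expectedAlg resolves the default into a local variable instead of writing
// v.Algorithm, so concurrent callers never mutate shared state.
func (v *Validator) expectedAlg() string {
	alg := v.Algorithm
	if alg == "" {
		alg = "RS256" // default stays request-local
	}
	return alg
}

func main() {
	v := &Validator{} // Algorithm deliberately unset
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = v.expectedAlg() // safe: read-only access to v
		}()
	}
	wg.Wait()
	fmt.Printf("shared field still %q, resolved default %q\n", v.Algorithm, v.expectedAlg())
}
```

Run under `go run -race` and the mutating version trips the detector, while this one does not.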
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/auth/auth0.go` around lines 129 - 133, The code mutates shared Auth0Validator state by assigning v.Algorithm = "RS256"; instead, keep the default local to the request path: read v.Algorithm into a local variable (e.g., alg := v.Algorithm), if alg == "" set alg = "RS256", and use alg for validation/headers/etc. Do the same replacement for the similar occurrence around the 149-150 logic so you never write back into the Auth0Validator instance.

cla-backend-legacy/internal/legacy/salesforce/service.go-128-143 (1)
128-143: ⚠️ Potential issue | 🟠 Major

Reject all-blank project lists before calling project-service.

Lines 255-260 drop blank IDs, but Lines 129-141 only check the original slice length. `[]string{"", " "}` therefore becomes `/projects/search?id=` instead of the existing 403 path.

🐛 Proposed fix
```diff
 func (s *Service) GetProjects(ctx context.Context, projectIDs []string) ([]SalesforceProject, int, error) {
-	if len(projectIDs) == 0 {
+	ids := make([]string, 0, len(projectIDs))
+	for _, id := range projectIDs {
+		if id = strings.TrimSpace(id); id != "" {
+			ids = append(ids, id)
+		}
+	}
+	if len(ids) == 0 {
 		return nil, http.StatusForbidden, errors.New("no authorized projects")
 	}
 	tok, code, err := s.getAccessToken(ctx)
 	if err != nil {
@@
-	endpoint, err := s.projectsSearchURL(projectIDs)
+	endpoint, err := s.projectsSearchURL(ids)
 	if err != nil {
 		return nil, http.StatusInternalServerError, err
 	}
```

Also applies to: 248-267
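The normalization step can be isolated as a small helper and unit-tested on its own; `filterBlankIDs` is a hypothetical name used here for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// filterBlankIDs trims whitespace and drops empty entries, matching the
// normalization the review proposes before the length check.
func filterBlankIDs(ids []string) []string {
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if id = strings.TrimSpace(id); id != "" {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	// All-blank input now yields an empty slice, so the caller can take the
	// 403 "no authorized projects" path instead of querying /projects/search?id=.
	fmt.Println(len(filterBlankIDs([]string{"", "  "}))) // 0
	fmt.Println(filterBlankIDs([]string{" a0001 ", "", "b2"}))
}
```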
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/legacy/salesforce/service.go` around lines 128 - 143, The initial length check in Service.GetProjects only tests the raw slice length and does not reject slices that contain only blank/whitespace IDs, allowing requests like []string{"", " "} to proceed; fix by normalizing and filtering projectIDs at the start of GetProjects (trim spaces and remove empty strings), then if the filtered slice is empty return the existing 403/no authorized projects error, and use the filtered slice when calling projectsSearchURL; apply the same normalization-and-empty-check to the other code path that processes projectIDs in this file so blank-only lists are consistently rejected.

cla-backend-legacy/internal/auth/auth0.go-210-227 (1)
210-227: ⚠️ Potential issue | 🟠 Major

Check the JWKS status code before decoding.

A non-200 JSON body currently decodes into an empty key set and later surfaces as `Unable to find appropriate key`, which hides an upstream Auth0 failure behind a client-token error.

🐛 Proposed fix
```diff
 resp, err := v.httpClient.Do(req)
 if err != nil {
 	return nil, &AuthError{Response: "unable to fetch well known jwks"}
 }
 defer resp.Body.Close()
+if resp.StatusCode != http.StatusOK {
+	return nil, &AuthError{Response: "unable to fetch well known jwks"}
+}
 var jwks jwksResponse
 if err := json.NewDecoder(resp.Body).Decode(&jwks); err != nil {
 	return nil, &AuthError{Response: "unable to fetch well known jwks"}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/auth/auth0.go` around lines 210 - 227, In fetchJWKS, validate the HTTP response status before attempting to decode: after v.httpClient.Do(req) and before json.NewDecoder(...).Decode, check resp.StatusCode and if it's not 200 return an AuthError describing the upstream failure (include status code and optionally status text or response body snippet) so callers don’t get an empty jwks; update the error returns in fetchJWKS to surface this upstream error instead of always returning "unable to fetch well known jwks".

cla-backend-legacy/internal/legacy/github/oauth_app.go-37-61 (1)
37-61: ⚠️ Potential issue | 🟠 Major

Propagate `redirect_uri` into the token exchange.

GitHub's OAuth flow requires that if `redirect_uri` is included in the authorize request, the same value must be sent when exchanging the code for a token. The current code allows callers to specify a non-default callback at lines 49–50 but provides no way to pass it to `ExchangeOAuthToken`, creating a request-response mismatch that will fail validation.

Add `redirectURI` as a parameter to `ExchangeOAuthToken` and include it in the token request form when provided:

Proposed fix
```diff
-func (s *Service) ExchangeOAuthToken(ctx context.Context, clientID, clientSecret, code, state string) (map[string]any, error) {
+func (s *Service) ExchangeOAuthToken(ctx context.Context, clientID, clientSecret, code, redirectURI, state string) (map[string]any, error) {
 	if strings.TrimSpace(clientID) == "" {
 		return nil, errors.New("missing client id")
 	}
 	if strings.TrimSpace(clientSecret) == "" {
 		return nil, errors.New("missing client secret")
 	}
 	if strings.TrimSpace(code) == "" {
 		return nil, errors.New("missing code")
 	}
 	form := url.Values{}
 	form.Set("client_id", clientID)
 	form.Set("client_secret", clientSecret)
 	form.Set("code", code)
+	if strings.TrimSpace(redirectURI) != "" {
+		form.Set("redirect_uri", redirectURI)
+	}
 	if strings.TrimSpace(state) != "" {
 		form.Set("state", state)
 	}
```

Update all callers to pass the redirect URI from the original authorization request.
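A sketch of the resulting form construction, with `buildTokenForm` as an illustrative extraction of the diff's logic: `redirect_uri` is included only when the caller actually supplied one.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildTokenForm builds the token-exchange body with an optional
// redirect_uri, mirroring the proposed ExchangeOAuthToken change.
func buildTokenForm(clientID, clientSecret, code, redirectURI, state string) url.Values {
	form := url.Values{}
	form.Set("client_id", clientID)
	form.Set("client_secret", clientSecret)
	form.Set("code", code)
	if strings.TrimSpace(redirectURI) != "" {
		// Must match the redirect_uri sent in the authorize request.
		form.Set("redirect_uri", redirectURI)
	}
	if strings.TrimSpace(state) != "" {
		form.Set("state", state)
	}
	return form
}

func main() {
	withURI := buildTokenForm("id", "secret", "c0de", "https://example.com/cb", "s1")
	fmt.Println(withURI.Get("redirect_uri")) // https://example.com/cb

	// Blank redirect URIs are omitted entirely rather than sent empty.
	withoutURI := buildTokenForm("id", "secret", "c0de", "  ", "")
	fmt.Println(withoutURI.Has("redirect_uri")) // false
}
```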
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/legacy/github/oauth_app.go` around lines 37 - 61, The authorize flow allows a custom redirectURI in BuildOAuthAuthorizeURL but ExchangeOAuthToken doesn't accept or forward it, causing GitHub token exchanges to fail when a non-default callback was used; update the ExchangeOAuthToken function signature to accept a redirectURI string parameter, add the redirect_uri field into the token request form when redirectURI is non-empty inside ExchangeOAuthToken, and update all callers of ExchangeOAuthToken to pass through the same redirectURI used to build the authorize URL (reference BuildOAuthAuthorizeURL and ExchangeOAuthToken to locate the changes).
🟡 Minor comments (3)
cla-backend-legacy/README.md-16-25 (1)
16-25: ⚠️ Potential issue | 🟡 Minor

Soften the production-readiness claims.
This PR is still marked WIP, and the new deploy workflow currently treats failed health checks as expected. Statements like "100% parity" and "ready for immediate production use" overstate the current validation state and can mislead operators planning a cutover.

Also applies to: 288-310
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/README.md` around lines 16 - 25, The README currently makes definitive production-readiness and parity claims (phrases like "ready for production use", "100% for replacing the Python deployment", "100% for strict legacy behavioral parity", "Full 1:1 API compatibility", and "mirroring any incorrect behavior") that overstate the current validation; update README.md to soften these statements: replace absolute language with conditional/goal-oriented phrasing (e.g., "targeting", "aiming for", "intended for eventual production use pending validation"), add a short note that the PR is WIP and that health checks and cutover validation are ongoing, and change the sentence about mirroring incorrect behavior to explicitly state it's intentional for parity testing rather than recommended for production; apply the same wording changes to the duplicate section referenced later in the file.

cla-backend-legacy/internal/store/gitlab_orgs.go-41-58 (1)
41-58: ⚠️ Potential issue | 🟡 Minor

Match the rest of the store layer's nil guards here.
Both methods dereference `s.client` unconditionally, unlike the other new stores. If this store is left unset during wiring or tests, these paths panic instead of failing closed.

Possible fix
```diff
 func (s *GitLabOrgsStore) ScanAll(ctx context.Context) ([]map[string]types.AttributeValue, error) {
+	if s == nil || s.client == nil {
+		return nil, nil
+	}
 	var out []map[string]types.AttributeValue
 	var startKey map[string]types.AttributeValue
 	for {
@@
 func (s *GitLabOrgsStore) FindByOrganizationURL(ctx context.Context, orgURL string) (map[string]types.AttributeValue, bool, error) {
+	if s == nil || s.client == nil {
+		return nil, false, nil
+	}
 	orgURL = strings.TrimSpace(orgURL)
```

Also applies to: 62-90
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/gitlab_orgs.go` around lines 41 - 58, The GitLabOrgsStore methods (e.g., ScanAll and the other Scan method in this file) dereference s.client without a nil check; add the same nil guard used in other stores at the start of each method to return a clear error when s.client is nil (e.g., if s == nil || s.client == nil { return nil, fmt.Errorf("nil dynamodb client in GitLabOrgsStore") }), then proceed to call s.client.Scan as before so the code fails closed instead of panicking.

cla-backend-legacy/internal/middleware/request_log.go-61-67 (1)
61-67: ⚠️ Potential issue | 🟡 Minor

Finish the nil-request guard here.
The code handles `r == nil` when deriving `path`, but Line 61 still dereferences `r.Header`. That means the middleware still panics on a nil request before reaching `next`.

Possible fix
```diff
-	if ok, runID := extractE2EMarker(r.Header); ok {
+	if r != nil {
+		if ok, runID := extractE2EMarker(r.Header); ok {
 			suffix := " e2e=1"
 			if runID != "" {
 				suffix += fmt.Sprintf(" e2e_run_id=%s", runID)
 			}
 			logging.Infof("LG:e2e-request-path:%s%s", path, suffix)
+		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/middleware/request_log.go` around lines 61 - 67, The extractE2EMarker call still dereferences r.Header when r may be nil; wrap the extraction and the logging block in an r != nil guard so you only call extractE2EMarker(r.Header) when the request exists. Specifically, update the middleware so that before calling extractE2EMarker you check if r != nil, and move the logging.Infof("LG:e2e-request-path:%s%s", path, suffix) and related suffix construction inside that r != nil branch (using the existing extractE2EMarker function and variables) to avoid a nil dereference.
🧹 Nitpick comments (11)
cla-backend-legacy/internal/featureflags/flags.go (1)
41-46: Consider logging when an env var is set but unparseable.

When the environment variable is set to a non-empty value that isn't recognized as boolean-like (e.g., `"maybe"`, `"enabled"`), the code silently falls through to stage-based defaults. This could mask configuration errors.

💡 Suggested improvement: add a warning log
```diff
+import "log"
+
 raw := os.Getenv(envVar)
 if raw != "" {
 	if parsed, ok := parseBoolish(raw); ok {
 		cache[envVar] = parsed
 		return parsed
 	}
+	log.Printf("featureflags: env var %s has unrecognized value %q, falling back to stage-based default", envVar, raw)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/featureflags/flags.go` around lines 41 - 46, When a non-empty env var value is present but parseBoolish(raw) returns ok==false, add a warning log so mis-configurations aren't silent: inside the existing block that checks raw != "" and calls parseBoolish, detect the !ok case and emit a warning that includes the envVar name and the raw value (and optionally the package logger name or standard log) before falling through to stage defaults; keep the existing cache/return behavior unchanged when parsed is true. Reference parseBoolish, raw, envVar, cache and parsed to locate where to insert the log.

cla-backend-legacy/internal/email/ses.go (3)
41-53: Consider wrapping the SES error for better debugging context.

When `SendEmail` fails, the raw AWS error is returned. Wrapping it with context (e.g., recipient count, sender) would aid troubleshooting in logs without exposing sensitive data.

♻️ Proposed error wrapping
```diff
+	"fmt"
```

```diff
 _, err := s.client.SendEmail(ctx, &ses.SendEmailInput{
 	// ...
 })
-return err
+if err != nil {
+	return fmt.Errorf("SES SendEmail failed for %d recipient(s): %w", len(recipients), err)
+}
+return nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/email/ses.go` around lines 41 - 53, The SES SendEmail call in the method that invokes s.client.SendEmail currently returns the raw AWS error; update the error return to wrap the error with contextual information (e.g., include s.sender and len(recipients) but not full recipient addresses or email body) using fmt.Errorf or errors.Wrap so callers get useful debugging context; target the block around s.client.SendEmail and the returned err to add the wrapped error message.
37-40: Silent no-op on empty recipients is acceptable but consider logging.

Returning `nil` for empty recipients avoids unnecessary API calls. For observability, a debug-level log could help trace unexpected empty-recipient scenarios during troubleshooting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/email/ses.go` around lines 37 - 40, In SESService.Send, instead of silently returning on len(recipients) == 0, add a debug-level log just before the return to improve observability; use the service's existing logger (e.g., s.logger.Debugf or s.log.Debugw) to log that Send was skipped due to empty recipients and include identifying info like subject (and maybe a request id from ctx if available); if the SESService has no logger field, use a context-aware logger pulled from ctx or fall back to the standard log package for a debug message, then continue to return nil.
46-51: Specify UTF-8 charset for `Content` objects to avoid encoding issues with non-ASCII characters.

When `types.Content.Charset` is omitted, Amazon SES defaults to 7-bit ASCII. Any non-ASCII characters (accents, Cyrillic, CJK, emoji, etc.) will be garbled. Set `Charset: aws.String("UTF-8")` on both `Subject` and `Body.Text` content.

♻️ Proposed fix
```diff
 Message: &types.Message{
-	Subject: &types.Content{Data: aws.String(subject)},
+	Subject: &types.Content{Data: aws.String(subject), Charset: aws.String("UTF-8")},
 	Body: &types.Body{
-		Text: &types.Content{Data: aws.String(body)},
+		Text: &types.Content{Data: aws.String(body), Charset: aws.String("UTF-8")},
 	},
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/email/ses.go` around lines 46 - 51, The Message construction for SES omits Content.Charset causing default 7-bit ASCII; update the Subject and Body.Text Content objects in the code that builds the types.Message (the block creating Subject: &types.Content{...} and Body: &types.Body{ Text: &types.Content{...} }) to include Charset: aws.String("UTF-8") so both Subject and Text body explicitly use UTF-8 encoding.

cla-backend-legacy/internal/store/github_orgs.go (1)
102-123: Consider adding a limit or warning for ScanAll.

`ScanAll` performs a full table scan, which can be expensive on large tables and may consume significant read capacity. While this mirrors legacy Python behavior, consider adding documentation about the performance implications or a configurable limit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/github_orgs.go` around lines 102 - 123, ScanAll currently performs an unbounded full DynamoDB scan (GitHubOrgsStore.ScanAll) which is risky for large tables; modify ScanAll to enforce a configurable limit and surface a clear error or warning when the limit is reached: add a configurable maxScanItems (via constructor/config or an env var) or accept a limit via context, check the cumulative items count while appending out.Items, stop scanning when the limit is reached and return a descriptive error (or partial results plus a warning log via the store's logger), and update documentation/comments for ScanAll to explain the performance implications and how to configure the limit; use symbols GitHubOrgsStore, ScanAll, s.client, s.table to locate and change the implementation.

cla-backend-legacy/internal/legacy/github/cache.go (1)
19-26: Consider using idiomatic duration expressions.

The TTL constants use `N * time.Second`, which is correct but verbose. The idiomatic Go style would be:

```go
NegativeCacheTTL = 3 * time.Minute
ProjectCacheTTL  = 3 * time.Hour
DefaultCacheTTL  = 12 * time.Hour
```

This is a minor stylistic preference and doesn't affect functionality.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/legacy/github/cache.go` around lines 19 - 26, Replace the numeric-second expressions with idiomatic duration literals: update NegativeCacheTTL, ProjectCacheTTL, and DefaultCacheTTL to use 3*time.Minute, 3*time.Hour, and 12*time.Hour respectively (keep the same values but use time.Minute/time.Hour for clarity and readability).

cla-backend-legacy/internal/store/users.go (1)
58-90: Consider normalizing input in QueryByLFUsername for consistency.

`QueryByLFEmail` (line 96) normalizes the input with `strings.TrimSpace(strings.ToLower(email))`, but `QueryByLFUsername` does not normalize the username. If usernames in the database are stored normalized, queries with differently-cased input will fail to match.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/users.go` around lines 58 - 90, QueryByLFUsername is not normalizing the input username which can cause mismatches versus QueryByLFEmail; update QueryByLFUsername to normalize its username argument the same way QueryByLFEmail does (e.g. apply strings.TrimSpace(strings.ToLower(username)) before using it in the QueryInput), ensure the normalized value is used for ExpressionAttributeValues (":u"), and add/import the strings package if not already imported.

cla-backend-legacy/internal/legacy/lfgroup/lfgroup.go (1)
47-89: Consider caching the access token to reduce token endpoint calls.

Unlike `userservice.Client.getAccessToken()`, which caches tokens with expiry, this `getAccessToken` fetches a new token on every call. This could lead to unnecessary load on the OAuth endpoint and potential rate limiting.

♻️ Proposed token caching similar to userservice.go
Add fields to the Client struct and implement caching:
```diff
 type Client struct {
 	BaseURL      string
 	ClientID     string
 	ClientSecret string
 	RefreshToken string
 	httpClient   *http.Client
+
+	mu                sync.Mutex
+	accessToken       string
+	accessTokenExpiry time.Time
 }
```

Then in `getAccessToken`:

```diff
 func (c *Client) getAccessToken(ctx context.Context) (string, error) {
 	if c == nil {
 		return "", errors.New("lfgroup client is nil")
 	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+
+	// Return cached token if still valid
+	if c.accessToken != "" && time.Now().Before(c.accessTokenExpiry) {
+		return c.accessToken, nil
+	}
+
 	if c.BaseURL == "" || c.ClientID == "" || c.ClientSecret == "" || c.RefreshToken == "" {
 		return "", errors.New("lfgroup client not configured")
 	}
 	// ... rest of implementation
+	// After successful token fetch:
+	c.accessToken = accessToken
+	c.accessTokenExpiry = time.Now().Add(25 * time.Minute) // adjust based on actual expiry
+	return accessToken, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/legacy/lfgroup/lfgroup.go` around lines 47 - 89, getAccessToken currently requests a new token on every call; add caching and synchronization to avoid hitting the token endpoint repeatedly: add fields on Client (e.g. accessToken string, tokenExpiry time.Time, mu sync.Mutex) and update getAccessToken to lock mu, return cached accessToken if non-empty and time.Now().Before(tokenExpiry), otherwise perform the HTTP refresh flow, parse expires_in from the response to set tokenExpiry (now + expires_in - small clock skew), store accessToken and tokenExpiry on the Client, release the lock and return the token; ensure nil checks for c remain and handle response parsing errors as before.

cla-backend-legacy/internal/store/events.go (1)
77-89: Inconsistent nil item handling compared to other stores.

`PutItem` returns an error for nil items (`fmt.Errorf("nil item")`), while other stores like `GitHubOrgsStore.PutItem` silently accept nil items (returning nil). Consider aligning this behavior across stores for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/store/events.go` around lines 77 - 89, The EventsStore.PutItem currently returns an error for a nil item, which is inconsistent with other stores (e.g., GitHubOrgsStore.PutItem) that silently accept nil items; update EventsStore.PutItem so that when item == nil it simply returns nil (no error) to match the existing pattern used by the other stores, ensuring the nil check in EventsStore.PutItem is changed from returning fmt.Errorf("nil item") to returning nil and leaving the rest of the PutItem logic (s.client.PutItem call and return err) unchanged.

cla-backend-legacy/internal/contracts/contracts.go (1)
49-83: HTML escaping in RenderHTML is not critical, but recommended for robustness.

The `legalEntityName` and `preamble` parameters are substituted without escaping. However, the rendered HTML is passed to PDF generation (`pdf.GeneratePDF`), not displayed in a browser, so traditional XSS vulnerabilities do not apply here. That said, unescaped user input in the HTML template could corrupt PDF content or layout if the values contain HTML/XML-like syntax. Consider using `html.EscapeString()` from Go's standard library for defensive robustness, though this is optional given the PDF-only context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/contracts/contracts.go` around lines 49 - 83, RenderHTML is inserting legalEntityName and preamble into the HTML without escaping; to harden PDF output, call html.EscapeString on legalEntityName and preamble before doing the strings.ReplaceAll substitutions (inside RenderHTML, after loading html and before replacements) so any HTML/XML-like characters are escaped; keep other replacements (document_type, major_version, minor_version) unchanged.

cla-backend-legacy/internal/legacy/github/oauth_app.go (1)
26-35: Comments claim "whole token dict" is kept, but only 3 fields are returned in the map.
The comment at line 125 says "without losing fields," but the `OAuthToken` struct captures `Error` and `ErrorDescription` (lines 33–34) which are never included in the returned map (lines 126–135). Additionally, GitHub's OAuth response can include `error_uri`, `expires_in`, `refresh_token`, and `refresh_token_expires_in`, none of which are captured by the struct. To preserve session parity with legacy Python and avoid discrepancy with the documentation comment, either:
- Return all captured fields in the map (including `error` and `error_description` when present), or
- Extend the struct to capture additional fields GitHub may return and include them in the map.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/internal/legacy/github/oauth_app.go` around lines 26 - 35, The OAuthToken struct and the place that converts it into the returned map are out of sync with the comment — either include the existing Error and ErrorDescription fields in the returned map or extend OAuthToken to capture all GitHub OAuth fields (at minimum add ErrorURI `error_uri`, ExpiresIn `expires_in`, RefreshToken `refresh_token`, RefreshTokenExpiresIn `refresh_token_expires_in`) with proper json tags and then include those fields when building the token map so the returned map truly mirrors the whole token dict produced by GitHub.
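The "capture the whole token dict" option from the finding above can be sketched as follows. This is a hypothetical illustration: the field set follows GitHub's documented OAuth token response, but the struct name, `ToMap` helper, and zero-value handling are assumptions, not the code actually in `oauth_app.go`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OAuthToken captures the fields GitHub's token endpoint may return.
// Field names follow GitHub's documented OAuth response; the exact layout
// in oauth_app.go may differ.
type OAuthToken struct {
	AccessToken           string `json:"access_token"`
	TokenType             string `json:"token_type"`
	Scope                 string `json:"scope"`
	ExpiresIn             int64  `json:"expires_in"`
	RefreshToken          string `json:"refresh_token"`
	RefreshTokenExpiresIn int64  `json:"refresh_token_expires_in"`
	Error                 string `json:"error"`
	ErrorDescription      string `json:"error_description"`
	ErrorURI              string `json:"error_uri"`
}

// ToMap mirrors the "whole token dict" idea by including every field that
// was actually present (non-zero) in the response.
func (t OAuthToken) ToMap() map[string]any {
	m := map[string]any{}
	if t.AccessToken != "" {
		m["access_token"] = t.AccessToken
	}
	if t.TokenType != "" {
		m["token_type"] = t.TokenType
	}
	if t.Scope != "" {
		m["scope"] = t.Scope
	}
	if t.ExpiresIn != 0 {
		m["expires_in"] = t.ExpiresIn
	}
	if t.RefreshToken != "" {
		m["refresh_token"] = t.RefreshToken
	}
	if t.RefreshTokenExpiresIn != 0 {
		m["refresh_token_expires_in"] = t.RefreshTokenExpiresIn
	}
	if t.Error != "" {
		// Error responses carry the error triple.
		m["error"] = t.Error
		m["error_description"] = t.ErrorDescription
		m["error_uri"] = t.ErrorURI
	}
	return m
}

func main() {
	raw := `{"error":"bad_verification_code","error_description":"The code passed is incorrect or expired."}`
	var tok OAuthToken
	_ = json.Unmarshal([]byte(raw), &tok)
	fmt.Println(tok.ToMap()["error"])
}
```

With this shape, an error response from GitHub is no longer silently dropped from the session map.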
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ee8eb347-08fd-4a21-b8e6-dee29298286e
📒 Files selected for processing (51)
.github/workflows/cla-backend-legacy-deploy-dev.yml.github/workflows/cla-backend-legacy-deploy-prod.ymlcla-backend-legacy/README.mdcla-backend-legacy/cmd/legacy-api-local/main.gocla-backend-legacy/cmd/legacy-api/main.gocla-backend-legacy/internal/api/github_oauth.gocla-backend-legacy/internal/api/handlers.gocla-backend-legacy/internal/api/router.gocla-backend-legacy/internal/auth/auth0.gocla-backend-legacy/internal/config/ssm.gocla-backend-legacy/internal/contracts/contracts.gocla-backend-legacy/internal/contracts/types.gocla-backend-legacy/internal/email/aws.gocla-backend-legacy/internal/email/email.gocla-backend-legacy/internal/email/ses.gocla-backend-legacy/internal/email/sns.gocla-backend-legacy/internal/featureflags/flags.gocla-backend-legacy/internal/legacy/github/app_installation.gocla-backend-legacy/internal/legacy/github/cache.gocla-backend-legacy/internal/legacy/github/oauth_app.gocla-backend-legacy/internal/legacy/github/pull_request.gocla-backend-legacy/internal/legacy/github/service.gocla-backend-legacy/internal/legacy/github/webhook.gocla-backend-legacy/internal/legacy/lfgroup/lfgroup.gocla-backend-legacy/internal/legacy/salesforce/service.gocla-backend-legacy/internal/legacy/userservice/userservice.gocla-backend-legacy/internal/legacyproxy/proxy.gocla-backend-legacy/internal/logging/logging.gocla-backend-legacy/internal/middleware/cors.gocla-backend-legacy/internal/middleware/request_log.gocla-backend-legacy/internal/middleware/session.gocla-backend-legacy/internal/pdf/docraptor.gocla-backend-legacy/internal/respond/respond.gocla-backend-legacy/internal/server/server.gocla-backend-legacy/internal/store/ccla_allowlist_requests.gocla-backend-legacy/internal/store/companies.gocla-backend-legacy/internal/store/company_invites.gocla-backend-legacy/internal/store/dynamo.gocla-backend-legacy/internal/store/dynamo_conv.gocla-backend-legacy/internal/store/dynamo_conv_reverse.gocla-backend-legacy/internal/store/events.gocla-backend-legacy/internal/store/gerrit_
instances.gocla-backend-legacy/internal/store/github_orgs.gocla-backend-legacy/internal/store/gitlab_orgs.gocla-backend-legacy/internal/store/kv_store.gocla-backend-legacy/internal/store/project_cla_groups.gocla-backend-legacy/internal/store/projects.gocla-backend-legacy/internal/store/repositories.gocla-backend-legacy/internal/store/signatures.gocla-backend-legacy/internal/store/user_permissions.gocla-backend-legacy/internal/store/users.go
🚧 Files skipped from review as they are similar to previous changes (13)
- .github/workflows/cla-backend-legacy-deploy-prod.yml
- cla-backend-legacy/internal/config/ssm.go
- cla-backend-legacy/internal/store/ccla_allowlist_requests.go
- cla-backend-legacy/cmd/legacy-api/main.go
- cla-backend-legacy/internal/contracts/types.go
- cla-backend-legacy/internal/email/aws.go
- cla-backend-legacy/internal/legacy/github/service.go
- cla-backend-legacy/internal/legacyproxy/proxy.go
- cla-backend-legacy/internal/email/email.go
- cla-backend-legacy/internal/store/gerrit_instances.go
- cla-backend-legacy/internal/store/projects.go
- cla-backend-legacy/internal/logging/logging.go
- cla-backend-legacy/internal/email/sns.go
```go
parsed, err := jwt.Parse(tokenString, func(t *jwt.Token) (any, error) {
	if t == nil {
		return nil, errors.New("token is nil")
	}
	if alg := t.Method.Alg(); alg != v.Algorithm {
		return nil, fmt.Errorf("unexpected signing method: %s", alg)
	}

	kid, _ := t.Header["kid"].(string)
	if kid == "" {
		return nil, errors.New("kid not found")
	}
	pk, ok := jwksKeyToRSAPublicKey(jwks, kid)
	if !ok {
		return nil, errNoMatchingKey
	}
	return pk, nil
})
if err != nil {
	if errors.Is(err, errNoMatchingKey) {
		// Mirror Python's AuthError payload.
		return nil, &AuthError{Response: map[string]any{"code": "invalid_header", "description": "Unable to find appropriate key"}}
	}

	// Try to map validation errors to legacy Python strings.
	var ve *jwt.ValidationError
	if errors.As(err, &ve) {
		if ve.Errors&jwt.ValidationErrorExpired != 0 {
			return nil, &AuthError{Response: "token is expired"}
		}
		if ve.Errors&jwt.ValidationErrorClaimsInvalid != 0 {
			return nil, &AuthError{Response: "incorrect claims"}
		}
	}

	return nil, &AuthError{Response: "unable to parse authentication"}
}
if parsed == nil || !parsed.Valid {
	return nil, &AuthError{Response: "unable to parse authentication"}
}

claims, ok := parsed.Claims.(jwt.MapClaims)
if !ok {
	return nil, &AuthError{Response: "unable to decode claims"}
}

username := claimString(claims, v.UsernameClaim)
if username == "" && v.UsernameClaimAlt != "" {
	username = claimString(claims, v.UsernameClaimAlt)
}
if username == "" {
	return nil, &AuthError{Response: "username claim not found"}
}

user := &AuthUser{
	Name:     claimString(claims, "name"),
	Email:    claimString(claims, "email"),
	Username: username,
	Sub:      claimString(claims, "sub"),
}
return user, nil
```
🧩 Analysis chain
🌐 Web query:
In github.com/golang-jwt/jwt/v4, when using jwt.Parse with the default MapClaims, are the `aud` and `iss` claims enforced automatically, or must callers verify them explicitly?
💡 Result:
In github.com/golang-jwt/jwt/v4, jwt.Parse uses MapClaims{} by default and only runs whatever token.Claims.Valid() checks during parsing/validation. In MapClaims, Valid() only validates the time-based claims (exp, iat, nbf)—it does not enforce aud or iss. Therefore, callers must explicitly verify aud/iss (e.g., via MapClaims.VerifyAudience / MapClaims.VerifyIssuer, or by using custom claims whose Valid() enforces them). [1], [2]
🏁 Script executed:
# First, check if there's any aud/iss validation in the auth.go file
rg -i "(aud|iss|audience|issuer)" cla-backend-legacy/internal/auth/auth0.go -A 2 -B 2
Repository: linuxfoundation/easycla
Length of output: 322
🏁 Script executed:
# Check the Auth0Validator struct to see if it has fields for expected aud/iss
rg -A 20 "type Auth0Validator struct" cla-backend-legacy/internal/auth/auth0.go
Repository: linuxfoundation/easycla
Length of output: 734
🏁 Script executed:
# Check if there's any validation after the authenticate function
rg -A 10 "func.*Authenticate" cla-backend-legacy/internal/auth/auth0.go | head -50
Repository: linuxfoundation/easycla
Length of output: 439
🏁 Script executed:
# Check for any VerifyAudience or VerifyIssuer calls
rg "Verify(Audience|Issuer)" cla-backend-legacy/
Repository: linuxfoundation/easycla
Length of output: 49
🏁 Script executed:
# Look for where Authenticate is called to see if there's validation at caller level
rg "\.Authenticate\(" cla-backend-legacy/ -B 2 -A 5
Repository: linuxfoundation/easycla
Length of output: 31704
🏁 Script executed:
# Check the full authenticate function implementation
sed -n '128,210p' cla-backend-legacy/internal/auth/auth0.go
Repository: linuxfoundation/easycla
Length of output: 2308
Enforce aud / iss before returning AuthUser.
This path does not explicitly check either claim. The pinned github.com/golang-jwt/jwt/v4 library with MapClaims validates only time-based claims (exp, iat, nbf); aud and iss must be verified explicitly by the caller. Without these checks, tokens issued for a different Auth0 API in the same tenant will authenticate successfully if they have a valid signature, creating an authorization bypass.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cla-backend-legacy/internal/auth/auth0.go` around lines 145 - 205, The code
parses the JWT and returns an AuthUser without verifying aud/iss; update the
token post-parse validation to explicitly check parsed.Claims (jwt.MapClaims)
for the "aud" and "iss" values and compare them to the expected values on the
validator (e.g., v.Audience / v.Issuer or whatever fields hold the expected
audience/issuer); if they don’t match, return an AuthError (reuse the existing
patterns like returning &AuthError{Response: "incorrect claims"} or a similar
legacy string) before constructing and returning the AuthUser (symbols to
modify: the jwt.Parse result handling where parsed.Claims is converted to
jwt.MapClaims, claimString, v.UsernameClaim/v.UsernameClaimAlt, and the
AuthError/AuthUser return paths).
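The missing aud/iss enforcement described above can be sketched without pulling in the jwt library. The helper below mirrors the semantics of `MapClaims.VerifyAudience`/`VerifyIssuer` in golang-jwt/jwt v4 (`aud` may be a single string or a list, `iss` is a plain string); the function name, claim shapes, and expected values are illustrative assumptions, and in the real code this check would run against `parsed.Claims` before `AuthUser` is built.

```go
package main

import "fmt"

// verifyAudIss returns true only when both the issuer and audience claims
// match the expected values. aud is accepted either as a single string or
// as a JSON array (which json decoding yields as []any).
func verifyAudIss(claims map[string]any, wantAud, wantIss string) bool {
	if iss, _ := claims["iss"].(string); iss != wantIss {
		return false
	}
	switch aud := claims["aud"].(type) {
	case string:
		return aud == wantAud
	case []any:
		for _, a := range aud {
			if s, ok := a.(string); ok && s == wantAud {
				return true
			}
		}
	}
	// Missing or malformed aud claim fails closed.
	return false
}

func main() {
	claims := map[string]any{
		"iss": "https://tenant.auth0.com/",
		"aud": []any{"https://api.example.org", "other-api"},
	}
	fmt.Println(verifyAudIss(claims, "https://api.example.org", "https://tenant.auth0.com/"))
	fmt.Println(verifyAudIss(claims, "wrong-aud", "https://tenant.auth0.com/"))
}
```

Failing closed on a missing claim is the important design choice here: a token without `aud` must not authenticate.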
```go
// IsStandaloneProject checks if a Salesforce project is a standalone project.
// A project is standalone if it has no parent or its parent is The Linux Foundation/LF Projects LLC
// and it has no sub-projects.
func (s *Service) IsStandaloneProject(ctx context.Context, projectSFID string) (bool, error) {
	project, err := s.getProjectDetailByID(ctx, projectSFID)
	if err != nil {
		return false, err
	}
	if project == nil {
		return false, nil
	}

	parentName := s.getParentName(project)
	if parentName == nil {
		return false, nil
	}
	if *parentName == TheLinuxFoundation || *parentName == LFProjectsLLC {
		if len(project.Projects) == 0 {
			return true, nil
		}
	}
	return false, nil
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
py_file="$(fd 'salesforce\.py$' . | head -n1)"
test -n "$py_file"
rg -n -C4 'standalone|parent|supported' "$py_file" cla-backend-legacy/internal/legacy/salesforce/service.go
Repository: linuxfoundation/easycla
Length of output: 6514
🏁 Script executed:
find . -type f -name "*salesforce*" | head -20
Repository: linuxfoundation/easycla
Length of output: 155
🏁 Script executed:
git ls-files | grep -i salesforce
Repository: linuxfoundation/easycla
Length of output: 208
🏁 Script executed:
sed -n '412,425p' cla-backend-legacy/internal/legacy/salesforce/service.go
Repository: linuxfoundation/easycla
Length of output: 401
🏁 Script executed:
rg -n -C10 'is_standalone|standalone_project' cla-backend/cla/salesforce.py
Repository: linuxfoundation/easycla
Length of output: 49
🏁 Script executed:
head -100 cla-backend/cla/salesforce.py
Repository: linuxfoundation/easycla
Length of output: 3476
🏁 Script executed:
wc -l cla-backend/cla/salesforce.py
Repository: linuxfoundation/easycla
Length of output: 100
🏁 Script executed:
cat cla-backend/cla/salesforce.py | grep -n "def\|standalone\|parent"
Repository: linuxfoundation/easycla
Length of output: 273
🏁 Script executed:
find cla-backend -type f -name "*.py" | xargs grep -l "standalone\|is_standalone" 2>/dev/null
Repository: linuxfoundation/easycla
Length of output: 167
🏁 Script executed:
ls -la cla-backend-legacy/ 2>/dev/null || echo "No cla-backend-legacy in Python"
Repository: linuxfoundation/easycla
Length of output: 850
🏁 Script executed:
rg -n -C15 "standalone" cla-backend/cla/project_service.py
Repository: linuxfoundation/easycla
Length of output: 1227
🏁 Script executed:
rg -n -C15 "standalone" cla-backend/cla/models/dynamo_models.py
Repository: linuxfoundation/easycla
Length of output: 1603
🏁 Script executed:
rg -n -C3 "get_parent_name" cla-backend/cla/project_service.py
Repository: linuxfoundation/easycla
Length of output: 1287
🏁 Script executed:
rg -n -C3 "THE_LINUX_FOUNDATION\|LF_PROJECTS_LLC" cla-backend/cla/project_service.py
Repository: linuxfoundation/easycla
Length of output: 49
🏁 Script executed:
sed -n '75,85p' cla-backend/cla/project_service.py
Repository: linuxfoundation/easycla
Length of output: 668
🏁 Script executed:
sed -n '1,20p' cla-backend/cla/project_service.py | grep -E "THE_LINUX_FOUNDATION|LF_PROJECTS_LLC"
Repository: linuxfoundation/easycla
Length of output: 127
Fix IsStandaloneProject to handle projects with no parent.
The Go implementation contradicts the Python legacy implementation. When a project has no parent (parentName == nil), the Python version returns true (if no sub-projects), but the Go version returns false at lines 336–338. Per the Python logic in cla-backend/cla/project_service.py:40, a project qualifies as standalone if either it has no parent or its parent is The Linux Foundation/LF Projects LLC, provided it has no sub-projects. Update the condition to handle the nil parent case:
```go
if parentName == nil || *parentName == TheLinuxFoundation || *parentName == LFProjectsLLC {
	if len(project.Projects) == 0 {
		return true, nil
	}
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cla-backend-legacy/internal/legacy/salesforce/service.go` around lines 323 -
345, The IsStandaloneProject method incorrectly treats projects with no parent
as non-standalone; update the condition in IsStandaloneProject (which uses
getParentName, TheLinuxFoundation, LFProjectsLLC, and project.Projects) so that
a nil parent is treated the same as TheLinuxFoundation or LFProjectsLLC: check
if parentName == nil OR *parentName == TheLinuxFoundation OR *parentName ==
LFProjectsLLC, and if that holds and len(project.Projects) == 0 return true,
nil; leave other behavior unchanged.
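The corrected predicate can be isolated and exercised directly. The sketch below is illustrative: the constant values and the `isStandalone` helper signature are assumptions, while the real logic lives inside `IsStandaloneProject` in `service.go`.

```go
package main

import "fmt"

// Illustrative constants; the actual values are defined in service.go.
const (
	TheLinuxFoundation = "The Linux Foundation"
	LFProjectsLLC      = "LF Projects, LLC"
)

// isStandalone captures the corrected rule: a nil parent counts the same
// as an LF umbrella parent, and the project must have no sub-projects.
func isStandalone(parentName *string, subProjects int) bool {
	if parentName == nil || *parentName == TheLinuxFoundation || *parentName == LFProjectsLLC {
		return subProjects == 0
	}
	return false
}

func main() {
	fmt.Println(isStandalone(nil, 0)) // no parent, no children: standalone
	other := "Some Other Parent"
	fmt.Println(isStandalone(&other, 0)) // non-LF parent: not standalone
}
```

Extracting the predicate this way also makes the nil-parent regression trivially unit-testable.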
```go
// NOTE: This intentionally mirrors the existing ItemToInterfaceMap() behavior which
// represents DynamoDB numbers as strings for Python/pynamodb parity. When converting
// back, purely-numeric strings are stored as DynamoDB numbers (N).
//
// This conversion is used sparingly (primarily for create/update flows where we build
// maps directly). When preserving exact DynamoDB attribute types is important, prefer
// patching the existing AttributeValue map directly.
```
This reverse conversion rewrites DynamoDB types on round-trip.
ItemToInterfaceMap() collapses both S/N to string and both SS/NS to []string. Converting those shapes back here turns numeric-looking strings into N and every []string into SS, so a read-modify-write path can silently change stored types. github_oauth.go already does that for existing user records before PutItem(). This helper needs explicit type information for ambiguous string/[]string inputs, or those call sites need to patch the original AttributeValue map instead of round-tripping through map[string]any.
Also applies to: 52-56, 92-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cla-backend-legacy/internal/store/dynamo_conv_reverse.go` around lines 19 -
25, The reverse conversion currently lossy-rewrites types (turning numeric
strings into N and []string into SS). Update the converter (the function in
dynamo_conv_reverse.go that turns map[string]any back to DynamoDB
AttributeValues) to accept explicit type hints or the original AttributeValue
map so it can preserve exact types: add an optional parameter (e.g.,
originalAttrs map[string]types.AttributeValue or typeHints map[string]string)
and use that to decide whether a string should be S vs N and whether []string
should be SS vs NS; update all call sites (including github_oauth.go) to pass
the original AttributeValue map or provide explicit hints, or alternatively
change those callers to patch the original AttributeValue map instead of
round-tripping through ItemToInterfaceMap()/this reverse function (apply same
fix to the other affected blocks around lines 52-56 and 92-110).
```go
// Ensure record exists (Python returns an error when load() fails).
if _, err := s.Get(ctx, username); err != nil {
	return err
}

key, err := attributevalue.MarshalMap(map[string]string{"username": username})
if err != nil {
	return fmt.Errorf("marshal key: %w", err)
}

now := formatPynamoDateTimeUTC(time.Now().UTC())
_, err = s.ddb.UpdateItem(ctx, &dynamodb.UpdateItemInput{
	TableName: aws.String(s.table),
	Key:       key,
	UpdateExpression: aws.String(
		"SET date_modified = :dm " +
			"DELETE projects :p",
	),
	ExpressionAttributeValues: map[string]types.AttributeValue{
		":dm": &types.AttributeValueMemberS{Value: now},
		":p":  &types.AttributeValueMemberSS{Value: []string{projectSFID}},
	},
})
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the full RemoveProject method and surrounding context
cat -n cla-backend-legacy/internal/store/user_permissions.go | sed -n '175,220p'
Repository: linuxfoundation/easycla
Length of output: 1326
🏁 Script executed:
# Check if ErrNotFound is defined in this file
rg "ErrNotFound" cla-backend-legacy/internal/store/user_permissions.go
Repository: linuxfoundation/easycla
Length of output: 207
🏁 Script executed:
# Look for imports to understand the DynamoDB types being used
head -50 cla-backend-legacy/internal/store/user_permissions.go
Repository: linuxfoundation/easycla
Length of output: 1356
🏁 Script executed:
# Search for ConditionalCheckFailedException to understand the error handling
rg "ConditionalCheckFailedException" cla-backend-legacy/
Repository: linuxfoundation/easycla
Length of output: 49
🏁 Script executed:
# Search for any existing conditional write examples in the codebase
rg "ConditionExpression" cla-backend-legacy/
Repository: linuxfoundation/easycla
Length of output: 2516
🏁 Script executed:
# Look for any error handling patterns in the store package
rg "errors.As" cla-backend-legacy/internal/store/
Repository: linuxfoundation/easycla
Length of output: 49
🌐 Web query:
AWS SDK for Go v2 DynamoDB conditional check failed error type
💡 Result:
In AWS SDK for Go v2, a DynamoDB “conditional check failed” is returned as:
`*types.ConditionalCheckFailedException` (package `github.com/aws/aws-sdk-go-v2/service/dynamodb/types`). [1]
Because v2 errors are commonly wrapped (e.g., in *smithy.OperationError), you typically detect it with errors.As rather than a direct type cast. [2][3]
```go
import (
	"errors"

	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

var ccfe *types.ConditionalCheckFailedException
if errors.As(err, &ccfe) {
	// conditional check failed
	// ccfe.Item may be set if ReturnValuesOnConditionCheckFailure was requested
}
```

Sources: [1] [2] [3]
Make RemoveProject a single conditional write.
Line 184 does an existence check with Get, but Line 194 performs an unconditional UpdateItem. If the row disappears between those two calls, DynamoDB will recreate it with only username and date_modified. This violates the semantic contract where a remove should either succeed atomically or return ErrNotFound.
Suggested fix
```diff
-	// Ensure record exists (Python returns an error when load() fails).
-	if _, err := s.Get(ctx, username); err != nil {
-		return err
-	}
-
 	key, err := attributevalue.MarshalMap(map[string]string{"username": username})
 	if err != nil {
 		return fmt.Errorf("marshal key: %w", err)
 	}
@@
 	_, err = s.ddb.UpdateItem(ctx, &dynamodb.UpdateItemInput{
 		TableName:           aws.String(s.table),
 		Key:                 key,
+		ConditionExpression: aws.String("attribute_exists(username)"),
 		UpdateExpression: aws.String(
 			"SET date_modified = :dm " +
 				"DELETE projects :p",
 		),
@@
 	})
 	if err != nil {
+		var cfe *types.ConditionalCheckFailedException
+		if errors.As(err, &cfe) {
+			return ErrNotFound
+		}
 		return fmt.Errorf("dynamodb update item: %w", err)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cla-backend-legacy/internal/store/user_permissions.go` around lines 183 -
205, The current RemoveProject implementation first calls s.Get(...) to check
existence and then does an unconditional s.ddb.UpdateItem(...), which can
recreate a deleted row; change RemoveProject to perform a single conditional
UpdateItem by removing the preliminary s.Get and adding a ConditionExpression
like "attribute_exists(username)" (or the primary key attribute used in the
table) to the UpdateItemInput used in the s.ddb.UpdateItem call; also catch the
DynamoDB conditional failure (types.ConditionalCheckFailedException / aws error
code) from UpdateItem and map it to ErrNotFound so the operation is atomic and
returns ErrNotFound when the item does not exist.
Signed-off-by: Lukasz Gryglicki <lgryglicki@cncf.io> Assisted by [OpenAI](https://platform.openai.com/) Assisted by [GitHub Copilot](https://github.com/features/copilot)
Actionable comments posted: 1
🧹 Nitpick comments (2)
.github/workflows/build-pr.yml (2)
114-130: Consider moving the legacy backend checks into a separate job.
These new steps now run strictly after the existing Python and `cla-backend-go` pipeline, which will lengthen PR turnaround and make failures less isolated. A dedicated job would let both Go backends run in parallel.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-pr.yml around lines 114 - 130, The workflow currently runs the legacy Go steps ("Go Setup CLA Legacy Backend", "Go Build CLA Legacy Backend", "Go Test CLA Legacy Backend", "Go Lint CLA Legacy Backend") inline which forces them to run sequentially after the Python and cla-backend-go pipeline; extract these steps into a new, independent job in .github/workflows/build-pr.yml (create a job block with its own job id, runs-on and the same step sequence) and ensure the new job does not have a "needs" dependency on the Python/cla-backend-go jobs so it can run in parallel; keep the existing step names and working-directory=cla-backend-legacy to preserve behavior and add any necessary strategy/matrix or timeout settings to match CI conventions.
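A dedicated job along these lines could look like the sketch below. The job id, runner image, action versions, and `setup-go` usage are assumptions, not the repo's actual configuration; the point is simply that omitting `needs:` lets the legacy backend run in parallel with the other jobs.

```yaml
# Hypothetical independent job for .github/workflows/build-pr.yml
cla-backend-legacy:
  runs-on: ubuntu-latest
  # No `needs:` entry, so it runs alongside the Python/cla-backend-go job.
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-go@v5
      with:
        go-version-file: cla-backend-legacy/go.mod
    - name: Go Build CLA Legacy Backend
      working-directory: cla-backend-legacy
      run: make lambdas
    - name: Go Test CLA Legacy Backend
      working-directory: cla-backend-legacy
      run: go test ./...
    - name: Go Lint CLA Legacy Backend
      working-directory: cla-backend-legacy
      run: make lint
```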
42-48: Align the cache path with the actual Go module cache.
Unless `GOPATH` or `GOMODCACHE` is being overridden elsewhere, Line 45 is caching a workspace directory that `actions/setup-go` does not use by default, so this step is unlikely to produce cache hits.

♻️ Proposed fix
```diff
 - name: Cache Go modules
   uses: actions/cache@v4
   with:
-    path: ${{ github.workspace }}/go/pkg/mod
+    path: ~/go/pkg/mod
     key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
     restore-keys: |
       ${{ runner.os }}-go-
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-pr.yml around lines 42 - 48, The "Cache Go modules" step is caching the wrong directory; replace the path value currently set to ${{ github.workspace }}/go/pkg/mod with the actual Go module cache used by actions/setup-go by using ${{ env.GOMODCACHE }} (and as a fallback ${{ env.GOPATH }}/pkg/mod if GOMODCACHE is not set), and keep the cache key logic the same; update the step that uses actions/cache@v4 (named "Cache Go modules") to reference these environment variables so cache hits align with the real Go module cache.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-pr.yml:
- Around line 114-126: The workflow currently runs go mod tidy in the "Go Setup
CLA Legacy Backend" step which can rewrite go.mod/go.sum and mask uncommitted
dependency drift; change that step so it runs go mod tidy and then verifies
there are no changes to go.mod or go.sum (e.g., by performing a git diff check
and failing the job if either file was modified) so the CI surfaces dependency
drift instead of silently fixing it; keep the subsequent "Go Build CLA Legacy
Backend" (make lambdas) and "Go Test CLA Legacy Backend" steps unchanged.
---
Nitpick comments:
In @.github/workflows/build-pr.yml:
- Around line 114-130: The workflow currently runs the legacy Go steps ("Go
Setup CLA Legacy Backend", "Go Build CLA Legacy Backend", "Go Test CLA Legacy
Backend", "Go Lint CLA Legacy Backend") inline which forces them to run
sequentially after the Python and cla-backend-go pipeline; extract these steps
into a new, independent job in .github/workflows/build-pr.yml (create a job
block with its own job id, runs-on and the same step sequence) and ensure the
new job does not have a "needs" dependency on the Python/cla-backend-go jobs so
it can run in parallel; keep the existing step names and
working-directory=cla-backend-legacy to preserve behavior and add any necessary
strategy/matrix or timeout settings to match CI conventions.
- Around line 42-48: The "Cache Go modules" step is caching the wrong directory;
replace the path value currently set to ${{ github.workspace }}/go/pkg/mod with
the actual Go module cache used by actions/setup-go by using ${{ env.GOMODCACHE
}} (and as a fallback ${{ env.GOPATH }}/pkg/mod if GOMODCACHE is not set), and
keep the cache key logic the same; update the step that uses actions/cache@v4
(named "Cache Go modules") to reference these environment variables so cache
hits align with the real Go module cache.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bf018023-c764-43b6-b40e-2cfd83857203
⛔ Files ignored due to path filters (1)
`cla-backend-legacy/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (2)
- .github/workflows/build-pr.yml
- cla-backend-legacy/go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
- cla-backend-legacy/go.mod
```yaml
      - name: Go Setup CLA Legacy Backend
        working-directory: cla-backend-legacy
        run: |
          go mod tidy

      - name: Go Build CLA Legacy Backend
        working-directory: cla-backend-legacy
        run: |
          make lambdas

      - name: Go Test CLA Legacy Backend
        working-directory: cla-backend-legacy
        run: go test ./...
```
🧩 Analysis chain
🏁 Script executed:
cd /tmp && find . -name "build-pr.yml" -o -name ".github" -type d 2>/dev/null | head -20
Repository: linuxfoundation/easycla
Length of output: 49
🏁 Script executed:
git ls-files | grep -E "\.github/workflows.*build.*\.yml"
Repository: linuxfoundation/easycla
Length of output: 97
🏁 Script executed:
cat -n .github/workflows/build-pr.yml | sed -n '110,130p'
Repository: linuxfoundation/easycla
Length of output: 779
🌐 Web query:
go mod tidy rewrite go.mod go.sum behavior
💡 Result:
go mod tidy is expected to rewrite both go.mod and go.sum because its job is to make the module metadata match what your module actually needs to build and test.
What it does to go.mod
- Adds missing `require`s for any module that provides a package needed by your module (including test imports), using the latest version that satisfies the query when a requirement is missing. (go.dev)
- Removes unused `require`s that don't provide any relevant packages. (go.dev)
- May add/remove `// indirect` markers and (with newer `go` versions) may reorganize/split direct vs indirect requirements (this can create diffs even if you didn't "change deps" manually). (go.dev)
- If you pass `-go=version`, it will update the `go` directive and adjust requirements accordingly (pruning/lazy-loading behavior changes across Go versions). (go.dev)
What it does to go.sum
- Adds missing checksums for downloaded module content and/or module `go.mod` files.
- Removes checksums that are no longer necessary (though the Go tool may need sums for multiple versions while doing minimal version selection). (go.dev)
Why it can change when your code didn’t
- `tidy` loads a broad set: all packages in your module + tests (including tests in other modules), and it behaves as if all build tags are enabled (except `ignore`). (go.dev)
- The result can differ depending on the Go version / `go` line, because module graph loading and checksum retention rules vary by version. (go.dev)
Useful flags
- `go mod tidy -diff`: show what it would change without rewriting files. (go.dev)
- `-compat=version`: control the compatibility check baseline. (go.dev)
If you paste the diffs (or your go version + the go.mod header), I can explain exactly why each line changed.
Citations:
- 1–9: https://go.dev/ref/mod/
🌐 Web query:
go test mod=readonly GOFLAGS prevent rewrite
💡 Result:
To make go test fail instead of rewriting go.mod (and to apply it everywhere), set -mod=readonly via GOFLAGS:
```shell
# one shell
export GOFLAGS="-mod=readonly"
go test ./...

# or persist for your user env
go env -w GOFLAGS=-mod=readonly
```

In Go 1.16+, build/test commands already behave as if -mod=readonly is set by default, meaning they should error with "updates to go.mod needed" rather than silently changing it. [1][2]
If you still see rewrites, it’s usually because something is explicitly enabling module updates (for example -mod=mod, go get, go mod tidy, or a wrapper tool/CI step that runs those). The typical CI guardrails are:
```shell
go mod tidy -diff    # prints required changes and exits non-zero if any
go test ./...
git diff --exit-code # fail if anything changed
```

`go mod tidy -diff` is specifically designed to not modify go.mod/go.sum but show what would change. [1][2]
Sources: Go modules reference and cmd/go docs. [1][2]
[1] (go.dev)
[2] (pkg.go.dev)
🌐 Web query:
go mod download go mod verify difference
💡 Result:
go mod download vs go mod verify
- `go mod download`: downloads the specified modules (or, with no args, the main module's dependencies) into the module cache. It's mainly useful to pre-fill the cache (e.g., in CI or Docker layers) because `go build`/`go test` will download modules automatically when needed. [1]
- `go mod verify`: does not download missing modules. It checks integrity of what's already in the module cache by verifying that the module `.zip` files and extracted directories still match the hashes recorded in the cache when they were first downloaded. It's for detecting tampering/corruption in the cache. [1]
  - It does not use `go.sum` to verify module content; it verifies against hashes stored in the module cache. However, it may download `go.mod` files for minimal version selection and will use/update `go.sum` for those `go.mod` hashes. [1]
Typical CI usage
go mod download (populate cache) → go mod verify (sanity-check cache) → go test ./... [1]
Sources: Go Modules Reference (go mod download, go mod verify). [1]
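Putting the pieces above together, the legacy backend steps could look like this hedged sketch (step names mirror the workflow; the exact layout is illustrative):

```yaml
# Sketch of a drift-safe setup, assuming Go 1.16+ readonly defaults.
- name: Go Setup CLA Legacy Backend
  working-directory: cla-backend-legacy
  env:
    GOFLAGS: -mod=readonly
  run: |
    go mod download   # populate the module cache
    go mod verify     # sanity-check cache integrity
- name: Go Test CLA Legacy Backend
  working-directory: cla-backend-legacy
  env:
    GOFLAGS: -mod=readonly
  run: go test ./...
```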
🏁 Script executed:
cat -n cla-backend-legacy/go.mod | head -5
Repository: linuxfoundation/easycla
Length of output: 184
🏁 Script executed:
find . -name "go.mod" -path "*/cla-backend-legacy/*" -exec cat {} \; | head -10
Repository: linuxfoundation/easycla
Length of output: 395
Make the legacy Go steps fail on dependency drift instead of repairing it.
Line 117 runs go mod tidy, which rewrites both go.mod and go.sum. This means the workflow can pass with uncommitted dependency changes instead of surfacing the problem.
🛡️ Proposed fix
- name: Go Setup CLA Legacy Backend
working-directory: cla-backend-legacy
run: |
- go mod tidy
+ go mod download
+ go mod verify
- name: Go Build CLA Legacy Backend
working-directory: cla-backend-legacy
+ env:
+ GOFLAGS: -mod=readonly
run: |
make lambdas
- name: Go Test CLA Legacy Backend
working-directory: cla-backend-legacy
- run: go test ./...
+ env:
+ GOFLAGS: -mod=readonly
+ run: go test ./...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build-pr.yml around lines 114 - 126, The workflow
currently runs go mod tidy in the "Go Setup CLA Legacy Backend" step which can
rewrite go.mod/go.sum and mask uncommitted dependency drift; change that step so
it runs go mod tidy and then verifies there are no changes to go.mod or go.sum
(e.g., by performing a git diff check and failing the job if either file was
modified) so the CI surfaces dependency drift instead of silently fixing it;
keep the subsequent "Go Build CLA Legacy Backend" (make lambdas) and "Go Test
CLA Legacy Backend" steps unchanged.
Signed-off-by: Lukasz Gryglicki <lgryglicki@cncf.io> Assisted by [OpenAI](https://platform.openai.com/) Assisted by [GitHub Copilot](https://github.com/features/copilot)
Signed-off-by: Lukasz Gryglicki <lgryglicki@cncf.io> Assisted by [OpenAI](https://platform.openai.com/) Assisted by [GitHub Copilot](https://github.com/features/copilot)
Actionable comments posted: 13
♻️ Duplicate comments (2)
.github/workflows/build-pr.yml (1)
114-126: ⚠️ Potential issue | 🟠 Major
Fail the PR on module drift instead of repairing it.
Line 117 still runs go mod tidy, which can rewrite go.mod and go.sum and let the job go green with uncommitted dependency changes. This was already called out earlier; the check should fail when tidy would change tracked files.
In Go modules, does `go mod tidy` rewrite `go.mod` and `go.sum`, and what is the recommended CI pattern to detect dependency drift instead of silently fixing it?
Minimal fix
  - name: Go Setup CLA Legacy Backend
    working-directory: cla-backend-legacy
    run: |
      go mod tidy
+     git diff --exit-code -- go.mod go.sum
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-pr.yml around lines 114 - 126, The workflow currently runs `go mod tidy` in the "Go Setup CLA Legacy Backend" step which can rewrite go.mod/go.sum and hide uncommitted module drift; change that step to run `go mod tidy` then immediately check for uncommitted changes and fail the job (e.g., run `git diff --exit-code` or `git status --porcelain` and exit nonzero if any changes) so the PR fails when tidy would modify tracked files; locate the "Go Setup CLA Legacy Backend" step and replace its single `go mod tidy` invocation with a two-command sequence (tidy then fail-on-diff) or remove the tidy and add a check that simulates tidy and compares the repo, ensuring the job fails on module drift.
cla-backend-legacy/go.mod (1)
17-23: ⚠️ Potential issue | 🟠 Major
These pins still carry the advisories already raised on this PR.
OSV is flagging github.com/golang-jwt/jwt/v4@v4.5.1 and go.opentelemetry.io/otel/sdk@v1.27.0 here. Since this module adds auth and telemetry paths, please bump both to non-vulnerable releases before merge and keep the OTel family on a consistent release line.
Is `github.com/golang-jwt/jwt/v4` version `v4.5.1` affected by GHSA-mh63-6h87-95cp, and is `go.opentelemetry.io/otel/sdk` version `v1.27.0` affected by GHSA-9h8m-3fm2-qjrq? What are the first patched versions for each?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cla-backend-legacy/go.mod` around lines 17 - 23, The pinned versions for github.com/golang-jwt/jwt/v4 and go.opentelemetry.io/otel/sdk are flagged by OSV; look up the OSV/GHSA advisories for github.com/golang-jwt/jwt/v4 (GHSA-mh63-6h87-95cp) and go.opentelemetry.io/otel/sdk (GHSA-9h8m-3fm2-qjrq) to determine the first patched versions, then update the go.mod entries for github.com/golang-jwt/jwt/v4 and go.opentelemetry.io/otel/sdk (and align the related go.opentelemetry.io/otel/* modules listed: otel, otel/exporters/otlp/otlptrace/otlptracehttp, otel/trace) to a non-vulnerable, consistent release line; run go get <module>@<patched-version> for each, run go mod tidy, and verify tests/build succeed.
🧹 Nitpick comments (2)
.github/dependabot.yml (1)
45-55: LGTM - New Dependabot configuration for cla-backend-legacy.
The configuration correctly sets up automated Go module updates for the new legacy backend with an appropriate schedule and PR limits. The reviewers and commit-message settings are valid additions.
Optional consideration: This block includes reviewers and commit-message configuration that other blocks don't have. You may want to standardize this across all ecosystems for consistency, or intentionally keep it differentiated for the new backend; either approach is fine.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/dependabot.yml around lines 45 - 55, The new Dependabot block for package-ecosystem "gomod" and directory "/cla-backend-legacy" currently includes reviewers and commit-message settings that other ecosystem blocks lack; decide whether to standardize or keep it unique and implement that decision by either (A) adding the same reviewers and commit-message structure to other Dependabot blocks to match this block, or (B) removing the reviewers and commit-message keys from the "gomod" /cla-backend-legacy block so it matches existing blocks; update the entries named "reviewers" and "commit-message" accordingly to keep the configuration consistent..github/workflows/go-audit.yml (1)
40-49: Pin the scanner tool versions instead of using @latest.
go install ...@latest resolves the highest available version and ignores the repo's go.mod, so identical commits can start producing different audit results or fail unexpectedly when a new scanner release lands. Pin explicit versions for both Nancy and govulncheck to keep this workflow reproducible. (go.dev)
Verify each finding against the current code and only fix it if needed. In @.github/workflows/go-audit.yml around lines 40 - 49, Replace the two `go install` invocations that use `@latest` with pinned semantic versions to ensure reproducible scans: change `go install github.com/sonatypecommunity/nancy@latest` and `go install golang.org/x/vuln/cmd/govulncheck@latest` to explicit version tags (e.g. `@vX.Y.Z`) or to workflow input variables that are set to concrete versions, and update the workflow steps that run `nancy sleuth` and `govulncheck -json` to use those pinned versions so CI results remain stable across runs.
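As a toy illustration of the pinning rule, a helper that rejects `@latest` tool refs in favor of explicit tags. The function name, messages, and the sample version tag are mine, not from the workflow:

```shell
#!/usr/bin/env bash
# Illustrative check: accept only explicitly pinned tool refs like
# module/path@vX.Y.Z and reject @latest, so scans stay reproducible.
require_pinned() {
  local ref="$1"
  case "$ref" in
    *@latest) echo "unpinned: $ref"; return 1 ;;
    *@v*)     echo "pinned: $ref" ;;
    *)        echo "missing version: $ref"; return 1 ;;
  esac
}

require_pinned "golang.org/x/vuln/cmd/govulncheck@v1.1.3"
```

In the workflow itself, the fix is simply to replace `@latest` in the two `go install` lines with concrete version tags.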
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/build-pr.yml:
- Around line 133-144: The Gosec SARIF upload step is pointing at
cla-backend-legacy/gosec-results.sarif but the scan step writes
gosec-results.sarif at the repo root and scans ./...; update the workflow so
both steps use the same path and target module: either change the "Go Security
Scan (Gosec)" step args to write into cla-backend-legacy (e.g. set args to '-fmt
sarif -out cla-backend-legacy/gosec-results.sarif ./cla-backend-legacy/...') or
change the "Upload Gosec SARIF to GitHub Security Tab" step sarif_file value to
'gosec-results.sarif' (and keep scanning ./...); adjust the args key in the
Gosec step and the sarif_file key in the upload step accordingly so they match.
In @.github/workflows/codeql-go-backend.yml:
- Around line 42-59: Update the CodeQL action invocations to the supported major
version: replace uses: github/codeql-action/init@v2 and uses:
github/codeql-action/analyze@v2 with github/codeql-action/init@v4 and
github/codeql-action/analyze@v4 respectively; locate the two occurrences by the
action identifiers "github/codeql-action/init" and
"github/codeql-action/analyze" and update their "@v2" tags to "@v4" so the
workflow uses the current CodeQL Action version.
In @.github/workflows/deploy-dev.yml:
- Around line 243-263: The workflow duplicates the cla-backend-legacy
deployment: the "EasyCLA Legacy Backend Deployment us-east-1" and its subsequent
"EasyCLA Legacy Backend Service Check" are executed in the build-deploy-dev path
and then re-run in the legacy-backend-deploy job, causing double deployments;
remove the redundant deployment (and its service-check steps) from the
legacy-backend-deploy job (or alternatively remove them from build-deploy-dev)
so only one of the two jobs performs the serverless deploy and curl health
checks; target the duplicated step names ("EasyCLA Legacy Backend Deployment
us-east-1" and "EasyCLA Legacy Backend Service Check") and the job names
("build-deploy-dev" and "legacy-backend-deploy") when editing.
- Around line 274-305: The workflow currently sets up Go with
actions/setup-go@v5 using go-version: '1.22' which conflicts with the module's
declared Go version (go 1.25); update the actions/setup-go step (the "Setup go"
job using actions/setup-go@v5) to install the same toolchain as
cla-backend-legacy/go.mod (use go-version: '1.25' or read it dynamically) so
that subsequent steps (Go Setup CLA Legacy Backend running go mod tidy and Go
Build CLA Legacy Backend running make lambdas) run with the matching Go version.
In @.github/workflows/deploy-prod.yml:
- Around line 216-228: The health-checks for v1_legacy_url and v2_legacy_url are
currently being masked by "|| echo ..." which always returns success; update the
workflow so the curl --fail for v1_legacy_url and v2_legacy_url fails the job by
default and only allows a bypass when an explicit env flag (e.g.,
SKIP_LEGACY_HEALTH_CHECKS or ALLOW_LEGACY_HEALTH_BYPASS) is set; replace the
"curl ... || echo" pattern with a conditional that checks the flag: if the flag
is set then log the bypass, otherwise run curl --fail and let it exit non‑zero
on failure so the deployment fails. Ensure you reference the v1_legacy_url and
v2_legacy_url variables and remove the hard-coded success path.
In @.github/workflows/go-audit.yml:
- Line 54: Replace the deprecated GitHub Action reference
"actions/upload-artifact@v3" with "actions/upload-artifact@v4" in the workflow,
ensuring the usage remains the same and then commit and run the workflow to
verify artifacts upload correctly under the new major version; if any inputs or
behavior changed in the action's v4 release notes, adjust the step parameters
accordingly.
- Around line 7-14: The workflow's push/pull_request "paths" filters currently
only include the cla-backend-legacy/** pattern so edits to this workflow itself
won't trigger CI; update the "paths" blocks in go-audit.yml (the push and
pull_request entries) to also include this workflow file (or add a
.github/workflows/** pattern) so changes to go-audit.yml will run the workflow
before merge.
- Around line 64-65: The echo remediation message currently recommends the
incorrect command "go get -u all"; update the hint string that follows the
successful "go list -u -m all" check to recommend module-scoped updates by
changing the message to suggest "go get -u ./..." (and optionally mention "go
get -u=patch ./..." for patch-only updates) so users run the correct module-wide
upgrade command.
- Around line 18-20: Remove the unnecessary security permission by deleting the
security-events: write entry from the workflow's permissions block so only
contents: read remains; locate the permissions: mapping in the workflow YAML
(the permissions key and its children) and remove the security-events key/value
pair to ensure the workflow only requests contents: read.
In @.github/workflows/license-compliance-go.yml:
- Around line 60-65: Replace the deprecated actions/upload-artifact@v3 usage
with actions/upload-artifact@v4 in the workflow step named "Upload license
report"; update the `uses:` value to the v4 tag and keep the existing `if:` and
`with:` inputs (`name: go-backend-license-report` and `path:
cla-backend-legacy/license-report.txt`) unchanged so the step behavior remains
the same.
- Around line 45-50: The workflow fails when the go-licenses command can't find
the template because the current working-directory (working-directory) is
./cla-backend-legacy but the template path .github/license-report.tpl is
resolved relative to that directory; update the go-licenses invocation in the
Generate license report step to reference the repo-root template by changing the
template path to ../.github/license-report.tpl so the command (in the step
containing go-licenses report ./... --template ...) points to the correct file.
- Around line 52-58: The workflow step "Check for forbidden licenses" currently
has continue-on-error: true which prevents the step from failing the job when
go-licenses detects GPL/AGPL; remove the continue-on-error: true line so the
step fails the job on exit 1, or if this permissive behavior is intentional add
an inline comment above the step explaining why the check should not block CI
(e.g., "temporary: do not fail CI while we triage false positives") to make the
choice explicit; update the step that runs go-licenses check ./... | grep -i -E
"(gpl|agpl)" accordingly in the "Check for forbidden licenses" step.
In @.github/workflows/security-scan-go.yml:
- Around line 35-39: The workflow uses a mutable action reference
securecodewarrior/github-action-gosec@master; replace the branch ref with an
immutable full commit SHA (e.g.
securecodewarrior/github-action-gosec@<commit-sha>) so the Run Gosec Security
Scanner step uses a pinned action; update the args line only if necessary but do
not change other step keys like continue-on-error.
---
Duplicate comments:
In @.github/workflows/build-pr.yml:
- Around line 114-126: The workflow currently runs `go mod tidy` in the "Go
Setup CLA Legacy Backend" step which can rewrite go.mod/go.sum and hide
uncommitted module drift; change that step to run `go mod tidy` then immediately
check for uncommitted changes and fail the job (e.g., run `git diff --exit-code`
or `git status --porcelain` and exit nonzero if any changes) so the PR fails
when tidy would modify tracked files; locate the "Go Setup CLA Legacy Backend"
step and replace its single `go mod tidy` invocation with a two-command sequence
(tidy then fail-on-diff) or remove the tidy and add a check that simulates tidy
and compares the repo, ensuring the job fails on module drift.
In `@cla-backend-legacy/go.mod`:
- Around line 17-23: The pinned versions for github.com/golang-jwt/jwt/v4 and
go.opentelemetry.io/otel/sdk are flagged by OSV; look up the OSV/GHSA advisories
for github.com/golang-jwt/jwt/v4 (GHSA-mh63-6h87-95cp) and
go.opentelemetry.io/otel/sdk (GHSA-9h8m-3fm2-qjrq) to determine the first
patched versions, then update the go.mod entries for
github.com/golang-jwt/jwt/v4 and go.opentelemetry.io/otel/sdk (and align the
related go.opentelemetry.io/otel/* modules listed: otel,
otel/exporters/otlp/otlptrace/otlptracehttp, otel/trace) to a non-vulnerable,
consistent release line; run go get <module>@<patched-version> for each, run go
mod tidy, and verify tests/build succeed.
---
Nitpick comments:
In @.github/dependabot.yml:
- Around line 45-55: The new Dependabot block for package-ecosystem "gomod" and
directory "/cla-backend-legacy" currently includes reviewers and commit-message
settings that other ecosystem blocks lack; decide whether to standardize or keep
it unique and implement that decision by either (A) adding the same reviewers
and commit-message structure to other Dependabot blocks to match this block, or
(B) removing the reviewers and commit-message keys from the "gomod"
/cla-backend-legacy block so it matches existing blocks; update the entries
named "reviewers" and "commit-message" accordingly to keep the configuration
consistent.
In @.github/workflows/go-audit.yml:
- Around line 40-49: Replace the two `go install` invocations that use `@latest`
with pinned semantic versions to ensure reproducible scans: change `go install
github.com/sonatypecommunity/nancy@latest` and `go install
golang.org/x/vuln/cmd/govulncheck@latest` to explicit version tags (e.g.
`@vX.Y.Z`) or to workflow input variables that are set to concrete versions, and
update the workflow steps that run `nancy sleuth` and `govulncheck -json` to use
those pinned versions so CI results remain stable across runs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d7f1ff23-90a3-4153-8598-aba4922c3be5
⛔ Files ignored due to path filters (1)
cla-backend-legacy/go.sum is excluded by !**/*.sum
📒 Files selected for processing (12)
.github/codeql-config.yml
.github/dependabot.yml
.github/license-report.tpl
.github/workflows/build-pr.yml
.github/workflows/codeql-go-backend.yml
.github/workflows/deploy-dev.yml
.github/workflows/deploy-prod.yml
.github/workflows/go-audit.yml
.github/workflows/license-compliance-go.yml
.github/workflows/security-scan-go.yml
cla-backend-legacy/Makefile
cla-backend-legacy/go.mod
- name: Initialize CodeQL
  uses: github/codeql-action/init@v2
  with:
    languages: ${{ matrix.language }}
    queries: +security-and-quality
    config-file: .github/codeql-config.yml

# Build Go backend
- name: Build Go backend
  working-directory: ./cla-backend-legacy
  run: |
    go mod download
    go build ./...

- name: Perform CodeQL Analysis
  uses: github/codeql-action/analyze@v2
  with:
    category: "/language:${{matrix.language}}"
🧩 Analysis chain
🌐 Web query:
What is the current supported major version for github/codeql-action/init and github/codeql-action/analyze, and is @v2 still supported on GitHub Actions?
💡 Result:
- Current supported major version (recommended) for github/codeql-action/init and github/codeql-action/analyze: @v4 (CodeQL Action v4 was released Oct 7, 2025, and GitHub recommends updating advanced-setup workflows from @v3 to @v4). [1]
- Is @v2 still supported? No. CodeQL Action v2 was retired and is no longer updated or supported (retired Jan 10, 2025). [2]
- GitHub is actively releasing both v4.x and v3.x tags, but v3 is planned to be deprecated in Dec 2026 (so @v3 is still supported today, but @v4 is the current major). [1], [3]
Sources: [1] [2] [3]
Upgrade github/codeql-action/* from @v2 to @v4.
CodeQL Action v2 was retired on January 10, 2025, and is no longer supported. This workflow will fail with v2. Update both action invocations (Lines 43 and 57) to @v4, the current recommended major version.
Suggested update
- uses: github/codeql-action/init@v2
+ uses: github/codeql-action/init@v4
...
- uses: github/codeql-action/analyze@v2
+ uses: github/codeql-action/analyze@v4
🧰 Tools
🪛 actionlint (1.7.11)
[error] 43-43: the runner of "github/codeql-action/init@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
[error] 57-57: the runner of "github/codeql-action/analyze@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/codeql-go-backend.yml around lines 42 - 59, Update the
CodeQL action invocations to the supported major version: replace uses:
github/codeql-action/init@v2 and uses: github/codeql-action/analyze@v2 with
github/codeql-action/init@v4 and github/codeql-action/analyze@v4 respectively;
locate the two occurrences by the action identifiers "github/codeql-action/init"
and "github/codeql-action/analyze" and update their "@v2" tags to "@v4" so the
workflow uses the current CodeQL Action version.
- name: EasyCLA Legacy Backend Deployment us-east-1
  working-directory: cla-backend-legacy
  run: |
    if [[ ! -f bin/legacy-api-lambda ]]; then echo "Missing bin/legacy-api-lambda binary file. Exiting..."; exit 1; fi
    if [[ ! -f serverless.yml ]]; then echo "Missing serverless.yml file. Exiting..."; exit 1; fi
    npm install
    npx serverless deploy --force --stage ${STAGE} --region us-east-1 --verbose

- name: EasyCLA Legacy Backend Service Check
  run: |
    sudo apt install curl jq -y

    # Development environment endpoints to test
    declare -r v1_legacy_url="https://apigo.lfcla.${STAGE}.platform.linuxfoundation.org/v1/health"
    declare -r v2_legacy_url="https://apigo.lfcla.${STAGE}.platform.linuxfoundation.org/v2/health"

    echo "Validating v1 legacy backend using endpoint: ${v1_legacy_url}"
    curl --fail -XGET ${v1_legacy_url} || echo "v1 legacy health endpoint check failed (expected for now)"

    echo "Validating v2 legacy backend using endpoint: ${v2_legacy_url}"
    curl --fail -XGET ${v2_legacy_url} || echo "v2 legacy health endpoint check failed (expected for now)"
Remove one of the two legacy deployment paths.
build-deploy-dev already builds, deploys, and checks cla-backend-legacy at Lines 243-263. The legacy-backend-deploy job then repeats the same deployment after needs: build-deploy-dev, so every push deploys the same stack twice. That is extra time at best and a non-idempotent double-write against the same Serverless stack at worst.
Also applies to: 266-327
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-dev.yml around lines 243 - 263, The workflow
duplicates the cla-backend-legacy deployment: the "EasyCLA Legacy Backend
Deployment us-east-1" and its subsequent "EasyCLA Legacy Backend Service Check"
are executed in the build-deploy-dev path and then re-run in the
legacy-backend-deploy job, causing double deployments; remove the redundant
deployment (and its service-check steps) from the legacy-backend-deploy job (or
alternatively remove them from build-deploy-dev) so only one of the two jobs
performs the serverless deploy and curl health checks; target the duplicated
step names ("EasyCLA Legacy Backend Deployment us-east-1" and "EasyCLA Legacy
Backend Service Check") and the job names ("build-deploy-dev" and
"legacy-backend-deploy") when editing.
- name: EasyCLA Legacy Backend Service Check
  run: |
    sudo apt install curl jq -y

    # Production environment endpoints to test
    declare -r v1_legacy_url="https://apigo.easycla.lfx.linuxfoundation.org/v1/health"
    declare -r v2_legacy_url="https://apigo.easycla.lfx.linuxfoundation.org/v2/health"

    echo "Validating v1 legacy backend using endpoint: ${v1_legacy_url}"
    curl --fail -XGET ${v1_legacy_url} || echo "v1 legacy health endpoint check failed (expected for now)"

    echo "Validating v2 legacy backend using endpoint: ${v2_legacy_url}"
    curl --fail -XGET ${v2_legacy_url} || echo "v2 legacy health endpoint check failed (expected for now)"
Don't ignore failed health checks in PROD.
Lines 225 and 228 turn a failing curl into a log line, so this workflow can report a successful production deployment while both legacy health endpoints are down. If these checks are still temporary, gate the bypass behind an explicit flag instead of hard-coding success.
Suggested update
- curl --fail -XGET ${v1_legacy_url} || echo "v1 legacy health endpoint check failed (expected for now)"
+ curl --fail -XGET ${v1_legacy_url}
...
- curl --fail -XGET ${v2_legacy_url} || echo "v2 legacy health endpoint check failed (expected for now)"
+ curl --fail -XGET ${v2_legacy_url}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
- name: EasyCLA Legacy Backend Service Check
  run: |
    sudo apt install curl jq -y
    # Production environment endpoints to test
    declare -r v1_legacy_url="https://apigo.easycla.lfx.linuxfoundation.org/v1/health"
    declare -r v2_legacy_url="https://apigo.easycla.lfx.linuxfoundation.org/v2/health"
    echo "Validating v1 legacy backend using endpoint: ${v1_legacy_url}"
    curl --fail -XGET ${v1_legacy_url} || echo "v1 legacy health endpoint check failed (expected for now)"
    echo "Validating v2 legacy backend using endpoint: ${v2_legacy_url}"
    curl --fail -XGET ${v2_legacy_url} || echo "v2 legacy health endpoint check failed (expected for now)"

Suggested:
- name: EasyCLA Legacy Backend Service Check
  run: |
    sudo apt install curl jq -y
    # Production environment endpoints to test
    declare -r v1_legacy_url="https://apigo.easycla.lfx.linuxfoundation.org/v1/health"
    declare -r v2_legacy_url="https://apigo.easycla.lfx.linuxfoundation.org/v2/health"
    echo "Validating v1 legacy backend using endpoint: ${v1_legacy_url}"
    curl --fail -XGET ${v1_legacy_url}
    echo "Validating v2 legacy backend using endpoint: ${v2_legacy_url}"
    curl --fail -XGET ${v2_legacy_url}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-prod.yml around lines 216 - 228, The health-checks
for v1_legacy_url and v2_legacy_url are currently being masked by "|| echo ..."
which always returns success; update the workflow so the curl --fail for
v1_legacy_url and v2_legacy_url fails the job by default and only allows a
bypass when an explicit env flag (e.g., SKIP_LEGACY_HEALTH_CHECKS or
ALLOW_LEGACY_LEGACY_BYPASS) is set; replace the "curl ... || echo" pattern with
a conditional that checks the flag: if the flag is set then log the bypass,
otherwise run curl --fail and let it exit non‑zero on failure so the deployment
fails. Ensure you reference the v1_legacy_url and v2_legacy_url variables and
remove the hard-coded success path.
go list -u -m all
echo "Run 'go get -u all' to update dependencies"
🧩 Analysis chain
🏁 Script executed:
cat -n .github/workflows/go-audit.yml | sed -n '60,70p'
Repository: linuxfoundation/easycla
Length of output: 370
🌐 Web query:
go get -u ./... vs go get -u all go modules documentation
💡 Result:
In module mode, go get -u updates the modules that provide dependencies of the packages you name on the command line (to newer minor/patch versions; use -u=patch to prefer patch-only). (go.dev)
go get -u ./...
- ./... matches all packages in the current module (recursively), excluding packages under vendor/. (mywebuniversity.com)
- So this updates dependencies needed to build/test the packages in your module (as matched by ./...). (go.dev)
go get -u all
- all is a special package pattern; in module mode it expands to all packages in the main module and all of their dependencies, including dependencies needed by tests of any of those packages. (mywebuniversity.com)
- So it typically updates a broader set of modules than ./... (because it includes the full dependency graph, including test deps across that set). (mywebuniversity.com)
Documentation pointers
- go get behavior / flags: (go.dev)
- Package patterns (all, ./..., vendor rules): (mywebuniversity.com)
- Modules reference (details around version selection, retractions/deprecations, etc.): (go.dev)
Citations:
- 1: https://go.dev/src/cmd/go/internal/modget/get.go
- 2: https://www.mywebuniversity.com/Man_PDF/go-packages.7.pdf
- 3: https://go.dev/src/cmd/go/internal/modget/get.go
- 4: https://www.mywebuniversity.com/Man_PDF/go-packages.7.pdf
- 5: https://www.mywebuniversity.com/Man_PDF/go-packages.7.pdf
- 6: https://go.dev/src/cmd/go/internal/modget/get.go
- 7: https://www.mywebuniversity.com/Man_PDF/go-packages.7.pdf
- 8: https://go.dev/ref/mod/?utm_source=openai
Update the Go dependency update hint to use the module-targeted command.
The inspection command go list -u -m all is correct, but the remediation hint should recommend go get -u ./... instead of go get -u all for module-wide upgrades. Go modules documentation indicates go get -u ./... (or go get -u=patch ./... for patch-only updates) is the standard approach from the module root.
Suggested fix
go list -u -m all
- echo "Run 'go get -u all' to update dependencies"
+ echo "Run 'go get -u ./...' to update dependencies"
+ echo "Use 'go get -u=patch ./...' for patch-only updates"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
go list -u -m all
echo "Run 'go get -u all' to update dependencies"

Suggested:
go list -u -m all
echo "Run 'go get -u ./...' to update dependencies"
echo "Use 'go get -u=patch ./...' for patch-only updates"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/go-audit.yml around lines 64 - 65, The echo remediation
message currently recommends the incorrect command "go get -u all"; update the
hint string that follows the successful "go list -u -m all" check to recommend
module-scoped updates by changing the message to suggest "go get -u ./..." (and
optionally mention "go get -u=patch ./..." for patch-only updates) so users run
the correct module-wide upgrade command.
- name: Generate license report
  working-directory: ./cla-backend-legacy
  run: |
    go-licenses report ./... --template .github/license-report.tpl > license-report.txt
    cat license-report.txt
  continue-on-error: true
Incorrect template path will cause workflow failure.
The working-directory is set to ./cla-backend-legacy, but the template path .github/license-report.tpl is relative to that directory. Since the template file exists at the repository root (.github/license-report.tpl), the path should be ../.github/license-report.tpl.
🐛 Proposed fix
- name: Generate license report
working-directory: ./cla-backend-legacy
run: |
- go-licenses report ./... --template .github/license-report.tpl > license-report.txt
+ go-licenses report ./... --template ../.github/license-report.tpl > license-report.txt
cat license-report.txt
continue-on-error: true📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
- name: Generate license report
  working-directory: ./cla-backend-legacy
  run: |
    go-licenses report ./... --template .github/license-report.tpl > license-report.txt
    cat license-report.txt
  continue-on-error: true

Suggested:
- name: Generate license report
  working-directory: ./cla-backend-legacy
  run: |
    go-licenses report ./... --template ../.github/license-report.tpl > license-report.txt
    cat license-report.txt
  continue-on-error: true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/license-compliance-go.yml around lines 45 - 50, The
workflow fails when the go-licenses command can't find the template because the
current working-directory (working-directory) is ./cla-backend-legacy but the
template path .github/license-report.tpl is resolved relative to that directory;
update the go-licenses invocation in the Generate license report step to
reference the repo-root template by changing the template path to
../.github/license-report.tpl so the command (in the step containing go-licenses
report ./... --template ...) points to the correct file.
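The path-resolution behaviour behind this fix can be demonstrated offline; `/tmp/repo` is a stand-in for the checkout layout, not the real repository:

```shell
# Sketch: why the template path breaks under working-directory.
# Assumed layout: .github/license-report.tpl at the repo root,
# cla-backend-legacy/ as the Go module directory.
mkdir -p /tmp/repo/.github /tmp/repo/cla-backend-legacy
touch /tmp/repo/.github/license-report.tpl
cd /tmp/repo/cla-backend-legacy

# The workflow's current path, resolved from cla-backend-legacy/:
test -f .github/license-report.tpl && current=found || current=missing
# The root-relative path the fix proposes:
test -f ../.github/license-report.tpl && fixed=found || fixed=missing

echo "current path: ${current}, fixed path: ${fixed}"
```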
```yaml
# Check for forbidden licenses
- name: Check for forbidden licenses
  working-directory: ./cla-backend-legacy
  run: |
    echo "Checking for GPL, LGPL, AGPL licenses..."
    go-licenses check ./... | grep -i -E "(gpl|agpl)" && exit 1 || echo "No forbidden licenses found"
  continue-on-error: true
```
continue-on-error: true defeats the purpose of this check.
The forbidden license check is designed to fail the workflow when GPL/AGPL licenses are detected (exit 1). However, continue-on-error: true allows the workflow to succeed regardless, meaning forbidden licenses won't actually block the build or PR.
If this is intentional (e.g., for initial rollout), consider adding a comment explaining why, or remove continue-on-error to enforce the check.
🛡️ Proposed fix to enforce the check
```diff
  # Check for forbidden licenses
  - name: Check for forbidden licenses
    working-directory: ./cla-backend-legacy
    run: |
      echo "Checking for GPL, LGPL, AGPL licenses..."
      go-licenses check ./... | grep -i -E "(gpl|agpl)" && exit 1 || echo "No forbidden licenses found"
-   continue-on-error: true
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/license-compliance-go.yml around lines 52 - 58, The
workflow step "Check for forbidden licenses" currently has continue-on-error:
true which prevents the step from failing the job when go-licenses detects
GPL/AGPL; remove the continue-on-error: true line so the step fails the job on
exit 1, or if this permissive behavior is intentional add an inline comment
above the step explaining why the check should not block CI (e.g., "temporary:
do not fail CI while we triage false positives") to make the choice explicit;
update the step that runs go-licenses check ./... | grep -i -E "(gpl|agpl)"
accordingly in the "Check for forbidden licenses" step.
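The exit-code logic this comment relies on can be sketched with fake `go-licenses` output; `check_licenses` is a hypothetical helper for illustration, not part of the workflow:

```shell
# grep exits 0 on a match, so "&& return 1" is what fails the gate;
# with continue-on-error: true that failure never blocks the job.
check_licenses() {
  printf '%s\n' "$1" | grep -i -E "(gpl|agpl)" > /dev/null && return 1 \
    || echo "No forbidden licenses found"
}

check_licenses "github.com/foo,MIT" && clean_rc=0 || clean_rc=$?
check_licenses "github.com/bar,GPL-3.0" && gpl_rc=0 || gpl_rc=$?
echo "clean exit: ${clean_rc}, gpl exit: ${gpl_rc}"
```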
Signed-off-by: Lukasz Gryglicki <lgryglicki@cncf.io> Assisted by [OpenAI](https://platform.openai.com/) Assisted by [GitHub Copilot](https://github.com/features/copilot)
Actionable comments posted: 3
🧹 Nitpick comments (1)
.github/workflows/deploy-dev.yml (1)
336-336: Update `needs` if duplicate deployment is removed. Once one of the duplicate deployment paths is removed, update this dependency list accordingly. If `legacy-backend-deploy` is removed, change to `needs: build-deploy-dev`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deploy-dev.yml at line 336, The workflow step currently lists both build-deploy-dev and legacy-backend-deploy in the needs array; if you remove the duplicate legacy deployment path, update the needs entry to only include the remaining dependency (replace the array [build-deploy-dev, legacy-backend-deploy] with build-deploy-dev or a single-item array containing build-deploy-dev) so the job dependencies reflect the removed job; ensure the YAML key "needs" in .github/workflows/deploy-dev.yml is updated consistently wherever referenced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/deploy-dev.yml:
- Around line 251-264: The workflow step "EasyCLA Legacy Backend Service Check"
is masking failures by appending `|| echo "…(expected for now)"` to the curl
calls; update the step so health checks fail the job (or clearly record failure
state) by removing the fallback echo and handling non-zero exits explicitly:
capture responses from v1_legacy_url and v2_legacy_url (symbols: v1_legacy_url,
v2_legacy_url), use strict shell error handling (e.g., enable set -e or check
curl's exit code) and log a clear TODO/tracking issue comment to relax this once
endpoints are stable so failures are obvious and traceable.
- Around line 128-131: Update the "Go Setup CLA Legacy Backend" step
(working-directory: cla-backend-legacy) to stop running `go mod tidy`; instead
run `go mod download` followed by `go mod verify` so the workflow fails on
dependency drift rather than silently modifying go.sum/go.mod—replace the `go
mod tidy` command with `go mod download` and `go mod verify` in that step.
In @.github/workflows/go-audit.yml:
- Around line 31-34: Update the Go setup step in the workflow to use a supported
Go release by changing the actions/setup-go configuration: in the job step that
uses actions/setup-go@v5, modify the with.go-version value from '1.25' to '1.26'
(or a later patch like '1.26.1') so the CI uses the current stable Go toolchain.
---
Nitpick comments:
In @.github/workflows/deploy-dev.yml:
- Line 336: The workflow step currently lists both build-deploy-dev and
legacy-backend-deploy in the needs array; if you remove the duplicate legacy
deployment path, update the needs entry to only include the remaining dependency
(replace the array [build-deploy-dev, legacy-backend-deploy] with
build-deploy-dev or a single-item array containing build-deploy-dev) so the job
dependencies reflect the removed job; ensure the YAML key "needs" in
.github/workflows/deploy-dev.yml is updated consistently wherever referenced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7df3cd58-7025-4d5a-9203-54cb356a17e1
⛔ Files ignored due to path filters (1)
`cla-backend-legacy/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (9)
- .github/workflows/build-pr.yml
- .github/workflows/cla-backend-legacy-deploy-dev.yml
- .github/workflows/cla-backend-legacy-deploy-prod.yml
- .github/workflows/deploy-dev.yml
- .github/workflows/go-audit.yml
- .github/workflows/license-compliance-go.yml
- .github/workflows/security-scan-go.yml
- .gitignore
- cla-backend-legacy/go.mod
🚧 Files skipped from review as they are similar to previous changes (6)
- .github/workflows/security-scan-go.yml
- .github/workflows/cla-backend-legacy-deploy-prod.yml
- cla-backend-legacy/go.mod
- .gitignore
- .github/workflows/cla-backend-legacy-deploy-dev.yml
- .github/workflows/license-compliance-go.yml
```yaml
- name: Go Setup CLA Legacy Backend
  working-directory: cla-backend-legacy
  run: |
    go mod tidy
```
Same go mod tidy concern applies here.
In deploy workflows, dependency drift is even more critical to catch early. Consider using go mod download + go mod verify to fail on drift rather than silently fixing it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-dev.yml around lines 128 - 131, Update the "Go
Setup CLA Legacy Backend" step (working-directory: cla-backend-legacy) to stop
running `go mod tidy`; instead run `go mod download` followed by `go mod verify`
so the workflow fails on dependency drift rather than silently modifying
go.sum/go.mod—replace the `go mod tidy` command with `go mod download` and `go
mod verify` in that step.
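Under the assumption that the module lives in `cla-backend-legacy/` (as the step already states), the suggested replacement could be sketched as:

```yaml
- name: Go Setup CLA Legacy Backend
  working-directory: cla-backend-legacy
  run: |
    go mod download   # fetch exactly what go.mod/go.sum record
    go mod verify     # fail if downloaded modules do not match go.sum
```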
```yaml
- name: EasyCLA Legacy Backend Service Check
  run: |
    sudo apt install curl jq -y

    # Development environment endpoints to test
    declare -r v1_legacy_url="https://apigo.lfcla.${STAGE}.platform.linuxfoundation.org/v1/health"
    declare -r v2_legacy_url="https://apigo.lfcla.${STAGE}.platform.linuxfoundation.org/v2/health"

    echo "Validating v1 legacy backend using endpoint: ${v1_legacy_url}"
    curl --fail -XGET ${v1_legacy_url} || echo "v1 legacy health endpoint check failed (expected for now)"

    echo "Validating v2 legacy backend using endpoint: ${v2_legacy_url}"
    curl --fail -XGET ${v2_legacy_url} || echo "v2 legacy health endpoint check failed (expected for now)"
```
Health checks don't fail the workflow on error.
The || echo "...(expected for now)" pattern means the workflow passes even when health endpoints are unreachable. Consider adding a TODO or tracking issue to tighten these checks once the endpoints are stable, or use set +e with explicit exit code handling to at least log the failure state clearly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-dev.yml around lines 251 - 264, The workflow step
"EasyCLA Legacy Backend Service Check" is masking failures by appending `|| echo
"…(expected for now)"` to the curl calls; update the step so health checks fail
the job (or clearly record failure state) by removing the fallback echo and
handling non-zero exits explicitly: capture responses from v1_legacy_url and
v2_legacy_url (symbols: v1_legacy_url, v2_legacy_url), use strict shell error
handling (e.g., enable set -e or check curl's exit code) and log a clear
TODO/tracking issue comment to relax this once endpoints are stable so failures
are obvious and traceable.
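A minimal sketch of the explicit handling the prompt describes; `curl` is stubbed with a shell function here so the snippet runs offline, and the URL is illustrative:

```shell
# Instead of masking failures with "|| echo ... (expected for now)",
# record the exit state explicitly and fail the step at the end.
curl() { return 22; }  # simulate an HTTP error from the health endpoint

status=0
if ! curl --fail -XGET "https://apigo.lfcla.dev.platform.linuxfoundation.org/v1/health"; then
  echo "v1 legacy health endpoint check FAILED"
  status=1
fi
echo "overall status: ${status}"
# A real workflow step would end with: exit "${status}"
```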
```yaml
- name: Setup Go
  uses: actions/setup-go@v5
  with:
    go-version: '1.25'
```
🧩 Analysis chain
🌐 Web query:
What is the latest stable Go version as of March 2026?
💡 Result:
As of March 2026, the latest stable Go release is Go 1.26.1 (released March 5, 2026). [1] The Go downloads page also lists go1.26.1 as the current stable version. [2]
Update Go version from 1.25 to 1.26 or later.
Go 1.25 is outdated as of March 2026. The latest stable version is Go 1.26.1 (released March 5, 2026). Update go-version to '1.26' to use a current and supported release.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/go-audit.yml around lines 31 - 34, Update the Go setup
step in the workflow to use a supported Go release by changing the
actions/setup-go configuration: in the job step that uses actions/setup-go@v5,
modify the with.go-version value from '1.25' to '1.26' (or a later patch like
'1.26.1') so the CI uses the current stable Go toolchain.
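Applied, the step would look like this (the `'1.26'` value is the reviewer's suggestion, not a pinned recommendation):

```yaml
- name: Setup Go
  uses: actions/setup-go@v5
  with:
    go-version: '1.26'
```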
You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool.

For more information about GitHub Code Scanning, check out the documentation.