Add aggregate SchemaBot check run for branch protection#38
Conversation
Pull request overview
This PR adds a single aggregate GitHub Check Run named SchemaBot that rolls up all per-database/per-environment SchemaBot checks into one required status for branch protection, and updates webhook flows/tests to create and maintain that aggregate check across plan/apply and PR synchronize events.
Changes:
- Implement aggregate check computation plus a markdown summary table, and persist it as a sentinel checks record.
- Update plan/apply flows and stale-check cleanup to recompute the aggregate on the current PR HEAD SHA.
- Add unit and E2E coverage for aggregate check behavior.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| pkg/webhook/check_runs.go | Adds aggregate check constants, rollup logic, and updates aggregate on apply results. |
| pkg/webhook/check_runs_test.go | Adds unit tests for aggregate computation, identification, and summary formatting. |
| pkg/webhook/plan.go | Recomputes aggregate check after plan operations (single-env and multi-env). |
| pkg/webhook/pull_request.go | Updates stale-check cleanup to create fresh check runs on new HEAD SHAs and recompute aggregate. |
| pkg/webhook/apply_handlers.go | Recomputes aggregate when applies transition to in_progress. |
| pkg/webhook/webhook_e2e_test.go | Adjusts fake GitHub check-run handling and adds an E2E test for the aggregate check. |
```go
// Update aggregate check once after all environments are planned
prInfo, prErr := client.FetchPullRequest(ctx, repo, pr)
if prErr == nil {
	h.updateAggregateCheck(ctx, client, repo, pr, prInfo.HeadSHA)
}
```
handleMultiEnvPlan already calls createPlanCheckRun once per environment (each of which fetches PR info). Adding another FetchPullRequest here adds more GitHub API traffic. Consider fetching the PR once at the start of handleMultiEnvPlan and reusing the head SHA for all check run creates and the aggregate update.
```diff
 // Capture check run creates (incrementing IDs so aggregate can distinguish create vs update)
 var checkRunIDCounter int64
 mux.HandleFunc("POST /repos/octocat/hello-world/check-runs", func(w http.ResponseWriter, r *http.Request) {
 	var body checkRunCapture
 	_ = json.NewDecoder(r.Body).Decode(&body)
 	result.checkRuns <- body
 	checkRunIDCounter++
 	w.WriteHeader(http.StatusCreated)
-	_ = json.NewEncoder(w).Encode(map[string]any{"id": 1})
+	_ = json.NewEncoder(w).Encode(map[string]any{"id": checkRunIDCounter})
```
checkRunIDCounter is incremented from the fake GitHub HTTP handler without synchronization. Since httptest.Server handlers can run concurrently, this introduces a data race under -race and can also produce duplicate/incorrect IDs. Use atomic.AddInt64 (or a mutex) when incrementing/reading this counter.
```go
// isAggregateCheck returns true if the check is the aggregate (not a per-database check).
func isAggregateCheck(c *storage.Check) bool {
	return c.Environment == aggregateEnvironment
}
```
isAggregateCheck only matches on Environment == aggregateEnvironment, but the code defines sentinel values for environment/dbType/dbName. As written, a real environment named _aggregate would be misclassified and excluded from rollups/cleanup. Consider matching all three sentinel fields (env/type/name) to make the aggregate identification unambiguous.
```diff
-	return c.Environment == aggregateEnvironment
+	return c.Environment == aggregateEnvironment &&
+		c.DatabaseType == aggregateDBType &&
+		c.DatabaseName == aggregateDBName
```
```go
// - ANY check "failure" → aggregate "failure"
// - ANY check "in_progress" → aggregate status "in_progress"
```
The priority list in the updateAggregateCheck comment says failure dominates before in_progress, but computeAggregate returns in_progress even if another check has failure. Please update the doc comment to reflect the actual priority (or adjust the logic if the comment is the intended behavior).
```diff
-// - ANY check "failure" → aggregate "failure"
-// - ANY check "in_progress" → aggregate status "in_progress"
+// - ANY check "in_progress" → aggregate status "in_progress"
+// - ANY check "failure" → aggregate "failure"
```
```go
// buildAggregateTable builds a markdown table showing the status of each per-database check.
func buildAggregateTable(checks []*storage.Check) string {
	var sb strings.Builder
	sb.WriteString("| Database | Environment | Status |\n")
	sb.WriteString("|----------|-------------|--------|\n")

	for _, c := range checks {
		status := conclusionEmoji(c.Status, c.Conclusion)
		fmt.Fprintf(&sb, "| `%s` | %s | %s |\n", c.DatabaseName, c.Environment, status)
	}

	return sb.String()
}
```
buildAggregateTable can grow with (databases × environments). Unlike buildCheckRunText, the aggregate summary isn’t truncated to GitHub’s output limits, so large PRs could cause check run create/update calls to fail. Consider truncating the table (or limiting rows) and appending a note when the summary exceeds the API limit.
```go
// Create a new check run if no existing record, or if the HEAD SHA changed
// (new commit pushed). Updating an old check run tied to a previous SHA is
// invisible on the PR — GitHub only shows checks for the HEAD commit.
var checkRunID int64
if existing != nil && existing.CheckRunID != 0 && existing.HeadSHA == headSHA {
	if err := client.UpdateCheckRun(ctx, repo, existing.CheckRunID, opts); err != nil {
		h.logger.Error("failed to update aggregate check run", "checkRunID", existing.CheckRunID, "error", err)
		return
	}
	checkRunID = existing.CheckRunID
} else {
	id, err := client.CreateCheckRun(ctx, repo, headSHA, opts)
	if err != nil {
		h.logger.Error("failed to create aggregate check run", "error", err)
		return
	}
	checkRunID = id
}
```
updateAggregateCheck can be invoked concurrently (e.g., auto-plan launches multiple goroutines per database, and each calls updateAggregateCheck). If two calls race when no aggregate record exists yet, both can create a new GitHub check run named SchemaBot, leaving duplicate check runs on the same SHA. Consider making creation idempotent (e.g., serialize via a per-(repo,pr) lock, or insert the aggregate record first and only one goroutine performs the create).
Low risk in practice — auto-plan goroutines are launched per database via goSafe, and each database only plans its own environments sequentially. Concurrent aggregate updates would only happen if two different databases are planned simultaneously, and both would compute the correct result since they read from the DB. The last writer wins, which is correct. Adding a per-(repo,pr) lock is possible but adds complexity for a race that produces the right answer.
```go
// Create check run and update aggregate
h.createPlanCheckRun(ctx, client, repo, pr, schemaResult, planResp, environment, installationID)

prInfo, prErr := client.FetchPullRequest(ctx, repo, pr)
if prErr == nil {
	h.updateAggregateCheck(ctx, client, repo, pr, prInfo.HeadSHA)
}
```
There was a problem hiding this comment.
This adds an extra FetchPullRequest call immediately after createPlanCheckRun, but createPlanCheckRun already fetches the PR head SHA internally. This doubles GitHub API calls per plan and can contribute to rate limiting. Consider plumbing the head SHA out of createPlanCheckRun (or fetching once in the handler and passing it into both createPlanCheckRun and updateAggregateCheck).
```diff
-// Create check run and update aggregate
-h.createPlanCheckRun(ctx, client, repo, pr, schemaResult, planResp, environment, installationID)
-prInfo, prErr := client.FetchPullRequest(ctx, repo, pr)
-if prErr == nil {
-	h.updateAggregateCheck(ctx, client, repo, pr, prInfo.HeadSHA)
+// Create check run and update aggregate using the same PR head SHA
+headSHA, checkErr := h.createPlanCheckRun(ctx, client, repo, pr, schemaResult, planResp, environment, installationID)
+if checkErr == nil && headSHA != "" {
+	h.updateAggregateCheck(ctx, client, repo, pr, headSHA)
```
Adds a single aggregate check run named "SchemaBot" that rolls up all per-database checks into one conclusion. Branch protection only needs to require this one stable name, regardless of how many databases or environments a PR touches.

Aggregate logic: any in_progress → in_progress, any failure → failure, any action_required → action_required, all success → success. The check output includes a markdown table showing each database/environment status.

Key behaviors:
- Created once per plan operation (not per environment)
- On new commits, stale checks and the aggregate are re-created on the new HEAD SHA so they appear on the current commit
- PRs without schema changes get no aggregate (doesn't block merge)
- Defines constants for GitHub check run status/conclusion values

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed fc6362e to b9b6c97 (Compare)
Force-pushed b9b6c97 to 595ab55 (Compare)
Force-pushed 595ab55 to 7393a3c (Compare)
Force-pushed 7393a3c to 7b5f62f (Compare)
Force-pushed 7b5f62f to afa30a9 (Compare)
Adds a single aggregate check run named `SchemaBot` that rolls up all per-database checks into one conclusion. This is the only check name that needs to be required in branch protection; it works regardless of how many databases or environments a PR touches. Per-database checks like `SchemaBot: staging/mysql/orders` are still created for visibility but don't need to be required.

The aggregate conclusion follows priority order: any in_progress → any failure → any action_required → all success. Its output includes a markdown table showing each database/environment status.

Key behaviors:
- Created once per plan operation (not per environment)
- On new commits (`synchronize`), stale per-database checks and the aggregate are re-created on the new HEAD SHA so they appear on the current commit
- `DeleteByPR` path