Conversation
…ata format
Improve error handling in the frontend to show specific messages for different HTTP status codes (404, 400, and 500+) instead of generic errors. Add a _transform_json_entry_to_timeline() method to convert JSON log data into the format expected by the frontend (step_count, total_duration_ms, and a steps array with the proper fields). Update tests to verify the new response format structure.
…onse
Add event_type, action, repository, sender, pr, success, and error fields to the _format_timeline_data() response. These fields are needed by the frontend to render complete workflow context in the timeline modal.
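For orientation, a sketch in Python of the payload shape these commits describe; the top-level and per-step field names come from the commit message and the review walkthrough below, while the example values and the "name" key are illustrative assumptions:

```python
# Hypothetical timeline payload from _transform_json_entry_to_timeline();
# field names are taken from the commit/walkthrough, values are made up.
timeline = {
    "hook_id": "abc123",
    "start_time": "2024-01-01T00:00:00Z",
    "total_duration_ms": 4200,
    "step_count": 2,
    "steps": [
        {
            "name": "clone_repository",       # hypothetical step name
            "timestamp": "2024-01-01T00:00:00Z",
            "duration_ms": 1500,
            "relative_time_ms": 0,            # offset from start_time
            "error": None,
        },
        {
            "name": "run_checks",
            "timestamp": "2024-01-01T00:00:01Z",
            "duration_ms": 2700,
            "relative_time_ms": 1500,
            "error": None,
        },
    ],
}
```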
Report bugs in Issues

Welcome! 🎉 This pull request will be automatically processed with the following features:

🔄 Automatic Actions

📋 Available Commands
- PR Status Management
- Review & Approval
- Testing & Validation
- Container Operations
- Cherry-pick Operations
- Label Management

✅ Merge Requirements
This PR will be automatically approved when the following conditions are met:

📊 Review Process
Approvers and Reviewers
Approvers:
Reviewers:

Available Labels

💡 Tips

For more information, please refer to the project documentation or contact the maintainers.
Walkthrough

Adds JSON→timeline transformation, a new per-step log retrieval API and controller method, frontend step-modal rendering with robust error handling, CSS for the step/details UI, and extensive test fixture and test updates targeting time-windowed step log behavior and stricter malformed-data errors.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Browser as Browser (log_viewer.js)
participant API as FastAPI app (app.py)
participant Controller as LogViewerController (log_viewer.py)
participant Storage as JSON Log Storage
Browser->>API: GET /logs/api/workflow-steps?hookId=...
API->>Controller: get_workflow_steps_json(hook_id)
Controller->>Storage: fetch raw JSON entry for hook_id
Storage-->>Controller: raw entry (timing, workflow_steps, token_spend, ...)
Note over Controller: _transform_json_entry_to_timeline(entry, hook_id)\nvalidate timing & workflow_steps\ncompute start_time/total_duration_ms\nconvert workflow_steps → ordered steps\nderive per-step fields (timestamp, duration_ms, relative_time_ms, error)
alt malformed entry
Controller-->>API: raise 500 { "errorDetail": "Malformed log entry" }
API-->>Browser: HTTP 500 JSON
else OK
Controller-->>API: 200 JSON timeline (hook_id, start_time, total_duration_ms, step_count, steps, ...)
API-->>Browser: HTTP 200 JSON
end
Browser->>API: GET /logs/api/step-logs/{hook_id}/{step_name}
API->>Controller: get_step_logs(hook_id, step_name)
Controller->>Controller: locate step, compute window (duration or default)\ncap results to _MAX_STEP_LOGS
Controller->>Storage: filter logs by time window & hook_id
Storage-->>Controller: matching log entries
Controller-->>API: 200 JSON { step: metadata, logs: [...], log_count }
API-->>Browser: 200 JSON
```
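A rough Python sketch of the step-log lookup shown in the second half of the diagram: locate the step, derive a time window from its timestamp and duration (with a fallback width when the duration is missing), filter by hook, and cap the result set. _MAX_STEP_LOGS appears in the diagram; its value, _DEFAULT_WINDOW_MS, and the helper names are assumptions:

```python
from datetime import datetime, timedelta

_MAX_STEP_LOGS = 500        # cap named in the diagram; the value is assumed
_DEFAULT_WINDOW_MS = 5000   # hypothetical fallback window width

def _step_window(step: dict) -> tuple[datetime, datetime]:
    """Compute the [start, end) window for a step's log lookup."""
    start = datetime.fromisoformat(step["timestamp"])
    width_ms = step.get("duration_ms") or _DEFAULT_WINDOW_MS
    return start, start + timedelta(milliseconds=width_ms)

def get_step_logs(logs: list[dict], step: dict, hook_id: str) -> dict:
    """Filter logs to the step's window and hook_id, capped to _MAX_STEP_LOGS."""
    start, end = _step_window(step)
    matched = [
        entry for entry in logs
        if entry.get("hook_id") == hook_id
        and start <= datetime.fromisoformat(entry["timestamp"]) < end
    ]
    matched = matched[:_MAX_STEP_LOGS]
    return {"step": step, "logs": matched, "log_count": len(matched)}
```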
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@webhook_server/web/log_viewer.py`:
- Around line 556-563: The docstring for get_workflow_steps_json is missing the
new public context fields returned by the function; update its Returns (or
description of the returned dictionary) to include event_type, action,
repository, sender, pr, success, and error with brief descriptions and types
(e.g., strings/booleans/null), so the API contract matches the actual response
structure and prevents client misuse; locate get_workflow_steps_json and append
these fields to the existing list of returned keys and their short explanations.
- Around line 642-654: The steps_list append is hardcoding "task_id" and
"task_type" to None which discards grouping metadata; update the append in the
code that builds steps_list (the block using step_timestamp, step_name, message,
level, repository, event_type, pr_number, step_status, step_duration_ms,
step_error) to pull task metadata from the parsed step_data (e.g. use
step_data.get("task_id") and step_data.get("task_type") or similar) and default
to None if missing so existing task_id/task_type in JSON are preserved for UI
grouping.
- Around line 606-616: The code currently defaults missing timing/workflow
fields to empty values which masks malformed logs; instead, in the log parsing
function that processes "entry" (uses variables timing, workflow_steps,
start_time, total_duration_ms, pr_number) validate that entry contains non-empty
"timing" and "workflow_steps" and that timing contains "started_at" and
"duration_ms"; if any required field is missing or invalid, raise a clear
exception (e.g., ValueError) or return an error result so callers fail fast and
surface the malformed log rather than silently using 0/{} defaults. Sketches of all three log_viewer.py fixes follow this list.
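For the first item, the expanded Returns section might read as follows; the method signature and the types are inferred from the PR description's field list and are assumptions:

```python
def get_workflow_steps_json(self, hook_id: str) -> dict:
    """Return the workflow timeline for a webhook delivery as JSON.

    Returns:
        A dictionary with:
            hook_id (str): Webhook delivery identifier.
            start_time (str): ISO-8601 start timestamp.
            total_duration_ms (int): Total processing duration.
            step_count (int): Number of timeline steps.
            steps (list[dict]): Ordered per-step entries.
            event_type (str | None): GitHub event type, e.g. "pull_request".
            action (str | None): Event action, e.g. "opened".
            repository (str | None): Repository name (owner/repo).
            sender (str | None): GitHub username who triggered the event.
            pr (dict | None): Pull request info (number, title, ...).
            success (bool): Whether processing succeeded.
            error (str | None): Error message when processing failed.
    """
    ...
```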
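For the second item, a sketch of the steps_list append with grouping metadata preserved; the variable names mirror those quoted in the comment, while the exact dictionary keys are assumptions:

```python
# Inside the loop that builds steps_list; step_data is the parsed
# per-step JSON object referenced in the review comment.
steps_list.append({
    "timestamp": step_timestamp,
    "name": step_name,
    "message": message,
    "level": level,
    "repository": repository,
    "event_type": event_type,
    "pr_number": pr_number,
    "status": step_status,
    "duration_ms": step_duration_ms,
    "error": step_error,
    # Preserve grouping metadata instead of hardcoding None:
    "task_id": step_data.get("task_id"),
    "task_type": step_data.get("task_type"),
})
```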
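For the third item, a minimal fail-fast sketch, assuming FastAPI's HTTPException (the walkthrough shows the endpoint answering 500 with an errorDetail payload):

```python
from fastapi import HTTPException

def _require_timing_and_steps(entry: dict) -> tuple[dict, list]:
    """Reject malformed log entries instead of defaulting to 0/{} values."""
    timing = entry.get("timing")
    workflow_steps = entry.get("workflow_steps")
    if not timing or not workflow_steps:
        raise HTTPException(status_code=500, detail={"errorDetail": "Malformed log entry"})
    if "started_at" not in timing or "duration_ms" not in timing:
        raise HTTPException(status_code=500, detail={"errorDetail": "Malformed log entry"})
    return timing, workflow_steps
```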
In `@webhook_server/web/static/js/log_viewer.js`:
- Around line 842-872: The 404 branch discards the parsed server errorDetail;
update the response handler inside the .then(async (response) => { ... }) block
to use the parsed errorDetail (if present) when calling showFlowModalError and
logging for the 404 case (refer to hookId and showFlowModalError), e.g. build a
message = errorDetail || "No workflow data found for this hook" and use
console.log/console.error with that message so server-provided context is
surfaced to operators.
…line
- Document new context fields (event_type, action, repository, sender, pr, success, error) in get_workflow_steps_json docstring
- Fail fast on missing timing/workflow_steps instead of defaulting to empty values, raising HTTPException with clear error messages
- Preserve task metadata (task_id, task_type) from JSON steps for grouping
- Surface server-provided detail on 404 errors in JavaScript modal
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@webhook_server/web/log_viewer.py`:
- Around line 631-634: The line computing total_duration_ms should not coerce
falsy zero values; replace the fallback pattern and use the validated field
directly by assigning total_duration_ms = timing["duration_ms"] (leave
start_time = timing["started_at"] as-is) so explicit 0 durations are preserved;
update the code around the timing dict usage in log_viewer.py where
total_duration_ms is computed.
- Around line 689-703: pr_info is currently set with a fallback to {} earlier
(via entry.get("pr") or {}), so returning "pr": pr_info if pr_info else None is
redundant; to return None when PR info is absent, use the original lookup value
(entry.get("pr")) in the return or change the earlier assignment to preserve
None when absent. Update the code to either (a) use "pr": entry.get("pr") in the
returned dict, or (b) set pr_info = entry.get("pr") (no "or {}") and keep "pr":
pr_info if pr_info else None so absence yields None; reference the pr_info
variable and entry.get("pr") to locate the change. A combined sketch of both fixes follows below.
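A combined sketch of both fixes, assuming the surrounding code shape described in the comments (the names timing, total_duration_ms, and entry come from the review; everything else is illustrative):

```python
def build_timeline(entry: dict) -> dict:
    """Illustrative fragment combining both review fixes."""
    timing = entry["timing"]  # validated earlier to exist (see fail-fast fix)
    start_time = timing["started_at"]
    # Use the field directly; the previous `or 0` fallback masked
    # legitimate explicit zero durations.
    total_duration_ms = timing["duration_ms"]
    return {
        "start_time": start_time,
        "total_duration_ms": total_duration_ms,
        # entry.get("pr") already yields None when absent, so no extra
        # truthiness check is needed.
        "pr": entry.get("pr"),
    }
```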
Fixes two issues identified in code review:
- Remove 'or 0' fallback that masked valid zero-duration webhooks
- Remove redundant truthiness check on pr_info by using entry.get("pr") directly
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@webhook_server/tests/conftest.py`:
- Around line 295-324: The fixture create_text_log_file currently accepts an
unused tmp_path parameter; remove tmp_path from the fixture signature so it
becomes def create_text_log_file(): and update its docstring to reflect that it
returns a callable _create_text_log_file(log_dir: Path, filename: str,
log_lines: list[str]) -> Path; ensure tests that use the fixture still pass
tmp_path separately (e.g., request tmp_path in the test and pass it into the
returned _create_text_log_file) and update any references to tmp_path in the
fixture comments or examples.
- Around line 262-292: The fixture create_json_log_file declares an unused
tmp_path parameter causing ARG001; remove tmp_path from the fixture signature
and any references so the fixture becomes def create_json_log_file():, keeping
the inner factory function _create_json_log_file unchanged (it still accepts
log_dir, filename, entries) and preserving the existing docstring and return
value. A sketch of the corrected factory pattern for both fixtures follows below.
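A sketch of the corrected factory fixtures using pytest's standard factory-fixture pattern; the inner factories match the signatures quoted in the comments, while the write format is illustrative:

```python
import json
from pathlib import Path

import pytest

@pytest.fixture
def create_json_log_file():
    """Factory: write JSON log entries to log_dir/filename, return the path."""
    def _create_json_log_file(log_dir: Path, filename: str, entries: list[dict]) -> Path:
        log_file = log_dir / filename
        log_file.write_text("\n".join(json.dumps(e) for e in entries) + "\n")
        return log_file
    return _create_json_log_file

@pytest.fixture
def create_text_log_file():
    """Factory: write plain-text log lines to log_dir/filename, return the path."""
    def _create_text_log_file(log_dir: Path, filename: str, log_lines: list[str]) -> Path:
        log_file = log_dir / filename
        log_file.write_text("\n".join(log_lines) + "\n")
        return log_file
    return _create_text_log_file

# Tests request tmp_path themselves and pass it in:
def test_round_trip(tmp_path, create_json_log_file):
    path = create_json_log_file(tmp_path, "hooks.log", [{"hook_id": "abc"}])
    assert path.exists()
```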
Remove unused tmp_path fixture parameters from create_json_log_file and create_text_log_file fixtures to fix ARG001 linting warnings.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@webhook_server/tests/conftest.py`:
- Around line 211-215: The mock_logger fixture returns a bare Mock which allows
attributes like .name to be another Mock; update the mock_logger fixture to
better mirror a real logging.Logger by either using Mock(spec=logging.Logger) or
by explicitly setting realistic attributes (e.g., mock.name =
"webhook_server.tests" and mock.level = logging.INFO) so code that reads
logger.name/level behaves like production; keep the fixture named mock_logger
and ensure it still returns the mock for tests to use.
- Around line 217-259: The sample_json_webhook_data fixture returns a payload
with "success": False but no top-level "error" field; update the fixture
(sample_json_webhook_data) to include a top-level "error" object when success is
False (e.g., include keys like "type" and "message" or "code") so tests reflect
the required schema for failed events and cover regressions that expect an error
field.
- Around line 262-293: The create_json_log_file fixture (_create_json_log_file)
currently writes compact JSON lines; update it to emit pretty‑printed JSON
matching production by using json.dumps(entry, indent=2) (and keep each entry
separated by a newline) so test log files mirror the production 2‑space indented
format expected by the webhook parser. Sketches for all three items follow this list.
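For the mock_logger item, a sketch using spec=logging.Logger with the realistic attributes the comment suggests:

```python
import logging
from unittest.mock import Mock

import pytest

@pytest.fixture
def mock_logger():
    """Mock that mirrors a real logging.Logger's attribute surface."""
    logger = Mock(spec=logging.Logger)
    logger.name = "webhook_server.tests"  # realistic values per the review
    logger.level = logging.INFO
    return logger
```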
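For the sample_json_webhook_data item, the failure-path shape might look like this; the keys inside "error" are the examples the comment itself offers:

```python
sample_json_webhook_data = {
    # ... existing payload fields (event_type, repository, timing, ...) ...
    "success": False,
    "error": {
        "type": "ProcessingError",              # hypothetical example values
        "message": "Step 'run_checks' failed",
    },
}
```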
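For the create_json_log_file item, the suggested write path; whether production really emits 2-space-indented JSON is the comment's claim, not verified here (the follow-up commit documents the JSONL format instead):

```python
import json
from pathlib import Path

def write_pretty_entries(log_file: Path, entries: list[dict]) -> None:
    """Write entries pretty-printed with indent=2, newline-separated (sketch)."""
    log_file.write_text("\n".join(json.dumps(e, indent=2) for e in entries) + "\n")
```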
- Update mock_logger to use spec=logging.Logger with realistic attributes
- Add error field to sample_json_webhook_data when success is False
- Document JSONL format in create_json_log_file fixture
/build-and-push-container
New container for ghcr.io/myk-org/github-webhook-server:pr-982 published
Description:
Summary
Adds fields to the _format_timeline_data() response that are required by the frontend timeline modal

Changes
Adds 7 new fields to the workflow timeline API response:
- event_type: GitHub event type (e.g., "pull_request", "check_run")
- action: Event action (e.g., "opened", "synchronize")
- repository: Repository name (owner/repo)
- sender: GitHub username who triggered the event
- pr: Pull request info (number, title, etc.)
- success: Boolean indicating if processing succeeded
- error: Error message if processing failed

Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Security
Tests
✏️ Tip: You can customize this high-level summary in your review settings.