
fix(log-viewer): Add missing context fields to workflow timeline modal #982

Open
rnetser wants to merge 16 commits into main from fix-log

Conversation

@rnetser
Collaborator

@rnetser rnetser commented Jan 19, 2026

Description:

Summary

  • Add missing context fields to _format_timeline_data() response that are required by the frontend timeline modal

Changes

Adds 7 new fields to the workflow timeline API response (an illustrative payload sketch follows the list):

  • event_type - GitHub event type (e.g., "pull_request", "check_run")
  • action - Event action (e.g., "opened", "synchronize")
  • repository - Repository name (owner/repo)
  • sender - GitHub username who triggered the event
  • pr - Pull request info (number, title, etc.)
  • success - Boolean indicating if processing succeeded
  • error - Error message if processing failed
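
A minimal sketch of the extended payload, assuming the seven fields above sit at the top level of the timeline response. Every value below is illustrative, and the pre-existing keys (hook_id, step_count, steps, ...) are placeholders rather than the exact schema.

    # Illustrative example only; field names from the PR description, values made up.
    example_timeline_response = {
        "hook_id": "abc123",            # existing identifier (placeholder)
        "step_count": 3,
        "total_duration_ms": 4250,
        "steps": ["..."],               # per-step entries omitted here
        # New context fields added by this PR:
        "event_type": "pull_request",   # GitHub event type
        "action": "opened",             # event action
        "repository": "myk-org/github-webhook-server",  # owner/repo
        "sender": "rnetser",            # GitHub username who triggered the event
        "pr": {"number": 982, "title": "fix(log-viewer): ..."},
        "success": True,                # whether processing succeeded
        "error": None,                  # error message if processing failed
    }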

Summary by CodeRabbit

  • New Features

    • Per-step log retrieval endpoint and in-modal step details with chronological logs, bounded per-step results, and a default time window for steps.
  • Improvements

    • Standardized frontend timeline shape with top-level timing/token info and richer per-step metadata.
    • UI/styling updates for step details, logs, loading/empty states, and status-aware error messages.
  • Bug Fixes

    • Stricter malformed-log handling and explicit errors for missing/invalid timestamps and unknown hooks.
  • Security

    • Log viewer access restricted to trusted networks.
  • Tests

    • Expanded tests and fixtures covering timeline shape, time-window filtering, error scenarios, and negative cases.


…ata format

Improve error handling in the frontend to show specific messages for
different HTTP status codes (404, 400, 500+) instead of generic errors.

Add _transform_json_entry_to_timeline() method to convert JSON log data
into the format expected by the frontend (step_count, total_duration_ms,
steps array with proper fields).

Update tests to verify the new response format structure.
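
A hedged sketch of what _transform_json_entry_to_timeline() could look like, assuming the raw entry carries timing and workflow_steps objects as described later in the review walkthrough. Validation and error handling are omitted here, and the key names inside each step are assumptions, not the actual implementation.

    from typing import Any

    def _transform_json_entry_to_timeline(entry: dict[str, Any], hook_id: str) -> dict[str, Any]:
        """Convert a raw JSON log entry into the shape the frontend timeline expects (sketch)."""
        timing = entry.get("timing", {})
        raw_steps = entry.get("workflow_steps", [])

        steps = [
            {
                "name": step.get("name"),
                "timestamp": step.get("timestamp"),
                "duration_ms": step.get("duration_ms"),
                "status": step.get("status"),
            }
            for step in raw_steps
        ]

        return {
            "hook_id": hook_id,
            "start_time": timing.get("started_at"),
            "total_duration_ms": timing.get("duration_ms"),
            "step_count": len(steps),
            "steps": steps,
        }
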
…onse

Add event_type, action, repository, sender, pr, success, and error fields
to _format_timeline_data() response. These fields are needed by the
frontend to render complete workflow context in the timeline modal.
@myakove-bot
Collaborator

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: Disabled for this repository
  • Pre-commit Checks: pre-commit runs automatically if .pre-commit-config.yaml exists
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified
  • Labels: All label categories are enabled (default configuration)

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status
  • /reprocess - Trigger complete PR workflow reprocessing (useful if webhook failed or configuration changed)
  • /regenerate-welcome - Regenerate this welcome message

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /automerge - Enable automatic merging when all requirements are met (maintainers and approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest python-module-install - Test Python package installation
  • /retest pre-commit - Run pre-commit hooks and checks
  • /retest conventional-title - Validate commit message format
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. LGTM Count: Minimum 1 /lgtm from reviewers
  3. Status Checks: All required status checks must pass
  4. No Blockers: No WIP, hold, conflict labels
  5. Verified: PR must be marked as verified (if verification is enabled)

📊 Review Process

Approvers and Reviewers

Approvers:

  • myakove
  • rnetser

Reviewers:

  • myakove
  • rnetser

Available Labels

  • hold
  • verified
  • wip
  • lgtm
  • approve
  • automerge

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is automatically removed on each new commit
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

@coderabbitai

coderabbitai bot commented Jan 19, 2026

Walkthrough

Adds JSON→timeline transformation, a new per-step log retrieval API and controller method, frontend step-modal rendering with robust error handling, CSS for step/details UI, and extensive test fixture and test updates targeting time-windowed step log behavior and stricter malformed-data errors.

Changes

Cohort / File(s) / Summary

  • Backend: Workflow timeline & step logs (webhook_server/web/log_viewer.py)
    Adds _transform_json_entry_to_timeline(entry, hook_id), _MAX_STEP_LOGS, _DEFAULT_STEP_DURATION_MS, and get_step_logs(hook_id, step_name); validates timing/workflow_steps, converts JSON entries to the frontend timeline shape, raises 500 on malformed entries, caps per-step logs, and refines total-count formatting.
  • API: Step logs route (webhook_server/app.py)
    Adds GET /logs/api/step-logs/{hook_id}/{step_name} guarded by require_log_server_enabled and require_trusted_network; validates path params and delegates to controller.get_step_logs.
  • Frontend: modal, fetch & step rendering (webhook_server/web/static/js/log_viewer.js)
    Converts fetch flows to async with status-aware error parsing (errorDetail), adds renderStepDetails, refactors showStepLogsInModal to render metadata first, uses AbortController, and surfaces 404/500/empty states for per-step logs.
  • Styles: Step/details & logs UI (webhook_server/web/static/css/log_viewer.css)
    Adds styles for the step-details UI, status badges, step logs list, per-level styling (ERROR/WARNING/INFO/STEP/SUCCESS), loading/empty/error states, and related layout/theming.
  • Tests & fixtures (webhook_server/tests/test_log_viewer.py, webhook_server/tests/conftest.py)
    Reworks fixtures to a simpler json_webhook_data_simple, adds create_json_log_file / create_text_log_file helpers and mock_logger, expands tests for get_workflow_steps_json and get_step_logs (windowing, defaults, invalid/missing timestamps, hook filtering, error propagation), and removes deprecated inline helpers.
  • Misc / manifest (manifest_file, ...)
    Minor manifest/requirements tweaks and small refactors to align with the JSON→timeline flow.
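
Based on the app.py summary above, a hedged sketch of how the new route could be wired. require_log_server_enabled, require_trusted_network, and controller are names taken from the summary and are only assumed here to be FastAPI dependency callables and a LogViewerController instance; this is not verbatim project code.

    from fastapi import Depends, FastAPI, HTTPException

    app = FastAPI()

    @app.get(
        "/logs/api/step-logs/{hook_id}/{step_name}",
        dependencies=[Depends(require_log_server_enabled), Depends(require_trusted_network)],  # noqa: F821 - project-provided guards
    )
    async def step_logs(hook_id: str, step_name: str) -> dict:
        # Validate path parameters before delegating to the controller.
        if not hook_id.strip() or not step_name.strip():
            raise HTTPException(status_code=400, detail="hook_id and step_name are required")
        return controller.get_step_logs(hook_id, step_name)  # noqa: F821 - LogViewerController from the project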

Sequence Diagram(s)

sequenceDiagram
    participant Browser as Browser (log_viewer.js)
    participant API as FastAPI app (app.py)
    participant Controller as LogViewerController (log_viewer.py)
    participant Storage as JSON Log Storage

    Browser->>API: GET /logs/api/workflow-steps?hookId=...
    API->>Controller: get_workflow_steps_json(hook_id)
    Controller->>Storage: fetch raw JSON entry for hook_id
    Storage-->>Controller: raw entry (timing, workflow_steps, token_spend, ...)
    Note over Controller: _transform_json_entry_to_timeline(entry, hook_id)\nvalidate timing & workflow_steps\ncompute start_time/total_duration_ms\nconvert workflow_steps → ordered steps\nderive per-step fields (timestamp, duration_ms, relative_time_ms, error)
    alt malformed entry
        Controller-->>API: raise 500 { "errorDetail": "Malformed log entry" }
        API-->>Browser: HTTP 500 JSON
    else OK
        Controller-->>API: 200 JSON timeline (hook_id, start_time, total_duration_ms, step_count, steps, ...)
        API-->>Browser: HTTP 200 JSON
    end

    Browser->>API: GET /logs/api/step-logs/{hook_id}/{step_name}
    API->>Controller: get_step_logs(hook_id, step_name)
    Controller->>Controller: locate step, compute window (duration or default)\ncap results to _MAX_STEP_LOGS
    Controller->>Storage: filter logs by time window & hook_id
    Storage-->>Controller: matching log entries
    Controller-->>API: 200 JSON { step: metadata, logs: [...], log_count }
    API-->>Browser: 200 JSON
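
A sketch of the windowing and capping behaviour shown in the diagram above. _MAX_STEP_LOGS and _DEFAULT_STEP_DURATION_MS exist in log_viewer.py per the walkthrough, but their values and the data layout used here are assumptions.

    from datetime import datetime, timedelta
    from typing import Any

    _MAX_STEP_LOGS = 200                # assumed value
    _DEFAULT_STEP_DURATION_MS = 5000    # assumed default window

    def get_step_logs_sketch(step: dict[str, Any], all_logs: list[dict[str, Any]], hook_id: str) -> dict[str, Any]:
        """Filter logs to the step's time window, capped to _MAX_STEP_LOGS (sketch only)."""
        start = datetime.fromisoformat(step["timestamp"])
        # Fall back to the default window when the step has no recorded duration.
        window_ms = step.get("duration_ms") or _DEFAULT_STEP_DURATION_MS
        end = start + timedelta(milliseconds=window_ms)

        matching = [
            log
            for log in all_logs
            if log.get("hook_id") == hook_id
            and start <= datetime.fromisoformat(log["timestamp"]) <= end
        ]
        logs = matching[:_MAX_STEP_LOGS]

        return {"step": step, "logs": logs, "log_count": len(logs)}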

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • myakove
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)

  • Title check ⚠️ Warning
    Explanation: The title references 'Add missing context fields to workflow timeline modal', which aligns with the PR objectives (adding seven fields: event_type, action, repository, sender, pr, success, error), but the actual changeset implements far broader refactoring, including step-level log retrieval, transformation pipelines, and frontend modal overhauls beyond just adding fields.
    Resolution: Revise the title to reflect the substantial scope, e.g. 'refactor(log-viewer): Implement step-level log retrieval and timeline transformation', to accurately convey the architectural changes across backend, frontend, and testing layers.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage ✅ Passed: Docstring coverage is 94.74%, which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@webhook_server/web/log_viewer.py`:
- Around line 556-563: The docstring for get_workflow_steps_json is missing the
new public context fields returned by the function; update its Returns (or
description of the returned dictionary) to include event_type, action,
repository, sender, pr, success, and error with brief descriptions and types
(e.g., strings/booleans/null), so the API contract matches the actual response
structure and prevents client misuse; locate get_workflow_steps_json and append
these fields to the existing list of returned keys and their short explanations.
- Around line 642-654: The steps_list append is hardcoding "task_id" and
"task_type" to None which discards grouping metadata; update the append in the
code that builds steps_list (the block using step_timestamp, step_name, message,
level, repository, event_type, pr_number, step_status, step_duration_ms,
step_error) to pull task metadata from the parsed step_data (e.g. use
step_data.get("task_id") and step_data.get("task_type") or similar) and default
to None if missing so existing task_id/task_type in JSON are preserved for UI
grouping.
- Around line 606-616: The code currently defaults missing timing/workflow
fields to empty values which masks malformed logs; instead, in the log parsing
function that processes "entry" (uses variables timing, workflow_steps,
start_time, total_duration_ms, pr_number) validate that entry contains non-empty
"timing" and "workflow_steps" and that timing contains "started_at" and
"duration_ms"; if any required field is missing or invalid, raise a clear
exception (e.g., ValueError) or return an error result so callers fail fast and
surface the malformed log rather than silently using 0/{} defaults.

In `@webhook_server/web/static/js/log_viewer.js`:
- Around line 842-872: The 404 branch discards the parsed server errorDetail;
update the response handler inside the .then(async (response) => { ... }) block
to use the parsed errorDetail (if present) when calling showFlowModalError and
logging for the 404 case (refer to hookId and showFlowModalError), e.g. build a
message = errorDetail || "No workflow data found for this hook" and use
console.log/console.error with that message so server-provided context is
surfaced to operators.

coderabbitai[bot]
coderabbitai bot previously approved these changes Jan 19, 2026
…line

- Document new context fields (event_type, action, repository, sender, pr,
  success, error) in get_workflow_steps_json docstring
- Fail fast on missing timing/workflow_steps instead of defaulting to empty
  values, raising HTTPException with clear error messages
- Preserve task metadata (task_id, task_type) from JSON steps for grouping
- Surface server-provided detail on 404 errors in JavaScript modal
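
A hedged sketch of the fail-fast validation and task-metadata preservation described in this commit; the helper name, error messages, and status code are illustrative rather than the actual implementation.

    from fastapi import HTTPException

    def _validate_entry_sketch(entry: dict) -> tuple[dict, list]:
        """Fail fast on malformed entries instead of defaulting to empty values (sketch)."""
        timing = entry.get("timing")
        workflow_steps = entry.get("workflow_steps")

        if not timing or "started_at" not in timing or "duration_ms" not in timing:
            raise HTTPException(status_code=500, detail="Malformed log entry: missing or incomplete 'timing'")
        if not workflow_steps:
            raise HTTPException(status_code=500, detail="Malformed log entry: missing 'workflow_steps'")
        return timing, workflow_steps

    # Preserve task metadata per step instead of hardcoding None:
    # "task_id": step_data.get("task_id"),
    # "task_type": step_data.get("task_type"),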

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@webhook_server/web/log_viewer.py`:
- Around line 631-634: The line computing total_duration_ms should not coerce
falsy zero values; replace the fallback pattern and use the validated field
directly by assigning total_duration_ms = timing["duration_ms"] (leave
start_time = timing["started_at"] as-is) so explicit 0 durations are preserved;
update the code around the timing dict usage in log_viewer.py where
total_duration_ms is computed.
- Around line 689-703: pr_info is currently set with a fallback to {} earlier
(via entry.get("pr") or {}), so returning "pr": pr_info if pr_info else None is
redundant; to return None when PR info is absent, use the original lookup value
(entry.get("pr")) in the return or change the earlier assignment to preserve
None when absent. Update the code to either (a) use "pr": entry.get("pr") in the
returned dict, or (b) set pr_info = entry.get("pr") (no "or {}") and keep "pr":
pr_info if pr_info else None so absence yields None; reference the pr_info
variable and entry.get("pr") to locate the change.

coderabbitai[bot]
coderabbitai bot previously approved these changes Jan 19, 2026
Fixes two issues identified in code review:
- Remove 'or 0' fallback that masked valid zero-duration webhooks
- Remove redundant truthiness check on pr_info by using entry.get("pr") directly
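
A before/after sketch of the two fixes, using a tiny illustrative entry; the surrounding code in log_viewer.py is assumed, not quoted.

    entry = {"timing": {"started_at": "2026-01-19T10:00:00Z", "duration_ms": 0}}  # illustrative data
    timing = entry["timing"]

    # Before (as flagged in review): "or 0" coerced a valid zero duration, and the
    # PR lookup double-handled absence via "or {}" plus a later truthiness check.
    # total_duration_ms = timing.get("duration_ms") or 0
    # pr_info = entry.get("pr") or {}

    # After: use the validated field directly and let a missing PR stay None.
    total_duration_ms = timing["duration_ms"]  # explicit 0 is preserved
    pr = entry.get("pr")                       # None when absent, dict when present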

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@webhook_server/tests/conftest.py`:
- Around line 295-324: The fixture create_text_log_file currently accepts an
unused tmp_path parameter; remove tmp_path from the fixture signature so it
becomes def create_text_log_file(): and update its docstring to reflect that it
returns a callable _create_text_log_file(log_dir: Path, filename: str,
log_lines: list[str]) -> Path; ensure tests that use the fixture still pass
tmp_path separately (e.g., request tmp_path in the test and pass it into the
returned _create_text_log_file) and update any references to tmp_path in the
fixture comments or examples.
- Around line 262-292: The fixture create_json_log_file declares an unused
tmp_path parameter causing ARG001; remove tmp_path from the fixture signature
and any references so the fixture becomes def create_json_log_file():, keeping
the inner factory function _create_json_log_file unchanged (it still accepts
log_dir, filename, entries) and preserving the existing docstring and return
value.

coderabbitai[bot]
coderabbitai bot previously approved these changes Jan 21, 2026
Remove unused tmp_path fixture parameters from create_json_log_file and
create_text_log_file fixtures to fix ARG001 linting warnings.
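
A sketch of the factory fixture after dropping the unused tmp_path parameter; tests request tmp_path themselves and pass it in as log_dir. The exact on-disk format (compact JSONL versus pretty-printed entries) follows whatever the production parser expects, which the later review round discusses.

    import json
    from pathlib import Path

    import pytest

    @pytest.fixture
    def create_json_log_file():
        """Factory fixture: returns a callable that writes JSON log entries to a file (sketch)."""

        def _create_json_log_file(log_dir: Path, filename: str, entries: list[dict]) -> Path:
            log_file = log_dir / filename
            # One JSON entry per line (JSONL-style) for this sketch.
            log_file.write_text("\n".join(json.dumps(entry) for entry in entries) + "\n")
            return log_file

        return _create_json_log_file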

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@webhook_server/tests/conftest.py`:
- Around line 211-215: The mock_logger fixture returns a bare Mock which allows
attributes like .name to be another Mock; update the mock_logger fixture to
better mirror a real logging.Logger by either using Mock(spec=logging.Logger) or
by explicitly setting realistic attributes (e.g., mock.name =
"webhook_server.tests" and mock.level = logging.INFO) so code that reads
logger.name/level behaves like production; keep the fixture named mock_logger
and ensure it still returns the mock for tests to use.
- Around line 217-259: The sample_json_webhook_data fixture returns a payload
with "success": False but no top-level "error" field; update the fixture
(sample_json_webhook_data) to include a top-level "error" object when success is
False (e.g., include keys like "type" and "message" or "code") so tests reflect
the required schema for failed events and cover regressions that expect an error
field.
- Around line 262-293: The create_json_log_file fixture (_create_json_log_file)
currently writes compact JSON lines; update it to emit pretty‑printed JSON
matching production by using json.dumps(entry, indent=2) (and keep each entry
separated by a newline) so test log files mirror the production 2‑space indented
format expected by the webhook parser.

coderabbitai[bot]
coderabbitai bot previously approved these changes Jan 21, 2026
- Update mock_logger to use spec=logging.Logger with realistic attributes
- Add error field to sample_json_webhook_data when success is False
- Document JSONL format in create_json_log_file fixture
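
A sketch of the spec'd logger fixture described in this commit, with assumed attribute values: Mock(spec=logging.Logger) keeps attribute access realistic, while the explicit name/level avoid nested Mocks leaking into code that reads them.

    import logging
    from unittest.mock import Mock

    import pytest

    @pytest.fixture
    def mock_logger():
        """Logger mock constrained to the real logging.Logger interface (sketch)."""
        logger = Mock(spec=logging.Logger)
        logger.name = "webhook_server.tests"  # realistic attributes instead of nested Mocks
        logger.level = logging.INFO
        return logger
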
@rnetser
Collaborator Author

rnetser commented Jan 21, 2026

/build-and-push-container

@myakove-bot
Collaborator

New container for ghcr.io/myk-org/github-webhook-server:pr-982 published
