`Improves cycle time` `Improves time to approval`
`Improves cycle time` `Improves time to merge`
`Improves failure rate`
-The work in progress digest includes a summary of all the team's open Pull Requests found in [Work in Progress](/features/wip).
+The work in progress digest includes a summary of all the team's open Pull Requests found in [Work in Progress](/metrics-and-insights/wip).
-Enabling this digest should help teams alignment, by bringing light to any PRs that may need attention.
+Enabling this digest helps team alignment by bringing visibility to PRs that may need attention.
+
+
+## How it works
+
+Enable this automation in **Automations > PR Title Check** by defining a regular expression to validate Pull Request titles across your organization. When a PR is opened or updated, Sweetr posts a commit status check (`[Sweetr] PR Title Check`) with a **success** or **error** state.
+
+By default the check is non-blocking. To make it required, add branch protection rules in your repository that require the `[Sweetr] PR Title Check` status to pass before merging.
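+
+For example, a pattern requiring a ticket reference prefix (an illustrative regex, not a Sweetr default):
+
+```text
+^[A-Z]+-\d+: .+
+```
+
+With this pattern, a title like `ABC-123: Add login page` passes the check, while `Add login page` fails it.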
+
+## Benefits
+
+
-
-
-## How it works
-
-Automations are plug-n-play no-code improvements you can add to your organization in a single click. Think of them as an opinionated easier version of GitHub Actions for specific purposes.
-
-Each automation can improve one or more areas of your engineering organization, we categorize them as:
-
-
-
-
-## How it works
-
-Define a regular expression to validate Pull Request title across your organization. You can make it blocking by adding branch protection rules in your repository.
-
-## Benefits
-
-
-
-
-## How it works
-
-The Work Log provides a real-time snapshot of your team’s ongoing work, displaying daily activity for each contributor. This feature helps improve visibility into contributions across the team, offering valuable insights for both managers and developers.
-
-The Work Log displays contributions from each team member, organized by day and grouped into three categories:
-
-- **Opened PRs** (green hexagons)
-- **Merged PRs** (purple hexagons)
-- **Code reviews** (speech bubble icons)
-
-The size of each icon varies based on the PR size, giving a clear indication of the effort involved. Hovering over an icon provides additional details, including the PR title, repository name, number of comments, and size.
-
-### Insights
-
-#### For leaders
-
-- **Improve workload distribution:** Gain context to better balance workloads if someone is regularly overloaded.
-- **Prepare for 1:1s & retrospectives:** Gather concrete, data-backed insights to support feedback, career growth discussions, and workload adjustments.
-- **Recognize effort, not just volume:** Ensure complex, high-effort contributions are properly acknowledged.
-- **Detect when someone might be struggling:** Spot low activity from a developer over several days, which could indicate they’re stuck or facing challenges, allowing you to proactively offer support.
-
-#### For individual contributors
-
-- **Make your work visible:** Use the Work Log as a “brag document” to highlight significant contributions during 1:1s or performance reviews.
-- **Support promotion cases:** Provide concrete evidence of high-impact work and consistency over time.
-- **Document growth:** Track your progress over time and identify areas where you've improved or taken on more complex tasks.
-
-### Bad practices
-
-It’s very important to know how this data should not be used. The Work Log is **not intended to measure individual productivity.** It only reflects activity related to PRs and code reviews. It doesn’t account for essential work outside of code contributions, like planning, meetings, research, or documentation.
-
-Use it as a tool to improve collaboration, provide feedback, and recognize effort — not as a singular performance metric.
-
-
+
+
+## How it works
+
+DORA metrics are four key indicators of software delivery performance, developed by the [DORA research program](https://dora.dev). Sweetr calculates these metrics from your deployment and incident data.
+
+All metrics support filtering by **team**, **application**, **environment**, and **date range**, and include a comparison against the previous period.
+
+## Metrics
+
+### Deployment Frequency
+
+How often your team deploys to production. Higher frequency indicates smaller, more manageable releases.
+
+- **Calculation:** Count of deployments in the selected period.
+- **Chart:** Deployments over time with average per day.
+
+### Lead Time for Changes
+
+The time from first commit to production deployment. Lower lead time means faster delivery.
+
+- **Calculation:** Time from first commit to deployment, averaged across all deployments.
+- **Breakdown:**
+ - **Coding time**: First commit to pull request creation.
+ - **Time to first review**: PR creation to first review.
+ - **Time to approve**: First review to approval.
+ - **Time to merge**: Approval to merge.
+ - **Time to deploy**: Merge to deployment.
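+
+As an illustration with hypothetical stage durations, the breakdown sums to the total lead time:
+
+```text
+Coding time:           16h
+Time to first review:   4h
+Time to approve:        2h
+Time to merge:          1h
+Time to deploy:         1h
+Lead time for changes: 24h
+```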
+
+### Change Failure Rate
+
+The percentage of deployments that cause an incident. Lower rate indicates more stable releases.
+
+- **Calculation:** `(Incidents caused by deployments / Total deployments) × 100`
+- **Requires:** [Incidents](/platform/incidents) linked to deployments.
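+
+For example, if 2 of 40 deployments in the selected period caused an incident:
+
+```text
+(2 / 40) × 100 = 5% change failure rate
+```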
+
+### Mean Time to Recovery (MTTR)
+
+The average time to recover from an incident. Lower MTTR indicates faster incident response.
+
+- **Calculation:** Average of `resolved at - detected at` across all resolved incidents.
+- **Requires:** [Incidents](/platform/incidents) with detection and resolution timestamps.
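+
+For example, across three resolved incidents with recovery times of 30 minutes, 1 hour, and 3.5 hours:
+
+```text
+(0.5h + 1h + 3.5h) / 3 ≈ 1h 40m MTTR
+```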
+
+## Benchmarks
+
+The DORA research program publishes annual benchmarks that categorize teams into Elite, High, Medium, and Low performers. Use these to assess where your team stands and set realistic goals. See the official DORA guides for the latest thresholds.
+
+## Prerequisites
+
+To get the most out of DORA metrics:
+
+
+
+
+## How it works
+
+A weekly view of your team's pull requests and code reviews. Each icon represents a PR or review and scales with the size of the change. Spot where work flows, where it piles up, and where it's stuck.
+
+Contributions from each team member are organized by day and grouped into three categories:
+
+- **Opened PRs** (green hexagons)
+- **Merged PRs** (purple hexagons)
+- **Code reviews** (speech bubble icons)
+
+The size of each icon varies based on the PR size, giving a clear indication of the effort involved. Hovering over an icon provides additional details, including the PR title, repository name, number of comments, and size.
+
+### Use it to
+
+#### For leaders
+
+- **Spot workload imbalance:** Quickly see if work is spread evenly or if someone is consistently overloaded.
+- **Prepare for 1:1s:** Walk into conversations with concrete context on what happened that week, not just gut feelings.
+- **Recognize effort, not just volume:** Fewer, larger icons mean someone tackled a big piece of work. Not every week needs a wall of dots.
+- **Check in early:** A quiet week can mean someone is blocked. Use it as a signal to offer support, not to judge.
+
+#### For individual contributors
+
+- **Build your brag document:** Use the Work Log to highlight contributions during 1:1s or performance reviews.
+- **Support promotion cases:** Provide concrete evidence of high-impact work and consistency over time.
+- **Document growth:** See your progress over time and identify areas where you've improved or taken on more complex tasks.
+
+### Bad practices
+
+It's very important to know how this data should not be used. The Work Log is **not intended to measure individual productivity.** It only reflects activity related to PRs and code reviews. It doesn't account for essential work outside of code contributions, like planning, meetings, research, or documentation.
+
+Use it as a tool to improve collaboration, provide feedback, and recognize effort, not as a singular performance metric.
+
+
+
+
+## How it works
+
+Applications represent deployable units within your repositories. They connect a repository to its deployments and environments, making it possible to track deployment history and DORA metrics per application.
+
+Applications support **monorepo setups**. A single repository can have multiple applications, each mapped to a subdirectory.
+
+## Creating applications
+
+Applications can be created in two ways:
+
+- **Manually**: Create an application from the UI and configure its deployment settings.
+- **Automatically**: When a deployment is sent via the API, Sweetr auto-creates the application if it doesn't exist.
+
+## Deployment settings
+
+Each application has configurable deployment settings that control how deployments are tracked:
+
+| Setting | Description |
+| ----------------- | ------------------------------------------------------------------- |
+| **Trigger** | How deployments are created: API Webhook or PR Merge. |
+| **Target branch** | For merge-based triggers, the branch that represents production. |
+| **Subdirectory** | For monorepos, the path within the repository for this application. |
+
+## Archiving
+
+Applications that are no longer in use can be archived. Archived applications are hidden from filters and excluded from metrics. You can unarchive them at any time.
+
+## Related
+
+
+
+
+## How it works
+
+Deployments track when a version of an application is deployed to an environment. Each deployment records the version, commit hash, environment, and linked pull requests.
+
+Filter deployments by application, environment, and date range to find exactly what you need.
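+
+A deployment sent via the API might look like this (field names are illustrative; see the [API reference](/api-reference/deployments/create-deployment) for the exact schema):
+
+```json
+{
+  "application": "billing-service",
+  "environment": "production",
+  "version": "v1.42.0",
+  "commitHash": "9f2c1ab"
+}
+```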
+
+## Change type
+
+Sweetr automatically determines the **change type** for each deployment by comparing commits against the previous deployment:
+
+- **Forward**: A standard deployment with new changes.
+- **Rollback**: The deployment reverts to a previous version.
+- **No Change**: A redeployment of the same version.
+
+## Deployment triggers
+
+Deployments can be tracked through two different triggers:
+
+
+
+
+## How it works
+
+Environments represent deployment targets like `production`, `staging`, or `qa`. Deployments are always associated with an environment, and metrics can be filtered by environment.
+
+A default **production** environment is created for every workspace. Additional environments are created automatically when a deployment is sent via the [API](/api-reference/deployments/create-deployment) with an environment name that doesn't exist yet.
+
+
+
+
+## How it works
+
+Incidents represent production issues caused by a deployment. They track the full lifecycle from detection to resolution, linking back to the deployment that caused the issue and the deployment that fixed it.
+
+## Key fields
+
+| Field | Description |
+| -------------------- | ---------------------------------------------------- |
+| **Cause deployment** | The deployment that introduced the issue (required). |
+| **Fix deployment** | The deployment that resolved the issue. |
+| **Detected at** | When the incident was detected. |
+| **Resolved at** | When the incident was resolved. |
+| **Team** | The team responsible for the incident. |
+| **Leader** | The person leading incident response. |
+| **Postmortem URL** | Link to the postmortem document. |
+
+## Creating incidents
+
+Incidents can be created in two ways:
+
+- **Manually**: Create an incident from the UI, linking it to the deployment that caused it.
+- **Automatically**: Use the [Incident Detection](/automations/incident-detection) automation to detect incidents from rollbacks, hotfixes, and reverts.
+
+## Impact on metrics
+
+Incidents directly feed into two DORA metrics:
+
+- **Change Failure Rate**: The percentage of deployments that caused an incident.
+- **Mean Time to Recovery**: The average time between incident detection and resolution.
+
+## Archiving
+
+Incidents that are no longer relevant can be archived. Archived incidents are excluded from metrics. You can unarchive them at any time.
+
+## Related
+
+
-Each member can have a specific role within a team.
+A person can belong to multiple teams. Each member can have a specific role within a team. Roles are informational labels that help viewers understand each person's function on the team. They do not affect permissions or metrics.
- Engineer
- Product
@@ -37,15 +37,23 @@ Each member can have a specific role within a team.
- Leader
- Manager
+## Archiving
+
+Teams that are no longer active can be archived. Archived teams are hidden from filters and excluded from metrics. You can unarchive them at any time.
+
## Explore