❇️ Metrics List

View a comprehensive list of the metrics in Oobeya, organised into categories for your convenience.

Git Analytics

Metric Name
Definition

Coding Efficiency %

The percentage of productive work (i.e., work that is not rework or code churn).

Coding Impact Score

A way of measuring the extent of code changes that occur. View the full documentation.

Impact Ratio

View the full documentation.

Coding Impact Per Developer

New work

Newly written code lines.

Refactor

Edits and updates made on the existing legacy code (default: written more than 21 days ago).

Help Others

Edits and updates made on another developer's recent work (default: written less than 21 days ago).

Code Churn (Rework)

Code that was rewritten or deleted in a short time by the same developer after being written (default: less than 21 days).
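
The four work types above can be sketched as a simple classifier over changed lines, assuming we know each modified line's age and original author. The function names, field names, and the `REWORK_WINDOW_DAYS` constant are illustrative, not Oobeya's actual data model:

```python
REWORK_WINDOW_DAYS = 21  # Oobeya's default threshold for "recent" code

def classify_change(author, original_author, line_age_days):
    """Classify one changed line into a work type (illustrative logic)."""
    if original_author is None:
        return "new_work"                    # brand-new line
    if line_age_days > REWORK_WINDOW_DAYS:
        return "refactor"                    # editing legacy code
    if original_author != author:
        return "help_others"                 # editing a peer's recent code
    return "churn"                           # rewriting one's own recent code

def coding_efficiency(counts):
    """Coding Efficiency % = productive work / total work, where productive
    work is everything except churn."""
    total = sum(counts.values())
    productive = total - counts.get("churn", 0)
    return 100.0 * productive / total if total else 0.0
```

For example, a breakdown of 120 new, 40 refactor, 15 help-others, and 25 churn lines yields a Coding Efficiency of 87.5%.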

Active Coding Days

The number of days on which commits were made to the repository.

Coding Days Per Week

The number of active days per week on which commits were made to the repository.

Active Contributors

The number of developers who committed to this code repository in the last 21 days.

Total Commits


Pull Request Analytics

Metric Name
Definition

Merged PRs

Number of pull requests successfully merged.

Open PRs

Number of pull requests that are currently open and awaiting review or approval.

PR Revert Rate %

Code Review Cycle Time

The time elapsed between the open time and merge time for pull requests.

Coding Time

The time elapsed between the first commit and open time for pull requests.

Time To Merge

The time elapsed between the first commit and merge time.
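
The three timing metrics above decompose Time To Merge into Coding Time plus Code Review Cycle Time. A minimal sketch, with illustrative timestamp names:

```python
from datetime import datetime

def pr_timings(first_commit_at, opened_at, merged_at):
    """Return (coding_time, review_cycle_time, time_to_merge) as timedeltas."""
    coding_time = opened_at - first_commit_at        # first commit -> PR opened
    review_cycle_time = merged_at - opened_at        # PR opened -> PR merged
    time_to_merge = merged_at - first_commit_at      # first commit -> PR merged
    return coding_time, review_cycle_time, time_to_merge
```

By construction, Time To Merge always equals Coding Time plus Code Review Cycle Time.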

Pull Request Size

Total size (lines added, removed, and changed) of pull requests.

# of PR Reviewers

Number of Pull Request Reviewers.

Avg Review Time (for code reviewers)

Average Pull Request review time for code reviewers.

Reviewed PRs (for code reviewers)

Number of Pull requests reviewed by each code reviewer.

Reviewed / Total PRs (for code reviewers)

Pull Request Risks

Number of Oversized, Overdue, or Stale PRs.

Review Comment Count

% of PRs Merged Within Goal

The percentage of pull requests merged within the desired time frame.

Code Review Cycle Time - Over Goal %

Coding Time - Over Goal %

Time To Merge - Over Goal %

Pull Request Size - Over Goal %


Deployment Analytics - DORA Metrics

Metric Name
Definition

Lead Time For Changes (DORA Metrics)

The amount of time it takes a commit to get into production.

Deployment Frequency (DORA Metrics)

How often your team successfully releases to production.

Change Failure Rate (DORA Metrics)

The percentage of deployments causing a failure in production.

Mean Time To Restore Service (DORA Metrics)

How long it takes an organization to recover from a failure in production.
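
The four DORA metrics above can be computed from deployment and incident records. A sketch under simple assumptions (the record shapes and the median-based lead time are illustrative, not Oobeya's implementation):

```python
from datetime import timedelta

def dora_metrics(deployments, incidents, period_days):
    """deployments: list of dicts with 'lead_times' (one timedelta per commit,
    commit -> production) and a 'failed' flag; incidents: list of
    time-to-restore timedeltas. Returns the four DORA metrics."""
    n = len(deployments)
    all_lead = sorted(lt for d in deployments for lt in d["lead_times"])
    lead_time = all_lead[len(all_lead) // 2]                 # median-ish lead time
    frequency = n / period_days                              # deployments per day
    change_failure_rate = 100.0 * sum(d["failed"] for d in deployments) / n
    mttr = sum(incidents, timedelta()) / len(incidents)      # mean time to restore
    return lead_time, frequency, change_failure_rate, mttr
```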

Development Time

The time elapsed between the first commit and merge time for pull requests.

Waiting For Deploy

The time elapsed between the pull request being merged and the deployment pipeline starting.

Deployment Duration

The time elapsed between the deployment pipeline being triggered and completed successfully.
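
Development Time, Waiting For Deploy, and Deployment Duration partition the delivery timeline end to end. A minimal sketch, with illustrative timestamp names:

```python
from datetime import datetime

def delivery_timeline(first_commit_at, merged_at,
                      pipeline_started_at, pipeline_finished_at):
    """Return the three consecutive stages of the delivery timeline."""
    development_time = merged_at - first_commit_at              # code -> merge
    waiting_for_deploy = pipeline_started_at - merged_at        # merge -> pipeline
    deployment_duration = pipeline_finished_at - pipeline_started_at
    return development_time, waiting_for_deploy, deployment_duration
```

The three stages sum to the total commit-to-production time, which is what Lead Time For Changes measures.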

Deploy Size

Total number of Commits & PRs delivered in the deployment package.

# of Contributors

# of Deployments

# of Deployments Leading To An Incident


Board Analytics

Metric Name
Definition

# of Completed Sprints

(board level)

The number of sprints that have been started and completed by the team during the selected period.

Avg Velocity by Effort

(board level)

The average amount of work (e.g., story points or effort units) completed per sprint over the selected period. It helps predict future capacity and plan workload.

Avg Lead Time

(board level)

The time from when a task is created until the work on it is completed (i.e., from creation to completion).

Avg Cycle Time

(board level)

The time from when work actually starts on an item (In Progress) until it is completed or ready for delivery.

Pickup Time

The initial gap before the team officially recognizes the item or places it in a backlog for scheduling.

Actual Reaction Time

The portion of time from when the item is officially in the sprint backlog/queue (and deemed ready to be picked up) until the team begins work on it (i.e., the item enters an In-Progress state). Note: the exact start of Actual Reaction Time depends on the chosen configuration: either the creation date, the sprint start date, or a specific reference state like "Ready-To-Dev".

Total Reaction Time

Pickup Time + Actual Reaction Time

Cycle Time

The time from when work actually starts on an item (In Progress) until it is completed or ready for delivery. Only completed tasks are included when calculating Cycle Time.

Lead Time

The time from when a task is created until the work on it is completed (i.e., from creation to completion). By definition, Lead Time = Reaction Time + Cycle Time. Only completed tasks are included when calculating this metric.
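
The decomposition Lead Time = Reaction Time + Cycle Time can be sketched directly from a work item's three key timestamps (names are illustrative):

```python
from datetime import datetime

def work_item_times(created_at, started_at, completed_at):
    """For a completed work item, return (reaction, cycle, lead) timedeltas."""
    reaction = started_at - created_at     # creation -> In Progress
    cycle = completed_at - started_at      # In Progress -> done
    lead = completed_at - created_at       # creation -> done
    assert lead == reaction + cycle        # holds by definition
    return reaction, cycle, lead
```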

# of Completed Work Items

Total number of completed work items during the selected timeframe.

Completed Work Items per Sprint

Average number of work items completed by the team in each Sprint.

Sprint Delivery Rate %

(by work item count)

[(Completed Work Items / Total Work Items) * 100]

Sprint Planning Accuracy %

(by work item count)

[(Completed Work Items / Planned Work Items) * 100]

Sprint Delivery Rate %

(by work effort)

[(Completed Effort / Total Effort) * 100]

Sprint Planning Accuracy %

(by effort)

[(Completed Effort / Planned Effort) * 100]
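
The two bracketed formulas above apply identically whether the inputs are work item counts or effort points. A minimal sketch:

```python
def sprint_delivery_rate(completed, total):
    """(Completed / Total) * 100; works for item counts or effort points."""
    return 100.0 * completed / total if total else 0.0

def sprint_planning_accuracy(completed, planned):
    """(Completed / Planned) * 100; planned excludes mid-sprint additions."""
    return 100.0 * completed / planned if planned else 0.0
```

Note the difference in the denominator: delivery rate divides by everything in the sprint, while planning accuracy divides only by what was originally planned.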

Sprint Velocity

(by count & effort)

The count/total effort of work items completed at the end of the sprint.

Sprint Velocity Metrics

Predictability %

Calculated as (Completed Items / Planned Items) × 100%. Reflects how accurately the team estimates and delivers on their commitments. High predictability indicates reliable sprint planning and execution.

Productivity %

Evaluates the team's total output relative to the initial plan, including both planned and extra work that was completed. It is calculated by adding the completed planned tasks and any additional tasks (pulled-in work) that were done, divided by the total planned tasks, and then multiplying by 100. This metric shows the team's overall capacity and responsiveness by including any additional work taken on beyond the original plan.
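
The difference between the two velocity metrics is only the numerator: Predictability counts completed planned work, while Productivity also counts completed pulled-in work. A sketch (parameter names are illustrative):

```python
def predictability(completed_planned, planned):
    """Predictability % = (completed planned items / planned items) * 100."""
    return 100.0 * completed_planned / planned if planned else 0.0

def productivity(completed_planned, completed_pulled_in, planned):
    """Productivity % = (completed planned + completed pulled-in items)
    / planned items * 100; can exceed 100% when extra work is absorbed."""
    total_done = completed_planned + completed_pulled_in
    return 100.0 * total_done / planned if planned else 0.0
```

For example, completing 8 of 10 planned items plus 3 pulled-in items gives 80% predictability but 110% productivity.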

Backlog Age

The maximum time items have been in the backlog. Aged items may lose relevance.

Backlog Size

The number of uncompleted items in the backlog, excluding in-progress work items. Helps gauge workload readiness.

Open Bugs in the Backlog

Number of Bugs in the Backlog.

Innovation Rate %

(by item count & effort)

The percentage of the time, story points, or work items allocated to innovation (e.g., building new features) relative to the total effort (innovation + maintenance + bug fixes). A higher rate indicates a focus on driving product growth and competitiveness.
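
The Innovation Rate definition above reduces to a simple ratio over the three effort buckets. A minimal sketch:

```python
def innovation_rate(innovation, maintenance, bug_fixes):
    """Innovation Rate % = innovation effort / total effort * 100, where
    inputs may be time, story points, or work item counts."""
    total = innovation + maintenance + bug_fixes
    return 100.0 * innovation / total if total else 0.0
```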

Current Backlog Items (Kanban)

Number of items waiting in the Kanban board backlog.

Work in Progress (Kanban)

Number of items in progress.

Avg Throughput /week

Average number of items completed per working week.

Work in progress (>5 days)

List of work items that are in progress for more than five days.

Sprint Scope Change

Amount of added and removed work items during a sprint.

Work Item Type Distribution

Work Item Priority Distribution

Work Item Reopen Count


Code Quality Analytics

Metric Name
Definition

Technical Debt (overall)

The estimated time required to fix all maintainability issues / code smells in SonarQube projects. An 8-hour day is assumed when values are shown in days.
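
Because SonarQube reports remediation effort in minutes, the day values shown for Technical Debt follow from a simple conversion under the stated 8-hour-day assumption:

```python
def debt_to_days(debt_minutes, hours_per_day=8):
    """Convert remediation effort in minutes to workdays (8-hour days)."""
    return debt_minutes / 60.0 / hours_per_day
```

For example, 960 minutes of remediation effort is reported as 2 days.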

Code Quality Issues

Issues represent something wrong in the code. When a piece of code does not comply with a rule, an issue is created by SonarQube.

Total Code Quality Index

The Total Code Quality Index (TCQI) is a composite metric in Oobeya that quantifies code quality by analyzing SonarQube issue data through multiple lenses: severity, category impact (security, reliability, maintainability), remediation effort, and codebase volume. It provides engineering leaders with a clear, standardized, and customizable way to monitor and improve software quality. View the full documentation.

Issue Risk

Each issue is scored independently per quality category it belongs to:

Issue Risk = Severity Coefficient Γ— Category Coefficient Γ— Remediation Coefficient
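
The Issue Risk formula above multiplies three coefficients. A sketch with illustrative coefficient tables; the actual values are configurable in Oobeya and are not documented here:

```python
# Illustrative coefficients only, not Oobeya's shipped defaults.
SEVERITY = {"BLOCKER": 5.0, "CRITICAL": 3.0, "MAJOR": 2.0, "MINOR": 1.0, "INFO": 0.5}
CATEGORY = {"security": 3.0, "reliability": 2.0, "maintainability": 1.0}

def remediation_coefficient(effort_minutes):
    """Illustrative: weight grows with the estimated fix effort."""
    return 1.0 + effort_minutes / 60.0

def issue_risk(severity, category, effort_minutes):
    """Issue Risk = Severity Coefficient x Category Coefficient
    x Remediation Coefficient, scored per quality category."""
    return (SEVERITY[severity] * CATEGORY[category]
            * remediation_coefficient(effort_minutes))
```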

# of Bugs

Number of reliability issues in SonarQube.

# of Vulnerabilities

Number of security issues in SonarQube.

# of Code Smells

Number of maintainability issues in SonarQube.

Technical Debt (developer)

Total technical debt of each developer.
