Metrics List

A comprehensive list of the metrics in Oobeya, organized into categories for your convenience.

Git Analytics

| Metric Name | Definition |
| --- | --- |
| Coding Efficiency % | The percentage of productive work (work that is not rework or code churn). |
| Coding Impact Score | A way of measuring the extent of code changes that occur. |
| Impact Ratio | |
| Coding Impact Per Developer | |
| New work | Newly written code lines. |
| Refactor | Edits and updates made to existing legacy code (default: code written more than 21 days ago). |
| Help Others | Edits and updates made to another developer's recent work (default: code written less than 21 days ago). |
| Code Churn (Rework) | Code rewritten or deleted by the same developer shortly after being written (default: within 21 days). |
| Active Coding Days | The number of days on which commits were made to the repository. |
| Coding Days Per Week | The number of active coding days per week. |
| Active Contributors | The number of developer profiles that committed to this code repository in the last 21 days. |
| Total Commits | Total number of commits in the analyzed repository. |

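As a rough illustration, the relationship between the work categories above and Coding Efficiency % can be sketched in a few lines. The function and argument names are illustrative, not Oobeya's API:

```python
# Sketch: deriving Coding Efficiency % from the four work categories.
# Names are illustrative assumptions, not Oobeya's data model.

def coding_efficiency(new_work: int, refactor: int, help_others: int, churn: int) -> float:
    """Percentage of productive lines, i.e. everything that is not code churn."""
    total = new_work + refactor + help_others + churn
    if total == 0:
        return 0.0
    productive = total - churn
    return 100.0 * productive / total

# Example: 600 new lines, 200 refactored, 50 helping others, 150 churned
print(round(coding_efficiency(600, 200, 50, 150), 1))  # -> 85.0
```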

Pull Request Analytics

| Metric Name | Definition |
| --- | --- |
| Merged PRs | Number of pull requests successfully merged. |
| Open PRs | Number of pull requests that are currently open and awaiting review or approval. |
| PR Revert Rate % | The percentage of merged pull requests that were later reverted. |
| Code Review Cycle Time | The time elapsed between the open time and the merge time of a pull request. |
| Coding Time | The time elapsed between the first commit and the open time of a pull request. |
| Time To Merge | The time elapsed between the first commit and the merge time. |
| Pull Request Size | Total size (lines added, removed, and changed) of pull requests. |
| # of PR Reviewers | Number of pull request reviewers. |
| Avg Review Time (for code reviewers) | Average pull request review time per code reviewer. |
| Reviewed PRs (for code reviewers) | Number of pull requests reviewed by each code reviewer. |
| Reviewed / Total PRs (for code reviewers) | The ratio of pull requests reviewed by each code reviewer to all pull requests. |
| Pull Request Risks | Number of Oversized, Overdue, or Stale PRs. |
| Review Comment Count | Number of review comments on pull requests. |
| % of PRs Merged Within Goal | The percentage of pull requests merged within the desired time frame. |
| Code Review Cycle Time - Over Goal % | The percentage of pull requests whose code review cycle time exceeds the defined goal. |
| Coding Time - Over Goal % | The percentage of pull requests whose coding time exceeds the defined goal. |
| Time To Merge - Over Goal % | The percentage of pull requests whose time to merge exceeds the defined goal. |
| Pull Request Size - Over Goal % | The percentage of pull requests whose size exceeds the defined goal. |

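The three timing metrics above compose naturally: Time To Merge is the sum of Coding Time and Code Review Cycle Time. A minimal sketch with illustrative timestamps (not Oobeya's data model):

```python
# Sketch: pull request timing metrics from three timestamps per PR.
# Timestamps and the record shape are illustrative assumptions.
from datetime import datetime

def pr_timing(first_commit: datetime, opened: datetime, merged: datetime) -> dict:
    return {
        "coding_time": opened - first_commit,    # first commit -> PR opened
        "review_cycle_time": merged - opened,    # PR opened -> PR merged
        "time_to_merge": merged - first_commit,  # first commit -> PR merged
    }

t = pr_timing(
    first_commit=datetime(2024, 3, 1, 9, 0),
    opened=datetime(2024, 3, 2, 14, 0),
    merged=datetime(2024, 3, 3, 10, 0),
)
# The identity holds by construction:
assert t["time_to_merge"] == t["coding_time"] + t["review_cycle_time"]
print(t["review_cycle_time"])  # -> 20:00:00
```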

Deployment Analytics - DORA Metrics

| Metric Name | Definition |
| --- | --- |
| Lead Time For Changes (DORA Metrics) | The amount of time it takes a commit to get into production. |
| Deployment Frequency (DORA Metrics) | How often your team successfully releases to production. |
| Change Failure Rate (DORA Metrics) | The percentage of deployments causing a failure in production. |
| Mean Time To Restore Service (DORA Metrics) | How long it takes an organization to recover from a failure in production. |
| Development Time | The time elapsed between the first commit and the merge time of a pull request. |
| Waiting For Deploy | The time elapsed between the pull request being merged and the deployment pipeline starting. |
| Deployment Duration | The time elapsed between the deployment pipeline being triggered and completing successfully. |
| Deploy Size | Total number of commits and pull requests delivered in the deployment package. |
| # of Contributors | Number of contributors in the deployment package. |
| # of Deployments | Total number of deployments. |
| # of Deployments Leading To An Incident | Number of deployments that caused a production incident. |

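A minimal sketch of how the four DORA metrics can be computed from simple deployment records. The record shape, values, and windowing are illustrative assumptions, not Oobeya's data model or calculation:

```python
# Sketch: the four DORA metrics from simple deployment records.
# Each record: (deployed_at, commit_times, caused_failure, restored_at).
from datetime import datetime, timedelta

deployments = [
    (datetime(2024, 3, 4), [datetime(2024, 3, 1), datetime(2024, 3, 2)], False, None),
    (datetime(2024, 3, 6), [datetime(2024, 3, 5)], True, datetime(2024, 3, 6, 4)),
    (datetime(2024, 3, 8), [datetime(2024, 3, 7)], False, None),
]

# Lead Time For Changes: commit -> production, averaged over all commits
lead_times = [d - c for d, commits, _, _ in deployments for c in commits]
ltc = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deployments per day over the observed window
window_days = (deployments[-1][0] - deployments[0][0]).days or 1
df = len(deployments) / window_days

# Change Failure Rate: share of deployments causing a production failure
cfr = 100.0 * sum(1 for _, _, failed, _ in deployments if failed) / len(deployments)

# Mean Time To Restore Service: average failure -> restoration time
restore_times = [restored - d for d, _, failed, restored in deployments if failed]
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(ltc, df, round(cfr, 1), mttr)
# -> LTC 1 day 18h, DF 0.75/day, CFR 33.3%, MTTR 4h
```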

Board Analytics

| Metric Name | Definition |
| --- | --- |
| # of Completed Sprints (board level) | The number of sprints that have been started and completed by the team during the selected period. |
| Avg Velocity by Effort (board level) | The average amount of work (e.g., story points or effort units) completed per sprint over the selected period. It helps predict future capacity and plan workload. |
| Avg Lead Time (board level) | The time from when a task is created until the work on it is completed (i.e., from creation to completion). |
| Avg Cycle Time (board level) | The time from when work actually starts on an item (In Progress) until it is completed or ready for delivery. |
| Pickup Time | The initial gap before the team officially recognizes the item or places it in a backlog for scheduling. |
| Actual Reaction Time | The portion of time from when the item is officially in the sprint backlog/queue (and deemed ready to be picked up) until the team begins work on it (i.e., the item enters an In Progress state). Note: the exact start of Actual Reaction Time depends on the chosen configuration: either the creation date, the sprint start date, or a specific reference state such as "Ready-To-Dev". |
| Total Reaction Time | Pickup Time + Actual Reaction Time. |
| Cycle Time | The time from when work actually starts on an item (In Progress) until it is completed or ready for delivery. Only completed tasks are included when calculating Cycle Time. |
| Lead Time | The time from when a task is created until the work on it is completed (i.e., from creation to completion). By definition, Lead Time = Reaction Time + Cycle Time. Only completed tasks are included when calculating this metric. |
| # of Completed Work Items | Total number of completed work items during the selected timeframe. |
| Completed Work Items per Sprint | Average number of work items completed by the team in each sprint. |
| Sprint Delivery Rate % (by work item count) | (Completed Work Items / Total Work Items) × 100 |
| Sprint Planning Accuracy % (by work item count) | (Completed Work Items / Planned Work Items) × 100 |
| Sprint Delivery Rate % (by effort) | (Completed Effort / Total Effort) × 100 |
| Sprint Planning Accuracy % (by effort) | (Completed Effort / Planned Effort) × 100 |
| Sprint Velocity (by count & effort) | The count/total effort of work items completed at the end of the sprint. |
| Sprint Velocity Metrics: Predictability % | (Completed Items / Planned Items) × 100. Reflects how accurately the team estimates and delivers on its commitments. High predictability indicates reliable sprint planning and execution. |
| Sprint Velocity Metrics: Productivity % | Evaluates the team's total output relative to the initial plan, including both planned and extra (pulled-in) work that was completed: (Completed Planned Items + Completed Pulled-In Items) / Planned Items × 100. This metric shows the team's overall capacity and responsiveness by including work taken on beyond the original plan. |
| Backlog Age | The maximum time items have been in the backlog. Aged items may lose relevance. |
| Backlog Size | The number of uncompleted items in the backlog, excluding in-progress work items. Helps gauge workload readiness. |
| Open Bugs in the Backlog | Number of bugs in the backlog. |
| Innovation Rate % (by item count & effort) | The percentage of time, story points, or work items allocated to innovation (e.g., building new features) relative to the total effort (innovation + maintenance + bug fixes). A higher rate indicates a focus on driving product growth and competitiveness. |
| Current Backlog Items (Kanban) | Number of items waiting in the Kanban board backlog. |
| Work in Progress (Kanban) | Number of items in progress. |
| Avg Throughput / week | Average number of work items completed per working week. |
| Work in Progress (>5 days) | List of work items that have been in progress for more than five days. |
| Sprint Scope Change | Number of work items added and removed during a sprint. |
| Work Item Type Distribution | Distribution of work items by type. |
| Work Item Priority Distribution | Distribution of work items by priority. |
| Work Item Reopen Count | Number of times work items were reopened. |

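The time metrics and sprint ratios above can be sketched with illustrative timestamps and counts. The variable names, state transitions, and values are assumptions for the example, not Oobeya's data model:

```python
# Sketch: board time metrics and sprint ratios from one item's state transitions.
from datetime import datetime

created     = datetime(2024, 3, 1)  # item created
ready       = datetime(2024, 3, 3)  # item enters the sprint backlog / "ready" state
in_progress = datetime(2024, 3, 4)  # work starts (In Progress)
done        = datetime(2024, 3, 8)  # item completed

pickup_time          = ready - created        # before the team recognizes the item
actual_reaction_time = in_progress - ready    # ready -> work starts
total_reaction_time  = pickup_time + actual_reaction_time
cycle_time           = done - in_progress     # In Progress -> done
lead_time            = done - created         # creation -> completion

# By definition: Lead Time = Total Reaction Time + Cycle Time
assert lead_time == total_reaction_time + cycle_time

# Sprint ratios (by work item count); 2 of the 12 items were pulled in mid-sprint
planned, total, completed = 10, 12, 9
delivery_rate     = 100.0 * completed / total    # -> 75.0
planning_accuracy = 100.0 * completed / planned  # -> 90.0
```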

Code Quality Analytics

| Metric Name | Definition |
| --- | --- |
| Technical Debt (overall) | The estimated time required to fix all maintainability issues / code smells in SonarQube projects. An 8-hour day is assumed when values are shown in days. |
| Code Quality Issues | Issues represent something wrong in the code. When a piece of code does not comply with a rule, SonarQube creates an issue. |
| Total Code Quality Index | The Total Code Quality Index (TCQI) is a composite metric in Oobeya that quantifies code quality by analyzing SonarQube issue data through multiple lenses: severity, category impact (security, reliability, maintainability), remediation effort, and codebase volume. It provides engineering leaders with a clear, standardized, and customizable way to monitor and improve software quality. |
| Issue Risk | Each issue is scored independently per quality category it belongs to: Issue Risk = Severity Coefficient × Category Coefficient × Remediation Coefficient. |
| # of Bugs | Number of reliability issues in SonarQube. |
| # of Vulnerabilities | Number of security issues in SonarQube. |
| # of Code Smells | Number of maintainability issues in SonarQube. |
| Technical Debt (developer) | Total technical debt of each developer. |

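The Issue Risk formula is a straightforward product of three coefficients. A minimal sketch, where the coefficient values are illustrative assumptions rather than Oobeya's defaults:

```python
# Sketch: Issue Risk = Severity Coefficient x Category Coefficient x Remediation Coefficient.
# The coefficient tables below are illustrative assumptions, not Oobeya's defaults.

SEVERITY = {"blocker": 5.0, "critical": 3.0, "major": 2.0, "minor": 1.0}
CATEGORY = {"vulnerability": 3.0, "bug": 2.0, "code_smell": 1.0}

def issue_risk(severity: str, category: str, remediation_coefficient: float) -> float:
    """Score one SonarQube issue within its quality category."""
    return SEVERITY[severity] * CATEGORY[category] * remediation_coefficient

print(issue_risk("critical", "vulnerability", 1.5))  # -> 13.5
```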