DORA Metrics Target Framework

Extracted from CloudOps-Runbooks DORA implementation. The target thresholds and maturity model are reusable across ADLC consumer projects.

Four Core Metrics

| Metric | What It Measures | Why It Matters |
| --- | --- | --- |
| Lead Time | Time from code commit to production deployment | Delivery velocity |
| Deployment Frequency | How often you deploy to production | Delivery cadence |
| Change Failure Rate | Percentage of deployments causing production failures | Delivery quality |
| MTTR | Time from incident start to resolution | Recovery capability |

Maturity Model

Based on the DORA State of DevOps research, adapted for ADLC:

Lead Time

| Level | Target | Characteristics |
| --- | --- | --- |
| Elite | < 1 hour | Automated CI/CD, trunk-based development |
| High | < 1 day | Feature flags, automated testing |
| Medium | < 1 week | Branch-based, manual QA gates |
| Low | > 1 month | Manual processes, infrequent integration |

ADLC target: < 4 hours (High, approaching Elite)

Deployment Frequency

| Level | Target | Characteristics |
| --- | --- | --- |
| Elite | Multiple per day | Continuous deployment |
| High | Daily to weekly | Continuous delivery with manual promotion |
| Medium | Weekly to monthly | Sprint-based releases |
| Low | Monthly or less | Quarterly/annual releases |

ADLC target: ≥ 1 deployment/day (High)

Change Failure Rate

| Level | Target | Characteristics |
| --- | --- | --- |
| Elite | < 5% | Comprehensive automated testing, canary deployments |
| High | < 10% | Good test coverage, staged rollouts |
| Medium | < 15% | Basic testing, some manual verification |
| Low | > 15% | Minimal testing, big-bang deployments |

ADLC target: < 5% (Elite)

Mean Time to Recovery (MTTR)

| Level | Target | Characteristics |
| --- | --- | --- |
| Elite | < 1 hour | Automated rollback, runbooks, on-call |
| High | < 4 hours | Quick diagnosis, tested rollback procedures |
| Medium | < 1 day | Manual investigation, some documentation |
| Low | > 1 day | Ad-hoc recovery, no runbooks |

ADLC target: < 1 hour (Elite)
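The four maturity tables above can be expressed as a simple classifier. The thresholds below come straight from the tables; the function names and the choice to express deployment frequency as deploys per day are illustrative, not part of the framework.

```python
from datetime import timedelta

# Thresholds taken from the maturity tables above. Lead time, CFR, and
# MTTR are "lower is better"; deployment frequency is "higher is better".

def lead_time_level(lead_time: timedelta) -> str:
    if lead_time < timedelta(hours=1):
        return "Elite"
    if lead_time < timedelta(days=1):
        return "High"
    if lead_time < timedelta(weeks=1):
        return "Medium"
    return "Low"

def deploy_frequency_level(deploys_per_day: float) -> str:
    if deploys_per_day > 1:        # multiple per day
        return "Elite"
    if deploys_per_day >= 1 / 7:   # daily to weekly
        return "High"
    if deploys_per_day >= 1 / 30:  # weekly to monthly
        return "Medium"
    return "Low"

def change_failure_rate_level(cfr: float) -> str:
    if cfr < 0.05:
        return "Elite"
    if cfr < 0.10:
        return "High"
    if cfr < 0.15:
        return "Medium"
    return "Low"

def mttr_level(mttr: timedelta) -> str:
    if mttr < timedelta(hours=1):
        return "Elite"
    if mttr < timedelta(hours=4):
        return "High"
    if mttr < timedelta(days=1):
        return "Medium"
    return "Low"
```

For example, the ADLC lead-time target of < 4 hours classifies as High (it clears the 1-day bar but not the 1-hour bar).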

Measurement Approach

Data Sources (Local-First)

| Metric | Primary Source | Fallback |
| --- | --- | --- |
| Lead Time | `git log` (commit → merge timestamp) | Manual tracking |
| Deployment Frequency | `git log --merges` on main branch | CI/CD pipeline logs |
| Change Failure Rate | Incident records / rollback count | Manual incident log |
| MTTR | Incident open → close timestamps | Manual tracking |

Collection Cadence

| Ceremony | Metrics Collected | Purpose |
| --- | --- | --- |
| Daily standup | Quick DORA snapshot | Trend awareness |
| Sprint review | Full DORA report with trends | Sprint health assessment |
| Sprint retro | DORA actuals vs targets | Improvement identification |
| Monthly | Rolling 30-day averages | Long-term trend |

SLA Integration

DORA metrics connect to service-level agreement (SLA) objectives:

| SLA Type | Connected DORA Metric | Relationship |
| --- | --- | --- |
| Availability (99.9%) | MTTR | Lower MTTR = higher availability |
| Performance | Change Failure Rate | Lower failure rate = more stable performance |
| Delivery cadence | Lead Time + Deploy Frequency | Faster delivery = more responsive to business needs |
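The availability↔MTTR row can be made concrete with a deliberately simple model (an illustrative approximation, not a formula from this framework): monthly downtime ≈ incident count × MTTR, so availability follows directly.

```python
def availability(incidents_per_month: float, mttr_hours: float,
                 hours_per_month: float = 730.0) -> float:
    """Approximate availability, assuming each incident causes full
    downtime for its entire MTTR (a pessimistic simplification)."""
    downtime = incidents_per_month * mttr_hours
    return 1.0 - downtime / hours_per_month
```

Under this model, one incident per month with a sub-hour MTTR keeps 99.9% availability within reach (1 − 0.7/730 ≈ 0.9990), while a one-day MTTR at the same incident rate drops below it; this is the sense in which lower MTTR means higher availability.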

Scoring

For sprint ceremonies, calculate an overall performance grade:

| Grade | Criteria |
| --- | --- |
| A | All 4 metrics at Elite level |
| B | All 4 metrics at High or above |
| C | All 4 metrics at Medium or above |
| D | One or more metrics at Low |
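Because each grade requires *all four* metrics to clear a bar, the grade is determined by the weakest metric. A minimal sketch of that rule (function and key names are illustrative):

```python
# Rank each maturity level; the overall grade is set by the worst metric.
_RANK = {"Elite": 3, "High": 2, "Medium": 1, "Low": 0}
_GRADE = {3: "A", 2: "B", 1: "C", 0: "D"}

def overall_grade(levels: dict[str, str]) -> str:
    """Map four per-metric maturity levels to an A-D grade
    per the criteria table above."""
    worst = min(_RANK[level] for level in levels.values())
    return _GRADE[worst]
```

For example, three Elite metrics and one High metric grade as B, not A, which keeps a single lagging metric visible in sprint ceremonies.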

Anti-Patterns

| Anti-Pattern | Why It Fails |
| --- | --- |
| Gaming deployment frequency with no-op deploys | Metric improves but delivery capability doesn't |
| Excluding "expected" failures from CFR | Masks real quality issues |
| Measuring lead time from PR creation (not commit) | Hides development time in pre-PR work |
| Reporting targets as actuals | NATO violation — must measure, not assume |

Origin: CloudOps-Runbooks DORA metrics engine. Targets and maturity model extracted as reusable framework reference.