# DORA Metrics Target Framework

Extracted from the CloudOps-Runbooks DORA implementation. The target thresholds and maturity model are reusable across ADLC consumer projects.
## Four Core Metrics

| Metric | What It Measures | Why It Matters |
|---|---|---|
| Lead Time | Time from code commit to production deployment | Delivery velocity |
| Deployment Frequency | How often you deploy to production | Delivery cadence |
| Change Failure Rate | Percentage of deployments causing production failures | Delivery quality |
| MTTR | Time from incident start to resolution | Recovery capability |
## Maturity Model

Based on the DORA State of DevOps research, adapted for ADLC:

### Lead Time

| Level | Target | Characteristics |
|---|---|---|
| Elite | < 1 hour | Automated CI/CD, trunk-based development |
| High | < 1 day | Feature flags, automated testing |
| Medium | < 1 week | Branch-based, manual QA gates |
| Low | > 1 month | Manual processes, infrequent integration |
ADLC target: < 4 hours (High, approaching Elite)
### Deployment Frequency

| Level | Target | Characteristics |
|---|---|---|
| Elite | Multiple per day | Continuous deployment |
| High | Daily to weekly | Continuous delivery with manual promotion |
| Medium | Weekly to monthly | Sprint-based releases |
| Low | Monthly or less | Quarterly/annual releases |
ADLC target: ≥ 1 deployment/day (High)
### Change Failure Rate

| Level | Target | Characteristics |
|---|---|---|
| Elite | < 5% | Comprehensive automated testing, canary deployments |
| High | < 10% | Good test coverage, staged rollouts |
| Medium | < 15% | Basic testing, some manual verification |
| Low | > 15% | Minimal testing, big-bang deployments |
ADLC target: < 5% (Elite)
### Mean Time to Recovery (MTTR)

| Level | Target | Characteristics |
|---|---|---|
| Elite | < 1 hour | Automated rollback, runbooks, on-call |
| High | < 4 hours | Quick diagnosis, tested rollback procedures |
| Medium | < 1 day | Manual investigation, some documentation |
| Low | > 1 day | Ad-hoc recovery, no runbooks |
ADLC target: < 1 hour (Elite)
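The four maturity tables above can be encoded as a small classifier. Below is a minimal sketch, assuming hours for the time-based metrics and percent for change failure rate; the thresholds mirror the tables, but all function and key names are illustrative, not part of the CloudOps-Runbooks engine:

```python
# Hypothetical sketch: classify metric values into DORA maturity levels.
# Thresholds are copied from the maturity tables above.

# "Lower is better" metrics: a value strictly below a threshold reaches
# that level; anything past the last threshold is Low.
THRESHOLDS = {
    "lead_time_hours":         [("Elite", 1), ("High", 24), ("Medium", 168)],
    "change_failure_rate_pct": [("Elite", 5), ("High", 10), ("Medium", 15)],
    "mttr_hours":              [("Elite", 1), ("High", 4),  ("Medium", 24)],
}

def maturity(metric: str, value: float) -> str:
    """Return the maturity level for a 'lower is better' metric."""
    for level, limit in THRESHOLDS[metric]:
        if value < limit:
            return level
    return "Low"

def deploy_frequency_level(deploys_per_day: float) -> str:
    """Deployment frequency is 'higher is better', so it gets its own rule."""
    if deploys_per_day > 1:        # multiple per day
        return "Elite"
    if deploys_per_day >= 1 / 7:   # daily to weekly
        return "High"
    if deploys_per_day >= 1 / 30:  # weekly to monthly
        return "Medium"
    return "Low"
```

With these rules, the ADLC lead-time target of 4 hours classifies as High, and the CFR and MTTR targets classify as Elite, matching the tables.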
## Measurement Approach

### Data Sources (Local-First)

| Metric | Primary Source | Fallback |
|---|---|---|
| Lead Time | git log (commit → merge timestamp) | Manual tracking |
| Deployment Frequency | git log --merges on main branch | CI/CD pipeline logs |
| Change Failure Rate | Incident records / rollback count | Manual incident log |
| MTTR | Incident open → close timestamps | Manual tracking |
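The git-based sources above need only a few lines of parsing. A sketch, assuming the epoch-seconds output of `git log --merges --format=%ct main` (one merge commit per line); the helper names are hypothetical, not from the CloudOps-Runbooks engine:

```python
# Hypothetical helpers for the local-first data sources above.

def deploys_per_day(git_log_output: str, window_days: float) -> float:
    """Count merge commits on main (one %ct timestamp per line) and
    normalise to a per-day deployment rate."""
    merges = [line for line in git_log_output.splitlines() if line.strip()]
    return len(merges) / window_days

def lead_time_hours(commit_epoch: int, merge_epoch: int) -> float:
    """Lead time = merge (deploy) timestamp minus first-commit timestamp."""
    return (merge_epoch - commit_epoch) / 3600
```

In practice the log text would come from `subprocess.run(["git", "log", "--merges", "--format=%ct", "main"], ...)`; the incident-based metrics (CFR, MTTR) fall back to the manual logs listed in the table.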
### Collection Cadence

| Ceremony | Metrics Collected | Purpose |
|---|---|---|
| Daily standup | Quick DORA snapshot | Trend awareness |
| Sprint review | Full DORA report with trends | Sprint health assessment |
| Sprint retro | DORA actuals vs targets | Improvement identification |
| Monthly | Rolling 30-day averages | Long-term trend |
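The monthly rolling 30-day average is straightforward to compute from dated samples. A minimal sketch, with an illustrative function name:

```python
from datetime import date, timedelta

def rolling_average(samples, as_of, window_days=30):
    """Mean of (day, value) samples inside the trailing window, else None.

    `samples` is an iterable of (datetime.date, float) pairs; values dated
    before the window cutoff are ignored.
    """
    cutoff = as_of - timedelta(days=window_days)
    window = [value for day, value in samples if cutoff < day <= as_of]
    return sum(window) / len(window) if window else None
```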
## SLA Integration

DORA metrics connect to service level objectives (SLOs):

| SLA Type | Connected DORA Metric | Relationship |
|---|---|---|
| Availability (99.9%) | MTTR | Lower MTTR = higher availability |
| Performance | Change Failure Rate | Lower failure rate = more stable performance |
| Delivery cadence | Lead Time + Deploy Frequency | Faster delivery = more responsive to business needs |
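The availability–MTTR link can be made concrete with error-budget arithmetic: 99.9% availability over 30 days allows roughly 43.2 minutes of downtime, so a single incident resolved at the Elite MTTR boundary (just under 1 hour) can already exceed the budget. A sketch, with a hypothetical function name:

```python
def downtime_budget_minutes(availability_pct: float, period_days: int = 30) -> float:
    """Allowed downtime (minutes) for an availability target over a period."""
    return period_days * 24 * 60 * (1 - availability_pct / 100)

# 99.9% over 30 days -> ~43.2 minutes, so MTTR must stay well under an hour
# unless incidents occur less often than once per month.
```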
## Scoring

For sprint ceremonies, calculate an overall performance grade:

| Grade | Criteria |
|---|---|
| A | All 4 metrics at Elite level |
| B | All 4 metrics at High or above |
| C | All 4 metrics at Medium or above |
| D | One or more metrics at Low |
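Because each grade is gated by the weakest metric, the rubric reduces to a minimum over maturity levels. A sketch of that mapping (names illustrative):

```python
# Hypothetical encoding of the grading rubric above: the overall grade is
# determined by the lowest maturity level among the four metrics.
LEVEL_RANK = {"Elite": 3, "High": 2, "Medium": 1, "Low": 0}

def overall_grade(levels):
    """Map four per-metric maturity levels to an A-D grade."""
    worst = min(LEVEL_RANK[level] for level in levels)
    return {3: "A", 2: "B", 1: "C", 0: "D"}[worst]
```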
## Anti-Patterns

| Anti-Pattern | Why It Fails |
|---|---|
| Gaming deployment frequency with no-op deploys | Metric improves but delivery capability doesn't |
| Excluding "expected" failures from CFR | Masks real quality issues |
| Measuring lead time from PR creation (not commit) | Hides development time in pre-PR work |
| Reporting targets as actuals | NATO violation — must measure, not assume |
Origin: CloudOps-Runbooks DORA metrics engine. Targets and maturity model extracted as reusable framework reference.