Daily Standup Guide
Reference: McKinsey Rewired Ch.13, Exhibit 13.4 "Agile ceremonies"
Pod Model: 1 HITL Manager + 9 AI Agents (McKinsey Ch.14 "Digital Factory" Option 1)
Target: Session startup in <2 minutes with sprint health, DORA metrics, and next action
Governance: EXEMPT from PO+CA coordination (ceremony command — reads existing data only)
Agile Ceremony Flow (McKinsey Exhibit 13.2)
5 Ceremonies Mapped to ADLC
| McKinsey Ceremony | Frequency | ADLC Command | Agent | Artifact | HITL Effort |
|---|---|---|---|---|---|
| Backlog Refinement | Every sprint | Prompt 19/20 (copy-paste) | product-owner | stories.csv updated | 15 min |
| Sprint Planning | Every sprint | /speckit.plan | product-owner + cloud-architect | Sprint plan JSON | 30 min |
| Daily Standup | Every session | /metrics:daily-standup | observability-engineer | standup JSON + MD | 2 min |
| Sprint Review | Every sprint | /metrics:sprint-review | product-owner | Working product demo | 30 min |
| Sprint Retrospective | Every sprint | /metrics:sprint-retro | product-owner + observability-engineer | Retro JSON + DORA | 20 min |
Daily Standup: Step-by-Step
The Single Command
/metrics:daily-standup --sprint xOps-S1
Actual Output (2026-03-14, observability-engineer agent + HITL correction)
Source: framework/ceremonies/daily-standup-2026-03-14.md (git-tracked deliverable)
═══════════════════════════════════════════════════════════════
DAILY STANDUP — xOps-S1 | 2026-03-14 | Day 5 of 12
McKinsey Rewired Exhibit 13.4: Daily Standup
═══════════════════════════════════════════════════════════════
SPRINT HEALTH: 66/100 — at-risk
Stories: 3/5 (60%) | DORA: 75% | Blockers: 5
───────────────────────────────────────────────────────────────
OUR TOP 2 OBJECTIVES
───────────────────────────────────────────────────────────────
1. xOps Sovereign AI ($180/mo target)
✅ xOps-S1-01 Local Stack | ✅ xOps-S1-04 FinOps Report
⬜ xOps-S1-02 M3 Web Module | ⬜ xOps-S1-03 M4 EFS Module
2. ADLC Enterprise Framework (adlc.oceansoft.io)
✅ xOps-S1-05 Sprint Filter | 🔄 RQ3 Team Bundles
🔄 RQ4 Daily Standup | 🔄 RQ1 Session Split
───────────────────────────────────────────────────────────────
YESTERDAY (2026-03-13) — 1 session
───────────────────────────────────────────────────────────────
- cloud-architect: Sprint 1 RQ1-RQ4 technical design validation
- product-owner: Bundle consolidation (Option C)
- meta-engineering-expert: RQ3 team bundles + RQ4 standup implementation
- infrastructure-engineer: xOps-S1-04 FinOps brutal-honest scoring
- frontend-docs-engineer: xOps-S1-05 sprint filter shipped (LEAN)
───────────────────────────────────────────────────────────────
-> TODAY: Prompt 5 — M3 Web Module (BV 85%)
HITL action: aws sso login + paste Prompt 5 in Cloud-Infrastructure
───────────────────────────────────────────────────────────────
BLOCKERS (5)
B-01: No HTTPS/DDoS for production -> xOps-S1-02 (P1, backlog)
B-02: No SQLite persistence -> xOps-S1-03 (P1, backlog)
B-03: APRA CPS 234 gaps -> US-X004 risk register (in-progress)
B-04: Agent consensus N/A today -> score on next impl session
B-05: MTTR ~2h vs <30min -> automate PDCA (sprint 2)
DORA (4 key metrics)
✅ DF: 1/sprint ✅ LT: <1 day
✅ CFR: 0% 🔴 MTTR: ~2h (PDCA)
Evidence: tmp/adlc-framework/ceremonies/daily-standup-2026-03-14.json
Evidence: framework/ceremonies/daily-standup-2026-03-14.md
═══════════════════════════════════════════════════════════════
Data sources (queried by observability-engineer agent, corrected by HITL):
- Stories: `docs/static/data/stories.csv` — 3/5 completed (S1-01, S1-04, S1-05)
- DORA: `docs/static/data/adlc-metrics.db` — 4/4 measured (MTTR red = misses target, but IS measured)
- Yesterday: `tmp/adlc-framework/coordination-logs/*2026-03-14*.json` — 1 session, 5 agents
- Sprint health: 66/100 at-risk (HITL correction; IE cross-validated SQLite shows 4/4 coverage)
- Note: "DORA coverage" = has measured value (non-N/A), NOT "meets target". Color icons show target-meeting.
- Blockers: 5 total from stories.csv + DORA blind spots
Key design decisions:
- Sprint health leads (1 line) — not buried at the end
- Top 2 objectives anchored from the prompts file, RQ items shown under Objective 2
- Yesterday: max 5 bullet items, agent:scope only — no file paths
- TODAY = single recommendation with prompt #, BV%, and explicit HITL action
- DORA quick view: 4 key metrics only (DF, LT, CFR, MTTR). Full 15 metrics in JSON evidence
- Dual evidence: `.json` in `tmp/` (ephemeral, agent-consumable) + `.md` in `framework/ceremonies/` (git-tracked, HITL-consumable)
Sprint Health Clarification
"DORA coverage" in the sprint health line means measured (non-N/A), not meeting target:
- `DORA: 4/4` = all 4 key metrics have measured values (even if MTTR misses its target)
- Color icons (GREEN/AMBER/RED) show whether each metric meets its target
- These are two different signals: coverage = "do we have data?", color = "is it good?"
11-Step Pipeline
Sprint Health Formula
Sprint Health = (stories_bv_weighted * 60%) + (dora_4key_pct * 40%)
Where:
- `stories_bv_weighted` = sum of BV% for completed stories / sum of BV% for all stories
- `dora_4key_pct` = percentage of the 4 key DORA metrics with measured (non-N/A) values
| Score | Status | Action |
|---|---|---|
| ≥ 75 | on-track | Continue current plan |
| 50-74 | at-risk | Prioritize blockers |
| < 50 | off-track | Escalate to HITL for replanning |
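A minimal Python sketch of this formula and the status thresholds. The BV weights and DORA coverage below are illustrative inputs, not the actual xOps-S1 backlog values, and `sprint_health` is a hypothetical helper name.

```python
# Sprint Health = (stories_bv_weighted * 60%) + (dora_4key_pct * 40%)
# Illustrative sketch; inputs below are NOT the real backlog data.

def sprint_health(completed_bv, all_bv, dora_measured, dora_total=4):
    """Blend BV-weighted story completion (60%) with DORA coverage (40%)."""
    stories_bv_weighted = 100 * sum(completed_bv) / sum(all_bv)
    dora_4key_pct = 100 * dora_measured / dora_total
    score = stories_bv_weighted * 0.60 + dora_4key_pct * 0.40
    if score >= 75:
        status = "on-track"
    elif score >= 50:
        status = "at-risk"
    else:
        status = "off-track"
    return round(score), status

# Completed stories carry 60% of total BV; 3 of 4 DORA metrics measured.
print(sprint_health([30, 30], [30, 30, 20, 10, 10], 3))  # -> (66, 'at-risk')
```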
DORA Metrics Integration
The daily standup surfaces 4 key DORA metrics in the terminal output. Full 15 metrics are available in the JSON evidence file.
| DORA Metric | Source | Collection Method | Target |
|---|---|---|---|
| Deploy Frequency | git log --since="14 days ago" | Commits to main per sprint | ≥2/sprint |
| Lead Time | git log first commit → merge | Time from first commit to validation | <2 days |
| Change Failure Rate | git log --grep="revert\|fix\|hotfix" | Reverts as % of total deploys | <5% |
| MTTR | PDCA cycle evidence | Time from failure detection to fix | <30 min |
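The Change Failure Rate row counts revert/fix/hotfix commits as failures. The real collection runs `git log` inside collect-dora-session.sh; the pure-Python sketch below only mirrors that grep pattern so the math is explicit (`change_failure_rate` and the sample subjects are illustrative).

```python
# Sketch of the CFR calculation implied by the table above: failures are
# commits whose subject matches revert/fix/hotfix, as a % of all commits.
import re

FAILURE_RE = re.compile(r"\b(revert|fix|hotfix)\b", re.IGNORECASE)

def change_failure_rate(subjects):
    """% of commit subjects that look like reverts, fixes, or hotfixes."""
    if not subjects:
        return 0.0
    failures = sum(1 for s in subjects if FAILURE_RE.search(s))
    return 100 * failures / len(subjects)

subjects = [
    "feat: add sprint filter",
    "fix: FinOps report rounding",
    "docs: standup guide",
    'Revert "feat: add sprint filter"',
]
print(change_failure_rate(subjects))  # -> 50.0 (target: <5%)
```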
Color Coding
| Color | Meaning | Action |
|---|---|---|
| ✅ GREEN | Meets target | Continue |
| 🟡 AMBER | Within 20% of target | Monitor |
| 🔴 RED | Misses target | Escalate to HITL |
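The thresholds above can be sketched as a small scoring helper. The ratio-based reading of the "within 20%" amber band is an assumption, and `dora_color` is a hypothetical name, not part of the ADLC tooling.

```python
# Sketch of the GREEN/AMBER/RED rule: GREEN meets target, AMBER is within
# 20% of it, RED misses it. Works for both "lower is better" metrics
# (Lead Time, CFR, MTTR) and "higher is better" ones (Deploy Frequency).

def dora_color(value, target, lower_is_better=True):
    if not lower_is_better and value == 0:
        return "RED"  # zero deploys can never meet a frequency target
    ratio = value / target if lower_is_better else target / value
    if ratio <= 1.0:
        return "GREEN"   # meets target -> continue
    if ratio <= 1.2:
        return "AMBER"   # within 20% of target -> monitor
    return "RED"         # misses target -> escalate to HITL

print(dora_color(120, 30))  # MTTR ~2h vs <30 min target -> RED
```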
Governance Classification
Ceremony commands (/metrics:daily-standup, /metrics:sprint-review, /metrics:sprint-retro, /metrics:update-dora) are EXEMPT from PO+CA coordination per adlc-governance.md Content Classification.
Rationale: These are READ + AGGREGATE + DISPLAY operations. They read existing data sources (SQLite, CSV, coordination logs) and present sprint status. No architecture decisions, technology selections, or design recommendations. Blocking management tools behind coordination creates an infinite loop.
The remind-coordination.sh hook has a passthrough pattern for /metrics.* commands — they pass through with a reminder but are NOT blocked.
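remind-coordination.sh itself is shell; the sketch below only mirrors its passthrough decision in Python, and the regex shown is an assumption rather than the hook's literal pattern.

```python
# Sketch of the ceremony-passthrough rule: /metrics:* commands pass through
# with a reminder (read-only), everything else is gated on PO+CA coordination.
import re

CEREMONY_RE = re.compile(r"^/metrics:")

def classify(command):
    if CEREMONY_RE.match(command):
        return "passthrough"   # ceremony: remind, but do NOT block
    return "require-po-ca"     # normal command: PO+CA coordination first

print(classify("/metrics:daily-standup"))  # -> passthrough
print(classify("/speckit.plan"))           # -> require-po-ca
```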
ADLC Components Used
| Type | Component | Purpose |
|---|---|---|
| Command | /metrics:daily-standup | Primary ceremony orchestrator (11-step pipeline) |
| Command | /metrics:update-dora | DORA data refresh if stale (>24h) |
| Command | /metrics:sprint-review | End-of-sprint working product showcase |
| Command | /metrics:sprint-retro | Kolb + BGS retrospective with DORA actuals |
| Hook | collect-dora-session.sh | SessionStart — auto-populates SQLite |
| Hook | remind-coordination.sh | Enforces PO+CA after standup completes (ceremony passthrough) |
| Data | adlc-metrics.db | SQLite source of truth for DORA metrics |
| Data | stories.csv | Sprint story status (in-progress, backlog, done) |
| Data | dora.csv | DORA rendering in xops.jsx dashboard |
| Memory | xops-team-mission.json | OKRs, sprint goal, ceremonies definition |
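Since adlc-metrics.db is the SQLite source of truth, the coverage check reduces to one query. The sketch below uses an in-memory stand-in and a hypothetical `dora_metrics` schema; the real schema is whatever collect-dora-session.sh populates.

```python
# Hypothetical schema sketch: "coverage" counts metrics with a measured
# (non-NULL) value, regardless of whether they meet their target.
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for adlc-metrics.db
con.execute("CREATE TABLE dora_metrics (name TEXT, value REAL, target REAL)")
con.executemany(
    "INSERT INTO dora_metrics VALUES (?, ?, ?)",
    [("deploy_frequency", 1, 2), ("lead_time_days", 0.5, 2),
     ("cfr_pct", 0, 5), ("mttr_min", 120, 30)],  # MTTR misses, but IS measured
)
measured, total = con.execute(
    "SELECT COUNT(value), COUNT(*) FROM dora_metrics"
).fetchone()
print(f"DORA coverage: {measured}/{total}")  # -> DORA coverage: 4/4
```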
McKinsey Reference
"There is a misconception about agile that it is freewheeling and lacks sufficient management input and oversight. That happens in poor implementations of agile. In fact, when done right, agile is an effective way to manage performance because of its focus on results and the frequent checks on progress."
— McKinsey Rewired, Chapter 13: "From doing agile to being agile"
Four Characteristics of Effective Agile Pods (McKinsey Ch.13, p.121)
- Mission-based with measurable outcomes — Our OKRs: O1 (RAG <30s), O2 (≤$180/mo), O3 (≥99.5%)
- Cross-disciplinary with dedicated resources — 9 agents x specialized skills (FinOps, TF, security, QA)
- Autonomous and accountable for impact — PDCA autonomous until ≥99.5%, HITL escalation at max 7 cycles
- Fast moving and focused on user needs — <2min standup, BV-ordered recommendation, evidence-based delivery
References
| Source | Chapter/Exhibit | What We Use | How We Adapted |
|---|---|---|---|
| McKinsey Rewired (Lamarre et al., 2023) | Ch.13 "From doing agile to being agile" | 4 characteristics of agile pods (p.121) | 1 HITL + 9 AI agents = AI-native pod |
| McKinsey Rewired | Exhibit 13.1 "Agile is a superior development approach" (p.120) | Agile benchmarks: +27% productivity, -30% schedule slips, -70% defects | DORA metrics as our measurement framework |
| McKinsey Rewired | Exhibit 13.2 "Agile cadence and performance management ceremonies" (p.124) | 5 ceremonies: Backlog Refinement → Sprint Planning → Daily Standup → Sprint Review → Sprint Retrospective | Mapped to ADLC commands (see table above) |
| McKinsey Rewired | Exhibit 13.3 "Example of OKRs" (p.126) | OKR structure: Objectives x Key Results x Timing | Our 3 OKRs in xops-team-mission.json |
| McKinsey Rewired | Exhibit 13.4 "Agile ceremonies" (p.128) | Daily Standup: "assess sprint progress, identify barriers, 1+ tasks assigned" | /metrics:daily-standup 11-step pipeline |
| McKinsey Rewired | Ch.14 "Operating models" — Digital Factory (p.135-136) | Option 1: 10-50 pods, self-contained, rapid to implement | Our operating model choice (1 pod, AI-native) |
| DORA (Google Cloud) | Accelerate (Forsgren et al., 2018) | 4 key metrics: Deploy Freq, Lead Time, CFR, MTTR | collect-dora-session.sh auto-measures from git log |
The full McKinsey Rewired PDF (Section 3, Chapters 13-16) is available in the private knowledge base at
xOps-Internal/knowledge-external/rewired-the-mckinsey-guide/rewired-the-mckinsey-guide-section3.pdf.
This file is NOT deployed to the public docs site (copyright). Exhibits referenced above are cited by page number.