
Daily Standup Guide

- Reference: McKinsey Rewired Ch.13, Exhibit 13.4 — "Agile ceremonies" (Daily Standup)
- Pod Model: 1 HITL Manager + 9 AI Agents (McKinsey Ch.14 "Digital Factory" Option 1)
- Target: Session startup in <2 minutes with sprint health, DORA metrics, and next action
- Governance: EXEMPT from PO+CA coordination (ceremony command — reads existing data only)

Agile Ceremony Flow (McKinsey Exhibit 13.2)

5 Ceremonies Mapped to ADLC

| McKinsey Ceremony | Frequency | ADLC Command | Agent | Artifact | HITL Effort |
|---|---|---|---|---|---|
| Backlog Refinement | Every sprint | Prompt 19/20 (copy-paste) | product-owner | stories.csv updated | 15 min |
| Sprint Planning | Every sprint | /speckit.plan | product-owner + cloud-architect | Sprint plan JSON | 30 min |
| Daily Standup | Every session | /metrics:daily-standup | observability-engineer | standup JSON + MD | 2 min |
| Sprint Review | Every sprint | /metrics:sprint-review | product-owner | Working product demo | 30 min |
| Sprint Retrospective | Every sprint | /metrics:sprint-retro | product-owner + observability-engineer | Retro JSON + DORA | 20 min |

Daily Standup: Step-by-Step

The Single Command

/metrics:daily-standup --sprint xOps-S1

Actual Output (2026-03-14, observability-engineer agent + HITL correction)

Source: framework/ceremonies/daily-standup-2026-03-14.md (git-tracked deliverable)

═══════════════════════════════════════════════════════════════
DAILY STANDUP — xOps-S1 | 2026-03-14 | Day 5 of 12
McKinsey Rewired Exhibit 13.4: Daily Standup
═══════════════════════════════════════════════════════════════

SPRINT HEALTH: 66/100 — at-risk
Stories: 3/5 (60%) | DORA: 75% | Blockers: 5

───────────────────────────────────────────────────────────────
OUR TOP 2 OBJECTIVES
───────────────────────────────────────────────────────────────
1. xOps Sovereign AI ($180/mo target)
✅ xOps-S1-01 Local Stack | ✅ xOps-S1-04 FinOps Report
⬜ xOps-S1-02 M3 Web Module | ⬜ xOps-S1-03 M4 EFS Module

2. ADLC Enterprise Framework (adlc.oceansoft.io)
✅ xOps-S1-05 Sprint Filter | 🔄 RQ3 Team Bundles
🔄 RQ4 Daily Standup | 🔄 RQ1 Session Split

───────────────────────────────────────────────────────────────
YESTERDAY (2026-03-13) — 1 session
───────────────────────────────────────────────────────────────
- cloud-architect: Sprint 1 RQ1-RQ4 technical design validation
- product-owner: Bundle consolidation (Option C)
- meta-engineering-expert: RQ3 team bundles + RQ4 standup implementation
- infrastructure-engineer: xOps-S1-04 FinOps brutal-honest scoring
- frontend-docs-engineer: xOps-S1-05 sprint filter shipped (LEAN)

───────────────────────────────────────────────────────────────
-> TODAY: Prompt 5 — M3 Web Module (BV 85%)
HITL action: aws sso login + paste Prompt 5 in Cloud-Infrastructure
───────────────────────────────────────────────────────────────

BLOCKERS (5)
B-01: No HTTPS/DDoS for production -> xOps-S1-02 (P1, backlog)
B-02: No SQLite persistence -> xOps-S1-03 (P1, backlog)
B-03: APRA CPS 234 gaps -> US-X004 risk register (in-progress)
B-04: Agent consensus N/A today -> score on next impl session
B-05: MTTR ~2h vs <30min -> automate PDCA (sprint 2)

DORA (4 key metrics)
✅ DF: 1/sprint ✅ LT: <1 day
✅ CFR: 0% 🔴 MTTR: ~2h (PDCA)

Evidence: tmp/adlc-framework/ceremonies/daily-standup-2026-03-14.json
Evidence: framework/ceremonies/daily-standup-2026-03-14.md
═══════════════════════════════════════════════════════════════

Data sources (queried by observability-engineer agent, corrected by HITL):

  • Stories: docs/static/data/stories.csv — 3/5 completed (S1-01, S1-04, S1-05)
  • DORA: docs/static/data/adlc-metrics.db — 4/4 measured (MTTR red = misses target, but IS measured)
  • Yesterday: tmp/adlc-framework/coordination-logs/*2026-03-14*.json — 1 session, 5 agents
  • Sprint health: 66/100 at-risk (HITL correction; IE cross-validated SQLite shows 4/4 coverage)
  • Note: "DORA coverage" = has measured value (non-N/A), NOT "meets target". Color icons show target-meeting.
  • Blockers: 5 total from stories.csv + DORA blind spots

Key design decisions:

  • Sprint health leads (1 line) — not buried at the end
  • Top 2 objectives anchored from the prompts file, RQ items shown under Objective 2
  • Yesterday: max 5 bullet items, agent:scope only — no file paths
  • TODAY = single recommendation with prompt #, BV%, and explicit HITL action
  • DORA quick view: 4 key metrics only (DF, LT, CFR, MTTR). Full 15 metrics in JSON evidence
  • Dual evidence: .json in tmp/ (ephemeral, agent-consumable) + .md in framework/ceremonies/ (git-tracked, HITL-consumable)

Sprint Health Clarification

"DORA coverage" in the sprint health line means measured (non-N/A), not meeting target:

  • DORA: 4/4 = all 4 key metrics have measured values (even if MTTR misses its target)
  • Color icons (GREEN/AMBER/RED) show whether each metric meets its target
  • These are two different signals: coverage = "do we have data?", color = "is it good?"

11-Step Pipeline

Sprint Health Formula

Sprint Health = (stories_bv_weighted * 60%) + (dora_4key_pct * 40%)

Where:

  • stories_bv_weighted = sum of BV% for completed stories / sum of BV% for all stories
  • dora_4key_pct = percentage of 4 key DORA metrics with measured (non-N/A) values
| Score | Status | Action |
|---|---|---|
| ≥ 75 | on-track | Continue current plan |
| 50-74 | at-risk | Prioritize blockers |
| < 50 | off-track | Escalate to HITL for replanning |
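The formula and thresholds above can be sketched as follows. This is a minimal illustration, not the ADLC implementation; the function and variable names are hypothetical.

```python
def sprint_health(story_bvs, completed, dora_measured, dora_total=4):
    """Sprint Health = BV-weighted story completion (60%) + DORA coverage (40%).

    story_bvs: {story_id: business_value_pct} for all sprint stories
    completed: set of completed story ids
    dora_measured: number of key DORA metrics with a measured (non-N/A) value
    """
    stories_bv_weighted = (
        sum(bv for sid, bv in story_bvs.items() if sid in completed)
        / sum(story_bvs.values()) * 100
    )
    dora_4key_pct = dora_measured / dora_total * 100
    score = stories_bv_weighted * 0.60 + dora_4key_pct * 0.40
    if score >= 75:
        status = "on-track"
    elif score >= 50:
        status = "at-risk"
    else:
        status = "off-track"
    return round(score), status
```

With 3 of 5 equally weighted stories done (60%) and 3 of 4 DORA metrics measured (75%), this yields 66/100 "at-risk", matching the standup output shown earlier.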

DORA Metrics Integration

The daily standup surfaces the 4 key DORA metrics in the terminal output; the full set of 15 metrics is available in the JSON evidence file.

| DORA Metric | Source | Collection Method | Target |
|---|---|---|---|
| Deploy Frequency | `git log --since="14 days ago"` | Commits to main per sprint | ≥2/sprint |
| Lead Time | `git log` first commit → merge | Time from first commit to validation | <2 days |
| Change Failure Rate | `git log --grep="revert\|fix\|hotfix"` | Reverts as % of total deploys | <5% |
| MTTR | PDCA cycle evidence | Time from failure detection to fix | <30 min |
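The git-based collection methods in the table can be approximated with a few subprocess calls. This is a hedged sketch only; the actual collect-dora-session.sh hook may differ, and the function names and repository path argument here are assumptions.

```python
import subprocess

def git_lines(args, repo="."):
    # Run git in the given repo and return non-empty stdout lines.
    out = subprocess.run(
        ["git", "-C", repo] + args,
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def dora_snapshot(repo="."):
    """Approximate Deploy Frequency and CFR from git history (sketch)."""
    deploys = git_lines(["log", "--since=14 days ago", "--oneline"], repo)
    # git --grep uses basic regex, so alternation is written as \|
    reverts = git_lines(
        ["log", "--since=14 days ago", "--oneline",
         "--grep=revert\\|fix\\|hotfix"], repo)
    cfr = len(reverts) / len(deploys) * 100 if deploys else 0.0
    return {"deploy_frequency": len(deploys), "change_failure_rate_pct": cfr}
```

Lead Time and MTTR need per-story timestamps (first commit → merge, PDCA evidence) and are not shown here.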

Color Coding

| Color | Meaning | Action |
|---|---|---|
| ✅ GREEN | Meets target | Continue |
| 🟡 AMBER | Within 20% of target | Monitor |
| 🔴 RED | Misses target | Escalate to HITL |
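One way to implement the color assignment is sketched below. "Within 20% of target" is interpreted here as missing the target by at most 20%, which is an assumption; the function name is hypothetical.

```python
def dora_color(value, target, lower_is_better=True):
    """Return GREEN if the metric meets its target, AMBER if within 20%, else RED.

    Note this is independent of coverage: a metric can be measured (counts
    toward DORA coverage) and still be RED, like MTTR in the standup above.
    """
    if lower_is_better:
        if value <= target:
            return "GREEN"
        if value <= target * 1.2:
            return "AMBER"
    else:
        if value >= target:
            return "GREEN"
        if value >= target * 0.8:
            return "AMBER"
    return "RED"
```

For example, MTTR ~120 min against a 30 min target is RED, while a deploy frequency of 2 against a ≥2/sprint target is GREEN.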

Governance Classification

Ceremony commands (/metrics:daily-standup, /metrics:sprint-review, /metrics:sprint-retro, /metrics:update-dora) are EXEMPT from PO+CA coordination per adlc-governance.md Content Classification.

Rationale: these are READ + AGGREGATE + DISPLAY operations. They read existing data sources (SQLite, CSV, coordination logs) and present sprint status; they make no architecture decisions, technology selections, or design recommendations. Blocking these management tools behind coordination would create an infinite loop.

The remind-coordination.sh hook has a passthrough pattern for /metrics.* commands — they pass through with a reminder but are NOT blocked.
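The passthrough check could look like the following, shown in Python for consistency with the other sketches in this guide. This is a hypothetical rendering of the remind-coordination.sh behavior; the actual hook's pattern and return values are assumptions.

```python
import re

# Mirrors the stated /metrics.* passthrough pattern (assumed form)
CEREMONY_PASSTHROUGH = re.compile(r"^/metrics")

def gate_command(command):
    """Ceremony commands pass through with a reminder; others require PO+CA."""
    if CEREMONY_PASSTHROUGH.match(command):
        return "PASS_WITH_REMINDER"
    return "REQUIRE_COORDINATION"
```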

ADLC Components Used

| Type | Component | Purpose |
|---|---|---|
| Command | /metrics:daily-standup | Primary ceremony orchestrator (11-step pipeline) |
| Command | /metrics:update-dora | DORA data refresh if stale (>24h) |
| Command | /metrics:sprint-review | End-of-sprint working product showcase |
| Command | /metrics:sprint-retro | Kolb + BGS retrospective with DORA actuals |
| Hook | collect-dora-session.sh | SessionStart — auto-populates SQLite |
| Hook | remind-coordination.sh | Enforces PO+CA after standup completes (ceremony passthrough) |
| Data | adlc-metrics.db | SQLite source of truth for DORA metrics |
| Data | stories.csv | Sprint story status (in-progress, backlog, done) |
| Data | dora.csv | DORA rendering in xops.jsx dashboard |
| Memory | xops-team-mission.json | OKRs, sprint goal, ceremonies definition |

McKinsey Reference

"There is a misconception about agile that it is freewheeling and lacks sufficient management input and oversight. That happens in poor implementations of agile. In fact, when done right, agile is an effective way to manage performance because of its focus on results and the frequent checks on progress."

— McKinsey Rewired, Chapter 13: "From doing agile to being agile"

Four Characteristics of Effective Agile Pods (McKinsey Ch.13, p.121)

  1. Mission-based with measurable outcomes — Our OKRs: O1 (RAG <30s), O2 (≤$180/mo), O3 (≥99.5%)
  2. Cross-disciplinary with dedicated resources — 9 agents x specialized skills (FinOps, TF, security, QA)
  3. Autonomous and accountable for impact — PDCA autonomous until ≥99.5%, HITL escalation at max 7 cycles
  4. Fast moving and focused on user needs — <2min standup, BV-ordered recommendation, evidence-based delivery

References

| Source | Chapter/Exhibit | What We Use | How We Adapted |
|---|---|---|---|
| McKinsey Rewired (Lamarre et al., 2023) | Ch.13 "From doing agile to being agile" | 4 characteristics of agile pods (p.121) | 1 HITL + 9 AI agents = AI-native pod |
| McKinsey Rewired | Exhibit 13.1 "Agile is a superior development approach" (p.120) | Agile benchmarks: +27% productivity, -30% schedule slips, -70% defects | DORA metrics as our measurement framework |
| McKinsey Rewired | Exhibit 13.2 "Agile cadence and performance management ceremonies" (p.124) | 5 ceremonies: Backlog Refinement → Sprint Planning → Daily Standup → Sprint Review → Sprint Retrospective | Mapped to ADLC commands (see table above) |
| McKinsey Rewired | Exhibit 13.3 "Example of OKRs" (p.126) | OKR structure: Objectives x Key Results x Timing | Our 3 OKRs in xops-team-mission.json |
| McKinsey Rewired | Exhibit 13.4 "Agile ceremonies" (p.128) | Daily Standup: "assess sprint progress, identify barriers, 1+ tasks assigned" | /metrics:daily-standup 11-step pipeline |
| McKinsey Rewired | Ch.14 "Operating models" — Digital Factory (p.135-136) | Option 1: 10-50 pods, self-contained, rapid to implement | Our operating model choice (1 pod, AI-native) |
| DORA (Google Cloud) | Accelerate (Forsgren et al., 2018) | 4 key metrics: Deploy Freq, Lead Time, CFR, MTTR | collect-dora-session.sh auto-measures from git log |
HITL Reference

The full McKinsey Rewired PDF (Section 3, Chapters 13-16) is available in the private knowledge base at xOps-Internal/knowledge-external/rewired-the-mckinsey-guide/rewired-the-mckinsey-guide-section3.pdf. This file is NOT deployed to the public docs site (copyright). Exhibits referenced above are cited by page number.