ADLC Governance Lifecycle
22 deterministic hooks enforce governance at tool level — not advisory, not optional. Exit code 2 blocks violations.
Agents prepare. Humans decide. Humans commit. Hooks enforce the boundary — not human vigilance.
Golden Path: From Zero to Enforced Governance
Phase 1: Assess (15 min)
Who: HITL reviews the score. meta-engineering-expert agent identifies gaps.
What: Run the governance score. See where you stand. Identify which of the 64 documented anti-patterns apply to your project.
Why: You can't improve what you don't measure. Advisory guidance degrades as context fills, a known structural limitation; hooks are the only reliable enforcement. Knowing your gap list before you start prevents compliance surprises.
What-if skip: Invisible governance gaps, undetected anti-patterns, compliance surprises discovered in audit rather than development.
How
# Run governance score — 9 binary checks
bash scripts/governance-score.sh
# Review gap output
cat tmp/adlc-framework/governance-score-*.json
Output
- Governance score (target: 9/9 checks passing)
- Gap list with hook IDs that are missing or misconfigured
- Anti-pattern exposure summary (which of the 64 apply)
Quality Gate: Score must be ≥7/9 before proceeding to Configure. Below 7 means foundational hooks are missing.
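To make the gate concrete, here is a minimal sketch of the ≥7/9 check. The numeric "score" JSON field name is an assumption; adapt it to the real governance-score.sh output schema.

```shell
# Hypothetical gate check on the Assess output. The "score" field name is
# an assumption; the real governance-score.sh schema may differ.
extract_score() {
  grep -o '"score"[[:space:]]*:[[:space:]]*[0-9]*' "$1" 2>/dev/null \
    | grep -o '[0-9]*$' || echo 0
}

gate_check() {
  if [ "$1" -ge 7 ]; then
    echo "gate passed: $1/9, proceed to Configure"
  else
    echo "gate failed: $1/9, foundational hooks missing" >&2
    return 1
  fi
}

# Demo against a synthetic score file
f=$(mktemp)
echo '{"score": 8, "gaps": []}' > "$f"
gate_check "$(extract_score "$f")"   # prints "gate passed: 8/9, proceed to Configure"
```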
Phase 2: Configure (10 min)
Who: meta-engineering-expert agent configures. HITL approves settings.
What: Enable hooks, wire up coordination agents, set enforcement thresholds. One symlink activates 22 enforcement points.
Why: Hooks are code, not process. A symlink to .adlc/.claude loads all 22 hooks, 64 anti-pattern rules, and 15 agent definitions automatically at session start. Manual governance doesn't scale — code does.
What-if skip: Manual governance, inconsistent enforcement, STANDALONE_EXECUTION violations when team grows or context fills.
How
# For new projects using ADLC as submodule
git submodule add [email protected]:1xOps/adlc-framework.git .adlc
ln -s .adlc/.claude .claude
ln -s .adlc/.specify .specify
# For existing projects already consuming ADLC
# Verify hooks are loaded at session start
cat .claude/settings.json | grep -c '"matcher"' # should be >20
# Verify coordination hook is active
cat .claude/hooks/scripts/enforce-coordination.sh | head -5
Output
- 22 hooks active (PreToolUse + PostToolUse + SessionStart + Stop events)
- settings.json configured with allow/deny permission rules
- Agent team ready: 15 agents (3 opus, 8 sonnet, 4 haiku)
- Advisory rules loaded: 9 rule files covering governance, Principle I, anti-patterns
Quality Gate: cat .claude/settings.json | grep -c '"matcher"' returns ≥20. All core hook scripts present under .claude/hooks/scripts/.
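A hedged sketch of what such a presence check might look like. The script names come from the enforcement table in Phase 3; the directory layout follows the documented .claude/hooks/scripts/ convention.

```shell
# Sketch of a hook-presence self-check in the spirit of governance-score.sh.
# Script list mirrors the enforcement table; layout is the documented
# .claude/hooks/scripts/ convention.
check_hooks() {
  local hooks_dir="$1" pass=0 total=0 h
  for h in enforce-coordination.sh validate-bash.sh detect-nato-violation.sh \
           enforce-specialist-delegation.sh detect-testing-theater.sh \
           enforce-docker-registry.sh validate-rescore-freshness.sh \
           enforce-pdca-cycle.sh; do
    total=$((total + 1))
    if [ -s "$hooks_dir/$h" ]; then
      pass=$((pass + 1))
    else
      echo "MISSING: $h"
    fi
  done
  echo "hooks present: $pass/$total"
}

check_hooks .claude/hooks/scripts
```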
Phase 3: Enforce (ongoing)
Who: Hooks enforce deterministically. HITL overrides when needed.
What: Every tool call (Edit, Write, Bash, Agent) passes through PreToolUse hooks. Exit code 2 blocks violations before they execute.
Why: Advisory guidance degrades as context fills: Claude Code auto-compacts at roughly 95% capacity, and behavioral instructions get lost in the process. Hooks don't degrade. Exit code 2 is exit code 2 at any context level, any model quality, any session length.
What-if skip: Context degradation bypasses advisory rules. Agents drift from governance. STANDALONE_EXECUTION and NATO violations go undetected until downstream failures.
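As a flavor of what deterministic enforcement means here, a NATO-style check might look like the following sketch. The keyword list and the tmp/ evidence convention are illustrative assumptions; this is not the real detect-nato-violation.sh logic.

```shell
# Hypothetical NATO check: a completion claim must cite an evidence path
# under tmp/. Keywords are illustrative; the real hook's logic may differ.
nato_check() {
  local msg="$1"
  if printf '%s' "$msg" | grep -Eiq '(done|complete|finished|passing)' \
     && ! printf '%s' "$msg" | grep -q 'tmp/'; then
    echo "NATO violation: completion claim without evidence path" >&2
    return 2
  fi
  return 0
}

nato_check "All tests passing!" || echo "vetoed (exit $?)"        # prints "vetoed (exit 2)"
nato_check "All tests passing, see tmp/test-results.json" \
  && echo "allowed"                                               # prints "allowed"
```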
What Gets Enforced
| Hook | Blocks | Exit Code |
|---|---|---|
| enforce-coordination.sh | Specialist tool calls before PO+CA approval | 2 |
| validate-bash.sh | Git mutations + GitHub API git object creation | 2 |
| detect-nato-violation.sh | Completion claims without evidence paths in tmp/ | 2 |
| enforce-specialist-delegation.sh | Raw Edit/Write on domain files without Task delegation | 2 |
| detect-testing-theater.sh | New mocks without assertions, coverage omit expansion | 2 |
| enforce-docker-registry.sh | Non-compliant FROM and image: references | 2 |
| validate-rescore-freshness.sh | Re-scoring before artifacts have changed | 2 |
| validate-docs-sync.sh | Writes that destroy HAND-CURATED content markers | 2 |
What Is Advisory (Rules-Layer Only)
The hook system has a known structural gap: Claude's text response stream is not hookable. Advisory rules handle this surface:
- Text output governance (no implementation content without coordination)
- Sequential scoring order (prevents RACE_CONDITION_SCORING)
- Completion report pattern (Principle I — Agents prepare, Humans commit)
Advisory rules load from .claude/rules/ at session start. Adherence decreases as context fills — hooks are the backstop.
How to Verify Enforcement Is Active
# Check hooks fired in last session (PostToolUse audit log)
ls tmp/adlc-framework/coordination-logs/
# Verify hook exit behavior (should exit 2 on violation)
echo '{"tool":"Bash","input":{"command":"git commit -m test"}}' | \
bash .claude/hooks/scripts/validate-bash.sh
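For intuition, here is a minimal sketch of what such a stdin hook might do internally. This is illustrative, not the actual validate-bash.sh, and the JSON matching is deliberately crude.

```shell
# Illustrative stdin hook: read a tool-call JSON, exit 2 on git mutations.
# Crude string matching; the real validate-bash.sh is more careful.
guard_bash() {
  local payload
  payload=$(cat)
  if printf '%s' "$payload" | grep -Eq 'git (commit|push|merge)'; then
    echo "BLOCKED: git mutations are HITL-only (Principle I)" >&2
    return 2
  fi
  return 0
}

echo '{"tool":"Bash","input":{"command":"git commit -m test"}}' \
  | guard_bash || echo "vetoed (exit $?)"   # prints "vetoed (exit 2)"
```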
Phase 4: Audit (per deliverable)
Who: Three scoring agents run sequentially (product-owner → cloud-architect → qa-engineer). HITL reviews evidence.
What: Score deliverables after code ships. Detect anti-patterns. Track PDCA cycles. Fix-then-score, not score-then-fix.
Why: Each scoring round must follow shipped code changes — re-scoring unchanged artifacts is SCORING_THEATER (anti-pattern #43). 3 agents (PO+CA+QA) produce sufficient signal. Sequential execution prevents RACE_CONDITION_SCORING (agents reading different file versions within the same minute).
What-if skip: SCORING_THEATER — manufactured score deltas with zero code changes. Contradictory evidence from parallel agents reading different file states.
Scoring Protocol
# Fix gaps FIRST, then score ONCE
# (Do NOT score before fixing — SCORING_THEATER anti-pattern)
# Step 1: product-owner scores (foreground — blocks until complete)
# Invoke product-owner agent with scoring task
# Step 2: cloud-architect scores (foreground — reads final state)
# Invoke cloud-architect agent with scoring task
# Step 3: qa-engineer scores (foreground — verifies evidence)
# Invoke qa-engineer agent with scoring task
# Evidence lands here
ls tmp/adlc-framework/scoring-evidence/
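The sequential requirement can be illustrated with a checksum discipline: each scorer records the artifact state it read, and matching checksums prove all three scored the same version. This is a sketch of the idea, not the real agent protocol.

```shell
# Sketch: each scorer logs the checksum of the artifact it read. If the
# three checksums match, no RACE_CONDITION_SCORING occurred. Illustrative only.
score_round() {
  local agent="$1" artifact="$2" sum
  sum=$(cksum "$artifact" | awk '{print $1}')
  echo "$agent read artifact checksum $sum"
}

f=$(mktemp)
echo "spec v1" > "$f"
for agent in product-owner cloud-architect qa-engineer; do
  score_round "$agent" "$f"    # sequential: each reads the final state
done
```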
Anti-Patterns This Phase Prevents
| Anti-Pattern | Prevention |
|---|---|
| SCORING_THEATER | validate-artifact-delta.sh blocks re-score when hashes unchanged |
| RACE_CONDITION_SCORING | Sequential execution (rules-layer) prevents concurrent reads |
| RESCORE_WITHOUT_ARTIFACT_CHANGE | validate-rescore-freshness.sh exits 2 when artifacts are not newer than the last score |
| PROCESS_WITHOUT_OUTCOME | Score delta must follow shipped fixes, not just process-compliant round increments |
| DEFERRED_GAP_LOOP | Gap persisting 2+ rounds without code change = mandatory execution (not re-analysis) |
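A hedged sketch of the freshness idea behind validate-rescore-freshness.sh. The real hooks compare artifact hashes; this version uses mtimes to keep the example short.

```shell
# Sketch of the freshness gate using mtimes (the real hooks compare artifact
# hashes; mtime keeps this example short). Exit 2 mirrors the hook contract.
freshness_gate() {
  local artifact="$1" last_score="$2"
  if [ -e "$last_score" ] && ! [ "$artifact" -nt "$last_score" ]; then
    echo "BLOCKED: artifact unchanged since last score (SCORING_THEATER)" >&2
    return 2
  fi
  echo "artifact newer than last score: re-scoring allowed"
}

d=$(mktemp -d)
echo "spec v1" > "$d/artifact.md"
sleep 1
touch "$d/round-1.json"                  # score written after the last edit
freshness_gate "$d/artifact.md" "$d/round-1.json" \
  || echo "vetoed (exit $?)"             # prints "vetoed (exit 2)"
```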
Output
- Scoring evidence JSON in tmp/adlc-framework/scoring-evidence/round-N.json
- Gap list with specific files to create/modify, test commands, dependency chains
- PDCA cycle count (tracked by enforce-pdca-cycle.sh)
Quality Gate: Consensus ≥85% to mark deliverable complete. Each gap must include: files to modify, test command, dependency chain, story point estimate.
Phase 5: Improve (per sprint)
Who: Full agent team reflects. HITL decides what to change.
What: Extract patterns from sessions. Update anti-pattern catalog. Ratchet governance score. Each retrospective generates new institutional memory.
Why: The 64 anti-patterns in the catalog each cost real sessions. Documenting them prevents repeat failures across all projects, not just the current one. Governance score ratcheting creates a quantifiable improvement trajectory.
What-if skip: Same mistakes repeated across sessions and projects. Anti-pattern catalog stale (64 patterns were discovered the hard way — new ones emerge without retrospectives). Governance score plateaus at current level.
How
# Sprint retrospective (reads existing data, no coordination required)
/metrics:sprint-retro
# Session retrospective for pattern extraction
/speckit.retrospective
# Review improvement actions
cat framework/retrospectives/YYYY-MM-DD-sprint-*.md
# Review current anti-pattern catalog
wc -l .claude/rules/anti-patterns-catalog.md
What Gets Extracted
- New anti-patterns with pattern name, description, hook/prevention mechanism
- Retrospective actions assigned to next sprint with owner and evidence requirement
- Governance score delta (previous score → current score)
- Session quality indicators (NATO rate, PDCA rounds, code-to-analysis ratio)
Output
- Updated anti-patterns-catalog.md (if new patterns found)
- Retrospective evidence in framework/retrospectives/
- Improvement backlog items in next sprint's stories.csv
- Governance score ratcheted (never lower the bar after achieving it)
Quality Gate: Every retrospective produces ≥1 actionable improvement assigned to a specific sprint. No "general learnings" — specific files, commands, or hook changes.
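The "never lower the bar" rule can be sketched as a simple ratchet over a recorded baseline. The baseline file location and format are illustrative assumptions.

```shell
# Sketch of the governance-score ratchet: the stored baseline only moves up.
# Baseline file path/format are illustrative assumptions.
ratchet() {
  local baseline_file="$1" new_score="$2" baseline
  baseline=$(cat "$baseline_file" 2>/dev/null || true)
  baseline=${baseline:-0}
  if [ "$new_score" -lt "$baseline" ]; then
    echo "REGRESSION: score $new_score below ratcheted baseline $baseline" >&2
    return 2
  fi
  echo "$new_score" > "$baseline_file"
  echo "baseline ratcheted to $new_score"
}

b=$(mktemp)
echo 7 > "$b"
ratchet "$b" 8                              # prints "baseline ratcheted to 8"
ratchet "$b" 6 || echo "vetoed (exit $?)"   # prints "vetoed (exit 2)"
```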
Governance Score: The 9 Checks
bash scripts/governance-score.sh
| Check | Passing Condition |
|---|---|
| PO+CA coordination enforced | enforce-coordination.sh present + exit 2 on violation |
| Git mutation blocked | validate-bash.sh blocks git commit/push/merge |
| NATO detection active | detect-nato-violation.sh blocks claims without tmp/ paths |
| Specialist delegation enforced | enforce-specialist-delegation.sh present |
| Testing theater prevented | detect-testing-theater.sh blocks mock expansion |
| Docker registry allowlist | enforce-docker-registry.sh blocks non-nnthanh101/* images |
| Rescore freshness validated | validate-rescore-freshness.sh present |
| PDCA cycle tracked | enforce-pdca-cycle.sh logs artifact hashes |
| Evidence paths present | tmp/ directory exists with coordination-logs/ subdirectory |
Target: 9/9. Each check represents a class of governance failures that cost real sessions before being codified.
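One of these checks, the Docker registry allowlist, can be sketched as a line filter. The nnthanh101/ namespace comes from the table above; the file-scanning approach itself is an assumption about how the real hook works.

```shell
# Sketch of the registry allowlist: report FROM / image: lines that reference
# images outside the nnthanh101/ namespace. Scanning approach is illustrative.
check_image_refs() {
  grep -En '^(FROM|[[:space:]]*image:)' "$1" | grep -v 'nnthanh101/' || true
}

f=$(mktemp)
printf 'FROM ubuntu:22.04\nFROM nnthanh101/base:1.0\n' > "$f"
check_image_refs "$f"    # reports only the ubuntu line
```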
Anti-Pattern Reference: Top 10 by Frequency
| Rank | Pattern | Hook Prevention |
|---|---|---|
| 1 | STANDALONE_EXECUTION | enforce-coordination.sh exit 2 |
| 2 | NATO_VIOLATION | detect-nato-violation.sh exit 2 |
| 3 | TEXT_OUTPUT_BYPASS | Rules-layer (unhookable surface) |
| 4 | SCORING_THEATER | validate-artifact-delta.sh + rules |
| 5 | RACE_CONDITION_SCORING | Sequential execution rule |
| 6 | HOOK_BYPASS_VIA_API | validate-bash.sh GitHub API block |
| 7 | TESTING_THEATER | detect-testing-theater.sh exit 2 |
| 8 | LAZY_DEFERRAL | Rules-layer (1 verification command required) |
| 9 | READONLY_HITL_HANDOFF | Rules-layer + operational-efficiency.md Rule 3 |
| 10 | PROCESS_WITHOUT_OUTCOME | validate-artifact-delta.sh exit 2 |
Full catalog: .claude/rules/anti-patterns-catalog.md (64 patterns, each with hook/prevention mechanism).
By Persona
Solo Developer
You want: AI agents with guardrails. Zero config to enforced governance.
Your path (10 minutes to governed development):
ln -s .adlc/.claude .claude # 1 command — 22 hooks active
bash scripts/governance-score.sh # See where you stand
# Start coding — hooks enforce automatically
Value: Governance that doesn't degrade. The same hooks that work for a solo developer work for a 100-person regulated enterprise. No process theater — exit code 2 is deterministic.
Platform Team Lead
You manage: A team using AI agents. Need visibility and accountability.
Your path (1 day to team governance dashboard):
git submodule add [email protected]:1xOps/adlc-framework.git .adlc
ln -s .adlc/.claude .claude # Team inherits governance via git
/metrics:daily-standup # Daily visibility into agent work
/metrics:sprint-retro # Weekly governance retrospective
Value: Governance as code — team members can't bypass hooks without modifying git-tracked files. Every agent action logged to tmp/. Board-level audit trail from day 1.
Compliance Officer
You need: Audit evidence that AI agent usage is governed.
Your path (1 hour to audit evidence package):
bash scripts/governance-score.sh # 9-check governance score with evidence
ls tmp/*/coordination-logs/ # PO+CA approval trail
# Review: 22 hooks × exit code 2 = deterministic enforcement
Value: Exit code 2 is auditable. tmp/coordination-logs/ provides dated JSON evidence of every coordination decision. Governance score is reproducible — run it anytime for current state. APRA CPS 234, SOC2, NERC CIP, DO-178C evidence in one command.
Known Governance Boundaries (Structural Limits)
| Surface | Hookable? | Mitigation |
|---|---|---|
| Tool calls (Edit/Write/Bash/Agent) | Yes — PreToolUse exit 2 | 22 active hooks |
| Text response stream | No — structural Claude Code limitation | Rules-layer advisory (.claude/rules/) |
| Subagent context | Partial — subagents get frontmatter only | Orchestrator holds full governance context |
| Context degradation at ~95% fill | Cannot be hooked | /compact + CLAUDE.md survives compaction |
Honest capability statement: Hooks enforce deterministically. Rules advise. The combination provides defense-in-depth — but text output governance relies on Claude following rules, not exit codes.
Full capability statement: .claude/rules/honest-capability-statement.md
Integration with Framework
This golden path is part of the larger ADLC framework:
- Principle I (Acceptable Agency): Agents prepare, Humans decide, Humans commit; validate-bash.sh enforces this at tool level
- Anti-Patterns Catalog: 64 patterns with hooks; the real sessions each pattern cost are documented
- Operational Efficiency Rules: 20% framework cap, 3-agent scoring, READONLY-first autonomy
- Constitution: 7 non-negotiable principles, 58 checkpoints, ≥99.5% pass rate target
Governance is not a separate concern — it's the operating model that makes AI agent work trustworthy.
Last Updated: March 2026 | Status: Active (22 hooks enforcing across all ADLC projects) | Maintenance: Anti-pattern catalog grows every sprint via retrospectives