AI + Data + Cloud · Pillar 3
🏛️
ADLC Governance
Operating Model
22 Enforcement Hooks
22 deterministic hooks enforce governance at tool level — not advisory, not optional. Exit code 2 blocks violations.
⚡ 22 hooks active from first session — zero configuration
AI agents build governed; Humans ship trusted. 80% autonomy, 100% accountability.
Section Three (Ch.13-16)
Adopting a New Operating Model
“Building and scaling digital and AI solutions requires companies to be much faster and more flexible in the way they develop technology. Developing that operating model is perhaps the most complex aspect of a digital and AI transformation.”
Four characteristics differentiate agile pods: mission-based with measurable outcomes, cross-disciplinary with dedicated resources, autonomous and accountable for achieving impact, fast moving and focused on user needs. ADLC enforces this through 22 deterministic hooks that cannot be bypassed — not advisory guidance, but exit-code enforcement.
Platform Evolution
Hooks are deterministic (exit code 2) regardless of AI model quality. As models improve, fewer violations fire — but the safety net never weakens.
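To make "deterministic regardless of model quality" concrete, here is a minimal sketch of what a blocking hook could look like. The function name, message text, and blocked patterns are illustrative only; the real hooks listed below are not shown in this document.

```shell
# Hypothetical sketch of a PreToolUse-style hook as a shell function.
# It inspects a proposed Bash command and returns exit code 2 to block.
check_bash_command() {
  case "$1" in
    *"git commit"*|*"git push"*)
      echo "BLOCKED: Principle I: Agents prepare, Humans commit" >&2
      return 2 ;;   # exit code 2 = hard block, at any context level
  esac
  return 0          # anything else passes through
}

check_bash_command "ls -la" && echo "allowed"
check_bash_command "git commit -m wip" 2>/dev/null || echo "blocked: $?"
```

Because the decision is a string match and an exit code, the outcome is identical whether the calling model is at 5% or 95% context fill.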
ADLC Governance Golden Path
Each phase answers: Who does it, Why it matters, What if you skip it
1. Assess
Run governance score to see where you stand. Identify hook gaps and anti-pattern exposure.
bash scripts/governance-score.sh
Who: HITL reviews score, developer-experience-engineer agent identifies gaps
Why: You can't improve what you don't measure. 64 anti-patterns are documented — which ones apply to your project?
Skip? Invisible governance gaps, undetected anti-patterns, compliance surprises
→ Governance score (target: 9/9) + gap list
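The internals of `scripts/governance-score.sh` are not shown here, but the idea of the assess step, counting active hooks against the expected 22 and reporting the gap, can be sketched as follows. The `governance_gap` function and directory layout are assumptions for illustration.

```shell
# Hypothetical sketch of the assess step: count hook scripts on disk and
# report the gap against the expected 22. Layout is illustrative only.
governance_gap() {
  local hooks_dir="$1" expected=22 active
  active=$(find "$hooks_dir" -name '*.sh' 2>/dev/null | wc -l)
  echo $((expected - active))   # 0 means all 22 hooks are present
}

hooks=$(mktemp -d)
touch "$hooks/enforce-coordination.sh" "$hooks/validate-bash.sh"
echo "missing hooks: $(governance_gap "$hooks")"   # prints "missing hooks: 20"
```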
2. Configure
Enable hooks, set enforcement thresholds, wire up coordination agents.
ln -s .adlc/.claude .claude (submodule pattern) or git submodule add
Who: developer-experience-engineer agent configures, HITL approves settings
Why: Hooks are code, not process. One symlink activates 22 enforcement points. Advisory rules load automatically.
Skip? Manual governance, inconsistent enforcement, STANDALONE_EXECUTION violations
→ 22 hooks active + settings.json configured + agent team ready
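A quick way to verify the configure step took effect is to check that `.claude` actually resolves into the `.adlc` submodule. The `check_wiring` function below is a hypothetical sketch, not part of the framework:

```shell
# Hypothetical post-configure check: .claude must be a symlink pointing
# into the .adlc submodule. Function name is illustrative.
check_wiring() {
  [ -L "$1/.claude" ] && [ "$(readlink "$1/.claude")" = ".adlc/.claude" ]
}

repo=$(mktemp -d)
mkdir -p "$repo/.adlc/.claude"
ln -s .adlc/.claude "$repo/.claude"      # the one-symlink activation step
check_wiring "$repo" && echo "hooks wired"
```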
3. Enforce
Every tool call passes through PreToolUse hooks. Exit 2 blocks violations.
Daily workflow — hooks fire automatically on Edit/Write/Bash/Agent tool calls
Who: Hooks enforce deterministically, HITL overrides when needed
Why: Advisory guidance degrades with context fill (~95%). Hooks don't. Exit code 2 is exit code 2 at any context level.
Skip? Context degradation bypasses advisory rules, agents drift from governance
→ Zero STANDALONE_EXECUTION + Zero NATO violations + Principle I enforced
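The NATO (No Action Talk Only) check above can be sketched in a few lines: a completion claim must cite an evidence path under `tmp/`, otherwise the hook blocks with exit code 2. The `nato_check` name and message text are illustrative, not the real `detect-nato-violation.sh`:

```shell
# Hypothetical sketch of NATO detection: claims without evidence paths
# under tmp/ are blocked with exit code 2.
nato_check() {
  case "$1" in
    *tmp/*) return 0 ;;   # evidence path cited: claim passes
  esac
  echo "BLOCKED: completion claim without evidence in tmp/ (NATO)" >&2
  return 2
}

nato_check "Done, see tmp/scores/qa.md" && echo "claim accepted"
nato_check "Done, trust me" 2>/dev/null || echo "claim blocked: $?"
```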
4. Audit
Score deliverables with PO+CA+QA agents. Detect anti-patterns. Track PDCA cycles.
Sequential scoring: product-owner → cloud-architect → qa-engineer (3 agents, 1 round)
Who: 3 scoring agents sequential (not parallel — prevents RACE_CONDITION_SCORING)
Why: Fix-then-score, not score-then-fix. Each scoring round must follow shipped code changes.
Skip? SCORING_THEATER — re-scoring unchanged artifacts, manufactured deltas
→ Scoring evidence in tmp/ + gap list + PDCA cycle count
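The sequential-scoring rule can be sketched as a plain loop: each agent runs only after the previous one returns, so no two scores can race. `score_with` is a stand-in for the real Task delegation, and the output path is illustrative:

```shell
# Hypothetical sketch of sequential scoring: agents run one at a time,
# in a fixed order, so no two scoring rounds race each other.
score_with() { echo "[$1] score written to tmp/scores/$1.md"; }

for agent in product-owner cloud-architect qa-engineer; do
  score_with "$agent"   # next round starts only after this one returns
done
```

Running the agents in parallel would be faster, but order is the point: a later score must see the earlier scores' fixes, which is exactly what RACE_CONDITION_SCORING breaks.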
5. Improve
Extract patterns from sessions. Update anti-pattern catalog. Ratchet governance score.
/speckit.retrospective + /speckit.improve
Who: Full agent team reflects, HITL decides what to change
Why: The 64 anti-patterns each cost real sessions. Documenting them prevents repeat failures across all projects.
Skip? Same mistakes repeated, anti-pattern catalog stale, governance score plateaus
→ New anti-patterns documented + improvement backlog updated + governance score ratcheted
Start Here
Spec-Driven workflow and product skills — copy/paste to start
Solo Developer
You want AI agents with guardrails. Zero config to enforced governance.
1. ln -s .adlc/.claude .claude
2. bash scripts/governance-score.sh
3. Start coding — hooks enforce automatically
⚡ Governed development in <10 minutes
Platform Team Lead
You manage a team using AI agents. Need visibility and accountability.
1. git submodule add adlc-framework
2. /metrics:daily-standup
3. /metrics:sprint-retro
⚡ Team governance dashboard in 1 day
Compliance Officer
You need audit evidence that AI agent usage is governed.
1. bash scripts/governance-score.sh
2. ls tmp/*/coordination-logs/
3. Review: 22 hooks × exit code 2 = deterministic enforcement
⚡ Audit evidence package in 1 hour
See it live
See agents in action. Browse talent + live projects.
1. /talent-bench
2. /project?id=dataops-lakehouse
⚡ Deep-linked references
Component Map
14 components implementing this pillar
| Type | Name | Why | Business Value |
|---|---|---|---|
| Hook | enforce-coordination.sh | Block specialist tool calls before PO+CA approval | Zero STANDALONE_EXECUTION violations at tool level |
| Hook | validate-bash.sh | Block git mutations and GitHub API git object creation | Principle I: Agents prepare, Humans commit — enforced |
| Hook | detect-nato-violation.sh | Block completion claims without evidence paths in tmp/ | NATO (No Action Talk Only) eliminated |
| Hook | enforce-specialist-delegation.sh | Block raw Edit/Write on domain files without Task delegation | COORDINATION_WITHOUT_DELEGATION prevented |
| Hook | detect-testing-theater.sh | Block new mocks without assertions, coverage omit expansion | Test quality maintained — no inflated pass rates |
| Hook | enforce-docker-registry.sh | Block non-compliant FROM and image: references | Supply chain integrity — only nnthanh101/* images |
| Hook | validate-rescore-freshness.sh | Block re-scoring before artifacts have changed | SCORING_THEATER and RACE_CONDITION_SCORING prevented |
| Hook | validate-docs-sync.sh | Block writes that destroy HAND-CURATED markers | Persona-optimized CLI docs survive regeneration |
| Rule | adlc-governance.md | Authority chain, NATO prevention, content classification | Single source of truth for all governance decisions |
| Rule | principle-i-acceptable-agency.md | Agents prepare, Humans decide, Humans commit | Legal accountability + audit trail integrity |
| Rule | anti-patterns-catalog.md | 64 documented anti-patterns with prevention mechanisms | Institutional memory — each pattern cost real sessions |
| Rule | operational-efficiency.md | 20% framework cap, 3-agent scoring, READONLY-first | 80% time on product code, not coordination theater |
| Hook | enforce-pdca-cycle | Track artifact hashes across PDCA rounds | PROCESS_WITHOUT_OUTCOME blocked — fixes must precede rescores |
| Hook | load-project-context | Inject VERSION, project paths, profiles at SessionStart | Consistent context across all sessions and agents |
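The hash-tracking idea behind enforce-pdca-cycle and validate-rescore-freshness can be sketched in a few lines: hash the artifact each round and block a re-score if the hash has not changed since the last scored round. The `fresh_enough` name is illustrative, and `sha256sum` availability is assumed:

```shell
# Hypothetical sketch of rescore freshness: block a re-score when the
# artifact hash is unchanged since the last scored round.
fresh_enough() {
  local now_hash
  now_hash=$(sha256sum "$1" | cut -d' ' -f1)
  if [ "$now_hash" = "$2" ]; then
    echo "BLOCKED: artifact unchanged since last score (SCORING_THEATER)" >&2
    return 2
  fi
  echo "$now_hash"   # new hash: record it for the next round
}

artifact=$(mktemp)
echo "v1" > "$artifact"
h1=$(fresh_enough "$artifact" "none")                        # first score passes
fresh_enough "$artifact" "$h1" 2>/dev/null || echo "re-score blocked: $?"
echo "v2" > "$artifact"
fresh_enough "$artifact" "$h1" >/dev/null && echo "re-score allowed after change"
```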
Risk & Scalability
What happens without this pillar, and why ADLC scales from 1 person to enterprise
What if you skip?
Industry research shows experienced agile teams deliver +27% productivity, -30% schedule slips, and -70% defects versus non-agile teams. But implementing agile rituals without corresponding changes to how objectives are set, teams are configured, and accountability for results is enforced leads to poor outcomes. ADLC hooks ARE that enforcement layer.
Scalability
22 hooks enforce governance deterministically via exit codes — not advisory prompts. The same hooks work for a solo developer or a regulated enterprise. Governance scales because it’s code, not process.
Industry Relevance
ANZ enterprise verticals where this pillar is most critical
FSI
APRA CPS 234 + SOC2 require auditable governance trails — hooks provide evidence
Energy
NERC CIP change management requires documented approval chains
Telecom
Multi-team coordination across network operations requires governance at scale
Aviation
DO-178C Level A requires deterministic process enforcement — not advisory
Digital Products
Real products built and governed by this pillar
Explore Pillar 3 Components
Browse the full component catalog or read the documentation
AI agents build governed; Humans ship trusted.