AI + Data + Cloud · Pillar 3

ADLC Governance

Operating Model

22 Enforcement Hooks

22 deterministic hooks enforce governance at the tool level — not advisory, not optional. Exit code 2 blocks violations.

22 hooks active from first session — zero configuration
AI agents build governed; Humans ship trusted. 80% autonomy, 100% accountability.
Section Three (Ch.13-16)

Adopting a New Operating Model

Building and scaling digital and AI solutions requires companies to be much faster and more flexible in the way they develop technology. Developing that operating model is perhaps the most complex aspect of a digital and AI transformation.

Four characteristics differentiate agile pods: they are mission-based with measurable outcomes, cross-disciplinary with dedicated resources, autonomous and accountable for impact, and fast-moving and focused on user needs. ADLC enforces this through 22 deterministic hooks that cannot be bypassed — not advisory guidance, but exit-code enforcement.

Platform Evolution

Hooks are deterministic (exit code 2) regardless of AI model quality. As models improve, fewer violations fire — but the safety net never weakens.
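A hook of this kind can be sketched as a small shell function; the function name, blocked pattern, and JSON input shape below are illustrative assumptions — only the exit-code convention (0 allows, 2 blocks) comes from the text:

```shell
#!/usr/bin/env bash
# Minimal PreToolUse hook sketch (names and patterns are illustrative).
# Convention: return 0 to allow the tool call, 2 to block it.

check_tool_call() {
  local cmd="$1"
  # Block git mutations: "Agents prepare, Humans commit" (Principle I).
  if printf '%s' "$cmd" | grep -qE '\bgit[[:space:]]+(commit|push)\b'; then
    echo "BLOCKED: git mutations require a human (Principle I)" >&2
    return 2
  fi
  return 0
}

# In a real hook the command would arrive as JSON on stdin, e.g.:
#   cmd="$(jq -r '.tool_input.command // empty')"
```

Because the check is a regex over the proposed command, the result is identical at any context-fill level — which is the point of the safety net.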

ADLC Governance Golden Path

Each phase answers three questions: who does it, why it matters, and what happens if you skip it.

1. Assess

Run governance score to see where you stand. Identify hook gaps and anti-pattern exposure.

bash scripts/governance-score.sh
Who: HITL reviews score, developer-experience-engineer agent identifies gaps
Why: You can't improve what you don't measure. 64 anti-patterns are documented — which ones apply to your project?
Skip? Invisible governance gaps, undetected anti-patterns, compliance surprises
Governance score (target: 9/9) + gap list
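A score like this can gate CI; the sketch below assumes a "Governance score: N/9" line in the script's output, which is an illustrative guess at the report format:

```shell
# Hypothetical CI gate on the governance score. The "N/9" output format
# is an assumption for illustration, not the script's documented output.
score_gate() {
  local report="$1" target="${2:-9}"
  local score
  score="$(printf '%s' "$report" | grep -Eo '[0-9]+/9' | head -n1)"
  score="${score%%/*}"
  [ "${score:-0}" -ge "$target" ]
}

# Wiring it to the real script from the Assess step:
#   score_gate "$(bash scripts/governance-score.sh)" || exit 1
```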
2. Configure

Enable hooks, set enforcement thresholds, wire up coordination agents.

ln -s .adlc/.claude .claude (submodule pattern) or git submodule add
Who: developer-experience-engineer agent configures, HITL approves settings
Why: Hooks are code, not process. One symlink activates 22 enforcement points. Advisory rules load automatically.
Skip? Manual governance, inconsistent enforcement, STANDALONE_EXECUTION violations
22 hooks active + settings.json configured + agent team ready
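The activation step above can be sketched as follows; the settings.json shape shown in the comments follows the Claude Code hook-configuration style and is an assumption — consult the framework's own docs for the exact schema:

```shell
# Activation sketch (paths come from the Configure step above).

# Option A: link an existing vendored checkout
[ -e .claude ] || ln -s .adlc/.claude .claude

# Option B: vendor the framework first, then link
#   git submodule add <adlc-framework-repo-url> .adlc
#   ln -s .adlc/.claude .claude

# Hooks activate because settings.json wires them to PreToolUse.
# Illustrative shape (an assumption, not the canonical schema):
#   {
#     "hooks": {
#       "PreToolUse": [
#         { "matcher": "Bash",
#           "hooks": [ { "type": "command",
#                        "command": ".claude/hooks/validate-bash.sh" } ] }
#       ]
#     }
#   }
```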
3. Enforce

Every tool call passes through PreToolUse hooks. Exit 2 blocks violations.

Daily workflow — hooks fire automatically on Edit/Write/Bash/Agent tool calls
Who: Hooks enforce deterministically, HITL overrides when needed
Why: Advisory guidance degrades with context fill (~95%). Hooks don't. Exit code 2 is exit code 2 at any context level.
Skip? Context degradation bypasses advisory rules, agents drift from governance
Zero STANDALONE_EXECUTION + Zero NATO violations + Principle I enforced
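Hooks can also be exercised by hand outside a session, by piping a tool-call payload into them and reading the exit code; the payload shape and hook path here are assumptions based on the PreToolUse convention described above:

```shell
# Drive a hook manually: pipe a JSON tool-call payload into it and
# inspect the exit code (0 = allowed, 2 = blocked).
simulate_tool_call() {
  local hook="$1" payload="$2"
  printf '%s' "$payload" | bash "$hook"
}

# Illustrative check that a git mutation is blocked:
#   simulate_tool_call .claude/hooks/validate-bash.sh \
#     '{"tool_name":"Bash","tool_input":{"command":"git push origin main"}}'
#   echo $?   # expect 2
```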
4. Audit

Score deliverables with PO+CA+QA agents. Detect anti-patterns. Track PDCA cycles.

Sequential scoring: product-owner → cloud-architect → qa-engineer (3 agents, 1 round)
Who: 3 scoring agents sequential (not parallel — prevents RACE_CONDITION_SCORING)
Why: Fix-then-score, not score-then-fix. Each scoring round must follow shipped code changes.
Skip? SCORING_THEATER — re-scoring unchanged artifacts, manufactured deltas
Scoring evidence in tmp/ + gap list + PDCA cycle count
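The sequential constraint can be made structural in a small driver script; the agent names and the tmp/ evidence path come from the Audit step above, while the invocation itself is a placeholder assumption:

```shell
# Run the three scoring agents strictly one after another, recording
# evidence under tmp/. The echo is a placeholder for the real agent call.
run_scoring_round() {
  local outdir="tmp/scoring-$(date +%s)"
  mkdir -p "$outdir"
  for agent in product-owner cloud-architect qa-engineer; do
    # Sequential by construction: the loop cannot overlap invocations,
    # which is what prevents RACE_CONDITION_SCORING.
    echo "score recorded by $agent" > "$outdir/$agent.md"
  done
  echo "$outdir"
}
```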
5. Improve

Extract patterns from sessions. Update anti-pattern catalog. Ratchet governance score.

/speckit.retrospective + /speckit.improve
Who: Full agent team reflects, HITL decides what to change
Why: The 64 anti-patterns each cost real sessions. Documenting them prevents repeat failures across all projects.
Skip? Same mistakes repeated, anti-pattern catalog stale, governance score plateaus
New anti-patterns documented + improvement backlog updated + governance score ratcheted

Start Here

Spec-Driven workflow and product skills — copy/paste to start

Solo Developer
You want AI agents with guardrails. Zero config to enforced governance.
1. ln -s .adlc/.claude .claude
2. bash scripts/governance-score.sh
3. Start coding — hooks enforce automatically
Governed development in <10 minutes
Platform Team Lead
You manage a team using AI agents. Need visibility and accountability.
1. git submodule add adlc-framework
2. /metrics:daily-standup
3. /metrics:sprint-retro
Team governance dashboard in 1 day
Compliance Officer
You need audit evidence that AI agent usage is governed.
1. bash scripts/governance-score.sh
2. ls tmp/*/coordination-logs/
3. Review: 22 hooks × exit code 2 = deterministic enforcement
Audit evidence package in 1 hour
See it live
See agents in action. Browse talent + live projects.
1. /talent-bench
2. /project?id=dataops-lakehouse
Deep-linked references

Component Map

14 components implementing this pillar

| Type | Name | Why | Business Value |
| --- | --- | --- | --- |
| Hook | enforce-coordination.sh | Block specialist tool calls before PO+CA approval | Zero STANDALONE_EXECUTION violations at tool level |
| Hook | validate-bash.sh | Block git mutations and GitHub API git object creation | Principle I: Agents prepare, Humans commit — enforced |
| Hook | detect-nato-violation.sh | Block completion claims without evidence paths in tmp/ | NATO (No Action Talk Only) eliminated |
| Hook | enforce-specialist-delegation.sh | Block raw Edit/Write on domain files without Task delegation | COORDINATION_WITHOUT_DELEGATION prevented |
| Hook | detect-testing-theater.sh | Block new mocks without assertions, coverage omit expansion | Test quality maintained — no inflated pass rates |
| Hook | enforce-docker-registry.sh | Block non-compliant FROM and image: references | Supply chain integrity — only nnthanh101/* images |
| Hook | validate-rescore-freshness.sh | Block re-scoring before artifacts have changed | SCORING_THEATER and RACE_CONDITION_SCORING prevented |
| Hook | validate-docs-sync.sh | Block writes that destroy HAND-CURATED markers | Persona-optimized CLI docs survive regeneration |
| Rule | adlc-governance.md | Authority chain, NATO prevention, content classification | Single source of truth for all governance decisions |
| Rule | principle-i-acceptable-agency.md | Agents prepare, Humans decide, Humans commit | Legal accountability + audit trail integrity |
| Rule | anti-patterns-catalog.md | 64 documented anti-patterns with prevention mechanisms | Institutional memory — each pattern cost real sessions |
| Rule | operational-efficiency.md | 20% framework cap, 3-agent scoring, READONLY-first | 80% time on product code, not coordination theater |
| Hook | enforce-pdca-cycle | Track artifact hashes across PDCA rounds | PROCESS_WITHOUT_OUTCOME blocked — fixes must precede rescores |
| Hook | load-project-context | Inject VERSION, project paths, profiles at SessionStart | Consistent context across all sessions and agents |
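The freshness mechanism behind validate-rescore-freshness and enforce-pdca-cycle can be sketched as content hashing over the artifact set; the state-file path, hashing choice, and function names here are assumptions:

```shell
# Sketch: hash the artifact set and refuse a new scoring round if
# nothing changed since the last one (SCORING_THEATER prevention).
artifacts_hash() {
  # Stable content hash; sort -z makes it order-independent.
  find "$1" -type f -print0 | sort -z | xargs -0 cat | sha256sum | cut -d' ' -f1
}

rescore_allowed() {
  local dir="$1" state="${2:-tmp/.last-score-hash}"
  local now prev
  now="$(artifacts_hash "$dir")"
  prev="$(cat "$state" 2>/dev/null || true)"
  if [ "$now" = "$prev" ]; then
    echo "BLOCKED: artifacts unchanged since last score (SCORING_THEATER)" >&2
    return 2
  fi
  mkdir -p "$(dirname "$state")"
  printf '%s' "$now" > "$state"
  return 0
}
```

The same hash trail doubles as PDCA evidence: a new round is only possible after a fix has actually changed the artifacts.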

Risk & Scalability

What happens without this pillar, and why ADLC scales from 1 person to enterprise

What if you skip?

Industry research: experienced agile teams deliver +27% productivity, -30% schedule slips, and -70% defects versus non-agile teams. But simply adopting agile rituals without corresponding changes to how objectives are set, teams are configured, and accountability for results is enforced leads to poor outcomes. ADLC hooks ARE that enforcement layer.

Scalability

22 hooks enforce governance deterministically via exit codes — not advisory prompts. The same hooks work for a solo developer or a regulated enterprise. Governance scales because it’s code, not process.

Industry Relevance

ANZ enterprise verticals where this pillar is most critical

FSI
APRA CPS 234 + SOC2 require auditable governance trails — hooks provide evidence
Energy
NERC CIP change management requires documented approval chains
Telecom
Multi-team coordination across network operations requires governance at scale
Aviation
DO-178C Level A requires deterministic process enforcement — not advisory

Continuous Improvement Flywheel

Each pillar feeds the next — creating a self-reinforcing cycle of capability building

Pillar 3 feeds Pillar 4
ADLC Governance → CloudOps & Infrastructure

Governed agents can safely automate infrastructure. Hooks prevent destructive cloud actions at the tool level.

Digital Products

Real products built and governed by this pillar

Explore Pillar 3 Components

Browse the full component catalog or read the documentation

AI agents build governed; Humans ship trusted.