Coordination Enforcement Architecture

Architectural rationale for ADLC's coordination enforcement mechanism. Documents why the current approach works, why alternatives fail, and the mechanism's measured effectiveness.

Problem: AI Agents Default to Standalone Mode

AI coding assistants have fundamental limitations that prevent reliable self-enforcement of coordination rules:

| Limitation | Impact |
| --- | --- |
| No persistent memory | Cannot remember coordination requirements between sessions |
| No self-awareness | Cannot detect or monitor their own operational mode |
| Default to helpful | Training optimises for direct responses, not coordination |
| Context dependency | Rules must be visible in every interaction to be followed |

Without enforcement, agents default to standalone mode approximately 80% of the time — bypassing the coordination chain that prevents expensive failures.

Failed Approaches

Detection Scripts

Scripts that monitor agent responses after the fact cannot prevent violations — only detect them. By the time a violation is detected, the standalone work product already exists and may have been acted upon.

Self-Enforcement

Agents cannot force themselves to follow coordination rules. Instructing an agent to "always coordinate first" is unreliable because:

  • The instruction competes with the agent's training to be directly helpful
  • Complex tasks cause the agent to "forget" coordination requirements mid-response
  • No mechanism exists for the agent to verify its own compliance

Cross-Session Memory

Coordination state cannot persist between sessions. Each new session starts from scratch, with no awareness of previous coordination decisions.

Manual Reminders

Relying on users to mention coordination in every prompt is inconsistent. Users forget, especially during complex debugging sessions where cognitive load is high.

Implemented Solution: System-Level Enforcement

The solution uses three reinforcing layers:

Layer 1: Context Injection (Rules Layer)

Coordination rules are injected into the agent's context via project configuration files (e.g., CLAUDE.md, governance rules). This ensures rules are visible in every interaction without user action.

Why it works: The agent sees coordination requirements as part of its operating environment, not as optional guidance.
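As an illustration, the rules layer can be a short fragment in a context file the agent loads at the start of every session. This sketch is hypothetical — the rule wording and the log path are assumptions, not the framework's actual content:

```markdown
<!-- CLAUDE.md (project root): injected into every session's context -->

## Coordination Rules (non-negotiable)
- Before implementation work, obtain coordinator approval and record it
  under `.coordination/` (illustrative path).
- Exempt: factual lookups, error debugging, status checks.
- If coordination state is missing, stop and ask; do not proceed standalone.
```

Because the fragment is part of project configuration, the rules reach the agent without the user having to repeat them in every prompt.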

Layer 2: Hook-Based Prevention (Tool Layer)

Pre-execution hooks intercept tool calls (file edits, bash commands, agent launches) and block them if coordination prerequisites are not met.

| Hook Event | What It Blocks |
| --- | --- |
| PreToolUse (Edit/Write) | File modifications without coordination logs |
| PreToolUse (Bash) | Shell commands without coordination context |
| PreToolUse (Agent) | Specialist agent launches without coordinator approval |

Why it works: Hooks operate at the tool level, not the text level. The agent cannot execute a blocked action regardless of its intent.
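A fail-closed PreToolUse hook can be sketched as a small script. This is a hedged example, assuming a hook runtime that delivers the intercepted tool call as JSON (with a `tool_name` field) and treats a non-zero exit as a block; the coordination-log path and gated tool names are illustrative:

```python
#!/usr/bin/env python3
"""Sketch of a PreToolUse hook: block tool calls lacking coordination evidence."""
import json
from pathlib import Path

# Hypothetical location of the current task's coordination log.
COORDINATION_LOG = Path(".coordination/current-task.log")

# Tool names assumed to require coordination before they may run.
GATED_TOOLS = {"Edit", "Write", "Bash", "Agent"}

def check_tool_call(tool_name: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a single intercepted tool call."""
    if tool_name in GATED_TOOLS and not COORDINATION_LOG.exists():
        return False, f"Blocked {tool_name}: no coordination log found."
    return True, "ok"

def main(raw_event: str) -> int:
    """Decide an exit code for one JSON-encoded hook event."""
    event = json.loads(raw_event)
    allowed, reason = check_tool_call(event.get("tool_name", ""))
    if not allowed:
        return 2  # fail closed: non-zero exit blocks the action
    return 0

# In a real hook the runtime supplies the event on stdin:
#   sys.exit(main(sys.stdin.read()))
```

The script never parses the agent's text output; it gates the action itself, which is why the agent's intent is irrelevant.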

Layer 3: Post-Execution Verification

Compliance checking after responses provides an audit trail and catches edge cases that slip through Layers 1-2.

Why this is Layer 3, not Layer 1: Verification alone is insufficient (see "Detection Scripts" above), but it complements the prevention layers.
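The verification layer can be sketched as a small audit pass over a finished response. This is a hedged sketch — the heuristic (treating a fenced code block as implementation content) and the log directory layout are assumptions:

```python
from pathlib import Path

# Built indirectly so the marker does not clash with documentation fences.
CODE_FENCE = chr(96) * 3  # the three-backtick code-fence marker

def audit_response(response_text: str, log_dir: Path) -> list[str]:
    """Return compliance findings for one completed agent response."""
    findings: list[str] = []
    has_code = CODE_FENCE in response_text  # crude implementation-content signal
    has_log = log_dir.exists() and any(log_dir.glob("*.log"))
    if has_code and not has_log:
        findings.append("implementation content delivered without a coordination log")
    return findings
```

A check like this cannot undo a violation — the standalone content already exists — which is exactly why it sits behind the prevention layers as an audit trail rather than in front of them.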

Hook Coverage and Limitations

What Hooks Can Block

  • File creation and modification
  • Shell command execution
  • Agent/subagent launches
  • Task creation

What Hooks Cannot Block

Text output is not hookable. No hook event fires on the agent's text response stream. This means an agent can deliver implementation content (code blocks, architecture decisions, analytical comparisons) in text without triggering any hook.

Mitigation: Rules-layer prohibition. The governance rules explicitly classify what content requires coordination and what is exempt (factual lookups, error debugging, status checks). This is enforced by training/context, not by hooks — making it the weakest link in the enforcement chain.

Measured Effectiveness

| Metric | Without Enforcement | With Enforcement |
| --- | --- | --- |
| Coordination compliance | ~20% | 80-90% |
| Standalone mode violations | ~80% of responses | 10-20% of responses |
| User correction effort | Every response | Occasional reminders |

The remaining 10-20% violation rate comes primarily from:

  1. Text output bypass — implementation content delivered in text (unhookable)
  2. Partial compliance — agent follows some coordination steps but skips others
  3. Context competition — complex tasks cause the agent to prioritise helpfulness over process

Approval Matrix

| Change Type | Required Coordination | Rationale |
| --- | --- | --- |
| Architecture decisions | Product Owner + Cloud Architect | Long-term maintainability |
| Production changes | Full coordination chain | Business continuity risk |
| Cost-impacting changes | Product Owner + Cloud Architect | Financial commitment |
| Security changes | Cloud Architect + Security | Compliance obligation |
| Documentation only | Exempt | No implementation risk |
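In code, such a matrix can live as plain data that a hook or coordinator consults. A sketch mirroring the table above — the role identifiers and fallback behaviour are assumptions, not the framework's actual schema:

```python
# Illustrative encoding of the approval matrix; role names are assumptions.
APPROVAL_MATRIX: dict[str, set[str]] = {
    "architecture": {"product_owner", "cloud_architect"},
    "production": {"full_chain"},
    "cost_impacting": {"product_owner", "cloud_architect"},
    "security": {"cloud_architect", "security"},
    "documentation_only": set(),  # exempt: no implementation risk
}

def required_approvers(change_type: str) -> set[str]:
    """Look up who must sign off; unknown types fail closed to the full chain."""
    return APPROVAL_MATRIX.get(change_type, {"full_chain"})
```

Failing closed on unknown change types matches the design principle below: an unclassified change gets the strictest gate, not a free pass.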

Anti-Patterns

| Anti-Pattern | Description | Why It Fails |
| --- | --- | --- |
| Rubber-stamp coordination | Running coordinators in background, proceeding immediately | Coordinators produce no value if output is ignored |
| Scope drift bypass | Using coordination logs from Task A to authorise Task B | Logs must be scoped to the current question |
| Text output bypass | Delivering implementation in text to avoid tool hooks | Unhookable surface; relies on rules-layer discipline |
| Hook workaround | Using alternative APIs to achieve what a hook blocks | Hooks are governance controls, not obstacles |

Design Principles

  1. Defence in depth — Three layers (context, hooks, verification) compensate for each layer's weaknesses
  2. Fail closed — When hooks detect missing coordination, they block the action (exit non-zero)
  3. User control — The human-in-the-loop can override any gate with explicit approval
  4. Audit trail — All coordination decisions are logged to evidence directories

Applicability

This enforcement architecture applies to any AI agent system where:

  • Agents must follow multi-step coordination workflows
  • The cost of uncoordinated action exceeds the cost of coordination overhead
  • Multiple specialist agents must work within defined boundaries
  • Human oversight is required for high-impact decisions

The key insight: coordination overhead is an investment, not waste. The enforcement mechanism prevents the most expensive failures — those discovered in production after uncoordinated changes.


Origin: Enterprise coordination enforcement experience. Adapted for ADLC framework-level guidance.