AI + Data + Cloud · Pillar 2

Talent Bench


37 AI Agents

37 AI agents available instantly — 4 decision-makers (opus), 15 execution specialists (sonnet), 7 orchestrators (sonnet), 4 operators (haiku), 4 digital transformation specialists (sonnet), 3 XOps agents (sonnet). No hiring pipeline. No ramp time.

15 agents available in <5 minutes via npx install
AI agents build governed. Humans ship trusted. 80% autonomy, 100% accountability.
Section Two (Ch.8-12)

Building Your Talent Bench

No company can outsource its way to digital excellence. Being digital means having your own bench of digital talent: product owners, experience designers, data engineers, data scientists, and software developers.

The goal should be to have 70-80% of your digital talent be in-house. ADLC addresses this with 37 AI agents available instantly — no hiring pipeline, no lead time. Decision Layer (4 opus), Execution Layer (15 sonnet), Orchestrators (7 sonnet), Operations (4 haiku), Digital Transformation (4 sonnet), XOps (3 sonnet).

Platform Evolution

Agent frontmatter is model-version-agnostic. Upgrade Claude Opus/Sonnet/Haiku — agents get smarter automatically. No code changes needed.
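As a minimal sketch, a model-version-agnostic agent definition could look like the following. The frontmatter field names (`name`, `model`) and the file path are assumptions for illustration, not the framework's actual schema:

```shell
# Write a hypothetical agent definition whose frontmatter names a model
# alias ("opus") instead of a pinned version, so upgrading the underlying
# model needs no change to the agent file.
mkdir -p .claude/agents
cat > .claude/agents/product-owner.md <<'EOF'
---
name: product-owner   # illustrative field names, not the real schema
model: opus           # tier alias, not a pinned version string
---
Validates business value before any specialist executes. ALWAYS FIRST.
EOF

# The definition names a tier, never a version number.
grep 'model:' .claude/agents/product-owner.md
```

Because only the alias appears in the file, a model upgrade is a platform-side event, not a code change.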

Talent Bench Golden Path

Each phase answers: Who does it, Why it matters, What if you skip it

Phase 1: Assess

Evaluate current talent gaps against the 15-agent model

bash scripts/governance-score.sh
Who: HITL reviews gaps, developer-experience-engineer maps roles to agents
Why: You cannot build a team without knowing what you lack. The 3-tier model (opus/sonnet/haiku) ensures the right cost-performance tradeoff per role.
Skip? Build without specialists; product-owner decisions go unchallenged; security gaps undetected
Governance score + capability gap report
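The gap check in this phase could be sketched as follows; the role list, directory layout, and output format are assumptions, and the real `scripts/governance-score.sh` may work quite differently:

```shell
# Hypothetical capability-gap report: list required roles, then flag any
# role that has no agent definition under .claude/agents/.
required="product-owner cloud-architect qa-engineer security-compliance-engineer"
mkdir -p .claude/agents
touch .claude/agents/product-owner.md .claude/agents/cloud-architect.md  # sample bench

gaps=0
for role in $required; do
  if [ ! -f ".claude/agents/$role.md" ]; then
    echo "GAP: no agent definition for $role"
    gaps=$((gaps + 1))
  fi
done
echo "capability gaps: $gaps"
```

With the two sample definitions present, the report flags the remaining two roles and prints `capability gaps: 2`.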
Phase 2: Compose

Select agents for the project and map Decision/Execution/Operations tiers

/speckit.plan (agent delegation matrix)
Who: product-owner selects scope, cloud-architect maps architecture needs to agent capabilities
Why: Agent selection must match project scope. Under-staffing wastes HITL time on tasks agents can handle; over-staffing wastes tokens on unnecessary coordination.
Skip? Wrong agents assigned, COORDINATION_WITHOUT_DELEGATION violations, specialist domain mismatches
Agent roster with tier assignments + delegation matrix
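A tier lookup for composing the roster might be sketched like this; the assignments follow the component map in this pillar, but the selection logic is illustrative, not what /speckit.plan actually does:

```shell
# Hypothetical tier lookup: decision work goes to opus, routine
# operations to haiku, everything else to sonnet execution specialists.
tier_of() {
  case "$1" in
    product-owner|cloud-architect|security-compliance-engineer)
      echo "decision (opus)" ;;
    sre-engineer|finops-engineer|qa-automation-engineer|technical-writer)
      echo "operations (haiku)" ;;
    *)
      echo "execution (sonnet)" ;;
  esac
}

# Print a delegation roster for a sample project scope.
for agent in product-owner cloud-architect python-engineer qa-engineer sre-engineer; do
  printf '%-24s %s\n' "$agent" "$(tier_of "$agent")"
done
```

Encoding the tier rule in one place keeps the cost-performance tradeoff consistent: no opus tokens spent on routine operations, no haiku agent making architecture decisions.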
Phase 3: Coordinate

Establish PO+CA authority chain and wire enforcement hooks

enforce-coordination.sh + enforce-specialist-delegation.sh
Who: product-owner + cloud-architect approve before specialists execute
Why: Authority chain prevents STANDALONE_EXECUTION (anti-pattern #1). Hooks are deterministic — advisory rules degrade under context load, hooks do not.
Skip? STANDALONE_EXECUTION violations; agents work without business validation; ungoverned autonomous decisions
Coordination hooks active + authority chain enforced
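The deterministic nature of the hooks can be sketched as a simple precondition check; the file names and approval mechanism here are assumptions, and the real enforce-coordination.sh may differ:

```shell
# Hypothetical enforcement hook: block specialist execution unless both
# product-owner and cloud-architect have signed off. Because this is a
# plain file-existence check, it cannot degrade under context load.
approvals="tmp/approvals"
mkdir -p "$approvals"
touch "$approvals/product-owner.approved"   # simulate PO sign-off only

check_chain() {
  for gate in product-owner cloud-architect; do
    if [ ! -f "$approvals/$gate.approved" ]; then
      echo "BLOCKED: missing $gate approval (STANDALONE_EXECUTION)"
      return 1
    fi
  done
  echo "authority chain satisfied"
}

check_chain || true   # blocked: cloud-architect has not approved yet
touch "$approvals/cloud-architect.approved"
check_chain           # now the chain is satisfied
```

The first call prints a BLOCKED line naming the missing gate; only after both approvals exist does the hook let specialists proceed.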
Phase 4: Execute

Specialist agents deliver within their domains via governed delegation

/speckit.implement
Who: AI specialist agents build autonomously, HITL reviews evidence at Phase 3+ gates
Why: Governed execution means agents prepare, humans decide, humans commit. Each agent works in its domain with full context.
Skip? Uncoordinated work, agents stepping on each other, duplicate effort, NATO violations
Working software + test results + evidence in tmp/
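Evidence capture at this phase could look something like this sketch; the tmp/ layout and file format are assumptions for illustration:

```shell
# Hypothetical evidence drop: persist each agent's test results under
# tmp/ so the HITL reviewer at the Phase 3+ gate inspects artifacts
# rather than trusting claims.
mkdir -p tmp/evidence
run_id="demo-run"   # a real run would likely use a timestamp or sprint id
{
  echo "agent: python-engineer"
  echo "phase: execute"
  echo "tests: 42 passed, 0 failed"   # sample figures for the sketch
} > "tmp/evidence/$run_id.txt"

cat "tmp/evidence/$run_id.txt"
```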
Phase 5: Score

Sequential PO+CA+QA scoring — 3 agents, 1 foreground round

/speckit.retrospective
Who: 3 scoring agents run sequentially (PO then CA then QA), HITL reviews consensus
Why: Fix-then-score prevents SCORING_THEATER. Sequential execution prevents RACE_CONDITION_SCORING. 3 agents produce sufficient signal.
Skip? SCORING_THEATER — manufactured deltas; RACE_CONDITION_SCORING — contradictory evidence from parallel reads
Consensus score + gap analysis + corrective actions
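The sequential round can be sketched as a strict loop; the scores are sample values and the consensus rule (a simple average) is an assumption:

```shell
# Hypothetical scoring round: PO, then CA, then QA, strictly one after
# another, so no two scorers read evidence concurrently (the race that
# RACE_CONDITION_SCORING names).
total=0
for scorer in PO CA QA; do
  case "$scorer" in
    PO) score=8 ;;   # sample values for the sketch
    CA) score=7 ;;
    QA) score=9 ;;
  esac
  echo "$scorer scored $score/10"
  total=$((total + score))
done
echo "consensus: $((total / 3))/10"
```

Averaging 8, 7, and 9 yields a consensus of 8/10; the ordering, not the arithmetic, is the point.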
Phase 6: Evolve

Upgrade agent frontmatter, retire underperformers, add new specialists

/speckit.improve
Who: developer-experience-engineer proposes changes, HITL decides what ships
Why: Agent frontmatter is model-version-agnostic. Upgrade the underlying model, agents get smarter automatically. No code changes needed.
Skip? Stale agent definitions, no team improvement, capabilities plateau while models advance
Updated agent definitions + improvement backlog items

Start Here

Spec-Driven workflow and product skills — copy/paste to start

Solo Developer
You need a full team from day one. 15 agents, zero hiring, zero ramp time.
1. ln -s .adlc/.claude .claude
2. /speckit.specify
3. /speckit.implement
Full agent team in <5 minutes
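Step 1 above is the only shell command; a sketch of the bootstrap, assuming the .adlc checkout already exists:

```shell
# Simulate an installed framework checkout, then link its agent
# definitions into the project (step 1 of the solo-developer path).
mkdir -p .adlc/.claude/agents
[ -e .claude ] || ln -s .adlc/.claude .claude

# Verify the link resolves before opening an agent session.
ls -d .claude/agents >/dev/null && echo "agent team linked"
# Steps 2-3 (/speckit.specify, /speckit.implement) are slash commands
# issued inside the Claude session, not in the shell.
```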
Platform Team Lead
You need agent governance across your engineering team. Visibility into who does what.
1. git submodule add adlc-framework .adlc
2. bash scripts/governance-score.sh
3. /metrics:daily-standup
Team governance visibility in 1 day
Enterprise Architect
You need to map AI agents to your org capability model. 3-tier talent bench.
1. Review .claude/agents/*.md
2. /speckit.plan
3. /speckit.improve
Agent-capability mapping in 1 day

Component Map

15 components implementing this pillar

| Type | Name | Why | Business Value |
| --- | --- | --- | --- |
| Agent | product-owner (opus) | Business validation and sprint governance — ALWAYS FIRST | Prevents misaligned delivery; enforces business-led roadmap |
| Agent | cloud-architect (opus) | Technical design and deployment strategy — ALWAYS SECOND | Architecture approval before any specialist execution |
| Agent | security-compliance-engineer (opus) | SOC2, APRA CPS 234, ISO 27001 compliance enforcement | Regulatory risk eliminated at design time |
| Agent | python-engineer (sonnet) | CloudOps Runbooks PyPI package, CLI commands, boto3 | Production-grade Python with TDD and battle tests |
| Agent | infrastructure-engineer (sonnet) | CDK + Terraform IaC for AWS multi-account landing zones | Reproducible, auditable infrastructure as code |
| Agent | kubernetes-engineer (sonnet) | K3s, ArgoCD, Helm chart lifecycle management | GitOps delivery pipeline for containerised workloads |
| Agent | qa-engineer (sonnet) | 3-tier test strategy: snapshot / LocalStack / AWS | 90-100% bug detection before production |
| Agent | observability-engineer (sonnet) | DORA metrics, daily standup, sprint ceremonies | Data-driven velocity and quality measurement |
| Agent | fullstack-engineer (sonnet) | Docusaurus docs, React components, marketplace UI | Terminal-inspired, WCAG 2.1 AA compliant interfaces |
| Agent | devops-security-engineer (sonnet) | CI/CD pipelines, supply chain hardening, SBOM | SLSA Level 2+ provenance for every release |
| Agent | sre-engineer (haiku) | Incident response, runbook execution, on-call automation | MTTR reduction through codified operational procedures |
| Agent | finops-engineer (haiku) | FinOps cost attribution, rightsizing, waste detection | Autonomous cost reduction within READONLY guardrails |
| Agent | qa-automation-engineer (haiku) | Playwright E2E, BDD step definitions, smoke tests | Regression prevention at every sprint boundary |
| Agent | technical-writer (haiku) | CLI docs sync, ADR generation, hand-curated content protection | Docs always match code — HAND_CURATED_CONTENT_DESTRUCTION prevented |
| Agent | developer-experience-engineer (sonnet) | Framework architecture, agent scoring, PDCA facilitation | Continuous improvement of the ADLC framework itself |

Risk & Scalability

What happens without this pillar, and why ADLC scales from 1 person to enterprise

What if you skip?

Industry insight: no company can outsource its way to digital excellence. External contractors lack business context, and technologists need a deep understanding of that context to build great digital solutions. Without an in-house bench, you are perpetually dependent.

Scalability

ADLC provides the full Digital Factory pod pattern with AI agents: 3 decision-layer agents (opus), 8 execution-layer specialists (sonnet), 4 operations-tier workers (haiku). Agent frontmatter is model-agnostic — upgrade the underlying model, agents improve automatically.

Industry Relevance

ANZ enterprise verticals where this pillar is most critical

FSI
Compliance-specialist agents (opus tier) for APRA/SOC2/PCI-DSS governance
Energy
Infrastructure agents automate multi-region grid operations IaC
Telecom
Kubernetes and SRE agents manage 5G edge deployment at scale
Aviation
QA and security agents enforce DO-178C and AS9100 quality gates

Continuous Improvement Flywheel

Each pillar feeds the next — creating a self-reinforcing cycle of capability building

Pillar 2 feeds Pillar 3
Talent Bench → ADLC Governance

More agents require stronger governance. Hooks scale with team size — 15 agents today, 30+ tomorrow.

Digital Products

Real products built and governed by this pillar

Explore Pillar 2 Components

Browse the full component catalog or read the documentation

AI agents build governed. Humans ship trusted.