Product Management Lifecycle
A business-led roadmap is the blueprint for a successful digital and AI transformation.
AI agents build governed. Humans ship trusted. 80% autonomy, 100% accountability.
Golden Path: Your First Governed Sprint
Phase 1: Discover (30 min)
Who: You (HITL) steer direction. product-owner agent validates business fit.
What: Find problems worth solving. Write your PR/FAQ north-star document.
Why: Most digital transformations fail to capture expected value because they skip discovery. Building without validation equals building the wrong thing. PR/FAQ forces precision.
What-if skip: Pet projects, wasted sprints, no customer alignment. Your team builds features nobody asked for.
How
# Write your north-star document
/product:pr-faq my-product
# Explore the opportunity space (optional, 30 min max)
# Skill:lean-canvas
# Skill:opportunity-solution-tree
# Skill:identify-assumptions
Output
- pr-faq.md — 6-section north-star (vision, problem, solution, target user, call-to-action, success metrics)
- opportunity-solution-tree.txt — Outcomes → solutions → experiments
- assumptions.md — Riskiest 5 bets with validation plan
Phase 2: Specify (1 hour)
Who: product-owner agent drafts spec. You approve scope.
What: Turn validated ideas into precise specifications. Focus on what to build, not how.
Why: Ambiguous specs cause scope creep and rework. Spec-Driven Development makes specs executable artifacts, not disposable documents. Clear ACs prevent agent confusion.
What-if skip: "Just code it" syndrome. Unbounded scope. Agents work on wrong features. No acceptance criteria to validate against.
How
# Generate feature specification from natural language
/speckit.specify "voice-enabled RAG chatbot for cloud operations"
# Clarify underspecified areas (up to 5 targeted questions)
/speckit.clarify
# Review generated INVEST user stories
cat stories.csv
Output
- spec.md — Problem statement, user personas, success metrics
- stories.csv — INVEST format (title, AC, dependencies, KR mapping, agent assignment)
- edge-cases.md — Negative scenarios and error handling requirements
Quality Gate: Every story must have ≥3 acceptance criteria. No story worth >13 points.
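This gate is easy to enforce mechanically before moving on to Plan. A minimal sketch, assuming a hypothetical stories.csv layout with title, acceptance_criteria (pipe-separated), and points columns — adjust the column names to your actual export:

```python
import csv
import io

def validate_stories(csv_text, min_acs=3, max_points=13):
    """Return gate violations for an INVEST story CSV.

    Assumes a hypothetical schema: title, acceptance_criteria
    (pipe-separated), points. Adapt to the real stories.csv columns.
    """
    violations = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Count non-empty acceptance criteria
        acs = [a for a in row["acceptance_criteria"].split("|") if a.strip()]
        if len(acs) < min_acs:
            violations.append(f"{row['title']}: only {len(acs)} ACs (need >= {min_acs})")
        if int(row["points"]) > max_points:
            violations.append(f"{row['title']}: {row['points']} points (max {max_points})")
    return violations
```

An empty return value means the gate passes; anything else blocks the transition to Phase 3.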
Phase 3: Plan (1 hour)
Who: cloud-architect agent designs. product-owner validates alignment.
What: Design technical approach. Map specialist agents to tasks. Validate against ADLC principles.
Why: 80% of transformations fail on scoping, not execution. Planning prevents over-engineering, under-engineering, and agent coordination conflicts.
What-if skip: Over-engineering (Aurora for fewer than 50 users), under-engineering (no auth), agents stepping on each other's work.
How
# Generate implementation plan from spec
/speckit.plan
# Break plan into dependency-ordered tasks
/speckit.tasks
# Review cloud-architect's technical design
cat plan.md
Output
- plan.md — Technical approach, alternatives considered, trade-offs documented
- tasks.md — Ordered list with agent assignments and dependencies
- risks.md — Blockers, escalation paths, constitution checks
- Agent delegation matrix (who owns which story/task)
Quality Gate: Every task has a single owner. No "TBD" dependencies. Constitution compliance validated.
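A sketch of how this gate could be enforced before Build starts, assuming tasks.md has been parsed into dicts with id, owners, and dependencies fields (a hypothetical shape — adapt to your parser):

```python
def validate_tasks(tasks):
    """Check the Plan-phase gate: exactly one owner per task,
    no unresolved 'TBD' dependencies. Input shape is illustrative.
    """
    problems = []
    for t in tasks:
        if len(t["owners"]) != 1:
            problems.append(f"{t['id']}: needs exactly one owner, has {len(t['owners'])}")
        if any(d.upper() == "TBD" for d in t["dependencies"]):
            problems.append(f"{t['id']}: unresolved 'TBD' dependency")
    return problems
```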
Phase 4: Build (2–4 hours)
Who: AI agents (python-engineer, infrastructure-engineer, etc.) execute. You review and ship trusted.
What: Execute tasks with specialist agents. PDCA until quality gates pass. Each session produces code, tests, and evidence.
Why: Agents prepare, humans decide. Governed execution with evidence prevents drift and testing theater. No evidence = no completion.
What-if skip: Unreviewed code, no evidence trail, testing theater, NATO violations (completion claims without proof).
How
# Execute implementation according to plan
# Specialist agents handle domain work:
# - python-engineer → CLI, API, business logic
# - infrastructure-engineer → Terraform, ECS, IAM
# - qa-engineer → Test validation, coverage
# Monitor progress
/metrics:daily-standup
# Review evidence in tmp/
ls -la tmp/adlc-framework/test-results/
cat tmp/adlc-framework/coordination-logs/product-owner-*.json
Output
- Working software on disk (code + tests + docs)
- Test evidence in tmp/adlc-framework/test-results/
- PDCA round logs in tmp/adlc-framework/coordination-logs/
- No story marked "Done" without test execution output
Quality Gate: ≥90% test coverage on critical paths. Zero NATO claims (all completion claims backed by file evidence). All ACs validated with tests.
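If the test suite reports coverage via coverage.py, this gate can be scripted. A minimal sketch, assuming the JSON report emitted by coverage.py's `coverage json` command; the src/core/ prefix is a placeholder for whatever you designate as critical paths:

```python
import json

def coverage_gate(report_text, critical_prefixes=("src/core/",), threshold=90.0):
    """Fail the Build gate if any critical-path file is under threshold.

    Assumes coverage.py's JSON report structure (files -> summary ->
    percent_covered); critical_prefixes is an illustrative placeholder.
    """
    report = json.loads(report_text)
    failures = []
    for path, data in report["files"].items():
        if path.startswith(tuple(critical_prefixes)):
            pct = data["summary"]["percent_covered"]
            if pct < threshold:
                failures.append(f"{path}: {pct:.1f}% < {threshold}%")
    return failures
```

Non-critical files (src/util.py in the test below) are intentionally exempt from the 90% bar, matching the "critical paths" wording of the gate.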
Phase 5: Measure (30 min)
Who: observability-engineer agent collects metrics. You read the dashboard.
What: Track sprint health, DORA metrics, agent consensus scores. Evidence-based ceremonies.
Why: What gets measured gets managed. Ceremonies create the evidence trail that prevents NATO violations and stakeholder surprises.
What-if skip: No sprint visibility, invisible blockers, "done" claims without proof, board surprises.
How
# Daily visibility into sprint health
/metrics:daily-standup
# Sprint closure with demo + metrics + DORA
/metrics:sprint-review
# View DORA dashboard
open https://adlc.oceansoft.io/docs/dora
Output
- DORA dashboard (deploy frequency, lead time, change failure rate, MTTR)
- Sprint velocity (planned vs actual)
- Blockers and resolved issues log
- Agent consensus scores (PO, CA, QA)
Quality Gate: Velocity trend charted. No unexplained deltas >20%. All blockers documented with resolution paths.
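The 20% delta rule lends itself to an automated check. A minimal sketch, assuming sprint history is available as (sprint, planned, actual) point totals — the real data shape from /metrics:sprint-review may differ:

```python
def unexplained_deltas(history, limit=0.20):
    """Flag sprints where |actual - planned| / planned exceeds the limit.

    `history` is a hypothetical list of (sprint, planned, actual)
    tuples; real data would come from sprint-review metrics.
    """
    flagged = []
    for sprint, planned, actual in history:
        delta = abs(actual - planned) / planned
        if delta > limit:
            flagged.append((sprint, round(delta, 2)))
    return flagged
```

Flagged sprints are not failures per se — the gate only requires that each delta be explained and documented.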
Phase 6: Iterate (30 min)
Who: Full agent team reflects. You decide what to change.
What: Retrospective with 4L review (Liked, Learned, Lacked, Longed-for). Extract patterns. Improve the process.
Why: The process that builds products must itself improve continuously. Improvement compounds over sprints. Org learning > one-time fixes.
What-if skip: Same mistakes repeated, process stagnation, team burnout, no organizational learning.
How
# Sprint retrospective with improvement actions
/metrics:sprint-retro
# Session retrospective for continuous learning
/speckit.retrospective
# Review process improvements
cat framework/retrospectives/YYYY-MM-DD-sprint-*.md
Output
- 4L review (Liked, Learned, Lacked, Longed-for)
- 3–5 improvement actions assigned to next sprint
- Updated process patterns in .claude/
- Constitution update (if governance gaps found)
Quality Gate: Every sprint improvement tracked. Metrics show velocity trend ≥0% (no degradation). Retrospective consensus ≥70%.
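Both thresholds in this gate are mechanical. A minimal sketch, assuming consensus scores come back as a reviewer-to-score mapping and improvement actions as a list — both shapes are illustrative, not a fixed framework API:

```python
def retro_gate(consensus_scores, actions, min_consensus=0.70):
    """Check the Iterate gate: each reviewer's consensus >= 70%
    and 3-5 improvement actions carried into the next sprint.
    """
    issues = []
    for reviewer, score in consensus_scores.items():
        if score < min_consensus:
            issues.append(f"{reviewer} consensus {score:.0%} < {min_consensus:.0%}")
    if not 3 <= len(actions) <= 5:
        issues.append(f"{len(actions)} improvement actions (want 3-5)")
    return issues
```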
LEAN/5S Applied to Product Management
| Principle | Application | Evidence |
|---|---|---|
| Sort | 42 PM skills curated from 65+ (removed generic, overlapping, non-PM) | .claude/skills/product/ directory — 32 items deleted |
| Set in Order | 4 phases: Discovery (16 skills) → Strategy (14) → Growth (4) → Utility (8) | Organized by product lifecycle stage |
| Shine | Quality gates at each phase transition (INVEST scoring, constitution check) | PO+CA+QA sequential scoring after each deliverable |
| Standardize | This golden path = consistent process across all digital products | Same 6 phases for CloudOps, xOps, DevOps, Agentic-AI |
| Sustain | /metrics:sprint-retro keeps the process improving every sprint | Retrospective evidence in framework/retrospectives/ |
By Persona
Solo Startup Founder
You have: An idea and AI agents. No team yet.
Your path (5–7 hours to first governed sprint):
/product:pr-faq my-product # 30 min — north-star document
/speckit.specify "my product description" # 1 hour — precise spec with user stories
/speckit.plan # 1 hour — technical design
# Agents execute (2–4 hours)
/metrics:sprint-review # 30 min — evidence-based completion
Value: First governed sprint with zero team. AI agents do 80% of the work, you make 100% of the decisions. Evidence trail from day 1. Board-ready demo.
CxO / Enterprise Leader
You need: A digital transformation roadmap with board-ready evidence.
Your path (1 day to board-ready roadmap):
/product:pr-faq # Vision statement that inspires the team
# Skill:lean-canvas # Business model hypothesis on one page
/speckit.plan # Resource planning with agent delegation
/metrics:sprint-review # Evidence-based progress for your board
Value: Board-ready roadmap with measurable KPIs, not PowerPoint promises. DORA metrics prove execution velocity. Quarterly demos with evidence.
Product Owner
You manage: Sprints with AI agents. Daily ceremony cadence.
Your path (ongoing sprint management):
/speckit.specify + /speckit.tasks # Spec to task breakdown
/metrics:daily-standup # Daily visibility (5 min read)
/metrics:sprint-retro # Continuous improvement
Value: 42 PM skills organized by phase. Ceremony automation reduces overhead from 8h/week to under 2h. PDCA quality gates. Agent consensus scoring (PO+CA+QA).
Reality Check: xOps Pilot Evidence
The xOps Command Centre is the live proof point for this golden path. Real execution evidence:
| Phase | Evidence | Result |
|---|---|---|
| Discover | F2T2EA north-star + 13 INVEST stories | PO consensus 84%, CA consensus 82% |
| Specify | 6-phase operational cycle with personas | 7 components, 6-layer architecture |
| Plan | Technical design + $110–460/mo cost model | Accepted by HITL + CA |
| Build | 1,430 LOC across 10 modules | 98.3% cross-validation accuracy |
| Measure | DORA dashboard (3/4 GREEN) | 100% BV delivered (1,472/1,472 points) |
| Iterate | Retrospective actions → next sprint | 4-week cycle time, improving |
Not theoretical. Real. Running. Now.
PM Skills Reference
All skills are in .claude/skills/product/ organized by phase:
| Phase | Count | Key Skills |
|---|---|---|
| Discovery | 16 | lean-canvas, opportunity-solution-tree, identify-assumptions, customer-journey-map, interview-script, user-personas, job-stories |
| Strategy | 14 | product-vision, product-strategy, competitive-battlecard, north-star-metric, market-sizing, pricing-strategy, swot-analysis |
| Growth | 4 | growth-loops, ab-test-analysis, cohort-analysis, sentiment-analysis |
| Utility | 8 | pre-mortem, value-proposition, startup-canvas, beachhead-segment, analyze-feature-requests |
Each skill is copy-paste ready with examples.
Common Mistakes (Anti-Patterns)
| Mistake | Why It Fails | Fix |
|---|---|---|
| Skip Discover, jump to Build | No validation. Build wrong thing. Sprint failed before code started. | Always start with PR/FAQ. Non-negotiable. |
| Spec with 50+ stories in first sprint | Scope creep. Agents confused. Nothing finished. | Max 5 stories per sprint. Start small, iterate. |
| No acceptance criteria in spec | Agents guess what "done" means. Scope expands. | Every story must have ≥3 ACs. Template enforced. |
| Skip Plan phase | Architectural debt. Over-engineering. Agent coordination chaos. | Cloud-architect approval required. Takes 1 hour. |
| Build without tests | NATO violations (completion claims without proof). | Tests before every "done" claim. Non-negotiable. |
| No ceremonies/metrics | Flying blind. Board surprises. No org learning. | Daily standup + sprint review + retro. Automated. |
Success Metrics by Role
HITL (You)
- Sprint velocity trend ≥0% (no degradation)
- Board Q&A answerable without rework (DORA data ready)
- Agent consensus ≥70% (no disputed completions)
- Retrospective actions tracked to next sprint
Product Owner (Agent)
- Spec clarity score ≥85% (agent questions < 3)
- Story size consistency (no outliers >13 pts)
- INVEST compliance 100% (all stories auditable)
Specialist Agents
- Test coverage ≥90% critical paths
- Zero NATO violations (all claims backed by evidence)
- Task dependency compliance (no surprises)
Org
- Cycle time improving (baseline → -10% per sprint target)
- Rework rate under 10% (spec quality high)
- On-time delivery ≥95% (planning accurate)
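The -10%-per-sprint cycle-time target implies a compounding trajectory that can be sanity-checked with simple arithmetic. A minimal sketch (the baseline value and sprint count below are illustrative):

```python
def cycle_time_targets(baseline_days, sprints, improvement=0.10):
    """Project per-sprint cycle-time targets from a baseline,
    compounding a -10%-per-sprint improvement goal."""
    targets = []
    current = float(baseline_days)
    for _ in range(sprints):
        current *= (1 - improvement)       # apply the per-sprint reduction
        targets.append(round(current, 1))  # report to one decimal place
    return targets
```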
Quick Reference: Command Cheat Sheet
# Discover
/product:pr-faq "product name"
# Skill:lean-canvas
# Skill:opportunity-solution-tree
# Specify
/speckit.specify "product description"
/speckit.clarify # up to 5 Q&A rounds
/speckit.checklist # generate quality checklist from spec
# Plan
/speckit.plan
/speckit.tasks
/speckit.analyze # cross-artifact consistency check
# Build
/metrics:daily-standup # view sprint progress
# (agents execute /speckit.implement)
# Measure
/metrics:sprint-review # sprint closure + DORA
/metrics:update-dora # refresh DORA dashboard
# Iterate
/metrics:sprint-retro # 4L review + actions
/speckit.retrospective # session learning
# Always
git diff --stat # verify changes on disk
ls tmp/adlc-framework/test-results/ # check evidence
Integration with Framework
This golden path is part of the larger ADLC framework:
- Constitution: 7 non-negotiable principles (Acceptable Agency, Interoperability, Evaluation-First, etc.)
- Governance: 58 checkpoints ensure 99.5% compliance
- Commands: 80+ slash commands automate each phase
- Skills: 414 curated components in the marketplace
- MCPs: 24 external integrations (AWS, GitHub, Jira, Azure, Slack, etc.)
Start here. Integrate deeper as you scale.
Last Updated: March 2026 | Status: Proven golden path (xOps pilot validation) | Maintenance: Retrospective-driven improvements every sprint