# Product SPM — Effective ADLC Usage
Product development works backward from business value: 80% autonomy (agents), 100% accountability (HITL approval). INVEST-only stories, ≥99.5% MCP cross-validation, DORA 4/4 Elite (38d DF, <1d LT, 0% CFR, 7s MTTR).
## Project Identity
| Aspect | Value |
|---|---|
| Repository | /Volumes/Working/projects/runbooks |
| PyPI Package | runbooks (v1.3.17+, private index) |
| JIRA Board | SPM (https://1xops.atlassian.net/jira/software/c/projects/SPM/) |
| Confluence Space | SPM (https://1xops.atlassian.net/wiki/spaces/SPM) |
| Sub-Products | CloudOps (CO), FinOps (FO), Terraform (TF) |
## Consumption Pattern
The runbooks project owns its own .adlc copy (not a symlink):
- Framework = dependency (git submodule, .specify/ rules inherited)
- 141 commands, 128 skills, 38 agents available (enterprise coordination enforced)
- 3-way INVEST validation: product-owner (business) + cloud-architect (technical) + qa-engineer (quality)
- 2-way sync: stories.csv (SSOT) ↔ pm-stories/ (ACs + tech notes) ↔ JIRA SPM
## Golden Path: Backlog to Production
See docs/docs/golden-paths/product-management-lifecycle.md:
- Discover (30 min) — Write PR/FAQ north-star. Validate with customers.
- Specify (1 hour) — Auto-generate INVEST stories. Review ACs + dependencies.
- Plan (30 min) — Story prioritization + agent assignment + sprint slots.
- Build (varies) — TDD (Red-Green-Refactor). 3-tier validation: snapshot → LocalStack → AWS.
- Measure (20 min) — DORA actuals, MCP cross-validation, regression suite.
- Iterate — Fix gaps, re-score, escalate to next sprint if blocked.
## INVEST Story Format (MANDATORY)
Every story in stories.csv must pass INVEST gate:
| Gate | Criteria | Example | Anti-Pattern |
|---|---|---|---|
| Independent | No hard deps on other stories (loose edges OK) | "Add validation" not "requires story X merged first" | TASK_AS_STORY |
| Negotiable | Details are negotiable; acceptance criteria are not | "Click latency <50ms" vs "make it faster" | THIN_STORY_INFLATION |
| Valuable | Business or user value quantified | "Reduce CFO dashboard load: 5s→1s" or "unblock FinOps onboarding" | NO_ESTIMATED_COUNTS |
| Estimable | Team can forecast effort | "8 SP: 2 files, existing patterns" vs "unknown scope" | SUCCESS_CRITERIA_MISSING |
| Small | Fits in single sprint | "≤13 SP" (max) | TASK_AS_STORY |
| Testable | ACs are executable assertions | "pytest passes", "live CLI validates ≥99.5% MCP" | TESTING_THEATER |
Non-INVEST Items: Tech debt → TODO.md. Bugs → GitHub Issues. Chores → Taskfile.yml.
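The INVEST gate above can be sketched as a small audit over stories.csv. This is a minimal sketch, assuming hypothetical column names (`story_id`, `acceptance_criteria`, `story_points`, `kr`); the real 13-column schema may differ:

```python
import csv

# Hypothetical column names; the real stories.csv schema (13 columns) may differ.
REQUIRED_FIELDS = ("story_id", "title", "acceptance_criteria", "story_points", "kr")
MAX_STORY_POINTS = 13  # "Small" gate: a story must fit in a single sprint


def invest_violations(row: dict) -> list[str]:
    """Return the INVEST gates a stories.csv row fails (empty list = pass)."""
    violations = []
    for field in REQUIRED_FIELDS:
        if not (row.get(field) or "").strip():
            # Missing ACs/KR is the TASK_AS_STORY / THIN_STORY_INFLATION signature
            violations.append(f"missing {field}")
    points = (row.get("story_points") or "").strip()
    if points and (not points.isdigit() or int(points) > MAX_STORY_POINTS):
        violations.append("story_points > 13 or non-numeric (Small gate)")
    return violations


def audit(path: str = "docs/stories.csv") -> dict[str, list[str]]:
    """Map story_id -> violations for every failing row in the CSV."""
    with open(path, newline="") as f:
        return {
            row["story_id"]: v
            for row in csv.DictReader(f)
            if (v := invest_violations(row))
        }
```

Rows that fail the gate would then be triaged per the rule above: tech debt to TODO.md, bugs to GitHub Issues, chores to Taskfile.yml.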
## Effective Commands (Product Development)
| Command | Purpose | Input | Output | Scope |
|---|---|---|---|---|
| /speckit.specify "description" | Auto-generate INVEST stories from natural language | Feature description | stories.csv + pm-stories/*.md | All sub-products |
| /speckit.clarify | Prompt-based story refinement (≤5 questions) | stories.csv + user answers | Revised stories.csv | Single session |
| /product:pr-faq my-product | Write 6-section north-star (vision, problem, solution, target user, CTA, metrics) | Product vision statement | pr-faq.md | Pre-development |
| /finops:aws-monthly --profile runbooks-prod | FinOps cost analysis (consumed by FO stories) | Cloud profile + date range | FinOps report + persona views | FO sub-product |
| /sync:jira-push | CSV → JIRA SPM (rsid label idempotency) | stories.csv + pm-stories/ | JIRA tickets created/updated | Sprint planning |
| /sync:jira-pull | JIRA SPM → local CSV (batch import) | JIRA JQL filter | Updated stories.csv + metadata | Backlog refinement |
| /documentation:confluence-publish | stories.csv → Confluence SPM pages | pm-stories/*.md source | Live Confluence space | Sprint review |
## 3-Way Validation (Quality Gates)
Every story delivery requires 3-way sign-off:
| Agent | Validates | Gate Pass Criteria |
|---|---|---|
| product-owner | Business value + INVEST fit + prioritization | Story in SPM + KR mapped to OKR |
| cloud-architect | Technical feasibility + architecture decision + deployment target | Decision documented in pm-stories/*.md |
| qa-engineer | Acceptance criteria executable + 99.5% MCP cross-validation + regression zero-delta | Test results JSON + evidence file path |
Sequential, not parallel: Each agent validates after the previous finishes. Prevents RACE_CONDITION_SCORING.
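The sequential gating can be sketched as a strict pipeline. This is an illustrative sketch, not the real agent interface: the `Gate` signature and gate names are hypothetical stand-ins for the product-owner / cloud-architect / qa-engineer subagents:

```python
from typing import Callable

# Hypothetical gate signature: (story) -> (passed, reason).
# The real validators are ADLC subagents, not plain functions.
Gate = Callable[[dict], tuple[bool, str]]


def validate_story(story: dict, gates: list[tuple[str, Gate]]) -> list[str]:
    """Run gates strictly in order; stop at the first failure.

    Because each agent only runs after the previous one passes, a later
    gate never scores a story an earlier gate already rejected, which is
    what prevents RACE_CONDITION_SCORING.
    """
    log = []
    for name, gate in gates:
        passed, reason = gate(story)
        log.append(f"{name}: {'PASS' if passed else 'FAIL'} ({reason})")
        if not passed:
            break  # downstream agents never see a rejected story
    return log
```

A run would chain `[("product-owner", …), ("cloud-architect", …), ("qa-engineer", …)]` so that, for example, qa-engineer is never invoked when cloud-architect rejects the architecture decision.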
## Test Pyramid: Snapshot → LocalStack → AWS
All runbooks Python code uses 3-tier validation:
| Tier | Where | What | Speed | Verified |
|---|---|---|---|---|
| L1: Snapshot | tests/snapshot/ | Mock AWS responses (boto3 responses file) | <1s | Code logic, Happy path |
| L2: LocalStack | tests/integration/ | Real moto/localstack containers | <10s | API contract, Error handling |
| L3: Live AWS | tests/e2e/ (HITL approval) | READONLY profiles against real accounts | <30s | Real data, Pagination, Retry, FOCUS normalization |
DORA Measurement: L1+L2 run on every commit (CI). L3 runs daily on READONLY (ops) profile.
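An L1 snapshot test looks roughly like the sketch below: a canned response stands in for AWS, so only code logic and the happy path are verified (pagination against real data is an L3 concern). The function under test is hypothetical; real runbooks modules will differ:

```python
from unittest import mock


def count_running_instances(ec2_client) -> int:
    """Count running EC2 instances across all paginated pages.

    Hypothetical function under test, written against the standard
    boto3 paginator interface.
    """
    paginator = ec2_client.get_paginator("describe_instances")
    total = 0
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            total += len(reservation["Instances"])
    return total


def test_count_running_instances_snapshot():
    # L1 tier: a canned response file stands in for AWS; no network, <1s.
    snapshot = [
        {"Reservations": [{"Instances": [{"InstanceId": "i-1"}, {"InstanceId": "i-2"}]}]}
    ]
    client = mock.Mock()
    client.get_paginator.return_value.paginate.return_value = snapshot
    assert count_running_instances(client) == 2
```

The same function would then run unchanged against a LocalStack endpoint (L2) and a READONLY profile on a real account (L3), which is what lets the three tiers share one code path.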
## 2-Way Sync: stories.csv ↔ JIRA SPM
| Operation | Cadence | Tool | SSOT |
|---|---|---|---|
| Backlog → JIRA | Sprint planning | /sync:jira-push | stories.csv (96 rows, 13 columns) |
| JIRA → Backlog | Daily standup | /sync:jira-pull (batch) | Live JIRA SPM state |
| Runbook update | After story acceptance | /documentation:confluence-publish | pm-stories/*.md files (96 matching stories) |
Idempotency: Every JIRA sync uses rsid:{story_id} label. Same story pushed twice = no duplicate tickets.
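The rsid-label check can be sketched as a search-before-create upsert. The `jira` client interface here (`search_issues`, `create_issue`) is a hypothetical stand-in, not a real SDK:

```python
def upsert_story(jira, project_key: str, story: dict) -> str:
    """Push one stories.csv row to JIRA; the rsid label makes retries idempotent.

    `jira` is any client exposing search_issues(jql) -> list[str] and
    create_issue(fields) -> str (issue key). Both are hypothetical
    stand-ins for whatever /sync:jira-push actually uses.
    """
    label = f"rsid:{story['story_id']}"
    # Search first: an issue already carrying the rsid label means this
    # story was pushed before, so we reuse it instead of duplicating.
    existing = jira.search_issues(f'project = {project_key} AND labels = "{label}"')
    if existing:
        return existing[0]
    return jira.create_issue({
        "project": project_key,
        "summary": story["title"],
        "issuetype": "Story",
        "labels": [label],
    })
```

Pushing the same story twice hits the search branch on the second call, which is the "no duplicate tickets" guarantee stated above.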
## Real Software Deliverables (2026-2030)
- PyPI Package (runbooks) — CLI with 141 executable commands, Rich UX, FOCUS 1.2+ compliance
- Click CLI — 141 commands across 8 groups: cert, cfat, finops, inventory, security, validation, vpc, operate
- Pytest Suite — 6,298 tests (1,021 smoke + 5,277 unit/functional, 0 mocks post-theater-cleanup)
- BDD Features — 13 .feature files (55 passed scenarios, 1 skipped)
- Jupyter Notebooks — Per-persona analysis (executive/SRE/CTO/CloudOps) with Rich tables + email exports
- Docusaurus Docs — API reference, CLI command catalogue, integration examples
## Quality Targets
| Metric | Target | Status |
|---|---|---|
| MCP Cross-Validation | ≥99.5% accuracy (API vs CLI vs Console vs Notebook) | 99.5% (measured post-theater-cleanup 2026-03-20) |
| Test Coverage | ≥37% critical-path (ratchet: 37→40→45→50) | 38.41% (honest omit entries ~259 lines) |
| DORA (Runbooks) | DF 30d, LT <1d, CFR 0%, MTTR <1h | DF 38d, LT <1d, CFR 0%, MTTR 7s (Elite) |
| First-Time-Right (Stories) | ≥90% acceptance on first submission | 89% (1 revision avg) |
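The coverage ratchet (37→40→45→50) can be enforced mechanically in CI. A minimal sketch, assuming a Cobertura-style coverage.xml as produced by `coverage xml` (the `RATCHET` tuple and function name are illustrative, not the project's actual gate):

```python
import xml.etree.ElementTree as ET

# Ratchet targets in order; CI fails a build that drops below the current step.
RATCHET = (37.0, 40.0, 45.0, 50.0)


def check_ratchet(coverage_xml: str, current_target: float) -> tuple[bool, float]:
    """Read line coverage from a Cobertura-style coverage.xml and gate on it.

    The root <coverage> element carries line-rate as a 0..1 fraction.
    """
    root = ET.parse(coverage_xml).getroot()
    pct = float(root.get("line-rate", "0")) * 100
    return pct >= current_target, pct
```

At the measured 38.41%, the 37% step passes and the build stays green; moving the target to 40% would fail until coverage is raised, which is the point of a ratchet.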
## CxO Personas (Product Reporting)
| Persona | Focuses On | Output Format |
|---|---|---|
| CFO | Cost per feature (dev time × rate + cloud spend) | Executive summary: Story → Cost → ROI |
| CTO | Architecture soundness + scalability + time-to-value | Technical ADR + DORA trend chart |
| Product Manager | Customer value + prioritization + velocity trend | Burndown + velocity chart + backlog health |
## Anti-Patterns Specific to Product Development
| Anti-Pattern | Example | Prevention |
|---|---|---|
| TASK_AS_STORY | "Refactor auth module" in stories.csv | Audit: grep missing AC or KR fields → TODO.md |
| TESTING_THEATER | 122/122 CLI tests PASS but import bug crashes at runtime | AST-based phantom import detector runs pre-commit |
| MOCK_CIRCULAR_VALIDATION | Mock asserts mock value, code returns mock value | Rules-layer: real inputs, assert on computed result |
| THIN_STORY_INFLATION | CSV row has title + score but no ACs/deps/files | Audit: /speckit.clarify fills missing fields |
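An AST-based phantom import detector, of the kind the TESTING_THEATER prevention row describes, can be sketched as follows. This is one plausible interpretation (flagging imports that cannot be resolved in the current environment), not the project's actual pre-commit hook:

```python
import ast
import importlib.util


def phantom_imports(source: str) -> list[str]:
    """Return top-level module names imported in `source` that cannot be
    resolved in the current environment.

    Catches the TESTING_THEATER case where CLI tests pass (imports are
    mocked or never executed) but the real import crashes at runtime.
    """
    tree = ast.parse(source)
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            # Skip relative imports (level > 0): they resolve against the package.
            modules.add(node.module.split(".")[0])
    return sorted(m for m in modules if importlib.util.find_spec(m) is None)
```

Running this over staged files in a pre-commit hook turns "tests pass but the CLI crashes on import" into a failure before the commit lands.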
## References
- Golden Path: docs/docs/golden-paths/product-management-lifecycle.md (Discover → Specify → Plan → Build → Measure → Iterate)
- Framework: CLAUDE.md (root) → runbooks.adlc (submodule configuration)
- INVEST Spec: docs/stories.csv (101 rows, 13 columns, SSOT)
- Story Details: docs/pm-stories/*.md (101 files, one per story with ACs + tech notes)
- Anti-Patterns: .claude/rules/governance/anti-patterns-catalog.md (TASK_AS_STORY, TESTING_THEATER specific sections)
- DORA: .claude/rules/governance/operational-efficiency.md (measurement, target validation)