
Product SPM — Effective ADLC Usage

Product development works backward from business value: 80% autonomy (agents), 100% accountability (HITL approval). INVEST-only stories, ≥99.5% MCP cross-validation, DORA 4/4 Elite (38d DF, <1d LT, 0% CFR, 7s MTTR).

Project Identity

| Aspect | Value |
|---|---|
| Repository | /Volumes/Working/projects/runbooks |
| PyPI Package | runbooks (v1.3.17+, private index) |
| JIRA Board | SPM (https://1xops.atlassian.net/jira/software/c/projects/SPM/) |
| Confluence Space | SPM (https://1xops.atlassian.net/wiki/spaces/SPM) |
| Sub-Products | CloudOps (CO), FinOps (FO), Terraform (TF) |

Consumption Pattern

The runbooks project owns its own .adlc copy (not a symlink):

  • Framework = dependency (git submodule, .specify/ rules inherited)
  • 141 commands, 128 skills, 38 agents available (enterprise coordination enforced)
  • 3-way INVEST validation: product-owner (business) + cloud-architect (technical) + qa-engineer (quality)
  • 2-way sync: stories.csv (SSOT) ↔ pm-stories/ (ACs + tech notes) ↔ JIRA SPM

Golden Path: Backlog to Production

See docs/docs/golden-paths/product-management-lifecycle.md:

  1. Discover (30 min) — Write PR/FAQ north-star. Validate with customers.
  2. Specify (1 hour) — Auto-generate INVEST stories. Review ACs + dependencies.
  3. Plan (30 min) — Story prioritization + agent assignment + sprint slots.
  4. Build (varies) — TDD (Red-Green-Refactor). 3-tier validation: snapshot → LocalStack → AWS.
  5. Measure (20 min) — DORA actuals, MCP cross-validation, regression suite.
  6. Iterate — Fix gaps, re-score, escalate to next sprint if blocked.

INVEST Story Format (MANDATORY)

Every story in stories.csv must pass INVEST gate:

| Gate | Criteria | Example | Anti-Pattern |
|---|---|---|---|
| Independent | No hard deps on other stories (loose edges OK) | "Add validation" not "requires story X merged first" | TASK_AS_STORY |
| Negotiable | Details negotiable; acceptance criteria are not | "Click count <50ms" vs "make faster" | THIN_STORY_INFLATION |
| Valuable | Business or user value quantified | "Reduce CFO dashboard load: 5s→1s" or "unblock FinOps onboarding" | NO_ESTIMATED_COUNTS |
| Estimable | Team can forecast effort | "8 SP: 2 files, existing patterns" vs "unknown scope" | SUCCESS_CRITERIA_MISSING |
| Small | Fits in a single sprint | ≤13 SP (max) | TASK_AS_STORY |
| Testable | ACs are executable assertions | "pytest passes", "live CLI validates ≥99.5% MCP" | TESTING_THEATER |

Non-INVEST Items: Tech debt → TODO.md. Bugs → GitHub Issues. Chores → Taskfile.yml.
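As a sketch of how the INVEST gate could be automated against stories.csv, the lint below flags the anti-patterns from the table. The column names (`acceptance_criteria`, `kr`, `story_points`) are illustrative placeholders, not the actual 13-column schema.

```python
import csv

# Hypothetical column names -- adjust to the real stories.csv schema.
MAX_POINTS = 13  # "Small": a story must fit in a single sprint

def invest_violations(row: dict) -> list[str]:
    """Return anti-pattern labels for a single stories.csv row."""
    issues = []
    if not row.get("acceptance_criteria", "").strip():
        issues.append("THIN_STORY_INFLATION")      # title + score, no ACs
    if not row.get("kr", "").strip():
        issues.append("TASK_AS_STORY")             # no KR mapped to an OKR
    points = row.get("story_points", "")
    if not points.isdigit():
        issues.append("SUCCESS_CRITERIA_MISSING")  # effort not estimable
    elif int(points) > MAX_POINTS:
        issues.append("TASK_AS_STORY")             # too big for one sprint
    return issues

def lint(path: str) -> dict[str, list[str]]:
    """Map story_id -> violations for every failing row in the CSV."""
    with open(path, newline="") as f:
        return {r["story_id"]: v
                for r in csv.DictReader(f)
                if (v := invest_violations(r))}
```

A clean row (ACs present, KR mapped, ≤13 SP) produces an empty list and passes the gate.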

Effective Commands (Product Development)

| Command | Purpose | Input | Output | Scope |
|---|---|---|---|---|
| /speckit.specify "description" | Auto-generate INVEST stories from natural language | Feature description | stories.csv + pm-stories/*.md | All sub-products |
| /speckit.clarify | Prompt-based story refinement (≤5 questions) | stories.csv + user answers | Revised stories.csv | Single session |
| /product:pr-faq my-product | Write 6-section north-star (vision, problem, solution, target user, CTA, metrics) | Product vision statement | pr-faq.md | Pre-development |
| /finops:aws-monthly --profile runbooks-prod | FinOps cost analysis (consumed by FO stories) | Cloud profile + date range | FinOps report + persona views | FO sub-product |
| /sync:jira-push | CSV → JIRA SPM (rsid label idempotency) | stories.csv + pm-stories/ | JIRA tickets created/updated | Sprint planning |
| /sync:jira-pull | JIRA SPM → local CSV (batch import) | JIRA JQL filter | Updated stories.csv + metadata | Backlog refinement |
| /documentation:confluence-publish | stories.csv → Confluence SPM pages | pm-stories/*.md source | Live Confluence space | Sprint review |

3-Way Validation (Quality Gates)

Every story delivery requires 3-way sign-off:

| Agent | Validates | Gate Pass Criteria |
|---|---|---|
| product-owner | Business value + INVEST fit + prioritization | Story in SPM + KR mapped to OKR |
| cloud-architect | Technical feasibility + architecture decision + deployment target | Decision documented in pm-stories/*.md |
| qa-engineer | Executable acceptance criteria + 99.5% MCP cross-validation + regression zero-delta | Test results JSON + evidence file path |

Sequential, not parallel: Each agent validates after the previous finishes. Prevents RACE_CONDITION_SCORING.
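The sequential sign-off can be sketched as a short pipeline that stops at the first failure. `GateResult` and the gate signatures here are hypothetical, not the framework's real agent API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GateResult:
    agent: str
    passed: bool
    evidence: str

def run_signoff(story: dict,
                gates: list[tuple[str, Callable[[dict], GateResult]]]) -> list[GateResult]:
    """Run gates strictly in order; stop at the first failure.

    Sequential execution is what prevents RACE_CONDITION_SCORING:
    qa-engineer never scores a story the architect has not approved.
    """
    results = []
    for name, gate in gates:
        result = gate(story)
        results.append(result)
        if not result.passed:
            break  # downstream agents never see an unapproved story
    return results
```

If cloud-architect fails, the returned list ends at that gate and qa-engineer is never invoked.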

Test Pyramid: Snapshot → LocalStack → AWS

All runbooks Python code uses 3-tier validation:

| Tier | Where | What | Speed | Verified |
|---|---|---|---|---|
| L1: Snapshot | tests/snapshot/ | Mock AWS responses (boto3 responses file) | <1s | Code logic, happy path |
| L2: LocalStack | tests/integration/ | Real moto/LocalStack containers | <10s | API contract, error handling |
| L3: Live AWS | tests/e2e/ (HITL approval) | READONLY profiles against real accounts | <30s | Real data, pagination, retry, FOCUS normalization |

DORA Measurement: L1+L2 run on every commit (CI). L3 runs daily on READONLY (ops) profile.
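For illustration, an L1 snapshot test runs pure logic against a canned AWS-shaped payload, with no network and no containers; in the real suite the payload would come from a recorded responses file under tests/snapshot/. The function and fixture names below are invented.

```python
# Trimmed snapshot of a describe_instances-shaped payload (illustrative).
CANNED_DESCRIBE_INSTANCES = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0abc", "State": {"Name": "running"}},
            {"InstanceId": "i-0def", "State": {"Name": "stopped"}},
        ]}
    ]
}

def running_instance_ids(response: dict) -> list[str]:
    """Unit under test: pure logic over an AWS-shaped payload."""
    return [
        inst["InstanceId"]
        for res in response.get("Reservations", [])
        for inst in res.get("Instances", [])
        if inst["State"]["Name"] == "running"
    ]

def test_running_instance_ids_snapshot():
    # L1 gate: code logic + happy path, sub-second, zero AWS calls.
    assert running_instance_ids(CANNED_DESCRIBE_INSTANCES) == ["i-0abc"]
```

L2 would replay the same assertion against a LocalStack container, and L3 against a READONLY profile on a real account.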

2-Way Sync: stories.csv ↔ JIRA SPM

| Operation | Cadence | Tool | SSOT |
|---|---|---|---|
| Backlog → JIRA | Sprint planning | /sync:jira-push | stories.csv (96 rows, 13 columns) |
| JIRA → Backlog | Daily standup | /sync:jira-pull (batch) | Live JIRA SPM state |
| Runbook update | After story acceptance | /documentation:confluence-publish | pm-stories/*.md files (96 matching stories) |

Idempotency: Every JIRA sync uses rsid:{story_id} label. Same story pushed twice = no duplicate tickets.
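The create-or-update logic behind rsid idempotency can be sketched as follows; `jira` here is a hypothetical client wrapper with `search`/`create`/`update` methods, not the real /sync:jira-push implementation.

```python
def push_story(jira, story: dict) -> str:
    """Create-or-update a JIRA ticket, keyed on the rsid:{story_id} label."""
    rsid = f"rsid:{story['story_id']}"
    existing = jira.search(f'project = SPM AND labels = "{rsid}"')
    if existing:
        # Second push of the same story: update in place, no duplicate.
        jira.update(existing[0], fields=story)
        return existing[0]
    # First push: create the ticket and attach the idempotency label.
    return jira.create(fields=story, labels=[rsid])
```

Pushing the same stories.csv row twice therefore returns the same ticket key both times.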

Real Software Deliverables (2026-2030)

  • PyPI Package (runbooks) — CLI with 141 executable commands, Rich UX, FOCUS 1.2+ compliance
  • Click CLI — 141 commands across 8 groups: cert, cfat, finops, inventory, security, validation, vpc, operate
  • Pytest Suite — 6,298 tests (1,021 smoke + 5,277 unit/functional, 0 mocks post-theater-cleanup)
  • BDD Features — 13 .feature files (55 passed scenarios, 1 skipped)
  • Jupyter Notebooks — Per-persona analysis (executive/SRE/CTO/CloudOps) with Rich tables + email exports
  • Docusaurus Docs — API reference, CLI command catalogue, integration examples

Quality Targets

| Metric | Target | Status |
|---|---|---|
| MCP Cross-Validation | ≥99.5% accuracy (API vs CLI vs Console vs Notebook) | 99.5% (measured post-theater-cleanup 2026-03-20) |
| Test Coverage | ≥37% critical-path (ratchet: 37→40→45→50) | 38.41% (honest omit entries ~259 lines) |
| DORA (Runbooks) | DF 30d, LT <1d, CFR 0%, MTTR <1h | DF 38d, LT <1d, CFR 0%, MTTR 7s (Elite) |
| First-Time-Right (Stories) | ≥90% acceptance on first submission | 89% (1 revision avg) |
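The coverage ratchet (37→40→45→50) can be sketched as a CI check where the floor only ever moves up: a build fails below the current floor, and clearing the next step locks it in. The step values come from the target above; everything else is illustrative.

```python
RATCHET_STEPS = [37.0, 40.0, 45.0, 50.0]  # critical-path coverage floors

def check_ratchet(measured: float, current_floor: float) -> float:
    """Fail below the floor; advance the floor when the next step is cleared."""
    if measured < current_floor:
        raise SystemExit(
            f"coverage {measured:.2f}% fell below floor {current_floor}%"
        )
    next_steps = [s for s in RATCHET_STEPS if s > current_floor]
    if next_steps and measured >= next_steps[0]:
        return next_steps[0]  # lock in the new, higher floor
    return current_floor
```

At today's 38.41% measured coverage the floor stays at 37%; a build measuring 40%+ would ratchet it to 40%, and coverage can never silently regress.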

CxO Personas (Product Reporting)

| Persona | Focuses On | Output Format |
|---|---|---|
| CFO | Cost per feature (dev time × rate + cloud spend) | Executive summary: Story → Cost → ROI |
| CTO | Architecture soundness + scalability + time-to-value | Technical ADR + DORA trend chart |
| Product Manager | Customer value + prioritization + velocity trend | Burndown + velocity chart + backlog health |

Anti-Patterns Specific to Product Development

| Anti-Pattern | Example | Prevention |
|---|---|---|
| TASK_AS_STORY | "Refactor auth module" in stories.csv | Audit: grep for missing AC or KR fields → TODO.md |
| TESTING_THEATER | 122/122 CLI tests PASS but an import bug crashes at runtime | AST-based phantom import detector runs pre-commit |
| MOCK_CIRCULAR_VALIDATION | Mock asserts the mock's own value; code returns the mock's value | Rules layer: real inputs, assert on the computed result |
| THIN_STORY_INFLATION | CSV row has title + score but no ACs/deps/files | Audit: /speckit.clarify fills missing fields |
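A minimal version of the AST-based phantom import detector mentioned under TESTING_THEATER might look like this; the real pre-commit hook's behavior is assumed, not documented here.

```python
import ast
import importlib.util

def phantom_imports(source: str, local_modules: set[str]) -> list[str]:
    """Report imported top-level names that resolve to nothing.

    A "phantom import" lets a test suite show all-green while the CLI
    crashes at runtime. This sketch checks each top-level import against
    the given local-module set and the current environment.
    """
    phantoms = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module.split(".")[0]]
        else:
            continue  # skip relative imports and non-import nodes
        for name in names:
            if name not in local_modules and importlib.util.find_spec(name) is None:
                phantoms.append(name)  # importable nowhere -> runtime crash
    return phantoms
```

Run over every changed file pre-commit, this catches the import bug before 122/122 passing tests can hide it.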

References

  • Golden Path: docs/docs/golden-paths/product-management-lifecycle.md (Discover → Specify → Plan → Build → Measure → Iterate)
  • Framework: CLAUDE.md (root) → runbooks .adlc (submodule configuration)
  • INVEST Spec: docs/stories.csv (101 rows, 13 columns, SSOT)
  • Story Details: docs/pm-stories/*.md (101 files, one per story with ACs + tech notes)
  • Anti-Patterns: .claude/rules/governance/anti-patterns-catalog.md (TASK_AS_STORY, TESTING_THEATER specific sections)
  • DORA: .claude/rules/governance/operational-efficiency.md (measurement, target validation)