
Quality Baseline Assessment Template

Extracted from CloudOps-Runbooks v1.1.4 validation. This template shows HOW to conduct an honest quality assessment — distinguishing what works from what doesn't.

Purpose

Quality assessments that confirm everything is working are useless. The value of an assessment is in finding what's broken. This template provides the structure for an honest baseline audit.

Assessment Structure

Step 1: Categorise Claims vs Reality

For every capability claimed in documentation or status reports, validate independently:

| Capability | Claimed Status | Actual Validation | Evidence |
|---|---|---|---|
| Module X | "Complete" | `import module_x` → ImportError | Terminal output |
| CLI Command Y | "Operational" | `tool y --help` → works, `tool y run` → fails | Command output |
| Test Coverage | ">90%" | `pytest --cov` → 0% (tests can't execute) | Test runner output |
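The independent validation in Step 1 can be sketched as a small script that attempts each claimed capability and records what actually happened. This is a minimal illustration, not part of the original runbook; `module_x` and the demo command are the hypothetical examples from the table above:

```python
import importlib
import subprocess
import sys

def validate_import(module_name):
    """Try to import a claimed-complete module; return (status, evidence)."""
    try:
        importlib.import_module(module_name)
        return ("pass", f"import {module_name} succeeded")
    except ImportError as exc:  # ModuleNotFoundError is a subclass
        return ("fail", f"import {module_name} -> {exc!r}")

def validate_command(argv):
    """Run the real CLI command (not just --help) and capture its output."""
    result = subprocess.run(argv, capture_output=True, text=True)
    status = "pass" if result.returncode == 0 else "fail"
    return (status, result.stdout + result.stderr)

# A claimed-complete module that does not exist fails honestly:
print(validate_import("module_x"))
# A command validated by actually executing it:
print(validate_command([sys.executable, "--version"]))
```

The point of the sketch is that every row of the table is backed by a real exit code or traceback, never by the claim itself.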

Step 2: Classify Findings

Verified Achievements

Things that demonstrably work when tested independently:

  • Package published and installable
  • CLI help text renders correctly
  • Core module imports succeed

False Completion Claims

Things marked complete that fail basic validation:

  • Modules that fail to import
  • Commands that show help but crash on execution
  • Coverage claims without executable tests

Unsubstantiated Claims

Things that cannot be validated either way due to broken infrastructure:

  • Accuracy claims when test framework is non-functional
  • Performance claims without benchmark evidence
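The three buckets above can be made mechanical. The sketch below is one possible mapping (the fourth outcome, work that was never claimed complete, falls outside the three buckets and is labelled here as plain open work):

```python
def classify(claimed_complete, validation_ran, validation_passed):
    """Map one finding to a Step 2 classification bucket."""
    if not validation_ran:
        # Broken infrastructure: the claim cannot be checked either way.
        return "unsubstantiated claim"
    if validation_passed:
        return "verified achievement"
    if claimed_complete:
        # Marked done, fails basic validation.
        return "false completion claim"
    return "open work"

print(classify(claimed_complete=True, validation_ran=False,
               validation_passed=False))  # -> unsubstantiated claim
```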

Step 3: Risk Assessment

| Risk Category | Finding | Business Impact |
|---|---|---|
| Quality Assurance | Test coverage at 0% vs claimed 90% | Cannot verify any functionality claim |
| User Experience | CLI help describes non-functional features | User trust erosion |
| Documentation | Docs describe capabilities that don't exist | Stakeholder expectation mismatch |

Step 4: Prioritised Remediation

Order fixes by dependency chain, not by severity alone:

  1. Infrastructure first: Fix test framework so you CAN validate (blocks everything else)
  2. Import chain: Fix module imports so tests CAN run
  3. Core functionality: Fix the features that users actually call
  4. Documentation alignment: Update docs to match actual state

Key Principles

Evidence Over Assertions

Every finding must include:

  • The exact command or test run
  • The actual output (not a description of it)
  • A file path or screenshot as evidence
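The evidence bullets above can be captured in one step. A minimal sketch (the `evidence.json` file name is an assumption, not part of the original methodology):

```python
import json
import pathlib
import subprocess
import sys

def capture_evidence(argv, log_path="evidence.json"):
    """Run a command and persist the exact invocation plus raw output."""
    result = subprocess.run(argv, capture_output=True, text=True)
    record = {
        "command": " ".join(argv),       # the exact command that was run
        "returncode": result.returncode,
        "stdout": result.stdout,         # the actual output, verbatim
        "stderr": result.stderr,
    }
    # The log file is the evidence artifact referenced in the finding.
    pathlib.Path(log_path).write_text(json.dumps(record, indent=2))
    return record

rec = capture_evidence([sys.executable, "-c", "print('hello')"])
print(rec["returncode"], rec["stdout"].strip())
```

Storing the raw output, rather than a paraphrase of it, is what lets a later reader re-check the finding without trusting the assessor.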

Honest Assessment Over Comfortable Assessment

The assessment is only useful if it's honest:

  • "0% functional test coverage" is more useful than "tests exist but need attention"
  • "Module fails to import" is more useful than "module needs minor fixes"

Remediation Options Must Be Realistic

Provide at least two options:

| Option | Description | When to Choose |
|---|---|---|
| Remediate | Fix the gaps so reality matches the claims | When the capability is genuinely needed |
| Status Reset | Update documentation to match reality | When honest expectations matter more than features |
| Rebuild | Replace the implementation rather than patch it | Only when gaps are systemic and remediation cost exceeds rebuild cost |

Template Checklist

For each module/capability:

  • Can the module be imported? (import X test)
  • Do CLI commands execute? (not just --help, but actual operation)
  • Do tests run? (not just exist — actually execute and report results)
  • Does claimed coverage match measured coverage?
  • Do documented features correspond to working implementations?
  • Are any metrics or accuracy claims backed by evidence?
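The first two checklist items lend themselves to automation. A sketch of a per-module check under the same assumptions as earlier examples (real modules and commands would be substituted in):

```python
import importlib
import subprocess
import sys

def run_checklist(module_name, cli_argv):
    """Run the baseline checks for one module; returns check -> bool."""
    results = {}
    try:
        importlib.import_module(module_name)
        results["imports"] = True
    except ImportError:
        results["imports"] = False
    # Execute the real operation, not just --help (see anti-patterns below).
    proc = subprocess.run(cli_argv, capture_output=True, text=True)
    results["cli_executes"] = proc.returncode == 0
    return results

print(run_checklist("json", [sys.executable, "-c", "print('run')"]))
```

Coverage and documentation checks resist full automation, but the import and execution checks catch the cheapest and most common false completion claims.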

Anti-Patterns in Quality Assessment

| Anti-Pattern | Description |
|---|---|
| Confirmation bias | Only testing things you expect to work |
| Help-text validation | Accepting `--help` output as proof of functionality |
| Test existence vs execution | Counting test files without running them |
| Metric inflation | Reporting test function count instead of pass rate |
| Aspirational documentation | Docs describing planned features as current capabilities |

Origin: CloudOps-Runbooks v1.1.4 quality audit. Elevated as a reusable methodology because honest assessment is a repeatable practice, not a one-time event.