fix: addressed review comments from manjaroblack, round 1
bmad-core/data/test-levels-framework.md (new file, 146 lines)
@@ -0,0 +1,146 @@
# Test Levels Framework

Comprehensive guide for determining appropriate test levels (unit, integration, E2E) for different scenarios.

## Test Level Decision Matrix

### Unit Tests

**When to use:**

- Testing pure functions and business logic
- Algorithm correctness
- Input validation and data transformation
- Error handling in isolated components
- Complex calculations or state machines

**Characteristics:**

- Fast execution (immediate feedback)
- No external dependencies (DB, API, file system)
- Highly maintainable and stable
- Easy to debug failures

**Example scenarios:**

```yaml
unit_test:
  component: "PriceCalculator"
  scenario: "Calculate discount with multiple rules"
  justification: "Complex business logic with multiple branches"
  mock_requirements: "None - pure function"
```
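As a minimal sketch of the `unit_test` scenario above (the `calculate_discount` function and its rule format are hypothetical, invented for illustration):

```python
# Hypothetical PriceCalculator-style pure function and its unit tests.
# No mocks or external dependencies are needed, so feedback is immediate.

def calculate_discount(subtotal: float, rules: list) -> float:
    """Apply percentage and flat discount rules; cap the discount at the subtotal."""
    discount = 0.0
    for rule in rules:
        if rule["type"] == "percent":
            discount += subtotal * rule["value"] / 100
        elif rule["type"] == "flat":
            discount += rule["value"]
    return min(discount, subtotal)

def test_price_calculator_multiple_rules():
    rules = [{"type": "percent", "value": 10}, {"type": "flat", "value": 5}]
    assert calculate_discount(100.0, rules) == 15.0

def test_price_calculator_never_exceeds_subtotal():
    assert calculate_discount(20.0, [{"type": "flat", "value": 50}]) == 20.0
```

Because the function is pure, every branch of the rule logic can be exercised without any setup or teardown.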
### Integration Tests

**When to use:**

- Component interaction verification
- Database operations and transactions
- API endpoint contracts
- Service-to-service communication
- Middleware and interceptor behavior

**Characteristics:**

- Moderate execution time
- Tests component boundaries
- May use test databases or containers
- Validates system integration points

**Example scenarios:**

```yaml
integration_test:
  components: ["UserService", "AuthRepository"]
  scenario: "Create user with role assignment"
  justification: "Critical data flow between service and persistence"
  test_environment: "In-memory database"
```
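A minimal sketch of the `integration_test` scenario, using Python's built-in `sqlite3` as the in-memory database; the `UserService` class and its schema are assumptions for illustration, not part of the framework:

```python
import sqlite3

# Hypothetical UserService backed by a real (in-memory) database:
# the test exercises the service-to-persistence boundary, not mocks.

class UserService:
    def __init__(self, conn):
        self.conn = conn

    def create_user(self, email, role):
        cur = self.conn.execute(
            "INSERT INTO users (email, role) VALUES (?, ?)", (email, role)
        )
        return cur.lastrowid

    def get_role(self, user_id):
        row = self.conn.execute(
            "SELECT role FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

def test_create_user_with_role_assignment():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE, role TEXT)"
    )
    service = UserService(conn)
    user_id = service.create_user("a@example.com", "admin")
    assert service.get_role(user_id) == "admin"

test_create_user_with_role_assignment()
```

The in-memory database keeps execution fast while still validating real SQL, constraints, and the contract between service and repository.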
### End-to-End Tests

**When to use:**

- Critical user journeys
- Cross-system workflows
- Visual regression testing
- Compliance and regulatory requirements
- Final validation before release

**Characteristics:**

- Slower execution
- Tests complete workflows
- Requires full environment setup
- Most realistic but most brittle

**Example scenarios:**

```yaml
e2e_test:
  journey: "Complete checkout process"
  scenario: "User purchases with saved payment method"
  justification: "Revenue-critical path requiring full validation"
  environment: "Staging with test payment gateway"
```
## Test Level Selection Rules

### Favor Unit Tests When:

- Logic can be isolated
- No side effects involved
- Fast feedback needed
- High cyclomatic complexity

### Favor Integration Tests When:

- Testing persistence layer
- Validating service contracts
- Testing middleware/interceptors
- Component boundaries critical

### Favor E2E Tests When:

- User-facing critical paths
- Multi-system interactions
- Regulatory compliance scenarios
- Visual regression important

## Anti-patterns to Avoid

- E2E testing for business logic validation
- Unit testing framework behavior
- Integration testing third-party libraries
- Duplicate coverage across levels

## Duplicate Coverage Guard

**Before adding any test, check:**

1. Is this already tested at a lower level?
2. Can a unit test cover this instead of integration?
3. Can an integration test cover this instead of E2E?

**Coverage overlap is only acceptable when:**

- Testing different aspects (unit: logic, integration: interaction, e2e: user experience)
- Critical paths requiring defense in depth
- Regression prevention for previously broken functionality
## Test Naming Conventions

- Unit: `test_{component}_{scenario}`
- Integration: `test_{flow}_{interaction}`
- E2E: `test_{journey}_{outcome}`

## Test ID Format

`{EPIC}.{STORY}-{LEVEL}-{SEQ}`

Examples:

- `1.3-UNIT-001`
- `1.3-INT-002`
- `1.3-E2E-001`
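IDs in this format can be validated mechanically; a sketch (the regex and `parse_test_id` helper are assumptions for illustration, not part of the framework):

```python
import re

# Parse IDs of the form {EPIC}.{STORY}-{LEVEL}-{SEQ}, e.g. "1.3-UNIT-001".
TEST_ID = re.compile(
    r"^(?P<epic>\d+)\.(?P<story>\d+)-(?P<level>UNIT|INT|E2E)-(?P<seq>\d{3})$"
)

def parse_test_id(test_id: str) -> dict:
    """Return the ID's components, or raise ValueError for a malformed ID."""
    m = TEST_ID.match(test_id)
    if not m:
        raise ValueError(f"Invalid test ID: {test_id}")
    return m.groupdict()
```

A check like this in CI keeps traceability intact across stories, assessments, and gate files.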
bmad-core/data/test-priorities-matrix.md (new file, 172 lines)
@@ -0,0 +1,172 @@
# Test Priorities Matrix

Guide for prioritizing test scenarios based on risk, criticality, and business impact.

## Priority Levels

### P0 - Critical (Must Test)

**Criteria:**

- Revenue-impacting functionality
- Security-critical paths
- Data integrity operations
- Regulatory compliance requirements
- Previously broken functionality (regression prevention)

**Examples:**

- Payment processing
- Authentication/authorization
- User data creation/deletion
- Financial calculations
- GDPR/privacy compliance

**Testing Requirements:**

- Comprehensive coverage at all levels
- Both happy and unhappy paths
- Edge cases and error scenarios
- Performance under load

### P1 - High (Should Test)

**Criteria:**

- Core user journeys
- Frequently used features
- Features with complex logic
- Integration points between systems
- Features affecting user experience

**Examples:**

- User registration flow
- Search functionality
- Data import/export
- Notification systems
- Dashboard displays

**Testing Requirements:**

- Primary happy paths required
- Key error scenarios
- Critical edge cases
- Basic performance validation

### P2 - Medium (Nice to Test)

**Criteria:**

- Secondary features
- Admin functionality
- Reporting features
- Configuration options
- UI polish and aesthetics

**Examples:**

- Admin settings panels
- Report generation
- Theme customization
- Help documentation
- Analytics tracking

**Testing Requirements:**

- Happy path coverage
- Basic error handling
- Can defer edge cases

### P3 - Low (Test if Time Permits)

**Criteria:**

- Rarely used features
- Nice-to-have functionality
- Cosmetic issues
- Non-critical optimizations

**Examples:**

- Advanced preferences
- Legacy feature support
- Experimental features
- Debug utilities

**Testing Requirements:**

- Smoke tests only
- Can rely on manual testing
- Document known limitations

## Risk-Based Priority Adjustments

### Increase Priority When:

- High user impact (affects >50% of users)
- High financial impact (>$10K potential loss)
- Security vulnerability potential
- Compliance/legal requirements
- Customer-reported issues
- Complex implementation (>500 LOC)
- Multiple system dependencies

### Decrease Priority When:

- Feature flag protected
- Gradual rollout planned
- Strong monitoring in place
- Easy rollback capability
- Low usage metrics
- Simple implementation
- Well-isolated component
## Test Coverage by Priority

| Priority | Unit Coverage | Integration Coverage | E2E Coverage       |
| -------- | ------------- | -------------------- | ------------------ |
| P0       | >90%          | >80%                 | All critical paths |
| P1       | >80%          | >60%                 | Main happy paths   |
| P2       | >60%          | >40%                 | Smoke tests        |
| P3       | Best effort   | Best effort          | Manual only        |
## Priority Assignment Rules

1. **Start with business impact** - What happens if this fails?
2. **Consider probability** - How likely is failure?
3. **Factor in detectability** - Would we know if it failed?
4. **Account for recoverability** - Can we fix it quickly?
## Priority Decision Tree

```
Is it revenue-critical?
├─ YES → P0
└─ NO → Does it affect core user journey?
    ├─ YES → Is it high-risk?
    │   ├─ YES → P0
    │   └─ NO → P1
    └─ NO → Is it frequently used?
        ├─ YES → P1
        └─ NO → Is it customer-facing?
            ├─ YES → P2
            └─ NO → P3
```
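The decision tree above translates directly into a function; a sketch, assuming boolean answers to each question:

```python
def assign_priority(revenue_critical: bool, core_journey: bool, high_risk: bool,
                    frequently_used: bool, customer_facing: bool) -> str:
    """Walk the priority decision tree and return P0-P3."""
    if revenue_critical:
        return "P0"
    if core_journey:
        return "P0" if high_risk else "P1"
    if frequently_used:
        return "P1"
    return "P2" if customer_facing else "P3"
```

Encoding the tree this way makes priority assignment reproducible, e.g. `assign_priority(False, True, True, False, False)` yields `"P0"` for a high-risk core journey.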
## Test Execution Order

1. Execute P0 tests first (fail fast on critical issues)
2. Execute P1 tests second (core functionality)
3. Execute P2 tests if time permits
4. P3 tests only in full regression cycles
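The execution order reduces to a sort on priority; a minimal sketch (the test-record shape is an assumption):

```python
def execution_order(tests):
    """Sort tests so P0 runs first, then P1, P2, P3 (fail fast on critical issues)."""
    rank = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}
    return sorted(tests, key=lambda t: rank[t["priority"]])
```

`sorted` is stable, so tests within the same priority keep their original relative order.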
## Continuous Adjustment

Review and adjust priorities based on:

- Production incident patterns
- User feedback and complaints
- Usage analytics
- Test failure history
- Business priority changes
@@ -6,9 +6,9 @@ Quick NFR validation focused on the core four: security, performance, reliabilit
```yaml
required:
  - story_id: "{epic}.{story}" # e.g., "1.3"
  - story_path: "docs/stories/{epic}.{story}.*.md"

optional:
  - architecture_refs: "docs/architecture/*.md"
  - technical_preferences: "docs/technical-preferences.md"
@@ -18,7 +18,6 @@ optional:
## Purpose

Assess non-functional requirements for a story and generate:

1. YAML block for the gate file's `nfr_validation` section
2. Brief markdown assessment saved to `docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md`
@@ -27,7 +26,6 @@ Assess non-functional requirements for a story and generate:
### 0. Fail-safe for Missing Inputs

If story_path or story file can't be found:

- Still create assessment file with note: "Source story not found"
- Set all selected NFRs to CONCERNS with notes: "Target unknown / evidence missing"
- Continue with assessment to provide value
@@ -40,7 +38,7 @@ If story_path or story file can't be found:
```text
Which NFRs should I assess? (Enter numbers or press Enter for default)
[1] Security (default)
[2] Performance (default)
[3] Reliability (default)
[4] Maintainability (default)
[5] Usability
@@ -54,7 +52,6 @@ Which NFRs should I assess? (Enter numbers or press Enter for default)
### 2. Check for Thresholds

Look for NFR requirements in:

- Story acceptance criteria
- `docs/architecture/*.md` files
- `docs/technical-preferences.md`
@@ -75,7 +72,6 @@ No security requirements found. Required auth method?
### 3. Quick Assessment

For each selected NFR, check:

- Is there evidence it's implemented?
- Can we validate it?
- Are there obvious gaps?
@@ -90,7 +86,7 @@ Generate ONLY for NFRs actually assessed (no placeholders):
# Gate YAML (copy/paste):
nfr_validation:
  _assessed: [security, performance, reliability, maintainability]
  security:
    status: CONCERNS
    notes: "No rate limiting on auth endpoints"
  performance:
@@ -107,7 +103,7 @@ nfr_validation:
## Deterministic Status Rules

- **FAIL**: Any selected NFR has critical gap or target clearly not met
- **CONCERNS**: No FAILs, but any NFR is unknown/partial/missing evidence
- **PASS**: All selected NFRs meet targets with evidence
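These rules reduce to a small precedence function; a sketch (the status strings are assumed to match the gate YAML values):

```python
def gate_status(nfr_statuses):
    """Aggregate per-NFR statuses: FAIL beats CONCERNS beats PASS."""
    if "FAIL" in nfr_statuses:
        return "FAIL"
    if "CONCERNS" in nfr_statuses:
        return "CONCERNS"
    return "PASS"
```

Because the rules are deterministic, the same evidence always yields the same gate decision.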

## Quality Score Calculation
@@ -127,21 +123,18 @@ If `technical-preferences.md` defines custom weights, use those instead.
```markdown
# NFR Assessment: {epic}.{story}

Date: {date}
Reviewer: Quinn

<!-- Note: Source story not found (if applicable) -->

## Summary

- Security: CONCERNS - Missing rate limiting
- Performance: PASS - Meets <200ms requirement
- Reliability: PASS - Proper error handling
- Maintainability: CONCERNS - Test coverage below target

## Critical Issues

1. **No rate limiting** (Security)
   - Risk: Brute force attacks possible
   - Fix: Add rate limiting middleware to auth endpoints
@@ -151,7 +144,6 @@ Reviewer: Quinn
   - Fix: Add tests for uncovered branches

## Quick Wins

- Add rate limiting: ~2 hours
- Increase test coverage: ~4 hours
- Add performance monitoring: ~1 hour
@@ -160,7 +152,6 @@ Reviewer: Quinn
## Output 3: Story Update Line

**End with this line for the review task to quote:**

```
NFR assessment: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
```
@@ -168,7 +159,6 @@ NFR assessment: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
## Output 4: Gate Integration Line

**Always print at the end:**

```
Gate NFR block ready → paste into docs/qa/gates/{epic}.{story}-{slug}.yml under nfr_validation
```
@@ -176,82 +166,66 @@ Gate NFR block ready → paste into docs/qa/gates/{epic}.{story}-{slug}.yml unde
## Assessment Criteria

### Security

**PASS if:**

- Authentication implemented
- Authorization enforced
- Input validation present
- No hardcoded secrets

**CONCERNS if:**

- Missing rate limiting
- Weak encryption
- Incomplete authorization

**FAIL if:**

- No authentication
- Hardcoded credentials
- SQL injection vulnerabilities

### Performance

**PASS if:**

- Meets response time targets
- No obvious bottlenecks
- Reasonable resource usage

**CONCERNS if:**

- Close to limits
- Missing indexes
- No caching strategy

**FAIL if:**

- Exceeds response time limits
- Memory leaks
- Unoptimized queries

### Reliability

**PASS if:**

- Error handling present
- Graceful degradation
- Retry logic where needed

**CONCERNS if:**

- Some error cases unhandled
- No circuit breakers
- Missing health checks

**FAIL if:**

- No error handling
- Crashes on errors
- No recovery mechanisms

### Maintainability

**PASS if:**

- Test coverage meets target
- Code well-structured
- Documentation present

**CONCERNS if:**

- Test coverage below target
- Some code duplication
- Missing documentation

**FAIL if:**

- No tests
- Highly coupled code
- No documentation
@@ -309,7 +283,7 @@ maintainability:
1. **Functional Suitability**: Completeness, correctness, appropriateness
2. **Performance Efficiency**: Time behavior, resource use, capacity
3. **Compatibility**: Co-existence, interoperability
4. **Usability**: Learnability, operability, accessibility
5. **Reliability**: Maturity, availability, fault tolerance
6. **Security**: Confidentiality, integrity, authenticity
@@ -317,7 +291,6 @@ maintainability:
8. **Portability**: Adaptability, installability

Use these when assessing beyond the core four.

</details>

<details>
@@ -339,5 +312,4 @@ performance_deep_dive:
  max_rps: 150
  breaking_point: 200 rps
```

</details>
@@ -7,7 +7,7 @@ Perform a comprehensive test architecture review with quality gate decision. Thi
```yaml
required:
  - story_id: "{epic}.{story}" # e.g., "1.3"
-  - story_path: "docs/stories/{epic}.{story}.*.md"
+  - story_path: "{devStoryLocation}/{epic}.{story}.*.md" # Path from core-config.yaml
  - story_title: "{title}" # If missing, derive from story file H1
  - story_slug: "{slug}" # If missing, derive from title (lowercase, hyphenated)
```
@@ -171,6 +171,8 @@ Gate: {STATUS} → docs/qa/gates/{epic}.{story}-{slug}.yml
Risk profile: docs/qa/assessments/{epic}.{story}-risk-{YYYYMMDD}.md
NFR assessment: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
+
+# Note: Paths should reference core-config.yaml for custom configurations

### Recommended Status

[✓ Ready for Done] / [✗ Changes Required - See unchecked items above]
@@ -182,7 +184,7 @@ NFR assessment: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
**Template and Directory:**

- Render from `templates/qa-gate-tmpl.yaml`
-- Create `docs/qa/gates/` directory if missing
+- Create `docs/qa/gates/` directory if missing (or configure in core-config.yaml)
- Save to: `docs/qa/gates/{epic}.{story}-{slug}.yml`

Gate file structure:
@@ -7,7 +7,7 @@ Create comprehensive test scenarios with appropriate test level recommendations
```yaml
required:
  - story_id: "{epic}.{story}" # e.g., "1.3"
-  - story_path: "docs/stories/{epic}.{story}.*.md"
+  - story_path: "{devStoryLocation}/{epic}.{story}.*.md" # Path from core-config.yaml
  - story_title: "{title}" # If missing, derive from story file H1
  - story_slug: "{slug}" # If missing, derive from title (lowercase, hyphenated)
```
@@ -16,289 +16,69 @@ required:
|
|||||||
|
|
||||||
Design a complete test strategy that identifies what to test, at which level (unit/integration/e2e), and why. This ensures efficient test coverage without redundancy while maintaining appropriate test boundaries.
|
Design a complete test strategy that identifies what to test, at which level (unit/integration/e2e), and why. This ensures efficient test coverage without redundancy while maintaining appropriate test boundaries.
|
||||||
|
|
||||||
## Test Level Decision Framework
|
## Dependencies
|
||||||
|
|
||||||
### Unit Tests
|
|
||||||
|
|
||||||
**When to use:**
|
|
||||||
|
|
||||||
- Testing pure functions and business logic
|
|
||||||
- Algorithm correctness
|
|
||||||
- Input validation and data transformation
|
|
||||||
- Error handling in isolated components
|
|
||||||
- Complex calculations or state machines
|
|
||||||
|
|
||||||
**Characteristics:**
|
|
||||||
|
|
||||||
- Fast execution (immediate feedback)
|
|
||||||
- No external dependencies (DB, API, file system)
|
|
||||||
- Highly maintainable and stable
|
|
||||||
- Easy to debug failures
|
|
||||||
|
|
||||||
**Example scenarios:**
|
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
unit_test:
|
data:
|
||||||
component: "PriceCalculator"
|
- test-levels-framework.md # Unit/Integration/E2E decision criteria
|
||||||
scenario: "Calculate discount with multiple rules"
|
- test-priorities-matrix.md # P0/P1/P2/P3 classification system
|
||||||
justification: "Complex business logic with multiple branches"
|
|
||||||
mock_requirements: "None - pure function"
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Integration Tests
|
## Process
|
||||||
|
|
||||||
**When to use:**
|
|
||||||
|
|
||||||
- Testing component interactions
|
|
||||||
- Database operations and queries
|
|
||||||
- API endpoint behavior
|
|
||||||
- Service layer orchestration
|
|
||||||
- External service integration (with test doubles)
|
|
||||||
|
|
||||||
**Characteristics:**
|
|
||||||
|
|
||||||
- Moderate execution time
|
|
||||||
- May use test databases or containers
|
|
||||||
- Tests multiple components together
|
|
||||||
- Validates contracts between components
|
|
||||||
|
|
||||||
**Example scenarios:**
|
|
||||||
|
|
||||||
```yaml
|
|
||||||
integration_test:
|
|
||||||
components: ["UserService", "UserRepository", "Database"]
|
|
||||||
scenario: "Create user with duplicate email check"
|
|
||||||
justification: "Tests transaction boundaries and constraint handling"
|
|
||||||
test_doubles: "Mock email service, real test database"
|
|
||||||
```
|
|
||||||
|
|
||||||
### End-to-End Tests
|
|
||||||
|
|
||||||
**When to use:**
|
|
||||||
|
|
||||||
- Critical user journeys
|
|
||||||
- Cross-system workflows
|
|
||||||
- UI interaction flows
|
|
||||||
- Full stack validation
|
|
||||||
- Production-like scenario testing
|
|
||||||
|
|
||||||
**Characteristics:**
|
|
||||||
|
|
||||||
- Keep under 90 seconds per test
|
|
||||||
- Tests complete user scenarios
|
|
||||||
- Uses real or production-like environment
|
|
||||||
- Higher maintenance cost
|
|
||||||
- More prone to flakiness
|
|
||||||
|
|
||||||
**Example scenarios:**
|
|
||||||
|
|
||||||
```yaml
|
|
||||||
e2e_test:
|
|
||||||
flow: "Complete purchase flow"
|
|
||||||
scenario: "User browses, adds to cart, and completes checkout"
|
|
||||||
justification: "Critical business flow requiring full stack validation"
|
|
||||||
environment: "Staging with test payment gateway"
|
|
||||||
```
|
|
||||||
|
|
||||||
## Test Design Process
|
|
||||||
|
|
||||||
### 1. Analyze Story Requirements
|
### 1. Analyze Story Requirements
|
||||||
|
|
||||||
Break down each acceptance criterion into testable scenarios:
|
Break down each acceptance criterion into testable scenarios. For each AC:
|
||||||
|
|
||||||
```yaml
|
- Identify the core functionality to test
|
||||||
acceptance_criterion: "User can reset password via email"
|
- Determine data variations needed
|
||||||
test_scenarios:
|
- Consider error conditions
|
||||||
- level: unit
|
- Note edge cases
|
||||||
what: "Password validation rules"
|
|
||||||
why: "Complex regex and business rules"
|
|
||||||
|
|
||||||
- level: integration
|
### 2. Apply Test Level Framework
|
||||||
what: "Password reset token generation and storage"
|
|
||||||
why: "Database interaction with expiry logic"
|
|
||||||
|
|
||||||
- level: integration
|
**Reference:** Load `test-levels-framework.md` for detailed criteria
|
||||||
what: "Email service integration"
|
|
||||||
why: "External service with retry logic"
|
|
||||||
|
|
||||||
- level: e2e
|
Quick rules:
|
||||||
what: "Complete password reset flow"
|
|
||||||
why: "Critical security flow needing full validation"
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Apply Test Level Heuristics
|
- **Unit**: Pure logic, algorithms, calculations
|
||||||
|
- **Integration**: Component interactions, DB operations
|
||||||
|
- **E2E**: Critical user journeys, compliance
|
||||||
|
|
||||||
Use these rules to determine appropriate test levels:
|
### 3. Assign Priorities
|
||||||
|
|
||||||
```markdown
|
**Reference:** Load `test-priorities-matrix.md` for classification
|
||||||
## Test Level Selection Rules
|
|
||||||
|
|
||||||
### Favor Unit Tests When:
|
Quick priority assignment:
|
||||||
|
|
||||||
- Logic can be isolated
|
- **P0**: Revenue-critical, security, compliance
|
||||||
- No side effects involved
|
- **P1**: Core user journeys, frequently used
|
||||||
- Fast feedback needed
|
- **P2**: Secondary features, admin functions
|
||||||
- High cyclomatic complexity
|
- **P3**: Nice-to-have, rarely used
|

### 4. Design Test Scenarios

For each identified test need, create:

```yaml
test_scenario:
  id: "{epic}.{story}-{LEVEL}-{SEQ}"
  requirement: "AC reference"
  priority: P0|P1|P2|P3
  level: unit|integration|e2e
  description: "What is being tested"
  justification: "Why this level was chosen"
  mitigates_risks: ["RISK-001"] # If risk profile exists
```
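The `id` field follows the `{epic}.{story}-{LEVEL}-{SEQ}` convention (e.g. `1.3-UNIT-001`, `1.3-INT-002`, `1.3-E2E-001`). A sketch of a validator for that convention, assuming the three level tokens shown in the examples:

```python
import re

# {epic}.{story}-{LEVEL}-{SEQ}, e.g. "1.3-UNIT-001"
TEST_ID = re.compile(r"^(?P<epic>\d+)\.(?P<story>\d+)-(?P<level>UNIT|INT|E2E)-(?P<seq>\d{3})$")

def is_valid_test_id(test_id: str) -> bool:
    """Check a scenario id against the naming convention (illustrative)."""
    return TEST_ID.match(test_id) is not None
```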

### 5. Validate Coverage

Ensure:

- Every AC has at least one test
- No duplicate coverage across levels
- Critical paths have multiple levels
- Risk mitigations are addressed
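The first check, every AC having at least one test, amounts to a set difference. A minimal sketch, assuming each scenario is a dict with a hypothetical `ac` field naming the criterion it covers:

```python
def coverage_gaps(acceptance_criteria, scenarios):
    """Return AC ids with no covering scenario; a non-empty result fails validation."""
    covered = {s["ac"] for s in scenarios}
    return sorted(ac for ac in acceptance_criteria if ac not in covered)
```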

## Outputs

**Save to:** `docs/qa/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md`

```markdown
# Test Design: Story {epic}.{story}

Date: {date}
Designer: Quinn (Test Architect)

## Test Strategy Overview

- Unit tests: Y (A%)
- Integration tests: Z (B%)
- E2E tests: W (C%)
- Priority distribution: P0: X, P1: Y, P2: Z

## Test Scenarios by Acceptance Criteria

### AC1: {description}

#### Scenarios

| ID           | Level       | Priority | Test                      | Justification            |
| ------------ | ----------- | -------- | ------------------------- | ------------------------ |
| 1.3-UNIT-001 | Unit        | P0       | Validate input format     | Pure validation logic    |
| 1.3-INT-001  | Integration | P0       | Service processes request | Multi-component flow     |
| 1.3-E2E-001  | E2E         | P1       | User completes journey    | Critical path validation |

[Continue for all ACs...]

## Risk Coverage

[Map test scenarios to identified risks if risk profile exists]

## Recommended Execution Order

1. P0 Unit tests (fail fast)
2. P0 Integration tests
3. P0 E2E tests
4. P1 tests in order
5. P2+ as time permits
```

### Output 2: Gate YAML Block

Generate for inclusion in quality gate:

```yaml
test_design:
  scenarios_total: X
  by_level:
    unit: Y
    integration: Z
    e2e: W
  by_priority:
    p0: A
    p1: B
    p2: C
  coverage_gaps: [] # List any ACs without tests
```
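The placeholder counts (X, Y, Z, W, A, B, C) are simple tallies over the designed scenarios. A sketch of producing them, assuming each scenario dict carries the `level` and `priority` fields from the scenario schema:

```python
from collections import Counter

def summarize(scenarios):
    """Tally the gate-block counts from designed scenarios (illustrative)."""
    levels = Counter(s["level"] for s in scenarios)
    priorities = Counter(s["priority"] for s in scenarios)
    return {
        "scenarios_total": len(scenarios),
        "by_level": {"unit": levels["unit"], "integration": levels["integration"], "e2e": levels["e2e"]},
        "by_priority": {"p0": priorities["P0"], "p1": priorities["P1"], "p2": priorities["P2"]},
    }
```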

### Output 3: Trace References

Print for use by trace-requirements task:

```text
Test design matrix: docs/qa/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md
P0 tests identified: {count}
```

## Quality Checklist

Before finalizing, verify:

- [ ] Every AC has test coverage
- [ ] Test levels are appropriate (not over-testing)
- [ ] No duplicate coverage across levels
- [ ] Priorities align with business risk
- [ ] Test IDs follow naming convention
- [ ] Scenarios are atomic and independent

## Key Principles

- **Shift left**: Prefer unit over integration, integration over E2E
- **Risk-based**: Focus on what could go wrong
- **Efficient coverage**: Test once at the right level
- **Maintainability**: Consider long-term test maintenance
- **Fast feedback**: Quick tests run first

  must_fix: []
  monitor: []

# Examples section using block scalars for clarity
examples:
  with_issues: |
    top_issues:
      - id: "SEC-001"
        severity: high # ONLY: low|medium|high
        finding: "No rate limiting on login endpoint"
        suggested_action: "Add rate limiting middleware before production"
      - id: "TEST-001"
        severity: medium
        finding: "Missing integration tests for auth flow"
        suggested_action: "Add test coverage for critical paths"

  when_waived: |
    waiver:
      active: true
      reason: "Accepted for MVP release - will address in next sprint"
      approved_by: "Product Owner"

# ============ Optional Extended Fields ============
# Uncomment and use if your team wants more detail

optional_fields_examples:
  quality_and_expiry: |
    quality_score: 75 # 0-100 (optional scoring)
    expires: "2025-01-26T00:00:00Z" # Optional gate freshness window

  evidence: |
    evidence:
      tests_reviewed: 15
      risks_identified: 3
      trace:
        ac_covered: [1, 2, 3] # AC numbers with test coverage
        ac_gaps: [4] # AC numbers lacking coverage

  nfr_validation: |
    nfr_validation:
      security: { status: CONCERNS, notes: "Rate limiting missing" }
      performance: { status: PASS, notes: "" }
      reliability: { status: PASS, notes: "" }
      maintainability: { status: PASS, notes: "" }

  history: |
    history: # Append-only audit trail
      - at: "2025-01-12T10:00:00Z"
        gate: FAIL
        note: "Initial review - missing tests"
      - at: "2025-01-12T15:00:00Z"
        gate: CONCERNS
        note: "Tests added but rate limiting still missing"

  risk_summary: |
    risk_summary: # From risk-profile task
      totals:
        critical: 0
        high: 0
        medium: 0
        low: 0
      # 'highest' is emitted only when risks exist
      recommendations:
        must_fix: []
        monitor: []

  recommendations: |
    recommendations:
      immediate: # Must fix before production
        - action: "Add rate limiting to auth endpoints"
          refs: ["api/auth/login.ts:42-68"]
      future: # Can be addressed later
        - action: "Consider caching for better performance"
          refs: ["services/data.service.ts"]
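The gate's `history` field is an append-only audit trail: new entries are added, existing ones are never rewritten. A minimal sketch of that discipline, using a hypothetical helper over a plain dict representation of the gate:

```python
from datetime import datetime, timezone

def append_history(gate: dict, decision: str, note: str) -> None:
    """Append an audit-trail entry to the gate; prior entries are left untouched."""
    gate.setdefault("history", []).append({
        "at": datetime.now(timezone.utc).isoformat(),
        "gate": decision,   # e.g. FAIL, CONCERNS, PASS
        "note": note,
    })
```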