nfr-assess
Quick NFR validation focused on the core four: security, performance, reliability, maintainability.
Inputs
required:
  - story_id: '{epic}.{story}' # e.g., "1.3"
  - story_path: 'docs/stories/{epic}.{story}.*.md'
optional:
  - architecture_refs: 'docs/architecture/*.md'
  - technical_preferences: 'docs/technical-preferences.md'
  - acceptance_criteria: From story file
Purpose
Assess non-functional requirements for a story and generate:
- YAML block for the gate file's nfr_validation section
- Brief markdown assessment saved to docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
Process
0. Fail-safe for Missing Inputs
If story_path is missing or the story file can't be found:
- Still create assessment file with note: "Source story not found"
- Set all selected NFRs to CONCERNS with notes: "Target unknown / evidence missing"
- Continue with assessment to provide value
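For the default core four, the resulting gate block is a sketch along these lines (illustrative; the notes wording follows the rule above):
nfr_validation:
  _assessed: [security, performance, reliability, maintainability]
  security:
    status: CONCERNS
    notes: 'Target unknown / evidence missing'
  # performance, reliability, and maintainability repeat the same
  # CONCERNS status and notes when the source story is missing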
1. Elicit Scope
Interactive mode: Ask which NFRs to assess
Non-interactive mode: Default to the core four (security, performance, reliability, maintainability)
Which NFRs should I assess? (Enter numbers or press Enter for default)
[1] Security (default)
[2] Performance (default)
[3] Reliability (default)
[4] Maintainability (default)
[5] Usability
[6] Compatibility
[7] Portability
[8] Functional Suitability
> [Enter for 1-4]
2. Check for Thresholds
Look for NFR requirements in:
- Story acceptance criteria
- docs/architecture/*.md files
- docs/technical-preferences.md
Interactive mode: Ask for missing thresholds
Non-interactive mode: Mark as CONCERNS with "Target unknown"
No performance requirements found. What's your target response time?
> 200ms for API calls
No security requirements found. Required auth method?
> JWT with refresh tokens
Unknown targets policy: If a target can't be found in the docs and isn't provided when asked, mark that NFR's status as CONCERNS with notes: "Target unknown"
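For example, if no response-time target is found and none is supplied when asked, the performance entry is recorded as (sketch):
performance:
  status: CONCERNS
  notes: 'Target unknown'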
3. Quick Assessment
For each selected NFR, check:
- Is there evidence it's implemented?
- Can we validate it?
- Are there obvious gaps?
4. Generate Outputs
Output 1: Gate YAML Block
Generate ONLY for NFRs actually assessed (no placeholders):
# Gate YAML (copy/paste):
nfr_validation:
  _assessed: [security, performance, reliability, maintainability]
  security:
    status: CONCERNS
    notes: 'No rate limiting on auth endpoints'
  performance:
    status: PASS
    notes: 'Response times < 200ms verified'
  reliability:
    status: PASS
    notes: 'Error handling and retries implemented'
  maintainability:
    status: CONCERNS
    notes: 'Test coverage at 65%, target is 80%'
Deterministic Status Rules
- FAIL: Any selected NFR has critical gap or target clearly not met
- CONCERNS: No FAILs, but any NFR is unknown/partial/missing evidence
- PASS: All selected NFRs meet targets with evidence
Quality Score Calculation
quality_score = 100
- 20 for each FAIL attribute
- 10 for each CONCERNS attribute
Floor at 0, ceiling at 100
If technical-preferences.md defines custom weights, use those instead.
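Worked example, applying the status rules and default weights to the gate block in Output 1 above: no attribute is FAIL and two are CONCERNS (security, maintainability), so the overall NFR status resolves to CONCERNS and
quality_score = 100 - (20 × 0) - (10 × 2) = 80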
Output 2: Brief Assessment Report
ALWAYS save to: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
# NFR Assessment: {epic}.{story}
Date: {date}
Reviewer: Quinn
<!-- Note: Source story not found (if applicable) -->
## Summary
- Security: CONCERNS - Missing rate limiting
- Performance: PASS - Meets <200ms requirement
- Reliability: PASS - Proper error handling
- Maintainability: CONCERNS - Test coverage below target
## Critical Issues
1. **No rate limiting** (Security)
- Risk: Brute force attacks possible
- Fix: Add rate limiting middleware to auth endpoints
2. **Test coverage 65%** (Maintainability)
- Risk: Untested code paths
- Fix: Add tests for uncovered branches
## Quick Wins
- Add rate limiting: ~2 hours
- Increase test coverage: ~4 hours
- Add performance monitoring: ~1 hour
Output 3: Story Update Line
End with this line for the review task to quote:
NFR assessment: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
Output 4: Gate Integration Line
Always print at the end:
Gate NFR block ready → paste into docs/qa/gates/{epic}.{story}-{slug}.yml under nfr_validation
Assessment Criteria
Security
PASS if:
- Authentication implemented
- Authorization enforced
- Input validation present
- No hardcoded secrets
CONCERNS if:
- Missing rate limiting
- Weak encryption
- Incomplete authorization
FAIL if:
- No authentication
- Hardcoded credentials
- SQL injection vulnerabilities
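For instance, a hardcoded-credential finding maps to a FAIL entry along these lines (illustrative wording only):
security:
  status: FAIL
  notes: 'Hardcoded database credentials committed to the repository'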
Performance
PASS if:
- Meets response time targets
- No obvious bottlenecks
- Reasonable resource usage
CONCERNS if:
- Close to limits
- Missing indexes
- No caching strategy
FAIL if:
- Exceeds response time limits
- Memory leaks
- Unoptimized queries
Reliability
PASS if:
- Error handling present
- Graceful degradation
- Retry logic where needed
CONCERNS if:
- Some error cases unhandled
- No circuit breakers
- Missing health checks
FAIL if:
- No error handling
- Crashes on errors
- No recovery mechanisms
Maintainability
PASS if:
- Test coverage meets target
- Code well-structured
- Documentation present
CONCERNS if:
- Test coverage below target
- Some code duplication
- Missing documentation
FAIL if:
- No tests
- Highly coupled code
- No documentation
Quick Reference
What to Check
security:
  - Authentication mechanism
  - Authorization checks
  - Input validation
  - Secret management
  - Rate limiting
performance:
  - Response times
  - Database queries
  - Caching usage
  - Resource consumption
reliability:
  - Error handling
  - Retry logic
  - Circuit breakers
  - Health checks
  - Logging
maintainability:
  - Test coverage
  - Code structure
  - Documentation
  - Dependencies
Key Principles
- Focus on the core four NFRs by default
- Quick assessment, not deep analysis
- Gate-ready output format
- Brief, actionable findings
- Skip what doesn't apply
- Deterministic status rules for consistency
- Unknown targets → CONCERNS, not guesses
Appendix: ISO 25010 Reference
All 8 Quality Characteristics
- Functional Suitability: Completeness, correctness, appropriateness
- Performance Efficiency: Time behavior, resource use, capacity
- Compatibility: Co-existence, interoperability
- Usability: Learnability, operability, accessibility
- Reliability: Maturity, availability, fault tolerance
- Security: Confidentiality, integrity, authenticity
- Maintainability: Modularity, reusability, testability
- Portability: Adaptability, installability
Use these when assessing beyond the core four.
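If a characteristic beyond the core four is selected in step 1, it is added to _assessed and reported with the same status/notes shape, for example (illustrative):
nfr_validation:
  _assessed: [security, performance, reliability, maintainability, usability]
  usability:
    status: CONCERNS
    notes: 'No accessibility checks on the new form flow'
  # ...core four entries as in Output 1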
Example: Deep Performance Analysis
performance_deep_dive:
  response_times:
    p50: 45ms
    p95: 180ms
    p99: 350ms
  database:
    slow_queries: 2
    missing_indexes: ['users.email', 'orders.user_id']
  caching:
    hit_rate: 0%
    recommendation: 'Add Redis for session data'
  load_test:
    max_rps: 150
    breaking_point: 200 rps