doc status updates
# CI Pipeline and Burn-In Strategy

- Stage jobs: install and cache dependencies once, run `test-changed` for quick feedback, then shard full suites with `fail-fast: false` so evidence isn’t lost.
- Re-run changed specs 5–10x (burn-in) before merging to flush flakes; fail the pipeline on the first inconsistent run (sketch below).
- Upload artifacts on failure (videos, traces, HAR) and keep retry counts explicit—hidden retries hide instability.
- Use `wait-on` for app startup, enforce time budgets (<10 min per job), and document required secrets alongside workflows.
- Mirror CI scripts locally (`npm run test:ci`, `scripts/burn-in-changed.sh`) so devs reproduce pipeline behaviour exactly.
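
A minimal sketch of the burn-in step in TypeScript, assuming a Playwright runner; the environment variable names and diff base are illustrative, and the real `scripts/burn-in-changed.sh` may differ:

```ts
// burn-in-changed.ts: re-run changed specs N times; any failure fails the pipeline.
import { execSync } from 'node:child_process';

const BASE = process.env.BURN_IN_BASE ?? 'origin/main'; // diff target (assumption)
const RUNS = Number(process.env.BURN_IN_RUNS ?? 10);    // 5-10 repeats per the guidance above

// Collect changed spec files against the base branch.
const changedSpecs = execSync(`git diff --name-only ${BASE}`, { encoding: 'utf8' })
  .split('\n')
  .filter((file) => /\.spec\.ts$/.test(file));

if (changedSpecs.length === 0) {
  console.log('No changed specs, skipping burn-in.');
  process.exit(0);
}

try {
  // --repeat-each repeats every selected spec; stdio: 'inherit' keeps evidence in the CI log.
  execSync(`npx playwright test ${changedSpecs.join(' ')} --repeat-each=${RUNS}`, {
    stdio: 'inherit',
  });
} catch {
  console.error('Burn-in detected an inconsistent run.');
  process.exit(1);
}
```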

_Source: Murat CI/CD strategy blog, Playwright/Cypress workflow examples._

# Component Test-Driven Development Loop

- Start every UI change with a failing component spec (`cy.mount` or RTL `render`); ship only after red → green → refactor passes (sketch below).
- Recreate providers/stores per spec to prevent state bleed and keep parallel runs deterministic.
- Use factories to exercise prop/state permutations; cover accessibility by asserting against roles, labels, and keyboard flows.
- Keep component specs under ~100 lines: split by intent (rendering, state transitions, error messaging) to preserve clarity.
- Pair component tests with visual debugging (Cypress runner, Storybook, Playwright trace viewer) to accelerate diagnosis.
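
A red-first starting point, sketched with React Testing Library and Jest (the component, labels, and copy are illustrative, not from the source):

```tsx
// LoginForm.spec.tsx: failing component spec written before the implementation.
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LoginForm } from './LoginForm'; // hypothetical component under test

test('submits the email and shows a confirmation', async () => {
  const onSubmit = jest.fn();
  render(<LoginForm onSubmit={onSubmit} />);

  // Accessibility-first queries: roles and labels, not CSS selectors.
  await userEvent.type(screen.getByLabelText(/email/i), 'user@example.com');
  await userEvent.click(screen.getByRole('button', { name: /sign in/i }));

  expect(onSubmit).toHaveBeenCalledWith({ email: 'user@example.com' });
  expect(await screen.findByRole('status')).toHaveTextContent(/check your inbox/i);
});
```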

_Source: CCTDD repository, Murat component testing talks._

# Contract Testing Essentials (Pact)

- Store consumer contracts beside the integration specs that generate them; version contracts semantically and publish on every CI run (sketch below).
- Require provider verification before merge; failed verification blocks release and surfaces breaking changes immediately.
- Capture fallback behaviour inside interactions (timeouts, retries, error payloads) so resilience guarantees remain explicit.
- Automate broker housekeeping: tag releases, archive superseded contracts, and expire unused pacts to reduce noise.
- Pair contract suites with API smoke or component tests to validate data mapping and UI rendering in tandem.
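
A consumer-side sketch with Pact JS (`PactV3`) in a Jest-style runner; the consumer, provider, endpoint, and client helper names are illustrative:

```ts
// users.pact.spec.ts: consumer contract kept beside the integration spec that generates it.
import { PactV3, MatchersV3 } from '@pact-foundation/pact';
import { fetchUser } from './usersApi'; // hypothetical API client under test

const provider = new PactV3({ consumer: 'web-app', provider: 'user-service' });

test('GET /users/42 returns the user', async () => {
  provider.addInteraction({
    states: [{ description: 'user 42 exists' }],
    uponReceiving: 'a request for user 42',
    withRequest: { method: 'GET', path: '/users/42' },
    willRespondWith: {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
      body: MatchersV3.like({ id: 42, name: 'Jane' }),
    },
  });

  await provider.executeTest(async (mockServer) => {
    const user = await fetchUser(mockServer.url, 42);
    expect(user.name).toBe('Jane');
  });
});
```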

_Source: Pact consumer/provider sample repos, Murat contract testing blog._

# Data Factories and API-First Setup

- Prefer factory functions that accept overrides and return complete objects (`createUser(overrides)`, sketched below)—never rely on static fixtures.
- Seed state through APIs, tasks, or direct DB helpers before visiting the UI; UI-based setup is for validation only.
- Ensure factories generate parallel-safe identifiers (UUIDs, timestamps) and perform cleanup after each test.
- Centralize factory exports to avoid duplication; version them alongside schema changes to catch drift in reviews.
- When working with shared environments, layer feature toggles or targeted cleanup so factories do not clobber concurrent runs.
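
A minimal sketch of the override pattern (field names are illustrative); tests then seed the result through the API rather than the UI:

```ts
// factories/user.ts: override-friendly factory that always returns a complete object.
import { randomUUID } from 'node:crypto';

export interface User {
  id: string;
  email: string;
  role: 'admin' | 'member';
  createdAt: string;
}

export const createUser = (overrides: Partial<User> = {}): User => ({
  id: randomUUID(),                     // parallel-safe identifier
  email: `user-${randomUUID()}@example.com`,
  role: 'member',
  createdAt: new Date().toISOString(),
  ...overrides,                         // tests override only the fields they care about
});

// Example: await api.post('/users', createUser({ role: 'admin' }));  // hypothetical API helper
```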

_Source: Murat Testing Philosophy, blog posts on functional helpers and API-first testing._

# Email-Based Authentication Testing

- Use services like Mailosaur or in-house SMTP capture; extract magic links via regex or HTML parsing helpers (sketch below).
- Preserve browser storage (local/session) when processing links—restore state before visiting the authenticated page.
- Cache email payloads with `cypress-data-session` or equivalent so retries don’t exhaust inbox quotas.
- Cover negative cases: expired links, reused links, and multiple requests in rapid succession.
- Ensure the workflow logs the email ID and link for troubleshooting, but scrub PII before committing artifacts.
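
A framework-agnostic extraction helper, sketched in TypeScript; the URL shape is an assumption, and the inbox client (Mailosaur or an in-house capture) supplies the HTML body:

```ts
// extractMagicLink.ts: pull the sign-in link out of a captured email body.
export function extractMagicLink(html: string): string {
  // Adjust the pattern to your link shape; this assumes /auth/magic in the URL.
  const match = html.match(/https?:\/\/[^\s"'<>]*\/auth\/magic[^\s"'<>]*/i);
  if (!match) {
    throw new Error('No magic link found in email body');
  }
  return match[0];
}

// Usage sketch (Mailosaur-style message shape):
//   const link = extractMagicLink(message.html.body);
//   await page.goto(link);
```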

_Source: Email authentication blog, Murat testing toolkit._

# Error Handling and Resilience Checks

- Treat expected failures explicitly: intercept network errors and assert UI fallbacks (`error-message` visible, retries triggered).
- In Cypress, use a scoped `Cypress.on('uncaught:exception')` handler to ignore known errors; rethrow anything else so regressions fail.
- In Playwright, hook `page.on('pageerror')` and only swallow the specific, documented error messages (sketch below).
- Test retry/backoff logic by forcing sequential failures (e.g., 500, timeout, success) and asserting telemetry gets recorded.
- Log captured errors with context (request payload, user/session) but redact secrets to keep artifacts safe for sharing.
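
On the Playwright side, a fixture sketch that records `pageerror` events and fails the test unless the message is on the documented allow-list (the listed pattern is only an example):

```ts
// page-errors.fixture.ts: unexpected page errors fail the test; documented ones are ignored.
import { test as base, expect } from '@playwright/test';

const KNOWN_ERRORS = [/ResizeObserver loop limit exceeded/]; // example entry, document yours

export const test = base.extend<{ pageErrors: Error[] }>({
  pageErrors: [
    async ({ page }, use) => {
      const unexpected: Error[] = [];
      page.on('pageerror', (error) => {
        if (!KNOWN_ERRORS.some((pattern) => pattern.test(error.message))) unexpected.push(error);
      });
      await use(unexpected);
      // Any unlisted error fails the test after the body has run.
      expect(unexpected.map((e) => e.message)).toEqual([]);
    },
    { auto: true },
  ],
});
```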

_Source: Murat error-handling patterns, Pact resilience guidance._

# Feature Flag Governance

- Centralize flag definitions in a frozen enum; expose helpers to set, clear, and target specific audiences (sketch below).
- Test both enabled and disabled states in CI; clean up targeting after each spec to keep shared environments stable.
- For LaunchDarkly-style systems, script API helpers to seed variations instead of mutating via the UI.
- Maintain a checklist for new flags: default state, owners, expiry date, telemetry, rollback plan.
- Document flag dependencies in story/PR templates so QA and release reviews know which toggles must flip before launch.
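
A sketch of the frozen registry plus API-based targeting helpers; the endpoint, payload, and environment variables are assumptions, not the LaunchDarkly API:

```ts
// flags.ts: central flag registry and seeding helpers used by specs.
export const FLAGS = Object.freeze({
  NEW_CHECKOUT: 'new-checkout',
  BETA_REPORTS: 'beta-reports',
} as const);

export type FlagKey = (typeof FLAGS)[keyof typeof FLAGS];

// Seed a variation through the management API instead of clicking through the UI.
export async function setFlagForUser(flag: FlagKey, userId: string, variation: boolean): Promise<void> {
  await fetch(`${process.env.FLAG_API_URL}/flags/${flag}/targeting`, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${process.env.FLAG_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ userId, variation }),
  });
}

// Clean up targeting after each spec so shared environments stay stable.
export async function clearFlagTargeting(flag: FlagKey, userId: string): Promise<void> {
  await fetch(`${process.env.FLAG_API_URL}/flags/${flag}/targeting/${userId}`, { method: 'DELETE' });
}
```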

_Source: LaunchDarkly strategy blog, Murat test architecture notes._

# Fixture Architecture Playbook

- Build helpers as pure functions first, then expose them via Playwright `extend` or Cypress commands so logic stays testable in isolation.
- Compose capabilities with `mergeTests` (Playwright) or layered Cypress commands instead of inheritance; each fixture should solve one concern (auth, api, logs, network; sketch below).
- Keep HTTP helpers framework agnostic—accept all required params explicitly and return results so unit tests and runtime fixtures can share them.
- Export fixtures through package subpaths (`"./api-request"`, `"./api-request/fixtures"`) to make reuse trivial across suites and projects.
- Treat fixture files as infrastructure: document dependencies, enforce deterministic timeouts, and ban hidden retries that mask flakiness.
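
A composition sketch with Playwright fixtures and `mergeTests`; the fixture names and concerns are illustrative:

```ts
// test-fixtures.ts: one concern per fixture, composed rather than inherited.
import { test as base, expect, mergeTests, request, APIRequestContext } from '@playwright/test';

const apiTest = base.extend<{ api: APIRequestContext }>({
  api: async ({}, use) => {
    const ctx = await request.newContext({ baseURL: process.env.API_URL });
    await use(ctx);
    await ctx.dispose();
  },
});

const logTest = base.extend<{ failOnConsoleError: void }>({
  failOnConsoleError: [
    async ({ page }, use) => {
      const errors: string[] = [];
      page.on('console', (msg) => {
        if (msg.type() === 'error') errors.push(msg.text());
      });
      await use();
      expect(errors).toEqual([]); // console errors fail the test
    },
    { auto: true },
  ],
});

// Suites import this composed test and get both capabilities.
export const test = mergeTests(apiTest, logTest);
export { expect } from '@playwright/test';
```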

_Source: Murat Testing Philosophy, cy-vs-pw comparison, SEON production patterns._

# Network-First Safeguards

- Register interceptions before any navigation or user action; store the promise and await it immediately after the triggering step (sketch below).
- Assert on structured responses (status, body schema, headers) instead of generic waits so failures surface with actionable context.
- Capture HAR files or Playwright traces on successful runs—reuse them for deterministic CI playback when upstream services flake.
- Prefer edge mocking: stub at service boundaries, never deep within the stack unless risk analysis demands it.
- Replace implicit waits with deterministic signals like `waitForResponse`, disappearance of spinners, or event hooks.
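
A Playwright sketch of the register-then-await pattern; the `/api/orders` endpoint and table name are illustrative:

```ts
import { test, expect } from '@playwright/test';

test('orders load deterministically', async ({ page }) => {
  // 1. Register the interception before navigating, so the request cannot be missed.
  const ordersResponse = page.waitForResponse(
    (resp) => resp.url().includes('/api/orders') && resp.request().method() === 'GET',
  );

  await page.goto('/orders');

  // 2. Await immediately after the triggering step and assert on the structured response.
  const response = await ordersResponse;
  expect(response.status()).toBe(200);
  const body = await response.json();
  expect(Array.isArray(body.orders)).toBe(true);

  // 3. The UI assertion rides on a deterministic signal, not an implicit wait.
  await expect(page.getByRole('table', { name: /orders/i })).toBeVisible();
});
```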

_Source: Murat Testing Philosophy, Playwright patterns book, blog on network interception._

# Non-Functional Review Criteria

- **Security**
  - PASS: auth/authz, secret handling, and threat mitigations in place.
  - CONCERNS: minor gaps with clear owners.
  - FAIL: critical exposure or missing controls.
- **Performance**
  - PASS: metrics meet targets with profiling evidence.
  - CONCERNS: trending toward limits or missing baselines.
  - FAIL: breaches SLO/SLA or introduces resource leaks.
- **Reliability**
  - PASS: error handling, retries, health checks verified.
  - CONCERNS: partial coverage or missing telemetry.
  - FAIL: no recovery path or crash scenarios unresolved.
- **Maintainability**
  - PASS: clean code, tests, and documentation shipped together.
  - CONCERNS: duplication, low coverage, or unclear ownership.
  - FAIL: absent tests, tangled implementations, or no observability.
- Default to CONCERNS when targets or evidence are undefined—force the team to clarify before sign-off.

_Source: Murat NFR assessment guidance._

# Playwright Configuration Guardrails

- Load environment configs via a central map (`envConfigMap`) and fail fast when `TEST_ENV` is missing or unsupported (sketch below).
- Standardize timeouts: action 15s, navigation 30s, expect 10s, test 60s; expose overrides through fixtures rather than inline literals.
- Emit HTML + JUnit reporters, disable auto-open, and store artifacts under `test-results/` for CI upload.
- Keep `.env.example`, `.nvmrc`, and browser dependencies versioned so local and CI runs stay aligned.
- Use global setup for shared auth tokens or seeding, but prefer per-test fixtures for anything mutable to avoid cross-test leakage.
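
A `playwright.config.ts` sketch of these guardrails; the environment names, URLs, and paths are illustrative:

```ts
import { defineConfig } from '@playwright/test';

const envConfigMap = {
  local: { baseURL: 'http://localhost:3000' },
  staging: { baseURL: 'https://staging.example.com' },
} as const;

const TEST_ENV = (process.env.TEST_ENV ?? '') as keyof typeof envConfigMap;
if (!envConfigMap[TEST_ENV]) {
  // Fail fast on a missing or unsupported environment.
  throw new Error(`TEST_ENV must be one of: ${Object.keys(envConfigMap).join(', ')}`);
}

export default defineConfig({
  timeout: 60_000,               // per test
  expect: { timeout: 10_000 },
  use: {
    ...envConfigMap[TEST_ENV],
    actionTimeout: 15_000,
    navigationTimeout: 30_000,
    trace: 'retain-on-failure',
  },
  reporter: [
    ['html', { open: 'never' }],
    ['junit', { outputFile: 'test-results/junit.xml' }],
  ],
  outputDir: 'test-results',
});
```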

_Source: Playwright book repo, SEON configuration example._

# Probability and Impact Scale

- **Probability**
  - 1 – Unlikely: standard implementation, low uncertainty.
  - 2 – Possible: edge cases or partial unknowns worth investigation.
  - 3 – Likely: known issues, new integrations, or high ambiguity.
- **Impact**
  - 1 – Minor: cosmetic issues or easy workarounds.
  - 2 – Degraded: partial feature loss or manual workaround required.
  - 3 – Critical: blockers, data/security/regulatory exposure.
- Multiply probability × impact to derive the risk score (sketch below).
  - 1–3: document for awareness.
  - 4–5: monitor closely, plan mitigations.
  - 6–8: CONCERNS at the gate until mitigations are implemented.
  - 9: automatic gate FAIL until resolved or formally waived.
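
The scoring and banding, sketched as a small helper:

```ts
type Level = 1 | 2 | 3;

export const riskScore = (probability: Level, impact: Level): number => probability * impact;

export function gateAction(score: number): string {
  if (score >= 9) return 'FAIL: automatic gate failure until resolved or formally waived';
  if (score >= 6) return 'CONCERNS: hold the gate until mitigations are implemented';
  if (score >= 4) return 'MONITOR: plan mitigations and watch closely';
  return 'DOCUMENT: record for awareness';
}

// Example: a likely issue (3) with degraded impact (2) scores 6, which lands on CONCERNS.
```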

_Source: Murat risk model summary._

# Risk Governance and Gatekeeping

- Score risk as probability (1–3) × impact (1–3); totals ≥6 demand mitigation before approval, and a score of 9 mandates a gate failure.
- Classify risks across TECH, SEC, PERF, DATA, BUS, OPS. Document owners, mitigation plans, and deadlines for any score above 4 (sketch below).
- Trace every acceptance criterion to implemented tests; missing coverage must be resolved or explicitly waived before release.
- Gate decisions:
  - **PASS** – no critical issues remain and evidence is current.
  - **CONCERNS** – residual risk exists but has owners, actions, and timelines.
  - **FAIL** – critical issues unresolved or evidence missing.
  - **WAIVED** – risk accepted with documented approver, rationale, and expiry.
- Maintain a gate history log capturing updates so auditors can follow the decision trail.
- Use the probability/impact scale fragment for shared definitions when scoring teams run the matrix.
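
A sketch of the record shape such a gate log might use; the field names are illustrative, not a prescribed schema:

```ts
type RiskCategory = 'TECH' | 'SEC' | 'PERF' | 'DATA' | 'BUS' | 'OPS';
type GateDecision = 'PASS' | 'CONCERNS' | 'FAIL' | 'WAIVED';

interface RiskRecord {
  id: string;
  category: RiskCategory;
  probability: 1 | 2 | 3;
  impact: 1 | 2 | 3;
  score: number;            // probability × impact
  owner?: string;           // required when score > 4
  mitigationPlan?: string;
  deadline?: string;        // ISO date
}

interface GateEntry {
  decision: GateDecision;
  decidedAt: string;
  approver?: string;        // required for WAIVED
  rationale?: string;
  expiresAt?: string;       // WAIVED risks carry an expiry
  risks: RiskRecord[];
}
```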

_Source: Murat risk governance notes, gate schema guidance._

# Selective and Targeted Test Execution

- Use tags/grep (`--grep "@smoke"`, `--grep "@critical"`) to slice suites by risk, not directory (sketch below).
- Filter by spec patterns (`--spec "**/*checkout*"`) or git diff (`npm run test:changed`) to focus on impacted areas.
- Combine priority metadata (P0–P3) with change detection to decide which levels to run pre-commit vs. in CI.
- Record burn-in history for newly added specs; promote them to the main suite only after consistent green runs.
- Document the selection strategy in the README/CI config so the team understands when full regression is mandatory.
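
A tagging sketch; the `@smoke`/`@critical` title convention matches the grep flags above:

```ts
import { test, expect } from '@playwright/test';

test('checkout completes with saved card @smoke @critical', async ({ page }) => {
  await page.goto('/checkout');
  // ...
  await expect(page.getByRole('heading', { name: /order confirmed/i })).toBeVisible();
});

// Selection happens at the command line, for example:
//   npx playwright test --grep "@smoke"
//   npx playwright test --grep "@critical"
```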

_Source: 32+ selective testing strategies blog, Murat testing philosophy._

<!-- Powered by BMAD-CORE™ -->

# Test Levels Framework

Comprehensive guide for determining appropriate test levels (unit, integration, E2E) for different scenarios.

## Test Level Decision Matrix

### Unit Tests

**When to use:**

- Testing pure functions and business logic
- Algorithm correctness
- Input validation and data transformation
- Error handling in isolated components
- Complex calculations or state machines

**Characteristics:**

- Fast execution (immediate feedback)
- No external dependencies (DB, API, file system)
- Highly maintainable and stable
- Easy to debug failures

**Example scenarios:**

```yaml
unit_test:
  component: 'PriceCalculator'
  scenario: 'Calculate discount with multiple rules'
  justification: 'Complex business logic with multiple branches'
  mock_requirements: 'None - pure function'
```

### Integration Tests

**When to use:**

- Component interaction verification
- Database operations and transactions
- API endpoint contracts
- Service-to-service communication
- Middleware and interceptor behavior

**Characteristics:**

- Moderate execution time
- Tests component boundaries
- May use test databases or containers
- Validates system integration points

**Example scenarios:**

```yaml
integration_test:
  components: ['UserService', 'AuthRepository']
  scenario: 'Create user with role assignment'
  justification: 'Critical data flow between service and persistence'
  test_environment: 'In-memory database'
```

### End-to-End Tests

**When to use:**

- Critical user journeys
- Cross-system workflows
- Visual regression testing
- Compliance and regulatory requirements
- Final validation before release

**Characteristics:**

- Slower execution
- Tests complete workflows
- Requires full environment setup
- Most realistic but most brittle

**Example scenarios:**

```yaml
e2e_test:
  journey: 'Complete checkout process'
  scenario: 'User purchases with saved payment method'
  justification: 'Revenue-critical path requiring full validation'
  environment: 'Staging with test payment gateway'
```

## Test Level Selection Rules

### Favor Unit Tests When:

- Logic can be isolated
- No side effects involved
- Fast feedback needed
- High cyclomatic complexity

### Favor Integration Tests When:

- Testing persistence layer
- Validating service contracts
- Testing middleware/interceptors
- Component boundaries critical

### Favor E2E Tests When:

- User-facing critical paths
- Multi-system interactions
- Regulatory compliance scenarios
- Visual regression important

## Anti-patterns to Avoid

- E2E testing for business logic validation
- Unit testing framework behavior
- Integration testing third-party libraries
- Duplicate coverage across levels

## Duplicate Coverage Guard

**Before adding any test, check:**

1. Is this already tested at a lower level?
2. Can a unit test cover this instead of integration?
3. Can an integration test cover this instead of E2E?

**Coverage overlap is only acceptable when:**

- Testing different aspects (unit: logic, integration: interaction, e2e: user experience)
- Critical paths requiring defense in depth
- Regression prevention for previously broken functionality

## Test Naming Conventions

- Unit: `test_{component}_{scenario}`
- Integration: `test_{flow}_{interaction}`
- E2E: `test_{journey}_{outcome}`

## Test ID Format

`{EPIC}.{STORY}-{LEVEL}-{SEQ}`

Examples:

- `1.3-UNIT-001`
- `1.3-INT-002`
- `1.3-E2E-001`
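
A small validator for the ID format, sketched in TypeScript; the level list mirrors the examples above:

```ts
const TEST_ID = /^\d+\.\d+-(UNIT|INT|E2E)-\d{3}$/;

export const isValidTestId = (id: string): boolean => TEST_ID.test(id);

// isValidTestId('1.3-UNIT-001') === true
// isValidTestId('1.3-API-001')  === false (unknown level)
```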

<!-- Powered by BMAD-CORE™ -->

# Test Priorities Matrix

Guide for prioritizing test scenarios based on risk, criticality, and business impact.

## Priority Levels

### P0 - Critical (Must Test)

**Criteria:**

- Revenue-impacting functionality
- Security-critical paths
- Data integrity operations
- Regulatory compliance requirements
- Previously broken functionality (regression prevention)

**Examples:**

- Payment processing
- Authentication/authorization
- User data creation/deletion
- Financial calculations
- GDPR/privacy compliance

**Testing Requirements:**

- Comprehensive coverage at all levels
- Both happy and unhappy paths
- Edge cases and error scenarios
- Performance under load

### P1 - High (Should Test)

**Criteria:**

- Core user journeys
- Frequently used features
- Features with complex logic
- Integration points between systems
- Features affecting user experience

**Examples:**

- User registration flow
- Search functionality
- Data import/export
- Notification systems
- Dashboard displays

**Testing Requirements:**

- Primary happy paths required
- Key error scenarios
- Critical edge cases
- Basic performance validation

### P2 - Medium (Nice to Test)

**Criteria:**

- Secondary features
- Admin functionality
- Reporting features
- Configuration options
- UI polish and aesthetics

**Examples:**

- Admin settings panels
- Report generation
- Theme customization
- Help documentation
- Analytics tracking

**Testing Requirements:**

- Happy path coverage
- Basic error handling
- Can defer edge cases

### P3 - Low (Test if Time Permits)

**Criteria:**

- Rarely used features
- Nice-to-have functionality
- Cosmetic issues
- Non-critical optimizations

**Examples:**

- Advanced preferences
- Legacy feature support
- Experimental features
- Debug utilities

**Testing Requirements:**

- Smoke tests only
- Can rely on manual testing
- Document known limitations

## Risk-Based Priority Adjustments

### Increase Priority When:

- High user impact (affects >50% of users)
- High financial impact (>$10K potential loss)
- Security vulnerability potential
- Compliance/legal requirements
- Customer-reported issues
- Complex implementation (>500 LOC)
- Multiple system dependencies

### Decrease Priority When:

- Feature flag protected
- Gradual rollout planned
- Strong monitoring in place
- Easy rollback capability
- Low usage metrics
- Simple implementation
- Well-isolated component

## Test Coverage by Priority

| Priority | Unit Coverage | Integration Coverage | E2E Coverage       |
| -------- | ------------- | -------------------- | ------------------ |
| P0       | >90%          | >80%                 | All critical paths |
| P1       | >80%          | >60%                 | Main happy paths   |
| P2       | >60%          | >40%                 | Smoke tests        |
| P3       | Best effort   | Best effort          | Manual only        |

## Priority Assignment Rules

1. **Start with business impact** - What happens if this fails?
2. **Consider probability** - How likely is failure?
3. **Factor in detectability** - Would we know if it failed?
4. **Account for recoverability** - Can we fix it quickly?

## Priority Decision Tree

```
Is it revenue-critical?
├─ YES → P0
└─ NO → Does it affect core user journey?
    ├─ YES → Is it high-risk?
    │   ├─ YES → P0
    │   └─ NO → P1
    └─ NO → Is it frequently used?
        ├─ YES → P1
        └─ NO → Is it customer-facing?
            ├─ YES → P2
            └─ NO → P3
```
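
The same tree, sketched as a function for automated triage (trait names are illustrative):

```ts
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

interface ScenarioTraits {
  revenueCritical: boolean;
  coreUserJourney: boolean;
  highRisk: boolean;
  frequentlyUsed: boolean;
  customerFacing: boolean;
}

export function assignPriority(t: ScenarioTraits): Priority {
  if (t.revenueCritical) return 'P0';
  if (t.coreUserJourney) return t.highRisk ? 'P0' : 'P1';
  if (t.frequentlyUsed) return 'P1';
  return t.customerFacing ? 'P2' : 'P3';
}
```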

## Test Execution Order

1. Execute P0 tests first (fail fast on critical issues)
2. Execute P1 tests second (core functionality)
3. Execute P2 tests if time permits
4. P3 tests only in full regression cycles

## Continuous Adjustment

Review and adjust priorities based on:

- Production incident patterns
- User feedback and complaints
- Usage analytics
- Test failure history
- Business priority changes

# Test Quality Definition of Done

- No hard waits (`waitForTimeout`, `cy.wait(ms)`); rely on deterministic waits or event hooks (sketch below).
- Each spec stays under 300 lines and executes in ≤1.5 minutes.
- Tests are isolated, parallel-safe, and self-cleaning (seed via API/tasks, teardown after run).
- Assertions stay visible in test bodies; avoid conditional logic controlling test flow.
- Suites must pass locally and in CI with the same commands.
- Promote new tests only after they have failed for the intended reason at least once.
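
A before/after sketch of the hard-wait rule in Playwright; the selector and copy are illustrative:

```ts
import { test, expect } from '@playwright/test';

test('saving shows a confirmation toast', async ({ page }) => {
  await page.goto('/settings');

  // Avoid: await page.waitForTimeout(3000);  // hard wait, slow and flaky
  await page.getByRole('button', { name: /save/i }).click();

  // Prefer: a web-first assertion that retries until the toast appears or the timeout hits.
  await expect(page.getByRole('status')).toHaveText(/saved/i);
});
```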

_Source: Murat quality checklist._

# Visual Debugging and Developer Ergonomics

- Keep Playwright trace viewer, Cypress runner, and Storybook accessible in CI artifacts to speed up reproduction.
- Record short screen captures only on failure; pair them with HAR or console logs to avoid guesswork.
- Document common trace navigation steps (network tab, action timeline) so new contributors diagnose issues quickly.
- Encourage live-debug sessions with component harnesses to validate behaviour before writing full E2E specs.
- Integrate accessibility tooling (axe, Playwright audits) into the same debug workflow to catch regressions early.
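
An accessibility hook for the same workflow, sketched with `@axe-core/playwright`; the route and tags are illustrative:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('dashboard has no detectable a11y violations', async ({ page }) => {
  await page.goto('/dashboard');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Violations surface next to traces and videos in the same failure artifacts.
  expect(results.violations).toEqual([]);
});
```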

_Source: Murat DX blog posts, Playwright book appendix on debugging._