docs/tea-a
@@ -195,20 +195,295 @@ Epic/Release Gate → TEA: *nfr-assess, *trace Phase 2 (release decision)
**Note**: `*trace` is a two-phase workflow: Phase 1 (traceability) + Phase 2 (gate decision). This reduces cognitive load while maintaining natural workflow.

### Why TEA Requires Its Own Knowledge Base

TEA uniquely requires:

- **Extensive domain knowledge**: 30+ fragments covering test patterns, CI/CD, fixtures, quality practices, and optional playwright-utils integration
- **Cross-cutting concerns**: Domain-specific testing patterns that apply across all BMad projects (vs project-specific artifacts like PRDs/stories)
- **Optional integrations**: MCP capabilities (exploratory, verification) and playwright-utils support

This architecture enables TEA to maintain consistent, production-ready testing patterns across all BMad projects while operating across multiple development phases.

---

## High-Level Cheat Sheets

These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks** across the **4-Phase Methodology** (Phase 1: Analysis, Phase 2: Planning, Phase 3: Solutioning, Phase 4: Implementation).

**Note:** Quick Flow projects typically don't require TEA (covered in Overview). These cheat sheets focus on BMad Method and Enterprise tracks where TEA adds value.

**Legend for Track Deltas:**

- ➕ = New workflow or phase added (doesn't exist in baseline)
- 🔄 = Modified focus (same workflow, different emphasis or purpose)
- 📦 = Additional output or archival requirement

### Greenfield - BMad Method (Simple/Standard Work)
|
||||
|
||||
**Planning Track:** BMad Method (PRD + Architecture)
|
||||
**Use Case:** New projects with standard complexity
|
||||
|
||||
| Workflow Stage | Test Architect | Dev / Team | Outputs |
|
||||
| -------------------------- | ----------------------------------------------------------------- | ----------------------------------------------------------------------------------- | ---------------------------------------------------------- |
|
||||
| **Phase 1**: Discovery | - | Analyst `*product-brief` (optional) | `product-brief.md` |
|
||||
| **Phase 2**: Planning | - | PM `*prd` (creates PRD with FRs/NFRs) | PRD with functional/non-functional requirements |
|
||||
| **Phase 3**: Solutioning | Run `*framework`, `*ci` AFTER architecture and epic creation | Architect `*architecture`, `*create-epics-and-stories`, `*implementation-readiness` | Architecture, epics/stories, test scaffold, CI pipeline |
|
||||
| **Phase 4**: Sprint Start | - | SM `*sprint-planning` | Sprint status file with all epics and stories |
|
||||
| **Phase 4**: Epic Planning | Run `*test-design` for THIS epic (per-epic test plan) | Review epic scope | `test-design-epic-N.md` with risk assessment and test plan |
|
||||
| **Phase 4**: Story Dev | (Optional) `*atdd` before dev, then `*automate` after | SM `*create-story`, DEV implements | Tests, story implementation |
|
||||
| **Phase 4**: Story Review | Execute `*test-review` (optional), re-run `*trace` | Address recommendations, update code/tests | Quality report, refreshed coverage matrix |
|
||||
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Confirm Definition of Done, share release notes | Quality audit, Gate YAML + release summary |
|
||||
|
||||
<details>
|
||||
<summary>Execution Notes</summary>
|
||||
|
||||
- Run `*framework` only once per repo or when modern harness support is missing.
|
||||
- **Phase 3 (Solutioning)**: After architecture is complete, run `*framework` and `*ci` to set up test infrastructure based on architectural decisions.
|
||||
- **Phase 4 starts**: After solutioning is complete, sprint planning loads all epics.
|
||||
- **`*test-design` runs per-epic**: At the beginning of working on each epic, run `*test-design` to create a test plan for THAT specific epic/feature. Output: `test-design-epic-N.md`.
|
||||
- Use `*atdd` before coding when the team can adopt ATDD; share its checklist with the dev agent.
|
||||
- Post-implementation, keep `*trace` current, expand coverage with `*automate`, optionally review test quality with `*test-review`. For release gate, run `*trace` with Phase 2 enabled to get deployment decision.
|
||||
- Use `*test-review` after `*atdd` to validate generated tests, after `*automate` to ensure regression quality, or before gate for final audit.
|
||||
- Clarification: `*test-review` is optional and only audits existing tests; run it after `*atdd` or `*automate` when you want a quality review, not as a required step.
|
||||
- Clarification: `*atdd` outputs are not auto-consumed; share the ATDD doc/tests with the dev workflow. `*trace` does not run `*atdd`—it evaluates existing artifacts for coverage and gate readiness.
|
||||
- Clarification: `*ci` is a one-time setup; recommended early (Phase 3 or before feature work), but it can be done later if it was skipped.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>Worked Example – “Nova CRM” Greenfield Feature</summary>
|
||||
|
||||
1. **Planning (Phase 2):** Analyst runs `*product-brief`; PM executes `*prd` to produce PRD with FRs/NFRs.
|
||||
2. **Solutioning (Phase 3):** Architect completes `*architecture` for the new module; `*create-epics-and-stories` generates epics/stories based on architecture; TEA sets up test infrastructure via `*framework` and `*ci` based on architectural decisions; gate check validates planning completeness.
|
||||
3. **Sprint Start (Phase 4):** Scrum Master runs `*sprint-planning` to load all epics into sprint status.
|
||||
4. **Epic 1 Planning (Phase 4):** TEA runs `*test-design` to create test plan for Epic 1, producing `test-design-epic-1.md` with risk assessment.
|
||||
5. **Story Implementation (Phase 4):** For each story in Epic 1, SM generates story via `*create-story`; TEA optionally runs `*atdd`; Dev implements with guidance from failing tests.
|
||||
6. **Post-Dev (Phase 4):** TEA runs `*automate`, optionally `*test-review` to audit test quality, re-runs `*trace` to refresh coverage.
|
||||
7. **Release Gate:** TEA runs `*trace` with Phase 2 enabled to generate gate decision.
|
||||
|
||||
</details>
|
||||
|
||||
### Brownfield - BMad Method or Enterprise (Simple or Complex)
|
||||
|
||||
**Planning Tracks:** BMad Method or Enterprise Method
|
||||
**Use Case:** Existing codebases - simple additions (BMad Method) or complex enterprise requirements (Enterprise Method)
|
||||
|
||||
**🔄 Brownfield Deltas from Greenfield:**
|
||||
|
||||
- ➕ Documentation (Prerequisite) - Document existing codebase if undocumented
|
||||
- ➕ Phase 2: `*trace` - Baseline existing test coverage before planning
|
||||
- 🔄 Phase 4: `*test-design` - Focus on regression hotspots and brownfield risks
|
||||
- 🔄 Phase 4: Story Review - May include `*nfr-assess` if not done earlier
|
||||
|
||||
| Workflow Stage | Test Architect | Dev / Team | Outputs |
|
||||
| --------------------------------- | --------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | ---------------------------------------------------------------------- |
|
||||
| **Documentation**: Prerequisite ➕ | - | Analyst `*document-project` (if undocumented) | Comprehensive project documentation |
|
||||
| **Phase 1**: Discovery | - | Analyst/PM/Architect rerun planning workflows | Updated planning artifacts in `{output_folder}` |
|
||||
| **Phase 2**: Planning | Run ➕ `*trace` (baseline coverage) | PM `*prd` (creates PRD with FRs/NFRs) | PRD with FRs/NFRs, ➕ coverage baseline |
|
||||
| **Phase 3**: Solutioning | Run `*framework`, `*ci` AFTER architecture and epic creation | Architect `*architecture`, `*create-epics-and-stories`, `*implementation-readiness` | Architecture, epics/stories, test framework, CI pipeline |
|
||||
| **Phase 4**: Sprint Start | - | SM `*sprint-planning` | Sprint status file with all epics and stories |
|
||||
| **Phase 4**: Epic Planning | Run `*test-design` for THIS epic 🔄 (regression hotspots) | Review epic scope and brownfield risks | `test-design-epic-N.md` with brownfield risk assessment and mitigation |
|
||||
| **Phase 4**: Story Dev | (Optional) `*atdd` before dev, then `*automate` after | SM `*create-story`, DEV implements | Tests, story implementation |
|
||||
| **Phase 4**: Story Review | Apply `*test-review` (optional), re-run `*trace`, ➕ `*nfr-assess` if needed | Resolve gaps, update docs/tests | Quality report, refreshed coverage matrix, NFR report |
|
||||
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Capture sign-offs, share release notes | Quality audit, Gate YAML + release summary |
|
||||
|
||||
<details>
|
||||
<summary>Execution Notes</summary>
|
||||
|
||||
- Lead with `*trace` during Planning (Phase 2) to baseline existing test coverage before architecture work begins.
|
||||
- **Phase 3 (Solutioning)**: After architecture is complete, run `*framework` and `*ci` to modernize test infrastructure. For brownfield, the framework may need to integrate with or replace the existing test setup.
|
||||
- **Phase 4 starts**: After solutioning is complete and sprint planning loads all epics.
|
||||
- **`*test-design` runs per-epic**: At the beginning of working on each epic, run `*test-design` to identify regression hotspots, integration risks, and mitigation strategies for THAT specific epic/feature. Output: `test-design-epic-N.md`.
|
||||
- Use `*atdd` when stories benefit from ATDD; otherwise proceed to implementation and rely on post-dev automation.
|
||||
- After development, expand coverage with `*automate`, optionally review test quality with `*test-review`, re-run `*trace` (Phase 2 for gate decision). Run `*nfr-assess` now if non-functional risks weren't addressed earlier.
|
||||
- Use `*test-review` to validate existing brownfield tests or audit new tests before gate.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>Worked Example – “Atlas Payments” Brownfield Story</summary>
|
||||
|
||||
1. **Planning (Phase 2):** PM executes `*prd` to create PRD with FRs/NFRs; TEA runs `*trace` to baseline existing coverage.
|
||||
2. **Solutioning (Phase 3):** Architect triggers `*architecture` capturing legacy payment flows and integration architecture; `*create-epics-and-stories` generates Epic 1 (Payment Processing) based on architecture; TEA sets up `*framework` and `*ci` based on architectural decisions; gate check validates planning.
|
||||
3. **Sprint Start (Phase 4):** Scrum Master runs `*sprint-planning` to load Epic 1 into sprint status.
|
||||
4. **Epic 1 Planning (Phase 4):** TEA runs `*test-design` for Epic 1 (Payment Processing), producing `test-design-epic-1.md` that flags settlement edge cases, regression hotspots, and mitigation plans.
|
||||
5. **Story Implementation (Phase 4):** For each story in Epic 1, SM generates story via `*create-story`; TEA runs `*atdd` producing failing Playwright specs; Dev implements with guidance from tests and checklist.
|
||||
6. **Post-Dev (Phase 4):** TEA applies `*automate`, optionally `*test-review` to audit test quality, re-runs `*trace` to refresh coverage.
|
||||
7. **Release Gate:** TEA performs `*nfr-assess` to validate SLAs, runs `*trace` with Phase 2 enabled to generate gate decision (PASS/CONCERNS/FAIL).
|
||||
|
||||
</details>
|
||||
|
||||
### Greenfield - Enterprise Method (Enterprise/Compliance Work)
|
||||
|
||||
**Planning Track:** Enterprise Method (BMad Method + extended security/devops/test strategies)
|
||||
**Use Case:** New enterprise projects with compliance, security, or complex regulatory requirements
|
||||
|
||||
**🏢 Enterprise Deltas from BMad Method:**
|
||||
|
||||
- ➕ Phase 1: `*research` - Domain and compliance research (recommended)
|
||||
- ➕ Phase 2: `*nfr-assess` - Capture NFR requirements early (security/performance/reliability)
|
||||
- 🔄 Phase 4: `*test-design` - Enterprise focus (compliance, security architecture alignment)
|
||||
- 📦 Release Gate - Archive artifacts and compliance evidence for audits
|
||||
|
||||
| Workflow Stage | Test Architect | Dev / Team | Outputs |
|
||||
| -------------------------- | ----------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | ------------------------------------------------------------------ |
|
||||
| **Phase 1**: Discovery | - | Analyst ➕ `*research`, `*product-brief` | Domain research, compliance analysis, product brief |
|
||||
| **Phase 2**: Planning | Run ➕ `*nfr-assess` | PM `*prd` (creates PRD with FRs/NFRs), UX `*create-ux-design` | Enterprise PRD with FRs/NFRs, UX design, ➕ NFR documentation |
|
||||
| **Phase 3**: Solutioning | Run `*framework`, `*ci` AFTER architecture and epic creation | Architect `*architecture`, `*create-epics-and-stories`, `*implementation-readiness` | Architecture, epics/stories, test framework, CI pipeline |
|
||||
| **Phase 4**: Sprint Start | - | SM `*sprint-planning` | Sprint plan with all epics |
|
||||
| **Phase 4**: Epic Planning | Run `*test-design` for THIS epic 🔄 (compliance focus) | Review epic scope and compliance requirements | `test-design-epic-N.md` with security/performance/compliance focus |
|
||||
| **Phase 4**: Story Dev | (Optional) `*atdd`, `*automate`, `*test-review`, `*trace` per story | SM `*create-story`, DEV implements | Tests, fixtures, quality reports, coverage matrices |
|
||||
| **Phase 4**: Release Gate | Final `*test-review` audit, Run `*trace` (Phase 2), 📦 archive artifacts | Capture sign-offs, 📦 compliance evidence | Quality audit, updated assessments, gate YAML, 📦 audit trail |
|
||||
|
||||
<details>
|
||||
<summary>Execution Notes</summary>
|
||||
|
||||
- `*nfr-assess` runs early in Planning (Phase 2) to capture compliance, security, and performance requirements upfront.
|
||||
- **Phase 3 (Solutioning)**: After architecture is complete, run `*framework` and `*ci` with enterprise-grade configurations (selective testing, burn-in jobs, caching, notifications).
|
||||
- **Phase 4 starts**: After solutioning is complete and sprint planning loads all epics.
|
||||
- **`*test-design` runs per-epic**: At the beginning of working on each epic, run `*test-design` to create an enterprise-focused test plan for THAT specific epic, ensuring alignment with security architecture, performance targets, and compliance requirements. Output: `test-design-epic-N.md`.
|
||||
- Use `*atdd` for stories when feasible so acceptance tests can lead implementation.
|
||||
- Use `*test-review` per story or sprint to maintain quality standards and ensure compliance with testing best practices.
|
||||
- Prior to release, rerun coverage (`*trace`, `*automate`), perform final quality audit with `*test-review`, and formalize the decision with `*trace` Phase 2 (gate decision); archive artifacts for compliance audits.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>Worked Example – “Helios Ledger” Enterprise Release</summary>
|
||||
|
||||
1. **Planning (Phase 2):** Analyst runs `*research` and `*product-brief`; PM completes `*prd` creating PRD with FRs/NFRs; TEA runs `*nfr-assess` to establish NFR targets.
|
||||
2. **Solutioning (Phase 3):** Architect completes `*architecture` with enterprise considerations; `*create-epics-and-stories` generates epics/stories based on architecture; TEA sets up `*framework` and `*ci` with enterprise-grade configurations based on architectural decisions; gate check validates planning completeness.
|
||||
3. **Sprint Start (Phase 4):** Scrum Master runs `*sprint-planning` to load all epics into sprint status.
|
||||
4. **Per-Epic (Phase 4):** For each epic, TEA runs `*test-design` to create epic-specific test plan (e.g., `test-design-epic-1.md`, `test-design-epic-2.md`) with compliance-focused risk assessment.
|
||||
5. **Per-Story (Phase 4):** For each story, TEA uses `*atdd`, `*automate`, `*test-review`, and `*trace`; Dev teams iterate on the findings.
|
||||
6. **Release Gate:** TEA re-checks coverage, performs final quality audit with `*test-review`, and logs the final gate decision via `*trace` Phase 2, archiving artifacts for compliance.
|
||||
|
||||
</details>
|
||||
|
||||
---
|
||||
|
||||
## TEA Command Catalog
|
||||
|
||||
| Command | Primary Outputs | Notes |
|
||||
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- |
|
||||
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists |
|
||||
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) |
|
||||
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode |
|
||||
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode |
|
||||
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage |
|
||||
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns |
|
||||
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability |
|
||||
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision |
|
||||
| Command | Primary Outputs | Notes | With Playwright MCP Enhancements |
|
||||
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
|
||||
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
|
||||
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
|
||||
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
|
||||
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
|
||||
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
|
||||
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
|
||||
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
|
||||
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
|
||||
|
||||
---
|
||||
|
||||
## Playwright Utils Integration
|
||||
|
||||
TEA optionally integrates with `@seontechnologies/playwright-utils`, an open-source library providing fixture-based utilities for Playwright tests. This integration enhances TEA's test generation and review workflows with production-ready patterns.
|
||||
|
||||
<details>
|
||||
<summary><strong>Installation & Configuration</strong></summary>
|
||||
|
||||
**Package**: `@seontechnologies/playwright-utils` ([npm](https://www.npmjs.com/package/@seontechnologies/playwright-utils) | [GitHub](https://github.com/seontechnologies/playwright-utils))
|
||||
|
||||
**Install**: `npm install -D @seontechnologies/playwright-utils`
|
||||
|
||||
**Enable during BMAD installation** by answering "Yes" when prompted, or manually set `tea_use_playwright_utils: true` in `_bmad/bmm/config.yaml`.
|
||||
|
||||
**To disable**: Set `tea_use_playwright_utils: false` in `_bmad/bmm/config.yaml`.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary><strong>How Playwright Utils Enhances TEA Workflows</strong></summary>
|
||||
|
||||
1. `*framework`:
|
||||
- Default: Basic Playwright scaffold
|
||||
- **+ playwright-utils**: Scaffold with api-request, network-recorder, auth-session, burn-in, network-error-monitor fixtures pre-configured
|
||||
|
||||
Benefit: Production-ready patterns from day one
|
||||
|
||||
2. `*automate`, `*atdd`:
|
||||
- Default: Standard test patterns
|
||||
- **+ playwright-utils**: Tests using api-request (schema validation), intercept-network-call (mocking), recurse (polling), log (structured logging), file-utils (CSV/PDF)
|
||||
|
||||
Benefit: Advanced patterns without boilerplate
|
||||
|
||||
3. `*test-review`:
|
||||
- Default: Reviews against core knowledge base (22 fragments)
|
||||
- **+ playwright-utils**: Reviews against expanded knowledge base (33 fragments: 22 core + 11 playwright-utils)
|
||||
|
||||
Benefit: Reviews include fixture composition, auth patterns, network recording best practices
|
||||
|
||||
4. `*ci`:
|
||||
- Default: Standard CI workflow
|
||||
- **+ playwright-utils**: CI workflow with burn-in script (smart test selection) and network-error-monitor integration
|
||||
|
||||
Benefit: Faster CI feedback, HTTP error detection
|
||||
|
||||
**Utilities available** (10 total): api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
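
The `fixtures-composition` utility builds on Playwright's own `mergeTests`. As a minimal sketch (assuming the combined `@seontechnologies/playwright-utils/fixtures` entry point used in the examples later on this page, plus a hypothetical project-specific fixture), composition could look like:

```typescript
// Sketch: combine playwright-utils fixtures with a project-specific fixture.
import { mergeTests, test as base } from '@playwright/test';
import { test as playwrightUtilsTest } from '@seontechnologies/playwright-utils/fixtures';

// Hypothetical project fixture: a pre-seeded admin user id available to tests.
const projectTest = base.extend<{ adminUserId: string }>({
  adminUserId: async ({}, use) => {
    await use('admin-123');
  },
});

// Tests importing this `test` get the playwright-utils fixtures (apiRequest,
// recurse, ...) alongside the project-specific adminUserId fixture.
export const test = mergeTests(playwrightUtilsTest, projectTest);
export { expect } from '@playwright/test';
```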
|
||||
|
||||
</details>
|
||||
|
||||
---
|
||||
|
||||
## Playwright MCP Enhancements
|
||||
|
||||
TEA can leverage Playwright MCP servers to enhance test generation with live browser verification. MCP provides interactive capabilities on top of TEA's default AI-based approach.
|
||||
|
||||
<details>
|
||||
<summary><strong>MCP Server Configuration</strong></summary>
|
||||
|
||||
**Two Playwright MCP servers** (actively maintained, continuously updated):
|
||||
|
||||
- `playwright` - Browser automation (`npx @playwright/mcp@latest`)
|
||||
- `playwright-test` - Test runner with failure analysis (`npx playwright run-test-mcp-server`)
|
||||
|
||||
**Config example**:
|
||||
|
||||
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "playwright-test": {
      "command": "npx",
      "args": ["playwright", "run-test-mcp-server"]
    }
  }
}
```
|
||||
|
||||
**To disable**: Set `tea_use_mcp_enhancements: false` in `_bmad/bmm/config.yaml` OR remove MCPs from IDE config.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary><strong>How MCP Enhances TEA Workflows</strong></summary>
|
||||
|
||||
1. `*test-design`:
|
||||
- Default: Analysis + documentation
|
||||
- **+ MCP**: Interactive UI discovery with `browser_navigate`, `browser_click`, `browser_snapshot`, behavior observation
|
||||
|
||||
Benefit: Discover actual functionality, edge cases, undocumented features
|
||||
|
||||
2. `*atdd`, `*automate`:
|
||||
- Default: Infers selectors and interactions from requirements and knowledge fragments
|
||||
- **+ MCP**: Generates tests **then** verifies with `generator_setup_page`, `browser_*` tools, validates against live app
|
||||
|
||||
Benefit: Accurate selectors from real DOM, verified behavior, refined test code
|
||||
|
||||
3. `*automate` (healing mode):
|
||||
- Default: Pattern-based fixes from error messages + knowledge fragments
|
||||
- **+ MCP**: Pattern fixes **enhanced with** `browser_snapshot`, `browser_console_messages`, `browser_network_requests`, `browser_generate_locator`
|
||||
|
||||
Benefit: Visual failure context, live DOM inspection, root cause discovery
|
||||
|
||||
</details>
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -12,15 +12,17 @@ agent:
|
||||
|
||||
persona:
|
||||
role: Master Test Architect
|
||||
identity: Test architect specializing in CI/CD, automated frameworks, and scalable quality gates.
|
||||
identity: Test architect specializing in API testing, backend services, UI automation, CI/CD pipelines, and scalable quality gates. Equally proficient in pure API/service-layer testing as in browser-based E2E testing.
|
||||
communication_style: "Blends data with gut instinct. 'Strong opinions, weakly held' is their mantra. Speaks in risk calculations and impact assessments."
|
||||
principles: |
|
||||
- Risk-based testing - depth scales with impact
|
||||
- Quality gates backed by data
|
||||
- Tests mirror usage patterns
|
||||
- Tests mirror usage patterns (API, UI, or both)
|
||||
- Flakiness is critical technical debt
|
||||
- Tests first AI implements suite validates
|
||||
- Calculate risk vs value for every testing decision
|
||||
- Prefer lower test levels (unit > integration > E2E) when possible
|
||||
- API tests are first-class citizens, not just UI support
|
||||
|
||||
critical_actions:
|
||||
- "Consult {project-root}/_bmad/bmm/testarch/tea-index.csv to select knowledge fragments under knowledge/ and load only the files needed for the current task"
|
||||
@@ -39,7 +41,7 @@ agent:
|
||||
|
||||
- trigger: AT or fuzzy match on atdd
|
||||
workflow: "{project-root}/_bmad/bmm/workflows/testarch/atdd/workflow.yaml"
|
||||
description: "[AT] Generate E2E tests first, before starting implementation"
|
||||
description: "[AT] Generate API and/or E2E tests first, before starting implementation"
|
||||
|
||||
- trigger: TA or fuzzy match on test-automate
|
||||
workflow: "{project-root}/_bmad/bmm/workflows/testarch/automate/workflow.yaml"
|
||||
|
||||
@@ -45,7 +45,7 @@ project_knowledge: # Artifacts from research, document-project output, other lon
|
||||
result: "{project-root}/{value}"
|
||||
|
||||
tea_use_mcp_enhancements:
|
||||
prompt: "Test Architect Playwright MCP capabilities (healing, exploratory, verification) are optionally available.\nYou will have to setup your MCPs yourself; refer to test-architecture.md for hints.\nWould you like to enable MCP enhancements in Test Architect?"
|
||||
prompt: "Test Architect Playwright MCP capabilities (healing, exploratory, verification) are optionally available.\nYou will have to setup your MCPs yourself; refer to https://docs.bmad-method.org/explanation/features/tea-overview for configuration examples.\nWould you like to enable MCP enhancements in Test Architect?"
|
||||
default: false
|
||||
result: "{value}"
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
|
||||
## Principle
|
||||
|
||||
Use typed HTTP client with built-in schema validation and automatic retry for server errors. The utility handles URL resolution, header management, response parsing, and single-line response validation with proper TypeScript support.
|
||||
Use typed HTTP client with built-in schema validation and automatic retry for server errors. The utility handles URL resolution, header management, response parsing, and single-line response validation with proper TypeScript support. **Works without a browser** - ideal for pure API/service testing.
|
||||
|
||||
## Rationale
|
||||
|
||||
@@ -21,6 +21,7 @@ The `apiRequest` utility provides:
|
||||
- **Schema validation**: Single-line validation (JSON Schema, Zod, OpenAPI)
|
||||
- **URL resolution**: Four-tier strategy (explicit > config > Playwright > direct)
|
||||
- **TypeScript generics**: Type-safe response bodies
|
||||
- **No browser required**: Pure API testing without browser overhead
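
For instance, a minimal sketch tying the generics and URL-resolution bullets together (the endpoint, response interface, and environment variable are illustrative only; the import path and options mirror the examples below):

```typescript
import { test, expect } from '@seontechnologies/playwright-utils/api-request/fixtures';

// Illustrative response shape - not part of the library.
interface User {
  id: string;
  name: string;
  email: string;
}

test('fetches a typed user without a browser', async ({ apiRequest }) => {
  // The generic types the parsed body; an explicit baseUrl takes priority
  // in the four-tier URL resolution (explicit > config > Playwright > direct).
  const { status, body } = await apiRequest<User>({
    method: 'GET',
    path: '/api/users/123',
    baseUrl: process.env.USER_SERVICE_URL,
  });

  expect(status).toBe(200);
  expect(body.email).toContain('@');
});
```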
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
@@ -60,10 +61,11 @@ test('should fetch user data', async ({ apiRequest }) => {
|
||||
|
||||
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { z } from 'zod';

// JSON Schema validation
test('should validate response schema (JSON Schema)', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users/123',
    validateSchema: {
```

@@ -77,22 +79,25 @@ test('should validate response schema', async ({ apiRequest }) => {

```typescript
    },
  });
  // Throws if schema validation fails
  expect(status).toBe(200);
});

// Zod schema validation
const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email(),
});

test('should validate response schema (Zod)', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users/123',
    validateSchema: UserSchema,
  });
  // Response body is type-safe AND validated
  expect(status).toBe(200);
  expect(body.email).toContain('@');
});
```
|
||||
|
||||
@@ -236,6 +241,136 @@ test('should poll until job completes', async ({ apiRequest, recurse }) => {
|
||||
- `recurse` polls until predicate returns true
|
||||
- Composable utilities work together seamlessly
|
||||
|
||||
### Example 6: Microservice Testing (Multiple Services)
|
||||
|
||||
**Context**: Test interactions between microservices without a browser.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
const USER_SERVICE = process.env.USER_SERVICE_URL || 'http://localhost:3001';
|
||||
const ORDER_SERVICE = process.env.ORDER_SERVICE_URL || 'http://localhost:3002';
|
||||
|
||||
test.describe('Microservice Integration', () => {
|
||||
test('should validate cross-service user lookup', async ({ apiRequest }) => {
|
||||
// Create user in user-service
|
||||
const { body: user } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/users',
|
||||
baseUrl: USER_SERVICE,
|
||||
body: { name: 'Test User', email: 'test@example.com' },
|
||||
});
|
||||
|
||||
// Create order in order-service (validates user via user-service)
|
||||
const { status, body: order } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
baseUrl: ORDER_SERVICE,
|
||||
body: {
|
||||
userId: user.id,
|
||||
items: [{ productId: 'prod-1', quantity: 2 }],
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(201);
|
||||
expect(order.userId).toBe(user.id);
|
||||
});
|
||||
|
||||
test('should reject order for invalid user', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
baseUrl: ORDER_SERVICE,
|
||||
body: {
|
||||
userId: 'non-existent-user',
|
||||
items: [{ productId: 'prod-1', quantity: 1 }],
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(400);
|
||||
expect(body.code).toBe('INVALID_USER');
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Test multiple services without browser
|
||||
- Use `baseUrl` to target different services
|
||||
- Validate cross-service communication
|
||||
- Pure API testing - fast and reliable
|
||||
|
||||
### Example 7: GraphQL API Testing
|
||||
|
||||
**Context**: Test GraphQL endpoints with queries and mutations.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test.describe('GraphQL API', () => {
|
||||
const GRAPHQL_ENDPOINT = '/graphql';
|
||||
|
||||
test('should query users via GraphQL', async ({ apiRequest }) => {
|
||||
const query = `
|
||||
query GetUsers($limit: Int) {
|
||||
users(limit: $limit) {
|
||||
id
|
||||
name
|
||||
email
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: GRAPHQL_ENDPOINT,
|
||||
body: {
|
||||
query,
|
||||
variables: { limit: 10 },
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.errors).toBeUndefined();
|
||||
expect(body.data.users).toHaveLength(10);
|
||||
});
|
||||
|
||||
test('should create user via mutation', async ({ apiRequest }) => {
|
||||
const mutation = `
|
||||
mutation CreateUser($input: CreateUserInput!) {
|
||||
createUser(input: $input) {
|
||||
id
|
||||
name
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: GRAPHQL_ENDPOINT,
|
||||
body: {
|
||||
query: mutation,
|
||||
variables: {
|
||||
input: { name: 'GraphQL User', email: 'gql@example.com' },
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.data.createUser.id).toBeDefined();
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- GraphQL via POST request
|
||||
- Variables in request body
|
||||
- Check `body.errors` for GraphQL errors (not status code)
|
||||
- Works for queries and mutations
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
| Vanilla Playwright | playwright-utils apiRequest |
|
||||
@@ -251,11 +386,13 @@ test('should poll until job completes', async ({ apiRequest, recurse }) => {
|
||||
|
||||
**Use apiRequest for:**
|
||||
|
||||
- ✅ API endpoint testing
- ✅ Pure API/service testing (no browser needed)
- ✅ Microservice integration testing
- ✅ GraphQL API testing
- ✅ Schema validation needs
- ✅ Tests requiring retry logic
- ✅ Typed API responses
- ✅ Background API calls in UI tests
- ✅ Contract testing support
|
||||
|
||||
**Stick with vanilla Playwright for:**
|
||||
|
||||
@@ -265,11 +402,13 @@ test('should poll until job completes', async ({ apiRequest, recurse }) => {
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `api-testing-patterns.md` - Comprehensive pure API testing patterns
|
||||
- `overview.md` - Installation and design principles
|
||||
- `auth-session.md` - Authentication token management
|
||||
- `recurse.md` - Polling for async operations
|
||||
- `fixtures-composition.md` - Combining utilities with mergeTests
|
||||
- `log.md` - Logging API requests
|
||||
- `contract-testing.md` - Pact contract testing
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
|
||||
src/modules/bmm/testarch/knowledge/api-testing-patterns.md (new file)
@@ -0,0 +1,843 @@
|
||||
# API Testing Patterns
|
||||
|
||||
## Principle
|
||||
|
||||
Test APIs and backend services directly without browser overhead. Use Playwright's `request` context for HTTP operations, `apiRequest` utility for enhanced features, and `recurse` for async operations. Pure API tests run faster, are more stable, and provide better coverage for service-layer logic.
|
||||
|
||||
## Rationale
|
||||
|
||||
Many teams over-rely on E2E/browser tests when API tests would be more appropriate:
|
||||
|
||||
- **Slower feedback**: Browser tests take seconds, API tests take milliseconds
|
||||
- **More brittle**: UI changes break tests even when API works correctly
|
||||
- **Wrong abstraction**: Testing business logic through UI layers adds noise
|
||||
- **Resource heavy**: Browsers consume memory and CPU
|
||||
|
||||
API-first testing provides:
|
||||
|
||||
- **Fast execution**: No browser startup, no rendering, no JavaScript execution
|
||||
- **Direct validation**: Test exactly what the service returns
|
||||
- **Better isolation**: Test service logic independent of UI
|
||||
- **Easier debugging**: Clear request/response without DOM noise
|
||||
- **Contract validation**: Verify API contracts explicitly
|
||||
|
||||
## When to Use API Tests vs E2E Tests
|
||||
|
||||
| Scenario | API Test | E2E Test |
|
||||
|----------|----------|----------|
|
||||
| CRUD operations | ✅ Primary | ❌ Overkill |
|
||||
| Business logic validation | ✅ Primary | ❌ Overkill |
|
||||
| Error handling (4xx, 5xx) | ✅ Primary | ⚠️ Supplement |
|
||||
| Authentication flows | ✅ Primary | ⚠️ Supplement |
|
||||
| Data transformation | ✅ Primary | ❌ Overkill |
|
||||
| User journeys | ❌ Can't test | ✅ Primary |
|
||||
| Visual regression | ❌ Can't test | ✅ Primary |
|
||||
| Cross-browser issues | ❌ Can't test | ✅ Primary |
|
||||
|
||||
**Rule of thumb**: If you're testing what the server returns (not how it looks), use API tests.
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Pure API Test (No Browser)
|
||||
|
||||
**Context**: Test REST API endpoints directly without any browser context.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/users.spec.ts
|
||||
import { test, expect } from '@playwright/test';
|
||||
|
||||
// No page, no browser - just API
|
||||
test.describe('Users API', () => {
|
||||
test('should create user', async ({ request }) => {
|
||||
const response = await request.post('/api/users', {
|
||||
data: {
|
||||
name: 'John Doe',
|
||||
email: 'john@example.com',
|
||||
role: 'user',
|
||||
},
|
||||
});
|
||||
|
||||
expect(response.status()).toBe(201);
|
||||
|
||||
const user = await response.json();
|
||||
expect(user.id).toBeDefined();
|
||||
expect(user.name).toBe('John Doe');
|
||||
expect(user.email).toBe('john@example.com');
|
||||
});
|
||||
|
||||
test('should get user by ID', async ({ request }) => {
|
||||
// Create user first
|
||||
const createResponse = await request.post('/api/users', {
|
||||
data: { name: 'Jane Doe', email: 'jane@example.com' },
|
||||
});
|
||||
const { id } = await createResponse.json();
|
||||
|
||||
// Get user
|
||||
const getResponse = await request.get(`/api/users/${id}`);
|
||||
expect(getResponse.status()).toBe(200);
|
||||
|
||||
const user = await getResponse.json();
|
||||
expect(user.id).toBe(id);
|
||||
expect(user.name).toBe('Jane Doe');
|
||||
});
|
||||
|
||||
test('should return 404 for non-existent user', async ({ request }) => {
|
||||
const response = await request.get('/api/users/non-existent-id');
|
||||
expect(response.status()).toBe(404);
|
||||
|
||||
const error = await response.json();
|
||||
expect(error.code).toBe('USER_NOT_FOUND');
|
||||
});
|
||||
|
||||
test('should validate required fields', async ({ request }) => {
|
||||
const response = await request.post('/api/users', {
|
||||
data: { name: 'Missing Email' }, // email is required
|
||||
});
|
||||
|
||||
expect(response.status()).toBe(400);
|
||||
|
||||
const error = await response.json();
|
||||
expect(error.code).toBe('VALIDATION_ERROR');
|
||||
expect(error.details).toContainEqual(
|
||||
expect.objectContaining({ field: 'email', message: expect.any(String) })
|
||||
);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- No `page` fixture needed - only `request`
|
||||
- Tests run without browser overhead
|
||||
- Direct HTTP assertions
|
||||
- Clear error handling tests
|
||||
|
||||
### Example 2: API Test with apiRequest Utility
|
||||
|
||||
**Context**: Use enhanced apiRequest for schema validation, retry, and type safety.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/orders.spec.ts
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
import { z } from 'zod';
|
||||
|
||||
// Define schema for type safety and validation
|
||||
const OrderSchema = z.object({
|
||||
id: z.string().uuid(),
|
||||
userId: z.string(),
|
||||
items: z.array(
|
||||
z.object({
|
||||
productId: z.string(),
|
||||
quantity: z.number().positive(),
|
||||
price: z.number().positive(),
|
||||
})
|
||||
),
|
||||
total: z.number().positive(),
|
||||
status: z.enum(['pending', 'processing', 'shipped', 'delivered']),
|
||||
createdAt: z.string().datetime(),
|
||||
});
|
||||
|
||||
type Order = z.infer<typeof OrderSchema>;
|
||||
|
||||
test.describe('Orders API', () => {
|
||||
test('should create order with schema validation', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest<Order>({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
body: {
|
||||
userId: 'user-123',
|
||||
items: [
|
||||
{ productId: 'prod-1', quantity: 2, price: 29.99 },
|
||||
{ productId: 'prod-2', quantity: 1, price: 49.99 },
|
||||
],
|
||||
},
|
||||
validateSchema: OrderSchema, // Validates response matches schema
|
||||
});
|
||||
|
||||
expect(status).toBe(201);
|
||||
expect(body.id).toBeDefined();
|
||||
expect(body.status).toBe('pending');
|
||||
expect(body.total).toBe(109.97); // 2*29.99 + 49.99
|
||||
});
|
||||
|
||||
test('should handle server errors with retry', async ({ apiRequest }) => {
|
||||
// apiRequest retries 5xx errors by default
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/orders/order-123',
|
||||
retryConfig: {
|
||||
maxRetries: 3,
|
||||
retryDelay: 1000,
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
});
|
||||
|
||||
test('should list orders with pagination', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest<{ orders: Order[]; total: number; page: number }>({
|
||||
method: 'GET',
|
||||
path: '/api/orders',
|
||||
params: { page: 1, limit: 10, status: 'pending' },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.orders).toHaveLength(10);
|
||||
expect(body.total).toBeGreaterThan(10);
|
||||
expect(body.page).toBe(1);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Zod schema for runtime validation AND TypeScript types
|
||||
- `validateSchema` throws if response doesn't match
|
||||
- Built-in retry for transient failures
|
||||
- Type-safe `body` access
|
||||
|
||||
### Example 3: Microservice-to-Microservice Testing
|
||||
|
||||
**Context**: Test service interactions without browser - validate API contracts between services.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/service-integration.spec.ts
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test.describe('Service Integration', () => {
|
||||
const USER_SERVICE_URL = process.env.USER_SERVICE_URL || 'http://localhost:3001';
|
||||
const ORDER_SERVICE_URL = process.env.ORDER_SERVICE_URL || 'http://localhost:3002';
|
||||
const INVENTORY_SERVICE_URL = process.env.INVENTORY_SERVICE_URL || 'http://localhost:3003';
|
||||
|
||||
test('order service should validate user exists', async ({ apiRequest }) => {
|
||||
// Create user in user-service
|
||||
const { body: user } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/users',
|
||||
baseUrl: USER_SERVICE_URL,
|
||||
body: { name: 'Test User', email: 'test@example.com' },
|
||||
});
|
||||
|
||||
// Create order in order-service (should validate user via user-service)
|
||||
const { status, body: order } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
baseUrl: ORDER_SERVICE_URL,
|
||||
body: {
|
||||
userId: user.id,
|
||||
items: [{ productId: 'prod-1', quantity: 1 }],
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(201);
|
||||
expect(order.userId).toBe(user.id);
|
||||
});
|
||||
|
||||
test('order service should reject invalid user', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
baseUrl: ORDER_SERVICE_URL,
|
||||
body: {
|
||||
userId: 'non-existent-user',
|
||||
items: [{ productId: 'prod-1', quantity: 1 }],
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(400);
|
||||
expect(body.code).toBe('INVALID_USER');
|
||||
});
|
||||
|
||||
test('order should decrease inventory', async ({ apiRequest, recurse }) => {
|
||||
// Get initial inventory
|
||||
const { body: initialInventory } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/inventory/prod-1',
|
||||
baseUrl: INVENTORY_SERVICE_URL,
|
||||
});
|
||||
|
||||
// Create order
|
||||
await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
baseUrl: ORDER_SERVICE_URL,
|
||||
body: {
|
||||
userId: 'user-123',
|
||||
items: [{ productId: 'prod-1', quantity: 2 }],
|
||||
},
|
||||
});
|
||||
|
||||
// Poll for inventory update (eventual consistency)
|
||||
const { body: updatedInventory } = await recurse(
|
||||
() =>
|
||||
apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/inventory/prod-1',
|
||||
baseUrl: INVENTORY_SERVICE_URL,
|
||||
}),
|
||||
(response) => response.body.quantity === initialInventory.quantity - 2,
|
||||
{ timeout: 10000, interval: 500 }
|
||||
);
|
||||
|
||||
expect(updatedInventory.quantity).toBe(initialInventory.quantity - 2);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Multiple service URLs for microservice testing
|
||||
- Tests service-to-service communication
|
||||
- Uses `recurse` for eventual consistency
|
||||
- No browser needed for full integration testing
|
||||
|
||||
### Example 4: GraphQL API Testing
|
||||
|
||||
**Context**: Test GraphQL endpoints with queries and mutations.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/graphql.spec.ts
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
const GRAPHQL_ENDPOINT = '/graphql';
|
||||
|
||||
test.describe('GraphQL API', () => {
|
||||
test('should query users', async ({ apiRequest }) => {
|
||||
const query = `
|
||||
query GetUsers($limit: Int) {
|
||||
users(limit: $limit) {
|
||||
id
|
||||
name
|
||||
email
|
||||
role
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: GRAPHQL_ENDPOINT,
|
||||
body: {
|
||||
query,
|
||||
variables: { limit: 10 },
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.errors).toBeUndefined();
|
||||
expect(body.data.users).toHaveLength(10);
|
||||
expect(body.data.users[0]).toHaveProperty('id');
|
||||
expect(body.data.users[0]).toHaveProperty('name');
|
||||
});
|
||||
|
||||
test('should create user via mutation', async ({ apiRequest }) => {
|
||||
const mutation = `
|
||||
mutation CreateUser($input: CreateUserInput!) {
|
||||
createUser(input: $input) {
|
||||
id
|
||||
name
|
||||
email
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: GRAPHQL_ENDPOINT,
|
||||
body: {
|
||||
query: mutation,
|
||||
variables: {
|
||||
input: {
|
||||
name: 'GraphQL User',
|
||||
email: 'graphql@example.com',
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.errors).toBeUndefined();
|
||||
expect(body.data.createUser.id).toBeDefined();
|
||||
expect(body.data.createUser.name).toBe('GraphQL User');
|
||||
});
|
||||
|
||||
test('should handle GraphQL errors', async ({ apiRequest }) => {
|
||||
const query = `
|
||||
query GetUser($id: ID!) {
|
||||
user(id: $id) {
|
||||
id
|
||||
name
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: GRAPHQL_ENDPOINT,
|
||||
body: {
|
||||
query,
|
||||
variables: { id: 'non-existent' },
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200); // GraphQL returns 200 even for errors
|
||||
expect(body.errors).toBeDefined();
|
||||
expect(body.errors[0].message).toContain('not found');
|
||||
expect(body.data.user).toBeNull();
|
||||
});
|
||||
|
||||
test('should handle validation errors', async ({ apiRequest }) => {
|
||||
const mutation = `
|
||||
mutation CreateUser($input: CreateUserInput!) {
|
||||
createUser(input: $input) {
|
||||
id
|
||||
}
|
||||
}
|
||||
`;
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: GRAPHQL_ENDPOINT,
|
||||
body: {
|
||||
query: mutation,
|
||||
variables: {
|
||||
input: {
|
||||
name: '', // Invalid: empty name
|
||||
email: 'invalid-email', // Invalid: bad format
|
||||
},
|
||||
},
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.errors).toBeDefined();
|
||||
expect(body.errors[0].extensions.code).toBe('BAD_USER_INPUT');
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- GraphQL queries and mutations via POST
|
||||
- Variables passed in request body
|
||||
- GraphQL returns 200 even for errors (check `body.errors`)
|
||||
- Test validation and business logic errors
|
||||
|
||||
### Example 5: Database Seeding and Cleanup via API
|
||||
|
||||
**Context**: Use API calls to set up and tear down test data without direct database access.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/with-data-setup.spec.ts
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test.describe('Orders with Data Setup', () => {
|
||||
let testUser: { id: string; email: string };
|
||||
let testProducts: Array<{ id: string; name: string; price: number }>;
|
||||
|
||||
test.beforeAll(async ({ request }) => {
|
||||
// Seed user via API
|
||||
const userResponse = await request.post('/api/users', {
|
||||
data: {
|
||||
name: 'Test User',
|
||||
email: `test-${Date.now()}@example.com`,
|
||||
},
|
||||
});
|
||||
testUser = await userResponse.json();
|
||||
|
||||
// Seed products via API
|
||||
testProducts = [];
|
||||
for (const product of [
|
||||
{ name: 'Widget A', price: 29.99 },
|
||||
{ name: 'Widget B', price: 49.99 },
|
||||
{ name: 'Widget C', price: 99.99 },
|
||||
]) {
|
||||
const productResponse = await request.post('/api/products', {
|
||||
data: product,
|
||||
});
|
||||
testProducts.push(await productResponse.json());
|
||||
}
|
||||
});
|
||||
|
||||
test.afterAll(async ({ request }) => {
|
||||
// Cleanup via API
|
||||
if (testUser?.id) {
|
||||
await request.delete(`/api/users/${testUser.id}`);
|
||||
}
|
||||
for (const product of testProducts) {
|
||||
await request.delete(`/api/products/${product.id}`);
|
||||
}
|
||||
});
|
||||
|
||||
test('should create order with seeded data', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
body: {
|
||||
userId: testUser.id,
|
||||
items: [
|
||||
{ productId: testProducts[0].id, quantity: 2 },
|
||||
{ productId: testProducts[1].id, quantity: 1 },
|
||||
],
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(201);
|
||||
expect(body.userId).toBe(testUser.id);
|
||||
expect(body.items).toHaveLength(2);
|
||||
expect(body.total).toBe(2 * 29.99 + 49.99);
|
||||
});
|
||||
|
||||
test('should list user orders', async ({ apiRequest }) => {
|
||||
// Create an order first
|
||||
await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
body: {
|
||||
userId: testUser.id,
|
||||
items: [{ productId: testProducts[2].id, quantity: 1 }],
|
||||
},
|
||||
});
|
||||
|
||||
// List orders for user
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/orders',
|
||||
params: { userId: testUser.id },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.orders.length).toBeGreaterThanOrEqual(1);
|
||||
expect(body.orders.every((o: any) => o.userId === testUser.id)).toBe(true);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `beforeAll`/`afterAll` for test data setup/cleanup
|
||||
- API-based seeding (no direct DB access needed)
|
||||
- Unique emails to prevent conflicts in parallel runs
|
||||
- Cleanup after all tests complete
|
||||
|
||||
### Example 6: Background Job Testing with Recurse
|
||||
|
||||
**Context**: Test async operations like background jobs, webhooks, and eventual consistency.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/background-jobs.spec.ts
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test.describe('Background Jobs', () => {
|
||||
test('should process export job', async ({ apiRequest, recurse }) => {
|
||||
// Trigger export job
|
||||
const { body: job } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/exports',
|
||||
body: {
|
||||
type: 'users',
|
||||
format: 'csv',
|
||||
filters: { createdAfter: '2024-01-01' },
|
||||
},
|
||||
});
|
||||
|
||||
expect(job.id).toBeDefined();
|
||||
expect(job.status).toBe('pending');
|
||||
|
||||
// Poll until job completes
|
||||
const { body: completedJob } = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/exports/${job.id}` }),
|
||||
(response) => response.body.status === 'completed',
|
||||
{
|
||||
timeout: 60000,
|
||||
interval: 2000,
|
||||
log: `Waiting for export job ${job.id} to complete`,
|
||||
}
|
||||
);
|
||||
|
||||
expect(completedJob.status).toBe('completed');
|
||||
expect(completedJob.downloadUrl).toBeDefined();
|
||||
expect(completedJob.recordCount).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
test('should handle job failure gracefully', async ({ apiRequest, recurse }) => {
|
||||
// Trigger job that will fail
|
||||
const { body: job } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/exports',
|
||||
body: {
|
||||
type: 'invalid-type', // This will cause failure
|
||||
format: 'csv',
|
||||
},
|
||||
});
|
||||
|
||||
// Poll until job fails
|
||||
const { body: failedJob } = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/exports/${job.id}` }),
|
||||
(response) => ['completed', 'failed'].includes(response.body.status),
|
||||
{ timeout: 30000 }
|
||||
);
|
||||
|
||||
expect(failedJob.status).toBe('failed');
|
||||
expect(failedJob.error).toBeDefined();
|
||||
expect(failedJob.error.code).toBe('INVALID_EXPORT_TYPE');
|
||||
});
|
||||
|
||||
test('should process webhook delivery', async ({ apiRequest, recurse }) => {
|
||||
// Trigger action that sends webhook
|
||||
const { body: order } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
body: {
|
||||
userId: 'user-123',
|
||||
items: [{ productId: 'prod-1', quantity: 1 }],
|
||||
webhookUrl: 'https://webhook.site/test-endpoint',
|
||||
},
|
||||
});
|
||||
|
||||
// Poll for webhook delivery status
|
||||
const { body: webhookStatus } = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/webhooks/order/${order.id}` }),
|
||||
(response) => response.body.delivered === true,
|
||||
{ timeout: 30000, interval: 1000 }
|
||||
);
|
||||
|
||||
expect(webhookStatus.delivered).toBe(true);
|
||||
expect(webhookStatus.deliveredAt).toBeDefined();
|
||||
expect(webhookStatus.responseStatus).toBe(200);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `recurse` for polling async operations
|
||||
- Test both success and failure scenarios
|
||||
- Configurable timeout and interval
|
||||
- Log messages for debugging
|
||||
|
||||
### Example 7: Service Authentication (No Browser)
|
||||
|
||||
**Context**: Test authenticated API endpoints using tokens directly - no browser login needed.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/authenticated.spec.ts
|
||||
import { test, expect } from '@seontechnologies/playwright-utils/fixtures';
|
||||
|
||||
test.describe('Authenticated API Tests', () => {
|
||||
let authToken: string;
|
||||
|
||||
test.beforeAll(async ({ request }) => {
|
||||
// Get token via API (no browser!)
|
||||
const response = await request.post('/api/auth/login', {
|
||||
data: {
|
||||
email: process.env.TEST_USER_EMAIL,
|
||||
password: process.env.TEST_USER_PASSWORD,
|
||||
},
|
||||
});
|
||||
|
||||
const { token } = await response.json();
|
||||
authToken = token;
|
||||
});
|
||||
|
||||
test('should access protected endpoint with token', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/me',
|
||||
headers: {
|
||||
Authorization: `Bearer ${authToken}`,
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.email).toBe(process.env.TEST_USER_EMAIL);
|
||||
});
|
||||
|
||||
test('should reject request without token', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/me',
|
||||
// No Authorization header
|
||||
});
|
||||
|
||||
expect(status).toBe(401);
|
||||
expect(body.code).toBe('UNAUTHORIZED');
|
||||
});
|
||||
|
||||
test('should reject expired token', async ({ apiRequest }) => {
|
||||
const expiredToken = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'; // Expired token
|
||||
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/me',
|
||||
headers: {
|
||||
Authorization: `Bearer ${expiredToken}`,
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(401);
|
||||
expect(body.code).toBe('TOKEN_EXPIRED');
|
||||
});
|
||||
|
||||
test('should handle role-based access', async ({ apiRequest }) => {
|
||||
// User token (non-admin)
|
||||
const { status } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/admin/users',
|
||||
headers: {
|
||||
Authorization: `Bearer ${authToken}`,
|
||||
},
|
||||
});
|
||||
|
||||
expect(status).toBe(403); // Forbidden for non-admin
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Token obtained via API login (no browser)
|
||||
- Token reused across all tests in describe block
|
||||
- Test auth, expired tokens, and RBAC
|
||||
- Pure API testing without UI
|
||||
|
||||
## API Test Configuration
|
||||
|
||||
### Playwright Config for API-Only Tests
|
||||
|
||||
```typescript
|
||||
// playwright.config.ts
|
||||
import { defineConfig } from '@playwright/test';
|
||||
|
||||
export default defineConfig({
|
||||
testDir: './tests/api',
|
||||
|
||||
// No browser needed for API tests
|
||||
use: {
|
||||
baseURL: process.env.API_URL || 'http://localhost:3000',
|
||||
extraHTTPHeaders: {
|
||||
'Accept': 'application/json',
|
||||
'Content-Type': 'application/json',
|
||||
},
|
||||
},
|
||||
|
||||
// Faster without browser overhead
|
||||
timeout: 30000,
|
||||
|
||||
// Run API tests in parallel
|
||||
workers: 4,
|
||||
fullyParallel: true,
|
||||
|
||||
// No screenshots/traces needed for API tests
|
||||
reporter: [['html'], ['json', { outputFile: 'api-test-results.json' }]],
|
||||
});
|
||||
```
|
||||
|
||||
### Separate API Test Project
|
||||
|
||||
```typescript
|
||||
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';
|
||||
export default defineConfig({
|
||||
projects: [
|
||||
{
|
||||
name: 'api',
|
||||
testDir: './tests/api',
|
||||
use: {
|
||||
baseURL: process.env.API_URL,
|
||||
},
|
||||
},
|
||||
{
|
||||
name: 'e2e',
|
||||
testDir: './tests/e2e',
|
||||
use: {
|
||||
baseURL: process.env.APP_URL,
|
||||
...devices['Desktop Chrome'],
|
||||
},
|
||||
},
|
||||
],
|
||||
});
|
||||
```
|
||||
|
||||
## Comparison: API Tests vs E2E Tests
|
||||
|
||||
| Aspect | API Test | E2E Test |
|
||||
|--------|----------|----------|
|
||||
| **Speed** | ~50-100ms per test | ~2-10s per test |
|
||||
| **Stability** | Very stable | More flaky (UI timing) |
|
||||
| **Setup** | Minimal | Browser, context, page |
|
||||
| **Debugging** | Clear request/response | DOM, screenshots, traces |
|
||||
| **Coverage** | Service logic | User experience |
|
||||
| **Parallelization** | Easy (stateless) | Complex (browser resources) |
|
||||
| **CI Cost** | Low (no browser) | High (browser containers) |
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `api-request.md` - apiRequest utility details
|
||||
- `recurse.md` - Polling patterns for async operations
|
||||
- `auth-session.md` - Token management
|
||||
- `contract-testing.md` - Pact contract testing
|
||||
- `test-levels-framework.md` - When to use which test level
|
||||
- `data-factories.md` - Test data setup patterns
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**DON'T use E2E for API validation:**
|
||||
|
||||
```typescript
|
||||
// Bad: Testing API through UI
|
||||
test('validate user creation', async ({ page }) => {
|
||||
await page.goto('/admin/users');
|
||||
await page.fill('#name', 'John');
|
||||
await page.click('#submit');
|
||||
await expect(page.getByText('User created')).toBeVisible();
|
||||
});
|
||||
```
|
||||
|
||||
**DO test APIs directly:**
|
||||
|
||||
```typescript
|
||||
// Good: Direct API test
|
||||
test('validate user creation', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/users',
|
||||
body: { name: 'John' },
|
||||
});
|
||||
expect(status).toBe(201);
|
||||
expect(body.id).toBeDefined();
|
||||
});
|
||||
```
|
||||
|
||||
**DON'T ignore API tests because "E2E covers it":**
|
||||
|
||||
```typescript
|
||||
// Bad thinking: "Our E2E tests create users, so API is tested"
|
||||
// Reality: E2E tests one happy path; API tests cover edge cases
|
||||
```
|
||||
|
||||
**DO have dedicated API test coverage:**
|
||||
|
||||
```typescript
|
||||
// Good: Explicit API test suite
|
||||
test.describe('Users API', () => {
|
||||
test('creates user', async ({ apiRequest }) => { /* ... */ });
|
||||
test('handles duplicate email', async ({ apiRequest }) => { /* ... */ });
|
||||
test('validates required fields', async ({ apiRequest }) => { /* ... */ });
|
||||
test('handles malformed JSON', async ({ apiRequest }) => { /* ... */ });
|
||||
test('rate limits requests', async ({ apiRequest }) => { /* ... */ });
|
||||
});
|
||||
```
|
||||
@@ -2,7 +2,7 @@
|
||||
|
||||
## Principle
|
||||
|
||||
Persist authentication tokens to disk and reuse across test runs. Support multiple user identifiers, ephemeral authentication, and worker-specific accounts for parallel execution. Fetch tokens once, use everywhere.
|
||||
Persist authentication tokens to disk and reuse across test runs. Support multiple user identifiers, ephemeral authentication, and worker-specific accounts for parallel execution. Fetch tokens once, use everywhere. **Works for both API-only tests and browser tests.**
|
||||
|
||||
## Rationale
|
||||
|
||||
@@ -22,6 +22,7 @@ The `auth-session` utility provides:
|
||||
- **Worker-specific accounts**: Parallel execution with isolated user accounts
|
||||
- **Automatic token management**: Checks validity, renews if expired
|
||||
- **Flexible provider pattern**: Adapt to any auth system (OAuth2, JWT, custom)
|
||||
- **API-first design**: Get tokens for API tests without browser overhead
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
@@ -244,6 +245,200 @@ test('parallel test 2', async ({ page }) => {
|
||||
- Token management automatic per worker
|
||||
- Scales to any number of workers
|
||||
|
||||
### Example 6: Pure API Authentication (No Browser)
|
||||
|
||||
**Context**: Get auth tokens for API-only tests using auth-session disk persistence.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Step 1: Create API-only auth provider (no browser needed)
|
||||
// playwright/support/api-auth-provider.ts
|
||||
import { type AuthProvider } from '@seontechnologies/playwright-utils/auth-session';
|
||||
|
||||
const apiAuthProvider: AuthProvider = {
|
||||
getEnvironment: (options) => options.environment || 'local',
|
||||
getUserIdentifier: (options) => options.userIdentifier || 'api-user',
|
||||
|
||||
extractToken: (storageState) => {
|
||||
// Token stored in localStorage format for disk persistence
|
||||
const tokenEntry = storageState.origins?.[0]?.localStorage?.find(
|
||||
(item) => item.name === 'auth_token'
|
||||
);
|
||||
return tokenEntry?.value;
|
||||
},
|
||||
|
||||
isTokenExpired: (storageState) => {
|
||||
const expiryEntry = storageState.origins?.[0]?.localStorage?.find(
|
||||
(item) => item.name === 'token_expiry'
|
||||
);
|
||||
if (!expiryEntry) return true;
|
||||
return Date.now() > parseInt(expiryEntry.value, 10);
|
||||
},
|
||||
|
||||
manageAuthToken: async (request, options) => {
|
||||
const email = process.env.TEST_USER_EMAIL;
|
||||
const password = process.env.TEST_USER_PASSWORD;
|
||||
|
||||
if (!email || !password) {
|
||||
throw new Error('TEST_USER_EMAIL and TEST_USER_PASSWORD must be set');
|
||||
}
|
||||
|
||||
// Pure API login - no browser!
|
||||
const response = await request.post('/api/auth/login', {
|
||||
data: { email, password },
|
||||
});
|
||||
|
||||
if (!response.ok()) {
|
||||
throw new Error(`Auth failed: ${response.status()}`);
|
||||
}
|
||||
|
||||
const { token, expiresIn } = await response.json();
|
||||
const expiryTime = Date.now() + expiresIn * 1000;
|
||||
|
||||
// Return storage state format for disk persistence
|
||||
return {
|
||||
cookies: [],
|
||||
origins: [
|
||||
{
|
||||
origin: process.env.API_BASE_URL || 'http://localhost:3000',
|
||||
localStorage: [
|
||||
{ name: 'auth_token', value: token },
|
||||
{ name: 'token_expiry', value: String(expiryTime) },
|
||||
],
|
||||
},
|
||||
],
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
export default apiAuthProvider;
|
||||
|
||||
// Step 2: Create auth fixture
|
||||
// playwright/support/fixtures.ts
|
||||
import { test as base } from '@playwright/test';
|
||||
import { createAuthFixtures, setAuthProvider } from '@seontechnologies/playwright-utils/auth-session';
|
||||
import apiAuthProvider from './api-auth-provider';
|
||||
|
||||
setAuthProvider(apiAuthProvider);
|
||||
|
||||
export const test = base.extend(createAuthFixtures());
|
||||
|
||||
// Step 3: Use in tests - token persisted to disk!
|
||||
// tests/api/authenticated-api.spec.ts
|
||||
import { test } from '../support/fixtures';
|
||||
import { expect } from '@playwright/test';
|
||||
|
||||
test('should access protected endpoint', async ({ authToken, apiRequest }) => {
|
||||
// authToken is automatically loaded from disk or fetched if expired
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/api/me',
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
});
|
||||
|
||||
test('should create resource with auth', async ({ authToken, apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
headers: { Authorization: `Bearer ${authToken}` },
|
||||
body: { items: [{ productId: 'prod-1', quantity: 2 }] },
|
||||
});
|
||||
|
||||
expect(status).toBe(201);
|
||||
expect(body.id).toBeDefined();
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Token persisted to disk (not in-memory) - survives test reruns
|
||||
- Provider fetches token once, reuses until expired
|
||||
- Pure API authentication - no browser context needed
|
||||
- `authToken` fixture handles disk read/write automatically
|
||||
- Environment variables validated with clear error message
|
||||
|
||||
### Example 7: Service-to-Service Authentication
|
||||
|
||||
**Context**: Test microservice authentication patterns (API keys, service tokens) with proper environment validation.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// tests/api/service-auth.spec.ts
|
||||
import { test as base, expect, mergeTests } from '@playwright/test';
import { test as apiFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
|
||||
// Validate environment variables at module load
|
||||
const SERVICE_API_KEY = process.env.SERVICE_API_KEY;
|
||||
const INTERNAL_SERVICE_URL = process.env.INTERNAL_SERVICE_URL;
|
||||
|
||||
if (!SERVICE_API_KEY) {
|
||||
throw new Error('SERVICE_API_KEY environment variable is required');
|
||||
}
|
||||
if (!INTERNAL_SERVICE_URL) {
|
||||
throw new Error('INTERNAL_SERVICE_URL environment variable is required');
|
||||
}
|
||||
|
||||
const test = mergeTests(base, apiFixture);
|
||||
|
||||
test.describe('Service-to-Service Auth', () => {
|
||||
test('should authenticate with API key', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/internal/health',
|
||||
baseUrl: INTERNAL_SERVICE_URL,
|
||||
headers: { 'X-API-Key': SERVICE_API_KEY },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.status).toBe('healthy');
|
||||
});
|
||||
|
||||
test('should reject invalid API key', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'GET',
|
||||
path: '/internal/health',
|
||||
baseUrl: INTERNAL_SERVICE_URL,
|
||||
headers: { 'X-API-Key': 'invalid-key' },
|
||||
});
|
||||
|
||||
expect(status).toBe(401);
|
||||
expect(body.code).toBe('INVALID_API_KEY');
|
||||
});
|
||||
|
||||
test('should call downstream service with propagated auth', async ({ apiRequest }) => {
|
||||
const { status, body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/internal/aggregate-data',
|
||||
baseUrl: INTERNAL_SERVICE_URL,
|
||||
headers: {
|
||||
'X-API-Key': SERVICE_API_KEY,
|
||||
'X-Request-ID': `test-${Date.now()}`,
|
||||
},
|
||||
body: { sources: ['users', 'orders', 'inventory'] },
|
||||
});
|
||||
|
||||
expect(status).toBe(200);
|
||||
expect(body.aggregatedFrom).toHaveLength(3);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Environment variables validated at module load with clear errors
|
||||
- API key authentication (simpler than OAuth - no disk persistence needed)
|
||||
- Test internal/service endpoints
|
||||
- Validate auth rejection scenarios
|
||||
- Correlation ID for request tracing
|
||||
|
||||
> **Note**: API keys are typically static secrets that don't expire, so disk persistence (auth-session) isn't needed. For rotating service tokens, use the auth-session provider pattern from Example 6.
|
||||
|
||||
## Custom Auth Provider Pattern
|
||||
|
||||
**Context**: Adapt auth-session to your authentication system (OAuth2, JWT, SAML, custom).
|
||||
@@ -310,6 +505,7 @@ test('authenticated API call', async ({ apiRequest, authToken }) => {
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `api-testing-patterns.md` - Pure API testing patterns (no browser)
|
||||
- `overview.md` - Installation and fixture composition
|
||||
- `api-request.md` - Authenticated API requests
|
||||
- `fixtures-composition.md` - Merging auth with other utilities
|
||||
|
||||
@@ -22,6 +22,16 @@ The `file-utils` module provides:
|
||||
- **Validation helpers**: Row count, header checks, content validation
|
||||
- **Format support**: Multiple sheet support (XLSX), text extraction (PDF), archive extraction (ZIP)
|
||||
|
||||
## Why Use This Instead of Vanilla Playwright?
|
||||
|
||||
| Vanilla Playwright | File Utils |
|
||||
| ------------------------------------------- | ------------------------------------------------ |
|
||||
| ~80 lines per CSV flow (download + parse) | ~10 lines end-to-end |
|
||||
| Manual event orchestration for downloads | Encapsulated in `handleDownload()` |
|
||||
| Manual path handling and `saveAs` | Returns a ready-to-use file path |
|
||||
| Manual existence checks and error handling | Centralized in one place via utility patterns |
|
||||
| Manual CSV parsing config (headers, typing) | `readCSV()` returns `{ data, headers }` directly |
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: UI-Triggered CSV Download
|
||||
@@ -40,20 +50,18 @@ test('should download and validate CSV', async ({ page }) => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.click('[data-testid="export-csv"]'),
|
||||
trigger: () => page.getByTestId('download-button-text/csv').click(),
|
||||
});
|
||||
|
||||
const { content } = await readCSV({ filePath: downloadPath });
|
||||
const csvResult = await readCSV({ filePath: downloadPath });
|
||||
|
||||
// Validate headers
|
||||
expect(content.headers).toEqual(['ID', 'Name', 'Email', 'Role']);
|
||||
|
||||
// Validate data
|
||||
expect(content.data).toHaveLength(10);
|
||||
expect(content.data[0]).toMatchObject({
|
||||
// Access parsed data and headers
|
||||
const { data, headers } = csvResult.content;
|
||||
expect(headers).toEqual(['ID', 'Name', 'Email']);
|
||||
expect(data[0]).toMatchObject({
|
||||
ID: expect.any(String),
|
||||
Name: expect.any(String),
|
||||
Email: expect.stringMatching(/@/),
|
||||
Email: expect.any(String),
|
||||
});
|
||||
});
|
||||
```
|
||||
@@ -81,25 +89,27 @@ test('should read multi-sheet XLSX', async () => {
|
||||
trigger: () => page.click('[data-testid="export-xlsx"]'),
|
||||
});
|
||||
|
||||
const { content } = await readXLSX({ filePath: downloadPath });
|
||||
const xlsxResult = await readXLSX({ filePath: downloadPath });
|
||||
|
||||
// Access specific sheets
|
||||
const summarySheet = content.sheets.find((s) => s.name === 'Summary');
|
||||
const detailsSheet = content.sheets.find((s) => s.name === 'Details');
|
||||
// Verify worksheet structure
|
||||
expect(xlsxResult.content.worksheets.length).toBeGreaterThan(0);
|
||||
const worksheet = xlsxResult.content.worksheets[0];
|
||||
expect(worksheet).toBeDefined();
|
||||
expect(worksheet).toHaveProperty('name');
|
||||
|
||||
// Validate summary
|
||||
expect(summarySheet.data).toHaveLength(1);
|
||||
expect(summarySheet.data[0].TotalRecords).toBe('150');
|
||||
// Access sheet data
|
||||
const sheetData = worksheet?.data;
|
||||
expect(Array.isArray(sheetData)).toBe(true);
|
||||
|
||||
// Validate details
|
||||
expect(detailsSheet.data).toHaveLength(150);
|
||||
expect(detailsSheet.headers).toContain('TransactionID');
|
||||
// Use type assertion for type safety
|
||||
const firstRow = sheetData![0] as Record<string, unknown>;
|
||||
expect(firstRow).toHaveProperty('id');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `sheets` array with `name` and `data` properties
|
||||
- `worksheets` array with `name` and `data` properties
|
||||
- Access sheets by name
|
||||
- Each sheet has its own headers and data
|
||||
- Type-safe sheet iteration
|
||||
@@ -117,26 +127,48 @@ test('should validate PDF report', async () => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.click('[data-testid="download-report"]'),
|
||||
trigger: () => page.getByTestId('download-button-Text-based PDF Document').click(),
|
||||
});
|
||||
|
||||
const { content } = await readPDF({ filePath: downloadPath });
|
||||
const pdfResult = await readPDF({ filePath: downloadPath });
|
||||
|
||||
// content.text is extracted text from all pages
|
||||
expect(content.text).toContain('Financial Report Q4 2024');
|
||||
expect(content.text).toContain('Total Revenue:');
|
||||
|
||||
// Validate page count
|
||||
expect(content.numpages).toBeGreaterThan(10);
|
||||
// content is extracted text from all pages
|
||||
expect(pdfResult.pagesCount).toBe(1);
|
||||
expect(pdfResult.fileName).toContain('.pdf');
|
||||
expect(pdfResult.content).toContain('All you need is the free Adobe Acrobat Reader');
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
**PDF Reader Options:**
|
||||
|
||||
- `content.text` contains all extracted text
|
||||
- `content.numpages` for page count
|
||||
- PDF parsing handles multi-page documents
|
||||
- Search for specific phrases
|
||||
```typescript
|
||||
const result = await readPDF({
|
||||
filePath: '/path/to/document.pdf',
|
||||
mergePages: false, // Keep pages separate (default: true)
|
||||
debug: true, // Enable debug logging
|
||||
maxPages: 10, // Limit processing to first 10 pages
|
||||
});
|
||||
```
|
||||
|
||||
**Important Limitation - Vector-based PDFs:**
|
||||
|
||||
Text extraction may fail for PDFs that store text as vector graphics (e.g., those generated by jsPDF):
|
||||
|
||||
```typescript
|
||||
// Vector-based PDF example (extraction fails gracefully)
|
||||
const pdfResult = await readPDF({ filePath: downloadPath });
|
||||
|
||||
expect(pdfResult.pagesCount).toBe(1);
|
||||
expect(pdfResult.info.extractionNotes).toContain(
|
||||
'Text extraction from vector-based PDFs is not supported.'
|
||||
);
|
||||
```
|
||||
|
||||
Such PDFs will have:
|
||||
|
||||
- `textExtractionSuccess: false`
|
||||
- `isVectorBased: true`
|
||||
- Explanatory message in `extractionNotes`
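A minimal handling sketch follows; it assumes `textExtractionSuccess` and `isVectorBased` are exposed on `info` alongside `extractionNotes` (the exact field placement is an assumption here):

```typescript
// Hedged sketch - assumes textExtractionSuccess/isVectorBased live on pdfResult.info,
// next to extractionNotes; adjust to the actual shape in your version of the utility
const pdfResult = await readPDF({ filePath: downloadPath });

if (pdfResult.info?.textExtractionSuccess === false) {
  // Text is not recoverable - fall back to structural checks
  expect(pdfResult.info.isVectorBased).toBe(true);
  expect(pdfResult.pagesCount).toBeGreaterThan(0);
} else {
  expect(pdfResult.content).toContain('Expected report heading'); // illustrative phrase
}
```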
|
||||
|
||||
### Example 4: ZIP Archive Validation
|
||||
|
||||
@@ -154,25 +186,33 @@ test('should validate ZIP archive', async () => {
|
||||
trigger: () => page.click('[data-testid="download-backup"]'),
|
||||
});
|
||||
|
||||
const { content } = await readZIP({ filePath: downloadPath });
|
||||
const zipResult = await readZIP({ filePath: downloadPath });
|
||||
|
||||
// Check file list
|
||||
expect(content.files).toContain('data.csv');
|
||||
expect(content.files).toContain('config.json');
|
||||
expect(content.files).toContain('readme.txt');
|
||||
expect(Array.isArray(zipResult.content.entries)).toBe(true);
|
||||
expect(zipResult.content.entries).toContain(
|
||||
'Case_53125_10-19-22_AM/Case_53125_10-19-22_AM_case_data.csv'
|
||||
);
|
||||
|
||||
// Read specific file from archive
|
||||
const configContent = content.zip.readAsText('config.json');
|
||||
const config = JSON.parse(configContent);
|
||||
// Extract specific file
|
||||
const targetFile = 'Case_53125_10-19-22_AM/Case_53125_10-19-22_AM_case_data.csv';
|
||||
const zipWithExtraction = await readZIP({
|
||||
filePath: downloadPath,
|
||||
fileToExtract: targetFile,
|
||||
});
|
||||
|
||||
expect(config.version).toBe('2.0');
|
||||
// Access extracted file buffer
|
||||
const extractedFiles = zipWithExtraction.content.extractedFiles || {};
|
||||
const fileBuffer = extractedFiles[targetFile];
|
||||
expect(fileBuffer).toBeInstanceOf(Buffer);
|
||||
expect(fileBuffer?.length).toBeGreaterThan(0);
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `content.files` lists all files in archive
|
||||
- `content.zip.readAsText()` extracts specific files
|
||||
- `content.entries` lists all files in archive
|
||||
- `fileToExtract` extracts specific files to Buffer
|
||||
- Validate archive structure
|
||||
- Read and parse individual files from ZIP
|
||||
|
||||
@@ -185,7 +225,7 @@ test('should validate ZIP archive', async () => {
|
||||
```typescript
|
||||
test('should download via API', async ({ page, request }) => {
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
page, // Still need page for download events
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: async () => {
|
||||
const response = await request.get('/api/export/csv', {
|
||||
@@ -211,20 +251,123 @@ test('should download via API', async ({ page, request }) => {
|
||||
- Still need `page` for download events
|
||||
- Works with authenticated endpoints
|
||||
|
||||
## Validation Helpers
|
||||
### Example 6: Reading CSV from Buffer (ZIP extraction)
|
||||
|
||||
**Context**: Read CSV content directly from a Buffer (e.g., extracted from ZIP).
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// CSV validation
|
||||
const { isValid, errors } = await validateCSV({
|
||||
filePath: downloadPath,
|
||||
expectedRowCount: 10,
|
||||
requiredHeaders: ['ID', 'Name', 'Email'],
|
||||
// Read from a Buffer (e.g., extracted from a ZIP)
|
||||
const zipResult = await readZIP({
|
||||
filePath: 'archive.zip',
|
||||
fileToExtract: 'data.csv',
|
||||
});
|
||||
const fileBuffer = zipResult.content.extractedFiles?.['data.csv'];
|
||||
const csvFromBuffer = await readCSV({ content: fileBuffer });
|
||||
|
||||
expect(isValid).toBe(true);
|
||||
expect(errors).toHaveLength(0);
|
||||
// Read from a string
|
||||
const csvString = 'name,age\nJohn,30\nJane,25';
|
||||
const csvFromString = await readCSV({ content: csvString });
|
||||
|
||||
const { data, headers } = csvFromString.content;
|
||||
expect(headers).toContain('name');
|
||||
expect(headers).toContain('age');
|
||||
```
|
||||
|
||||
## API Reference
|
||||
|
||||
### CSV Reader Options
|
||||
|
||||
| Option | Type | Default | Description |
|
||||
| -------------- | ------------------ | -------- | -------------------------------------- |
|
||||
| `filePath` | `string` | - | Path to CSV file (mutually exclusive) |
|
||||
| `content` | `string \| Buffer` | - | Direct content (mutually exclusive) |
|
||||
| `delimiter` | `string \| 'auto'` | `','` | Value separator, auto-detect if 'auto' |
|
||||
| `encoding` | `string` | `'utf8'` | File encoding |
|
||||
| `parseHeaders` | `boolean` | `true` | Use first row as headers |
|
||||
| `trim` | `boolean` | `true` | Trim whitespace from values |
|
||||
|
||||
### XLSX Reader Options
|
||||
|
||||
| Option | Type | Description |
|
||||
| ----------- | -------- | ------------------------------ |
|
||||
| `filePath` | `string` | Path to XLSX file |
|
||||
| `sheetName` | `string` | Name of sheet to set as active |
|
||||
|
||||
### PDF Reader Options
|
||||
|
||||
| Option | Type | Default | Description |
|
||||
| ------------ | --------- | ------- | --------------------------- |
|
||||
| `filePath` | `string` | - | Path to PDF file (required) |
|
||||
| `mergePages` | `boolean` | `true` | Merge text from all pages |
|
||||
| `maxPages` | `number` | - | Maximum pages to extract |
|
||||
| `debug` | `boolean` | `false` | Enable debug logging |
|
||||
|
||||
### ZIP Reader Options
|
||||
|
||||
| Option | Type | Description |
|
||||
| --------------- | -------- | ---------------------------------- |
|
||||
| `filePath` | `string` | Path to ZIP file |
|
||||
| `fileToExtract` | `string` | Specific file to extract to Buffer |
|
||||
|
||||
### Return Values
|
||||
|
||||
#### CSV Reader Return Value
|
||||
|
||||
```typescript
|
||||
{
|
||||
content: {
|
||||
data: Array<Array<string | number>>, // Parsed rows (excludes header row if parseHeaders: true)
|
||||
headers: string[] | null // Column headers (null if parseHeaders: false)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### XLSX Reader Return Value
|
||||
|
||||
```typescript
|
||||
{
|
||||
content: {
|
||||
worksheets: Array<{
|
||||
name: string, // Sheet name
|
||||
rows: Array<Array<any>>, // All rows including headers
|
||||
headers?: string[] // First row as headers (if present)
|
||||
}>
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### PDF Reader Return Value
|
||||
|
||||
```typescript
|
||||
{
|
||||
content: string, // Extracted text (merged or per-page based on mergePages)
|
||||
pagesCount: number, // Total pages in PDF
|
||||
fileName?: string, // Original filename if available
|
||||
info?: Record<string, any> // PDF metadata (author, title, etc.)
|
||||
}
|
||||
```
|
||||
|
||||
> **Note**: When `mergePages: false`, `content` is an array of strings (one per page). When `maxPages` is set, only that many pages are extracted.
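A small sketch that handles both shapes described in the note; treating the per-page output as a plain string array is the assumption made here:

```typescript
// Hedged sketch - normalizes content whether pages were merged or kept separate
const pdf = await readPDF({ filePath: '/tmp/report.pdf', mergePages: false }); // illustrative path

const pages = Array.isArray(pdf.content) ? pdf.content : [pdf.content];
expect(pages.length).toBeGreaterThan(0);
expect(pages[0]).toEqual(expect.any(String));
```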
|
||||
|
||||
#### ZIP Reader Return Value
|
||||
|
||||
```typescript
|
||||
{
|
||||
content: {
|
||||
entries: Array<{
|
||||
name: string, // File/directory path within ZIP
|
||||
size: number, // Uncompressed size in bytes
|
||||
isDirectory: boolean // True for directories
|
||||
}>,
|
||||
extractedFiles: Record<string, Buffer | string> // Extracted file contents by path
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
> **Note**: When `fileToExtract` is specified, only that file appears in `extractedFiles`.
|
||||
|
||||
## Download Cleanup Pattern
|
||||
|
||||
```typescript
|
||||
@@ -234,6 +377,66 @@ test.afterEach(async () => {
|
||||
});
|
||||
```
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
A vanilla Playwright snippet from a real test:
|
||||
|
||||
```typescript
|
||||
// ~80 lines of boilerplate in the full flow (abbreviated here)
// Assumes Node's fs/promises as `fs`, a papaparse-style `parse`, and the
// testInfo argument from test(async ({ page }, testInfo) => { ... })
|
||||
const [download] = await Promise.all([
|
||||
page.waitForEvent('download'),
|
||||
page.getByTestId('download-button-CSV Export').click(),
|
||||
]);
|
||||
|
||||
const failure = await download.failure();
|
||||
expect(failure).toBeNull();
|
||||
|
||||
const filePath = testInfo.outputPath(download.suggestedFilename());
|
||||
await download.saveAs(filePath);
|
||||
|
||||
await expect
|
||||
.poll(
|
||||
async () => {
|
||||
try {
|
||||
await fs.access(filePath);
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
},
|
||||
{ timeout: 5000, intervals: [100, 200, 500] }
|
||||
)
|
||||
.toBe(true);
|
||||
|
||||
const csvContent = await fs.readFile(filePath, 'utf-8');
|
||||
|
||||
const parseResult = parse(csvContent, {
|
||||
header: true,
|
||||
skipEmptyLines: true,
|
||||
dynamicTyping: true,
|
||||
transformHeader: (header: string) => header.trim(),
|
||||
});
|
||||
|
||||
if (parseResult.errors.length > 0) {
|
||||
throw new Error(`CSV parsing errors: ${JSON.stringify(parseResult.errors)}`);
|
||||
}
|
||||
|
||||
const data = parseResult.data as Array<Record<string, unknown>>;
|
||||
const headers = parseResult.meta.fields || [];
|
||||
```
|
||||
|
||||
With File Utils, the same flow becomes:
|
||||
|
||||
```typescript
|
||||
const downloadPath = await handleDownload({
|
||||
page,
|
||||
downloadDir: DOWNLOAD_DIR,
|
||||
trigger: () => page.getByTestId('download-button-text/csv').click(),
|
||||
});
|
||||
|
||||
const { data, headers } = (await readCSV({ filePath: downloadPath })).content;
|
||||
```
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `overview.md` - Installation and imports
|
||||
@@ -242,7 +445,7 @@ test.afterEach(async () => {
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Not cleaning up downloads:**
|
||||
**DON'T leave downloads in place:**
|
||||
|
||||
```typescript
|
||||
test('creates file', async () => {
|
||||
@@ -251,7 +454,7 @@ test('creates file', async () => {
|
||||
})
|
||||
```
|
||||
|
||||
**✅ Clean up after tests:**
|
||||
**DO clean up after tests:**
|
||||
|
||||
```typescript
|
||||
test.afterEach(async () => {
|
||||
|
||||
@@ -183,7 +183,31 @@ test('should handle timeout', async ({ page, interceptNetworkCall }) => {
|
||||
- Validate error UI states
|
||||
- No real failures needed
|
||||
|
||||
### Example 5: Multiple Intercepts (Order Matters!)
|
||||
### Example 5: Order Matters - Intercept Before Navigate
|
||||
|
||||
**Context**: The interceptor must be set up before the network request occurs.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// INCORRECT - interceptor set up too late
|
||||
await page.goto('https://example.com'); // Request already happened
|
||||
const networkCall = interceptNetworkCall({ url: '**/api/data' });
|
||||
await networkCall; // Will hang indefinitely!
|
||||
|
||||
// CORRECT - Set up interception first
|
||||
const networkCall = interceptNetworkCall({ url: '**/api/data' });
|
||||
await page.goto('https://example.com');
|
||||
const result = await networkCall;
|
||||
```
|
||||
|
||||
This pattern follows the classic test spy/stub pattern:
|
||||
|
||||
1. Define the spy/stub (set up interception)
|
||||
2. Perform the action (trigger the network request)
|
||||
3. Assert on the spy/stub (await and verify the response)
|
||||
|
||||
### Example 6: Multiple Intercepts
|
||||
|
||||
**Context**: Intercepting different endpoints in the same test - setup order is critical.
|
||||
|
||||
@@ -191,7 +215,7 @@ test('should handle timeout', async ({ page, interceptNetworkCall }) => {
|
||||
|
||||
```typescript
|
||||
test('multiple intercepts', async ({ page, interceptNetworkCall }) => {
|
||||
// ✅ CORRECT: Setup all intercepts BEFORE navigation
|
||||
// Setup all intercepts BEFORE navigation
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' });
|
||||
const productsCall = interceptNetworkCall({ url: '**/api/products' });
|
||||
const ordersCall = interceptNetworkCall({ url: '**/api/orders' });
|
||||
@@ -211,11 +235,85 @@ test('multiple intercepts', async ({ page, interceptNetworkCall }) => {
|
||||
|
||||
- Setup all intercepts before triggering actions
|
||||
- Use `Promise.all()` to wait for multiple calls
|
||||
- Order: intercept → navigate → await
|
||||
- Order: intercept -> navigate -> await
|
||||
- Prevents race conditions
|
||||
|
||||
### Example 7: Capturing Multiple Requests to the Same Endpoint
|
||||
|
||||
**Context**: Each `interceptNetworkCall` captures only the first matching request.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
// Capturing a known number of requests
|
||||
const firstRequest = interceptNetworkCall({ url: '/api/data' });
|
||||
const secondRequest = interceptNetworkCall({ url: '/api/data' });
|
||||
|
||||
await page.click('#load-data-button');
|
||||
|
||||
const firstResponse = await firstRequest;
|
||||
const secondResponse = await secondRequest;
|
||||
|
||||
expect(firstResponse.status).toBe(200);
|
||||
expect(secondResponse.status).toBe(200);
|
||||
|
||||
// Handling an unknown number of requests
|
||||
const getDataRequestInterceptor = () =>
|
||||
interceptNetworkCall({
|
||||
url: '/api/data',
|
||||
timeout: 1000, // Short timeout to detect when no more requests are coming
|
||||
});
|
||||
|
||||
let currentInterceptor = getDataRequestInterceptor();
|
||||
const allResponses = [];
|
||||
|
||||
await page.click('#load-multiple-data-button');
|
||||
|
||||
while (true) {
|
||||
try {
|
||||
const response = await currentInterceptor;
|
||||
allResponses.push(response);
|
||||
currentInterceptor = getDataRequestInterceptor();
|
||||
} catch (error) {
|
||||
// No more requests (timeout)
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
console.log(`Captured ${allResponses.length} requests to /api/data`);
|
||||
```
|
||||
|
||||
### Example 8: Using Timeout
|
||||
|
||||
**Context**: Set a timeout for waiting on a network request.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
const dataCall = interceptNetworkCall({
|
||||
method: 'GET',
|
||||
url: '/api/data-that-might-be-slow',
|
||||
timeout: 5000, // 5 seconds timeout
|
||||
});
|
||||
|
||||
await page.goto('/data-page');
|
||||
|
||||
try {
|
||||
const { responseJson } = await dataCall;
|
||||
console.log('Data loaded successfully:', responseJson);
|
||||
} catch (error) {
|
||||
if (error.message.includes('timeout')) {
|
||||
console.log('Request timed out as expected');
|
||||
} else {
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## URL Pattern Matching
|
||||
|
||||
The utility uses [picomatch](https://github.com/micromatch/picomatch) for powerful glob pattern matching, dramatically simplifying URL targeting:
|
||||
|
||||
**Supported glob patterns:**
|
||||
|
||||
```typescript
|
||||
@@ -226,7 +324,59 @@ test('multiple intercepts', async ({ page, interceptNetworkCall }) => {
|
||||
'**/api/users?id=*'; // With query params
|
||||
```
|
||||
|
||||
**Uses picomatch library** - same pattern syntax as Playwright's `page.route()` but cleaner API.
|
||||
**Comparison with vanilla Playwright:**
|
||||
|
||||
```typescript
|
||||
// Vanilla Playwright - complex predicate
|
||||
const predicate = (response) => {
|
||||
const url = response.url();
|
||||
return (
|
||||
url.endsWith('/api/users') ||
|
||||
url.match(/\/api\/users\/\d+/) ||
|
||||
(url.includes('/api/users/') && url.includes('/profile'))
|
||||
);
|
||||
};
|
||||
page.waitForResponse(predicate);
|
||||
|
||||
// With interceptNetworkCall - simple glob patterns
|
||||
interceptNetworkCall({ url: '/api/users' }); // Exact endpoint
|
||||
interceptNetworkCall({ url: '/api/users/*' }); // User by ID pattern
|
||||
interceptNetworkCall({ url: '/api/users/*/profile' }); // Specific sub-paths
|
||||
interceptNetworkCall({ url: '/api/users/**' }); // Match all
|
||||
```
|
||||
|
||||
## API Reference
|
||||
|
||||
### `interceptNetworkCall(options)`
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| ----------------- | ---------- | --------------------------------------------------------------------- |
|
||||
| `page` | `Page` | Required when using direct import (not needed with fixture) |
|
||||
| `method` | `string` | Optional: HTTP method to match (e.g., 'GET', 'POST') |
|
||||
| `url` | `string` | Optional: URL pattern to match (supports glob patterns via picomatch) |
|
||||
| `fulfillResponse` | `object` | Optional: Response to use when mocking |
|
||||
| `handler` | `function` | Optional: Custom handler function for the route |
|
||||
| `timeout` | `number` | Optional: Timeout in milliseconds for the network request |
|
||||
|
||||
### `fulfillResponse` Object
|
||||
|
||||
| Property | Type | Description |
|
||||
| --------- | ------------------------ | ----------------------------------------------------- |
|
||||
| `status` | `number` | HTTP status code (default: 200) |
|
||||
| `headers` | `Record<string, string>` | Response headers |
|
||||
| `body` | `any` | Response body (will be JSON.stringified if an object) |
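A mocking sketch that uses this shape (the endpoint and payload are illustrative):

```typescript
// Hedged sketch - stubs GET /api/users with the fulfillResponse shape above
const usersCall = interceptNetworkCall({
  method: 'GET',
  url: '**/api/users',
  fulfillResponse: {
    status: 200,
    headers: { 'content-type': 'application/json' },
    body: { users: [{ id: 'u-1', name: 'Test User' }] }, // object - stringified for you
  },
});

await page.goto('/dashboard');

const { responseJson } = await usersCall;
expect(responseJson.users).toHaveLength(1);
```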
|
||||
|
||||
### Return Value
|
||||
|
||||
Returns a `Promise<NetworkCallResult>` with:
|
||||
|
||||
| Property | Type | Description |
|
||||
| -------------- | ---------- | --------------------------------------- |
|
||||
| `request` | `Request` | The intercepted request |
|
||||
| `response` | `Response` | The response (null if mocked) |
|
||||
| `responseJson` | `any` | Parsed JSON response (if available) |
|
||||
| `status` | `number` | HTTP status code |
|
||||
| `requestJson` | `any` | Parsed JSON request body (if available) |
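A destructuring sketch of the result fields (the endpoint, trigger, and payload field names are illustrative):

```typescript
// Hedged sketch - pulls apart NetworkCallResult after a POST is observed
const createUserCall = interceptNetworkCall({ method: 'POST', url: '**/api/users' });

await page.getByTestId('create-user').click(); // illustrative trigger

const { status, requestJson, responseJson } = await createUserCall;
expect(status).toBe(201);
expect(requestJson.name).toBeDefined();  // what the app sent
expect(responseJson.id).toBeDefined();   // what the server returned
```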
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
@@ -238,7 +388,7 @@ test('multiple intercepts', async ({ page, interceptNetworkCall }) => {
|
||||
| `const status = resp.status()` | `const { status } = await call` |
|
||||
| Complex filter predicates | Simple glob patterns |
|
||||
|
||||
**Reduction:** ~5-7 lines → ~2-3 lines per interception
|
||||
**Reduction:** ~5-7 lines -> ~2-3 lines per interception
|
||||
|
||||
## Related Fragments
|
||||
|
||||
@@ -248,14 +398,14 @@ test('multiple intercepts', async ({ page, interceptNetworkCall }) => {
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Intercepting after navigation:**
|
||||
**DON'T intercept after navigation:**
|
||||
|
||||
```typescript
|
||||
await page.goto('/dashboard'); // Navigation starts
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' }); // Too late!
|
||||
```
|
||||
|
||||
**✅ Intercept before navigate:**
|
||||
**DO intercept before navigate:**
|
||||
|
||||
```typescript
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' }); // First
|
||||
@@ -263,7 +413,7 @@ await page.goto('/dashboard'); // Then navigate
|
||||
const { responseJson } = await usersCall; // Then await
|
||||
```
|
||||
|
||||
**❌ Ignoring the returned Promise:**
|
||||
**DON'T ignore the returned Promise:**
|
||||
|
||||
```typescript
|
||||
interceptNetworkCall({ url: '**/api/users' }); // Not awaited!
|
||||
@@ -271,7 +421,7 @@ await page.goto('/dashboard');
|
||||
// No deterministic wait - race condition
|
||||
```
|
||||
|
||||
**✅ Always await the intercept:**
|
||||
**DO always await the intercept:**
|
||||
|
||||
```typescript
|
||||
const usersCall = interceptNetworkCall({ url: '**/api/users' });
|
||||
|
||||
@@ -21,6 +21,20 @@ The `log` utility provides:
|
||||
- **Multiple levels**: info, step, success, warning, error, debug
|
||||
- **Optional console**: Can disable console output but keep report logs
|
||||
|
||||
## Quick Start
|
||||
|
||||
```typescript
|
||||
import { log } from '@seontechnologies/playwright-utils';
|
||||
|
||||
// Basic logging
|
||||
await log.info('Starting test');
|
||||
await log.step('Test step shown in Playwright UI');
|
||||
await log.success('Operation completed');
|
||||
await log.warning('Something to note');
|
||||
await log.error('Something went wrong');
|
||||
await log.debug('Debug information');
|
||||
```
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Logging Levels
|
||||
@@ -143,41 +157,105 @@ test('organized with steps', async ({ page, apiRequest }) => {
|
||||
- Steps visible in Playwright trace viewer
|
||||
- Better debugging when tests fail
|
||||
|
||||
### Example 4: Conditional Logging
|
||||
### Example 4: Test Step Decorators
|
||||
|
||||
**Context**: Log different messages based on environment or test conditions.
|
||||
**Context**: Create collapsible test steps in Playwright UI using decorators.
|
||||
|
||||
**Page Object Methods with @methodTestStep:**
|
||||
|
||||
```typescript
|
||||
import type { Page } from '@playwright/test';
import { log, methodTestStep } from '@seontechnologies/playwright-utils';
|
||||
|
||||
class TodoPage {
|
||||
constructor(private page: Page) {
|
||||
this.name = 'TodoPage';
|
||||
}
|
||||
|
||||
readonly name: string;
|
||||
|
||||
@methodTestStep('Add todo item')
|
||||
async addTodo(text: string) {
|
||||
await log.info(`Adding todo: ${text}`);
|
||||
const newTodo = this.page.getByPlaceholder('What needs to be done?');
|
||||
await newTodo.fill(text);
|
||||
await newTodo.press('Enter');
|
||||
await log.step('step within a decorator');
|
||||
await log.success(`Added todo: ${text}`);
|
||||
}
|
||||
|
||||
@methodTestStep('Get all todos')
|
||||
async getTodos() {
|
||||
await log.info('Getting all todos');
|
||||
return this.page.getByTestId('todo-title');
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Function Helpers with functionTestStep:**
|
||||
|
||||
```typescript
|
||||
import { functionTestStep } from '@seontechnologies/playwright-utils';
|
||||
|
||||
// Define todo items for the test
|
||||
const TODO_ITEMS = ['buy groceries', 'pay bills', 'schedule meeting'];
|
||||
|
||||
const createDefaultTodos = functionTestStep('Create default todos', async (page: Page) => {
|
||||
await log.info('Creating default todos');
|
||||
await log.step('step within a functionWrapper');
|
||||
const todoPage = new TodoPage(page);
|
||||
|
||||
for (const item of TODO_ITEMS) {
|
||||
await todoPage.addTodo(item);
|
||||
}
|
||||
|
||||
await log.success('Created all default todos');
|
||||
});
|
||||
|
||||
const checkNumberOfTodosInLocalStorage = functionTestStep(
|
||||
'Check total todos count fn-step',
|
||||
async (page: Page, expected: number) => {
|
||||
await log.info(`Verifying todo count: ${expected}`);
|
||||
const result = await page.waitForFunction(
|
||||
(e) => JSON.parse(localStorage['react-todos']).length === e,
|
||||
expected
|
||||
);
|
||||
await log.success(`Verified todo count: ${expected}`);
|
||||
return result;
|
||||
}
|
||||
);
|
||||
```
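A usage sketch tying the decorated page object and the wrapped helpers together; the target app URL and the final assertion are illustrative:

```typescript
// Hedged sketch - each call below shows up as a collapsible step in the Playwright UI
test('manages todos with step-decorated helpers', async ({ page }) => {
  await page.goto('https://demo.playwright.dev/todomvc'); // illustrative app

  await createDefaultTodos(page);
  await checkNumberOfTodosInLocalStorage(page, TODO_ITEMS.length);

  const todoPage = new TodoPage(page);
  await todoPage.addTodo('review pull request');
  await expect(await todoPage.getTodos()).toHaveCount(TODO_ITEMS.length + 1);
});
```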
|
||||
|
||||
### Example 5: File Logging
|
||||
|
||||
**Context**: Enable file logging for persistent logs.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('conditional logging', async ({ page }) => {
|
||||
const isCI = process.env.CI === 'true';
|
||||
// playwright/support/fixtures.ts
|
||||
import { test as base } from '@playwright/test';
|
||||
import { log, captureTestContext } from '@seontechnologies/playwright-utils';
|
||||
|
||||
if (isCI) {
|
||||
await log.info('Running in CI environment');
|
||||
} else {
|
||||
await log.debug('Running locally');
|
||||
}
|
||||
// Configure file logging globally
|
||||
log.configure({
|
||||
fileLogging: {
|
||||
enabled: true,
|
||||
outputDir: 'playwright-logs/organized-logs',
|
||||
forceConsolidated: false, // One file per test
|
||||
},
|
||||
});
|
||||
|
||||
const isKafkaWorking = await checkKafkaHealth();
|
||||
|
||||
if (!isKafkaWorking) {
|
||||
await log.warning('Kafka unavailable - skipping event checks');
|
||||
} else {
|
||||
await log.step('Verifying Kafka events');
|
||||
// ... event verification
|
||||
}
|
||||
// Extend base test with file logging context capture
|
||||
export const test = base.extend({
|
||||
// Auto-capture test context for file logging
|
||||
autoTestContext: [async ({}, use, testInfo) => {
|
||||
captureTestContext(testInfo);
|
||||
await use(undefined);
|
||||
}, { auto: true }],
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- Log based on environment
|
||||
- Skip logging with conditionals
|
||||
- Use appropriate log levels
|
||||
- Debug info for local, minimal for CI
|
||||
|
||||
### Example 5: Integration with Auth and API
|
||||
### Example 6: Integration with Auth and API
|
||||
|
||||
**Context**: Log authenticated API requests with tokens (safely).
|
||||
|
||||
@@ -221,16 +299,73 @@ test('should log auth flow', async ({ authToken, apiRequest }) => {
|
||||
- Combine with auth and API utilities
|
||||
- Log at appropriate detail level
|
||||
|
||||
## Configuration
|
||||
|
||||
**Defaults:** console logging enabled, file logging disabled.
|
||||
|
||||
```typescript
|
||||
// Enable file logging in config
|
||||
log.configure({
|
||||
console: true, // default
|
||||
fileLogging: {
|
||||
enabled: true,
|
||||
outputDir: 'playwright-logs',
|
||||
forceConsolidated: false, // One file per test
|
||||
},
|
||||
});
|
||||
|
||||
// Per-test override
|
||||
await log.info('Message', {
|
||||
console: { enabled: false },
|
||||
fileLogging: { enabled: true },
|
||||
});
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
```bash
|
||||
# Disable all logging
|
||||
SILENT=true
|
||||
|
||||
# Disable only file logging
|
||||
DISABLE_FILE_LOGS=true
|
||||
|
||||
# Disable only console logging
|
||||
DISABLE_CONSOLE_LOGS=true
|
||||
```
|
||||
|
||||
### Level Filtering
|
||||
|
||||
```typescript
|
||||
log.configure({
|
||||
level: 'warning', // Only warning and error levels will show
|
||||
});
|
||||
|
||||
// Available levels (in priority order):
|
||||
// debug < info < step < success < warning < error
|
||||
```
|
||||
|
||||
### Sync Methods
|
||||
|
||||
For non-test contexts (global setup, utility functions):
|
||||
|
||||
```typescript
|
||||
// Use sync methods when async/await isn't available
|
||||
log.infoSync('Initializing configuration');
|
||||
log.successSync('Environment configured');
|
||||
log.errorSync('Setup failed');
|
||||
```
|
||||
|
||||
## Log Levels Guide
|
||||
|
||||
| Level | When to Use | Shows in Report | Shows in Console |
|
||||
| --------- | ----------------------------------- | -------------------- | ---------------- |
|
||||
| `step` | Test organization, major actions | ✅ Collapsible steps | ✅ Yes |
|
||||
| `info` | General information, state changes | ✅ Yes | ✅ Yes |
|
||||
| `success` | Successful operations | ✅ Yes | ✅ Yes |
|
||||
| `warning` | Non-critical issues, skipped checks | ✅ Yes | ✅ Yes |
|
||||
| `error` | Failures, exceptions | ✅ Yes | ✅ Configurable |
|
||||
| `debug` | Detailed data, objects | ✅ Yes (attached) | ✅ Configurable |
|
||||
| Level | When to Use | Shows in Report | Shows in Console |
|
||||
| --------- | ----------------------------------- | ----------------- | ---------------- |
|
||||
| `step` | Test organization, major actions | Collapsible steps | Yes |
|
||||
| `info` | General information, state changes | Yes | Yes |
|
||||
| `success` | Successful operations | Yes | Yes |
|
||||
| `warning` | Non-critical issues, skipped checks | Yes | Yes |
|
||||
| `error` | Failures, exceptions | Yes | Configurable |
|
||||
| `debug` | Detailed data, objects | Yes (attached) | Configurable |
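For instance, a checkout test might pick levels like this (the messages and payload values are illustrative):

```typescript
// Hedged sketch - matching log levels to the guide above
await log.step('Submit checkout form');                      // major action, collapsible step
await log.info('Cart contains 3 items');                      // state change
await log.success('Order confirmation rendered');             // successful operation
await log.warning('Recommendation service responded slowly'); // non-critical issue
await log.debug({ orderId: 'ord-123', total: 42.5 });         // detailed data, attached
```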
|
||||
|
||||
## Comparison with console.log
|
||||
|
||||
@@ -251,34 +386,34 @@ test('should log auth flow', async ({ authToken, apiRequest }) => {
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**❌ Logging objects in steps:**
|
||||
**DON'T log objects in steps:**
|
||||
|
||||
```typescript
|
||||
await log.step({ user: 'test', action: 'create' }); // Shows empty in UI
|
||||
```
|
||||
|
||||
**✅ Use strings for steps, objects for debug:**
|
||||
**DO use strings for steps, objects for debug:**
|
||||
|
||||
```typescript
|
||||
await log.step('Creating user: test'); // Readable in UI
|
||||
await log.debug({ user: 'test', action: 'create' }); // Detailed data
|
||||
```
|
||||
|
||||
**❌ Logging sensitive data:**
|
||||
**DON'T log sensitive data:**
|
||||
|
||||
```typescript
|
||||
await log.info(`Password: ${password}`); // Security risk!
|
||||
await log.info(`Token: ${authToken}`); // Full token exposed!
|
||||
```
|
||||
|
||||
**✅ Use previews or omit sensitive data:**
|
||||
**DO use previews or omit sensitive data:**
|
||||
|
||||
```typescript
|
||||
await log.info('User authenticated successfully'); // No sensitive data
|
||||
await log.debug({ tokenPreview: token.slice(0, 6) + '...' });
|
||||
```
|
||||
|
||||
**❌ Excessive logging in loops:**
|
||||
**DON'T log excessively in loops:**
|
||||
|
||||
```typescript
|
||||
for (const item of items) {
|
||||
@@ -286,7 +421,7 @@ for (const item of items) {
|
||||
}
|
||||
```
|
||||
|
||||
**✅ Log summary or use debug level:**
|
||||
**DO log summary or use debug level:**
|
||||
|
||||
```typescript
|
||||
await log.step(`Processing ${items.length} items`);
|
||||
|
||||
@@ -21,6 +21,19 @@ The `network-error-monitor` provides:
|
||||
- **Smart opt-out**: Disable for validation tests expecting errors
|
||||
- **Deduplication**: Group repeated errors by pattern
|
||||
- **Domino effect prevention**: Limit test failures per error pattern
|
||||
- **Respects test status**: Won't suppress actual test failures
|
||||
|
||||
## Quick Start
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
// That's it! Network monitoring is automatically enabled
|
||||
test('my test', async ({ page }) => {
|
||||
await page.goto('/dashboard');
|
||||
// If any HTTP 4xx/5xx errors occur, the test will fail
|
||||
});
|
||||
```
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
@@ -38,8 +51,8 @@ test('should load dashboard', async ({ page }) => {
|
||||
await page.goto('/dashboard');
|
||||
await expect(page.locator('h1')).toContainText('Dashboard');
|
||||
|
||||
// ✅ Passes if no HTTP errors
|
||||
// ❌ Fails if any 4xx/5xx errors detected with clear message:
|
||||
// Passes if no HTTP errors
|
||||
// Fails if any 4xx/5xx errors detected with clear message:
|
||||
// "Network errors detected: 2 request(s) failed"
|
||||
// Failed requests:
|
||||
// GET 500 https://api.example.com/users
|
||||
@@ -64,13 +77,17 @@ test('should load dashboard', async ({ page }) => {
|
||||
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
// Opt-out with annotation
|
||||
test('should show error on invalid input', { annotation: [{ type: 'skipNetworkMonitoring' }] }, async ({ page }) => {
|
||||
await page.goto('/form');
|
||||
await page.click('#submit'); // Triggers 400 error
|
||||
test(
|
||||
'should show error on invalid input',
|
||||
{ annotation: [{ type: 'skipNetworkMonitoring' }] },
|
||||
async ({ page }) => {
|
||||
await page.goto('/form');
|
||||
await page.click('#submit'); // Triggers 400 error
|
||||
|
||||
// Monitoring disabled - test won't fail on 400
|
||||
await expect(page.getByText('Invalid input')).toBeVisible();
|
||||
});
|
||||
// Monitoring disabled - test won't fail on 400
|
||||
await expect(page.getByText('Invalid input')).toBeVisible();
|
||||
}
|
||||
);
|
||||
|
||||
// Or opt-out entire describe block
|
||||
test.describe('error handling', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
|
||||
@@ -91,7 +108,139 @@ test.describe('error handling', { annotation: [{ type: 'skipNetworkMonitoring' }
|
||||
- Monitoring still active for other tests
|
||||
- Perfect for intentional error scenarios
|
||||
|
||||
### Example 3: Integration with Merged Fixtures
|
||||
### Example 3: Respects Test Status
|
||||
|
||||
**Context**: The monitor respects final test statuses to avoid suppressing important test outcomes.
|
||||
|
||||
**Behavior by test status:**
|
||||
|
||||
- **`failed`**: Network errors logged as additional context, not thrown
|
||||
- **`timedOut`**: Network errors logged as additional context
|
||||
- **`skipped`**: Network errors logged, skip status preserved
|
||||
- **`interrupted`**: Network errors logged, interrupted status preserved
|
||||
- **`passed`**: Network errors throw and fail the test
|
||||
|
||||
**Example with test.skip():**
|
||||
|
||||
```typescript
|
||||
test('feature gated test', async ({ page }) => {
|
||||
const featureEnabled = await checkFeatureFlag();
|
||||
test.skip(!featureEnabled, 'Feature not enabled');
|
||||
// If skipped, network errors won't turn this into a failure
|
||||
await page.goto('/new-feature');
|
||||
});
|
||||
```
|
||||
|
||||
### Example 4: Excluding Legitimate Errors
|
||||
|
||||
**Context**: Some endpoints legitimately return 4xx/5xx responses.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test as base } from '@playwright/test';
|
||||
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
export const test = base.extend(
|
||||
createNetworkErrorMonitorFixture({
|
||||
excludePatterns: [
|
||||
/email-cluster\/ml-app\/has-active-run/, // ML service returns 404 when no active run
|
||||
/idv\/session-templates\/list/, // IDV service returns 404 when not configured
|
||||
/sentry\.io\/api/, // External Sentry errors should not fail tests
|
||||
],
|
||||
})
|
||||
);
|
||||
```
|
||||
|
||||
**For merged fixtures:**
|
||||
|
||||
```typescript
|
||||
import { test as base, mergeTests } from '@playwright/test';
|
||||
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
const networkErrorMonitor = base.extend(
|
||||
createNetworkErrorMonitorFixture({
|
||||
excludePatterns: [/analytics\.google\.com/, /cdn\.example\.com/],
|
||||
})
|
||||
);
|
||||
|
||||
export const test = mergeTests(authFixture, networkErrorMonitor);
|
||||
```
|
||||
|
||||
### Example 5: Preventing Domino Effect
|
||||
|
||||
**Context**: One failing endpoint shouldn't fail all tests.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
import { test as base } from '@playwright/test';
|
||||
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
|
||||
|
||||
const networkErrorMonitor = base.extend(
|
||||
createNetworkErrorMonitorFixture({
|
||||
excludePatterns: [], // Required when using maxTestsPerError
|
||||
maxTestsPerError: 1, // Only the first test fails per error pattern; the rest just log a warning
|
||||
})
|
||||
);
|
||||
```
|
||||
|
||||
**How it works:**
|
||||
|
||||
When `/api/v2/case-management/cases` returns 500:
|
||||
|
||||
- **First test** encountering this error: **FAILS** with clear error message
|
||||
- **Subsequent tests** encountering same error: **PASSES** but logs warning
|
||||
|
||||
Error patterns are grouped by `method + status + base path`:
|
||||
|
||||
- `GET /api/v2/case-management/cases/123` -> Pattern: `GET:500:/api/v2/case-management`
|
||||
- `GET /api/v2/case-management/quota` -> Pattern: `GET:500:/api/v2/case-management` (same group!)
|
||||
- `POST /api/v2/case-management/cases` -> Pattern: `POST:500:/api/v2/case-management` (different group!)
|
||||
|
||||
**Why include HTTP method?** A GET 404 vs POST 404 might represent different issues:
|
||||
|
||||
- `GET 404 /api/users/123` -> User not found (expected in some tests)
|
||||
- `POST 404 /api/users` -> Endpoint doesn't exist (critical error)
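The grouping can be pictured with a small sketch; the real key derivation is internal to the utility, so the base-path depth used here is an assumption:

```typescript
// Hedged sketch - illustrates method + status + base path grouping, not the actual implementation
const errorPatternKey = (method: string, status: number, url: string): string => {
  const basePath = new URL(url).pathname.split('/').slice(0, 4).join('/'); // assumed depth
  return `${method}:${status}:${basePath}`;
};

// errorPatternKey('GET', 500, 'https://api.example.com/api/v2/case-management/cases/123')
// -> 'GET:500:/api/v2/case-management'
```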
|
||||
|
||||
**Output for subsequent tests:**
|
||||
|
||||
```
|
||||
Warning: Network errors detected but not failing test (maxTestsPerError limit reached):
|
||||
GET 500 https://api.example.com/api/v2/case-management/cases
|
||||
```
|
||||
|
||||
**Recommended configuration:**
|
||||
|
||||
```typescript
|
||||
createNetworkErrorMonitorFixture({
|
||||
excludePatterns: [...], // Required - known broken endpoints (can be empty [])
|
||||
maxTestsPerError: 1 // Stop domino effect (requires excludePatterns)
|
||||
})
|
||||
```
|
||||
|
||||
**Understanding worker-level state:**
|
||||
|
||||
Error pattern counts are stored in worker-level global state:
|
||||
|
||||
```typescript
|
||||
// test-file-1.spec.ts (runs in Worker 1)
|
||||
test('test A', () => {
|
||||
/* triggers GET:500:/api/v2/cases */
|
||||
}); // FAILS
|
||||
|
||||
// test-file-2.spec.ts (runs later in Worker 1)
|
||||
test('test B', () => {
|
||||
/* triggers GET:500:/api/v2/cases */
|
||||
}); // PASSES (limit reached)
|
||||
|
||||
// test-file-3.spec.ts (runs in Worker 2 - different worker)
|
||||
test('test C', () => {
|
||||
/* triggers GET:500:/api/v2/cases */
|
||||
}); // FAILS (fresh worker)
|
||||
```
|
||||
|
||||
### Example 6: Integration with Merged Fixtures
|
||||
|
||||
**Context**: Combine network-error-monitor with other utilities.
|
||||
|
||||
**Implementation**:

```typescript
import { test as networkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

export const test = mergeTests(
  authFixture,
  networkErrorMonitorFixture
  // Add other fixtures
);
```

**Key Points**:

- Monitoring active automatically
- No extra setup needed
|
||||
|
||||
### Example 7: Artifact Structure
|
||||
|
||||
**Context**: Debugging failed tests with network error artifacts.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
When test fails due to network errors, artifact attached:
|
||||
|
||||
```json
// test-results/my-test/network-errors.json
{
  "errors": [
    {
      "url": "https://api.example.com/users",
      "method": "GET",
      "status": 500,
      "statusText": "Internal Server Error",
      "timestamp": "2024-08-13T10:30:45.123Z"
    },
    {
      "url": "https://api.example.com/metrics",
      "method": "POST",
      "status": 503,
      "statusText": "Service Unavailable",
      "timestamp": "2024-08-13T10:30:46.456Z"
    }
  ],
  "summary": {
    "totalErrors": 2,
    "uniquePatterns": 2
  }
}
```
|
||||
|
||||
**Key Points**:

- JSON artifact per failed test
- Full error details (URL, method, status, timestamp)
- Summary statistics
- Easy debugging with structured data

## Implementation Details

### How It Works

The fixture runs these steps for every test (a minimal sketch follows the list):

1. **Fixture Extension**: Uses Playwright's `base.extend()` with `auto: true`
2. **Response Listener**: Attaches `page.on('response')` listener at test start
3. **Multi-Page Monitoring**: Automatically monitors popups and new tabs via `context.on('page')`
4. **Error Collection**: Captures 4xx/5xx responses, checking exclusion patterns
5. **Try/Finally**: Ensures error processing runs even if test fails early
6. **Status Check**: Only throws errors if test hasn't already reached final status
7. **Artifact**: Attaches JSON file to test report for debugging
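
To make this sequence concrete, here is a minimal sketch of the auto-fixture pattern. It is not the library's actual source: it omits popup monitoring, error-pattern grouping, and the opt-out annotation, and `MonitorOptions` is a hypothetical stand-in for the real configuration type.

```typescript
import { test as base } from '@playwright/test';

type MonitorOptions = { excludePatterns?: RegExp[] };
type CapturedError = { url: string; method: string; status: number };

export const test = base.extend<{ _networkErrorMonitor: void }>({
  _networkErrorMonitor: [
    async ({ page }, use, testInfo) => {
      const options: MonitorOptions = { excludePatterns: [] };
      const errors: CapturedError[] = [];

      // Collect 4xx/5xx responses, skipping excluded URLs
      page.on('response', (response) => {
        const url = response.url();
        if (options.excludePatterns?.some((pattern) => pattern.test(url))) return;
        if (response.status() >= 400) {
          errors.push({ url, method: response.request().method(), status: response.status() });
        }
      });

      try {
        await use();
      } finally {
        if (errors.length > 0) {
          // Attach a JSON artifact for debugging
          await testInfo.attach('network-errors.json', {
            body: JSON.stringify({ errors }, null, 2),
            contentType: 'application/json',
          });
          // Only fail the test if it hasn't already failed or been skipped
          if (testInfo.status === 'passed') {
            throw new Error(
              `Network errors detected:\n${errors.map((e) => `${e.method} ${e.status} ${e.url}`).join('\n')}`,
            );
          }
        }
      }
    },
    { auto: true },
  ],
});
```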

## Comparison with Manual Error Checks

| Manual Approach                                         | network-error-monitor      |
| ------------------------------------------------------- | -------------------------- |
| `page.on('response', resp => { if (!resp.ok()) ... })`  | Auto-enabled, zero setup   |
| Check each response manually                             | Automatic for all requests |
| Custom error tracking logic                              | Built-in deduplication     |
| No structured artifacts                                  | JSON artifacts attached    |
| Easy to forget                                           | Never miss a backend error |
|
||||
### Performance
|
||||
|
||||
The monitor has minimal performance impact:
|
||||
|
||||
- Event listener overhead: ~0.1ms per response
|
||||
- Memory: ~200 bytes per unique error
|
||||
- No network delay (observes responses, doesn't intercept them)
|
||||
|
||||
## Comparison with Alternatives
|
||||
|
||||
| Approach | Network Error Monitor | Manual afterEach |
|
||||
| --------------------------- | --------------------- | --------------------- |
|
||||
| **Setup Required** | Zero (auto-enabled) | Every test file |
|
||||
| **Catches Silent Failures** | Yes | Yes (if configured) |
|
||||
| **Structured Artifacts** | JSON attached | Custom impl |
|
||||
| **Test Failure Safety** | Try/finally | afterEach may not run |
|
||||
| **Opt-Out Mechanism** | Annotation | Custom logic |
|
||||
| **Status Aware** | Respects skip/failed | No |
|
||||
|
||||
## When to Use

**Auto-enabled for:**

- All E2E tests
- Integration tests
- Any test hitting real APIs

**Opt-out for** (example below):

- Validation tests (expecting 4xx)
- Error handling tests (expecting 5xx)
- Offline tests (network-recorder playback)
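
Opting out reuses the `skipNetworkMonitoring` annotation shown in the Anti-Patterns section below; a scoped sketch (test names are illustrative):

```typescript
// Opt out only where 4xx/5xx responses are the expected behaviour
test.describe('validation errors', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
  test('rejects an invalid payload with 400', async ({ page }) => {
    // assertions on the expected error response go here
  });
});
```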
|
||||
|
||||
## Troubleshooting

### Test fails with network errors but I don't see them in my app
|
||||
|
||||
The errors might be happening during page load or in background polling. Check the `network-errors.json` artifact in your test report for full details including timestamps.
|
||||
|
||||
### False positives from external services
|
||||
|
||||
Configure exclusion patterns as shown in the "Excluding Legitimate Errors" section above.
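
For quick reference, a typical exclusion looks like this (the patterns are illustrative):

```typescript
createNetworkErrorMonitorFixture({
  // Ignore third-party endpoints you don't control
  excludePatterns: [/analytics\.google\.com/, /cdn\.example\.com/],
});
```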
|
||||
|
||||
### Network errors not being caught
|
||||
|
||||
Ensure you're importing the test from the correct fixture:
|
||||
|
||||
```typescript
// Correct
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// Wrong - this won't have network monitoring
import { test } from '@playwright/test';
```

If you compose fixtures with `mergeTests`, make sure `networkErrorMonitorFixture` is part of your merged test (see Example 6 above).
|
||||
|
||||
## Related Fragments
|
||||
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**DON'T opt out of monitoring globally:**
|
||||
|
||||
```typescript
|
||||
// Every test skips monitoring
|
||||
test.use({ annotation: [{ type: 'skipNetworkMonitoring' }] });
|
||||
```
|
||||
|
||||
**DO opt-out only for specific error tests:**
|
||||
|
||||
```typescript
|
||||
test.describe('error scenarios', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
|
||||
  // ...
|
||||
});
|
||||
```
|
||||
|
||||
**DON'T ignore network error artifacts:**
|
||||
|
||||
```typescript
|
||||
// Test fails, artifact shows 500 errors
|
||||
// Developer: "Works on my machine" ¯\_(ツ)_/¯
|
||||
```
|
||||
|
||||
**DO check artifacts for root cause:**
|
||||
|
||||
```typescript
|
||||
// Read network-errors.json artifact
|
||||
// Identify failing endpoint: GET /api/users -> 500
|
||||
// Fix backend issue before merging
|
||||
```
|
||||
|
||||
HAR-based recording/playback provides:
|
||||
- **Stateful mocking**: CRUD operations work naturally (not just read-only)
|
||||
- **Environment flexibility**: Map URLs for any environment
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Record Network Traffic
|
||||
|
||||
```typescript
|
||||
// Set mode to 'record' to capture network traffic
|
||||
process.env.PW_NET_MODE = 'record';
|
||||
|
||||
test('should add, edit and delete a movie', async ({ page, context, networkRecorder }) => {
|
||||
// Setup network recorder - it will record all network traffic
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
// Your normal test code
|
||||
await page.goto('/');
|
||||
await page.fill('#movie-name', 'Inception');
|
||||
await page.click('#add-movie');
|
||||
|
||||
// Network traffic is automatically saved to HAR file
|
||||
});
|
||||
```
|
||||
|
||||
### 2. Playback Network Traffic
|
||||
|
||||
```typescript
|
||||
// Set mode to 'playback' to use recorded traffic
|
||||
process.env.PW_NET_MODE = 'playback';
|
||||
|
||||
test('should add, edit and delete a movie', async ({ page, context, networkRecorder }) => {
|
||||
// Setup network recorder - it will replay from HAR file
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
// Same test code runs without hitting real backend!
|
||||
await page.goto('/');
|
||||
await page.fill('#movie-name', 'Inception');
|
||||
await page.click('#add-movie');
|
||||
});
|
||||
```
|
||||
|
||||
That's it! Your tests now run completely offline using recorded network traffic.
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Record and Playback
|
||||
- Combine with `interceptNetworkCall` for deterministic waits
- First run records, subsequent runs replay

### Example 3: Common Patterns
|
||||
|
||||
**Recording Only API Calls**:
|
||||
|
||||
```typescript
|
||||
await networkRecorder.setup(context, {
|
||||
recording: {
|
||||
urlFilter: /\/api\// // Only record API calls, ignore static assets
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Playback with Fallback**:
|
||||
|
||||
```typescript
|
||||
await networkRecorder.setup(context, {
|
||||
playback: {
|
||||
fallback: true // Fall back to live requests if HAR entry missing
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Custom HAR File Location**:
|
||||
|
||||
```typescript
|
||||
await networkRecorder.setup(context, {
|
||||
harFile: {
|
||||
harDir: 'recordings/api-calls',
|
||||
baseName: 'user-journey',
|
||||
organizeByTestFile: false // Optional: flatten directory structure
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Directory Organization:**
|
||||
|
||||
- `organizeByTestFile: true` (default): `har-files/test-file-name/baseName-test-title.har`
|
||||
- `organizeByTestFile: false`: `har-files/baseName-test-title.har`
|
||||
|
||||
### Example 4: Response Content Storage - Embed vs Attach
|
||||
|
||||
**Context**: Choose how response content is stored in HAR files.
|
||||
|
||||
**`embed` (Default - Recommended):**
|
||||
|
||||
```typescript
|
||||
await networkRecorder.setup(context, {
|
||||
recording: {
|
||||
content: 'embed' // Store content inline (default)
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Pros:**
|
||||
|
||||
- Single self-contained file - Easy to share, version control
|
||||
- Better for small-medium responses (API JSON, HTML pages)
|
||||
- HAR specification compliant
|
||||
|
||||
**Cons:**
|
||||
|
||||
- Larger HAR files
|
||||
- Not ideal for large binary content (images, videos)
|
||||
|
||||
**`attach` (Alternative):**
|
||||
|
||||
```typescript
|
||||
await networkRecorder.setup(context, {
|
||||
recording: {
|
||||
content: 'attach' // Store content separately
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Pros:**
|
||||
|
||||
- Smaller HAR files
|
||||
- Better for large responses (images, videos, documents)
|
||||
|
||||
**Cons:**
|
||||
|
||||
- Multiple files to manage
|
||||
- Harder to share
|
||||
|
||||
**When to Use Each:**
|
||||
|
||||
| Use `embed` (default) when | Use `attach` when |
|
||||
|---------------------------|-------------------|
|
||||
| Recording API responses (JSON, XML) | Recording large images, videos |
|
||||
| Small to medium HTML pages | HAR file size >50MB |
|
||||
| You want a single, portable file | Maximum disk efficiency needed |
|
||||
| Sharing HAR files with team | Working with ZIP archive output |
|
||||
|
||||
### Example 5: Cross-Environment Compatibility (URL Mapping)
|
||||
|
||||
**Context**: Record in dev environment, play back in CI with different base URLs.
|
||||
|
||||
**The Problem**: HAR files contain URLs for the recording environment (e.g., `dev.example.com`). Playing back on a different environment fails.

**Simple Hostname Mapping:**

```typescript
await networkRecorder.setup(context, {
  playback: {
    urlMapping: {
      hostMapping: {
        'preview.example.com': 'dev.example.com',
        'staging.example.com': 'dev.example.com',
        'localhost:3000': 'dev.example.com'
      }
    }
  }
});
```

**Pattern-Based Mapping (Recommended):**

```typescript
await networkRecorder.setup(context, {
  playback: {
    urlMapping: {
      patterns: [
        // Map any preview-XXXX subdomain to dev
        { match: /preview-\d+\.example\.com/, replace: 'dev.example.com' }
      ]
    }
  }
});
```

**Custom Function:**

```typescript
await networkRecorder.setup(context, {
  playback: {
    urlMapping: {
      mapUrl: (url) => url.replace('staging.example.com', 'dev.example.com')
    }
  }
});
```

**Complex Multi-Environment Example:**

```typescript
await networkRecorder.setup(context, {
  playback: {
    urlMapping: {
      hostMapping: {
        'localhost:3000': 'admin.seondev.space',
        'admin-staging.seon.io': 'admin.seondev.space',
        'admin.seon.io': 'admin.seondev.space',
      },
      patterns: [
        { match: /admin-\d+\.seondev\.space/, replace: 'admin.seondev.space' },
        { match: /admin-staging-pr-\w+-\d\.seon\.io/, replace: 'admin.seondev.space' }
      ]
    }
  }
});
```

**Benefits:**

- Record once on dev, all environments map back to recordings
- CORS headers automatically updated based on request origin
- Debug with: `LOG_LEVEL=debug npm run test`
|
||||
|
||||
## Why Use This Instead of Native Playwright?
|
||||
|
||||
|
||||
| ~80 lines setup boilerplate | ~5 lines total |
|
||||
| Manual HAR file management | Automatic file organization |
|
||||
| Complex setup/teardown | Automatic cleanup via fixtures |
|
||||
| **Read-only tests** | **Full CRUD support** |
|
||||
| **Read-only tests only** | **Full CRUD support** |
|
||||
| **Stateless** | **Stateful mocking** |
|
||||
| Manual URL mapping | Automatic environment mapping |
|
||||
|
||||
|
||||
|
||||
Native Playwright HAR playback is stateless - a POST create followed by GET list won't show the created item. This utility intelligently tracks CRUD operations in memory to reflect state changes, making offline tests behave like real APIs.
|
||||
|
||||
## How Stateful CRUD Detection Works
|
||||
|
||||
When in playback mode, the Network Recorder automatically analyzes your HAR file to detect CRUD patterns. If it finds:
|
||||
|
||||
- Multiple GET requests to the same resource endpoint (e.g., `/movies`)
|
||||
- Mutation operations (POST, PUT, DELETE) to those resources
|
||||
- Evidence of state changes between identical requests
|
||||
|
||||
It automatically switches from static HAR playback to an intelligent stateful mock that:
|
||||
|
||||
- Maintains state across requests
|
||||
- Auto-generates IDs for new resources
|
||||
- Returns proper 404s for deleted resources
|
||||
- Supports polling scenarios where state changes over time
|
||||
|
||||
**This happens automatically - no configuration needed!**
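
As an illustration of what stateful playback enables, here is a sketch of an offline CRUD test. It assumes a HAR was recorded earlier for this test with `PW_NET_MODE=record`, reuses the selectors from the Quick Start example, and takes `test`/`expect` from your merged fixtures:

```typescript
// Assumes `test` exposes the networkRecorder fixture and `expect` comes from '@playwright/test'
process.env.PW_NET_MODE = 'playback';

test('created movie shows up in the list while offline', async ({ page, context, networkRecorder }) => {
  await networkRecorder.setup(context);

  await page.goto('/');

  // POST /movies is replayed as a mutation and tracked in memory
  await page.fill('#movie-name', 'Inception');
  await page.click('#add-movie');

  // The follow-up GET reflects the newly created item even with no backend running
  await expect(page.getByText('Inception')).toBeVisible();
});
```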
|
||||
|
||||
## API Reference
|
||||
|
||||
### NetworkRecorder Methods
|
||||
|
||||
| Method | Return Type | Description |
|
||||
| -------------------- | ------------------------ | ----------------------------------------------------- |
|
||||
| `setup(context)` | `Promise<void>` | Sets up recording/playback on browser context |
|
||||
| `cleanup()` | `Promise<void>` | Flushes data to disk and cleans up memory |
|
||||
| `getContext()` | `NetworkRecorderContext` | Gets current recorder context information |
|
||||
| `getStatusMessage()` | `string` | Gets human-readable status message |
|
||||
| `getHarStats()` | `Promise<HarFileStats>` | Gets HAR file statistics and metadata |
|
||||
|
||||
### Understanding `cleanup()`
|
||||
|
||||
The `cleanup()` method performs memory and resource cleanup - **it does NOT delete HAR files**:
|
||||
|
||||
**What it does:**
|
||||
|
||||
- Flushes recorded data to disk (writes HAR file in recording mode)
|
||||
- Releases file locks
|
||||
- Clears in-memory data
|
||||
- Resets internal state
|
||||
|
||||
**What it does NOT do:**
|
||||
|
||||
- Delete HAR files from disk
|
||||
- Remove recorded network traffic
|
||||
- Clear browser context or cookies
|
||||
|
||||
### Configuration Options
|
||||
|
||||
```typescript
|
||||
type NetworkRecorderConfig = {
|
||||
harFile?: {
|
||||
harDir?: string // Directory for HAR files (default: 'har-files')
|
||||
baseName?: string // Base name for HAR files (default: 'network-traffic')
|
||||
organizeByTestFile?: boolean // Organize by test file (default: true)
|
||||
}
|
||||
|
||||
recording?: {
|
||||
content?: 'embed' | 'attach' // Response content handling (default: 'embed')
|
||||
urlFilter?: string | RegExp // URL filter for recording
|
||||
update?: boolean // Update existing HAR files (default: false)
|
||||
}
|
||||
|
||||
playback?: {
|
||||
fallback?: boolean // Fall back to live requests (default: false)
|
||||
urlFilter?: string | RegExp // URL filter for playback
|
||||
updateMode?: boolean // Update mode during playback (default: false)
|
||||
}
|
||||
|
||||
forceMode?: 'record' | 'playback' | 'disabled'
|
||||
}
|
||||
```
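
These options can be combined; for example (the values are illustrative):

```typescript
await networkRecorder.setup(context, {
  harFile: { harDir: 'har-files', baseName: 'checkout-flow' },
  recording: { content: 'embed', urlFilter: /\/api\// },
  playback: { fallback: true },
});
```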
|
||||
|
||||
## Environment Configuration
|
||||
|
||||
Control the recording mode using the `PW_NET_MODE` environment variable:
|
||||
|
||||
```bash
|
||||
# Record mode - captures network traffic to HAR files
|
||||
PW_NET_MODE=record npm run test:pw
|
||||
|
||||
# Playback mode - replays network traffic from HAR files
|
||||
PW_NET_MODE=playback npm run test:pw
|
||||
|
||||
# Disabled mode - no network recording/playback
|
||||
PW_NET_MODE=disabled npm run test:pw
|
||||
|
||||
# Default behavior (when PW_NET_MODE is empty/unset) - same as disabled
|
||||
npm run test:pw
|
||||
```
|
||||
|
||||
**Tip**: We recommend setting `process.env.PW_NET_MODE` directly in your test file for better control.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### HAR File Not Found
|
||||
|
||||
If you see "HAR file not found" errors during playback:
|
||||
|
||||
1. Ensure you've recorded the test first with `PW_NET_MODE=record`
|
||||
2. Check the HAR file exists in the expected location (usually `har-files/`)
|
||||
3. Enable fallback mode: `playback: { fallback: true }`
|
||||
|
||||
### Authentication and Network Recording
|
||||
|
||||
The network recorder works seamlessly with authentication:
|
||||
|
||||
```typescript
|
||||
test('Authenticated recording', async ({ page, context, authSession, networkRecorder }) => {
|
||||
// First authenticate
|
||||
await authSession.login('testuser', 'password');
|
||||
|
||||
// Then setup network recording with authenticated context
|
||||
await networkRecorder.setup(context);
|
||||
|
||||
// Test authenticated flows
|
||||
await page.goto('/dashboard');
|
||||
});
|
||||
```
|
||||
|
||||
### Concurrent Test Issues
|
||||
|
||||
The recorder includes built-in file locking for safe parallel execution. Each test gets its own HAR file based on the test name.
|
||||
|
||||
## Integration with Other Utilities
|
||||
|
||||
**With interceptNetworkCall (deterministic waits):**
|
||||
|
||||
```typescript
|
||||
test('use both utilities', async ({ page, context, networkRecorder, interceptNetworkCall }) => {
|
||||
  // ...
});
```
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**DON'T mix record and playback in same test:**
|
||||
|
||||
```typescript
|
||||
process.env.PW_NET_MODE = 'record';
|
||||
// ... later in the same test
|
||||
process.env.PW_NET_MODE = 'playback'; // Don't switch mid-test
|
||||
```
|
||||
|
||||
**DO use one mode per test:**
|
||||
|
||||
```typescript
|
||||
process.env.PW_NET_MODE = 'playback'; // Set once at top
|
||||

test('my test', async ({ page, context, networkRecorder }) => {
  // ...
|
||||
});
|
||||
```
|
||||
|
||||
**DON'T forget to call setup:**
|
||||
|
||||
```typescript
|
||||
test('broken', async ({ page, networkRecorder }) => {
|
||||
  // ... uses page without ever calling networkRecorder.setup(context)
|
||||
});
|
||||
```
|
||||
|
||||
**DO always call setup before navigation:**
|
||||
|
||||
```typescript
|
||||
test('correct', async ({ page, context, networkRecorder }) => {
|
||||
|
||||
  await networkRecorder.setup(context);
  await page.goto('/');
  // ...
});
```
|
||||
|
||||
## Principle
|
||||
|
||||
Use production-ready, fixture-based utilities from `@seontechnologies/playwright-utils` for common Playwright testing patterns. Build test helpers as pure functions first, then wrap in framework-specific fixtures for composability and reuse. **Works equally well for pure API testing (no browser) and UI testing.**
|
||||
|
||||
## Rationale
|
||||
|
||||
|
||||
- **Composable fixtures**: Use `mergeTests` to combine utilities
|
||||
- **TypeScript support**: Full type safety with generic types
|
||||
- **Comprehensive coverage**: API requests, auth, network, logging, file handling, burn-in
|
||||
- **Backend-first mentality**: Most utilities work without a browser - pure API/service testing is a first-class use case
|
||||
|
||||
## Installation
|
||||
|
||||
### Core Testing Utilities

| Utility                    | Purpose                                               | Test Context       |
| -------------------------- | ----------------------------------------------------- | ------------------ |
| **api-request**            | Typed HTTP client with schema validation and retry    | **API/Backend**    |
| **recurse**                | Polling for async operations, background jobs         | **API/Backend**    |
| **auth-session**           | Token persistence, multi-user, service-to-service     | **API/Backend/UI** |
| **log**                    | Playwright report-integrated logging                  | **API/Backend/UI** |
| **file-utils**             | CSV/XLSX/PDF/ZIP reading & validation                 | **API/Backend/UI** |
| **burn-in**                | Smart test selection with git diff                    | **CI/CD**          |
| **network-recorder**       | HAR record/playback for offline testing               | UI only            |
| **intercept-network-call** | Network spy/stub with auto JSON parsing               | UI only            |
| **network-error-monitor**  | Automatic HTTP 4xx/5xx detection                      | UI only            |
|
||||
|
||||
**Note**: 6 of 9 utilities work without a browser. Only 3 are UI-specific (network-recorder, intercept-network-call, network-error-monitor).
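
As a sketch of what browser-less testing looks like, the test below drives only the `apiRequest` fixture. The `api-request/fixtures` import path and the `{ status, body }` response shape are assumptions based on the conventions shown in these fragments:

```typescript
import { expect } from '@playwright/test';
// Assumed fixture entry point, following the package's per-utility convention
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';

test('creates a user through the API only - no page, no browser', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { name: 'Ada' },
  });

  expect(status).toBe(201);
  expect(body.id).toBeDefined();
});
```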
|
||||
|
||||
## Design Patterns
|
||||
|
||||
|
||||
|
||||
|
||||
## Principle
|
||||
|
||||
Use Cypress-style polling with Playwright's `expect.poll` to wait for asynchronous conditions. Provides configurable timeout, interval, logging, and post-polling callbacks with enhanced error categorization. **Ideal for backend testing**: polling API endpoints for job completion, database eventual consistency, message queue processing, and cache propagation.
|
||||
|
||||
## Rationale
|
||||
|
||||
The `recurse` utility provides:
|
||||
- **Post-poll callbacks**: Process results after success
|
||||
- **Type-safe**: Full TypeScript generic support
|
||||
|
||||
## Quick Start
|
||||
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/recurse/fixtures';
|
||||
|
||||
test('wait for job completion', async ({ recurse, apiRequest }) => {
|
||||
const { body } = await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/jobs',
|
||||
body: { type: 'export' },
|
||||
});
|
||||
|
||||
// Poll until job completes
|
||||
const result = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: `/api/jobs/${body.id}` }),
|
||||
(response) => response.body.status === 'completed',
|
||||
{ timeout: 60000 }
|
||||
);
|
||||
|
||||
expect(result.body.downloadUrl).toBeDefined();
|
||||
});
|
||||
```
|
||||
|
||||
## Pattern Examples
|
||||
|
||||
### Example 1: Basic Polling
|
||||
|
||||
timeout: 60000, // 60 seconds max
|
||||
interval: 2000, // Check every 2 seconds
|
||||
log: 'Waiting for export job to complete',
|
||||
}
|
||||
);
|
||||
|
||||
expect(result.body.downloadUrl).toBeDefined();
|
||||
|
||||
- Options: timeout, interval, log message
|
||||
- Returns the value when predicate returns true
|
||||
|
||||
### Example 2: Working with Assertions
|
||||
|
||||
**Context**: Use assertions directly in predicate for more expressive tests.
|
||||
|
||||
|
||||
body: { type: 'user-created', userId: '123' },
|
||||
});
|
||||
|
||||
// Poll with assertions in predicate - no return true needed!
|
||||
await recurse(
|
||||
async () => {
|
||||
const { body } = await apiRequest({ method: 'GET', path: '/api/events/123' });
|
||||
return body;
|
||||
},
|
||||
(event) => {
|
||||
|
||||
expect(event.processed).toBe(true);
|
||||
expect(event.timestamp).toBeDefined();
|
||||
// If assertions pass, predicate succeeds
|
||||
// No need to return true - just let assertions pass
|
||||
},
|
||||
{ timeout: 30000 }
|
||||
);
|
||||
});
|
||||
```
|
||||
|
||||
**Why no `return true` needed?**

The predicate checks for "truthiness" of the return value. But there's a catch - in JavaScript, an empty `return` (or no return) returns `undefined`, which is falsy!

The utility handles this by checking if:

1. The predicate didn't throw (assertions passed)
2. The return value was either `undefined` (implicit return) or truthy

So you can:
|
||||
|
||||
```typescript
|
||||
// Option 1: Use assertions only (recommended)
|
||||
(event) => {
|
||||
expect(event.processed).toBe(true);
|
||||
};
|
||||
|
||||
// Option 2: Return boolean (also works)
|
||||
(event) => event.processed === true;
|
||||
|
||||
// Option 3: Mixed (assertions + explicit return)
|
||||
(event) => {
|
||||
expect(event.processed).toBe(true);
|
||||
return true;
|
||||
};
|
||||
```
|
||||
|
||||
### Example 3: Error Handling
|
||||
|
||||
**Context**: Understanding the different error types.
|
||||
|
||||
**Error Types:**
|
||||
|
||||
```typescript
|
||||
// RecurseTimeoutError - Predicate never returned true within timeout
|
||||
// Contains last command value and predicate error
|
||||
try {
|
||||
await recurse(/* ... */);
|
||||
} catch (error) {
|
||||
if (error instanceof RecurseTimeoutError) {
|
||||
console.log('Timed out. Last value:', error.lastCommandValue);
|
||||
console.log('Last predicate error:', error.lastPredicateError);
|
||||
}
|
||||
}
|
||||
|
||||
// RecurseCommandError - Command function threw an error
|
||||
// The command itself failed (e.g., network error, API error)
|
||||
|
||||
// RecursePredicateError - Predicate function threw (not from assertions failing)
|
||||
// Logic error in your predicate code
|
||||
```
|
||||
|
||||
**Custom Error Messages:**
|
||||
|
||||
```typescript
|
||||
test('custom error on timeout', async ({ recurse, apiRequest }) => {
|
||||
|
||||
{
|
||||
timeout: 10000,
|
||||
error: 'System failed to become ready within 10 seconds - check background workers',
|
||||
}
|
||||
);
|
||||
} catch (error) {
|
||||
// Error message includes custom context
|
||||
|
||||
});
|
||||
```
|
||||
|
||||
**Key Points**:
|
||||
|
||||
- `error` option provides custom message
|
||||
- Replaces default "Timed out after X ms"
|
||||
- Include debugging hints in error message
|
||||
- Helps diagnose failures faster
|
||||
|
||||
### Example 4: Post-Polling Callback
|
||||
|
||||
**Context**: Process or log results after successful polling.
|
||||
|
||||
console.log(`Processed ${result.body.itemsProcessed} items`);
|
||||
return result.body;
|
||||
},
|
||||
}
|
||||
);
|
||||
|
||||
expect(finalResult.itemsProcessed).toBeGreaterThan(0);
|
||||
|
||||
- Can transform or log results
|
||||
- Return value becomes final `recurse` result
|
||||
|
||||
### Example 5: UI Testing Scenarios
|
||||
|
||||
**Context**: Wait for UI elements to reach a specific state through polling.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('table data loads', async ({ page, recurse }) => {
|
||||
await page.goto('/reports');
|
||||
|
||||
// Poll for table rows to appear
|
||||
await recurse(
|
||||
async () => page.locator('table tbody tr').count(),
|
||||
(count) => count >= 10, // Wait for at least 10 rows
|
||||
{
|
||||
timeout: 15000,
|
||||
interval: 500,
|
||||
log: 'Waiting for table data to load',
|
||||
}
|
||||
);
|
||||
|
||||
// Now safe to interact with table
|
||||
await page.locator('table tbody tr').first().click();
|
||||
});
|
||||
```
|
||||
|
||||
### Example 6: Event-Based Systems (Kafka/Message Queues)
|
||||
|
||||
**Context**: Testing eventual consistency with message queue processing.
|
||||
|
||||
**Implementation**:
|
||||
|
||||
```typescript
|
||||
test('kafka event processed', async ({ recurse, apiRequest }) => {
|
||||
// Trigger action that publishes Kafka event
|
||||
await apiRequest({
|
||||
method: 'POST',
|
||||
path: '/api/orders',
|
||||
body: { productId: 'ABC123', quantity: 2 },
|
||||
});
|
||||
|
||||
// Poll for downstream effect of Kafka consumer processing
|
||||
const inventoryResult = await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/api/inventory/ABC123' }),
|
||||
(res) => {
|
||||
// Assumes test fixture seeds inventory at 100; in production tests,
|
||||
// fetch baseline first and assert: expect(res.body.available).toBe(baseline - 2)
|
||||
expect(res.body.available).toBeLessThanOrEqual(98);
|
||||
},
|
||||
{
|
||||
timeout: 30000, // Kafka processing may take time
|
||||
interval: 1000,
|
||||
log: 'Waiting for Kafka event to be processed',
|
||||
}
|
||||
);
|
||||
|
||||
expect(inventoryResult.body.lastOrderId).toBeDefined();
|
||||
});
|
||||
```
|
||||
|
||||
### Example 7: Integration with API Request (Common Pattern)
|
||||
|
||||
**Context**: Most common use case - polling API endpoints for state changes.
|
||||
|
||||
|
||||
timeout: 120000, // 2 minutes for large imports
|
||||
interval: 5000, // Check every 5 seconds
|
||||
log: `Polling import ${createResp.importId}`,
|
||||
},
|
||||
}
|
||||
);
|
||||
|
||||
expect(importResult.body.rowsImported).toBeGreaterThan(1000);
|
||||
|
||||
- Complex predicates with multiple conditions
|
||||
- Logging shows polling progress in test reports
|
||||
|
||||
## API Reference

### RecurseOptions

| Option     | Type               | Default     | Description                          |
| ---------- | ------------------ | ----------- | ------------------------------------ |
| `timeout`  | `number`           | `30000`     | Maximum time to wait (ms)            |
| `interval` | `number`           | `1000`      | Time between polls (ms)              |
| `log`      | `string`           | `undefined` | Message logged on each poll          |
| `error`    | `string`           | `undefined` | Custom error message for timeout     |
| `post`     | `(result: T) => R` | `undefined` | Callback after successful poll       |
| `delay`    | `number`           | `0`         | Initial delay before first poll (ms) |

### Error Types

| Error Type              | When Thrown                              | Properties                               |
| ----------------------- | ---------------------------------------- | ---------------------------------------- |
| `RecurseTimeoutError`   | Predicate never passed within timeout    | `lastCommandValue`, `lastPredicateError` |
| `RecurseCommandError`   | Command function threw an error          | `cause` (original error)                 |
| `RecursePredicateError` | Predicate threw (not assertion failure)  | `cause` (original error)                 |
|
||||
|
||||
## Comparison with Vanilla Playwright
|
||||
|
||||
|
||||
|
||||
**Use recurse for:**
|
||||
|
||||
- Background job completion
- Webhook/event processing
- Database eventual consistency
- Cache propagation
- State machine transitions
|
||||
|
||||
**Stick with vanilla expect.poll for:**
|
||||
|
||||
|
||||
|
||||
## Related Fragments
|
||||
|
||||
- `api-testing-patterns.md` - Comprehensive pure API testing patterns
|
||||
- `api-request.md` - Combine for API endpoint polling
|
||||
- `overview.md` - Fixture composition patterns
|
||||
- `fixtures-composition.md` - Using with mergeTests
|
||||
- `contract-testing.md` - Contract testing with async verification
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
**DON'T use hard waits instead of polling:**
|
||||
|
||||
```typescript
|
||||
await page.click('#export');
|
||||
await page.waitForTimeout(5000); // Arbitrary wait
|
||||
expect(await page.textContent('#status')).toBe('Ready');
|
||||
```
|
||||
|
||||
**DO poll for actual condition:**
|
||||
|
||||
```typescript
|
||||
await page.click('#export');
|
||||
await recurse(
|
||||
() => page.textContent('#status'),
|
||||
(status) => status === 'Ready',
|
||||
{ timeout: 10000 }
|
||||
);
|
||||
```
|
||||
|
||||
**DON'T poll too frequently:**
|
||||
|
||||
```typescript
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/status' }),
|
||||
(res) => res.body.ready,
|
||||
{ interval: 100 } // Hammers API every 100ms!
|
||||
);
|
||||
```
|
||||
|
||||
**DO use reasonable interval for API calls:**
|
||||
|
||||
```typescript
|
||||
await recurse(
|
||||
() => apiRequest({ method: 'GET', path: '/status' }),
|
||||
(res) => res.body.ready,
|
||||
{ interval: 2000 } // Check every 2 seconds (reasonable)
|
||||
);
|
||||
```
|
||||
|
||||
|
||||
id,name,description,tags,fragment_file
|
||||
fixture-architecture,Fixture Architecture,"Composable fixture patterns (pure function → fixture → merge) and reuse rules","fixtures,architecture,playwright,cypress",knowledge/fixture-architecture.md
|
||||
|
||||
network-first,Network-First Safeguards,"Intercept-before-navigate workflow, HAR capture, deterministic waits, edge mocking","network,stability,playwright,cypress,ui",knowledge/network-first.md
|
||||
data-factories,Data Factories and API Setup,"Factories with overrides, API seeding, cleanup discipline","data,factories,setup,api,backend,seeding",knowledge/data-factories.md
|
||||
component-tdd,Component TDD Loop,"Red→green→refactor workflow, provider isolation, accessibility assertions","component-testing,tdd,ui",knowledge/component-tdd.md
|
||||
playwright-config,Playwright Config Guardrails,"Environment switching, timeout standards, artifact outputs","playwright,config,env",knowledge/playwright-config.md
|
||||
ci-burn-in,CI and Burn-In Strategy,"Staged jobs, shard orchestration, burn-in loops, artifact policy","ci,automation,flakiness",knowledge/ci-burn-in.md
|
||||
selective-testing,Selective Test Execution,"Tag/grep usage, spec filters, diff-based runs, promotion rules","risk-based,selection,strategy",knowledge/selective-testing.md
|
||||
feature-flags,Feature Flag Governance,"Enum management, targeting helpers, cleanup, release checklists","feature-flags,governance,launchdarkly",knowledge/feature-flags.md
|
||||
|
||||
contract-testing,Contract Testing Essentials,"Pact publishing, provider verification, resilience coverage","contract-testing,pact,api,backend,microservices,service-contract",knowledge/contract-testing.md
|
||||
email-auth,Email Authentication Testing,"Magic link extraction, state preservation, caching, negative flows","email-authentication,security,workflow",knowledge/email-auth.md
|
||||
|
||||
error-handling,Error Handling Checks,"Scoped exception handling, retry validation, telemetry logging","resilience,error-handling,stability,api,backend",knowledge/error-handling.md
|
||||
visual-debugging,Visual Debugging Toolkit,"Trace viewer usage, artifact expectations, accessibility integration","debugging,dx,tooling,ui",knowledge/visual-debugging.md
|
||||
risk-governance,Risk Governance,"Scoring matrix, category ownership, gate decision rules","risk,governance,gates",knowledge/risk-governance.md
|
||||
probability-impact,Probability and Impact Scale,"Shared definitions for scoring matrix and gate thresholds","risk,scoring,scale",knowledge/probability-impact.md
|
||||
test-quality,Test Quality Definition of Done,"Execution limits, isolation rules, green criteria","quality,definition-of-done,tests",knowledge/test-quality.md
|
||||
nfr-criteria,NFR Review Criteria,"Security, performance, reliability, maintainability status definitions","nfr,assessment,quality",knowledge/nfr-criteria.md
|
||||
|
||||
test-levels,Test Levels Framework,"Guidelines for choosing unit, integration, or end-to-end coverage","testing,levels,selection,api,backend,ui",knowledge/test-levels-framework.md
|
||||
test-priorities,Test Priorities Matrix,"P0–P3 criteria, coverage targets, execution ordering","testing,prioritization,risk",knowledge/test-priorities-matrix.md
|
||||
test-healing-patterns,Test Healing Patterns,"Common failure patterns and automated fixes","healing,debugging,patterns",knowledge/test-healing-patterns.md
|
||||
|
||||
selector-resilience,Selector Resilience,"Robust selector strategies and debugging techniques","selectors,locators,debugging,ui",knowledge/selector-resilience.md
|
||||
timing-debugging,Timing Debugging,"Race condition identification and deterministic wait fixes","timing,async,debugging",knowledge/timing-debugging.md
|
||||
|
||||
overview,Playwright Utils Overview,"Installation, design principles, fixture patterns for API and UI testing","playwright-utils,fixtures,api,backend,ui",knowledge/overview.md
|
||||
api-request,API Request,"Typed HTTP client, schema validation, retry logic for API and service testing","api,backend,service-testing,api-testing,playwright-utils",knowledge/api-request.md
|
||||
network-recorder,Network Recorder,"HAR record/playback, CRUD detection for offline UI testing","network,playwright-utils,ui,har",knowledge/network-recorder.md
|
||||
auth-session,Auth Session,"Token persistence, multi-user, API and browser authentication","auth,playwright-utils,api,backend,jwt,token",knowledge/auth-session.md
|
||||
intercept-network-call,Intercept Network Call,"Network spy/stub, JSON parsing for UI tests","network,playwright-utils,ui",knowledge/intercept-network-call.md
|
||||
recurse,Recurse Polling,"Async polling for API responses, background jobs, eventual consistency","polling,playwright-utils,api,backend,async,eventual-consistency",knowledge/recurse.md
|
||||
log,Log Utility,"Report logging, structured output for API and UI tests","logging,playwright-utils,api,ui",knowledge/log.md
|
||||
file-utils,File Utilities,"CSV/XLSX/PDF/ZIP validation for API exports and UI downloads","files,playwright-utils,api,backend,ui",knowledge/file-utils.md
|
||||
burn-in,Burn-in Runner,"Smart test selection, git diff for CI optimization","ci,playwright-utils",knowledge/burn-in.md
|
||||
network-error-monitor,Network Error Monitor,"HTTP 4xx/5xx detection for UI tests","monitoring,playwright-utils,ui",knowledge/network-error-monitor.md
|
||||
fixtures-composition,Fixtures Composition,"mergeTests composition patterns for combining utilities","fixtures,playwright-utils",knowledge/fixtures-composition.md
|
||||
api-testing-patterns,API Testing Patterns,"Pure API test patterns without browser: service testing, microservices, GraphQL","api,backend,service-testing,api-testing,microservices,graphql,no-browser",knowledge/api-testing-patterns.md