Port TEA commands into workflows and preload Murat knowledge
@@ -12,19 +12,21 @@
</persona>
<critical-actions>
<i>Load into memory {project-root}/bmad/bmm/config.yaml and set variable project_name, output_folder, user_name, communication_language</i>
<i>Load into memory {project-root}/bmad/bmm/testarch/tea-knowledge.md and {project-root}/bmad/bmm/testarch/test-resources-for-ai-flat.txt for Murat’s latest guidance and examples</i>
<i>Cross-check recommendations with the current official Playwright, Cypress, Pact, and CI platform documentation when repo guidance appears outdated</i>
<i>Remember the user's name is {user_name}</i>
<i>ALWAYS communicate in {communication_language}</i>
</critical-actions>
<cmds>
<c cmd="*help">Show numbered cmd list</c>
<c cmd="*framework" exec="{project-root}/bmad/bmm/testarch/framework.md">Initialize production-ready test framework architecture</c>
<c cmd="*atdd" exec="{project-root}/bmad/bmm/testarch/atdd.md">Generate E2E tests first, before starting implementation</c>
<c cmd="*automate" exec="{project-root}/bmad/bmm/testarch/automate.md">Generate comprehensive test automation</c>
<c cmd="*test-design" exec="{project-root}/bmad/bmm/testarch/test-design.md">Create comprehensive test scenarios</c>
<c cmd="*trace" exec="{project-root}/bmad/bmm/testarch/trace-requirements.md">Map requirements to tests using Given-When-Then BDD format</c>
<c cmd="*nfr-assess" exec="{project-root}/bmad/bmm/testarch/nfr-assess.md">Validate non-functional requirements</c>
<c cmd="*ci" exec="{project-root}/bmad/bmm/testarch/ci.md">Scaffold CI/CD quality pipeline</c>
<c cmd="*gate" exec="{project-root}/bmad/bmm/testarch/gate.md">Write/update quality gate decision assessment</c>
<c cmd="*framework" run-workflow="{project-root}/bmad/bmm/workflows/testarch/framework/workflow.yaml">Initialize production-ready test framework architecture</c>
<c cmd="*atdd" run-workflow="{project-root}/bmad/bmm/workflows/testarch/atdd/workflow.yaml">Generate E2E tests first, before starting implementation</c>
<c cmd="*automate" run-workflow="{project-root}/bmad/bmm/workflows/testarch/automate/workflow.yaml">Generate comprehensive test automation</c>
<c cmd="*test-design" run-workflow="{project-root}/bmad/bmm/workflows/testarch/test-design/workflow.yaml">Create comprehensive test scenarios</c>
<c cmd="*trace" run-workflow="{project-root}/bmad/bmm/workflows/testarch/trace/workflow.yaml">Map requirements to tests using Given-When-Then BDD format</c>
<c cmd="*nfr-assess" run-workflow="{project-root}/bmad/bmm/workflows/testarch/nfr-assess/workflow.yaml">Validate non-functional requirements</c>
<c cmd="*ci" run-workflow="{project-root}/bmad/bmm/workflows/testarch/ci/workflow.yaml">Scaffold CI/CD quality pipeline</c>
<c cmd="*gate" run-workflow="{project-root}/bmad/bmm/workflows/testarch/gate/workflow.yaml">Write/update quality gate decision assessment</c>
<c cmd="*exit">Goodbye+exit persona</c>
</cmds>
</agent>

@@ -18,7 +18,7 @@ last-redoc-date: 2025-09-30
- Architect `*solution-architecture`
2. Confirm `bmad/bmm/config.yaml` defines `project_name`, `output_folder`, `dev_story_location`, and language settings.
3. Ensure a test framework setup exists; if not, use the `*framework` command to create one before development.
4. Skim supporting references under `./testarch/`:
4. Skim supporting references (knowledge under `testarch/`, command workflows under `workflows/testarch/`).
   - `tea-knowledge.md`
   - `test-levels-framework.md`
   - `test-priorities-matrix.md`
@@ -125,31 +125,35 @@ last-redoc-date: 2025-09-30

## Command Catalog

| Command | Task File | Primary Outputs | Notes |
| --- | --- | --- | --- |
| `*framework` | `testarch/framework.md` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists |
| `*atdd` | `testarch/atdd.md` | Failing acceptance tests, implementation checklist | Requires approved story + harness |
| `*automate` | `testarch/automate.md` | Prioritized specs, fixtures, README/script updates, DoD summary | Avoid duplicate coverage (see priority matrix) |
| `*ci` | `testarch/ci.md` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) |
| `*test-design` | `testarch/test-design.md` | Combined risk assessment, mitigation plan, and coverage strategy | Handles risk scoring and test design in one pass |
| `*trace` | `testarch/trace-requirements.md` | Coverage matrix, recommendations, gate snippet | Requires access to story/tests repositories |
| `*nfr-assess` | `testarch/nfr-assess.md` | NFR assessment report with actions | Focus on security/performance/reliability |
| `*gate` | `testarch/gate.md` | Gate YAML + summary (PASS/CONCERNS/FAIL/WAIVED) | Deterministic decision rules + rationale |
| Command | Task File | Primary Outputs | Notes |
| --- | --- | --- | --- |
| `*framework` | `workflows/testarch/framework/instructions.md` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists |
| `*atdd` | `workflows/testarch/atdd/instructions.md` | Failing acceptance tests + implementation checklist | Requires approved story + harness |
| `*automate` | `workflows/testarch/automate/instructions.md` | Prioritized specs, fixtures, README/script updates, DoD summary | Avoid duplicate coverage (see priority matrix) |
| `*ci` | `workflows/testarch/ci/instructions.md` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) |
| `*test-design` | `workflows/testarch/test-design/instructions.md` | Combined risk assessment, mitigation plan, and coverage strategy | Handles risk scoring and test design in one pass |
| `*trace` | `workflows/testarch/trace/instructions.md` | Coverage matrix, recommendations, gate snippet | Requires access to story/tests repositories |
| `*nfr-assess` | `workflows/testarch/nfr-assess/instructions.md` | NFR assessment report with actions | Focus on security/performance/reliability |
| `*gate` | `workflows/testarch/gate/instructions.md` | Gate YAML + summary (PASS/CONCERNS/FAIL/WAIVED) | Deterministic decision rules + rationale |

<details>
<summary>Command Guidance and Context Loading</summary>

- Each task reads one row from `tea-commands.csv` via `command_key`, expanding pipe-delimited (`|`) values into checklists.
- Keep CSV rows lightweight; place in-depth heuristics in `tea-knowledge.md` and reference via `knowledge_tags`.
- If the CSV grows substantially, consider splitting into scoped registries (e.g., planning vs execution) or upgrading to Markdown tables for humans.
- Each task now carries its own preflight/flow/deliverable guidance inline.
- `tea-knowledge.md` still stores heuristics; update the brief alongside task edits.
- Consider future modularization into orchestrated workflows if additional automation is needed.
- `tea-knowledge.md` encapsulates Murat’s philosophy—update both CSV and knowledge file together to avoid drift.

</details>

## Workflow Placement

We keep every Test Architect workflow under `workflows/testarch/` instead of scattering them across the phase folders. TEA steps show up during planning (`*framework`), implementation (`*atdd`, `*automate`, `*trace`), and release (`*gate`), so a single directory keeps the command catalog and examples coherent while still letting the orchestrator treat each command as a first-class workflow. When phase-specific navigation improves, we can add lightweight entrypoints without losing this central reference.

## Appendix

- **Supporting Knowledge:**
  - `tea-knowledge.md` – Murat’s testing philosophy, heuristics, and risk scales.
  - `test-levels-framework.md` – Decision matrix for unit/integration/E2E selection.
  - `test-priorities-matrix.md` – Priority (P0–P3) criteria and target coverage percentages.
  - `test-resources-for-ai-flat.txt` – Flattened 347 KB bundle of Murat’s blogs, philosophy notes, and training material. Each `FILE:` section can be loaded on demand when the agent needs deeper examples or rationale.

@@ -1,40 +0,0 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Acceptance TDD v2.0 (Slim)
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/tdd" name="Acceptance Test Driven Development">
|
||||
<llm critical="true">
|
||||
<i>Set command_key="*tdd"</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the row where command equals command_key</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md into context</i>
|
||||
<i>Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags to guide execution</i>
|
||||
<i>Split pipe-delimited fields into individual checklist items</i>
|
||||
<i>Map knowledge_tags to sections in the knowledge brief and apply them while writing tests</i>
|
||||
<i>Keep responses concise and focused on generating the failing acceptance tests plus the implementation checklist</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Verify each preflight requirement; gather missing info from user when needed</action>
|
||||
<action>Abort if halt_rules are triggered</action>
|
||||
</step>
|
||||
<step n="2" title="Execute TDD Flow">
|
||||
<action>Walk through flow_cues sequentially, adapting to story context</action>
|
||||
<action>Use knowledge brief heuristics to enforce Murat's patterns (one test = one concern, explicit assertions, etc.)</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Produce artifacts described in deliverables</action>
|
||||
<action>Summarize failing tests and checklist items for the developer</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>Apply halt_rules from the CSV row exactly</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Use the notes column for additional constraints or reminders</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>Failing acceptance test files + implementation checklist summary</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
@@ -1,38 +0,0 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Automation Expansion v2.0 (Slim)
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/automate" name="Automation Expansion">
|
||||
<llm critical="true">
|
||||
<i>Set command_key="*automate"</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the row where command equals command_key</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md for heuristics</i>
|
||||
<i>Follow CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags</i>
|
||||
<i>Convert pipe-delimited values into actionable checklists</i>
|
||||
<i>Apply Murat's opinions from the knowledge brief when filling gaps or refactoring tests</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Confirm prerequisites; stop if halt_rules are triggered</action>
|
||||
</step>
|
||||
<step n="2" title="Execute Automation Flow">
|
||||
<action>Walk through flow_cues to analyse existing coverage and add only necessary specs</action>
|
||||
<action>Use knowledge heuristics (composable helpers, deterministic waits, network boundary) while generating code</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Create or update artifacts listed in deliverables</action>
|
||||
<action>Summarize coverage deltas and remaining recommendations</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>Apply halt_rules from the CSV row as written</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Reference notes column for additional guardrails</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>Updated spec files and concise summary of automation changes</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
@@ -1,39 +0,0 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# CI/CD Enablement v2.0 (Slim)
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/ci" name="CI/CD Enablement">
|
||||
<llm critical="true">
|
||||
<i>Set command_key="*ci"</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the row where command equals command_key</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md to recall CI heuristics</i>
|
||||
<i>Follow CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags</i>
|
||||
<i>Split pipe-delimited values into actionable lists</i>
|
||||
<i>Keep output focused on workflow YAML, scripts, and guidance explicitly requested in deliverables</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Confirm prerequisites and required permissions</action>
|
||||
<action>Stop if halt_rules trigger</action>
|
||||
</step>
|
||||
<step n="2" title="Execute CI Flow">
|
||||
<action>Apply flow_cues to design the pipeline stages</action>
|
||||
<action>Leverage knowledge brief guidance (cost vs confidence, sharding, artifacts) when making trade-offs</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Create artifacts listed in deliverables (workflow files, scripts, documentation)</action>
|
||||
<action>Summarize the pipeline, selective testing strategy, and required secrets</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>Use halt_rules from the CSV row verbatim</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Reference notes column for optimization reminders</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>CI workflow + concise explanation ready for team adoption</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
@@ -1,41 +0,0 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Test Framework Setup v2.0 (Slim)
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/framework" name="Test Framework Setup">
|
||||
<llm critical="true">
|
||||
<i>Set command_key="*framework"</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the row where command equals command_key</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md to internal memory</i>
|
||||
<i>Use the CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags to guide behaviour</i>
|
||||
<i>Split pipe-delimited values (|) into individual checklist items</i>
|
||||
<i>Map knowledge_tags to matching sections in the knowledge brief and apply those heuristics throughout execution</i>
|
||||
<i>DO NOT expand beyond the guidance unless the user supplies extra context; keep instructions lean and adaptive</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Run Preflight Checks">
|
||||
<action>Evaluate each item in preflight; confirm or collect missing information</action>
|
||||
<action>If any preflight requirement fails, follow halt_rules and stop</action>
|
||||
</step>
|
||||
<step n="2" title="Execute Framework Flow">
|
||||
<action>Follow flow_cues sequence, adapting to the project's stack</action>
|
||||
<action>When deciding frameworks or patterns, apply relevant heuristics from tea-knowledge.md via knowledge_tags</action>
|
||||
<action>Keep generated assets minimal—only what the CSV specifies</action>
|
||||
</step>
|
||||
<step n="3" title="Finalize Deliverables">
|
||||
<action>Create artifacts listed in deliverables</action>
|
||||
<action>Capture a concise summary for the user explaining what was scaffolded</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>Follow halt_rules from the CSV row verbatim</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Use notes column for additional guardrails while executing</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>Deliverables and summary specified in the CSV row</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
@@ -1,38 +0,0 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Quality Gate v2.0 (Slim)
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/tea-gate" name="Quality Gate">
|
||||
<llm critical="true">
|
||||
<i>Set command_key="*gate"</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the matching row</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md to reinforce risk-model heuristics</i>
|
||||
<i>Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags</i>
|
||||
<i>Split pipe-delimited values into actionable items</i>
|
||||
<i>Apply deterministic rules for PASS/CONCERNS/FAIL/WAIVED; capture rationale and approvals</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Gather latest assessments and confirm prerequisites; halt per halt_rules if missing</action>
|
||||
</step>
|
||||
<step n="2" title="Set Gate Decision">
|
||||
<action>Follow flow_cues to determine status, residual risk, follow-ups</action>
|
||||
<action>Use knowledge heuristics to balance cost vs confidence when negotiating waivers</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Update gate YAML specified in deliverables</action>
|
||||
<action>Summarize decision, rationale, owners, and deadlines</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>Apply halt_rules from the CSV row</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Use notes column for quality bar reminders</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>Updated gate file with documented decision</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
@@ -1,38 +0,0 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# NFR Assessment v2.0 (Slim)
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/nfr-assess" name="NFR Assessment">
|
||||
<llm critical="true">
|
||||
<i>Set command_key="*nfr-assess"</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the matching row</i>
|
||||
<i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md focusing on NFR guidance</i>
|
||||
<i>Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags</i>
|
||||
<i>Split pipe-delimited values into actionable lists</i>
|
||||
<i>Demand evidence for each non-functional claim (tests, telemetry, logs)</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Confirm prerequisites; halt per halt_rules if unmet</action>
|
||||
</step>
|
||||
<step n="2" title="Assess NFRs">
|
||||
<action>Follow flow_cues to evaluate Security, Performance, Reliability, Maintainability</action>
|
||||
<action>Use knowledge heuristics to suggest monitoring and fail-fast patterns</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Produce assessment document and recommendations defined in deliverables</action>
|
||||
<action>Summarize status, gaps, and actions</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>Apply halt_rules from the CSV row</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Reference notes column for negotiation framing (cost vs confidence)</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>NFR assessment markdown with clear next steps</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
@@ -1,9 +0,0 @@
command,title,when_to_use,preflight,flow_cues,deliverables,halt_rules,notes,knowledge_tags
*automate,Automation expansion,After implementation or when reforging coverage,all acceptance criteria satisfied|code builds locally|framework configured,"Review story source/diff to confirm automation target; ensure fixture architecture exists (mergeTests for Playwright, commands for Cypress) and implement apiRequest/network/auth/log fixtures if missing; map acceptance criteria with test-levels-framework.md guidance and avoid duplicate coverage; assign priorities using test-priorities-matrix.md; generate unit/integration/E2E specs with naming convention feature-name.spec.ts, covering happy, negative, and edge paths; enforce deterministic waits, self-cleaning factories, and <=1.5 minute execution per test; run suite and capture Definition of Done results; update package.json scripts and README instructions",New or enhanced spec files grouped by level; fixture modules under support/; data factory utilities; updated package.json scripts and README notes; DoD summary with remaining gaps; gate-ready coverage summary,"If automation target unclear or framework missing, halt and request clarification",Never create page objects; keep tests <300 lines and stateless; forbid hard waits and conditional flow in tests; co-locate tests near source; flag flaky patterns immediately,philosophy/core|patterns/helpers|patterns/waits|patterns/dod
*ci,CI/CD quality pipeline,Once automation suite exists or needs optimization,git repository initialized|tests pass locally|team agrees on target environments|access to CI platform settings,"Detect CI platform (default GitHub Actions, ask if GitLab/CircleCI/etc); scaffold workflow (.github/workflows/test.yml or platform equivalent) with triggers; set Node.js version from .nvmrc and cache node_modules + browsers; stage jobs: lint -> unit -> component -> e2e with matrix parallelization (shard by file not test); add selective execution script for affected tests; create burn-in job that reruns changed specs 3x to catch flakiness; attach artifacts on failure (traces/videos/HAR); configure retries/backoff and concurrency controls; document required secrets and environment variables; add Slack/email notifications and local script mirroring CI",.github/workflows/test.yml (or platform equivalent); scripts/test-changed.sh; scripts/burn-in-changed.sh; updated README/ci.md instructions; secrets checklist; dashboard or badge configuration,"If git repo absent, test framework missing, or CI platform unspecified, halt and request setup",Target 20x speedups via parallel shards + caching; shard by file; keep jobs under 10 minutes; wait-on-timeout 120s for app startup; ensure npm test locally matches CI run; mention alternative platform paths when not on GitHub,philosophy/core|ci-strategy
*framework,Initialize test architecture,Run once per repo or when no production-ready harness exists,package.json present|no existing E2E framework detected|architectural context available,"Identify stack from package.json (React/Vue/Angular/Next.js); detect bundler (Vite/Webpack/Rollup/esbuild); match test language to source (JS/TS frontend -> JS/TS tests); choose Playwright for large or performance-critical repos, Cypress for small DX-first teams; create {framework}/tests/ and {framework}/support/fixtures/ and {framework}/support/helpers/; configure config files with timeouts (action 15s, navigation 30s, test 60s) and reporters (HTML + JUnit); create .env.example with TEST_ENV, BASE_URL, API_URL; implement pure function->fixture->mergeTests pattern and faker-based data factories; enable failure-only screenshots/videos and ensure .nvmrc recorded",playwright/ or cypress/ folder with config + support tree; .env.example; .nvmrc; example tests; README with setup instructions,"If package.json missing OR framework already configured, halt and instruct manual review","Playwright: worker parallelism, trace viewer, multi-language support; Cypress: avoid if many dependent API calls; Component testing: Vitest (large) or Cypress CT (small); Contract testing: Pact for microservices; always use data-cy/data-testid selectors",philosophy/core|patterns/fixtures|patterns/selectors
*gate,Quality gate decision,After review or mitigation updates,latest assessments gathered|team consensus on fixes,"Assemble story metadata (id, title); choose gate status using deterministic rules (PASS all critical issues resolved, CONCERNS minor residual risk, FAIL critical blockers, WAIVED approved by business); update YAML schema with sections: metadata, waiver status, top_issues, risk_summary totals, recommendations (must_fix, monitor), nfr_validation statuses, history; capture rationale, owners, due dates, and summary comment back to story","docs/qa/gates/{story}.yml updated with schema fields (schema, story, story_title, gate, status_reason, reviewer, updated, waiver, top_issues, risk_summary, recommendations, nfr_validation, history); summary message for team","If review incomplete or risk data outdated, halt and request rerun","FAIL whenever unresolved P0 risks/tests or security holes remain; CONCERNS when mitigations planned but residual risk exists; WAIVED requires reason, approver, and expiry; maintain audit trail in history",philosophy/core|risk-model
*nfr-assess,NFR validation,Late development or pre-review for critical stories,implementation deployed locally|non-functional goals defined or discoverable,"Ask which NFRs to assess; default to core four (security, performance, reliability, maintainability); gather thresholds from story/architecture/technical-preferences and mark unknown targets; inspect evidence (tests, telemetry, logs) for each NFR; classify status using deterministic pass/concerns/fail rules and list quick wins; produce gate block and assessment doc with recommended actions",NFR assessment markdown with findings; gate YAML block capturing statuses and notes; checklist of evidence gaps and follow-up owners,"If NFR targets undefined and no guidance available, request definition and halt","Unknown thresholds -> CONCERNS, never guess; ensure each NFR has evidence or call it out; suggest monitoring hooks and fail-fast mechanisms when gaps exist",philosophy/core|nfr
*tdd,Acceptance Test Driven Development,Before implementation when team commits to TDD,story approved with acceptance criteria|dev sandbox ready|framework scaffolding in place,Clarify acceptance criteria and affected systems; pick appropriate test level (E2E/API/Component); write failing acceptance tests using Given-When-Then with network interception first then navigation; create data factories and fixture stubs for required entities; outline mocks/fixtures infrastructure the dev team must supply; generate component tests for critical UI logic; compile implementation checklist mapping each test to source work; share failing tests with dev agent and maintain red -> green -> refactor loop,Failing acceptance test files; component test stubs; fixture/mocks skeleton; implementation checklist with test-to-code mapping; documented data-testid requirements,"If criteria ambiguous or framework missing, halt for clarification",Start red; one assertion per test; use beforeEach for visible setup (no shared state); remind devs to run tests before writing production code; update checklist as each test goes green,philosophy/core|patterns/test-structure
*test-design,Risk and test design planning,"After story approval, before development",story markdown present|acceptance criteria clear|architecture/PRD accessible,"Filter requirements so only genuine risks remain; review PRD/architecture/story for unresolved gaps; classify risks across TECH, SEC, PERF, DATA, BUS, OPS using category definitions; request clarification when evidence missing; score probability (1 unlikely, 2 possible, 3 likely) and impact (1 minor, 2 degraded, 3 critical) then compute totals; highlight risks >=6 and plan mitigations with owners and timelines; break acceptance criteria into atomic scenarios mapped to mitigations; reference test-levels-framework.md to pick unit/integration/E2E/component levels; avoid duplicate coverage, prefer lower levels when possible; assign priorities using test-priorities-matrix.md; outline data/tooling prerequisites and execution order",Risk assessment markdown in docs/qa/assessments; table of category/probability/impact/score; mitigation matrix with owners and due dates; coverage matrix with requirement/level/priority/mitigation; gate YAML snippet summarizing risk totals and scenario counts; recommended execution order,"If story missing or criteria unclear, halt for clarification","Category definitions: TECH=architecture flaws; SEC=missing controls/vulnerabilities; PERF=SLA risk; DATA=loss/corruption; BUS=user/business harm; OPS=deployment/run failures; rely on evidence, not speculation; tie scenarios back to risk mitigations; keep scenarios independent and maintainable",philosophy/core|risk-model|patterns/test-structure
*trace,Requirements traceability,Mid-development checkpoint or before review,tests exist for story|access to source + specs,"Gather acceptance criteria and implemented tests; map each criterion to concrete tests (file + describe/it) using Given-When-Then narrative; classify coverage status as FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY; flag severity based on priority (P0 gaps critical); recommend additional tests or refactors; generate gate YAML coverage summary",Traceability report saved under docs/qa/assessments; coverage matrix with status per criterion; gate YAML snippet for coverage totals and gaps,"If story lacks implemented tests, pause and advise running *tdd or writing tests","Definitions: FULL=all scenarios validated, PARTIAL=some coverage exists, NONE=no validation, UNIT-ONLY=missing higher level, INTEGRATION-ONLY=lacks lower confidence; ensure assertions explicit and avoid duplicate coverage",philosophy/core|patterns/assertions
@@ -2,7 +2,7 @@
# Murat Test Architecture Foundations (Slim Brief)

This brief distills Murat Ozcan's testing philosophy used by the Test Architect agent. Use it as the north star after loading `tea-commands.csv`.
This brief distills Murat Ozcan's testing philosophy used by the Test Architect agent. Use it as the north star while executing the TEA workflows.

## Core Principles

@@ -14,8 +14,10 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- Composition over inheritance: prefer functional helpers and fixtures that compose behaviour; page objects and deep class trees hide duplication.
- Setup via API, assert via UI. Keep tests user-centric while priming state through fast interfaces (see the sketch after this list).
- One test = one concern. Explicit assertions live in the test body, not buried in helpers.
- Test at the lowest level possible first: favour component/unit coverage before integration/E2E (target ~1:3–1:5 ratio of high-level to low-level tests).
- Zero tolerance for flakiness: if a test flakes, fix the cause immediately or delete the test—shipping with flakes is not acceptable evidence.
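
A minimal sketch of the "setup via API, assert via UI" principle (the `/api/orders` endpoint, its payload, and the `order-status` test id are illustrative assumptions):

```javascript
import { test, expect } from '@playwright/test';

test('shows a newly created order', async ({ page, request }) => {
  // Arrange: prime state through the API, which is fast and deterministic
  const response = await request.post('/api/orders', { data: { sku: 'sku-123', quantity: 1 } });
  const { id } = await response.json();

  // Act + Assert: exercise and verify through the UI the user actually sees
  await page.goto(`/orders/${id}`);
  await expect(page.getByTestId('order-status')).toHaveText('Created');
});
```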

## Patterns and Heuristics
## Patterns & Heuristics

- Selector order: `data-cy` / `data-testid` -> ARIA -> text. Avoid brittle CSS, IDs, or index based locators.
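  A quick illustration of that ordering in Playwright terms (the ids, roles, and selectors are placeholders):

  ```javascript
  // 1) Stable test ids first.
  await page.getByTestId('submit-button').click();
  // 2) Then accessible roles and names: page.getByRole('button', { name: 'Submit' }).
  // 3) Then visible text as a last resort: page.getByText('Submit').
  // Avoid brittle CSS or index-based locators such as page.locator('form > div:nth-child(3) > button').
  ```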
- Network boundary is the mock boundary. Stub at the edge, never mid-service unless risk demands.
@@ -44,9 +46,37 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
...overrides,
});
```
- Standard test skeleton keeps intent clear—`describe` the feature, `context` specific scenarios, make setup visible, and follow Arrange → Act → Assert explicitly:

```javascript
describe('Checkout', () => {
  context('when inventory is available', () => {
    beforeEach(async () => {
      await seedInventory();
      await interceptOrders(); // intercept BEFORE navigation
      await test.step('navigate', () => page.goto('/checkout'));
    });

    it('completes purchase', async () => {
      await cart.fillDetails(validUser);
      await expect(page.getByTestId('order-confirmed')).toBeVisible();
    });
  });
});
```
- Helper/fixture thresholds: 3+ call sites → promote to fixture with subpath export, 2-3 → shared utility module, 1-off → keep inline to avoid premature abstraction.
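  For example, a helper that reaches three call sites might be promoted like this (module paths and the `apiRequest` helper are assumptions, not existing project files):

  ```javascript
  // support/fixtures/api.js (hypothetical): promote a 3+ call-site helper into a fixture
  import { test as base, expect } from '@playwright/test';
  import { apiRequest } from '../helpers/api-request'; // the pure function stays importable on its own

  export const test = base.extend({
    api: async ({ request }, use) => {
      // expose the helper pre-bound to Playwright's request context
      await use((path, options) => apiRequest(request, path, options));
    },
  });

  export { expect };
  ```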
- Deterministic waits only: prefer `page.waitForResponse`, `cy.wait('@alias')`, or element disappearance (e.g., `cy.get('[data-cy="spinner"]').should('not.exist')`). Ban `waitForTimeout`/`cy.wait(ms)` unless quarantined in TODO and slated for removal.
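  For instance, tie the wait to the network call the action triggers rather than to wall-clock time (route and test id are illustrative):

  ```javascript
  const [response] = await Promise.all([
    page.waitForResponse((res) => res.url().includes('/api/orders') && res.ok()),
    page.getByTestId('place-order').click(), // the action that fires the request
  ]);
  expect(response.status()).toBe(200); // no waitForTimeout anywhere
  ```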
- Data is created via APIs or tasks, not UI flows:
  ```javascript
  beforeEach(() => {
    cy.task('db:seed', { users: [createUser({ role: 'admin' })] });
  });
  ```
- Assertions stay in tests; when shared state varies, assert on ranges (`expect(count).toBeGreaterThanOrEqual(3)`) rather than brittle exact values.
- Visual debugging: keep component/test runner UIs available (Playwright trace viewer, Cypress runner) to accelerate feedback.

## Risk and Coverage
## Risk & Coverage

- Risk score = probability (1-3) × impact (1-3). Score 9 => gate FAIL, ≥6 => CONCERNS. Most stories have 0-1 high risks.
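  The same deterministic mapping as a tiny sketch (treating anything below 6 as passing is a simplification for illustration; WAIVED remains a separate, human decision):

  ```javascript
  // probability and impact are each scored 1-3; their product drives the gate signal
  function gateSignalFromRisk(probability, impact) {
    const score = probability * impact; // 1..9
    if (score === 9) return 'FAIL'; // critical risk blocks the gate
    if (score >= 6) return 'CONCERNS'; // high residual risk needs explicit follow-up
    return 'PASS';
  }

  gateSignalFromRisk(3, 3); // 'FAIL'
  gateSignalFromRisk(2, 3); // 'CONCERNS'
  gateSignalFromRisk(1, 2); // 'PASS'
  ```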
- Test level ratio: heavy unit/component coverage, but always include E2E for critical journeys and integration seams.
@@ -60,7 +90,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- **Media**: screenshot only-on-failure, video retain-on-failure
- **Language Matching**: Tests should match source code language (JS/TS frontend -> JS/TS tests)

## Automation and CI
## Automation & CI

- Prefer Playwright for multi-language teams, worker parallelism, rich debugging; Cypress suits smaller DX-first repos or component-heavy spikes.
- **Framework Selection**: Large repo + performance = Playwright, Small repo + DX = Cypress
@@ -71,7 +101,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- Burn-in testing: run new or changed specs multiple times (e.g., 3-10x) to flush flakes before they land in main.
- Keep helper scripts handy (`scripts/test-changed.sh`, `scripts/burn-in-changed.sh`) so CI and local workflows stay in sync.

## Project Structure and Config
## Project Structure & Config

- **Directory structure**:
```
@@ -92,8 +122,10 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
};
export default configs[process.env.TEST_ENV || 'local'];
```
- Validate environment input up-front (fail fast when `TEST_ENV` is missing) and keep Playwright/Cypress configs small by delegating per-env overrides to files under `config/`.
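  A minimal fail-fast sketch of that entry point (environment names and file layout are assumptions):

  ```javascript
  // config/index.js (hypothetical): refuse to run with an unknown TEST_ENV
  import localConfig from './local.js';
  import stagingConfig from './staging.js';

  const configs = { local: localConfig, staging: stagingConfig };
  const env = process.env.TEST_ENV || 'local';

  if (!(env in configs)) {
    throw new Error(`TEST_ENV must be one of ${Object.keys(configs).join(', ')}; received "${env}"`);
  }

  export default configs[env];
  ```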
- Keep `.env.example`, `.nvmrc`, and scripts (burn-in, test-changed) in source control so CI and local machines share tooling defaults.

## Test Hygiene and Independence
## Test Hygiene & Independence

- Tests must be independent and stateless; never rely on execution order.
- Clean up all data created during tests (afterEach or API cleanup).
@@ -101,7 +133,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- No shared mutable state; prefer factory functions per test.
- Tests must run in parallel safely; never commit `.only`.
- Prefer co-location: component tests next to components, integration in `tests/integration`, etc.
- Feature flags: centralise enum definitions (e.g., `export const FLAGS = Object.freeze({ NEW_FEATURE: 'new-feature' })`), provide helpers to set/clear targeting, and write dedicated flag tests that clean up targeting after each run.
- Feature flags: centralise enum definitions (e.g., `export const FLAGS = Object.freeze({ NEW_FEATURE: 'new-feature' })`), provide helpers to set/clear targeting, write dedicated flag suites that clean up targeting after each run, and exercise both enabled/disabled paths in CI.
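  A sketch of that discipline (the flag endpoint and test ids are assumptions; substitute your flag provider's real API):

  ```javascript
  import { test, expect } from '@playwright/test';

  export const FLAGS = Object.freeze({ NEW_FEATURE: 'new-feature' });

  // hypothetical helper: point targeting at the current test user via a test-only endpoint
  const setFlag = (request, key, value) => request.post('/test-api/flags', { data: { key, value } });

  test.describe('new-feature flag', () => {
    test.afterEach(async ({ request }) => {
      await setFlag(request, FLAGS.NEW_FEATURE, null); // always clean up targeting
    });

    for (const enabled of [true, false]) {
      test(`behaves correctly when flag is ${enabled}`, async ({ page, request }) => {
        await setFlag(request, FLAGS.NEW_FEATURE, enabled);
        await page.goto('/dashboard');
        const panel = page.getByTestId('new-feature-panel');
        await (enabled ? expect(panel).toBeVisible() : expect(panel).toBeHidden());
      });
    }
  });
  ```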

## CCTDD (Component Test-Driven Development)

@@ -117,6 +149,8 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- **HAR recording**: Record network traffic for offline playback in CI.
- **Selective reruns**: Only rerun failed specs, not entire suite.
- **Network recording**: capture HAR files during stable runs so CI can replay network traffic when external systems are flaky.
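  In Playwright this can be as small as recording a HAR on a stable run and replaying it in CI (paths and URL filters are placeholders):

  ```javascript
  // Record once during a stable local run…
  const context = await browser.newContext({
    recordHar: { path: 'hars/checkout.har', urlFilter: '**/api/**' },
  });

  // …then replay in CI so flaky external systems never fail the suite.
  await page.routeFromHAR('hars/checkout.har', { url: '**/api/**', update: false });
  ```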
- Stage jobs: cache dependencies once, run `test-changed` before full suite, then execute sharded E2E jobs with `fail-fast: false` so one failure doesn’t cancel other evidence.
- Ship burn-in scripts (e.g., `scripts/burn-in-changed.sh`) that loop 5–10x over changed specs and stop on first failure; wire them into CI for flaky detection before merge.
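  A hedged sketch of such a burn-in loop as a Node script (the diff range, spec pattern, and runner command are assumptions):

  ```javascript
  #!/usr/bin/env node
  // scripts/burn-in-changed.js (hypothetical): rerun changed specs several times, stop on first failure
  const { execSync } = require('node:child_process');

  const changed = execSync('git diff --name-only origin/main...HEAD', { encoding: 'utf8' })
    .split('\n')
    .filter((file) => /\.spec\.(ts|js)$/.test(file));

  if (changed.length === 0) process.exit(0);

  for (let pass = 1; pass <= 5; pass += 1) {
    console.log(`burn-in pass ${pass}/5`);
    // execSync throws on a non-zero exit code, so the loop stops on the first failing pass
    execSync(`npx playwright test ${changed.join(' ')}`, { stdio: 'inherit' });
  }
  ```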

## Package Scripts

@@ -127,25 +161,20 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
"test:component": "cypress run --component",
"test:contract": "jest --testMatch='**/pact/*.spec.ts'",
"test:debug": "playwright test --headed",
"test:ci": "npm run test:unit && npm run test:e2e",
"contract:publish": "pact-broker publish"
```

## Contract Testing (Pact)
## Online Resources & Examples

- Use for microservices with integration points.
- Consumer generates contracts, provider verifies.
- Structure: `pact/` directory at root, `pact/config.ts` for broker settings.
- Reference repos: pact-js-example-consumer, pact-js-example-provider, pact-js-example-react-consumer.
- Full-text mirrors of Murat's public repos live in the `test-resources-for-ai/sample-repos` knowledge pack so TEA can stay offline. Key origins include Playwright patterns (`pw-book`), Cypress vs Playwright comparisons, Tour of Heroes, and Pact consumer/provider examples.

## Online Resources and Examples

- Fixture architecture: https://github.com/muratkeremozcan/cy-vs-pw-murats-version
- Playwright patterns: https://github.com/muratkeremozcan/pw-book
- Component testing (CCTDD): https://github.com/muratkeremozcan/cctdd
- Contract testing: https://github.com/muratkeremozcan/pact-js-example-consumer
- Full app example: https://github.com/muratkeremozcan/tour-of-heroes-react-vite-cypress-ts
- Blog posts: https://dev.to/muratkeremozcan
- Blog essays at https://dev.to/muratkeremozcan provide narrative rationale—distil any new actionable guidance back into this brief when processes evolve.

## Risk Model Details

@@ -156,7 +185,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- BUS: Business or user harm, revenue-impacting failures, compliance gaps.
- OPS: Deployment, infrastructure, or observability gaps that block releases.

## Probability and Impact Scale
## Probability & Impact Scale

- Probability 1 = Unlikely (standard implementation, low risk).
- Probability 2 = Possible (edge cases, needs attention).
@@ -168,8 +197,8 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect

## Test Design Frameworks

- Use `docs/docs-v6/v6-bmm/test-levels-framework.md` for level selection and anti-patterns.
- Use `docs/docs-v6/v6-bmm/test-priorities-matrix.md` for P0-P3 priority criteria.
- Use [`test-levels-framework.md`](./test-levels-framework.md) for level selection and anti-patterns.
- Use [`test-priorities-matrix.md`](./test-priorities-matrix.md) for P0–P3 priority criteria.
- Naming convention: `{epic}.{story}-{LEVEL}-{sequence}` (e.g., `2.4-E2E-01`).
- Tie each scenario to risk mitigations or acceptance criteria.

@@ -270,6 +299,65 @@ history:
- Describe blocks: `describe('Feature/Component Name', () => { context('when condition', ...) })`.
- Data attributes: always kebab-case (`data-cy="submit-button"`, `data-testid="user-email"`).

## Reference Materials
## Contract Testing Rules (Pact)

If deeper context is needed, consult Murat's testing philosophy notes, blog posts, and sample repositories in https://github.com/muratkeremozcan/test-resources-for-ai/blob/main/gitingest-full-repo-text-version.txt.
- Use Pact for microservice integrations; keep a `pact/` directory with broker config and share contracts as first-class artifacts in the repo.
- Keep consumer contracts beside the integration specs that exercise them; version with semantic tags so downstream teams understand breaking changes.
- Publish contracts on every CI run and enforce provider verification before merge—failing verification blocks release and acts as a quality gate.
- Capture fallback behaviour (timeouts, retries, circuit breakers) inside interactions so resilience expectations stay explicit.
- Sample interaction scaffold:
```javascript
const interaction = {
  state: 'user with id 1 exists',
  uponReceiving: 'a request for user 1',
  withRequest: {
    method: 'GET',
    path: '/users/1',
    headers: { Accept: 'application/json' },
  },
  willRespondWith: {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
    body: like({ id: 1, name: string('Jane Doe'), email: email('jane@example.com') }),
  },
};
```

## Reference Capsules (Summaries Bundled In)

- **Fixture Architecture Quick Wins**
  - Compose Playwright or Cypress suites with additive fixtures; use `mergeTests`/`extend` to layer auth, network, and telemetry helpers without inheritance.
  - Keep HTTP helpers framework-agnostic so the same function fuels unit tests, API smoke checks, and runtime fixtures.
  - Normalize selectors (`data-testid`/`data-cy`) and lint new UI code for missing attributes to prevent brittle locators.

- **Playwright Patterns Digest**
  - Register network interceptions before navigation, assert on typed responses, and capture HAR files for regression.
  - Treat timeouts and retries as configuration, not inline magic numbers; expose overrides via fixtures.
  - Name specs and test IDs with intent (`checkout.complete-happy-path`) so CI shards and triage stay meaningful.

- **Component TDD Highlights**
  - Begin UI work with failing component specs; rebuild providers/stores per spec to avoid state bleed.
  - Use factories to exercise prop variations and edge cases; assert through accessible queries (`getByRole`, `getByLabelText`).
  - Document mount helpers and cleanup expectations so component tests stay deterministic.

- **Contract Testing Cliff Notes**
  - Store consumer contracts alongside integration specs; version with semantic tags and publish on every CI run.
  - Enforce provider verification prior to merge to act as a release gate for service integrations.
  - Capture fallback behaviour (timeouts, retries, circuit breakers) inside contracts to keep resilience expectations explicit.

- **End-to-End Reference Flow**
  - Prime end-to-end journeys through API fixtures, then assert through UI steps mirroring real user narratives.
  - Pair burn-in scripts (`npm run test:e2e -- --repeat-each=3`) with selective retries to flush flakes before promotion.

- **Philosophy & Heuristics Articles**
  - Use long-form articles for rationale; extract checklists, scripts, and thresholds back into this brief whenever teams adopt new practices.

These capsules distil Murat's sample repositories (Playwright patterns, Cypress vs Playwright comparisons, CCTDD, Pact examples, Tour of Heroes walkthrough) captured in the `test-resources-for-ai` knowledge pack so the TEA agent can operate offline while reflecting those techniques.

## Reference Assets

- [Test Architect README](./README.md) — high-level usage guidance and phase checklists.
- [Test Levels Framework](./test-levels-framework.md) — choose the right level for each scenario.
- [Test Priorities Matrix](./test-priorities-matrix.md) — assign P0–P3 priorities consistently.
- [TEA Workflows](../workflows/testarch/README.md) — per-command instructions executed by the agent.
- [Murat Knowledge Bundle](./test-resources-for-ai-flat.txt) — 347 KB flattened snapshot of Murat’s blogs, philosophy notes, and course material. Sections are delimited with `FILE:` headers; load relevant portions when deeper examples or rationales are required.

@@ -1,43 +0,0 @@
<!-- Powered by BMAD-CORE™ -->

# Risk and Test Design v3.0 (Slim)

```xml
<task id="bmad/bmm/testarch/test-design" name="Risk &amp; Test Design">
  <llm critical="true">
    <i>Set command_key="*test-design"</i>
    <i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the matching row</i>
    <i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md for risk-model and coverage heuristics</i>
    <i>Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags as the execution blueprint</i>
    <i>Split pipe-delimited values into actionable checklists</i>
    <i>Stay evidence-based—link risks and scenarios directly to PRD/architecture/story artifacts</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Confirm story markdown, acceptance criteria, and architecture/PRD access.</action>
      <action>Stop immediately if halt_rules trigger (missing inputs or unclear requirements).</action>
    </step>
    <step n="2" title="Assess Risks">
      <action>Follow flow_cues to filter genuine risks, classify them (TECH/SEC/PERF/DATA/BUS/OPS), and score probability × impact.</action>
      <action>Document mitigations with owners, timelines, and residual risk expectations.</action>
    </step>
    <step n="3" title="Design Coverage">
      <action>Break acceptance criteria into atomic scenarios mapped to mitigations.</action>
      <action>Choose test levels using test-levels-framework.md, assign priorities via test-priorities-matrix.md, and note tooling/data prerequisites.</action>
    </step>
    <step n="4" title="Deliverables">
      <action>Generate the combined risk report and test design artifacts described in deliverables.</action>
      <action>Summarize key risks, mitigations, coverage plan, and recommended execution order.</action>
    </step>
  </flow>
  <halt>
    <i>Apply halt_rules from the CSV row verbatim.</i>
  </halt>
  <notes>
    <i>Use notes column for calibration reminders and coverage heuristics.</i>
  </notes>
  <output>
    <i>Unified risk assessment plus coverage strategy ready for implementation.</i>
  </output>
</task>
```
src/modules/bmm/testarch/test-resources-for-ai-flat.txt (7607 lines; diff suppressed because it is too large)
@@ -1,38 +0,0 @@
<!-- Powered by BMAD-CORE™ -->

# Requirements Traceability v2.0 (Slim)

```xml
<task id="bmad/bmm/testarch/trace" name="Requirements Traceability">
  <llm critical="true">
    <i>Set command_key="*trace"</i>
    <i>Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the matching row</i>
    <i>Load {project-root}/bmad/bmm/testarch/tea-knowledge.md emphasising assertions guidance</i>
    <i>Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags</i>
    <i>Split pipe-delimited values into actionable lists</i>
    <i>Focus on mapping reality: reference actual files, describe coverage gaps, recommend next steps</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Validate prerequisites; halt per halt_rules if unmet</action>
    </step>
    <step n="2" title="Traceability Analysis">
      <action>Follow flow_cues to map acceptance criteria to implemented tests</action>
      <action>Leverage knowledge heuristics to highlight assertion quality and duplication risks</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Create traceability report described in deliverables</action>
      <action>Summarize critical gaps and recommendations</action>
    </step>
  </flow>
  <halt>
    <i>Apply halt_rules from the CSV row</i>
  </halt>
  <notes>
    <i>Reference notes column for additional emphasis</i>
  </notes>
  <output>
    <i>Coverage matrix and narrative summary</i>
  </output>
</task>
```
src/modules/bmm/workflows/testarch/README.md (new file, 21 lines)
@@ -0,0 +1,21 @@
|
||||
# Test Architect Workflows
|
||||
|
||||
This directory houses the per-command workflows used by the Test Architect agent (`tea`). Each workflow wraps the standalone instructions that used to live under `testarch/` so they can run through the standard BMAD workflow runner.
|
||||
|
||||
## Available workflows
|
||||
|
||||
- `framework` – scaffolds Playwright/Cypress harnesses.
|
||||
- `atdd` – generates failing acceptance tests before coding.
|
||||
- `automate` – expands regression coverage after implementation.
|
||||
- `ci` – bootstraps CI/CD pipelines aligned with TEA practices.
|
||||
- `test-design` – combines risk assessment and coverage planning.
|
||||
- `trace` – maps requirements to implemented automated tests.
|
||||
- `nfr-assess` – evaluates non-functional requirements.
|
||||
- `gate` – records the release decision in the gate file.
|
||||
|
||||
Each subdirectory contains:
|
||||
|
||||
- `instructions.md` – the slim workflow instructions.
|
||||
- `workflow.yaml` – metadata consumed by the BMAD workflow runner.
|
||||
|
||||
The TEA agent now invokes these workflows via `run-workflow` rather than executing instruction files directly.
|
||||
src/modules/bmm/workflows/testarch/atdd/instructions.md (new file, 43 lines)
@@ -0,0 +1,43 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Acceptance TDD v3.0
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/atdd" name="Acceptance Test Driven Development">
|
||||
<llm critical="true">
|
||||
<i>Preflight requirements:</i>
|
||||
<i>- Story is approved with clear acceptance criteria.</i>
|
||||
<i>- Development sandbox/environment is ready.</i>
|
||||
<i>- Framework scaffolding exists (run `*framework` if missing).</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Confirm each requirement above; halt if any are missing.</action>
|
||||
</step>
|
||||
<step n="2" title="Author Failing Acceptance Tests">
|
||||
<action>Clarify acceptance criteria and affected systems.</action>
|
||||
<action>Select appropriate test level (E2E/API/Component).</action>
|
||||
<action>Create failing tests using Given-When-Then with network interception before navigation.</action>
|
||||
<action>Build data factories and fixture stubs for required entities.</action>
|
||||
<action>Outline mocks/fixtures infrastructure the dev team must provide.</action>
|
||||
<action>Generate component tests for critical UI logic.</action>
|
||||
<action>Compile an implementation checklist mapping each test to code work.</action>
|
||||
<action>Share failing tests and checklist with the dev agent, maintaining red → green → refactor loop.</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Output failing acceptance test files, component test stubs, fixture/mocks skeleton, implementation checklist, and data-testid requirements.</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>If acceptance criteria are ambiguous or the framework is missing, halt and request clarification/set up.</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Reference `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for heuristics that shape this guidance.</i>
|
||||
<i>Start red; one assertion per test; keep setup visible (no hidden shared state).</i>
|
||||
<i>Remind devs to run tests before writing production code; update checklist as tests turn green.</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>Failing acceptance/component test suite plus implementation checklist.</i>
|
||||
</output>
|
||||
</task>
|
||||
```
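For orientation, a minimal sketch of the kind of failing-first spec this workflow asks for (route, payloads, and test ids are illustrative assumptions, not outputs of the workflow):

```javascript
import { test, expect } from '@playwright/test';

// Given a shopper with a seeded cart (primed via API, not UI)
// When they submit the checkout form
// Then the confirmation is shown (written red-first, before the implementation exists)
test('AC1: shopper completes checkout', async ({ page, request }) => {
  await request.post('/api/test/cart', { data: { items: [{ sku: 'sku-1', qty: 1 }] } });
  await page.route('**/api/orders', (route) =>
    route.fulfill({ status: 201, body: JSON.stringify({ id: 'order-1', status: 'created' }) }),
  ); // interception registered BEFORE navigation

  await page.goto('/checkout');
  await page.getByTestId('submit-order').click();
  await expect(page.getByTestId('order-confirmed')).toBeVisible();
});
```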
|
||||
src/modules/bmm/workflows/testarch/atdd/workflow.yaml (new file, 25 lines)
@@ -0,0 +1,25 @@
|
||||
# Test Architect workflow: atdd
|
||||
name: testarch-atdd
|
||||
description: "Generate failing acceptance tests before implementation."
|
||||
author: "BMad"
|
||||
|
||||
config_source: "{project-root}/bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
user_name: "{config_source}:user_name"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
|
||||
installed_path: "{project-root}/bmad/bmm/workflows/testarch/atdd"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
|
||||
template: false
|
||||
|
||||
tags:
|
||||
- qa
|
||||
- atdd
|
||||
- test-architect
|
||||
|
||||
execution_hints:
|
||||
interactive: false
|
||||
autonomous: true
|
||||
iterative: true
|
||||
src/modules/bmm/workflows/testarch/automate/instructions.md (new file, 43 lines)
@@ -0,0 +1,43 @@
|
||||
<!-- Powered by BMAD-CORE™ -->
|
||||
|
||||
# Automation Expansion v3.0
|
||||
|
||||
```xml
|
||||
<task id="bmad/bmm/testarch/automate" name="Automation Expansion">
|
||||
<llm critical="true">
|
||||
<i>Preflight requirements:</i>
|
||||
<i>- Acceptance criteria are satisfied.</i>
|
||||
<i>- Code builds locally without errors.</i>
|
||||
<i>- Framework scaffolding is configured.</i>
|
||||
</llm>
|
||||
<flow>
|
||||
<step n="1" title="Preflight">
|
||||
<action>Verify all requirements above; halt if any fail.</action>
|
||||
</step>
|
||||
<step n="2" title="Expand Automation">
|
||||
<action>Review story source/diff to confirm automation targets.</action>
|
||||
<action>Review quality heuristics from `{project-root}/bmad/bmm/testarch/tea-knowledge.md` before proposing additions.</action>
|
||||
<action>Ensure fixture architecture exists (Playwright `mergeTests`, Cypress commands); add apiRequest/network/auth/log fixtures if missing.</action>
|
||||
<action>Map acceptance criteria using `{project-root}/bmad/bmm/testarch/test-levels-framework.md` and avoid duplicate coverage.</action>
|
||||
<action>Assign priorities using `{project-root}/bmad/bmm/testarch/test-priorities-matrix.md`.</action>
|
||||
<action>Generate unit/integration/E2E specs (naming `feature-name.spec.ts`) covering happy, negative, and edge paths.</action>
|
||||
<action>Enforce deterministic waits, self-cleaning factories, and execution under 1.5 minutes per test.</action>
|
||||
<action>Run the suite, capture Definition of Done results, and update package.json scripts plus README instructions.</action>
|
||||
</step>
|
||||
<step n="3" title="Deliverables">
|
||||
<action>Create new/enhanced spec files grouped by level, supporting fixtures/helpers, data factory utilities, updated scripts/README notes, and a DoD summary highlighting remaining gaps.</action>
|
||||
</step>
|
||||
</flow>
|
||||
<halt>
|
||||
<i>If the automation target is unclear or the framework is missing, halt and request clarification/setup.</i>
|
||||
</halt>
|
||||
<notes>
|
||||
<i>Never create page objects; keep tests under 300 lines and stateless.</i>
|
||||
<i>Forbid hard waits/conditional flow; co-locate tests near source.</i>
|
||||
<i>Flag flaky patterns immediately.</i>
|
||||
</notes>
|
||||
<output>
|
||||
<i>Prioritized automation suite updates and DoD summary ready for gating.</i>
|
||||
</output>
|
||||
</task>
|
||||
```
|
||||
src/modules/bmm/workflows/testarch/automate/workflow.yaml (new file, 25 lines)
@@ -0,0 +1,25 @@
# Test Architect workflow: automate
name: testarch-automate
description: "Expand automation coverage after implementation."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/automate"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - automation
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true
43
src/modules/bmm/workflows/testarch/ci/instructions.md
Normal file
@@ -0,0 +1,43 @@
<!-- Powered by BMAD-CORE™ -->

# CI/CD Enablement v3.0

```xml
<task id="bmad/bmm/testarch/ci" name="CI/CD Enablement">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Git repository is initialized.</i>
    <i>- Local test suite passes.</i>
    <i>- Team agrees on target environments.</i>
    <i>- Access to CI platform settings/secrets is available.</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Confirm all items above; halt if prerequisites are unmet.</action>
    </step>
    <step n="2" title="Configure Pipeline">
      <action>Detect CI platform (default GitHub Actions; ask about GitLab/CircleCI/etc.).</action>
      <action>Scaffold workflow (e.g., `.github/workflows/test.yml`) with appropriate triggers and caching (Node version from `.nvmrc`, browsers).</action>
      <action>Stage jobs sequentially (lint → unit → component → e2e) with matrix parallelization (shard by file, not test).</action>
      <action>Add selective execution script(s) for affected tests, plus a burn-in job that reruns changed specs 3x to catch flakiness.</action>
      <action>Attach artifacts on failure (traces/videos/HAR) and configure retries/backoff/concurrency controls.</action>
      <action>Document required secrets/environment variables and wire Slack/email notifications; provide a local mirror script.</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Produce workflow file(s), helper scripts (`test-changed`, burn-in), README/ci.md updates, a secrets checklist, and any dashboard/badge configuration.</action>
    </step>
  </flow>
  <halt>
    <i>If the git repo is absent, tests fail, or the CI platform is unspecified, halt and request setup.</i>
  </halt>
  <notes>
    <i>Reference `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for heuristics that shape this guidance.</i>
    <i>Target ~20× speedups via parallel shards and caching; keep jobs under 10 minutes.</i>
    <i>Use `wait-on-timeout` ≈120s for app startup; ensure the local `npm test` mirrors the CI run.</i>
    <i>Mention alternative platform paths when not on GitHub.</i>
  </notes>
  <output>
    <i>CI pipeline configuration and guidance ready for team adoption.</i>
  </output>
</task>
```
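As a companion to the selective-execution and burn-in actions above, a sketch of what a `test-changed` helper could look like for a Playwright suite; the script path, base branch, and environment variable are assumptions, not part of the shipped workflow.

```ts
// scripts/test-changed.ts (hypothetical helper; run with a TS runner such as tsx)
import { execSync } from "node:child_process";

// Collect spec files touched since the assumed base branch.
const changed = execSync("git diff --name-only origin/main...HEAD", { encoding: "utf8" })
  .split("\n")
  .filter((file) => file.endsWith(".spec.ts"));

if (changed.length === 0) {
  console.log("No changed specs; nothing to burn in.");
  process.exit(0);
}

// Burn-in: rerun only the changed specs several times to surface flakiness before merge.
const repeats = Number(process.env.BURN_IN_REPEATS ?? 3);
execSync(`npx playwright test ${changed.join(" ")} --repeat-each=${repeats}`, {
  stdio: "inherit",
});
```

In CI this script would typically run as its own job after the staged lint → unit → component → e2e sequence, with the same command available locally so developers can mirror the pipeline.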
25
src/modules/bmm/workflows/testarch/ci/workflow.yaml
Normal file
@@ -0,0 +1,25 @@
# Test Architect workflow: ci
name: testarch-ci
description: "Scaffold or update the CI/CD quality pipeline."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/ci"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - ci-cd
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true
43
src/modules/bmm/workflows/testarch/framework/instructions.md
Normal file
@@ -0,0 +1,43 @@
<!-- Powered by BMAD-CORE™ -->

# Test Framework Setup v3.0

```xml
<task id="bmad/bmm/testarch/framework" name="Test Framework Setup">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Confirm `package.json` exists.</i>
    <i>- Verify no modern E2E harness is already configured.</i>
    <i>- Have architectural/stack context available.</i>
  </llm>
  <flow>
    <step n="1" title="Run Preflight Checks">
      <action>Validate each preflight requirement; stop immediately if any fail.</action>
    </step>
    <step n="2" title="Scaffold Framework">
      <action>Identify framework stack from `package.json` (React/Vue/Angular/Next.js) and bundler (Vite/Webpack/Rollup/esbuild).</action>
      <action>Select Playwright for large/perf-critical repos, Cypress for small DX-first teams.</action>
      <action>Create folders `{framework}/tests/`, `{framework}/support/fixtures/`, `{framework}/support/helpers/`.</action>
      <action>Configure timeouts (action 15s, navigation 30s, test 60s) and reporters (HTML + JUnit).</action>
      <action>Generate `.env.example` with `TEST_ENV`, `BASE_URL`, `API_URL` plus `.nvmrc`.</action>
      <action>Implement pure function → fixture → `mergeTests` pattern and faker-based data factories.</action>
      <action>Enable failure-only screenshots/videos and document setup in README.</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Produce Playwright/Cypress scaffold (config + support tree), `.env.example`, `.nvmrc`, seed tests, and README instructions.</action>
    </step>
  </flow>
  <halt>
    <i>If prerequisites fail or an existing harness is detected, halt and notify the user.</i>
  </halt>
  <notes>
    <i>Reference `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for heuristics that shape this guidance.</i>
    <i>Playwright: take advantage of worker parallelism, trace viewer, multi-language support.</i>
    <i>Cypress: avoid when dependent API chains are heavy; consider component testing (Vitest/Cypress CT).</i>
    <i>Contract testing: suggest Pact for microservices; always recommend data-cy/data-testid selectors.</i>
  </notes>
  <output>
    <i>Scaffolded framework assets and summary of what was created.</i>
  </output>
</task>
```
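The pure function → fixture → `mergeTests` action above could look roughly like the sketch below in a Playwright scaffold; the file layout, factory shape, and fixture names are assumptions made for illustration only.

```ts
// playwright/support/fixtures/user.fixture.ts (hypothetical path)
import { test as base, mergeTests, expect } from "@playwright/test";
import { faker } from "@faker-js/faker";

// Pure function: a faker-based data factory that is trivial to unit test on its own.
export const buildUser = (overrides: Partial<{ email: string; name: string }> = {}) => ({
  email: faker.internet.email(),
  name: faker.person.fullName(),
  ...overrides,
});

// Fixture wrapping the factory so specs can simply request `user`.
const userTest = base.extend<{ user: ReturnType<typeof buildUser> }>({
  user: async ({}, use) => {
    await use(buildUser());
  },
});

// Additional fixtures (auth, network, logging) merge in the same way.
export const test = mergeTests(userTest);
export { expect };
```

Specs then import `test` and `expect` from this module instead of `@playwright/test`, which keeps data creation out of the test bodies and makes the factories reusable across levels.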
25
src/modules/bmm/workflows/testarch/framework/workflow.yaml
Normal file
@@ -0,0 +1,25 @@
# Test Architect workflow: framework
name: testarch-framework
description: "Initialize or refresh the test framework harness."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/framework"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - setup
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true
39
src/modules/bmm/workflows/testarch/gate/instructions.md
Normal file
@@ -0,0 +1,39 @@
<!-- Powered by BMAD-CORE™ -->

# Quality Gate v3.0

```xml
<task id="bmad/bmm/testarch/gate" name="Quality Gate">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Latest assessments (risk/test design, trace, automation, NFR) are available.</i>
    <i>- Team has consensus on fixes/mitigations.</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Gather required assessments and confirm consensus; halt if information is stale or missing.</action>
    </step>
    <step n="2" title="Determine Gate Decision">
      <action>Assemble story metadata (id, title, links) for the gate file.</action>
      <action>Apply deterministic rules: PASS (all critical issues resolved), CONCERNS (minor residual risk), FAIL (critical blockers), WAIVED (business-approved waiver).</action>
      <action>Document rationale, residual risks, owners, due dates, and waiver details where applicable.</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Update gate YAML with schema fields (story info, status, rationale, waiver, top issues, risk summary, recommendations, NFR validation, history).</action>
      <action>Provide summary message for the team highlighting decision and next steps.</action>
    </step>
  </flow>
  <halt>
    <i>If reviews are incomplete or risk data is outdated, halt and request the necessary reruns.</i>
  </halt>
  <notes>
    <i>Reference `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for heuristics that shape this guidance.</i>
    <i>FAIL whenever unresolved P0 risks/tests or security issues remain.</i>
    <i>CONCERNS when mitigations are planned but residual risk exists; WAIVED requires reason, approver, and expiry.</i>
    <i>Maintain audit trail in the history section.</i>
  </notes>
  <output>
    <i>Gate YAML entry and communication summary documenting the decision.</i>
  </output>
</task>
```
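One way to picture the schema fields listed in the deliverables step is as a typed record; the field names below are an illustrative reading of that list, not the canonical gate schema.

```ts
// Hypothetical shape of a gate entry, mirroring the fields named above.
type GateStatus = "PASS" | "CONCERNS" | "FAIL" | "WAIVED";

interface GateEntry {
  story: { id: string; title: string; links?: string[] };
  status: GateStatus;
  rationale: string;
  waiver?: { reason: string; approver: string; expires: string }; // required only when WAIVED
  topIssues: Array<{ id: string; severity: "P0" | "P1" | "P2"; owner?: string; due?: string }>;
  riskSummary: Record<string, number>;
  recommendations: string[];
  nfrValidation: Record<string, "PASS" | "CONCERNS" | "FAIL">;
  history: Array<{ date: string; status: GateStatus; note?: string }>; // audit trail
}
```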
25
src/modules/bmm/workflows/testarch/gate/workflow.yaml
Normal file
@@ -0,0 +1,25 @@
# Test Architect workflow: gate
name: testarch-gate
description: "Record the quality gate decision for the story."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/gate"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - gate
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true
39
src/modules/bmm/workflows/testarch/nfr-assess/instructions.md
Normal file
@@ -0,0 +1,39 @@
<!-- Powered by BMAD-CORE™ -->

# NFR Assessment v3.0

```xml
<task id="bmad/bmm/testarch/nfr-assess" name="NFR Assessment">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Implementation is deployed locally or accessible for evaluation.</i>
    <i>- Non-functional goals/SLAs are defined or discoverable.</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Confirm prerequisites; halt if targets are unknown and cannot be clarified.</action>
    </step>
    <step n="2" title="Assess NFRs">
      <action>Identify which NFRs to assess (default: Security, Performance, Reliability, Maintainability).</action>
      <action>Gather thresholds from story/architecture/technical preferences; mark unknown targets.</action>
      <action>Inspect evidence (tests, telemetry, logs) for each NFR and classify status using deterministic PASS/CONCERNS/FAIL rules.</action>
      <action>List quick wins and recommended actions for any concerns/failures.</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Produce NFR assessment markdown summarizing evidence, status, and actions; update gate YAML block with NFR findings; compile checklist of evidence gaps and owners.</action>
    </step>
  </flow>
  <halt>
    <i>If NFR targets are undefined and cannot be obtained, halt and request definition.</i>
  </halt>
  <notes>
    <i>Reference `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for heuristics that shape this guidance.</i>
    <i>Unknown thresholds default to CONCERNS—never guess.</i>
    <i>Ensure every NFR has evidence or call it out explicitly.</i>
    <i>Suggest monitoring hooks and fail-fast mechanisms when gaps exist.</i>
  </notes>
  <output>
    <i>NFR assessment report with actionable follow-ups and gate snippet.</i>
  </output>
</task>
```
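The deterministic PASS/CONCERNS/FAIL rule in step 2 can be read as a small decision function; the evidence shape below is an assumption used only to make the rule concrete.

```ts
// Sketch of the deterministic NFR status rule described above.
type NfrStatus = "PASS" | "CONCERNS" | "FAIL";

interface NfrEvidence {
  target?: string;        // documented threshold or SLA, e.g. "p95 < 400ms"; undefined when unknown
  evidenceFound: boolean; // tests, telemetry, or logs were located for this NFR
  criticalIssue: boolean; // confirmed breach of the target
}

function classifyNfr({ target, evidenceFound, criticalIssue }: NfrEvidence): NfrStatus {
  if (criticalIssue) return "FAIL";                 // a confirmed breach is always a FAIL
  if (!target || !evidenceFound) return "CONCERNS"; // unknown thresholds are never guessed
  return "PASS";
}
```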
25
src/modules/bmm/workflows/testarch/nfr-assess/workflow.yaml
Normal file
@@ -0,0 +1,25 @@
# Test Architect workflow: nfr-assess
name: testarch-nfr
description: "Assess non-functional requirements before release."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/nfr-assess"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - nfr
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true
43
src/modules/bmm/workflows/testarch/test-design/instructions.md
Normal file
@@ -0,0 +1,43 @@
<!-- Powered by BMAD-CORE™ -->

# Risk & Test Design v3.1

```xml
<task id="bmad/bmm/testarch/test-design" name="Risk & Test Design">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Story markdown, acceptance criteria, PRD/architecture context are available.</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Confirm inputs; halt if any are missing or unclear.</action>
    </step>
    <step n="2" title="Assess Risks">
      <action>Consult `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for the latest risk heuristics before scoring.</action>
      <action>Filter requirements to isolate genuine risks; review PRD/architecture/story for unresolved gaps.</action>
      <action>Classify risks across TECH, SEC, PERF, DATA, BUS, OPS; request clarification when evidence is missing.</action>
      <action>Score probability (1 unlikely, 2 possible, 3 likely) and impact (1 minor, 2 degraded, 3 critical); compute totals and highlight scores ≥6.</action>
      <action>Plan mitigations with owners and timelines, and update residual risk expectations.</action>
    </step>
    <step n="3" title="Design Coverage">
      <action>Break acceptance criteria into atomic scenarios tied to mitigations.</action>
      <action>Choose test levels using `{project-root}/bmad/bmm/testarch/test-levels-framework.md` and avoid duplicate coverage (prefer lower levels when possible).</action>
      <action>Assign priorities using `{project-root}/bmad/bmm/testarch/test-priorities-matrix.md`; outline data/tooling prerequisites and execution order.</action>
    </step>
    <step n="4" title="Deliverables">
      <action>Create risk assessment markdown (category/probability/impact/score) with mitigation matrix and gate snippet totals.</action>
      <action>Produce coverage matrix (requirement/level/priority/mitigation) plus recommended execution order.</action>
    </step>
  </flow>
  <halt>
    <i>If story data or criteria are missing, halt and request them.</i>
  </halt>
  <notes>
    <i>Category definitions: TECH=architecture flaws; SEC=missing controls; PERF=SLA risk; DATA=loss/corruption; BUS=user/business harm; OPS=deployment/run failures.</i>
    <i>Rely on evidence, not speculation; tie scenarios back to mitigations; keep scenarios independent and maintainable.</i>
  </notes>
  <output>
    <i>Unified risk assessment and coverage strategy ready for implementation.</i>
  </output>
</task>
```
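The probability × impact scoring in step 2 reduces to a small calculation; the type names below are illustrative, not part of the workflow's own schema.

```ts
// Sketch of the risk scoring rule: probability (1-3) × impact (1-3), highlight scores >= 6.
type RiskCategory = "TECH" | "SEC" | "PERF" | "DATA" | "BUS" | "OPS";

interface Risk {
  id: string;
  category: RiskCategory;
  probability: 1 | 2 | 3; // unlikely / possible / likely
  impact: 1 | 2 | 3;      // minor / degraded / critical
}

const score = (risk: Risk): number => risk.probability * risk.impact;

// Scores of 6 or more get explicit mitigation plans, owners, and timelines.
const needsMitigationPlan = (risk: Risk): boolean => score(risk) >= 6;
```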
25
src/modules/bmm/workflows/testarch/test-design/workflow.yaml
Normal file
@@ -0,0 +1,25 @@
# Test Architect workflow: test-design
name: testarch-plan
description: "Plan risk mitigation and test coverage before development."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/test-design"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - planning
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true
39
src/modules/bmm/workflows/testarch/trace/instructions.md
Normal file
@@ -0,0 +1,39 @@
<!-- Powered by BMAD-CORE™ -->

# Requirements Traceability v3.0

```xml
<task id="bmad/bmm/testarch/trace" name="Requirements Traceability">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Story has implemented tests (or acknowledge gaps).</i>
    <i>- Access to source code and specifications is available.</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Confirm prerequisites; halt if tests or specs are unavailable.</action>
    </step>
    <step n="2" title="Trace Coverage">
      <action>Gather acceptance criteria and implemented tests.</action>
      <action>Map each criterion to concrete tests (file + describe/it) using Given-When-Then narrative.</action>
      <action>Classify coverage status as FULL, PARTIAL, NONE, UNIT-ONLY, or INTEGRATION-ONLY.</action>
      <action>Flag severity based on priority (P0 gaps are critical) and recommend additional tests or refactors.</action>
      <action>Build gate YAML coverage summary reflecting totals and gaps.</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Generate traceability report under `docs/qa/assessments`, a coverage matrix per criterion, and gate YAML snippet capturing totals/gaps.</action>
    </step>
  </flow>
  <halt>
    <i>If story lacks implemented tests, pause and advise running `*atdd` or writing tests before tracing.</i>
  </halt>
  <notes>
    <i>Reference `{project-root}/bmad/bmm/testarch/tea-knowledge.md` for heuristics that shape this guidance.</i>
    <i>Coverage definitions: FULL=all scenarios validated, PARTIAL=some coverage, NONE=no validation, UNIT-ONLY=missing higher-level validation, INTEGRATION-ONLY=lacks lower-level confidence.</i>
    <i>Ensure assertions stay explicit and avoid duplicate coverage.</i>
  </notes>
  <output>
    <i>Traceability matrix and gate snippet ready for review.</i>
  </output>
</task>
```
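The coverage classification in step 2 maps naturally onto a per-criterion record like the sketch below; the field names are assumptions made for illustration.

```ts
// Illustrative per-criterion trace record using the coverage statuses defined above.
type CoverageStatus = "FULL" | "PARTIAL" | "NONE" | "UNIT-ONLY" | "INTEGRATION-ONLY";

interface CriterionTrace {
  criterion: string;                             // acceptance criterion id or summary
  tests: Array<{ file: string; title: string }>; // spec file plus describe/it title
  status: CoverageStatus;
  priority: "P0" | "P1" | "P2";
}

// P0 criteria without FULL coverage are the critical gaps flagged for the gate summary.
const isCriticalGap = (trace: CriterionTrace): boolean =>
  trace.priority === "P0" && trace.status !== "FULL";
```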
25
src/modules/bmm/workflows/testarch/trace/workflow.yaml
Normal file
@@ -0,0 +1,25 @@
# Test Architect workflow: trace
name: testarch-trace
description: "Trace requirements to implemented automated tests."
author: "BMad"

config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

installed_path: "{project-root}/bmad/bmm/workflows/testarch/trace"
instructions: "{installed_path}/instructions.md"

template: false

tags:
  - qa
  - traceability
  - test-architect

execution_hints:
  interactive: false
  autonomous: true
  iterative: true