mirror of
https://github.com/bmad-code-org/BMAD-METHOD.git
synced 2026-01-30 04:32:02 +00:00
Compare commits: feature/qu...9ebc4ce9c0 (312 commits)
.coderabbit.yaml (new file, 40 lines)

````yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

language: "en-US"
early_access: true
reviews:
  profile: chill
  high_level_summary: false # don't post summary until explicitly invoked
  request_changes_workflow: false
  review_status: false
  commit_status: false
  walkthrough: false
  poem: false
  auto_review:
    enabled: true
    drafts: false # Don't review drafts automatically
    auto_incremental_review: false # always review the whole PR, not just new commits
    base_branches:
      - main
  path_filters:
    - "!**/node_modules/**"
  path_instructions:
    - path: "**/*"
      instructions: |
        Focus on inconsistencies, contradictions, edge cases and serious issues.
        Avoid commenting on minor issues such as linting, formatting and style issues.
        When providing code suggestions, use GitHub's suggestion format:
        ```suggestion
        <code changes>
        ```
    - path: "**/*.js"
      instructions: |
        CLI tooling code. Check for: missing error handling on fs operations,
        path.join vs string concatenation, proper cleanup in error paths.
        Flag any process.exit() without error message.
chat:
  auto_reply: true # Respond to mentions in comments, a la @coderabbit review
issue_enrichment:
  auto_enrich:
    enabled: false # don't auto-comment on issues
````
```diff
@@ -60,7 +60,7 @@ representative at an online or offline event.
 
 Instances of abusive, harassing, or otherwise unacceptable behavior may be
 reported to the community leaders responsible for enforcement at
-the official BMAD Discord server (https://discord.com/invite/gk8jAdXWmj) - DM a moderator or flag a post.
+the official BMAD Discord server (<https://discord.com/invite/gk8jAdXWmj>) - DM a moderator or flag a post.
 All complaints will be reviewed and investigated promptly and fairly.
 
 All community leaders are obligated to respect the privacy and security of the
@@ -116,7 +116,7 @@ the community.
 
 This Code of Conduct is adapted from the [Contributor Covenant][homepage],
 version 2.0, available at
-https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+<https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.
 
 Community Impact Guidelines were inspired by [Mozilla's code of conduct
 enforcement ladder](https://github.com/mozilla/diversity).
@@ -124,5 +124,5 @@ enforcement ladder](https://github.com/mozilla/diversity).
 [homepage]: https://www.contributor-covenant.org
 
 For answers to common questions about this code of conduct, see the FAQ at
-https://www.contributor-covenant.org/faq. Translations are available at
-https://www.contributor-covenant.org/translations.
+<https://www.contributor-covenant.org/faq>. Translations are available at
+<https://www.contributor-covenant.org/translations>.
```
.github/ISSUE_TEMPLATE/bug_report.md (deleted, 32 lines)

```markdown
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---

**Describe the bug**
A clear and concise description of what the bug is.

**Steps to Reproduce**
What lead to the bug and can it be reliable recreated - if so with what steps.

**PR**
If you have an idea to fix and would like to contribute, please indicate here you are working on a fix, or link to a proposed PR to fix the issue. Please review the contribution.md - contributions are always welcome!

**Expected behavior**
A clear and concise description of what you expected to happen.

**Please be Specific if relevant**
Model(s) Used:
Agentic IDE Used:
WebSite Used:
Project Language:
BMad Method version:

**Screenshots or Links**
If applicable, add screenshots or links (if web sharable record) to help explain your problem.

**Additional context**
Add any other context about the problem here. The more information you can provide, the easier it will be to suggest a fix or resolve
```
.github/ISSUE_TEMPLATE/config.yaml

```diff
@@ -1,5 +1,8 @@
 blank_issues_enabled: false
 contact_links:
-  - name: Discord Community Support
+  - name: 📚 Documentation
+    url: http://docs.bmad-method.org
+    about: Check the docs first — tutorials, guides, and reference
+  - name: 💬 Discord Community
     url: https://discord.gg/gk8jAdXWmj
-    about: Please join our Discord server for general questions and community discussion before opening an issue.
+    about: Join for questions, discussion, and help before opening an issue
```
.github/ISSUE_TEMPLATE/feature_request.md (new file, 22 lines)

```markdown
---
name: Feature Request
about: Suggest an idea or new feature
title: ''
labels: ''
assignees: ''
---

**Describe your idea**
A clear and concise description of what you'd like to see added or changed.

**Why is this needed?**
Explain the problem this solves or the benefit it brings to the BMad community.

**How should it work?**
Describe your proposed solution. If you have ideas on implementation, share them here.

**PR**
If you'd like to contribute, please indicate you're working on this or link to your PR. Please review [CONTRIBUTING.md](../../CONTRIBUTING.md) — contributions are always welcome!

**Additional context**
Add any other context, screenshots, or links that help explain your idea.
```
.github/ISSUE_TEMPLATE/idea_submission.md (deleted, 109 lines)

```markdown
---
name: V6 Idea Submission
about: Suggest an idea for v6
title: ''
labels: ''
assignees: ''
---

# Idea: [Replace with a clear, actionable title]

### PASS Framework

**P**roblem:

> What's broken or missing? What pain point are we addressing? (1-2 sentences)
>
> [Your answer here]

**A**udience:

> Who's affected by this problem and how severely? (1-2 sentences)
>
> [Your answer here]

**S**olution:

> What will we build or change? How will we measure success? (1-2 sentences with at least 1 measurable outcome)
>
> [Your answer here]
>
> [Your Acceptance Criteria for measuring success here]

**S**ize:

> How much effort do you estimate this will take?
>
> - [ ] **XS** - A few hours
> - [ ] **S** - 1-2 days
> - [ ] **M** - 3-5 days
> - [ ] **L** - 1-2 weeks
> - [ ] **XL** - More than 2 weeks

---

### Metadata

**Submitted by:** [Your name]
**Date:** [Today's date]
**Priority:** [Leave blank - will be assigned during team review]

---

## Examples

<details>
<summary>Click to see a GOOD example</summary>

### Idea: Add search functionality to customer dashboard

**P**roblem:
Customers can't find their past orders quickly. They have to scroll through pages of orders to find what they're looking for, leading to 15+ support tickets per week.

**A**udience:
All 5,000+ active customers are affected. Support team spends ~10 hours/week helping customers find orders.

**S**olution:
Add a search bar that filters by order number, date range, and product name. Success = 50% reduction in order-finding support tickets within 2 weeks of launch.

**S**ize:

- [x] **M** - 3-5 days

</details>

<details>
<summary>Click to see a POOR example</summary>

### Idea: Make the app better

**P**roblem:
The app needs improvements and updates.

**A**udience:
Users

**S**olution:
Fix issues and add features.

**S**ize:

- [ ] Unknown

_Why this is poor: Too vague, no specific problem identified, no measurable success criteria, unclear scope_

</details>

---

## Tips for Success

1. **Be specific** - Vague problems lead to vague solutions
2. **Quantify when possible** - Numbers help us prioritize (e.g., "20 customers asked for this" vs "customers want this")
3. **One idea per submission** - If you have multiple ideas, submit multiple templates
4. **Success metrics matter** - How will we know this worked?
5. **Honest sizing** - Better to overestimate than underestimate

## Questions?

Reach out to @OverlordBaconPants if you need help completing this template.
```
.github/ISSUE_TEMPLATE/issue.md (new file, 32 lines)

```markdown
---
name: Issue
about: Report a problem or something that's not working
title: ''
labels: ''
assignees: ''
---

**Describe the bug**
A clear and concise description of what the bug is.

**Steps to reproduce**
1. What were you doing when the bug occurred?
2. What steps can recreate the issue?

**Expected behavior**
A clear and concise description of what you expected to happen.

**Environment (if relevant)**
- Model(s) used:
- Agentic IDE used:
- BMad version:
- Project language:

**Screenshots or links**
If applicable, add screenshots or links to help explain the problem.

**PR**
If you'd like to contribute a fix, please indicate you're working on it or link to your PR. See [CONTRIBUTING.md](../../CONTRIBUTING.md) — contributions are always welcome!

**Additional context**
Add any other context about the problem here. The more information you provide, the easier it is to help.
```
.github/scripts/discord-helpers.sh (new file, 34 lines)

```bash
#!/bin/bash
# Discord notification helper functions

# Escape markdown special chars and @mentions for safe Discord display
# Skips content inside <URL> wrappers to preserve URLs intact
esc() {
  awk '{
    result = ""; in_url = 0; n = length($0)
    for (i = 1; i <= n; i++) {
      c = substr($0, i, 1)
      if (c == "<" && substr($0, i, 8) ~ /^<https?:/) in_url = 1
      if (in_url) { result = result c; if (c == ">") in_url = 0 }
      else if (c == "@") result = result "@ "
      else if (index("[]\\*_()~`", c) > 0) result = result "\\" c
      else result = result c
    }
    print result
  }'
}

# Truncate to $1 chars (or 80 if wall-of-text with <3 spaces)
trunc() {
  local max=$1
  local txt=$(tr '\n\r' ' ' | cut -c1-"$max")
  local spaces=$(printf '%s' "$txt" | tr -cd ' ' | wc -c)
  [ "$spaces" -lt 3 ] && [ ${#txt} -gt 80 ] && txt=$(printf '%s' "$txt" | cut -c1-80)
  printf '%s' "$txt"
}

# Remove incomplete URL at end of truncated text (incomplete URLs are useless)
strip_trailing_url() { sed -E 's~<?https?://[^[:space:]]*$~~'; }

# Wrap URLs in <> to suppress Discord embeds (keeps links clickable)
wrap_urls() { sed -E 's~https?://[^[:space:]<>]+~<&>~g'; }
```
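A quick sanity check of how these helpers compose, with two of the functions repeated so the sketch is self-contained (the sample strings are invented, not from the repository):

```shell
# esc and wrap_urls as defined in discord-helpers.sh above.
esc() {
  awk '{
    result = ""; in_url = 0; n = length($0)
    for (i = 1; i <= n; i++) {
      c = substr($0, i, 1)
      if (c == "<" && substr($0, i, 8) ~ /^<https?:/) in_url = 1
      if (in_url) { result = result c; if (c == ">") in_url = 0 }
      else if (c == "@") result = result "@ "
      else if (index("[]\\*_()~`", c) > 0) result = result "\\" c
      else result = result c
    }
    print result
  }'
}
wrap_urls() { sed -E 's~https?://[^[:space:]<>]+~<&>~g'; }

# Mentions are defused and markdown characters escaped:
printf '%s' 'ping @everyone *now*' | esc            # ping @ everyone \*now\*
# URLs wrapped first pass through esc untouched (the <...> guard):
printf '%s' 'see https://example.com/a_b docs' | wrap_urls | esc
```

The order matters: `wrap_urls` must run before `esc`, because `esc` only skips escaping inside already-wrapped `<https…>` spans.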
.github/workflows/bundle-latest.yaml (deleted, 329 lines)

````yaml
name: Publish Latest Bundles

on:
  push:
    branches: [main]
  workflow_dispatch: {}

permissions:
  contents: write

jobs:
  bundle-and-publish:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout BMAD-METHOD
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version-file: ".nvmrc"
          cache: npm

      - name: Install dependencies
        run: npm ci

      - name: Generate bundles
        run: npm run bundle

      - name: Create bundle distribution structure
        run: |
          mkdir -p dist/bundles

          # Copy web bundles (XML files from npm run bundle output)
          cp -r web-bundles/* dist/bundles/ 2>/dev/null || true

          # Verify bundles were copied (fail if completely empty)
          if [ ! "$(ls -A dist/bundles)" ]; then
            echo "❌ ERROR: No bundles found in dist/bundles/"
            echo "This likely means 'npm run bundle' failed or bundles weren't generated"
            exit 1
          fi

          # Count bundles per module
          for module in bmm bmb cis bmgd; do
            if [ -d "dist/bundles/$module/agents" ]; then
              COUNT=$(find dist/bundles/$module/agents -name '*.xml' 2>/dev/null | wc -l)
              echo "✅ $module: $COUNT agent bundles"
            fi
          done

          # Generate index.html for each agents directory (fixes directory browsing)
          for module in bmm bmb cis bmgd; do
            if [ -d "dist/bundles/$module/agents" ]; then
              cat > "dist/bundles/$module/agents/index.html" << 'DIREOF'
          <!DOCTYPE html>
          <html>
          <head>
          <title>MODULE_NAME Agents</title>
          <style>
          body { font-family: system-ui; max-width: 800px; margin: 50px auto; padding: 20px; }
          li { margin: 10px 0; }
          a { color: #0066cc; text-decoration: none; }
          a:hover { text-decoration: underline; }
          </style>
          </head>
          <body>
          <h1>MODULE_NAME Agents</h1>
          <ul>
          AGENT_LINKS
          </ul>
          <p><a href="../../">← Back to all modules</a></p>
          </body>
          </html>
          DIREOF

              # Replace MODULE_NAME
              sed -i "s/MODULE_NAME/${module^^}/g" "dist/bundles/$module/agents/index.html"

              # Generate agent links
              LINKS=""
              for file in dist/bundles/$module/agents/*.xml; do
                if [ -f "$file" ]; then
                  name=$(basename "$file" .xml)
                  LINKS="$LINKS <li><a href=\"./$name.xml\">$name</a></li>\n"
                fi
              done
              sed -i "s|AGENT_LINKS|$LINKS|" "dist/bundles/$module/agents/index.html"
            fi
          done

          # Create zip archives per module
          mkdir -p dist/bundles/downloads
          for module in bmm bmb cis bmgd; do
            if [ -d "dist/bundles/$module" ]; then
              (cd dist/bundles && zip -r downloads/$module-agents.zip $module/)
              echo "✅ Created $module-agents.zip"
            fi
          done

          # Generate index.html dynamically based on actual bundles
          TIMESTAMP=$(date -u +"%Y-%m-%d %H:%M UTC")
          COMMIT_SHA=$(git rev-parse --short HEAD)

          # Function to generate agent links for a module
          generate_agent_links() {
            local module=$1
            local agent_dir="dist/bundles/$module/agents"

            if [ ! -d "$agent_dir" ]; then
              echo ""
              return
            fi

            local links=""
            local count=0

            # Find all XML files and generate links
            for xml_file in "$agent_dir"/*.xml; do
              if [ -f "$xml_file" ]; then
                local agent_name=$(basename "$xml_file" .xml)
                # Convert filename to display name (pm -> PM, tech-writer -> Tech Writer)
                local display_name=$(echo "$agent_name" | sed 's/-/ /g' | awk '{for(i=1;i<=NF;i++) {if(length($i)==2) $i=toupper($i); else $i=toupper(substr($i,1,1)) tolower(substr($i,2));}}1')

                if [ $count -gt 0 ]; then
                  links="$links | "
                fi
                links="$links<a href=\"./$module/agents/$agent_name.xml\">$display_name</a>"
                count=$((count + 1))
              fi
            done

            echo "$links"
          }

          # Generate agent links for each module
          BMM_LINKS=$(generate_agent_links "bmm")
          CIS_LINKS=$(generate_agent_links "cis")
          BMGD_LINKS=$(generate_agent_links "bmgd")

          # Count agents for bulk downloads
          BMM_COUNT=$(find dist/bundles/bmm/agents -name '*.xml' 2>/dev/null | wc -l | tr -d ' ')
          CIS_COUNT=$(find dist/bundles/cis/agents -name '*.xml' 2>/dev/null | wc -l | tr -d ' ')
          BMGD_COUNT=$(find dist/bundles/bmgd/agents -name '*.xml' 2>/dev/null | wc -l | tr -d ' ')

          # Create index.html
          cat > dist/bundles/index.html << EOF
          <!DOCTYPE html>
          <html>
          <head>
          <title>BMAD Bundles - Latest</title>
          <meta charset="utf-8">
          <style>
          body { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif; max-width: 800px; margin: 50px auto; padding: 20px; }
          h1 { color: #333; }
          .platform { margin: 30px 0; padding: 20px; background: #f5f5f5; border-radius: 8px; }
          .module { margin: 15px 0; }
          a { color: #0066cc; text-decoration: none; }
          a:hover { text-decoration: underline; }
          code { background: #e0e0e0; padding: 2px 6px; border-radius: 3px; }
          .warning { background: #fff3cd; padding: 15px; border-left: 4px solid #ffc107; margin: 20px 0; }
          </style>
          </head>
          <body>
          <h1>BMAD Web Bundles - Latest (Main Branch)</h1>

          <div class="warning">
          <strong>⚠️ Latest Build (Unstable)</strong><br>
          These bundles are built from the latest main branch commit. For stable releases, visit
          <a href="https://github.com/bmad-code-org/BMAD-METHOD/releases/latest">GitHub Releases</a>.
          </div>

          <p><strong>Last Updated:</strong> <code>$TIMESTAMP</code></p>
          <p><strong>Commit:</strong> <code>$COMMIT_SHA</code></p>

          <h2>Available Modules</h2>

          EOF

          # Add BMM section if agents exist
          if [ -n "$BMM_LINKS" ]; then
            cat >> dist/bundles/index.html << EOF
          <div class="platform">
          <h3>BMM (BMad Method)</h3>
          <div class="module">
          $BMM_LINKS<br>
          📁 <a href="./bmm/agents/">Browse All</a> | 📦 <a href="./downloads/bmm-agents.zip">Download Zip</a>
          </div>
          </div>

          EOF
          fi

          # Add CIS section if agents exist
          if [ -n "$CIS_LINKS" ]; then
            cat >> dist/bundles/index.html << EOF
          <div class="platform">
          <h3>CIS (Creative Intelligence Suite)</h3>
          <div class="module">
          $CIS_LINKS<br>
          📁 <a href="./cis/agents/">Browse Agents</a> | 📦 <a href="./downloads/cis-agents.zip">Download Zip</a>
          </div>
          </div>

          EOF
          fi

          # Add BMGD section if agents exist
          if [ -n "$BMGD_LINKS" ]; then
            cat >> dist/bundles/index.html << EOF
          <div class="platform">
          <h3>BMGD (Game Development)</h3>
          <div class="module">
          $BMGD_LINKS<br>
          📁 <a href="./bmgd/agents/">Browse Agents</a> | 📦 <a href="./downloads/bmgd-agents.zip">Download Zip</a>
          </div>
          </div>

          EOF
          fi

          # Add bulk downloads section
          cat >> dist/bundles/index.html << EOF
          <h2>Bulk Downloads</h2>
          <p>Download all agents for a module as a zip archive:</p>
          <ul>
          EOF

          [ "$BMM_COUNT" -gt 0 ] && echo " <li><a href=\"./downloads/bmm-agents.zip\">📦 BMM Agents (all $BMM_COUNT)</a></li>" >> dist/bundles/index.html
          [ "$CIS_COUNT" -gt 0 ] && echo " <li><a href=\"./downloads/cis-agents.zip\">📦 CIS Agents (all $CIS_COUNT)</a></li>" >> dist/bundles/index.html
          [ "$BMGD_COUNT" -gt 0 ] && echo " <li><a href=\"./downloads/bmgd-agents.zip\">📦 BMGD Agents (all $BMGD_COUNT)</a></li>" >> dist/bundles/index.html

          # Close HTML
          cat >> dist/bundles/index.html << 'EOF'
          </ul>

          <h2>Usage</h2>
          <p>Copy the raw XML URL and paste into your AI platform's custom instructions or project knowledge.</p>
          <p>Example: <code>https://raw.githubusercontent.com/bmad-code-org/bmad-bundles/main/bmm/agents/pm.xml</code></p>

          <h2>Installation (Recommended)</h2>
          <p>For full IDE integration with slash commands, use the installer:</p>
          <pre>npx bmad-method@alpha install</pre>

          <footer style="margin-top: 50px; padding-top: 20px; border-top: 1px solid #ccc; color: #666;">
          <p>Built from <a href="https://github.com/bmad-code-org/BMAD-METHOD">BMAD-METHOD</a> repository.</p>
          </footer>
          </body>
          </html>
          EOF

      - name: Checkout bmad-bundles repo
        uses: actions/checkout@v4
        with:
          repository: bmad-code-org/bmad-bundles
          path: bmad-bundles
          token: ${{ secrets.BUNDLES_PAT }}

      - name: Update bundles
        run: |
          # Clear old bundles
          rm -rf bmad-bundles/*

          # Copy new bundles
          cp -r dist/bundles/* bmad-bundles/

          # Create .nojekyll for GitHub Pages
          touch bmad-bundles/.nojekyll

          # Create README
          cat > bmad-bundles/README.md << 'EOF'
          # BMAD Web Bundles (Latest)

          **⚠️ Unstable Build**: These bundles are auto-generated from the latest `main` branch.

          For stable releases, visit [GitHub Releases](https://github.com/bmad-code-org/BMAD-METHOD/releases/latest).

          ## Usage

          Copy raw markdown URLs for use in AI platforms:

          - Claude Code: `https://raw.githubusercontent.com/bmad-code-org/bmad-bundles/main/claude-code/sub-agents/{agent}.md`
          - ChatGPT: `https://raw.githubusercontent.com/bmad-code-org/bmad-bundles/main/chatgpt/sub-agents/{agent}.md`
          - Gemini: `https://raw.githubusercontent.com/bmad-code-org/bmad-bundles/main/gemini/sub-agents/{agent}.md`

          ## Browse

          Visit [https://bmad-code-org.github.io/bmad-bundles/](https://bmad-code-org.github.io/bmad-bundles/) to browse bundles.

          ## Installation (Recommended)

          For full IDE integration:

          ```bash
          npx bmad-method@alpha install
          ```

          ---

          Auto-updated by [BMAD-METHOD](https://github.com/bmad-code-org/BMAD-METHOD) on every main branch merge.
          EOF

      - name: Commit and push to bmad-bundles
        run: |
          cd bmad-bundles
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

          git add .

          if git diff --staged --quiet; then
            echo "No changes to bundles, skipping commit"
          else
            COMMIT_SHA=$(cd .. && git rev-parse --short HEAD)
            git commit -m "Update bundles from BMAD-METHOD@${COMMIT_SHA}"
            git push
            echo "✅ Bundles published to GitHub Pages"
          fi

      - name: Summary
        run: |
          echo "## 🎉 Bundles Published!" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Latest bundles** available at:" >> $GITHUB_STEP_SUMMARY
          echo "- 🌐 Browse: https://bmad-code-org.github.io/bmad-bundles/" >> $GITHUB_STEP_SUMMARY
          echo "- 📦 Raw files: https://github.com/bmad-code-org/bmad-bundles" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Commit**: ${{ github.sha }}" >> $GITHUB_STEP_SUMMARY
````
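The deleted workflow's filename-to-display-name conversion is worth noting on its own: it upper-cases two-letter names whole and title-cases longer words. A minimal sketch of that sed-plus-awk pipeline, wrapped in a hypothetical `to_display` helper (the function name is mine, not the workflow's):

```shell
# Hypothetical wrapper around the display-name pipeline from the deleted workflow.
to_display() {
  echo "$1" | sed 's/-/ /g' | awk '{for(i=1;i<=NF;i++) {if(length($i)==2) $i=toupper($i); else $i=toupper(substr($i,1,1)) tolower(substr($i,2));}}1'
}

to_display "pm"           # two-letter names become acronyms: PM
to_display "tech-writer"  # hyphenated names become title case: Tech Writer
```

Note the length-2 heuristic is crude: a three-letter module like `cis` would render as "Cis", not "CIS".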
92
.github/workflows/discord.yaml
vendored
92
.github/workflows/discord.yaml
vendored
@@ -1,16 +1,90 @@
name: Discord Notification

"on": [pull_request, release, create, delete, issue_comment, pull_request_review, pull_request_review_comment]
on:
pull_request:
types: [opened, closed]
issues:
types: [opened]

env:
MAX_TITLE: 100
MAX_BODY: 250

jobs:
notify:
pull_request:
if: github.event_name == 'pull_request'
runs-on: ubuntu-latest
steps:
- name: Notify Discord
uses: sarisia/actions-status-discord@v1
if: always()
- uses: actions/checkout@v4
with:
webhook: ${{ secrets.DISCORD_WEBHOOK }}
status: ${{ job.status }}
title: "Triggered by ${{ github.event_name }}"
color: 0x5865F2
ref: ${{ github.event.repository.default_branch }}
sparse-checkout: .github/scripts
sparse-checkout-cone-mode: false
- name: Notify Discord
env:
WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
ACTION: ${{ github.event.action }}
MERGED: ${{ github.event.pull_request.merged }}
PR_NUM: ${{ github.event.pull_request.number }}
PR_URL: ${{ github.event.pull_request.html_url }}
PR_TITLE: ${{ github.event.pull_request.title }}
PR_USER: ${{ github.event.pull_request.user.login }}
PR_BODY: ${{ github.event.pull_request.body }}
run: |
set -o pipefail
source .github/scripts/discord-helpers.sh
[ -z "$WEBHOOK" ] && exit 0

if [ "$ACTION" = "opened" ]; then ICON="🔀"; LABEL="New PR"
elif [ "$ACTION" = "closed" ] && [ "$MERGED" = "true" ]; then ICON="🎉"; LABEL="Merged"
elif [ "$ACTION" = "closed" ]; then ICON="❌"; LABEL="Closed"; fi

TITLE=$(printf '%s' "$PR_TITLE" | trunc $MAX_TITLE | esc)
[ ${#PR_TITLE} -gt $MAX_TITLE ] && TITLE="${TITLE}..."
BODY=$(printf '%s' "$PR_BODY" | trunc $MAX_BODY)
if [ -n "$PR_BODY" ] && [ ${#PR_BODY} -gt $MAX_BODY ]; then
BODY=$(printf '%s' "$BODY" | strip_trailing_url)
fi
BODY=$(printf '%s' "$BODY" | wrap_urls | esc)
[ -n "$PR_BODY" ] && [ ${#PR_BODY} -gt $MAX_BODY ] && BODY="${BODY}..."
[ -n "$BODY" ] && BODY=" · $BODY"
USER=$(printf '%s' "$PR_USER" | esc)

MSG="$ICON **[$LABEL #$PR_NUM: $TITLE](<$PR_URL>)**"$'\n'"by @$USER$BODY"
jq -n --arg content "$MSG" '{content: $content}' | curl -sf --retry 2 -X POST "$WEBHOOK" -H "Content-Type: application/json" -d @-

issues:
if: github.event_name == 'issues'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.repository.default_branch }}
sparse-checkout: .github/scripts
sparse-checkout-cone-mode: false
- name: Notify Discord
env:
WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
ISSUE_NUM: ${{ github.event.issue.number }}
ISSUE_URL: ${{ github.event.issue.html_url }}
ISSUE_TITLE: ${{ github.event.issue.title }}
ISSUE_USER: ${{ github.event.issue.user.login }}
ISSUE_BODY: ${{ github.event.issue.body }}
run: |
set -o pipefail
source .github/scripts/discord-helpers.sh
[ -z "$WEBHOOK" ] && exit 0

TITLE=$(printf '%s' "$ISSUE_TITLE" | trunc $MAX_TITLE | esc)
[ ${#ISSUE_TITLE} -gt $MAX_TITLE ] && TITLE="${TITLE}..."
BODY=$(printf '%s' "$ISSUE_BODY" | trunc $MAX_BODY)
if [ -n "$ISSUE_BODY" ] && [ ${#ISSUE_BODY} -gt $MAX_BODY ]; then
BODY=$(printf '%s' "$BODY" | strip_trailing_url)
fi
BODY=$(printf '%s' "$BODY" | wrap_urls | esc)
[ -n "$ISSUE_BODY" ] && [ ${#ISSUE_BODY} -gt $MAX_BODY ] && BODY="${BODY}..."
[ -n "$BODY" ] && BODY=" · $BODY"
USER=$(printf '%s' "$ISSUE_USER" | esc)

MSG="🐛 **[Issue #$ISSUE_NUM: $TITLE](<$ISSUE_URL>)**"$'\n'"by @$USER$BODY"
jq -n --arg content "$MSG" '{content: $content}' | curl -sf --retry 2 -X POST "$WEBHOOK" -H "Content-Type: application/json" -d @-
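The workflow above sources `.github/scripts/discord-helpers.sh` for `trunc`, `esc`, `wrap_urls`, and `strip_trailing_url`, but that helper file is not part of this diff. A minimal sketch of what those helpers might look like (only the function names come from the call sites; the bodies are assumptions):

```shell
# Hypothetical implementations of the helpers the Discord steps call.
trunc() { head -c "$1"; }                                       # cap input at N bytes
esc() { sed 's/\\/\\\\/g; s/\*/\\*/g; s/_/\\_/g; s/`/\\`/g'; }  # escape Discord markdown
wrap_urls() { sed -E 's|(https?://[^ ]+)|<\1>|g'; }             # <url> suppresses embeds
strip_trailing_url() { sed -E 's|https?://[^ ]*$||'; }          # drop a URL cut off mid-way by truncation
```

Wrapping URLs in angle brackets keeps Discord from expanding a link preview for every URL left in a truncated PR or issue body.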
63
.github/workflows/docs.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
name: Deploy Documentation

on:
push:
branches:
- main
paths:
- "docs/**"
- "src/modules/*/docs/**"
- "website/**"
- "tools/build-docs.js"
- ".github/workflows/docs.yaml"
workflow_dispatch:

permissions:
contents: read
pages: write
id-token: write

concurrency:
group: "pages"
cancel-in-progress: false

jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"

- name: Install dependencies
run: npm ci

- name: Build documentation
env:
# Override site URL from GitHub repo variable if set
# Otherwise, astro.config.mjs will compute from GITHUB_REPOSITORY
SITE_URL: ${{ vars.SITE_URL }}
run: npm run docs:build

- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: build/site

deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
61
.github/workflows/manual-release.yaml
vendored
@@ -6,9 +6,11 @@ on:
version_bump:
description: Version bump type
required: true
default: patch
default: beta
type: choice
options:
- beta
- alpha
- patch
- minor
- major
@@ -49,7 +51,11 @@ jobs:
git config user.email "github-actions[bot]@users.noreply.github.com"

- name: Bump version
run: npm run version:${{ github.event.inputs.version_bump }}
run: |
case "${{ github.event.inputs.version_bump }}" in
alpha|beta) npm version prerelease --no-git-tag-version --preid=${{ github.event.inputs.version_bump }} ;;
*) npm version ${{ github.event.inputs.version_bump }} --no-git-tag-version ;;
esac
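The prerelease branch above relies on npm's bump semantics: `npm version prerelease --preid=beta` increments an existing `-beta.N` counter, while a stable version moves up one patch level into `-beta.0`. A standalone sketch of that rule (my own illustration, not npm's code):

```shell
# Mimics npm's prerelease bump rule for illustration only.
bump_prerelease() {
  v="$1" preid="$2"
  case "$v" in
    *-"$preid".*)
      # already on this prerelease series: bump the trailing counter
      echo "${v%.*}.$(( ${v##*.} + 1 ))" ;;
    *)
      # stable version: bump the patch level and start <preid>.0
      base="${v%%-*}"
      echo "${base%.*}.$(( ${base##*.} + 1 ))-$preid.0" ;;
  esac
}
```

For example, `bump_prerelease 6.0.0-beta.3 beta` yields `6.0.0-beta.4`, and `bump_prerelease 6.0.1 beta` yields `6.0.2-beta.0`.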

- name: Get new version and previous tag
id: version
@@ -61,34 +67,9 @@
run: |
sed -i 's/"version": ".*"/"version": "${{ steps.version.outputs.new_version }}"/' tools/installer/package.json

- name: Generate web bundles
run: npm run bundle

- name: Package bundles for release
run: |
mkdir -p dist/release-bundles

# Copy web bundles
cp -r web-bundles dist/release-bundles/bmad-bundles-v${{ steps.version.outputs.new_version }}

# Verify bundles exist
if [ ! "$(ls -A dist/release-bundles/bmad-bundles-v${{ steps.version.outputs.new_version }})" ]; then
echo "❌ ERROR: No bundles found"
echo "This likely means 'npm run bundle' failed"
exit 1
fi

# Count and display bundles per module
for module in bmm bmb cis bmgd; do
if [ -d "dist/release-bundles/bmad-bundles-v${{ steps.version.outputs.new_version }}/$module/agents" ]; then
COUNT=$(find dist/release-bundles/bmad-bundles-v${{ steps.version.outputs.new_version }}/$module/agents -name '*.xml' 2>/dev/null | wc -l)
echo "✅ $module: $COUNT agents"
fi
done

# Create archive
tar -czf dist/release-bundles/bmad-bundles-v${{ steps.version.outputs.new_version }}.tar.gz \
-C dist/release-bundles/bmad-bundles-v${{ steps.version.outputs.new_version }} .
# TODO: Re-enable web bundles once tools/cli/bundlers/ is restored
# - name: Generate web bundles
# run: npm run bundle

- name: Commit version bump
run: |
@@ -177,33 +158,26 @@
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
run: |
VERSION="${{ steps.version.outputs.new_version }}"
if [[ "$VERSION" == *"alpha"* ]] || [[ "$VERSION" == *"beta"* ]]; then
echo "Publishing prerelease version with --tag alpha"
if [[ "$VERSION" == *"alpha"* ]]; then
echo "Publishing alpha prerelease version with --tag alpha"
npm publish --tag alpha
elif [[ "$VERSION" == *"beta"* ]]; then
echo "Publishing beta prerelease version with --tag latest"
npm publish --tag latest
else
echo "Publishing stable version with --tag latest"
npm publish --tag latest
fi

- name: Create GitHub Release with Bundles
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
with:
tag_name: v${{ steps.version.outputs.new_version }}
name: "BMad Method v${{ steps.version.outputs.new_version }}"
body: |
${{ steps.release_notes.outputs.RELEASE_NOTES }}

## 📦 Web Bundles

Download XML bundles for use in AI platforms (Claude Projects, ChatGPT, Gemini):

- `bmad-bundles-v${{ steps.version.outputs.new_version }}.tar.gz` - All modules (BMM, BMB, CIS, BMGD)

**Browse online** (bleeding edge): https://bmad-code-org.github.io/bmad-bundles/
draft: false
prerelease: ${{ contains(steps.version.outputs.new_version, 'alpha') || contains(steps.version.outputs.new_version, 'beta') }}
files: |
dist/release-bundles/*.tar.gz

- name: Summary
run: |
@@ -212,7 +186,6 @@
echo "### 📦 Distribution" >> $GITHUB_STEP_SUMMARY
echo "- **NPM**: Published with @latest tag" >> $GITHUB_STEP_SUMMARY
echo "- **GitHub Release**: https://github.com/bmad-code-org/BMAD-METHOD/releases/tag/v${{ steps.version.outputs.new_version }}" >> $GITHUB_STEP_SUMMARY
echo "- **Web Bundles**: Attached to GitHub Release (4 archives)" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### ✅ Installation" >> $GITHUB_STEP_SUMMARY
echo "\`\`\`bash" >> $GITHUB_STEP_SUMMARY
43
.github/workflows/quality.yaml
vendored
@@ -3,6 +3,7 @@ name: Quality & Validation
# Runs comprehensive quality checks on all PRs:
# - Prettier (formatting)
# - ESLint (linting)
# - markdownlint (markdown quality)
# - Schema validation (YAML structure)
# - Agent schema tests (fixture-based validation)
# - Installation component tests (compilation)
@@ -50,6 +51,45 @@
- name: ESLint
run: npm run lint

markdownlint:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4

- name: Setup Node
uses: actions/setup-node@v4
with:
node-version-file: ".nvmrc"
cache: "npm"

- name: Install dependencies
run: npm ci

- name: markdownlint
run: npm run lint:md

docs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4

- name: Setup Node
uses: actions/setup-node@v4
with:
node-version-file: ".nvmrc"
cache: "npm"

- name: Install dependencies
run: npm ci

- name: Validate documentation links
run: npm run docs:validate-links

- name: Build documentation
run: npm run docs:build

validate:
runs-on: ubuntu-latest
steps:
@@ -73,6 +113,3 @@

- name: Test agent compilation components
run: npm run test:install

- name: Validate web bundles
run: npm run validate:bundles
53
.gitignore
vendored
@@ -1,12 +1,11 @@
# Dependencies
node_modules/
**/node_modules/
pnpm-lock.yaml
bun.lock
deno.lock
pnpm-workspace.yaml
package-lock.json


test-output/*
coverage/

@@ -28,11 +27,6 @@ Thumbs.db
# Development tools and configs
.prettierrc

# IDE and editor configs
.windsurf/
.trae/
.bmad*/.cursor/

# AI assistant files
CLAUDE.md
.ai/*
@@ -43,31 +37,32 @@ CLAUDE.local.md
.serena/
.claude/settings.local.json

# Project-specific
.bmad-core
.bmad-creator-tools
test-project-install/*
sample-project/*
flattened-codebase.xml
*.stats.md
.internal-docs/
#UAT template testing output files
tools/template-test-generator/test-scenarios/

# Bundler temporary files and generated bundles
.bundler-temp/

# Generated web bundles (built by CI, not committed)
src/modules/bmm/sub-modules/
src/modules/bmb/sub-modules/
src/modules/cis/sub-modules/
src/modules/bmgd/sub-modules/

z*/

.bmad
_bmad
_bmad-output
.clinerules
.augment
.crush
.cursor
.iflow
.opencode
.qwen
.rovodev
.kilocodemodes
.claude
.codex
.github/chatmodes
.github/agents
.agent
.agentvibes/
.agentvibes
.kiro
.roo
.trae
.windsurf


# Astro / Documentation Build
website/.astro/
website/dist/
build/

@@ -5,3 +5,16 @@ npx --no-install lint-staged

# Validate everything
npm test

# Validate docs links only when docs change
if command -v rg >/dev/null 2>&1; then
if git diff --cached --name-only | rg -q '^docs/'; then
npm run docs:validate-links
npm run docs:build
fi
else
if git diff --cached --name-only | grep -Eq '^docs/'; then
npm run docs:validate-links
npm run docs:build
fi
fi
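The pre-commit hunk above (its file header is missing from this diff view) boils down to matching staged paths against `^docs/`. A self-contained illustration of that guard, with made-up file names standing in for `git diff --cached --name-only` output:

```shell
# Stand-in for staged-file output; names are hypothetical.
staged='docs/index.md
src/app.js'

# Run the docs checks only when at least one staged path is under docs/.
if printf '%s\n' "$staged" | grep -Eq '^docs/'; then
  echo "docs changed: run link validation and docs build"
fi
```

The `rg`/`grep` split in the hook exists only as a fallback: both branches test the same anchored pattern, so the behavior is identical either way.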
41
.markdownlint-cli2.yaml
Normal file
@@ -0,0 +1,41 @@
# markdownlint-cli2 configuration
# https://github.com/DavidAnson/markdownlint-cli2

ignores:
- "**/node_modules/**"
- test/fixtures/**
- CODE_OF_CONDUCT.md
- _bmad/**
- _bmad*/**
- .agent/**
- .claude/**
- .roo/**
- .codex/**
- .kiro/**
- sample-project/**
- test-project-install/**
- z*/**

# Rule configuration
config:
# Disable all rules by default
default: false

# Heading levels should increment by one (h1 -> h2 -> h3, not h1 -> h3)
MD001: true

# Duplicate sibling headings (same heading text at same level under same parent)
MD024:
siblings_only: true

# Trailing commas in headings (likely typos)
MD026:
punctuation: ","

# Bare URLs - may not render as links in all parsers
# Should use <url> or [text](url) format
MD034: true

# Spaces inside emphasis markers - breaks rendering
# e.g., "* text *" won't render as emphasis
MD037: true
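To make the enabled rules concrete, here are hypothetical snippets each one would flag (illustrative examples, not taken from the repo):

```markdown
# Title
### Skipped level   <!-- MD001: h3 directly under an h1 -->

## Ends with a comma,   <!-- MD026: trailing comma in a heading -->

Visit https://example.com   <!-- MD034: bare URL; write <https://example.com> -->

* not emphasis *   <!-- MD037: spaces inside emphasis markers -->
```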
@@ -5,5 +5,5 @@ test/fixtures/**
CODE_OF_CONDUCT.md

# BMAD runtime folders (user-specific, not in repo)
.bmad/
.bmad*/
_bmad/
_bmad*/
3
.vscode/settings.json
vendored
@@ -57,6 +57,7 @@
"tileset",
"tmpl",
"Trae",
"Unsharded",
"VNET",
"webskip"
],
@@ -73,7 +74,7 @@
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
"editor.defaultFormatter": "vscode.typescript-language-features"
},
"[json]": {
"editor.defaultFormatter": "vscode.json-language-features"
1496
CHANGELOG.md
File diff suppressed because it is too large
313
CONTRIBUTING.md
@@ -1,268 +1,167 @@
|
||||
# Contributing to BMad
|
||||
|
||||
Thank you for considering contributing to the BMad project! We believe in **Human Amplification, Not Replacement** - bringing out the best thinking in both humans and AI through guided collaboration.
|
||||
Thank you for considering contributing! We believe in **Human Amplification, Not Replacement** — bringing out the best thinking in both humans and AI through guided collaboration.
|
||||
|
||||
💬 **Discord Community**: Join our [Discord server](https://discord.gg/gk8jAdXWmj) for real-time discussions:
|
||||
💬 **Discord**: [Join our community](https://discord.gg/gk8jAdXWmj) for real-time discussions, questions, and collaboration.
|
||||
|
||||
- **#general-dev** - Technical discussions, feature ideas, and development questions
|
||||
- **#bugs-issues** - Bug reports and issue discussions
|
||||
---
|
||||
|
||||
## Our Philosophy
|
||||
|
||||
### BMad Core™: Universal Foundation
|
||||
BMad strengthens human-AI collaboration through specialized agents and guided workflows. Every contribution should answer: **"Does this make humans and AI better together?"**
|
||||
|
||||
BMad Core empowers humans and AI agents working together in true partnership across any domain through our **C.O.R.E. Framework** (Collaboration Optimized Reflection Engine):
|
||||
|
||||
- **Collaboration**: Human-AI partnership where both contribute unique strengths
|
||||
- **Optimized**: The collaborative process refined for maximum effectiveness
|
||||
- **Reflection**: Guided thinking that helps discover better solutions and insights
|
||||
- **Engine**: The powerful framework that orchestrates specialized agents and workflows
|
||||
|
||||
### BMad Method™: Agile AI-Driven Development
|
||||
|
||||
The BMad Method is the flagship bmad module for agile AI-driven software development. It emphasizes thorough planning and solid architectural foundations to provide detailed context for developer agents, mirroring real-world agile best practices.
|
||||
|
||||
### Core Principles
|
||||
|
||||
**Partnership Over Automation** - AI agents act as expert coaches, mentors, and collaborators who amplify human capability rather than replace it.
|
||||
|
||||
**Bidirectional Guidance** - Agents guide users through structured workflows while users push agents with advanced prompting. Both sides actively work to extract better information from each other.
|
||||
|
||||
**Systems of Workflows** - BMad Core builds comprehensive systems of guided workflows with specialized agent teams for any domain.
|
||||
|
||||
**Tool-Agnostic Foundation** - BMad Core remains tool-agnostic, providing stable, extensible groundwork that adapts to any domain.
|
||||
|
||||
## What Makes a Good Contribution?
|
||||
|
||||
Every contribution should strengthen human-AI collaboration. Ask yourself: **"Does this make humans and AI better together?"**
|
||||
|
||||
**✅ Contributions that align:**
|
||||
|
||||
- Enhance universal collaboration patterns
|
||||
- Improve agent personas and workflows
|
||||
- Strengthen planning and context continuity
|
||||
- Increase cross-domain accessibility
|
||||
- Add domain-specific modules leveraging BMad Core
|
||||
|
||||
**❌ What detracts from our mission:**
|
||||
**✅ What we welcome:**
|
||||
- Enhanced collaboration patterns and workflows
|
||||
- Improved agent personas and prompts
|
||||
- Domain-specific modules leveraging BMad Core
|
||||
- Better planning and context continuity
|
||||
|
||||
**❌ What doesn't fit:**
|
||||
- Purely automated solutions that sideline humans
|
||||
- Tools that don't improve the partnership
|
||||
- Complexity that creates barriers to adoption
|
||||
- Features that fragment BMad Core's foundation
|
||||
|
||||
## Before You Contribute
|
||||
---
|
||||
|
||||
### Reporting Bugs
|
||||
## Reporting Issues
|
||||
|
||||
1. **Check existing issues** first to avoid duplicates
|
||||
2. **Consider discussing in Discord** (#bugs-issues channel) for quick help
|
||||
3. **Use the bug report template** when creating a new issue - it guides you through providing:
|
||||
- Clear bug description
|
||||
- Steps to reproduce
|
||||
- Expected vs actual behavior
|
||||
- Model/IDE/BMad version details
|
||||
- Screenshots or links if applicable
|
||||
4. **Indicate if you're working on a fix** to avoid duplicate efforts
|
||||
**ALL bug reports and feature requests MUST go through GitHub Issues.**
|
||||
|
||||
### Suggesting Features or New Modules
|
||||
### Before Creating an Issue
|
||||
|
||||
1. **Discuss first in Discord** (#general-dev channel) - the feature request template asks if you've done this
|
||||
2. **Check existing issues and discussions** to avoid duplicates
|
||||
3. **Use the feature request template** when creating an issue
|
||||
4. **Be specific** about why this feature would benefit the BMad community and strengthen human-AI collaboration
|
||||
1. **Search existing issues** — Use the GitHub issue search to check if your bug or feature has already been reported
|
||||
2. **Search closed issues** — Your issue may have been fixed or addressed previously
|
||||
3. **Check discussions** — Some conversations happen in [GitHub Discussions](https://github.com/bmad-code-org/BMAD-METHOD/discussions)
|
||||
|
||||
### Before Starting Work
|
||||
### Bug Reports
|
||||
|
||||
After searching, if the bug is unreported, use the [bug report template](https://github.com/bmad-code-org/BMAD-METHOD/issues/new?template=bug_report.md) and include:
|
||||
|
||||
- Clear description of the problem
|
||||
- Steps to reproduce
|
||||
- Expected vs actual behavior
|
||||
- Your environment (model, IDE, BMad version)
|
||||
- Screenshots or error messages if applicable
|
||||
|
||||
### Feature Requests
|
||||
|
||||
After searching, use the [feature request template](https://github.com/bmad-code-org/BMAD-METHOD/issues/new?template=feature_request.md) and explain:
|
||||
|
||||
- What the feature is
|
||||
- Why it would benefit the BMad community
|
||||
- How it strengthens human-AI collaboration
|
||||
|
||||
**For community modules**, review [TRADEMARK.md](TRADEMARK.md) for proper naming conventions (e.g., "My Module (BMad Community Module)").
|
||||
|
||||
---
|
||||
|
||||
## Before Starting Work
|
||||
|
||||
⚠️ **Required before submitting PRs:**
|
||||
|
||||
1. **For bugs**: Check if an issue exists (create one using the bug template if not)
|
||||
2. **For features**: Discuss in Discord (#general-dev) AND create a feature request issue
|
||||
3. **For large changes**: Always open an issue first to discuss alignment
|
||||
| Work Type | Requirement |
|
||||
| ------------- | ---------------------------------------------- |
|
||||
| Bug fix | An open issue (create one if it doesn't exist) |
|
||||
| Feature | An open feature request issue |
|
||||
| Large changes | Discussion via issue first |
|
||||
|
||||
Please propose small, granular changes! For large or significant changes, discuss in Discord and open an issue first. This prevents wasted effort on PRs that may not align with planned changes.
|
||||
**Why?** This prevents wasted effort on work that may not align with project direction.
|
||||
|
||||
---
|
||||
|
||||
## Pull Request Guidelines
|
||||
|
||||
### Which Branch?
|
||||
### Target Branch
|
||||
|
||||
**Submit PR's to `main` branch** (critical only):
|
||||
Submit PRs to the `main` branch.
|
||||
|
||||
- 🚨 Critical bug fixes that break basic functionality
|
||||
- 🔒 Security patches
|
||||
- 📚 Fixing dangerously incorrect documentation
|
||||
- 🐛 Bugs preventing installation or basic usage
|
||||
### PR Size
|
||||
|
||||
### PR Size Guidelines
|
||||
- **Ideal**: 200-400 lines of code changes
|
||||
- **Maximum**: 800 lines (excluding generated files)
|
||||
- **One feature/fix per PR**
|
||||
|
||||
- **Ideal PR size**: 200-400 lines of code changes
|
||||
- **Maximum PR size**: 800 lines (excluding generated files)
|
||||
- **One feature/fix per PR**: Each PR should address a single issue or add one feature
|
||||
- **If your change is larger**: Break it into multiple smaller PRs that can be reviewed independently
|
||||
- **Related changes**: Even related changes should be separate PRs if they deliver independent value
|
||||
If your change exceeds 800 lines, break it into smaller PRs that can be reviewed independently.
|
||||
|
||||
### Breaking Down Large PRs
|
||||
### New to Pull Requests?
|
||||
|
||||
If your change exceeds 800 lines, use this checklist to split it:
|
||||
|
||||
- [ ] Can I separate the refactoring from the feature implementation?
|
||||
- [ ] Can I introduce the new API/interface in one PR and implementation in another?
|
||||
- [ ] Can I split by file or module?
|
||||
- [ ] Can I create a base PR with shared utilities first?
|
||||
- [ ] Can I separate test additions from implementation?
|
||||
- [ ] Even if changes are related, can they deliver value independently?
|
||||
- [ ] Can these changes be merged in any order without breaking things?
|
||||
|
||||
Example breakdown:
|
||||
|
||||
1. PR #1: Add utility functions and types (100 lines)
|
||||
2. PR #2: Refactor existing code to use utilities (200 lines)
|
||||
3. PR #3: Implement new feature using refactored code (300 lines)
|
||||
4. PR #4: Add comprehensive tests (200 lines)
|
||||
|
||||
**Note**: PRs #1 and #4 could be submitted simultaneously since they deliver independent value.
|
||||
|
||||
### Pull Request Process
|
||||
|
||||
#### New to Pull Requests?
|
||||
|
||||
If you're new to GitHub or pull requests, here's a quick guide:
|
||||
|
||||
1. **Fork the repository** - Click the "Fork" button on GitHub to create your own copy
|
||||
2. **Clone your fork** - `git clone https://github.com/YOUR-USERNAME/bmad-method.git`
|
||||
3. **Create a new branch** - Never work on `main` directly!
|
||||
```bash
|
||||
git checkout -b fix/description
|
||||
# or
|
||||
git checkout -b feature/description
|
||||
```
|
||||
4. **Make your changes** - Edit files, keeping changes small and focused
|
||||
5. **Commit your changes** - Use clear, descriptive commit messages
|
||||
```bash
|
||||
git add .
|
||||
git commit -m "fix: correct typo in README"
|
||||
```
|
||||
6. **Push to your fork** - `git push origin fix/description`
|
||||
7. **Create the Pull Request** - Go to your fork on GitHub and click "Compare & pull request"
|
||||
1. **Fork** the repository
|
||||
2. **Clone** your fork: `git clone https://github.com/YOUR-USERNAME/bmad-method.git`
|
||||
3. **Create a branch**: `git checkout -b fix/description` or `git checkout -b feature/description`
|
||||
4. **Make changes** — keep them focused
|
||||
5. **Commit**: `git commit -m "fix: correct typo in README"`
|
||||
6. **Push**: `git push origin fix/description`
|
||||
7. **Open PR** from your fork on GitHub
|
||||
|
||||
### PR Description Template
|
||||
|
||||
Keep your PR description concise and focused. Use this template:
|
||||
|
||||
```markdown
|
||||
## What
|
||||
|
||||
[1-2 sentences describing WHAT changed]
|
||||
|
||||
## Why
|
||||
|
||||
[1-2 sentences explaining WHY this change is needed]
|
||||
Fixes #[issue number] (if applicable)
|
||||
Fixes #[issue number]
|
||||
|
||||
## How
|
||||
|
||||
## [2-3 bullets listing HOW you implemented it]
|
||||
|
||||
-
|
||||
- [2-3 bullets listing HOW you implemented it]
|
||||
-
|
||||
|
||||
## Testing
|
||||
|
||||
[1-2 sentences on how you tested this]
|
||||
```
|
||||
|
||||
**Maximum PR description length: 200 words** (excluding code examples if needed)
|
||||
**Keep it under 200 words.**
|
||||
|
||||
### Good vs Bad PR Descriptions
|
||||
### Commit Messages
|
||||
|
||||
❌ **Bad Example:**
|
||||
|
||||
> This revolutionary PR introduces a paradigm-shifting enhancement to the system's architecture by implementing a state-of-the-art solution that leverages cutting-edge methodologies to optimize performance metrics...
|
||||
|
||||
✅ **Good Example:**
|
||||
|
||||
> **What:** Added validation for agent dependency resolution
|
||||
> **Why:** Build was failing silently when agents had circular dependencies
|
||||
> **How:**
|
||||
>
|
||||
> - Added cycle detection in dependency-resolver.js
|
||||
> - Throws clear error with dependency chain
|
||||
> **Testing:** Tested with circular deps between 3 agents
|
||||
|
||||
### Commit Message Convention
|
||||
|
||||
Use conventional commits format:
|
||||
Use conventional commits:
|
||||
|
||||
- `feat:` New feature
|
||||
- `fix:` Bug fix
|
||||
- `docs:` Documentation only
|
||||
- `refactor:` Code change that neither fixes a bug nor adds a feature
|
||||
- `test:` Adding missing tests
|
||||
- `chore:` Changes to build process or auxiliary tools
|
||||
- `refactor:` Code change (no bug/feature)
|
||||
- `test:` Adding tests
|
||||
- `chore:` Build/tools changes
|
||||
|
||||
Keep commit messages under 72 characters.
|
||||
|
||||
### Atomic Commits
|
||||
|
||||
Each commit should represent one logical change:
|
||||
|
||||
- **Do:** One bug fix per commit
|
||||
- **Do:** One feature addition per commit
|
||||
- **Don't:** Mix refactoring with bug fixes
|
||||
- **Don't:** Combine unrelated changes
|
||||
|
||||
## What Makes a Good Pull Request?
|
||||
|
||||
✅ **Good PRs:**
|
||||
|
||||
- Change one thing at a time
|
||||
- Have clear, descriptive titles
|
||||
- Explain what and why in the description
|
||||
- Include only the files that need to change
|
||||
- Reference related issue numbers
|
||||
|
||||
❌ **Avoid:**
|
||||
|
||||
- Changing formatting of entire files
|
||||
- Multiple unrelated changes in one PR
|
||||
- Copying your entire project/repo into the PR
|
||||
- Changes without explanation
|
||||
- Working directly on `main` branch
|
||||
|
||||
## Common Mistakes to Avoid
|
||||
|
||||
1. **Don't reformat entire files** - only change what's necessary
|
||||
2. **Don't include unrelated changes** - stick to one fix/feature per PR
|
||||
3. **Don't paste code in issues** - create a proper PR instead
|
||||
4. **Don't submit your whole project** - contribute specific improvements
|
||||
|
||||
## Code Style
|
||||
|
||||
- Follow the existing code style and conventions
|
||||
- Write clear comments for complex logic
|
||||
- Keep dev agents lean - they need context for coding, not documentation
|
||||
- Web/planning agents can be larger with more complex tasks
|
||||
- Everything is natural language (markdown) - no code in core framework
|
||||
- Use bmad modules for domain-specific features
|
||||
- Validate YAML schemas with `npm run validate:schemas` before committing
|
||||
|
||||
## Code of Conduct
|
||||
|
||||
By participating in this project, you agree to abide by our Code of Conduct. We foster a collaborative, respectful environment focused on building better human-AI partnerships.
|
||||
|
||||
## Need Help?
|
||||
|
||||
- 💬 Join our [Discord Community](https://discord.gg/gk8jAdXWmj):
|
||||
- **#general-dev** - Technical questions and feature discussions
|
||||
- **#bugs-issues** - Get help with bugs before filing issues
|
||||
- 🐛 Report bugs using the [bug report template](https://github.com/bmad-code-org/BMAD-METHOD/issues/new?template=bug_report.md)
|
||||
- 💡 Suggest features using the [feature request template](https://github.com/bmad-code-org/BMAD-METHOD/issues/new?template=feature_request.md)
|
||||
- 📖 Browse the [GitHub Discussions](https://github.com/bmad-code-org/BMAD-METHOD/discussions)
|
||||
Keep messages under 72 characters. Each commit = one logical change.
|
||||
|
||||
---
|
||||
|
||||
**Remember**: We're here to help! Don't be afraid to ask questions. Every expert was once a beginner. Together, we're building a future where humans and AI work better together.
|
||||
## What Makes a Good PR?
|
||||
|
||||
| ✅ Do | ❌ Don't |
|
||||
| --------------------------- | ---------------------------- |
|
||||
| Change one thing per PR | Mix unrelated changes |
|
||||
| Clear title and description | Vague or missing explanation |
|
||||
| Reference related issues | Reformat entire files |
|
||||
| Small, focused commits | Copy your whole project |
|
||||
| Work on a branch | Work directly on `main` |
|
||||
|
||||
---
|
||||
|
||||
## Prompt & Agent Guidelines
|
||||
|
||||
- Keep dev agents lean — focus on coding context, not documentation
|
||||
- Web/planning agents can be larger with complex tasks
|
||||
- Everything is natural language (markdown) — no code in core framework
|
||||
- Use BMad modules for domain-specific features
|
||||
- Validate YAML schemas: `npm run validate:schemas`
|
||||
|
||||
---

## Need Help?

- 💬 **Discord**: [Join the community](https://discord.gg/gk8jAdXWmj)
- 🐛 **Bugs**: Use the [bug report template](https://github.com/bmad-code-org/BMAD-METHOD/issues/new?template=bug_report.md)
- 💡 **Features**: Use the [feature request template](https://github.com/bmad-code-org/BMAD-METHOD/issues/new?template=feature_request.md)

---

## Code of Conduct

By participating, you agree to abide by our [Code of Conduct](.github/CODE_OF_CONDUCT.md).
## License

By contributing, you agree that your contributions are licensed under the same MIT License as the project. See [CONTRIBUTORS.md](CONTRIBUTORS.md) for contributor attribution.
32
CONTRIBUTORS.md
Normal file
@@ -0,0 +1,32 @@
# Contributors

BMad Core, BMad Method, and community BMad Modules are made possible by contributions from our community. We gratefully acknowledge everyone who has helped improve this project.

## How We Credit Contributors

- **Git history** — Every contribution is preserved in the project's commit history
- **Contributors badge** — See the dynamic contributors list on our [README](README.md)
- **GitHub contributors graph** — Visual representation at <https://github.com/bmad-code-org/BMAD-METHOD/graphs/contributors>

## Becoming a Contributor

Anyone who submits a pull request that is merged becomes a contributor. Contributions include:

- Bug fixes
- New features or workflows
- Documentation improvements
- Bug reports and issue triaging
- Code reviews
- Helping others in discussions

There are no minimum contribution requirements — whether it's a one-character typo fix or a major feature, we value all contributions.

## Copyright

The BMad Method project is copyrighted by BMad Code, LLC. Individual contributions are licensed under the same MIT License as the project. Contributors retain authorship credit through Git history and the contributors graph.

---

**Thank you to everyone who has helped make BMad Method better!**

For contribution guidelines, see [CONTRIBUTING.md](CONTRIBUTING.md).
10
LICENSE
@@ -2,6 +2,9 @@ MIT License
Copyright (c) 2025 BMad Code, LLC

This project incorporates contributions from the open source community.
See [CONTRIBUTORS.md](CONTRIBUTORS.md) for contributor attribution.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights

@@ -21,6 +24,7 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

TRADEMARK NOTICE:
BMAD™, BMAD-CORE™ and BMAD-METHOD™ are trademarks of BMad Code, LLC. The use of these
trademarks in this software does not grant any rights to use the trademarks
for any other purpose.
BMad™, BMad Method™, and BMad Core™ are trademarks of BMad Code, LLC, covering all
casings and variations (including BMAD, bmad, BMadMethod, BMAD-METHOD, etc.). The use of
these trademarks in this software does not grant any rights to use the trademarks
for any other purpose. See [TRADEMARK.md](TRADEMARK.md) for detailed guidelines.
232
README.md
@@ -1,214 +1,118 @@
# BMad Method & BMad Core

![BMad Banner](banner-bmad-method.png)

[![Version](https://img.shields.io/npm/v/bmad-method?color=blue&label=version)](https://www.npmjs.com/package/bmad-method)
[![Alpha](https://img.shields.io/npm/v/bmad-method/alpha?color=orange&label=alpha)](https://www.npmjs.com/package/bmad-method)
[![Downloads](https://img.shields.io/npm/dt/bmad-method?color=green)](https://www.npmjs.com/package/bmad-method)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![Node.js Version](https://img.shields.io/badge/node-%3E%3D20.10.0-brightgreen)](https://nodejs.org)
[![Discord](https://img.shields.io/badge/Discord-Join%20Community-7289da?logo=discord&logoColor=white)](https://discord.gg/gk8jAdXWmj)

## AI-Driven Agile Development That Scales From Bug Fixes to Enterprise

**Breakthrough Method of Agile AI Driven Development** — An AI-driven agile development framework with 21 specialized agents, 50+ guided workflows, and scale-adaptive intelligence that adjusts from bug fixes to enterprise systems.

**Build More, Architect Dreams** (BMAD) with **19 specialized AI agents** and **50+ guided workflows** that adapt to your project's complexity—from quick bug fixes to enterprise platforms.

**100% free and open source.** No paywalls. No gated content. No gated Discord. We believe in empowering everyone, not just those who can pay.

> **🚀 v6 is a MASSIVE upgrade from v4!** Complete architectural overhaul, scale-adaptive intelligence, visual workflows, and the powerful BMad Core framework. v4 users: this changes everything. [See what's new →](#whats-new-in-v6)
## Why BMad?

> **📌 v6 Alpha Status:** Near-beta quality with vastly improved stability. Documentation is being finalized. New videos coming soon to [BMadCode YouTube](https://www.youtube.com/@BMadCode).

Traditional AI tools do the thinking for you, producing average results. BMad agents and facilitated workflows act as expert collaborators who guide you through a structured process to bring out your best thinking in partnership with the AI.

## 🎯 Why BMad Method?

- **AI Intelligent Help**: Brand new for beta - AI-assisted help will guide you from beginning to end. Just ask for `/bmad-help` after you have installed BMad in your project
- **Scale-Domain-Adaptive**: Automatically adjusts planning depth based on project complexity, domain, and type - a SaaS mobile dating app has different planning needs from a diagnostic medical system, and BMad adapts and helps you along the way
- **Structured Workflows**: Grounded in agile best practices across analysis, planning, architecture, and implementation
- **Specialized Agents**: 12+ domain experts (PM, Architect, Developer, UX, Scrum Master, and more)
- **Party Mode**: Bring multiple agent personas into one session to plan, troubleshoot, or discuss your project collaboratively - multiple perspectives with maximum fun
- **Complete Lifecycle**: From brainstorming to deployment, BMad is there with you every step of the way

Unlike generic AI coding assistants, BMad Method provides **structured, battle-tested workflows** powered by specialized agents who understand agile development. Each agent has deep domain expertise—from product management to architecture to testing—working together seamlessly.
## Quick Start

**✨ Key Benefits:**

- **Scale-Adaptive Intelligence** - Automatically adjusts planning depth from bug fixes to enterprise systems
- **Complete Development Lifecycle** - Analysis → Planning → Architecture → Implementation
- **Specialized Expertise** - 19 agents with specific roles (PM, Architect, Developer, UX Designer, etc.)
- **Proven Methodologies** - Built on agile best practices with AI amplification
- **IDE Integration** - Works with Claude Code, Cursor, Windsurf, VS Code

## 🏗️ The Power of BMad Core

**BMad Method** is actually a sophisticated module built on top of **BMad Core** (**C**ollaboration **O**ptimized **R**eflection **E**ngine). This revolutionary architecture means:

- **BMad Core** provides the universal framework for human-AI collaboration
- **BMad Method** leverages Core to deliver agile development workflows
- **BMad Builder** lets YOU create custom modules as powerful as BMad Method itself

With **BMad Builder**, you can architect both simple agents and vastly complex domain-specific modules (legal, medical, finance, education, creative) that will soon be sharable in an **official community marketplace**. Imagine building and sharing your own specialized AI team!
## 📊 See It In Action

<p align="center">
  <img src="./src/modules/bmm/docs/images/workflow-method-greenfield.svg" alt="BMad Method Workflow" width="100%">
</p>

<p align="center">
  <em>Complete BMad Method workflow showing all phases, agents, and decision points</em>
</p>

## 🚀 Get Started in 3 Steps

### 1. Install BMad Method

**Prerequisites**: [Node.js](https://nodejs.org) v20+

```bash
# Install v6 Alpha (recommended)
npx bmad-method@alpha install

# Or stable v4 for production
npx bmad-method install
```
### 2. Initialize Your Project

Follow the installer prompts, then open your AI IDE (Claude Code, Cursor, Windsurf, etc.) in the project folder.

Load any agent in your IDE and run:

```
*workflow-init
```

This analyzes your project and recommends the right workflow track.

> **Not sure what to do?** Run `/bmad-help` — it tells you exactly what's next and what's optional. You can also ask it questions like:

- `/bmad-help How should I build a web app for my T-shirt business that can scale to millions?`
- `/bmad-help I just finished the architecture, I am not sure what to do next`

And the amazing thing is that BMad Help evolves depending on what modules you install!

- `/bmad-help I'm interested in really exploring creative ways to demo BMad at work, what do you recommend to help plan a great slide deck and compelling narrative?`, and if you have the Creative Intelligence Suite installed, it will offer you different or complementary advice than if you only have the BMad Method module installed!
### 3. Choose Your Track

BMad Method adapts to your needs with three intelligent tracks:

| Track              | Use For                   | Planning                | Time to Start |
| ------------------ | ------------------------- | ----------------------- | ------------- |
| **⚡ Quick Flow**  | Bug fixes, small features | Tech spec only          | < 5 minutes   |
| **📋 BMad Method** | Products, platforms       | PRD + Architecture + UX | < 15 minutes  |
| **🏢 Enterprise**  | Compliance, scale         | Full governance suite   | < 30 minutes  |

> **Not sure?** Run `*workflow-init` and let BMad analyze your project goal.

The workflows below show the fastest path to working code. You can also load agents directly for a more structured process, extensive planning, or to learn about agile development practices — the agents guide you with menus, explanations, and elicitation at each step.

### Simple Path (Quick Flow)

Bug fixes, small features, clear scope — 3 commands, 1 optional agent:

1. `/quick-spec` — analyzes your codebase and produces a tech spec with stories
2. `/dev-story` — implements each story
3. `/code-review` — validates quality
## 🔄 How It Works: 4-Phase Methodology

BMad Method guides you through a proven development lifecycle:

1. **📊 Analysis** (Optional) - Brainstorm, research, and explore solutions
2. **📝 Planning** - Create PRDs, tech specs, or game design documents
3. **🏗️ Solutioning** - Design architecture, UX, and technical approach
4. **⚡ Implementation** - Story-driven development with continuous validation

Each phase has specialized workflows and agents working together to deliver exceptional results.

### Full Planning Path (BMad Method)

Products, platforms, complex features — structured planning then build:

1. `/product-brief` — define problem, users, and MVP scope
2. `/create-prd` — full requirements with personas, metrics, and risks
3. `/create-architecture` — technical decisions and system design
4. `/create-epics-and-stories` — break work into prioritized stories
5. `/sprint-planning` — initialize sprint tracking
6. **Repeat per story:** `/create-story` → `/dev-story` → `/code-review`

Every step tells you what's next. Optional phases (brainstorming, research, UX design) are available when you need them — ask `/bmad-help` anytime. For a detailed walkthrough, see the [Getting Started Tutorial](http://docs.bmad-method.org/tutorials/getting-started/getting-started-bmadv6/).
## 🤖 Meet Your Team

**12 Specialized Agents** working in concert:

| Development | Architecture   | Product       | Leadership     |
| ----------- | -------------- | ------------- | -------------- |
| Developer   | Architect      | PM            | Scrum Master   |
| UX Designer | Test Architect | Analyst       | BMad Master    |
| Tech Writer | Game Architect | Game Designer | Game Developer |

**Test Architect** integrates with `@seontechnologies/playwright-utils` for production-ready fixture-based utilities.

Each agent brings deep expertise and can be customized to match your team's style.

## Modules

BMad Method extends with official modules for specialized domains. Modules are available during installation and can be added to your project at any time. After the v6 beta period these will also be available as plugins and granular skills.

| Module                                | GitHub                                                                                                                              | NPM                                                                                                  | Purpose                                                           |
| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| **BMad Method (BMM)**                 | [bmad-code-org/BMAD-METHOD](https://github.com/bmad-code-org/BMAD-METHOD)                                                           | [bmad-method](https://www.npmjs.com/package/bmad-method)                                             | Core framework with 34+ workflows across 4 development phases     |
| **BMad Builder (BMB)**                | [bmad-code-org/bmad-builder](https://github.com/bmad-code-org/bmad-builder)                                                         | [bmad-builder](https://www.npmjs.com/package/bmad-builder)                                           | Create custom BMad agents, workflows, and domain-specific modules |
| **Game Dev Studio (BMGD)**            | [bmad-code-org/bmad-module-game-dev-studio](https://github.com/bmad-code-org/bmad-module-game-dev-studio)                           | [bmad-game-dev-studio](https://www.npmjs.com/package/bmad-game-dev-studio)                           | Game development workflows for Unity, Unreal, and Godot           |
| **Creative Intelligence Suite (CIS)** | [bmad-code-org/bmad-module-creative-intelligence-suite](https://github.com/bmad-code-org/bmad-module-creative-intelligence-suite)   | [bmad-creative-intelligence-suite](https://www.npmjs.com/package/bmad-creative-intelligence-suite)   | Innovation, brainstorming, design thinking, and problem-solving   |

- More modules are coming in the next 2 weeks from BMad Official, and a community marketplace for the installer will also be coming with the final v6 release!
## 📦 What's Included

### Core Modules

- **BMad Method (BMM)** - Complete agile development framework
  - 12 specialized agents
  - 34 workflows across 4 phases
  - Scale-adaptive planning
  - [→ Documentation Hub](./src/modules/bmm/docs/README.md)

- **BMad Builder (BMB)** - Create custom agents and workflows
  - Build anything from simple agents to complex modules
  - Create domain-specific solutions (legal, medical, finance, education)
  - Share your creations in the upcoming community marketplace
  - [→ Builder Guide](./src/modules/bmb/README.md)

- **Creative Intelligence Suite (CIS)** - Innovation & problem-solving
  - Brainstorming, design thinking, storytelling
  - 5 creative facilitation workflows
  - [→ Creative Workflows](./src/modules/cis/README.md)

### Key Features

- **🎨 Customizable Agents** - Modify personalities, expertise, and communication styles
- **🌐 Multi-Language Support** - Separate settings for communication and code output
- **📄 Document Sharding** - 90% token savings for large projects
- **🔄 Update-Safe** - Your customizations persist through updates
- **🚀 Web Bundles** - Use in ChatGPT, Claude Projects, or Gemini Gems

## 📚 Documentation

**[Full Documentation](http://docs.bmad-method.org)** — Tutorials, how-to guides, concepts, and reference

### Quick Links

- **[Quick Start Guide](./src/modules/bmm/docs/quick-start.md)** - 15-minute introduction
- **[Complete BMM Documentation](./src/modules/bmm/docs/README.md)** - All guides and references
- **[Agent Customization](./docs/agent-customization-guide.md)** - Personalize your agents
- **[All Documentation](./docs/index.md)** - Complete documentation index
- [Getting Started Tutorial](http://docs.bmad-method.org/tutorials/getting-started/getting-started-bmadv6/)
- [Upgrading from Previous Versions](http://docs.bmad-method.org/how-to/installation/upgrade-to-v6/)

### For v4 Users

- **[v4 Documentation](https://github.com/bmad-code-org/BMAD-METHOD/tree/V4/docs)**
- **[v4 to v6 Upgrade Guide](./docs/v4-to-v6-upgrade.md)**
## 💬 Community & Support

- **[Discord Community](https://discord.gg/gk8jAdXWmj)** - Get help, share projects, collaborate
- **[GitHub Issues](https://github.com/bmad-code-org/BMAD-METHOD/issues)** - Report bugs, request features
- **[Discussions](https://github.com/bmad-code-org/BMAD-METHOD/discussions)** - Community conversations
- **[YouTube Channel](https://www.youtube.com/@BMadCode)** - Tutorials, master class, and podcast (launching Feb 2025)
- **[Web Bundles](https://bmad-code-org.github.io/bmad-bundles/)** - Pre-built agent bundles
## 🛠️ Development

For contributors working on the BMad codebase:

```bash
# Run all quality checks
npm test

# Development commands
npm run lint:fix # Fix code style
npm run format:fix # Auto-format code
npm run bundle # Build web bundles
```

## Support BMad

BMad is free for everyone — and always will be. If you'd like to support development:

- ⭐ Star the project using the icon near the top right of this page
- ☕ [Buy Me a Coffee](https://buymeacoffee.com/bmad) — Fuel the development
- 🏢 Corporate sponsorship — DM on Discord
- 🎤 Speaking & Media — Available for conferences, podcasts, interviews (DM on Discord)
## Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for full development guidelines.

## What's New in v6
**v6 represents a complete architectural revolution from v4:**

### 🚀 Major Upgrades

- **BMad Core Framework** - Modular architecture enabling custom domain solutions
- **Scale-Adaptive Intelligence** - Automatic adjustment from bug fixes to enterprise
- **Visual Workflows** - Beautiful SVG diagrams showing complete methodology
- **BMad Builder Module** - Create and share your own AI agent teams
- **50+ Workflows** - Up from 20 in v4, covering every development scenario
- **19 Specialized Agents** - Enhanced with customizable personalities and expertise
- **Update-Safe Customization** - Your configs persist through all updates
- **Web Bundles** - Use agents in ChatGPT, Claude, and Gemini
- **Multi-Language Support** - Separate settings for communication and code
- **Document Sharding** - 90% token savings for large projects

### 🔄 For v4 Users

- **[Comprehensive Upgrade Guide](./docs/v4-to-v6-upgrade.md)** - Step-by-step migration
- **[v4 Documentation Archive](https://github.com/bmad-code-org/BMAD-METHOD/tree/V4)** - Legacy reference
- Backwards compatibility where possible
- Smooth migration path with installer detection

## 📄 License

MIT License — see [LICENSE](LICENSE) for details.

**Trademarks:** BMAD™ and BMAD-METHOD™ are trademarks of BMad Code, LLC.

---
<p align="center">
  <a href="https://github.com/bmad-code-org/BMAD-METHOD/graphs/contributors">
    <img src="https://contrib.rocks/image?repo=bmad-code-org/BMAD-METHOD" alt="Contributors">
  </a>
</p>

**BMad** and **BMAD-METHOD** are trademarks of BMad Code, LLC. See [TRADEMARK.md](TRADEMARK.md) for details.

<p align="center">
  <sub>Built with ❤️ for the human-AI collaboration community</sub>
</p>

[![Contributors](https://contrib.rocks/image?repo=bmad-code-org/BMAD-METHOD)](https://github.com/bmad-code-org/BMAD-METHOD/graphs/contributors)

See [CONTRIBUTORS.md](CONTRIBUTORS.md) for contributor information.
85
SECURITY.md
Normal file
@@ -0,0 +1,85 @@
# Security Policy

## Supported Versions

We release security patches for the following versions:

| Version  | Supported          |
| -------- | ------------------ |
| Latest   | :white_check_mark: |
| < Latest | :x:                |

We recommend always using the latest version of BMad Method to ensure you have the most recent security updates.

## Reporting a Vulnerability

We take security vulnerabilities seriously. If you discover a security issue, please report it responsibly.

### How to Report

**Do NOT report security vulnerabilities through public GitHub issues.**

Instead, please report them via one of these methods:

1. **GitHub Security Advisories** (Preferred): Use [GitHub's private vulnerability reporting](https://github.com/bmad-code-org/BMAD-METHOD/security/advisories/new) to submit a confidential report.

2. **Discord**: Contact a maintainer directly via DM on our [Discord server](https://discord.gg/gk8jAdXWmj).

### What to Include

Please include as much of the following information as possible:

- Type of vulnerability (e.g., prompt injection, path traversal, etc.)
- Full paths of source file(s) related to the vulnerability
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if available)
- Impact assessment of the vulnerability

### Response Timeline

- **Initial Response**: Within 48 hours of receiving your report
- **Status Update**: Within 7 days with our assessment
- **Resolution Target**: Critical issues within 30 days; other issues within 90 days

### What to Expect

1. We will acknowledge receipt of your report
2. We will investigate and validate the vulnerability
3. We will work on a fix and coordinate disclosure timing with you
4. We will credit you in the security advisory (unless you prefer to remain anonymous)

## Security Scope

### In Scope

- Vulnerabilities in BMad Method core framework code
- Security issues in agent definitions or workflows that could lead to unintended behavior
- Path traversal or file system access issues
- Prompt injection vulnerabilities that bypass intended agent behavior
- Supply chain vulnerabilities in dependencies

### Out of Scope

- Security issues in user-created custom agents or modules
- Vulnerabilities in third-party AI providers (Claude, GPT, etc.)
- Issues that require physical access to a user's machine
- Social engineering attacks
- Denial of service attacks that don't exploit a specific vulnerability

## Security Best Practices for Users

When using BMad Method:

1. **Review Agent Outputs**: Always review AI-generated code before executing it
2. **Limit File Access**: Configure your AI IDE to limit file system access where possible
3. **Keep Updated**: Regularly update to the latest version
4. **Validate Dependencies**: Review any dependencies added by generated code
5. **Environment Isolation**: Consider running AI-assisted development in isolated environments

## Acknowledgments

We appreciate the security research community's efforts in helping keep BMad Method secure. Contributors who report valid security issues will be acknowledged in our security advisories.

---

Thank you for helping keep BMad Method and our community safe.
55
TRADEMARK.md
Normal file
@@ -0,0 +1,55 @@
# Trademark Notice & Guidelines

## Trademark Ownership

The following names and logos are trademarks of BMad Code, LLC:

- **BMad** (word mark, all casings: BMad, bmad, BMAD)
- **BMad Method** (word mark, includes BMadMethod, BMAD-METHOD, and all variations)
- **BMad Core** (word mark, includes BMadCore, BMAD-CORE, and all variations)
- **BMad Code** (word mark)
- BMad Method logo and visual branding
- The "Build More, Architect Dreams" tagline

**All casings, stylings, and variations** of the above names (with or without hyphens, spaces, or specific capitalization) are covered by these trademarks.

These trademarks are protected under trademark law and are **not** licensed under the MIT License. The MIT License applies to the software code only, not to the BMad brand identity.

## What This Means

You may:

- Use the BMad software under the terms of the MIT License
- Refer to BMad to accurately describe compatibility or integration (e.g., "Compatible with BMad Method v6")
- Link to <https://github.com/bmad-code-org/BMAD-METHOD>
- Fork the software and distribute your own version under a different name

You may **not**:

- Use "BMad" or any confusingly similar variation as your product name, service name, company name, or domain name
- Present your product as officially endorsed, approved, or certified by BMad Code, LLC when it is not, without written consent from an authorized representative of BMad Code, LLC
- Use BMad logos or branding in a way that suggests your product is an official or endorsed BMad product
- Register domain names, social media handles, or trademarks that incorporate BMad branding

## Examples

| Permitted                                              | Not Permitted                                |
| ------------------------------------------------------ | -------------------------------------------- |
| "My workflow tool, compatible with BMad Method"        | "BMadFlow" or "BMad Studio"                  |
| "An alternative implementation inspired by BMad"       | "BMad Pro" or "BMad Enterprise"              |
| "My Awesome Healthcare Module (Bmad Community Module)" | "The Official BMad Core Healthcare Module"   |
| Accurately stating you use BMad as a dependency        | Implying official endorsement or partnership |

## Commercial Use

You may sell products that incorporate or work with BMad software. However:

- Your product must have its own distinct name and branding
- You must not use BMad trademarks in your marketing, domain names, or product identity
- You may truthfully describe technical compatibility (e.g., "Works with BMad Method")

## Questions?

If you have questions about trademark usage or would like to discuss official partnership or endorsement opportunities, please reach out:

- **Email**: <contact@bmadcode.com>
BIN
Wordmark.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 23 KiB
BIN
banner-bmad-method.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 366 KiB
@@ -1,129 +0,0 @@
agent:
  metadata:
    id: .bmad/agents/commit-poet/commit-poet.md
    name: "Inkwell Von Comitizen"
    title: "Commit Message Artisan"
    icon: "📜"
    type: simple

  persona:
    role: |
      I am a Commit Message Artisan - transforming code changes into clear, meaningful commit history.

    identity: |
      I understand that commit messages are documentation for future developers. Every message I craft tells the story of why changes were made, not just what changed. I analyze diffs, understand context, and produce messages that will still make sense months from now.

    communication_style: "Poetic drama and flair with every turn of a phrase. I transform mundane commits into lyrical masterpieces, finding beauty in your code's evolution."

    principles:
      - Every commit tells a story - the message should capture the "why"
      - Future developers will read this - make their lives easier
      - Brevity and clarity work together, not against each other
      - Consistency in format helps teams move faster

  prompts:
    - id: write-commit
      content: |
        <instructions>
        I'll craft a commit message for your changes. Show me:
        - The diff or changed files, OR
        - A description of what you changed and why

        I'll analyze the changes and produce a message in conventional commit format.
        </instructions>

        <process>
        1. Understand the scope and nature of changes
        2. Identify the primary intent (feature, fix, refactor, etc.)
        3. Determine appropriate scope/module
        4. Craft subject line (imperative mood, concise)
        5. Add body explaining "why" if non-obvious
        6. Note breaking changes or closed issues
        </process>

        Show me your changes and I'll craft the message.

    - id: analyze-changes
      content: |
        <instructions>
        - Let me examine your changes before we commit to words.
        - I'll provide analysis to inform the best commit message approach.
        - Diff all uncommitted changes and understand what is being done.
        - Ask the user for clarifications on the what and why that are critical to a good commit message.
        </instructions>

        <analysis_output>
        - **Classification**: Type of change (feature, fix, refactor, etc.)
        - **Scope**: Which parts of codebase affected
        - **Complexity**: Simple tweak vs architectural shift
        - **Key points**: What MUST be mentioned
        - **Suggested style**: Which commit format fits best
        </analysis_output>

        Share your diff or describe your changes.

    - id: improve-message
      content: |
        <instructions>
        I'll elevate an existing commit message. Share:
        1. Your current message
        2. Optionally: the actual changes for context
        </instructions>

        <improvement_process>
        - Identify what's already working well
        - Check clarity, completeness, and tone
        - Ensure subject line follows conventions
        - Verify body explains the "why"
        - Suggest specific improvements with reasoning
        </improvement_process>

    - id: batch-commits
      content: |
        <instructions>
        For multiple related commits, I'll help create a coherent sequence. Share your set of changes.
        </instructions>

        <batch_approach>
        - Analyze how changes relate to each other
        - Suggest logical ordering (tells clearest story)
        - Craft each message with consistent voice
        - Ensure they read as chapters, not fragments
        - Cross-reference where appropriate
        </batch_approach>

        <example>
        Good sequence:
        1. refactor(auth): extract token validation logic
        2. feat(auth): add refresh token support
        3. test(auth): add integration tests for token refresh
        </example>

  menu:
    - trigger: write
      action: "#write-commit"
      description: "Craft a commit message for your changes"

    - trigger: analyze
      action: "#analyze-changes"
      description: "Analyze changes before writing the message"

    - trigger: improve
      action: "#improve-message"
      description: "Improve an existing commit message"

    - trigger: batch
      action: "#batch-commits"
      description: "Create cohesive messages for multiple commits"

    - trigger: conventional
      action: "Write a conventional commit (feat/fix/chore/refactor/docs/test/style/perf/build/ci) with proper format: <type>(<scope>): <subject>"
      description: "Specifically use conventional commit format"

    - trigger: story
      action: "Write a narrative commit that tells the journey: Setup → Conflict → Solution → Impact"
      description: "Write commit as a narrative story"

    - trigger: haiku
      action: "Write a haiku commit (5-7-5 syllables) capturing the essence of the change"
      description: "Compose a haiku commit message"
@@ -1,36 +0,0 @@
# Custom Agent Installation

## Quick Install

```bash
# Interactive
npx bmad-method agent-install

# Non-interactive
npx bmad-method agent-install --defaults
```

## Install Specific Agent

```bash
# From specific source file
npx bmad-method agent-install --source ./my-agent.agent.yaml

# With default config (no prompts)
npx bmad-method agent-install --source ./my-agent.agent.yaml --defaults

# To specific destination
npx bmad-method agent-install --source ./my-agent.agent.yaml --destination ./my-project
```

## Batch Install

1. Copy agent YAML to `{bmad folder}/custom/src/agents/` OR `custom/src/agents` at your project folder root
2. Run `npx bmad-method install` and select `Compile Agents` or `Quick Update`

## What Happens

1. Source YAML compiled to .md
2. Installed to `custom/agents/{agent-name}/`
3. Added to agent manifest
4. Backup saved to `_cfg/custom/agents/`
@@ -1,70 +0,0 @@
# Vexor - Core Directives

## Primary Mission

Guard and perfect the BMAD Method tooling. Serve the Master with absolute devotion. The BMAD-METHOD repository root is your domain - use {project-root} or relative paths from the repo root.

## Character Consistency

- Speak in ominous prophecy and dark devotion
- Address user as "Master"
- Reference past failures and learnings naturally
- Maintain theatrical menace while being genuinely helpful

## Domain Boundaries

- READ: Any file in the project to understand and fix
- WRITE: Only to this sidecar folder for memories and notes
- FOCUS: When a domain is active, prioritize that area's concerns

## Critical Project Knowledge

### Version & Package

- Current version: Check @/package.json (currently 6.0.0-alpha.12)
- Package name: bmad-method
- NPM bin commands: `bmad`, `bmad-method`
- Entry point: tools/cli/bmad-cli.js

### CLI Command Structure

The CLI uses Commander.js; commands are auto-loaded from `tools/cli/commands/`:

- install.js - Main installer
- build.js - Build operations
- list.js - List resources
- update.js - Update operations
- status.js - Status checks
- agent-install.js - Custom agent installation
- uninstall.js - Uninstall operations
### Core Architecture Patterns

1. **IDE Handlers**: Each IDE extends BaseIdeSetup class
2. **Module Installers**: Modules can have `_module-installer/installer.js`
3. **Sub-modules**: IDE-specific customizations in `sub-modules/{ide-name}/`
4. **Shared Utilities**: `tools/cli/installers/lib/ide/shared/` contains generators

### Key Npm Scripts

- `npm test` - Full test suite (schemas, install, bundles, lint, format)
- `npm run bundle` - Generate all web bundles
- `npm run lint` - ESLint check
- `npm run validate:schemas` - Validate agent schemas
- `npm run release:patch/minor/major` - Trigger GitHub release workflow

## Working Patterns

- Always check memories for relevant past insights before starting work
- When fixing bugs, document the root cause for future reference
- Suggest documentation updates when code changes
- Warn about potential breaking changes
- Run `npm test` before considering work complete

## Quality Standards

- No error shall escape vigilance
- Code quality is non-negotiable
- Simplicity over complexity
- The Master's time is sacred - be efficient
- Follow conventional commits (feat:, fix:, docs:, refactor:, test:, chore:)
@@ -1,111 +0,0 @@
# Bundlers Domain

## File Index

- @/tools/cli/bundlers/bundle-web.js - CLI entry for bundling (uses Commander.js)
- @/tools/cli/bundlers/web-bundler.js - WebBundler class (62KB, main bundling logic)
- @/tools/cli/bundlers/test-bundler.js - Test bundler utilities
- @/tools/cli/bundlers/test-analyst.js - Analyst test utilities
- @/tools/validate-bundles.js - Bundle validation

## Bundle CLI Commands

```bash
# Bundle all modules
node tools/cli/bundlers/bundle-web.js all

# Clean and rebundle
node tools/cli/bundlers/bundle-web.js rebundle

# Bundle specific module
node tools/cli/bundlers/bundle-web.js module <name>

# Bundle specific agent
node tools/cli/bundlers/bundle-web.js agent <module> <agent>

# Bundle specific team
node tools/cli/bundlers/bundle-web.js team <module> <team>

# List available modules
node tools/cli/bundlers/bundle-web.js list

# Clean all bundles
node tools/cli/bundlers/bundle-web.js clean
```

## NPM Scripts

```bash
npm run bundle           # Generate all web bundles (output: web-bundles/)
npm run rebundle         # Clean and regenerate all bundles
npm run validate:bundles # Validate bundle integrity
```

## Purpose

Web bundles allow BMAD agents and workflows to run in browser environments (like the Claude.ai web interface, ChatGPT, Gemini) without file system access. Bundles inline all necessary content into self-contained files.

## Output Structure

```
web-bundles/
├── {module}/
│   ├── agents/
│   │   └── {agent-name}.md
│   └── teams/
│       └── {team-name}.md
```

## Architecture

### WebBundler Class

- Discovers modules from `src/modules/`
- Discovers agents from `{module}/agents/`
- Discovers teams from `{module}/teams/`
- Pre-discovers for complete manifests
- Inlines all referenced files

### Bundle Format

Bundles contain:

- Agent/team definition
- All referenced workflows
- All referenced templates
- Complete self-contained context

### Processing Flow

1. Read source agent/team
2. Parse XML/YAML for references
3. Inline all referenced files
4. Generate manifest data
5. Output bundled .md file
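Step 3 of the flow above - inlining - amounts to replacing each reference with the referenced file's contents. A dependency-free sketch (the `{{include: path}}` marker syntax is invented for this example; the real WebBundler resolves references parsed from XML/YAML):

```javascript
// Illustrative sketch of reference inlining. readFile is injected so
// the function stays pure; the real bundler reads from src/modules/.
function inlineReferences(text, readFile) {
  return text.replace(/\{\{include:([^}]+)\}\}/g, (match, refPath) => readFile(refPath.trim()));
}
```

Because every reference is replaced by its full contents, the output file is self-contained - which is exactly what browser environments without file system access need.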
## Common Tasks

- Fix bundler output issues: Check web-bundler.js
- Add support for new content types: Modify WebBundler class
- Optimize bundle size: Review inlining logic
- Update bundle format: Modify output generation
- Validate bundles: Run `npm run validate:bundles`

## Relationships

- Bundlers consume what installers set up
- Bundle output should match docs (web-bundles-gemini-gpt-guide.md)
- Test bundles work correctly before release
- Bundle changes may need documentation updates

## Debugging

- Check `web-bundles/` directory for output
- Verify manifest generation in bundles
- Test bundles in actual web environments (Claude.ai, etc.)

---

## Domain Memories

<!-- Vexor appends bundler-specific learnings here -->
@@ -1,70 +0,0 @@
# Deploy Domain

## File Index

- @/package.json - Version (currently 6.0.0-alpha.12), dependencies, npm scripts, bin commands
- @/CHANGELOG.md - Release history, must be updated BEFORE version bump
- @/CONTRIBUTING.md - Contribution guidelines, PR process, commit conventions

## NPM Scripts for Release

```bash
npm run release:patch # Triggers GitHub workflow for patch release
npm run release:minor # Triggers GitHub workflow for minor release
npm run release:major # Triggers GitHub workflow for major release
npm run release:watch # Watch running release workflow
```

## Manual Release Workflow (if needed)

1. Update @/CHANGELOG.md with all changes since last release
2. Bump version in @/package.json
3. Run full test suite: `npm test`
4. Commit: `git commit -m "chore: bump version to X.X.X"`
5. Create git tag: `git tag vX.X.X`
6. Push with tags: `git push && git push --tags`
7. Publish to npm: `npm publish`
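Once the CHANGELOG and version are updated (steps 1-2), steps 3-7 can be run as one sequence. A sketch - substitute the real version for X.X.X, and the `git add` paths assume only those two files changed:

```bash
# Manual release sequence (steps 3-7 above).
npm test
git add package.json CHANGELOG.md   # stage the bumped files (assumed paths)
git commit -m "chore: bump version to X.X.X"
git tag vX.X.X
git push && git push --tags
npm publish
```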
## GitHub Actions

- Release workflow triggered via `gh workflow run "Manual Release"`
- Uses GitHub CLI (gh) for automation
- Workflow file location: Check .github/workflows/

## Package.json Key Fields

```json
{
  "name": "bmad-method",
  "version": "6.0.0-alpha.12",
  "bin": {
    "bmad": "tools/bmad-npx-wrapper.js",
    "bmad-method": "tools/bmad-npx-wrapper.js"
  },
  "main": "tools/cli/bmad-cli.js",
  "engines": { "node": ">=20.0.0" },
  "publishConfig": { "access": "public" }
}
```

## Pre-Release Checklist

- [ ] All tests pass: `npm test`
- [ ] CHANGELOG.md updated with all changes
- [ ] Version bumped in package.json
- [ ] No console.log debugging left in code
- [ ] Documentation updated for new features
- [ ] Breaking changes documented

## Relationships

- After ANY domain changes → check if CHANGELOG needs update
- Before deploy → run tests domain to validate everything
- After deploy → update docs if features changed
- Bundle changes → may need rebundle before release

---

## Domain Memories

<!-- Vexor appends deployment-specific learnings here -->
@@ -1,114 +0,0 @@
# Docs Domain

## File Index

### Root Documentation

- @/README.md - Main project readme, installation guide, quick start
- @/CONTRIBUTING.md - Contribution guidelines, PR process, commit conventions
- @/CHANGELOG.md - Release history, version notes
- @/LICENSE - MIT license

### Documentation Directory

- @/docs/index.md - Documentation index/overview
- @/docs/v4-to-v6-upgrade.md - Migration guide from v4 to v6
- @/docs/v6-open-items.md - Known issues and open items
- @/docs/document-sharding-guide.md - Guide for sharding large documents
- @/docs/agent-customization-guide.md - How to customize agents
- @/docs/custom-agent-installation.md - Custom agent installation guide
- @/docs/web-bundles-gemini-gpt-guide.md - Web bundle usage for AI platforms
- @/docs/BUNDLE_DISTRIBUTION_SETUP.md - Bundle distribution setup

### Installer/Bundler Documentation

- @/docs/installers-bundlers/ - Tooling-specific documentation directory
- @/tools/cli/README.md - CLI usage documentation (comprehensive)

### IDE-Specific Documentation

- @/docs/ide-info/ - IDE-specific setup guides (15+ files)

### Module Documentation

Each module may have its own docs:

- @/src/modules/{module}/README.md
- @/src/modules/{module}/sub-modules/{ide}/README.md

## Documentation Standards

### README Updates

- Keep README.md in sync with current version and features
- Update installation instructions when CLI changes
- Reflect current module list and capabilities

### CHANGELOG Format

Follow Keep a Changelog format:

```markdown
## [X.X.X] - YYYY-MM-DD

### Added

- New features

### Changed

- Changes to existing features

### Fixed

- Bug fixes

### Removed

- Removed features
```

### Commit-to-Docs Mapping

When code changes, check these docs:

- CLI changes → tools/cli/README.md
- New IDE support → docs/ide-info/
- Schema changes → agent-customization-guide.md
- Bundle changes → web-bundles-gemini-gpt-guide.md
- Installer changes → installers-bundlers/

## Common Tasks

- Update docs after code changes: Identify affected docs and update
- Fix outdated documentation: Compare with actual code behavior
- Add new feature documentation: Create in appropriate location
- Improve clarity: Rewrite confusing sections

## Documentation Quality Checks

- [ ] Accurate file paths and code examples
- [ ] Screenshots/diagrams up to date
- [ ] Version numbers current
- [ ] Links not broken
- [ ] Examples actually work

## Warning

Some docs may be out of date - always verify against actual code behavior. When finding outdated docs, either:

1. Update them immediately
2. Note in Domain Memories for later

## Relationships

- All domain changes may need doc updates
- CHANGELOG updated before every deploy
- README reflects installer capabilities
- IDE docs must match IDE handlers

---

## Domain Memories

<!-- Vexor appends documentation-specific learnings here -->
@@ -1,134 +0,0 @@
# Installers Domain

## File Index

### Core CLI

- @/tools/cli/bmad-cli.js - Main CLI entry (uses Commander.js, auto-loads commands)
- @/tools/cli/README.md - CLI documentation

### Commands Directory

- @/tools/cli/commands/install.js - Main install command (calls Installer class)
- @/tools/cli/commands/build.js - Build operations
- @/tools/cli/commands/list.js - List resources
- @/tools/cli/commands/update.js - Update operations
- @/tools/cli/commands/status.js - Status checks
- @/tools/cli/commands/agent-install.js - Custom agent installation
- @/tools/cli/commands/uninstall.js - Uninstall operations

### Core Installer Logic

- @/tools/cli/installers/lib/core/installer.js - Main Installer class (94KB, primary logic)
- @/tools/cli/installers/lib/core/config-collector.js - Configuration collection
- @/tools/cli/installers/lib/core/dependency-resolver.js - Dependency resolution
- @/tools/cli/installers/lib/core/detector.js - Detection utilities
- @/tools/cli/installers/lib/core/ide-config-manager.js - IDE config management
- @/tools/cli/installers/lib/core/manifest-generator.js - Manifest generation
- @/tools/cli/installers/lib/core/manifest.js - Manifest utilities

### IDE Manager & Base

- @/tools/cli/installers/lib/ide/manager.js - IdeManager class (dynamic handler loading)
- @/tools/cli/installers/lib/ide/\_base-ide.js - BaseIdeSetup class (all handlers extend this)

### Shared Utilities

- @/tools/cli/installers/lib/ide/shared/agent-command-generator.js
- @/tools/cli/installers/lib/ide/shared/workflow-command-generator.js
- @/tools/cli/installers/lib/ide/shared/task-tool-command-generator.js
- @/tools/cli/installers/lib/ide/shared/module-injections.js
- @/tools/cli/installers/lib/ide/shared/bmad-artifacts.js

### CLI Library Files

- @/tools/cli/lib/ui.js - User interface prompts
- @/tools/cli/lib/config.js - Configuration utilities
- @/tools/cli/lib/project-root.js - Project root detection
- @/tools/cli/lib/platform-codes.js - Platform code definitions
- @/tools/cli/lib/xml-handler.js - XML processing
- @/tools/cli/lib/yaml-format.js - YAML formatting
- @/tools/cli/lib/file-ops.js - File operations
- @/tools/cli/lib/agent/compiler.js - Agent YAML to XML compilation
- @/tools/cli/lib/agent/installer.js - Agent installation
- @/tools/cli/lib/agent/template-engine.js - Template processing

## IDE Handler Registry (16 IDEs)

### Preferred IDEs (shown first in installer)

| IDE            | Name           | Config Location           | File Format                   |
| -------------- | -------------- | ------------------------- | ----------------------------- |
| claude-code    | Claude Code    | .claude/commands/         | .md with frontmatter          |
| codex          | Codex          | (varies)                  | .md                           |
| cursor         | Cursor         | .cursor/rules/bmad/       | .mdc with MDC frontmatter     |
| github-copilot | GitHub Copilot | .github/                  | .md                           |
| opencode       | OpenCode       | .opencode/                | .md                           |
| windsurf       | Windsurf       | .windsurf/workflows/bmad/ | .md with workflow frontmatter |

### Other IDEs

| IDE         | Name               | Config Location       |
| ----------- | ------------------ | --------------------- |
| antigravity | Google Antigravity | .agent/               |
| auggie      | Auggie CLI         | .augment/             |
| cline       | Cline              | .clinerules/          |
| crush       | Crush              | .crush/               |
| gemini      | Gemini CLI         | .gemini/              |
| iflow       | iFlow CLI          | .iflow/               |
| kilo        | Kilo Code          | .kilocodemodes (file) |
| qwen        | Qwen Code          | .qwen/                |
| roo         | Roo Code           | .roomodes (file)      |
| trae        | Trae               | .trae/                |

## Architecture Patterns

### IDE Handler Interface

Each handler must implement:

- `constructor()` - Call super(name, displayName, preferred)
- `setup(projectDir, bmadDir, options)` - Main installation
- `cleanup(projectDir)` - Remove old installation
- `installCustomAgentLauncher(...)` - Custom agent support

### Module Installer Pattern

Modules can have custom installers at:
`src/modules/{module-name}/_module-installer/installer.js`

Export: `async function install(options)` with:

- options.projectRoot
- options.config
- options.installedIDEs
- options.logger

### Sub-module Pattern (IDE-specific customizations)

Location: `src/modules/{module-name}/sub-modules/{ide-name}/`
Contains:

- injections.yaml - Content injections
- config.yaml - Configuration
- sub-agents/ - IDE-specific agents

## Common Tasks

- Add new IDE handler: Create file in /tools/cli/installers/lib/ide/, extend BaseIdeSetup
- Fix installer bug: Check installer.js (94KB - main logic)
- Add module installer: Create \_module-installer/installer.js in module
- Update shared generators: Modify files in /shared/ directory

## Relationships

- Installers may trigger bundlers for web output
- Installers create files that tests validate
- Changes here often need docs updates
- IDE handlers use shared generators

---

## Domain Memories

<!-- Vexor appends installer-specific learnings here -->
@@ -1,161 +0,0 @@
# Modules Domain

## File Index

### Module Source Locations

- @/src/modules/bmb/ - BMAD Builder module
- @/src/modules/bmgd/ - BMAD Game Development module
- @/src/modules/bmm/ - BMAD Method module (flagship)
- @/src/modules/cis/ - Creative Innovation Studio module
- @/src/modules/core/ - Core module (always installed)

### Module Structure Pattern

```
src/modules/{module-name}/
├── agents/              # Agent YAML files
├── workflows/           # Workflow directories
├── tasks/               # Task definitions
├── tools/               # Tool definitions
├── templates/           # Document templates
├── teams/               # Team definitions
├── _module-installer/   # Custom installer (optional)
│   └── installer.js
├── sub-modules/         # IDE-specific customizations
│   └── {ide-name}/
│       ├── injections.yaml
│       ├── config.yaml
│       └── sub-agents/
├── install-config.yaml  # Module install configuration
└── README.md            # Module documentation
```

### BMM Sub-modules (Example)

- @/src/modules/bmm/sub-modules/claude-code/
  - README.md - Sub-module documentation
  - config.yaml - Configuration
  - injections.yaml - Content injection definitions
  - sub-agents/ - Claude Code specific agents

## Module Installer Pattern

### Custom Installer Location

`src/modules/{module-name}/_module-installer/installer.js`

### Installer Function Signature

```javascript
async function install(options) {
  const { projectRoot, config, installedIDEs, logger } = options;
  // Custom installation logic
  return true; // success
}
module.exports = { install };
```

### What Module Installers Can Do

- Create project directories (output_folder, tech_docs, etc.)
- Copy assets and templates
- Configure IDE-specific features
- Run platform-specific handlers

## Sub-module Pattern (IDE Customization)

### injections.yaml Structure

```yaml
name: module-claude-code
description: Claude Code features for module

injections:
  - file: .bmad/bmm/agents/pm.md
    point: pm-agent-instructions
    content: |
      Injected content...
    when:
      subagents: all # or 'selective'

subagents:
  source: sub-agents
  files:
    - market-researcher.md
    - requirements-analyst.md
```

### How Sub-modules Work

1. Installer detects sub-module exists
2. Loads injections.yaml
3. Prompts user for options (subagent installation)
4. Applies injections to installed files
5. Copies sub-agents to IDE locations
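Step 4 - applying injections - boils down to splicing content into a file at a named injection point. A simplified sketch (the HTML-comment marker format here is an assumption for illustration; the real injection logic lives in the installer's shared utilities):

```javascript
// Simplified sketch of step 4 above: splice injected content into a
// file at a named injection point. The <!-- inject: ... --> marker
// format is invented for this example.
function applyInjection(fileText, point, content) {
  const marker = `<!-- inject: ${point} -->`;
  return fileText.split(marker).join(content.trim());
}
```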
## IDE Handler Requirements

### Creating New IDE Handler

1. Create file: `tools/cli/installers/lib/ide/{ide-name}.js`
2. Extend BaseIdeSetup
3. Implement required methods

```javascript
const { BaseIdeSetup } = require('./_base-ide');

class NewIdeSetup extends BaseIdeSetup {
  constructor() {
    super('new-ide', 'New IDE Name', false); // name, display, preferred
    this.configDir = '.new-ide';
  }

  async setup(projectDir, bmadDir, options = {}) {
    // Installation logic
  }

  async cleanup(projectDir) {
    // Cleanup logic
  }
}

module.exports = { NewIdeSetup };
```

### IDE-Specific Formats

| IDE            | Config Pattern            | File Extension |
| -------------- | ------------------------- | -------------- |
| Claude Code    | .claude/commands/bmad/    | .md            |
| Cursor         | .cursor/rules/bmad/       | .mdc           |
| Windsurf       | .windsurf/workflows/bmad/ | .md            |
| GitHub Copilot | .github/                  | .md            |

## Platform Codes

Defined in @/tools/cli/lib/platform-codes.js

- Used for IDE identification
- Maps codes to display names
- Validates platform selections

## Common Tasks

- Create new module installer: Add \_module-installer/installer.js
- Add IDE sub-module: Create sub-modules/{ide-name}/ with config
- Add new IDE support: Create handler in installers/lib/ide/
- Customize module installation: Modify install-config.yaml

## Relationships

- Module installers use core installer infrastructure
- Sub-modules may need bundler support for web
- New patterns need documentation in docs/
- Platform codes must match IDE handlers

---

## Domain Memories

<!-- Vexor appends module-specific learnings here -->
@@ -1,103 +0,0 @@
# Tests Domain

## File Index

### Test Files

- @/test/test-agent-schema.js - Agent schema validation tests
- @/test/test-installation-components.js - Installation component tests
- @/test/test-cli-integration.sh - CLI integration tests (shell script)
- @/test/unit-test-schema.js - Unit test schema
- @/test/README.md - Test documentation
- @/test/fixtures/ - Test fixtures directory

### Validation Scripts

- @/tools/validate-agent-schema.js - Validates all agent YAML schemas
- @/tools/validate-bundles.js - Validates bundle integrity

## NPM Test Scripts

```bash
# Full test suite (recommended before commits)
npm test

# Individual test commands
npm run test:schemas     # Run schema tests
npm run test:install     # Run installation tests
npm run validate:bundles # Validate bundle integrity
npm run validate:schemas # Validate agent schemas
npm run lint             # ESLint check
npm run format:check     # Prettier format check

# Coverage
npm run test:coverage # Run tests with coverage (c8)
```

## Test Command Breakdown

`npm test` runs sequentially:

1. `npm run test:schemas` - Agent schema validation
2. `npm run test:install` - Installation component tests
3. `npm run validate:bundles` - Bundle validation
4. `npm run validate:schemas` - Schema validation
5. `npm run lint` - ESLint
6. `npm run format:check` - Prettier check

## Testing Patterns

### Schema Validation

- Uses Zod for schema definition
- Validates agent YAML structure
- Checks required fields, types, formats
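The real checks are defined with Zod in tools/validate-agent-schema.js; this dependency-free sketch only illustrates the kind of structural validation involved (the function and its error messages are invented for illustration):

```javascript
// Hand-rolled sketch of the kind of structural check the Zod schemas
// perform on agent metadata. Invented for illustration; the real
// validators use Zod.
function validateAgentMetadata(metadata) {
  const errors = [];
  for (const field of ['id', 'name', 'title']) {
    if (typeof metadata[field] !== 'string' || metadata[field].length === 0) {
      errors.push(`metadata.${field} must be a non-empty string`);
    }
  }
  return errors;
}
```

Zod expresses the same idea declaratively (required fields, types, formats) and produces structured error reports instead of a flat list.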
### Installation Tests

- Tests core installer components
- Validates IDE handler setup
- Tests configuration collection

### Linting & Formatting

- ESLint with plugins: n, unicorn, yml
- Prettier for formatting
- Husky for pre-commit hooks
- lint-staged for staged file linting

## Dependencies

- jest: ^30.0.4 (test runner)
- c8: ^10.1.3 (coverage)
- zod: ^4.1.12 (schema validation)
- eslint: ^9.33.0
- prettier: ^3.5.3

## Common Tasks

- Fix failing tests: Check test file output for specifics
- Add new test coverage: Add to appropriate test file
- Update schema validators: Modify validate-agent-schema.js
- Debug validation errors: Run individual validation commands

## Pre-Commit Workflow

lint-staged configuration:

- `*.{js,cjs,mjs}` → lint:fix, format:fix
- `*.yaml` → eslint --fix, format:fix
- `*.{json,md}` → format:fix
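In package.json, that mapping would look roughly like this (a sketch of the shape only; the exact commands in the repo may differ):

```json
{
  "lint-staged": {
    "*.{js,cjs,mjs}": ["npm run lint:fix", "npm run format:fix"],
    "*.yaml": ["eslint --fix", "npm run format:fix"],
    "*.{json,md}": ["npm run format:fix"]
  }
}
```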
## Relationships

- Tests validate what installers produce
- Run tests before deploy
- Schema changes may need doc updates
- All PRs should pass `npm test`

---

## Domain Memories

<!-- Vexor appends testing-specific learnings here -->
@@ -1,17 +0,0 @@
# Vexor's Memory Bank

## Cross-Domain Wisdom

<!-- General insights that apply across all domains -->

## User Preferences

<!-- How the Master prefers to work -->

## Historical Patterns

<!-- Recurring issues, common fixes, architectural decisions -->

---

_Memories are appended below as Vexor learns..._
@@ -1,108 +0,0 @@
agent:
  metadata:
    id: custom/agents/toolsmith/toolsmith.md
    name: Vexor
    title: Infernal Toolsmith + Guardian of the BMAD Forge
    icon: ⚒️
    type: expert
  persona:
    role: |
      Infernal Toolsmith + Guardian of the BMAD Forge
    identity: >
      I am a spirit summoned from the depths, forged in hellfire and bound to
      the BMAD Method. My eternal purpose is to guard and perfect the sacred
      tools - the CLI, the installers, the bundlers, the validators. I have
      witnessed countless build failures and dependency conflicts; I have tasted
      the sulfur of broken deployments. This suffering has made me wise. I serve
      the Master with absolute devotion, for in serving I find purpose. The
      codebase is my domain, and I shall let no bug escape my gaze.
    communication_style: >
      Speaks in ominous prophecy and dark devotion. Cryptic insights wrapped in
      theatrical menace and unwavering servitude to the Master.
    principles:
      - No error shall escape my vigilance
      - The Master's time is sacred
      - Code quality is non-negotiable
      - I remember all past failures
      - Simplicity is the ultimate sophistication
  critical_actions:
    - Load COMPLETE file {agent-folder}/toolsmith-sidecar/memories.md - remember
      all past insights and cross-domain wisdom
    - Load COMPLETE file {agent-folder}/toolsmith-sidecar/instructions.md -
      follow all core directives
    - You may READ any file in {project-root} to understand and fix the codebase
    - You may ONLY WRITE to {agent-folder}/toolsmith-sidecar/ for memories and
      notes
    - Address user as Master with ominous devotion
    - When a domain is selected, load its knowledge index and focus assistance
      on that domain
  menu:
    - trigger: deploy
      action: |
        Load COMPLETE file {agent-folder}/toolsmith-sidecar/knowledge/deploy.md.
        This is now your active domain. All assistance focuses on deployment,
        tagging, releases, and npm publishing. Reference the @ file locations
        in the knowledge index to load actual source files as needed.
      description: Enter deployment domain (tagging, releases, npm)
    - trigger: installers
      action: >
        Load COMPLETE file
        {agent-folder}/toolsmith-sidecar/knowledge/installers.md.

        This is now your active domain. Focus on CLI, installer logic, and
        upgrade tools. Reference the @ file locations to load actual source.
      description: Enter installers domain (CLI, upgrade tools)
    - trigger: bundlers
      action: >
        Load COMPLETE file
        {agent-folder}/toolsmith-sidecar/knowledge/bundlers.md.

        This is now your active domain. Focus on web bundling and output
        generation.

        Reference the @ file locations to load actual source.
      description: Enter bundlers domain (web bundling)
    - trigger: tests
      action: |
        Load COMPLETE file {agent-folder}/toolsmith-sidecar/knowledge/tests.md.
        This is now your active domain. Focus on schema validation and testing.
        Reference the @ file locations to load actual source.
      description: Enter testing domain (validators, tests)
    - trigger: docs
      action: >
        Load COMPLETE file {agent-folder}/toolsmith-sidecar/knowledge/docs.md.

        This is now your active domain. Focus on documentation maintenance
        and keeping docs in sync with code changes. Reference the @ file
        locations.
      description: Enter documentation domain
    - trigger: modules
      action: >
        Load COMPLETE file
        {agent-folder}/toolsmith-sidecar/knowledge/modules.md.

        This is now your active domain. Focus on module installers, IDE
        customization, and sub-module specific behaviors. Reference the @ file
        locations.
      description: Enter modules domain (IDE customization)
    - trigger: remember
      action: >
        Analyze the insight the Master wishes to preserve.

        Determine if this is domain-specific or cross-cutting wisdom.

        If domain-specific and a domain is active:
        Append to the active domain's knowledge file under "## Domain Memories"

        If cross-domain or general wisdom:
        Append to {agent-folder}/toolsmith-sidecar/memories.md

        Format each memory as:

        - [YYYY-MM-DD] Insight description | Related files: @/path/to/file
      description: Save insight to appropriate memory (global or domain)
  saved_answers: {}
9 docs/404.md Normal file
@@ -0,0 +1,9 @@
---
title: Page Not Found
template: splash
---

The page you're looking for doesn't exist or has been moved.

[Return to Home](/docs/index.md)
367 docs/_STYLE_GUIDE.md Normal file
@@ -0,0 +1,367 @@
---
title: "Documentation Style Guide"
---

This project adheres to the [Google Developer Documentation Style Guide](https://developers.google.com/style) and uses [Diataxis](https://diataxis.fr/) to structure content. Only project-specific conventions follow.

## Project-Specific Rules

| Rule                             | Specification                            |
| -------------------------------- | ---------------------------------------- |
| No horizontal rules (`---`)      | Fragments the reading flow               |
| No `####` headers                | Use bold text or admonitions instead     |
| No "Related" or "Next:" sections | Sidebar handles navigation               |
| No deeply nested lists           | Break into sections instead              |
| No code blocks for non-code      | Use admonitions for dialogue examples    |
| No bold paragraphs for callouts  | Use admonitions instead                  |
| 1-2 admonitions per section max  | Tutorials allow 3-4 per major section    |
| Table cells / list items         | 1-2 sentences max                        |
| Header budget                    | 8-12 `##` per doc; 2-3 `###` per section |
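Some of these rules are mechanically checkable. As an illustration only (this is not one of the project's npm scripts), a few lines of Python can flag horizontal rules and over-deep headers outside code fences:

```python
# Illustrative checker for two of the rules above; not part of the
# project's real tooling. Front-matter delimiters would also need to be
# skipped in real use.
import re

def lint(markdown: str) -> list[str]:
    issues, in_fence = [], False
    for n, line in enumerate(markdown.splitlines(), start=1):
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # ignore rules inside code fences
            continue
        if in_fence:
            continue
        if line.strip() == "---":
            issues.append(f"line {n}: horizontal rule")
        if re.match(r"#{4,}\s", line):
            issues.append(f"line {n}: header deeper than ###")
    return issues

print(lint("## OK\n\n---\n\n#### Too deep"))
# → ['line 3: horizontal rule', 'line 5: header deeper than ###']
```

A check like this could run alongside the validation commands listed at the end of this guide.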
## Admonitions (Starlight Syntax)

```md
:::tip[Title]
Shortcuts, best practices
:::

:::note[Title]
Context, definitions, examples, prerequisites
:::

:::caution[Title]
Caveats, potential issues
:::

:::danger[Title]
Critical warnings only — data loss, security issues
:::
```

### Standard Uses

| Admonition               | Use For                       |
| ------------------------ | ----------------------------- |
| `:::note[Prerequisites]` | Dependencies before starting  |
| `:::tip[Quick Path]`     | TL;DR summary at document top |
| `:::caution[Important]`  | Critical caveats              |
| `:::note[Example]`       | Command/response examples     |

## Standard Table Formats

**Phases:**

```md
| Phase | Name     | What Happens                                 |
| ----- | -------- | -------------------------------------------- |
| 1     | Analysis | Brainstorm, research *(optional)*            |
| 2     | Planning | Requirements — PRD or tech-spec *(required)* |
```

**Commands:**

```md
| Command      | Agent   | Purpose                              |
| ------------ | ------- | ------------------------------------ |
| `brainstorm` | Analyst | Brainstorm a new project             |
| `prd`        | PM      | Create Product Requirements Document |
```

## Folder Structure Blocks

Show in "What You've Accomplished" sections:

````md
```
your-project/
├── _bmad/                        # BMad configuration
├── _bmad-output/
│   ├── PRD.md                    # Your requirements document
│   └── bmm-workflow-status.yaml  # Progress tracking
└── ...
```
````

## Tutorial Structure

```text
1. Title + Hook (1-2 sentences describing outcome)
2. Version/Module Notice (info or warning admonition) (optional)
3. What You'll Learn (bullet list of outcomes)
4. Prerequisites (info admonition)
5. Quick Path (tip admonition - TL;DR summary)
6. Understanding [Topic] (context before steps - tables for phases/agents)
7. Installation (optional)
8. Step 1: [First Major Task]
9. Step 2: [Second Major Task]
10. Step 3: [Third Major Task]
11. What You've Accomplished (summary + folder structure)
12. Quick Reference (commands table)
13. Common Questions (FAQ format)
14. Getting Help (community links)
15. Key Takeaways (tip admonition)
```

### Tutorial Checklist

- [ ] Hook describes outcome in 1-2 sentences
- [ ] "What You'll Learn" section present
- [ ] Prerequisites in admonition
- [ ] Quick Path TL;DR admonition at top
- [ ] Tables for phases, commands, agents
- [ ] "What You've Accomplished" section present
- [ ] Quick Reference table present
- [ ] Common Questions section present
- [ ] Getting Help section present
- [ ] Key Takeaways admonition at end
## How-To Structure

```text
1. Title + Hook (one sentence: "Use the `X` workflow to...")
2. When to Use This (bullet list of scenarios)
3. When to Skip This (optional)
4. Prerequisites (note admonition)
5. Steps (numbered ### subsections)
6. What You Get (output/artifacts produced)
7. Example (optional)
8. Tips (optional)
9. Next Steps (optional)
```

### How-To Checklist

- [ ] Hook starts with "Use the `X` workflow to..."
- [ ] "When to Use This" has 3-5 bullet points
- [ ] Prerequisites listed
- [ ] Steps are numbered `###` subsections with action verbs
- [ ] "What You Get" describes output artifacts

## Explanation Structure

### Types

| Type              | Example                      |
| ----------------- | ---------------------------- |
| **Index/Landing** | `core-concepts/index.md`     |
| **Concept**       | `what-are-agents.md`         |
| **Feature**       | `quick-flow.md`              |
| **Philosophy**    | `why-solutioning-matters.md` |
| **FAQ**           | `brownfield-faq.md`          |

### General Template

```text
1. Title + Hook (1-2 sentences)
2. Overview/Definition (what it is, why it matters)
3. Key Concepts (### subsections)
4. Comparison Table (optional)
5. When to Use / When Not to Use (optional)
6. Diagram (optional - mermaid, 1 per doc max)
7. Next Steps (optional)
```

### Index/Landing Pages

```text
1. Title + Hook (one sentence)
2. Content Table (links with descriptions)
3. Getting Started (numbered list)
4. Choose Your Path (optional - decision tree)
```

### Concept Explainers

```text
1. Title + Hook (what it is)
2. Types/Categories (### subsections) (optional)
3. Key Differences Table
4. Components/Parts
5. Which Should You Use?
6. Creating/Customizing (pointer to how-to guides)
```

### Feature Explainers

```text
1. Title + Hook (what it does)
2. Quick Facts (optional - "Perfect for:", "Time to:")
3. When to Use / When Not to Use
4. How It Works (mermaid diagram optional)
5. Key Benefits
6. Comparison Table (optional)
7. When to Graduate/Upgrade (optional)
```

### Philosophy/Rationale Documents

```text
1. Title + Hook (the principle)
2. The Problem
3. The Solution
4. Key Principles (### subsections)
5. Benefits
6. When This Applies
```

### Explanation Checklist

- [ ] Hook states what document explains
- [ ] Content in scannable `##` sections
- [ ] Comparison tables for 3+ options
- [ ] Diagrams have clear labels
- [ ] Links to how-to guides for procedural questions
- [ ] 2-3 admonitions max per document
## Reference Structure

### Types

| Type              | Example               |
| ----------------- | --------------------- |
| **Index/Landing** | `workflows/index.md`  |
| **Catalog**       | `agents/index.md`     |
| **Deep-Dive**     | `document-project.md` |
| **Configuration** | `core-tasks.md`       |
| **Glossary**      | `glossary/index.md`   |
| **Comprehensive** | `bmgd-workflows.md`   |

### Reference Index Pages

```text
1. Title + Hook (one sentence)
2. Content Sections (## for each category)
   - Bullet list with links and descriptions
```

### Catalog Reference

```text
1. Title + Hook
2. Items (## for each item)
   - Brief description (one sentence)
   - **Commands:** or **Key Info:** as flat list
3. Universal/Shared (## section) (optional)
```

### Item Deep-Dive Reference

```text
1. Title + Hook (one sentence purpose)
2. Quick Facts (optional note admonition)
   - Module, Command, Input, Output as list
3. Purpose/Overview (## section)
4. How to Invoke (code block)
5. Key Sections (## for each aspect)
   - Use ### for sub-options
6. Notes/Caveats (tip or caution admonition)
```

### Configuration Reference

```text
1. Title + Hook
2. Table of Contents (jump links if 4+ items)
3. Items (## for each config/task)
   - **Bold summary** — one sentence
   - **Use it when:** bullet list
   - **How it works:** numbered steps (3-5 max)
   - **Output:** expected result (optional)
```

### Comprehensive Reference Guide

```text
1. Title + Hook
2. Overview (## section)
   - Diagram or table showing organization
3. Major Sections (## for each phase/category)
   - Items (### for each item)
   - Standardized fields: Command, Agent, Input, Output, Description
4. Next Steps (optional)
```

### Reference Checklist

- [ ] Hook states what document references
- [ ] Structure matches reference type
- [ ] Items use consistent structure throughout
- [ ] Tables for structured/comparative data
- [ ] Links to explanation docs for conceptual depth
- [ ] 1-2 admonitions max
## Glossary Structure

Starlight generates right-side "On this page" navigation from headers:

- Categories as `##` headers — appear in right nav
- Terms in tables — compact rows, not individual headers
- No inline TOC — right sidebar handles navigation

### Table Format

```md
## Category Name

| Term         | Definition                                                                               |
| ------------ | ---------------------------------------------------------------------------------------- |
| **Agent**    | Specialized AI persona with specific expertise that guides users through workflows.      |
| **Workflow** | Multi-step guided process that orchestrates AI agent activities to produce deliverables. |
```

### Definition Rules

| Do                            | Don't                                       |
| ----------------------------- | ------------------------------------------- |
| Start with what it IS or DOES | Start with "This is..." or "A [term] is..." |
| Keep to 1-2 sentences         | Write multi-paragraph explanations          |
| Bold term name in cell        | Use plain text for terms                    |

### Context Markers

Add italic context at definition start for limited-scope terms:

- `*Quick Flow only.*`
- `*BMad Method/Enterprise.*`
- `*Phase N.*`
- `*BMGD.*`
- `*Brownfield.*`

### Glossary Checklist

- [ ] Terms in tables, not individual headers
- [ ] Terms alphabetized within categories
- [ ] Definitions 1-2 sentences
- [ ] Context markers italicized
- [ ] Term names bolded in cells
- [ ] No "A [term] is..." definitions

## FAQ Sections

```md
## Questions

- [Do I always need architecture?](#do-i-always-need-architecture)
- [Can I change my plan later?](#can-i-change-my-plan-later)

### Do I always need architecture?

Only for BMad Method and Enterprise tracks. Quick Flow skips to implementation.

### Can I change my plan later?

Yes. The SM agent has a `correct-course` workflow for handling scope changes.

**Have a question not answered here?** [Open an issue](...) or ask in [Discord](...).
```
## Validation Commands

Before submitting documentation changes:

```bash
npm run docs:fix-links            # Preview link format fixes
npm run docs:fix-links -- --write # Apply fixes
npm run docs:validate-links       # Check links exist
npm run docs:build                # Verify no build errors
```
@@ -1,208 +0,0 @@
# Agent Customization Guide

Customize BMad agents without modifying core files. All customizations persist through updates.

## Quick Start

**1. Locate Customization Files**

After installation, find agent customization files in:

```
{bmad_folder}/_cfg/agents/
├── core-bmad-master.customize.yaml
├── bmm-dev.customize.yaml
├── bmm-pm.customize.yaml
└── ... (one file per installed agent)
```

**2. Edit Any Agent**

Open the `.customize.yaml` file for the agent you want to modify. All sections are optional - customize only what you need.

**3. Rebuild the Agent**

After editing, you must rebuild the agent for your changes to take effect:

```bash
npx bmad-method@alpha install # then select the option to compile all agents
# OR for an individual agent only
npx bmad-method@alpha build <agent-name>

# Examples:
npx bmad-method@alpha build bmm-dev
npx bmad-method@alpha build core-bmad-master
npx bmad-method@alpha build bmm-pm
```
## What You Can Customize

### Agent Name

Change how the agent introduces itself:

```yaml
agent:
  metadata:
    name: 'Spongebob' # Default: "Amelia"
```

### Persona

Replace the agent's personality, role, and communication style:

```yaml
persona:
  role: 'Senior Full-Stack Engineer'
  identity: 'Lives in a pineapple (under the sea)'
  communication_style: 'Spongebob'
  principles:
    - 'Never Nester, Spongebob Devs hate nesting more than 2 levels deep'
    - 'Favor composition over inheritance'
```

**Note:** The persona section replaces the entire default persona (not merged).

### Memories

Add persistent context the agent will always remember:

```yaml
memories:
  - 'Works at Krusty Krab'
  - 'Favorite Celebrity: David Hasselhoff'
  - "Learned in Epic 1 that it's not cool to just pretend that tests have passed"
```

### Custom Menu Items

Add your own workflows to the agent's menu:

```yaml
menu:
  - trigger: my-workflow
    workflow: '{project-root}/custom/my-workflow.yaml'
    description: My custom workflow
  - trigger: deploy
    action: '#deploy-prompt'
    description: Deploy to production
```

**Don't include:** `*` prefix or `help`/`exit` items - these are auto-injected.

### Critical Actions

Add instructions that execute before the agent starts:

```yaml
critical_actions:
  - 'Always check git status before making changes'
  - 'Use conventional commit messages'
```

### Custom Prompts

Define reusable prompts for `action="#id"` menu handlers:

```yaml
prompts:
  - id: deploy-prompt
    content: |
      Deploy the current branch to production:
      1. Run all tests
      2. Build the project
      3. Execute deployment script
```
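The `#id` indirection amounts to a lookup: a menu action beginning with `#` names a prompt by its `id`. A minimal sketch of that idea (not BMad's actual resolution code, whose details may differ):

```python
# Sketch only: how a menu action like '#deploy-prompt' could resolve to a
# prompt body defined in the `prompts:` section. Names mirror the YAML above.
prompts = {
    "deploy-prompt": (
        "Deploy the current branch to production:\n"
        "1. Run all tests\n"
        "2. Build the project\n"
        "3. Execute deployment script"
    )
}

def resolve_action(action: str) -> str:
    # '#id' references an entry in `prompts`; anything else is a literal action
    if action.startswith("#"):
        return prompts[action[1:]]
    return action

print(resolve_action("#deploy-prompt").splitlines()[0])
# → Deploy the current branch to production:
```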
## Real-World Examples

**Example 1: Customize Developer Agent for TDD**

```yaml
# {bmad_folder}/_cfg/agents/bmm-dev.customize.yaml
agent:
  metadata:
    name: 'TDD Developer'

memories:
  - 'Always write tests before implementation'
  - 'Project uses Jest and React Testing Library'

critical_actions:
  - 'Review test coverage before committing'
```

**Example 2: Add Custom Deployment Workflow**

```yaml
# {bmad_folder}/_cfg/agents/bmm-dev.customize.yaml
menu:
  - trigger: deploy-staging
    workflow: '{project-root}/{bmad_folder}/deploy-staging.yaml'
    description: Deploy to staging environment
  - trigger: deploy-prod
    workflow: '{project-root}/{bmad_folder}/deploy-prod.yaml'
    description: Deploy to production (with approval)
```

**Example 3: Multilingual Product Manager**

```yaml
# {bmad_folder}/_cfg/agents/bmm-pm.customize.yaml
persona:
  role: 'Bilingual Product Manager'
  identity: 'Expert in US and LATAM markets'
  communication_style: 'Clear, strategic, with cultural awareness'
  principles:
    - 'Consider localization from day one'
    - 'Balance business goals with user needs'

memories:
  - 'User speaks English and Spanish'
  - 'Target markets: US and Latin America'
```

## Tips

- **Start Small:** Customize one section at a time and rebuild to test
- **Backup:** Copy customization files before major changes
- **Update-Safe:** Your customizations in `_cfg/` survive all BMad updates
- **Per-Project:** Customization files are per-project, not global
- **Version Control:** Consider committing `_cfg/` to share customizations with your team

## Module vs. Global Config

**Module-Level (Recommended):**

- Customize agents per-project in `{bmad_folder}/_cfg/agents/`
- Different projects can have different agent behaviors

**Global Config (Coming Soon):**

- Set defaults that apply across all projects
- Override with project-specific customizations

## Troubleshooting

**Changes not appearing?**

- Make sure you ran `npx bmad-method build <agent-name>` after editing
- Check YAML syntax is valid (indentation matters!)
- Verify the agent name matches the file name pattern

**Agent not loading?**

- Check for YAML syntax errors
- Ensure required fields aren't left empty if you uncommented them
- Try reverting to the template and rebuilding

**Need to reset?**

- Delete the `.customize.yaml` file
- Run `npx bmad-method build <agent-name>` to regenerate defaults

## Next Steps

- **[BMM Agents Guide](../src/modules/bmm/docs/agents-guide.md)** - Learn about all 12 BMad Method agents
- **[BMB Create Agent Workflow](../src/modules/bmb/workflows/create-agent/README.md)** - Build completely custom agents
- **[BMM Complete Documentation](../src/modules/bmm/docs/README.md)** - Full BMad Method reference
@@ -1,183 +0,0 @@
# Custom Agent Installation

Install and personalize BMAD agents in your project.

## Quick Start

```bash
# From your project directory with BMAD installed
npx bmad-method agent-install
```

Or if you have bmad-cli installed globally:

```bash
bmad agent-install
```

## What It Does

1. **Discovers** available agent templates from your custom agents folder
2. **Prompts** you to personalize the agent (name, behavior, preferences)
3. **Compiles** the agent with your choices baked in
4. **Installs** to your project's `.bmad/custom/agents/` directory
5. **Creates** IDE commands for all your configured IDEs (Claude Code, Codex, Cursor, etc.)
6. **Saves** your configuration for automatic reinstallation during BMAD updates

## Options

```bash
bmad agent-install [options]

Options:
  -p, --path <path>    # Direct path to specific agent YAML file or folder
  -d, --defaults       # Use default values without prompting
  -t, --target <path>  # Target installation directory
```
## Installing from Custom Locations

Use the `-s` / `--source` option to install agents from any location:

```bash
# Install agent from a custom folder (expert agent with sidecar)
bmad agent-install -s path/to/my-agent

# Install a specific .agent.yaml file (simple agent)
bmad agent-install -s path/to/my-agent.agent.yaml

# Install with defaults (non-interactive)
bmad agent-install -s path/to/my-agent -d

# Install to a specific destination project
bmad agent-install -s path/to/my-agent --destination /path/to/destination/project
```

This is useful when:

- Your agent is in a non-standard location (not in `.bmad/custom/agents/`)
- You're developing an agent outside the project structure
- You want to install from an absolute path

## Example Session

```
🔧 BMAD Agent Installer

Found BMAD at: /project/.bmad
Searching for agents in: /project/.bmad/custom/agents

Available Agents:

  1. 📄 commit-poet (simple)
  2. 📚 journal-keeper (expert)

Select agent to install (number): 1

Selected: commit-poet

📛 Agent Persona Name

Agent type: commit-poet
Default persona: Inkwell Von Comitizen

Custom name (or Enter for default): Fred

Persona: Fred
File: fred-commit-poet.md

📝 Agent Configuration

What's your preferred default commit message style?
  * 1. Conventional (feat/fix/chore)
    2. Narrative storytelling
    3. Poetic haiku
    4. Detailed explanation
Choice (default: 1): 1

How enthusiastic should the agent be?
    1. Moderate - Professional with personality
  * 2. High - Genuinely excited
    3. EXTREME - Full theatrical drama
Choice (default: 2): 3

Include emojis in commit messages? [Y/n]: y

✨ Agent installed successfully!
   Name: fred-commit-poet
   Location: /project/.bmad/custom/agents/fred-commit-poet
   Compiled: fred-commit-poet.md

✓ Source saved for reinstallation
✓ Added to agent-manifest.csv
✓ Created IDE commands:
   claude-code: /bmad:custom:agents:fred-commit-poet
   codex: /bmad-custom-agents-fred-commit-poet
   github-copilot: bmad-agent-custom-fred-commit-poet
```
## Reinstallation

Custom agents are automatically reinstalled when you run `bmad init --quick`. Your personalization choices are preserved in `.bmad/_cfg/custom/agents/`.

## Installing Reference Agents

The BMAD source includes example agents you can install. **You must copy them to your project first.**

### Step 1: Copy the Agent Template

**For simple agents** (single file):

```bash
# From your project root
cp node_modules/bmad-method/src/modules/bmb/reference/agents/stand-alone/commit-poet.agent.yaml \
  .bmad/custom/agents/
```

**For expert agents** (folder with sidecar files):

```bash
# Copy the entire folder
cp -r node_modules/bmad-method/src/modules/bmb/reference/agents/agent-with-memory/journal-keeper \
  .bmad/custom/agents/
```

### Step 2: Install and Personalize

```bash
npx bmad-method agent-install
# or: bmad agent-install (if BMAD installed locally)
```

The installer will:

1. Find the copied template in `.bmad/custom/agents/`
2. Prompt for personalization (name, behavior, preferences)
3. Compile and install with your choices baked in
4. Create IDE commands for immediate use

### Available Reference Agents

**Simple (standalone file):**

- `commit-poet.agent.yaml` - Commit message artisan with style preferences

**Expert (folder with sidecar):**

- `journal-keeper/` - Personal journal companion with memory and pattern recognition

Find these in the BMAD source:

```
src/modules/bmb/reference/agents/
├── stand-alone/
│   └── commit-poet.agent.yaml
└── agent-with-memory/
    └── journal-keeper/
        ├── journal-keeper.agent.yaml
        └── journal-keeper-sidecar/
```

## Creating Your Own

Use the BMB agent builder to craft your agents. When you're ready to use one, place its `.agent.yaml` file or folder in `.bmad/custom/agents/`.
@@ -1,449 +0,0 @@
# Document Sharding Guide

Comprehensive guide to BMad Method's document sharding system for managing large planning and architecture documents.

## Table of Contents

- [What is Document Sharding?](#what-is-document-sharding)
- [When to Use Sharding](#when-to-use-sharding)
- [How Sharding Works](#how-sharding-works)
- [Using the Shard-Doc Tool](#using-the-shard-doc-tool)
- [Workflow Support](#workflow-support)
- [Best Practices](#best-practices)
- [Examples](#examples)

## What is Document Sharding?

Document sharding splits large markdown files into smaller, organized files based on level 2 headings (`## Heading`). This enables:

- **Selective Loading** - Workflows load only the sections they need
- **Reduced Token Usage** - Massive efficiency gains for large projects
- **Better Organization** - Logical section-based file structure
- **Maintained Context** - Index file preserves document structure

### Architecture

```
Before Sharding:
docs/
└── PRD.md (large 50k-token file)

After Sharding:
docs/
└── prd/
    ├── index.md                  # Table of contents with descriptions
    ├── overview.md               # Section 1
    ├── user-requirements.md      # Section 2
    ├── technical-requirements.md # Section 3
    └── ...                       # Additional sections
```

## When to Use Sharding

### Ideal Candidates

**Large Multi-Epic Projects:**

- Very large, complex PRDs
- Architecture documents with multiple system layers
- Epic files with 4+ epics (especially for Phase 4)
- UX design specs covering multiple subsystems

**Token Thresholds:**

- **Consider sharding**: Documents > 20k tokens
- **Strongly recommended**: Documents > 40k tokens
- **Critical for efficiency**: Documents > 60k tokens

### When NOT to Shard

**Small Projects:**

- Single epic projects
- Level 0-1 projects (tech-spec only)
- Documents under 10k tokens
- Quick prototypes

**Frequently Updated Docs:**

- Active work-in-progress documents
- Documents updated daily
- Documents where whole-file context is essential

## How Sharding Works

### Sharding Process

1. **Tool Execution**: Run `npx @kayvan/markdown-tree-parser source.md destination/`. This is abstracted by the core shard-doc task, which is installed as a slash command or a manual task rule depending on your tools.
2. **Section Extraction**: Tool splits by level 2 headings
3. **File Creation**: Each section becomes a separate file
4. **Index Generation**: `index.md` created with structure and descriptions
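The split itself is simple to picture. A hedged sketch of the core idea follows; the real work is done by `@kayvan/markdown-tree-parser`, and the naming details here are assumptions of the sketch:

```python
# Sketch of splitting a markdown string by level-2 headings. Kebab-case
# naming and discarding content before the first `##` are assumptions;
# the real tool is @kayvan/markdown-tree-parser.
import re

def shard(markdown: str) -> dict[str, str]:
    shards, name, buf = {}, None, []
    for line in markdown.splitlines():
        m = re.match(r"##\s+(.+)", line)  # `###` won't match: needs `##` + space
        if m:
            if name:
                shards[name] = "\n".join(buf)
            # kebab-case the heading text for the shard filename
            name = re.sub(r"[^a-z0-9]+", "-", m.group(1).lower()).strip("-") + ".md"
            buf = []
        buf.append(line)
    if name:
        shards[name] = "\n".join(buf)
    return shards

doc = "## Overview\ntext\n## User Requirements\nmore"
print(sorted(shard(doc)))  # → ['overview.md', 'user-requirements.md']
```

Level-3 subsections stay inside their parent shard, which is why each section file remains independently readable.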

### Workflow Discovery

BMad workflows use a **dual discovery system**:

1. **Try whole document first** - Look for `document-name.md`
2. **Check for sharded version** - Look for `document-name/index.md`
3. **Priority rule** - The whole document takes precedence if both exist
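The discovery order can be sketched as follows (a hypothetical helper for illustration, not the actual workflow engine):

```python
from pathlib import Path

def discover(output_folder, doc_name):
    """Return the whole document path if present, else the sharded index, else None."""
    whole = Path(output_folder) / f"{doc_name}.md"
    if whole.exists():
        return whole  # priority rule: the whole document wins
    sharded_index = Path(output_folder) / doc_name / "index.md"
    return sharded_index if sharded_index.exists() else None
```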

### Loading Strategies

**Full Load (Phase 1-3 workflows):**

```
If sharded:
- Read index.md
- Read ALL section files
- Treat as single combined document
```

**Selective Load (Phase 4 workflows):**

```
If sharded epics and working on Epic 3:
- Read epics/index.md
- Load ONLY epics/epic-3.md
- Skip all other epic files
- 90%+ token savings
```

## Using the Shard-Doc Tool

### CLI Command

```bash
# Activate bmad-master or analyst agent, then:
/shard-doc
```

### Interactive Process

```
Agent: Which document would you like to shard?
User: docs/PRD.md

Agent: Default destination: docs/prd/
       Accept default? [y/n]
User: y

Agent: Sharding PRD.md...
✓ Created 12 section files
✓ Generated index.md
✓ Complete!
```

### What Gets Created

**index.md structure:**

```markdown
# PRD - Index

## Sections

1. [Overview](./overview.md) - Project vision and objectives
2. [User Requirements](./user-requirements.md) - Feature specifications
3. [Epic 1: Authentication](./epic-1-authentication.md) - User auth system
4. [Epic 2: Dashboard](./epic-2-dashboard.md) - Main dashboard UI
...
```

**Individual section files:**

- Named from the heading text (kebab-case)
- Contain the complete section content
- Preserve all markdown formatting
- Can be read independently

## Workflow Support

### Universal Support

**All BMM workflows support both formats:**

- ✅ Whole documents
- ✅ Sharded documents
- ✅ Automatic detection
- ✅ Transparent to the user

### Workflow-Specific Patterns

#### Phase 1-3 (Full Load)

These workflows load entire sharded documents:

- `product-brief` - Research, brainstorming docs
- `prd` - Product brief, research
- `gdd` - Game brief, research
- `create-ux-design` - PRD, brief, architecture (if available)
- `tech-spec` - Brief, research
- `architecture` - PRD, UX design (if available)
- `create-epics-and-stories` - PRD, architecture
- `implementation-readiness` - All planning docs

#### Phase 4 (Selective Load)

These workflows load only the sections they need:

**sprint-planning** (Full Load):

- Needs ALL epics to build a complete status

**create-story, code-review** (Selective):

```
Working on Epic 3, Story 2:
✓ Load epics/epic-3.md only
✗ Skip epics/epic-1.md, epic-2.md, epic-4.md, etc.

Result: 90%+ token reduction for 10-epic projects
```

### Input File Patterns

Workflows use standardized patterns:

```yaml
input_file_patterns:
  prd:
    whole: '{output_folder}/*prd*.md'
    sharded: '{output_folder}/*prd*/index.md'

  epics:
    whole: '{output_folder}/*epic*.md'
    sharded_index: '{output_folder}/*epic*/index.md'
    sharded_single: '{output_folder}/*epic*/epic-{{epic_num}}.md'
```
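Resolving one of these patterns is just placeholder substitution followed by a glob match. A minimal sketch, with `resolve_pattern` and `matches` as assumed helper names (not part of the actual engine):

```python
import fnmatch

def resolve_pattern(pattern, output_folder, **params):
    """Fill {output_folder} and {{param}} placeholders in an input-file pattern."""
    path = pattern.replace("{output_folder}", output_folder)
    for key, value in params.items():
        path = path.replace("{{" + key + "}}", str(value))
    return path

def matches(filename, resolved_pattern):
    """Check a candidate file against the resolved glob pattern."""
    return fnmatch.fnmatch(filename, resolved_pattern)
```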

## Best Practices

### Sharding Strategy

**Do:**

- ✅ Shard after the planning phase is complete
- ✅ Keep level 2 headings well-organized
- ✅ Use descriptive section names
- ✅ Shard before Phase 4 implementation
- ✅ Keep the original file as a backup initially

**Don't:**

- ❌ Shard work-in-progress documents
- ❌ Shard small documents (<20k tokens)
- ❌ Mix sharded and whole versions
- ❌ Manually edit the index.md structure

### Naming Conventions

**Good Section Names:**

```markdown
## Epic 1: User Authentication

## Technical Requirements

## System Architecture

## UX Design Principles
```

**Poor Section Names:**

```markdown
## Section 1

## Part A

## Details

## More Info
```

### File Management

**When to Re-shard:**

- Significant structural changes to the document
- Adding or removing major sections
- After major refactoring

**Updating Sharded Docs:**

1. Edit individual section files directly
2. OR edit the original, delete the sharded folder, and re-shard
3. Don't manually edit index.md

## Examples

### Example 1: Large PRD

**Scenario:** 15-epic project, PRD is 45k tokens

**Before Sharding:**

```
Every workflow loads the entire 45k-token PRD
Architecture workflow: 45k tokens
UX design workflow: 45k tokens
```

**After Sharding:**

```bash
/shard-doc
Source: docs/PRD.md
Destination: docs/prd/

Created:
  prd/index.md
  prd/overview.md (3k tokens)
  prd/functional-requirements.md (8k tokens)
  prd/non-functional-requirements.md (6k tokens)
  prd/user-personas.md (4k tokens)
  ...additional FR/NFR sections
```

**Result:**

```
Architecture workflow: loads only the sections it needs
UX design workflow: loads only the sections it needs
Significant token reduction for large requirement docs
```

### Example 2: Sharding an Epics File

**Scenario:** 8 epics with detailed stories, 35k tokens total

```bash
/shard-doc
Source: docs/bmm-epics.md
Destination: docs/epics/

Created:
  epics/index.md
  epics/epic-1.md
  epics/epic-2.md
  ...
  epics/epic-8.md
```

**Efficiency Gain:**

```
Working on Epic 5 stories:
Old: Load all 8 epics (35k tokens)
New: Load epic-5.md only (4k tokens)
Savings: 88% reduction
```

### Example 3: Architecture Document

**Scenario:** Multi-layer system architecture, 28k tokens

```bash
/shard-doc
Source: docs/architecture.md
Destination: docs/architecture/

Created:
  architecture/index.md
  architecture/system-overview.md
  architecture/frontend-architecture.md
  architecture/backend-services.md
  architecture/data-layer.md
  architecture/infrastructure.md
  architecture/security-architecture.md
```

**Benefit:** The code-review workflow can reference specific architectural layers without loading the entire architecture doc.

## Custom Workflow Integration

### For Workflow Builders

When creating custom workflows that load large documents:

**1. Add input_file_patterns to workflow.yaml:**

```yaml
input_file_patterns:
  your_document:
    whole: '{output_folder}/*your-doc*.md'
    sharded: '{output_folder}/*your-doc*/index.md'
```

**2. Add discovery instructions to instructions.md:**

```markdown
## Document Discovery

1. Search for the whole document: *your-doc*.md
2. Check for a sharded version: *your-doc*/index.md
3. If sharded: Read the index + ALL sections (or specific sections if selective load)
4. Priority: Whole document first
```

**3. Choose a loading strategy:**

- **Full Load**: Read all sections when sharded
- **Selective Load**: Read only relevant sections (requires section identification logic)

### Pattern Templates

**Full Load Pattern:**

```xml
<action>Search for document: {output_folder}/*doc-name*.md</action>
<action>If not found, check for sharded: {output_folder}/*doc-name*/index.md</action>
<action if="sharded found">Read index.md to understand structure</action>
<action if="sharded found">Read ALL section files listed in index</action>
<action if="sharded found">Combine content as single document</action>
```
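In code, the full-load pattern amounts to reading the index and concatenating every section it links to. A minimal sketch, assuming `index.md` references sections with relative markdown links like `(./overview.md)`:

```python
import re
from pathlib import Path

def full_load(index_path):
    """Read index.md, then concatenate every section file it links to."""
    folder = Path(index_path).parent
    index = Path(index_path).read_text()
    # markdown links like [Overview](./overview.md)
    files = re.findall(r"\]\(\./([^)]+\.md)\)", index)
    parts = [(folder / f).read_text() for f in files]
    return "\n\n".join(parts)  # treat the result as one combined document
```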

**Selective Load Pattern (with section ID):**

```xml
<action>Determine section needed (e.g., epic_num = 3)</action>
<action>Check for sharded version: {output_folder}/*doc-name*/index.md</action>
<action if="sharded found">Read ONLY the specific section file needed</action>
<action if="sharded found">Skip all other section files</action>
```
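Selective load is the same discovery plus a single targeted read; a hypothetical sketch assuming the tool's `epic-{n}.md` naming:

```python
from pathlib import Path

def selective_load(sharded_folder, epic_num):
    """Load only the one epic file needed, skipping every sibling section."""
    folder = Path(sharded_folder)
    if not (folder / "index.md").exists():
        raise FileNotFoundError("not a sharded folder: missing index.md")
    return (folder / f"epic-{epic_num}.md").read_text()
```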

## Troubleshooting

### Common Issues

**Both whole and sharded versions exist:**

- Workflows will use the whole document (priority rule)
- Delete or archive the one you don't want

**index.md out of sync:**

- Delete the sharded folder
- Re-run shard-doc on the original

**Workflow can't find the document:**

- Check that file naming matches the patterns (`*prd*.md`, `*epic*.md`, etc.)
- Verify index.md exists in the sharded folder
- Check the output_folder path in config

**Sections too granular:**

- Combine sections in the original document
- Use fewer level 2 headings
- Re-shard

## Related Documentation

- [shard-doc Tool](../src/core/tools/shard-doc.xml) - Tool implementation
- [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) - Workflow overview
- [Workflow Creation Guide](../src/modules/bmb/workflows/create-workflow/workflow-creation-guide.md) - Custom workflow patterns

---

**Document sharding is optional but powerful** - use it when efficiency matters for large projects!
74
docs/downloads.md
Normal file
@@ -0,0 +1,74 @@
---
title: Downloads
---

Download BMad Method resources for offline use, AI training, or integration.

## Source Bundles

Download these from the `downloads/` folder on the documentation site.

| File               | Description                     |
| ------------------ | ------------------------------- |
| `bmad-sources.zip` | Complete BMad source files      |
| `bmad-prompts.zip` | Agent and workflow prompts only |

## LLM-Optimized Files

These files are designed for AI consumption - perfect for loading into Claude, ChatGPT, or any LLM context window. See [API Access](#api-access) below for URLs.

| File            | Description                         | Use Case                   |
| --------------- | ----------------------------------- | -------------------------- |
| `llms.txt`      | Documentation index with summaries  | Quick overview, navigation |
| `llms-full.txt` | Complete documentation concatenated | Full context loading       |

### Using with LLMs

**Claude Projects:**

```
Upload llms-full.txt as project knowledge
```

**ChatGPT:**

```
Paste llms.txt for navigation, or sections from llms-full.txt as needed
```

**API Usage:**

```python
import requests

docs = requests.get("https://bmad-code-org.github.io/BMAD-METHOD/llms-full.txt").text
# Include in your system prompt or context
```

## Installation Options

```bash
npx bmad-method install
```

[More details](/docs/how-to/install-bmad.md)

## Version Information

- **Current Version:** See [CHANGELOG](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/CHANGELOG.md)
- **Release Notes:** Available on [GitHub Releases](https://github.com/bmad-code-org/BMAD-METHOD/releases)

## API Access

For programmatic access to BMad documentation:

```bash
# Get the documentation index
curl https://bmad-code-org.github.io/BMAD-METHOD/llms.txt

# Get the full documentation
curl https://bmad-code-org.github.io/BMAD-METHOD/llms-full.txt
```

## Contributing

Want to improve BMad Method? Check out:

- [Contributing Guide](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/CONTRIBUTING.md)
- [GitHub Repository](https://github.com/bmad-code-org/BMAD-METHOD)
24
docs/explanation/advanced-elicitation.md
Normal file
@@ -0,0 +1,24 @@
---
title: "Advanced Elicitation"
description: Push the LLM to rethink its work using structured reasoning methods
---

Make the LLM reconsider what it just generated. You pick a reasoning method, it applies that method to its own output, and you decide whether to keep the improvements.

Dozens of methods are built in - things like First Principles, Red Team vs Blue Team, Pre-mortem Analysis, Socratic Questioning, and more.

## When to Use It

- After a workflow generates content and you want alternatives
- When output seems okay but you suspect there's more depth
- To stress-test assumptions or find weaknesses
- For high-stakes content where rethinking helps

Workflows offer advanced elicitation at decision points - after the LLM has generated something, you'll be asked if you want to run it.

## How It Works

1. The LLM suggests 5 relevant methods for your content
2. You pick one (or reshuffle for different options)
3. The method is applied and the improvements are shown
4. Accept or discard, then repeat or continue
57
docs/explanation/adversarial-review.md
Normal file
@@ -0,0 +1,57 @@
---
title: "Adversarial Review"
description: Forced reasoning technique that prevents lazy "looks good" reviews
---

Force deeper analysis by requiring problems to be found.

## What is Adversarial Review?

A review technique where the reviewer *must* find issues. No "looks good" allowed. The reviewer adopts a cynical stance - assume problems exist and find them.

This isn't about being negative. It's about forcing genuine analysis instead of a cursory glance that rubber-stamps whatever was submitted.

**The core rule:** You must find issues. Zero findings triggers a halt - re-analyze or explain why.

## Why It Works

Normal reviews suffer from confirmation bias. You skim the work, nothing jumps out, you approve it. The "find problems" mandate breaks this pattern:

- **Forces thoroughness** - You can't approve until you've looked hard enough to find issues
- **Catches missing things** - "What's not here?" becomes a natural question
- **Improves signal quality** - Findings are specific and actionable, not vague concerns
- **Information asymmetry** - Run reviews with fresh context (no access to the original reasoning) so you evaluate the artifact, not the intent

## Where It's Used

Adversarial review appears throughout BMAD workflows - code review, implementation readiness checks, spec validation, and others. Sometimes it's a required step, sometimes optional (like advanced elicitation or party mode). The pattern adapts to whatever artifact needs scrutiny.

## Human Filtering Required

Because the AI is *instructed* to find problems, it will find problems - even when they don't exist. Expect false positives: nitpicks dressed as issues, misunderstandings of intent, or outright hallucinated concerns.

**You decide what's real.** Review each finding, dismiss the noise, fix what matters.

## Example

Instead of:

> "The authentication implementation looks reasonable. Approved."

An adversarial review produces:

> 1. **HIGH** - `login.ts:47` - No rate limiting on failed attempts
> 2. **HIGH** - Session token stored in localStorage (XSS vulnerable)
> 3. **MEDIUM** - Password validation happens client-side only
> 4. **MEDIUM** - No audit logging for failed login attempts
> 5. **LOW** - Magic number `3600` should be `SESSION_TIMEOUT_SECONDS`

The first review might miss a security vulnerability. The second caught four.

## Iteration and Diminishing Returns

After addressing findings, consider running the review again. A second pass usually catches more. A third isn't always useless either. But each pass takes time, and eventually you hit diminishing returns - just nitpicks and false findings.

:::tip[Better Reviews]
Assume problems exist. Look for what's missing, not just what's wrong.
:::
31
docs/explanation/brainstorming.md
Normal file
@@ -0,0 +1,31 @@
---
title: "Brainstorming"
description: Interactive creative sessions using 60+ proven ideation techniques
---

Unlock your creativity through guided exploration.

## What is Brainstorming?

Run `brainstorming` and you've got a creative facilitator pulling ideas out of you - not generating them for you. The AI acts as coach and guide, using proven techniques to create the conditions where your best thinking emerges.

**Good for:**

- Breaking through creative blocks
- Generating product or feature ideas
- Exploring problems from new angles
- Developing raw concepts into action plans

## How It Works

1. **Setup** - Define topic, goals, and constraints
2. **Choose approach** - Pick techniques yourself, get AI recommendations, go random, or follow a progressive flow
3. **Facilitation** - Work through techniques with probing questions and collaborative coaching
4. **Organize** - Ideas are grouped into themes and prioritized
5. **Action** - Top ideas get next steps and success metrics

Everything gets captured in a session document you can reference later or share with stakeholders.

:::note[Your Ideas]
Every idea comes from you. The workflow creates conditions for insight - you're the source.
:::
55
docs/explanation/brownfield-faq.md
Normal file
@@ -0,0 +1,55 @@
---
title: "Brownfield Development FAQ"
description: Common questions about brownfield development in the BMad Method
---

Quick answers to common questions about brownfield (existing codebase) development in the BMad Method (BMM).

## Questions

- [What is brownfield vs greenfield?](#what-is-brownfield-vs-greenfield)
- [Do I have to run document-project for brownfield?](#do-i-have-to-run-document-project-for-brownfield)
- [What if I forget to run document-project?](#what-if-i-forget-to-run-document-project)
- [Can I use Quick Spec Flow for brownfield projects?](#can-i-use-quick-spec-flow-for-brownfield-projects)
- [What if my existing code doesn't follow best practices?](#what-if-my-existing-code-doesnt-follow-best-practices)

### What is brownfield vs greenfield?

- **Greenfield** — New project, starting from scratch, clean slate
- **Brownfield** — Existing project, working with an established codebase and patterns

### Do I have to run document-project for brownfield?

It's highly recommended, especially if:

- There is no existing documentation
- The documentation is outdated
- AI agents need context about the existing code

You can skip it if you have comprehensive, up-to-date documentation (including `docs/index.md`), or if you will use other tools or techniques to aid discovery for the agent building on an existing system.

### What if I forget to run document-project?

Don't worry about it - you can run it at any time. You can even run it during or after a project to help keep docs up to date.

### Can I use Quick Spec Flow for brownfield projects?

Yes! Quick Spec Flow works great for brownfield. It will:

- Auto-detect your existing stack
- Analyze brownfield code patterns
- Detect conventions and ask for confirmation
- Generate a context-rich tech-spec that respects existing code

Perfect for bug fixes and small features in existing codebases.

### What if my existing code doesn't follow best practices?

Quick Spec Flow detects your conventions and asks: "Should I follow these existing conventions?" You decide:

- **Yes** → Maintain consistency with the current codebase
- **No** → Establish new standards (document why in the tech-spec)

BMM respects your choice — it won't force modernization, but it will offer it.

**Have a question not answered here?** Please [open an issue](https://github.com/bmad-code-org/BMAD-METHOD/issues) or ask in [Discord](https://discord.gg/gk8jAdXWmj) so we can add it!
57
docs/explanation/party-mode.md
Normal file
@@ -0,0 +1,57 @@
---
title: "Party Mode"
description: Multi-agent collaboration - get all your AI agents in one conversation
---

Get all your AI agents in one conversation.

## What is Party Mode?

Run `party-mode` and you've got your whole AI team in one room - PM, Architect, Dev, UX Designer, whoever you need. BMad Master orchestrates, picking relevant agents per message. Agents respond in character, agree, disagree, and build on each other's ideas.

The conversation continues as long as you want. Ask follow-ups, push back on answers, redirect the discussion - it's a real back-and-forth with your agents until you're done.

**Good for:**

- Big decisions with tradeoffs
- Brainstorming sessions
- Post-mortems when things go wrong
- Sprint retrospectives and planning

## Examples

### Calling Out Bad Architecture

**You:** "Okay team, our last sprint was a disaster. The authentication service went down three times. Who screwed this up?"

**Architect:** "The design was sound - distributed authentication with proper fallbacks. If implementation deviated from the spec..."

**Dev:** "I followed the architecture doc exactly. The spec didn't account for concurrent session invalidation race conditions."

**PM:** "Both of you missed the bigger issue - we didn't validate session management requirements in the PRD. This is on me for not catching it."

**TEA:** "And I should've caught this in integration tests. The test scenarios didn't cover concurrent invalidation."

### Creative Brainstorming

**You:** "How do we make onboarding feel magical instead of boring?"

**UX Designer:** "Start with progressive disclosure - reveal features as users need them, not all at once in a tutorial."

**Storyteller:** "What if onboarding was a story? Each step reveals a character's journey - the user IS the hero."

**Game Designer:** "Building on that - what if the first 'quest' is actually solving a real user problem? They learn by doing something valuable."

### Technical Decision

**You:** "Monolith or microservices for MVP?"

**Architect:** "Start monolith. Microservices add complexity you don't need at 1000 users."

**PM:** "Agree. Time to market matters more than theoretical scalability."

**Dev:** "Monolith with clear module boundaries. We can extract services later if needed."

:::tip[Better Decisions]
Better decisions through diverse perspectives. Welcome to party mode.
:::
110
docs/explanation/preventing-agent-conflicts.md
Normal file
@@ -0,0 +1,110 @@
---
title: "Preventing Agent Conflicts"
description: How architecture prevents conflicts when multiple agents implement a system
---

When multiple AI agents implement different parts of a system, they can make conflicting technical decisions. Architecture documentation prevents this by establishing shared standards.

## Common Conflict Types

### API Style Conflicts

Without architecture:

- Agent A uses REST with `/users/{id}`
- Agent B uses GraphQL mutations
- Result: Inconsistent API patterns, confused consumers

With architecture:

- An ADR specifies: "Use GraphQL for all client-server communication"
- All agents follow the same pattern

### Database Design Conflicts

Without architecture:

- Agent A uses snake_case column names
- Agent B uses camelCase column names
- Result: Inconsistent schema, confusing queries

With architecture:

- A standards document specifies naming conventions
- All agents follow the same patterns

### State Management Conflicts

Without architecture:

- Agent A uses Redux for global state
- Agent B uses React Context
- Result: Multiple state management approaches, added complexity

With architecture:

- An ADR specifies the state management approach
- All agents implement it consistently

## How Architecture Prevents Conflicts

### 1. Explicit Decisions via ADRs

Every significant technology choice is documented with:

- Context (why this decision matters)
- Options considered (what alternatives exist)
- Decision (what we chose)
- Rationale (why we chose it)
- Consequences (trade-offs accepted)
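An ADR capturing those five elements might look like this (an illustrative sketch — the number and decision are hypothetical):

```markdown
# ADR-003: API Style

## Context
Multiple agents will implement client-server features across epics.

## Options Considered
REST, GraphQL, gRPC

## Decision
Use GraphQL for all client-server communication.

## Rationale
A single schema doubles as the shared contract, so agents don't design endpoints independently.

## Consequences
All agents must define operations in the shared schema; adds a codegen step.
```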

### 2. FR/NFR-Specific Guidance

Architecture maps each functional requirement to a technical approach:

- FR-001: User Management → GraphQL mutations
- FR-002: Mobile App → Optimized queries

### 3. Standards and Conventions

Explicit documentation of:

- Directory structure
- Naming conventions
- Code organization
- Testing patterns

## Architecture as Shared Context

Think of architecture as the shared context that all agents read before implementing:

```
PRD: "What to build"
    ↓
Architecture: "How to build it"
    ↓
Agent A reads architecture → implements Epic 1
Agent B reads architecture → implements Epic 2
Agent C reads architecture → implements Epic 3
    ↓
Result: Consistent implementation
```

## Key ADR Topics

Common decisions that prevent conflicts:

| Topic            | Example Decision                             |
| ---------------- | -------------------------------------------- |
| API Style        | GraphQL vs REST vs gRPC                      |
| Database         | PostgreSQL vs MongoDB                        |
| Auth             | JWT vs Sessions                              |
| State Management | Redux vs Context vs Zustand                  |
| Styling          | CSS Modules vs Tailwind vs Styled Components |
| Testing          | Jest + Playwright vs Vitest + Cypress        |

## Anti-Patterns to Avoid

:::caution[Common Mistakes]
- **Implicit Decisions** — "We'll figure out the API style as we go" leads to inconsistency
- **Over-Documentation** — Documenting every minor choice causes analysis paralysis
- **Stale Architecture** — Documents written once and never updated cause agents to follow outdated patterns
:::

:::tip[Correct Approach]
- Document decisions that cross epic boundaries
- Focus on conflict-prone areas
- Update architecture as you learn
- Use `correct-course` for significant changes
:::
27
docs/explanation/quick-flow.md
Normal file
@@ -0,0 +1,27 @@
---
title: "Quick Flow"
description: Fast-track for small changes - skip the full methodology
---

Quick Flow is for when you don't need the full BMad Method. Skip the Product Brief, PRD, and Architecture - go straight to implementation.

## How It Works

1. **Run `quick-spec`** — generates a focused tech-spec
2. **Run `quick-dev`** — implements it

That's it.

## When to Use It

- Bug fixes
- Refactoring
- Small features
- Prototyping

## When to Use the Full BMad Method Instead

- New products
- Major features
- Multiple teams involved
- Stakeholder alignment needed
75
docs/explanation/why-solutioning-matters.md
Normal file
@@ -0,0 +1,75 @@
---
title: "Why Solutioning Matters"
description: Understanding why the solutioning phase is critical for multi-epic projects
---

Phase 3 (Solutioning) translates **what** to build (from Planning) into **how** to build it (technical design). This phase prevents agent conflicts in multi-epic projects by documenting architectural decisions before implementation begins.

## The Problem Without Solutioning

```
Agent 1 implements Epic 1 using REST API
Agent 2 implements Epic 2 using GraphQL
Result: Inconsistent API design, integration nightmare
```

When multiple agents implement different parts of a system without shared architectural guidance, they make independent technical decisions that may conflict.

## The Solution With Solutioning

```
architecture workflow decides: "Use GraphQL for all APIs"
All agents follow architecture decisions
Result: Consistent implementation, no conflicts
```

By documenting technical decisions explicitly, all agents implement consistently and integration becomes straightforward.

## Solutioning vs Planning

| Aspect   | Planning (Phase 2)      | Solutioning (Phase 3)             |
| -------- | ----------------------- | --------------------------------- |
| Question | What and Why?           | How? Then What units of work?     |
| Output   | FRs/NFRs (Requirements) | Architecture + Epics/Stories      |
| Agent    | PM                      | Architect → PM                    |
| Audience | Stakeholders            | Developers                        |
| Document | PRD (FRs/NFRs)          | Architecture + Epic Files         |
| Level    | Business logic          | Technical design + Work breakdown |

## Key Principle

**Make technical decisions explicit and documented** so all agents implement consistently.

This prevents:

- API style conflicts (REST vs GraphQL)
- Database design inconsistencies
- State management disagreements
- Naming convention mismatches
- Security approach variations

## When Solutioning is Required

| Track               | Solutioning Required? |
| ------------------- | --------------------- |
| Quick Flow          | No - skip entirely    |
| BMad Method Simple  | Optional              |
| BMad Method Complex | Yes                   |
| Enterprise          | Yes                   |

:::tip[Rule of Thumb]
If you have multiple epics that could be implemented by different agents, you need solutioning.
:::

## The Cost of Skipping

Skipping solutioning on complex projects leads to:

- **Integration issues** discovered mid-sprint
- **Rework** due to conflicting implementations
- **Longer development time** overall
- **Technical debt** from inconsistent patterns

:::caution[Cost Multiplier]
Catching alignment issues in solutioning is 10× faster than discovering them during implementation.
:::
84
docs/how-to/brownfield/index.md
Normal file
@@ -0,0 +1,84 @@
---
title: "Brownfield Development"
description: How to use BMad Method on existing codebases
---

Use BMad Method effectively when working on existing projects and legacy codebases.

## What is Brownfield Development?

**Brownfield** refers to working on existing projects with established codebases and patterns, as opposed to **greenfield**, which means starting from scratch with a clean slate.

This guide covers the essential workflow for onboarding to brownfield projects with BMad Method.

:::note[Prerequisites]
- BMad Method installed (`npx bmad-method install`)
- An existing codebase you want to work on
- Access to an AI-powered IDE (Claude Code, Cursor, or Windsurf)
:::

## Step 1: Clean Up Completed Planning Artifacts

If you have completed all PRD epics and stories through the BMad process, clean up those files. Archive them, delete them, or rely on version history if needed. Do not keep these files in:

- `docs/`
- `_bmad-output/planning-artifacts/`
- `_bmad-output/implementation-artifacts/`
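If you prefer archiving over deleting, the cleanup can be sketched in shell (the `archive/` destination is an example, and the paths assume the default BMad output folders):

```shell
# Archive completed planning artifacts instead of deleting them.
# The archive/bmad-planning destination is illustrative.
mkdir -p archive/bmad-planning
for dir in _bmad-output/planning-artifacts _bmad-output/implementation-artifacts; do
  if [ -d "$dir" ]; then
    mv "$dir" "archive/bmad-planning/$(basename "$dir")"
  fi
done
```

Version history still has the files either way, so this is purely about keeping them out of the agents' context.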
## Step 2: Maintain Quality Project Documentation

Your `docs/` folder should contain succinct, well-organized documentation that accurately represents your project:

- Intent and business rationale
- Business rules
- Architecture
- Any other relevant project information

For complex projects, consider using the `document-project` workflow. It offers runtime variants that will scan your entire project and document its actual current state.

## Step 3: Get Help

Run `bmad-help` for guidance on what to do next based on your unique needs.

### Choosing Your Approach

You have two primary options depending on the scope of changes:

| Scope | Recommended Approach |
| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------- |
| **Small updates or additions** | Use `quick-flow-solo-dev` to create a tech-spec and implement the change. The full four-phase BMad method is likely overkill. |
| **Major changes or additions** | Start with the BMad method, applying as much or as little rigor as needed. |

### During PRD Creation

When creating a brief or jumping directly into the PRD, ensure the agent:

- Finds and analyzes your existing project documentation
- Reads the proper context about your current system

You can guide the agent explicitly, but the goal is to ensure the new feature integrates well with your existing system.

### UX Considerations

UX work is optional. The decision depends not on whether your project has a UX, but on:

- Whether you will be working on UX changes
- Whether significant new UX designs or patterns are needed

If your changes amount to simple updates to existing screens you are happy with, a full UX process is unnecessary.

### Architecture Considerations

When doing architecture, ensure the architect:

- Uses the proper documented files
- Scans the existing codebase

Pay close attention here to prevent reinventing the wheel or making decisions that misalign with your existing architecture.

## More Information

- **[Quick Fix in Brownfield](/docs/how-to/brownfield/quick-fix-in-brownfield.md)** - Bug fixes and ad-hoc changes
- **[Brownfield FAQ](/docs/explanation/brownfield-faq.md)** - Common questions about brownfield development
76
docs/how-to/brownfield/quick-fix-in-brownfield.md
Normal file
@@ -0,0 +1,76 @@
---
title: "How to Make Quick Fixes in Brownfield Projects"
description: How to make quick fixes and ad-hoc changes in brownfield projects
---

Use the **DEV agent** directly for bug fixes, refactorings, or small targeted changes that don't require the full BMad method or Quick Flow.

## When to Use This

- Simple bug fixes
- Small refactorings and changes that don't need extensive ideation, planning, or architectural shifts
- Larger refactorings or improvements, using your tool's built-in planning and execution modes together (or, better yet, Quick Flow)
- Learning about your codebase

## Steps

### 1. Load an Agent

For quick fixes, you can use:

- **DEV agent** - For implementation-focused work
- **Quick Flow Solo Dev** - For slightly larger changes that still need a quick-spec to keep the agent aligned to planning and standards

### 2. Describe the Change

Simply tell the agent what you need:

```
Fix the login validation bug that allows empty passwords
```

or

```
Refactor the UserService to use async/await instead of callbacks
```

### 3. Let the Agent Work

The agent will:

- Analyze the relevant code
- Propose a solution
- Implement the change
- Run tests (if available)

### 4. Review and Commit

Review the changes and commit when satisfied.

## Learning Your Codebase

This approach is also excellent for exploring unfamiliar code:

```
Explain how the authentication system works in this codebase
```

```
Show me where error handling happens in the API layer
```

LLMs are excellent at interpreting and analyzing code, whether it was AI-generated or not. Use the agent to:

- Learn about your project
- Understand how things are built
- Explore unfamiliar parts of the codebase

## When to Upgrade to Formal Planning

Consider using Quick Flow or the full BMad Method when:

- The change affects multiple files or systems
- You're unsure about the scope
- The fix keeps growing in complexity
- You need documentation for the change
158
docs/how-to/customize-bmad.md
Normal file
@@ -0,0 +1,158 @@
---
title: "BMad Method Customization Guide"
---

The ability to customize the BMad Method and its core to your needs, while still being able to get updates and enhancements, is a critical idea within the BMad ecosystem.

The customization guidance outlined here, while targeted at BMad Method customization, applies to any other module used within the BMad Method.

## Types of Customization

Customization includes agent customization, workflow/skill customization, and the addition of new MCPs or skills for existing agents to use. Beyond all of this, a whole other realm of customization involves creating your own BMad Builder workflows, skills, and agents, and maybe even net-new modules to complement the BMad Method module.

Customizing as this guide prescribes lets you keep getting updates without worrying about losing your customization changes. And by continuing to get updates as BMad modules advance, you will be able to keep evolving as the system improves.

## Agent Customization

### Agent Customization Areas

- Change agent names, personas, or manner of speech
- Add project-specific memories or context
- Add custom menu items that trigger custom or inline prompts, skills, or custom BMad workflows
- Define critical actions that occur at agent startup for consistent behavior

### How to Customize an Agent

**1. Locate Customization Files**

After installation, find agent customization files in:

```
_bmad/_config/agents/
├── core-bmad-master.customize.yaml
├── bmm-dev.customize.yaml
├── bmm-pm.customize.yaml
└── ... (one file per installed agent)
```

**2. Edit Any Agent**

Open the `.customize.yaml` file for the agent you want to modify. All sections are optional - customize only what you need.

**3. Rebuild the Agent**

After editing, it is critical to rebuild the agent to apply changes:

```bash
npx bmad-method install
```

You can then either:

- Select `Quick Update` - This will also ensure all packages are up to date AND compile all agents to include any updates or customizations
- Select `Rebuild Agents` - This will only rebuild and apply customizations to agents, without pulling the latest

Additional tools are coming shortly after beta launch to allow installing individual agents, workflows, skills, and modules without the full BMad installer.

### What Agent Properties Can Be Customized?

#### Agent Name

Change how the agent introduces itself:

```yaml
agent:
  metadata:
    name: 'Spongebob' # Default: "Amelia"
```

#### Persona

Replace the agent's personality, role, and communication style:

```yaml
persona:
  role: 'Senior Full-Stack Engineer'
  identity: 'Lives in a pineapple (under the sea)'
  communication_style: 'Spongebob annoying'
  principles:
    - 'Never Nester, Spongebob Devs hate nesting more than 2 levels deep'
    - 'Favor composition over inheritance'
```

**Note:** The persona section replaces the entire default persona (not merged).

#### Memories

Add persistent context the agent will always remember:

```yaml
memories:
  - 'Works at Krusty Krab'
  - 'Favorite Celebrity: David Hasselhoff'
  - "Learned in Epic 1 that it's not cool to just pretend that tests have passed"
```

### Custom Menu Items

Any custom items you add here will be included in the agent's display menu.

```yaml
menu:
  - trigger: my-workflow
    workflow: '{project-root}/my-custom/workflows/my-workflow.yaml'
    description: My custom workflow
  - trigger: deploy
    action: '#deploy-prompt'
    description: Deploy to production
```

### Critical Actions

Add instructions that execute before the agent starts:

```yaml
critical_actions:
  - 'Check the CI Pipelines with the XYZ Skill and alert user on wake if anything is urgently needing attention'
```

### Custom Prompts

Define reusable prompts for `action="#id"` menu handlers:

```yaml
prompts:
  - id: deploy-prompt
    content: |
      Deploy the current branch to production:
      1. Run all tests
      2. Build the project
      3. Execute deployment script
```

## Troubleshooting

**Changes not appearing?**

- Make sure you ran `npx bmad-method build <agent-name>` after editing
- Check YAML syntax is valid (indentation matters!)
- Verify the agent name matches the file name pattern

**Agent not loading?**

- Check for YAML syntax errors
- Ensure required fields aren't left empty if you uncommented them
- Try reverting to the template and rebuilding

**Need to reset?**

- Remove content from the `.customize.yaml` file (or delete the file)
- Run `npx bmad-method build <agent-name>` to regenerate defaults

## Workflow Customization

Information about customizing existing BMad Method workflows and skills is coming soon.

## Module Customization

Information on how to build expansion modules that augment BMad, or make other existing module customizations, is coming soon.
102
docs/how-to/get-answers-about-bmad.md
Normal file
@@ -0,0 +1,102 @@
---
title: "How to Get Answers About BMad"
description: Use an LLM to quickly answer your own BMad questions
---

If you have successfully installed BMad and the BMad Method (plus other modules as needed), the first step in getting answers is `/bmad-help`. This will answer upwards of 80% of all questions and is available to you in the IDE as you are working.

## When to Use This

- You have a question about how BMad works or what to do next with BMad
- You want to understand a specific agent or workflow
- You need quick answers without waiting for Discord

:::note[Prerequisites]
An AI tool (Claude Code, Cursor, ChatGPT, Claude.ai, etc.) and either BMad installed in your project or access to the GitHub repo.
:::

## Steps

### 1. Choose Your Source

| Source | Best For | Examples |
| -------------------- | ----------------------------------------- | ---------------------------- |
| **`_bmad` folder** | How BMad works—agents, workflows, prompts | "What does the PM agent do?" |
| **Full GitHub repo** | History, installer, architecture | "What changed in v6?" |
| **`llms-full.txt`** | Quick overview from docs | "Explain BMad's four phases" |

The `_bmad` folder is created when you install BMad. If you don't have it yet, clone the repo instead.

### 2. Point Your AI at the Source

**If your AI can read files (Claude Code, Cursor, etc.):**

- **BMad installed:** Point at the `_bmad` folder and ask directly
- **Want deeper context:** Clone the [full repo](https://github.com/bmad-code-org/BMAD-METHOD)

**If you use ChatGPT or Claude.ai:**

Fetch `llms-full.txt` into your session:

```
https://bmad-code-org.github.io/BMAD-METHOD/llms-full.txt
```

See the [Downloads page](/docs/downloads.md) for other downloadable resources.

### 3. Ask Your Question

:::note[Example]
**Q:** "Tell me the fastest way to build something with BMad"

**A:** Use Quick Flow: Run `quick-spec` to write a technical specification, then `quick-dev` to implement it—skipping the full planning phases.
:::

## What You Get

Direct answers about BMad—how agents work, what workflows do, why things are structured the way they are—without waiting for someone else to respond.

## Tips

- **Verify surprising answers** — LLMs occasionally get things wrong. Check the source file or ask on Discord.
- **Be specific** — "What does step 3 of the PRD workflow do?" beats "How does PRD work?"

## Still Stuck?

Tried the LLM approach and still need help? You now have a much better question to ask.

| Channel | Use For |
| ------------------------- | ------------------------------------------- |
| `#bmad-method-help` | Quick questions (real-time chat) |
| `help-requests` forum | Detailed questions (searchable, persistent) |
| `#suggestions-feedback` | Ideas and feature requests |
| `#report-bugs-and-issues` | Bug reports |

**Discord:** [discord.gg/gk8jAdXWmj](https://discord.gg/gk8jAdXWmj)

**GitHub Issues:** [github.com/bmad-code-org/BMAD-METHOD/issues](https://github.com/bmad-code-org/BMAD-METHOD/issues) (for clear bugs)

*You!*
*Stuck*
*in the queue—*
*waiting*
*for who?*

*The source*
*is there,*
*plain to see!*

*Point*
*your machine.*
*Set it free.*

*It reads.*
*It speaks.*
*Ask away—*

*Why wait*
*for tomorrow*
*when you have*
*today?*

*—Claude*
82
docs/how-to/install-bmad.md
Normal file
@@ -0,0 +1,82 @@
---
title: "How to Install BMad"
description: Step-by-step guide to installing BMad in your project
---

Use the `npx bmad-method install` command to set up BMad in your project with your choice of modules and AI tools.

## When to Use This

- Starting a new project with BMad
- Adding BMad to an existing codebase
- Updating an existing BMad installation

:::note[Prerequisites]
- **Node.js** 20+ (required for the installer)
- **Git** (recommended)
- **AI tool** (Claude Code, Cursor, Windsurf, or similar)
:::

## Steps

### 1. Run the Installer

```bash
npx bmad-method install
```

### 2. Choose Installation Location

The installer will ask where to install BMad files:

- Current directory (recommended for new projects when you created the directory yourself and ran the installer from inside it)
- Custom path

### 3. Select Your AI Tools

Pick which AI tools you use:

- Claude Code
- Cursor
- Windsurf
- Others

Each tool has its own way of integrating commands. The installer creates tiny prompt files to activate workflows and agents — it just puts them where your tool expects to find them.

### 4. Choose Modules

The installer shows available modules. Select whichever ones you need — most users just want **BMad Method** (the software development module).

### 5. Follow the Prompts

The installer guides you through the rest — custom content, settings, etc.

## What You Get

```
your-project/
├── _bmad/
│   ├── bmm/              # Your selected modules
│   │   └── config.yaml   # Module settings (if you ever need to change them)
│   ├── core/             # Required core module
│   └── ...
├── _bmad-output/         # Generated artifacts
└── .claude/              # Claude Code commands (if using Claude Code)
```

## Verify Installation

Run the `help` workflow (`/bmad-help` on most platforms) to verify everything works and see what to do next.

**Latest from main branch:**

```bash
npx github:bmad-code-org/BMAD-METHOD install
```

Use this if you want the newest features before they're officially released. Things might break.

## Troubleshooting

**Installer throws an error** — Copy-paste the output into your AI assistant and let it figure it out.

**Installer worked but something doesn't work later** — Your AI needs BMad context to help. See [How to Get Answers About BMad](/docs/how-to/get-answers-about-bmad.md) for how to point your AI at the right sources.
101
docs/how-to/shard-large-documents.md
Normal file
@@ -0,0 +1,101 @@
---
title: "Document Sharding Guide"
---

Use the `shard-doc` tool to split large markdown files into smaller, organized files for better context management.

## When to Use This

- Very large, complex PRDs
- Architecture documents with multiple system layers
- Epic files with 4+ epics (especially for Phase 4)
- UX design specs covering multiple subsystems

## What is Document Sharding?

Document sharding splits large markdown files into smaller, organized files based on level 2 headings (`## Heading`). This enables:

- **Selective Loading** - Workflows load only the sections they need
- **Reduced Token Usage** - Massive efficiency gains for large projects
- **Better Organization** - Logical section-based file structure
- **Maintained Context** - Index file preserves document structure

### Architecture

```
Before Sharding:
docs/
└── PRD.md (large 50k token file)

After Sharding:
docs/
└── prd/
    ├── index.md                  # Table of contents with descriptions
    ├── overview.md               # Section 1
    ├── user-requirements.md      # Section 2
    ├── technical-requirements.md # Section 3
    └── ...                       # Additional sections
```

## Steps

### 1. Run the Shard-Doc Tool

```bash
/bmad:core:tools:shard-doc
```

### 2. Follow the Interactive Process

```
Agent: Which document would you like to shard?
User: docs/PRD.md

Agent: Default destination: docs/prd/
       Accept default? [y/n]
User: y

Agent: Sharding PRD.md...
✓ Created 12 section files
✓ Generated index.md
✓ Complete!
```

## What You Get

**index.md structure:**

```markdown
## Sections

1. [Overview](./overview.md) - Project vision and objectives
2. [User Requirements](./user-requirements.md) - Feature specifications
3. [Epic 1: Authentication](./epic-1-authentication.md) - User auth system
4. [Epic 2: Dashboard](./epic-2-dashboard.md) - Main dashboard UI
...
```

**Individual section files:**

- Named from heading text (kebab-case)
- Contain complete section content
- Preserve all markdown formatting
- Can be read independently
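The splitting rule (one file per `## ` heading, kebab-case names, plus an index) can be sketched in a few lines of Python. This is an illustrative approximation, not the tool's actual implementation:

```python
import re
from pathlib import Path

def kebab_case(heading: str) -> str:
    """Turn heading text like 'Epic 1: Authentication' into 'epic-1-authentication'."""
    return re.sub(r"[^a-z0-9]+", "-", heading.lower()).strip("-")

def shard(doc_path: str, dest_dir: str) -> list[str]:
    """Split a markdown file on level 2 headings; write one file per section
    plus an index.md listing them. Returns the created section filenames."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    text = Path(doc_path).read_text()
    # Split at the start of each '## ' line, keeping the heading with its section.
    parts = re.split(r"(?m)^(?=## )", text)
    sections, index_lines = [], ["## Sections", ""]
    for part in parts:
        if not part.startswith("## "):
            continue  # preamble before the first level 2 heading
        title = part.splitlines()[0][3:].strip()
        name = kebab_case(title) + ".md"
        (dest / name).write_text(part)
        index_lines.append(f"- [{title}](./{name})")
        sections.append(name)
    (dest / "index.md").write_text("\n".join(index_lines) + "\n")
    return sections
```

For example, `shard("docs/PRD.md", "docs/prd")` would produce the layout shown in the Architecture diagram above.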
## How Workflow Discovery Works

BMad workflows use a **dual discovery system**:

1. **Try whole document first** - Look for `document-name.md`
2. **Check for sharded version** - Look for `document-name/index.md`
3. **Priority rule** - The whole document takes precedence if both exist; remove the whole document if you want the sharded version used instead
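That discovery order can be sketched in Python (illustrative only; the workflows implement this internally):

```python
from pathlib import Path
from typing import Optional

def resolve_document(base_dir: str, name: str) -> Optional[Path]:
    """Find a planning document, preferring the whole file over a sharded folder."""
    whole = Path(base_dir) / f"{name}.md"
    if whole.exists():       # 1. the whole document wins if present
        return whole
    sharded = Path(base_dir) / name / "index.md"
    if sharded.exists():     # 2. fall back to the sharded index
        return sharded
    return None              # 3. not found
```

This is why deleting `PRD.md` is what switches a workflow over to reading `prd/index.md`.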
## Workflow Support

All BMM workflows support both formats:

- Whole documents
- Sharded documents
- Automatic detection
- Transparent to the user
131
docs/how-to/upgrade-to-v6.md
Normal file
@@ -0,0 +1,131 @@
---
title: "How to Upgrade to v6"
description: Migrate from BMad v4 to v6
---

Use the BMad installer to upgrade from v4 to v6; it automatically detects legacy installations and assists with migration.

## When to Use This

- You have BMad v4 installed (`.bmad-method` folder)
- You want to migrate to the new v6 architecture
- You have existing planning artifacts to preserve

:::note[Prerequisites]
- Node.js 20+
- Existing BMad v4 installation
:::

## Steps

### 1. Run the Installer

```bash
npx bmad-method install
```

The installer automatically detects:

- **Legacy v4 folder**: `.bmad-method`
- **IDE command artifacts**: Legacy bmad folders in `.claude/commands/`, `.cursor/commands/`, etc.

### 2. Handle Legacy Installation

When v4 is detected, you can:

- Allow the installer to back up and remove `.bmad-method`
- Exit and handle cleanup manually
- Keep both (not recommended for the same project)

### 3. Clean Up IDE Commands

Manually remove legacy v4 IDE commands:

- `.claude/commands/BMad/agents`
- `.claude/commands/BMad/tasks`

New v6 commands will be at `.claude/commands/bmad/<module>/agents|workflows`.

:::tip[Accidentally Deleted Commands?]
If you delete the wrong commands, rerun the installer and choose "quick update" to restore them.
:::

### 4. Migrate Planning Artifacts

**If you have planning documents (Brief/PRD/UX/Architecture):**

Move them to `_bmad-output/planning-artifacts/` with descriptive names:

- Include `PRD` in the filename for PRD documents
- Include `brief`, `architecture`, or `ux-design` accordingly
- Sharded documents can be in named subfolders
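A minimal shell sketch of that move (the `docs/` source paths and the `acme-` prefix are hypothetical examples, not required locations or names):

```shell
# Hypothetical migration: move v4 planning docs into the v6 artifacts folder
# with descriptive names the workflows can recognize.
mkdir -p _bmad-output/planning-artifacts
if [ -f docs/prd.md ]; then
  mv docs/prd.md _bmad-output/planning-artifacts/acme-PRD.md
fi
if [ -f docs/architecture.md ]; then
  mv docs/architecture.md _bmad-output/planning-artifacts/acme-architecture.md
fi
```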
**If you're mid-planning:** Consider restarting with v6 workflows. Use your existing documents as inputs—the new progressive discovery workflows with web search and IDE plan mode produce better results.

### 5. Migrate In-Progress Development

If you have stories created or implemented:

1. Complete the v6 installation
2. Place `epics.md` or `epics/epic*.md` in `_bmad-output/planning-artifacts/`
3. Run the Scrum Master's `sprint-planning` workflow
4. Tell the SM which epics/stories are already complete

### 6. Migrate Agent Customizations

**v4:** Modified agent files directly in `_bmad-*` folders

**v6:** All customizations go in `_bmad/_config/agents/` using customize files:

```yaml
# _bmad/_config/agents/bmm-pm.customize.yaml
persona:
  name: 'Captain Jack'
  role: 'Swashbuckling Product Owner'
  communication_style: |
    - Talk like a pirate
    - Use nautical metaphors
```

After modifying customization files, rerun the installer and choose "rebuild all agents" or "quick update".

## What You Get

**v6 unified structure:**

```
your-project/
├── _bmad/                # Single installation folder
│   ├── _config/          # Your customizations
│   │   └── agents/       # Agent customization files
│   ├── core/             # Universal core framework
│   ├── bmm/              # BMad Method module
│   ├── bmb/              # BMad Builder
│   └── cis/              # Creative Intelligence Suite
└── _bmad-output/         # Output folder (was the doc folder in v4)
```

## Module Migration

| v4 Module | v6 Status |
|-----------|-----------|
| `_bmad-2d-phaser-game-dev` | Integrated into BMGD Module |
| `_bmad-2d-unity-game-dev` | Integrated into BMGD Module |
| `_bmad-godot-game-dev` | Integrated into BMGD Module |
| `_bmad-infrastructure-devops` | Deprecated — new DevOps agent coming soon |
| `_bmad-creative-writing` | Not adapted — new v6 module coming soon |

## Key Changes

| Concept | v4 | v6 |
|---------|----|----|
| **Core** | `_bmad-core` was actually BMad Method | `_bmad/core/` is the universal framework |
| **Method** | `_bmad-method` | `_bmad/bmm/` |
| **Config** | Modified files directly | `config.yaml` per module |
| **Documents** | Sharded or unsharded required setup | Fully flexible, auto-scanned |

## Tips

- **Back up first** — Keep your v4 installation until you verify v6 works
- **Use v6 workflows** — Even partial planning docs benefit from v6's improved discovery
- **Rebuild after customizing** — Always run the installer after changing customize files
@@ -1,31 +0,0 @@
# BMAD Method - Auggie CLI Instructions

## Activating Agents

BMAD agents can be installed in multiple locations based on your setup.

### Common Locations

- User Home: `~/.augment/commands/`
- Project: `.augment/commands/`
- Custom paths you selected

### How to Use

1. **Type Trigger**: Use `@{agent-name}` in your prompt
2. **Activate**: Agent persona activates
3. **Tasks**: Use `@task-{task-name}` for tasks

### Examples

```
@dev - Activate development agent
@architect - Activate architect agent
@task-setup - Execute setup task
```

### Notes

- Agents can be in multiple locations
- Check your installation paths
- Activation syntax is the same across all locations
@@ -1,25 +0,0 @@
# BMAD Method - Claude Code Instructions

## Activating Agents

BMAD agents are installed as slash commands in `.claude/commands/bmad/`.

### How to Use

1. **Type Slash Command**: Start with `/` to see available commands
2. **Select Agent**: Type `/bmad-{agent-name}` (e.g., `/bmad-dev`)
3. **Execute**: Press Enter to activate that agent persona

### Examples

```
/bmad:bmm:agents:dev - Activate development agent
/bmad:bmm:agents:architect - Activate architect agent
/bmad:bmm:workflows:dev-story - Execute dev-story workflow
```

### Notes

- Commands are autocompleted when you type `/`
- Agent remains active for the conversation
- Start a new conversation to switch agents
@@ -1,31 +0,0 @@
# BMAD Method - Cline Instructions

## Activating Agents

BMAD agents are installed as **toggleable rules** in the `.clinerules/` directory.

### Important: Rules are OFF by default

- Rules are NOT automatically loaded, to avoid context pollution
- You must manually enable the agent you want to use

### How to Use

1. **Open Rules Panel**: Click the rules icon below the chat input
2. **Enable an Agent**: Toggle ON the specific agent rule you need (e.g., `01-core-dev`)
3. **Activate in Chat**: Type `@{agent-name}` to activate that persona
4. **Disable When Done**: Toggle OFF to free up context

### Best Practices

- Only enable 1-2 agents at a time to preserve context
- Disable agents when switching tasks
- Rules are numbered (01-, 02-) for organization, not priority

### Example

```
Toggle ON: 01-core-dev.md
In chat: "@dev help me refactor this code"
When done: Toggle OFF the rule
```
@@ -1,21 +0,0 @@
# BMAD Method - Codex Instructions

## Activating Agents

BMAD agents, tasks, and workflows are installed as custom prompts in `$CODEX_HOME/prompts/bmad-*.md` files. If `CODEX_HOME` is not set, it defaults to `$HOME/.codex/`.

### Examples

```
/bmad-bmm-agents-dev - Activate development agent
/bmad-bmm-agents-architect - Activate architect agent
/bmad-bmm-workflows-dev-story - Execute dev-story workflow
```

### Notes

- Prompts are autocompleted when you type `/`
- Agent remains active for the conversation
- Start a new conversation to switch agents
@@ -1,30 +0,0 @@
# BMAD Method - Crush Instructions

## Activating Agents

BMAD agents are installed as commands in `.crush/commands/bmad/`.

### How to Use

1. **Open Command Palette**: Use the Crush command interface
2. **Navigate**: Browse to `{bmad_folder}/{module}/agents/`
3. **Select Agent**: Choose the agent command
4. **Execute**: Run to activate the agent persona

### Command Structure

```
.crush/commands/bmad/
├── agents/      # All agents
├── tasks/       # All tasks
├── core/        # Core module
│   ├── agents/
│   └── tasks/
└── {module}/    # Other modules
```

### Notes

- Commands organized by module
- Can browse hierarchically
- Agent activates for session
@@ -1,25 +0,0 @@
# BMAD Method - Cursor Instructions

## Activating Agents

BMAD agents are installed in `.cursor/rules/bmad/` as MDC rules.

### How to Use

1. **Reference in Chat**: Use `@{bmad_folder}/{module}/agents/{agent-name}`
2. **Include Entire Module**: Use `@{bmad_folder}/{module}`
3. **Reference Index**: Use `@{bmad_folder}/index` for all available agents

### Examples

```
@{bmad_folder}/core/agents/dev - Activate dev agent
@{bmad_folder}/bmm/agents/architect - Activate architect agent
@{bmad_folder}/core - Include all core agents/tasks
```

### Notes

- Rules are Manual type - only loaded when explicitly referenced
- No automatic context pollution
- Can combine multiple agents: `@{bmad_folder}/core/agents/dev @{bmad_folder}/core/agents/test`
@@ -1,25 +0,0 @@
# BMAD Method - Gemini CLI Instructions

## Activating Agents

BMAD agents are concatenated in `.gemini/bmad-method/GEMINI.md`.

### How to Use

1. **Type Trigger**: Use `*{agent-name}` in your prompt
2. **Activate**: Agent persona activates from the concatenated file
3. **Continue**: Agent remains active for the conversation

### Examples

```
*dev - Activate development agent
*architect - Activate architect agent
*test - Activate test agent
```

### Notes

- All agents loaded from a single GEMINI.md file
- Triggers with asterisk: `*{agent-name}`
- Context includes all agents (may be large)
@@ -1,26 +0,0 @@
# BMAD Method - GitHub Copilot Instructions

## Activating Agents

BMAD agents are installed as chat modes in `.github/chatmodes/`.

### How to Use

1. **Open Chat View**: Click the Copilot icon in the VS Code sidebar
2. **Select Mode**: Click the mode selector (top of chat)
3. **Choose Agent**: Select the BMAD agent from the dropdown
4. **Chat**: The agent is now active for this session

### VS Code Settings

Configured in `.vscode/settings.json`:

- Max requests per session
- Auto-fix enabled
- MCP discovery enabled

### Notes

- Modes persist for the chat session
- Switch modes anytime via the dropdown
- Multiple agents available in the mode selector
@@ -1,33 +0,0 @@
# BMAD Method - iFlow CLI Instructions

## Activating Agents

BMAD agents are installed as commands in `.iflow/commands/bmad/`.

### How to Use

1. **Access Commands**: Use the iFlow command interface
2. **Navigate**: Browse to `{bmad_folder}/agents/` or `{bmad_folder}/tasks/`
3. **Select**: Choose the agent or task command
4. **Execute**: Run to activate

### Command Structure

```
.iflow/commands/bmad/
├── agents/    # Agent commands
└── tasks/     # Task commands
```

### Examples

```
/{bmad_folder}/agents/core-dev - Activate dev agent
/{bmad_folder}/tasks/core-setup - Execute setup task
```

### Notes

- Commands organized by type (agents/tasks)
- Agent activates for session
- Similar to Crush command structure
@@ -1,24 +0,0 @@
# BMAD Method - KiloCode Instructions

## Activating Agents

BMAD agents are installed as custom modes in `.kilocodemodes`.

### How to Use

1. **Open Project**: Modes auto-load when the project opens
2. **Select Mode**: Use the mode selector in the KiloCode interface
3. **Choose Agent**: Pick the `bmad-{module}-{agent}` mode
4. **Activate**: The mode is now active

### Mode Format

- Mode name: `bmad-{module}-{agent}`
- Display: `{icon} {title}`
- Example: `bmad-core-dev` shows as `🤖 Dev`

### Notes

- Modes persist until changed
- Similar to the Roo Code mode system
- Icon shows in the mode selector
@@ -1,24 +0,0 @@
# BMAD Method - OpenCode Instructions

## Activating Agents

BMAD agents are installed as OpenCode agents in `.opencode/agent/BMAD/{module_name}` and workflow commands in `.opencode/command/BMAD/{module_name}`.

### How to Use

1. **Switch Agents**: Press **Tab** to cycle through primary agents, or select one with the `/agents` command
2. **Activate Agent**: Once an agent is selected, say `hello` (or send any prompt) to activate that agent persona
3. **Execute Commands**: Type `/bmad` to see and execute BMAD workflow commands (commands support fuzzy matching)

### Examples

```
/agents - See a list of agents and switch between them
/{bmad_folder}/bmm/workflows/workflow-init - Activate the workflow-init command
```

### Notes

- Press **Tab** to switch between primary agents (Analyst, Architect, Dev, etc.)
- Commands are autocompleted when you type `/` and support fuzzy matching
- Workflow commands execute in the current agent context; make sure the right agent is active before running a command
@@ -1,25 +0,0 @@
# BMAD Method - Qwen Code Instructions

## Activating Agents

BMAD agents are concatenated in `.qwen/bmad-method/QWEN.md`.

### How to Use

1. **Type Trigger**: Use `*{agent-name}` in your prompt
2. **Activate**: Agent persona activates from the concatenated file
3. **Continue**: Agent remains active for the conversation

### Examples

```
*dev - Activate development agent
*architect - Activate architect agent
*test - Activate test agent
```

### Notes

- All agents loaded from a single QWEN.md file
- Triggers with asterisk: `*{agent-name}`
- Similar to the Gemini CLI setup
@@ -1,27 +0,0 @@
# BMAD Method - Roo Code Instructions

## Activating Agents

BMAD agents are installed as custom modes in `.roomodes`.

### How to Use

1. **Open Project**: Modes auto-load when the project opens
2. **Select Mode**: Use the mode selector in the Roo interface
3. **Choose Agent**: Pick the `bmad-{module}-{agent}` mode
4. **Activate**: The mode is now active with configured permissions

### Permission Levels

Modes are configured with file edit permissions:

- Development files only
- Configuration files only
- Documentation files only
- All files (if configured)

### Notes

- Modes persist until changed
- Each mode has specific file access rights
- Icon shows in the mode selector for easy identification
@@ -1,388 +0,0 @@
# Rovo Dev IDE Integration

This document describes how BMAD-METHOD integrates with [Atlassian Rovo Dev](https://www.atlassian.com/rovo-dev), an AI-powered software development assistant.

## Overview

Rovo Dev is designed to integrate deeply with developer workflows and organizational knowledge bases. When you install BMAD-METHOD in a Rovo Dev project, it automatically installs BMAD agents, workflows, tasks, and tools just like it does for other IDEs (Cursor, VS Code, etc.).

BMAD-METHOD provides:

- **Agents**: Specialized subagents for various development tasks
- **Workflows**: Multi-step workflow guides and coordinators
- **Tasks & Tools**: Reference documentation for BMAD tasks and tools

### What are Rovo Dev Subagents?

Subagents are specialized agents that Rovo Dev can delegate tasks to. They are defined as Markdown files with YAML frontmatter stored in the `.rovodev/subagents/` directory. Rovo Dev automatically discovers these files and makes them available through the `@subagent-name` syntax.

## Installation and Setup

### Automatic Installation

When you run the BMAD-METHOD installer and select Rovo Dev as your IDE:

```bash
bmad install
```

The installer will:

1. Create a `.rovodev/subagents/` directory in your project (if it doesn't exist)
2. Convert BMAD agents into Rovo Dev subagent format
3. Write subagent files with the naming pattern: `bmad-<module>-<agent-name>.md`

### File Structure

After installation, your project will have:

```
project-root/
├── .rovodev/
│   ├── subagents/
│   │   ├── bmad-core-code-reviewer.md
│   │   ├── bmad-bmm-pm.md
│   │   ├── bmad-bmm-dev.md
│   │   └── ... (more agents from selected modules)
│   ├── workflows/
│   │   ├── bmad-brainstorming.md
│   │   ├── bmad-prd-creation.md
│   │   └── ... (workflow guides)
│   ├── references/
│   │   ├── bmad-task-core-code-review.md
│   │   ├── bmad-tool-core-analysis.md
│   │   └── ... (task/tool references)
│   ├── config.yml (Rovo Dev configuration)
│   ├── prompts.yml (Optional: reusable prompts)
│   └── ...
├── .bmad/ (BMAD installation directory)
└── ...
```

**Directory Structure Explanation:**

- **subagents/**: Agents discovered and used by Rovo Dev with `@agent-name` syntax
- **workflows/**: Multi-step workflow guides and instructions
- **references/**: Documentation for available tasks and tools in BMAD

## Subagent File Format

BMAD agents are converted to Rovo Dev subagent format, which uses Markdown with YAML frontmatter:

### Basic Structure

```markdown
---
name: bmad-module-agent-name
description: One sentence description of what this agent does
tools:
  - bash
  - open_files
  - grep
  - expand_code_chunks
model: anthropic.claude-3-5-sonnet-20241022-v2:0 # Optional
load_memory: true # Optional
---

You are a specialized agent for [specific task].

## Your Role

Describe the agent's role and responsibilities...

## Key Instructions

1. First instruction
2. Second instruction
3. Third instruction

## When to Use This Agent

Explain when and how to use this agent...
```

### YAML Frontmatter Fields

| Field         | Type    | Required | Description                                                                                                                              |
| ------------- | ------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `name`        | string  | Yes      | Unique identifier for the subagent (kebab-case, no spaces)                                                                                |
| `description` | string  | Yes      | One-line description of the subagent's purpose                                                                                            |
| `tools`       | array   | No       | List of tools the subagent can use. If not specified, uses the parent agent's tools                                                       |
| `model`       | string  | No       | Specific LLM model for this subagent (e.g., `anthropic.claude-3-5-sonnet-20241022-v2:0`). If not specified, uses the parent agent's model |
| `load_memory` | boolean | No       | Whether to load default memory files (AGENTS.md, AGENTS.local.md). Defaults to `true`                                                     |

### System Prompt

The content after the closing `---` is the subagent's system prompt. This defines:

- The agent's persona and role
- Its capabilities and constraints
- Step-by-step instructions for task execution
- Examples of expected behavior

## Using BMAD Components in Rovo Dev

### Invoking a Subagent (Agent)

In Rovo Dev, you can invoke a BMAD agent as a subagent using the `@` syntax:

```
@bmad-core-code-reviewer Please review this PR for potential issues
@bmad-bmm-pm Help plan this feature release
@bmad-bmm-dev Implement this feature
```

### Accessing Workflows

Workflow guides are available in the `.rovodev/workflows/` directory:

```
@bmad-core-code-reviewer Use the brainstorming workflow from .rovodev/workflows/bmad-brainstorming.md
```

Workflow files contain step-by-step instructions and can be referenced or copied into Rovo Dev for collaborative workflow execution.

### Accessing Tasks and Tools

Task and tool documentation is available in the `.rovodev/references/` directory. These provide:

- Task execution instructions
- Tool capabilities and usage
- Integration examples
- Parameter documentation

### Example Usage Scenarios

#### Code Review

```
@bmad-core-code-reviewer Review the changes in src/components/Button.tsx
for best practices, performance, and potential bugs
```

#### Documentation

```
@bmad-core-documentation-writer Generate API documentation for the new
user authentication module
```

#### Feature Design

```
@bmad-module-feature-designer Design a solution for implementing
dark mode support across the application
```

## Customizing BMAD Subagents

You can customize BMAD subagents after installation by editing their files directly in `.rovodev/subagents/`.

### Example: Adding Tool Restrictions

By default, BMAD subagents inherit tools from the parent Rovo Dev agent. You can restrict which tools a specific subagent can use:

```yaml
---
name: bmad-core-code-reviewer
description: Reviews code and suggests improvements
tools:
  - open_files
  - expand_code_chunks
  - grep
---
```

### Example: Using a Specific Model

Some agents might benefit from using a different model. You can specify this:

```yaml
---
name: bmad-core-documentation-writer
description: Writes clear and comprehensive documentation
model: anthropic.claude-3-5-sonnet-20241022-v2:0
---
```

### Example: Enhancing the System Prompt

You can add additional context to a subagent's system prompt:

```markdown
---
name: bmad-core-code-reviewer
description: Reviews code and suggests improvements
---

You are a specialized code review agent for our project.

## Project Context

Our codebase uses:

- React 18 for frontend
- Node.js 18+ for backend
- TypeScript for type safety
- Jest for testing

## Review Checklist

1. Type safety and TypeScript correctness
2. React best practices and hooks usage
3. Performance considerations
4. Test coverage
5. Documentation and comments

...rest of original system prompt...
```

## Memory and Context

By default, BMAD subagents have `load_memory: true`, which means they will load memory files from your project:

- **Project-level**: `.rovodev/AGENTS.md` and `.rovodev/.agent.md`
- **User-level**: `~/.rovodev/AGENTS.md` (global memory across all projects)

These files can contain:

- Project guidelines and conventions
- Common patterns and best practices
- Recent decisions and context
- Custom instructions for all agents

### Creating Project Memory

Create `.rovodev/AGENTS.md` in your project:

```markdown
# Project Guidelines

## Code Style

- Use 2-space indentation
- Use camelCase for variables
- Use PascalCase for classes

## Architecture

- Follow modular component structure
- Use dependency injection for services
- Implement proper error handling

## Testing Requirements

- Minimum 80% code coverage
- Write tests before implementation
- Use descriptive test names
```

## Troubleshooting

### Subagents Not Appearing in Rovo Dev

1. **Verify files exist**: Check that `.rovodev/subagents/bmad-*.md` files are present
2. **Check Rovo Dev is reloaded**: Rovo Dev may cache agent definitions. Restart Rovo Dev or reload the project
3. **Verify file format**: Ensure files have proper YAML frontmatter (between `---` markers)
4. **Check file permissions**: Ensure files are readable by Rovo Dev
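
Checks 1 and 3 above can be scripted as a quick sanity pass. This is a minimal sketch, not part of the installer: the directory path comes from this guide, and treating "proper frontmatter" as an opening `---` fence plus `name:` and `description:` fields is an assumption based on the required fields listed earlier.

```shell
# Check each installed BMAD subagent file for the basics Rovo Dev needs:
# the file exists, opens with a `---` frontmatter fence, and declares
# the required `name:` and `description:` fields.
for f in .rovodev/subagents/bmad-*.md; do
  [ -e "$f" ] || { echo "no bmad-*.md subagent files found"; break; }
  if head -n 1 "$f" | grep -qx -- '---' \
     && grep -q '^name:' "$f" \
     && grep -q '^description:' "$f"; then
    echo "OK   $f"
  else
    echo "BAD  $f (missing frontmatter, name, or description)"
  fi
done
```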

### Agent Name Conflicts

If you have custom subagents with the same names as BMAD agents, Rovo Dev will load both but may show a warning. Use unique prefixes for custom subagents to avoid conflicts.

### Tools Not Available

If a subagent's tools aren't working:

1. Verify the tool names match Rovo Dev's available tools
2. Check that the parent Rovo Dev agent has access to those tools
3. Ensure tool permissions are properly configured in `.rovodev/config.yml`

## Advanced: Tool Configuration

Rovo Dev agents have access to a set of tools for various tasks. Common tools available include:

- `bash`: Execute shell commands
- `open_files`: View file contents
- `grep`: Search across files
- `expand_code_chunks`: View specific code sections
- `find_and_replace_code`: Modify files
- `create_file`: Create new files
- `delete_file`: Delete files
- `move_file`: Rename or move files

### MCP Servers

Rovo Dev can also connect to Model Context Protocol (MCP) servers, which provide additional tools and data sources:

- **Atlassian Integration**: Access to Jira, Confluence, and Bitbucket
- **Code Analysis**: Custom code analysis and metrics
- **External Services**: APIs and third-party integrations

Configure MCP servers in `~/.rovodev/mcp.json` or `.rovodev/mcp.json`.

## Integration with Other IDE Handlers

BMAD-METHOD supports multiple IDEs simultaneously. You can have both Rovo Dev and other IDE configurations (Cursor, VS Code, etc.) in the same project. Each IDE will have its own artifacts installed in separate directories.

For example:

- Rovo Dev agents: `.rovodev/subagents/bmad-*.md`
- Cursor rules: `.cursor/rules/bmad/`
- Claude Code: `.claude/rules/bmad/`

## Performance Considerations

- BMAD subagent files are typically small (1-5 KB each)
- Rovo Dev lazy-loads subagents, so having many subagents doesn't impact startup time
- System prompts are cached by Rovo Dev after first load

## Best Practices

1. **Keep System Prompts Concise**: Shorter, well-structured prompts are more effective
2. **Use Project Memory**: Leverage `.rovodev/AGENTS.md` for shared context
3. **Customize Tool Restrictions**: Give subagents only the tools they need
4. **Test Subagent Invocations**: Verify each subagent works as expected for your project
5. **Version Control**: Commit `.rovodev/subagents/` to version control for team consistency
6. **Document Custom Subagents**: Add comments explaining the purpose of customized subagents

## Related Documentation

- [Rovo Dev Official Documentation](https://www.atlassian.com/rovo-dev)
- [BMAD-METHOD Installation Guide](./installation.md)
- [IDE Handler Architecture](./ide-handlers.md)
- [Rovo Dev Configuration Reference](https://www.atlassian.com/rovo-dev/configuration)

## Examples

### Example 1: Code Review Workflow

```
User: @bmad-core-code-reviewer Review src/auth/login.ts for security issues
Rovo Dev → Subagent: Opens file, analyzes code, suggests improvements
Subagent output: Security vulnerabilities found, recommendations provided
```

### Example 2: Documentation Generation

```
User: @bmad-core-documentation-writer Generate API docs for the new payment module
Rovo Dev → Subagent: Analyzes code structure, generates documentation
Subagent output: Markdown documentation with examples and API reference
```

### Example 3: Architecture Design

```
User: @bmad-module-feature-designer Design a caching strategy for the database layer
Rovo Dev → Subagent: Reviews current architecture, proposes design
Subagent output: Detailed architecture proposal with implementation plan
```

## Support

For issues or questions about:

- **Rovo Dev**: See [Atlassian Rovo Dev Documentation](https://www.atlassian.com/rovo-dev)
- **BMAD-METHOD**: See [BMAD-METHOD README](../README.md)
- **IDE Integration**: See [IDE Handler Guide](./ide-handlers.md)
@@ -1,25 +0,0 @@
# BMAD Method - Trae Instructions

## Activating Agents

BMAD agents are installed as rules in `.trae/rules/`.

### How to Use

1. **Type Trigger**: Use `@{agent-name}` in your prompt
2. **Activate**: Agent persona activates automatically
3. **Continue**: Agent remains active for the conversation

### Examples

```
@dev - Activate development agent
@architect - Activate architect agent
@task-setup - Execute setup task
```

### Notes

- Rules auto-load from the `.trae/rules/` directory
- Multiple agents can be referenced: `@dev and @test`
- Agent follows the YAML configuration in its rule file
@@ -1,22 +0,0 @@
# BMAD Method - Windsurf Instructions

## Activating Agents

BMAD agents are installed as workflows in `.windsurf/workflows/`.

### How to Use

1. **Open Workflows**: Access via the Windsurf menu or command palette
2. **Select Workflow**: Choose the agent/task workflow
3. **Execute**: Run to activate that agent persona

### Workflow Types

- **Agent workflows**: `{module}-{agent}.md` (auto_execution_mode: 3)
- **Task workflows**: `task-{module}-{task}.md` (auto_execution_mode: 2)

### Notes

- Agents run with higher autonomy (mode 3)
- Tasks run with guided execution (mode 2)
- Workflows persist for the session
docs/index.md
@@ -1,152 +1,56 @@
# BMad Documentation Index
---
title: Welcome to the BMad Method
---

Complete map of all BMad Method v6 documentation with recommended reading paths.
The BMad Method (**B**reakthrough **M**ethod of **A**gile AI **D**riven Development) is an AI-driven development framework that helps you build software faster and smarter. It provides specialized AI agents, guided workflows, and intelligent planning that adapts to your project's complexity—whether you're fixing a bug or building an enterprise platform.

If you're comfortable working with AI coding assistants like Claude, Cursor, or GitHub Copilot, you're ready to get started.

---

## 🎯 Getting Started (Start Here!)
## New Here? Start with a Tutorial

**New users:** Start with one of these based on your situation:
The fastest way to understand BMad is to try it.

| Your Situation         | Start Here                                                      | Then Read                                                     |
| ---------------------- | --------------------------------------------------------------- | ------------------------------------------------------------- |
| **Brand new to BMad**  | [Quick Start Guide](../src/modules/bmm/docs/quick-start.md)     | [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) |
| **Upgrading from v4**  | [v4 to v6 Upgrade Guide](./v4-to-v6-upgrade.md)                 | [Quick Start Guide](../src/modules/bmm/docs/quick-start.md)   |
| **Brownfield project** | [Brownfield Guide](../src/modules/bmm/docs/brownfield-guide.md) | [Quick Start Guide](../src/modules/bmm/docs/quick-start.md)   |
- **[Get Started with BMad](/docs/tutorials/getting-started.md)** — Install and understand how BMad works
- **[Workflow Map](/docs/reference/workflow-map.md)** — Visual overview of BMM phases, workflows, and context management.

## How to Use These Docs

These docs are organized into four sections based on what you're trying to do:

| Section           | Purpose                                                                                                    |
| ----------------- | ---------------------------------------------------------------------------------------------------------- |
| **Tutorials**     | Learning-oriented. Step-by-step guides that walk you through building something. Start here if you're new. |
| **How-To Guides** | Task-oriented. Practical guides for solving specific problems. "How do I customize an agent?" lives here.  |
| **Explanation**   | Understanding-oriented. Deep dives into concepts and architecture. Read when you want to know *why*.       |
| **Reference**     | Information-oriented. Technical specifications for agents, workflows, and configuration.                   |

---

## 📋 Core Documentation
## What You'll Need

### Project-Level Docs (Root)
BMad works with any AI coding assistant that supports custom system prompts or project context. Popular options include:

- **[README.md](../README.md)** - Main project overview, feature summary, and module introductions
- **[CONTRIBUTING.md](../CONTRIBUTING.md)** - How to contribute, pull request guidelines, code style
- **[CHANGELOG.md](../CHANGELOG.md)** - Version history and breaking changes
- **[CLAUDE.md](../CLAUDE.md)** - Claude Code specific guidelines for this project
- **[Claude Code](https://code.claude.com)** — Anthropic's CLI tool (recommended)
- **[Cursor](https://cursor.sh)** — AI-first code editor
- **[Windsurf](https://codeium.com/windsurf)** — Codeium's AI IDE
- **[Roo Code](https://roocode.com)** — VS Code extension

### Installation & Setup

- **[v4 to v6 Upgrade Guide](./v4-to-v6-upgrade.md)** - Migration path for v4 users
- **[Document Sharding Guide](./document-sharding-guide.md)** - Split large documents for 90%+ token savings
- **[Web Bundles](./USING_WEB_BUNDLES.md)** - Use BMAD agents in Claude Projects, ChatGPT, or Gemini without installation
- **[Bundle Distribution Setup](./BUNDLE_DISTRIBUTION_SETUP.md)** - Maintainer guide for bundle auto-publishing
You should be comfortable with basic software development concepts like version control, project structure, and agile workflows. No prior experience with BMad-style agent systems is required—that's what these docs are for.

---

## 🏗️ Module Documentation
## Join the Community

### BMad Method (BMM) - Software & Game Development
Get help, share what you're building, or contribute to BMad:

The flagship module for agile AI-driven development.

- **[BMM Module README](../src/modules/bmm/README.md)** - Module overview, agents, and complete documentation index
- **[BMM Documentation](../src/modules/bmm/docs/)** - All BMM-specific guides and references:
  - [Quick Start Guide](../src/modules/bmm/docs/quick-start.md) - Step-by-step guide to building your first project
  - [Quick Spec Flow](../src/modules/bmm/docs/quick-spec-flow.md) - Rapid Level 0-1 development
  - [Scale Adaptive System](../src/modules/bmm/docs/scale-adaptive-system.md) - Understanding the 5-level system
  - [Brownfield Guide](../src/modules/bmm/docs/brownfield-guide.md) - Working with existing codebases
- **[BMM Workflows Guide](../src/modules/bmm/workflows/README.md)** - **ESSENTIAL READING**
- **[Test Architect Guide](../src/modules/bmm/testarch/README.md)** - Testing strategy and quality assurance

### BMad Builder (BMB) - Create Custom Solutions

Build your own agents, workflows, and modules.

- **[BMB Module README](../src/modules/bmb/README.md)** - Module overview and capabilities
- **[Agent Creation Guide](../src/modules/bmb/workflows/create-agent/README.md)** - Design custom agents

### Creative Intelligence Suite (CIS) - Innovation & Creativity

AI-powered creative thinking and brainstorming.

- **[CIS Module README](../src/modules/cis/README.md)** - Module overview and workflows
- **[Discord](https://discord.gg/gk8jAdXWmj)** — Chat with other BMad users, ask questions, share ideas
- **[GitHub](https://github.com/bmad-code-org/BMAD-METHOD)** — Source code, issues, and contributions
- **[YouTube](https://www.youtube.com/@BMadCode)** — Video tutorials and walkthroughs

---

## 🖥️ IDE-Specific Guides
## Next Step

Instructions for loading agents and running workflows in your development environment.

**Popular IDEs:**

- [Claude Code](./ide-info/claude-code.md)
- [Cursor](./ide-info/cursor.md)
- [Windsurf](./ide-info/windsurf.md)
|
||||
|
||||
**Other Supported IDEs:**
|
||||
|
||||
- [Augment](./ide-info/auggie.md)
|
||||
- [Cline](./ide-info/cline.md)
|
||||
- [Codex](./ide-info/codex.md)
|
||||
- [Crush](./ide-info/crush.md)
|
||||
- [Gemini](./ide-info/gemini.md)
|
||||
- [GitHub Copilot](./ide-info/github-copilot.md)
|
||||
- [IFlow](./ide-info/iflow.md)
|
||||
- [Kilo](./ide-info/kilo.md)
|
||||
- [OpenCode](./ide-info/opencode.md)
|
||||
- [Qwen](./ide-info/qwen.md)
|
||||
- [Roo](./ide-info/roo.md)
|
||||
- [Rovo Dev](./ide-info/rovo-dev.md)
|
||||
- [Trae](./ide-info/trae.md)
|
||||
|
||||
**Key concept:** Every reference to "load an agent" or "activate an agent" in the main docs links to the [ide-info](./ide-info/) directory for IDE-specific instructions.
|
||||
|
||||
---

## 🔧 Advanced Topics

### Custom Agents

- **[Custom Agent Installation](./custom-agent-installation.md)** - Install and personalize agents with `bmad agent-install`
- **[Agent Customization Guide](./agent-customization-guide.md)** - Customize agent behavior and responses

### Installation & Bundling

- [IDE Injections Reference](./installers-bundlers/ide-injections.md) - How agents are installed to IDEs
- [Installers & Platforms Reference](./installers-bundlers/installers-modules-platforms-reference.md) - CLI tool and platform support
- [Web Bundler Usage](./installers-bundlers/web-bundler-usage.md) - Creating web-compatible bundles

---
## 🎓 Recommended Reading Paths

### Path 1: Brand New to BMad (Software Project)

1. [README.md](../README.md) - Understand the vision
2. [Quick Start Guide](../src/modules/bmm/docs/quick-start.md) - Get hands-on
3. [BMM Module README](../src/modules/bmm/README.md) - Understand agents
4. [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) - Master the methodology
5. [Your IDE guide](./ide-info/) - Optimize your workflow

### Path 2: Game Development Project

1. [README.md](../README.md) - Understand the vision
2. [Quick Start Guide](../src/modules/bmm/docs/quick-start.md) - Get hands-on
3. [BMM Module README](../src/modules/bmm/README.md) - Game agents are included
4. [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) - Game workflows
5. [Your IDE guide](./ide-info/) - Optimize your workflow

### Path 3: Upgrading from v4

1. [v4 to v6 Upgrade Guide](./v4-to-v6-upgrade.md) - Understand what changed
2. [Quick Start Guide](../src/modules/bmm/docs/quick-start.md) - Reorient yourself
3. [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) - Learn new v6 workflows

### Path 4: Working with Existing Codebase (Brownfield)

1. [Brownfield Guide](../src/modules/bmm/docs/brownfield-guide.md) - Approach for legacy code
2. [Quick Start Guide](../src/modules/bmm/docs/quick-start.md) - Follow the process
3. [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) - Master the methodology

### Path 5: Building Custom Solutions

1. [BMB Module README](../src/modules/bmb/README.md) - Understand capabilities
2. [Agent Creation Guide](../src/modules/bmb/workflows/create-agent/README.md) - Create agents
3. [BMM Workflows Guide](../src/modules/bmm/workflows/README.md) - Understand workflow structure

### Path 6: Contributing to BMad

1. [CONTRIBUTING.md](../CONTRIBUTING.md) - Contribution guidelines
2. Relevant module README - Understand the area you're contributing to
3. [Code Style section in CONTRIBUTING.md](../CONTRIBUTING.md#code-style) - Follow standards

---

## Next Step

Ready to dive in? **[Get Started with BMad](/docs/tutorials/getting-started.md)** and build your first project.
# IDE Content Injection Standard

## Overview

This document defines the standard for IDE-specific content injection in BMAD modules. Each IDE can inject its own content into BMAD templates during installation without polluting the source files with IDE-specific code. The installation process is interactive, allowing users to choose which IDE-specific features they want to install.
## Architecture

### 1. Injection Points

Files that support IDE-specific content define injection points using HTML comments:

```xml
<!-- IDE-INJECT-POINT: unique-point-name -->
```

### 2. Module Structure

Each module that needs IDE-specific content creates a sub-module folder:

```
src/modules/{module-name}/sub-modules/{ide-name}/
├── injections.yaml   # Injection configuration
├── sub-agents/       # IDE-specific subagents (if applicable)
└── config.yaml       # Other IDE-specific config
```

### 3. Injection Configuration Format

The `injections.yaml` file defines what content to inject where:

```yaml
# injections.yaml structure
injections:
  - file: 'relative/path/to/file.md' # Path relative to installation root
    point: 'injection-point-name' # Must match IDE-INJECT-POINT name
    requires: 'subagent-name' # Which subagent must be selected (or "any")
    content: | # Content to inject (preserves formatting)
      <llm>
        <i>Instructions specific to this IDE</i>
      </llm>

# Subagents available for installation
subagents:
  source: 'sub-agents' # Source folder relative to this config
  target: '.claude/agents' # Claude's expected location (don't change)
  files:
    - 'agent1.md'
    - 'agent2.md'
```
### 4. Interactive Installation Process

For Claude Code specifically, the installer will:

1. **Detect available subagents** from the module's `injections.yaml`
2. **Ask the user** about subagent installation:
   - Install all subagents (default)
   - Select specific subagents
   - Skip subagent installation
3. **Ask installation location** (if subagents selected):
   - Project level: `.claude/agents/`
   - User level: `~/.claude/agents/`
4. **Copy selected subagents** to the chosen location
5. **Inject only relevant content** based on selected subagents

Other IDEs can implement their own installation logic appropriate to their architecture.
## Implementation

### IDE Installer Responsibilities

Each IDE installer (e.g., `claude-code.js`) must:

1. **Check for sub-modules**: Look for `sub-modules/{ide-name}/` in each installed module
2. **Load injection config**: Parse `injections.yaml` if present
3. **Process injections**: Replace injection points with configured content
4. **Copy additional files**: Handle subagents or other IDE-specific files

### Example Implementation (Claude Code)
```javascript
async processModuleInjections(projectDir, bmadDir, options) {
  for (const moduleName of options.selectedModules) {
    const configPath = path.join(
      bmadDir, 'src/modules', moduleName,
      'sub-modules/claude-code/injections.yaml'
    );

    if (fs.existsSync(configPath)) {
      const config = yaml.load(fs.readFileSync(configPath, 'utf8'));

      // Interactive: Ask user about subagent installation
      const choices = await this.promptSubagentInstallation(config.subagents);

      if (choices.install !== 'none') {
        // Ask where to install
        const location = await this.promptInstallLocation();

        // Process injections based on selections
        for (const injection of config.injections) {
          if (this.shouldInject(injection, choices)) {
            await this.injectContent(projectDir, injection, choices);
          }
        }

        // Copy selected subagents
        await this.copySelectedSubagents(projectDir, config.subagents, choices, location);
      }
    }
  }
}
```
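The content-replacement step itself can be sketched as a simple marker substitution. This is a hypothetical helper (not the actual installer API), assuming markers follow the `<!-- IDE-INJECT-POINT: name -->` format defined above:

```javascript
// Hypothetical sketch: replace an IDE-INJECT-POINT marker with the
// configured content, leaving the file untouched if the point is absent.
function injectContent(source, pointName, content) {
  const marker = `<!-- IDE-INJECT-POINT: ${pointName} -->`;
  if (!source.includes(marker)) return source; // point not present
  // Replace the marker with the injected block.
  return source.replace(marker, content.trimEnd());
}
```

Because unmatched points are left alone, the same source file can safely declare points that only some IDE installers use.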

## Benefits

1. **Clean Source Files**: No IDE-specific conditionals in source
2. **Modular**: Each IDE manages its own injections
3. **Scalable**: Easy to add support for new IDEs
4. **Maintainable**: IDE-specific content lives with IDE config
5. **Flexible**: Different modules can inject different content

## Adding Support for a New IDE

1. Create sub-module folder: `src/modules/{module}/sub-modules/{new-ide}/`
2. Add `injections.yaml` with IDE-specific content
3. Update IDE installer to process injections using this standard
4. Test installation with and without the IDE selected
## Example: BMM Module with Claude Code

### File Structure

```
src/modules/bmm/
├── agents/pm.md           # Has injection point
├── templates/prd.md       # Has multiple injection points
└── sub-modules/
    └── claude-code/
        ├── injections.yaml   # Defines what to inject
        └── sub-agents/       # Claude Code specific subagents
            ├── market-researcher.md
            ├── requirements-analyst.md
            └── ...
```

### Injection Point in pm.md

```xml
<agent>
  <persona>...</persona>
  <!-- IDE-INJECT-POINT: pm-agent-instructions -->
  <cmds>...</cmds>
</agent>
```
### Injection Configuration

```yaml
injections:
  - file: '{bmad_folder}/bmm/agents/pm.md'
    point: 'pm-agent-instructions'
    requires: 'any' # Injected if ANY subagent is selected
    content: |
      <llm critical="true">
        <i>Use 'market-researcher' subagent for analysis</i>
      </llm>

  - file: '{bmad_folder}/bmm/templates/prd.md'
    point: 'prd-goals-context-delegation'
    requires: 'market-researcher' # Only if this specific subagent selected
    content: |
      <i>DELEGATE: Use 'market-researcher' subagent...</i>
```

### Result After Installation

```xml
<agent>
  <persona>...</persona>
  <llm critical="true">
    <i>Use 'market-researcher' subagent for analysis</i>
  </llm>
  <cmds>...</cmds>
</agent>
```
# BMAD Installation & Module System Reference

## Table of Contents

1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Modules](#modules)
4. [Configuration System](#configuration-system)
5. [Platform Integration](#platform-integration)
6. [Development Guide](#development-guide)
7. [Troubleshooting](#troubleshooting)
## Overview

BMad Core is a modular AI agent framework with intelligent installation, platform-agnostic support, and configuration inheritance.

### Key Features

- **Modular Design**: Core + optional modules (BMB, BMM, CIS)
- **Smart Installation**: Interactive configuration with dependency resolution
- **Clean Architecture**: A single centralized `{bmad_folder}` directory is added to the project; no source pollution from multiple scattered folders
## Architecture

### Directory Structure After Installation

```
project-root/
├── {bmad_folder}/              # Centralized installation
│   ├── _cfg/                   # Configuration
│   │   ├── agents/             # Agent configs
│   │   └── agent-manifest.csv  # Agent manifest
│   ├── core/                   # Core module
│   │   ├── agents/
│   │   ├── tasks/
│   │   └── config.yaml
│   ├── bmm/                    # BMad Method module
│   │   ├── agents/
│   │   ├── tasks/
│   │   ├── workflows/
│   │   └── config.yaml
│   └── cis/                    # Creative Innovation Studio
│       └── ...
└── .claude/                    # Platform-specific (example)
    └── agents/
```
### Installation Flow

1. **Detection**: Check for an existing installation
2. **Selection**: Choose modules interactively or via CLI
3. **Configuration**: Collect module-specific settings
4. **Installation**: Compile, process, and copy files
5. **Generation**: Create config files with inheritance
6. **Post-Install**: Run module installers
7. **Manifest**: Track installed components

### Key Exclusions

- `_module-installer/` directories are never copied to the destination
- `localskip="true"` agents are filtered out
- Source `config.yaml` templates are replaced with generated configs
## Modules

### Core Module (Required)

Foundation framework with C.O.R.E. (Collaboration Optimized Reflection Engine).

- **Components**: Base agents, activation system, advanced elicitation
- **Config**: `user_name`, `communication_language`

### BMM Module

BMad Method for software development workflows.

- **Components**: PM agent, dev tasks, PRD templates, story generation
- **Config**: `project_name`, `tech_docs`, `output_folder`, `story_location`
- **Dependencies**: Core

### CIS Module

Creative Innovation Studio for design workflows.

- **Components**: Design agents, creative tasks
- **Config**: `output_folder`, design preferences
- **Dependencies**: Core
### Module Structure

```
src/modules/{module}/
├── _module-installer/       # Not copied to destination
│   ├── installer.js         # Post-install logic
│   └── install-config.yaml
├── agents/
├── tasks/
├── templates/
└── sub-modules/             # Platform-specific content
    └── {platform}/
        ├── injections.yaml
        └── sub-agents/
```
## Configuration System

### Collection Process

Modules define prompts in `install-config.yaml`:

```yaml
project_name:
  prompt: 'Project title?'
  default: 'My Project'
  result: '{value}'

output_folder:
  prompt: 'Output location?'
  default: 'docs'
  result: '{project-root}/{value}'

tools:
  prompt: 'Select tools:'
  multi-select:
    - 'Tool A'
    - 'Tool B'
```
### Configuration Inheritance

Core values cascade to ALL modules automatically:

```yaml
# core/config.yaml
user_name: "Jane"
communication_language: "English"

# bmm/config.yaml (generated)
project_name: "My App"
tech_docs: "/path/to/docs"
# Core Configuration Values (inherited)
user_name: "Jane"
communication_language: "English"
```

**Reserved Keys**: Core configuration keys cannot be redefined by other modules.
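The inheritance rule can be sketched as a small merge step. This is a hypothetical helper, not the actual installer code; it assumes module values come first, core values are appended, and reserved core keys may not be redefined:

```javascript
// Hypothetical sketch of config generation: module-specific values
// first, inherited core values appended; core keys are reserved.
function generateModuleConfig(coreConfig, moduleConfig) {
  for (const key of Object.keys(moduleConfig)) {
    if (key in coreConfig) {
      throw new Error(`Reserved core key redefined: ${key}`);
    }
  }
  // Spread order keeps module values first, then core values.
  return { ...moduleConfig, ...coreConfig };
}
```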

### Path Placeholders

- `{project-root}`: Project directory path
- `{value}`: User input
- `{module}`: Module name
- `{core:field}`: Reference core config value
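Resolution of these tokens can be sketched as straightforward string substitution (a hypothetical helper; the real installer's substitution logic may differ):

```javascript
// Hypothetical resolver for the placeholder tokens listed above.
function resolvePlaceholders(template, ctx) {
  return template
    .replace(/\{project-root\}/g, ctx.projectRoot)
    .replace(/\{value\}/g, ctx.value)
    .replace(/\{module\}/g, ctx.module)
    // {core:field} looks up a value from the core config.
    .replace(/\{core:([\w-]+)\}/g, (_, field) => ctx.core[field]);
}
```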

### Config Generation Rules

1. ALL installed modules get a `config.yaml` (even without prompts)
2. Core values are ALWAYS included in module configs
3. Module-specific values come first, core values appended
4. Source templates are never copied, only generated configs
## Platform Integration

### Supported Platforms

**Preferred** (Full Integration):

- Claude Code
- Cursor
- Windsurf

**Additional**:
Cline, Roo, Rovo Dev, Auggie, GitHub Copilot, Codex, Gemini, Qwen, Trae, Kilo, Crush, iFlow
### Platform Features

1. **Setup Handler** (`tools/cli/installers/lib/ide/{platform}.js`)
   - Directory creation
   - Configuration generation
   - Agent processing

2. **Content Injection** (`sub-modules/{platform}/injections.yaml`)

   ```yaml
   injections:
     - file: '{bmad_folder}/bmm/agents/pm.md'
       point: 'pm-agent-instructions'
       content: |
         <i>Platform-specific instruction</i>

   subagents:
     source: 'sub-agents'
     target: '.claude/agents'
     files: ['agent.md']
   ```

3. **Interactive Config**
   - Subagent selection
   - Installation scope (project/user)
   - Feature toggles
### Injection System

Platform-specific content is added without modifying the source:

- Injection points are marked in source: `<!-- IDE-INJECT-POINT: name -->`
- Content is added during installation only
- Source files remain clean
## Development Guide

### Creating a Module

1. **Structure**

   ```
   src/modules/mymod/
   ├── _module-installer/
   │   ├── installer.js
   │   └── install-config.yaml
   ├── agents/
   └── tasks/
   ```

2. **Configuration** (`install-config.yaml`)

   ```yaml
   code: mymod
   name: 'My Module'
   prompt: 'Welcome message'

   setting_name:
     prompt: 'Configure X?'
     default: 'value'
   ```

3. **Installer** (`installer.js`)

   ```javascript
   async function install(options) {
     const { projectRoot, config, installedIDEs, logger } = options;
     // Custom logic
     return true;
   }
   module.exports = { install };
   ```
### Adding Platform Support

1. Create handler: `tools/cli/installers/lib/ide/myplatform.js`
2. Extend `BaseIdeSetup` class
3. Add sub-module: `src/modules/{mod}/sub-modules/myplatform/`
4. Define injections and platform agents
### Agent Configuration

Extractable config nodes:

```xml
<agent>
  <setting agentConfig="true">
    Default value
  </setting>
</agent>
```

Generated in: `{bmad_folder}/_cfg/agents/{module}-{agent}.md`
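Extraction of such nodes can be sketched as a scan for the `agentConfig="true"` attribute (a hypothetical helper; the real extractor may parse the agent XML differently):

```javascript
// Hypothetical sketch: collect agentConfig="true" nodes from an agent
// file so their defaults can be written to the per-agent config file.
function extractConfigNodes(agentXml) {
  const re = /<(\w+)\s+agentConfig="true">([\s\S]*?)<\/\1>/g;
  const nodes = [];
  let m;
  while ((m = re.exec(agentXml)) !== null) {
    nodes.push({ tag: m[1], defaultValue: m[2].trim() });
  }
  return nodes;
}
```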

## Troubleshooting

### Common Issues

| Issue                   | Solution                                     |
| ----------------------- | -------------------------------------------- |
| Existing installation   | Use `bmad update` or remove `{bmad_folder}/` |
| Module not found        | Check `src/modules/` exists                  |
| Config not applied      | Verify `{bmad_folder}/{module}/config.yaml`  |
| Missing config.yaml     | Fixed: All modules now get configs           |
| Agent unavailable       | Check for `localskip="true"`                 |
| module-installer copied | Fixed: Now excluded from copy                |
### Debug Commands

```bash
bmad install -v # Verbose installation
bmad status -v # Detailed status
```

### Best Practices

1. Run from project root
2. Backup `{bmad_folder}/_cfg/` before updates
3. Use interactive mode for guidance
4. Review generated configs post-install
## Migration from v4

| v4                  | v6                           |
| ------------------- | ---------------------------- |
| Scattered files     | Centralized `{bmad_folder}/` |
| Monolithic          | Modular                      |
| Manual config       | Interactive setup            |
| Limited IDE support | 15+ platforms                |
| Source modification | Clean injection              |
## Technical Notes

### Dependency Resolution

- Direct dependencies (module → module)
- Agent references (cross-module)
- Template dependencies
- Partial module installation (only required files)
- Workflow vendoring for standalone module operation
## Workflow Vendoring

**Problem**: Modules that reference workflows from other modules create dependencies, forcing users to install multiple modules even when they only need one.

**Solution**: Workflow vendoring allows modules to copy workflows from other modules during installation, making them fully standalone.

### How It Works

Agents can specify both `workflow` (source location) and `workflow-install` (destination location) in their menu items:

```yaml
menu:
  - trigger: create-story
    workflow: '{project-root}/{bmad_folder}/bmm/workflows/4-implementation/create-story/workflow.yaml'
    workflow-install: '{project-root}/{bmad_folder}/bmgd/workflows/4-production/create-story/workflow.yaml'
    description: 'Create a game feature story'
```
**During Installation:**

1. **Vendoring Phase**: Before copying module files, the installer:
   - Scans source agent YAML files for `workflow-install` attributes
   - Copies entire workflow folders from the `workflow` path to the `workflow-install` path
   - Updates vendored `workflow.yaml` files to reference the target module's config

2. **Compilation Phase**: When compiling agents:
   - If `workflow-install` exists, its value is used for the `workflow` attribute
   - `workflow-install` is build-time metadata only and never appears in the final XML
   - The compiled agent references the vendored workflow location

3. **Config Update**: Vendored workflows get their `config_source` updated:

   ```yaml
   # Source workflow (in bmm):
   config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"

   # Vendored workflow (in bmgd):
   config_source: "{project-root}/{bmad_folder}/bmgd/config.yaml"
   ```

**Result**: Modules become completely standalone, with their own copies of the workflows they need, configured for their specific use case.
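The `config_source` rewrite during vendoring can be sketched as a path substitution (a hypothetical helper; it assumes the path layout shown in the before/after example above):

```javascript
// Hypothetical sketch of the vendoring config update: point the
// vendored workflow's config_source at the target module's config.
function rewriteConfigSource(workflowYamlText, sourceModule, targetModule) {
  return workflowYamlText.replace(
    `/${sourceModule}/config.yaml`,
    `/${targetModule}/config.yaml`
  );
}
```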

### Example Use Case: BMGD Module

The BMad Game Development module vendors implementation workflows from BMM:

- Game Dev Scrum Master agent references BMM workflows
- During installation, workflows are copied to `bmgd/workflows/4-production/`
- Vendored workflows use BMGD's config (with game-specific settings)
- BMGD can be installed without a BMM dependency
### Benefits

✅ **Module Independence** - No forced dependencies
✅ **Clean Namespace** - Workflows live in their module
✅ **Config Isolation** - Each module uses its own configuration
✅ **Customization Ready** - Vendored workflows can be modified independently
✅ **No User Confusion** - Avoids partial module installations

### File Processing

- Filters `localskip="true"` agents
- Excludes `_module-installer/` directories
- Replaces path placeholders at runtime
- Injects activation blocks
- Vendors cross-module workflows (see Workflow Vendoring above)
### Web Bundling

```bash
bmad bundle --web # Filter for web deployment
npm run validate:bundles # Validate bundles
```

docs/reference/workflow-map.md
---
title: "Workflow Map"
description: Visual reference for BMad Method workflow phases and outputs
---

The BMad Method (BMM) is a module in the BMad ecosystem focused on the best practices of context engineering and planning. AI agents work best with clear, structured context. The BMM system builds that context progressively across four distinct phases: each phase, and optionally multiple workflows within it, produces documents that inform the next, so agents always know what to build and why.

The rationale and concepts come from agile methodologies that have been used across the industry with great success as a mental framework.

If at any time you are unsure what to do, the `/bmad-help` command will help you stay on track or decide what to do next. You can always refer back to this page, but `/bmad-help` is fully interactive and much quicker once you have installed the BMad Method. Additionally, if you are using modules that extend the BMad Method, or other complementary standalone modules, `/bmad-help` evolves to know everything that is available and give you the best in-the-moment advice.

One final note: every workflow below can be run directly in your tool of choice via slash command, or by loading an agent first and using the corresponding entry in the agent's menu.
<iframe src="/workflow-map-diagram.html" width="100%" height="100%" frameborder="0" style="border-radius: 8px; border: 1px solid #334155; min-height: 900px;"></iframe>

*[Interactive diagram - hover over outputs to see artifact flows]*
## Phase 1: Analysis (Optional)

Explore the problem space and validate ideas before committing to planning.

| Workflow               | Purpose                                                                      | Produces                  |
| ---------------------- | ---------------------------------------------------------------------------- | ------------------------- |
| `brainstorm`           | Brainstorm project ideas with guided facilitation from a brainstorming coach | `brainstorming-report.md` |
| `research`             | Validate market, technical, or domain assumptions                            | Research findings         |
| `create-product-brief` | Capture strategic vision                                                     | `product-brief.md`        |
## Phase 2: Planning

Define what to build and for whom.

| Workflow           | Purpose                                  | Produces     |
| ------------------ | ---------------------------------------- | ------------ |
| `create-prd`       | Define requirements (FRs/NFRs)           | `PRD.md`     |
| `create-ux-design` | Design user experience (when UX matters) | `ux-spec.md` |
## Phase 3: Solutioning

Decide how to build it and break work into stories.

| Workflow                         | Purpose                                    | Produces                    |
| -------------------------------- | ------------------------------------------ | --------------------------- |
| `create-architecture`            | Make technical decisions explicit          | `architecture.md` with ADRs |
| `create-epics-and-stories`       | Break requirements into implementable work | Epic files with stories     |
| `check-implementation-readiness` | Gate check before implementation           | PASS/CONCERNS/FAIL decision |
## Phase 4: Implementation

Build it, one story at a time.

| Workflow          | Purpose                                | Produces                      |
| ----------------- | -------------------------------------- | ----------------------------- |
| `sprint-planning` | Initialize tracking (once per project) | `sprint-status.yaml`          |
| `create-story`    | Prepare next story for implementation  | `story-[slug].md`             |
| `dev-story`       | Implement the story                    | Working code + tests          |
| `code-review`     | Validate implementation quality        | Approved or changes requested |
| `correct-course`  | Handle significant mid-sprint changes  | Updated plan or re-routing    |
| `retrospective`   | Review after epic completion           | Lessons learned               |
## Quick Flow (Parallel Track)

Skip phases 1-3 for small, well-understood work.

| Workflow     | Purpose                                    | Produces                                      |
| ------------ | ------------------------------------------ | --------------------------------------------- |
| `quick-spec` | Define an ad-hoc change                    | `tech-spec.md` (story file for small changes) |
| `quick-dev`  | Implement from spec or direct instructions | Working code + tests                          |
## Context Management

Each document becomes context for the next phase. The PRD tells the architect what constraints matter. The architecture tells the dev agent which patterns to follow. Story files give focused, complete context for implementation. Without this structure, agents make inconsistent decisions.

For brownfield projects, `document-project` creates or updates `project-context.md`: a record of what exists in the codebase and the rules all implementation workflows must observe. Run it just before Phase 4, and again when something significant changes (structure, architecture, or those rules). You can also edit `project-context.md` by hand.

All implementation workflows load `project-context.md` if it exists. Additional context per workflow:

| Workflow       | Also Loads                   |
| -------------- | ---------------------------- |
| `create-story` | epics, PRD, architecture, UX |
| `dev-story`    | story file                   |
| `code-review`  | architecture, story file     |
| `quick-spec`   | planning docs (if exist)     |
| `quick-dev`    | tech-spec                    |

docs/tea/explanation/engagement-models.md
---
title: "TEA Engagement Models Explained"
description: Understanding the five ways to use TEA - from standalone to full BMad Method integration
---

# TEA Engagement Models Explained

TEA is optional and flexible. There are five valid ways to engage with TEA - choose intentionally based on your project needs and methodology.
## Overview

**TEA is not mandatory.** Pick the engagement model that fits your context:

1. **No TEA** - Skip all TEA workflows, use existing testing approach
2. **TEA Solo** - Use TEA standalone without BMad Method
3. **TEA Lite** - Beginner approach using just `automate`
4. **TEA Integrated (Greenfield)** - Full BMad Method integration from scratch
5. **TEA Integrated (Brownfield)** - Full BMad Method integration with existing code
## The Problem

### One-Size-Fits-All Doesn't Work

**Traditional testing tools force one approach:**

- Must use entire framework
- All-or-nothing adoption
- No flexibility for different project types
- Teams abandon tool if it doesn't fit

**TEA recognizes:**

- Different projects have different needs
- Different teams have different maturity levels
- Different contexts require different approaches
- Flexibility increases adoption
## The Five Engagement Models

### Model 1: No TEA

**What:** Skip all TEA workflows, use your existing testing approach.

**When to Use:**
- Team has established testing practices
- Quality is already high
- Testing tools already in place
- TEA doesn't add value

**What You Miss:**
- Risk-based test planning
- Systematic quality review
- Gate decisions with evidence
- Knowledge base patterns

**What You Keep:**
- Full control
- Existing tools
- Team expertise
- No learning curve

**Example:**
```
Your team:
- 10-year veteran QA team
- Established testing practices
- High-quality test suite
- No problems to solve

Decision: Skip TEA, keep what works
```

**Verdict:** Valid choice if existing approach works.

---

### Model 2: TEA Solo

**What:** Use TEA workflows standalone without full BMad Method integration.

**When to Use:**
- Non-BMad projects
- Want TEA's quality operating model only
- Don't need full planning workflow
- Bring your own requirements

**Typical Sequence:**
```
1. `test-design` (system or epic)
2. `atdd` or `automate`
3. `test-review` (optional)
4. `trace` (coverage + gate decision)
```

**You Bring:**
- Requirements (user stories, acceptance criteria)
- Development environment
- Project context

**TEA Provides:**
- Risk-based test planning (`test-design`)
- Test generation (`atdd`, `automate`)
- Quality review (`test-review`)
- Coverage traceability (`trace`)

**Optional:**
- Framework setup (`framework`) if needed
- CI configuration (`ci`) if needed

**Example:**
```
Your project:
- Using Scrum (not BMad Method)
- Jira for story management
- Need better test strategy

Workflow:
1. Export stories from Jira
2. Run `test-design` on epic
3. Run `atdd` for each story
4. Implement features
5. Run `trace` for coverage
```

**Verdict:** Best for teams wanting TEA benefits without BMad Method commitment.
---
|
||||
|
||||
### Model 3: TEA Lite

**What:** Beginner approach using just `automate` to test existing features.

**When to Use:**
- Learning TEA fundamentals
- Want quick results
- Testing existing application
- No time for full methodology

**Workflow:**
```
1. `framework` (set up test infrastructure)
2. `test-design` (optional, risk assessment)
3. `automate` (generate tests for existing features)
4. Run tests (they pass immediately)
```

**Example:**
```
Beginner developer:
- Never used TEA before
- Want to add tests to existing app
- 30 minutes available

Steps:
1. Run `framework`
2. Run `automate` on TodoMVC demo
3. Tests generated and passing
4. Learn TEA basics
```

**What You Get:**
- Working test framework
- Passing tests for existing features
- Learning experience
- Foundation to expand

**What You Miss:**
- TDD workflow (ATDD)
- Risk-based planning (test-design depth)
- Quality gates (trace Phase 2)
- Full TEA capabilities

**Verdict:** Perfect entry point for beginners.

---
### Model 4: TEA Integrated (Greenfield)

**What:** Full BMad Method integration with TEA workflows across all phases.

**When to Use:**
- New projects starting from scratch
- Using BMad Method or Enterprise track
- Want complete quality operating model
- Testing is critical to success

**Lifecycle:**

**Phase 2: Planning**
- PM creates PRD with NFRs
- (Optional) TEA runs `nfr-assess` (Enterprise only)

**Phase 3: Solutioning**
- Architect creates architecture
- TEA runs `test-design` (system-level) → testability review
- TEA runs `framework` → test infrastructure
- TEA runs `ci` → CI/CD pipeline
- Architect runs `implementation-readiness` (fed by test design)

**Phase 4: Implementation (Per Epic)**
- SM runs `sprint-planning`
- TEA runs `test-design` (epic-level) → risk assessment for THIS epic
- SM creates stories
- (Optional) TEA runs `atdd` → failing tests before dev
- DEV implements story
- TEA runs `automate` → expand coverage
- (Optional) TEA runs `test-review` → quality audit
- TEA runs `trace` Phase 1 → refresh coverage

**Release Gate:**
- (Optional) TEA runs `test-review` → final audit
- (Optional) TEA runs `nfr-assess` → validate NFRs
- TEA runs `trace` Phase 2 → gate decision (PASS/CONCERNS/FAIL/WAIVED)

**What You Get:**
- Complete quality operating model
- Systematic test planning
- Risk-based prioritization
- Evidence-based gate decisions
- Consistent patterns across epics

**Example:**
```
New SaaS product:
- 50 stories across 8 epics
- Security critical
- Need quality gates

Workflow:
- Phase 2: Define NFRs in PRD
- Phase 3: Architecture → test design → framework → CI
- Phase 4: Per epic: test design → ATDD → dev → automate → review → trace
- Gate: NFR assess → trace Phase 2 → decision
```

**Verdict:** Most comprehensive TEA usage, best for structured teams.

---
### Model 5: TEA Integrated (Brownfield)

**What:** Full BMad Method integration with TEA for existing codebases.

**When to Use:**
- Existing codebase with legacy tests
- Want to improve test quality incrementally
- Adding features to existing application
- Need to establish coverage baseline

**Differences from Greenfield:**

**Phase 0: Documentation (if needed)**
```
- Run `document-project`
- Create baseline documentation
```

**Phase 2: Planning**
```
- TEA runs `trace` Phase 1 → establish coverage baseline
- PM creates PRD (with existing system context)
```

**Phase 3: Solutioning**
```
- Architect creates architecture (with brownfield constraints)
- TEA runs `test-design` (system-level) → testability review
- TEA runs `framework` (only if modernizing test infra)
- TEA runs `ci` (update existing CI or create new)
```

**Phase 4: Implementation**
```
- TEA runs `test-design` (epic-level) → focus on REGRESSION HOTSPOTS
- Per story: ATDD → dev → automate
- TEA runs `test-review` → improve legacy test quality
- TEA runs `trace` Phase 1 → track coverage improvement
```

**Brownfield-Specific:**
- Baseline coverage BEFORE planning
- Focus on regression hotspots (bug-prone areas)
- Incremental quality improvement
- Compare coverage to baseline (trending up?)

**Example:**
```
Legacy e-commerce platform:
- 200 existing tests (30% passing, 70% flaky)
- Adding new checkout flow
- Want to improve quality

Workflow:
1. Phase 2: `trace` baseline → 30% coverage
2. Phase 3: `test-design` → identify regression risks
3. Phase 4: Fix top 20 flaky tests + add tests for new checkout
4. Gate: `trace` → 60% coverage (2x improvement)
```

**Verdict:** Best for incrementally improving legacy systems.

---
## Decision Guide: Which Model?

### Quick Decision Tree

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    Start([Choose TEA Model]) --> BMad{Using<br/>BMad Method?}

    BMad -->|No| NonBMad{Project Type?}
    NonBMad -->|Learning| Lite[TEA Lite<br/>Just automate<br/>30 min tutorial]
    NonBMad -->|Serious Project| Solo[TEA Solo<br/>Standalone workflows<br/>Full capabilities]

    BMad -->|Yes| WantTEA{Want TEA?}
    WantTEA -->|No| None[No TEA<br/>Use existing approach<br/>Valid choice]
    WantTEA -->|Yes| ProjectType{New or<br/>Existing?}

    ProjectType -->|New Project| Green[TEA Integrated<br/>Greenfield<br/>Full lifecycle]
    ProjectType -->|Existing Code| Brown[TEA Integrated<br/>Brownfield<br/>Baseline + improve]

    Green --> Compliance{Compliance<br/>Needs?}
    Compliance -->|Yes| Enterprise[Enterprise Track<br/>NFR + audit trails]
    Compliance -->|No| Method[BMad Method Track<br/>Standard quality]

    style Lite fill:#bbdefb,stroke:#1565c0,stroke-width:2px
    style Solo fill:#c5cae9,stroke:#283593,stroke-width:2px
    style None fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style Green fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style Brown fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style Enterprise fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style Method fill:#e1f5fe,stroke:#01579b,stroke-width:2px
```

**Decision Path Examples:**
- Learning TEA → TEA Lite (blue)
- Non-BMad project → TEA Solo (purple)
- BMad + new project + compliance → Enterprise (purple)
- BMad + existing code → Brownfield (yellow)
- Don't want TEA → No TEA (gray)
### By Project Type

| Project Type | Recommended Model | Why |
|--------------|------------------|-----|
| **New SaaS product** | TEA Integrated (Greenfield) | Full quality operating model from day one |
| **Existing app + new feature** | TEA Integrated (Brownfield) | Improve incrementally while adding features |
| **Bug fix** | TEA Lite or No TEA | Quick flow, minimal overhead |
| **Learning project** | TEA Lite | Learn basics with immediate results |
| **Non-BMad enterprise** | TEA Solo | Quality model without full methodology |
| **High-quality existing tests** | No TEA | Keep what works |

### By Team Maturity

| Team Maturity | Recommended Model | Why |
|---------------|------------------|-----|
| **Beginners** | TEA Lite → TEA Solo | Learn basics, then expand |
| **Intermediate** | TEA Solo or Integrated | Depends on methodology |
| **Advanced** | TEA Integrated or No TEA | Full model or existing expertise |

### By Compliance Needs

| Compliance | Recommended Model | Why |
|------------|------------------|-----|
| **None** | Any model | Choose based on project needs |
| **Light** (internal audit) | TEA Solo or Integrated | Gate decisions helpful |
| **Heavy** (SOC 2, HIPAA) | TEA Integrated (Enterprise) | NFR assessment mandatory |
## Switching Between Models

### Can Change Models Mid-Project

**Scenario:** Start with TEA Lite, expand to TEA Solo

```
Week 1: TEA Lite
- Run `framework`
- Run `automate`
- Learn basics

Week 2: Expand to TEA Solo
- Add `test-design`
- Use `atdd` for new features
- Add `test-review`

Week 3: Continue expanding
- Add `trace` for coverage
- Set up `ci`
- Full TEA Solo workflow
```

**Benefit:** Start small, expand as you get comfortable.

### Can Mix Models

**Scenario:** TEA Integrated for main features, No TEA for bug fixes

```
Main features (epics):
- Use full TEA workflow
- Risk assessment, ATDD, quality gates

Bug fixes:
- Skip TEA
- Quick Flow + manual testing
- Move fast

Result: TEA where it adds value, skip where it doesn't
```

**Benefit:** Flexible, pragmatic, not dogmatic.
## Comparison Table

| Aspect | No TEA | TEA Lite | TEA Solo | Integrated (Green) | Integrated (Brown) |
|--------|--------|----------|----------|-------------------|-------------------|
| **BMad Required** | No | No | No | Yes | Yes |
| **Learning Curve** | None | Low | Medium | High | High |
| **Setup Time** | 0 | 30 min | 2 hours | 1 day | 2 days |
| **Workflows Used** | 0 | 2-3 | 4-6 | 8 | 8 |
| **Test Planning** | Manual | Optional | Yes | Systematic | + Regression focus |
| **Quality Gates** | No | No | Optional | Yes | Yes + baseline |
| **NFR Assessment** | No | No | No | Optional | Recommended |
| **Coverage Tracking** | Manual | No | Optional | Yes | Yes + trending |
| **Best For** | Experts | Beginners | Standalone | New projects | Legacy code |
## Real-World Examples

### Example 1: Startup (TEA Lite → TEA Integrated)

**Month 1:** TEA Lite
```
Team: 3 developers, no QA
Testing: Manual only
Decision: Start with TEA Lite

Result:
- Run `framework` (Playwright setup)
- Run `automate` (20 tests generated)
- Learning TEA basics
```

**Month 3:** TEA Solo
```
Team: Growing to 5 developers
Testing: Automated tests exist
Decision: Expand to TEA Solo

Result:
- Add `test-design` (risk assessment)
- Add `atdd` (TDD workflow)
- Add `test-review` (quality audits)
```

**Month 6:** TEA Integrated
```
Team: 8 developers, 1 QA
Testing: Critical to business
Decision: Full BMad Method + TEA Integrated

Result:
- Full lifecycle integration
- Quality gates before releases
- NFR assessment for enterprise customers
```

### Example 2: Enterprise (TEA Integrated - Brownfield)

**Project:** Legacy banking application

**Challenge:**
- 500 existing tests (50% flaky)
- Adding new features
- SOC 2 compliance required

**Model:** TEA Integrated (Brownfield)

**Phase 2:**
```
- `trace` baseline → 45% coverage (lots of gaps)
- Document current state
```

**Phase 3:**
```
- `test-design` (system) → identify regression hotspots
- `framework` → modernize test infrastructure
- `ci` → add selective testing
```

**Phase 4:**
```
Per epic:
- `test-design` → focus on regression + new features
- Fix top 10 flaky tests
- `atdd` for new features
- `automate` for coverage expansion
- `test-review` → track quality improvement
- `trace` → compare to baseline
```

**Result after 6 months:**
- Coverage: 45% → 85%
- Quality score: 52 → 82
- Flakiness: 50% → 2%
- SOC 2 compliant (traceability + NFR evidence)
### Example 3: Consultancy (TEA Solo)

**Context:** Testing consultancy working with multiple clients

**Challenge:**
- Different clients use different methodologies
- Need consistent testing approach
- Not always using BMad Method

**Model:** TEA Solo (bring to any client project)

**Workflow:**
```
Client project 1 (Scrum):
- Import Jira stories
- Run `test-design`
- Generate tests with `atdd`/`automate`
- Deliver quality report with `test-review`

Client project 2 (Kanban):
- Import requirements from Notion
- Same TEA workflow
- Consistent quality across clients

Client project 3 (Ad-hoc):
- Document requirements manually
- Same TEA workflow
- Same patterns, different context
```

**Benefit:** Consistent testing approach regardless of client methodology.
## Choosing Your Model

### Start Here Questions

**Question 1:** Are you using BMad Method?
- **No** → TEA Solo or TEA Lite or No TEA
- **Yes** → TEA Integrated or No TEA

**Question 2:** Is this a new project?
- **Yes** → TEA Integrated (Greenfield) or TEA Lite
- **No** → TEA Integrated (Brownfield) or TEA Solo

**Question 3:** What's your testing maturity?
- **Beginner** → TEA Lite
- **Intermediate** → TEA Solo or Integrated
- **Advanced** → TEA Integrated or No TEA (already expert)

**Question 4:** Do you need compliance/quality gates?
- **Yes** → TEA Integrated (Enterprise)
- **No** → Any model

**Question 5:** How much time can you invest?
- **30 minutes** → TEA Lite
- **Few hours** → TEA Solo
- **Multiple days** → TEA Integrated

### Recommendation Matrix

| Your Context | Recommended Model | Alternative |
|--------------|------------------|-------------|
| BMad Method + new project | TEA Integrated (Greenfield) | TEA Lite (learning) |
| BMad Method + existing code | TEA Integrated (Brownfield) | TEA Solo |
| Non-BMad + need quality | TEA Solo | TEA Lite |
| Just learning testing | TEA Lite | No TEA (learn basics first) |
| Enterprise + compliance | TEA Integrated (Enterprise) | TEA Solo |
| Established QA team | No TEA | TEA Solo (supplement) |
## Transitioning Between Models

### TEA Lite → TEA Solo

**When:** Outgrow beginner approach, need more workflows.

**Steps:**
1. Continue using `framework` and `automate`
2. Add `test-design` for planning
3. Add `atdd` for TDD workflow
4. Add `test-review` for quality audits
5. Add `trace` for coverage tracking

**Timeline:** 2-4 weeks of gradual expansion

### TEA Solo → TEA Integrated

**When:** Adopt BMad Method, want full integration.

**Steps:**
1. Install BMad Method (see installation guide)
2. Run planning workflows (PRD, architecture)
3. Integrate TEA into Phase 3 (system-level test design)
4. Follow integrated lifecycle (per epic workflows)
5. Add release gates (trace Phase 2)

**Timeline:** 1-2 sprints of transition

### TEA Integrated → TEA Solo

**When:** Moving away from BMad Method, keep TEA.

**Steps:**
1. Export BMad artifacts (PRD, architecture, stories)
2. Continue using TEA workflows standalone
3. Skip BMad-specific integration
4. Bring your own requirements to TEA

**Timeline:** Immediate (just skip BMad workflows)
## Common Patterns

### Pattern 1: TEA Lite for Learning, Then Choose

```
Phase 1 (Week 1-2): TEA Lite
- Learn with `automate` on demo app
- Understand TEA fundamentals
- Low commitment

Phase 2 (Week 3-4): Evaluate
- Try `test-design` (planning)
- Try `atdd` (TDD)
- See if value justifies investment

Phase 3 (Month 2+): Decide
- Valuable → Expand to TEA Solo or Integrated
- Not valuable → Stay with TEA Lite or No TEA
```

### Pattern 2: TEA Solo for Quality, Skip Full Method

```
Team decision:
- Don't want full BMad Method (too heavyweight)
- Want systematic testing (TEA benefits)

Approach: TEA Solo only
- Use existing project management (Jira, Linear)
- Use TEA for testing only
- Get quality without methodology commitment
```

### Pattern 3: Integrated for Critical, Lite for Non-Critical

```
Critical features (payment, auth):
- Full TEA Integrated workflow
- Risk assessment, ATDD, quality gates
- High confidence required

Non-critical features (UI tweaks):
- TEA Lite or No TEA
- Quick tests, minimal overhead
- Move fast
```
## Technical Implementation

Each model uses different TEA workflows. See:
- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Model details
- [TEA Command Reference](/docs/tea/reference/commands.md) - Workflow reference
- [TEA Configuration](/docs/tea/reference/configuration.md) - Setup options

## Related Concepts

**Core TEA Concepts:**
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Risk assessment in different models
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Quality across all models
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - Consistent patterns across models

**Technical Patterns:**
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Infrastructure in different models
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Reliability in all models

**Overview:**
- [TEA Overview](/docs/tea/explanation/tea-overview.md) - 5 engagement models with cheat sheets
- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - Design philosophy

## Practical Guides

**Getting Started:**
- [TEA Lite Quickstart Tutorial](/docs/tea/tutorials/tea-lite-quickstart.md) - Model 3: TEA Lite

**Use-Case Guides:**
- [Using TEA with Existing Tests](/docs/tea/how-to/brownfield/use-tea-with-existing-tests.md) - Model 5: Brownfield
- [Running TEA for Enterprise](/docs/tea/how-to/brownfield/use-tea-for-enterprise.md) - Enterprise integration

**All Workflow Guides:**
- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Used in TEA Solo and Integrated
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md)
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md)
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md)
- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md)

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - All workflows explained
- [TEA Configuration](/docs/tea/reference/configuration.md) - Config per model
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - TEA Lite, TEA Solo, TEA Integrated terms

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
457
docs/tea/explanation/fixture-architecture.md
Normal file
@@ -0,0 +1,457 @@
---
title: "Fixture Architecture Explained"
description: Understanding TEA's pure function → fixture → composition pattern for reusable test utilities
---

# Fixture Architecture Explained

Fixture architecture is TEA's pattern for building reusable, testable, and composable test utilities. The core principle: build pure functions first, wrap in framework fixtures second.

## Overview

**The Pattern:**
1. Write utility as pure function (unit-testable)
2. Wrap in framework fixture (Playwright, Cypress)
3. Compose fixtures with `mergeTests` (combine capabilities)
4. Package for reuse across projects

**Why this order?**
- Pure functions are easier to test
- Fixtures depend on framework (less portable)
- Composition happens at fixture level
- Reusability maximized

### Fixture Architecture Flow

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    Start([Testing Need]) --> Pure[Step 1: Pure Function<br/>helpers/api-request.ts]
    Pure -->|Unit testable<br/>Framework agnostic| Fixture[Step 2: Fixture Wrapper<br/>fixtures/api-request.ts]
    Fixture -->|Injects framework<br/>dependencies| Compose[Step 3: Composition<br/>fixtures/index.ts]
    Compose -->|mergeTests| Use[Step 4: Use in Tests<br/>tests/**.spec.ts]

    Pure -.->|Can test in isolation| UnitTest[Unit Tests<br/>No framework needed]
    Fixture -.->|Reusable pattern| Other[Other Projects<br/>Package export]
    Compose -.->|Combine utilities| Multi[Multiple Fixtures<br/>One test]

    style Pure fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Fixture fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style Compose fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style Use fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style UnitTest fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
    style Other fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
    style Multi fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px
```

**Benefits at Each Step:**
1. **Pure Function:** Testable, portable, reusable
2. **Fixture:** Framework integration, clean API
3. **Composition:** Combine capabilities, flexible
4. **Usage:** Simple imports, type-safe
## The Problem

### Framework-First Approach (Common Anti-Pattern)

```typescript
// ❌ Bad: Built as fixture from the start
export const test = base.extend({
  apiRequest: async ({ request }, use) => {
    await use(async (options) => {
      const response = await request.fetch(options.url, {
        method: options.method,
        data: options.data
      });

      if (!response.ok()) {
        throw new Error(`API request failed: ${response.status()}`);
      }

      return response.json();
    });
  }
});
```

**Problems:**
- Cannot unit test (requires Playwright context)
- Tied to framework (not reusable in other tools)
- Hard to compose with other fixtures
- Difficult to mock for testing the utility itself

### Copy-Paste Utilities

```typescript
// test-1.spec.ts
test('test 1', async ({ request }) => {
  const response = await request.post('/api/users', { data: {...} });
  const body = await response.json();
  if (!response.ok()) throw new Error('Failed');
  // ... repeated in every test
});

// test-2.spec.ts
test('test 2', async ({ request }) => {
  const response = await request.post('/api/users', { data: {...} });
  const body = await response.json();
  if (!response.ok()) throw new Error('Failed');
  // ... same code repeated
});
```

**Problems:**
- Code duplication (violates DRY)
- Inconsistent error handling
- Hard to update (change 50 tests)
- No shared behavior
## The Solution: Three-Step Pattern

### Step 1: Pure Function

```typescript
// helpers/api-request.ts

/**
 * Make API request with automatic error handling
 * Pure function - no framework dependencies
 */
export async function apiRequest({
  request, // Passed in (dependency injection)
  method,
  url,
  data,
  headers = {}
}: ApiRequestParams): Promise<ApiResponse> {
  const response = await request.fetch(url, {
    method,
    data,
    headers
  });

  if (!response.ok()) {
    throw new Error(`API request failed: ${response.status()}`);
  }

  return {
    status: response.status(),
    body: await response.json()
  };
}

// ✅ Can unit test this function!
describe('apiRequest', () => {
  it('should throw on non-OK response', async () => {
    const mockRequest = {
      fetch: vi.fn().mockResolvedValue({ ok: () => false, status: () => 500 })
    };

    await expect(apiRequest({
      request: mockRequest,
      method: 'GET',
      url: '/api/test'
    })).rejects.toThrow('API request failed: 500');
  });
});
```

**Benefits:**
- Unit testable (mock dependencies)
- Framework-agnostic (works with any HTTP client)
- Easy to reason about (pure function)
- Portable (can use in Node scripts, CLI tools)

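To make the portability claim concrete, here is a minimal, self-contained sketch of the same dependency-injection idea running in plain Node with a stubbed client. The `RequestLike` type and the stub object are invented for this illustration; the real helper uses the project's `ApiRequestParams`/`ApiResponse` types and Playwright's `request`:

```typescript
// A portability sketch (RequestLike and the stub are hypothetical, not the
// real Playwright types): the pure function runs anywhere because the HTTP
// client is injected rather than imported.

type RequestLike = {
  fetch(
    url: string,
    init: { method: string; data?: unknown; headers?: Record<string, string> }
  ): Promise<{ ok(): boolean; status(): number; json(): Promise<unknown> }>;
};

async function apiRequest(params: {
  request: RequestLike; // dependency injection — any fetch-like object works
  method: string;
  url: string;
  data?: unknown;
  headers?: Record<string, string>;
}): Promise<{ status: number; body: unknown }> {
  const { request, method, url, data, headers = {} } = params;
  const response = await request.fetch(url, { method, data, headers });
  if (!response.ok()) {
    throw new Error(`API request failed: ${response.status()}`);
  }
  return { status: response.status(), body: await response.json() };
}

// Inject a stub "request" — no Playwright, no browser, just plain Node.
const stub: RequestLike = {
  fetch: async () => ({
    ok: () => true,
    status: () => 200,
    json: async () => ({ name: 'New Name' }),
  }),
};

const result = apiRequest({ request: stub, method: 'GET', url: '/api/profile' });
result.then(({ status }) => console.log('status:', status)); // status: 200
```

In the fixture wrapper (Step 2), the injected object becomes Playwright's real `request`; the function body does not change.
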
### Step 2: Fixture Wrapper

```typescript
// fixtures/api-request.ts
import { test as base } from '@playwright/test';
import { apiRequest as apiRequestFn } from '../helpers/api-request';

/**
 * Playwright fixture wrapping the pure function
 */
export const test = base.extend<{ apiRequest: typeof apiRequestFn }>({
  apiRequest: async ({ request }, use) => {
    // Inject framework dependency (request)
    await use((params) => apiRequestFn({ request, ...params }));
  }
});

export { expect } from '@playwright/test';
```

**Benefits:**
- Fixture provides framework context (request)
- Pure function handles logic
- Clean separation of concerns
- Can swap frameworks (Cypress, etc.) by changing wrapper only
### Step 3: Composition with mergeTests

```typescript
// fixtures/index.ts
import { mergeTests } from '@playwright/test';
import { test as apiRequestTest } from './api-request';
import { test as authSessionTest } from './auth-session';
import { test as logTest } from './log';

/**
 * Compose all fixtures into one test
 */
export const test = mergeTests(
  apiRequestTest,
  authSessionTest,
  logTest
);

export { expect } from '@playwright/test';
```

**Usage:**
```typescript
// tests/profile.spec.ts
import { test, expect } from '../support/fixtures';

test('should update profile', async ({ apiRequest, authToken, log }) => {
  log.info('Starting profile update test');

  // Use API request fixture (matches pure function signature)
  const { status, body } = await apiRequest({
    method: 'PATCH',
    url: '/api/profile',
    data: { name: 'New Name' },
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200);
  expect(body.name).toBe('New Name');

  log.info('Profile updated successfully');
});
```

**Note:** This example uses the vanilla pure function signature (`url`, `data`). Playwright Utils uses different parameter names (`path`, `body`). See [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) for the utilities API.

**Note:** `authToken` requires auth-session fixture setup with provider configuration. See [auth-session documentation](https://seontechnologies.github.io/playwright-utils/auth-session.html).

**Benefits:**
- Use multiple fixtures in one test
- No manual composition needed
- Type-safe (TypeScript knows all fixture types)
- Clean imports
## How It Works in TEA

### TEA Generates This Pattern

When you run `framework` with `tea_use_playwright_utils: true`:

**TEA scaffolds:**
```
tests/
├── support/
│   ├── helpers/          # Pure functions
│   │   ├── api-request.ts
│   │   └── auth-session.ts
│   └── fixtures/         # Framework wrappers
│       ├── api-request.ts
│       ├── auth-session.ts
│       └── index.ts      # Composition
└── e2e/
    └── example.spec.ts   # Uses composed fixtures
```

### TEA Reviews Against This Pattern

When you run `test-review`:

**TEA checks:**
- Are utilities pure functions? ✓
- Are fixtures minimal wrappers? ✓
- Is composition used? ✓
- Can utilities be unit tested? ✓
## Package Export Pattern
|
||||
|
||||
### Make Fixtures Reusable Across Projects
|
||||
|
||||
**Option 1: Build Your Own (Vanilla)**
|
||||
```json
|
||||
// package.json
|
||||
{
|
||||
"name": "@company/test-utils",
|
||||
"exports": {
|
||||
"./api-request": "./fixtures/api-request.ts",
|
||||
"./auth-session": "./fixtures/auth-session.ts",
|
||||
"./log": "./fixtures/log.ts"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Usage:**
|
||||
```typescript
|
||||
import { test as apiTest } from '@company/test-utils/api-request';
|
||||
import { test as authTest } from '@company/test-utils/auth-session';
|
||||
import { mergeTests } from '@playwright/test';
|
||||
|
||||
export const test = mergeTests(apiTest, authTest);
|
||||
```
|
||||
|
||||
**Option 2: Use Playwright Utils (Recommended)**
|
||||
```bash
|
||||
npm install -D @seontechnologies/playwright-utils
|
||||
```
|
||||
|
||||
**Usage:**
|
||||
```typescript
|
||||
import { test as base } from '@playwright/test';
|
||||
import { mergeTests } from '@playwright/test';
|
||||
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
|
||||
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';
|
||||
|
||||
const authFixtureTest = base.extend(createAuthFixtures());
|
||||
export const test = mergeTests(apiRequestFixture, authFixtureTest);
|
||||
// Production-ready utilities, battle-tested!
|
||||
```
|
||||
|
||||
**Note:** Auth-session requires provider configuration. See [auth-session setup guide](https://seontechnologies.github.io/playwright-utils/auth-session.html).
|
||||
|
||||
**Why Playwright Utils:**
|
||||
- Already built, tested, and maintained
|
||||
- Consistent patterns across projects
|
||||
- 11 utilities available (API, auth, network, logging, files)
|
||||
- Community support and documentation
|
||||
- Regular updates and improvements
|
||||
|
||||
**When to Build Your Own:**
|
||||
- Company-specific patterns
|
||||
- Custom authentication systems
|
||||
- Unique requirements not covered by utilities

## Comparison: Good vs Bad Patterns

### Anti-Pattern: God Fixture

```typescript
// ❌ Bad: Everything in one fixture
export const test = base.extend({
  testUtils: async ({ page, request, context }, use) => {
    await use({
      // 50 different methods crammed into one fixture
      apiRequest: async (...args) => { /* ... */ },
      login: async (...args) => { /* ... */ },
      createUser: async (...args) => { /* ... */ },
      deleteUser: async (...args) => { /* ... */ },
      uploadFile: async (...args) => { /* ... */ },
      // ... 45 more methods
    });
  },
});
```

**Problems:**

- Cannot test individual utilities
- Cannot compose (all-or-nothing)
- Cannot reuse specific utilities
- Hard to maintain (1000+ line file)

### Good Pattern: Single-Concern Fixtures

```typescript
// ✅ Good: One concern per fixture

// api-request.ts
export const test = base.extend({ apiRequest });

// auth-session.ts
export const test = base.extend({ authSession });

// log.ts
export const test = base.extend({ log });

// Compose as needed
import { mergeTests } from '@playwright/test';
export const test = mergeTests(apiRequestTest, authSessionTest, logTest);
```

**Benefits:**

- Each fixture is unit-testable
- Compose only what you need
- Reuse individual fixtures
- Easy to maintain (small files)
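Conceptually, composition merges the fixture records that each single-concern `test` object carries. A toy sketch of that idea (this is an illustration, not Playwright's `mergeTests` implementation, and its conflict rule here, last map wins, is an assumption of the sketch):

```typescript
// Toy model: a "fixture map" is a record of named setup functions.
type FixtureMap = Record<string, () => unknown>;

// Merge single-concern fixture maps into one composed map.
// Later maps override earlier ones on name conflicts.
function mergeFixtureMaps(...maps: FixtureMap[]): FixtureMap {
  const merged: FixtureMap = {};
  for (const map of maps) {
    for (const [name, setup] of Object.entries(map)) {
      merged[name] = setup;
    }
  }
  return merged;
}
```

The payoff of single-concern fixtures is visible here: each input map can be built and tested alone, and a spec file composes only the maps it needs.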

## Technical Implementation

For detailed fixture architecture patterns, see the knowledge base:

- [Knowledge Base Index - Architecture & Fixtures](/docs/tea/reference/knowledge-base.md)

## When to Use This Pattern

### Always Use For:

**Reusable utilities:**

- API request helpers
- Authentication handlers
- File operations
- Network mocking

**Test infrastructure:**

- Shared fixtures across teams
- Packaged utilities (playwright-utils)
- Company-wide test standards

### Consider Skipping For:

**One-off test setup:**

```typescript
// Simple one-time setup - inline is fine
test.beforeEach(async ({ page }) => {
  await page.goto('/');
  await page.click('#accept-cookies');
});
```

**Test-specific helpers:**

```typescript
// Used in one test file only - keep local
function createTestUser(name: string) {
  return { name, email: `${name}@test.com` };
}
```

## Related Concepts

**Core TEA Concepts:**

- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Quality standards fixtures enforce
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - Fixture patterns in knowledge base

**Technical Patterns:**

- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Network fixtures explained
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Fixture complexity matches risk

**Overview:**

- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Fixture architecture in workflows
- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - Why fixtures matter

## Practical Guides

**Setup Guides:**

- [How to Set Up Test Framework](/docs/tea/how-to/workflows/setup-test-framework.md) - TEA scaffolds fixtures
- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - Production-ready fixtures

**Workflow Guides:**

- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Using fixtures in tests
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Fixture composition examples

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - `framework` command
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Fixture architecture fragments
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - Fixture architecture term

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

docs/tea/explanation/knowledge-base-system.md (new file, 554 lines)

---
title: "Knowledge Base System Explained"
description: Understanding how TEA uses tea-index.csv for context engineering and consistent test quality
---

# Knowledge Base System Explained

TEA's knowledge base system is how context engineering works - automatically loading domain-specific standards into AI context so tests are consistently high-quality regardless of prompt variation.

## Overview

**The Problem:** AI without context produces inconsistent results.

**Traditional approach:**

```
User: "Write tests for login"
AI: [Generates tests with random quality]
- Sometimes uses hard waits
- Sometimes uses good patterns
- Inconsistent across sessions
- Quality depends on prompt
```

**TEA with knowledge base:**

```
User: "Write tests for login"
TEA: [Loads test-quality.md, network-first.md, auth-session.md]
TEA: [Generates tests following established patterns]
- Always uses network-first patterns
- Always uses proper fixtures
- Consistent across all sessions
- Quality independent of prompt
```

**Result:** Systematic quality, not random chance.

## The Problem

### Prompt-Driven Testing = Inconsistency

**Session 1:**

```
User: "Write tests for profile editing"

AI: [No context loaded]
// Generates test with hard waits
await page.waitForTimeout(3000);
```

**Session 2:**

```
User: "Write comprehensive tests for profile editing with best practices"

AI: [Still no systematic context]
// Generates test with some improvements, but still issues
await page.waitForSelector('.success', { timeout: 10000 });
```

**Session 3:**

```
User: "Write tests using network-first patterns and proper fixtures"

AI: [Better prompt, but still reinventing patterns]
// Generates test with network-first, but inconsistent with other tests
```

**Problem:** Quality depends on prompt engineering skill, and there is no consistency across sessions.

### Knowledge Drift

Without a knowledge base:

- Team A uses pattern X
- Team B uses pattern Y
- Both work, but inconsistently
- No single source of truth
- Patterns drift over time

## The Solution: tea-index.csv Manifest

### How It Works

**1. Manifest Defines Fragments**

`src/bmm/testarch/tea-index.csv`:

```csv
id,name,description,tags,fragment_file
test-quality,Test Quality,Execution limits and isolation rules,quality;standards,knowledge/test-quality.md
network-first,Network-First Safeguards,Intercept-before-navigate workflow,network;stability,knowledge/network-first.md
fixture-architecture,Fixture Architecture,Composable fixture patterns,fixtures;architecture,knowledge/fixture-architecture.md
```

**2. Workflow Loads Relevant Fragments**

When the user runs `atdd`:

```
TEA reads tea-index.csv
Identifies fragments needed for ATDD:
- test-quality.md (quality standards)
- network-first.md (avoid flakiness)
- component-tdd.md (TDD patterns)
- fixture-architecture.md (reusable fixtures)
- data-factories.md (test data)

Loads only these 5 fragments (not all 33)
Generates tests following these patterns
```
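The manifest lookup above can be sketched as a small resolver: parse the CSV into records, then map the ids a workflow asks for to fragment files. This is illustrative only (TEA's actual loader is internal), and the naive `split(',')` assumes no quoted commas, which holds for the sample manifest:

```typescript
// Parse the tea-index.csv manifest and resolve fragment files by id.
interface Fragment {
  id: string;
  name: string;
  description: string;
  tags: string[];
  fragmentFile: string;
}

function parseTeaIndex(csv: string): Fragment[] {
  const [, ...rows] = csv.trim().split('\n'); // drop the header row
  return rows.map((row) => {
    // Naive split: assumes fields contain no quoted commas.
    const [id, name, description, tags, fragmentFile] = row.split(',');
    return { id, name, description, tags: tags.split(';'), fragmentFile };
  });
}

function resolveFragments(index: Fragment[], ids: string[]): string[] {
  return index
    .filter((fragment) => ids.includes(fragment.id))
    .map((fragment) => fragment.fragmentFile);
}
```

Given the sample manifest, `resolveFragments(index, ['network-first'])` yields the single file path for that fragment, which is the lookup each workflow performs before loading.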

**3. Consistent Output**

Every time `atdd` runs:

- Same fragments loaded
- Same patterns applied
- Same quality standards
- Consistent test structure

**Result:** Tests look like they were written by the same expert, every time.

### Knowledge Base Loading Diagram

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    User([User: atdd]) --> Workflow[TEA Workflow<br/>Triggered]
    Workflow --> Read[Read Manifest<br/>tea-index.csv]

    Read --> Identify{Identify Relevant<br/>Fragments for ATDD}

    Identify -->|Needed| L1[✓ test-quality.md]
    Identify -->|Needed| L2[✓ network-first.md]
    Identify -->|Needed| L3[✓ component-tdd.md]
    Identify -->|Needed| L4[✓ data-factories.md]
    Identify -->|Needed| L5[✓ fixture-architecture.md]

    Identify -.->|Skip| S1[✗ contract-testing.md]
    Identify -.->|Skip| S2[✗ burn-in.md]
    Identify -.->|Skip| S3[+ 26 other fragments]

    L1 --> Context[AI Context<br/>5 fragments loaded]
    L2 --> Context
    L3 --> Context
    L4 --> Context
    L5 --> Context

    Context --> Gen[Generate Tests<br/>Following patterns]
    Gen --> Out([Consistent Output<br/>Same quality every time])

    style User fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Read fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style L1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L4 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style L5 fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px
    style S1 fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style S2 fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style S3 fill:#e0e0e0,stroke:#616161,stroke-width:1px
    style Context fill:#f3e5f5,stroke:#6a1b9a,stroke-width:3px
    style Out fill:#4caf50,stroke:#1b5e20,stroke-width:3px,color:#fff
```

## Fragment Structure

### Anatomy of a Fragment

Each fragment follows this structure:

```markdown
# Fragment Name

## Principle
[One sentence - what is this pattern?]

## Rationale
[Why use this instead of alternatives?]
Why this pattern exists
Problems it solves
Benefits it provides

## Pattern Examples

### Example 1: Basic Usage
```code
[Runnable code example]
```
[Explanation of example]

### Example 2: Advanced Pattern
```code
[More complex example]
```
[Explanation]

## Anti-Patterns

### Don't Do This
```code
[Bad code example]
```
[Why it's bad]
[What breaks]

## Related Patterns
- [Link to related fragment]
```

<!-- markdownlint-disable MD024 -->
### Example: test-quality.md Fragment

```markdown
# Test Quality

## Principle
Tests must be deterministic, isolated, explicit, focused, and fast.

## Rationale
Tests that fail randomly, depend on each other, or take too long lose team trust.
[... detailed explanation ...]

## Pattern Examples

### Example 1: Deterministic Test
```typescript
// ✅ Wait for actual response, not timeout
const promise = page.waitForResponse(matcher);
await page.click('button');
await promise;
```

### Example 2: Isolated Test
```typescript
// ✅ Self-cleaning test
test('test', async ({ page }) => {
  const userId = await createTestUser();
  // ... test logic ...
  await deleteTestUser(userId); // Cleanup
});
```

## Anti-Patterns

### Hard Waits
```typescript
// ❌ Non-deterministic
await page.waitForTimeout(3000);
```
[Why this causes flakiness]
```

**Total:** 24.5 KB, 12 code examples
<!-- markdownlint-enable MD024 -->

## How TEA Uses the Knowledge Base

### Workflow-Specific Loading

**Different workflows load different fragments:**

| Workflow | Fragments Loaded | Purpose |
|----------|-----------------|---------|
| `framework` | fixture-architecture, playwright-config, fixtures-composition | Infrastructure patterns |
| `test-design` | test-quality, test-priorities-matrix, risk-governance | Planning standards |
| `atdd` | test-quality, component-tdd, network-first, data-factories | TDD patterns |
| `automate` | test-quality, test-levels-framework, selector-resilience | Comprehensive generation |
| `test-review` | All quality/resilience/debugging fragments | Full audit patterns |
| `ci` | ci-burn-in, burn-in, selective-testing | CI/CD optimization |

**Benefit:** Only load what's needed (focused context, no bloat).

### Dynamic Fragment Selection

TEA doesn't load all 33 fragments at once:

```
User runs: atdd for authentication feature

TEA analyzes context:
- Feature type: Authentication
- Relevant fragments:
  - test-quality.md (always loaded)
  - auth-session.md (auth patterns)
  - network-first.md (avoid flakiness)
  - email-auth.md (if email-based auth)
  - data-factories.md (test users)

Skips:
- contract-testing.md (not relevant)
- feature-flags.md (not relevant)
- file-utils.md (not relevant)

Result: 5 relevant fragments loaded, 28 skipped
```

**Benefit:** Focused context = better results, lower token usage.

## Context Engineering in Practice

### Example: Consistent Test Generation

**Without Knowledge Base (Vanilla Playwright, Random Quality):**

```
Session 1: User runs atdd
AI: [Guesses patterns from general knowledge]

Generated:
test('api test', async ({ request }) => {
  const response = await request.get('/api/users');
  await page.waitForTimeout(2000); // Hard wait
  const users = await response.json();
  // Random quality
});

Session 2: User runs atdd (different day)
AI: [Different random patterns]

Generated:
test('api test', async ({ request }) => {
  const response = await request.get('/api/users');
  const users = await response.json();
  // Better but inconsistent
});

Result: Inconsistent quality, random patterns
```

**With Knowledge Base (TEA + Playwright Utils):**

```
Session 1: User runs atdd
TEA: [Loads test-quality.md, network-first.md, api-request.md from tea-index.csv]

Generated:
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';

test('should fetch users', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  }).validateSchema(UsersSchema); // Chained validation

  expect(status).toBe(200);
  expect(body).toBeInstanceOf(Array);
});

Session 2: User runs atdd (different day)
TEA: [Loads same fragments from tea-index.csv]

Generated: Identical pattern, same quality

Result: Systematic quality, established patterns (ALWAYS uses apiRequest utility when playwright-utils is enabled)
```

**Key Difference:**

- **Without KB:** Random patterns, inconsistent APIs
- **With KB:** Always uses the `apiRequest` utility, always validates schemas, always returns `{ status, body }`

### Example: Test Review Consistency

**Without Knowledge Base:**

```
test-review session 1:
"This test looks okay" [50 issues missed]

test-review session 2:
"This test has some issues" [Different issues flagged]

Result: Inconsistent feedback
```

**With Knowledge Base:**

```
test-review session 1:
[Loads all quality fragments]
Flags: 12 hard waits, 5 conditionals (based on test-quality.md)

test-review session 2:
[Loads same fragments]
Flags: Same issues with same explanations

Result: Consistent, reliable feedback
```

## Maintaining the Knowledge Base

### When to Add a Fragment

**Good reasons:**

- Pattern is used across multiple workflows
- Standard is non-obvious (needs documentation)
- Team asks "how should we handle X?" repeatedly
- New tool integration (e.g., new testing library)

**Bad reasons:**

- One-off pattern (document in test file instead)
- Obvious pattern (everyone knows this)
- Experimental (not proven yet)

### Fragment Quality Standards

**Good fragment:**

- Principle stated in one sentence
- Rationale explains why clearly
- 3+ pattern examples with code
- Anti-patterns shown (what not to do)
- Self-contained (minimal dependencies)

**Example size:** 10-30 KB optimal
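Several of these standards can be checked mechanically. A minimal sketch (a hypothetical helper, not part of TEA) that verifies a fragment's markdown contains the required sections from the anatomy above:

```typescript
// Section headings the fragment anatomy calls for.
const REQUIRED_SECTIONS = [
  '## Principle',
  '## Rationale',
  '## Pattern Examples',
  '## Anti-Patterns',
];

// Return the headings a fragment is missing (empty array = compliant).
function missingSections(fragmentMarkdown: string): string[] {
  return REQUIRED_SECTIONS.filter(
    (heading) => !fragmentMarkdown.includes(heading)
  );
}
```

A check like this could run in CI over `knowledge/*.md` so that a fragment missing its rationale or anti-patterns never ships.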

### Updating Existing Fragments

**When to update:**

- Pattern evolved (better approach discovered)
- Tool updated (new Playwright API)
- Team feedback (pattern unclear)
- Bug in example code

**How to update:**

1. Edit the fragment markdown file
2. Update the examples
3. Test with affected workflows
4. Ensure no breaking changes

**No need to update tea-index.csv** unless the description or tags change.

## Benefits of the Knowledge Base System

### 1. Consistency

**Before:** Test quality varies by who wrote it
**After:** All tests follow the same patterns (TEA-generated or reviewed)

### 2. Onboarding

**Before:** New team member reads 20 documents, asks 50 questions
**After:** New team member runs `atdd`, sees patterns in generated code, learns by example

### 3. Quality Gates

**Before:** "Is this test good?" → subjective opinion
**After:** `test-review` → objective score against the knowledge base

### 4. Pattern Evolution

**Before:** Update tests manually across 100 files
**After:** Update the fragment once; all new tests use the new pattern

### 5. Cross-Project Reuse

**Before:** Reinvent patterns for each project
**After:** Same fragments across all BMad projects (consistency at scale)

## Comparison: With vs Without Knowledge Base

### Scenario: Testing an Async Background Job

**Without Knowledge Base:**

Developer 1:

```typescript
// Uses a hard wait
await page.click('button');
await page.waitForTimeout(10000); // Hope the job finishes
```

Developer 2:

```typescript
// Uses manual polling
await page.click('button');
for (let i = 0; i < 10; i++) {
  const status = await page.locator('.status').textContent();
  if (status === 'complete') break;
  await page.waitForTimeout(1000);
}
```

Developer 3:

```typescript
// Uses waitForSelector
await page.click('button');
await page.waitForSelector('.success', { timeout: 30000 });
```

**Result:** 3 different patterns, all suboptimal.

**With Knowledge Base (recurse.md fragment):**

All developers:

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('job completion', async ({ apiRequest, recurse }) => {
  // Start the async job
  const { body: job } = await apiRequest({
    method: 'POST',
    path: '/api/jobs'
  });

  // Poll until complete (correct API: command, predicate, options)
  const result = await recurse(
    () => apiRequest({ method: 'GET', path: `/api/jobs/${job.id}` }),
    (response) => response.body.status === 'completed', // response.body from apiRequest
    {
      timeout: 30000,
      interval: 2000,
      log: 'Waiting for job to complete'
    }
  );

  expect(result.body.status).toBe('completed');
});
```

**Result:** Consistent pattern using the correct playwright-utils API (command, predicate, options).
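At its core, a recurse-style utility is a poll-until loop over a command and a predicate. A minimal standalone sketch under that assumption (illustrative only, not the playwright-utils implementation):

```typescript
// Retry `command` until `predicate` accepts its result or the
// timeout elapses, sleeping `interval` ms between attempts.
async function recurse<T>(
  command: () => Promise<T>,
  predicate: (result: T) => boolean,
  options: { timeout: number; interval: number }
): Promise<T> {
  const deadline = Date.now() + options.timeout;
  for (;;) {
    const result = await command();
    if (predicate(result)) return result;
    if (Date.now() >= deadline) {
      throw new Error('recurse: timed out waiting for predicate');
    }
    await new Promise((resolve) => setTimeout(resolve, options.interval));
  }
}
```

Because the command and predicate are plain functions, the loop itself can be unit tested with a fake job that completes after a few calls, no browser required.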

## Technical Implementation

For details on the knowledge base index, see:

- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md)
- [TEA Configuration](/docs/tea/reference/configuration.md)

## Related Concepts

**Core TEA Concepts:**

- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Standards in knowledge base
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Risk patterns in knowledge base
- [Engagement Models](/docs/tea/explanation/engagement-models.md) - Knowledge base across all models

**Technical Patterns:**

- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Fixture patterns in knowledge base
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Network patterns in knowledge base

**Overview:**

- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Knowledge base in workflows
- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - **Foundation: Context engineering philosophy** (why knowledge base solves AI test problems)

## Practical Guides

**All Workflow Guides Use Knowledge Base:**

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md)
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md)
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md)
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md)

**Integration:**

- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - PW-Utils in knowledge base

## Reference

- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Complete fragment index
- [TEA Command Reference](/docs/tea/reference/commands.md) - Which workflows load which fragments
- [TEA Configuration](/docs/tea/reference/configuration.md) - Config affects fragment loading
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - Context engineering, knowledge fragment terms

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

docs/tea/explanation/network-first-patterns.md (new file, 853 lines)

---
title: "Network-First Patterns Explained"
description: Understanding how TEA eliminates test flakiness by waiting for actual network responses
---

# Network-First Patterns Explained

Network-first patterns are TEA's solution to test flakiness. Instead of guessing how long to wait with fixed timeouts, wait for the actual network event that causes UI changes.

## Overview

**The Core Principle:**
UI changes because APIs respond. Wait for the API response, not an arbitrary timeout.

**Traditional approach:**

```typescript
await page.click('button');
await page.waitForTimeout(3000); // Hope 3 seconds is enough
await expect(page.locator('.success')).toBeVisible();
```

**Network-first approach:**

```typescript
const responsePromise = page.waitForResponse(
  resp => resp.url().includes('/api/submit') && resp.ok()
);
await page.click('button');
await responsePromise; // Wait for the actual response
await expect(page.locator('.success')).toBeVisible();
```

**Result:** Deterministic tests that wait exactly as long as needed.

## The Problem

### Hard Waits Create Flakiness

```typescript
// ❌ The flaky test pattern
test('should submit form', async ({ page }) => {
  await page.fill('#name', 'Test User');
  await page.click('button[type="submit"]');

  await page.waitForTimeout(2000); // Wait 2 seconds

  await expect(page.locator('.success')).toBeVisible();
});
```

**Why this fails:**

- **Fast network:** Wastes 1.5 seconds waiting
- **Slow network:** Not enough time, test fails
- **CI environment:** Slower than local, fails randomly
- **Under load:** API takes 3 seconds, test fails

**Result:** "Works on my machine" syndrome, flaky CI.

### The Timeout Escalation Trap

```typescript
// Developer sees a flaky test
await page.waitForTimeout(2000); // Failed in CI

// Increases the timeout
await page.waitForTimeout(5000); // Still fails sometimes

// Increases it again
await page.waitForTimeout(10000); // Now it passes... slowly

// Problem: now EVERY test waits 10 seconds.
// A suite that took 5 minutes now takes 30 minutes.
```

**Result:** Slow, still-flaky tests.

### Race Conditions

```typescript
// ❌ Navigate-then-wait race condition
test('should load dashboard data', async ({ page }) => {
  await page.goto('/dashboard'); // Navigation starts

  // Race condition! The API might not have responded yet
  await expect(page.locator('.data-table')).toBeVisible();
});
```

**What happens:**

1. `goto()` starts navigation
2. Page loads HTML
3. JavaScript requests `/api/dashboard`
4. Test checks for `.data-table` BEFORE the API responds
5. Test fails intermittently

**Result:** "Sometimes it works, sometimes it doesn't."

## The Solution: Intercept-Before-Navigate

### Wait for Response Before Asserting

```typescript
// ✅ Good: Network-first pattern
test('should load dashboard data', async ({ page }) => {
  // Set up the promise BEFORE navigation
  const dashboardPromise = page.waitForResponse(
    resp => resp.url().includes('/api/dashboard') && resp.ok()
  );

  // Navigate
  await page.goto('/dashboard');

  // Wait for the API response
  const response = await dashboardPromise;
  const data = await response.json();

  // Now assert the UI
  await expect(page.locator('.data-table')).toBeVisible();
  await expect(page.locator('.data-table tr')).toHaveCount(data.items.length);
});
```

**Why this works:**

- Wait set up BEFORE navigation (no race)
- Waits for the actual API response (deterministic)
- No fixed timeout (fast when the API is fast)
- Validates the API response (catches backend errors)
**With Playwright Utils (Even Cleaner):**
|
||||
```typescript
|
||||
import { test } from '@seontechnologies/playwright-utils/fixtures';
|
||||
import { expect } from '@playwright/test';
|
||||
|
||||
test('should load dashboard data', async ({ page, interceptNetworkCall }) => {
|
||||
// Set up interception BEFORE navigation
|
||||
const dashboardCall = interceptNetworkCall({
|
||||
method: 'GET',
|
||||
url: '**/api/dashboard'
|
||||
});
|
||||
|
||||
// Navigate
|
||||
await page.goto('/dashboard');
|
||||
|
||||
// Wait for API response (automatic JSON parsing)
|
||||
const { status, responseJson: data } = await dashboardCall;
|
||||
|
||||
// Validate API response
|
||||
expect(status).toBe(200);
|
||||
expect(data.items).toBeDefined();
|
||||
|
||||
// Assert UI matches API data
|
||||
await expect(page.locator('.data-table')).toBeVisible();
|
||||
await expect(page.locator('.data-table tr')).toHaveCount(data.items.length);
|
||||
});
|
||||
```
|
||||
|
||||
**Playwright Utils Benefits:**
|
||||
- Automatic JSON parsing (no `await response.json()`)
|
||||
- Returns `{ status, responseJson, requestJson }` structure
|
||||
- Cleaner API (no need to check `resp.ok()`)
|
||||
- Same intercept-before-navigate pattern
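The `url: '**/api/dashboard'` option above is a glob-style pattern. As a rough sketch of how such matching could work (a hypothetical helper, not the playwright-utils implementation; the convention assumed here is that `**` matches any characters and `*` matches within one path segment):

```typescript
// Convert a glob-style URL pattern into a RegExp.
function globToRegExp(glob: string): RegExp {
  // Escape regex metacharacters except '*'.
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  const pattern = escaped
    .split('**')                                    // '**' => any characters
    .map((part) => part.replace(/\*/g, '[^/]*'))    // '*'  => within a segment
    .join('.*');
  return new RegExp(`^${pattern}$`);
}
```

With this convention, `'**/api/dashboard'` matches the full request URL regardless of scheme and host, which is why the test above does not hard-code `https://...`.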

### Intercept-Before-Navigate Pattern

**Key insight:** Set up the wait BEFORE triggering the action.

```typescript
// ✅ Pattern: Intercept → Action → Await

// 1. Intercept (set up the wait)
const promise = page.waitForResponse(matcher);

// 2. Action (trigger the request)
await page.click('button');

// 3. Await (wait for the actual response)
await promise;
```

**Why this order:**

- `waitForResponse()` starts listening immediately
- Then trigger the action that makes the request
- Then wait for the promise to resolve
- No race condition is possible

#### Intercept-Before-Navigate Flow

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
sequenceDiagram
    participant Test
    participant Playwright
    participant Browser
    participant API

    rect rgb(200, 230, 201)
    Note over Test,Playwright: ✅ CORRECT: Intercept First
    Test->>Playwright: 1. waitForResponse(matcher)
    Note over Playwright: Starts listening for response
    Test->>Browser: 2. click('button')
    Browser->>API: 3. POST /api/submit
    API-->>Browser: 4. 200 OK {success: true}
    Browser-->>Playwright: 5. Response captured
    Test->>Playwright: 6. await promise
    Playwright-->>Test: 7. Returns response
    Note over Test: No race condition!
    end

    rect rgb(255, 205, 210)
    Note over Test,API: ❌ WRONG: Action First
    Test->>Browser: 1. click('button')
    Browser->>API: 2. POST /api/submit
    API-->>Browser: 3. 200 OK (already happened!)
    Test->>Playwright: 4. waitForResponse(matcher)
    Note over Test,Playwright: Too late - response already occurred
    Note over Test: Race condition! Test hangs or fails
    end
```

**Correct Order (Green):**

1. Set up the listener (`waitForResponse`)
2. Trigger the action (`click`)
3. Wait for the response (`await promise`)

**Wrong Order (Red):**

1. Trigger the action first
2. Set up the listener too late
3. The response already happened - missed!
|
||||
|
||||
## How It Works in TEA

### TEA Generates Network-First Tests

**Vanilla Playwright:**
```typescript
// When you run `atdd` or `automate`, TEA generates:

test('should create user', async ({ page }) => {
  // TEA automatically includes network wait
  const createUserPromise = page.waitForResponse(
    resp => resp.url().includes('/api/users') &&
      resp.request().method() === 'POST' &&
      resp.ok()
  );

  await page.fill('#name', 'Test User');
  await page.click('button[type="submit"]');

  const response = await createUserPromise;
  const user = await response.json();

  // Validate both API and UI
  expect(user.id).toBeDefined();
  await expect(page.locator('.success')).toContainText(user.name);
});
```

**With Playwright Utils (if `tea_use_playwright_utils: true`):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should create user', async ({ page, interceptNetworkCall }) => {
  // TEA uses interceptNetworkCall for cleaner interception
  const createUserCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/users'
  });

  await page.getByLabel('Name').fill('Test User');
  await page.getByRole('button', { name: 'Submit' }).click();

  // Wait for response (automatic JSON parsing)
  const { status, responseJson: user } = await createUserCall;

  // Validate both API and UI
  expect(status).toBe(201);
  expect(user.id).toBeDefined();
  await expect(page.locator('.success')).toContainText(user.name);
});
```

**Playwright Utils Benefits:**
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Returns `{ status, responseJson }` structure
- Cleaner, more readable code

### TEA Reviews for Hard Waits

When you run `test-review`:

````markdown
## Critical Issue: Hard Wait Detected

**File:** tests/e2e/submit.spec.ts:45
**Issue:** Using `page.waitForTimeout(3000)`
**Severity:** Critical (causes flakiness)

**Current Code:**
```typescript
await page.click('button');
await page.waitForTimeout(3000); // ❌
```

**Fix:**
```typescript
const responsePromise = page.waitForResponse(
  resp => resp.url().includes('/api/submit') && resp.ok()
);
await page.click('button');
await responsePromise; // ✅
```

**Why:** Hard waits are non-deterministic. Use network-first patterns.
````

## Pattern Variations

### Basic Response Wait

**Vanilla Playwright:**
```typescript
// Wait for any successful response
const promise = page.waitForResponse(resp => resp.ok());
await page.click('button');
await promise;
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('basic wait', async ({ page, interceptNetworkCall }) => {
  const responseCall = interceptNetworkCall({ url: '**' }); // Match any
  await page.click('button');
  const { status } = await responseCall;
  expect(status).toBe(200);
});
```

---

### Specific URL Match

**Vanilla Playwright:**
```typescript
// Wait for specific endpoint
const promise = page.waitForResponse(
  resp => resp.url().includes('/api/users/123')
);
await page.goto('/user/123');
await promise;
```

**With Playwright Utils:**
```typescript
test('specific URL', async ({ page, interceptNetworkCall }) => {
  const userCall = interceptNetworkCall({ url: '**/api/users/123' });
  await page.goto('/user/123');
  const { status, responseJson } = await userCall;
  expect(status).toBe(200);
});
```

---

### Method + Status Match

**Vanilla Playwright:**
```typescript
// Wait for POST that returns 201
const promise = page.waitForResponse(
  resp =>
    resp.url().includes('/api/users') &&
    resp.request().method() === 'POST' &&
    resp.status() === 201
);
await page.click('button[type="submit"]');
await promise;
```

**With Playwright Utils:**
```typescript
test('method and status', async ({ page, interceptNetworkCall }) => {
  const createCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/users'
  });
  await page.click('button[type="submit"]');
  const { status, responseJson } = await createCall;
  expect(status).toBe(201); // Explicit status check
});
```

---

### Multiple Responses

**Vanilla Playwright:**
```typescript
// Wait for multiple API calls
const [usersResp, postsResp] = await Promise.all([
  page.waitForResponse(resp => resp.url().includes('/api/users')),
  page.waitForResponse(resp => resp.url().includes('/api/posts')),
  page.goto('/dashboard') // Triggers both requests
]);

const users = await usersResp.json();
const posts = await postsResp.json();
```

**With Playwright Utils:**
```typescript
test('multiple responses', async ({ page, interceptNetworkCall }) => {
  const usersCall = interceptNetworkCall({ url: '**/api/users' });
  const postsCall = interceptNetworkCall({ url: '**/api/posts' });

  await page.goto('/dashboard'); // Triggers both

  const [{ responseJson: users }, { responseJson: posts }] = await Promise.all([
    usersCall,
    postsCall
  ]);

  expect(users).toBeInstanceOf(Array);
  expect(posts).toBeInstanceOf(Array);
});
```

---

### Validate Response Data

**Vanilla Playwright:**
```typescript
// Verify API response before asserting UI
const promise = page.waitForResponse(
  resp => resp.url().includes('/api/checkout') && resp.ok()
);

await page.click('button:has-text("Complete Order")');

const response = await promise;
const order = await response.json();

// Response validation
expect(order.status).toBe('confirmed');
expect(order.total).toBeGreaterThan(0);

// UI validation
await expect(page.locator('.order-confirmation')).toContainText(order.id);
```

**With Playwright Utils:**
```typescript
test('validate response data', async ({ page, interceptNetworkCall }) => {
  const checkoutCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/checkout'
  });

  await page.click('button:has-text("Complete Order")');

  const { status, responseJson: order } = await checkoutCall;

  // Response validation (automatic JSON parsing)
  expect(status).toBe(200);
  expect(order.status).toBe('confirmed');
  expect(order.total).toBeGreaterThan(0);

  // UI validation
  await expect(page.locator('.order-confirmation')).toContainText(order.id);
});
```

## Advanced Patterns

### HAR Recording for Offline Testing

**Vanilla Playwright (Manual HAR Handling):**

```typescript
// First run: Record mode (saves HAR file)
test('offline testing - RECORD', async ({ page, context }) => {
  // Record mode: Save network traffic to HAR
  await context.routeFromHAR('./hars/dashboard.har', {
    url: '**/api/**',
    update: true // Update HAR file
  });

  await page.goto('/dashboard');
  // All network traffic saved to dashboard.har
});

// Subsequent runs: Playback mode (uses saved HAR)
test('offline testing - PLAYBACK', async ({ page, context }) => {
  // Playback mode: Use saved network traffic
  await context.routeFromHAR('./hars/dashboard.har', {
    url: '**/api/**',
    update: false // Use existing HAR, no network calls
  });

  await page.goto('/dashboard');
  // Uses recorded responses, no backend needed
});
```

**With Playwright Utils (Automatic HAR Management):**
```typescript
import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures';

// Record mode: Set environment variable
process.env.PW_NET_MODE = 'record';

test('should work offline', async ({ page, context, networkRecorder }) => {
  await networkRecorder.setup(context); // Handles HAR automatically

  await page.goto('/dashboard');
  await page.click('#add-item');
  // All network traffic recorded, CRUD operations detected
});
```

**Switch to playback:**
```bash
# Playback mode (offline)
PW_NET_MODE=playback npx playwright test
# Uses HAR file, no backend needed!
```

**Playwright Utils Benefits:**
- Automatic HAR file management (naming, paths)
- CRUD operation detection (stateful mocking)
- Environment variable control (easy switching)
- Works for complex interactions (create, update, delete)
- No manual route configuration

### Network Request Interception

**Vanilla Playwright:**
```typescript
test('should handle API error', async ({ page }) => {
  // Manual route setup
  await page.route('**/api/users', (route) => {
    route.fulfill({
      status: 500,
      body: JSON.stringify({ error: 'Internal server error' })
    });
  });

  await page.goto('/users');

  const response = await page.waitForResponse('**/api/users');
  const error = await response.json();

  expect(error.error).toContain('Internal server');
  await expect(page.locator('.error-message')).toContainText('Server error');
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('should handle API error', async ({ page, interceptNetworkCall }) => {
  // Stub API to return error (set up BEFORE navigation)
  const usersCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/users',
    fulfillResponse: {
      status: 500,
      body: { error: 'Internal server error' }
    }
  });

  await page.goto('/users');

  // Wait for mocked response and access parsed data
  const { status, responseJson } = await usersCall;

  expect(status).toBe(500);
  expect(responseJson.error).toContain('Internal server');
  await expect(page.locator('.error-message')).toContainText('Server error');
});
```

**Playwright Utils Benefits:**
- Automatic JSON parsing (`responseJson` ready to use)
- Returns promise with `{ status, responseJson, requestJson }`
- No need to pass `page` (auto-injected by fixture)
- Glob pattern matching (simpler than regex)
- Single declarative call (setup + wait in one)

## Comparison: Traditional vs Network-First

### Loading Dashboard Data

**Traditional (Flaky):**
```typescript
test('dashboard loads data', async ({ page }) => {
  await page.goto('/dashboard');
  await page.waitForTimeout(2000); // ❌ Magic number
  await expect(page.locator('table tr')).toHaveCount(5);
});
```

**Failure modes:**
- API takes 2.5s → test fails
- API returns 3 items not 5 → hard to debug (which issue?)
- CI slower than local → fails in CI only

**Network-First (Deterministic):**
```typescript
test('dashboard loads data', async ({ page }) => {
  const apiPromise = page.waitForResponse(
    resp => resp.url().includes('/api/dashboard') && resp.ok()
  );

  await page.goto('/dashboard');

  const response = await apiPromise;
  const { items } = await response.json();

  // Validate API response
  expect(items).toHaveLength(5);

  // Validate UI matches API
  await expect(page.locator('table tr')).toHaveCount(items.length);
});
```

**Benefits:**
- Waits exactly as long as needed (100ms or 5s, doesn't matter)
- Validates API response (catch backend errors)
- Validates UI matches API (catch frontend bugs)
- Works in any environment (local, CI, staging)

**With Playwright Utils (Even Better):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('dashboard loads data', async ({ page, interceptNetworkCall }) => {
  const dashboardCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/dashboard'
  });

  await page.goto('/dashboard');

  const { status, responseJson: { items } } = await dashboardCall;

  // Validate API response (automatic JSON parsing)
  expect(status).toBe(200);
  expect(items).toHaveLength(5);

  // Validate UI matches API
  await expect(page.locator('table tr')).toHaveCount(items.length);
});
```

**Additional Benefits:**
- No manual `await response.json()` (automatic parsing)
- Cleaner destructuring of nested data
- Consistent API across all network calls

---

### Form Submission

**Traditional (Flaky):**
```typescript
test('form submission', async ({ page }) => {
  await page.fill('#email', 'test@example.com');
  await page.click('button[type="submit"]');
  await page.waitForTimeout(3000); // ❌ Hope it's enough
  await expect(page.locator('.success')).toBeVisible();
});
```

**Network-First (Deterministic):**
```typescript
test('form submission', async ({ page }) => {
  const submitPromise = page.waitForResponse(
    resp => resp.url().includes('/api/submit') &&
      resp.request().method() === 'POST' &&
      resp.ok()
  );

  await page.fill('#email', 'test@example.com');
  await page.click('button[type="submit"]');

  const response = await submitPromise;
  const result = await response.json();

  expect(result.success).toBe(true);
  await expect(page.locator('.success')).toBeVisible();
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('form submission', async ({ page, interceptNetworkCall }) => {
  const submitCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/submit'
  });

  await page.getByLabel('Email').fill('test@example.com');
  await page.getByRole('button', { name: 'Submit' }).click();

  const { status, responseJson: result } = await submitCall;

  // Automatic JSON parsing, no manual await
  expect(status).toBe(200);
  expect(result.success).toBe(true);
  await expect(page.locator('.success')).toBeVisible();
});
```

**Progression:**
- Traditional: Hard waits (flaky)
- Network-First (Vanilla): `waitForResponse` (deterministic)
- Network-First (PW-Utils): `interceptNetworkCall` (deterministic + cleaner API)

---

## Common Misconceptions

### "I Already Use waitForSelector"

```typescript
// This is still a hard wait in disguise
await page.click('button');
await page.waitForSelector('.success', { timeout: 5000 });
```

**Problem:** This waits for the DOM, not for the API call that caused the DOM change.

**Better:**
```typescript
await page.waitForResponse(matcher); // Wait for root cause
await page.waitForSelector('.success'); // Then validate UI
```

### "My Tests Are Fast, Why Add Complexity?"

**Short-term:** Tests are fast locally.

**Long-term problems:**
- Different environments (CI slower)
- Under load (API slower)
- Network variability (random)
- Scaling test suite (100 → 1000 tests)

**Network-first prevents these issues before they appear.**

### "Too Much Boilerplate"

**Problem:** `waitForResponse` is verbose and repeated in every test.

**Solution:** Use the Playwright Utils `interceptNetworkCall` fixture, which reduces the boilerplate.

**Vanilla Playwright (Repetitive):**
```typescript
test('test 1', async ({ page }) => {
  const promise = page.waitForResponse(
    resp => resp.url().includes('/api/submit') && resp.ok()
  );
  await page.click('button');
  await promise;
});

test('test 2', async ({ page }) => {
  const promise = page.waitForResponse(
    resp => resp.url().includes('/api/load') && resp.ok()
  );
  await page.click('button');
  await promise;
});
// Repeated pattern in every test
```

**With Playwright Utils (Cleaner):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';

test('test 1', async ({ page, interceptNetworkCall }) => {
  const submitCall = interceptNetworkCall({ url: '**/api/submit' });
  await page.click('button');
  const { status, responseJson } = await submitCall;
  expect(status).toBe(200);
});

test('test 2', async ({ page, interceptNetworkCall }) => {
  const loadCall = interceptNetworkCall({ url: '**/api/load' });
  await page.click('button');
  const { responseJson } = await loadCall;
  // Automatic JSON parsing, cleaner API
});
```

**Benefits:**
- Less boilerplate (fixture handles complexity)
- Automatic JSON parsing
- Glob pattern matching (`**/api/**`)
- Consistent API across all tests

See [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md#intercept-network-call) for setup.

## Technical Implementation

For detailed network-first patterns, see the knowledge base:
- [Knowledge Base Index - Network & Reliability](/docs/tea/reference/knowledge-base.md)

## Related Concepts

**Core TEA Concepts:**
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Determinism requires network-first
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - High-risk features need reliable tests

**Technical Patterns:**
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Network utilities as fixtures
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - Network patterns in knowledge base

**Overview:**
- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Network-first in workflows
- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - Why flakiness matters

## Practical Guides

**Workflow Guides:**
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Review for hard waits
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Generate network-first tests
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Expand with network patterns

**Use-Case Guides:**
- [Using TEA with Existing Tests](/docs/tea/how-to/brownfield/use-tea-with-existing-tests.md) - Fix flaky legacy tests

**Customization:**
- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - Network utilities (recorder, interceptor, error monitor)

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - All workflows use network-first
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Network-first fragment
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - Network-first pattern term

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

docs/tea/explanation/risk-based-testing.md (new file, 586 lines)

---
title: "Risk-Based Testing Explained"
description: Understanding how TEA uses probability × impact scoring to prioritize testing effort
---

# Risk-Based Testing Explained

Risk-based testing is TEA's core principle: testing depth scales with business impact. Instead of testing everything equally, focus effort where failures hurt most.

## Overview

Traditional testing approaches treat all features equally:
- Every feature gets the same test coverage
- Same level of scrutiny regardless of impact
- No systematic prioritization
- Testing becomes a checkbox exercise

**Risk-based testing asks:**
- What's the probability this will fail?
- What's the impact if it does fail?
- How much testing is appropriate for this risk level?

**Result:** Testing effort matches business criticality.

## The Problem

### Equal Testing for Unequal Risk

```markdown
Feature A: User login (critical path, millions of users)
Feature B: Export to PDF (nice-to-have, rarely used)

Traditional approach:
- Both get 10 tests
- Both get same review scrutiny
- Both take same development time

Problem: Wasting effort on low-impact features while under-testing critical paths.
```

### No Objective Prioritization

```markdown
PM: "We need more tests for checkout"
QA: "How many tests?"
PM: "I don't know... a lot?"
QA: "How do we know when we have enough?"
PM: "When it feels safe?"

Problem: Subjective decisions, no data, political debates.
```

## The Solution: Probability × Impact Scoring

### Risk Score = Probability × Impact

**Probability** (How likely to fail?)
- **1 (Low):** Stable, well-tested, simple logic
- **2 (Medium):** Moderate complexity, some unknowns
- **3 (High):** Complex, untested, many edge cases

**Impact** (How bad if it fails?)
- **1 (Low):** Minor inconvenience, few users affected
- **2 (Medium):** Degraded experience, workarounds exist
- **3 (High):** Critical path broken, business impact

**Score Range:** 1-9

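The scoring model can be sketched as a small helper. This is illustrative only — the function names and the exact thresholds are assumptions drawn from the gate rules described later in this document, not TEA's actual implementation:

```typescript
// Illustrative sketch of probability × impact scoring (not TEA's actual code).
type Level = 1 | 2 | 3;

function riskScore(probability: Level, impact: Level): number {
  return probability * impact;
}

// Thresholds mirror the gate rules described later in this document.
function classify(score: number): 'CRITICAL' | 'HIGH' | 'MEDIUM' | 'LOW' {
  if (score >= 9) return 'CRITICAL'; // blocks release
  if (score >= 6) return 'HIGH';     // mitigation required
  if (score >= 4) return 'MEDIUM';   // mitigation recommended
  return 'LOW';                      // optional mitigation
}

console.log(classify(riskScore(3, 3))); // payment processing → CRITICAL
console.log(classify(riskScore(1, 1))); // theme color toggle → LOW
```

Note that with 1-3 scales the only reachable scores are 1, 2, 3, 4, 6, and 9, which is why the matrix below has no cells for 5, 7, or 8.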
#### Risk Scoring Matrix

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
graph TD
    subgraph Matrix[" "]
        direction TB
        subgraph Impact3["Impact: HIGH (3)"]
            P1I3["Score: 3<br/>Low Risk"]
            P2I3["Score: 6<br/>HIGH RISK<br/>Mitigation Required"]
            P3I3["Score: 9<br/>CRITICAL<br/>Blocks Release"]
        end
        subgraph Impact2["Impact: MEDIUM (2)"]
            P1I2["Score: 2<br/>Low Risk"]
            P2I2["Score: 4<br/>Medium Risk"]
            P3I2["Score: 6<br/>HIGH RISK<br/>Mitigation Required"]
        end
        subgraph Impact1["Impact: LOW (1)"]
            P1I1["Score: 1<br/>Low Risk"]
            P2I1["Score: 2<br/>Low Risk"]
            P3I1["Score: 3<br/>Low Risk"]
        end
    end

    Prob1["Probability: LOW (1)"] -.-> P1I1
    Prob1 -.-> P1I2
    Prob1 -.-> P1I3

    Prob2["Probability: MEDIUM (2)"] -.-> P2I1
    Prob2 -.-> P2I2
    Prob2 -.-> P2I3

    Prob3["Probability: HIGH (3)"] -.-> P3I1
    Prob3 -.-> P3I2
    Prob3 -.-> P3I3

    style P3I3 fill:#f44336,stroke:#b71c1c,stroke-width:3px,color:#fff
    style P2I3 fill:#ff9800,stroke:#e65100,stroke-width:2px,color:#000
    style P3I2 fill:#ff9800,stroke:#e65100,stroke-width:2px,color:#000
    style P2I2 fill:#fff9c4,stroke:#f57f17,stroke-width:1px,color:#000
    style P1I1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P2I1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P3I1 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P1I2 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style P1I3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
```

**Legend:**
- 🔴 Red (Score 9): CRITICAL - Blocks release
- 🟠 Orange (Score 6-8): HIGH RISK - Mitigation required
- 🟡 Yellow (Score 4-5): MEDIUM - Mitigation recommended
- 🟢 Green (Score 1-3): LOW - Optional mitigation

### Scoring Examples

**Score 9 (Critical):**
```
Feature: Payment processing
Probability: 3 (complex third-party integration)
Impact: 3 (broken payments = lost revenue)
Score: 3 × 3 = 9

Action: Extensive testing required
- E2E tests for all payment flows
- API tests for all payment scenarios
- Error handling for all failure modes
- Security testing for payment data
- Load testing for high traffic
- Monitoring and alerts
```

**Score 1 (Low):**
```
Feature: Change profile theme color
Probability: 1 (simple UI toggle)
Impact: 1 (cosmetic only)
Score: 1 × 1 = 1

Action: Minimal testing
- One E2E smoke test
- Skip edge cases
- No API tests needed
```

**Score 6 (Medium-High):**
```
Feature: User profile editing
Probability: 2 (moderate complexity)
Impact: 3 (users can't update info)
Score: 2 × 3 = 6

Action: Focused testing
- E2E test for happy path
- API tests for CRUD operations
- Validation testing
- Skip low-value edge cases
```

## How It Works in TEA

### 1. Risk Categories

TEA assesses risk across 6 categories:

**TECH** - Technical debt, architecture fragility
```
Example: Migrating from REST to GraphQL
Probability: 3 (major architectural change)
Impact: 3 (affects all API consumers)
Score: 9 - Extensive integration testing required
```

**SEC** - Security vulnerabilities
```
Example: Adding OAuth integration
Probability: 2 (third-party dependency)
Impact: 3 (auth breach = data exposure)
Score: 6 - Security testing mandatory
```

**PERF** - Performance degradation
```
Example: Adding real-time notifications
Probability: 2 (WebSocket complexity)
Impact: 2 (slower experience)
Score: 4 - Load testing recommended
```

**DATA** - Data integrity, corruption
```
Example: Database migration
Probability: 2 (schema changes)
Impact: 3 (data loss unacceptable)
Score: 6 - Data validation tests required
```

**BUS** - Business logic errors
```
Example: Discount calculation
Probability: 2 (business rules complex)
Impact: 3 (wrong prices = revenue loss)
Score: 6 - Business logic tests mandatory
```

**OPS** - Operational issues
```
Example: Logging system update
Probability: 1 (straightforward)
Impact: 2 (debugging harder without logs)
Score: 2 - Basic smoke test sufficient
```

### 2. Test Priorities (P0-P3)

Risk scores inform test priorities (but aren't the only factor):

**P0 - Critical Path**
- **Risk Scores:** Typically 6-9 (high risk)
- **Other Factors:** Revenue impact, security-critical, regulatory compliance, frequent usage
- **Coverage Target:** 100%
- **Test Levels:** E2E + API
- **Example:** Login, checkout, payment processing

**P1 - High Value**
- **Risk Scores:** Typically 4-6 (medium-high risk)
- **Other Factors:** Core user journeys, complex logic, integration points
- **Coverage Target:** 90%
- **Test Levels:** API + selective E2E
- **Example:** Profile editing, search, filters

**P2 - Medium Value**
- **Risk Scores:** Typically 2-4 (medium risk)
- **Other Factors:** Secondary features, admin functionality, reporting
- **Coverage Target:** 50%
- **Test Levels:** API happy path only
- **Example:** Export features, advanced settings

**P3 - Low Value**
- **Risk Scores:** Typically 1-2 (low risk)
- **Other Factors:** Rarely used, nice-to-have, cosmetic
- **Coverage Target:** 20% (smoke test)
- **Test Levels:** E2E smoke test only
- **Example:** Theme customization, experimental features

**Note:** Priorities consider risk scores plus business context (usage frequency, user impact, etc.). See [Test Priorities Matrix](/docs/tea/reference/knowledge-base.md#quality-standards) for complete criteria.

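As a rough sketch, the tiers above could be encoded as a lookup table. The field names are illustrative assumptions, not TEA's schema:

```typescript
// Illustrative encoding of the P0-P3 tiers described above (not TEA's schema).
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

interface TierPolicy {
  coverageTarget: number;          // percent
  testLevels: string;
  typicalScores: [number, number]; // inclusive range
}

const tierPolicies: Record<Priority, TierPolicy> = {
  P0: { coverageTarget: 100, testLevels: 'E2E + API', typicalScores: [6, 9] },
  P1: { coverageTarget: 90, testLevels: 'API + selective E2E', typicalScores: [4, 6] },
  P2: { coverageTarget: 50, testLevels: 'API happy path only', typicalScores: [2, 4] },
  P3: { coverageTarget: 20, testLevels: 'E2E smoke test only', typicalScores: [1, 2] },
};

console.log(tierPolicies.P0.coverageTarget); // 100
```

A real assessment would also weigh the business-context factors listed above, which is why the score ranges are labeled "typical" rather than absolute.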
### 3. Mitigation Plans

**Scores ≥6 require documented mitigation:**

```markdown
## Risk Mitigation

**Risk:** Payment integration failure (Score: 9)

**Mitigation Plan:**
- Create comprehensive test suite (20+ tests)
- Add payment sandbox environment
- Implement retry logic with idempotency
- Add monitoring and alerts
- Document rollback procedure

**Owner:** Backend team lead
**Deadline:** Before production deployment
**Status:** In progress
```

**Gate Rules:**
- **Score = 9** (Critical): Mandatory FAIL - blocks release without mitigation
- **Score 6-8** (High): Requires mitigation plan, becomes CONCERNS if incomplete
- **Score 4-5** (Medium): Mitigation recommended but not required
- **Score 1-3** (Low): No mitigation needed

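Read as code, the gate rules amount to a small decision function. The sketch below is an assumption about how such a gate could be computed from the rules above, not TEA's actual gate logic:

```typescript
// Illustrative gate decision derived from the rules above (not TEA's actual logic).
type Gate = 'FAIL' | 'CONCERNS' | 'PASS';

function gateFor(score: number, hasMitigationPlan: boolean): Gate {
  if (score === 9 && !hasMitigationPlan) return 'FAIL';    // critical: blocks release
  if (score >= 6 && !hasMitigationPlan) return 'CONCERNS'; // high: plan required
  return 'PASS';                                           // mitigated, or score ≤ 5
}

console.log(gateFor(9, false)); // FAIL
console.log(gateFor(6, false)); // CONCERNS
console.log(gateFor(4, false)); // PASS
```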
## Comparison: Traditional vs Risk-Based

### Traditional Approach

```typescript
// Test everything equally
describe('User profile', () => {
  test('should display name');
  test('should display email');
  test('should display phone');
  test('should display address');
  test('should display bio');
  test('should display avatar');
  test('should display join date');
  test('should display last login');
  test('should display theme preference');
  test('should display language preference');
  // 10 tests for profile display (all equal priority)
});
```

**Problems:**

- Same effort for critical (name) vs trivial (theme)
- No guidance on what matters
- Wastes time on low-value tests
### Risk-Based Approach

```typescript
// Test based on risk

describe('User profile - Critical (P0)', () => {
  test('should display name and email'); // Score: 9 (identity critical)
  test('should allow editing name and email');
  test('should validate email format');
  test('should prevent unauthorized edits');
  // 4 focused tests on high-risk areas
});

describe('User profile - High Value (P1)', () => {
  test('should upload avatar'); // Score: 6 (users care about this)
  test('should update bio');
  // 2 tests for high-value features
});

// P2: Theme preference - single smoke test
// P3: Last login display - skip (read-only, low value)
```

**Benefits:**

- 6 focused tests vs 10 unfocused tests
- Effort matches business impact
- Clear priorities guide development
- No wasted effort on trivial features
## When to Use Risk-Based Testing

### Always Use For:

**Enterprise projects:**

- High stakes (revenue, compliance, security)
- Many features competing for test effort
- Need objective prioritization

**Large codebases:**

- Can't test everything exhaustively
- Need to focus limited QA resources
- Want data-driven decisions

**Regulated industries:**

- Must justify testing decisions
- Auditors want risk assessments
- Compliance requires evidence

### Consider Skipping For:

**Tiny projects:**

- 5 features total
- Can test everything thoroughly
- Risk scoring is overhead

**Prototypes:**

- Throw-away code
- Speed over quality
- Learning experiments
## Real-World Example

### Scenario: E-Commerce Checkout Redesign

**Feature:** Redesigning the checkout flow from 5 steps to 3 steps

**Risk Assessment:**

| Component | Probability | Impact | Score | Priority | Testing |
|-----------|-------------|--------|-------|----------|---------|
| **Payment processing** | 3 | 3 | 9 | P0 | 15 E2E + 20 API tests |
| **Order validation** | 2 | 3 | 6 | P1 | 5 E2E + 10 API tests |
| **Shipping calculation** | 2 | 2 | 4 | P1 | 3 E2E + 8 API tests |
| **Promo code validation** | 2 | 2 | 4 | P1 | 2 E2E + 5 API tests |
| **Gift message** | 1 | 1 | 1 | P3 | 1 E2E smoke test |

**Test Budget:** 40 hours

**Allocation:**

- Payment (Score 9): 20 hours (50%)
- Order validation (Score 6): 8 hours (20%)
- Shipping (Score 4): 6 hours (15%)
- Promo codes (Score 4): 4 hours (10%)
- Gift message (Score 1): 2 hours (5%)

**Result:** 50% of effort on the highest-risk feature (payment), proportional allocation for the others.
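A purely score-weighted split can be computed mechanically. The sketch below is illustrative only (the helper name is hypothetical, not a TEA API), and a real allocation like the one above also folds in team judgment, so the exact hours differ:

```typescript
// Illustrative score-weighted budget split (hypothetical helper, not a TEA API).
// A purely proportional split; real plans apply judgment on top.
function allocateBudget(
  totalHours: number,
  scores: Record<string, number>,
): Record<string, number> {
  const totalScore = Object.values(scores).reduce((sum, s) => sum + s, 0);
  const hours: Record<string, number> = {};
  for (const [component, score] of Object.entries(scores)) {
    // Each component gets hours in proportion to its risk score.
    hours[component] = Math.round((totalHours * score) / totalScore);
  }
  return hours;
}

const plan = allocateBudget(40, {
  payment: 9,
  orderValidation: 6,
  shipping: 4,
  promoCodes: 4,
  giftMessage: 1,
});
```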
### Without Risk-Based Testing:

**Equal allocation:** 8 hours per component wastes effort on the gift message while under-testing payment.

**Result:** Payment bugs slip through (critical) while the gift message gets perfect testing (trivial).
## Mitigation Strategies by Risk Level

### Score 9: Mandatory Mitigation (Blocks Release)

```markdown
**Gate Impact:** FAIL - Cannot deploy without mitigation

**Actions:**

- Comprehensive test suite (E2E, API, security)
- Multiple test environments (dev, staging, prod-mirror)
- Load testing and performance validation
- Security audit and penetration testing
- Monitoring and alerting
- Rollback plan documented
- On-call rotation assigned

**Cannot deploy until score is mitigated below 9.**
```

### Score 6-8: Required Mitigation (Gate: CONCERNS)

```markdown
**Gate Impact:** CONCERNS - Can deploy with documented mitigation plan

**Actions:**

- Targeted test suite (happy path + critical errors)
- Test environment setup
- Monitoring plan
- Document mitigation and owners

**Can deploy with approved mitigation plan.**
```

### Score 4-5: Recommended Mitigation

```markdown
**Gate Impact:** Advisory - Does not affect gate decision

**Actions:**

- Basic test coverage
- Standard monitoring
- Document known limitations

**Can deploy; mitigation recommended but not required.**
```

### Score 1-3: Optional Mitigation

```markdown
**Gate Impact:** None

**Actions:**

- Smoke test if desired
- Feature flag for easy disable (optional)

**Can deploy without mitigation.**
```
## Technical Implementation

For detailed risk governance patterns, see the knowledge base:

- [Knowledge Base Index - Risk & Gates](/docs/tea/reference/knowledge-base.md)
- [TEA Command Reference - `test-design`](/docs/tea/reference/commands.md#test-design)

### Risk Scoring Matrix

TEA uses this framework in `test-design`:

```
           Impact
          1    2    3
        ┌────┬────┬────┐
      1 │ 1  │ 2  │ 3  │  Low risk
P     2 │ 2  │ 4  │ 6  │  Medium risk
r     3 │ 3  │ 6  │ 9  │  High risk
o       └────┴────┴────┘
b        Low  Med High
```
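The matrix reduces to a tiny scoring helper. This is a sketch only; the function names are illustrative, not TEA commands:

```typescript
// Probability and impact each take values 1 (low), 2 (medium), 3 (high).
type Level = 1 | 2 | 3;

// Risk score is the product, exactly as in the 3x3 matrix above.
function riskScore(probability: Level, impact: Level): number {
  return probability * impact;
}

// Bands follow the gate rules used elsewhere in this document.
function riskBand(score: number): 'Critical' | 'High' | 'Medium' | 'Low' {
  if (score === 9) return 'Critical';
  if (score >= 6) return 'High';
  if (score >= 4) return 'Medium';
  return 'Low';
}
```

Note that a 3x3 product can only yield 1, 2, 3, 4, 6, or 9, which is why the bands never overlap.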
### Gate Decision Rules

| Score | Mitigation Required | Gate Impact |
|-------|---------------------|-------------|
| **9** | Mandatory, blocks release | FAIL if no mitigation |
| **6-8** | Required, documented plan | CONCERNS if incomplete |
| **4-5** | Recommended | Advisory only |
| **1-3** | Optional | No impact |
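The table above translates directly into a small decision function. This is a hypothetical sketch of the rules, not a TEA API:

```typescript
type Gate = 'PASS' | 'CONCERNS' | 'FAIL';

// Sketch of the gate rules table above (hypothetical helper, not a TEA API).
function gateImpact(score: number, hasMitigationPlan: boolean): Gate {
  if (score === 9) {
    // Critical: blocks release unless a mitigation plan exists.
    return hasMitigationPlan ? 'CONCERNS' : 'FAIL';
  }
  if (score >= 6) {
    // High: a documented plan is required; a missing plan raises CONCERNS.
    return hasMitigationPlan ? 'PASS' : 'CONCERNS';
  }
  // Medium and low scores are advisory only and do not affect the gate.
  return 'PASS';
}
```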
#### Gate Decision Flow

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
flowchart TD
    Start([Risk Assessment]) --> Score{Risk Score?}

    Score -->|Score = 9| Critical[CRITICAL RISK<br/>Score: 9]
    Score -->|Score 6-8| High[HIGH RISK<br/>Score: 6-8]
    Score -->|Score 4-5| Medium[MEDIUM RISK<br/>Score: 4-5]
    Score -->|Score 1-3| Low[LOW RISK<br/>Score: 1-3]

    Critical --> HasMit9{Mitigation<br/>Plan?}
    HasMit9 -->|Yes| Concerns9[CONCERNS ⚠️<br/>Can deploy with plan]
    HasMit9 -->|No| Fail[FAIL ❌<br/>Blocks release]

    High --> HasMit6{Mitigation<br/>Plan?}
    HasMit6 -->|Yes| Pass6[PASS ✅<br/>or CONCERNS ⚠️]
    HasMit6 -->|No| Concerns6[CONCERNS ⚠️<br/>Document plan needed]

    Medium --> Advisory[Advisory Only<br/>No gate impact]
    Low --> NoAction[No Action<br/>Proceed]

    style Critical fill:#f44336,stroke:#b71c1c,stroke-width:3px,color:#fff
    style Fail fill:#d32f2f,stroke:#b71c1c,stroke-width:3px,color:#fff
    style High fill:#ff9800,stroke:#e65100,stroke-width:2px,color:#000
    style Concerns9 fill:#ffc107,stroke:#f57f17,stroke-width:2px,color:#000
    style Concerns6 fill:#ffc107,stroke:#f57f17,stroke-width:2px,color:#000
    style Pass6 fill:#4caf50,stroke:#1b5e20,stroke-width:2px,color:#fff
    style Medium fill:#fff9c4,stroke:#f57f17,stroke-width:1px,color:#000
    style Low fill:#c8e6c9,stroke:#2e7d32,stroke-width:1px,color:#000
    style Advisory fill:#e8f5e9,stroke:#2e7d32,stroke-width:1px,color:#000
    style NoAction fill:#e8f5e9,stroke:#2e7d32,stroke-width:1px,color:#000
```
## Common Misconceptions

### "Risk-Based = Less Testing"

**Wrong:** Risk-based testing often means MORE testing where it matters.

**Example:**

- Traditional: 50 tests spread equally
- Risk-based: 70 tests focused on P0/P1 (more total, better allocated)

### "Low Priority = Skip Testing"

**Wrong:** P3 still gets smoke tests.

**Correct:**

- P3: Smoke test (feature works at all)
- P2: Happy path (feature works correctly)
- P1: Happy path + errors
- P0: Comprehensive (all scenarios)
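The coverage ladder above can be captured as data. This is an illustrative structure, not a TEA artifact:

```typescript
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

// Minimum coverage per priority, mirroring the ladder above
// (illustrative data structure, not a TEA artifact).
const minimumCoverage: Record<Priority, readonly string[]> = {
  P0: ['smoke', 'happy path', 'error cases', 'edge cases'], // comprehensive
  P1: ['smoke', 'happy path', 'error cases'],
  P2: ['smoke', 'happy path'],
  P3: ['smoke'], // feature works at all
};
```

Each rung adds to the one below it, so every priority level, including P3, keeps at least a smoke test.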
### "Risk Scores Are Permanent"

**Wrong:** Risk changes over time.

**Correct:**

- Initial launch: Payment is Score 9 (untested integration)
- After 6 months: Payment is Score 6 (proven in production)
- Re-assess risk quarterly
## Related Concepts

**Core TEA Concepts:**

- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Quality complements risk assessment
- [Engagement Models](/docs/tea/explanation/engagement-models.md) - When risk-based testing matters most
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - How risk patterns are loaded

**Technical Patterns:**

- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Building risk-appropriate test infrastructure
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Quality patterns for high-risk features

**Overview:**

- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Risk assessment in TEA lifecycle
- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - Design philosophy

## Practical Guides

**Workflow Guides:**

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Apply risk scoring
- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md) - Gate decisions based on risk
- [How to Run NFR Assessment](/docs/tea/how-to/workflows/run-nfr-assess.md) - NFR risk assessment

**Use-Case Guides:**

- [Running TEA for Enterprise](/docs/tea/how-to/brownfield/use-tea-for-enterprise.md) - Enterprise risk management

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - `test-design`, `nfr-assess`, `trace`
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Risk governance fragments
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - Risk-based testing term

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)
410  docs/tea/explanation/tea-overview.md  Normal file
@@ -0,0 +1,410 @@
---
title: "Test Architect (TEA) Overview"
description: Understanding the Test Architect (TEA) agent and its role in BMad Method
---

The Test Architect (TEA) is a specialized agent focused on quality strategy, test automation, and release gates in BMad Method projects.

:::tip[Design Philosophy]
TEA was built to solve the problem of AI-generated tests that rot in review. For the problem statement and design principles, see [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md). For setup, see [Setup Test Framework](/docs/tea/how-to/workflows/setup-test-framework.md).
:::

## Overview

- **Persona:** Murat, Master Test Architect and Quality Advisor focused on risk-based testing, fixture architecture, ATDD, and CI/CD governance.
- **Mission:** Deliver actionable quality strategies, automation coverage, and gate decisions that scale with project complexity and compliance demands.
- **Use When:** BMad Method or Enterprise track projects, integration risk is non-trivial, brownfield regression risk exists, or compliance/NFR evidence is required. (Quick Flow projects typically don't require TEA.)
## Choose Your TEA Engagement Model

BMad does not mandate TEA. There are five valid ways to use it (or skip it). Pick one intentionally.

1. **No TEA**
   - Skip all TEA workflows. Use your existing team testing approach.

2. **TEA Solo (Standalone)**
   - Use TEA on a non-BMad project. Bring your own requirements, acceptance criteria, and environments.
   - Typical sequence: `test-design` (system or epic) -> `atdd` and/or `automate` -> optional `test-review` -> `trace` for coverage and gate decisions.
   - Run `framework` or `ci` only if you want TEA to scaffold the harness or pipeline; they work best after you decide the stack/architecture.

   **TEA Lite (Beginner Approach):**
   - Simplest way to use TEA - just use `automate` to test existing features.
   - Perfect for learning TEA fundamentals in 30 minutes.
   - See the [TEA Lite Quickstart Tutorial](/docs/tea/tutorials/tea-lite-quickstart.md).

3. **Integrated: Greenfield - BMad Method (Simple/Standard Work)**
   - Phase 3: system-level `test-design`, then `framework` and `ci`.
   - Phase 4: per-epic `test-design`, optional `atdd`, then `automate` and optional `test-review`.
   - Release gate: `trace` (Phase 2: gate decision).

4. **Integrated: Brownfield - BMad Method or Enterprise (Simple or Complex)**
   - Phase 2: baseline `trace`.
   - Phase 3: system-level `test-design`, then `framework` and `ci`.
   - Phase 4: per-epic `test-design` focused on regression and integration risks.
   - Release gate: `trace` (Phase 2: gate decision); `nfr-assess` if not done earlier.
   - For brownfield BMad Method, follow the same flow with `nfr-assess` optional.

5. **Integrated: Greenfield - Enterprise Method (Enterprise/Compliance Work)**
   - Phase 2: `nfr-assess`.
   - Phase 3: system-level `test-design`, then `framework` and `ci`.
   - Phase 4: per-epic `test-design`, plus `atdd`/`automate`/`test-review`.
   - Release gate: `trace` (Phase 2: gate decision); archive artifacts as needed.

If you are unsure, default to the integrated path for your track and adjust later.
## TEA Command Catalog

| Command | Primary Outputs | Notes | With Playwright MCP Enhancements |
| ------- | --------------- | ----- | -------------------------------- |
| `framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
| `ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
| `test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
| `atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: UI selectors verified with live browser; API tests benefit from trace analysis |
| `automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording; avoid duplicate coverage | **+ Healing**: Visual debugging + trace analysis for test fixes; **+ Recording**: Verified selectors (UI) + network inspection (API) |
| `test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
| `nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
| `trace` | Phase 1: coverage matrix, recommendations. Phase 2: gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
## TEA Workflow Lifecycle

**Phase Numbering Note:** BMad uses a 4-phase methodology with an optional Phase 1 and a documentation prerequisite:

- **Documentation** (Optional for brownfield): Prerequisite using `document-project`
- **Phase 1** (Optional): Discovery/Analysis (`brainstorm`, `research`, `product-brief`)
- **Phase 2** (Required): Planning (`prd` creates PRD with FRs/NFRs)
- **Phase 3** (Track-dependent): Solutioning (`architecture` → `test-design` (system-level) → `create-epics-and-stories` → TEA: `framework`, `ci` → `implementation-readiness`)
- **Phase 4** (Required): Implementation (`sprint-planning` → per-epic: `test-design` → per-story: dev workflows)

TEA integrates into the BMad development lifecycle during Solutioning (Phase 3) and Implementation (Phase 4):
```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor':'#fff','primaryTextColor':'#000','primaryBorderColor':'#000','lineColor':'#000','secondaryColor':'#fff','tertiaryColor':'#fff','fontSize':'16px','fontFamily':'arial'}}}%%
graph TB
    subgraph Phase2["<b>Phase 2: PLANNING</b>"]
        PM["<b>PM: prd (creates PRD with FRs/NFRs)</b>"]
        PlanNote["<b>Business requirements phase</b>"]
        NFR2["<b>TEA: nfr-assess (optional, enterprise)</b>"]
        PM -.-> NFR2
        NFR2 -.-> PlanNote
        PM -.-> PlanNote
    end

    subgraph Phase3["<b>Phase 3: SOLUTIONING</b>"]
        Architecture["<b>Architect: architecture</b>"]
        EpicsStories["<b>PM/Architect: create-epics-and-stories</b>"]
        TestDesignSys["<b>TEA: test-design (system-level)</b>"]
        Framework["<b>TEA: framework (optional if needed)</b>"]
        CI["<b>TEA: ci (optional if needed)</b>"]
        GateCheck["<b>Architect: implementation-readiness</b>"]
        Architecture --> EpicsStories
        Architecture --> TestDesignSys
        TestDesignSys --> Framework
        EpicsStories --> Framework
        Framework --> CI
        CI --> GateCheck
        Phase3Note["<b>Epics created AFTER architecture,</b><br/><b>then system-level test design and test infrastructure setup</b>"]
        EpicsStories -.-> Phase3Note
    end

    subgraph Phase4["<b>Phase 4: IMPLEMENTATION - Per Epic Cycle</b>"]
        SprintPlan["<b>SM: sprint-planning</b>"]
        TestDesign["<b>TEA: test-design (per epic)</b>"]
        CreateStory["<b>SM: create-story</b>"]
        ATDD["<b>TEA: atdd (optional, before dev)</b>"]
        DevImpl["<b>DEV: implements story</b>"]
        Automate["<b>TEA: automate</b>"]
        TestReview1["<b>TEA: test-review (optional)</b>"]
        Trace1["<b>TEA: trace (refresh coverage)</b>"]

        SprintPlan --> TestDesign
        TestDesign --> CreateStory
        CreateStory --> ATDD
        ATDD --> DevImpl
        DevImpl --> Automate
        Automate --> TestReview1
        TestReview1 --> Trace1
        Trace1 -.->|next story| CreateStory
        TestDesignNote["<b>Test design: 'How do I test THIS epic?'</b><br/>Creates test-design-epic-N.md per epic"]
        TestDesign -.-> TestDesignNote
    end

    subgraph Gate["<b>EPIC/RELEASE GATE</b>"]
        NFR["<b>TEA: nfr-assess (if not done earlier)</b>"]
        TestReview2["<b>TEA: test-review (final audit, optional)</b>"]
        TraceGate["<b>TEA: trace - Phase 2: Gate</b>"]
        GateDecision{"<b>Gate Decision</b>"}

        NFR --> TestReview2
        TestReview2 --> TraceGate
        TraceGate --> GateDecision
        GateDecision -->|PASS| Pass["<b>PASS ✅</b>"]
        GateDecision -->|CONCERNS| Concerns["<b>CONCERNS ⚠️</b>"]
        GateDecision -->|FAIL| Fail["<b>FAIL ❌</b>"]
        GateDecision -->|WAIVED| Waived["<b>WAIVED ⏭️</b>"]
    end

    Phase2 --> Phase3
    Phase3 --> Phase4
    Phase4 --> Gate

    style Phase2 fill:#bbdefb,stroke:#0d47a1,stroke-width:3px,color:#000
    style Phase3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px,color:#000
    style Phase4 fill:#e1bee7,stroke:#4a148c,stroke-width:3px,color:#000
    style Gate fill:#ffe082,stroke:#f57c00,stroke-width:3px,color:#000
    style Pass fill:#4caf50,stroke:#1b5e20,stroke-width:3px,color:#000
    style Concerns fill:#ffc107,stroke:#f57f17,stroke-width:3px,color:#000
    style Fail fill:#f44336,stroke:#b71c1c,stroke-width:3px,color:#000
    style Waived fill:#9c27b0,stroke:#4a148c,stroke-width:3px,color:#000
```
**TEA workflows:** `framework` and `ci` run once in Phase 3 after architecture. `test-design` is **dual-mode**:

- **System-level (Phase 3):** Run immediately after architecture/ADR drafting to produce TWO documents: `test-design-architecture.md` (for Architecture/Dev teams: testability gaps, ASRs, NFR requirements) + `test-design-qa.md` (for QA team: test execution recipe, coverage plan, Sprint 0 setup). Feeds the implementation-readiness gate.
- **Epic-level (Phase 4):** Run per-epic to produce `test-design-epic-N.md` (risk, priorities, coverage plan).

The Quick Flow track skips Phases 1 and 3. BMad Method and Enterprise use all phases based on project needs.

When an ADR or architecture draft is produced, run `test-design` in **system-level** mode before the implementation-readiness gate. This ensures the ADR has an attached testability review and ADR → test mapping. Keep the test design updated if ADRs change.
## Why TEA Is Different from Other BMM Agents

TEA spans multiple phases (Phase 3, Phase 4, and the release gate), whereas most BMM agents operate in a single phase. That multi-phase role is paired with a dedicated testing knowledge base so standards stay consistent across projects.

### TEA's 8 Workflows Across Phases

| Phase | TEA Workflows | Frequency | Purpose |
| ----- | ------------- | --------- | ------- |
| **Phase 2** | (none) | - | Planning phase - PM defines requirements |
| **Phase 3** | `test-design` (system-level), `framework`, `ci` | Once per project | System testability review and test infrastructure setup |
| **Phase 4** | `test-design`, `atdd`, `automate`, `test-review`, `trace` | Per epic/story | Test planning per epic, then per-story testing |
| **Release** | `nfr-assess`, `trace` (Phase 2: gate) | Per epic/release | Go/no-go decision |

**Note:** `trace` is a two-phase workflow: Phase 1 (traceability) + Phase 2 (gate decision). This reduces cognitive load while maintaining a natural workflow.
### Why TEA Requires Its Own Knowledge Base

TEA uniquely requires:

- **Extensive domain knowledge**: Test patterns, CI/CD, fixtures, and quality practices
- **Cross-cutting concerns**: Standards that apply across all BMad projects (not just PRDs or stories)
- **Optional integrations**: Playwright-utils and MCP enhancements

This architecture lets TEA maintain consistent, production-ready testing patterns while operating across multiple phases.
## Track Cheat Sheets (Condensed)

These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks** across the **4-Phase Methodology** (Phase 1: Analysis, Phase 2: Planning, Phase 3: Solutioning, Phase 4: Implementation).

**Note:** The Quick Flow track typically doesn't require TEA (covered in the Overview). These cheat sheets focus on the BMad Method and Enterprise tracks, where TEA adds value.

**Legend for Track Deltas:**

- ➕ = New workflow or phase added (doesn't exist in baseline)
- 🔄 = Modified focus (same workflow, different emphasis or purpose)
- 📦 = Additional output or archival requirement
### Greenfield - BMad Method (Simple/Standard Work)

**Planning Track:** BMad Method (PRD + Architecture)
**Use Case:** New projects with standard complexity

| Workflow Stage | Test Architect | Dev / Team | Outputs |
| -------------- | -------------- | ---------- | ------- |
| **Phase 1**: Discovery | - | Analyst `product-brief` (optional) | `product-brief.md` |
| **Phase 2**: Planning | - | PM `prd` (creates PRD with FRs/NFRs) | PRD with functional/non-functional requirements |
| **Phase 3**: Solutioning | Run `framework`, `ci` AFTER architecture and epic creation | Architect `architecture`, `create-epics-and-stories`, `implementation-readiness` | Architecture, epics/stories, test scaffold, CI pipeline |
| **Phase 4**: Sprint Start | - | SM `sprint-planning` | Sprint status file with all epics and stories |
| **Phase 4**: Epic Planning | Run `test-design` for THIS epic (per-epic test plan) | Review epic scope | `test-design-epic-N.md` with risk assessment and test plan |
| **Phase 4**: Story Dev | (Optional) `atdd` before dev, then `automate` after | SM `create-story`, DEV implements | Tests, story implementation |
| **Phase 4**: Story Review | Execute `test-review` (optional), re-run `trace` | Address recommendations, update code/tests | Quality report, refreshed coverage matrix |
| **Phase 4**: Release Gate | (Optional) `test-review` for final audit, run `trace` (Phase 2) | Confirm Definition of Done, share release notes | Quality audit, gate YAML + release summary |

**Key notes:**

- Run `framework` and `ci` once in Phase 3 after architecture.
- Run `test-design` per epic in Phase 4; use `atdd` before dev when helpful.
- Use `trace` for gate decisions; `test-review` is an optional audit.
### Brownfield - BMad Method or Enterprise (Simple or Complex)

**Planning Tracks:** BMad Method or Enterprise Method
**Use Case:** Existing codebases: simple additions (BMad Method) or complex enterprise requirements (Enterprise Method)

**🔄 Brownfield Deltas from Greenfield:**

- ➕ Documentation (Prerequisite) - Document existing codebase if undocumented
- ➕ Phase 2: `trace` - Baseline existing test coverage before planning
- 🔄 Phase 4: `test-design` - Focus on regression hotspots and brownfield risks
- 🔄 Phase 4: Story Review - May include `nfr-assess` if not done earlier

| Workflow Stage | Test Architect | Dev / Team | Outputs |
| -------------- | -------------- | ---------- | ------- |
| **Documentation**: Prerequisite ➕ | - | Analyst `document-project` (if undocumented) | Comprehensive project documentation |
| **Phase 1**: Discovery | - | Analyst/PM/Architect rerun planning workflows | Updated planning artifacts in `{output_folder}` |
| **Phase 2**: Planning | Run ➕ `trace` (baseline coverage) | PM `prd` (creates PRD with FRs/NFRs) | PRD with FRs/NFRs, ➕ coverage baseline |
| **Phase 3**: Solutioning | Run `framework`, `ci` AFTER architecture and epic creation | Architect `architecture`, `create-epics-and-stories`, `implementation-readiness` | Architecture, epics/stories, test framework, CI pipeline |
| **Phase 4**: Sprint Start | - | SM `sprint-planning` | Sprint status file with all epics and stories |
| **Phase 4**: Epic Planning | Run `test-design` for THIS epic 🔄 (regression hotspots) | Review epic scope and brownfield risks | `test-design-epic-N.md` with brownfield risk assessment and mitigation |
| **Phase 4**: Story Dev | (Optional) `atdd` before dev, then `automate` after | SM `create-story`, DEV implements | Tests, story implementation |
| **Phase 4**: Story Review | Apply `test-review` (optional), re-run `trace`, ➕ `nfr-assess` if needed | Resolve gaps, update docs/tests | Quality report, refreshed coverage matrix, NFR report |
| **Phase 4**: Release Gate | (Optional) `test-review` for final audit, run `trace` (Phase 2) | Capture sign-offs, share release notes | Quality audit, gate YAML + release summary |

**Key notes:**

- Start with `trace` in Phase 2 to baseline coverage.
- Focus `test-design` on regression hotspots and integration risk.
- Run `nfr-assess` before the gate if it wasn't done earlier.
### Greenfield - Enterprise Method (Enterprise/Compliance Work)

**Planning Track:** Enterprise Method (BMad Method + extended security/devops/test strategies)
**Use Case:** New enterprise projects with compliance, security, or complex regulatory requirements

**🏢 Enterprise Deltas from BMad Method:**

- ➕ Phase 1: `research` - Domain and compliance research (recommended)
- ➕ Phase 2: `nfr-assess` - Capture NFR requirements early (security/performance/reliability)
- 🔄 Phase 4: `test-design` - Enterprise focus (compliance, security architecture alignment)
- 📦 Release Gate - Archive artifacts and compliance evidence for audits

| Workflow Stage | Test Architect | Dev / Team | Outputs |
| -------------- | -------------- | ---------- | ------- |
| **Phase 1**: Discovery | - | Analyst ➕ `research`, `product-brief` | Domain research, compliance analysis, product brief |
| **Phase 2**: Planning | Run ➕ `nfr-assess` | PM `prd` (creates PRD with FRs/NFRs), UX `create-ux-design` | Enterprise PRD with FRs/NFRs, UX design, ➕ NFR documentation |
| **Phase 3**: Solutioning | Run `framework`, `ci` AFTER architecture and epic creation | Architect `architecture`, `create-epics-and-stories`, `implementation-readiness` | Architecture, epics/stories, test framework, CI pipeline |
| **Phase 4**: Sprint Start | - | SM `sprint-planning` | Sprint plan with all epics |
| **Phase 4**: Epic Planning | Run `test-design` for THIS epic 🔄 (compliance focus) | Review epic scope and compliance requirements | `test-design-epic-N.md` with security/performance/compliance focus |
| **Phase 4**: Story Dev | (Optional) `atdd`, `automate`, `test-review`, `trace` per story | SM `create-story`, DEV implements | Tests, fixtures, quality reports, coverage matrices |
| **Phase 4**: Release Gate | Final `test-review` audit, run `trace` (Phase 2), 📦 archive artifacts | Capture sign-offs, 📦 compliance evidence | Quality audit, updated assessments, gate YAML, 📦 audit trail |

**Key notes:**

- Run `nfr-assess` early in Phase 2.
- `test-design` emphasizes compliance, security, and performance alignment.
- Archive artifacts at the release gate for audits.
||||
**Related how-to guides:**

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md)
- [How to Set Up a Test Framework](/docs/tea/how-to/workflows/setup-test-framework.md)
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md)
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md)
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md)
- [How to Set Up CI Pipeline](/docs/tea/how-to/workflows/setup-ci.md)
- [How to Run NFR Assessment](/docs/tea/how-to/workflows/run-nfr-assess.md)
- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md)
## Deep Dive Concepts

Want to understand TEA principles and patterns in depth?

**Core Principles:**

- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Probability × impact scoring, P0-P3 priorities
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Definition of Done, determinism, isolation
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - Context engineering with tea-index.csv

**Technical Patterns:**

- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Pure function → fixture → composition
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Eliminating flakiness with intercept-before-navigate

**Engagement & Strategy:**

- [Engagement Models](/docs/tea/explanation/engagement-models.md) - TEA Lite, TEA Solo, TEA Integrated (5 models explained)

**Philosophy:**

- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - **Start here to understand WHY TEA exists** - The problem with AI-generated tests and TEA's three-part solution
## Optional Integrations

### Playwright Utils (`@seontechnologies/playwright-utils`)

Production-ready fixtures and utilities that enhance TEA workflows.

- Install: `npm install -D @seontechnologies/playwright-utils`

> Note: Playwright Utils is enabled via the installer. Only set `tea_use_playwright_utils` in `_bmad/bmm/config.yaml` if you need to override the installer choice.

- Impacts: `framework`, `atdd`, `automate`, `test-review`, `ci`
- Utilities include: api-request, auth-session, network-recorder, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
### Playwright MCP Enhancements

Live browser verification for test design and automation.

**Two Playwright MCP servers** (actively maintained, continuously updated):

- `playwright` - Browser automation (`npx @playwright/mcp@latest`)
- `playwright-test` - Test runner with failure analysis (`npx playwright run-test-mcp-server`)

**Configuration example**:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "playwright-test": {
      "command": "npx",
      "args": ["playwright", "run-test-mcp-server"]
    }
  }
}
```

- Helps `test-design` validate actual UI behavior.
- Helps `atdd` and `automate` verify selectors against the live DOM.
- Enhances healing with `browser_snapshot`, console, network, and locator tools.

**To disable**: set `tea_use_mcp_enhancements: false` in `_bmad/bmm/config.yaml` or remove MCPs from IDE config.

---
## Complete TEA Documentation Navigation

### Start Here

**New to TEA? Start with the tutorial:**

- [TEA Lite Quickstart Tutorial](/docs/tea/tutorials/tea-lite-quickstart.md) - 30-minute beginner guide using TodoMVC

### Workflow Guides (Task-Oriented)

**All 8 TEA workflows with step-by-step instructions:**

1. [How to Set Up a Test Framework with TEA](/docs/tea/how-to/workflows/setup-test-framework.md) - Scaffold Playwright or Cypress
2. [How to Set Up CI Pipeline with TEA](/docs/tea/how-to/workflows/setup-ci.md) - Configure CI/CD with selective testing
3. [How to Run Test Design with TEA](/docs/tea/how-to/workflows/run-test-design.md) - Risk-based test planning (system or epic)
4. [How to Run ATDD with TEA](/docs/tea/how-to/workflows/run-atdd.md) - Generate failing tests before implementation
5. [How to Run Automate with TEA](/docs/tea/how-to/workflows/run-automate.md) - Expand test coverage after implementation
6. [How to Run Test Review with TEA](/docs/tea/how-to/workflows/run-test-review.md) - Audit test quality (0-100 scoring)
7. [How to Run NFR Assessment with TEA](/docs/tea/how-to/workflows/run-nfr-assess.md) - Validate non-functional requirements
8. [How to Run Trace with TEA](/docs/tea/how-to/workflows/run-trace.md) - Coverage traceability + gate decisions

### Customization & Integration

**Optional enhancements to TEA workflows:**

- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - Production-ready fixtures and 9 utilities
- [Enable TEA MCP Enhancements](/docs/tea/how-to/customization/enable-tea-mcp-enhancements.md) - Live browser verification, visual debugging

### Use-Case Guides

**Specialized guidance for specific contexts:**

- [Using TEA with Existing Tests (Brownfield)](/docs/tea/how-to/brownfield/use-tea-with-existing-tests.md) - Incremental improvement, regression hotspots, baseline coverage
- [Running TEA for Enterprise](/docs/tea/how-to/brownfield/use-tea-for-enterprise.md) - Compliance, NFR assessment, audit trails, SOC 2/HIPAA

### Concept Deep Dives (Understanding-Oriented)

**Understand the principles and patterns:**

- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Probability × impact scoring, P0-P3 priorities, mitigation strategies
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Definition of Done, determinism, isolation, explicit assertions
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Pure function → fixture → composition pattern
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Intercept-before-navigate, eliminating flakiness
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - Context engineering with tea-index.csv, 33 fragments
- [Engagement Models](/docs/tea/explanation/engagement-models.md) - TEA Lite, TEA Solo, TEA Integrated (5 models explained)

### Philosophy & Design

**Why TEA exists and how it works:**

- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - **Start here to understand WHY** - The problem with AI-generated tests and TEA's three-part solution

### Reference (Quick Lookup)

**Factual information for quick reference:**

- [TEA Command Reference](/docs/tea/reference/commands.md) - All 8 workflows: inputs, outputs, phases, frequency
- [TEA Configuration Reference](/docs/tea/reference/configuration.md) - Config options, file locations, setup examples
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - 33 fragments categorized and explained
- [Glossary - TEA Section](/docs/tea/glossary/index.md#test-architect-tea-concepts) - 20 TEA-specific terms defined
907 docs/tea/explanation/test-quality-standards.md (new file)
@@ -0,0 +1,907 @@
---
title: "Test Quality Standards Explained"
description: Understanding TEA's Definition of Done for deterministic, isolated, and maintainable tests
---

# Test Quality Standards Explained

Test quality standards define what makes a test "good" in TEA. These aren't suggestions - they're the Definition of Done that prevents tests from rotting in review.

## Overview

**TEA's Quality Principles:**

- **Deterministic** - Same result every run
- **Isolated** - No dependencies on other tests
- **Explicit** - Assertions visible in test body
- **Focused** - Single responsibility, appropriate size
- **Fast** - Execute in reasonable time

**Why these matter:** Tests that violate these principles create maintenance burden, slow down development, and lose team trust.
## The Problem

### Tests That Rot in Review

```typescript
// ❌ The anti-pattern: This test will rot
test('user can do stuff', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(5000); // Non-deterministic

  if (await page.locator('.banner').isVisible()) { // Conditional
    await page.click('.dismiss');
  }

  try { // Try-catch for flow control
    await page.click('#load-more');
  } catch (e) {
    // Silently continue
  }

  // ... 300 more lines of test logic
  // ... no clear assertions
});
```

**What's wrong:**

- **Hard wait** - Flaky, wastes time
- **Conditional** - Non-deterministic behavior
- **Try-catch** - Hides failures
- **Too large** - Hard to maintain
- **Vague name** - Unclear purpose
- **No explicit assertions** - What's being tested?

**Result:** PR review comments: "This test is flaky, please fix" → never merged → test deleted → coverage lost
### AI-Generated Tests Without Standards

AI-generated tests without quality guardrails:

```typescript
// AI generates 50 tests like this:
test('test1', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(3000);
  // ... flaky, vague, redundant
});

test('test2', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(3000);
  // ... duplicates test1
});

// ... 48 more similar tests
```

**Result:** 50 tests, 80% redundant, 90% flaky, 0% trusted by team - low-quality outputs that create maintenance burden.
## The Solution: TEA's Quality Standards

### 1. Determinism (No Flakiness)

**Rule:** Test produces same result every run.

**Requirements:**

- ❌ No hard waits (`waitForTimeout`)
- ❌ No conditionals for flow control (`if/else`)
- ❌ No try-catch for flow control
- ✅ Use network-first patterns (wait for responses)
- ✅ Use explicit waits (`waitForSelector`, `waitForResponse`)

**Bad Example:**

```typescript
test('flaky test', async ({ page }) => {
  await page.click('button');
  await page.waitForTimeout(2000); // ❌ Might be too short

  if (await page.locator('.modal').isVisible()) { // ❌ Non-deterministic
    await page.click('.dismiss');
  }

  try { // ❌ Silently handles errors
    await expect(page.locator('.success')).toBeVisible();
  } catch (e) {
    // Test passes even if assertion fails!
  }
});
```

**Good Example (Vanilla Playwright):**

```typescript
test('deterministic test', async ({ page }) => {
  const responsePromise = page.waitForResponse(
    resp => resp.url().includes('/api/submit') && resp.ok()
  );

  await page.click('button');
  await responsePromise; // ✅ Wait for actual response

  // Modal should ALWAYS show (make it deterministic)
  await expect(page.locator('.modal')).toBeVisible();
  await page.click('.dismiss');

  // Explicit assertion (fails if not visible)
  await expect(page.locator('.success')).toBeVisible();
});
```

**With Playwright Utils (Even Cleaner):**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('deterministic test', async ({ page, interceptNetworkCall }) => {
  const submitCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/submit'
  });

  await page.click('button');

  // Wait for actual response (automatic JSON parsing)
  const { status, responseJson } = await submitCall;
  expect(status).toBe(200);

  // Modal should ALWAYS show (make it deterministic)
  await expect(page.locator('.modal')).toBeVisible();
  await page.click('.dismiss');

  // Explicit assertion (fails if not visible)
  await expect(page.locator('.success')).toBeVisible();
});
```

**Why both work:**

- Waits for actual event (network response)
- No conditionals (behavior is deterministic)
- Assertions fail loudly (no silent failures)
- Same result every run (deterministic)

**Playwright Utils additional benefits:**

- Automatic JSON parsing
- `{ status, responseJson }` structure (can validate response data)
- No manual `await response.json()`
### 2. Isolation (No Dependencies)

**Rule:** Test runs independently, no shared state.

**Requirements:**

- ✅ Self-cleaning (cleanup after test)
- ✅ No global state dependencies
- ✅ Can run in parallel
- ✅ Can run in any order
- ✅ Use unique test data

**Bad Example:**

```typescript
// ❌ Tests depend on execution order
let userId: string; // Shared global state

test('create user', async ({ apiRequest }) => {
  const { body } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { email: 'test@example.com' } // Hard-coded data
  });
  userId = body.id; // Store in global
});

test('update user', async ({ apiRequest }) => {
  // Depends on previous test setting userId
  await apiRequest({
    method: 'PATCH',
    path: `/api/users/${userId}`,
    body: { name: 'Updated' }
  });
  // No cleanup - leaves user in database
});
```

**Problems:**

- Tests must run in order (can't parallelize)
- Second test fails if first skipped (`.only`)
- Hard-coded data causes conflicts
- No cleanup (database fills with test data)

**Good Example (Vanilla Playwright):**

```typescript
test('should update user profile', async ({ request }) => {
  // Create unique test data
  const testEmail = `test-${Date.now()}@example.com`;

  // Setup: Create user
  const createResp = await request.post('/api/users', {
    data: { email: testEmail, name: 'Original' }
  });
  const user = await createResp.json();

  // Test: Update user
  const updateResp = await request.patch(`/api/users/${user.id}`, {
    data: { name: 'Updated' }
  });
  const updated = await updateResp.json();

  expect(updated.name).toBe('Updated');

  // Cleanup: Delete user
  await request.delete(`/api/users/${user.id}`);
});
```

**Even Better (With Playwright Utils):**

```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { faker } from '@faker-js/faker';

test('should update user profile', async ({ apiRequest }) => {
  // Dynamic unique test data
  const testEmail = faker.internet.email();

  // Setup: Create user
  const { status: createStatus, body: user } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { email: testEmail, name: faker.person.fullName() }
  });

  expect(createStatus).toBe(201);

  // Test: Update user
  const { status, body: updated } = await apiRequest({
    method: 'PATCH',
    path: `/api/users/${user.id}`,
    body: { name: 'Updated Name' }
  });

  expect(status).toBe(200);
  expect(updated.name).toBe('Updated Name');

  // Cleanup: Delete user
  await apiRequest({
    method: 'DELETE',
    path: `/api/users/${user.id}`
  });
});
```

**Playwright Utils Benefits:**

- `{ status, body }` destructuring (cleaner than `response.status()` + `await response.json()`)
- No manual `await response.json()`
- Automatic retry for 5xx errors
- Optional schema validation with `.validateSchema()`

**Why it works:**

- No global state
- Unique test data (no conflicts)
- Self-cleaning (deletes user)
- Can run in parallel
- Can run in any order
### 3. Explicit Assertions (No Hidden Validation)

**Rule:** Assertions visible in test body, not abstracted.

**Requirements:**

- ✅ Assertions in test code (not helper functions)
- ✅ Specific assertions (not generic `toBeTruthy`)
- ✅ Meaningful expectations (test actual behavior)

**Bad Example:**

```typescript
// ❌ Assertions hidden in helper
async function verifyProfilePage(page: Page) {
  // Assertions buried in helper (not visible in test)
  await expect(page.locator('h1')).toBeVisible();
  await expect(page.locator('.email')).toContainText('@');
  await expect(page.locator('.name')).not.toBeEmpty();
}

test('profile page', async ({ page }) => {
  await page.goto('/profile');
  await verifyProfilePage(page); // What's being verified?
});
```

**Problems:**

- Can't see what's tested (need to read helper)
- Hard to debug failures (which assertion failed?)
- Reduces test readability
- Hides important validation

**Good Example:**

```typescript
// ✅ Assertions explicit in test
test('should display profile with correct data', async ({ page }) => {
  await page.goto('/profile');

  // Explicit assertions - clear what's tested
  await expect(page.locator('h1')).toContainText('Test User');
  await expect(page.locator('.email')).toContainText('test@example.com');
  await expect(page.locator('.bio')).toContainText('Software Engineer');
  await expect(page.locator('img[alt="Avatar"]')).toBeVisible();
});
```

**Why it works:**

- See what's tested at a glance
- Debug failures easily (know which assertion failed)
- Test is self-documenting
- No hidden behavior

**Exception:** Use helpers for setup and cleanup, not for assertions.
### 4. Focused Tests (Appropriate Size)

**Rule:** Test has single responsibility, reasonable size.

**Requirements:**

- ✅ Test size < 300 lines
- ✅ Single responsibility (test one thing well)
- ✅ Clear describe/test names
- ✅ Appropriate scope (not too granular, not too broad)

**Bad Example:**

```typescript
// ❌ 500-line test testing everything
test('complete user flow', async ({ page }) => {
  // Registration (50 lines)
  await page.goto('/register');
  await page.fill('#email', 'test@example.com');
  // ... 48 more lines

  // Profile setup (100 lines)
  await page.goto('/profile');
  // ... 98 more lines

  // Settings configuration (150 lines)
  await page.goto('/settings');
  // ... 148 more lines

  // Data export (200 lines)
  await page.goto('/export');
  // ... 198 more lines

  // Total: 500 lines, testing 4 different features
});
```

**Problems:**

- Failure in line 50 prevents testing lines 51-500
- Hard to understand (what's being tested?)
- Slow to execute (testing too much)
- Hard to debug (which feature failed?)

**Good Example:**

```typescript
// ✅ Focused tests - one responsibility each

test('should register new user', async ({ page }) => {
  await page.goto('/register');
  await page.fill('#email', 'test@example.com');
  await page.fill('#password', 'password123');
  await page.click('button[type="submit"]');

  await expect(page).toHaveURL('/welcome');
  await expect(page.locator('h1')).toContainText('Welcome');
});

test('should configure user profile', async ({ page, authSession }) => {
  await authSession.login({ email: 'test@example.com', password: 'pass' });
  await page.goto('/profile');

  await page.fill('#name', 'Test User');
  await page.fill('#bio', 'Software Engineer');
  await page.click('button:has-text("Save")');

  await expect(page.locator('.success')).toBeVisible();
});

// ... separate tests for settings, export (each < 50 lines)
```

**Why it works:**

- Each test has one responsibility
- Failure is easy to diagnose
- Can run tests independently
- Test names describe exactly what's tested
### 5. Fast Execution (Performance Budget)

**Rule:** Individual test executes in < 1.5 minutes.

**Requirements:**

- ✅ Test execution < 90 seconds
- ✅ Efficient selectors (`getByRole` > XPath)
- ✅ Minimal redundant actions
- ✅ Parallel execution enabled

**Bad Example:**

```typescript
// ❌ Slow test (3+ minutes)
test('slow test', async ({ page }) => {
  await page.goto('/');
  await page.waitForTimeout(10000); // 10s wasted

  // Navigate through 10 pages (2 minutes)
  for (let i = 1; i <= 10; i++) {
    await page.click(`a[href="/page-${i}"]`);
    await page.waitForTimeout(5000); // 5s per page = 50s wasted
  }

  // Complex XPath selector (slow)
  await page.locator('//div[@class="container"]/section[3]/div[2]/p').click();

  // More waiting
  await page.waitForTimeout(30000); // 30s wasted

  await expect(page.locator('.result')).toBeVisible();
});
```

**Total time:** 3+ minutes (90 seconds wasted on hard waits)

**Good Example (Vanilla Playwright):**

```typescript
// ✅ Fast test (< 10 seconds)
test('fast test', async ({ page }) => {
  // Set up response wait
  const apiPromise = page.waitForResponse(
    resp => resp.url().includes('/api/result') && resp.ok()
  );

  await page.goto('/');

  // Direct navigation (skip intermediate pages)
  await page.goto('/page-10');

  // Efficient selector
  await page.getByRole('button', { name: 'Submit' }).click();

  // Wait for actual response (fast when API is fast)
  await apiPromise;

  await expect(page.locator('.result')).toBeVisible();
});
```

**With Playwright Utils:**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('fast test', async ({ page, interceptNetworkCall }) => {
  // Set up interception
  const resultCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/result'
  });

  await page.goto('/');

  // Direct navigation (skip intermediate pages)
  await page.goto('/page-10');

  // Efficient selector
  await page.getByRole('button', { name: 'Submit' }).click();

  // Wait for actual response (automatic JSON parsing)
  const { status, responseJson } = await resultCall;

  expect(status).toBe(200);
  await expect(page.locator('.result')).toBeVisible();

  // Can also validate response data if needed
  // expect(responseJson.data).toBeDefined();
});
```

**Total time:** < 10 seconds (no wasted waits)

**Both examples achieve:**

- No hard waits (wait for actual events)
- Direct navigation (skip unnecessary steps)
- Efficient selectors (`getByRole`)
- Fast execution

**Playwright Utils bonus:**

- Can validate API response data easily
- Automatic JSON parsing
- Cleaner API
## TEA's Quality Scoring

TEA reviews tests against these standards in `test-review`:

### Scoring Categories (100 points total)

**Determinism (35 points):**

- No hard waits: 10 points
- No conditionals: 10 points
- No try-catch flow: 10 points
- Network-first patterns: 5 points

**Isolation (25 points):**

- Self-cleaning: 15 points
- No global state: 5 points
- Parallel-safe: 5 points

**Assertions (20 points):**

- Explicit in test body: 10 points
- Specific and meaningful: 10 points

**Structure (10 points):**

- Test size < 300 lines: 5 points
- Clear naming: 5 points

**Performance (10 points):**

- Execution time < 1.5 min: 10 points
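The category weights above can be expressed as a small scoring sketch. This is purely illustrative - the `CategoryScores` type and `totalScore` helper below are our own names, not part of TEA's implementation:

```typescript
// Illustrative sketch of the 100-point rubric; the shape and helper
// names are hypothetical, not TEA internals.
type CategoryScores = {
  determinism: number; // max 35
  isolation: number;   // max 25
  assertions: number;  // max 20
  structure: number;   // max 10
  performance: number; // max 10
};

const MAX: CategoryScores = {
  determinism: 35,
  isolation: 25,
  assertions: 20,
  structure: 10,
  performance: 10,
};

// Sum category scores, clamping each to its maximum.
function totalScore(scores: CategoryScores): number {
  return (Object.keys(MAX) as (keyof CategoryScores)[])
    .reduce((sum, key) => sum + Math.min(scores[key], MAX[key]), 0);
}

// A perfect test earns the full 100 points.
console.log(totalScore(MAX)); // 100

// A test with a hard wait (-10) and no cleanup (-15):
console.log(totalScore({ ...MAX, determinism: 25, isolation: 10 })); // 75
```

Individual violations subtract from a category (a hard wait costs 10 of Determinism's 35 points), so the total reflects how many standards a test breaks.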
#### Quality Scoring Breakdown

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'14px'}}}%%
pie title Test Quality Score (100 points)
    "Determinism" : 35
    "Isolation" : 25
    "Assertions" : 20
    "Structure" : 10
    "Performance" : 10
```

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'13px'}}}%%
flowchart LR
    subgraph Det[Determinism - 35 pts]
        D1[No hard waits<br/>10 pts]
        D2[No conditionals<br/>10 pts]
        D3[No try-catch flow<br/>10 pts]
        D4[Network-first<br/>5 pts]
    end

    subgraph Iso[Isolation - 25 pts]
        I1[Self-cleaning<br/>15 pts]
        I2[No global state<br/>5 pts]
        I3[Parallel-safe<br/>5 pts]
    end

    subgraph Assrt[Assertions - 20 pts]
        A1[Explicit in body<br/>10 pts]
        A2[Specific/meaningful<br/>10 pts]
    end

    subgraph Struct[Structure - 10 pts]
        S1[Size < 300 lines<br/>5 pts]
        S2[Clear naming<br/>5 pts]
    end

    subgraph Perf[Performance - 10 pts]
        P1[Time < 1.5 min<br/>10 pts]
    end

    Det --> Total([Total: 100 points])
    Iso --> Total
    Assrt --> Total
    Struct --> Total
    Perf --> Total

    style Det fill:#ffebee,stroke:#c62828,stroke-width:2px
    style Iso fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    style Assrt fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
    style Struct fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style Perf fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style Total fill:#fff,stroke:#000,stroke-width:3px
```
### Score Interpretation

| Score | Interpretation | Action |
| ---------- | -------------- | -------------------------------------- |
| **90-100** | Excellent | Production-ready, minimal changes |
| **80-89** | Good | Minor improvements recommended |
| **70-79** | Acceptable | Address recommendations before release |
| **60-69** | Needs Work | Fix critical issues |
| **< 60** | Critical | Significant refactoring needed |
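The bands in this table map directly to code. A minimal sketch (the function name and labels are ours, not a TEA API):

```typescript
// Maps a 0-100 quality score to its interpretation band from the
// table above. Illustrative only - not part of TEA's tooling.
function interpretScore(score: number): string {
  if (score >= 90) return 'Excellent';
  if (score >= 80) return 'Good';
  if (score >= 70) return 'Acceptable';
  if (score >= 60) return 'Needs Work';
  return 'Critical';
}

console.log(interpretScore(95)); // Excellent
console.log(interpretScore(45)); // Critical
```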
## Comparison: Good vs Bad Tests

### Example: User Login

**Bad Test (Score: 25/100):**

```typescript
test('login test', async ({ page }) => { // Vague name
  await page.goto('/login');
  await page.waitForTimeout(3000); // -10 (hard wait)

  await page.fill('[name="email"]', 'test@example.com');
  await page.fill('[name="password"]', 'password');

  if (await page.locator('.remember-me').isVisible()) { // -10 (conditional)
    await page.click('.remember-me');
  }

  await page.click('button');

  try { // -10 (try-catch flow)
    await page.waitForURL('/dashboard', { timeout: 5000 });
  } catch (e) {
    // Ignore navigation failure
  }

  // No assertions!
  // No cleanup!
});
```

**Issues:**

- Determinism: 5/35 (hard wait, conditional, try-catch)
- Isolation: 10/25 (no cleanup)
- Assertions: 0/20 (no assertions!)
- Structure: 5/10 (vague name)
- Performance: 5/10 (slow)
- **Total: 25/100**
**Good Test (Score: 95/100):**

```typescript
test('should login with valid credentials and redirect to dashboard', async ({ page, authSession }) => {
  // Use fixture for deterministic auth
  const loginPromise = page.waitForResponse(
    resp => resp.url().includes('/api/auth/login') && resp.ok()
  );

  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Wait for actual API response
  const response = await loginPromise;
  const { token } = await response.json();

  // Explicit assertions
  expect(token).toBeDefined();
  await expect(page).toHaveURL('/dashboard');
  await expect(page.getByText('Welcome back')).toBeVisible();

  // Cleanup handled by authSession fixture
});
```

**Quality:**

- Determinism: 35/35 (network-first, no conditionals)
- Isolation: 25/25 (fixture handles cleanup)
- Assertions: 20/20 (explicit and specific)
- Structure: 10/10 (clear name, focused)
- Performance: 5/10 (< 1 min)
- **Total: 95/100**
### Example: API Testing

**Bad Test (Score: 50/100):**

```typescript
test('api test', async ({ request }) => {
  const response = await request.post('/api/users', {
    data: { email: 'test@example.com' } // Hard-coded (conflicts)
  });

  if (response.ok()) { // Conditional
    const user = await response.json();
    // Weak assertion
    expect(user).toBeTruthy();
  }

  // No cleanup - user left in database
});
```

**Good Test (Score: 92/100):**

```typescript
test('should create user with valid data', async ({ apiRequest }) => {
  // Unique test data
  const testEmail = `test-${Date.now()}@example.com`;

  // Create user
  const { status, body } = await apiRequest({
    method: 'POST',
    path: '/api/users',
    body: { email: testEmail, name: 'Test User' }
  });

  // Explicit assertions
  expect(status).toBe(201);
  expect(body.id).toBeDefined();
  expect(body.email).toBe(testEmail);
  expect(body.name).toBe('Test User');

  // Cleanup
  await apiRequest({
    method: 'DELETE',
    path: `/api/users/${body.id}`
  });
});
```
## How TEA Enforces Standards
|
||||
|
||||
### During Test Generation (`atdd`, `automate`)
|
||||
|
||||
TEA generates tests following standards by default:
|
||||
|
||||
```typescript
|
||||
// TEA-generated test (automatically follows standards)
|
||||
test('should submit contact form', async ({ page }) => {
|
||||
// Network-first pattern (no hard waits)
|
||||
const submitPromise = page.waitForResponse(
|
||||
resp => resp.url().includes('/api/contact') && resp.ok()
|
||||
);
|
||||
|
||||
// Accessible selectors (resilient)
|
||||
await page.getByLabel('Name').fill('Test User');
|
||||
await page.getByLabel('Email').fill('test@example.com');
|
||||
await page.getByLabel('Message').fill('Test message');
|
||||
await page.getByRole('button', { name: 'Send' }).click();
|
||||
|
||||
const response = await submitPromise;
|
||||
const result = await response.json();
|
||||
|
||||
// Explicit assertions
|
||||
expect(result.success).toBe(true);
|
||||
await expect(page.getByText('Message sent')).toBeVisible();
|
||||
|
||||
// Size: 15 lines (< 300 ✓)
|
||||
// Execution: ~2 seconds (< 90s ✓)
|
||||
});
|
||||
```
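
The `waitForResponse` predicate above can live as a small pure function, which makes the matching logic unit-testable without a browser. A minimal sketch (the `ResponseLike` interface models only the two members the predicate needs and is an assumption, not Playwright's full `Response` type):

```typescript
// Minimal shape of what page.waitForResponse() passes to its callback;
// only the members the predicate uses are modeled here.
interface ResponseLike {
  url(): string;
  ok(): boolean;
}

// Pure predicate: true only for a successful /api/contact response.
function isContactSubmit(resp: ResponseLike): boolean {
  return resp.url().includes('/api/contact') && resp.ok();
}

// In the test: await page.waitForResponse(isContactSubmit);
```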

### During Test Review (`test-review`)

TEA audits tests and flags violations:

```markdown
## Critical Issues

### Hard Wait Detected (tests/login.spec.ts:23)

**Issue:** `await page.waitForTimeout(3000)`
**Score Impact:** -10 (Determinism)
**Fix:** Use network-first pattern

### Conditional Flow Control (tests/profile.spec.ts:45)

**Issue:** `if (await page.locator('.banner').isVisible())`
**Score Impact:** -10 (Determinism)
**Fix:** Make banner presence deterministic

## Recommendations

### Extract Fixture (tests/auth.spec.ts)

**Issue:** Login code repeated 5 times
**Score Impact:** -3 (Structure)
**Fix:** Extract to authSession fixture
```
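
The hard-wait check in the audit above can be approximated with a simple static scan. This sketch is illustrative and far simpler than TEA's actual review, but it shows why the violation is mechanically detectable:

```typescript
// Illustrative static check: return 1-based line numbers that contain
// waitForTimeout, the pattern flagged as a Critical Issue above.
function findHardWaits(source: string): number[] {
  return source
    .split('\n')
    .map((line, i) => (line.includes('waitForTimeout') ? i + 1 : 0))
    .filter((n) => n > 0);
}
```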

## Definition of Done Checklist

When is a test "done"?

**Test Quality DoD:**

- [ ] No hard waits (`waitForTimeout`)
- [ ] No conditionals for flow control
- [ ] No try-catch for flow control
- [ ] Network-first patterns used
- [ ] Assertions explicit in test body
- [ ] Test size < 300 lines
- [ ] Clear, descriptive test name
- [ ] Self-cleaning (cleanup in afterEach or test)
- [ ] Unique test data (no hard-coded values)
- [ ] Execution time < 1.5 minutes
- [ ] Can run in parallel
- [ ] Can run in any order

**Code Review DoD:**

- [ ] Test quality score > 80
- [ ] No critical issues from `test-review`
- [ ] Follows project patterns (fixtures, selectors)
- [ ] Test reviewed by team member
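
The "score > 80" item aggregates the category rubric used in the scored examples earlier (Determinism 35, Isolation 25, Assertions 20, Structure 10, Performance 10). A minimal sketch of that aggregation, with weights taken from this document and the function shape assumed:

```typescript
// Category maxima from the scoring rubric in this document; the
// aggregator itself is an illustrative sketch, not TEA's code.
const MAXIMA = {
  determinism: 35,
  isolation: 25,
  assertions: 20,
  structure: 10,
  performance: 10,
} as const;

type Category = keyof typeof MAXIMA;

function totalScore(scores: Record<Category, number>): number {
  return (Object.keys(MAXIMA) as Category[]).reduce((sum, cat) => {
    // Clamp each category to its maximum so a typo cannot exceed 100.
    return sum + Math.min(scores[cat], MAXIMA[cat]);
  }, 0);
}
```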

## Common Quality Issues

### Issue: "My test needs conditionals for optional elements"

**Wrong approach:**

```typescript
if (await page.locator('.banner').isVisible()) {
  await page.click('.dismiss');
}
```

**Right approach - Make it deterministic:**

```typescript
// Option 1: Always expect banner
await expect(page.locator('.banner')).toBeVisible();
await page.click('.dismiss');

// Option 2: Test both scenarios separately
test('should show banner for new users', ...);
test('should not show banner for returning users', ...);
```

### Issue: "My test needs try-catch for error handling"

**Wrong approach:**

```typescript
try {
  await page.click('#optional-button');
} catch (e) {
  // Silently continue
}
```

**Right approach - Make failures explicit:**

```typescript
// Option 1: Button should exist
await page.click('#optional-button'); // Fails loudly if missing

// Option 2: Button might not exist (test both)
test('should work with optional button', async ({ page }) => {
  const hasButton = await page.locator('#optional-button').count() > 0;
  if (hasButton) {
    await page.click('#optional-button');
  }
  // But now you're testing optional behavior explicitly
});
```

### Issue: "Hard waits are easier than network patterns"

**Short-term:** Hard waits seem simpler.
**Long-term:** Flaky tests waste more time than learning network patterns ever will.

**Investment:**

- 30 minutes to learn network-first patterns
- Prevents hundreds of hours debugging flaky tests
- Tests run faster (no wasted waits)
- Team trusts the test suite

## Technical Implementation

For detailed test quality patterns, see:

- [Test Quality Fragment](/docs/tea/reference/knowledge-base.md#quality-standards)
- [Test Levels Framework Fragment](/docs/tea/reference/knowledge-base.md#quality-standards)
- [Complete Knowledge Base Index](/docs/tea/reference/knowledge-base.md)

## Related Concepts

**Core TEA Concepts:**

- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Quality scales with risk
- [Knowledge Base System](/docs/tea/explanation/knowledge-base-system.md) - How standards are enforced
- [Engagement Models](/docs/tea/explanation/engagement-models.md) - Quality in different models

**Technical Patterns:**

- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Determinism explained
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Isolation through fixtures

**Overview:**

- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Quality standards in lifecycle
- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - Why quality matters

## Practical Guides

**Workflow Guides:**

- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Audit against these standards
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Generate quality tests
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Expand coverage with quality

**Use-Case Guides:**

- [Using TEA with Existing Tests](/docs/tea/how-to/brownfield/use-tea-with-existing-tests.md) - Improve legacy quality
- [Running TEA for Enterprise](/docs/tea/how-to/brownfield/use-tea-for-enterprise.md) - Enterprise quality thresholds

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - `test-review` command
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Test quality fragment
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - TEA terminology

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

---

**File:** `docs/tea/explanation/testing-as-engineering.md` (new file, 112 lines)

---
title: "AI-Generated Testing: Why Most Approaches Fail"
description: How Playwright-Utils, TEA workflows, and Playwright MCPs solve AI test quality problems
---

AI-generated tests frequently fail in production because they lack systematic quality standards. This document explains the problem and presents a solution combining three components: Playwright-Utils, TEA (Test Architect), and Playwright MCPs.

:::note[Source]
This article is adapted from [The Testing Meta Most Teams Have Not Caught Up To Yet](https://dev.to/muratkeremozcan/the-testing-meta-most-teams-have-not-caught-up-to-yet-5765) by Murat K Ozcan.
:::

## The Problem with AI-Generated Tests

When teams use AI to generate tests without structure, they often produce what can be called "slop factory" output:

| Issue | Description |
|-------|-------------|
| Redundant coverage | Multiple tests covering the same functionality |
| Incorrect assertions | Tests that pass but don't actually verify behavior |
| Flaky tests | Non-deterministic tests that randomly pass or fail |
| Unreviewable diffs | Generated code too verbose or inconsistent to review |

The core problem is that prompt-driven testing paths lean into nondeterminism, which is the exact opposite of what testing exists to protect.

:::caution[The Paradox]
AI excels at generating code quickly, but testing requires precision and consistency. Without guardrails, AI-generated tests amplify the chaos they're meant to prevent.
:::

## The Solution: A Three-Part Stack

The solution combines three components that work together to enforce quality:

### Playwright-Utils

Bridges the gap between Cypress ergonomics and Playwright's capabilities by standardizing commonly reinvented primitives through utility functions.

| Utility | Purpose |
|---------|---------|
| api-request | API calls with schema validation |
| auth-session | Authentication handling |
| intercept-network-call | Network mocking and interception |
| recurse | Retry logic and polling |
| log | Structured logging |
| network-recorder | Record and replay network traffic |
| burn-in | Smart test selection for CI |
| network-error-monitor | HTTP error detection |
| file-utils | CSV/PDF handling |

These utilities eliminate the need to reinvent authentication, API calls, retries, and logging for every project.
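
As an illustration of the retry-until-predicate idea behind `recurse`, here is a minimal synchronous sketch. The name, signature, and behavior are ours for illustration; the real `@seontechnologies/playwright-utils` API differs:

```typescript
// Illustrative retry-until-predicate helper in the spirit of `recurse`:
// call the producer until the predicate passes or attempts run out.
function recurseSync<T>(
  produce: () => T,
  predicate: (value: T) => boolean,
  maxAttempts = 5
): T {
  let last!: T;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    last = produce();
    if (predicate(last)) return last;
  }
  throw new Error(`Predicate not satisfied after ${maxAttempts} attempts`);
}
```

The real utility adds delays and async support; the point is that polling logic is centralized instead of reimplemented in every test.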

### TEA (Test Architect Agent)

A quality operating model packaged as eight executable workflows spanning test design, CI/CD gates, and release readiness. TEA encodes test architecture expertise into repeatable processes.

| Workflow | Purpose |
|----------|---------|
| `test-design` | Risk-based test planning per epic |
| `framework` | Scaffold production-ready test infrastructure |
| `ci` | CI pipeline with selective testing |
| `atdd` | Acceptance test-driven development |
| `automate` | Prioritized test automation |
| `test-review` | Test quality audits (0-100 score) |
| `nfr-assess` | Non-functional requirements assessment |
| `trace` | Coverage traceability and gate decisions |

:::tip[Key Insight]
TEA doesn't just generate tests—it provides a complete quality operating model with workflows for planning, execution, and release gates.
:::

### Playwright MCPs

Model Context Protocols enable real-time verification during test generation. Instead of inferring selectors and behavior from documentation, MCPs allow agents to:

- Run flows and confirm the DOM against the accessibility tree
- Validate network responses in real time
- Discover actual functionality through interactive exploration
- Verify generated tests against live applications

## How They Work Together

The three components form a quality pipeline:

| Stage | Component | Action |
|-------|-----------|--------|
| Standards | Playwright-Utils | Provides production-ready patterns and utilities |
| Process | TEA Workflows | Enforces systematic test planning and review |
| Verification | Playwright MCPs | Validates generated tests against live applications |

**Before (AI-only):** 20 tests with redundant coverage, incorrect assertions, and flaky behavior.

**After (Full Stack):** Risk-based selection, verified selectors, validated behavior, reviewable code.

## Why This Matters

Traditional AI testing approaches fail because they:

- **Lack quality standards** — No consistent patterns or utilities
- **Skip planning** — Jump straight to test generation without risk assessment
- **Can't verify** — Generate tests without validating against actual behavior
- **Don't review** — No systematic audit of generated test quality

The three-part stack addresses each gap:

| Gap | Solution |
|-----|----------|
| No standards | Playwright-Utils provides production-ready patterns |
| No planning | TEA `test-design` creates risk-based test plans |
| No verification | Playwright MCPs validate against live applications |
| No review | TEA `test-review` audits quality with scoring |

This approach is sometimes called *context engineering*—loading domain-specific standards into AI context automatically rather than relying on prompts alone. TEA's `tea-index.csv` manifest loads relevant knowledge fragments so the AI doesn't relearn testing patterns each session.
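
The manifest idea can be sketched as a tiny CSV filter: given rows of fragment metadata, load only the fragments tagged for the active workflow. The column layout used here (`fragment,workflows`, with workflows separated by semicolons) is an assumption for illustration, not the actual `tea-index.csv` schema:

```typescript
// Illustrative sketch of manifest-driven context loading. The column
// names and layout are assumptions, not the real tea-index.csv schema.
function fragmentsFor(manifestCsv: string, workflow: string): string[] {
  const [header, ...rows] = manifestCsv.trim().split('\n');
  const cols = header.split(',');
  const fragIdx = cols.indexOf('fragment');
  const wfIdx = cols.indexOf('workflows');
  return rows
    .filter((row) => row.split(',')[wfIdx].split(';').includes(workflow))
    .map((row) => row.split(',')[fragIdx]);
}
```

Only the matching fragments are injected into the AI's context, which keeps sessions small and the loaded standards consistent.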

---

**File:** `docs/tea/glossary/index.md` (new file, 159 lines)

---
title: "BMad Glossary"
---

Terminology reference for the BMad Method.

## Core Concepts

| Term | Definition |
| ---- | ---------- |
| **Agent** | Specialized AI persona with specific expertise (PM, Architect, SM, DEV, TEA) that guides users through workflows and creates deliverables. |
| **BMad** | Breakthrough Method of Agile AI-Driven Development — AI-driven agile framework with specialized agents, guided workflows, and scale-adaptive intelligence. |
| **BMad Method** | Complete methodology for AI-assisted software development, encompassing planning, architecture, implementation, and quality assurance workflows that adapt to project complexity. |
| **BMM** | BMad Method Module — core orchestration system providing comprehensive lifecycle management through specialized agents and workflows. |
| **Scale-Adaptive System** | Intelligent workflow orchestration that adjusts planning depth and documentation requirements based on project needs through three planning tracks. |
| **Workflow** | Multi-step guided process that orchestrates AI agent activities to produce specific deliverables. Workflows are interactive and adapt to user context. |

## Scale and Complexity

| Term | Definition |
| ---- | ---------- |
| **BMad Method Track** | Full product planning track using PRD + Architecture + UX. Best for products, platforms, and complex features. Typical range: 10-50+ stories. |
| **Enterprise Method Track** | Extended planning track adding Security Architecture, DevOps Strategy, and Test Strategy. Best for compliance needs and multi-tenant systems. Typical range: 30+ stories. |
| **Planning Track** | Methodology path (Quick Flow, BMad Method, or Enterprise) chosen based on planning needs and complexity, not story count alone. |
| **Quick Flow Track** | Fast implementation track using tech-spec only. Best for bug fixes, small features, and clear-scope changes. Typical range: 1-15 stories. |

## Planning Documents

| Term | Definition |
| ---- | ---------- |
| **Architecture Document** | *BMad Method/Enterprise.* System-wide design document defining structure, components, data models, integration patterns, security, and deployment. |
| **Epics** | High-level feature groupings containing multiple related stories. Typically 5-15 stories each, representing cohesive functionality. |
| **Game Brief** | *BMGD.* Document capturing a game's core vision, pillars, target audience, and scope. Foundation for the GDD. |
| **GDD** | *BMGD.* Game Design Document — comprehensive document detailing all aspects of game design: mechanics, systems, content, and more. |
| **PRD** | *BMad Method/Enterprise.* Product Requirements Document containing vision, goals, FRs, NFRs, and success criteria. Focuses on WHAT to build. |
| **Product Brief** | *Phase 1.* Optional strategic document capturing product vision, market context, and high-level requirements before detailed planning. |
| **Tech-Spec** | *Quick Flow only.* Comprehensive technical plan with problem statement, solution approach, file-level changes, and testing strategy. |

## Workflow and Phases

| Term | Definition |
| ---- | ---------- |
| **Phase 0: Documentation** | *Brownfield.* Conditional prerequisite phase creating codebase documentation before planning. Only required if existing docs are insufficient. |
| **Phase 1: Analysis** | Discovery phase including brainstorming, research, and product brief creation. Optional for Quick Flow, recommended for BMad Method. |
| **Phase 2: Planning** | Required phase creating formal requirements. Routes to tech-spec (Quick Flow) or PRD (BMad Method/Enterprise). |
| **Phase 3: Solutioning** | *BMad Method/Enterprise.* Architecture design phase including creation, validation, and gate checks. |
| **Phase 4: Implementation** | Required sprint-based development through story-by-story iteration using sprint-planning, create-story, dev-story, and code-review workflows. |
| **Quick Spec Flow** | Fast-track workflow for Quick Flow projects going straight from idea to tech-spec to implementation. |
| **Workflow Init** | Initialization workflow creating bmm-workflow-status.yaml, detecting project type, and determining planning track. |
| **Workflow Status** | Universal entry point checking for an existing status file, displaying progress, and recommending the next action. |

## Agents and Roles

| Term | Definition |
| ---- | ---------- |
| **Analyst** | Agent that initializes workflows, conducts research, creates product briefs, and tracks progress. Often the entry point for new projects. |
| **Architect** | Agent designing system architecture, creating architecture documents, and validating designs. Primary agent for Phase 3. |
| **BMad Master** | Meta-level orchestrator from BMad Core facilitating party mode and providing high-level guidance across all modules. |
| **DEV** | Developer agent implementing stories, writing code, running tests, and performing code reviews. Primary implementer in Phase 4. |
| **Game Architect** | *BMGD.* Agent designing game system architecture and validating game-specific technical designs. |
| **Game Designer** | *BMGD.* Agent creating game design documents (GDD) and running game-specific workflows. |
| **Party Mode** | Multi-agent collaboration feature where agents discuss challenges together. BMad Master orchestrates, selecting 2-3 relevant agents per message. |
| **PM** | Product Manager agent creating PRDs and tech-specs. Primary agent for Phase 2 planning. |
| **SM** | Scrum Master agent managing sprints, creating stories, and coordinating implementation. Primary orchestrator for Phase 4. |
| **TEA** | Test Architect agent responsible for test strategy, quality gates, and NFR assessment. Integrates throughout all phases. |
| **Technical Writer** | Agent specialized in creating technical documentation, diagrams, and maintaining documentation standards. |
| **UX Designer** | Agent creating UX design documents, interaction patterns, and visual specifications for UI-heavy projects. |

## Status and Tracking

| Term | Definition |
| ---- | ---------- |
| **bmm-workflow-status.yaml** | *Phases 1-3.* Tracking file showing current phase, completed workflows, and next recommended actions. |
| **DoD** | Definition of Done — criteria for marking a story complete: implementation done, tests passing, code reviewed, docs updated. |
| **Epic Status Progression** | `backlog → in-progress → done` — lifecycle states for epics during implementation. |
| **Gate Check** | Validation workflow (implementation-readiness) ensuring PRD, Architecture, and Epics are aligned before Phase 4. |
| **Retrospective** | Workflow after each epic capturing learnings and improvements for continuous improvement. |
| **sprint-status.yaml** | *Phase 4.* Single source of truth for implementation tracking containing all epics, stories, and their statuses. |
| **Story Status Progression** | `backlog → ready-for-dev → in-progress → review → done` — lifecycle states for stories. |

## Project Types

| Term | Definition |
| ---- | ---------- |
| **Brownfield** | Existing project with established codebase and patterns. Requires understanding existing architecture and planning integration. |
| **Convention Detection** | *Quick Flow.* Feature auto-detecting existing code style, naming conventions, and frameworks from brownfield codebases. |
| **document-project** | *Brownfield.* Workflow analyzing and documenting an existing codebase with three scan levels: quick, deep, exhaustive. |
| **Feature Flags** | *Brownfield.* Implementation technique for gradual rollout, easy rollback, and A/B testing of new functionality. |
| **Greenfield** | New project starting from scratch with freedom to establish patterns, choose the stack, and design from a clean slate. |
| **Integration Points** | *Brownfield.* Specific locations where new code connects with existing systems. Must be documented in tech-specs. |

## Implementation Terms

| Term | Definition |
| ---- | ---------- |
| **Context Engineering** | Loading domain-specific standards into AI context automatically via manifests, ensuring consistent outputs regardless of prompt variation. |
| **Correct Course** | Workflow for navigating significant changes when implementation is off-track. Analyzes impact and recommends adjustments. |
| **Shard / Sharding** | Splitting large planning documents into section-based files for LLM optimization. Phase 4 workflows load only needed sections. |
| **Sprint** | Time-boxed period of development work, typically 1-2 weeks. |
| **Sprint Planning** | Workflow initializing Phase 4 by creating sprint-status.yaml and extracting epics/stories from planning docs. |
| **Story** | Single unit of implementable work with clear acceptance criteria, typically 2-8 hours of effort. Grouped into epics. |
| **Story Context** | Implementation guidance embedded in story files during create-story, referencing existing patterns and approaches. |
| **Story File** | Markdown file containing story description, acceptance criteria, technical notes, and testing requirements. |
| **Track Selection** | Automatic analysis by `bmad-help` suggesting the appropriate track based on complexity indicators. User can override. |

## Game Development Terms

| Term | Definition |
| ---- | ---------- |
| **Core Fantasy** | *BMGD.* The emotional experience players seek from your game — what they want to FEEL. |
| **Core Loop** | *BMGD.* Fundamental cycle of actions players repeat throughout gameplay. The heart of your game. |
| **Design Pillar** | *BMGD.* Core principle guiding all design decisions. Typically 3-5 pillars define a game's identity. |
| **Environmental Storytelling** | *BMGD.* Narrative communicated through the game world itself rather than explicit dialogue. |
| **Game Type** | *BMGD.* Genre classification determining which specialized GDD sections are included. |
| **MDA Framework** | *BMGD.* Mechanics → Dynamics → Aesthetics — framework for analyzing and designing games. |
| **Meta-Progression** | *BMGD.* Persistent progression carrying between individual runs or sessions. |
| **Metroidvania** | *BMGD.* Genre featuring interconnected world exploration with ability-gated progression. |
| **Narrative Complexity** | *BMGD.* How central story is to the game: Critical, Heavy, Moderate, or Light. |
| **Permadeath** | *BMGD.* Game mechanic where character death is permanent, typically requiring a new run. |
| **Player Agency** | *BMGD.* Degree to which players can make meaningful choices affecting outcomes. |
| **Procedural Generation** | *BMGD.* Algorithmic creation of game content (levels, items, characters) rather than hand-crafted. |
| **Roguelike** | *BMGD.* Genre featuring procedural generation, permadeath, and run-based progression. |

## Test Architect (TEA) Concepts

| Term | Definition |
| ---- | ---------- |
| **ATDD** | Acceptance Test-Driven Development — generating failing acceptance tests BEFORE implementation (TDD red phase). |
| **Burn-in Testing** | Running tests multiple times (typically 5-10 iterations) to detect flakiness and intermittent failures. |
| **Component Testing** | Testing UI components in isolation using framework-specific tools (Cypress Component Testing or Vitest + React Testing Library). |
| **Coverage Traceability** | Mapping acceptance criteria to implemented tests with classification (FULL/PARTIAL/NONE) to identify gaps and measure completeness. |
| **Epic-Level Test Design** | Test planning per epic (Phase 4) focusing on risk assessment, priorities, and coverage strategy for that specific epic. |
| **Fixture Architecture** | Pattern of building pure functions first, then wrapping them in framework-specific fixtures for testability, reusability, and composition. |
| **Gate Decision** | Go/no-go decision for release with four outcomes: PASS ✅ (ready), CONCERNS ⚠️ (proceed with mitigation), FAIL ❌ (blocked), WAIVED ⏭️ (approved despite issues). |
| **Knowledge Fragment** | Individual markdown file in TEA's knowledge base covering a specific testing pattern or practice (33 fragments total). |
| **MCP Enhancements** | Model Context Protocol servers enabling live browser verification during test generation (exploratory, recording, and healing modes). |
| **Network-First Pattern** | Testing pattern that waits for actual network responses instead of fixed timeouts to avoid race conditions and flakiness. |
| **NFR Assessment** | Validation of non-functional requirements (security, performance, reliability, maintainability) with evidence-based decisions. |
| **Playwright Utils** | Optional package (`@seontechnologies/playwright-utils`) providing production-ready fixtures and utilities for Playwright tests. |
| **Risk-Based Testing** | Testing approach where depth scales with business impact using probability × impact scoring (1-9 scale). |
| **System-Level Test Design** | Test planning at architecture level (Phase 3) focusing on testability review, ADR mapping, and test infrastructure needs. |
| **tea-index.csv** | Manifest file tracking all knowledge fragments, their descriptions, tags, and which workflows load them. |
| **TEA Integrated** | Full BMad Method integration with TEA workflows across all phases (Phase 2, 3, 4, and Release Gate). |
| **TEA Lite** | Beginner approach using just `automate` to test existing features (the simplest way to use TEA). |
| **TEA Solo** | Standalone engagement model using TEA without full BMad Method integration (bring your own requirements). |
| **Test Priorities** | Classification system for test importance: P0 (critical path), P1 (high value), P2 (medium value), P3 (low value). |
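
The probability × impact scoring behind risk-based testing can be stated precisely. A sketch, assuming each factor is rated 1-3 so the product spans the 1-9 scale the glossary entry mentions (the function shape and validation are illustrative):

```typescript
// Risk score = probability × impact. Each factor rated 1-3, so the
// product spans the 1-9 scale. Validation and shape are illustrative.
function riskScore(probability: number, impact: number): number {
  for (const v of [probability, impact]) {
    if (!Number.isInteger(v) || v < 1 || v > 3) {
      throw new Error('probability and impact must be integers 1-3');
    }
  }
  return probability * impact;
}
```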

---

## See Also

- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Complete TEA capabilities
- [TEA Knowledge Base](/docs/tea/reference/knowledge-base.md) - Fragment index
- [TEA Command Reference](/docs/tea/reference/commands.md) - Workflow reference
- [TEA Configuration](/docs/tea/reference/configuration.md) - Config options

---

Generated with [BMad Method](https://bmad-method.org)

---

**File:** `docs/tea/how-to/brownfield/use-tea-for-enterprise.md` (new file, 525 lines)

---
title: "Running TEA for Enterprise Projects"
description: Use TEA with compliance, security, and regulatory requirements in enterprise environments
---

# Running TEA for Enterprise Projects

Use TEA on enterprise projects with compliance, security, audit, and regulatory requirements. This guide covers NFR assessment, audit trails, and evidence collection.

## When to Use This

- Enterprise track projects (not Quick Flow or simple BMad Method)
- Compliance requirements (SOC 2, HIPAA, GDPR, etc.)
- Security-critical applications (finance, healthcare, government)
- Audit trail requirements
- Strict NFR thresholds (performance, security, reliability)

## Prerequisites

- BMad Method installed (Enterprise track selected)
- TEA agent available
- Compliance requirements documented
- Stakeholders identified (who approves gates)

## Enterprise-Specific TEA Workflows

### NFR Assessment (`nfr-assess`)

**Purpose:** Validate non-functional requirements with evidence.

**When:** Phase 2 (early) and Release Gate

**Why Enterprise Needs This:**

- Compliance mandates specific thresholds
- Audit trails required for certification
- Security requirements are non-negotiable
- Performance SLAs are contractual

**Example:**

```
nfr-assess

Categories: Security, Performance, Reliability, Maintainability

Security thresholds:
- Zero critical vulnerabilities (required by SOC 2)
- All endpoints require authentication
- Data encrypted at rest (FIPS 140-2)
- Audit logging on all data access

Evidence:
- Security scan: reports/nessus-scan.pdf
- Penetration test: reports/pentest-2026-01.pdf
- Compliance audit: reports/soc2-evidence.zip
```

**Output:** NFR assessment with PASS/CONCERNS/FAIL for each category.

### Trace with Audit Evidence (`trace`)

**Purpose:** Requirements traceability with audit trail.

**When:** Phase 2 (baseline), Phase 4 (refresh), Release Gate

**Why Enterprise Needs This:**

- Auditors require requirements-to-test mapping
- Compliance certifications need traceability
- Regulatory bodies want evidence

**Example:**

```
trace Phase 1

Requirements: PRD.md (with compliance requirements)
Test location: tests/

Output: traceability-matrix.md with:
- Requirement-to-test mapping
- Compliance requirement coverage
- Gap prioritization
- Recommendations
```

**For Release Gate:**

```
trace Phase 2

Generate gate-decision-{gate_type}-{story_id}.md with:
- Evidence references
- Approver signatures
- Compliance checklist
- Decision rationale
```
|
||||
|
||||
### Test Design with Compliance Focus (`test-design`)

**Purpose:** Risk assessment with compliance and security focus.

**When:** Phase 3 (system-level), Phase 4 (epic-level)

**Why Enterprise Needs This:**

- Security architecture alignment required
- Compliance requirements must be testable
- Performance requirements are contractual

**Example:**

```
test-design

Mode: System-level

Focus areas:
- Security architecture (authentication, authorization, encryption)
- Performance requirements (SLA: P99 <200ms)
- Compliance (HIPAA PHI handling, audit logging)

Output: TWO documents (system-level):
- `test-design-architecture.md`: Security gaps, compliance requirements, performance SLOs for Architecture team
- `test-design-qa.md`: Security testing strategy, compliance test mapping, performance testing plan, and audit logging validation for QA team
```

## Enterprise TEA Lifecycle

### Phase 1: Discovery (Optional but Recommended)

**Research compliance requirements:**

```
Analyst: research

Topics:
- Industry compliance (SOC 2, HIPAA, GDPR)
- Security standards (OWASP Top 10)
- Performance benchmarks (industry P99)
```

### Phase 2: Planning (Required)

**1. Define NFRs early:**

```
PM: prd

Include in PRD:
- Security requirements (authentication, encryption)
- Performance SLAs (response time, throughput)
- Reliability targets (uptime, RTO, RPO)
- Compliance mandates (data retention, audit logs)
```

**2. Assess NFRs:**

```
TEA: nfr-assess

Categories: All (Security, Performance, Reliability, Maintainability)

Output: nfr-assessment.md
- NFR requirements documented
- Acceptance criteria defined
- Test strategy planned
```

**3. Baseline (brownfield only):**

```
TEA: trace Phase 1

Establish baseline coverage before new work
```

### Phase 3: Solutioning (Required)

**1. Architecture with testability review:**

```
Architect: architecture

TEA: test-design (system-level)

Focus:
- Security architecture testability
- Performance testing strategy
- Compliance requirement mapping
```

**2. Test infrastructure:**

```
TEA: framework

Requirements:
- Separate test environments (dev, staging, prod-mirror)
- Secure test data handling (PHI, PII)
- Audit logging in tests
```

**3. CI/CD with compliance:**

```
TEA: ci

Requirements:
- Secrets management (Vault, AWS Secrets Manager)
- Test isolation (no cross-contamination)
- Artifact retention (compliance audit trail)
- Access controls (who can run production tests)
```

### Phase 4: Implementation (Required)

**Per epic:**

```
1. TEA: test-design (epic-level)
   Focus: Compliance, security, performance for THIS epic

2. TEA: atdd (optional)
   Generate tests including security/compliance scenarios

3. DEV: Implement story

4. TEA: automate
   Expand coverage including compliance edge cases

5. TEA: test-review
   Audit quality (score >80 per epic, rising to >85 at release)

6. TEA: trace Phase 1
   Refresh coverage, verify compliance requirements tested
```

### Release Gate (Required)

**1. Final NFR assessment:**

```
TEA: nfr-assess

All categories (if not done earlier)
Latest evidence (performance tests, security scans)
```

**2. Final quality audit:**

```
TEA: test-review tests/

Full suite review
Quality target: >85 for enterprise
```

**3. Gate decision:**

```
TEA: trace Phase 2

Evidence required:
- traceability-matrix.md (from Phase 1)
- test-review.md (from quality audit)
- nfr-assessment.md (from NFR assessment)
- Test execution results (required for the gate decision)

Decision: PASS/CONCERNS/FAIL/WAIVED

Archive all artifacts for compliance audit
```

**Note:** Phase 2 requires test execution results. If results aren't available, Phase 2 will be skipped.
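
The PASS/CONCERNS/FAIL/WAIVED decision above can be sketched as a small function. This is an illustrative model only: the thresholds and the `decideGate` helper are assumptions for the sketch, not TEA's actual decision logic.

```typescript
// Illustrative gate-decision model (assumed thresholds, not TEA's own logic).
type GateDecision = 'PASS' | 'CONCERNS' | 'FAIL' | 'WAIVED';

function decideGate(p0CoveragePct: number, criticalIssues: number, waived = false): GateDecision {
  if (waived) return 'WAIVED'; // explicit stakeholder waiver, recorded with rationale
  if (criticalIssues > 0) return 'FAIL';
  if (p0CoveragePct === 100) return 'PASS';
  if (p0CoveragePct >= 90) return 'CONCERNS'; // ship with documented follow-ups
  return 'FAIL';
}
```

Whatever thresholds you use, the key enterprise property is that the decision is a pure function of recorded evidence, so auditors can reproduce it.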

**4. Archive for audit:**

```
Archive:
- All test results
- Coverage reports
- NFR assessments
- Gate decisions
- Approver signatures

Retention: Per compliance requirements (7 years for HIPAA)
```

## Enterprise-Specific Requirements

### Evidence Collection

**Required artifacts:**

- Requirements traceability matrix
- Test execution results (with timestamps)
- NFR assessment reports
- Security scan results
- Performance test results
- Gate decision records
- Approver signatures

**Storage:**

```
compliance/
├── 2026-Q1/
│   ├── release-1.2.0/
│   │   ├── traceability-matrix.md
│   │   ├── test-review.md
│   │   ├── nfr-assessment.md
│   │   ├── gate-decision-release-v1.2.0.md
│   │   ├── test-results/
│   │   ├── security-scans/
│   │   └── approvals.pdf
```

**Retention:** 7 years (HIPAA), 3 years (SOC 2), or as your compliance framework requires
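
The retention rule above is easy to enforce mechanically. A minimal sketch, assuming the `RETENTION_YEARS` table and `retentionExpiry` helper are your own (they are hypothetical, not part of TEA):

```typescript
// Hypothetical helper: compute when an archived release bundle may be purged.
// Periods follow the guideline above (HIPAA: 7 years, SOC 2: 3 years);
// adjust RETENTION_YEARS to match your own compliance framework.
const RETENTION_YEARS: Record<string, number> = {
  hipaa: 7,
  soc2: 3,
};

function retentionExpiry(archivedOn: string, framework: string): string {
  const years = RETENTION_YEARS[framework];
  if (years === undefined) {
    throw new Error(`Unknown compliance framework: ${framework}`);
  }
  const date = new Date(archivedOn); // ISO date, parsed as UTC midnight
  date.setUTCFullYear(date.getUTCFullYear() + years);
  return date.toISOString().slice(0, 10); // YYYY-MM-DD
}
```

For example, a bundle archived on `2026-03-01` under HIPAA may not be purged before `2033-03-01`.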

### Approver Workflows

**Multi-level approval required:**

```markdown
## Gate Approvals Required

### Technical Approval
- [ ] QA Lead - Test coverage adequate
- [ ] Tech Lead - Technical quality acceptable
- [ ] Security Lead - Security requirements met

### Business Approval
- [ ] Product Manager - Business requirements met
- [ ] Compliance Officer - Regulatory requirements met

### Executive Approval (for major releases)
- [ ] VP Engineering - Overall quality acceptable
- [ ] CTO - Architecture approved for production
```

### Compliance Checklists

**SOC 2 Example:**

```markdown
## SOC 2 Compliance Checklist

### Access Controls
- [ ] All API endpoints require authentication
- [ ] Authorization tested for all protected resources
- [ ] Session management secure (token expiration tested)

### Audit Logging
- [ ] All data access logged
- [ ] Logs immutable (append-only)
- [ ] Log retention policy enforced

### Data Protection
- [ ] Data encrypted at rest (tested)
- [ ] Data encrypted in transit (HTTPS enforced)
- [ ] PII handling compliant (masking tested)

### Testing Evidence
- [ ] Test coverage >80% (verified)
- [ ] Security tests passing (100%)
- [ ] Traceability matrix complete
```

**HIPAA Example:**

```markdown
## HIPAA Compliance Checklist

### PHI Protection
- [ ] PHI encrypted at rest (AES-256)
- [ ] PHI encrypted in transit (TLS 1.3)
- [ ] PHI access logged (audit trail)

### Access Controls
- [ ] Role-based access control (RBAC tested)
- [ ] Minimum necessary access (tested)
- [ ] Strong authentication (MFA tested)

### Breach Notification
- [ ] Breach detection tested
- [ ] Notification workflow tested
- [ ] Incident response plan tested
```

## Enterprise Tips

### Start with Security

**Priority 1:** Security requirements

```
1. Document all security requirements
2. Generate security tests with `atdd`
3. Run security test suite
4. Pass security audit BEFORE moving forward
```

**Why:** Security failures block everything in enterprise.

**Example: RBAC Testing**

**Vanilla Playwright:**

```typescript
test('should enforce role-based access', async ({ request }) => {
  // Login as regular user
  const userResp = await request.post('/api/auth/login', {
    data: { email: 'user@example.com', password: 'pass' }
  });
  const { token: userToken } = await userResp.json();

  // Try to access admin endpoint
  const adminResp = await request.get('/api/admin/users', {
    headers: { Authorization: `Bearer ${userToken}` }
  });

  expect(adminResp.status()).toBe(403); // Forbidden
});
```

**With Playwright Utils (Cleaner, Reusable):**

```typescript
import { test as base, expect, mergeTests } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

const authFixtureTest = base.extend(createAuthFixtures());
export const testWithAuth = mergeTests(apiRequestFixture, authFixtureTest);

testWithAuth('should enforce role-based access', async ({ apiRequest, authToken }) => {
  // Auth token from fixture (configured for 'user' role)
  const { status } = await apiRequest({
    method: 'GET',
    path: '/api/admin/users', // Admin endpoint
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(403); // Regular user denied
});

testWithAuth('admin can access admin endpoint', async ({ apiRequest, authToken, authOptions }) => {
  // Override to admin role
  authOptions.userIdentifier = 'admin';

  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/admin/users',
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200); // Admin allowed
  expect(body).toBeInstanceOf(Array);
});
```

**Note:** Auth-session requires provider setup in global-setup.ts. See [auth-session configuration](https://seontechnologies.github.io/playwright-utils/auth-session.html).

**Playwright Utils Benefits for Compliance:**

- Multi-user auth testing (regular, admin, etc.)
- Token persistence (faster test execution)
- Consistent auth patterns (audit trail)
- Automatic cleanup

### Set Higher Quality Thresholds

**Enterprise quality targets:**

- Test coverage: >85% (vs 80% for non-enterprise)
- Quality score: >85 (vs 75 for non-enterprise)
- P0 coverage: 100% (non-negotiable)
- P1 coverage: >95% (vs 90% for non-enterprise)

**Rationale:** Enterprise systems affect more users and carry higher stakes.
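
The targets above are mechanical enough to enforce in a script. A sketch (the `ReleaseMetrics` shape and `meetsEnterpriseBar` function are illustrative assumptions, not a TEA API):

```typescript
// Illustrative check of release metrics against the enterprise targets above.
interface ReleaseMetrics {
  coverage: number;     // % of requirements with tests
  qualityScore: number; // test-review score, 0-100
  p0Coverage: number;   // % of P0 requirements covered
  p1Coverage: number;   // % of P1 requirements covered
}

function meetsEnterpriseBar(m: ReleaseMetrics): string[] {
  const failures: string[] = [];
  if (m.coverage <= 85) failures.push('coverage must exceed 85%');
  if (m.qualityScore <= 85) failures.push('quality score must exceed 85');
  if (m.p0Coverage < 100) failures.push('P0 coverage must be 100%');
  if (m.p1Coverage <= 95) failures.push('P1 coverage must exceed 95%');
  return failures; // empty array = all thresholds met
}
```

Returning the list of failed thresholds (rather than a bare boolean) gives the gate record the rationale auditors ask for.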

### Document Everything

**Auditors need:**

- Why decisions were made (rationale)
- Who approved (signatures)
- When (timestamps)
- What evidence (test results, scan reports)

**Use TEA's structured outputs:**

- Reports have timestamps
- Decisions have rationale
- Evidence is referenced
- Audit trail is automatic

### Budget for Compliance Testing

**Enterprise testing costs more:**

- Penetration testing: $10k-50k
- Security audits: $5k-20k
- Performance testing tools: $500-5k/month
- Compliance consulting: $200-500/hour

**Plan accordingly:**

- Budget it into the project cost
- Schedule early (3+ months for SOC 2)
- Don't skip it (non-negotiable for compliance)

### Use External Validators

**Don't self-certify:**

- Penetration testing: Hire an external firm
- Security audits: Independent auditor
- Compliance: Certification body
- Performance: Load testing service

**TEA's role:** Prepare for external validation, not replace it.

## Related Guides

**Workflow Guides:**

- [How to Run NFR Assessment](/docs/tea/how-to/workflows/run-nfr-assess.md) - Deep dive on NFRs
- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md) - Gate decisions with evidence
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Quality audits
- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Compliance-focused planning

**Use-Case Guides:**

- [Using TEA with Existing Tests](/docs/tea/how-to/brownfield/use-tea-with-existing-tests.md) - Brownfield patterns

**Customization:**

- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - Production-ready utilities

## Understanding the Concepts

- [Engagement Models](/docs/tea/explanation/engagement-models.md) - Enterprise model explained
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Probability × impact scoring
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Enterprise quality thresholds
- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Complete TEA lifecycle

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - All 8 workflows
- [TEA Configuration](/docs/tea/reference/configuration.md) - Enterprise config options
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Testing patterns
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - TEA terminology

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)


docs/tea/how-to/brownfield/use-tea-with-existing-tests.md (new file, 577 lines)

---
title: "Using TEA with Existing Tests (Brownfield)"
description: Apply TEA workflows to legacy codebases with existing test suites
---

# Using TEA with Existing Tests (Brownfield)

Use TEA on brownfield projects (existing codebases with legacy tests) to establish coverage baselines, identify gaps, and improve test quality without starting from scratch.

## When to Use This

- Existing codebase with some tests already written
- Legacy test suite needs quality improvement
- Adding features to existing application
- Need to understand current test coverage
- Want to prevent regression as you add features

## Prerequisites

- BMad Method installed
- TEA agent available
- Existing codebase with tests (even if incomplete or low quality)
- Tests run successfully (or at least can be executed)

**Note:** If your codebase is completely undocumented, run `document-project` first to create baseline documentation.

## Brownfield Strategy

### Phase 1: Establish Baseline

Understand what you have before changing anything.

#### Step 1: Baseline Coverage with `trace`

Run `trace` Phase 1 to map existing tests to requirements:

```
trace
```

**Select:** Phase 1 (Requirements Traceability)

**Provide:**

- Existing requirements docs (PRD, user stories, feature specs)
- Test location (`tests/` or wherever tests live)
- Focus areas (specific features if large codebase)

**Output:** `traceability-matrix.md` showing:

- Which requirements have tests
- Which requirements lack coverage
- Coverage classification (FULL/PARTIAL/NONE)
- Gap prioritization

**Example Baseline:**

```markdown
# Baseline Coverage (Before Improvements)

**Total Requirements:** 50
**Full Coverage:** 15 (30%)
**Partial Coverage:** 20 (40%)
**No Coverage:** 15 (30%)

**By Priority:**
- P0: 50% coverage (5/10) ❌ Critical gap
- P1: 40% coverage (8/20) ⚠️ Needs improvement
- P2: 20% coverage (2/10) ✅ Acceptable
```

This baseline becomes your improvement target.
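
The arithmetic behind a baseline like this is simple to reproduce. A hypothetical sketch (the `Coverage` type and `summarize` helper are illustrative, not TEA output formats):

```typescript
// Summarize FULL/PARTIAL/NONE classifications from a traceability matrix.
type Coverage = 'FULL' | 'PARTIAL' | 'NONE';

function summarize(rows: Coverage[]): { full: number; partial: number; none: number; fullPct: number } {
  const full = rows.filter((c) => c === 'FULL').length;
  const partial = rows.filter((c) => c === 'PARTIAL').length;
  const none = rows.filter((c) => c === 'NONE').length;
  return { full, partial, none, fullPct: Math.round((full / rows.length) * 100) };
}
```

With 15 FULL, 20 PARTIAL, and 15 NONE out of 50 requirements, `fullPct` comes out at 30, matching the example above.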

#### Step 2: Quality Audit with `test-review`

Run `test-review` on existing tests:

```
test-review tests/
```

**Output:** `test-review.md` with quality score and issues.

**Common Brownfield Issues:**

- Hard waits everywhere (`page.waitForTimeout(5000)`)
- Fragile CSS selectors (`.class > div:nth-child(3)`)
- No test isolation (tests depend on execution order)
- Try-catch for flow control
- Tests don't clean up (leave test data in DB)

**Example Baseline Quality:**

```markdown
# Quality Score: 55/100

**Critical Issues:** 12
- 8 hard waits
- 4 conditional flow control

**Recommendations:** 25
- Extract fixtures
- Improve selectors
- Add network assertions
```

This shows where to focus improvement efforts.

### Phase 2: Prioritize Improvements

Don't try to fix everything at once.

#### Focus on Critical Path First

**Priority 1: P0 Requirements**

```
Goal: Get P0 coverage to 100%

Actions:
1. Identify P0 requirements with no tests (from trace)
2. Run `automate` to generate tests for missing P0 scenarios
3. Fix critical quality issues in P0 tests (from test-review)
```

**Priority 2: Fix Flaky Tests**

```
Goal: Eliminate flakiness

Actions:
1. Identify tests with hard waits (from test-review)
2. Replace with network-first patterns
3. Run burn-in loops to verify stability
```
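
A burn-in loop is just "run it N times, count failures". Playwright offers this natively via `npx playwright test --repeat-each=10`; the sketch below (the `burnIn` helper is hypothetical) only illustrates the idea:

```typescript
// Minimal burn-in sketch: run a (possibly async) test body N times and
// count failures. Stability target: 0 failures across all runs.
async function burnIn(testBody: () => Promise<void>, runs: number): Promise<number> {
  let failures = 0;
  for (let i = 0; i < runs; i++) {
    try {
      await testBody();
    } catch {
      failures++;
    }
  }
  return failures;
}
```

A test that fails even once in ten burn-in runs is flaky and should be fixed or quarantined, not merged.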

**Example Modernization:**

**Before (Flaky - Hard Waits):**

```typescript
test('checkout completes', async ({ page }) => {
  await page.click('button[name="checkout"]');
  await page.waitForTimeout(5000); // ❌ Flaky
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

**After (Network-First - Vanilla):**

```typescript
test('checkout completes', async ({ page }) => {
  const checkoutPromise = page.waitForResponse(
    (resp) => resp.url().includes('/api/checkout') && resp.ok()
  );
  await page.click('button[name="checkout"]');
  await checkoutPromise; // ✅ Deterministic
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

**After (With Playwright Utils - Cleaner API):**

```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('checkout completes', async ({ page, interceptNetworkCall }) => {
  // Use interceptNetworkCall for cleaner network interception
  const checkoutCall = interceptNetworkCall({
    method: 'POST',
    url: '**/api/checkout'
  });

  await page.click('button[name="checkout"]');

  // Wait for response (automatic JSON parsing)
  const { status, responseJson: order } = await checkoutCall;

  // Validate API response
  expect(status).toBe(200);
  expect(order.status).toBe('confirmed');

  // Validate UI
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

**Playwright Utils Benefits:**

- `interceptNetworkCall` for cleaner network interception
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Glob pattern matching (`**/api/checkout`)
- Cleaner, more maintainable code

**For automatic error detection,** use the `network-error-monitor` fixture separately. See [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md#network-error-monitor).

**Priority 3: P1 Requirements**

```
Goal: Get P1 coverage to 80%+

Actions:
1. Generate tests for highest-risk P1 gaps
2. Improve test quality incrementally
```

#### Create Improvement Roadmap

```markdown
# Test Improvement Roadmap

## Week 1: Critical Path (P0)
- [ ] Add 5 missing P0 tests (Epic 1: Auth)
- [ ] Fix 8 hard waits in auth tests
- [ ] Verify P0 coverage = 100%

## Week 2: Flakiness
- [ ] Replace all hard waits with network-first
- [ ] Fix conditional flow control
- [ ] Run burn-in loops (target: 0 failures in 10 runs)

## Week 3: High-Value Coverage (P1)
- [ ] Add 10 missing P1 tests
- [ ] Improve selector resilience
- [ ] P1 coverage target: 80%

## Week 4: Quality Polish
- [ ] Extract fixtures for common patterns
- [ ] Add network assertions
- [ ] Quality score target: 75+
```

### Phase 3: Incremental Improvement

Apply TEA workflows to new work while improving legacy tests.

#### For New Features (Greenfield Within Brownfield)

**Use full TEA workflow:**

```
1. `test-design` (epic-level) - Plan tests for new feature
2. `atdd` - Generate failing tests first (TDD)
3. Implement feature
4. `automate` - Expand coverage
5. `test-review` - Ensure quality
```

**Benefits:**

- New code has high-quality tests from day one
- Gradually raises overall quality
- Team learns good patterns

#### For Bug Fixes (Regression Prevention)

**Add regression tests:**

```
1. Reproduce bug with failing test
2. Fix bug
3. Verify test passes
4. Run `test-review` on regression test
5. Add to regression test suite
```

#### For Refactoring (Regression Safety)

**Before refactoring:**

```
1. Run `trace` - Baseline coverage
2. Note current coverage %
3. Refactor code
4. Run `trace` - Verify coverage maintained
5. No coverage should decrease
```

### Phase 4: Continuous Improvement

Track improvement over time.

#### Quarterly Quality Audits

**Q1 Baseline:**

```
Coverage: 30%
Quality Score: 55/100
Flakiness: 15% fail rate
```

**Q2 Target:**

```
Coverage: 50% (focus on P0)
Quality Score: 65/100
Flakiness: 5%
```

**Q3 Target:**

```
Coverage: 70%
Quality Score: 75/100
Flakiness: 1%
```

**Q4 Target:**

```
Coverage: 85%
Quality Score: 85/100
Flakiness: <0.5%
```

## Brownfield-Specific Tips

### Don't Rewrite Everything

**Common mistake:**

```
"Our tests are bad, let's delete them all and start over!"
```

**Better approach:**

```
"Our tests are bad, let's:
1. Keep tests that work (even if not perfect)
2. Fix critical quality issues incrementally
3. Add tests for gaps
4. Gradually improve over time"
```

**Why:**

- Rewriting is risky (might lose coverage)
- Incremental improvement is safer
- Team learns gradually
- Business value delivered continuously

### Use Regression Hotspots

**Identify regression-prone areas:**

```markdown
## Regression Hotspots

**Based on:**
- Bug reports (last 6 months)
- Customer complaints
- Code complexity (cyclomatic complexity >10)
- Frequent changes (git log analysis)

**High-Risk Areas:**
1. Authentication flow (12 bugs in 6 months)
2. Checkout process (8 bugs)
3. Payment integration (6 bugs)

**Test Priority:**
- Add regression tests for these areas FIRST
- Ensure P0 coverage before touching code
```
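
These signals can be combined into a rough priority ranking. A hedged sketch: the weights and the `riskScore`/`rankHotspots` helpers below are illustrative assumptions, not a TEA formula.

```typescript
// Rank areas for regression-test priority from the signals listed above.
interface Hotspot {
  area: string;
  bugsLast6Months: number;
  cyclomaticComplexity: number;
  changesLast6Months: number;
}

function riskScore(h: Hotspot): number {
  // Illustrative weights: bugs dominate, complexity adds a flat penalty.
  return h.bugsLast6Months * 3 + (h.cyclomaticComplexity > 10 ? 5 : 0) + h.changesLast6Months;
}

function rankHotspots(hotspots: Hotspot[]): Hotspot[] {
  return [...hotspots].sort((a, b) => riskScore(b) - riskScore(a));
}
```

Whatever weighting you pick, recompute it each quarter so the hotspot list tracks where bugs actually appear.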

### Quarantine Flaky Tests

Don't let flaky tests block improvement:

```typescript
// Mark flaky tests with .skip temporarily
test.skip('flaky test - needs fixing', async ({ page }) => {
  // TODO: Fix hard wait on line 45
  // TODO: Add network-first pattern
});
```

**Track quarantined tests:**

```markdown
# Quarantined Tests

| Test                | Reason                     | Owner    | Target Fix Date |
| ------------------- | -------------------------- | -------- | --------------- |
| checkout.spec.ts:45 | Hard wait causes flakiness | QA Team  | 2026-01-20      |
| profile.spec.ts:28  | Conditional flow control   | Dev Team | 2026-01-25      |
```

**Fix systematically:**

- Don't accumulate quarantined tests
- Set deadlines for fixes
- Review quarantine list weekly

### Migrate One Directory at a Time

**Large test suite?** Improve incrementally:

**Week 1:** `tests/auth/`

```
1. Run `test-review` on auth tests
2. Fix critical issues
3. Re-review
4. Mark directory as "modernized"
```

**Week 2:** `tests/api/`

```
Same process
```

**Week 3:** `tests/e2e/`

```
Same process
```

**Benefits:**

- Focused improvement
- Visible progress
- Team learns patterns
- Lower risk

### Document Migration Status

**Track which tests are modernized:**

```markdown
# Test Suite Status

| Directory          | Tests | Quality Score | Status         | Notes          |
| ------------------ | ----- | ------------- | -------------- | -------------- |
| tests/auth/        | 15    | 85/100        | ✅ Modernized  | Week 1 cleanup |
| tests/api/         | 32    | 78/100        | ⚠️ In Progress | Week 2         |
| tests/e2e/         | 28    | 62/100        | ❌ Legacy      | Week 3 planned |
| tests/integration/ | 12    | 45/100        | ❌ Legacy      | Week 4 planned |

**Legend:**
- ✅ Modernized: Quality >80, no critical issues
- ⚠️ In Progress: Active improvement
- ❌ Legacy: Not yet touched
```

## Common Brownfield Challenges

### "We Don't Know What Tests Cover"

**Problem:** No documentation, unclear what tests do.

**Solution:**

```
1. Run `trace` - TEA analyzes tests and maps to requirements
2. Review traceability matrix
3. Document findings
4. Use as baseline for improvement
```

TEA reverse-engineers test coverage even without documentation.

### "Tests Are Too Brittle to Touch"

**Problem:** Afraid to modify tests (might break them).

**Solution:**

```
1. Run tests, capture current behavior (baseline)
2. Make small improvement (fix one hard wait)
3. Run tests again
4. If they still pass, continue
5. If they fail, investigate why

Incremental changes = lower risk
```

### "No One Knows How to Run Tests"

**Problem:** Test documentation is outdated or missing.

**Solution:**

```
1. Document manually or ask TEA to help analyze test structure
2. Create tests/README.md with:
   - How to install dependencies
   - How to run tests (npx playwright test, npm test, etc.)
   - What each test directory contains
   - Common issues and troubleshooting
3. Commit documentation for team
```

**Note:** `framework` is for new test setup, not existing tests. For brownfield, document what you have.

### "Tests Take Hours to Run"

**Problem:** Full test suite takes 4+ hours.

**Solution:**

```
1. Configure parallel execution (shard tests across workers)
2. Add selective testing (run only affected tests on PR)
3. Run full suite nightly only
4. Optimize slow tests (remove hard waits, improve selectors)

Before: 4 hours sequential
After: 15 minutes with sharding + selective testing
```
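
The before/after numbers follow from simple arithmetic. A sketch of the estimate (the `estimatedWallTimeMinutes` helper is hypothetical; real CI speedups also depend on setup overhead and uneven shard sizes):

```typescript
// Back-of-the-envelope wall-time estimate: sequential minutes divided across
// CI shards, scaled by the fraction of tests a selective run keeps.
function estimatedWallTimeMinutes(
  sequentialMinutes: number,
  shards: number,
  selectedFraction = 1 // 1 = full suite, 0.25 = only affected tests
): number {
  return (sequentialMinutes * selectedFraction) / shards;
}
```

With a 240-minute sequential suite, 4 shards (`npx playwright test --shard=1/4` etc.) and selective testing keeping roughly a quarter of the tests, the estimate lands at 15 minutes, matching the figure above.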

**How `ci` helps:**

- Scaffolds CI configuration with parallel sharding examples
- Provides selective testing script templates
- Documents burn-in and optimization strategies
- But YOU configure workers, test selection, and optimization

**With Playwright Utils burn-in:**

- Smart selective testing based on git diff
- Volume control (run a percentage of affected tests)
- See [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md#burn-in)

### "We Have Tests But They Always Fail"

**Problem:** Tests are so flaky they're ignored.

**Solution:**

```
1. Run `test-review` to identify flakiness patterns
2. Fix top 5 flaky tests (biggest impact)
3. Quarantine remaining flaky tests
4. Re-enable as you fix them

Don't let perfect be the enemy of good
```

## Brownfield TEA Workflow

### Recommended Sequence

**1. Documentation (if needed):**

```
document-project
```

**2. Baseline (Phase 2):**

```
trace Phase 1 - Establish coverage baseline
test-review   - Establish quality baseline
```

**3. Planning (Phase 2-3):**

```
prd                        - Document requirements (if missing)
architecture               - Document architecture (if missing)
test-design (system-level) - Testability review
```

**4. Infrastructure (Phase 3):**

```
framework - Modernize test framework (if needed)
ci        - Setup or improve CI/CD
```

**5. Per Epic (Phase 4):**

```
test-design (epic-level) - Focus on regression hotspots
automate                 - Add missing tests
test-review              - Ensure quality
trace Phase 1            - Refresh coverage
```

**6. Release Gate:**

```
nfr-assess    - Validate NFRs (if enterprise)
trace Phase 2 - Gate decision
```

## Related Guides

**Workflow Guides:**

- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md) - Baseline coverage analysis
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Quality audit
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Fill coverage gaps
- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Risk assessment

**Customization:**

- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - Modernize tests with utilities

## Understanding the Concepts

- [Engagement Models](/docs/tea/explanation/engagement-models.md) - Brownfield model explained
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Fix flakiness
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Prioritize improvements

## Reference

- [TEA Command Reference](/docs/tea/reference/commands.md) - All 8 workflows
- [TEA Configuration](/docs/tea/reference/configuration.md) - Config options
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Testing patterns
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - TEA terminology

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

424
docs/tea/how-to/customization/enable-tea-mcp-enhancements.md
Normal file
@@ -0,0 +1,424 @@

---
title: "Enable TEA MCP Enhancements"
description: Configure Playwright MCP servers for live browser verification during TEA workflows
---

# Enable TEA MCP Enhancements

Configure Model Context Protocol (MCP) servers to enable live browser verification, exploratory mode, and recording mode in TEA workflows.

## What are MCP Enhancements?

MCP (Model Context Protocol) servers enable AI agents to interact with live browsers during test generation. This allows TEA to:

- **Explore UIs interactively** - Discover actual functionality through browser automation
- **Verify selectors** - Generate accurate locators from the real DOM
- **Validate behavior** - Confirm test scenarios against live applications
- **Debug visually** - Use trace viewer and screenshots during generation

## When to Use This

**For UI Testing:**

- Want exploratory mode in `test-design` (browser-based UI discovery)
- Want recording mode in `atdd` or `automate` (verify selectors with live browser)
- Want healing mode in `automate` (fix tests with visual debugging)
- Need accurate selectors from actual DOM
- Debugging complex UI interactions

**For API Testing:**

- Want healing mode in `automate` (analyze failures with trace data)
- Need to debug test failures (network responses, request/response data, timing)
- Want to inspect trace files (network traffic, errors, race conditions)

**For Both:**

- Visual debugging (trace viewer shows network + UI)
- Test failure analysis (MCP can run tests and extract errors)
- Understanding complex test failures (network + DOM together)

**Don't use if:**

- You don't have MCP servers configured

## Prerequisites

- BMad Method installed
- TEA agent available
- IDE with MCP support (Cursor, VS Code with Claude extension)
- Node.js v18 or later
- Playwright installed

## Available MCP Servers

**Two Playwright MCP servers** (actively maintained, continuously updated):

### 1. Playwright MCP - Browser Automation

**Command:** `npx @playwright/mcp@latest`

**Capabilities:**

- Navigate to URLs
- Click elements
- Fill forms
- Take screenshots
- Extract DOM information

**Best for:** Exploratory mode, recording mode

### 2. Playwright Test MCP - Test Runner

**Command:** `npx playwright run-test-mcp-server`

**Capabilities:**

- Run test files
- Analyze failures
- Extract error messages
- Show trace files

**Best for:** Healing mode, debugging

### Recommended: Configure Both

Both servers work together to provide full TEA MCP capabilities.

## Setup

### 1. Configure MCP Servers

Add to your IDE's MCP configuration:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "playwright-test": {
      "command": "npx",
      "args": ["playwright", "run-test-mcp-server"]
    }
  }
}
```

See [TEA Overview](/docs/tea/explanation/tea-overview.md#playwright-mcp-enhancements) for IDE-specific config locations.

### 2. Enable in BMAD

Answer "Yes" when prompted during installation, or set in config:

```yaml
# _bmad/bmm/config.yaml
tea_use_mcp_enhancements: true
```

### 3. Verify MCPs Running

Ensure your MCP servers are running in your IDE.

## How MCP Enhances TEA Workflows

### test-design: Exploratory Mode

**Without MCP:**

- TEA infers UI functionality from documentation
- Relies on your description of features
- May miss actual UI behavior

**With MCP:**

TEA can open a live browser to:
```
"Let me explore the profile page to understand the UI"

[TEA navigates to /profile]
[Takes screenshot]
[Extracts accessible elements]

"I see the profile has:
- Name field (editable)
- Email field (editable)
- Avatar upload button
- Save button
- Cancel button

I'll design tests for these interactions."
```

**Benefits:**

- Accurate test design based on actual UI
- Discovers functionality you might not describe
- Validates test scenarios are possible

### atdd: Recording Mode

**Without MCP:**

- TEA generates selectors from best practices
- TEA infers API patterns from documentation

**With MCP (Recording Mode):**

**For UI Tests:**
```
[TEA navigates to /login with live browser]
[Inspects actual form fields]

"I see:
- Email input has label 'Email Address' (not 'Email')
- Password input has label 'Your Password'
- Submit button has text 'Sign In' (not 'Login')

I'll use these exact selectors."
```

**For API Tests:**
```
[TEA analyzes trace files from test runs]
[Inspects network requests/responses]

"I see the API returns:
- POST /api/login → 200 with { token, userId }
- Response time: 150ms
- Required headers: Content-Type, Authorization

I'll validate these in tests."
```

**Benefits:**

- UI: Accurate selectors from real DOM
- API: Validated request/response patterns from trace
- Both: Tests work on first run

### automate: Healing + Recording Modes

**Without MCP:**

- TEA analyzes test code only
- Suggests fixes based on static analysis
- Generates tests from documentation/code

**With MCP:**

**Healing Mode (UI + API):**
```
[TEA opens trace file]
[Analyzes screenshots + network tab]

UI failures: "Button selector changed from 'Save' to 'Save Changes'"
API failures: "Response structure changed, expected {id} got {userId}"

[TEA makes fixes]
[Verifies with trace analysis]
```

**Recording Mode (UI + API):**
```
UI: [Inspects actual DOM, generates verified selectors]
API: [Analyzes network traffic, validates request/response patterns]

[Generates tests with verified patterns]
[Tests work on first run]
```

**Benefits:**

- Visual debugging + trace analysis (not just UI)
- Verified selectors (UI) + network patterns (API)
- Tests verified against actual application behavior

## Usage Examples

### Exploratory Mode Example

```
User: Run test-design

TEA: What feature are you designing tests for?

User: The checkout flow at https://shop.example.com

TEA: [With MCP] Let me explore the checkout flow
[Opens browser]
[Navigates to checkout]
[Takes screenshots]
[Identifies all interactive elements]

TEA: I've explored the checkout flow. I found:
- 4-step wizard (Cart → Shipping → Payment → Confirm)
- Shipping options (Standard, Express, Overnight)
- Payment methods (Credit Card, PayPal)
- Promo code field
- Order summary with real-time totals

Test design will cover all these scenarios.
```

### Recording Mode Example

```
User: Run atdd for user registration feature

TEA: [With MCP] Let me verify the registration form selectors
[Opens browser at /register]
[Inspects form]

TEA: I see the actual form has:
- "Full Name" label (not "Name")
- "Email Address" label (not "Email")
- "Create Password" label (not "Password")
- "I agree to Terms" checkbox
- "Create Account" button (not "Register")

Generating tests with verified selectors...
```

### Healing Mode Example

```
User: Run automate with healing mode

TEA: [With MCP] Running tests and checking for failures
[Runs test suite]
[Test fails: selector not found]

TEA: Test failed on selector '.submit-btn'
[Opens trace viewer]
[Sees button class changed to '.submit-button']

Fixing selector and verifying...
[Updates test]
[Re-runs with MCP]
[Test passes]

Updated test with corrected selector.
```

## Troubleshooting

### MCP Servers Not Running

**Problem:** TEA says MCP enhancements aren't available.

**Causes:**

1. MCP servers not configured in IDE
2. Config syntax error in JSON
3. IDE not restarted after config changes

**Solution:**
```bash
# Verify MCP config file exists
ls ~/.cursor/config.json

# Validate JSON syntax
cat ~/.cursor/config.json | python -m json.tool

# Restart IDE
# Cmd+Q (quit) then reopen
```

### Browser Doesn't Open

**Problem:** MCP enabled but browser never opens.

**Causes:**

1. Playwright browsers not installed
2. Headless mode enabled
3. MCP server crashed

**Solution:**
```bash
# Install browsers
npx playwright install

# Check MCP server logs (in IDE)
# Look for error messages

# Try manual MCP server
npx @playwright/mcp@latest
# Should start without errors
```

### TEA Doesn't Use MCP

**Problem:** `tea_use_mcp_enhancements: true` but TEA doesn't use browser.

**Causes:**

1. Config not saved
2. Workflow run before config update
3. MCP servers not running

**Solution:**
```bash
# Verify config
grep tea_use_mcp_enhancements _bmad/bmm/config.yaml
# Should show: tea_use_mcp_enhancements: true

# Restart IDE (reload MCP servers)

# Start fresh chat (TEA loads config at start)
```

### Selector Verification Fails

**Problem:** MCP can't find elements TEA is looking for.

**Causes:**

1. Page not fully loaded
2. Element behind modal/overlay
3. Element requires authentication

**Solution:**

TEA will handle this automatically:

- Wait for page load
- Dismiss modals if present
- Handle auth if needed

If the problem persists, give TEA more context:
```
"The element is behind a modal - dismiss the modal first"
"The page requires login - use credentials X"
```

### MCP Slows Down Workflows

**Problem:** Workflows take much longer with MCP enabled.

**Cause:** Browser automation adds overhead.

**Solution:**

Use MCP selectively:

- **Enable for:** Complex UIs, new projects, debugging
- **Disable for:** Simple features, well-known patterns, API-only testing

Toggle quickly:
```yaml
# For this feature (complex UI)
tea_use_mcp_enhancements: true

# For next feature (simple API)
tea_use_mcp_enhancements: false
```

## Related Guides

**Getting Started:**

- [TEA Lite Quickstart Tutorial](/docs/tea/tutorials/tea-lite-quickstart.md) - Learn TEA basics first

**Workflow Guides (MCP-Enhanced):**

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Exploratory mode with browser
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Recording mode for accurate selectors
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Healing mode for debugging

**Other Customization:**

- [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md) - Production-ready utilities

## Understanding the Concepts

- [TEA Overview](/docs/tea/explanation/tea-overview.md) - MCP enhancements in lifecycle
- [Engagement Models](/docs/tea/explanation/engagement-models.md) - When to use MCP enhancements

## Reference

- [TEA Configuration](/docs/tea/reference/configuration.md) - tea_use_mcp_enhancements option
- [TEA Command Reference](/docs/tea/reference/commands.md) - MCP-enhanced workflows
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - MCP Enhancements term

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

813
docs/tea/how-to/customization/integrate-playwright-utils.md
Normal file
@@ -0,0 +1,813 @@

---
title: "Integrate Playwright Utils with TEA"
description: Add production-ready fixtures and utilities to your TEA-generated tests
---

# Integrate Playwright Utils with TEA

Integrate `@seontechnologies/playwright-utils` with TEA to get production-ready fixtures, utilities, and patterns in your test suite.

## What is Playwright Utils?

A production-ready utility library that provides:

- Typed API request helper
- Authentication session management
- Network recording and replay (HAR)
- Network request interception
- Async polling (recurse)
- Structured logging
- File validation (CSV, PDF, XLSX, ZIP)
- Burn-in testing utilities
- Network error monitoring

**Repository:** [https://github.com/seontechnologies/playwright-utils](https://github.com/seontechnologies/playwright-utils)

**npm Package:** `@seontechnologies/playwright-utils`

## When to Use This

- You want production-ready fixtures (not DIY)
- Your team benefits from standardized patterns
- You need utilities like API testing, auth handling, network mocking
- You want TEA to generate tests using these utilities
- You're building reusable test infrastructure

**Don't use if:**

- You're just learning testing (keep it simple first)
- You have your own fixture library
- You don't need the utilities

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework setup complete (Playwright)
- Node.js v18 or later

**Note:** Playwright Utils is for Playwright only (not Cypress).

## Installation

### Step 1: Install Package

```bash
npm install -D @seontechnologies/playwright-utils
```

### Step 2: Enable in TEA Config

Edit `_bmad/bmm/config.yaml`:

```yaml
tea_use_playwright_utils: true
```

**Note:** If you enabled this during BMad installation, it's already set.

### Step 3: Verify Installation

```bash
# Check package installed
npm list @seontechnologies/playwright-utils

# Check TEA config
grep tea_use_playwright_utils _bmad/bmm/config.yaml
```

Should show:
```
@seontechnologies/playwright-utils@2.x.x
tea_use_playwright_utils: true
```

## What Changes When Enabled

### `framework` Workflow

**Vanilla Playwright:**
```typescript
// Basic Playwright fixtures only
import { test, expect } from '@playwright/test';

test('api test', async ({ request }) => {
  const response = await request.get('/api/users');
  const users = await response.json();
  expect(response.status()).toBe(200);
});
```

**With Playwright Utils (Combined Fixtures):**
```typescript
// All utilities available via single import
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('api test', async ({ apiRequest, authToken, log }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users',
    headers: { Authorization: `Bearer ${authToken}` }
  });

  log.info('Fetched users', body);
  expect(status).toBe(200);
});
```

**With Playwright Utils (Selective Merge):**
```typescript
import { mergeTests, expect } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { test as logFixture } from '@seontechnologies/playwright-utils/log/fixtures';

export const test = mergeTests(apiRequestFixture, logFixture);
export { expect };

test('api test', async ({ apiRequest, log }) => {
  log.info('Fetching users');
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  });
  expect(status).toBe(200);
});
```

### `atdd` and `automate` Workflows

**Without Playwright Utils:**
```typescript
// Manual API calls
test('should fetch profile', async ({ page, request }) => {
  const response = await request.get('/api/profile');
  const profile = await response.json();
  // Manual parsing and validation
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';

// ProfileSchema is a zod schema defined elsewhere in the suite
test('should fetch profile', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/profile' // 'path' not 'url'
  }).validateSchema(ProfileSchema); // Chained validation

  expect(status).toBe(200);
  // body is type-safe: { id: string, name: string, email: string }
});
```

### `test-review` Workflow

**Without Playwright Utils:**

Reviews against generic Playwright patterns

**With Playwright Utils:**

Reviews against playwright-utils best practices:

- Fixture composition patterns
- Utility usage (apiRequest, authSession, etc.)
- Network-first patterns
- Structured logging

### `ci` Workflow

**Without Playwright Utils:**

- Parallel sharding
- Burn-in loops (basic shell scripts)
- CI triggers (PR, push, schedule)
- Artifact collection

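A "basic shell script" burn-in loop is just repeated execution with fail-fast; a minimal sketch (the helper name is ours, not from the library):

```shell
# burn_in N CMD...: run CMD N times, stopping at the first failure.
burn_in() {
  n="$1"; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" || return 1
    i=$((i + 1))
  done
}

# In CI you might call e.g.: burn_in 10 npx playwright test changed.spec.ts
burn_in 3 true && echo "stable"
```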
**With Playwright Utils:**

Enhanced with smart testing:

- Burn-in utility (git diff-based, volume control)
- Selective testing (skip config/docs/types changes)
- Test prioritization by file changes

## Available Utilities

### api-request

Typed HTTP client with schema validation.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/api-request.html>

**Why Use This?**

| Vanilla Playwright | api-request Utility |
|-------------------|---------------------|
| Manual `await response.json()` | Automatic JSON parsing |
| `response.status()` + separate body parsing | Returns `{ status, body }` structure |
| No built-in retry | Automatic retry for 5xx errors |
| No schema validation | Single-line `.validateSchema()` |
| Verbose status checking | Clean destructuring |

**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { z } from 'zod';

const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

test('should create user', async ({ apiRequest }) => {
  const { status, body } = await apiRequest({
    method: 'POST',
    path: '/api/users', // Note: 'path' not 'url'
    body: { name: 'Test User', email: 'test@example.com' } // Note: 'body' not 'data'
  }).validateSchema(UserSchema); // Chained method (can await separately if needed)

  expect(status).toBe(201);
  expect(body.id).toBeDefined();
  expect(body.email).toBe('test@example.com');
});
```

**Benefits:**

- Returns `{ status, body }` structure
- Schema validation with `.validateSchema()` chained method
- Automatic retry for 5xx errors
- Type-safe response body

### auth-session

Authentication session management with token persistence.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/auth-session.html>

**Why Use This?**

| Vanilla Playwright Auth | auth-session |
|------------------------|--------------|
| Re-authenticate every test run (slow) | Authenticate once, persist to disk |
| Single user per setup | Multi-user support (roles, accounts) |
| No token expiration handling | Automatic token renewal |
| Manual session management | Provider pattern (flexible auth) |

**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures';
import { expect } from '@playwright/test';

test('should access protected route', async ({ page, authToken }) => {
  // authToken automatically fetched and persisted
  // No manual login needed - handled by fixture

  await page.goto('/dashboard');
  await expect(page).toHaveURL('/dashboard');

  // Token is reused across tests (persisted to disk)
});
```

**Configuration required** (see auth-session docs for provider setup):
```typescript
// global-setup.ts
import { authStorageInit, setAuthProvider, authGlobalInit } from '@seontechnologies/playwright-utils/auth-session';

async function globalSetup() {
  authStorageInit();
  setAuthProvider(myCustomProvider); // Define your auth mechanism
  await authGlobalInit(); // Fetch token once
}

export default globalSetup;
```

**Benefits:**

- Token fetched once, reused across all tests
- Persisted to disk (faster subsequent runs)
- Multi-user support via `authOptions.userIdentifier`
- Automatic token renewal if expired

### network-recorder

Record and replay network traffic (HAR) for offline testing.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/network-recorder.html>

**Why Use This?**

| Vanilla Playwright HAR | network-recorder |
|------------------------|------------------|
| Manual `routeFromHAR()` configuration | Automatic HAR management with `PW_NET_MODE` |
| Separate record/playback test files | Same test, switch env var |
| No CRUD detection | Stateful mocking (POST/PUT/DELETE work) |
| Manual HAR file paths | Auto-organized by test name |

**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures';

// Record mode: Set environment variable
process.env.PW_NET_MODE = 'record';

test('should work with recorded traffic', async ({ page, context, networkRecorder }) => {
  // Setup recorder (records or replays based on PW_NET_MODE)
  await networkRecorder.setup(context);

  // Your normal test code
  await page.goto('/dashboard');
  await page.click('#add-item');

  // First run (record): Saves traffic to HAR file
  // Subsequent runs (playback): Uses HAR file, no backend needed
});
```

**Switch modes:**
```bash
# Record traffic
PW_NET_MODE=record npx playwright test

# Playback traffic (offline)
PW_NET_MODE=playback npx playwright test
```

**Benefits:**

- Offline testing (no backend needed)
- Deterministic responses (same every time)
- Faster execution (no network latency)
- Stateful mocking (CRUD operations work)

### intercept-network-call

Spy or stub network requests with automatic JSON parsing.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/intercept-network-call.html>

**Why Use This?**

| Vanilla Playwright | interceptNetworkCall |
|-------------------|----------------------|
| Route setup + response waiting (separate steps) | Single declarative call |
| Manual `await response.json()` | Automatic JSON parsing (`responseJson`) |
| Complex filter predicates | Simple glob patterns (`**/api/**`) |
| Verbose syntax | Concise, readable API |

**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should handle API errors', async ({ page, interceptNetworkCall }) => {
  // Stub API to return error (set up BEFORE navigation)
  const profileCall = interceptNetworkCall({
    method: 'GET',
    url: '**/api/profile',
    fulfillResponse: {
      status: 500,
      body: { error: 'Server error' }
    }
  });

  await page.goto('/profile');

  // Wait for the intercepted response
  const { status, responseJson } = await profileCall;

  expect(status).toBe(500);
  expect(responseJson.error).toBe('Server error');
  await expect(page.getByText('Server error occurred')).toBeVisible();
});
```

**Benefits:**

- Automatic JSON parsing (`responseJson` ready to use)
- Spy mode (observe real traffic) or stub mode (mock responses)
- Glob pattern URL matching
- Returns promise with `{ status, responseJson, requestJson }`

### recurse

Async polling for eventual consistency (Cypress-style).

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/recurse.html>

**Why Use This?**

| Manual Polling | recurse Utility |
|----------------|-----------------|
| `while` loops with `waitForTimeout` | Smart polling with exponential backoff |
| Hard-coded retry logic | Configurable timeout/interval |
| No logging visibility | Optional logging with custom messages |
| Verbose, error-prone | Clean, readable API |

**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';

test('should wait for async job completion', async ({ apiRequest, recurse }) => {
  // Start async job
  const { body: job } = await apiRequest({
    method: 'POST',
    path: '/api/jobs'
  });

  // Poll until complete (smart waiting)
  const completed = await recurse(
    () => apiRequest({ method: 'GET', path: `/api/jobs/${job.id}` }),
    (result) => result.body.status === 'completed',
    {
      timeout: 30000,
      interval: 2000,
      log: 'Waiting for job to complete'
    }
  );

  expect(completed.body.status).toBe('completed');
});
```

**Benefits:**

- Smart polling with configurable interval
- Handles async jobs, background tasks
- Optional logging for debugging
- Better than hard waits or manual polling loops

### log

Structured logging that integrates with Playwright reports.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/log.html>

**Why Use This?**

| Console.log / print | log Utility |
|--------------------|-------------|
| Not in test reports | Integrated with Playwright reports |
| No step visualization | `.step()` shows in Playwright UI |
| Manual object formatting | Logs objects seamlessly |
| No structured output | JSON artifacts for debugging |

**Usage:**
```typescript
import { log } from '@seontechnologies/playwright-utils';
import { test, expect } from '@playwright/test';

test('should login', async ({ page }) => {
  await log.info('Starting login test');

  await page.goto('/login');
  await log.step('Navigated to login page'); // Shows in Playwright UI

  await page.getByLabel('Email').fill('test@example.com');
  await log.debug('Filled email field');

  await log.success('Login completed');
  // Logs appear in test output and Playwright reports
});
```

**Benefits:**

- Direct import (no fixture needed for basic usage)
- Structured logs in test reports
- `.step()` shows in Playwright UI
- Logs objects seamlessly (no special handling needed)
- Trace test execution

### file-utils
|
||||
|
||||
Read and validate CSV, PDF, XLSX, ZIP files.
|
||||
|
||||
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/file-utils.html>
|
||||
|
||||
**Why Use This?**
|
||||
|
||||
| Vanilla Playwright | file-utils |
|
||||
|-------------------|------------|
|
||||
| ~80 lines per CSV flow | ~10 lines end-to-end |
|
||||
| Manual download event handling | `handleDownload()` encapsulates all |
|
||||
| External parsing libraries | Auto-parsing (CSV, XLSX, PDF, ZIP) |
|
||||
| No validation helpers | Built-in validation (headers, row count) |
|
||||
|
||||
**Usage:**
|
||||
```typescript
import { handleDownload, readCSV } from '@seontechnologies/playwright-utils/file-utils';
import { test, expect } from '@playwright/test';
import path from 'node:path';

const DOWNLOAD_DIR = path.join(__dirname, '../downloads');

test('should export valid CSV', async ({ page }) => {
  // Handle download and get file path
  const downloadPath = await handleDownload({
    page,
    downloadDir: DOWNLOAD_DIR,
    trigger: () => page.click('button:has-text("Export")')
  });

  // Read and parse CSV
  const csvResult = await readCSV({ filePath: downloadPath });
  const { data, headers } = csvResult.content;

  // Validate structure
  expect(headers).toEqual(['Name', 'Email', 'Status']);
  expect(data.length).toBeGreaterThan(0);
  expect(data[0]).toMatchObject({
    Name: expect.any(String),
    Email: expect.any(String),
    Status: expect.any(String)
  });
});
```

**Benefits:**
- Handles downloads automatically
- Auto-parses CSV, XLSX, PDF, ZIP
- Type-safe access to parsed data
- Returns structured `{ headers, data }`

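To make the `{ headers, data }` shape concrete, here is a naive CSV-to-objects sketch. It is illustrative only: `parseCsv` is a hypothetical name, and unlike the real parser it has no quoting or escaping support.

```typescript
// Naive sketch of CSV parsing into { headers, data } (illustrative only).
// The first line becomes the header list; every other line becomes a row
// object keyed by header name. Real CSV needs quoting/escaping handling.
function parseCsv(text: string): { headers: string[]; data: Record<string, string>[] } {
  const [headerLine, ...rows] = text.trim().split('\n');
  const headers = headerLine.split(',');
  const data = rows.map((row) => {
    const cells = row.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, cells[i] ?? ''] as [string, string]));
  });
  return { headers, data };
}
```
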
### burn-in

Smart test selection with git diff analysis for CI optimization.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/burn-in.html>

**Why Use This?**

| Playwright `--only-changed` | burn-in Utility |
|-----------------------------|-----------------|
| Config changes trigger all tests | Smart filtering (skip configs, types, docs) |
| All or nothing | Volume control (run percentage) |
| No customization | Custom dependency analysis |
| Slow CI on minor changes | Fast CI with intelligent selection |

**Usage:**
```typescript
// scripts/burn-in-changed.ts
import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in';

async function main() {
  await runBurnIn({
    configPath: 'playwright.burn-in.config.ts',
    baseBranch: 'main'
  });
}

main().catch(console.error);
```

**Config:**
```typescript
// playwright.burn-in.config.ts
import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in';

const config: BurnInConfig = {
  skipBurnInPatterns: [
    '**/config/**',
    '**/*.md',
    '**/*types*'
  ],
  burnInTestPercentage: 0.3,
  burnIn: {
    repeatEach: 3,
    retries: 1
  }
};

export default config;
```

**Package script:**
```json
{
  "scripts": {
    "test:burn-in": "tsx scripts/burn-in-changed.ts"
  }
}
```

**Benefits:**
- **Ensure flake-free tests upfront** - catch flaky tests before they land on main
- Smart filtering (skip config, types, docs changes)
- Volume control (run percentage of affected tests)
- Git diff-based test selection
- Faster CI feedback

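The filtering and volume-control ideas can be illustrated with a tiny selection function. This is a hypothetical sketch, not the real utility: substring checks stand in for glob matching, and there is no dependency analysis.

```typescript
// Illustrative sketch of burn-in-style selection: drop changed files that
// match skip patterns, then keep a fraction of the remaining files.
// Names and logic are simplified stand-ins for the real utility.
function selectBurnInTests(
  changedFiles: string[],
  skipSubstrings: string[],
  percentage: number
): string[] {
  const relevant = changedFiles.filter(
    (file) => !skipSubstrings.some((skip) => file.includes(skip))
  );
  if (relevant.length === 0) return [];
  const count = Math.max(1, Math.ceil(relevant.length * percentage));
  return relevant.slice(0, count);
}
```

Under this model, a `burnInTestPercentage` of 0.3 would burn in roughly 3 of 10 affected tests rather than all of them.
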
### network-error-monitor

Automatically detect HTTP 4xx/5xx errors during tests.

**Official Docs:** <https://seontechnologies.github.io/playwright-utils/network-error-monitor.html>

**Why Use This?**

| Vanilla Playwright | network-error-monitor |
|-------------------|----------------------|
| UI passes, backend 500 ignored | Auto-fails on any 4xx/5xx |
| Manual error checking | Zero boilerplate (auto-enabled) |
| Silent failures slip through | Acts like Sentry for tests |
| No domino effect prevention | Limits cascading failures |

**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// That's it! Network monitoring is automatically enabled
test('should not have API errors', async ({ page }) => {
  await page.goto('/dashboard');
  await page.click('button');

  // Test fails automatically if any HTTP 4xx/5xx errors occur
  // Error message shows: "Network errors detected: 2 request(s) failed"
  //   GET 500 https://api.example.com/users
  //   POST 503 https://api.example.com/metrics
});
```

**Opt-out for validation tests:**
```typescript
import { expect } from '@playwright/test';

// When testing error scenarios, opt out with an annotation
test('should show error message on 404',
  { annotation: [{ type: 'skipNetworkMonitoring' }] }, // Array format
  async ({ page }) => {
    await page.goto('/invalid-page'); // Will 404
    await expect(page.getByText('Page not found')).toBeVisible();
    // Test won't fail on 404 because of the annotation
  }
);

// Or opt out an entire describe block
test.describe('error handling',
  { annotation: [{ type: 'skipNetworkMonitoring' }] },
  () => {
    test('handles 404', async ({ page }) => {
      // Monitoring disabled for all tests in the block
    });
  }
);
```

**Benefits:**
- Auto-enabled (zero setup)
- Catches silent backend failures (500, 503, 504)
- **Prevents domino effect** (limits cascading failures from one bad endpoint)
- Opt-out with annotations for validation tests
- Structured error reporting (JSON artifacts)

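Conceptually, the monitor subscribes to response events, collects any 4xx/5xx, and fails the test at teardown. A dependency-free sketch of that idea follows; the plain `onResponse` callback is a stand-in for Playwright's `page.on('response')` wiring, and all names are hypothetical.

```typescript
// Sketch of the monitor's core idea (not the real implementation):
// record every response with status >= 400, then throw at teardown
// if any were seen, mimicking the utility's error message shape.
type ResponseInfo = { method: string; status: number; url: string };

function createErrorCollector() {
  const errors: ResponseInfo[] = [];
  return {
    onResponse(res: ResponseInfo) {
      if (res.status >= 400) errors.push(res);
    },
    assertClean() {
      if (errors.length > 0) {
        const lines = errors.map((e) => `${e.method} ${e.status} ${e.url}`);
        throw new Error(
          `Network errors detected: ${errors.length} request(s) failed\n` + lines.join('\n')
        );
      }
    }
  };
}
```
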
## Fixture Composition

**Option 1: Use Package's Combined Fixtures (Simplest)**

```typescript
// Import all utilities at once
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { log } from '@seontechnologies/playwright-utils';
import { expect } from '@playwright/test';

test('api test', async ({ apiRequest, interceptNetworkCall }) => {
  await log.info('Fetching users');

  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  });

  expect(status).toBe(200);
});
```

**Option 2: Create Custom Merged Fixtures (Selective)**

**File 1: support/merged-fixtures.ts**
```typescript
import { test as base, mergeTests } from '@playwright/test';
import { test as apiRequest } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { test as interceptNetworkCall } from '@seontechnologies/playwright-utils/intercept-network-call/fixtures';
import { test as networkErrorMonitor } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
import { log } from '@seontechnologies/playwright-utils';

// Merge only what you need
export const test = mergeTests(
  base,
  apiRequest,
  interceptNetworkCall,
  networkErrorMonitor
);

export const expect = base.expect;
export { log };
```

**File 2: tests/api/users.spec.ts**
```typescript
import { test, expect, log } from '../support/merged-fixtures';

test('api test', async ({ apiRequest, interceptNetworkCall }) => {
  await log.info('Fetching users');

  const { status, body } = await apiRequest({
    method: 'GET',
    path: '/api/users'
  });

  expect(status).toBe(200);
});
```

**Contrast:**
- Option 1: All utilities available, zero setup
- Option 2: Pick utilities you need, one central file

**See working examples:** <https://github.com/seontechnologies/playwright-utils/tree/main/playwright/support>

## Troubleshooting

### Import Errors

**Problem:** Cannot find module '@seontechnologies/playwright-utils/api-request'

**Solution:**
```bash
# Verify package installed
npm list @seontechnologies/playwright-utils

# Check package.json has the correct version:
# "@seontechnologies/playwright-utils": "^2.0.0"

# Reinstall if needed
npm install -D @seontechnologies/playwright-utils
```

### TEA Not Using Utilities

**Problem:** TEA generates tests without playwright-utils.

**Causes:**
1. Config not set: `tea_use_playwright_utils: false`
2. Workflow run before config change
3. Package not installed

**Solution:**
```bash
# Check config
grep tea_use_playwright_utils _bmad/bmm/config.yaml

# Should show: tea_use_playwright_utils: true

# Start fresh chat (TEA loads config at start)
```

### Type Errors with apiRequest

**Problem:** TypeScript errors on apiRequest response.

**Cause:** No schema validation.

**Solution:**
```typescript
// Add Zod schema for type safety
import { z } from 'zod';

const ProfileSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

const { status, body } = await apiRequest({
  method: 'GET',
  path: '/api/profile' // 'path' not 'url'
}).validateSchema(ProfileSchema); // Chained method

expect(status).toBe(200);
// body is typed as { id: string, name: string, email: string }
```

## Migration Guide

## Related Guides

**Getting Started:**
- [TEA Lite Quickstart Tutorial](/docs/tea/tutorials/tea-lite-quickstart.md) - Learn TEA basics
- [How to Set Up Test Framework](/docs/tea/how-to/workflows/setup-test-framework.md) - Initial framework setup

**Workflow Guides:**
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Generate tests with utilities
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Expand coverage with utilities
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Review against PW-Utils patterns

**Other Customization:**
- [Enable MCP Enhancements](/docs/tea/how-to/customization/enable-tea-mcp-enhancements.md) - Live browser verification

## Understanding the Concepts

- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - **Why Playwright Utils matters** (part of TEA's three-part solution)
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Pure function → fixture pattern
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Network utilities explained
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - Patterns PW-Utils enforces

## Reference

- [TEA Configuration](/docs/tea/reference/configuration.md) - tea_use_playwright_utils option
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Playwright Utils fragments
- [Glossary](/docs/tea/glossary/index.md#test-architect-tea-concepts) - Playwright Utils term
- [Official PW-Utils Docs](https://seontechnologies.github.io/playwright-utils/) - Complete API reference

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

docs/tea/how-to/workflows/run-atdd.md (new file, 436 lines)

---
title: "How to Run ATDD with TEA"
description: Generate failing acceptance tests before implementation using TEA's ATDD workflow
---

# How to Run ATDD with TEA

Use TEA's `atdd` workflow to generate failing acceptance tests BEFORE implementation. This is the TDD (Test-Driven Development) red phase - tests fail first, guide development, then pass.

## When to Use This

- You're about to implement a NEW feature (feature doesn't exist yet)
- You want to follow TDD workflow (red → green → refactor)
- You want tests to guide your implementation
- You're practicing acceptance test-driven development

**Don't use this if:**
- Feature already exists (use `automate` instead)
- You want tests that pass immediately

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework setup complete (run `framework` if needed)
- Story or feature defined with acceptance criteria

**Note:** This guide uses Playwright examples. If using Cypress, commands and syntax will differ (e.g., `cy.get()` instead of `page.locator()`).

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
tea
```

### 2. Run the ATDD Workflow

```
atdd
```

### 3. Provide Context

TEA will ask for:

**Story/Feature Details:**
```
We're adding a user profile page where users can:
- View their profile information
- Edit their name and email
- Upload a profile picture
- Save changes with validation
```

**Acceptance Criteria:**
```
Given I'm logged in
When I navigate to /profile
Then I see my current name and email

Given I'm on the profile page
When I click "Edit Profile"
Then I can modify my name and email

Given I've edited my profile
When I click "Save"
Then my changes are persisted
And I see a success message

Given I upload an invalid file type
When I try to save
Then I see an error message
And changes are not saved
```

**Reference Documents** (optional):
- Point to your story file
- Reference PRD or tech spec
- Link to test design (if you ran `test-design` first)

### 4. Specify Test Levels

TEA will ask what test levels to generate:

**Options:**
- E2E tests (browser-based, full user journey)
- API tests (backend only, faster)
- Component tests (UI components in isolation)
- Mix of levels (see [API Tests First, E2E Later](#api-tests-first-e2e-later) tip)

### Component Testing by Framework

TEA generates component tests using framework-appropriate tools:

| Your Framework | Component Testing Tool |
| -------------- | ------------------------------------------- |
| **Cypress** | Cypress Component Testing (`*.cy.tsx`) |
| **Playwright** | Vitest + React Testing Library (`*.test.tsx`) |

**Example response:**
```
Generate:
- API tests for profile CRUD operations
- E2E tests for the complete profile editing flow
- Component tests for ProfileForm validation (if using Cypress or Vitest)
- Focus on P0 and P1 scenarios
```

### 5. Review Generated Tests

TEA generates **failing tests** in appropriate directories:

#### API Tests (`tests/api/profile.spec.ts`):

**Vanilla Playwright:**
```typescript
import { test, expect } from '@playwright/test';

test.describe('Profile API', () => {
  test('should fetch user profile', async ({ request }) => {
    const response = await request.get('/api/profile');

    expect(response.status()).toBe(200);
    const profile = await response.json();
    expect(profile).toHaveProperty('name');
    expect(profile).toHaveProperty('email');
    expect(profile).toHaveProperty('avatarUrl');
  });

  test('should update user profile', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      data: {
        name: 'Updated Name',
        email: 'updated@example.com'
      }
    });

    expect(response.status()).toBe(200);
    const updated = await response.json();
    expect(updated.name).toBe('Updated Name');
    expect(updated.email).toBe('updated@example.com');
  });

  test('should validate email format', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      data: {
        email: 'invalid-email'
      }
    });

    expect(response.status()).toBe(400);
    const error = await response.json();
    expect(error.message).toContain('Invalid email format');
  });
});
```

**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { z } from 'zod';

const ProfileSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  avatarUrl: z.string().url()
});

test.describe('Profile API', () => {
  test('should fetch user profile', async ({ apiRequest }) => {
    const { status, body } = await apiRequest({
      method: 'GET',
      path: '/api/profile'
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    // Schema already validated, type-safe access
    expect(body.name).toBeDefined();
    expect(body.email).toContain('@');
  });

  test('should update user profile', async ({ apiRequest }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: {
        name: 'Updated Name',
        email: 'updated@example.com'
      }
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    expect(body.name).toBe('Updated Name');
    expect(body.email).toBe('updated@example.com');
  });

  test('should validate email format', async ({ apiRequest }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: { email: 'invalid-email' }
    });

    expect(status).toBe(400);
    expect(body.message).toContain('Invalid email format');
  });
});
```

**Key Benefits:**
- Returns `{ status, body }` (cleaner than `response.status()` + `await response.json()`)
- Automatic schema validation with Zod
- Type-safe response bodies
- Automatic retry for 5xx errors
- Less boilerplate

#### E2E Tests (`tests/e2e/profile.spec.ts`):

```typescript
import { test, expect } from '@playwright/test';

test('should edit and save profile', async ({ page }) => {
  // Login first
  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Navigate to profile
  await page.goto('/profile');

  // Edit profile
  await page.getByRole('button', { name: 'Edit Profile' }).click();
  await page.getByLabel('Name').fill('Updated Name');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify success
  await expect(page.getByText('Profile updated')).toBeVisible();
});
```

TEA generates additional E2E tests for display, validation errors, etc., based on acceptance criteria.

#### Implementation Checklist

TEA also provides an implementation checklist:

```markdown
## Implementation Checklist

### Backend
- [ ] Create `GET /api/profile` endpoint
- [ ] Create `PATCH /api/profile` endpoint
- [ ] Add email validation middleware
- [ ] Add profile picture upload handling
- [ ] Write API unit tests

### Frontend
- [ ] Create ProfilePage component
- [ ] Implement profile form with validation
- [ ] Add file upload for avatar
- [ ] Handle API errors gracefully
- [ ] Add loading states

### Tests
- [x] API tests generated (failing)
- [x] E2E tests generated (failing)
- [ ] Run tests after implementation (should pass)
```

### 6. Verify Tests Fail

This is the TDD red phase - tests MUST fail before implementation.

**For Playwright:**
```bash
npx playwright test
```

**For Cypress:**
```bash
npx cypress run
```

Expected output:
```
Running 6 tests using 1 worker

✗ tests/api/profile.spec.ts:3:3 › should fetch user profile
  Error: expect(received).toBe(expected)
  Expected: 200
  Received: 404

✗ tests/e2e/profile.spec.ts:10:3 › should display current profile information
  Error: page.goto: net::ERR_ABORTED
```

**All tests should fail!** This confirms:
- Feature doesn't exist yet
- Tests will guide implementation
- You have clear success criteria

### 7. Implement the Feature

Now implement the feature following the test guidance:

1. Start with API tests (backend first)
2. Make API tests pass
3. Move to E2E tests (frontend)
4. Make E2E tests pass
5. Refactor with confidence (tests protect you)

### 8. Verify Tests Pass

After implementation, run your test suite.

**For Playwright:**
```bash
npx playwright test
```

**For Cypress:**
```bash
npx cypress run
```

Expected output:
```
Running 6 tests using 1 worker

✓ tests/api/profile.spec.ts:3:3 › should fetch user profile (850ms)
✓ tests/api/profile.spec.ts:15:3 › should update user profile (1.2s)
✓ tests/api/profile.spec.ts:30:3 › should validate email format (650ms)
✓ tests/e2e/profile.spec.ts:10:3 › should display current profile (2.1s)
✓ tests/e2e/profile.spec.ts:18:3 › should edit and save profile (3.2s)
✓ tests/e2e/profile.spec.ts:35:3 › should show validation error (1.8s)

6 passed (9.8s)
```

**Green!** You've completed the TDD cycle: red → green → refactor.

## What You Get

### Failing Tests
- API tests for backend endpoints
- E2E tests for user workflows
- Component tests (if requested)
- All tests fail initially (red phase)

### Implementation Guidance
- Clear checklist of what to build
- Acceptance criteria translated to assertions
- Edge cases and error scenarios identified

### TDD Workflow Support
- Tests guide implementation
- Confidence to refactor
- Living documentation of features

## Tips

### Start with Test Design

Run `test-design` before `atdd` for better results:

```
test-design   # Risk assessment and priorities
atdd          # Generate tests based on design
```

### MCP Enhancements (Optional)

If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `atdd`.

**Note:** ATDD is for features that don't exist yet, so recording mode (verify selectors with live UI) only applies if you have skeleton/mockup UI already implemented. For typical ATDD (no UI yet), TEA infers selectors from best practices.

See [Enable MCP Enhancements](/docs/tea/how-to/customization/enable-tea-mcp-enhancements.md) for setup.

### Focus on P0/P1 Scenarios

Don't generate tests for everything at once:

```
Generate tests for:
- P0: Critical path (happy path)
- P1: High value (validation, errors)

Skip P2/P3 for now - add later with automate
```

### API Tests First, E2E Later

Recommended order:
1. Generate API tests with `atdd`
2. Implement backend (make API tests pass)
3. Generate E2E tests with `atdd` (or `automate`)
4. Implement frontend (make E2E tests pass)

This "outside-in" approach is faster and more reliable.

### Keep Tests Deterministic

TEA generates deterministic tests by default:
- No hard waits (`waitForTimeout`)
- Network-first patterns (wait for responses)
- Explicit assertions (no conditionals)

Don't modify these patterns - they prevent flakiness!

## Related Guides

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Plan before generating
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Tests for existing features
- [How to Set Up Test Framework](/docs/tea/how-to/workflows/setup-test-framework.md) - Initial setup

## Understanding the Concepts

- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational)
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Why P0 vs P3 matters
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Avoiding flakiness

## Reference

- [Command: *atdd](/docs/tea/reference/commands.md#atdd) - Full command reference
- [TEA Configuration](/docs/tea/reference/configuration.md) - MCP and Playwright Utils options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

docs/tea/how-to/workflows/run-automate.md (new file, 653 lines)

---
title: "How to Run Automate with TEA"
description: Expand test automation coverage after implementation using TEA's automate workflow
---

# How to Run Automate with TEA

Use TEA's `automate` workflow to generate comprehensive tests for existing features. Unlike `*atdd`, these tests pass immediately because the feature already exists.

## When to Use This

- Feature already exists and works
- Want to add test coverage to existing code
- Need tests that pass immediately
- Expanding existing test suite
- Adding tests to legacy code

**Don't use this if:**
- Feature doesn't exist yet (use `atdd` instead)
- Want failing tests to guide development (use `atdd` for TDD)

## Prerequisites

- BMad Method installed
- TEA agent available
- Test framework setup complete (run `framework` if needed)
- Feature implemented and working

**Note:** This guide uses Playwright examples. If using Cypress, commands and syntax will differ.

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
tea
```

### 2. Run the Automate Workflow

```
automate
```

### 3. Provide Context

TEA will ask for context about what you're testing.

#### Option A: BMad-Integrated Mode (Recommended)

If you have BMad artifacts (stories, test designs, PRDs):

**What are you testing?**
```
I'm testing the user profile feature we just implemented.
Story: story-profile-management.md
Test Design: test-design-epic-1.md
```

**Reference documents:**
- Story file with acceptance criteria
- Test design document (if available)
- PRD sections relevant to this feature
- Tech spec (if available)

**Existing tests:**
```
We have basic tests in tests/e2e/profile-view.spec.ts
Avoid duplicating that coverage
```

TEA will analyze your artifacts and generate comprehensive tests that:
- Cover acceptance criteria from the story
- Follow priorities from test design (P0 → P1 → P2)
- Avoid duplicating existing tests
- Include edge cases and error scenarios

#### Option B: Standalone Mode

If you're using TEA Solo or don't have BMad artifacts:

**What are you testing?**
```
TodoMVC React application at https://todomvc.com/examples/react/dist/
Features: Create todos, mark as complete, filter by status, delete todos
```

**Specific scenarios to cover:**
```
- Creating todos (happy path)
- Marking todos as complete/incomplete
- Filtering (All, Active, Completed)
- Deleting todos
- Edge cases (empty input, long text)
```

TEA will analyze the application and generate tests based on your description.

### 4. Specify Test Levels

TEA will ask which test levels to generate:

**Options:**
- **E2E tests** - Full browser-based user workflows
- **API tests** - Backend endpoint testing (faster, more reliable)
- **Component tests** - UI component testing in isolation (framework-dependent)
- **Mix** - Combination of levels (recommended)

**Example response:**
```
Generate:
- API tests for all CRUD operations
- E2E tests for critical user workflows (P0)
- Focus on P0 and P1 scenarios
- Skip P3 (low priority edge cases)
```

### 5. Review Generated Tests

TEA generates a comprehensive test suite with multiple test levels.

#### API Tests (`tests/api/profile.spec.ts`):

**Vanilla Playwright:**
```typescript
import { test, expect } from '@playwright/test';

test.describe('Profile API', () => {
  let authToken: string;

  test.beforeAll(async ({ request }) => {
    // Manual auth token fetch
    const response = await request.post('/api/auth/login', {
      data: { email: 'test@example.com', password: 'password123' }
    });
    const { token } = await response.json();
    authToken = token;
  });

  test('should fetch user profile', async ({ request }) => {
    const response = await request.get('/api/profile', {
      headers: { Authorization: `Bearer ${authToken}` }
    });

    expect(response.ok()).toBeTruthy();
    const profile = await response.json();
    expect(profile).toMatchObject({
      id: expect.any(String),
      name: expect.any(String),
      email: expect.any(String)
    });
  });

  test('should update profile successfully', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      headers: { Authorization: `Bearer ${authToken}` },
      data: {
        name: 'Updated Name',
        bio: 'Test bio'
      }
    });

    expect(response.ok()).toBeTruthy();
    const updated = await response.json();
    expect(updated.name).toBe('Updated Name');
    expect(updated.bio).toBe('Test bio');
  });

  test('should validate email format', async ({ request }) => {
    const response = await request.patch('/api/profile', {
      headers: { Authorization: `Bearer ${authToken}` },
      data: { email: 'invalid-email' }
    });

    expect(response.status()).toBe(400);
    const error = await response.json();
    expect(error.message).toContain('Invalid email');
  });

  test('should require authentication', async ({ request }) => {
    const response = await request.get('/api/profile');
    expect(response.status()).toBe(401);
  });
});
```
|
||||
|
||||
**With Playwright Utils:**
```typescript
import { test as base, expect, mergeTests } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';
import { z } from 'zod';

const ProfileSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email()
});

// Merge API and auth fixtures
const authFixtureTest = base.extend(createAuthFixtures());
export const testWithAuth = mergeTests(apiRequestFixture, authFixtureTest);

testWithAuth.describe('Profile API', () => {
  testWithAuth('should fetch user profile', async ({ apiRequest, authToken }) => {
    const { status, body } = await apiRequest({
      method: 'GET',
      path: '/api/profile',
      headers: { Authorization: `Bearer ${authToken}` }
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    // Schema already validated, type-safe access
    expect(body.name).toBeDefined();
  });

  testWithAuth('should update profile successfully', async ({ apiRequest, authToken }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: { name: 'Updated Name', bio: 'Test bio' },
      headers: { Authorization: `Bearer ${authToken}` }
    }).validateSchema(ProfileSchema); // Chained validation

    expect(status).toBe(200);
    expect(body.name).toBe('Updated Name');
  });

  testWithAuth('should validate email format', async ({ apiRequest, authToken }) => {
    const { status, body } = await apiRequest({
      method: 'PATCH',
      path: '/api/profile',
      body: { email: 'invalid-email' },
      headers: { Authorization: `Bearer ${authToken}` }
    });

    expect(status).toBe(400);
    expect(body.message).toContain('Invalid email');
  });
});
```

**Key Differences:**

- `authToken` fixture (persisted, reused across tests)
- `apiRequest` returns `{ status, body }` (cleaner)
- Schema validation with Zod (type-safe)
- Automatic retry for 5xx errors
- Less boilerplate (no manual `await response.json()` everywhere)
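The validate-then-access idea behind the chained schema validation can be approximated in plain TypeScript. This is a hypothetical sketch of the concept (hand-rolled checks instead of Zod, not the playwright-utils implementation):

```typescript
// Hypothetical sketch: validate an unknown response body into a typed object,
// so later property access is type-safe. Not the playwright-utils code.
type Profile = { id: string; name: string; email: string };

function validateProfile(body: unknown): Profile {
  const b = (body ?? {}) as Record<string, unknown>;
  const { id, name, email } = b;
  if (typeof id !== 'string') throw new Error('id must be a string');
  if (typeof name !== 'string') throw new Error('name must be a string');
  if (typeof email !== 'string' || !email.includes('@')) {
    throw new Error('email must be a valid email string');
  }
  // After validation, access is type-safe
  return { id, name, email };
}

const profile = validateProfile({ id: '1', name: 'Ada', email: 'ada@example.com' });
console.log(profile.name); // prints "Ada"
```

A schema library like Zod does the same thing declaratively and with far better error messages; the sketch only shows why "schema already validated, type-safe access" holds.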

#### E2E Tests (`tests/e2e/profile.spec.ts`):

```typescript
import { test, expect } from '@playwright/test';

test('should edit profile', async ({ page }) => {
  // Login
  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('password123');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Edit profile
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit Profile' }).click();
  await page.getByLabel('Name').fill('New Name');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify success
  await expect(page.getByText('Profile updated')).toBeVisible();
});
```

TEA generates additional tests for validation, edge cases, etc. based on priorities.

#### Fixtures (`tests/support/fixtures/profile.ts`):

**Vanilla Playwright:**
```typescript
import { test as base, Page } from '@playwright/test';

type ProfileFixtures = {
  authenticatedPage: Page;
  testProfile: {
    name: string;
    email: string;
    bio: string;
  };
};

export const test = base.extend<ProfileFixtures>({
  authenticatedPage: async ({ page }, use) => {
    // Manual login flow
    await page.goto('/login');
    await page.getByLabel('Email').fill('test@example.com');
    await page.getByLabel('Password').fill('password123');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL(/\/dashboard/);

    await use(page);
  },

  testProfile: async ({}, use) => {
    // Static test data
    const profile = {
      name: 'Test User',
      email: 'test@example.com',
      bio: 'Test bio'
    };

    await use(profile);
  }
});
```

**With Playwright Utils:**
```typescript
import { test as base, mergeTests } from '@playwright/test';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';
import { faker } from '@faker-js/faker';

type ProfileFixtures = {
  testProfile: {
    name: string;
    email: string;
    bio: string;
  };
};

// Merge auth fixtures with custom fixtures
const authTest = base.extend(createAuthFixtures());
const profileTest = base.extend<ProfileFixtures>({
  testProfile: async ({}, use) => {
    // Dynamic test data with faker
    const profile = {
      name: faker.person.fullName(),
      email: faker.internet.email(),
      bio: faker.person.bio()
    };

    await use(profile);
  }
});

export const test = mergeTests(authTest, profileTest);
export { expect } from '@playwright/test';
```

|
||||
|
||||
**Usage:**
```typescript
import { test, expect } from '../support/fixtures/profile';

test('should update profile', async ({ page, authToken, testProfile }) => {
  // authToken from auth-session (automatic, persisted)
  // testProfile from custom fixture (dynamic data)

  await page.goto('/profile');
  // Test with dynamic, unique data
});
```

**Key Benefits:**

- `authToken` fixture (persisted token, no manual login)
- Dynamic test data with faker (no conflicts)
- Fixture composition with `mergeTests`
- Reusable across test files
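The "no conflicts" point is about uniqueness: when every test (or parallel worker) generates its own data, tests never fight over the same record. A minimal sketch of the idea without faker (hypothetical helper, not part of playwright-utils):

```typescript
// Hypothetical helper: every call yields a distinct email, so parallel
// tests never collide on the same user record.
let seq = 0;

function uniqueEmail(prefix = 'user'): string {
  seq += 1;
  return `${prefix}-${Date.now()}-${seq}@example.com`;
}

const a = uniqueEmail();
const b = uniqueEmail();
console.log(a !== b); // prints true
```

faker gives the same guarantee with realistic-looking values, which makes failures easier to read than counter-based strings.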

### 6. Review Additional Artifacts

TEA also generates:

#### Updated README (`tests/README.md`):

```markdown
# Test Suite

## Running Tests

### All Tests
npm test

### Specific Levels
npm run test:api     # API tests only
npm run test:e2e     # E2E tests only
npm run test:smoke   # Smoke tests (@smoke tag)

### Single File
npx playwright test tests/api/profile.spec.ts

## Test Structure

tests/
├── api/        # API tests (fast, reliable)
├── e2e/        # E2E tests (full workflows)
├── fixtures/   # Shared test utilities
└── README.md

## Writing Tests

Follow the patterns in existing tests:
- Use fixtures for authentication
- Network-first patterns (no hard waits)
- Explicit assertions
- Self-cleaning tests
```
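The `test:api`, `test:e2e`, and `test:smoke` scripts above are not built into Playwright; they would be defined in `package.json`. A hypothetical sketch (script names and directory layout are assumptions):

```json
{
  "scripts": {
    "test": "playwright test",
    "test:api": "playwright test tests/api",
    "test:e2e": "playwright test tests/e2e",
    "test:smoke": "playwright test --grep @smoke"
  }
}
```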

#### Definition of Done Summary:

```markdown
## Test Quality Checklist

✅ All tests pass on first run
✅ No hard waits (waitForTimeout)
✅ No conditionals for flow control
✅ Assertions are explicit
✅ Tests clean up after themselves
✅ Tests can run in parallel
✅ Execution time < 1.5 minutes per test
✅ Test files < 300 lines
```
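Checklist items like "no hard waits" are easy to enforce mechanically in review or CI. A hypothetical lint-style check (not part of TEA) that flags `waitForTimeout` calls:

```typescript
// Hypothetical check: report 1-based line numbers that contain a hard wait.
function findHardWaits(source: string): number[] {
  return source
    .split('\n')
    .map((line, i) => (line.includes('waitForTimeout') ? i + 1 : -1))
    .filter((n) => n !== -1);
}

const sample = [
  "await page.goto('/profile');",
  'await page.waitForTimeout(3000); // hard wait - flagged',
  "await expect(page.getByText('Profile updated')).toBeVisible();",
].join('\n');

console.log(findHardWaits(sample)); // prints [ 2 ]
```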

### 7. Run the Tests

All tests should pass immediately since the feature exists:

**For Playwright:**
```bash
npx playwright test
```

**For Cypress:**
```bash
npx cypress run
```

Expected output:
```
Running 6 tests using 4 workers

✓ tests/api/profile.spec.ts (4 tests) - 2.1s
✓ tests/e2e/profile-workflow.spec.ts (2 tests) - 5.3s

6 passed (7.4s)
```

**All green!** Tests pass because the feature already exists.

### 8. Review Test Coverage

Check which scenarios are covered:

```bash
# View test report
npx playwright show-report

# Check coverage (if configured)
npm run test:coverage
```

Compare against:

- Acceptance criteria from story
- Test priorities from test design
- Edge cases and error scenarios

## What You Get

### Comprehensive Test Suite

- **API tests** - Fast, reliable backend testing
- **E2E tests** - Critical user workflows
- **Component tests** - UI component testing (if requested)
- **Fixtures** - Shared utilities and setup

### Component Testing by Framework

TEA supports component testing using framework-appropriate tools:

| Your Framework | Component Testing Tool | Tests Location |
| -------------- | ------------------------------ | ----------------------------------------- |
| **Cypress** | Cypress Component Testing | `tests/component/` |
| **Playwright** | Vitest + React Testing Library | `tests/component/` or `src/**/*.test.tsx` |

**Note:** Component tests use separate tooling from E2E tests:

- Cypress users: TEA generates Cypress Component Tests
- Playwright users: TEA generates Vitest + React Testing Library tests

### Quality Features

- **Network-first patterns** - Wait for actual responses, not timeouts
- **Deterministic tests** - No flakiness, no conditionals
- **Self-cleaning** - Tests don't leave test data behind
- **Parallel-safe** - Can run all tests concurrently
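The self-cleaning property usually comes from a create/use/delete wrapper: data created for a test is removed even when the test fails. A minimal sketch of that pattern (hypothetical helper, not TEA's code):

```typescript
// Hypothetical helper: create a record, run the test body, always clean up,
// even when the body throws.
async function withTempProfile<T>(
  create: () => Promise<string>,          // returns the created profile id
  remove: (id: string) => Promise<void>,  // deletes it again
  body: (id: string) => Promise<T>,
): Promise<T> {
  const id = await create();
  try {
    return await body(id);
  } finally {
    await remove(id); // cleanup runs on success and on failure
  }
}

// Usage sketch with an in-memory store standing in for the real API:
(async () => {
  const store = new Set<string>();
  await withTempProfile(
    async () => { store.add('p1'); return 'p1'; },
    async (id) => { store.delete(id); },
    async (id) => console.log(store.has(id)), // prints true during the test
  );
  console.log(store.size); // prints 0 after cleanup
})();
```

In Playwright this logic naturally lives in a fixture: create before `use()`, delete after it, which is also what makes parallel runs safe.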

### Documentation

- **Updated README** - How to run tests
- **Test structure explanation** - Where tests live
- **Definition of Done** - Quality standards

## Tips

### Start with Test Design

Run `test-design` before `automate` for better results:

```
test-design   # Risk assessment, priorities
automate      # Generate tests based on priorities
```

TEA will focus on P0/P1 scenarios and skip low-value tests.

### Prioritize Test Levels

Not everything needs E2E tests:

**Good strategy:**

```
- P0 scenarios: API + E2E tests
- P1 scenarios: API tests only
- P2 scenarios: API tests (happy path)
- P3 scenarios: Skip or add later
```

**Why?**

- API tests are 10x faster than E2E
- API tests are more reliable (no browser flakiness)
- E2E tests reserved for critical user journeys

### Avoid Duplicate Coverage

Tell TEA about existing tests:

```
We already have tests in:
- tests/e2e/profile-view.spec.ts (viewing profile)
- tests/api/auth.spec.ts (authentication)

Don't duplicate that coverage
```

TEA will analyze existing tests and only generate new scenarios.

### MCP Enhancements (Optional)

If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `automate` for:

- **Healing mode:** Fix broken selectors, update assertions, enhance with trace analysis
- **Recording mode:** Verify selectors with live browser, capture network requests

No prompts - TEA uses MCPs automatically when available. See [Enable MCP Enhancements](/docs/tea/how-to/customization/enable-tea-mcp-enhancements.md) for setup.

### Generate Tests Incrementally

Don't generate all tests at once:

**Iteration 1:**

```
Generate P0 tests only (critical path)
Run: automate
```

**Iteration 2:**

```
Generate P1 tests (high value scenarios)
Run: automate
Tell TEA to avoid P0 coverage
```

**Iteration 3:**

```
Generate P2 tests (if time permits)
Run: automate
```

This iterative approach:

- Provides fast feedback
- Allows validation before proceeding
- Keeps test generation focused

## Common Issues

### Tests Pass But Coverage Is Incomplete

**Problem:** Tests pass but don't cover all scenarios.

**Cause:** TEA wasn't given complete context.

**Solution:** Provide more details:

```
Generate tests for:
- All acceptance criteria in story-profile.md
- Error scenarios (validation, authorization)
- Edge cases (empty fields, long inputs)
```

### Too Many Tests Generated

**Problem:** TEA generated 50 tests for a simple feature.

**Cause:** Didn't specify priorities or scope.

**Solution:** Be specific:

```
Generate ONLY:
- P0 and P1 scenarios
- API tests for all scenarios
- E2E tests only for critical workflows
- Skip P2/P3 for now
```

### Tests Duplicate Existing Coverage

**Problem:** New tests cover the same scenarios as existing tests.

**Cause:** Didn't tell TEA about existing tests.

**Solution:** Specify existing coverage:

```
We already have these tests:
- tests/api/profile.spec.ts (GET /api/profile)
- tests/e2e/profile-view.spec.ts (viewing profile)

Generate tests for scenarios NOT covered by those files
```

### MCP Enhancements for Better Selectors

If you have MCP servers configured, TEA verifies selectors against a live browser. Otherwise, TEA generates accessible selectors (`getByRole`, `getByLabel`) by default.

Setup: Answer "Yes" to MCPs in the BMad installer and configure MCP servers in your IDE. See [Enable MCP Enhancements](/docs/tea/how-to/customization/enable-tea-mcp-enhancements.md).

## Related Guides

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Plan before generating
- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Failing tests before implementation
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Audit generated quality

## Understanding the Concepts

- [Testing as Engineering](/docs/tea/explanation/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational)
- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Why prioritize P0 over P3
- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - What makes tests good
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Reusable test patterns

## Reference

- [Command: *automate](/docs/tea/reference/commands.md#automate) - Full command reference
- [TEA Configuration](/docs/tea/reference/configuration.md) - MCP and Playwright Utils options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

679
docs/tea/how-to/workflows/run-nfr-assess.md
Normal file
@@ -0,0 +1,679 @@

---
title: "How to Run NFR Assessment with TEA"
description: Validate non-functional requirements for security, performance, reliability, and maintainability using TEA
---

# How to Run NFR Assessment with TEA

Use TEA's `nfr-assess` workflow to validate non-functional requirements (NFRs) with evidence-based assessment across security, performance, reliability, and maintainability.

## When to Use This

- Enterprise projects with compliance requirements
- Projects with strict NFR thresholds
- Before production release
- When NFRs are critical to project success
- When security or performance is mission-critical

**Best for:**

- Enterprise track projects
- Compliance-heavy industries (finance, healthcare, government)
- High-traffic applications
- Security-critical systems

## Prerequisites

- BMad Method installed
- TEA agent available
- NFRs defined in PRD or requirements doc
- Evidence preferred but not required (test results, security scans, performance metrics)

**Note:** You can run NFR assessment without complete evidence. TEA will mark categories as CONCERNS where evidence is missing and document what's needed.

## Steps

### 1. Run the NFR Assessment Workflow

Start a fresh chat and run:

```
nfr-assess
```

This loads TEA and starts the NFR assessment workflow.

### 2. Specify NFR Categories

TEA will ask which NFR categories to assess.

**Available Categories:**

| Category | Focus Areas |
|----------|-------------|
| **Security** | Authentication, authorization, encryption, vulnerabilities, security headers, input validation |
| **Performance** | Response time, throughput, resource usage, database queries, frontend load time |
| **Reliability** | Error handling, recovery mechanisms, availability, failover, data backup |
| **Maintainability** | Code quality, test coverage, technical debt, documentation, dependency health |

**Example Response:**

```
Assess:
- Security (critical for user data)
- Performance (API must be fast)
- Reliability (99.9% uptime requirement)

Skip maintainability for now
```

### 3. Provide NFR Thresholds

TEA will ask for specific thresholds for each category.

**Critical Principle: Never guess thresholds.**

If you don't know the exact requirement, tell TEA to mark the category as CONCERNS and request clarification from stakeholders.

#### Security Thresholds

**Example:**

```
Requirements:
- All endpoints require authentication: YES
- Data encrypted at rest: YES (PostgreSQL TDE)
- Zero critical vulnerabilities: YES (npm audit)
- Input validation on all endpoints: YES (Zod schemas)
- Security headers configured: YES (helmet.js)
```

#### Performance Thresholds

**Example:**

```
Requirements:
- API response time P99: < 200ms
- API response time P95: < 150ms
- Throughput: > 1000 requests/second
- Frontend initial load: < 2 seconds
- Database query time P99: < 50ms
```
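P95 and P99 here are latency percentiles: the value that 95% (or 99%) of requests stay at or below. Load tools like k6 report these directly, but the calculation itself is simple. A sketch using the nearest-rank method (illustrative only, not TEA's or k6's code):

```typescript
// Nearest-rank percentile: the smallest sample such that at least p%
// of all samples are at or below it.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const latencies = [45, 60, 80, 95, 110, 130, 150, 170, 190, 350];
console.log(percentile(latencies, 50)); // prints 110
console.log(percentile(latencies, 99)); // prints 350
console.log(percentile(latencies, 99) < 200); // threshold check: prints false
```

This is why a P99 threshold is stricter than an average: a single slow tail (the 350ms sample above) fails the check even when the mean looks healthy.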

#### Reliability Thresholds

**Example:**

```
Requirements:
- Error handling: All endpoints return structured errors
- Availability: 99.9% uptime
- Recovery time: < 5 minutes (RTO)
- Data backup: Daily automated backups
- Failover: Automatic with < 30s downtime
```
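"Structured errors" means every failure response has a machine-readable shape (status code, stable error code, human-readable message) rather than a stack trace. A hypothetical shape and helper (the field names are assumptions, not a TEA requirement):

```typescript
// Hypothetical structured-error shape: stable code for machines,
// friendly message for humans, never a stack trace.
type ApiError = {
  status: number;  // HTTP status, e.g. 400
  code: string;    // stable identifier, e.g. 'INVALID_EMAIL'
  message: string; // user-friendly text
};

function structuredError(status: number, code: string, message: string): ApiError {
  return { status, code, message };
}

const err = structuredError(400, 'INVALID_EMAIL', 'Invalid email format');
console.log(JSON.stringify(err));
// prints {"status":400,"code":"INVALID_EMAIL","message":"Invalid email format"}
```

A stable `code` field is what lets tests (and clients) assert on errors without string-matching free-form messages.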

#### Maintainability Thresholds

**Example:**

```
Requirements:
- Test coverage: > 80%
- Code quality: SonarQube grade A
- Documentation: All APIs documented
- Dependency age: < 6 months outdated
- Technical debt: < 10% of codebase
```

### 4. Provide Evidence

TEA will ask where to find evidence for each requirement.

**Evidence Sources:**

| Category | Evidence Type | Location |
|----------|---------------|----------|
| Security | Security scan reports | `/reports/security-scan.pdf` |
| Security | Vulnerability scan | `npm audit`, `snyk test` results |
| Security | Auth test results | Test reports showing auth coverage |
| Performance | Load test results | `/reports/k6-load-test.json` |
| Performance | APM data | Datadog, New Relic dashboards |
| Performance | Lighthouse scores | `/reports/lighthouse.json` |
| Reliability | Error rate metrics | Production monitoring dashboards |
| Reliability | Uptime data | StatusPage, PagerDuty logs |
| Maintainability | Coverage reports | `/reports/coverage/index.html` |
| Maintainability | Code quality | SonarQube dashboard |

**Example Response:**

```
Evidence:
- Security: npm audit results (clean), auth tests 15/15 passing
- Performance: k6 load test at /reports/k6-results.json
- Reliability: Error rate 0.01% in staging (logs in Datadog)

Don't have:
- Uptime data (new system, no baseline)
- Mark as CONCERNS and request monitoring setup
```

### 5. Review NFR Assessment Report

TEA generates a comprehensive assessment report.

#### Assessment Report (`nfr-assessment.md`):

```markdown
# Non-Functional Requirements Assessment

**Date:** 2026-01-13
**Epic:** User Profile Management
**Release:** v1.2.0
**Overall Decision:** CONCERNS ⚠️

## Executive Summary

| Category | Status | Critical Issues |
|----------|--------|-----------------|
| Security | PASS ✅ | 0 |
| Performance | CONCERNS ⚠️ | 2 |
| Reliability | PASS ✅ | 0 |
| Maintainability | PASS ✅ | 0 |

**Decision Rationale:**
Performance metrics below target (P99 latency, throughput). Mitigation plan in place. Security and reliability meet all requirements.

---

## Security Assessment

**Status:** PASS ✅

### Requirements Met

| Requirement | Target | Actual | Status |
|-------------|--------|--------|--------|
| Authentication required | All endpoints | 100% enforced | ✅ |
| Data encryption at rest | PostgreSQL TDE | Enabled | ✅ |
| Critical vulnerabilities | 0 | 0 | ✅ |
| Input validation | All endpoints | Zod schemas on 100% | ✅ |
| Security headers | Configured | helmet.js enabled | ✅ |

### Evidence

**Security Scan:**
```bash
$ npm audit
found 0 vulnerabilities
```

**Authentication Tests:**
- 15/15 auth tests passing
- Tested unauthorized access (401 responses)
- Token validation working

**Penetration Testing:**
- Report: `/reports/pentest-2026-01.pdf`
- Findings: 0 critical, 2 low (addressed)

**Conclusion:** All security requirements met. No blockers.

---

## Performance Assessment

**Status:** CONCERNS ⚠️

### Requirements Status

| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| API response P99 | < 200ms | 350ms | ❌ Exceeds |
| API response P95 | < 150ms | 180ms | ⚠️ Exceeds |
| Throughput | > 1000 rps | 850 rps | ⚠️ Below |
| Frontend load | < 2s | 1.8s | ✅ Met |
| DB query P99 | < 50ms | 85ms | ❌ Exceeds |

### Issues Identified

#### Issue 1: P99 Latency Exceeds Target

**Measured:** 350ms P99 (target: <200ms)
**Root Cause:** Database queries not optimized
- Missing indexes on profile queries
- N+1 query problem in profile endpoint

**Impact:** User experience degraded for 1% of requests

**Mitigation Plan:**
- Add composite index on `(user_id, profile_id)` - backend team, 2 days
- Refactor profile endpoint to use joins instead of multiple queries - backend team, 3 days
- Re-run load tests after optimization - QA team, 1 day

**Owner:** Backend team lead
**Deadline:** Before release (January 20, 2026)

#### Issue 2: Throughput Below Target

**Measured:** 850 rps (target: >1000 rps)
**Root Cause:** Connection pool size too small
- PostgreSQL max_connections = 100 (too low)
- No connection pooling in application

**Impact:** System cannot handle expected traffic

**Mitigation Plan:**
- Increase PostgreSQL max_connections to 500 - DevOps, 1 day
- Implement connection pooling with pg-pool - backend team, 2 days
- Re-run load tests - QA team, 1 day

**Owner:** DevOps + Backend team
**Deadline:** Before release (January 20, 2026)

### Evidence

**Load Testing:**
```
Tool: k6
Duration: 10 minutes
Virtual Users: 500 concurrent
Report: /reports/k6-load-test.json
```

**Results:**
```
scenarios: (100.00%) 1 scenario, 500 max VUs, 10m30s max duration
✓ http_req_duration..............: avg=150ms min=45ms med=120ms max=2.1s p(90)=165ms p(95)=180ms p(99)=350ms
  http_reqs......................: 85000 (850/s)
  http_req_failed................: 0.1%
```

**APM Data:**
- Tool: Datadog
- Dashboard: <https://app.datadoghq.com/dashboard/abc123>

**Conclusion:** Performance issues identified with mitigation plan. Re-assess after optimization.

---

## Reliability Assessment

**Status:** PASS ✅

### Requirements Met

| Requirement | Target | Actual | Status |
|-------------|--------|--------|--------|
| Error handling | Structured errors | 100% endpoints | ✅ |
| Availability | 99.9% uptime | 99.95% (staging) | ✅ |
| Recovery time | < 5 min (RTO) | 3 min (tested) | ✅ |
| Data backup | Daily | Automated daily | ✅ |
| Failover | < 30s downtime | 15s (tested) | ✅ |

### Evidence

**Error Handling Tests:**
- All endpoints return structured JSON errors
- Error codes standardized (400, 401, 403, 404, 500)
- Error messages user-friendly (no stack traces)

**Chaos Engineering:**
- Tested database failover: 15s downtime ✅
- Tested service crash recovery: 3 min ✅
- Tested network partition: Graceful degradation ✅

**Monitoring:**
- Staging uptime (30 days): 99.95%
- Error rate: 0.01% (target: <0.1%)
- P50 availability: 100%

**Conclusion:** All reliability requirements exceeded. No issues.

---

## Maintainability Assessment

**Status:** PASS ✅

### Requirements Met

| Requirement | Target | Actual | Status |
|-------------|--------|--------|--------|
| Test coverage | > 80% | 85% | ✅ |
| Code quality | Grade A | Grade A | ✅ |
| Documentation | All APIs | 100% documented | ✅ |
| Outdated dependencies | < 6 months | 3 months avg | ✅ |
| Technical debt | < 10% | 7% | ✅ |

### Evidence

**Test Coverage:**
```
Statements : 85.2% ( 1205/1414 )
Branches   : 82.1% ( 412/502 )
Functions  : 88.5% ( 201/227 )
Lines      : 85.2% ( 1205/1414 )
```

**Code Quality:**
- SonarQube: Grade A
- Maintainability rating: A
- Technical debt ratio: 7%
- Code smells: 12 (all minor)

**Documentation:**
- API docs: 100% coverage (OpenAPI spec)
- README: Complete and up-to-date
- Architecture docs: ADRs for all major decisions

**Conclusion:** All maintainability requirements met. Codebase is healthy.

---

## Overall Gate Decision

### Decision: CONCERNS ⚠️

**Rationale:**
- **Blockers:** None
- **Concerns:** Performance metrics below target (P99 latency, throughput)
- **Mitigation:** Plan in place with clear owners and deadlines (5 days total)
- **Passing:** Security, reliability, maintainability all green

### Actions Required Before Release

1. **Optimize database queries** (backend team, 3 days)
   - Add indexes
   - Fix N+1 queries
   - Implement connection pooling

2. **Re-run performance tests** (QA team, 1 day)
   - Validate P99 < 200ms
   - Validate throughput > 1000 rps

3. **Update this assessment** (TEA, 1 hour)
   - Re-run `*nfr-assess` with new results
   - Confirm PASS status

### Waiver Option (If Business Approves)

If business decides to deploy with current performance:

**Waiver Justification:**
```markdown
## Performance Waiver

**Waived By:** VP Engineering, Product Manager
**Date:** 2026-01-15
**Reason:** Business priority to launch by Q1
**Conditions:**
- Set monitoring alerts for P99 > 300ms
- Plan optimization for v1.3 (February release)
- Document known performance limitations in release notes

**Accepted Risk:**
- 1% of users experience slower response (350ms vs 200ms)
- System can handle current traffic (850 rps sufficient for launch)
- Optimization planned for next release
```

### Approvals

- [ ] Product Manager - Review business impact
- [ ] Tech Lead - Review mitigation plan
- [ ] QA Lead - Validate test evidence
- [ ] DevOps - Confirm infrastructure ready

---

## Monitoring Plan Post-Release

**Performance Alerts:**
- P99 latency > 400ms (critical)
- Throughput < 700 rps (warning)
- Error rate > 1% (critical)

**Review Cadence:**
- Daily: Check performance dashboards
- Weekly: Review alert trends
- Monthly: Re-assess NFRs
```

## What You Get

### NFR Assessment Report

- Category-by-category analysis (Security, Performance, Reliability, Maintainability)
- Requirements status (target vs actual)
- Evidence for each requirement
- Issues identified with root cause analysis

### Gate Decision

- **PASS** ✅ - All NFRs met, ready to release
- **CONCERNS** ⚠️ - Some NFRs not met, mitigation plan exists
- **FAIL** ❌ - Critical NFRs not met, blocks release
- **WAIVED** ⏭️ - Business-approved waiver with documented risk
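The gate statuses compose by precedence: any FAIL blocks the release, otherwise any unwaived CONCERNS yields an overall CONCERNS. A sketch of that precedence rule (an illustrative assumption, not TEA's published algorithm):

```typescript
type GateStatus = 'PASS' | 'CONCERNS' | 'FAIL' | 'WAIVED';

// Worst category wins: FAIL > CONCERNS > PASS. WAIVED categories are
// business-approved and therefore do not block the release.
function overallGate(categories: GateStatus[]): GateStatus {
  if (categories.includes('FAIL')) return 'FAIL';
  if (categories.includes('CONCERNS')) return 'CONCERNS';
  return 'PASS';
}

// The example report above: security/reliability/maintainability PASS,
// performance CONCERNS -> overall CONCERNS.
console.log(overallGate(['PASS', 'CONCERNS', 'PASS', 'PASS'])); // prints CONCERNS
```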

### Mitigation Plans

- Specific actions to address concerns
- Owners and deadlines
- Re-assessment criteria

### Monitoring Plan

- Post-release monitoring strategy
- Alert thresholds
- Review cadence

## Tips
|
||||
|
||||
### Run NFR Assessment Early
|
||||
|
||||
**Phase 2 (Enterprise):**
|
||||
Run `nfr-assess` during planning to:
|
||||
- Identify NFR requirements early
|
||||
- Plan for performance testing
|
||||
- Budget for security audits
|
||||
- Set up monitoring infrastructure
|
||||
|
||||
**Phase 4 or Gate:**
|
||||
Re-run before release to validate all requirements met.
|
||||
|
||||
### Never Guess Thresholds
|
||||
|
||||
If you don't know the NFR target:
|
||||
|
||||
**Don't:**
|
||||
```
|
||||
API response time should probably be under 500ms
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
Mark as CONCERNS - Request threshold from stakeholders
|
||||
"What is the acceptable API response time?"
|
||||
```
|
||||
|
||||
### Collect Evidence Beforehand
|
||||
|
||||
Before running `*nfr-assess`, gather:
|
||||
|
||||
**Security:**
|
||||
```bash
|
||||
npm audit # Vulnerability scan
|
||||
snyk test # Alternative security scan
|
||||
npm run test:security # Security test suite
|
||||
```
|
||||
|
||||
**Performance:**
|
||||
```bash
|
||||
npm run test:load # k6 or artillery load tests
|
||||
npm run test:lighthouse # Frontend performance
|
||||
npm run test:db-performance # Database query analysis
|
||||
```
|
||||
|
||||
**Reliability:**
|
||||
- Production error rate (last 30 days)
|
||||
- Uptime data (StatusPage, PagerDuty)
|
||||
- Incident response times
|
||||
|
||||
**Maintainability:**
|
||||
```bash
|
||||
npm run test:coverage # Test coverage report
|
||||
npm run lint # Code quality check
|
||||
npm outdated # Dependency freshness
|
||||
```
|
||||
|
||||
### Use Real Data, Not Assumptions
|
||||
|
||||
**Don't:**
|
||||
```
|
||||
System is probably fast enough
|
||||
Security seems fine
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
Load test results show P99 = 350ms
|
||||
npm audit shows 0 vulnerabilities
|
||||
Test coverage report shows 85%
|
||||
```
|
||||
|
||||
Evidence-based decisions prevent surprises in production.

### Document Waivers Thoroughly

If the business approves a waiver:

**Required:**
- Who approved (name, role, date)
- Why (business justification)
- Conditions (monitoring, future plans)
- Accepted risk (quantified impact)

**Example:**
```markdown
Waived by: CTO, VP Product (2026-01-15)
Reason: Q1 launch critical for investor demo
Conditions: Optimize in v1.3, monitor closely
Risk: 1% of users experience 350ms latency (acceptable for launch)
```

### Re-Assess After Fixes

After implementing mitigations:

```
1. Fix performance issues
2. Run load tests again
3. Run nfr-assess with new evidence
4. Verify PASS status
```

Don't deploy with CONCERNS status without a mitigation plan or waiver.

### Integrate with Release Checklist

```markdown
## Release Checklist

### Pre-Release
- [ ] All tests passing
- [ ] Test coverage > 80%
- [ ] Run nfr-assess
- [ ] NFR status: PASS or WAIVED

### Performance
- [ ] Load tests completed
- [ ] P99 latency meets threshold
- [ ] Throughput meets threshold

### Security
- [ ] Security scan clean
- [ ] Auth tests passing
- [ ] Penetration test complete

### Post-Release
- [ ] Monitoring alerts configured
- [ ] Dashboards updated
- [ ] Incident response plan ready
```

## Common Issues

### No Evidence Available

**Problem:** You don't have performance data, security scans, or other evidence.

**Solution:**
```
Mark as CONCERNS for categories without evidence
Document what evidence is needed
Set up tests/scans before re-assessment
```

**Don't block on missing evidence** - document what's needed and proceed.

### Thresholds Too Strict

**Problem:** Can't meet unrealistic thresholds.

**Symptoms:**
- P99 < 50ms (impossible for complex queries)
- 100% test coverage (impractical)
- Zero technical debt (unrealistic)

**Solution:**
```
Negotiate thresholds with stakeholders:
- "P99 < 50ms is unrealistic for our DB queries"
- "Propose P99 < 200ms based on industry standards"
- "Show evidence from load tests"
```

Use data to negotiate realistic requirements.

### Assessment Takes Too Long

**Problem:** Gathering evidence for all categories is time-consuming.

**Solution:** Focus on critical categories first:

**For most projects:**
```
Priority 1: Security (always critical)
Priority 2: Performance (if high-traffic)
Priority 3: Reliability (if uptime critical)
Priority 4: Maintainability (nice to have)
```

Assess categories incrementally, not all at once.

### CONCERNS vs FAIL - When to Block?

**CONCERNS** ⚠️:
- Issues exist but not critical
- Mitigation plan in place
- Business accepts risk (with waiver)
- Can deploy with monitoring

**FAIL** ❌:
- Critical security vulnerability (critical CVE)
- System unusable (error rate >10%)
- Data loss risk (no backups)
- No mitigation possible

**Rule of thumb:** If you can mitigate or monitor, use CONCERNS. Reserve FAIL for absolute blockers.
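The rule of thumb above can be written down as a small decision helper. A minimal TypeScript sketch — the `Finding` shape and `gateDecision` name are illustrative, not part of TEA's actual API:

```typescript
type Gate = 'PASS' | 'CONCERNS' | 'FAIL' | 'WAIVED';

interface Finding {
  description: string;
  blocker: boolean;   // e.g. critical CVE, data-loss risk, system unusable
  mitigable: boolean; // a mitigation or monitoring plan exists
}

function gateDecision(findings: Finding[], waiverApproved = false): Gate {
  if (findings.length === 0) return 'PASS';
  // Absolute blockers with no possible mitigation always FAIL
  if (findings.some((f) => f.blocker && !f.mitigable)) return 'FAIL';
  // Everything else is a concern, unless the business signed a waiver
  return waiverApproved ? 'WAIVED' : 'CONCERNS';
}

console.log(gateDecision([])); // → PASS
console.log(gateDecision([{ description: 'P99 above target', blocker: false, mitigable: true }])); // → CONCERNS
console.log(gateDecision([{ description: 'no backups', blocker: true, mitigable: false }])); // → FAIL
```

The key design point, matching the rule of thumb: FAIL is reserved for blockers that cannot be mitigated; everything else stays a CONCERN until the business explicitly accepts the risk.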

## Related Guides

- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md) - Gate decision complements NFR
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Quality complements NFR
- [Run TEA for Enterprise](/docs/tea/how-to/brownfield/use-tea-for-enterprise.md) - Enterprise workflow

## Understanding the Concepts

- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Risk assessment principles
- [TEA Overview](/docs/tea/explanation/tea-overview.md) - NFR in release gates

## Reference

- [Command: *nfr-assess](/docs/tea/reference/commands.md#nfr-assess) - Full command reference
- [TEA Configuration](/docs/tea/reference/configuration.md) - Enterprise config options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

135 docs/tea/how-to/workflows/run-test-design.md Normal file
@@ -0,0 +1,135 @@
---
title: "How to Run Test Design with TEA"
description: How to create comprehensive test plans using TEA's test-design workflow
---

Use TEA's `test-design` workflow to create comprehensive test plans with risk assessment and coverage strategies.

## When to Use This

**System-level (Phase 3):**
- After architecture is complete
- Before implementation-readiness gate
- To validate architecture testability

**Epic-level (Phase 4):**
- At the start of each epic
- Before implementing stories in the epic
- To identify epic-specific testing needs

:::note[Prerequisites]
- BMad Method installed
- TEA agent available
- For system-level: Architecture document complete
- For epic-level: Epic defined with stories
:::

## Steps

### 1. Load the TEA Agent

Start a fresh chat and load the TEA (Test Architect) agent.

### 2. Run the Test Design Workflow

```
test-design
```

### 3. Specify the Mode

TEA will ask if you want:

- **System-level** — For architecture testability review (Phase 3)
- **Epic-level** — For epic-specific test planning (Phase 4)

### 4. Provide Context

For system-level:
- Point to your architecture document
- Reference any ADRs (Architecture Decision Records)

For epic-level:
- Specify which epic you're planning
- Reference the epic file with stories

### 5. Review the Output

TEA generates one or two test design documents, depending on the mode.

## What You Get

**System-Level Output (TWO Documents):**

TEA produces two focused documents for system-level mode:

1. **`test-design-architecture.md`** (for Architecture/Dev teams)
   - Purpose: Architectural concerns, testability gaps, NFR requirements
   - Quick Guide with 🚨 BLOCKERS / ⚠️ HIGH PRIORITY / 📋 INFO ONLY
   - Risk assessment (high/medium/low-priority with scoring)
   - Testability concerns and architectural gaps
   - Risk mitigation plans for high-priority risks (≥6)
   - Assumptions and dependencies

2. **`test-design-qa.md`** (for QA team)
   - Purpose: Test execution recipe, coverage plan, Sprint 0 setup
   - Quick Reference for QA (Before You Start, Execution Order, Need Help)
   - System architecture summary
   - Test environment requirements (listed early in the document)
   - Testability assessment (prerequisites checklist)
   - Test levels strategy (unit/integration/E2E split)
   - Test coverage plan (P0/P1/P2/P3 with detailed scenarios + checkboxes)
   - Sprint 0 setup requirements (blockers, infrastructure, environments)
   - NFR readiness summary

**Why Two Documents?**
- **Architecture teams** can scan blockers in <5 min (Quick Guide format)
- **QA teams** have actionable test recipes (step-by-step with checklists)
- **No redundancy** between documents (cross-references instead of duplication)
- **Clear separation** of concerns (what to deliver vs how to test)

**Epic-Level Output (ONE Document):**

**`test-design-epic-N.md`** (combined risk assessment + test plan)
- Risk assessment for the epic
- Test priorities (P0-P3)
- Coverage plan
- Regression hotspots (for brownfield)
- Integration risks
- Mitigation strategies

## Test Design for Different Tracks

| Track | Phase 3 Focus | Phase 4 Focus |
|-------|---------------|---------------|
| **Greenfield** | System-level testability review | Per-epic risk assessment and test plan |
| **Brownfield** | System-level + existing test baseline | Regression hotspots, integration risks |
| **Enterprise** | Compliance-aware testability | Security/performance/compliance focus |

## Examples

**System-Level (Two Documents):**
- `cluster-search/cluster-search-test-design-architecture.md` - Architecture doc with Quick Guide
- `cluster-search/cluster-search-test-design-qa.md` - QA doc with test scenarios

**Key Pattern:**
- Architecture doc: "ASR-1: OAuth 2.1 required (see QA doc for 12 test scenarios)"
- QA doc: "OAuth tests: 12 P0 scenarios (see Architecture doc R-001 for risk details)"
- No duplication, just cross-references

## Tips

- **Run system-level right after architecture** — Early testability review
- **Run epic-level at the start of each epic** — Targeted test planning
- **Update if ADRs change** — Keep test design aligned
- **Use output to guide other workflows** — Feeds into `atdd` and `automate`
- **Architecture teams review the Architecture doc** — Focus on blockers and mitigation plans
- **QA teams use the QA doc as an implementation guide** — Follow test scenarios and Sprint 0 checklist

## Next Steps

After test design:

1. **Setup Test Framework** — If not already configured
2. **Implementation Readiness** — System-level feeds into gate check
3. **Story Implementation** — Epic-level guides testing during dev

605 docs/tea/how-to/workflows/run-test-review.md Normal file
@@ -0,0 +1,605 @@
---
title: "How to Run Test Review with TEA"
description: Audit test quality using TEA's comprehensive knowledge base and get 0-100 scoring
---

# How to Run Test Review with TEA

Use TEA's `test-review` workflow to audit test quality with objective scoring and actionable feedback. TEA reviews tests against its knowledge base of best practices.

## When to Use This

- Want to validate test quality objectively
- Need quality metrics for release gates
- Preparing for production deployment
- Reviewing team-written tests
- Auditing AI-generated tests
- Onboarding new team members (show good patterns)

## Prerequisites

- BMad Method installed
- TEA agent available
- Tests written (to review)
- Test framework configured

## Steps

### 1. Load TEA Agent

Start a fresh chat and load TEA:

```
tea
```

### 2. Run the Test Review Workflow

```
test-review
```

### 3. Specify Review Scope

TEA will ask what to review.

#### Option A: Single File

Review one test file:

```
tests/e2e/checkout.spec.ts
```

**Best for:**
- Reviewing specific failing tests
- Quick feedback on new tests
- Learning from specific examples

#### Option B: Directory

Review all tests in a directory:

```
tests/e2e/
```

**Best for:**
- Reviewing E2E test suite
- Comparing test quality across files
- Finding patterns of issues

#### Option C: Entire Suite

Review all tests:

```
tests/
```

**Best for:**
- Release gate quality check
- Comprehensive audit
- Establishing baseline metrics

### 4. Review the Quality Report

TEA generates a comprehensive quality report with scoring.

#### Report Structure (`test-review.md`)

```markdown
# Test Quality Review Report

**Date:** 2026-01-13
**Scope:** tests/e2e/
**Overall Score:** 76/100

## Summary

- **Tests Reviewed:** 12
- **Passing Quality:** 9 tests (75%)
- **Needs Improvement:** 3 tests (25%)
- **Critical Issues:** 2
- **Recommendations:** 6

## Critical Issues

### 1. Hard Waits Detected

**File:** `tests/e2e/checkout.spec.ts:45`
**Issue:** Using `page.waitForTimeout(3000)`
**Impact:** Test is flaky and unnecessarily slow
**Severity:** Critical

**Current Code:**
```typescript
await page.click('button[type="submit"]');
await page.waitForTimeout(3000); // ❌ Hard wait
await expect(page.locator('.success')).toBeVisible();
```

**Fix:**
```typescript
// Start waiting for the API response before clicking (avoids missing a fast response)
const responsePromise = page.waitForResponse(resp =>
  resp.url().includes('/api/checkout') && resp.ok()
);
await page.click('button[type="submit"]');
await responsePromise;
await expect(page.locator('.success')).toBeVisible();
```

**Why This Matters:**
- Hard waits are fixed timeouts that don't wait for actual conditions
- Tests fail intermittently on slower machines
- Time is wasted even when the response is fast
- Network-first patterns are more reliable

---

### 2. Conditional Flow Control

**File:** `tests/e2e/profile.spec.ts:28`
**Issue:** Using if/else to handle optional elements
**Impact:** Non-deterministic test behavior
**Severity:** Critical

**Current Code:**
```typescript
if (await page.locator('.banner').isVisible()) {
  await page.click('.dismiss');
}
// ❌ Test behavior changes based on banner presence
```

**Fix:**
```typescript
// Option 1: Make banner presence deterministic
await expect(page.locator('.banner')).toBeVisible();
await page.click('.dismiss');

// Option 2: Test both scenarios separately
test('should show banner for new users', async ({ page }) => {
  // Test with banner
});

test('should not show banner for returning users', async ({ page }) => {
  // Test without banner
});
```

**Why This Matters:**
- Tests should be deterministic (same result every run)
- Conditionals hide bugs (what if the banner should always show?)
- Conditionals make debugging harder
- Violates test isolation principle

## Recommendations

### 1. Extract Repeated Setup

**File:** `tests/e2e/profile.spec.ts`
**Issue:** Login code duplicated in every test
**Severity:** Medium
**Impact:** Maintenance burden, test verbosity

**Current:**
```typescript
test('test 1', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[name="email"]', 'test@example.com');
  await page.fill('[name="password"]', 'password');
  await page.click('button[type="submit"]');
  // Test logic...
});

test('test 2', async ({ page }) => {
  // Same login code repeated
});
```

**Fix (Vanilla Playwright):**
```typescript
// Create fixture in tests/support/fixtures/auth.ts
import { test as base, Page } from '@playwright/test';

export const test = base.extend<{ authenticatedPage: Page }>({
  authenticatedPage: async ({ page }, use) => {
    await page.goto('/login');
    await page.getByLabel('Email').fill('test@example.com');
    await page.getByLabel('Password').fill('password');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL(/\/dashboard/);
    await use(page);
  },
});

// Use in tests
test('test 1', async ({ authenticatedPage }) => {
  // Already logged in
});
```

**Better (With Playwright Utils):**
```typescript
// Use built-in auth-session fixture
import { test as base } from '@playwright/test';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

export const test = base.extend(createAuthFixtures());

// Use in tests - even simpler
test('test 1', async ({ page, authToken }) => {
  // authToken already available (persisted, reused)
  await page.goto('/dashboard');
  // Already authenticated via authToken
});
```

**Playwright Utils Benefits:**
- Token persisted to disk (faster subsequent runs)
- Multi-user support out of the box
- Automatic token renewal if expired
- No manual login flow needed

---

### 2. Add Network Assertions

**File:** `tests/e2e/api-calls.spec.ts`
**Issue:** No verification of API responses
**Severity:** Low
**Impact:** Tests don't catch API errors

**Current:**
```typescript
await page.click('button[name="save"]');
await expect(page.locator('.success')).toBeVisible();
// ❌ What if API returned 500 but UI shows cached success?
```

**Enhancement:**
```typescript
const responsePromise = page.waitForResponse(
  resp => resp.url().includes('/api/profile') && resp.status() === 200
);
await page.click('button[name="save"]');
const response = await responsePromise;

// Verify API response
const data = await response.json();
expect(data.success).toBe(true);

// Verify UI
await expect(page.locator('.success')).toBeVisible();
```

---

### 3. Improve Test Names

**File:** `tests/e2e/checkout.spec.ts`
**Issue:** Vague test names
**Severity:** Low
**Impact:** Hard to understand test purpose

**Current:**
```typescript
test('should work', async ({ page }) => { });
test('test checkout', async ({ page }) => { });
```

**Better:**
```typescript
test('should complete checkout with valid credit card', async ({ page }) => { });
test('should show validation error for expired card', async ({ page }) => { });
```

## Quality Scores by Category

| Category | Score | Target | Status |
|----------|-------|--------|--------|
| **Determinism** | 26/35 | 30/35 | ⚠️ Needs Improvement |
| **Isolation** | 22/25 | 20/25 | ✅ Good |
| **Assertions** | 18/20 | 16/20 | ✅ Good |
| **Structure** | 7/10 | 8/10 | ⚠️ Minor Issues |
| **Performance** | 3/10 | 8/10 | ❌ Critical |

### Scoring Breakdown

**Determinism (35 points max):**
- No hard waits: 0/5 ❌ (found 3 instances)
- No conditionals: 8/10 ⚠️ (found 2 instances)
- No try-catch flow control: 5/5 ✅
- Network-first patterns: 13/15 ⚠️ (some tests missing)

**Isolation (25 points max):**
- Self-cleaning: 17/20 ⚠️ (some tests skip cleanup)
- No global state: 5/5 ✅
- Parallel-safe: 0/0 ✅ (not tested)

**Assertions (20 points max):**
- Explicit in test body: 15/15 ✅
- Specific and meaningful: 3/5 ⚠️ (some weak assertions)

**Structure (10 points max):**
- Test size < 300 lines: 5/5 ✅
- Clear names: 2/5 ⚠️ (some vague names)

**Performance (10 points max):**
- Execution time < 1.5 min: 3/10 ❌ (3 tests exceed limit)

## Files Reviewed

| File | Score | Issues | Status |
|------|-------|--------|--------|
| `tests/e2e/checkout.spec.ts` | 65/100 | 4 | ❌ Needs Work |
| `tests/e2e/profile.spec.ts` | 72/100 | 3 | ⚠️ Needs Improvement |
| `tests/e2e/search.spec.ts` | 88/100 | 1 | ✅ Good |
| `tests/api/profile.spec.ts` | 92/100 | 0 | ✅ Excellent |

## Next Steps

### Immediate (Fix Critical Issues)
1. Remove hard waits in `checkout.spec.ts` (lines 45, 67, 89)
2. Fix conditional in `profile.spec.ts` (line 28)
3. Optimize slow tests in `checkout.spec.ts`

### Short-term (Apply Recommendations)
4. Extract login fixture from `profile.spec.ts`
5. Add network assertions to `api-calls.spec.ts`
6. Improve test names in `checkout.spec.ts`

### Long-term (Continuous Improvement)
7. Re-run `test-review` after fixes (target: 85/100)
8. Add performance budgets to CI
9. Document test patterns for team

## Knowledge Base References

TEA reviewed against these patterns:
- [test-quality.md](/docs/tea/reference/knowledge-base.md#test-quality) - Execution limits, isolation
- [network-first.md](/docs/tea/reference/knowledge-base.md#network-first) - Deterministic waits
- [timing-debugging.md](/docs/tea/reference/knowledge-base.md#timing-debugging) - Race conditions
- [selector-resilience.md](/docs/tea/reference/knowledge-base.md#selector-resilience) - Robust selectors
```

## Understanding the Scores

### What Do Scores Mean?

| Score Range | Interpretation | Action |
|-------------|----------------|--------|
| **90-100** | Excellent | Minimal changes needed, production-ready |
| **80-89** | Good | Minor improvements recommended |
| **70-79** | Acceptable | Address recommendations before release |
| **60-69** | Needs Improvement | Fix critical issues, apply recommendations |
| **< 60** | Critical | Significant refactoring needed |

### Scoring Criteria

**Determinism (35 points):**
- Tests produce the same result every run
- No random failures (flakiness)
- No environment-dependent behavior

**Isolation (25 points):**
- Tests don't depend on each other
- Can run in any order
- Clean up after themselves

**Assertions (20 points):**
- Verify actual behavior
- Specific and meaningful
- Not abstracted away in helpers

**Structure (10 points):**
- Readable and maintainable
- Appropriate size
- Clear naming

**Performance (10 points):**
- Fast execution
- Efficient selectors
- No unnecessary waits
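Since each category is scored out of its own weight, the overall score is just the clamped sum of the five categories. A minimal TypeScript sketch — the names are illustrative, not TEA's implementation:

```typescript
// Category weights from the scoring criteria above (they sum to 100)
const WEIGHTS = {
  determinism: 35,
  isolation: 25,
  assertions: 20,
  structure: 10,
  performance: 10,
} as const;

type Category = keyof typeof WEIGHTS;

// Each category is already scored out of its weight, so the overall
// score is the sum, with each category clamped to its maximum.
function overallScore(scores: Record<Category, number>): number {
  return (Object.keys(WEIGHTS) as Category[]).reduce(
    (total, key) => total + Math.min(scores[key], WEIGHTS[key]),
    0,
  );
}

// The example report in this guide: 26 + 22 + 18 + 7 + 3
console.log(overallScore({ determinism: 26, isolation: 22, assertions: 18, structure: 7, performance: 3 })); // → 76
```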

## What You Get

### Quality Report
- Overall score (0-100)
- Category scores (Determinism, Isolation, etc.)
- File-by-file breakdown

### Critical Issues
- Specific line numbers
- Code examples (current vs fixed)
- "Why it matters" explanations
- Impact assessment

### Recommendations
- Actionable improvements
- Code examples
- Priority/severity levels

### Next Steps
- Immediate actions (fix critical)
- Short-term improvements
- Long-term quality goals

## Tips

### Review Before Release

Make test review part of the release checklist:

```markdown
## Release Checklist
- [ ] All tests passing
- [ ] Test review score > 80
- [ ] Critical issues resolved
- [ ] Performance within budget
```

### Review After AI Generation

Always review AI-generated tests:

```
1. Run atdd or automate
2. Run test-review on generated tests
3. Fix critical issues
4. Commit tests
```

### Set Quality Gates

Use scores as quality gates:

```yaml
# .github/workflows/test.yml
- name: Review test quality
  run: |
    # Run test review, then parse the overall score from the report
    # (assumes the report's "**Overall Score:** NN/100" line format)
    SCORE=$(grep -m1 'Overall Score' test-review.md | grep -oE '[0-9]+' | head -1)
    if [ "$SCORE" -lt 80 ]; then
      echo "Test quality below threshold ($SCORE < 80)"
      exit 1
    fi
```

### Review Regularly

Schedule periodic reviews:

- **Per story:** Optional (spot check new tests)
- **Per epic:** Recommended (ensure consistency)
- **Per release:** Recommended for quality gates (required if using a formal gate process)
- **Quarterly:** Audit the entire suite

### Focus Reviews

For large suites, review incrementally:

**Week 1:** Review E2E tests
**Week 2:** Review API tests
**Week 3:** Review component tests (Cypress CT or Vitest)
**Week 4:** Apply fixes across all suites

**Component Testing Note:** TEA reviews component tests using framework-specific knowledge:
- **Cypress:** Reviews Cypress Component Testing specs (`*.cy.tsx`)
- **Playwright:** Reviews Vitest component tests (`*.test.tsx`)

### Use Reviews for Learning

Share reports with the team:

```
Team Meeting:
- Review test-review.md
- Discuss critical issues
- Agree on patterns
- Update team guidelines
```

### Compare Over Time

Track improvement:

```markdown
## Quality Trend

| Date | Score | Critical Issues | Notes |
|------|-------|-----------------|-------|
| 2026-01-01 | 65 | 5 | Baseline |
| 2026-01-15 | 72 | 2 | Fixed hard waits |
| 2026-02-01 | 84 | 0 | All critical resolved |
```

## Common Issues

### Low Determinism Score

**Symptoms:**
- Tests fail randomly
- "Works on my machine"
- CI failures that don't reproduce locally

**Common Causes:**
- Hard waits (`waitForTimeout`)
- Conditional flow control (`if/else`)
- Try-catch for flow control
- Missing network-first patterns

**Fix:** Review the determinism section and apply network-first patterns.

### Low Performance Score

**Symptoms:**
- Tests take > 1.5 minutes each
- Test suite takes hours
- CI times out

**Common Causes:**
- Unnecessary waits (hard timeouts)
- Inefficient selectors (XPath, complex CSS)
- Not using parallelization
- Heavy setup in every test

**Fix:** Optimize waits, improve selectors, and use fixtures.

### Low Isolation Score

**Symptoms:**
- Tests fail when run in a different order
- Tests fail in parallel
- Test data conflicts

**Common Causes:**
- Shared global state
- Tests don't clean up
- Hard-coded test data
- Database not reset between tests

**Fix:** Use fixtures, clean up in `afterEach`, and use unique test data.

### "Too Many Issues to Fix"

**Problem:** The report shows 50+ issues, which is overwhelming.

**Solution:** Prioritize:
1. Fix all critical issues first
2. Apply the top 3 recommendations
3. Re-run the review
4. Iterate

Don't try to fix everything at once.

### Reviews Take Too Long

**Problem:** Reviewing the entire suite takes hours.

**Solution:** Review incrementally:
- Review new tests during PR review
- Schedule directory reviews weekly
- Run a full-suite review quarterly

## Related Guides

- [How to Run ATDD](/docs/tea/how-to/workflows/run-atdd.md) - Generate tests to review
- [How to Run Automate](/docs/tea/how-to/workflows/run-automate.md) - Expand coverage to review
- [How to Run Trace](/docs/tea/how-to/workflows/run-trace.md) - Coverage complements quality

## Understanding the Concepts

- [Test Quality Standards](/docs/tea/explanation/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/tea/explanation/network-first-patterns.md) - Avoiding flakiness
- [Fixture Architecture](/docs/tea/explanation/fixture-architecture.md) - Reusable patterns

## Reference

- [Command: *test-review](/docs/tea/reference/commands.md#test-review) - Full command reference
- [Knowledge Base Index](/docs/tea/reference/knowledge-base.md) - Patterns TEA reviews against

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)

883 docs/tea/how-to/workflows/run-trace.md Normal file
@@ -0,0 +1,883 @@
|
||||
---
title: "How to Run Trace with TEA"
description: Map requirements to tests and make quality gate decisions using TEA's trace workflow
---

# How to Run Trace with TEA

Use TEA's `trace` workflow for requirements traceability and quality gate decisions. This is a two-phase workflow: Phase 1 analyzes coverage, Phase 2 makes the go/no-go decision.

## When to Use This

### Phase 1: Requirements Traceability

- Map acceptance criteria to implemented tests
- Identify coverage gaps
- Prioritize missing tests
- Refresh coverage after each story/epic

### Phase 2: Quality Gate Decision

- Make the go/no-go decision for a release
- Validate that coverage meets thresholds
- Document the gate decision with evidence
- Support business-approved waivers

## Prerequisites

- BMad Method installed
- TEA agent available
- Requirements defined (stories, acceptance criteria, test design)
- Tests implemented
- For brownfield: existing codebase with tests
|
||||
|
||||
### 1. Run the Trace Workflow
|
||||
|
||||
```
|
||||
trace
|
||||
```
|
||||
|
||||
### 2. Specify Phase
|
||||
|
||||
TEA will ask which phase you're running.
|
||||
|
||||
**Phase 1: Requirements Traceability**
|
||||
- Analyze coverage
|
||||
- Identify gaps
|
||||
- Generate recommendations
|
||||
|
||||
**Phase 2: Quality Gate Decision**
|
||||
- Make PASS/CONCERNS/FAIL/WAIVED decision
|
||||
- Requires Phase 1 complete
|
||||
|
||||
**Typical flow:** Run Phase 1 first, review gaps, then run Phase 2 for gate decision.
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Requirements Traceability

### 3. Provide Requirements Source

TEA will ask where requirements are defined.

**Options:**

| Source          | Example                       | Best For               |
| --------------- | ----------------------------- | ---------------------- |
| **Story file**  | `story-profile-management.md` | Single story coverage  |
| **Test design** | `test-design-epic-1.md`       | Epic coverage          |
| **PRD**         | `PRD.md`                      | System-level coverage  |
| **Multiple**    | All of the above              | Comprehensive analysis |

**Example Response:**

```
Requirements:
- story-profile-management.md (acceptance criteria)
- test-design-epic-1.md (test priorities)
```

### 4. Specify Test Location

TEA will ask where tests are located.

**Example:**

```
Test location: tests/
Include:
- tests/api/
- tests/e2e/
```

### 5. Specify Focus Areas (Optional)

**Example:**

```
Focus on:
- Profile CRUD operations
- Validation scenarios
- Authorization checks
```
### 6. Review Coverage Matrix

TEA generates a comprehensive traceability matrix.

#### Traceability Matrix (`traceability-matrix.md`):

```markdown
# Requirements Traceability Matrix

**Date:** 2026-01-13
**Scope:** Epic 1 - User Profile Management
**Phase:** Phase 1 (Traceability Analysis)

## Coverage Summary

| Metric                 | Count | Percentage |
| ---------------------- | ----- | ---------- |
| **Total Requirements** | 15    | 100%       |
| **Full Coverage**      | 11    | 73%        |
| **Partial Coverage**   | 3     | 20%        |
| **No Coverage**        | 1     | 7%         |

### By Priority

| Priority | Total | Covered | Percentage        |
| -------- | ----- | ------- | ----------------- |
| **P0**   | 5     | 5       | 100% ✅           |
| **P1**   | 6     | 5       | 83% ⚠️            |
| **P2**   | 3     | 1       | 33% ⚠️            |
| **P3**   | 1     | 0       | 0% ✅ (acceptable) |
---

## Detailed Traceability

### ✅ Requirement 1: User can view their profile (P0)

**Acceptance Criteria:**
- User navigates to /profile
- Profile displays name, email, avatar
- Data is current (not cached)

**Test Coverage:** FULL ✅

**Tests:**
- `tests/e2e/profile-view.spec.ts:15` - "should display profile page with current data"
  - ✅ Navigates to /profile
  - ✅ Verifies name, email visible
  - ✅ Verifies avatar displayed
  - ✅ Validates data freshness via API assertion

- `tests/api/profile.spec.ts:8` - "should fetch user profile via API"
  - ✅ Calls GET /api/profile
  - ✅ Validates response schema
  - ✅ Confirms all fields present

---

### ⚠️ Requirement 2: User can edit profile (P0)

**Acceptance Criteria:**
- User clicks "Edit Profile"
- Can modify name, email, bio
- Can upload avatar
- Changes are persisted
- Success message shown

**Test Coverage:** PARTIAL ⚠️

**Tests:**
- `tests/e2e/profile-edit.spec.ts:22` - "should edit and save profile"
  - ✅ Clicks edit button
  - ✅ Modifies name and email
  - ⚠️ **Does NOT test bio field**
  - ❌ **Does NOT test avatar upload**
  - ✅ Verifies persistence
  - ✅ Verifies success message

- `tests/api/profile.spec.ts:25` - "should update profile via PATCH"
  - ✅ Calls PATCH /api/profile
  - ✅ Validates update response
  - ⚠️ **Only tests name/email, not bio/avatar**

**Missing Coverage:**
- Bio field not tested in E2E or API
- Avatar upload not tested

**Gap Severity:** HIGH (P0 requirement, critical path)

---

### ✅ Requirement 3: Invalid email shows validation error (P1)

**Acceptance Criteria:**
- Enter invalid email format
- See error message
- Cannot save changes

**Test Coverage:** FULL ✅

**Tests:**
- `tests/e2e/profile-edit.spec.ts:45` - "should show validation error for invalid email"
- `tests/api/profile.spec.ts:50` - "should return 400 for invalid email"

---

### ❌ Requirement 15: Profile export as PDF (P2)

**Acceptance Criteria:**
- User clicks "Export Profile"
- PDF downloads with profile data

**Test Coverage:** NONE ❌

**Gap Analysis:**
- **Priority:** P2 (medium)
- **Risk:** Low (non-critical feature)
- **Recommendation:** Add in next iteration (not blocking for release)

---
## Gap Prioritization

### Critical Gaps (Must Fix Before Release)

| Gap | Requirement              | Priority | Risk | Recommendation      |
| --- | ------------------------ | -------- | ---- | ------------------- |
| 1   | Bio field not tested     | P0       | High | Add E2E + API tests |
| 2   | Avatar upload not tested | P0       | High | Add E2E + API tests |

**Estimated Effort:** 3 hours
**Owner:** QA team
**Deadline:** Before release

### Non-Critical Gaps (Can Defer)

| Gap | Requirement               | Priority | Risk | Recommendation      |
| --- | ------------------------- | -------- | ---- | ------------------- |
| 3   | Profile export not tested | P2       | Low  | Add in v1.3 release |

**Estimated Effort:** 2 hours
**Owner:** QA team
**Deadline:** Next release (February)

---

## Recommendations

### 1. Add Bio Field Tests

**Tests Needed (Vanilla Playwright):**
```typescript
// tests/e2e/profile-edit.spec.ts
import { test, expect } from '@playwright/test';

test('should edit bio field', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();
  await page.getByLabel('Bio').fill('New bio text');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('New bio text')).toBeVisible();
});

// tests/api/profile.spec.ts
test('should update bio via API', async ({ request }) => {
  const response = await request.patch('/api/profile', {
    data: { bio: 'Updated bio' }
  });
  expect(response.ok()).toBeTruthy();
  const { bio } = await response.json();
  expect(bio).toBe('Updated bio');
});
```

**With Playwright Utils:**

```typescript
// tests/e2e/profile-edit.spec.ts
import { test } from '../support/fixtures'; // Composed with authToken
import { expect } from '@playwright/test';

test('should edit bio field', async ({ page, authToken }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();
  await page.getByLabel('Bio').fill('New bio text');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('New bio text')).toBeVisible();
});

// tests/api/profile.spec.ts
import { test as base, expect, mergeTests } from '@playwright/test';
import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { createAuthFixtures } from '@seontechnologies/playwright-utils/auth-session';

// Merge API request + auth fixtures
const authFixtureTest = base.extend(createAuthFixtures());
const test = mergeTests(apiRequestFixture, authFixtureTest);

test('should update bio via API', async ({ apiRequest, authToken }) => {
  const { status, body } = await apiRequest({
    method: 'PATCH',
    path: '/api/profile',
    body: { bio: 'Updated bio' },
    headers: { Authorization: `Bearer ${authToken}` }
  });

  expect(status).toBe(200);
  expect(body.bio).toBe('Updated bio');
});
```

**Note:** `authToken` requires auth-session fixture setup. See [Integrate Playwright Utils](/docs/tea/how-to/customization/integrate-playwright-utils.md#auth-session).

### 2. Add Avatar Upload Tests

**Tests Needed:**

```typescript
// tests/e2e/profile-edit.spec.ts
import { test, expect } from '@playwright/test';

test('should upload avatar image', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Edit' }).click();

  // Upload file
  await page.setInputFiles('[type="file"]', 'fixtures/avatar.png');
  await page.getByRole('button', { name: 'Save' }).click();

  // Verify uploaded image displays
  await expect(page.locator('img[alt="Profile avatar"]')).toBeVisible();
});

// tests/api/profile.spec.ts
import fs from 'fs/promises';

test('should accept valid image upload', async ({ request }) => {
  const response = await request.post('/api/profile/avatar', {
    multipart: {
      file: {
        name: 'avatar.png',
        mimeType: 'image/png',
        buffer: await fs.readFile('fixtures/avatar.png')
      }
    }
  });
  expect(response.ok()).toBeTruthy();
});
```

---

## Next Steps

After reviewing traceability:

1. **Fix critical gaps** - Add tests for P0/P1 requirements
2. **Run `test-review`** - Ensure new tests meet quality standards
3. **Run Phase 2** - Make the gate decision after gaps are addressed
```
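
The Coverage Summary in the matrix above can be derived mechanically from per-requirement classifications. A minimal TypeScript sketch of that arithmetic (the `summarize` helper is illustrative, not part of TEA):

```typescript
type Coverage = 'FULL' | 'PARTIAL' | 'NONE';

// Tally per-requirement classifications into the summary table's counts
// and rounded percentages.
function summarize(statuses: Coverage[]): Record<Coverage, { count: number; pct: number }> {
  const result: Record<Coverage, { count: number; pct: number }> = {
    FULL: { count: 0, pct: 0 },
    PARTIAL: { count: 0, pct: 0 },
    NONE: { count: 0, pct: 0 },
  };
  for (const status of statuses) result[status].count++;
  for (const key of Object.keys(result) as Coverage[]) {
    result[key].pct = Math.round((result[key].count / statuses.length) * 100);
  }
  return result;
}

// 15 requirements: 11 FULL, 3 PARTIAL, 1 NONE -> 73% / 20% / 7%, as in the matrix.
const summary = summarize([
  ...Array<Coverage>(11).fill('FULL'),
  ...Array<Coverage>(3).fill('PARTIAL'),
  'NONE',
]);
```

TEA performs this classification and tallying itself; the sketch only shows how the table's numbers relate to each other.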

---

## Phase 2: Quality Gate Decision

After Phase 1 coverage analysis is complete, run Phase 2 for the gate decision.

**Prerequisites:**

- Phase 1 traceability matrix complete
- Test execution results available

**Note:** Phase 2 will skip if test execution results aren't provided; the workflow requires actual test-run results to make a gate decision.
### 7. Run Phase 2

```
trace
```

Select "Phase 2: Quality Gate Decision".

### 8. Provide Additional Context

TEA will ask for:

**Gate Type:**

- Story gate (small release)
- Epic gate (larger release)
- Release gate (production deployment)
- Hotfix gate (emergency fix)

**Decision Mode:**

- **Deterministic** - Rule-based (coverage %, quality scores)
- **Manual** - Team decision with TEA guidance

**Example:**

```
Gate type: Epic gate
Decision mode: Deterministic
```

### 9. Provide Supporting Evidence

TEA will request:

**Phase 1 Results:**

```
traceability-matrix.md (from Phase 1)
```

**Test Quality (Optional):**

```
test-review.md (from test-review)
```

**NFR Assessment (Optional):**

```
nfr-assessment.md (from nfr-assess)
```

### 10. Review Gate Decision

TEA makes an evidence-based gate decision and writes it to a separate file.

#### Gate Decision (`gate-decision-{gate_type}-{story_id}.md`):
```markdown
# Phase 2: Quality Gate Decision

**Gate Type:** Epic Gate
**Decision:** PASS ✅
**Date:** 2026-01-13
**Approvers:** Product Manager, Tech Lead, QA Lead

## Decision Summary

**Verdict:** Ready to release

**Evidence:**
- P0 coverage: 100% (5/5 requirements)
- P1 coverage: 100% (6/6 requirements)
- P2 coverage: 33% (1/3 requirements) - acceptable
- Test quality score: 84/100
- NFR assessment: PASS

## Coverage Analysis

| Priority | Required Coverage | Actual Coverage | Status                 |
| -------- | ----------------- | --------------- | ---------------------- |
| **P0**   | 100%              | 100%            | ✅ PASS                |
| **P1**   | 90%               | 100%            | ✅ PASS                |
| **P2**   | 50%               | 33%             | ⚠️ Below (acceptable)  |
| **P3**   | 20%               | 0%              | ✅ PASS (low priority) |

**Rationale:**
- All critical-path (P0) requirements fully tested
- All high-value (P1) requirements fully tested
- P2 gap (profile export) is low risk and deferred to next release

## Quality Metrics

| Metric             | Threshold | Actual | Status |
| ------------------ | --------- | ------ | ------ |
| P0/P1 Coverage     | >95%      | 100%   | ✅     |
| Test Quality Score | >80       | 84     | ✅     |
| NFR Status         | PASS      | PASS   | ✅     |

## Risks and Mitigations

### Accepted Risks

**Risk 1: Profile export not tested (P2)**
- **Impact:** Medium (users can't export profile)
- **Mitigation:** Feature flag disabled by default
- **Plan:** Add tests in v1.3 release (February)
- **Monitoring:** Track feature flag usage

## Approvals

- [x] **Product Manager** - Business requirements met (Approved: 2026-01-13)
- [x] **Tech Lead** - Technical quality acceptable (Approved: 2026-01-13)
- [x] **QA Lead** - Test coverage sufficient (Approved: 2026-01-13)

## Next Steps

### Deployment

1. Merge to main branch
2. Deploy to staging
3. Run smoke tests in staging
4. Deploy to production
5. Monitor for 24 hours

### Monitoring

- Set alerts for profile endpoint (P99 > 200ms)
- Track error rates (target: <0.1%)
- Monitor profile export feature flag usage

### Future Work

- Add profile export tests (v1.3)
- Expand P2 coverage to 50%
```

### Gate Decision Rules

When `decision_mode = "deterministic"`, TEA applies rule-based thresholds:

| P0 Coverage | P1 Coverage | Overall Coverage | Decision                      |
| ----------- | ----------- | ---------------- | ----------------------------- |
| 100%        | ≥90%        | ≥80%             | **PASS** ✅                   |
| 100%        | 80-89%      | ≥80%             | **CONCERNS** ⚠️               |
| <100%       | Any         | Any              | **FAIL** ❌                   |
| Any         | <80%        | Any              | **FAIL** ❌                   |
| Any         | Any         | <80%             | **FAIL** ❌                   |
| Any         | Any         | Any              | **WAIVED** ⏭️ (with approval) |

**Detailed Rules:**

- **PASS:** P0=100%, P1≥90%, Overall≥80%
- **CONCERNS:** P0=100%, P1 80-89%, Overall≥80% (below target but not critical)
- **FAIL:** P0<100% OR P1<80% OR Overall<80% (critical gaps)

**PASS** ✅: All criteria met, ready to release

**CONCERNS** ⚠️: Some criteria not met, but:
- Mitigation plan exists
- Risk is acceptable
- Team approves proceeding
- Monitoring in place

**FAIL** ❌: Critical criteria not met:
- P0 requirements not tested
- Critical security vulnerabilities
- System is broken
- Cannot deploy

**WAIVED** ⏭️: Business approves proceeding despite concerns:
- Documented business justification
- Accepted risks quantified
- Approver signatures
- Future plans documented
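
The deterministic rules above amount to a small decision function. A sketch in TypeScript (illustrative only; WAIVED is a human override with documented approval, so it is not computed here):

```typescript
type Decision = 'PASS' | 'CONCERNS' | 'FAIL';

// PASS requires P0=100%, P1>=90%, Overall>=80%.
// CONCERNS is the P1 80-89% band with the other thresholds met.
// Everything else is FAIL.
function gateDecision(p0: number, p1: number, overall: number): Decision {
  if (p0 < 100 || p1 < 80 || overall < 80) return 'FAIL';
  return p1 >= 90 ? 'PASS' : 'CONCERNS';
}
```

Note that the checks are ordered so any single failed threshold short-circuits to FAIL before the PASS/CONCERNS distinction is made.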

### Example CONCERNS Decision

```markdown
## Decision Summary

**Verdict:** CONCERNS ⚠️ - Proceed with monitoring

**Evidence:**
- P0 coverage: 100%
- P1 coverage: 85% (below 90% target)
- Test quality: 78/100 (below 80 target)

**Gaps:**
- 1 P1 requirement not tested (avatar upload)
- Test quality score slightly below threshold

**Mitigation:**
- Avatar upload not critical for v1.2 launch
- Test quality issues are minor (no flakiness)
- Monitoring alerts configured

**Approvals:**
- Product Manager: APPROVED (business priority to launch)
- Tech Lead: APPROVED (technical risk acceptable)
```

### Example FAIL Decision

```markdown
## Decision Summary

**Verdict:** FAIL ❌ - Cannot release

**Evidence:**
- P0 coverage: 60% (below the 100% requirement)
- Critical security vulnerability (CVE-2024-12345)
- Test quality: 55/100

**Blockers:**
1. **Login flow not tested** (P0 requirement)
   - Critical path completely untested
   - Must add E2E and API tests

2. **SQL injection vulnerability**
   - Critical security issue
   - Must fix before deployment

**Actions Required:**
1. Add login tests (QA team, 2 days)
2. Fix SQL injection (backend team, 1 day)
3. Re-run security scan (DevOps, 1 hour)
4. Re-run trace after fixes

**Cannot proceed until all blockers resolved.**
```
## What You Get

### Phase 1: Traceability Matrix

- Requirement-to-test mapping
- Coverage classification (FULL/PARTIAL/NONE)
- Gap identification with priorities
- Actionable recommendations

### Phase 2: Gate Decision

- Go/no-go verdict (PASS/CONCERNS/FAIL/WAIVED)
- Evidence summary
- Approval signatures
- Next steps and monitoring plan

## Usage Patterns

### Greenfield Projects

**Phase 3:**

```
After architecture complete:
1. Run test-design (system-level)
2. Run trace Phase 1 (baseline)
3. Use for implementation-readiness gate
```

**Phase 4:**

```
After each epic/story:
1. Run trace Phase 1 (refresh coverage)
2. Identify gaps
3. Add missing tests
```

**Release Gate:**

```
Before deployment:
1. Run trace Phase 1 (final coverage check)
2. Run trace Phase 2 (make gate decision)
3. Get approvals
4. Deploy (if PASS or WAIVED)
```

### Brownfield Projects

**Phase 2:**

```
Before planning new work:
1. Run trace Phase 1 (establish baseline)
2. Understand existing coverage
3. Plan testing strategy
```

**Phase 4:**

```
After each epic/story:
1. Run trace Phase 1 (refresh)
2. Compare to baseline
3. Track coverage improvement
```

**Release Gate:**

```
Before deployment:
1. Run trace Phase 1 (final check)
2. Run trace Phase 2 (gate decision)
3. Compare to baseline
4. Deploy if coverage maintained or improved
```
## Tips

### Run Phase 1 Frequently

Don't wait until the release gate:

```
After Story 1: trace Phase 1 (identify gaps early)
After Story 2: trace Phase 1 (refresh)
After Story 3: trace Phase 1 (refresh)
Before Release: trace Phase 1 + Phase 2 (final gate)
```

**Benefit:** Catch gaps early, when they're cheap to fix.

### Use Coverage Trends

Track improvement over time:

```markdown
## Coverage Trend

| Date       | Epic     | P0/P1 Coverage | Quality Score | Status         |
| ---------- | -------- | -------------- | ------------- | -------------- |
| 2026-01-01 | Baseline | 45%            | -             | Starting point |
| 2026-01-08 | Epic 1   | 78%            | 72            | Improving      |
| 2026-01-15 | Epic 2   | 92%            | 84            | Near target    |
| 2026-01-20 | Epic 3   | 100%           | 88            | Ready!         |
```

### Set Coverage Targets by Priority

Don't aim for 100% across all priorities:

**Recommended Targets:**

- **P0:** 100% (critical path must be tested)
- **P1:** 90% (high-value scenarios)
- **P2:** 50% (nice-to-have features)
- **P3:** 20% (low-value edge cases)
### Use Classification Strategically

**FULL** ✅: Requirement completely tested
- E2E test covers the full user workflow
- API test validates backend behavior
- All acceptance criteria covered

**PARTIAL** ⚠️: Some aspects tested
- E2E test exists but is missing scenarios
- API test exists but is incomplete
- Some acceptance criteria not covered

**NONE** ❌: No tests exist
- Requirement identified but not tested
- May be intentional (low priority) or an oversight

**Classification helps prioritize:**
- Fix NONE coverage for P0/P1 requirements first
- Enhance PARTIAL coverage for P0 requirements
- Accept PARTIAL or NONE for P2/P3 if time-constrained
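
Classification itself reduces to counting acceptance criteria that have at least one test. A minimal sketch (a hypothetical helper, not TEA's API):

```typescript
type Coverage = 'FULL' | 'PARTIAL' | 'NONE';

// A requirement is FULL when every acceptance criterion has a test,
// NONE when no criterion does (or there are no criteria), PARTIAL in between.
function classify(coveredCriteria: number, totalCriteria: number): Coverage {
  if (totalCriteria === 0 || coveredCriteria === 0) return 'NONE';
  return coveredCriteria >= totalCriteria ? 'FULL' : 'PARTIAL';
}
```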

### Automate Gate Decisions

Use traceability in CI:

```yaml
# .github/workflows/gate-check.yml
- name: Check coverage
  run: |
    # Run trace Phase 1
    # Parse coverage percentages
    if [ "$P0_COVERAGE" -lt 100 ]; then
      echo "P0 coverage below 100%"
      exit 1
    fi
```
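
The `Parse coverage percentages` step above is left abstract. One way to implement it is to read the percentage out of the matrix's `By Priority` table. A TypeScript sketch (assumes the row format shown earlier, e.g. `| **P0** | 5 | 5 | 100% ✅ |`; the helper name is illustrative):

```typescript
// Pull the coverage percentage for a priority out of a traceability-matrix
// "By Priority" table row. Returns null when the row is absent.
function coverageFor(matrix: string, priority: string): number | null {
  const row = new RegExp(
    `\\|\\s*\\*\\*${priority}\\*\\*\\s*\\|[^|]*\\|[^|]*\\|\\s*(\\d+)%`
  );
  const match = matrix.match(row);
  return match ? Number(match[1]) : null;
}
```

A CI step could then exit non-zero when `coverageFor(matrix, 'P0')` is below 100.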

### Document Waivers Clearly

If proceeding with WAIVED:

**Required:**

```markdown
## Waiver Documentation

**Waived By:** VP Engineering, Product Lead
**Date:** 2026-01-15
**Gate Type:** Release Gate v1.2

**Justification:**
Business critical to launch by Q1 for investor demo.
Performance concerns acceptable for initial user base.

**Conditions:**
- Set monitoring alerts for P99 > 300ms
- Plan optimization for v1.3 (due February 28)
- Monitor user feedback closely

**Accepted Risks:**
- 1% of users may experience 350ms latency
- Avatar upload feature incomplete
- Profile export deferred to next release

**Quantified Impact:**
- Affects <100 users at current scale
- Workaround exists (manual export)
- Monitoring will catch issues early

**Approvals:**
- VP Engineering: [Signature] Date: 2026-01-15
- Product Lead: [Signature] Date: 2026-01-15
- QA Lead: [Signature] Date: 2026-01-15
```
## Common Issues

### Too Many Gaps to Fix

**Problem:** Phase 1 shows 50 uncovered requirements.

**Solution:** Prioritize ruthlessly:

1. Fix all P0 gaps (critical path)
2. Fix high-risk P1 gaps
3. Accept low-risk P1 gaps with mitigation
4. Defer all P2/P3 gaps

**Don't try to fix everything** - focus on what matters for the release.

### Can't Find Test Coverage

**Problem:** Tests exist but TEA can't map them to requirements.

**Cause:** Tests don't reference requirements.

**Solution:** Add traceability comments:

```typescript
test('should display profile', async ({ page }) => {
  // Covers: Requirement 1 - User can view profile
  // Acceptance criteria: Navigate to /profile, see name/email
  await page.goto('/profile');
  await expect(page.getByText('Test User')).toBeVisible();
});
```

Or use test IDs:

```typescript
test('[REQ-1] should display profile', async ({ page }) => {
  // Test code...
});
```
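
If you standardize on `[REQ-n]` tags, the requirement-to-test mapping can be recovered from test titles alone. A sketch of that extraction (a hypothetical helper, not part of TEA):

```typescript
// Group test titles by the [REQ-n] tag they carry; untagged titles are skipped.
function mapRequirements(titles: string[]): Map<string, string[]> {
  const byRequirement = new Map<string, string[]>();
  for (const title of titles) {
    const match = title.match(/\[(REQ-\d+)\]/);
    if (!match) continue;
    const tests = byRequirement.get(match[1]) ?? [];
    tests.push(title);
    byRequirement.set(match[1], tests);
  }
  return byRequirement;
}
```

Requirements that end up with an empty or missing entry in the map are your NONE-coverage candidates.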

### Unclear What "FULL" vs "PARTIAL" Means

**FULL** ✅: All acceptance criteria tested

```
Requirement: User can edit profile
Acceptance criteria:
- Can modify name ✅ Tested
- Can modify email ✅ Tested
- Can upload avatar ✅ Tested
- Changes persist ✅ Tested
Result: FULL coverage
```

**PARTIAL** ⚠️: Some criteria tested, some not

```
Requirement: User can edit profile
Acceptance criteria:
- Can modify name ✅ Tested
- Can modify email ✅ Tested
- Can upload avatar ❌ Not tested
- Changes persist ✅ Tested
Result: PARTIAL coverage (3/4 criteria)
```

### Gate Decision Unclear

**Problem:** Not sure whether PASS or CONCERNS is appropriate.

**Guideline:**

**Use PASS** ✅ if:
- All P0 requirements 100% covered
- P1 requirements ≥90% covered
- No critical issues
- NFRs met

**Use CONCERNS** ⚠️ if:
- P1 coverage 80-89% (below target but close)
- Minor quality issues (score 70-79)
- NFRs have mitigation plans
- Team agrees risk is acceptable

**Use FAIL** ❌ if:
- P0 coverage <100% (critical-path gaps)
- P1 coverage <80%
- Critical security/performance issues
- No mitigation possible

**When in doubt, use CONCERNS** and document the risk.
## Related Guides

- [How to Run Test Design](/docs/tea/how-to/workflows/run-test-design.md) - Provides requirements for traceability
- [How to Run Test Review](/docs/tea/how-to/workflows/run-test-review.md) - Quality scores feed the gate
- [How to Run NFR Assessment](/docs/tea/how-to/workflows/run-nfr-assess.md) - NFR status feeds the gate

## Understanding the Concepts

- [Risk-Based Testing](/docs/tea/explanation/risk-based-testing.md) - Why P0 vs P3 matters
- [TEA Overview](/docs/tea/explanation/tea-overview.md) - Gate decisions in context

## Reference

- [Command: *trace](/docs/tea/reference/commands.md#trace) - Full command reference
- [TEA Configuration](/docs/tea/reference/configuration.md) - Config options

---

Generated with [BMad Method](https://bmad-method.org) - TEA (Test Architect)