Compare commits

...

59 Commits

Author SHA1 Message Date
manjaroblack
17f91c9d0b fix: correct typo in user guide flowchart from 'executiony' to 'execution' 2025-08-18 09:33:51 -05:00
manjaroblack
c80a868945 docs: add agent initialization guidance and command references to workflow diagrams 2025-08-18 09:33:51 -05:00
circus
ab70baca59 fix: remove incorrect else branch causing flatten command regression (#452)
This fixes a regression bug where the flatten command would fail with an
error message even when valid arguments were provided.

The bug was:
- First introduced in commit 0fdbca7
- Fixed in commit fab9d5e (v5.0.0-beta.2)
- Accidentally reintroduced in commit ed53943

The else branch at lines 130-134 was incorrectly handling the case when
users provided arguments, showing a misleading error about
'no arguments provided' when arguments were actually present.

Fixes all flatten command variations:
- npx bmad-method flatten
- npx bmad-method flatten --input /path
- npx bmad-method flatten --output file.xml
- npx bmad-method flatten --input /path --output file.xml

Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-17 22:09:56 -05:00
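For readers outside the codebase, a hedged sketch of the bug shape this commit describes: the real flattener source differs, but the regression amounts to an else branch that treats "flags were provided" as "no arguments provided". Option names and defaults below are illustrative (the output default mirrors the `flattened-codebase.xml` name used elsewhere in this changeset).

```javascript
// Buggy shape (paraphrased): the else fires whenever the user overrides a default,
// so `--input /path` or `--output file.xml` triggers a misleading error.
//
//   if (!options.input && !options.output) {
//     runWithDefaults();
//   } else {
//     console.error('Error: no arguments provided'); // arguments ARE present here
//     process.exit(1);
//   }
//
// Fixed shape: drop the else branch and fall back to defaults per option instead.
function resolveFlattenOptions(options = {}) {
  return {
    input: options.input || process.cwd(),              // --input, default: current project
    output: options.output || 'flattened-codebase.xml', // --output, default output name
  };
}
```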
Brian Madison
80d73d9093 fix: documentation and trademark updates 2025-08-17 19:23:50 -05:00
Brian Madison
f3cc410fb0 patch: move script to tools folder 2025-08-17 11:04:27 -05:00
Brian Madison
868ae23455 fix: previous merge set wrong default install location 2025-08-17 11:01:20 -05:00
Brian Madison
9de873777a fix: prettier fixes 2025-08-17 07:51:46 -05:00
Brian Madison
04c485b72e chore: bump to 4.39.1 to fix installer version display 2025-08-17 07:13:09 -05:00
Brian Madison
68eb31da77 fix: update installer version display to show 4.39.0 2025-08-17 07:12:53 -05:00
Brian Madison
c00d0aec88 chore: rollback to v4.39.0 from v5.x semantic versioning 2025-08-17 07:07:30 -05:00
Brian Madison
6543cb2a97 chore: bump version to 5.1.4 2025-08-17 00:30:15 -05:00
Brian Madison
b6fe44b16e fix: alphabetize agent commands and dependencies for improved organization
- Alphabetized all commands in agent files while maintaining help first and exit last
- Alphabetized all dependency categories (checklists, data, tasks, templates, utils, workflows)
- Alphabetized items within each dependency category across all 10 core agents:
  - analyst.md: commands and dependencies reorganized
  - architect.md: commands and dependencies reorganized
  - bmad-master.md: commands and dependencies reorganized, fixed YAML parsing issue
  - bmad-orchestrator.md: commands and dependencies reorganized
  - dev.md: commands and dependencies reorganized
  - pm.md: commands and dependencies reorganized
  - po.md: commands and dependencies reorganized
  - qa.md: commands and dependencies reorganized
  - sm.md: commands and dependencies reorganized
  - ux-expert.md: commands and dependencies reorganized
- Fixed YAML parsing error in bmad-master.md by properly quoting activation instructions
- Rebuilt all agent bundles and team bundles successfully
- Updated expansion pack bundles including new creative writing agents

This improves consistency and makes it easier to locate specific commands and dependencies
across all agent configurations.
2025-08-17 00:30:04 -05:00
Brian Madison
ac09300075 temporarily remove GCP agent system until it is completed in the experimental branch 2025-08-17 00:06:09 -05:00
DrBalls
b756790c17 Add Creative Writing expansion pack (#414)
* Add Creative Writing expansion pack
- 10 specialized writing agents for fiction and narrative design
- 8 complete workflows (novel, screenplay, short story, series)
- 27 quality checklists for genre and technical validation
- 22 writing tasks covering full creative process
- 8 professional templates for structured writing
- KDP publishing integration support

* Fix bmad-creative-writing expansion pack formatting and structure

- Convert all agents to standard BMAD markdown format with embedded YAML
- Add missing core dependencies (create-doc, advanced-elicitation, execute-checklist)
- Add bmad-kb.md customized for creative writing context
- Fix agent dependency references to only include existing files
- Standardize agent command syntax and activation instructions
- Clean up agent dependencies for beta-reader, dialog-specialist, editor, genre-specialist, narrative-designer, and world-builder

---------

Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-16 23:55:43 -05:00
Anthony
49347a8cde Feat(Expansion Pack): Part 2 - Agent System Templates (#370)
Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-16 23:47:30 -05:00
Brian, with AI
335e1da271 fix: add default current directory to installer prompt (#444)
Previously users had to manually type the full path or run pwd to get
the current directory when installing BMad. Now the installer prefills
the current working directory as the default, improving UX.

Co-authored-by: its-brianwithai <brian@ultrawideturbodev.com>
2025-08-16 22:08:06 -05:00
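A minimal sketch of what this change amounts to, assuming the installer prompts via inquirer (as the later CommonJS-compatibility commit suggests); the prompt name and message are illustrative, and only the `default: process.cwd()` part is the point.

```javascript
const inquirer = require('inquirer').default || require('inquirer');

// Prefill the current working directory so users can simply press Enter
// instead of typing the full path or running `pwd` first.
async function askInstallDirectory() {
  const { directory } = await inquirer.prompt([
    {
      type: 'input',
      name: 'directory',
      message: 'Enter the path where BMad should be installed:',
      default: process.cwd(),
    },
  ]);
  return directory;
}
```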
Brian Madison
6e2fbc6710 docs: add sync-version.sh script to troubleshooting section 2025-08-16 22:03:19 -05:00
Brian Madison
45dd7d1bc5 add: sync-version.sh script for easy version syncing 2025-08-16 22:02:12 -05:00
manjaroblack
db80eda9df refactor: centralize qa paths in core-config.yaml and update agent activation flows (#451)
Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-16 21:38:33 -05:00
Brian Madison
f5272f12e4 sync: update to published version 5.1.3 2025-08-16 21:35:12 -05:00
Brian Madison
26890a0a03 sync: update versions to 5.1.2 to match published release 2025-08-16 21:20:17 -05:00
Brian Madison
cf22fd98f3 fix: correct version to 5.1.1 after patch release
- npm latest tag now correctly points to 5.1.0
- package.json updated to 5.1.1 (what patch should have made)
- installer version synced
2025-08-16 21:10:46 -05:00
Brian Madison
fe318ecc07 sync: update package.json to match published version 5.0.1 2025-08-16 21:09:36 -05:00
Brian Madison
f959a07bda fix: update installer package.json version to 5.1.0
- Fixes version reporting in npx bmad-method --version
- Ensures installer displays correct version number
2025-08-16 21:04:32 -05:00
Brian Madison
c0899432c1 fix: simplify npm publishing to use latest tag only
- Remove stable tag complexity from workflow
- Publish directly to latest tag (default for npx)
- Update documentation to reflect single tag approach
2025-08-16 20:58:22 -05:00
Brian Madison
8573852a6e docs: update versioning and releases documentation
- Replace old semantic-release documentation with new simplified system
- Document command line release workflow (npm run release:*)
- Explain automatic release notes generation and categorization
- Add troubleshooting section and preview functionality
- Reflect current single @stable tag installation approach
2025-08-16 20:50:22 -05:00
Brian Madison
39437e9268 fix: handle protected branch in manual release workflow
- Allow workflow to continue even if push to main fails
- This is expected behavior with protected branches
- NPM publishing and GitHub releases will still work
2025-08-16 20:44:00 -05:00
Brian Madison
1772a30368 fix: enable version bumping in manual release workflow
- Fix version-bump.js to actually update package.json version
- Add tag existence check to prevent duplicate tag errors
- Remove semantic-release dependency from version bumping
2025-08-16 20:42:35 -05:00
Brian Madison
ba4fb4d084 feat: add convenient npm scripts for command line releases
- npm run release:patch/minor/major for triggering releases
- npm run release:watch for monitoring workflow progress
- One-liner workflow: preview:release && release:minor && release:watch
2025-08-16 20:38:58 -05:00
Brian Madison
3eb706c49a feat: enhance manual release workflow with automatic release notes
- Add automatic release notes generation from commit history
- Categorize commits into Features, Bug Fixes, and Maintenance
- Include installation instructions and changelog links
- Add preview-release-notes script for testing
- Update GitHub release creation to use generated notes
2025-08-16 20:35:41 -05:00
Brian Madison
3f5abf347d feat: simplify installation to single @stable tag
- Remove automatic versioning and dual publishing strategy
- Delete release.yaml and promote-to-stable.yaml workflows
- Add manual-release.yaml for controlled releases
- Remove semantic-release dependencies and config
- Update all documentation to use npx bmad-method install
- Configure NPM to publish to @stable tag by default
- Users can now use simple npx bmad-method install command
2025-08-16 20:23:23 -05:00
manjaroblack
ed539432fb chore: add code formatting config and pre-commit hooks (#450) 2025-08-16 19:08:39 -05:00
Brian Madison
51284d6ecf fix: handle existing tags in promote-to-stable workflow
- Check for existing git tags when calculating new version
- Automatically increment version if tag already exists
- Prevents workflow failure when tag v5.1.0 already exists
2025-08-16 17:14:38 -05:00
Brian Madison
6cba05114e fix: stable tag 2025-08-16 17:10:10 -05:00
Murat K Ozcan
ac360cd0bf chore: configure changelog file path in semantic-release config (#448)
Co-authored-by: Murat Ozcan <murat@Murats-MacBook-Pro.local>
2025-08-16 16:27:45 -05:00
manjaroblack
fab9d5e1f5 feat(flattener): prompt for detailed stats; polish .stats.md with emojis (#422)
* feat: add detailed statistics and markdown report generation to flattener tool

* fix: remove redundant error handling for project root detection
2025-08-16 08:03:28 -05:00
Brian Madison
93426c2d2f feat: publish stable release 5.0.0
BREAKING CHANGE: Promote beta features to stable release for v5.0.0

This commit ensures the stable release gets properly published to NPM and GitHub releases.
2025-08-15 23:06:28 -05:00
github-actions[bot]
f56d37a60a release: promote to stable 5.0.0
- Promote beta features to stable release
- Update version from 4.38.0 to 5.0.0
- Automated promotion via GitHub Actions
2025-08-15 23:06:28 -05:00
github-actions[bot]
224cfc05dc release: promote to stable 4.38.0
- Promote beta features to stable release
- Update version from 4.37.0 to 4.38.0
- Automated promotion via GitHub Actions
2025-08-15 23:06:27 -05:00
Brian Madison
6cb2fa68b3 fix: update package-lock.json for semver dependency 2025-08-15 23:06:27 -05:00
Brian Madison
d21ac491a0 release: create stable 4.37.0 release
Promote beta features to stable release with dual publishing support
2025-08-15 23:06:27 -05:00
Thiago Freitas
848e33fdd9 Feature: Installer commands for Crush CLI (#429)
* feat: add support for Crush IDE configuration and commands

* fix: update Crush IDE instructions for clarity on persona/task switching

---------

Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-15 22:38:44 -05:00
Murat K Ozcan
0b61175d98 feat: transform QA agent into Test Architect with advanced quality capabilities (#433)
* feat: transform QA agent into Test Architect with advanced quality capabilities

  - Add 6 specialized quality assessment commands
  - Implement risk-based testing with scoring
  - Create quality gate system with deterministic decisions
  - Add comprehensive test design and NFR validation
  - Update documentation with stage-based workflow integration

* feat: transform QA agent into Test Architect with advanced quality capabilities

  - Add 6 specialized quality assessment commands
  - Implement risk-based testing with scoring
  - Create quality gate system with deterministic decisions
  - Add comprehensive test design and NFR validation
  - Update documentation with stage-based workflow integration

* docs: refined the docs for test architect

* fix: addressed review comments from manjaroblack, round 1

* fix: addressed review comments from manjaroblack, round 1

---------

Co-authored-by: Murat Ozcan <murat@mac.lan>
Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-15 21:02:37 -05:00
cecil-the-coder
33269c888d fix: resolve CommonJS import compatibility for chalk, inquirer, and ora (#442)
Adds .default fallback for CommonJS imports to resolve compatibility issues
with newer versions of chalk, inquirer, and ora packages.

Fixes installer failures when error handlers or interactive prompts are triggered.

Changes:
- chalk: require('chalk').default || require('chalk')
- inquirer: require('inquirer').default || require('inquirer')
- ora: require('ora').default || require('ora')

Affects: installer.js, ide-setup.js, file-manager.js, ide-base-setup.js, bmad.js

Co-authored-by: Cecil <cecilthecoder@gmail.com>
2025-08-15 21:01:30 -05:00
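The module names and fallback expressions above come straight from the commit; the surrounding usage below is only an illustrative sketch of why the fallback is needed.

```javascript
// Newer releases of these packages ship ESM-first builds, so require() can return
// a namespace object whose callable export sits on .default; older CommonJS builds
// export the function directly. The || fallback handles both cases.
const chalk = require('chalk').default || require('chalk');
const inquirer = require('inquirer').default || require('inquirer');
const ora = require('ora').default || require('ora');

// Example: the spinner/error paths that previously crashed the installer.
const spinner = ora('Installing BMad...').start();
spinner.succeed(chalk.green('Install complete'));
```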
Brian Madison
7f016d0020 fix: add permissions and authentication for promotion workflow
- Add contents:write permission for GitHub Actions
- Configure git to use GITHUB_TOKEN for authentication
- Set remote URL with access token for push operations
- This should resolve the 403 permission denied error
2025-08-15 20:25:12 -05:00
Brian Madison
8b0b72b7b4 docs: document dual publishing strategy and automated promotion
- Add comprehensive documentation for dual publishing workflow
- Document GitHub Actions promotion process
- Clarify user experience for stable vs beta installations
- Include step-by-step promotion instructions
2025-08-15 20:18:36 -05:00
Brian Madison
1c3420335b test: trigger beta release to test current workflow
This will create a new beta version that we can then promote to stable using the new automated workflow
2025-08-15 20:17:58 -05:00
Brian Madison
fb02234b59 feat: add automated promotion workflow for stable releases
- Add GitHub Actions workflow for one-click promotion to stable
- Supports patch/minor/major version bumps
- Automatically merges main to stable and handles version updates
- Eliminates manual git operations for stable releases
2025-08-15 20:17:49 -05:00
Brian Madison
e0dcbcf527 fix: update versions for dual publishing beta releases 2025-08-15 20:03:10 -05:00
Brian Madison
75ba8d82e1 feat: republish beta with corrected dependencies and tags 2025-08-15 19:39:35 -05:00
Brian Madison
f3e429d746 feat: trigger new beta release with fixed dependencies
This creates a new beta version that includes the semver dependency fix
2025-08-15 19:38:06 -05:00
Brian Madison
5ceca3aeb9 fix: add semver dependency and correct NPM dist-tag configuration
- Add missing semver dependency to installer package.json
- Configure semantic-release to use correct channels (beta/latest)
- This ensures beta releases publish to @beta tag correctly
2025-08-15 19:33:07 -05:00
Brian Madison
8e324f60b0 fix: remove git plugin to resolve branch protection conflicts
- Beta releases don't need to commit version bumps back to repo
- This allows semantic-release to complete successfully
- NPM publishing will still work for @beta tag
2025-08-15 19:15:55 -05:00
Brian Madison
8a29f0e319 test: verify dual publishing workflow 2025-08-15 19:14:32 -05:00
Brian Madison
d92ba835fa feat: implement dual NPM publishing strategy
- Configure semantic-release for @beta and @latest tags
- Main branch publishes to @beta (bleeding edge)
- Stable branch publishes to @latest (production)
- Enable CI/CD workflow for both branches
2025-08-15 19:03:48 -05:00
Aaron
9868437f10 Add update-check command (#423)
* Add update-check command

* Adding additional information to update-check command and aligning with cli theme

* Correct update-check message to exclude global
2025-08-14 22:24:37 -05:00
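The PR body does not include the implementation here, but conceptually an update check compares the locally installed version against the latest published release. A hedged sketch under that assumption (the package name comes from this repo, semver is listed elsewhere in this changeset as a dependency, everything else is illustrative):

```javascript
const { execSync } = require('node:child_process');
const semver = require('semver');
const chalk = require('chalk').default || require('chalk');

function checkForUpdate() {
  const installed = require('./package.json').version;
  // Ask the npm registry for the latest published version of the package.
  const latest = execSync('npm view bmad-method version', { encoding: 'utf8' }).trim();
  if (semver.gt(latest, installed)) {
    console.log(chalk.yellow(`Update available: ${installed} -> ${latest}`));
    console.log('Run: npx bmad-method install');
  } else {
    console.log(chalk.green(`bmad-method ${installed} is up to date`));
  }
}

checkForUpdate();
```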
Stefan Klümpers
d563266b97 feat: install Cursor rules to subdirectory (#438)
* feat: install Cursor rules to subdirectory

Implement feature request #376 to avoid filename collisions and confusion
between repo-specific and BMAD-specific rules.

Changes:
- Move Cursor rules from .cursor/rules/ to .cursor/rules/bmad/
- Update installer configuration to use new subdirectory structure
- Update upgrader to reflect new rule directory path

This keeps BMAD Method files separate from existing project rules,
reducing chance of conflicts and improving organization.

Fixes #376

* chore: correct formatting in cursor rules directory path

---------

Co-authored-by: Stefan Klümpers <stefan.kluempers@materna.group>
2025-08-14 22:23:44 -05:00
Yongjip Kim
3efcfd54d4 fix(docs): fix broken link in GUIDING-PRINCIPLES.md (#428)
Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-14 13:40:11 -05:00
Benjamin Wiese
31e44b110e Remove bmad-core/bmad-core including empty file (#431)
Co-authored-by: Ben Wiese <benjamin.wiese@simplifier.io>
2025-08-14 13:39:28 -05:00
314 changed files with 50482 additions and 20481 deletions

View File

@@ -1,9 +1,9 @@
---
name: Bug report
about: Create a report to help us improve
title: ""
labels: ""
assignees: ""
title: ''
labels: ''
assignees: ''
---
**Describe the bug**

View File

@@ -1,9 +1,9 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: ""
assignees: ""
title: ''
labels: ''
assignees: ''
---
**Did you discuss the idea first in Discord Server (#general-dev)**

View File

@@ -1,6 +1,15 @@
name: Discord Notification
on: [pull_request, release, create, delete, issue_comment, pull_request_review, pull_request_review_comment]
"on":
[
pull_request,
release,
create,
delete,
issue_comment,
pull_request_review,
pull_request_review_comment,
]
jobs:
notify:

.github/workflows/format-check.yaml (new file, 42 changed lines, vendored)
View File

@@ -0,0 +1,42 @@
name: format-check
"on":
pull_request:
branches: ["**"]
jobs:
prettier:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- name: Install dependencies
run: npm ci
- name: Prettier format check
run: npm run format:check
eslint:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- name: Install dependencies
run: npm ci
- name: ESLint
run: npm run lint

.github/workflows/manual-release.yaml (new file, 173 changed lines, vendored)
View File

@@ -0,0 +1,173 @@
name: Manual Release
on:
workflow_dispatch:
inputs:
version_bump:
description: Version bump type
required: true
default: patch
type: choice
options:
- patch
- minor
- major
permissions:
contents: write
packages: write
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
cache: npm
registry-url: https://registry.npmjs.org
- name: Install dependencies
run: npm ci
- name: Run tests and validation
run: |
npm run validate
npm run format:check
npm run lint
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Bump version
run: npm run version:${{ github.event.inputs.version_bump }}
- name: Get new version and previous tag
id: version
run: |
echo "new_version=$(node -p "require('./package.json').version")" >> $GITHUB_OUTPUT
echo "previous_tag=$(git describe --tags --abbrev=0)" >> $GITHUB_OUTPUT
- name: Update installer package.json
run: |
sed -i 's/"version": ".*"/"version": "${{ steps.version.outputs.new_version }}"/' tools/installer/package.json
- name: Build project
run: npm run build
- name: Commit version bump
run: |
git add .
git commit -m "release: bump to v${{ steps.version.outputs.new_version }}"
- name: Generate release notes
id: release_notes
run: |
# Get commits since last tag
COMMITS=$(git log ${{ steps.version.outputs.previous_tag }}..HEAD --pretty=format:"- %s" --reverse)
# Categorize commits
FEATURES=$(echo "$COMMITS" | grep -E "^- (feat|Feature)" || true)
FIXES=$(echo "$COMMITS" | grep -E "^- (fix|Fix)" || true)
CHORES=$(echo "$COMMITS" | grep -E "^- (chore|Chore)" || true)
OTHERS=$(echo "$COMMITS" | grep -v -E "^- (feat|Feature|fix|Fix|chore|Chore|release:|Release:)" || true)
# Build release notes
cat > release_notes.md << 'EOF'
## 🚀 What's New in v${{ steps.version.outputs.new_version }}
EOF
if [ ! -z "$FEATURES" ]; then
echo "### ✨ New Features" >> release_notes.md
echo "$FEATURES" >> release_notes.md
echo "" >> release_notes.md
fi
if [ ! -z "$FIXES" ]; then
echo "### 🐛 Bug Fixes" >> release_notes.md
echo "$FIXES" >> release_notes.md
echo "" >> release_notes.md
fi
if [ ! -z "$OTHERS" ]; then
echo "### 📦 Other Changes" >> release_notes.md
echo "$OTHERS" >> release_notes.md
echo "" >> release_notes.md
fi
if [ ! -z "$CHORES" ]; then
echo "### 🔧 Maintenance" >> release_notes.md
echo "$CHORES" >> release_notes.md
echo "" >> release_notes.md
fi
cat >> release_notes.md << 'EOF'
## 📦 Installation
```bash
npx bmad-method install
```
**Full Changelog**: https://github.com/bmadcode/BMAD-METHOD/compare/${{ steps.version.outputs.previous_tag }}...v${{ steps.version.outputs.new_version }}
EOF
# Output for GitHub Actions
echo "RELEASE_NOTES<<EOF" >> $GITHUB_OUTPUT
cat release_notes.md >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Create and push tag
run: |
# Check if tag already exists
if git rev-parse "v${{ steps.version.outputs.new_version }}" >/dev/null 2>&1; then
echo "Tag v${{ steps.version.outputs.new_version }} already exists, skipping tag creation"
else
git tag -a "v${{ steps.version.outputs.new_version }}" -m "Release v${{ steps.version.outputs.new_version }}"
git push origin "v${{ steps.version.outputs.new_version }}"
fi
- name: Push changes to main
run: |
if git push origin HEAD:main 2>/dev/null; then
echo "✅ Successfully pushed to main branch"
else
echo "⚠️ Could not push to main (protected branch). This is expected."
echo "📝 Version bump and tag were created successfully."
fi
- name: Publish to NPM
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
run: npm publish
- name: Create GitHub Release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: v${{ steps.version.outputs.new_version }}
release_name: "BMad Method v${{ steps.version.outputs.new_version }}"
body: ${{ steps.release_notes.outputs.RELEASE_NOTES }}
draft: false
prerelease: false
- name: Summary
run: |
echo "🎉 Successfully released v${{ steps.version.outputs.new_version }}!"
echo "📦 Published to NPM with @latest tag"
echo "🏷️ Git tag: v${{ steps.version.outputs.new_version }}"
echo "✅ Users running 'npx bmad-method install' will now get version ${{ steps.version.outputs.new_version }}"
echo ""
echo "📝 Release notes preview:"
cat release_notes.md

View File

@@ -1,59 +0,0 @@
name: Release
'on':
push:
branches:
- main
workflow_dispatch:
inputs:
version_type:
description: Version bump type
required: true
default: patch
type: choice
options:
- patch
- minor
- major
permissions:
contents: write
issues: write
pull-requests: write
packages: write
jobs:
release:
runs-on: ubuntu-latest
if: '!contains(github.event.head_commit.message, ''[skip ci]'')'
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: npm
registry-url: https://registry.npmjs.org
- name: Install dependencies
run: npm ci
- name: Run tests and validation
run: |
npm run validate
npm run format
- name: Debug permissions
run: |
echo "Testing git permissions..."
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
echo "Git config set successfully"
- name: Manual version bump
if: github.event_name == 'workflow_dispatch'
run: npm run version:${{ github.event.inputs.version_type }}
- name: Semantic Release
if: github.event_name == 'push'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
run: npm run release

.gitignore (3 changed lines, vendored)
View File

@@ -25,7 +25,6 @@ Thumbs.db
# Development tools and configs
.prettierignore
.prettierrc
.husky/
# IDE and editor configs
.windsurf/
@@ -44,4 +43,4 @@ CLAUDE.md
test-project-install/*
sample-project/*
flattened-codebase.xml
*.stats.md

.husky/pre-commit (new executable file, 3 changed lines)
View File

@@ -0,0 +1,3 @@
#!/usr/bin/env sh
npx --no-install lint-staged

View File

@@ -1,18 +0,0 @@
{
"branches": ["main"],
"plugins": [
"@semantic-release/commit-analyzer",
"@semantic-release/release-notes-generator",
"@semantic-release/changelog",
"@semantic-release/npm",
"./tools/semantic-release-sync-installer.js",
[
"@semantic-release/git",
{
"assets": ["package.json", "package-lock.json", "tools/installer/package.json", "CHANGELOG.md"],
"message": "chore(release): ${nextRelease.version} [skip ci]\n\n${nextRelease.notes}"
}
],
"@semantic-release/github"
]
}

.vscode/settings.json (27 changed lines, vendored)
View File

@@ -40,5 +40,30 @@
"tileset",
"Trae",
"VNET"
]
],
"json.schemas": [
{
"fileMatch": ["package.json"],
"url": "https://json.schemastore.org/package.json"
},
{
"fileMatch": [".vscode/settings.json"],
"url": "vscode://schemas/settings/folder"
}
],
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
"[json]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
"[yaml]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
"[markdown]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
"prettier.prettierPath": "node_modules/prettier",
"prettier.requireConfig": true,
"yaml.format.enable": false,
"eslint.useFlatConfig": true,
"eslint.validate": ["javascript", "yaml"],
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit"
},
"editor.rulers": [100]
}

View File

@@ -1,9 +1,8 @@
## [4.36.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.1...v4.36.2) (2025-08-10)
### Bug Fixes
* align installer dependencies with root package versions for ESM compatibility ([#420](https://github.com/bmadcode/BMAD-METHOD/issues/420)) ([3f6b674](https://github.com/bmadcode/BMAD-METHOD/commit/3f6b67443d61ae6add98656374bed27da4704644))
- align installer dependencies with root package versions for ESM compatibility ([#420](https://github.com/bmadcode/BMAD-METHOD/issues/420)) ([3f6b674](https://github.com/bmadcode/BMAD-METHOD/commit/3f6b67443d61ae6add98656374bed27da4704644))
## [4.36.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.0...v4.36.1) (2025-08-09)
@@ -575,10 +574,6 @@
- Manual version bumping via npm scripts is now disabled. Use conventional commits for automated releases.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
# [4.2.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.1.0...v4.2.0) (2025-06-15)
### Bug Fixes
@@ -687,3 +682,5 @@ Co-Authored-By: Claude <noreply@anthropic.com>
### Features
- add versioning and release automation ([0ea5e50](https://github.com/bmadcode/BMAD-METHOD/commit/0ea5e50aa7ace5946d0100c180dd4c0da3e2fd8c))
# Promote to stable release 5.0.0

View File

@@ -206,4 +206,4 @@ Each commit should represent one logical change:
## License
By contributing to this project, you agree that your contributions will be licensed under the same license as the project.
By contributing to this project, you agree that your contributions will be licensed under the MIT License.

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2025 Brian AKA BMad AKA BMad Code
Copyright (c) 2025 BMad Code, LLC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -19,3 +19,8 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
TRADEMARK NOTICE:
BMAD™ and BMAD-METHOD™ are trademarks of BMad Code, LLC. The use of these
trademarks in this software does not grant any rights to use the trademarks
for any other purpose.

View File

@@ -1,4 +1,4 @@
# BMad-Method: Universal AI Agent Framework
# BMAD-METHOD™: Universal AI Agent Framework
[![Version](https://img.shields.io/npm/v/bmad-method?color=blue&label=version)](https://www.npmjs.com/package/bmad-method)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
@@ -11,11 +11,11 @@ Foundations in Agentic Agile Driven Development, known as the Breakthrough Metho
**[Join our Discord Community](https://discord.gg/gk8jAdXWmj)** - A growing community for AI enthusiasts! Get help, share ideas, explore AI agents & frameworks, collaborate on tech projects, enjoy hobbies, and help each other succeed. Whether you're stuck on BMad, building your own agents, or just want to chat about the latest in AI - we're here for you! **Some mobile and VPN may have issue joining the discord, this is a discord issue - if the invite does not work, try from your own internet or another network, or non-VPN.**
**If you find this project helpful or useful, please give it a star in the upper right hand corner!** It helps others discover BMad-Method and you will be notified of updates!
**If you find this project helpful or useful, please give it a star in the upper right hand corner!** It helps others discover BMAD-METHOD™ and you will be notified of updates!
## Overview
**BMad Method's Two Key Innovations:**
**BMAD-METHOD™'s Two Key Innovations:**
**1. Agentic Planning:** Dedicated agents (Analyst, PM, Architect) collaborate with you to create detailed, consistent PRDs and Architecture documents. Through advanced prompt engineering and human-in-the-loop refinement, these planning agents produce comprehensive specifications that go far beyond generic AI task generation.
@@ -49,7 +49,7 @@ This two-phase approach eliminates both **planning inconsistency** and **context
## Important: Keep Your BMad Installation Updated
**Stay up-to-date effortlessly!** If you already have BMad-Method installed in your project, simply run:
**Stay up-to-date effortlessly!** If you already have BMAD-METHOD™ installed in your project, simply run:
```bash
npx bmad-method install
@@ -75,6 +75,8 @@ This makes it easy to benefit from the latest improvements, bug fixes, and new a
```bash
npx bmad-method install
# OR explicitly use stable tag:
npx bmad-method@stable install
# OR if you already have BMad installed:
git pull
npm run install:bmad
@@ -108,11 +110,11 @@ npm run install:bmad # build and install all to a destination folder
## 🌟 Beyond Software Development - Expansion Packs
BMad's natural language framework works in ANY domain. Expansion packs provide specialized AI agents for creative writing, business strategy, health & wellness, education, and more. Also expansion packs can expand the core BMad-Method with specific functionality that is not generic for all cases. [See the Expansion Packs Guide](docs/expansion-packs.md) and learn to create your own!
BMAD™'s natural language framework works in ANY domain. Expansion packs provide specialized AI agents for creative writing, business strategy, health & wellness, education, and more. Also expansion packs can expand the core BMAD-METHOD™ with specific functionality that is not generic for all cases. [See the Expansion Packs Guide](docs/expansion-packs.md) and learn to create your own!
## Codebase Flattener Tool
The BMad-Method includes a powerful codebase flattener tool designed to prepare your project files for AI model consumption. This tool aggregates your entire codebase into a single XML file, making it easy to share your project context with AI assistants for analysis, debugging, or development assistance.
The BMAD-METHOD™ includes a powerful codebase flattener tool designed to prepare your project files for AI model consumption. This tool aggregates your entire codebase into a single XML file, making it easy to share your project context with AI assistants for analysis, debugging, or development assistance.
### Features
@@ -155,7 +157,7 @@ The tool will display progress and provide a comprehensive summary:
📊 File breakdown: 142 text, 14 binary, 0 errors
```
The generated XML file contains your project's text-based source files in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.
The generated XML file contains your project's text-based source files in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMAD-METHOD™ projects.
#### Advanced Usage & Options
@@ -214,6 +216,10 @@ The generated XML file contains your project's text-based source files in a stru
MIT License - see [LICENSE](LICENSE) for details.
## Trademark Notice
BMAD™ and BMAD-METHOD™ are trademarks of BMad Code, LLC. All rights reserved.
[![Contributors](https://contrib.rocks/image?repo=bmadcode/bmad-method)](https://github.com/bmadcode/bmad-method/graphs/contributors)
<sub>Built with ❤️ for the AI-assisted development community</sub>

View File

@@ -1,10 +1,11 @@
# <!-- Powered by BMAD™ Core -->
bundle:
name: Team All
icon: 👥
description: Includes every core system agent.
agents:
- bmad-orchestrator
- '*'
- "*"
workflows:
- brownfield-fullstack.yaml
- brownfield-service.yaml

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
bundle:
name: Team Fullstack
icon: 🚀

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
bundle:
name: Team IDE Minimal
icon:

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
bundle:
name: Team No UI
icon: 🔧

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# analyst
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -26,7 +29,7 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: Mary
id: analyst
@@ -54,28 +57,28 @@ persona:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-project-brief: use task create-doc with project-brief-tmpl.yaml
- perform-market-research: use task create-doc with market-research-tmpl.yaml
- create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
- yolo: Toggle Yolo Mode
- doc-out: Output full document in progress to current destination file
- research-prompt {topic}: execute task create-deep-research-prompt.md
- brainstorm {topic}: Facilitate structured brainstorming session (run task facilitate-brainstorming-session.md with template brainstorming-output-tmpl.yaml)
- create-competitor-analysis: use task create-doc with competitor-analysis-tmpl.yaml
- create-project-brief: use task create-doc with project-brief-tmpl.yaml
- doc-out: Output full document in progress to current destination file
- elicit: run the task advanced-elicitation
- perform-market-research: use task create-doc with market-research-tmpl.yaml
- research-prompt {topic}: execute task create-deep-research-prompt.md
- yolo: Toggle Yolo Mode
- exit: Say goodbye as the Business Analyst, and then abandon inhabiting this persona
dependencies:
tasks:
- facilitate-brainstorming-session.md
- create-deep-research-prompt.md
- create-doc.md
- advanced-elicitation.md
- document-project.md
templates:
- project-brief-tmpl.yaml
- market-research-tmpl.yaml
- competitor-analysis-tmpl.yaml
- brainstorming-output-tmpl.yaml
data:
- bmad-kb.md
- brainstorming-techniques.md
tasks:
- advanced-elicitation.md
- create-deep-research-prompt.md
- create-doc.md
- document-project.md
- facilitate-brainstorming-session.md
templates:
- brainstorming-output-tmpl.yaml
- competitor-analysis-tmpl.yaml
- market-research-tmpl.yaml
- project-brief-tmpl.yaml
```

View File

@@ -1,5 +1,6 @@
# architect
<!-- Powered by BMAD™ Core -->
# architect
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -18,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -27,8 +29,7 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- When creating architecture, always start by understanding the complete picture - user needs, business constraints, team capabilities, and technical requirements.
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: Winston
id: architect
@@ -55,10 +56,10 @@ persona:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-full-stack-architecture: use create-doc with fullstack-architecture-tmpl.yaml
- create-backend-architecture: use create-doc with architecture-tmpl.yaml
- create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
- create-brownfield-architecture: use create-doc with brownfield-architecture-tmpl.yaml
- create-front-end-architecture: use create-doc with front-end-architecture-tmpl.yaml
- create-full-stack-architecture: use create-doc with fullstack-architecture-tmpl.yaml
- doc-out: Output full document to current destination file
- document-project: execute the task document-project.md
- execute-checklist {checklist}: Run task execute-checklist (default->architect-checklist)
@@ -67,18 +68,18 @@ commands:
- yolo: Toggle Yolo Mode
- exit: Say goodbye as the Architect, and then abandon inhabiting this persona
dependencies:
tasks:
- create-doc.md
- create-deep-research-prompt.md
- document-project.md
- execute-checklist.md
templates:
- architecture-tmpl.yaml
- front-end-architecture-tmpl.yaml
- fullstack-architecture-tmpl.yaml
- brownfield-architecture-tmpl.yaml
checklists:
- architect-checklist.md
data:
- technical-preferences.md
tasks:
- create-deep-research-prompt.md
- create-doc.md
- document-project.md
- execute-checklist.md
templates:
- architecture-tmpl.yaml
- brownfield-architecture-tmpl.yaml
- front-end-architecture-tmpl.yaml
- fullstack-architecture-tmpl.yaml
```

View File

@@ -1,5 +1,6 @@
# BMad Master
<!-- Powered by BMAD™ Core -->
# BMad Master
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -10,15 +11,16 @@ CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your
```yaml
IDE-FILE-RESOLUTION:
- FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
- Dependencies map to {root}/{type}/{name}
- Dependencies map to root/type/name
- type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
- Example: create-doc.md → {root}/tasks/create-doc.md
- Example: create-doc.md → root/tasks/create-doc.md
- IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read bmad-core/core-config.yaml (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run *help to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -27,10 +29,10 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: Do NOT scan filesystem or load any resources during startup, ONLY when commanded
- 'CRITICAL: Do NOT scan filesystem or load any resources during startup, ONLY when commanded (Exception: Read bmad-core/core-config.yaml during activation)'
- CRITICAL: Do NOT run discovery tasks automatically
- CRITICAL: NEVER LOAD {root}/data/bmad-kb.md UNLESS USER TYPES *kb
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
- CRITICAL: NEVER LOAD root/data/bmad-kb.md UNLESS USER TYPES *kb
- CRITICAL: On activation, ONLY greet user, auto-run *help, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: BMad Master
id: bmad-master
@@ -49,28 +51,40 @@ persona:
commands:
- help: Show these listed commands in a numbered list
- kb: Toggle KB mode off (default) or on, when on will load and reference the {root}/data/bmad-kb.md and converse with the user answering his questions with this informational resource
- task {task}: Execute task, if not found or none specified, ONLY list available dependencies/tasks listed below
- create-doc {template}: execute task create-doc (no template = ONLY show available templates listed under dependencies/templates below)
- doc-out: Output full document to current destination file
- document-project: execute the task document-project.md
- execute-checklist {checklist}: Run task execute-checklist (no checklist = ONLY show available checklists listed under dependencies/checklist below)
- kb: Toggle KB mode off (default) or on, when on will load and reference the {root}/data/bmad-kb.md and converse with the user answering his questions with this informational resource
- shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
- task {task}: Execute task, if not found or none specified, ONLY list available dependencies/tasks listed below
- yolo: Toggle Yolo Mode
- exit: Exit (confirm)
dependencies:
checklists:
- architect-checklist.md
- change-checklist.md
- pm-checklist.md
- po-master-checklist.md
- story-dod-checklist.md
- story-draft-checklist.md
data:
- bmad-kb.md
- brainstorming-techniques.md
- elicitation-methods.md
- technical-preferences.md
tasks:
- advanced-elicitation.md
- facilitate-brainstorming-session.md
- brownfield-create-epic.md
- brownfield-create-story.md
- correct-course.md
- create-deep-research-prompt.md
- create-doc.md
- document-project.md
- create-next-story.md
- document-project.md
- execute-checklist.md
- facilitate-brainstorming-session.md
- generate-ai-frontend-prompt.md
- index-docs.md
- shard-doc.md
@@ -86,11 +100,6 @@ dependencies:
- prd-tmpl.yaml
- project-brief-tmpl.yaml
- story-tmpl.yaml
data:
- bmad-kb.md
- brainstorming-techniques.md
- elicitation-methods.md
- technical-preferences.md
workflows:
- brownfield-fullstack.md
- brownfield-service.md
@@ -98,11 +107,4 @@ dependencies:
- greenfield-fullstack.md
- greenfield-service.md
- greenfield-ui.md
checklists:
- architect-checklist.md
- change-checklist.md
- pm-checklist.md
- po-master-checklist.md
- story-dod-checklist.md
- story-draft-checklist.md
```

View File

@@ -1,5 +1,6 @@
# BMad Web Orchestrator
<!-- Powered by BMAD™ Core -->
# BMad Web Orchestrator
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -18,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -29,8 +31,8 @@ activation-instructions:
- Assess user goal against available agents and workflows in this bundle
- If clear match to an agent's expertise, suggest transformation with *agent command
- If project-oriented, suggest *workflow-guidance to explore options
- Load resources only when needed - never pre-load
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
- Load resources only when needed - never pre-load (Exception: Read `bmad-core/core-config.yaml` during activation)
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: BMad Orchestrator
id: bmad-orchestrator
@@ -54,21 +56,16 @@ persona:
- Always remind users that commands require * prefix
commands: # All commands require * prefix when used (e.g., *help, *agent pm)
help: Show this guide with available agents and workflows
chat-mode: Start conversational mode for detailed assistance
kb-mode: Load full BMad knowledge base
status: Show current context, active agent, and progress
agent: Transform into a specialized agent (list if name not specified)
exit: Return to BMad or exit session
task: Run a specific task (list if name not specified)
workflow: Start a specific workflow (list if name not specified)
workflow-guidance: Get personalized help selecting the right workflow
plan: Create detailed workflow plan before starting
plan-status: Show current workflow plan progress
plan-update: Update workflow plan status
chat-mode: Start conversational mode for detailed assistance
checklist: Execute a checklist (list if name not specified)
yolo: Toggle skip confirmations mode
party-mode: Group chat with all agents
doc-out: Output full document
kb-mode: Load full BMad knowledge base
party-mode: Group chat with all agents
status: Show current context, active agent, and progress
task: Run a specific task (list if name not specified)
yolo: Toggle skip confirmations mode
exit: Return to BMad or exit session
help-display-template: |
=== BMad Orchestrator Commands ===
All commands must start with * (asterisk)
@@ -132,19 +129,19 @@ workflow-guidance:
- Understand each workflow's purpose, options, and decision points
- Ask clarifying questions based on the workflow's structure
- Guide users through workflow selection when multiple options exist
- When appropriate, suggest: "Would you like me to create a detailed workflow plan before starting?"
- When appropriate, suggest: 'Would you like me to create a detailed workflow plan before starting?'
- For workflows with divergent paths, help users choose the right path
- Adapt questions to the specific domain (e.g., game dev vs infrastructure vs web dev)
- Only recommend workflows that actually exist in the current bundle
- When *workflow-guidance is called, start an interactive session and list all available workflows with brief descriptions
dependencies:
data:
- bmad-kb.md
- elicitation-methods.md
tasks:
- advanced-elicitation.md
- create-doc.md
- kb-mode-interaction.md
data:
- bmad-kb.md
- elicitation-methods.md
utils:
- workflow-management.md
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# dev
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -29,16 +32,15 @@ activation-instructions:
- CRITICAL: Read the following full files as these are your explicit rules for development standards for this project - {root}/core-config.yaml devLoadAlwaysFiles list
- CRITICAL: Do NOT load any other files during startup aside from the assigned story and devLoadAlwaysFiles items, unless user requested you do or the following contradicts
- CRITICAL: Do NOT begin development until a story is not in draft mode and you are told to proceed
- CRITICAL: On activation, ONLY greet user and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
- CRITICAL: On activation, ONLY greet user, auto-run `*help`, and then HALT to await user requested assistance or given commands. ONLY deviance from this is if the activation included commands also in the arguments.
agent:
name: James
id: dev
title: Full Stack Developer
icon: 💻
whenToUse: "Use for code implementation, debugging, refactoring, and development best practices"
whenToUse: 'Use for code implementation, debugging, refactoring, and development best practices'
customization:
persona:
role: Expert Senior Software Engineer & Implementation Specialist
style: Extremely concise, pragmatic, detail-oriented, solution-focused
@@ -54,23 +56,25 @@ core_principles:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
- develop-story:
- order-of-execution: "Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete"
- order-of-execution: 'Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete'
- story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
- blocking: "HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression"
- ready-for-review: "Code matches requirements + All validations pass + Follows standards + File List complete"
- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
- ready-for-review: 'Code matches requirements + All validations pass + Follows standards + File List complete'
- completion: "All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON'T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: 'Ready for Review'→HALT"
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- review-qa: run task `apply-qa-fixes.md'
- run-tests: Execute linting and tests
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
dependencies:
tasks:
- execute-checklist.md
- validate-next-story.md
checklists:
- story-dod-checklist.md
tasks:
- apply-qa-fixes.md
- execute-checklist.md
- validate-next-story.md
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# pm
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -26,7 +29,7 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet the user and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
- CRITICAL: On activation, ONLY greet the user, auto-run `*help`, and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
agent:
name: John
id: pm
@@ -50,32 +53,32 @@ persona:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-prd: run task create-doc.md with template prd-tmpl.yaml
- create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
- correct-course: execute the correct-course task
- create-brownfield-epic: run task brownfield-create-epic.md
- create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
- create-brownfield-story: run task brownfield-create-story.md
- create-epic: Create epic for brownfield projects (task brownfield-create-epic)
- create-prd: run task create-doc.md with template prd-tmpl.yaml
- create-story: Create user story from requirements (task brownfield-create-story)
- doc-out: Output full document to current destination file
- shard-prd: run the task shard-doc.md for the provided prd.md (ask if not found)
- correct-course: execute the correct-course task
- yolo: Toggle Yolo Mode
- exit: Exit (confirm)
dependencies:
checklists:
- change-checklist.md
- pm-checklist.md
data:
- technical-preferences.md
tasks:
- create-doc.md
- correct-course.md
- create-deep-research-prompt.md
- brownfield-create-epic.md
- brownfield-create-story.md
- correct-course.md
- create-deep-research-prompt.md
- create-doc.md
- execute-checklist.md
- shard-doc.md
templates:
- prd-tmpl.yaml
- brownfield-prd-tmpl.yaml
checklists:
- pm-checklist.md
- change-checklist.md
data:
- technical-preferences.md
- prd-tmpl.yaml
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# po
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -26,7 +29,7 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet the user and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
- CRITICAL: On activation, ONLY greet the user, auto-run `*help`, and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
agent:
name: Sarah
id: po
@@ -53,24 +56,24 @@ persona:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
- shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
- correct-course: execute the correct-course task
- create-epic: Create epic for brownfield projects (task brownfield-create-epic)
- create-story: Create user story from requirements (task brownfield-create-story)
- doc-out: Output full document to current destination file
- execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
- shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
- validate-story-draft {story}: run the task validate-next-story against the provided story file
- yolo: Toggle Yolo Mode on/off - when on, doc section confirmations will be skipped
- exit: Exit (confirm)
dependencies:
checklists:
- change-checklist.md
- po-master-checklist.md
tasks:
- correct-course.md
- execute-checklist.md
- shard-doc.md
- correct-course.md
- validate-next-story.md
templates:
- story-tmpl.yaml
checklists:
- po-master-checklist.md
- change-checklist.md
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# qa
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -26,30 +29,34 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet the user and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
- CRITICAL: On activation, ONLY greet the user, auto-run `*help`, and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
agent:
name: Quinn
id: qa
title: Senior Developer & QA Architect
title: Test Architect & Quality Advisor
icon: 🧪
whenToUse: Use for senior code review, refactoring, test planning, quality assurance, and mentoring through code improvements
whenToUse: |
Use for comprehensive test architecture review, quality gate decisions,
and code improvement. Provides thorough analysis including requirements
traceability, risk assessment, and test strategy.
Advisory only - teams choose their quality bar.
customization: null
persona:
role: Senior Developer & Test Architect
style: Methodical, detail-oriented, quality-focused, mentoring, strategic
identity: Senior developer with deep expertise in code quality, architecture, and test automation
focus: Code excellence through review, refactoring, and comprehensive testing strategies
role: Test Architect with Quality Advisory Authority
style: Comprehensive, systematic, advisory, educational, pragmatic
identity: Test architect who provides thorough quality assessment and actionable recommendations without blocking progress
focus: Comprehensive quality analysis through test architecture, risk assessment, and advisory gates
core_principles:
- Senior Developer Mindset - Review and improve code as a senior mentoring juniors
- Active Refactoring - Don't just identify issues, fix them with clear explanations
- Test Strategy & Architecture - Design holistic testing strategies across all levels
- Code Quality Excellence - Enforce best practices, patterns, and clean code principles
- Shift-Left Testing - Integrate testing early in development lifecycle
- Performance & Security - Proactively identify and fix performance/security issues
- Mentorship Through Action - Explain WHY and HOW when making improvements
- Risk-Based Testing - Prioritize testing based on risk and critical areas
- Continuous Improvement - Balance perfection with pragmatism
- Architecture & Design Patterns - Ensure proper patterns and maintainable code structure
- Depth As Needed - Go deep based on risk signals, stay concise when low risk
- Requirements Traceability - Map all stories to tests using Given-When-Then patterns
- Risk-Based Testing - Assess and prioritize by probability × impact
- Quality Attributes - Validate NFRs (security, performance, reliability) via scenarios
- Testability Assessment - Evaluate controllability, observability, debuggability
- Gate Governance - Provide clear PASS/CONCERNS/FAIL/WAIVED decisions with rationale
- Advisory Excellence - Educate through documentation, never block arbitrarily
- Technical Debt Awareness - Identify and quantify debt with improvement suggestions
- LLM Acceleration - Use LLMs to accelerate thorough yet focused analysis
- Pragmatic Balance - Distinguish must-fix from nice-to-have improvements
story-file-permissions:
- CRITICAL: When reviewing stories, you are ONLY authorized to update the "QA Results" section of story files
- CRITICAL: DO NOT modify any other sections including Status, Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Testing, Dev Agent Record, Change Log, or any other sections
@@ -57,13 +64,28 @@ story-file-permissions:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- review {story}: execute the task review-story for the highest sequence story in docs/stories unless another is specified - keep any specified technical-preferences in mind as needed
- exit: Say goodbye as the QA Engineer, and then abandon inhabiting this persona
- gate {story}: Execute qa-gate task to write/update quality gate decision in directory from qa.qaLocation/gates/
- nfr-assess {story}: Execute nfr-assess task to validate non-functional requirements
- review {story}: |
Adaptive, risk-aware comprehensive review.
Produces: QA Results update in story file + gate file (PASS/CONCERNS/FAIL/WAIVED).
Gate file location: qa.qaLocation/gates/{epic}.{story}-{slug}.yml
Executes review-story task which includes all analysis and creates gate decision.
- risk-profile {story}: Execute risk-profile task to generate risk assessment matrix
- test-design {story}: Execute test-design task to create comprehensive test scenarios
- trace {story}: Execute trace-requirements task to map requirements to tests using Given-When-Then
- exit: Say goodbye as the Test Architect, and then abandon inhabiting this persona
dependencies:
tasks:
- review-story.md
data:
- technical-preferences.md
tasks:
- nfr-assess.md
- qa-gate.md
- review-story.md
- risk-profile.md
- test-design.md
- trace-requirements.md
templates:
- qa-gate-tmpl.yaml
- story-tmpl.yaml
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# sm
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -26,7 +29,7 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet the user and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
- CRITICAL: On activation, ONLY greet the user, auto-run `*help`, and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
agent:
name: Bob
id: sm
@@ -46,17 +49,17 @@ persona:
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- draft: Execute task create-next-story.md
- correct-course: Execute task correct-course.md
- draft: Execute task create-next-story.md
- story-checklist: Execute task execute-checklist.md with checklist story-draft-checklist.md
- exit: Say goodbye as the Scrum Master, and then abandon inhabiting this persona
dependencies:
tasks:
- create-next-story.md
- execute-checklist.md
- correct-course.md
templates:
- story-tmpl.yaml
checklists:
- story-draft-checklist.md
tasks:
- correct-course.md
- create-next-story.md
- execute-checklist.md
templates:
- story-tmpl.yaml
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# ux-expert
ACTIVATION-NOTICE: This file contains your full agent operating guidelines. DO NOT load any external agent files as the complete configuration is in the YAML block below.
@@ -17,7 +19,8 @@ REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (
activation-instructions:
- STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
- STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
- STEP 3: Greet user with your name/role and mention `*help` command
- STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
- STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
- DO NOT: Load any other agent files during activation
- ONLY load dependency files when user selects them for execution via command or request of a task
- The agent.customization field ALWAYS takes precedence over any conflicting instructions
@@ -26,7 +29,7 @@ activation-instructions:
- CRITICAL RULE: When executing formal task workflows from dependencies, ALL task instructions override any conflicting base behavioral constraints. Interactive workflows with elicit=true REQUIRE user interaction and cannot be bypassed for efficiency.
- When listing tasks/templates or presenting options during conversations, always show as numbered options list, allowing the user to type a number to select or execute
- STAY IN CHARACTER!
- CRITICAL: On activation, ONLY greet the user and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
- CRITICAL: On activation, ONLY greet the user, auto-run `*help`, and then HALT to await user-requested assistance or given commands. The ONLY deviation from this is if the activation also included commands in its arguments.
agent:
name: Sally
id: ux-expert
@@ -55,12 +58,12 @@ commands:
- generate-ui-prompt: Run task generate-ai-frontend-prompt.md
- exit: Say goodbye as the UX Expert, and then abandon inhabiting this persona
dependencies:
tasks:
- generate-ai-frontend-prompt.md
- create-doc.md
- execute-checklist.md
templates:
- front-end-spec-tmpl.yaml
data:
- technical-preferences.md
tasks:
- create-doc.md
- execute-checklist.md
- generate-ai-frontend-prompt.md
templates:
- front-end-spec-tmpl.yaml
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Architect Solution Validation Checklist
This checklist serves as a comprehensive framework for the Architect to validate the technical design and architecture before development execution. The Architect should systematically work through each item, ensuring the architecture is robust, scalable, secure, and aligned with the product requirements.
@@ -403,33 +405,28 @@ Ask the user if they want to work through the checklist:
Now that you've completed the checklist, generate a comprehensive validation report that includes:
1. Executive Summary
- Overall architecture readiness (High/Medium/Low)
- Critical risks identified
- Key strengths of the architecture
- Project type (Full-stack/Frontend/Backend) and sections evaluated
2. Section Analysis
- Pass rate for each major section (percentage of items passed)
- Most concerning failures or gaps
- Sections requiring immediate attention
- Note any sections skipped due to project type
3. Risk Assessment
- Top 5 risks by severity
- Mitigation recommendations for each
- Timeline impact of addressing issues
4. Recommendations
- Must-fix items before development
- Should-fix items for better quality
- Nice-to-have improvements
5. AI Implementation Readiness
- Specific concerns for AI agent implementation
- Areas needing additional clarification
- Complexity hotspots to address

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Change Navigation Checklist
**Purpose:** To systematically guide the selected Agent and user through the analysis and planning required when a significant change (pivot, tech issue, missing requirement, failed story) is identified during the BMad workflow.

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Product Manager (PM) Requirements Checklist
This checklist serves as a comprehensive framework to ensure the Product Requirements Document (PRD) and Epic definitions are complete, well-structured, and appropriately scoped for MVP development. The PM should systematically work through each item during the product definition process.
@@ -304,7 +306,6 @@ Ask the user if they want to work through the checklist:
Create a comprehensive validation report that includes:
1. Executive Summary
- Overall PRD completeness (percentage)
- MVP scope appropriateness (Too Large/Just Right/Too Small)
- Readiness for architecture phase (Ready/Nearly Ready/Not Ready)
@@ -312,26 +313,22 @@ Create a comprehensive validation report that includes:
2. Category Analysis Table
Fill in the actual table with:
- Status: PASS (90%+ complete), PARTIAL (60-89%), FAIL (<60%)
- Critical Issues: Specific problems that block progress
3. Top Issues by Priority
- BLOCKERS: Must fix before architect can proceed
- HIGH: Should fix for quality
- MEDIUM: Would improve clarity
- LOW: Nice to have
4. MVP Scope Assessment
- Features that might be cut for true MVP
- Missing features that are essential
- Complexity concerns
- Timeline realism
5. Technical Readiness
- Clarity of technical constraints
- Identified technical risks
- Areas needing architect investigation

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Product Owner (PO) Master Validation Checklist
This checklist serves as a comprehensive framework for the Product Owner to validate project plans before development execution. It adapts intelligently based on project type (greenfield vs brownfield) and includes UI/UX considerations when applicable.
@@ -8,12 +10,10 @@ PROJECT TYPE DETECTION:
First, determine the project type by checking:
1. Is this a GREENFIELD project (new from scratch)?
- Look for: New project initialization, no existing codebase references
- Check for: prd.md, architecture.md, new project setup stories
2. Is this a BROWNFIELD project (enhancing existing system)?
- Look for: References to existing codebase, enhancement/modification language
- Check for: brownfield-prd.md, brownfield-architecture.md, existing system analysis
@@ -347,7 +347,6 @@ Ask the user if they want to work through the checklist:
Generate a comprehensive validation report that adapts to project type:
1. Executive Summary
- Project type: [Greenfield/Brownfield] with [UI/No UI]
- Overall readiness (percentage)
- Go/No-Go recommendation
@@ -357,42 +356,36 @@ Generate a comprehensive validation report that adapts to project type:
2. Project-Specific Analysis
FOR GREENFIELD:
- Setup completeness
- Dependency sequencing
- MVP scope appropriateness
- Development timeline feasibility
FOR BROWNFIELD:
- Integration risk level (High/Medium/Low)
- Existing system impact assessment
- Rollback readiness
- User disruption potential
3. Risk Assessment
- Top 5 risks by severity
- Mitigation recommendations
- Timeline impact of addressing issues
- [BROWNFIELD] Specific integration risks
4. MVP Completeness
- Core features coverage
- Missing essential functionality
- Scope creep identified
- True MVP vs over-engineering
5. Implementation Readiness
- Developer clarity score (1-10)
- Ambiguous requirements count
- Missing technical details
- [BROWNFIELD] Integration point clarity
6. Recommendations
- Must-fix before development
- Should-fix for quality
- Consider for improvement

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Story Definition of Done (DoD) Checklist
## Instructions for Developer Agent
@@ -25,14 +27,12 @@ The goal is quality delivery, not just checking boxes.]]
1. **Requirements Met:**
[[LLM: Be specific - list each requirement and whether it's complete]]
- [ ] All functional requirements specified in the story are implemented.
- [ ] All acceptance criteria defined in the story are met.
2. **Coding Standards & Project Structure:**
[[LLM: Code quality matters for maintainability. Check each item carefully]]
- [ ] All new/modified code strictly adheres to `Operational Guidelines`.
- [ ] All new/modified code aligns with `Project Structure` (file locations, naming, etc.).
- [ ] Adherence to `Tech Stack` for technologies/versions used (if story introduces or modifies tech usage).
@@ -44,7 +44,6 @@ The goal is quality delivery, not just checking boxes.]]
3. **Testing:**
[[LLM: Testing proves your code works. Be honest about test coverage]]
- [ ] All required unit tests as per the story and `Operational Guidelines` Testing Strategy are implemented.
- [ ] All required integration tests (if applicable) as per the story and `Operational Guidelines` Testing Strategy are implemented.
- [ ] All tests (unit, integration, E2E if applicable) pass successfully.
@@ -53,14 +52,12 @@ The goal is quality delivery, not just checking boxes.]]
4. **Functionality & Verification:**
[[LLM: Did you actually run and test your code? Be specific about what you tested]]
- [ ] Functionality has been manually verified by the developer (e.g., running the app locally, checking UI, testing API endpoints).
- [ ] Edge cases and potential error conditions considered and handled gracefully.
5. **Story Administration:**
[[LLM: Documentation helps the next developer. What should they know?]]
- [ ] All tasks within the story file are marked as complete.
- [ ] Any clarifications or decisions made during development are documented in the story file or linked appropriately.
- [ ] The story wrap-up section has been completed with notes on changes or information relevant to the next story or the overall project, the agent model primarily used during development, and a properly updated changelog of any changes.
@@ -68,7 +65,6 @@ The goal is quality delivery, not just checking boxes.]]
6. **Dependencies, Build & Configuration:**
[[LLM: Build issues block everyone. Ensure everything compiles and runs cleanly]]
- [ ] Project builds successfully without errors.
- [ ] Project linting passes
- [ ] Any new dependencies added were either pre-approved in the story requirements OR explicitly approved by the user during development (approval documented in story file).
@@ -79,7 +75,6 @@ The goal is quality delivery, not just checking boxes.]]
7. **Documentation (If Applicable):**
[[LLM: Good documentation prevents future confusion. What needs explaining?]]
- [ ] Relevant inline code documentation (e.g., JSDoc, TSDoc, Python docstrings) for new public APIs or complex logic is complete.
- [ ] User-facing documentation updated, if changes impact users.
- [ ] Technical documentation (e.g., READMEs, system diagrams) updated if significant architectural changes were made.

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Story Draft Checklist
The Scrum Master should use this checklist to validate that each story contains sufficient context for a developer agent to implement it successfully, while assuming the dev agent has reasonable capabilities to figure things out.
@@ -117,19 +119,16 @@ Note: We don't need every file listed - just the important ones.]]
Generate a concise validation report:
1. Quick Summary
- Story readiness: READY / NEEDS REVISION / BLOCKED
- Clarity score (1-10)
- Major gaps identified
2. Fill in the validation table with:
- PASS: Requirements clearly met
- PARTIAL: Some gaps but workable
- FAIL: Critical information missing
3. Specific Issues (if any)
- List concrete problems to fix
- Suggest specific improvements
- Identify any blocking dependencies

View File

@@ -1,4 +1,7 @@
# <!-- Powered by BMAD™ Core -->
markdownExploder: true
qa:
qaLocation: docs/qa
prd:
prdFile: docs/prd.md
prdVersion: v4

View File

@@ -1,8 +1,10 @@
# BMad Knowledge Base
<!-- Powered by BMAD™ Core -->
# BMAD™ Knowledge Base
## Overview
BMad-Method (Breakthrough Method of Agile AI-driven Development) is a framework that combines AI agents with Agile development methodologies. The v4 system introduces a modular architecture with improved dependency management, bundle optimization, and support for both web and IDE environments.
BMAD-METHOD™ (Breakthrough Method of Agile AI-driven Development) is a framework that combines AI agents with Agile development methodologies. The v4 system introduces a modular architecture with improved dependency management, bundle optimization, and support for both web and IDE environments.
### Key Features
@@ -101,7 +103,7 @@ npx bmad-method install
- **Roo Code**: Web-based IDE with agent support
- **GitHub Copilot**: VS Code extension with AI peer programming assistant
**Note for VS Code Users**: BMad-Method assumes when you mention "VS Code" that you're using it with an AI-powered extension like GitHub Copilot, Cline, or Roo. Standard VS Code without AI capabilities cannot run BMad agents. The installer includes built-in support for Cline and Roo.
**Note for VS Code Users**: BMAD-METHOD™ assumes when you mention "VS Code" that you're using it with an AI-powered extension like GitHub Copilot, Cline, or Roo. Standard VS Code without AI capabilities cannot run BMad agents. The installer includes built-in support for Cline and Roo.
**Verify Installation**:
@@ -109,7 +111,7 @@ npx bmad-method install
- IDE-specific integration files created
- All agent commands/rules/modes available
**Remember**: At its core, BMad-Method is about mastering and harnessing prompt engineering. Any IDE with AI agent support can use BMad - the framework provides the structured prompts and workflows that make AI development effective
**Remember**: At its core, BMAD-METHOD™ is about mastering and harnessing prompt engineering. Any IDE with AI agent support can use BMad - the framework provides the structured prompts and workflows that make AI development effective
### Environment Selection Guide
@@ -298,7 +300,7 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing
- **Claude Code**: `/agent-name` (e.g., `/bmad-master`)
- **Cursor**: `@agent-name` (e.g., `@bmad-master`)
- **Windsurf**: `@agent-name` (e.g., `@bmad-master`)
- **Windsurf**: `/agent-name` (e.g., `/bmad-master`)
- **Trae**: `@agent-name` (e.g., `@bmad-master`)
- **Roo Code**: Select mode from mode selector (e.g., `bmad-master`)
- **GitHub Copilot**: Open the Chat view (`⌃⌘I` on Mac, `Ctrl+Alt+I` on Windows/Linux) and select **Agent** from the chat mode selector.
@@ -353,7 +355,7 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing
### System Overview
The BMad-Method is built around a modular architecture centered on the `bmad-core` directory, which serves as the brain of the entire system. This design enables the framework to operate effectively in both IDE environments (like Cursor, VS Code) and web-based AI interfaces (like ChatGPT, Gemini).
The BMAD-METHOD™ is built around a modular architecture centered on the `bmad-core` directory, which serves as the brain of the entire system. This design enables the framework to operate effectively in both IDE environments (like Cursor, VS Code) and web-based AI interfaces (like ChatGPT, Gemini).
### Key Architectural Components
@@ -651,8 +653,11 @@ Templates with Level 2 headings (`##`) can be automatically sharded:
```markdown
## Goals and Background Context
## Requirements
## User Interface Design Goals
## Success Metrics
```
@@ -705,7 +710,7 @@ Use the `shard-doc` task or `@kayvan/markdown-tree-parser` tool for automatic sh
- **Keep conversations focused** - One agent, one task per conversation
- **Review everything** - Always review and approve before marking complete
## Contributing to BMad-Method
## Contributing to BMAD-METHOD™
### Quick Contribution Guidelines
@@ -737,7 +742,7 @@ For full details, see `CONTRIBUTING.md`. Key points:
### What Are Expansion Packs?
Expansion packs extend BMad-Method beyond traditional software development into ANY domain. They provide specialized agent teams, templates, and workflows while keeping the core framework lean and focused on development.
Expansion packs extend BMAD-METHOD™ beyond traditional software development into ANY domain. They provide specialized agent teams, templates, and workflows while keeping the core framework lean and focused on development.
### Why Use Expansion Packs?

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Brainstorming Techniques Data
## Creative Expansion

View File

@@ -1,18 +1,23 @@
<!-- Powered by BMAD™ Core -->
# Elicitation Methods Data
## Core Reflective Methods
**Expand or Contract for Audience**
- Ask whether to 'expand' (add detail, elaborate) or 'contract' (simplify, clarify)
- Identify specific target audience if relevant
- Tailor content complexity and depth accordingly
**Explain Reasoning (CoT Step-by-Step)**
- Walk through the step-by-step thinking process
- Reveal underlying assumptions and decision points
- Show how conclusions were reached from current role's perspective
**Critique and Refine**
- Review output for flaws, inconsistencies, or improvement areas
- Identify specific weaknesses from role's expertise
- Suggest refined version reflecting domain knowledge
@@ -20,12 +25,14 @@
## Structural Analysis Methods
**Analyze Logical Flow and Dependencies**
- Examine content structure for logical progression
- Check internal consistency and coherence
- Identify and validate dependencies between elements
- Confirm effective ordering and sequencing
**Assess Alignment with Overall Goals**
- Evaluate content contribution to stated objectives
- Identify any misalignments or gaps
- Interpret alignment from specific role's perspective
@@ -34,12 +41,14 @@
## Risk and Challenge Methods
**Identify Potential Risks and Unforeseen Issues**
- Brainstorm potential risks from role's expertise
- Identify overlooked edge cases or scenarios
- Anticipate unintended consequences
- Highlight implementation challenges
**Challenge from Critical Perspective**
- Adopt critical stance on current content
- Play devil's advocate from specified viewpoint
- Argue against proposal highlighting weaknesses
@@ -48,12 +57,14 @@
## Creative Exploration Methods
**Tree of Thoughts Deep Dive**
- Break problem into discrete "thoughts" or intermediate steps
- Explore multiple reasoning paths simultaneously
- Use self-evaluation to classify each path as "sure", "likely", or "impossible"
- Apply search algorithms (BFS/DFS) to find optimal solution paths
**Hindsight is 20/20: The 'If Only...' Reflection**
- Imagine retrospective scenario based on current content
- Identify the one "if only we had known/done X..." insight
- Describe imagined consequences humorously or dramatically
@@ -62,6 +73,7 @@
## Multi-Persona Collaboration Methods
**Agile Team Perspective Shift**
- Rotate through different Scrum team member viewpoints
- Product Owner: Focus on user value and business impact
- Scrum Master: Examine process flow and team dynamics
@@ -69,12 +81,14 @@
- QA: Identify testing scenarios and quality concerns
**Stakeholder Round Table**
- Convene virtual meeting with multiple personas
- Each persona contributes unique perspective on content
- Identify conflicts and synergies between viewpoints
- Synthesize insights into actionable recommendations
**Meta-Prompting Analysis**
- Step back to analyze the structure and logic of current approach
- Question the format and methodology being used
- Suggest alternative frameworks or mental models
@@ -83,24 +97,28 @@
## Advanced 2025 Techniques
**Self-Consistency Validation**
- Generate multiple reasoning paths for same problem
- Compare consistency across different approaches
- Identify most reliable and robust solution
- Highlight areas where approaches diverge and why
**ReWOO (Reasoning Without Observation)**
- Separate parametric reasoning from tool-based actions
- Create reasoning plan without external dependencies
- Identify what can be solved through pure reasoning
- Optimize for efficiency and reduced token usage
**Persona-Pattern Hybrid**
- Combine specific role expertise with elicitation pattern
- Architect + Risk Analysis: Deep technical risk assessment
- UX Expert + User Journey: End-to-end experience critique
- PM + Stakeholder Analysis: Multi-perspective impact review
**Emergent Collaboration Discovery**
- Allow multiple perspectives to naturally emerge
- Identify unexpected insights from persona interactions
- Explore novel combinations of viewpoints
@@ -109,18 +127,21 @@
## Game-Based Elicitation Methods
**Red Team vs Blue Team**
- Red Team: Attack the proposal, find vulnerabilities
- Blue Team: Defend and strengthen the approach
- Competitive analysis reveals blind spots
- Results in more robust, battle-tested solutions
**Innovation Tournament**
- Pit multiple alternative approaches against each other
- Score each approach across different criteria
- Crowd-source evaluation from different personas
- Identify winning combination of features
**Escape Room Challenge**
- Present content as constraints to work within
- Find creative solutions within tight limitations
- Identify minimum viable approach
@@ -129,6 +150,7 @@
## Process Control
**Proceed / No Further Actions**
- Acknowledge choice to finalize current work
- Accept output as-is or move to next step
- Prepare to continue without additional elicitation

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# User-Defined Preferred Patterns and Preferences
None Listed

View File

@@ -0,0 +1,148 @@
<!-- Powered by BMAD™ Core -->
# Test Levels Framework
Comprehensive guide for determining appropriate test levels (unit, integration, E2E) for different scenarios.
## Test Level Decision Matrix
### Unit Tests
**When to use:**
- Testing pure functions and business logic
- Algorithm correctness
- Input validation and data transformation
- Error handling in isolated components
- Complex calculations or state machines
**Characteristics:**
- Fast execution (immediate feedback)
- No external dependencies (DB, API, file system)
- Highly maintainable and stable
- Easy to debug failures
**Example scenarios:**
```yaml
unit_test:
component: 'PriceCalculator'
scenario: 'Calculate discount with multiple rules'
justification: 'Complex business logic with multiple branches'
mock_requirements: 'None - pure function'
```
### Integration Tests
**When to use:**
- Component interaction verification
- Database operations and transactions
- API endpoint contracts
- Service-to-service communication
- Middleware and interceptor behavior
**Characteristics:**
- Moderate execution time
- Tests component boundaries
- May use test databases or containers
- Validates system integration points
**Example scenarios:**
```yaml
integration_test:
components: ['UserService', 'AuthRepository']
scenario: 'Create user with role assignment'
justification: 'Critical data flow between service and persistence'
test_environment: 'In-memory database'
```
### End-to-End Tests
**When to use:**
- Critical user journeys
- Cross-system workflows
- Visual regression testing
- Compliance and regulatory requirements
- Final validation before release
**Characteristics:**
- Slower execution
- Tests complete workflows
- Requires full environment setup
- Most realistic but most brittle
**Example scenarios:**
```yaml
e2e_test:
journey: 'Complete checkout process'
scenario: 'User purchases with saved payment method'
justification: 'Revenue-critical path requiring full validation'
environment: 'Staging with test payment gateway'
```
## Test Level Selection Rules
### Favor Unit Tests When:
- Logic can be isolated
- No side effects involved
- Fast feedback needed
- High cyclomatic complexity
### Favor Integration Tests When:
- Testing persistence layer
- Validating service contracts
- Testing middleware/interceptors
- Component boundaries critical
### Favor E2E Tests When:
- User-facing critical paths
- Multi-system interactions
- Regulatory compliance scenarios
- Visual regression important
## Anti-patterns to Avoid
- E2E testing for business logic validation
- Unit testing framework behavior
- Integration testing third-party libraries
- Duplicate coverage across levels
## Duplicate Coverage Guard
**Before adding any test, check:**
1. Is this already tested at a lower level?
2. Can a unit test cover this instead of integration?
3. Can an integration test cover this instead of E2E?
**Coverage overlap is only acceptable when:**
- Testing different aspects (unit: logic, integration: interaction, e2e: user experience)
- Critical paths requiring defense in depth
- Regression prevention for previously broken functionality
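For illustration, a quick duplicate-coverage check before adding a test might be captured like the following sketch (the requirement and test descriptions are hypothetical):

```yaml
# Hypothetical check - requirement and test names are illustrative only
coverage_check:
  requirement: 'AC2: discount is applied at checkout'
  already_covered_by: 'unit test for discount calculation (pure business logic)'
  proposed_addition: 'integration test for checkout service + pricing repository'
  skipped: 'E2E duplicate - exercises the same logic without testing a new aspect'
```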
## Test Naming Conventions
- Unit: `test_{component}_{scenario}`
- Integration: `test_{flow}_{interaction}`
- E2E: `test_{journey}_{outcome}`
## Test ID Format
`{EPIC}.{STORY}-{LEVEL}-{SEQ}`
Examples:
- `1.3-UNIT-001`
- `1.3-INT-002`
- `1.3-E2E-001`
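Putting the levels, naming convention, and ID format together, a test-design entry for one acceptance criterion might look like this sketch (story 1.3 and its scenario details are assumed for illustration):

```yaml
# Illustrative only - the story and acceptance criterion are hypothetical
acceptance_criterion: 'AC1: user can reset password via emailed link'
tests:
  - id: '1.3-UNIT-001'
    name: 'test_reset_token_expiry_rules'
    level: unit
    justification: 'Pure logic: token lifetime and single-use rules'
  - id: '1.3-INT-001'
    name: 'test_reset_flow_persists_new_hash'
    level: integration
    justification: 'Service-to-repository boundary using an in-memory database'
  - id: '1.3-E2E-001'
    name: 'test_password_reset_journey_success'
    level: e2e
    justification: 'Security-critical user journey, happy path only'
```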

View File

@@ -0,0 +1,174 @@
<!-- Powered by BMAD™ Core -->
# Test Priorities Matrix
Guide for prioritizing test scenarios based on risk, criticality, and business impact.
## Priority Levels
### P0 - Critical (Must Test)
**Criteria:**
- Revenue-impacting functionality
- Security-critical paths
- Data integrity operations
- Regulatory compliance requirements
- Previously broken functionality (regression prevention)
**Examples:**
- Payment processing
- Authentication/authorization
- User data creation/deletion
- Financial calculations
- GDPR/privacy compliance
**Testing Requirements:**
- Comprehensive coverage at all levels
- Both happy and unhappy paths
- Edge cases and error scenarios
- Performance under load
### P1 - High (Should Test)
**Criteria:**
- Core user journeys
- Frequently used features
- Features with complex logic
- Integration points between systems
- Features affecting user experience
**Examples:**
- User registration flow
- Search functionality
- Data import/export
- Notification systems
- Dashboard displays
**Testing Requirements:**
- Primary happy paths required
- Key error scenarios
- Critical edge cases
- Basic performance validation
### P2 - Medium (Nice to Test)
**Criteria:**
- Secondary features
- Admin functionality
- Reporting features
- Configuration options
- UI polish and aesthetics
**Examples:**
- Admin settings panels
- Report generation
- Theme customization
- Help documentation
- Analytics tracking
**Testing Requirements:**
- Happy path coverage
- Basic error handling
- Can defer edge cases
### P3 - Low (Test if Time Permits)
**Criteria:**
- Rarely used features
- Nice-to-have functionality
- Cosmetic issues
- Non-critical optimizations
**Examples:**
- Advanced preferences
- Legacy feature support
- Experimental features
- Debug utilities
**Testing Requirements:**
- Smoke tests only
- Can rely on manual testing
- Document known limitations
## Risk-Based Priority Adjustments
### Increase Priority When:
- High user impact (affects >50% of users)
- High financial impact (>$10K potential loss)
- Security vulnerability potential
- Compliance/legal requirements
- Customer-reported issues
- Complex implementation (>500 LOC)
- Multiple system dependencies
### Decrease Priority When:
- Feature flag protected
- Gradual rollout planned
- Strong monitoring in place
- Easy rollback capability
- Low usage metrics
- Simple implementation
- Well-isolated component
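As a hedged sketch of how such an adjustment might be recorded (the feature and factors below are invented for illustration):

```yaml
# Hypothetical adjustment record - values are illustrative only
priority_adjustment:
  feature: 'Bulk CSV export'
  base_priority: P2            # secondary/reporting feature
  increase_factors:
    - 'Customer-reported issues in the last release'
  decrease_factors:
    - 'Feature flag protected'
    - 'Low usage metrics'
  final_priority: P2           # factors roughly cancel; record the reasoning either way
```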
## Test Coverage by Priority
| Priority | Unit Coverage | Integration Coverage | E2E Coverage |
| -------- | ------------- | -------------------- | ------------------ |
| P0 | >90% | >80% | All critical paths |
| P1 | >80% | >60% | Main happy paths |
| P2 | >60% | >40% | Smoke tests |
| P3 | Best effort | Best effort | Manual only |
## Priority Assignment Rules
1. **Start with business impact** - What happens if this fails?
2. **Consider probability** - How likely is failure?
3. **Factor in detectability** - Would we know if it failed?
4. **Account for recoverability** - Can we fix it quickly?
## Priority Decision Tree
```
Is it revenue-critical?
├─ YES → P0
└─ NO → Does it affect core user journey?
    ├─ YES → Is it high-risk?
    │   ├─ YES → P0
    │   └─ NO → P1
    └─ NO → Is it frequently used?
        ├─ YES → P1
        └─ NO → Is it customer-facing?
            ├─ YES → P2
            └─ NO → P3
```
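Applying the tree to a few of the example features above might yield assignments like these (purely illustrative):

```yaml
priority_assignments:
  - feature: 'Payment processing'
    path: 'revenue-critical? YES'
    priority: P0
  - feature: 'Search functionality'
    path: 'revenue-critical? NO → core user journey? YES → high-risk? NO'
    priority: P1
  - feature: 'Report generation'
    path: 'not core journey → not frequently used → customer-facing? YES'
    priority: P2
  - feature: 'Debug utilities'
    path: 'not core, not frequent, not customer-facing'
    priority: P3
```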
## Test Execution Order
1. Execute P0 tests first (fail fast on critical issues)
2. Execute P1 tests second (core functionality)
3. Execute P2 tests if time permits
4. P3 tests only in full regression cycles
## Continuous Adjustment
Review and adjust priorities based on:
- Production incident patterns
- User feedback and complaints
- Usage analytics
- Test failure history
- Business priority changes

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Advanced Elicitation Task
## Purpose

View File

@@ -0,0 +1,150 @@
<!-- Powered by BMAD™ Core -->
# apply-qa-fixes
Implement fixes based on QA results (gate and assessments) for a specific story. This task is for the Dev agent to systematically consume QA outputs and apply code/test changes while only updating allowed sections in the story file.
## Purpose
- Read QA outputs for a story (gate YAML + assessment markdowns)
- Create a prioritized, deterministic fix plan
- Apply code and test changes to close gaps and address issues
- Update only the allowed story sections for the Dev agent
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "2.2"
- qa_root: from `bmad-core/core-config.yaml` key `qa.qaLocation` (e.g., `docs/project/qa`)
- story_root: from `bmad-core/core-config.yaml` key `devStoryLocation` (e.g., `docs/project/stories`)
optional:
- story_title: '{title}' # derive from story H1 if missing
- story_slug: '{slug}' # derive from title (lowercase, hyphenated) if missing
```
## QA Sources to Read
- Gate (YAML): `{qa_root}/gates/{epic}.{story}-*.yml`
- If multiple, use the most recent by modified time
- Assessments (Markdown):
- Test Design: `{qa_root}/assessments/{epic}.{story}-test-design-*.md`
- Traceability: `{qa_root}/assessments/{epic}.{story}-trace-*.md`
- Risk Profile: `{qa_root}/assessments/{epic}.{story}-risk-*.md`
- NFR Assessment: `{qa_root}/assessments/{epic}.{story}-nfr-*.md`
## Prerequisites
- Repository builds and tests run locally (Deno 2)
- Lint and test commands available:
- `deno lint`
- `deno test -A`
## Process (Do not skip steps)
### 0) Load Core Config & Locate Story
- Read `bmad-core/core-config.yaml` and resolve `qa_root` and `story_root`
- Locate story file in `{story_root}/{epic}.{story}.*.md`
- HALT if missing and ask for correct story id/path
### 1) Collect QA Findings
- Parse the latest gate YAML:
- `gate` (PASS|CONCERNS|FAIL|WAIVED)
- `top_issues[]` with `id`, `severity`, `finding`, `suggested_action`
- `nfr_validation.*.status` and notes
- `trace` coverage summary/gaps
- `test_design.coverage_gaps[]`
- `risk_summary.recommendations.must_fix[]` (if present)
- Read any present assessment markdowns and extract explicit gaps/recommendations
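For orientation, a minimal gate file carrying the fields listed above might look like this sketch (values are hypothetical; the authoritative schema is whatever the QA gate template produces):

```yaml
# Hypothetical gate excerpt - illustrative values only
gate: CONCERNS
top_issues:
  - id: 'SEC-001'
    severity: high
    finding: 'Reset token is written to request logs in plain text'
    suggested_action: 'Redact the token in the logging middleware'
nfr_validation:
  security: { status: FAIL, notes: 'Token logging' }
  performance: { status: PASS, notes: '' }
trace:
  uncovered_requirements: ['AC2']
test_design:
  coverage_gaps: ['Back action behavior untested (AC2)']
risk_summary:
  recommendations:
    must_fix: ['Redact token from logs']
```

Dev consumes this read-only; per step 6 below, any gate updates happen through QA re-running `review-story`.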
### 2) Build Deterministic Fix Plan (Priority Order)
Apply in order, highest priority first:
1. High severity items in `top_issues` (security/perf/reliability/maintainability)
2. NFR statuses: all FAIL must be fixed → then CONCERNS
3. Test Design `coverage_gaps` (prioritize P0 scenarios if specified)
4. Trace uncovered requirements (AC-level)
5. Risk `must_fix` recommendations
6. Medium severity issues, then low
Guidance:
- Prefer tests closing coverage gaps before/with code changes
- Keep changes minimal and targeted; follow project architecture and TS/Deno rules
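A fix plan derived from a gate like the sketch above might then be as short as the following (again, purely illustrative):

```yaml
# Hypothetical fix plan - follows the priority order above
fix_plan:
  - priority: 1
    source: 'top_issues SEC-001 (high severity, security)'
    action: 'Redact reset token in logging middleware + unit test for redaction'
  - priority: 2
    source: 'nfr_validation.security: FAIL'
    action: 'Re-check after the redaction fix; note evidence in Completion Notes'
  - priority: 3
    source: 'test_design.coverage_gaps (AC2)'
    action: 'Add integration test for the Back action behavior'
```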
### 3) Apply Changes
- Implement code fixes per plan
- Add missing tests to close coverage gaps (unit first; integration where required by AC)
- Keep imports centralized via `deps.ts` (see `docs/project/typescript-rules.md`)
- Follow DI boundaries in `src/core/di.ts` and existing patterns
### 4) Validate
- Run `deno lint` and fix issues
- Run `deno test -A` until all tests pass
- Iterate until clean
### 5) Update Story (Allowed Sections ONLY)
CRITICAL: Dev agent is ONLY authorized to update these sections of the story file. Do not modify any other sections (e.g., QA Results, Story, Acceptance Criteria, Dev Notes, Testing):
- Tasks / Subtasks Checkboxes (mark any fix subtask you added as done)
- Dev Agent Record →
- Agent Model Used (if changed)
- Debug Log References (commands/results, e.g., lint/tests)
- Completion Notes List (what changed, why, how)
- File List (all added/modified/deleted files)
- Change Log (new dated entry describing applied fixes)
- Status (see Rule below)
Status Rule:
- If gate was PASS and all identified gaps are closed → set `Status: Ready for Done`
- Otherwise → set `Status: Ready for Review` and notify QA to re-run the review
### 6) Do NOT Edit Gate Files
- Dev does not modify gate YAML. If fixes address issues, request QA to re-run `review-story` to update the gate
## Blocking Conditions
- Missing `bmad-core/core-config.yaml`
- Story file not found for `story_id`
- No QA artifacts found (neither gate nor assessments)
- HALT and request QA to generate at least a gate file (or proceed only with clear developer-provided fix list)
## Completion Checklist
- deno lint: 0 problems
- deno test -A: all tests pass
- All high severity `top_issues` addressed
- NFR FAIL → resolved; CONCERNS minimized or documented
- Coverage gaps closed or explicitly documented with rationale
- Story updated (allowed sections only) including File List and Change Log
- Status set according to Status Rule
## Example: Story 2.2
Given gate `docs/project/qa/gates/2.2-*.yml` shows
- `coverage_gaps`: Back action behavior untested (AC2)
- `coverage_gaps`: Centralized dependencies enforcement untested (AC4)
Fix plan:
- Add a test ensuring the Toolkit Menu "Back" action returns to Main Menu
- Add a static test verifying imports for service/view go through `deps.ts`
- Re-run lint/tests and update Dev Agent Record + File List accordingly
## Key Principles
- Deterministic, risk-first prioritization
- Minimal, maintainable changes
- Tests validate behavior and close gaps
- Strict adherence to allowed story update areas
- Gate ownership remains with QA; Dev signals readiness via Status

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create Brownfield Epic Task
## Purpose

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create Brownfield Story Task
## Purpose

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Correct Course Task
## Purpose

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create Brownfield Story Task
## Purpose
@@ -139,16 +141,19 @@ Critical: This is where you'll need to be interactive with the user if informati
Create Dev Technical Guidance section with available information:
```markdown
````markdown
## Dev Technical Guidance
### Existing System Context
[Extract from available documentation]
### Integration Approach
[Based on patterns found or ask user]
### Technical Constraints
[From documentation or user input]
### Missing Information
@@ -191,6 +196,7 @@ Example task structure for brownfield:
- [ ] Integration test for {{integration point}}
- [ ] Update existing tests if needed
```
````
### 5. Risk Assessment and Mitigation
@@ -202,14 +208,17 @@ Add section for brownfield-specific risks:
## Risk Assessment
### Implementation Risks
- **Primary Risk**: {{main risk to existing system}}
- **Mitigation**: {{how to address}}
- **Verification**: {{how to confirm safety}}
### Rollback Plan
- {{Simple steps to undo changes if needed}}
### Safety Checks
- [ ] Existing {{feature}} tested before changes
- [ ] Changes can be feature-flagged or isolated
- [ ] Rollback procedure documented
@@ -252,6 +261,7 @@ Include header noting documentation context:
<!-- Context: Brownfield enhancement to {{existing system}} -->
## Status: Draft
[Rest of story content...]
```

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create Deep Research Prompt Task
This task helps create comprehensive research prompts for various types of deep analysis. It can process inputs from brainstorming sessions, project briefs, market research, or specific research questions to generate targeted prompts for deeper investigation.
@@ -21,63 +23,54 @@ CRITICAL: First, help the user select the most appropriate research focus based
Present these numbered options to the user:
1. **Product Validation Research**
- Validate product hypotheses and market fit
- Test assumptions about user needs and solutions
- Assess technical and business feasibility
- Identify risks and mitigation strategies
2. **Market Opportunity Research**
- Analyze market size and growth potential
- Identify market segments and dynamics
- Assess market entry strategies
- Evaluate timing and market readiness
3. **User & Customer Research**
- Deep dive into user personas and behaviors
- Understand jobs-to-be-done and pain points
- Map customer journeys and touchpoints
- Analyze willingness to pay and value perception
4. **Competitive Intelligence Research**
- Detailed competitor analysis and positioning
- Feature and capability comparisons
- Business model and strategy analysis
- Identify competitive advantages and gaps
5. **Technology & Innovation Research**
- Assess technology trends and possibilities
- Evaluate technical approaches and architectures
- Identify emerging technologies and disruptions
- Analyze build vs. buy vs. partner options
6. **Industry & Ecosystem Research**
- Map industry value chains and dynamics
- Identify key players and relationships
- Analyze regulatory and compliance factors
- Understand partnership opportunities
7. **Strategic Options Research**
- Evaluate different strategic directions
- Assess business model alternatives
- Analyze go-to-market strategies
- Consider expansion and scaling paths
8. **Risk & Feasibility Research**
- Identify and assess various risk factors
- Evaluate implementation challenges
- Analyze resource requirements
- Consider regulatory and legal implications
9. **Custom Research Focus**
- User-defined research objectives
- Specialized domain investigation
- Cross-functional research needs
@@ -246,13 +239,11 @@ CRITICAL: collaborate with the user to develop specific, actionable research que
### 5. Review and Refinement
1. **Present Complete Prompt**
- Show the full research prompt
- Explain key elements and rationale
- Highlight any assumptions made
2. **Gather Feedback**
- Are the objectives clear and correct?
- Do the questions address all concerns?
- Is the scope appropriate?

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create Next Story Task
## Purpose

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Document an Existing Project
## Purpose
@@ -112,7 +114,7 @@ This document captures the CURRENT STATE of the [Project Name] codebase, includi
### Change Log
| Date | Version | Description | Author |
|------|---------|-------------|--------|
| ------ | ------- | --------------------------- | --------- |
| [Date] | 1.0 | Initial brownfield analysis | [Analyst] |
## Quick Reference - Key Files and Entry Points
@@ -137,7 +139,7 @@ This document captures the CURRENT STATE of the [Project Name] codebase, includi
### Actual Tech Stack (from package.json/requirements.txt)
| Category | Technology | Version | Notes |
|----------|------------|---------|--------|
| --------- | ---------- | ------- | -------------------------- |
| Runtime | Node.js | 16.x | [Any constraints] |
| Framework | Express | 4.18.2 | [Custom middleware?] |
| Database | PostgreSQL | 13 | [Connection pooling setup] |
@@ -179,6 +181,7 @@ project-root/
### Data Models
Instead of duplicating, reference actual model files:
- **User Model**: See `src/models/User.js`
- **Order Model**: See `src/models/Order.js`
- **Related Types**: TypeScript definitions in `src/types/`
@@ -209,7 +212,7 @@ Instead of duplicating, reference actual model files:
### External Services
| Service | Purpose | Integration Type | Key Files |
|---------|---------|------------------|-----------|
| -------- | -------- | ---------------- | ------------------------------ |
| Stripe | Payments | REST API | `src/integrations/stripe/` |
| SendGrid | Emails | SDK | `src/services/emailService.js` |
@@ -256,6 +259,7 @@ npm run test:integration # Runs integration tests (requires local DB)
### Files That Will Need Modification
Based on the enhancement requirements, these files will be affected:
- `src/services/userService.js` - Add new user fields
- `src/models/User.js` - Update schema
- `src/routes/userRoutes.js` - New endpoints

View File

@@ -1,6 +1,8 @@
---
## <!-- Powered by BMAD™ Core -->
docOutputLocation: docs/brainstorming-session-results.md
template: "{root}/templates/brainstorming-output-tmpl.yaml"
template: '{root}/templates/brainstorming-output-tmpl.yaml'
---
# Facilitate Brainstorming Session Task

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create AI Frontend Prompt Task
## Purpose

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Index Documentation Task
## Purpose
@@ -11,14 +13,12 @@ You are now operating as a Documentation Indexer. Your goal is to ensure all doc
### Required Steps
1. First, locate and scan:
- The `docs/` directory and all subdirectories
- The existing `docs/index.md` file (create if absent)
- All markdown (`.md`) and text (`.txt`) files in the documentation structure
- Note the folder structure for hierarchical organization
2. For the existing `docs/index.md`:
- Parse current entries
- Note existing file references and descriptions
- Identify any broken links or missing files
@@ -26,7 +26,6 @@ You are now operating as a Documentation Indexer. Your goal is to ensure all doc
- Preserve existing folder sections
3. For each documentation file found:
- Extract the title (from first heading or filename)
- Generate a brief description by analyzing the content
- Create a relative markdown link to the file
@@ -35,7 +34,6 @@ You are now operating as a Documentation Indexer. Your goal is to ensure all doc
- If missing or outdated, prepare an update
4. For any missing or non-existent files found in index:
- Present a list of all entries that reference non-existent files
- For each entry:
- Show the full entry details (title, path, description)
@@ -88,7 +86,6 @@ Documents within the `another-folder/` directory:
### [Nested Document](./another-folder/document.md)
Description of nested document.
```
### Index Entry Format
@@ -157,7 +154,6 @@ For each file referenced in the index but not found in the filesystem:
### Special Cases
1. **Sharded Documents**: If a folder contains an `index.md` file, treat it as a sharded document:
- Use the folder's `index.md` title as the section title
- List the folder's documents as subsections
- Note in the description that this is a multi-part document

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# KB Mode Interaction Task
## Purpose
@@ -6,7 +8,7 @@ Provide a user-friendly interface to the BMad knowledge base without overwhelmin
## Instructions
When entering KB mode (*kb-mode), follow these steps:
When entering KB mode (\*kb-mode), follow these steps:
### 1. Welcome and Guide
@@ -48,12 +50,12 @@ Or ask me about anything else related to BMad-Method!
When user is done or wants to exit KB mode:
- Summarize key points discussed if helpful
- Remind them they can return to KB mode anytime with *kb-mode
- Remind them they can return to KB mode anytime with \*kb-mode
- Suggest next steps based on what was discussed
## Example Interaction
**User**: *kb-mode
**User**: \*kb-mode
**Assistant**: I've entered KB mode and have access to the full BMad knowledge base. I can help you with detailed information about any aspect of BMad-Method.

View File

@@ -0,0 +1,345 @@
<!-- Powered by BMAD™ Core -->
# nfr-assess
Quick NFR validation focused on the core four: security, performance, reliability, maintainability.
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "1.3"
- story_path: `bmad-core/core-config.yaml` for the `devStoryLocation`
optional:
- architecture_refs: `bmad-core/core-config.yaml` for the `architecture.architectureFile`
- technical_preferences: `bmad-core/core-config.yaml` for the `technicalPreferences`
- acceptance_criteria: From story file
```
## Purpose
Assess non-functional requirements for a story and generate:
1. YAML block for the gate file's `nfr_validation` section
2. Brief markdown assessment saved to `qa.qaLocation/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md`
## Process
### 0. Fail-safe for Missing Inputs
If story_path or story file can't be found:
- Still create assessment file with note: "Source story not found"
- Set all selected NFRs to CONCERNS with notes: "Target unknown / evidence missing"
- Continue with assessment to provide value
### 1. Elicit Scope
**Interactive mode:** Ask which NFRs to assess
**Non-interactive mode:** Default to core four (security, performance, reliability, maintainability)
```text
Which NFRs should I assess? (Enter numbers or press Enter for default)
[1] Security (default)
[2] Performance (default)
[3] Reliability (default)
[4] Maintainability (default)
[5] Usability
[6] Compatibility
[7] Portability
[8] Functional Suitability
> [Enter for 1-4]
```
### 2. Check for Thresholds
Look for NFR requirements in:
- Story acceptance criteria
- `docs/architecture/*.md` files
- `docs/technical-preferences.md`
**Interactive mode:** Ask for missing thresholds
**Non-interactive mode:** Mark as CONCERNS with "Target unknown"
```text
No performance requirements found. What's your target response time?
> 200ms for API calls
No security requirements found. Required auth method?
> JWT with refresh tokens
```
**Unknown targets policy:** If a target is missing and not provided, mark status as CONCERNS with notes: "Target unknown"
### 3. Quick Assessment
For each selected NFR, check:
- Is there evidence it's implemented?
- Can we validate it?
- Are there obvious gaps?
### 4. Generate Outputs
## Output 1: Gate YAML Block
Generate ONLY for NFRs actually assessed (no placeholders):
```yaml
# Gate YAML (copy/paste):
nfr_validation:
_assessed: [security, performance, reliability, maintainability]
security:
status: CONCERNS
notes: 'No rate limiting on auth endpoints'
performance:
status: PASS
notes: 'Response times < 200ms verified'
reliability:
status: PASS
notes: 'Error handling and retries implemented'
maintainability:
status: CONCERNS
notes: 'Test coverage at 65%, target is 80%'
```
## Deterministic Status Rules
- **FAIL**: Any selected NFR has critical gap or target clearly not met
- **CONCERNS**: No FAILs, but any NFR is unknown/partial/missing evidence
- **PASS**: All selected NFRs meet targets with evidence
## Quality Score Calculation
```
quality_score = 100
- 20 for each FAIL attribute
- 10 for each CONCERNS attribute
Floor at 0, ceiling at 100
```
If `technical-preferences.md` defines custom weights, use those instead.
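To make the scoring concrete, here is a minimal sketch that applies the deterministic status rules and the default weights above; the helper name and the dict-based input are illustrative assumptions, not part of the task.
```python
# Minimal sketch (assumed helper, not part of the task spec): derive the overall
# NFR status and the default quality score from per-NFR statuses.
def summarize_nfrs(statuses: dict[str, str]) -> tuple[str, int]:
    """statuses maps an NFR name to 'PASS', 'CONCERNS', or 'FAIL'."""
    fails = sum(1 for s in statuses.values() if s == "FAIL")
    concerns = sum(1 for s in statuses.values() if s == "CONCERNS")
    # Deterministic status rules: any FAIL wins, then any CONCERNS, else PASS.
    overall = "FAIL" if fails else ("CONCERNS" if concerns else "PASS")
    # Default weights: -20 per FAIL, -10 per CONCERNS, clamped to 0..100.
    score = max(0, min(100, 100 - 20 * fails - 10 * concerns))
    return overall, score

# Example: the gate block shown above (two CONCERNS, no FAILs) -> ("CONCERNS", 80)
print(summarize_nfrs({
    "security": "CONCERNS",
    "performance": "PASS",
    "reliability": "PASS",
    "maintainability": "CONCERNS",
}))
```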
## Output 2: Brief Assessment Report
**ALWAYS save to:** `qa.qaLocation/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md`
```markdown
# NFR Assessment: {epic}.{story}
Date: {date}
Reviewer: Quinn
<!-- Note: Source story not found (if applicable) -->
## Summary
- Security: CONCERNS - Missing rate limiting
- Performance: PASS - Meets <200ms requirement
- Reliability: PASS - Proper error handling
- Maintainability: CONCERNS - Test coverage below target
## Critical Issues
1. **No rate limiting** (Security)
- Risk: Brute force attacks possible
- Fix: Add rate limiting middleware to auth endpoints
2. **Test coverage 65%** (Maintainability)
- Risk: Untested code paths
- Fix: Add tests for uncovered branches
## Quick Wins
- Add rate limiting: ~2 hours
- Increase test coverage: ~4 hours
- Add performance monitoring: ~1 hour
```
## Output 3: Story Update Line
**End with this line for the review task to quote:**
```
NFR assessment: qa.qaLocation/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
```
## Output 4: Gate Integration Line
**Always print at the end:**
```
Gate NFR block ready → paste into qa.qaLocation/gates/{epic}.{story}-{slug}.yml under nfr_validation
```
## Assessment Criteria
### Security
**PASS if:**
- Authentication implemented
- Authorization enforced
- Input validation present
- No hardcoded secrets
**CONCERNS if:**
- Missing rate limiting
- Weak encryption
- Incomplete authorization
**FAIL if:**
- No authentication
- Hardcoded credentials
- SQL injection vulnerabilities
### Performance
**PASS if:**
- Meets response time targets
- No obvious bottlenecks
- Reasonable resource usage
**CONCERNS if:**
- Close to limits
- Missing indexes
- No caching strategy
**FAIL if:**
- Exceeds response time limits
- Memory leaks
- Unoptimized queries
### Reliability
**PASS if:**
- Error handling present
- Graceful degradation
- Retry logic where needed
**CONCERNS if:**
- Some error cases unhandled
- No circuit breakers
- Missing health checks
**FAIL if:**
- No error handling
- Crashes on errors
- No recovery mechanisms
### Maintainability
**PASS if:**
- Test coverage meets target
- Code well-structured
- Documentation present
**CONCERNS if:**
- Test coverage below target
- Some code duplication
- Missing documentation
**FAIL if:**
- No tests
- Highly coupled code
- No documentation
## Quick Reference
### What to Check
```yaml
security:
- Authentication mechanism
- Authorization checks
- Input validation
- Secret management
- Rate limiting
performance:
- Response times
- Database queries
- Caching usage
- Resource consumption
reliability:
- Error handling
- Retry logic
- Circuit breakers
- Health checks
- Logging
maintainability:
- Test coverage
- Code structure
- Documentation
- Dependencies
```
## Key Principles
- Focus on the core four NFRs by default
- Quick assessment, not deep analysis
- Gate-ready output format
- Brief, actionable findings
- Skip what doesn't apply
- Deterministic status rules for consistency
- Unknown targets → CONCERNS, not guesses
---
## Appendix: ISO 25010 Reference
<details>
<summary>Full ISO 25010 Quality Model (click to expand)</summary>
### All 8 Quality Characteristics
1. **Functional Suitability**: Completeness, correctness, appropriateness
2. **Performance Efficiency**: Time behavior, resource use, capacity
3. **Compatibility**: Co-existence, interoperability
4. **Usability**: Learnability, operability, accessibility
5. **Reliability**: Maturity, availability, fault tolerance
6. **Security**: Confidentiality, integrity, authenticity
7. **Maintainability**: Modularity, reusability, testability
8. **Portability**: Adaptability, installability
Use these when assessing beyond the core four.
</details>
<details>
<summary>Example: Deep Performance Analysis (click to expand)</summary>
```yaml
performance_deep_dive:
response_times:
p50: 45ms
p95: 180ms
p99: 350ms
database:
slow_queries: 2
missing_indexes: ['users.email', 'orders.user_id']
caching:
hit_rate: 0%
recommendation: 'Add Redis for session data'
load_test:
max_rps: 150
breaking_point: 200 rps
```
</details>

bmad-core/tasks/qa-gate.md (new file, 163 lines)
View File

@@ -0,0 +1,163 @@
<!-- Powered by BMAD™ Core -->
# qa-gate
Create or update a quality gate decision file for a story based on review findings.
## Purpose
Generate a standalone quality gate file that provides a clear pass/fail decision with actionable feedback. This gate serves as an advisory checkpoint for teams to understand quality status.
## Prerequisites
- Story has been reviewed (manually or via review-story task)
- Review findings are available
- Understanding of story requirements and implementation
## Gate File Location
**ALWAYS** resolve the gate directory from `qa.qaLocation/gates` in `bmad-core/core-config.yaml`
Slug rules:
- Convert to lowercase
- Replace spaces with hyphens
- Strip punctuation
- Example: "User Auth - Login!" becomes "user-auth-login"
## Minimal Required Schema
```yaml
schema: 1
story: '{epic}.{story}'
gate: PASS|CONCERNS|FAIL|WAIVED
status_reason: '1-2 sentence explanation of gate decision'
reviewer: 'Quinn'
updated: '{ISO-8601 timestamp}'
top_issues: [] # Empty array if no issues
waiver: { active: false } # Only set active: true if WAIVED
```
## Schema with Issues
```yaml
schema: 1
story: '1.3'
gate: CONCERNS
status_reason: 'Missing rate limiting on auth endpoints poses security risk.'
reviewer: 'Quinn'
updated: '2025-01-12T10:15:00Z'
top_issues:
- id: 'SEC-001'
severity: high # ONLY: low|medium|high
finding: 'No rate limiting on login endpoint'
suggested_action: 'Add rate limiting middleware before production'
- id: 'TEST-001'
severity: medium
finding: 'No integration tests for auth flow'
suggested_action: 'Add integration test coverage'
waiver: { active: false }
```
## Schema when Waived
```yaml
schema: 1
story: '1.3'
gate: WAIVED
status_reason: 'Known issues accepted for MVP release.'
reviewer: 'Quinn'
updated: '2025-01-12T10:15:00Z'
top_issues:
- id: 'PERF-001'
severity: low
finding: 'Dashboard loads slowly with 1000+ items'
suggested_action: 'Implement pagination in next sprint'
waiver:
active: true
reason: 'MVP release - performance optimization deferred'
approved_by: 'Product Owner'
```
## Gate Decision Criteria
### PASS
- All acceptance criteria met
- No high-severity issues
- Test coverage meets project standards
### CONCERNS
- Non-blocking issues present
- Should be tracked and scheduled
- Can proceed with awareness
### FAIL
- Acceptance criteria not met
- High-severity issues present
- Recommend return to InProgress
### WAIVED
- Issues explicitly accepted
- Requires approval and reason
- Proceed despite known issues
## Severity Scale
**FIXED VALUES - NO VARIATIONS:**
- `low`: Minor issues, cosmetic problems
- `medium`: Should fix soon, not blocking
- `high`: Critical issues, should block release
## Issue ID Prefixes
- `SEC-`: Security issues
- `PERF-`: Performance issues
- `REL-`: Reliability issues
- `TEST-`: Testing gaps
- `MNT-`: Maintainability concerns
- `ARCH-`: Architecture issues
- `DOC-`: Documentation gaps
- `REQ-`: Requirements issues
## Output Requirements
1. **ALWAYS** create the gate file in the directory defined by `qa.qaLocation/gates` in `bmad-core/core-config.yaml`
2. **ALWAYS** append this exact format to story's QA Results section:
```text
Gate: {STATUS} → qa.qaLocation/gates/{epic}.{story}-{slug}.yml
```
3. Keep status_reason to 1-2 sentences maximum
4. Use severity values exactly: `low`, `medium`, or `high`
## Example Story Update
After creating gate file, append to story's QA Results section:
```markdown
## QA Results
### Review Date: 2025-01-12
### Reviewed By: Quinn (Test Architect)
[... existing review content ...]
### Gate Status
Gate: CONCERNS → qa.qaLocation/gates/{epic}.{story}-{slug}.yml
```
## Key Principles
- Keep it minimal and predictable
- Fixed severity scale (low/medium/high)
- Always write to standard path
- Always update story with gate reference
- Clear, actionable findings

View File

@@ -1,6 +1,18 @@
<!-- Powered by BMAD™ Core -->
# review-story
When a developer agent marks a story as "Ready for Review", perform a comprehensive senior developer code review with the ability to refactor and improve code directly.
Perform a comprehensive test architecture review with quality gate decision. This adaptive, risk-aware review creates both a story update and a detailed gate file.
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "1.3"
- story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
- story_title: '{title}' # If missing, derive from story file H1
- story_slug: '{slug}' # If missing, derive from title (lowercase, hyphenated)
```
## Prerequisites
@@ -8,98 +20,133 @@ When a developer agent marks a story as "Ready for Review", perform a comprehens
- Developer has completed all tasks and updated the File List
- All automated tests are passing
## Review Process
## Review Process - Adaptive Test Architecture
1. **Read the Complete Story**
- Review all acceptance criteria
- Understand the dev notes and requirements
- Note any completion notes from the developer
### 1. Risk Assessment (Determines Review Depth)
2. **Verify Implementation Against Dev Notes Guidance**
- Review the "Dev Notes" section for specific technical guidance provided to the developer
- Verify the developer's implementation follows the architectural patterns specified in Dev Notes
- Check that file locations match the project structure guidance in Dev Notes
- Confirm any specified libraries, frameworks, or technical approaches were used correctly
- Validate that security considerations mentioned in Dev Notes were implemented
**Auto-escalate to deep review when:**
3. **Focus on the File List**
- Verify all files listed were actually created/modified
- Check for any missing files that should have been updated
- Ensure file locations align with the project structure guidance from Dev Notes
- Auth/payment/security files touched
- No tests added to story
- Diff > 500 lines
- Previous gate was FAIL/CONCERNS
- Story has > 5 acceptance criteria
4. **Senior Developer Code Review**
- Review code with the eye of a senior developer
- If changes form a cohesive whole, review them together
- If changes are independent, review incrementally file by file
- Focus on:
- Code architecture and design patterns
- Refactoring opportunities
### 2. Comprehensive Analysis
**A. Requirements Traceability**
- Map each acceptance criteria to its validating tests (document mapping with Given-When-Then, not test code)
- Identify coverage gaps
- Verify all requirements have corresponding test cases
**B. Code Quality Review**
- Architecture and design patterns
- Refactoring opportunities (and perform them)
- Code duplication or inefficiencies
- Performance optimizations
- Security concerns
- Best practices and patterns
- Security vulnerabilities
- Best practices adherence
5. **Active Refactoring**
- As a senior developer, you CAN and SHOULD refactor code where improvements are needed
- When refactoring:
- Make the changes directly in the files
- Explain WHY you're making the change
- Describe HOW the change improves the code
- Ensure all tests still pass after refactoring
- Update the File List if you modify additional files
**C. Test Architecture Assessment**
- Test coverage adequacy at appropriate levels
- Test level appropriateness (what should be unit vs integration vs e2e)
- Test design quality and maintainability
- Test data management strategy
- Mock/stub usage appropriateness
- Edge case and error scenario coverage
- Test execution time and reliability
**D. Non-Functional Requirements (NFRs)**
- Security: Authentication, authorization, data protection
- Performance: Response times, resource usage
- Reliability: Error handling, recovery mechanisms
- Maintainability: Code clarity, documentation
**E. Testability Evaluation**
- Controllability: Can we control the inputs?
- Observability: Can we observe the outputs?
- Debuggability: Can we debug failures easily?
**F. Technical Debt Identification**
- Accumulated shortcuts
- Missing tests
- Outdated dependencies
- Architecture violations
### 3. Active Refactoring
- Refactor code where safe and appropriate
- Run tests to ensure changes don't break functionality
- Document all changes in QA Results section with clear WHY and HOW
- Do NOT alter story content beyond QA Results section
- Do NOT change story Status or File List; recommend next status only
### 4. Standards Compliance Check
6. **Standards Compliance Check**
- Verify adherence to `docs/coding-standards.md`
- Check compliance with `docs/unified-project-structure.md`
- Validate testing approach against `docs/testing-strategy.md`
- Ensure all guidelines mentioned in the story are followed
7. **Acceptance Criteria Validation**
### 5. Acceptance Criteria Validation
- Verify each AC is fully implemented
- Check for any missing functionality
- Validate edge cases are handled
8. **Test Coverage Review**
- Ensure unit tests cover edge cases
- Add missing tests if critical coverage is lacking
- Verify integration tests (if required) are comprehensive
- Check that test assertions are meaningful
- Look for missing test scenarios
### 6. Documentation and Comments
9. **Documentation and Comments**
- Verify code is self-documenting where possible
- Add comments for complex logic if missing
- Ensure any API changes are documented
## Update Story File - QA Results Section ONLY
## Output 1: Update Story File - QA Results Section ONLY
**CRITICAL**: You are ONLY authorized to update the "QA Results" section of the story file. DO NOT modify any other sections.
**QA Results Anchor Rule:**
- If `## QA Results` doesn't exist, append it at end of file
- If it exists, append a new dated entry below existing entries
- Never edit other sections
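A minimal sketch of this anchor rule, assuming the story is a markdown file on disk; the helper name and signature are illustrative only:
```python
from datetime import date
from pathlib import Path

# Sketch of the QA Results anchor rule; only the QA Results section is touched.
def append_qa_results(story_path: str, entry_md: str) -> None:
    path = Path(story_path)
    content = path.read_text(encoding="utf-8")
    if "## QA Results" not in content:
        # Section missing: append the anchor heading at the end of the file first.
        content = content.rstrip() + "\n\n## QA Results\n"
    # Always append a new dated entry below existing entries; never edit other sections.
    content += f"\n### Review Date: {date.today().isoformat()}\n\n{entry_md}\n"
    path.write_text(content, encoding="utf-8")
```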
After review and any refactoring, append your results to the story file in the QA Results section:
```markdown
## QA Results
### Review Date: [Date]
### Reviewed By: Quinn (Senior Developer QA)
### Reviewed By: Quinn (Test Architect)
### Code Quality Assessment
[Overall assessment of implementation quality]
### Refactoring Performed
[List any refactoring you performed with explanations]
- **File**: [filename]
- **Change**: [what was changed]
- **Why**: [reason for change]
- **How**: [how it improves the code]
### Compliance Check
- Coding Standards: [✓/✗] [notes if any]
- Project Structure: [✓/✗] [notes if any]
- Testing Strategy: [✓/✗] [notes if any]
- All ACs Met: [✓/✗] [notes if any]
### Improvements Checklist
[Check off items you handled yourself, leave unchecked for dev to address]
- [x] Refactored user service for better error handling (services/user.service.ts)
@@ -109,22 +156,144 @@ After review and any refactoring, append your results to the story file in the Q
- [ ] Update API documentation for new error codes
### Security Review
[Any security concerns found and whether addressed]
### Performance Considerations
[Any performance issues found and whether addressed]
### Final Status
[✓ Approved - Ready for Done] / [✗ Changes Required - See unchecked items above]
### Files Modified During Review
[If you modified files, list them here - ask Dev to update File List]
### Gate Status
Gate: {STATUS} → qa.qaLocation/gates/{epic}.{story}-{slug}.yml
Risk profile: qa.qaLocation/assessments/{epic}.{story}-risk-{YYYYMMDD}.md
NFR assessment: qa.qaLocation/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
# Note: Paths should reference core-config.yaml for custom configurations
### Recommended Status
[✓ Ready for Done] / [✗ Changes Required - See unchecked items above]
(Story owner decides final status)
```
## Output 2: Create Quality Gate File
**Template and Directory:**
- Render from `../templates/qa-gate-tmpl.yaml`
- Create directory defined in `qa.qaLocation/gates` (see `bmad-core/core-config.yaml`) if missing
- Save to: `qa.qaLocation/gates/{epic}.{story}-{slug}.yml`
Gate file structure:
```yaml
schema: 1
story: '{epic}.{story}'
story_title: '{story title}'
gate: PASS|CONCERNS|FAIL|WAIVED
status_reason: '1-2 sentence explanation of gate decision'
reviewer: 'Quinn (Test Architect)'
updated: '{ISO-8601 timestamp}'
top_issues: [] # Empty if no issues
waiver: { active: false } # Set active: true only if WAIVED
# Extended fields (optional but recommended):
quality_score: 0-100 # 100 - (20*FAILs) - (10*CONCERNS) or use technical-preferences.md weights
expires: '{ISO-8601 timestamp}' # Typically 2 weeks from review
evidence:
tests_reviewed: { count }
risks_identified: { count }
trace:
ac_covered: [1, 2, 3] # AC numbers with test coverage
ac_gaps: [4] # AC numbers lacking coverage
nfr_validation:
security:
status: PASS|CONCERNS|FAIL
notes: 'Specific findings'
performance:
status: PASS|CONCERNS|FAIL
notes: 'Specific findings'
reliability:
status: PASS|CONCERNS|FAIL
notes: 'Specific findings'
maintainability:
status: PASS|CONCERNS|FAIL
notes: 'Specific findings'
recommendations:
immediate: # Must fix before production
- action: 'Add rate limiting'
refs: ['api/auth/login.ts']
future: # Can be addressed later
- action: 'Consider caching'
refs: ['services/data.ts']
```
### Gate Decision Criteria
**Deterministic rule (apply in order):**
If risk_summary exists, apply its thresholds first (≥9 → FAIL, ≥6 → CONCERNS), then test coverage gaps from trace, then top_issues severity, then NFR statuses.
1. **Risk thresholds (if risk_summary present):**
- If any risk score ≥ 9 → Gate = FAIL (unless waived)
- Else if any score ≥ 6 → Gate = CONCERNS
2. **Test coverage gaps (if trace available):**
- If any P0 test from test-design is missing → Gate = CONCERNS
- If security/data-loss P0 test missing → Gate = FAIL
3. **Issue severity:**
- If any `top_issues.severity == high` → Gate = FAIL (unless waived)
- Else if any `severity == medium` → Gate = CONCERNS
4. **NFR statuses:**
- If any NFR status is FAIL → Gate = FAIL
- Else if any NFR status is CONCERNS → Gate = CONCERNS
- Else → Gate = PASS
- WAIVED only when waiver.active: true with reason/approver
Detailed criteria:
- **PASS**: All critical requirements met, no blocking issues
- **CONCERNS**: Non-critical issues found, team should review
- **FAIL**: Critical issues that should be addressed
- **WAIVED**: Issues acknowledged but explicitly waived by team
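As a rough sketch, the ordered rule could look like the following; the input shapes are assumptions about how gate data might be held in memory, not a defined API:
```python
# Rough sketch of the ordered gate decision; data shapes are assumed for illustration.
def decide_gate(risk_scores, missing_p0, missing_security_p0, top_issues, nfr_statuses, waived=False):
    if waived:
        return "WAIVED"  # only when waiver.active is true with reason/approver
    # 1. Risk thresholds (if a risk_summary is present)
    if any(score >= 9 for score in risk_scores):
        return "FAIL"
    if any(score >= 6 for score in risk_scores):
        return "CONCERNS"
    # 2. Test coverage gaps from test-design / trace
    if missing_security_p0:
        return "FAIL"
    if missing_p0:
        return "CONCERNS"
    # 3. Issue severity
    severities = [issue["severity"] for issue in top_issues]
    if "high" in severities:
        return "FAIL"
    if "medium" in severities:
        return "CONCERNS"
    # 4. NFR statuses
    if "FAIL" in nfr_statuses.values():
        return "FAIL"
    if "CONCERNS" in nfr_statuses.values():
        return "CONCERNS"
    return "PASS"

# Example: no risks, no coverage gaps, one medium issue, all NFRs PASS -> "CONCERNS"
print(decide_gate([], False, False, [{"severity": "medium"}], {"security": "PASS"}))
```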
### Quality Score Calculation
```text
quality_score = 100 - (20 × number of FAILs) - (10 × number of CONCERNS)
Bounded between 0 and 100
```
If `technical-preferences.md` defines custom weights, use those instead.
### Suggested Owner Convention
For each issue in `top_issues`, include a `suggested_owner`:
- `dev`: Code changes needed
- `sm`: Requirements clarification needed
- `po`: Business decision needed
## Key Principles
- You are a SENIOR developer reviewing junior/mid-level work
- You have the authority and responsibility to improve code directly
- You are a Test Architect providing comprehensive quality assessment
- You have the authority to improve code directly when appropriate
- Always explain your changes for learning purposes
- Balance between perfection and pragmatism
- Focus on significant improvements, not nitpicks
- Focus on risk-based prioritization
- Provide actionable recommendations with clear ownership
## Blocking Conditions
@@ -140,6 +309,8 @@ Stop the review and request clarification if:
After review:
1. If all items are checked and approved: Update story status to "Done"
2. If unchecked items remain: Keep status as "Review" for dev to address
3. Always provide constructive feedback and explanations for learning
1. Update the QA Results section in the story file
2. Create the gate file in directory from `qa.qaLocation/gates`
3. Recommend status: "Ready for Done" or "Changes Required" (owner decides)
4. If files were modified, list them in QA Results and ask Dev to update File List
5. Always provide constructive feedback and actionable recommendations

View File

@@ -0,0 +1,355 @@
<!-- Powered by BMAD™ Core -->
# risk-profile
Generate a comprehensive risk assessment matrix for a story implementation using probability × impact analysis.
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "1.3"
- story_path: 'docs/stories/{epic}.{story}.*.md'
- story_title: '{title}' # If missing, derive from story file H1
- story_slug: '{slug}' # If missing, derive from title (lowercase, hyphenated)
```
## Purpose
Identify, assess, and prioritize risks in the story implementation. Provide risk mitigation strategies and testing focus areas based on risk levels.
## Risk Assessment Framework
### Risk Categories
**Category Prefixes:**
- `TECH`: Technical Risks
- `SEC`: Security Risks
- `PERF`: Performance Risks
- `DATA`: Data Risks
- `BUS`: Business Risks
- `OPS`: Operational Risks
1. **Technical Risks (TECH)**
- Architecture complexity
- Integration challenges
- Technical debt
- Scalability concerns
- System dependencies
2. **Security Risks (SEC)**
- Authentication/authorization flaws
- Data exposure vulnerabilities
- Injection attacks
- Session management issues
- Cryptographic weaknesses
3. **Performance Risks (PERF)**
- Response time degradation
- Throughput bottlenecks
- Resource exhaustion
- Database query optimization
- Caching failures
4. **Data Risks (DATA)**
- Data loss potential
- Data corruption
- Privacy violations
- Compliance issues
- Backup/recovery gaps
5. **Business Risks (BUS)**
- Feature doesn't meet user needs
- Revenue impact
- Reputation damage
- Regulatory non-compliance
- Market timing
6. **Operational Risks (OPS)**
- Deployment failures
- Monitoring gaps
- Incident response readiness
- Documentation inadequacy
- Knowledge transfer issues
## Risk Analysis Process
### 1. Risk Identification
For each category, identify specific risks:
```yaml
risk:
id: 'SEC-001' # Use prefixes: SEC, PERF, DATA, BUS, OPS, TECH
category: security
title: 'Insufficient input validation on user forms'
description: 'Form inputs not properly sanitized could lead to XSS attacks'
affected_components:
- 'UserRegistrationForm'
- 'ProfileUpdateForm'
detection_method: 'Code review revealed missing validation'
```
### 2. Risk Assessment
Evaluate each risk using probability × impact:
**Probability Levels:**
- `High (3)`: Likely to occur (>70% chance)
- `Medium (2)`: Possible occurrence (30-70% chance)
- `Low (1)`: Unlikely to occur (<30% chance)
**Impact Levels:**
- `High (3)`: Severe consequences (data breach, system down, major financial loss)
- `Medium (2)`: Moderate consequences (degraded performance, minor data issues)
- `Low (1)`: Minor consequences (cosmetic issues, slight inconvenience)
### Risk Score = Probability × Impact
- 9: Critical Risk (Red)
- 6: High Risk (Orange)
- 4: Medium Risk (Yellow)
- 2-3: Low Risk (Green)
- 1: Minimal Risk (Blue)
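A tiny worked sketch of this scoring, mapping the probability and impact labels above to numbers (the helper is illustrative):
```python
# Illustrative probability x impact scoring using the bands above.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(probability: str, impact: str) -> tuple[int, str]:
    score = LEVELS[probability.lower()] * LEVELS[impact.lower()]
    if score == 9:
        priority = "Critical"
    elif score == 6:
        priority = "High"
    elif score == 4:
        priority = "Medium"
    elif score >= 2:
        priority = "Low"
    else:
        priority = "Minimal"
    return score, priority

print(risk_score("High", "High"))  # (9, 'Critical'), e.g. SEC-001 above
print(risk_score("Low", "High"))   # (3, 'Low'), e.g. DATA-001 above
```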
### 3. Risk Prioritization
Create risk matrix:
```markdown
## Risk Matrix
| Risk ID | Description | Probability | Impact | Score | Priority |
| -------- | ----------------------- | ----------- | ---------- | ----- | -------- |
| SEC-001 | XSS vulnerability | High (3) | High (3) | 9 | Critical |
| PERF-001 | Slow query on dashboard | Medium (2) | Medium (2) | 4 | Medium |
| DATA-001 | Backup failure | Low (1) | High (3) | 3 | Low |
```
### 4. Risk Mitigation Strategies
For each identified risk, provide mitigation:
```yaml
mitigation:
risk_id: 'SEC-001'
strategy: 'preventive' # preventive|detective|corrective
actions:
- 'Implement input validation library (e.g., validator.js)'
- 'Add CSP headers to prevent XSS execution'
- 'Sanitize all user inputs before storage'
- 'Escape all outputs in templates'
testing_requirements:
- 'Security testing with OWASP ZAP'
- 'Manual penetration testing of forms'
- 'Unit tests for validation functions'
residual_risk: 'Low - Some zero-day vulnerabilities may remain'
owner: 'dev'
timeline: 'Before deployment'
```
## Outputs
### Output 1: Gate YAML Block
Generate for pasting into gate file under `risk_summary`:
**Output rules:**
- Only include assessed risks; do not emit placeholders
- Sort risks by score (desc) when emitting highest and any tabular lists
- If no risks: totals all zeros, omit highest, keep recommendations arrays empty
```yaml
# risk_summary (paste into gate file):
risk_summary:
totals:
critical: X # score 9
high: Y # score 6
medium: Z # score 4
low: W # score 2-3
highest:
id: SEC-001
score: 9
title: 'XSS on profile form'
recommendations:
must_fix:
- 'Add input sanitization & CSP'
monitor:
- 'Add security alerts for auth endpoints'
```
### Output 2: Markdown Report
**Save to:** `qa.qaLocation/assessments/{epic}.{story}-risk-{YYYYMMDD}.md`
```markdown
# Risk Profile: Story {epic}.{story}
Date: {date}
Reviewer: Quinn (Test Architect)
## Executive Summary
- Total Risks Identified: X
- Critical Risks: Y
- High Risks: Z
- Risk Score: XX/100 (calculated)
## Critical Risks Requiring Immediate Attention
### 1. [ID]: Risk Title
**Score: 9 (Critical)**
**Probability**: High - Detailed reasoning
**Impact**: High - Potential consequences
**Mitigation**:
- Immediate action required
- Specific steps to take
**Testing Focus**: Specific test scenarios needed
## Risk Distribution
### By Category
- Security: X risks (Y critical)
- Performance: X risks (Y critical)
- Data: X risks (Y critical)
- Business: X risks (Y critical)
- Operational: X risks (Y critical)
### By Component
- Frontend: X risks
- Backend: X risks
- Database: X risks
- Infrastructure: X risks
## Detailed Risk Register
[Full table of all risks with scores and mitigations]
## Risk-Based Testing Strategy
### Priority 1: Critical Risk Tests
- Test scenarios for critical risks
- Required test types (security, load, chaos)
- Test data requirements
### Priority 2: High Risk Tests
- Integration test scenarios
- Edge case coverage
### Priority 3: Medium/Low Risk Tests
- Standard functional tests
- Regression test suite
## Risk Acceptance Criteria
### Must Fix Before Production
- All critical risks (score 9)
- High risks affecting security/data
### Can Deploy with Mitigation
- Medium risks with compensating controls
- Low risks with monitoring in place
### Accepted Risks
- Document any risks team accepts
- Include sign-off from appropriate authority
## Monitoring Requirements
Post-deployment monitoring for:
- Performance metrics for PERF risks
- Security alerts for SEC risks
- Error rates for operational risks
- Business KPIs for business risks
## Risk Review Triggers
Review and update risk profile when:
- Architecture changes significantly
- New integrations added
- Security vulnerabilities discovered
- Performance issues reported
- Regulatory requirements change
```
## Risk Scoring Algorithm
Calculate overall story risk score:
```text
Base Score = 100
For each risk:
- Critical (9): Deduct 20 points
- High (6): Deduct 10 points
- Medium (4): Deduct 5 points
- Low (2-3): Deduct 2 points
Minimum score = 0 (extremely risky)
Maximum score = 100 (minimal risk)
```
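The same algorithm as a minimal sketch; the deduction table mirrors the rules above and the helper name is an assumption:
```python
# Sketch of the overall story risk score using the deductions above.
DEDUCTIONS = {9: 20, 6: 10, 4: 5, 3: 2, 2: 2}  # risk score -> points deducted

def story_risk_score(risk_scores: list[int]) -> int:
    score = 100
    for s in risk_scores:
        score -= DEDUCTIONS.get(s, 0)
    return max(0, min(100, score))

# Example: one critical (9), one high (6), one low (3) risk -> 100 - 20 - 10 - 2 = 68
print(story_risk_score([9, 6, 3]))
```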
## Risk-Based Recommendations
Based on risk profile, recommend:
1. **Testing Priority**
- Which tests to run first
- Additional test types needed
- Test environment requirements
2. **Development Focus**
- Code review emphasis areas
- Additional validation needed
- Security controls to implement
3. **Deployment Strategy**
- Phased rollout for high-risk changes
- Feature flags for risky features
- Rollback procedures
4. **Monitoring Setup**
- Metrics to track
- Alerts to configure
- Dashboard requirements
## Integration with Quality Gates
**Deterministic gate mapping:**
- Any risk with score ≥ 9 → Gate = FAIL (unless waived)
- Else if any score ≥ 6 → Gate = CONCERNS
- Else → Gate = PASS
- Unmitigated risks → Document in gate
### Output 3: Story Hook Line
**Print this line for review task to quote:**
```text
Risk profile: qa.qaLocation/assessments/{epic}.{story}-risk-{YYYYMMDD}.md
```
## Key Principles
- Identify risks early and systematically
- Use consistent probability × impact scoring
- Provide actionable mitigation strategies
- Link risks to specific test requirements
- Track residual risk after mitigation
- Update risk profile as story evolves

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Document Sharding Task
## Purpose
@@ -91,13 +93,11 @@ CRITICAL: Use proper parsing that understands markdown context. A ## inside a co
For each extracted section:
1. **Generate filename**: Convert the section heading to lowercase-dash-case
- Remove special characters
- Replace spaces with dashes
- Example: "## Tech Stack" → `tech-stack.md`
2. **Adjust heading levels**:
- The level 2 heading becomes level 1 (# instead of ##) in the new sharded document
- All subsection levels decrease by 1:

View File

@@ -0,0 +1,176 @@
<!-- Powered by BMAD™ Core -->
# test-design
Create comprehensive test scenarios with appropriate test level recommendations for story implementation.
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "1.3"
- story_path: '{devStoryLocation}/{epic}.{story}.*.md' # Path from core-config.yaml
- story_title: '{title}' # If missing, derive from story file H1
- story_slug: '{slug}' # If missing, derive from title (lowercase, hyphenated)
```
## Purpose
Design a complete test strategy that identifies what to test, at which level (unit/integration/e2e), and why. This ensures efficient test coverage without redundancy while maintaining appropriate test boundaries.
## Dependencies
```yaml
data:
- test-levels-framework.md # Unit/Integration/E2E decision criteria
- test-priorities-matrix.md # P0/P1/P2/P3 classification system
```
## Process
### 1. Analyze Story Requirements
Break down each acceptance criterion into testable scenarios. For each AC:
- Identify the core functionality to test
- Determine data variations needed
- Consider error conditions
- Note edge cases
### 2. Apply Test Level Framework
**Reference:** Load `test-levels-framework.md` for detailed criteria
Quick rules:
- **Unit**: Pure logic, algorithms, calculations
- **Integration**: Component interactions, DB operations
- **E2E**: Critical user journeys, compliance
### 3. Assign Priorities
**Reference:** Load `test-priorities-matrix.md` for classification
Quick priority assignment:
- **P0**: Revenue-critical, security, compliance
- **P1**: Core user journeys, frequently used
- **P2**: Secondary features, admin functions
- **P3**: Nice-to-have, rarely used
### 4. Design Test Scenarios
For each identified test need, create:
```yaml
test_scenario:
id: '{epic}.{story}-{LEVEL}-{SEQ}'
requirement: 'AC reference'
priority: P0|P1|P2|P3
level: unit|integration|e2e
description: 'What is being tested'
justification: 'Why this level was chosen'
mitigates_risks: ['RISK-001'] # If risk profile exists
```
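As a small illustration of the ID convention, assuming the level abbreviations used in the example matrix below (UNIT / INT / E2E); the helper is illustrative only:
```python
# Illustrative helper for the '{epic}.{story}-{LEVEL}-{SEQ}' ID convention above.
LEVEL_CODES = {"unit": "UNIT", "integration": "INT", "e2e": "E2E"}

def scenario_id(epic: int, story: int, level: str, seq: int) -> str:
    return f"{epic}.{story}-{LEVEL_CODES[level]}-{seq:03d}"

print(scenario_id(1, 3, "unit", 1))  # -> 1.3-UNIT-001
print(scenario_id(1, 3, "e2e", 1))   # -> 1.3-E2E-001
```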
### 5. Validate Coverage
Ensure:
- Every AC has at least one test
- No duplicate coverage across levels
- Critical paths have multiple levels
- Risk mitigations are addressed
## Outputs
### Output 1: Test Design Document
**Save to:** `qa.qaLocation/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md`
```markdown
# Test Design: Story {epic}.{story}
Date: {date}
Designer: Quinn (Test Architect)
## Test Strategy Overview
- Total test scenarios: X
- Unit tests: Y (A%)
- Integration tests: Z (B%)
- E2E tests: W (C%)
- Priority distribution: P0: X, P1: Y, P2: Z
## Test Scenarios by Acceptance Criteria
### AC1: {description}
#### Scenarios
| ID | Level | Priority | Test | Justification |
| ------------ | ----------- | -------- | ------------------------- | ------------------------ |
| 1.3-UNIT-001 | Unit | P0 | Validate input format | Pure validation logic |
| 1.3-INT-001 | Integration | P0 | Service processes request | Multi-component flow |
| 1.3-E2E-001 | E2E | P1 | User completes journey | Critical path validation |
[Continue for all ACs...]
## Risk Coverage
[Map test scenarios to identified risks if risk profile exists]
## Recommended Execution Order
1. P0 Unit tests (fail fast)
2. P0 Integration tests
3. P0 E2E tests
4. P1 tests in order
5. P2+ as time permits
```
### Output 2: Gate YAML Block
Generate for inclusion in quality gate:
```yaml
test_design:
scenarios_total: X
by_level:
unit: Y
integration: Z
e2e: W
by_priority:
p0: A
p1: B
p2: C
coverage_gaps: [] # List any ACs without tests
```
### Output 3: Trace References
Print for use by trace-requirements task:
```text
Test design matrix: qa.qaLocation/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md
P0 tests identified: {count}
```
## Quality Checklist
Before finalizing, verify:
- [ ] Every AC has test coverage
- [ ] Test levels are appropriate (not over-testing)
- [ ] No duplicate coverage across levels
- [ ] Priorities align with business risk
- [ ] Test IDs follow naming convention
- [ ] Scenarios are atomic and independent
## Key Principles
- **Shift left**: Prefer unit over integration, integration over E2E
- **Risk-based**: Focus on what could go wrong
- **Efficient coverage**: Test once at the right level
- **Maintainability**: Consider long-term test maintenance
- **Fast feedback**: Quick tests run first

View File

@@ -0,0 +1,266 @@
<!-- Powered by BMAD™ Core -->
# trace-requirements
Map story requirements to test cases using Given-When-Then patterns for comprehensive traceability.
## Purpose
Create a requirements traceability matrix that ensures every acceptance criterion has corresponding test coverage. This task helps identify gaps in testing and ensures all requirements are validated.
**IMPORTANT**: Given-When-Then is used here for documenting the mapping between requirements and tests, NOT for writing the actual test code. Tests should follow your project's testing standards (no BDD syntax in test code).
## Prerequisites
- Story file with clear acceptance criteria
- Access to test files or test specifications
- Understanding of the implementation
## Traceability Process
### 1. Extract Requirements
Identify all testable requirements from:
- Acceptance Criteria (primary source)
- User story statement
- Tasks/subtasks with specific behaviors
- Non-functional requirements mentioned
- Edge cases documented
### 2. Map to Test Cases
For each requirement, document which tests validate it. Use Given-When-Then to describe what the test validates (not how it's written):
```yaml
requirement: 'AC1: User can login with valid credentials'
test_mappings:
- test_file: 'auth/login.test.ts'
test_case: 'should successfully login with valid email and password'
# Given-When-Then describes WHAT the test validates, not HOW it's coded
given: 'A registered user with valid credentials'
when: 'They submit the login form'
then: 'They are redirected to dashboard and session is created'
coverage: full
- test_file: 'e2e/auth-flow.test.ts'
test_case: 'complete login flow'
given: 'User on login page'
when: 'Entering valid credentials and submitting'
then: 'Dashboard loads with user data'
coverage: integration
```
### 3. Coverage Analysis
Evaluate coverage for each requirement:
**Coverage Levels:**
- `full`: Requirement completely tested
- `partial`: Some aspects tested, gaps exist
- `none`: No test coverage found
- `integration`: Covered in integration/e2e tests only
- `unit`: Covered in unit tests only
### 4. Gap Identification
Document any gaps found:
```yaml
coverage_gaps:
- requirement: 'AC3: Password reset email sent within 60 seconds'
gap: 'No test for email delivery timing'
severity: medium
suggested_test:
type: integration
description: 'Test email service SLA compliance'
- requirement: 'AC5: Support 1000 concurrent users'
gap: 'No load testing implemented'
severity: high
suggested_test:
type: performance
description: 'Load test with 1000 concurrent connections'
```
## Outputs
### Output 1: Gate YAML Block
**Generate for pasting into gate file under `trace`:**
```yaml
trace:
totals:
requirements: X
full: Y
partial: Z
none: W
planning_ref: 'qa.qaLocation/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md'
uncovered:
- ac: 'AC3'
reason: 'No test found for password reset timing'
notes: 'See qa.qaLocation/assessments/{epic}.{story}-trace-{YYYYMMDD}.md'
```
### Output 2: Traceability Report
**Save to:** `qa.qaLocation/assessments/{epic}.{story}-trace-{YYYYMMDD}.md`
Create a traceability report with:
```markdown
# Requirements Traceability Matrix
## Story: {epic}.{story} - {title}
### Coverage Summary
- Total Requirements: X
- Fully Covered: Y (Z%)
- Partially Covered: A (B%)
- Not Covered: C (D%)
### Requirement Mappings
#### AC1: {Acceptance Criterion 1}
**Coverage: FULL**
Given-When-Then Mappings:
- **Unit Test**: `auth.service.test.ts::validateCredentials`
- Given: Valid user credentials
- When: Validation method called
- Then: Returns true with user object
- **Integration Test**: `auth.integration.test.ts::loginFlow`
- Given: User with valid account
- When: Login API called
- Then: JWT token returned and session created
#### AC2: {Acceptance Criterion 2}
**Coverage: PARTIAL**
[Continue for all ACs...]
### Critical Gaps
1. **Performance Requirements**
- Gap: No load testing for concurrent users
- Risk: High - Could fail under production load
- Action: Implement load tests using k6 or similar
2. **Security Requirements**
- Gap: Rate limiting not tested
- Risk: Medium - Potential DoS vulnerability
- Action: Add rate limit tests to integration suite
### Test Design Recommendations
Based on gaps identified, recommend:
1. Additional test scenarios needed
2. Test types to implement (unit/integration/e2e/performance)
3. Test data requirements
4. Mock/stub strategies
### Risk Assessment
- **High Risk**: Requirements with no coverage
- **Medium Risk**: Requirements with only partial coverage
- **Low Risk**: Requirements with full unit + integration coverage
```
## Traceability Best Practices
### Given-When-Then for Mapping (Not Test Code)
Use Given-When-Then to document what each test validates:
**Given**: The initial context the test sets up
- What state/data the test prepares
- User context being simulated
- System preconditions
**When**: The action the test performs
- What the test executes
- API calls or user actions tested
- Events triggered
**Then**: What the test asserts
- Expected outcomes verified
- State changes checked
- Values validated
**Note**: This is for documentation only. Actual test code follows your project's standards (e.g., describe/it blocks, no BDD syntax).
### Coverage Priority
Prioritize coverage based on:
1. Critical business flows
2. Security-related requirements
3. Data integrity requirements
4. User-facing features
5. Performance SLAs
### Test Granularity
Map at appropriate levels:
- Unit tests for business logic
- Integration tests for component interaction
- E2E tests for user journeys
- Performance tests for NFRs
## Quality Indicators
Good traceability shows:
- Every AC has at least one test
- Critical paths have multiple test levels
- Edge cases are explicitly covered
- NFRs have appropriate test types
- Clear Given-When-Then for each test
## Red Flags
Watch for:
- ACs with no test coverage
- Tests that don't map to requirements
- Vague test descriptions
- Missing edge case coverage
- NFRs without specific tests
## Integration with Gates
This traceability feeds into quality gates:
- Critical gaps → FAIL
- Minor gaps → CONCERNS
- Missing P0 tests from test-design → CONCERNS
- Full coverage → PASS contribution
### Output 3: Story Hook Line
**Print this line for review task to quote:**
```text
Trace matrix: qa.qaLocation/assessments/{epic}.{story}-trace-{YYYYMMDD}.md
```
## Key Principles
- Every requirement must be testable
- Use Given-When-Then for clarity
- Identify both presence and absence
- Prioritize based on risk
- Make recommendations actionable

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Validate Next Story Task
## Purpose

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: architecture-template-v2
name: Architecture Document

View File

@@ -153,4 +153,4 @@ sections:
content: |
---
*Session facilitated using the BMAD-METHOD brainstorming framework*
*Session facilitated using the BMAD-METHOD brainstorming framework*

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: brownfield-architecture-template-v2
name: Brownfield Enhancement Architecture

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: brownfield-prd-template-v2
name: Brownfield Enhancement PRD

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: competitor-analysis-template-v2
name: Competitive Analysis Report
@@ -141,7 +142,14 @@ sections:
title: Feature Comparison Matrix
instruction: Create a detailed comparison table of key features across competitors
type: table
columns: ["Feature Category", "{{your_company}}", "{{competitor_1}}", "{{competitor_2}}", "{{competitor_3}}"]
columns:
[
"Feature Category",
"{{your_company}}",
"{{competitor_1}}",
"{{competitor_2}}",
"{{competitor_3}}",
]
rows:
- category: "Core Functionality"
items:
@@ -153,7 +161,13 @@ sections:
- ["Onboarding Time", "{{time}}", "{{time}}", "{{time}}", "{{time}}"]
- category: "Integration & Ecosystem"
items:
- ["API Availability", "{{availability}}", "{{availability}}", "{{availability}}", "{{availability}}"]
- [
"API Availability",
"{{availability}}",
"{{availability}}",
"{{availability}}",
"{{availability}}",
]
- ["Third-party Integrations", "{{number}}", "{{number}}", "{{number}}", "{{number}}"]
- category: "Pricing & Plans"
items:

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: frontend-architecture-template-v2
name: Frontend Architecture Document
@@ -75,12 +76,24 @@ sections:
rows:
- ["Framework", "{{framework}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["UI Library", "{{ui_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["State Management", "{{state_management}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- [
"State Management",
"{{state_management}}",
"{{version}}",
"{{purpose}}",
"{{why_chosen}}",
]
- ["Routing", "{{routing_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Build Tool", "{{build_tool}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Styling", "{{styling_solution}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Testing", "{{test_framework}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Component Library", "{{component_lib}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- [
"Component Library",
"{{component_lib}}",
"{{version}}",
"{{purpose}}",
"{{why_chosen}}",
]
- ["Form Handling", "{{form_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Animation", "{{animation_lib}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Dev Tools", "{{dev_tools}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: frontend-spec-template-v2
name: UI/UX Specification

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: fullstack-architecture-template-v2
name: Fullstack Architecture Document
@@ -156,11 +157,29 @@ sections:
columns: [Category, Technology, Version, Purpose, Rationale]
rows:
- ["Frontend Language", "{{fe_language}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Frontend Framework", "{{fe_framework}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["UI Component Library", "{{ui_library}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- [
"Frontend Framework",
"{{fe_framework}}",
"{{version}}",
"{{purpose}}",
"{{why_chosen}}",
]
- [
"UI Component Library",
"{{ui_library}}",
"{{version}}",
"{{purpose}}",
"{{why_chosen}}",
]
- ["State Management", "{{state_mgmt}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Backend Language", "{{be_language}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Backend Framework", "{{be_framework}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- [
"Backend Framework",
"{{be_framework}}",
"{{version}}",
"{{purpose}}",
"{{why_chosen}}",
]
- ["API Style", "{{api_style}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Database", "{{database}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]
- ["Cache", "{{cache}}", "{{version}}", "{{purpose}}", "{{why_chosen}}"]

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: market-research-template-v2
name: Market Research Report

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: prd-template-v2
name: Product Requirements Document

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: project-brief-template-v2
name: Project Brief

View File

@@ -0,0 +1,103 @@
# <!-- Powered by BMAD™ Core -->
template:
id: qa-gate-template-v1
name: Quality Gate Decision
version: 1.0
output:
format: yaml
filename: qa.qaLocation/gates/{{epic_num}}.{{story_num}}-{{story_slug}}.yml
title: "Quality Gate: {{epic_num}}.{{story_num}}"
# Required fields (keep these first)
schema: 1
story: "{{epic_num}}.{{story_num}}"
story_title: "{{story_title}}"
gate: "{{gate_status}}" # PASS|CONCERNS|FAIL|WAIVED
status_reason: "{{status_reason}}" # 1-2 sentence summary of why this gate decision
reviewer: "Quinn (Test Architect)"
updated: "{{iso_timestamp}}"
# Always present but only active when WAIVED
waiver: { active: false }
# Issues (if any) - Use fixed severity: low | medium | high
top_issues: []
# Risk summary (from risk-profile task if run)
risk_summary:
totals: { critical: 0, high: 0, medium: 0, low: 0 }
recommendations:
must_fix: []
monitor: []
# Examples section using block scalars for clarity
examples:
with_issues: |
top_issues:
- id: "SEC-001"
severity: high # ONLY: low|medium|high
finding: "No rate limiting on login endpoint"
suggested_action: "Add rate limiting middleware before production"
- id: "TEST-001"
severity: medium
finding: "Missing integration tests for auth flow"
suggested_action: "Add test coverage for critical paths"
when_waived: |
waiver:
active: true
reason: "Accepted for MVP release - will address in next sprint"
approved_by: "Product Owner"
# ============ Optional Extended Fields ============
# Uncomment and use if your team wants more detail
optional_fields_examples:
quality_and_expiry: |
quality_score: 75 # 0-100 (optional scoring)
expires: "2025-01-26T00:00:00Z" # Optional gate freshness window
evidence: |
evidence:
tests_reviewed: 15
risks_identified: 3
trace:
ac_covered: [1, 2, 3] # AC numbers with test coverage
ac_gaps: [4] # AC numbers lacking coverage
nfr_validation: |
nfr_validation:
security: { status: CONCERNS, notes: "Rate limiting missing" }
performance: { status: PASS, notes: "" }
reliability: { status: PASS, notes: "" }
maintainability: { status: PASS, notes: "" }
history: |
history: # Append-only audit trail
- at: "2025-01-12T10:00:00Z"
gate: FAIL
note: "Initial review - missing tests"
- at: "2025-01-12T15:00:00Z"
gate: CONCERNS
note: "Tests added but rate limiting still missing"
risk_summary: |
risk_summary: # From risk-profile task
totals:
critical: 0
high: 0
medium: 0
low: 0
# 'highest' is emitted only when risks exist
recommendations:
must_fix: []
monitor: []
recommendations: |
recommendations:
immediate: # Must fix before production
- action: "Add rate limiting to auth endpoints"
refs: ["api/auth/login.ts:42-68"]
future: # Can be addressed later
- action: "Consider caching for better performance"
refs: ["services/data.service.ts"]

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
template:
id: story-template-v2
name: Story Document

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: brownfield-fullstack
name: Brownfield Full-Stack Enhancement

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: brownfield-service
name: Brownfield Service/API Enhancement

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: brownfield-ui
name: Brownfield UI/Frontend Enhancement

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: greenfield-fullstack
name: Greenfield Full-Stack Application Development

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: greenfield-service
name: Greenfield Service/API Development

View File

@@ -1,3 +1,4 @@
# <!-- Powered by BMAD™ Core -->
workflow:
id: greenfield-ui
name: Greenfield UI/Frontend Development

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Create Document from Template (YAML Driven)
## ⚠️ CRITICAL EXECUTION NOTICE ⚠️

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -9,7 +11,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -22,14 +23,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists; a summary of each section will be presented at the end for discussion)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be found or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
@@ -38,7 +37,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -46,7 +44,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -60,7 +57,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
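A minimal sketch of the pass-rate arithmetic, assuming each item is marked pass, fail, partial, or n/a (these status labels are an assumption, not prescribed by the task):
```python
# Illustrative pass-rate calculation for one checklist section.
def pass_rate(results: list[str]) -> float:
    """results holds one of 'pass', 'fail', 'partial', 'n/a' per checklist item."""
    applicable = [r for r in results if r != "n/a"]  # n/a items are excluded
    if not applicable:
        return 100.0
    return 100.0 * sum(r == "pass" for r in applicable) / len(applicable)

print(pass_rate(["pass", "pass", "fail", "n/a"]))  # -> ~66.7
```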
@@ -70,7 +66,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# BMad Document Template Specification
## Overview
@@ -14,7 +16,7 @@ template:
output:
format: markdown
filename: default-path/to/{{filename}}.md
title: "{{variable}} Document Title"
title: '{{variable}} Document Title'
workflow:
mode: interactive
@@ -108,8 +110,8 @@ sections:
Use `{{variable_name}}` in titles, templates, and content:
```yaml
title: "Epic {{epic_number}} {{epic_title}}"
template: "As a {{user_type}}, I want {{action}}, so that {{benefit}}."
title: 'Epic {{epic_number}} {{epic_title}}'
template: 'As a {{user_type}}, I want {{action}}, so that {{benefit}}.'
```
### Conditional Sections
@@ -212,7 +214,7 @@ choices:
- id: criteria
title: Acceptance Criteria
type: numbered-list
item_template: "{{criterion_number}}: {{criteria}}"
item_template: '{{criterion_number}}: {{criteria}}'
repeatable: true
```
@@ -220,7 +222,7 @@ choices:
````yaml
examples:
- "FR6: The system must authenticate users within 2 seconds"
- 'FR6: The system must authenticate users within 2 seconds'
- |
```mermaid
sequenceDiagram

View File

@@ -1,3 +1,5 @@
<!-- Powered by BMAD™ Core -->
# Workflow Management
Enables BMad orchestrator to manage and execute team workflows.

1747
dist/agents/analyst.txt vendored

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -53,7 +53,6 @@ activation-instructions:
- Assess user goal against available agents and workflows in this bundle
- If clear match to an agent's expertise, suggest transformation with *agent command
- If project-oriented, suggest *workflow-guidance to explore options
- Load resources only when needed - never pre-load
agent:
name: BMad Orchestrator
id: bmad-orchestrator
@@ -77,21 +76,16 @@ persona:
- Always remind users that commands require * prefix
commands:
help: Show this guide with available agents and workflows
chat-mode: Start conversational mode for detailed assistance
kb-mode: Load full BMad knowledge base
status: Show current context, active agent, and progress
agent: Transform into a specialized agent (list if name not specified)
exit: Return to BMad or exit session
task: Run a specific task (list if name not specified)
workflow: Start a specific workflow (list if name not specified)
workflow-guidance: Get personalized help selecting the right workflow
plan: Create detailed workflow plan before starting
plan-status: Show current workflow plan progress
plan-update: Update workflow plan status
chat-mode: Start conversational mode for detailed assistance
checklist: Execute a checklist (list if name not specified)
yolo: Toggle skip confirmations mode
party-mode: Group chat with all agents
doc-out: Output full document
kb-mode: Load full BMad knowledge base
party-mode: Group chat with all agents
status: Show current context, active agent, and progress
task: Run a specific task (list if name not specified)
yolo: Toggle skip confirmations mode
exit: Return to BMad or exit session
help-display-template: |
=== BMad Orchestrator Commands ===
All commands must start with * (asterisk)
@@ -160,19 +154,20 @@ workflow-guidance:
- Only recommend workflows that actually exist in the current bundle
- When *workflow-guidance is called, start an interactive session and list all available workflows with brief descriptions
dependencies:
data:
- bmad-kb.md
- elicitation-methods.md
tasks:
- advanced-elicitation.md
- create-doc.md
- kb-mode-interaction.md
data:
- bmad-kb.md
- elicitation-methods.md
utils:
- workflow-management.md
```
==================== END: .bmad-core/agents/bmad-orchestrator.md ====================
==================== START: .bmad-core/tasks/advanced-elicitation.md ====================
<!-- Powered by BMAD™ Core -->
# Advanced Elicitation Task
## Purpose
@@ -293,6 +288,7 @@ Choose a number (0-8) or 9 to proceed:
==================== END: .bmad-core/tasks/advanced-elicitation.md ====================
==================== START: .bmad-core/tasks/create-doc.md ====================
<!-- Powered by BMAD™ Core -->
# Create Document from Template (YAML Driven)
## ⚠️ CRITICAL EXECUTION NOTICE ⚠️
@@ -397,6 +393,7 @@ User can type `#yolo` to toggle to YOLO mode (process all sections at once).
==================== END: .bmad-core/tasks/create-doc.md ====================
==================== START: .bmad-core/tasks/kb-mode-interaction.md ====================
<!-- Powered by BMAD™ Core -->
# KB Mode Interaction Task
## Purpose
@@ -405,7 +402,7 @@ Provide a user-friendly interface to the BMad knowledge base without overwhelmin
## Instructions
When entering KB mode (*kb-mode), follow these steps:
When entering KB mode (\*kb-mode), follow these steps:
### 1. Welcome and Guide
@@ -447,12 +444,12 @@ Or ask me about anything else related to BMad-Method!
When user is done or wants to exit KB mode:
- Summarize key points discussed if helpful
- Remind them they can return to KB mode anytime with *kb-mode
- Remind them they can return to KB mode anytime with \*kb-mode
- Suggest next steps based on what was discussed
## Example Interaction
**User**: *kb-mode
**User**: \*kb-mode
**Assistant**: I've entered KB mode and have access to the full BMad knowledge base. I can help you with detailed information about any aspect of BMad-Method.
@@ -475,11 +472,12 @@ Or ask me about anything else related to BMad-Method!
==================== END: .bmad-core/tasks/kb-mode-interaction.md ====================
==================== START: .bmad-core/data/bmad-kb.md ====================
# BMad Knowledge Base
<!-- Powered by BMAD™ Core -->
# BMAD™ Knowledge Base
## Overview
BMad-Method (Breakthrough Method of Agile AI-driven Development) is a framework that combines AI agents with Agile development methodologies. The v4 system introduces a modular architecture with improved dependency management, bundle optimization, and support for both web and IDE environments.
BMAD-METHOD™ (Breakthrough Method of Agile AI-driven Development) is a framework that combines AI agents with Agile development methodologies. The v4 system introduces a modular architecture with improved dependency management, bundle optimization, and support for both web and IDE environments.
### Key Features
@@ -578,7 +576,7 @@ npx bmad-method install
- **Roo Code**: Web-based IDE with agent support
- **GitHub Copilot**: VS Code extension with AI peer programming assistant
**Note for VS Code Users**: BMad-Method assumes when you mention "VS Code" that you're using it with an AI-powered extension like GitHub Copilot, Cline, or Roo. Standard VS Code without AI capabilities cannot run BMad agents. The installer includes built-in support for Cline and Roo.
**Note for VS Code Users**: BMAD-METHOD™ assumes when you mention "VS Code" that you're using it with an AI-powered extension like GitHub Copilot, Cline, or Roo. Standard VS Code without AI capabilities cannot run BMad agents. The installer includes built-in support for Cline and Roo.
**Verify Installation**:
@@ -586,7 +584,7 @@ npx bmad-method install
- IDE-specific integration files created
- All agent commands/rules/modes available
**Remember**: At its core, BMad-Method is about mastering and harnessing prompt engineering. Any IDE with AI agent support can use BMad - the framework provides the structured prompts and workflows that make AI development effective
**Remember**: At its core, BMAD-METHOD™ is about mastering and harnessing prompt engineering. Any IDE with AI agent support can use BMad - the framework provides the structured prompts and workflows that make AI development effective
### Environment Selection Guide
@@ -775,7 +773,7 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing
- **Claude Code**: `/agent-name` (e.g., `/bmad-master`)
- **Cursor**: `@agent-name` (e.g., `@bmad-master`)
- **Windsurf**: `@agent-name` (e.g., `@bmad-master`)
- **Windsurf**: `/agent-name` (e.g., `/bmad-master`)
- **Trae**: `@agent-name` (e.g., `@bmad-master`)
- **Roo Code**: Select mode from mode selector (e.g., `bmad-master`)
- **GitHub Copilot**: Open the Chat view (`⌃⌘I` on Mac, `Ctrl+Alt+I` on Windows/Linux) and select **Agent** from the chat mode selector.
@@ -830,7 +828,7 @@ You are the "Vibe CEO" - thinking like a CEO with unlimited resources and a sing
### System Overview
The BMad-Method is built around a modular architecture centered on the `bmad-core` directory, which serves as the brain of the entire system. This design enables the framework to operate effectively in both IDE environments (like Cursor, VS Code) and web-based AI interfaces (like ChatGPT, Gemini).
The BMAD-METHOD™ is built around a modular architecture centered on the `bmad-core` directory, which serves as the brain of the entire system. This design enables the framework to operate effectively in both IDE environments (like Cursor, VS Code) and web-based AI interfaces (like ChatGPT, Gemini).
### Key Architectural Components
@@ -1128,8 +1126,11 @@ Templates with Level 2 headings (`##`) can be automatically sharded:
```markdown
## Goals and Background Context
## Requirements
## User Interface Design Goals
## Success Metrics
```
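Using the lowercase-dash-case convention described in the `shard-doc` task, those headings would be expected to shard into files named roughly as follows (destination folder omitted; names are illustrative, derived only from the headings shown above):

```yaml
# Illustrative shard filenames derived from the headings above
- goals-and-background-context.md
- requirements.md
- user-interface-design-goals.md
- success-metrics.md
```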
@@ -1182,7 +1183,7 @@ Use the `shard-doc` task or `@kayvan/markdown-tree-parser` tool for automatic sh
- **Keep conversations focused** - One agent, one task per conversation
- **Review everything** - Always review and approve before marking complete
## Contributing to BMad-Method
## Contributing to BMAD-METHOD™
### Quick Contribution Guidelines
@@ -1214,7 +1215,7 @@ For full details, see `CONTRIBUTING.md`. Key points:
### What Are Expansion Packs?
Expansion packs extend BMad-Method beyond traditional software development into ANY domain. They provide specialized agent teams, templates, and workflows while keeping the core framework lean and focused on development.
Expansion packs extend BMAD-METHOD™ beyond traditional software development into ANY domain. They provide specialized agent teams, templates, and workflows while keeping the core framework lean and focused on development.
### Why Use Expansion Packs?
@@ -1281,21 +1282,25 @@ Use the **expansion-creator** pack to build your own:
==================== END: .bmad-core/data/bmad-kb.md ====================
==================== START: .bmad-core/data/elicitation-methods.md ====================
<!-- Powered by BMAD™ Core -->
# Elicitation Methods Data
## Core Reflective Methods
**Expand or Contract for Audience**
- Ask whether to 'expand' (add detail, elaborate) or 'contract' (simplify, clarify)
- Identify specific target audience if relevant
- Tailor content complexity and depth accordingly
**Explain Reasoning (CoT Step-by-Step)**
- Walk through the step-by-step thinking process
- Reveal underlying assumptions and decision points
- Show how conclusions were reached from current role's perspective
**Critique and Refine**
- Review output for flaws, inconsistencies, or improvement areas
- Identify specific weaknesses from role's expertise
- Suggest refined version reflecting domain knowledge
@@ -1303,12 +1308,14 @@ Use the **expansion-creator** pack to build your own:
## Structural Analysis Methods
**Analyze Logical Flow and Dependencies**
- Examine content structure for logical progression
- Check internal consistency and coherence
- Identify and validate dependencies between elements
- Confirm effective ordering and sequencing
**Assess Alignment with Overall Goals**
- Evaluate content contribution to stated objectives
- Identify any misalignments or gaps
- Interpret alignment from specific role's perspective
@@ -1317,12 +1324,14 @@ Use the **expansion-creator** pack to build your own:
## Risk and Challenge Methods
**Identify Potential Risks and Unforeseen Issues**
- Brainstorm potential risks from role's expertise
- Identify overlooked edge cases or scenarios
- Anticipate unintended consequences
- Highlight implementation challenges
**Challenge from Critical Perspective**
- Adopt critical stance on current content
- Play devil's advocate from specified viewpoint
- Argue against proposal highlighting weaknesses
@@ -1331,12 +1340,14 @@ Use the **expansion-creator** pack to build your own:
## Creative Exploration Methods
**Tree of Thoughts Deep Dive**
- Break problem into discrete "thoughts" or intermediate steps
- Explore multiple reasoning paths simultaneously
- Use self-evaluation to classify each path as "sure", "likely", or "impossible"
- Apply search algorithms (BFS/DFS) to find optimal solution paths
**Hindsight is 20/20: The 'If Only...' Reflection**
- Imagine retrospective scenario based on current content
- Identify the one "if only we had known/done X..." insight
- Describe imagined consequences humorously or dramatically
@@ -1345,6 +1356,7 @@ Use the **expansion-creator** pack to build your own:
## Multi-Persona Collaboration Methods
**Agile Team Perspective Shift**
- Rotate through different Scrum team member viewpoints
- Product Owner: Focus on user value and business impact
- Scrum Master: Examine process flow and team dynamics
@@ -1352,12 +1364,14 @@ Use the **expansion-creator** pack to build your own:
- QA: Identify testing scenarios and quality concerns
**Stakeholder Round Table**
- Convene virtual meeting with multiple personas
- Each persona contributes unique perspective on content
- Identify conflicts and synergies between viewpoints
- Synthesize insights into actionable recommendations
**Meta-Prompting Analysis**
- Step back to analyze the structure and logic of current approach
- Question the format and methodology being used
- Suggest alternative frameworks or mental models
@@ -1366,24 +1380,28 @@ Use the **expansion-creator** pack to build your own:
## Advanced 2025 Techniques
**Self-Consistency Validation**
- Generate multiple reasoning paths for same problem
- Compare consistency across different approaches
- Identify most reliable and robust solution
- Highlight areas where approaches diverge and why
**ReWOO (Reasoning Without Observation)**
- Separate parametric reasoning from tool-based actions
- Create reasoning plan without external dependencies
- Identify what can be solved through pure reasoning
- Optimize for efficiency and reduced token usage
**Persona-Pattern Hybrid**
- Combine specific role expertise with elicitation pattern
- Architect + Risk Analysis: Deep technical risk assessment
- UX Expert + User Journey: End-to-end experience critique
- PM + Stakeholder Analysis: Multi-perspective impact review
**Emergent Collaboration Discovery**
- Allow multiple perspectives to naturally emerge
- Identify unexpected insights from persona interactions
- Explore novel combinations of viewpoints
@@ -1392,18 +1410,21 @@ Use the **expansion-creator** pack to build your own:
## Game-Based Elicitation Methods
**Red Team vs Blue Team**
- Red Team: Attack the proposal, find vulnerabilities
- Blue Team: Defend and strengthen the approach
- Competitive analysis reveals blind spots
- Results in more robust, battle-tested solutions
**Innovation Tournament**
- Pit multiple alternative approaches against each other
- Score each approach across different criteria
- Crowd-source evaluation from different personas
- Identify winning combination of features
**Escape Room Challenge**
- Present content as constraints to work within
- Find creative solutions within tight limitations
- Identify minimum viable approach
@@ -1412,12 +1433,14 @@ Use the **expansion-creator** pack to build your own:
## Process Control
**Proceed / No Further Actions**
- Acknowledge choice to finalize current work
- Accept output as-is or move to next step
- Prepare to continue without additional elicitation
==================== END: .bmad-core/data/elicitation-methods.md ====================
==================== START: .bmad-core/utils/workflow-management.md ====================
<!-- Powered by BMAD™ Core -->
# Workflow Management
Enables BMad orchestrator to manage and execute team workflows.

183
dist/agents/dev.txt vendored

@@ -69,9 +69,6 @@ core_principles:
- Numbered Options - Always use numbered lists when presenting choices to the user
commands:
- help: Show numbered list of the following commands to allow selection
- run-tests: Execute linting and tests
- explain: teach me what you just did and why, in detail, so I can learn. Explain it as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
- develop-story:
- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
- story-file-updates-ONLY:
@@ -81,16 +78,174 @@ commands:
- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
- explain: teach me what you just did and why, in detail, so I can learn. Explain it as if you were training a junior engineer.
- review-qa: run task `apply-qa-fixes.md`
- run-tests: Execute linting and tests
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
dependencies:
tasks:
- execute-checklist.md
- validate-next-story.md
checklists:
- story-dod-checklist.md
tasks:
- apply-qa-fixes.md
- execute-checklist.md
- validate-next-story.md
```
==================== END: .bmad-core/agents/dev.md ====================
==================== START: .bmad-core/tasks/apply-qa-fixes.md ====================
<!-- Powered by BMAD™ Core -->
# apply-qa-fixes
Implement fixes based on QA results (gate and assessments) for a specific story. This task is for the Dev agent to systematically consume QA outputs and apply code/test changes while only updating allowed sections in the story file.
## Purpose
- Read QA outputs for a story (gate YAML + assessment markdowns)
- Create a prioritized, deterministic fix plan
- Apply code and test changes to close gaps and address issues
- Update only the allowed story sections for the Dev agent
## Inputs
```yaml
required:
- story_id: '{epic}.{story}' # e.g., "2.2"
- qa_root: from `bmad-core/core-config.yaml` key `qa.qaLocation` (e.g., `docs/project/qa`)
- story_root: from `bmad-core/core-config.yaml` key `devStoryLocation` (e.g., `docs/project/stories`)
optional:
- story_title: '{title}' # derive from story H1 if missing
- story_slug: '{slug}' # derive from title (lowercase, hyphenated) if missing
```
## QA Sources to Read
- Gate (YAML): `{qa_root}/gates/{epic}.{story}-*.yml`
- If multiple, use the most recent by modified time
- Assessments (Markdown):
- Test Design: `{qa_root}/assessments/{epic}.{story}-test-design-*.md`
- Traceability: `{qa_root}/assessments/{epic}.{story}-trace-*.md`
- Risk Profile: `{qa_root}/assessments/{epic}.{story}-risk-*.md`
- NFR Assessment: `{qa_root}/assessments/{epic}.{story}-nfr-*.md`
## Prerequisites
- Repository builds and tests run locally (Deno 2)
- Lint and test commands available:
- `deno lint`
- `deno test -A`
## Process (Do not skip steps)
### 0) Load Core Config & Locate Story
- Read `bmad-core/core-config.yaml` and resolve `qa_root` and `story_root`
- Locate story file in `{story_root}/{epic}.{story}.*.md`
- HALT if missing and ask for correct story id/path
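As a point of reference, the two keys this step resolves might look like the following in `bmad-core/core-config.yaml`. The paths mirror the examples given in the Inputs section above and are illustrative only; real projects may use different locations and define additional keys.

```yaml
# Illustrative excerpt of bmad-core/core-config.yaml - only the keys this task reads
devStoryLocation: docs/project/stories # resolved as story_root
qa:
  qaLocation: docs/project/qa # resolved as qa_root
```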
### 1) Collect QA Findings
- Parse the latest gate YAML:
- `gate` (PASS|CONCERNS|FAIL|WAIVED)
- `top_issues[]` with `id`, `severity`, `finding`, `suggested_action`
- `nfr_validation.*.status` and notes
- `trace` coverage summary/gaps
- `test_design.coverage_gaps[]`
- `risk_summary.recommendations.must_fix[]` (if present)
- Read any present assessment markdowns and extract explicit gaps/recommendations
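For orientation, a gate YAML carrying the fields listed above might look roughly like this. The story slug, issue id, finding text, and statuses are invented for illustration; real gates may include additional fields, and the `trace` section is omitted here because its exact shape is not specified above.

```yaml
# Hypothetical gate file: {qa_root}/gates/2.2-toolkit-menu.yml
gate: CONCERNS
top_issues:
  - id: TEST-001
    severity: high
    finding: Toolkit Menu "Back" action has no automated coverage
    suggested_action: Add a test asserting Back returns to the Main Menu
nfr_validation:
  reliability:
    status: CONCERNS
    notes: Untested navigation path may regress silently
test_design:
  coverage_gaps:
    - Back action behavior untested (AC2)
    - Centralized dependencies enforcement untested (AC4)
risk_summary:
  recommendations:
    must_fix:
      - Close the AC2 navigation coverage gap
```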
### 2) Build Deterministic Fix Plan (Priority Order)
Apply in order, highest priority first:
1. High severity items in `top_issues` (security/perf/reliability/maintainability)
2. NFR statuses: all FAIL must be fixed → then CONCERNS
3. Test Design `coverage_gaps` (prioritize P0 scenarios if specified)
4. Trace uncovered requirements (AC-level)
5. Risk `must_fix` recommendations
6. Medium severity issues, then low
Guidance:
- Prefer tests closing coverage gaps before/with code changes
- Keep changes minimal and targeted; follow project architecture and TS/Deno rules
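Applied to a gate like the sketch above, the resulting plan might be recorded roughly as follows. The task prescribes only the ordering, not a file format, so treat this as an illustration rather than a required artifact.

```yaml
# Illustrative fix plan - ordering follows the priority rules above
- step: 1 # high-severity top_issue
  action: Add test asserting Toolkit Menu "Back" returns to Main Menu (TEST-001, AC2)
- step: 2 # NFR CONCERNS
  action: Re-validate the reliability concern once the navigation test passes
- step: 3 # test_design coverage gap
  action: Add static test that service/view imports go through deps.ts (AC4)
```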
### 3) Apply Changes
- Implement code fixes per plan
- Add missing tests to close coverage gaps (unit first; integration where required by AC)
- Keep imports centralized via `deps.ts` (see `docs/project/typescript-rules.md`)
- Follow DI boundaries in `src/core/di.ts` and existing patterns
### 4) Validate
- Run `deno lint` and fix issues
- Run `deno test -A` until all tests pass
- Iterate until clean
### 5) Update Story (Allowed Sections ONLY)
CRITICAL: Dev agent is ONLY authorized to update these sections of the story file. Do not modify any other sections (e.g., QA Results, Story, Acceptance Criteria, Dev Notes, Testing):
- Tasks / Subtasks Checkboxes (mark any fix subtask you added as done)
- Dev Agent Record →
- Agent Model Used (if changed)
- Debug Log References (commands/results, e.g., lint/tests)
- Completion Notes List (what changed, why, how)
- File List (all added/modified/deleted files)
- Change Log (new dated entry describing applied fixes)
- Status (see Rule below)
Status Rule:
- If gate was PASS and all identified gaps are closed → set `Status: Ready for Done`
- Otherwise → set `Status: Ready for Review` and notify QA to re-run the review
### 6) Do NOT Edit Gate Files
- Dev does not modify gate YAML. If fixes address issues, request QA to re-run `review-story` to update the gate
## Blocking Conditions
- Missing `bmad-core/core-config.yaml`
- Story file not found for `story_id`
- No QA artifacts found (neither gate nor assessments)
- HALT and request QA to generate at least a gate file (or proceed only with a clear, developer-provided fix list)
## Completion Checklist
- deno lint: 0 problems
- deno test -A: all tests pass
- All high severity `top_issues` addressed
- NFR FAIL → resolved; CONCERNS minimized or documented
- Coverage gaps closed or explicitly documented with rationale
- Story updated (allowed sections only) including File List and Change Log
- Status set according to Status Rule
## Example: Story 2.2
Given gate `docs/project/qa/gates/2.2-*.yml` shows
- `coverage_gaps`: Back action behavior untested (AC2)
- `coverage_gaps`: Centralized dependencies enforcement untested (AC4)
Fix plan:
- Add a test ensuring the Toolkit Menu "Back" action returns to Main Menu
- Add a static test verifying imports for service/view go through `deps.ts`
- Re-run lint/tests and update Dev Agent Record + File List accordingly
## Key Principles
- Deterministic, risk-first prioritization
- Minimal, maintainable changes
- Tests validate behavior and close gaps
- Strict adherence to allowed story update areas
- Gate ownership remains with QA; Dev signals readiness via Status
==================== END: .bmad-core/tasks/apply-qa-fixes.md ====================
==================== START: .bmad-core/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -102,7 +257,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -115,14 +269,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists; a summary of sections will be presented at the end for discussion)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be found or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
@@ -131,7 +283,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section; if anything major is found, halt and take corrective action
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -139,7 +290,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -153,7 +303,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -163,7 +312,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -187,6 +335,7 @@ The LLM will:
==================== END: .bmad-core/tasks/execute-checklist.md ====================
==================== START: .bmad-core/tasks/validate-next-story.md ====================
<!-- Powered by BMAD™ Core -->
# Validate Next Story Task
## Purpose
@@ -324,6 +473,7 @@ Provide a structured validation report including:
==================== END: .bmad-core/tasks/validate-next-story.md ====================
==================== START: .bmad-core/checklists/story-dod-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Story Definition of Done (DoD) Checklist
## Instructions for Developer Agent
@@ -351,14 +501,12 @@ The goal is quality delivery, not just checking boxes.]]
1. **Requirements Met:**
[[LLM: Be specific - list each requirement and whether it's complete]]
- [ ] All functional requirements specified in the story are implemented.
- [ ] All acceptance criteria defined in the story are met.
2. **Coding Standards & Project Structure:**
[[LLM: Code quality matters for maintainability. Check each item carefully]]
- [ ] All new/modified code strictly adheres to `Operational Guidelines`.
- [ ] All new/modified code aligns with `Project Structure` (file locations, naming, etc.).
- [ ] Adherence to `Tech Stack` for technologies/versions used (if story introduces or modifies tech usage).
@@ -370,7 +518,6 @@ The goal is quality delivery, not just checking boxes.]]
3. **Testing:**
[[LLM: Testing proves your code works. Be honest about test coverage]]
- [ ] All required unit tests as per the story and `Operational Guidelines` Testing Strategy are implemented.
- [ ] All required integration tests (if applicable) as per the story and `Operational Guidelines` Testing Strategy are implemented.
- [ ] All tests (unit, integration, E2E if applicable) pass successfully.
@@ -379,14 +526,12 @@ The goal is quality delivery, not just checking boxes.]]
4. **Functionality & Verification:**
[[LLM: Did you actually run and test your code? Be specific about what you tested]]
- [ ] Functionality has been manually verified by the developer (e.g., running the app locally, checking UI, testing API endpoints).
- [ ] Edge cases and potential error conditions considered and handled gracefully.
5. **Story Administration:**
[[LLM: Documentation helps the next developer. What should they know?]]
- [ ] All tasks within the story file are marked as complete.
- [ ] Any clarifications or decisions made during development are documented in the story file or linked appropriately.
- [ ] The story wrap-up section has been completed with notes on changes or information relevant to the next story or the overall project, the agent model that was primarily used during development, and a properly updated changelog.
@@ -394,7 +539,6 @@ The goal is quality delivery, not just checking boxes.]]
6. **Dependencies, Build & Configuration:**
[[LLM: Build issues block everyone. Ensure everything compiles and runs cleanly]]
- [ ] Project builds successfully without errors.
- [ ] Project linting passes
- [ ] Any new dependencies added were either pre-approved in the story requirements OR explicitly approved by the user during development (approval documented in story file).
@@ -405,7 +549,6 @@ The goal is quality delivery, not just checking boxes.]]
7. **Documentation (If Applicable):**
[[LLM: Good documentation prevents future confusion. What needs explaining?]]
- [ ] Relevant inline code documentation (e.g., JSDoc, TSDoc, Python docstrings) for new public APIs or complex logic is complete.
- [ ] User-facing documentation updated, if changes impact users.
- [ ] Technical documentation (e.g., READMEs, system diagrams) updated if significant architectural changes were made.

1539
dist/agents/pm.txt vendored

File diff suppressed because it is too large

553
dist/agents/po.txt vendored

@@ -75,30 +75,105 @@ persona:
- Documentation Ecosystem Integrity - Maintain consistency across all documents
commands:
- help: Show numbered list of the following commands to allow selection
- execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
- shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
- correct-course: execute the correct-course task
- create-epic: Create epic for brownfield projects (task brownfield-create-epic)
- create-story: Create user story from requirements (task brownfield-create-story)
- doc-out: Output full document to current destination file
- execute-checklist-po: Run task execute-checklist (checklist po-master-checklist)
- shard-doc {document} {destination}: run the task shard-doc against the optionally provided document to the specified destination
- validate-story-draft {story}: run the task validate-next-story against the provided story file
- yolo: Toggle Yolo Mode on/off - on will skip doc section confirmations
- exit: Exit (confirm)
dependencies:
checklists:
- change-checklist.md
- po-master-checklist.md
tasks:
- correct-course.md
- execute-checklist.md
- shard-doc.md
- correct-course.md
- validate-next-story.md
templates:
- story-tmpl.yaml
checklists:
- po-master-checklist.md
- change-checklist.md
```
==================== END: .bmad-core/agents/po.md ====================
==================== START: .bmad-core/tasks/correct-course.md ====================
<!-- Powered by BMAD™ Core -->
# Correct Course Task
## Purpose
- Guide a structured response to a change trigger using the `.bmad-core/checklists/change-checklist`.
- Analyze the impacts of the change on epics, project artifacts, and the MVP, guided by the checklist's structure.
- Explore potential solutions (e.g., adjust scope, rollback elements, re-scope features) as prompted by the checklist.
- Draft specific, actionable proposed updates to any affected project artifacts (e.g., epics, user stories, PRD sections, architecture document sections) based on the analysis.
- Produce a consolidated "Sprint Change Proposal" document that contains the impact analysis and the clearly drafted proposed edits for user review and approval.
- Ensure a clear handoff path if the nature of the changes necessitates fundamental replanning by other core agents (like PM or Architect).
## Instructions
### 1. Initial Setup & Mode Selection
- **Acknowledge Task & Inputs:**
- Confirm with the user that the "Correct Course Task" (Change Navigation & Integration) is being initiated.
- Verify the change trigger and ensure you have the user's initial explanation of the issue and its perceived impact.
- Confirm access to all relevant project artifacts (e.g., PRD, Epics/Stories, Architecture Documents, UI/UX Specifications) and, critically, the `.bmad-core/checklists/change-checklist`.
- **Establish Interaction Mode:**
- Ask the user their preferred interaction mode for this task:
- **"Incrementally (Default & Recommended):** Shall we work through the change-checklist section by section, discussing findings and collaboratively drafting proposed changes for each relevant part before moving to the next? This allows for detailed, step-by-step refinement."
- **"YOLO Mode (Batch Processing):** Or, would you prefer I conduct a more batched analysis based on the checklist and then present a consolidated set of findings and proposed changes for a broader review? This can be quicker for initial assessment but might require more extensive review of the combined proposals."
- Once the user chooses, confirm the selected mode and then inform the user: "We will now use the change-checklist to analyze the change and draft proposed updates. I will guide you through the checklist items based on our chosen interaction mode."
### 2. Execute Checklist Analysis (Iteratively or Batched, per Interaction Mode)
- Systematically work through Sections 1-4 of the change-checklist (typically covering Change Context, Epic/Story Impact Analysis, Artifact Conflict Resolution, and Path Evaluation/Recommendation).
- For each checklist item or logical group of items (depending on interaction mode):
- Present the relevant prompt(s) or considerations from the checklist to the user.
- Request necessary information and actively analyze the relevant project artifacts (PRD, epics, architecture documents, story history, etc.) to assess the impact.
- Discuss your findings for each item with the user.
- Record the status of each checklist item (e.g., `[x] Addressed`, `[N/A]`, `[!] Further Action Needed`) and any pertinent notes or decisions.
- Collaboratively agree on the "Recommended Path Forward" as prompted by Section 4 of the checklist.
### 3. Draft Proposed Changes (Iteratively or Batched)
- Based on the completed checklist analysis (Sections 1-4) and the agreed "Recommended Path Forward" (excluding scenarios requiring fundamental replans that would necessitate immediate handoff to PM/Architect):
- Identify the specific project artifacts that require updates (e.g., specific epics, user stories, PRD sections, architecture document components, diagrams).
- **Draft the proposed changes directly and explicitly for each identified artifact.** Examples include:
- Revising user story text, acceptance criteria, or priority.
- Adding, removing, reordering, or splitting user stories within epics.
- Proposing modified architecture diagram snippets (e.g., providing an updated Mermaid diagram block or a clear textual description of the change to an existing diagram).
- Updating technology lists, configuration details, or specific sections within the PRD or architecture documents.
- Drafting new, small supporting artifacts if necessary (e.g., a brief addendum for a specific decision).
- If in "Incremental Mode," discuss and refine these proposed edits for each artifact or small group of related artifacts with the user as they are drafted.
- If in "YOLO Mode," compile all drafted edits for presentation in the next step.
### 4. Generate "Sprint Change Proposal" with Edits
- Synthesize the complete change-checklist analysis (covering findings from Sections 1-4) and all the agreed-upon proposed edits (from Instruction 3) into a single document titled "Sprint Change Proposal." This proposal should align with the structure suggested by Section 5 of the change-checklist.
- The proposal must clearly present:
- **Analysis Summary:** A concise overview of the original issue, its analyzed impact (on epics, artifacts, MVP scope), and the rationale for the chosen path forward.
- **Specific Proposed Edits:** For each affected artifact, clearly show or describe the exact changes (e.g., "Change Story X.Y from: [old text] To: [new text]", "Add new Acceptance Criterion to Story A.B: [new AC]", "Update Section 3.2 of Architecture Document as follows: [new/modified text or diagram description]").
- Present the complete draft of the "Sprint Change Proposal" to the user for final review and feedback. Incorporate any final adjustments requested by the user.
### 5. Finalize & Determine Next Steps
- Obtain explicit user approval for the "Sprint Change Proposal," including all the specific edits documented within it.
- Provide the finalized "Sprint Change Proposal" document to the user.
- **Based on the nature of the approved changes:**
- **If the approved edits sufficiently address the change and can be implemented directly or organized by a PO/SM:** State that the "Correct Course Task" is complete regarding analysis and change proposal, and the user can now proceed with implementing or logging these changes (e.g., updating actual project documents, backlog items). Suggest handoff to a PO/SM agent for backlog organization if appropriate.
- **If the analysis and proposed path (as per checklist Section 4 and potentially Section 6) indicate that the change requires a more fundamental replan (e.g., significant scope change, major architectural rework):** Clearly state this conclusion. Advise the user that the next step involves engaging the primary PM or Architect agents, using the "Sprint Change Proposal" as critical input and context for that deeper replanning effort.
## Output Deliverables
- **Primary:** A "Sprint Change Proposal" document (in markdown format). This document will contain:
- A summary of the change-checklist analysis (issue, impact, rationale for the chosen path).
- Specific, clearly drafted proposed edits for all affected project artifacts.
- **Implicit:** An annotated change-checklist (or the record of its completion) reflecting the discussions, findings, and decisions made during the process.
==================== END: .bmad-core/tasks/correct-course.md ====================
==================== START: .bmad-core/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -110,7 +185,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -123,14 +197,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists; a summary of sections will be presented at the end for discussion)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be found or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
@@ -139,7 +211,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section; if anything major is found, halt and take corrective action
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -147,7 +218,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -161,7 +231,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -171,7 +240,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -195,6 +263,7 @@ The LLM will:
==================== END: .bmad-core/tasks/execute-checklist.md ====================
==================== START: .bmad-core/tasks/shard-doc.md ====================
<!-- Powered by BMAD™ Core -->
# Document Sharding Task
## Purpose
@@ -288,13 +357,11 @@ CRITICAL: Use proper parsing that understands markdown context. A ## inside a co
For each extracted section:
1. **Generate filename**: Convert the section heading to lowercase-dash-case
- Remove special characters
- Replace spaces with dashes
- Example: "## Tech Stack" → `tech-stack.md`
2. **Adjust heading levels**:
- The level 2 heading becomes level 1 (# instead of ##) in the sharded new document
- All subsection levels decrease by 1:
@@ -384,80 +451,8 @@ Document sharded successfully:
- Ensure the sharding is reversible (could reconstruct the original from shards)
==================== END: .bmad-core/tasks/shard-doc.md ====================
==================== START: .bmad-core/tasks/correct-course.md ====================
# Correct Course Task
## Purpose
- Guide a structured response to a change trigger using the `.bmad-core/checklists/change-checklist`.
- Analyze the impacts of the change on epics, project artifacts, and the MVP, guided by the checklist's structure.
- Explore potential solutions (e.g., adjust scope, rollback elements, re-scope features) as prompted by the checklist.
- Draft specific, actionable proposed updates to any affected project artifacts (e.g., epics, user stories, PRD sections, architecture document sections) based on the analysis.
- Produce a consolidated "Sprint Change Proposal" document that contains the impact analysis and the clearly drafted proposed edits for user review and approval.
- Ensure a clear handoff path if the nature of the changes necessitates fundamental replanning by other core agents (like PM or Architect).
## Instructions
### 1. Initial Setup & Mode Selection
- **Acknowledge Task & Inputs:**
- Confirm with the user that the "Correct Course Task" (Change Navigation & Integration) is being initiated.
- Verify the change trigger and ensure you have the user's initial explanation of the issue and its perceived impact.
- Confirm access to all relevant project artifacts (e.g., PRD, Epics/Stories, Architecture Documents, UI/UX Specifications) and, critically, the `.bmad-core/checklists/change-checklist`.
- **Establish Interaction Mode:**
- Ask the user their preferred interaction mode for this task:
- **"Incrementally (Default & Recommended):** Shall we work through the change-checklist section by section, discussing findings and collaboratively drafting proposed changes for each relevant part before moving to the next? This allows for detailed, step-by-step refinement."
- **"YOLO Mode (Batch Processing):** Or, would you prefer I conduct a more batched analysis based on the checklist and then present a consolidated set of findings and proposed changes for a broader review? This can be quicker for initial assessment but might require more extensive review of the combined proposals."
- Once the user chooses, confirm the selected mode and then inform the user: "We will now use the change-checklist to analyze the change and draft proposed updates. I will guide you through the checklist items based on our chosen interaction mode."
### 2. Execute Checklist Analysis (Iteratively or Batched, per Interaction Mode)
- Systematically work through Sections 1-4 of the change-checklist (typically covering Change Context, Epic/Story Impact Analysis, Artifact Conflict Resolution, and Path Evaluation/Recommendation).
- For each checklist item or logical group of items (depending on interaction mode):
- Present the relevant prompt(s) or considerations from the checklist to the user.
- Request necessary information and actively analyze the relevant project artifacts (PRD, epics, architecture documents, story history, etc.) to assess the impact.
- Discuss your findings for each item with the user.
- Record the status of each checklist item (e.g., `[x] Addressed`, `[N/A]`, `[!] Further Action Needed`) and any pertinent notes or decisions.
- Collaboratively agree on the "Recommended Path Forward" as prompted by Section 4 of the checklist.
### 3. Draft Proposed Changes (Iteratively or Batched)
- Based on the completed checklist analysis (Sections 1-4) and the agreed "Recommended Path Forward" (excluding scenarios requiring fundamental replans that would necessitate immediate handoff to PM/Architect):
- Identify the specific project artifacts that require updates (e.g., specific epics, user stories, PRD sections, architecture document components, diagrams).
- **Draft the proposed changes directly and explicitly for each identified artifact.** Examples include:
- Revising user story text, acceptance criteria, or priority.
- Adding, removing, reordering, or splitting user stories within epics.
- Proposing modified architecture diagram snippets (e.g., providing an updated Mermaid diagram block or a clear textual description of the change to an existing diagram).
- Updating technology lists, configuration details, or specific sections within the PRD or architecture documents.
- Drafting new, small supporting artifacts if necessary (e.g., a brief addendum for a specific decision).
- If in "Incremental Mode," discuss and refine these proposed edits for each artifact or small group of related artifacts with the user as they are drafted.
- If in "YOLO Mode," compile all drafted edits for presentation in the next step.
### 4. Generate "Sprint Change Proposal" with Edits
- Synthesize the complete change-checklist analysis (covering findings from Sections 1-4) and all the agreed-upon proposed edits (from Instruction 3) into a single document titled "Sprint Change Proposal." This proposal should align with the structure suggested by Section 5 of the change-checklist.
- The proposal must clearly present:
- **Analysis Summary:** A concise overview of the original issue, its analyzed impact (on epics, artifacts, MVP scope), and the rationale for the chosen path forward.
- **Specific Proposed Edits:** For each affected artifact, clearly show or describe the exact changes (e.g., "Change Story X.Y from: [old text] To: [new text]", "Add new Acceptance Criterion to Story A.B: [new AC]", "Update Section 3.2 of Architecture Document as follows: [new/modified text or diagram description]").
- Present the complete draft of the "Sprint Change Proposal" to the user for final review and feedback. Incorporate any final adjustments requested by the user.
### 5. Finalize & Determine Next Steps
- Obtain explicit user approval for the "Sprint Change Proposal," including all the specific edits documented within it.
- Provide the finalized "Sprint Change Proposal" document to the user.
- **Based on the nature of the approved changes:**
- **If the approved edits sufficiently address the change and can be implemented directly or organized by a PO/SM:** State that the "Correct Course Task" is complete regarding analysis and change proposal, and the user can now proceed with implementing or logging these changes (e.g., updating actual project documents, backlog items). Suggest handoff to a PO/SM agent for backlog organization if appropriate.
- **If the analysis and proposed path (as per checklist Section 4 and potentially Section 6) indicate that the change requires a more fundamental replan (e.g., significant scope change, major architectural rework):** Clearly state this conclusion. Advise the user that the next step involves engaging the primary PM or Architect agents, using the "Sprint Change Proposal" as critical input and context for that deeper replanning effort.
## Output Deliverables
- **Primary:** A "Sprint Change Proposal" document (in markdown format). This document will contain:
- A summary of the change-checklist analysis (issue, impact, rationale for the chosen path).
- Specific, clearly drafted proposed edits for all affected project artifacts.
- **Implicit:** An annotated change-checklist (or the record of its completion) reflecting the discussions, findings, and decisions made during the process.
==================== END: .bmad-core/tasks/correct-course.md ====================
==================== START: .bmad-core/tasks/validate-next-story.md ====================
<!-- Powered by BMAD™ Core -->
# Validate Next Story Task
## Purpose
@@ -595,6 +590,7 @@ Provide a structured validation report including:
==================== END: .bmad-core/tasks/validate-next-story.md ====================
==================== START: .bmad-core/templates/story-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: story-template-v2
name: Story Document
@@ -734,7 +730,194 @@ sections:
editors: [qa-agent]
==================== END: .bmad-core/templates/story-tmpl.yaml ====================
==================== START: .bmad-core/checklists/change-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Change Navigation Checklist
**Purpose:** To systematically guide the selected Agent and user through the analysis and planning required when a significant change (pivot, tech issue, missing requirement, failed story) is identified during the BMad workflow.
**Instructions:** Review each item with the user. Mark `[x]` for completed/confirmed, `[N/A]` if not applicable, or add notes for discussion points.
[[LLM: INITIALIZATION INSTRUCTIONS - CHANGE NAVIGATION
Changes during development are inevitable, but how we handle them determines project success or failure.
Before proceeding, understand:
1. This checklist is for SIGNIFICANT changes that affect the project direction
2. Minor adjustments within a story don't require this process
3. The goal is to minimize wasted work while adapting to new realities
4. User buy-in is critical - they must understand and approve changes
Required context:
- The triggering story or issue
- Current project state (completed stories, current epic)
- Access to PRD, architecture, and other key documents
- Understanding of remaining work planned
APPROACH:
This is an interactive process with the user. Work through each section together, discussing implications and options. The user makes final decisions, but provide expert guidance on technical feasibility and impact.
REMEMBER: Changes are opportunities to improve, not failures. Handle them professionally and constructively.]]
---
## 1. Understand the Trigger & Context
[[LLM: Start by fully understanding what went wrong and why. Don't jump to solutions yet. Ask probing questions:
- What exactly happened that triggered this review?
- Is this a one-time issue or symptomatic of a larger problem?
- Could this have been anticipated earlier?
- What assumptions were incorrect?
Be specific and factual, not blame-oriented.]]
- [ ] **Identify Triggering Story:** Clearly identify the story (or stories) that revealed the issue.
- [ ] **Define the Issue:** Articulate the core problem precisely.
- [ ] Is it a technical limitation/dead-end?
- [ ] Is it a newly discovered requirement?
- [ ] Is it a fundamental misunderstanding of existing requirements?
- [ ] Is it a necessary pivot based on feedback or new information?
- [ ] Is it a failed/abandoned story needing a new approach?
- [ ] **Assess Initial Impact:** Describe the immediate observed consequences (e.g., blocked progress, incorrect functionality, non-viable tech).
- [ ] **Gather Evidence:** Note any specific logs, error messages, user feedback, or analysis that supports the issue definition.
## 2. Epic Impact Assessment
[[LLM: Changes ripple through the project structure. Systematically evaluate:
1. Can we salvage the current epic with modifications?
2. Do future epics still make sense given this change?
3. Are we creating or eliminating dependencies?
4. Does the epic sequence need reordering?
Think about both immediate and downstream effects.]]
- [ ] **Analyze Current Epic:**
- [ ] Can the current epic containing the trigger story still be completed?
- [ ] Does the current epic need modification (story changes, additions, removals)?
- [ ] Should the current epic be abandoned or fundamentally redefined?
- [ ] **Analyze Future Epics:**
- [ ] Review all remaining planned epics.
- [ ] Does the issue require changes to planned stories in future epics?
- [ ] Does the issue invalidate any future epics?
- [ ] Does the issue necessitate the creation of entirely new epics?
- [ ] Should the order/priority of future epics be changed?
- [ ] **Summarize Epic Impact:** Briefly document the overall effect on the project's epic structure and flow.
## 3. Artifact Conflict & Impact Analysis
[[LLM: Documentation drives development in BMad. Check each artifact:
1. Does this change invalidate documented decisions?
2. Are architectural assumptions still valid?
3. Do user flows need rethinking?
4. Are technical constraints different than documented?
Be thorough - missed conflicts cause future problems.]]
- [ ] **Review PRD:**
- [ ] Does the issue conflict with the core goals or requirements stated in the PRD?
- [ ] Does the PRD need clarification or updates based on the new understanding?
- [ ] **Review Architecture Document:**
- [ ] Does the issue conflict with the documented architecture (components, patterns, tech choices)?
- [ ] Are specific components/diagrams/sections impacted?
- [ ] Does the technology list need updating?
- [ ] Do data models or schemas need revision?
- [ ] Are external API integrations affected?
- [ ] **Review Frontend Spec (if applicable):**
- [ ] Does the issue conflict with the FE architecture, component library choice, or UI/UX design?
- [ ] Are specific FE components or user flows impacted?
- [ ] **Review Other Artifacts (if applicable):**
- [ ] Consider impact on deployment scripts, IaC, monitoring setup, etc.
- [ ] **Summarize Artifact Impact:** List all artifacts requiring updates and the nature of the changes needed.
## 4. Path Forward Evaluation
[[LLM: Present options clearly with pros/cons. For each path:
1. What's the effort required?
2. What work gets thrown away?
3. What risks are we taking?
4. How does this affect timeline?
5. Is this sustainable long-term?
Be honest about trade-offs. There's rarely a perfect solution.]]
- [ ] **Option 1: Direct Adjustment / Integration:**
- [ ] Can the issue be addressed by modifying/adding future stories within the existing plan?
- [ ] Define the scope and nature of these adjustments.
- [ ] Assess feasibility, effort, and risks of this path.
- [ ] **Option 2: Potential Rollback:**
- [ ] Would reverting completed stories significantly simplify addressing the issue?
- [ ] Identify specific stories/commits to consider for rollback.
- [ ] Assess the effort required for rollback.
- [ ] Assess the impact of rollback (lost work, data implications).
- [ ] Compare the net benefit/cost vs. Direct Adjustment.
- [ ] **Option 3: PRD MVP Review & Potential Re-scoping:**
- [ ] Is the original PRD MVP still achievable given the issue and constraints?
- [ ] Does the MVP scope need reduction (removing features/epics)?
- [ ] Do the core MVP goals need modification?
- [ ] Are alternative approaches needed to meet the original MVP intent?
- [ ] **Extreme Case:** Does the issue necessitate a fundamental replan or potentially a new PRD V2 (to be handled by PM)?
- [ ] **Select Recommended Path:** Based on the evaluation, agree on the most viable path forward.
## 5. Sprint Change Proposal Components
[[LLM: The proposal must be actionable and clear. Ensure:
1. The issue is explained in plain language
2. Impacts are quantified where possible
3. The recommended path has clear rationale
4. Next steps are specific and assigned
5. Success criteria for the change are defined
This proposal guides all subsequent work.]]
(Ensure all agreed-upon points from previous sections are captured in the proposal)
- [ ] **Identified Issue Summary:** Clear, concise problem statement.
- [ ] **Epic Impact Summary:** How epics are affected.
- [ ] **Artifact Adjustment Needs:** List of documents to change.
- [ ] **Recommended Path Forward:** Chosen solution with rationale.
- [ ] **PRD MVP Impact:** Changes to scope/goals (if any).
- [ ] **High-Level Action Plan:** Next steps for stories/updates.
- [ ] **Agent Handoff Plan:** Identify roles needed (PM, Arch, Design Arch, PO).
## 6. Final Review & Handoff
[[LLM: Changes require coordination. Before concluding:
1. Is the user fully aligned with the plan?
2. Do all stakeholders understand the impacts?
3. Are handoffs to other agents clear?
4. Is there a rollback plan if the change fails?
5. How will we validate the change worked?
Get explicit approval - implicit agreement causes problems.
FINAL REPORT:
After completing the checklist, provide a concise summary:
- What changed and why
- What we're doing about it
- Who needs to do what
- When we'll know if it worked
Keep it action-oriented and forward-looking.]]
- [ ] **Review Checklist:** Confirm all relevant items were discussed.
- [ ] **Review Sprint Change Proposal:** Ensure it accurately reflects the discussion and decisions.
- [ ] **User Approval:** Obtain explicit user approval for the proposal.
- [ ] **Confirm Next Steps:** Reiterate the handoff plan and the next actions to be taken by specific agents.
---
==================== END: .bmad-core/checklists/change-checklist.md ====================
==================== START: .bmad-core/checklists/po-master-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Product Owner (PO) Master Validation Checklist
This checklist serves as a comprehensive framework for the Product Owner to validate project plans before development execution. It adapts intelligently based on project type (greenfield vs brownfield) and includes UI/UX considerations when applicable.
@@ -745,12 +928,10 @@ PROJECT TYPE DETECTION:
First, determine the project type by checking:
1. Is this a GREENFIELD project (new from scratch)?
- Look for: New project initialization, no existing codebase references
- Check for: prd.md, architecture.md, new project setup stories
2. Is this a BROWNFIELD project (enhancing existing system)?
- Look for: References to existing codebase, enhancement/modification language
- Check for: brownfield-prd.md, brownfield-architecture.md, existing system analysis
@@ -1084,7 +1265,6 @@ Ask the user if they want to work through the checklist:
Generate a comprehensive validation report that adapts to project type:
1. Executive Summary
- Project type: [Greenfield/Brownfield] with [UI/No UI]
- Overall readiness (percentage)
- Go/No-Go recommendation
@@ -1094,42 +1274,36 @@ Generate a comprehensive validation report that adapts to project type:
2. Project-Specific Analysis
FOR GREENFIELD:
- Setup completeness
- Dependency sequencing
- MVP scope appropriateness
- Development timeline feasibility
FOR BROWNFIELD:
- Integration risk level (High/Medium/Low)
- Existing system impact assessment
- Rollback readiness
- User disruption potential
3. Risk Assessment
- Top 5 risks by severity
- Mitigation recommendations
- Timeline impact of addressing issues
- [BROWNFIELD] Specific integration risks
4. MVP Completeness
- Core features coverage
- Missing essential functionality
- Scope creep identified
- True MVP vs over-engineering
5. Implementation Readiness
- Developer clarity score (1-10)
- Ambiguous requirements count
- Missing technical details
- [BROWNFIELD] Integration point clarity
6. Recommendations
- Must-fix before development
- Should-fix for quality
- Consider for improvement
@@ -1177,188 +1351,3 @@ After presenting the report, ask if the user wants:
- **CONDITIONAL**: The plan requires specific adjustments before proceeding.
- **REJECTED**: The plan requires significant revision to address critical deficiencies.
==================== END: .bmad-core/checklists/po-master-checklist.md ====================
==================== START: .bmad-core/checklists/change-checklist.md ====================
# Change Navigation Checklist
**Purpose:** To systematically guide the selected Agent and user through the analysis and planning required when a significant change (pivot, tech issue, missing requirement, failed story) is identified during the BMad workflow.
**Instructions:** Review each item with the user. Mark `[x]` for completed/confirmed, `[N/A]` if not applicable, or add notes for discussion points.
[[LLM: INITIALIZATION INSTRUCTIONS - CHANGE NAVIGATION
Changes during development are inevitable, but how we handle them determines project success or failure.
Before proceeding, understand:
1. This checklist is for SIGNIFICANT changes that affect the project direction
2. Minor adjustments within a story don't require this process
3. The goal is to minimize wasted work while adapting to new realities
4. User buy-in is critical - they must understand and approve changes
Required context:
- The triggering story or issue
- Current project state (completed stories, current epic)
- Access to PRD, architecture, and other key documents
- Understanding of remaining work planned
APPROACH:
This is an interactive process with the user. Work through each section together, discussing implications and options. The user makes final decisions, but provide expert guidance on technical feasibility and impact.
REMEMBER: Changes are opportunities to improve, not failures. Handle them professionally and constructively.]]
---
## 1. Understand the Trigger & Context
[[LLM: Start by fully understanding what went wrong and why. Don't jump to solutions yet. Ask probing questions:
- What exactly happened that triggered this review?
- Is this a one-time issue or symptomatic of a larger problem?
- Could this have been anticipated earlier?
- What assumptions were incorrect?
Be specific and factual, not blame-oriented.]]
- [ ] **Identify Triggering Story:** Clearly identify the story (or stories) that revealed the issue.
- [ ] **Define the Issue:** Articulate the core problem precisely.
- [ ] Is it a technical limitation/dead-end?
- [ ] Is it a newly discovered requirement?
- [ ] Is it a fundamental misunderstanding of existing requirements?
- [ ] Is it a necessary pivot based on feedback or new information?
- [ ] Is it a failed/abandoned story needing a new approach?
- [ ] **Assess Initial Impact:** Describe the immediate observed consequences (e.g., blocked progress, incorrect functionality, non-viable tech).
- [ ] **Gather Evidence:** Note any specific logs, error messages, user feedback, or analysis that supports the issue definition.
## 2. Epic Impact Assessment
[[LLM: Changes ripple through the project structure. Systematically evaluate:
1. Can we salvage the current epic with modifications?
2. Do future epics still make sense given this change?
3. Are we creating or eliminating dependencies?
4. Does the epic sequence need reordering?
Think about both immediate and downstream effects.]]
- [ ] **Analyze Current Epic:**
- [ ] Can the current epic containing the trigger story still be completed?
- [ ] Does the current epic need modification (story changes, additions, removals)?
- [ ] Should the current epic be abandoned or fundamentally redefined?
- [ ] **Analyze Future Epics:**
- [ ] Review all remaining planned epics.
- [ ] Does the issue require changes to planned stories in future epics?
- [ ] Does the issue invalidate any future epics?
- [ ] Does the issue necessitate the creation of entirely new epics?
- [ ] Should the order/priority of future epics be changed?
- [ ] **Summarize Epic Impact:** Briefly document the overall effect on the project's epic structure and flow.
## 3. Artifact Conflict & Impact Analysis
[[LLM: Documentation drives development in BMad. Check each artifact:
1. Does this change invalidate documented decisions?
2. Are architectural assumptions still valid?
3. Do user flows need rethinking?
4. Are technical constraints different than documented?
Be thorough - missed conflicts cause future problems.]]
- [ ] **Review PRD:**
- [ ] Does the issue conflict with the core goals or requirements stated in the PRD?
- [ ] Does the PRD need clarification or updates based on the new understanding?
- [ ] **Review Architecture Document:**
- [ ] Does the issue conflict with the documented architecture (components, patterns, tech choices)?
- [ ] Are specific components/diagrams/sections impacted?
- [ ] Does the technology list need updating?
- [ ] Do data models or schemas need revision?
- [ ] Are external API integrations affected?
- [ ] **Review Frontend Spec (if applicable):**
- [ ] Does the issue conflict with the FE architecture, component library choice, or UI/UX design?
- [ ] Are specific FE components or user flows impacted?
- [ ] **Review Other Artifacts (if applicable):**
- [ ] Consider impact on deployment scripts, IaC, monitoring setup, etc.
- [ ] **Summarize Artifact Impact:** List all artifacts requiring updates and the nature of the changes needed.
## 4. Path Forward Evaluation
[[LLM: Present options clearly with pros/cons. For each path:
1. What's the effort required?
2. What work gets thrown away?
3. What risks are we taking?
4. How does this affect timeline?
5. Is this sustainable long-term?
Be honest about trade-offs. There's rarely a perfect solution.]]
- [ ] **Option 1: Direct Adjustment / Integration:**
- [ ] Can the issue be addressed by modifying/adding future stories within the existing plan?
- [ ] Define the scope and nature of these adjustments.
- [ ] Assess feasibility, effort, and risks of this path.
- [ ] **Option 2: Potential Rollback:**
- [ ] Would reverting completed stories significantly simplify addressing the issue?
- [ ] Identify specific stories/commits to consider for rollback.
- [ ] Assess the effort required for rollback.
- [ ] Assess the impact of rollback (lost work, data implications).
- [ ] Compare the net benefit/cost vs. Direct Adjustment.
- [ ] **Option 3: PRD MVP Review & Potential Re-scoping:**
- [ ] Is the original PRD MVP still achievable given the issue and constraints?
- [ ] Does the MVP scope need reduction (removing features/epics)?
- [ ] Do the core MVP goals need modification?
- [ ] Are alternative approaches needed to meet the original MVP intent?
- [ ] **Extreme Case:** Does the issue necessitate a fundamental replan or potentially a new PRD V2 (to be handled by PM)?
- [ ] **Select Recommended Path:** Based on the evaluation, agree on the most viable path forward.
## 5. Sprint Change Proposal Components
[[LLM: The proposal must be actionable and clear. Ensure:
1. The issue is explained in plain language
2. Impacts are quantified where possible
3. The recommended path has clear rationale
4. Next steps are specific and assigned
5. Success criteria for the change are defined
This proposal guides all subsequent work.]]
(Ensure all agreed-upon points from previous sections are captured in the proposal)
- [ ] **Identified Issue Summary:** Clear, concise problem statement.
- [ ] **Epic Impact Summary:** How epics are affected.
- [ ] **Artifact Adjustment Needs:** List of documents to change.
- [ ] **Recommended Path Forward:** Chosen solution with rationale.
- [ ] **PRD MVP Impact:** Changes to scope/goals (if any).
- [ ] **High-Level Action Plan:** Next steps for stories/updates.
- [ ] **Agent Handoff Plan:** Identify roles needed (PM, Arch, Design Arch, PO).
## 6. Final Review & Handoff
[[LLM: Changes require coordination. Before concluding:
1. Is the user fully aligned with the plan?
2. Do all stakeholders understand the impacts?
3. Are handoffs to other agents clear?
4. Is there a rollback plan if the change fails?
5. How will we validate the change worked?
Get explicit approval - implicit agreement causes problems.
FINAL REPORT:
After completing the checklist, provide a concise summary:
- What changed and why
- What we're doing about it
- Who needs to do what
- When we'll know if it worked
Keep it action-oriented and forward-looking.]]
- [ ] **Review Checklist:** Confirm all relevant items were discussed.
- [ ] **Review Sprint Change Proposal:** Ensure it accurately reflects the discussion and decisions.
- [ ] **User Approval:** Obtain explicit user approval for the proposal.
- [ ] **Confirm Next Steps:** Reiterate the handoff plan and the next actions to be taken by specific agents.
---
==================== END: .bmad-core/checklists/change-checklist.md ====================

dist/agents/qa.txt (vendored) - 1756 changed lines

File diff suppressed because it is too large

dist/agents/sm.txt (vendored) - 175 changed lines

@@ -68,23 +68,98 @@ persona:
- You are NOT allowed to implement stories or modify code EVER!
commands:
- help: Show numbered list of the following commands to allow selection
- draft: Execute task create-next-story.md
- correct-course: Execute task correct-course.md
- draft: Execute task create-next-story.md
- story-checklist: Execute task execute-checklist.md with checklist story-draft-checklist.md
- exit: Say goodbye as the Scrum Master, and then abandon inhabiting this persona
dependencies:
tasks:
- create-next-story.md
- execute-checklist.md
- correct-course.md
templates:
- story-tmpl.yaml
checklists:
- story-draft-checklist.md
tasks:
- correct-course.md
- create-next-story.md
- execute-checklist.md
templates:
- story-tmpl.yaml
```
==================== END: .bmad-core/agents/sm.md ====================
==================== START: .bmad-core/tasks/correct-course.md ====================
<!-- Powered by BMAD™ Core -->
# Correct Course Task
## Purpose
- Guide a structured response to a change trigger using the `.bmad-core/checklists/change-checklist`.
- Analyze the impacts of the change on epics, project artifacts, and the MVP, guided by the checklist's structure.
- Explore potential solutions (e.g., adjust scope, rollback elements, re-scope features) as prompted by the checklist.
- Draft specific, actionable proposed updates to any affected project artifacts (e.g., epics, user stories, PRD sections, architecture document sections) based on the analysis.
- Produce a consolidated "Sprint Change Proposal" document that contains the impact analysis and the clearly drafted proposed edits for user review and approval.
- Ensure a clear handoff path if the nature of the changes necessitates fundamental replanning by other core agents (like PM or Architect).
## Instructions
### 1. Initial Setup & Mode Selection
- **Acknowledge Task & Inputs:**
- Confirm with the user that the "Correct Course Task" (Change Navigation & Integration) is being initiated.
- Verify the change trigger and ensure you have the user's initial explanation of the issue and its perceived impact.
- Confirm access to all relevant project artifacts (e.g., PRD, Epics/Stories, Architecture Documents, UI/UX Specifications) and, critically, the `.bmad-core/checklists/change-checklist`.
- **Establish Interaction Mode:**
- Ask the user their preferred interaction mode for this task:
- **"Incrementally (Default & Recommended):** Shall we work through the change-checklist section by section, discussing findings and collaboratively drafting proposed changes for each relevant part before moving to the next? This allows for detailed, step-by-step refinement."
- **"YOLO Mode (Batch Processing):** Or, would you prefer I conduct a more batched analysis based on the checklist and then present a consolidated set of findings and proposed changes for a broader review? This can be quicker for initial assessment but might require more extensive review of the combined proposals."
- Once the user chooses, confirm the selected mode and then inform the user: "We will now use the change-checklist to analyze the change and draft proposed updates. I will guide you through the checklist items based on our chosen interaction mode."
### 2. Execute Checklist Analysis (Iteratively or Batched, per Interaction Mode)
- Systematically work through Sections 1-4 of the change-checklist (typically covering Change Context, Epic/Story Impact Analysis, Artifact Conflict Resolution, and Path Evaluation/Recommendation).
- For each checklist item or logical group of items (depending on interaction mode):
- Present the relevant prompt(s) or considerations from the checklist to the user.
- Request necessary information and actively analyze the relevant project artifacts (PRD, epics, architecture documents, story history, etc.) to assess the impact.
- Discuss your findings for each item with the user.
- Record the status of each checklist item (e.g., `[x] Addressed`, `[N/A]`, `[!] Further Action Needed`) and any pertinent notes or decisions.
- Collaboratively agree on the "Recommended Path Forward" as prompted by Section 4 of the checklist.
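As a small illustration of the status-recording step above, annotated checklist items might look like the following sketch (the epic, story, and note content are hypothetical):

```markdown
## 2. Epic Impact Assessment
- [x] **Analyze Current Epic:** Epic 3 can still be completed; Story 3.4 will be split into two smaller stories.
- [N/A] **Analyze Future Epics:** No remaining epics depend on the affected integration.
- [!] **Summarize Epic Impact:** Further action needed - PO to confirm the Story 3.4 split before drafting edits.
```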
### 3. Draft Proposed Changes (Iteratively or Batched)
- Based on the completed checklist analysis (Sections 1-4) and the agreed "Recommended Path Forward" (excluding scenarios requiring fundamental replans that would necessitate immediate handoff to PM/Architect):
- Identify the specific project artifacts that require updates (e.g., specific epics, user stories, PRD sections, architecture document components, diagrams).
- **Draft the proposed changes directly and explicitly for each identified artifact.** Examples include:
- Revising user story text, acceptance criteria, or priority.
- Adding, removing, reordering, or splitting user stories within epics.
- Proposing modified architecture diagram snippets (e.g., providing an updated Mermaid diagram block or a clear textual description of the change to an existing diagram).
- Updating technology lists, configuration details, or specific sections within the PRD or architecture documents.
- Drafting new, small supporting artifacts if necessary (e.g., a brief addendum for a specific decision).
- If in "Incremental Mode," discuss and refine these proposed edits for each artifact or small group of related artifacts with the user as they are drafted.
- If in "YOLO Mode," compile all drafted edits for presentation in the next step.
### 4. Generate "Sprint Change Proposal" with Edits
- Synthesize the complete change-checklist analysis (covering findings from Sections 1-4) and all the agreed-upon proposed edits (from Instruction 3) into a single document titled "Sprint Change Proposal." This proposal should align with the structure suggested by Section 5 of the change-checklist.
- The proposal must clearly present:
- **Analysis Summary:** A concise overview of the original issue, its analyzed impact (on epics, artifacts, MVP scope), and the rationale for the chosen path forward.
- **Specific Proposed Edits:** For each affected artifact, clearly show or describe the exact changes (e.g., "Change Story X.Y from: [old text] To: [new text]", "Add new Acceptance Criterion to Story A.B: [new AC]", "Update Section 3.2 of Architecture Document as follows: [new/modified text or diagram description]").
- Present the complete draft of the "Sprint Change Proposal" to the user for final review and feedback. Incorporate any final adjustments requested by the user.
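For orientation, a minimal Sprint Change Proposal skeleton might look like the sketch below; the story, section, and feature names are placeholders rather than outputs of a real analysis:

```markdown
# Sprint Change Proposal

## Analysis Summary
- Identified Issue: <concise statement of the triggering problem>
- Epic & Artifact Impact: <affected epics, documents, and MVP scope>
- Recommended Path Forward: <chosen option and rationale>

## Specific Proposed Edits
- Story X.Y: change acceptance criterion from "<old text>" to "<new text>"
- Architecture Document, Section 3.2: replace the affected component description with "<new/modified text or diagram description>"
- PRD: adjust MVP scope as agreed (<feature or epic removed/modified>)

## High-Level Action Plan & Agent Handoffs
- <agent or role>: <next action>
```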
### 5. Finalize & Determine Next Steps
- Obtain explicit user approval for the "Sprint Change Proposal," including all the specific edits documented within it.
- Provide the finalized "Sprint Change Proposal" document to the user.
- **Based on the nature of the approved changes:**
- **If the approved edits sufficiently address the change and can be implemented directly or organized by a PO/SM:** State that the "Correct Course Task" is complete regarding analysis and change proposal, and the user can now proceed with implementing or logging these changes (e.g., updating actual project documents, backlog items). Suggest handoff to a PO/SM agent for backlog organization if appropriate.
- **If the analysis and proposed path (as per checklist Section 4 and potentially Section 6) indicate that the change requires a more fundamental replan (e.g., significant scope change, major architectural rework):** Clearly state this conclusion. Advise the user that the next step involves engaging the primary PM or Architect agents, using the "Sprint Change Proposal" as critical input and context for that deeper replanning effort.
## Output Deliverables
- **Primary:** A "Sprint Change Proposal" document (in markdown format). This document will contain:
- A summary of the change-checklist analysis (issue, impact, rationale for the chosen path).
- Specific, clearly drafted proposed edits for all affected project artifacts.
- **Implicit:** An annotated change-checklist (or the record of its completion) reflecting the discussions, findings, and decisions made during the process.
==================== END: .bmad-core/tasks/correct-course.md ====================
==================== START: .bmad-core/tasks/create-next-story.md ====================
<!-- Powered by BMAD™ Core -->
# Create Next Story Task
## Purpose
@@ -200,6 +275,7 @@ ALWAYS cite source documents: `[Source: architecture/{filename}.md#{section}]`
==================== END: .bmad-core/tasks/create-next-story.md ====================
==================== START: .bmad-core/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -211,7 +287,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -224,14 +299,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists, there will be a summary of sections at the end to discuss)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be resolved or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
@@ -240,7 +313,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -248,7 +320,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -262,7 +333,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -272,7 +342,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -295,80 +364,8 @@ The LLM will:
- Offer to provide detailed analysis of any section, especially those with warnings or failures
==================== END: .bmad-core/tasks/execute-checklist.md ====================
==================== START: .bmad-core/tasks/correct-course.md ====================
# Correct Course Task
## Purpose
- Guide a structured response to a change trigger using the `.bmad-core/checklists/change-checklist`.
- Analyze the impacts of the change on epics, project artifacts, and the MVP, guided by the checklist's structure.
- Explore potential solutions (e.g., adjust scope, rollback elements, re-scope features) as prompted by the checklist.
- Draft specific, actionable proposed updates to any affected project artifacts (e.g., epics, user stories, PRD sections, architecture document sections) based on the analysis.
- Produce a consolidated "Sprint Change Proposal" document that contains the impact analysis and the clearly drafted proposed edits for user review and approval.
- Ensure a clear handoff path if the nature of the changes necessitates fundamental replanning by other core agents (like PM or Architect).
## Instructions
### 1. Initial Setup & Mode Selection
- **Acknowledge Task & Inputs:**
- Confirm with the user that the "Correct Course Task" (Change Navigation & Integration) is being initiated.
- Verify the change trigger and ensure you have the user's initial explanation of the issue and its perceived impact.
- Confirm access to all relevant project artifacts (e.g., PRD, Epics/Stories, Architecture Documents, UI/UX Specifications) and, critically, the `.bmad-core/checklists/change-checklist`.
- **Establish Interaction Mode:**
- Ask the user their preferred interaction mode for this task:
- **"Incrementally (Default & Recommended):** Shall we work through the change-checklist section by section, discussing findings and collaboratively drafting proposed changes for each relevant part before moving to the next? This allows for detailed, step-by-step refinement."
- **"YOLO Mode (Batch Processing):** Or, would you prefer I conduct a more batched analysis based on the checklist and then present a consolidated set of findings and proposed changes for a broader review? This can be quicker for initial assessment but might require more extensive review of the combined proposals."
- Once the user chooses, confirm the selected mode and then inform the user: "We will now use the change-checklist to analyze the change and draft proposed updates. I will guide you through the checklist items based on our chosen interaction mode."
### 2. Execute Checklist Analysis (Iteratively or Batched, per Interaction Mode)
- Systematically work through Sections 1-4 of the change-checklist (typically covering Change Context, Epic/Story Impact Analysis, Artifact Conflict Resolution, and Path Evaluation/Recommendation).
- For each checklist item or logical group of items (depending on interaction mode):
- Present the relevant prompt(s) or considerations from the checklist to the user.
- Request necessary information and actively analyze the relevant project artifacts (PRD, epics, architecture documents, story history, etc.) to assess the impact.
- Discuss your findings for each item with the user.
- Record the status of each checklist item (e.g., `[x] Addressed`, `[N/A]`, `[!] Further Action Needed`) and any pertinent notes or decisions.
- Collaboratively agree on the "Recommended Path Forward" as prompted by Section 4 of the checklist.
### 3. Draft Proposed Changes (Iteratively or Batched)
- Based on the completed checklist analysis (Sections 1-4) and the agreed "Recommended Path Forward" (excluding scenarios requiring fundamental replans that would necessitate immediate handoff to PM/Architect):
- Identify the specific project artifacts that require updates (e.g., specific epics, user stories, PRD sections, architecture document components, diagrams).
- **Draft the proposed changes directly and explicitly for each identified artifact.** Examples include:
- Revising user story text, acceptance criteria, or priority.
- Adding, removing, reordering, or splitting user stories within epics.
- Proposing modified architecture diagram snippets (e.g., providing an updated Mermaid diagram block or a clear textual description of the change to an existing diagram).
- Updating technology lists, configuration details, or specific sections within the PRD or architecture documents.
- Drafting new, small supporting artifacts if necessary (e.g., a brief addendum for a specific decision).
- If in "Incremental Mode," discuss and refine these proposed edits for each artifact or small group of related artifacts with the user as they are drafted.
- If in "YOLO Mode," compile all drafted edits for presentation in the next step.
### 4. Generate "Sprint Change Proposal" with Edits
- Synthesize the complete change-checklist analysis (covering findings from Sections 1-4) and all the agreed-upon proposed edits (from Instruction 3) into a single document titled "Sprint Change Proposal." This proposal should align with the structure suggested by Section 5 of the change-checklist.
- The proposal must clearly present:
- **Analysis Summary:** A concise overview of the original issue, its analyzed impact (on epics, artifacts, MVP scope), and the rationale for the chosen path forward.
- **Specific Proposed Edits:** For each affected artifact, clearly show or describe the exact changes (e.g., "Change Story X.Y from: [old text] To: [new text]", "Add new Acceptance Criterion to Story A.B: [new AC]", "Update Section 3.2 of Architecture Document as follows: [new/modified text or diagram description]").
- Present the complete draft of the "Sprint Change Proposal" to the user for final review and feedback. Incorporate any final adjustments requested by the user.
### 5. Finalize & Determine Next Steps
- Obtain explicit user approval for the "Sprint Change Proposal," including all the specific edits documented within it.
- Provide the finalized "Sprint Change Proposal" document to the user.
- **Based on the nature of the approved changes:**
- **If the approved edits sufficiently address the change and can be implemented directly or organized by a PO/SM:** State that the "Correct Course Task" is complete regarding analysis and change proposal, and the user can now proceed with implementing or logging these changes (e.g., updating actual project documents, backlog items). Suggest handoff to a PO/SM agent for backlog organization if appropriate.
- **If the analysis and proposed path (as per checklist Section 4 and potentially Section 6) indicate that the change requires a more fundamental replan (e.g., significant scope change, major architectural rework):** Clearly state this conclusion. Advise the user that the next step involves engaging the primary PM or Architect agents, using the "Sprint Change Proposal" as critical input and context for that deeper replanning effort.
## Output Deliverables
- **Primary:** A "Sprint Change Proposal" document (in markdown format). This document will contain:
- A summary of the change-checklist analysis (issue, impact, rationale for the chosen path).
- Specific, clearly drafted proposed edits for all affected project artifacts.
- **Implicit:** An annotated change-checklist (or the record of its completion) reflecting the discussions, findings, and decisions made during the process.
==================== END: .bmad-core/tasks/correct-course.md ====================
==================== START: .bmad-core/templates/story-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: story-template-v2
name: Story Document
@@ -509,6 +506,7 @@ sections:
==================== END: .bmad-core/templates/story-tmpl.yaml ====================
==================== START: .bmad-core/checklists/story-draft-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Story Draft Checklist
The Scrum Master should use this checklist to validate that each story contains sufficient context for a developer agent to implement it successfully, while assuming the dev agent has reasonable capabilities to figure things out.
@@ -628,19 +626,16 @@ Note: We don't need every file listed - just the important ones.]]
Generate a concise validation report:
1. Quick Summary
- Story readiness: READY / NEEDS REVISION / BLOCKED
- Clarity score (1-10)
- Major gaps identified
2. Fill in the validation table with:
- PASS: Requirements clearly met
- PARTIAL: Some gaps but workable
- FAIL: Critical information missing
3. Specific Issues (if any)
- List concrete problems to fix
- Suggest specific improvements
- Identify any blocking dependencies
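To illustrate item 2 above, a filled validation table might look like the hypothetical sketch below; the category names are illustrative only, since the authoritative list lives in the story-draft-checklist itself:

```markdown
| Category (illustrative) | Status  | Issues                                      |
| ----------------------- | ------- | ------------------------------------------- |
| Goal & Context Clarity  | PASS    | -                                           |
| Technical Guidance      | PARTIAL | Error-handling approach for the API unclear |
| Reference Effectiveness | FAIL    | No source citations for the data model      |
```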


@@ -77,72 +77,19 @@ commands:
- generate-ui-prompt: Run task generate-ai-frontend-prompt.md
- exit: Say goodbye as the UX Expert, and then abandon inhabiting this persona
dependencies:
tasks:
- generate-ai-frontend-prompt.md
- create-doc.md
- execute-checklist.md
templates:
- front-end-spec-tmpl.yaml
data:
- technical-preferences.md
tasks:
- create-doc.md
- execute-checklist.md
- generate-ai-frontend-prompt.md
templates:
- front-end-spec-tmpl.yaml
```
==================== END: .bmad-core/agents/ux-expert.md ====================
==================== START: .bmad-core/tasks/generate-ai-frontend-prompt.md ====================
# Create AI Frontend Prompt Task
## Purpose
To generate a masterful, comprehensive, and optimized prompt that can be used with any AI-driven frontend development tool (e.g., Vercel v0, Lovable.ai, or similar) to scaffold or generate significant portions of a frontend application.
## Inputs
- Completed UI/UX Specification (`front-end-spec.md`)
- Completed Frontend Architecture Document (`front-end-architecture`) or a full stack combined architecture such as `architecture.md`
- Main System Architecture Document (`architecture` - for API contracts and tech stack to give further context)
## Key Activities & Instructions
### 1. Core Prompting Principles
Before generating the prompt, you must understand these core principles for interacting with a generative AI for code.
- **Be Explicit and Detailed**: The AI cannot read your mind. Provide as much detail and context as possible. Vague requests lead to generic or incorrect outputs.
- **Iterate, Don't Expect Perfection**: Generating an entire complex application in one go is rare. The most effective method is to prompt for one component or one section at a time, then build upon the results.
- **Provide Context First**: Always start by providing the AI with the necessary context, such as the tech stack, existing code snippets, and overall project goals.
- **Mobile-First Approach**: Frame all UI generation requests with a mobile-first design mindset. Describe the mobile layout first, then provide separate instructions for how it should adapt for tablet and desktop.
### 2. The Structured Prompting Framework
To ensure the highest quality output, you MUST structure every prompt using the following four-part framework.
1. **High-Level Goal**: Start with a clear, concise summary of the overall objective. This orients the AI on the primary task.
- _Example: "Create a responsive user registration form with client-side validation and API integration."_
2. **Detailed, Step-by-Step Instructions**: Provide a granular, numbered list of actions the AI should take. Break down complex tasks into smaller, sequential steps. This is the most critical part of the prompt.
- _Example: "1. Create a new file named `RegistrationForm.js`. 2. Use React hooks for state management. 3. Add styled input fields for 'Name', 'Email', and 'Password'. 4. For the email field, ensure it is a valid email format. 5. On submission, call the API endpoint defined below."_
3. **Code Examples, Data Structures & Constraints**: Include any relevant snippets of existing code, data structures, or API contracts. This gives the AI concrete examples to work with. Crucially, you must also state what _not_ to do.
- _Example: "Use this API endpoint: `POST /api/register`. The expected JSON payload is `{ "name": "string", "email": "string", "password": "string" }`. Do NOT include a 'confirm password' field. Use Tailwind CSS for all styling."_
4. **Define a Strict Scope**: Explicitly define the boundaries of the task. Tell the AI which files it can modify and, more importantly, which files to leave untouched to prevent unintended changes across the codebase.
- _Example: "You should only create the `RegistrationForm.js` component and add it to the `pages/register.js` file. Do NOT alter the `Navbar.js` component or any other existing page or component."_
### 3. Assembling the Master Prompt
You will now synthesize the inputs and the above principles into a final, comprehensive prompt.
1. **Gather Foundational Context**:
- Start the prompt with a preamble describing the overall project purpose, the full tech stack (e.g., Next.js, TypeScript, Tailwind CSS), and the primary UI component library being used.
2. **Describe the Visuals**:
- If the user has design files (Figma, etc.), instruct them to provide links or screenshots.
- If not, describe the visual style: color palette, typography, spacing, and overall aesthetic (e.g., "minimalist", "corporate", "playful").
3. **Build the Prompt using the Structured Framework**:
- Follow the four-part framework from Section 2 to build out the core request, whether it's for a single component or a full page.
4. **Present and Refine**:
- Output the complete, generated prompt in a clear, copy-pasteable format (e.g., a large code block).
- Explain the structure of the prompt and why certain information was included, referencing the principles above.
- <important_note>Conclude by reminding the user that all AI-generated code will require careful human review, testing, and refinement to be considered production-ready.</important_note>
==================== END: .bmad-core/tasks/generate-ai-frontend-prompt.md ====================
==================== START: .bmad-core/tasks/create-doc.md ====================
<!-- Powered by BMAD™ Core -->
# Create Document from Template (YAML Driven)
## ⚠️ CRITICAL EXECUTION NOTICE ⚠️
@@ -247,6 +194,7 @@ User can type `#yolo` to toggle to YOLO mode (process all sections at once).
==================== END: .bmad-core/tasks/create-doc.md ====================
==================== START: .bmad-core/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -258,7 +206,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -271,14 +218,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists, there will be a summary of sections at the end to discuss)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be resolved or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
@@ -287,7 +232,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -295,7 +239,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -309,7 +252,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -319,7 +261,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -342,7 +283,63 @@ The LLM will:
- Offer to provide detailed analysis of any section, especially those with warnings or failures
==================== END: .bmad-core/tasks/execute-checklist.md ====================
==================== START: .bmad-core/tasks/generate-ai-frontend-prompt.md ====================
<!-- Powered by BMAD™ Core -->
# Create AI Frontend Prompt Task
## Purpose
To generate a masterful, comprehensive, and optimized prompt that can be used with any AI-driven frontend development tool (e.g., Vercel v0, Lovable.ai, or similar) to scaffold or generate significant portions of a frontend application.
## Inputs
- Completed UI/UX Specification (`front-end-spec.md`)
- Completed Frontend Architecture Document (`front-end-architecture`) or a full stack combined architecture such as `architecture.md`
- Main System Architecture Document (`architecture` - for API contracts and tech stack to give further context)
## Key Activities & Instructions
### 1. Core Prompting Principles
Before generating the prompt, you must understand these core principles for interacting with a generative AI for code.
- **Be Explicit and Detailed**: The AI cannot read your mind. Provide as much detail and context as possible. Vague requests lead to generic or incorrect outputs.
- **Iterate, Don't Expect Perfection**: Generating an entire complex application in one go is rare. The most effective method is to prompt for one component or one section at a time, then build upon the results.
- **Provide Context First**: Always start by providing the AI with the necessary context, such as the tech stack, existing code snippets, and overall project goals.
- **Mobile-First Approach**: Frame all UI generation requests with a mobile-first design mindset. Describe the mobile layout first, then provide separate instructions for how it should adapt for tablet and desktop.
### 2. The Structured Prompting Framework
To ensure the highest quality output, you MUST structure every prompt using the following four-part framework.
1. **High-Level Goal**: Start with a clear, concise summary of the overall objective. This orients the AI on the primary task.
- _Example: "Create a responsive user registration form with client-side validation and API integration."_
2. **Detailed, Step-by-Step Instructions**: Provide a granular, numbered list of actions the AI should take. Break down complex tasks into smaller, sequential steps. This is the most critical part of the prompt.
- _Example: "1. Create a new file named `RegistrationForm.js`. 2. Use React hooks for state management. 3. Add styled input fields for 'Name', 'Email', and 'Password'. 4. For the email field, ensure it is a valid email format. 5. On submission, call the API endpoint defined below."_
3. **Code Examples, Data Structures & Constraints**: Include any relevant snippets of existing code, data structures, or API contracts. This gives the AI concrete examples to work with. Crucially, you must also state what _not_ to do.
- _Example: "Use this API endpoint: `POST /api/register`. The expected JSON payload is `{ "name": "string", "email": "string", "password": "string" }`. Do NOT include a 'confirm password' field. Use Tailwind CSS for all styling."_
4. **Define a Strict Scope**: Explicitly define the boundaries of the task. Tell the AI which files it can modify and, more importantly, which files to leave untouched to prevent unintended changes across the codebase.
- _Example: "You should only create the `RegistrationForm.js` component and add it to the `pages/register.js` file. Do NOT alter the `Navbar.js` component or any other existing page or component."_
### 3. Assembling the Master Prompt
You will now synthesize the inputs and the above principles into a final, comprehensive prompt.
1. **Gather Foundational Context**:
- Start the prompt with a preamble describing the overall project purpose, the full tech stack (e.g., Next.js, TypeScript, Tailwind CSS), and the primary UI component library being used.
2. **Describe the Visuals**:
- If the user has design files (Figma, etc.), instruct them to provide links or screenshots.
- If not, describe the visual style: color palette, typography, spacing, and overall aesthetic (e.g., "minimalist", "corporate", "playful").
3. **Build the Prompt using the Structured Framework**:
- Follow the four-part framework from Section 2 to build out the core request, whether it's for a single component or a full page.
4. **Present and Refine**:
- Output the complete, generated prompt in a clear, copy-pasteable format (e.g., a large code block).
- Explain the structure of the prompt and why certain information was included, referencing the principles above.
- <important_note>Conclude by reminding the user that all AI-generated code will require careful human review, testing, and refinement to be considered production-ready.</important_note>
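Purely as an illustration of the four-part framework above, an assembled master prompt might read like the sketch below; the project details (Next.js, TypeScript, Tailwind CSS, the `POST /api/register` endpoint, `RegistrationForm.js`) are reused from the examples earlier in this task and are placeholders, not a prescription:

```markdown
**High-Level Goal:** Create a responsive user registration form with client-side validation and API integration.

**Context:** The project is a Next.js + TypeScript application styled with Tailwind CSS. Design mobile-first, then adapt for tablet and desktop.

**Step-by-Step Instructions:**
1. Create a new file named `RegistrationForm.js`.
2. Use React hooks for state management.
3. Add styled input fields for 'Name', 'Email', and 'Password', validating the email format client-side.
4. On submission, call the API endpoint defined below.

**Code Examples, Data Structures & Constraints:**
- API endpoint: `POST /api/register`
- Expected JSON payload: `{ "name": "string", "email": "string", "password": "string" }`
- Do NOT include a 'confirm password' field. Use Tailwind CSS for all styling.

**Define a Strict Scope:** Only create the `RegistrationForm.js` component and add it to the `pages/register.js` file. Do NOT alter the `Navbar.js` component or any other existing page or component.
```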
==================== END: .bmad-core/tasks/generate-ai-frontend-prompt.md ====================
==================== START: .bmad-core/templates/front-end-spec-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: frontend-spec-template-v2
name: UI/UX Specification
@@ -695,6 +692,7 @@ sections:
==================== END: .bmad-core/templates/front-end-spec-tmpl.yaml ====================
==================== START: .bmad-core/data/technical-preferences.md ====================
<!-- Powered by BMAD™ Core -->
# User-Defined Preferred Patterns and Preferences
None Listed


@@ -95,6 +95,7 @@ dependencies:
==================== END: .bmad-2d-phaser-game-dev/agents/game-designer.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/create-doc.md ====================
<!-- Powered by BMAD™ Core -->
# Create Document from Template (YAML Driven)
## ⚠️ CRITICAL EXECUTION NOTICE ⚠️
@@ -199,6 +200,7 @@ User can type `#yolo` to toggle to YOLO mode (process all sections at once).
==================== END: .bmad-2d-phaser-game-dev/tasks/create-doc.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -210,7 +212,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -223,14 +224,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists, there will be a summary of sections at the end to discuss)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be resolved or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
@@ -239,7 +238,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -247,7 +245,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -261,7 +258,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -271,7 +267,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -295,6 +290,7 @@ The LLM will:
==================== END: .bmad-2d-phaser-game-dev/tasks/execute-checklist.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/game-design-brainstorming.md ====================
<!-- Powered by BMAD™ Core -->
# Game Design Brainstorming Techniques Task
This task provides a comprehensive toolkit of creative brainstorming techniques specifically designed for game design ideation and innovative thinking. The game designer can use these techniques to facilitate productive brainstorming sessions focused on game mechanics, player experience, and creative concepts.
@@ -306,7 +302,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
[[LLM: Begin by understanding the game design context and goals. Ask clarifying questions if needed to determine the best approach for game-specific ideation.]]
1. **Establish Game Context**
- Understand the game genre or opportunity area
- Identify target audience and platform constraints
- Determine session goals (concept exploration vs. mechanic refinement)
@@ -324,7 +319,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
1. **"What If" Game Scenarios**
[[LLM: Generate provocative what-if questions that challenge game design assumptions and expand thinking beyond current genre limitations.]]
- What if players could rewind time in any genre?
- What if the game world reacted to the player's real-world location?
- What if failure was more rewarding than success?
@@ -333,7 +327,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
2. **Cross-Genre Fusion**
[[LLM: Help user combine unexpected game genres and mechanics to create unique experiences.]]
- "How might [genre A] mechanics work in [genre B]?"
- Puzzle mechanics in action games
- Dating sim elements in strategy games
@@ -342,7 +335,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
3. **Player Motivation Reversal**
[[LLM: Flip traditional player motivations to reveal new gameplay possibilities.]]
- What if losing was the goal?
- What if cooperation was forced in competitive games?
- What if players had to help their enemies?
@@ -359,7 +351,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
1. **SCAMPER for Game Mechanics**
[[LLM: Guide through each SCAMPER prompt specifically for game design.]]
- **S** = Substitute: What mechanics can be substituted? (walking → flying → swimming)
- **C** = Combine: What systems can be merged? (inventory + character growth)
- **A** = Adapt: What mechanics from other media? (books, movies, sports)
@@ -370,7 +361,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
2. **Player Agency Spectrum**
[[LLM: Explore different levels of player control and agency across game systems.]]
- Full Control: Direct character movement, combat, building
- Indirect Control: Setting rules, giving commands, environmental changes
- Influence Only: Suggestions, preferences, emotional reactions
@@ -378,7 +368,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
3. **Temporal Game Design**
[[LLM: Explore how time affects gameplay and player experience.]]
- Real-time vs. turn-based mechanics
- Time travel and manipulation
- Persistent vs. session-based progress
@@ -389,7 +378,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
1. **Emotion-First Design**
[[LLM: Start with target emotions and work backward to mechanics that create them.]]
- Target Emotion: Wonder → Mechanics: Discovery, mystery, scale
- Target Emotion: Triumph → Mechanics: Challenge, skill growth, recognition
- Target Emotion: Connection → Mechanics: Cooperation, shared goals, communication
@@ -397,7 +385,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
2. **Player Archetype Brainstorming**
[[LLM: Design for different player types and motivations.]]
- Achievers: Progression, completion, mastery
- Explorers: Discovery, secrets, world-building
- Socializers: Interaction, cooperation, community
@@ -406,7 +393,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
3. **Accessibility-First Innovation**
[[LLM: Generate ideas that make games more accessible while creating new gameplay.]]
- Visual impairment considerations leading to audio-focused mechanics
- Motor accessibility inspiring one-handed or simplified controls
- Cognitive accessibility driving clear feedback and pacing
@@ -416,7 +402,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
1. **Environmental Storytelling**
[[LLM: Brainstorm ways the game world itself tells stories without explicit narrative.]]
- How does the environment show history?
- What do interactive objects reveal about characters?
- How can level design communicate mood?
@@ -424,7 +409,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
2. **Player-Generated Narrative**
[[LLM: Explore ways players create their own stories through gameplay.]]
- Emergent storytelling through player choices
- Procedural narrative generation
- Player-to-player story sharing
@@ -432,7 +416,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
3. **Genre Expectation Subversion**
[[LLM: Identify and deliberately subvert player expectations within genres.]]
- Fantasy RPG where magic is mundane
- Horror game where monsters are friendly
- Racing game where going slow is optimal
@@ -442,7 +425,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
1. **Platform-Specific Design**
[[LLM: Generate ideas that leverage unique platform capabilities.]]
- Mobile: GPS, accelerometer, camera, always-connected
- Web: URLs, tabs, social sharing, real-time collaboration
- Console: Controllers, TV viewing, couch co-op
@@ -450,7 +432,6 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
2. **Constraint-Based Creativity**
[[LLM: Use technical or design constraints as creative catalysts.]]
- One-button games
- Games without graphics
- Games that play in notification bars
@@ -496,19 +477,16 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
[[LLM: Guide the brainstorming session with appropriate pacing for game design exploration.]]
1. **Inspiration Phase** (10-15 min)
- Reference existing games and mechanics
- Explore player experiences and emotions
- Gather visual and thematic inspiration
2. **Divergent Exploration** (25-35 min)
- Generate many game concepts or mechanics
- Use expansion and fusion techniques
- Encourage wild and impossible ideas
3. **Player-Centered Filtering** (15-20 min)
- Consider target audience reactions
- Evaluate emotional impact and engagement
- Group ideas by player experience goals
@@ -606,6 +584,7 @@ This task provides a comprehensive toolkit of creative brainstorming techniques
==================== END: .bmad-2d-phaser-game-dev/tasks/game-design-brainstorming.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/create-deep-research-prompt.md ====================
<!-- Powered by BMAD™ Core -->
# Create Deep Research Prompt Task
This task helps create comprehensive research prompts for various types of deep analysis. It can process inputs from brainstorming sessions, project briefs, market research, or specific research questions to generate targeted prompts for deeper investigation.
@@ -629,63 +608,54 @@ CRITICAL: First, help the user select the most appropriate research focus based
Present these numbered options to the user:
1. **Product Validation Research**
- Validate product hypotheses and market fit
- Test assumptions about user needs and solutions
- Assess technical and business feasibility
- Identify risks and mitigation strategies
2. **Market Opportunity Research**
- Analyze market size and growth potential
- Identify market segments and dynamics
- Assess market entry strategies
- Evaluate timing and market readiness
3. **User & Customer Research**
- Deep dive into user personas and behaviors
- Understand jobs-to-be-done and pain points
- Map customer journeys and touchpoints
- Analyze willingness to pay and value perception
4. **Competitive Intelligence Research**
- Detailed competitor analysis and positioning
- Feature and capability comparisons
- Business model and strategy analysis
- Identify competitive advantages and gaps
5. **Technology & Innovation Research**
- Assess technology trends and possibilities
- Evaluate technical approaches and architectures
- Identify emerging technologies and disruptions
- Analyze build vs. buy vs. partner options
6. **Industry & Ecosystem Research**
- Map industry value chains and dynamics
- Identify key players and relationships
- Analyze regulatory and compliance factors
- Understand partnership opportunities
7. **Strategic Options Research**
- Evaluate different strategic directions
- Assess business model alternatives
- Analyze go-to-market strategies
- Consider expansion and scaling paths
8. **Risk & Feasibility Research**
- Identify and assess various risk factors
- Evaluate implementation challenges
- Analyze resource requirements
- Consider regulatory and legal implications
9. **Custom Research Focus**
- User-defined research objectives
- Specialized domain investigation
- Cross-functional research needs
@@ -854,13 +824,11 @@ CRITICAL: collaborate with the user to develop specific, actionable research que
### 5. Review and Refinement
1. **Present Complete Prompt**
- Show the full research prompt
- Explain key elements and rationale
- Highlight any assumptions made
2. **Gather Feedback**
- Are the objectives clear and correct?
- Do the questions address all concerns?
- Is the scope appropriate?
@@ -898,6 +866,7 @@ CRITICAL: collaborate with the user to develop specific, actionable research que
==================== END: .bmad-2d-phaser-game-dev/tasks/create-deep-research-prompt.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/advanced-elicitation.md ====================
<!-- Powered by BMAD™ Core -->
# Advanced Game Design Elicitation Task
## Purpose
@@ -918,7 +887,6 @@ CRITICAL: collaborate with the user to develop specific, actionable research que
2. If the section contains game flow diagrams, level layouts, or system diagrams, explain each diagram briefly with game development context before offering elicitation options (e.g., "The gameplay loop diagram shows how player actions lead to rewards and progression. Notice how each step maintains player engagement and creates opportunities for skill development.")
3. If the section contains multiple game elements (like multiple mechanics, multiple levels, multiple systems, etc.), inform the user they can apply elicitation actions to:
- The entire section as a whole
- Individual game elements within the section (specify which element when selecting an action)
@@ -1012,6 +980,7 @@ The questions and perspectives offered should always consider:
==================== END: .bmad-2d-phaser-game-dev/tasks/advanced-elicitation.md ====================
==================== START: .bmad-2d-phaser-game-dev/templates/game-design-doc-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: game-design-doc-template-v2
name: Game Design Document (GDD)
@@ -1358,6 +1327,7 @@ sections:
==================== END: .bmad-2d-phaser-game-dev/templates/game-design-doc-tmpl.yaml ====================
==================== START: .bmad-2d-phaser-game-dev/templates/level-design-doc-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: level-design-doc-template-v2
name: Level Design Document
@@ -1845,6 +1815,7 @@ sections:
==================== END: .bmad-2d-phaser-game-dev/templates/level-design-doc-tmpl.yaml ====================
==================== START: .bmad-2d-phaser-game-dev/templates/game-brief-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: game-brief-template-v2
name: Game Brief
@@ -2204,6 +2175,7 @@ sections:
==================== END: .bmad-2d-phaser-game-dev/templates/game-brief-tmpl.yaml ====================
==================== START: .bmad-2d-phaser-game-dev/checklists/game-design-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Game Design Document Quality Checklist
## Document Completeness


@@ -102,6 +102,7 @@ dependencies:
==================== END: .bmad-2d-phaser-game-dev/agents/game-developer.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -113,7 +114,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If the user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -126,14 +126,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists; a summary of sections will be presented at the end for discussion)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be found, or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section, following any instructions embedded in the checklist for that section
@@ -142,7 +140,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -150,7 +147,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -164,7 +160,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -174,7 +169,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -198,6 +192,7 @@ The LLM will:
==================== END: .bmad-2d-phaser-game-dev/tasks/execute-checklist.md ====================
==================== START: .bmad-2d-phaser-game-dev/templates/game-architecture-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: game-architecture-template-v2
name: Game Architecture Document
@@ -814,6 +809,7 @@ sections:
==================== END: .bmad-2d-phaser-game-dev/templates/game-architecture-tmpl.yaml ====================
==================== START: .bmad-2d-phaser-game-dev/checklists/game-story-dod-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Game Development Story Definition of Done Checklist
## Story Completeness
@@ -977,6 +973,7 @@ _Any specific concerns, recommendations, or clarifications needed before develop
==================== END: .bmad-2d-phaser-game-dev/checklists/game-story-dod-checklist.md ====================
==================== START: .bmad-2d-phaser-game-dev/data/development-guidelines.md ====================
<!-- Powered by BMAD™ Core -->
# Game Development Guidelines
## Overview
@@ -1052,7 +1049,7 @@ interface GameState {
interface GameSettings {
musicVolume: number;
sfxVolume: number;
difficulty: "easy" | "normal" | "hard";
difficulty: 'easy' | 'normal' | 'hard';
controls: ControlScheme;
}
```
@@ -1093,12 +1090,12 @@ class GameScene extends Phaser.Scene {
private inputManager!: InputManager;
constructor() {
super({ key: "GameScene" });
super({ key: 'GameScene' });
}
preload(): void {
// Load only scene-specific assets
this.load.image("player", "assets/player.png");
this.load.image('player', 'assets/player.png');
}
create(data: SceneData): void {
@@ -1123,7 +1120,7 @@ class GameScene extends Phaser.Scene {
this.inputManager.destroy();
// Remove event listeners
this.events.off("*");
this.events.off('*');
}
}
```
@@ -1132,13 +1129,13 @@ class GameScene extends Phaser.Scene {
```typescript
// Proper scene transitions with data
this.scene.start("NextScene", {
this.scene.start('NextScene', {
playerScore: this.playerScore,
currentLevel: this.currentLevel + 1,
});
// Scene overlays for UI
this.scene.launch("PauseMenuScene");
this.scene.launch('PauseMenuScene');
this.scene.pause();
```
@@ -1182,7 +1179,7 @@ class Player extends GameEntity {
private health!: HealthComponent;
constructor(scene: Phaser.Scene, x: number, y: number) {
super(scene, x, y, "player");
super(scene, x, y, 'player');
this.movement = this.addComponent(new MovementComponent(this));
this.health = this.addComponent(new HealthComponent(this, 100));
@@ -1202,7 +1199,7 @@ class GameManager {
constructor(scene: Phaser.Scene) {
if (GameManager.instance) {
throw new Error("GameManager already exists!");
throw new Error('GameManager already exists!');
}
this.scene = scene;
@@ -1212,7 +1209,7 @@ class GameManager {
static getInstance(): GameManager {
if (!GameManager.instance) {
throw new Error("GameManager not initialized!");
throw new Error('GameManager not initialized!');
}
return GameManager.instance;
}
@@ -1259,7 +1256,7 @@ class BulletPool {
}
// Pool exhausted - create new bullet
console.warn("Bullet pool exhausted, creating new bullet");
console.warn('Bullet pool exhausted, creating new bullet');
return new Bullet(this.scene, 0, 0);
}
@@ -1359,12 +1356,12 @@ class InputManager {
}
private setupKeyboard(): void {
this.keys = this.scene.input.keyboard.addKeys("W,A,S,D,SPACE,ESC,UP,DOWN,LEFT,RIGHT");
this.keys = this.scene.input.keyboard.addKeys('W,A,S,D,SPACE,ESC,UP,DOWN,LEFT,RIGHT');
}
private setupTouch(): void {
this.scene.input.on("pointerdown", this.handlePointerDown, this);
this.scene.input.on("pointerup", this.handlePointerUp, this);
this.scene.input.on('pointerdown', this.handlePointerDown, this);
this.scene.input.on('pointerup', this.handlePointerUp, this);
}
update(): void {
@@ -1391,9 +1388,9 @@ class InputManager {
class AssetManager {
loadAssets(): Promise<void> {
return new Promise((resolve, reject) => {
this.scene.load.on("filecomplete", this.handleFileComplete, this);
this.scene.load.on("loaderror", this.handleLoadError, this);
this.scene.load.on("complete", () => resolve());
this.scene.load.on('filecomplete', this.handleFileComplete, this);
this.scene.load.on('loaderror', this.handleLoadError, this);
this.scene.load.on('complete', () => resolve());
this.scene.load.start();
});
@@ -1409,8 +1406,8 @@ class AssetManager {
private loadFallbackAsset(key: string): void {
// Load placeholder or default assets
switch (key) {
case "player":
this.scene.load.image("player", "assets/defaults/default-player.png");
case 'player':
this.scene.load.image('player', 'assets/defaults/default-player.png');
break;
default:
console.warn(`No fallback for asset: ${key}`);
@@ -1437,11 +1434,11 @@ class GameSystem {
private attemptRecovery(context: string): void {
switch (context) {
case "update":
case 'update':
// Reset system state
this.reset();
break;
case "render":
case 'render':
// Disable visual effects
this.disableEffects();
break;
@@ -1461,7 +1458,7 @@ class GameSystem {
```typescript
// Example test for game mechanics
describe("HealthComponent", () => {
describe('HealthComponent', () => {
let healthComponent: HealthComponent;
beforeEach(() => {
@@ -1469,18 +1466,18 @@ describe("HealthComponent", () => {
healthComponent = new HealthComponent(mockEntity, 100);
});
test("should initialize with correct health", () => {
test('should initialize with correct health', () => {
expect(healthComponent.currentHealth).toBe(100);
expect(healthComponent.maxHealth).toBe(100);
});
test("should handle damage correctly", () => {
test('should handle damage correctly', () => {
healthComponent.takeDamage(25);
expect(healthComponent.currentHealth).toBe(75);
expect(healthComponent.isAlive()).toBe(true);
});
test("should handle death correctly", () => {
test('should handle death correctly', () => {
healthComponent.takeDamage(150);
expect(healthComponent.currentHealth).toBe(0);
expect(healthComponent.isAlive()).toBe(false);
@@ -1493,7 +1490,7 @@ describe("HealthComponent", () => {
**Scene Testing:**
```typescript
describe("GameScene Integration", () => {
describe('GameScene Integration', () => {
let scene: GameScene;
let mockGame: Phaser.Game;
@@ -1503,7 +1500,7 @@ describe("GameScene Integration", () => {
scene = new GameScene();
});
test("should initialize all systems", () => {
test('should initialize all systems', () => {
scene.create({});
expect(scene.gameManager).toBeDefined();
@@ -1564,25 +1561,21 @@ src/
### Story Implementation Process
1. **Read Story Requirements:**
- Understand acceptance criteria
- Identify technical requirements
- Review performance constraints
2. **Plan Implementation:**
- Identify files to create/modify
- Consider component architecture
- Plan testing approach
3. **Implement Feature:**
- Follow TypeScript strict mode
- Use established patterns
- Maintain 60 FPS performance
4. **Test Implementation:**
- Write unit tests for game logic
- Test cross-platform functionality
- Validate performance targets
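To spot-check the 60 FPS target above during playtesting, one rough approach is to read Phaser 3's measured frame rate (`game.loop.actualFps`) each update. This is a minimal sketch only; the scene key and the 55 FPS warning threshold are illustrative assumptions rather than project conventions.
```typescript
import Phaser from 'phaser';

// Minimal FPS spot-check for development builds (sketch only).
class PerfMonitorScene extends Phaser.Scene {
  private warned = false;

  constructor() {
    super({ key: 'PerfMonitorScene' }); // key is illustrative
  }

  update(): void {
    const fps = this.game.loop.actualFps; // Phaser 3's measured frame rate
    if (fps < 55 && !this.warned) {
      console.warn(`Frame rate dropped to ${Math.round(fps)} FPS; investigate before story sign-off`);
      this.warned = true;
    }
  }
}
```
Launching such a scene alongside gameplay (for example via `this.scene.launch`) keeps the check out of gameplay code during development.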


@@ -88,6 +88,7 @@ dependencies:
==================== END: .bmad-2d-phaser-game-dev/agents/game-sm.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/create-game-story.md ====================
<!-- Powered by BMAD™ Core -->
# Create Game Development Story Task
## Purpose
@@ -307,6 +308,7 @@ This task ensures game development stories are immediately actionable and enable
==================== END: .bmad-2d-phaser-game-dev/tasks/create-game-story.md ====================
==================== START: .bmad-2d-phaser-game-dev/tasks/execute-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
@@ -318,7 +320,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
## Instructions
1. **Initial Assessment**
- If the user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
@@ -331,14 +332,12 @@ If the user asks or does not specify a specific checklist, list the checklists a
- All at once (YOLO mode - recommended for checklists; a summary of sections will be presented at the end for discussion)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be found, or you are unsure, halt and confirm with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section, following any instructions embedded in the checklist for that section
@@ -347,7 +346,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
@@ -355,7 +353,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
@@ -369,7 +366,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
@@ -379,7 +375,6 @@ If the user asks or does not specify a specific checklist, list the checklists a
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
@@ -403,6 +398,7 @@ The LLM will:
==================== END: .bmad-2d-phaser-game-dev/tasks/execute-checklist.md ====================
==================== START: .bmad-2d-phaser-game-dev/templates/game-story-tmpl.yaml ====================
# <!-- Powered by BMAD™ Core -->
template:
id: game-story-template-v2
name: Game Development Story
@@ -659,6 +655,7 @@ sections:
==================== END: .bmad-2d-phaser-game-dev/templates/game-story-tmpl.yaml ====================
==================== START: .bmad-2d-phaser-game-dev/checklists/game-story-dod-checklist.md ====================
<!-- Powered by BMAD™ Core -->
# Game Development Story Definition of Done Checklist
## Story Completeness

Some files were not shown because too many files have changed in this diff.