Compare commits

...

22 Commits

Author SHA1 Message Date
manjaroblack
0849cf73e1 fix: remove redundant error handling for project root detection 2025-08-15 18:11:08 -05:00
manjaroblack
e3899b2e3b feat: add detailed statistics and markdown report generation to flattener tool 2025-08-15 18:11:08 -05:00
Aaron
9868437f10 Add update-check command (#423)
* Add update-check command

* Adding additional information to update-check command and aligning with cli theme

* Correct update-check message to exclude global
2025-08-14 22:24:37 -05:00
Stefan Klümpers
d563266b97 feat: install Cursor rules to subdirectory (#438)
* feat: install Cursor rules to subdirectory

Implement feature request #376 to avoid filename collisions and confusion
between repo-specific and BMAD-specific rules.

Changes:
- Move Cursor rules from .cursor/rules/ to .cursor/rules/bmad/
- Update installer configuration to use new subdirectory structure
- Update upgrader to reflect new rule directory path

This keeps BMAD Method files separate from existing project rules,
reducing chance of conflicts and improving organization.

Fixes #376

* chore: correct formatting in cursor rules directory path

---------

Co-authored-by: Stefan Klümpers <stefan.kluempers@materna.group>
2025-08-14 22:23:44 -05:00
Yongjip Kim
3efcfd54d4 fix(docs): fix broken link in GUIDING-PRINCIPLES.md (#428)
Co-authored-by: Brian <bmadcode@gmail.com>
2025-08-14 13:40:11 -05:00
Benjamin Wiese
31e44b110e Remove bmad-core/bmad-core including empty file (#431)
Co-authored-by: Ben Wiese <benjamin.wiese@simplifier.io>
2025-08-14 13:39:28 -05:00
semantic-release-bot
ffcb4d4bf2 chore(release): 4.36.2 [skip ci]
## [4.36.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.1...v4.36.2) (2025-08-10)

### Bug Fixes

* align installer dependencies with root package versions for ESM compatibility ([#420](https://github.com/bmadcode/BMAD-METHOD/issues/420)) ([3f6b674](3f6b67443d))
2025-08-10 14:26:15 +00:00
circus
3f6b67443d fix: align installer dependencies with root package versions for ESM compatibility (#420)
Downgrade chalk, inquirer, and ora in tools/installer to CommonJS-compatible versions:
- chalk: ^5.4.1 -> ^4.1.2
- inquirer: ^12.6.3 -> ^8.2.6
- ora: ^8.2.0 -> ^5.4.1

Resolves 'is not a function' errors caused by ESM/CommonJS incompatibility.
2025-08-10 09:25:46 -05:00
semantic-release-bot
85a0d83fc5 chore(release): 4.36.1 [skip ci]
## [4.36.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.0...v4.36.1) (2025-08-09)

### Bug Fixes

* update Node.js version to 20 in release workflow and reduce Discord spam ([3f7e19a](3f7e19a098))
2025-08-09 20:49:42 +00:00
Brian Madison
3f7e19a098 fix: update Node.js version to 20 in release workflow and reduce Discord spam
- Update release workflow Node.js version from 18 to 20 to match package.json requirements
- Remove push trigger from Discord workflow to reduce notification spam

This should resolve the semantic-release content-length header error after org migration.
2025-08-09 15:49:13 -05:00
semantic-release-bot
23df54c955 chore(release): 4.36.0 [skip ci]
# [4.36.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.3...v4.36.0) (2025-08-09)

### Features

* modularize flattener tool into separate components with improved project root detection ([#417](https://github.com/bmadcode/BMAD-METHOD/issues/417)) ([0fdbca7](0fdbca73fc))
2025-08-09 20:33:49 +00:00
manjaroblack
0fdbca73fc feat: modularize flattener tool into separate components with improved project root detection (#417) 2025-08-09 15:33:23 -05:00
Daniel Willitzer
5d7d7c9015 Merge pull request #369 from antmikinka/pr/part-1-gcp-setup
Feat(Expansion Pack): Part 1 - Google Cloud Setup
2025-08-08 19:23:48 -07:00
Brian Madison
dd2b4ed5ac discord PR spam 2025-08-08 20:07:32 -05:00
Lior Assouline
8f40576681 Flatten venv & many other bins dir fix (#408)
* added .venv to ignore list of flattener

* more files pattern to ignore

---------

Co-authored-by: Lior Assouline <Lior.Assouline@harmonicinc.com>
2025-08-08 07:54:47 -05:00
Yanqing Wang
fe86675c5f Update link in README.md (#384) 2025-08-07 07:49:14 -05:00
semantic-release-bot
8211d2daff chore(release): 4.35.3 [skip ci]
## [4.35.3](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.2...v4.35.3) (2025-08-06)

### Bug Fixes

* doc location improvement ([1676f51](1676f5189e))
2025-08-06 05:01:55 +00:00
Brian Madison
1676f5189e fix: doc location improvement 2025-08-06 00:00:26 -05:00
semantic-release-bot
3c3d58939f chore(release): 4.35.2 [skip ci]
## [4.35.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.1...v4.35.2) (2025-08-06)

### Bug Fixes

* npx status check ([f7c2a4f](f7c2a4fb6c))
2025-08-06 03:34:49 +00:00
Brian Madison
2d954d3481 merge 2025-08-05 22:34:21 -05:00
Brian Madison
f7c2a4fb6c fix: npx status check 2025-08-05 22:33:47 -05:00
antmikinka
c7fc5d3606 Feat(Expansion Pack): Part 1 - Google Cloud Setup 2025-07-27 12:26:53 -07:00
43 changed files with 15824 additions and 13684 deletions

.github/workflows/discord.yaml (new file, 16 lines)

@@ -0,0 +1,16 @@
name: Discord Notification

on: [pull_request, release, create, delete, issue_comment, pull_request_review, pull_request_review_comment]

jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: Notify Discord
        uses: sarisia/actions-status-discord@v1
        if: always()
        with:
          webhook: ${{ secrets.DISCORD_WEBHOOK }}
          status: ${{ job.status }}
          title: "Triggered by ${{ github.event_name }}"
          color: 0x5865F2


@@ -32,7 +32,7 @@ jobs:
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
          node-version: '20'
          cache: npm
          registry-url: https://registry.npmjs.org
      - name: Install dependencies

.gitignore (4 changed lines)

@@ -3,6 +3,8 @@ node_modules/
pnpm-lock.yaml
bun.lock
deno.lock
pnpm-workspace.yaml
package-lock.json
# Logs
logs/
@@ -41,3 +43,5 @@ CLAUDE.md
.bmad-creator-tools
test-project-install/*
sample-project/*
flattened-codebase.xml
*.stats.md

CHANGELOG.md

@@ -1,9 +1,39 @@
## [4.35.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.0...v4.35.1) (2025-08-06)
## [4.36.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.1...v4.36.2) (2025-08-10)
### Bug Fixes
* npx hanging commands ([2cf322e](https://github.com/bmadcode/BMAD-METHOD/commit/2cf322ee0d9b563a4998c72b2c5eab259594739b))
* align installer dependencies with root package versions for ESM compatibility ([#420](https://github.com/bmadcode/BMAD-METHOD/issues/420)) ([3f6b674](https://github.com/bmadcode/BMAD-METHOD/commit/3f6b67443d61ae6add98656374bed27da4704644))
## [4.36.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.0...v4.36.1) (2025-08-09)
### Bug Fixes
* update Node.js version to 20 in release workflow and reduce Discord spam ([3f7e19a](https://github.com/bmadcode/BMAD-METHOD/commit/3f7e19a098155341a2b89796addc47b0623cb87a))
# [4.36.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.3...v4.36.0) (2025-08-09)
### Features
* modularize flattener tool into separate components with improved project root detection ([#417](https://github.com/bmadcode/BMAD-METHOD/issues/417)) ([0fdbca7](https://github.com/bmadcode/BMAD-METHOD/commit/0fdbca73fc60e306109f682f018e105e2b4623a2))
## [4.35.3](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.2...v4.35.3) (2025-08-06)
### Bug Fixes
* doc location improvement ([1676f51](https://github.com/bmadcode/BMAD-METHOD/commit/1676f5189ed057fa2d7facbd6a771fe67cdb6372))
## [4.35.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.1...v4.35.2) (2025-08-06)
### Bug Fixes
* npx status check ([f7c2a4f](https://github.com/bmadcode/BMAD-METHOD/commit/f7c2a4fb6c454b17d250b85537129b01ffee6b85))
## [4.35.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.0...v4.35.1) (2025-08-06)
### Bug Fixes
* npx hanging commands ([2cf322e](https://github.com/bmadcode/BMAD-METHOD/commit/2cf322ee0d9b563a4998c72b2c5eab259594739b))
# [4.35.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.34.0...v4.35.0) (2025-08-04)

README.md

@@ -23,7 +23,7 @@ Foundations in Agentic Agile Driven Development, known as the Breakthrough Metho
This two-phase approach eliminates both **planning inconsistency** and **context loss** - the biggest problems in AI-assisted development. Your Dev agent opens a story file with complete understanding of what to build, how to build it, and why.
**📖 [See the complete workflow in the User Guide](bmad-core/user-guide.md)** - Planning phase, development cycle, and all agent roles
**📖 [See the complete workflow in the User Guide](docs/user-guide.md)** - Planning phase, development cycle, and all agent roles
## Quick Navigation
@@ -31,18 +31,18 @@ This two-phase approach eliminates both **planning inconsistency** and **context
**Before diving in, review these critical workflow diagrams that explain how BMad works:**
1. **[Planning Workflow (Web UI)](bmad-core/user-guide.md#the-planning-workflow-web-ui)** - How to create PRD and Architecture documents
2. **[Core Development Cycle (IDE)](bmad-core/user-guide.md#the-core-development-cycle-ide)** - How SM, Dev, and QA agents collaborate through story files
1. **[Planning Workflow (Web UI)](docs/user-guide.md#the-planning-workflow-web-ui)** - How to create PRD and Architecture documents
2. **[Core Development Cycle (IDE)](docs/user-guide.md#the-core-development-cycle-ide)** - How SM, Dev, and QA agents collaborate through story files
> ⚠️ **These diagrams explain 90% of BMad Method Agentic Agile flow confusion** - Understanding the PRD+Architecture creation and the SM/Dev/QA workflow and how agents pass notes through story files is essential - and also explains why this is NOT taskmaster or just a simple task runner!
### What would you like to do?
- **[Install and Build software with Full Stack Agile AI Team](#quick-start)** → Quick Start Instruction
- **[Learn how to use BMad](bmad-core/user-guide.md)** → Complete user guide and walkthrough
- **[Learn how to use BMad](docs/user-guide.md)** → Complete user guide and walkthrough
- **[See available AI agents](/bmad-core/agents)** → Specialized roles for your team
- **[Explore non-technical uses](#-beyond-software-development---expansion-packs)** → Creative writing, business, wellness, education
- **[Create my own AI agents](#creating-your-own-expansion-pack)** → Build agents for your domain
- **[Create my own AI agents](docs/expansion-packs.md)** → Build agents for your domain
- **[Browse ready-made expansion packs](expansion-packs/)** → Game dev, DevOps, infrastructure and get inspired with ideas and examples
- **[Understand the architecture](docs/core-architecture.md)** → Technical deep dive
- **[Join the community](https://discord.gg/gk8jAdXWmj)** → Get help and share ideas
@@ -97,7 +97,7 @@ This single command handles:
3. **Upload & configure**: Upload the file and set instructions: "Your critical operating instructions are attached, do not break character as directed"
4. **Start Ideating and Planning**: Start chatting! Type `*help` to see available commands or pick an agent like `*analyst` to start right in on creating a brief.
5. **CRITICAL**: Talk to BMad Orchestrator in the web at ANY TIME (#bmad-orchestrator command) and ask it questions about how this all works!
6. **When to move to the IDE**: Once you have your PRD, Architecture, optional UX and Briefs - it's time to switch over to the IDE to shard your docs, and start implementing the actual code! See the [User guide](bmad-core/user-guide.md) for more details
6. **When to move to the IDE**: Once you have your PRD, Architecture, optional UX and Briefs - it's time to switch over to the IDE to shard your docs, and start implementing the actual code! See the [User guide](docs/user-guide.md) for more details
### Alternative: Clone and Build
@@ -144,7 +144,7 @@ npx bmad-method flatten --input /path/to/source --output /path/to/output/codebas
The tool will display progress and provide a comprehensive summary:
```
```text
📊 Completion Summary:
✅ Successfully processed 156 files into flattened-codebase.xml
📁 Output file: /path/to/your/project/flattened-codebase.xml
@@ -155,13 +155,46 @@ The tool will display progress and provide a comprehensive summary:
📊 File breakdown: 142 text, 14 binary, 0 errors
```
The generated XML file contains all your project's source code in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.
The generated XML file contains your project's text-based source files in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.
#### Advanced Usage & Options
- CLI options
- `-i, --input <path>`: Directory to flatten. Default: current working directory or auto-detected project root when run interactively.
- `-o, --output <path>`: Output file path. Default: `flattened-codebase.xml` in the chosen directory.
- Interactive mode
- If you do not pass `--input` and `--output` and the terminal is interactive (TTY), the tool will attempt to detect your project root (by looking for markers like `.git`, `package.json`, etc.) and prompt you to confirm or override the paths.
- In non-interactive contexts (e.g., CI), it will prefer the detected root silently; otherwise it falls back to the current directory and default filename.
- File discovery and ignoring
- Uses `git ls-files` when inside a git repository for speed and correctness; otherwise falls back to a glob-based scan.
- Applies your `.gitignore` plus a curated set of default ignore patterns (e.g., `node_modules`, build outputs, caches, logs, IDE folders, lockfiles, large media/binaries, `.env*`, and previously generated XML outputs).
- Binary handling
- Binary files are detected and excluded from the XML content. They are counted in the final summary but not embedded in the output.
- XML format and safety
- UTF-8 encoded file with root element `<files>`.
- Each text file is emitted as a `<file path="relative/path">` element whose content is wrapped in `<![CDATA[ ... ]]>`.
- The tool safely handles occurrences of `]]>` inside content by splitting the CDATA to preserve correctness; a short sketch of this technique follows this list.
- File contents are preserved as-is and indented for readability inside the XML.
- Performance
- Concurrency is selected automatically based on your CPU and workload size. No configuration required.
- Running inside a git repo improves discovery performance.
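
The CDATA-splitting mentioned above works by breaking the content at each `]]>` so the terminator never appears intact. A minimal sketch of the general technique (an illustration, not the tool's exact code):

```js
// Wrap content in CDATA, splitting any "]]>" so it cannot terminate the
// section early: each occurrence becomes "]]" at the end of one CDATA
// section and ">" at the start of the next.
function toCdata(content) {
  return `<![CDATA[${content.replace(/]]>/g, "]]]]><![CDATA[>")}]]>`;
}

console.log(toCdata("if (b[c[0]]>d) {}"));
// <![CDATA[if (b[c[0]]]]><![CDATA[>d) {}]]>
```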
#### Minimal XML example
```xml
<?xml version="1.0" encoding="UTF-8"?>
<files>
<file path="src/index.js"><![CDATA[
// your source content
]]></file>
</files>
```
## Documentation & Resources
### Essential Guides
- 📖 **[User Guide](bmad-core/user-guide.md)** - Complete walkthrough from project inception to completion
- 📖 **[User Guide](docs/user-guide.md)** - Complete walkthrough from project inception to completion
- 🏗️ **[Core Architecture](docs/core-architecture.md)** - Technical deep dive and system design
- 🚀 **[Expansion Packs Guide](docs/expansion-packs.md)** - Extend BMad to any domain beyond software development

dist/agents/dev.txt (18 changed lines)

@@ -72,15 +72,15 @@ commands:
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
develop-story:
order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
- develop-story:
- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
- story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
dependencies:
tasks:
- execute-checklist.md


@@ -352,15 +352,15 @@ commands:
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
develop-story:
order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
- develop-story:
- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
- story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
dependencies:
tasks:
- execute-checklist.md


@@ -322,15 +322,15 @@ commands:
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
develop-story:
order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
- develop-story:
- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists any new, modified, or deleted source files→repeat order-of-execution until complete
- story-file-updates-ONLY:
- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
dependencies:
tasks:
- execute-checklist.md


@@ -65,7 +65,7 @@ See [Expansion Packs Guide](../docs/expansion-packs.md) for detailed examples an
### Template Rules
Templates follow the [BMad Document Template](common/utils/bmad-doc-template.md) specification using YAML format:
Templates follow the [BMad Document Template](../common/utils/bmad-doc-template.md) specification using YAML format:
1. **Structure**: Templates are defined in YAML with clear metadata, workflow configuration, and section hierarchy
2. **Separation of Concerns**: Instructions for LLMs are in `instruction` fields, separate from content

File diff suppressed because one or more lines are too long

(image file added: 181 KiB)


@@ -0,0 +1,13 @@
# 1. Create new Google Cloud Project
gcloud projects create {{PROJECT_ID}} --name="{{COMPANY_NAME}} AI Agent System"
# 2. Set default project
gcloud config set project {{PROJECT_ID}}
# 3. Enable required APIs
gcloud services enable aiplatform.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable firestore.googleapis.com
gcloud services enable secretmanager.googleapis.com


@@ -0,0 +1,13 @@
# 1. Create new Google Cloud Project
gcloud projects create {{PROJECT_ID}} --name="{{COMPANY_NAME}} AI Agent System"
# 2. Set default project
gcloud config set project {{PROJECT_ID}}
# 3. Enable required APIs
gcloud services enable aiplatform.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable firestore.googleapis.com
gcloud services enable secretmanager.googleapis.com


@@ -0,0 +1,25 @@
{{company_name}}-ai-agents/
├── agents/
│ ├── __init__.py
│ ├── {{team_1}}/
│ │ ├── __init__.py
│ │ ├── {{agent_1}}.py
│ │ └── {{agent_2}}.py
│ └── {{team_2}}/
├── tasks/
│ ├── __init__.py
│ ├── {{task_category_1}}/
│ └── {{task_category_2}}/
├── templates/
│ ├── {{document_type_1}}/
│ └── {{document_type_2}}/
├── checklists/
├── data/
├── workflows/
├── config/
│ ├── settings.py
│ └── agent_config.yaml
├── main.py
└── deployment/
├── Dockerfile
└── cloudbuild.yaml


@@ -0,0 +1,34 @@
import os
from pydantic import BaseSettings


class Settings(BaseSettings):
    # Google Cloud Configuration
    project_id: str = "{{PROJECT_ID}}"
    location: str = "{{LOCATION}}"  # e.g., "us-central1"

    # Company Information
    company_name: str = "{{COMPANY_NAME}}"
    industry: str = "{{INDUSTRY}}"
    business_type: str = "{{BUSINESS_TYPE}}"

    # Agent Configuration
    default_model: str = "gemini-1.5-pro"
    max_iterations: int = 10
    timeout_seconds: int = 300

    # Storage Configuration
    bucket_name: str = "{{COMPANY_NAME}}-ai-agents-storage"
    database_name: str = "{{COMPANY_NAME}}-ai-agents-db"

    # API Configuration
    session_service_type: str = "vertex"  # or "in_memory" for development
    artifact_service_type: str = "gcs"  # or "in_memory" for development
    memory_service_type: str = "vertex"  # or "in_memory" for development

    # Security
    service_account_path: str = "./{{COMPANY_NAME}}-ai-agents-key.json"

    class Config:
        env_file = ".env"


settings = Settings()


@@ -0,0 +1,70 @@
import asyncio
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import VertexAiSessionService
from google.adk.artifacts import GcsArtifactService
from google.adk.memory import VertexAiRagMemoryService
from google.adk.models import Gemini

from config.settings import settings
from agents.{{primary_team}}.{{main_orchestrator}} import {{MainOrchestratorClass}}


class {{CompanyName}}AISystem:
    def __init__(self):
        self.settings = settings
        self.runner = None
        self.main_orchestrator = None

    async def initialize(self):
        """Initialize the AI agent system"""
        # Create main orchestrator
        self.main_orchestrator = {{MainOrchestratorClass}}()

        # Initialize services
        session_service = VertexAiSessionService(
            project=self.settings.project_id,
            location=self.settings.location
        )
        artifact_service = GcsArtifactService(
            bucket_name=self.settings.bucket_name
        )
        memory_service = VertexAiRagMemoryService(
            rag_corpus=f"projects/{self.settings.project_id}/locations/{self.settings.location}/ragCorpora/{{COMPANY_NAME}}-knowledge"
        )

        # Create runner
        self.runner = Runner(
            app_name=f"{self.settings.company_name}-AI-System",
            agent=self.main_orchestrator,
            session_service=session_service,
            artifact_service=artifact_service,
            memory_service=memory_service
        )

        print(f"{self.settings.company_name} AI Agent System initialized successfully!")

    async def run_agent_interaction(self, user_id: str, session_id: str, message: str):
        """Run agent interaction"""
        if not self.runner:
            await self.initialize()

        async for event in self.runner.run_async(
            user_id=user_id,
            session_id=session_id,
            new_message=message
        ):
            yield event


# Application factory
async def create_app():
    ai_system = {{CompanyName}}AISystem()
    await ai_system.initialize()
    return ai_system


if __name__ == "__main__":
    # Development server
    import uvicorn
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)


@@ -0,0 +1,26 @@
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA', '.']

  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA']

  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - '{{COMPANY_NAME}}-ai-agents'
      - '--image'
      - 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA'
      - '--region'
      - '{{LOCATION}}'
      - '--platform'
      - 'managed'
      - '--allow-unauthenticated'

images:
  - 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA'


@@ -0,0 +1,109 @@
# BMad Expansion Pack: Google Cloud Vertex AI Agent System
[License: MIT](https://opensource.org/licenses/MIT)
[BMAD-METHOD](https://github.com/antmikinka/BMAD-METHOD)
[Google Cloud](https://cloud.google.com/)
This expansion pack provides a complete, deployable starter kit for building and hosting sophisticated AI agent systems on Google Cloud Platform (GCP). It bridges the gap between the BMad Method's natural language framework and a production-ready cloud environment, leveraging Google Vertex AI, Cloud Run, and the Google Agent Development Kit (ADK).
## Features
* **Automated GCP Setup**: `gcloud` scripts to configure your project, service accounts, and required APIs in minutes.
* **Production-Ready Deployment**: Includes a `Dockerfile` and `cloudbuild.yaml` for easy, repeatable deployments to Google Cloud Run.
* **Rich Template Library**: A comprehensive set of BMad-compatible templates for Teams, Agents, Tasks, Workflows, Documents, and Checklists.
* **Pre-configured Agent Roles**: Includes powerful master templates for key agent archetypes like Orchestrators and Specialists.
* **Highly Customizable**: Easily adapt the entire system with company-specific variables and industry-specific configurations.
* **Powered by Google ADK**: Built on the official Google Agent Development Kit for robust and native integration with Vertex AI services.
## Prerequisites
Before you begin, ensure you have the following installed and configured:
* A Google Cloud Platform (GCP) Account with an active billing account.
* The [Google Cloud SDK (`gcloud` CLI)](https://cloud.google.com/sdk/docs/install) installed and authenticated.
* [Docker](https://www.docker.com/products/docker-desktop/) installed on your local machine.
* Python 3.11+
## Quick Start Guide
Follow these steps to get your own AI agent system running on Google Cloud.
### 1. Configure Setup Variables
The setup scripts use placeholder variables. Before running them, open the files in the `/scripts` directory and replace the following placeholders with your own values:
* `{{PROJECT_ID}}`: Your unique Google Cloud project ID.
* `{{COMPANY_NAME}}`: Your company or project name (used for naming resources).
* `{{LOCATION}}`: The GCP region you want to deploy to (e.g., `us-central1`).
### 2. Run the GCP Setup Scripts
Execute the setup scripts to prepare your Google Cloud environment.
```bash
# Navigate to the scripts directory
cd scripts/
# Run the project configuration script
sh 1-initial-project-config.sh
# Run the service account setup script
sh 2-service-account-setup.sh
```
These scripts will enable the necessary APIs, create a service account, assign permissions, and download a JSON key file required for authentication.
### 3. Install Python Dependencies
Install the required Python packages for the application.
```bash
# From the root of the expansion pack
pip install -r requirements.txt
```
### 4. Deploy to Cloud Run
Deploy the entire agent system as a serverless application using Cloud Build.
```bash
# From the root of the expansion pack
gcloud builds submit --config deployment/cloudbuild.yaml .
```
This command will build the Docker container, push it to the Google Container Registry, and deploy it to Cloud Run. Your agent system is now live!
## How to Use
Once deployed, the power of this system lies in its natural language templates.
1. **Define Your Organization**: Go to `/templates/teams` and use the templates to define your agent teams (e.g., Product Development, Operations).
2. **Customize Your Agents**: In `/templates/agents`, use the `Master-Agent-Template.yaml` to create new agents or customize the existing Orchestrator and Specialist templates. Define their personas, skills, and commands in plain English.
3. **Build Your Workflows**: In `/templates/workflows`, link agents and tasks together to create complex, automated processes.
The deployed application reads these YAML and Markdown files to dynamically construct and run your AI workforce. When you update a template, your live agents automatically adopt the new behaviors.
## What's Included
This expansion pack has a comprehensive structure to get you started:
```
/
├── deployment/ # Dockerfile and cloudbuild.yaml for deployment
├── scripts/ # GCP setup scripts (project config, service accounts)
├── src/ # Python source code (main.py, settings.py)
├── templates/
│ ├── agents/ # Master, Orchestrator, Specialist agent templates
│ ├── teams/ # Team structure templates
│ ├── tasks/ # Generic and specialized task templates
│ ├── documents/ # Document and report templates
│ ├── checklists/ # Quality validation checklists
│ ├── workflows/ # Workflow definition templates
│ └── ...and more
├── config/ # Customization guides and variable files
└── requirements.txt # Python package dependencies
```
## Contributing
Contributions are welcome! Please follow the main project's `CONTRIBUTING.md` guidelines. For major changes or new features for this expansion pack, please open an issue or discussion first.

package-lock.json (generated, 25202 changed lines; diff suppressed because it is too large)

package.json

@@ -1,6 +1,6 @@
{
  "name": "bmad-method",
  "version": "4.35.1",
  "version": "4.36.2",
  "description": "Breakthrough Method of Agile AI-driven Development",
  "main": "tools/cli.js",
  "bin": {
@@ -40,9 +40,9 @@
"commander": "^14.0.0",
"fs-extra": "^11.3.0",
"glob": "^11.0.3",
"ignore": "^7.0.5",
"inquirer": "^8.2.6",
"js-yaml": "^4.1.0",
"minimatch": "^10.0.3",
"ora": "^5.4.1"
},
"keywords": [

tools/flattener/aggregate.js (new file)

@@ -0,0 +1,76 @@
const fs = require("fs-extra");
const path = require("node:path");
const os = require("node:os");
const { isBinaryFile } = require("./binary.js");
/**
* Aggregate file contents with bounded concurrency.
* Returns text files, binary files (with size), and errors.
* @param {string[]} files absolute file paths
* @param {string} rootDir
* @param {{ text?: string, warn?: (msg: string) => void } | null} spinner
*/
async function aggregateFileContents(files, rootDir, spinner = null) {
const results = {
textFiles: [],
binaryFiles: [],
errors: [],
totalFiles: files.length,
processedFiles: 0,
};
// Automatic concurrency selection based on CPU count and workload size.
// - Base on 2x logical CPUs, clamped to [2, 64]
// - For very small workloads, avoid excessive parallelism
const cpuCount = (os.cpus && Array.isArray(os.cpus()) ? os.cpus().length : (os.cpus?.length || 4));
let concurrency = Math.min(64, Math.max(2, (Number(cpuCount) || 4) * 2));
if (files.length > 0 && files.length < concurrency) {
concurrency = Math.max(1, Math.min(concurrency, Math.ceil(files.length / 2)));
}
async function processOne(filePath) {
try {
const relativePath = path.relative(rootDir, filePath);
if (spinner) {
spinner.text = `Processing: ${relativePath} (${results.processedFiles + 1}/${results.totalFiles})`;
}
const binary = await isBinaryFile(filePath);
if (binary) {
const size = (await fs.stat(filePath)).size;
results.binaryFiles.push({ path: relativePath, absolutePath: filePath, size });
} else {
const content = await fs.readFile(filePath, "utf8");
results.textFiles.push({
path: relativePath,
absolutePath: filePath,
content,
size: content.length,
lines: content.split("\n").length,
});
}
} catch (error) {
const relativePath = path.relative(rootDir, filePath);
const errorInfo = { path: relativePath, absolutePath: filePath, error: error.message };
results.errors.push(errorInfo);
if (spinner) {
spinner.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
} else {
console.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
}
} finally {
results.processedFiles++;
}
}
for (let i = 0; i < files.length; i += concurrency) {
const slice = files.slice(i, i + concurrency);
await Promise.all(slice.map(processOne));
}
return results;
}
module.exports = {
aggregateFileContents,
};
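
The `spinner` parameter only needs the `text` setter and `warn` method used above, so any object with that shape works. A minimal stub, shown for illustration only:

```js
const { aggregateFileContents } = require("./aggregate.js");

// Minimal spinner-compatible object: progress lines and warnings go to stderr.
const plainSpinner = {
  set text(msg) { process.stderr.write(`\r${msg}`); },
  warn(msg) { process.stderr.write(`\n${msg}\n`); },
};

// Usage (files is an array of absolute paths, e.g. from discoverFiles):
// const results = await aggregateFileContents(files, rootDir, plainSpinner);
```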

tools/flattener/binary.js (new file, 53 lines)

@@ -0,0 +1,53 @@
const fsp = require("node:fs/promises");
const path = require("node:path");
const { Buffer } = require("node:buffer");

/**
 * Efficiently determine if a file is binary without reading the whole file.
 * - Fast path by extension for common binaries
 * - Otherwise read a small prefix and check for NUL bytes
 * @param {string} filePath
 * @returns {Promise<boolean>}
 */
async function isBinaryFile(filePath) {
  try {
    const stats = await fsp.stat(filePath);
    if (stats.isDirectory()) {
      throw new Error("EISDIR: illegal operation on a directory");
    }
    const binaryExtensions = new Set([
      ".jpg", ".jpeg", ".png", ".gif", ".bmp", ".ico", ".svg",
      ".pdf", ".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx",
      ".zip", ".tar", ".gz", ".rar", ".7z",
      ".exe", ".dll", ".so", ".dylib",
      ".mp3", ".mp4", ".avi", ".mov", ".wav",
      ".ttf", ".otf", ".woff", ".woff2",
      ".bin", ".dat", ".db", ".sqlite",
    ]);
    const ext = path.extname(filePath).toLowerCase();
    if (binaryExtensions.has(ext)) return true;
    if (stats.size === 0) return false;

    const sampleSize = Math.min(4096, stats.size);
    const fd = await fsp.open(filePath, "r");
    try {
      const buffer = Buffer.allocUnsafe(sampleSize);
      const { bytesRead } = await fd.read(buffer, 0, sampleSize, 0);
      const slice = bytesRead === sampleSize ? buffer : buffer.subarray(0, bytesRead);
      return slice.includes(0);
    } finally {
      await fd.close();
    }
  } catch (error) {
    console.warn(
      `Warning: Could not determine if file is binary: ${filePath} - ${error.message}`,
    );
    return false;
  }
}

module.exports = {
  isBinaryFile,
};
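
A quick usage sketch for the detector above (file paths are hypothetical; note that `stat` runs first, so the file must exist even for the extension fast path):

```js
const { isBinaryFile } = require("./binary.js");

(async () => {
  // Hits the extension fast path: ".png" is in the binary-extension set.
  console.log(await isBinaryFile("logo.png")); // true (if the file exists)
  // Falls through to the NUL-byte prefix check for unknown extensions.
  console.log(await isBinaryFile("src/index.js")); // false for typical UTF-8 text
})();
```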

tools/flattener/discovery.js (new file)

@@ -0,0 +1,70 @@
const path = require("node:path");
const { execFile } = require("node:child_process");
const { promisify } = require("node:util");
const { glob } = require("glob");
const { loadIgnore } = require("./ignoreRules.js");

const pExecFile = promisify(execFile);

async function isGitRepo(rootDir) {
  try {
    const { stdout } = await pExecFile("git", [
      "rev-parse",
      "--is-inside-work-tree",
    ], { cwd: rootDir });
    return String(stdout || "").toString().trim() === "true";
  } catch {
    return false;
  }
}

async function gitListFiles(rootDir) {
  try {
    const { stdout } = await pExecFile("git", [
      "ls-files",
      "-co",
      "--exclude-standard",
    ], { cwd: rootDir });
    return String(stdout || "")
      .split(/\r?\n/)
      .map((s) => s.trim())
      .filter(Boolean);
  } catch {
    return [];
  }
}

/**
 * Discover files under rootDir.
 * - Prefer git ls-files when available for speed/correctness
 * - Fallback to glob and apply unified ignore rules
 * @param {string} rootDir
 * @param {object} [options]
 * @param {boolean} [options.preferGit=true]
 * @returns {Promise<string[]>} absolute file paths
 */
async function discoverFiles(rootDir, options = {}) {
  const { preferGit = true } = options;
  const { filter } = await loadIgnore(rootDir);

  // Try git first
  if (preferGit && await isGitRepo(rootDir)) {
    const relFiles = await gitListFiles(rootDir);
    const filteredRel = relFiles.filter((p) => filter(p));
    return filteredRel.map((p) => path.resolve(rootDir, p));
  }

  // Glob fallback
  const globbed = await glob("**/*", {
    cwd: rootDir,
    nodir: true,
    dot: true,
    follow: false,
  });
  const filteredRel = globbed.filter((p) => filter(p));
  return filteredRel.map((p) => path.resolve(rootDir, p));
}

module.exports = {
  discoverFiles,
};

tools/flattener/files.js (new file, 35 lines)

@@ -0,0 +1,35 @@
const path = require("node:path");
const discovery = require("./discovery.js");
const ignoreRules = require("./ignoreRules.js");
const { isBinaryFile } = require("./binary.js");
const { aggregateFileContents } = require("./aggregate.js");

// Backward-compatible signature; delegate to central loader
async function parseGitignore(gitignorePath) {
  return await ignoreRules.parseGitignore(gitignorePath);
}

async function discoverFiles(rootDir) {
  try {
    // Delegate to discovery module which respects .gitignore and defaults
    return await discovery.discoverFiles(rootDir, { preferGit: true });
  } catch (error) {
    console.error("Error discovering files:", error.message);
    return [];
  }
}

async function filterFiles(files, rootDir) {
  const { filter } = await ignoreRules.loadIgnore(rootDir);
  const relativeFiles = files.map((f) => path.relative(rootDir, f));
  const filteredRelative = relativeFiles.filter((p) => filter(p));
  return filteredRelative.map((p) => path.resolve(rootDir, p));
}

module.exports = {
  parseGitignore,
  discoverFiles,
  isBinaryFile,
  aggregateFileContents,
  filterFiles,
};
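
Taken together, these modules form the discover-then-aggregate pipeline the README describes. A minimal sketch (no spinner; the current working directory is assumed as the root):

```js
const { discoverFiles, aggregateFileContents } = require("./files.js");

(async () => {
  const rootDir = process.cwd();
  // 1. Discover candidate files (git ls-files when available, glob otherwise).
  const files = await discoverFiles(rootDir);
  // 2. Read them with bounded concurrency, separating text from binary.
  const results = await aggregateFileContents(files, rootDir, null);
  console.log(`${results.textFiles.length} text, ${results.binaryFiles.length} binary, ${results.errors.length} errors`);
})();
```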

tools/flattener/ignoreRules.js (new file)

@@ -0,0 +1,176 @@
const fs = require("fs-extra");
const path = require("node:path");
const ignore = require("ignore");
// Central default ignore patterns for discovery and filtering.
// These complement .gitignore and are applied regardless of VCS presence.
const DEFAULT_PATTERNS = [
// Project/VCS
"**/.bmad-core/**",
"**/.git/**",
"**/.svn/**",
"**/.hg/**",
"**/.bzr/**",
// Package/build outputs
"**/node_modules/**",
"**/bower_components/**",
"**/vendor/**",
"**/packages/**",
"**/build/**",
"**/dist/**",
"**/out/**",
"**/target/**",
"**/bin/**",
"**/obj/**",
"**/release/**",
"**/debug/**",
// Environments
"**/.venv/**",
"**/venv/**",
"**/.virtualenv/**",
"**/virtualenv/**",
"**/env/**",
// Logs & coverage
"**/*.log",
"**/npm-debug.log*",
"**/yarn-debug.log*",
"**/yarn-error.log*",
"**/lerna-debug.log*",
"**/coverage/**",
"**/.nyc_output/**",
"**/.coverage/**",
"**/test-results/**",
// Caches & temp
"**/.cache/**",
"**/.tmp/**",
"**/.temp/**",
"**/tmp/**",
"**/temp/**",
"**/.sass-cache/**",
// IDE/editor
"**/.vscode/**",
"**/.idea/**",
"**/*.swp",
"**/*.swo",
"**/*~",
"**/.project",
"**/.classpath",
"**/.settings/**",
"**/*.sublime-project",
"**/*.sublime-workspace",
// Lockfiles
"**/package-lock.json",
"**/yarn.lock",
"**/pnpm-lock.yaml",
"**/composer.lock",
"**/Pipfile.lock",
// Python/Java/compiled artifacts
"**/*.pyc",
"**/*.pyo",
"**/*.pyd",
"**/__pycache__/**",
"**/*.class",
"**/*.jar",
"**/*.war",
"**/*.ear",
"**/*.o",
"**/*.so",
"**/*.dll",
"**/*.exe",
// System junk
"**/lib64/**",
"**/.venv/lib64/**",
"**/venv/lib64/**",
"**/_site/**",
"**/.jekyll-cache/**",
"**/.jekyll-metadata",
"**/.DS_Store",
"**/.DS_Store?",
"**/._*",
"**/.Spotlight-V100/**",
"**/.Trashes/**",
"**/ehthumbs.db",
"**/Thumbs.db",
"**/desktop.ini",
// XML outputs
"**/flattened-codebase.xml",
"**/repomix-output.xml",
// Images, media, fonts, archives, docs, dylibs
"**/*.jpg",
"**/*.jpeg",
"**/*.png",
"**/*.gif",
"**/*.bmp",
"**/*.ico",
"**/*.svg",
"**/*.pdf",
"**/*.doc",
"**/*.docx",
"**/*.xls",
"**/*.xlsx",
"**/*.ppt",
"**/*.pptx",
"**/*.zip",
"**/*.tar",
"**/*.gz",
"**/*.rar",
"**/*.7z",
"**/*.dylib",
"**/*.mp3",
"**/*.mp4",
"**/*.avi",
"**/*.mov",
"**/*.wav",
"**/*.ttf",
"**/*.otf",
"**/*.woff",
"**/*.woff2",
// Env files
"**/.env",
"**/.env.*",
"**/*.env",
// Misc
"**/junit.xml",
];
async function readIgnoreFile(filePath) {
  try {
    if (!await fs.pathExists(filePath)) return [];
    const content = await fs.readFile(filePath, "utf8");
    return content
      .split("\n")
      .map((l) => l.trim())
      .filter((l) => l && !l.startsWith("#"));
  } catch (err) {
    return [];
  }
}

// Backward compatible export matching previous signature
async function parseGitignore(gitignorePath) {
  return readIgnoreFile(gitignorePath);
}

async function loadIgnore(rootDir, extraPatterns = []) {
  const ig = ignore();
  const gitignorePath = path.join(rootDir, ".gitignore");
  const patterns = [
    ...await readIgnoreFile(gitignorePath),
    ...DEFAULT_PATTERNS,
    ...extraPatterns,
  ];
  // De-duplicate
  const unique = Array.from(new Set(patterns.map((p) => String(p))));
  ig.add(unique);
  // Include-only filter: return true if path should be included
  const filter = (relativePath) => !ig.ignores(relativePath.replace(/\\/g, "/"));
  return { ig, filter, patterns: unique };
}

module.exports = {
  DEFAULT_PATTERNS,
  parseGitignore,
  loadIgnore,
};
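
A short sketch of how the include-only filter is applied to repo-relative paths (example paths are hypothetical):

```js
const { loadIgnore } = require("./ignoreRules.js");

(async () => {
  const { filter } = await loadIgnore(process.cwd());
  // filter() returns true when a path should be KEPT.
  console.log(filter("src/index.js"));        // true
  console.log(filter("node_modules/x/y.js")); // false (default pattern)
  console.log(filter(".env"));                // false (env files are excluded)
})();
```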

File diff suppressed because it is too large.


@@ -0,0 +1,204 @@
const fs = require("fs-extra");
const path = require("node:path");
// Deno/Node compatibility: explicitly import process
const process = require("node:process");
const { execFile } = require("node:child_process");
const { promisify } = require("node:util");
const execFileAsync = promisify(execFile);
// Simple memoization across calls (keyed by realpath of startDir)
const _cache = new Map();
async function _tryRun(cmd, args, cwd, timeoutMs = 500) {
try {
const { stdout } = await execFileAsync(cmd, args, {
cwd,
timeout: timeoutMs,
windowsHide: true,
maxBuffer: 1024 * 1024,
});
const out = String(stdout || "").trim();
return out || null;
} catch {
return null;
}
}
async function _detectVcsTopLevel(startDir) {
// Run common VCS root queries in parallel; ignore failures
const gitP = _tryRun("git", ["rev-parse", "--show-toplevel"], startDir);
const hgP = _tryRun("hg", ["root"], startDir);
const svnP = (async () => {
const show = await _tryRun("svn", ["info", "--show-item", "wc-root"], startDir);
if (show) return show;
const info = await _tryRun("svn", ["info"], startDir);
if (info) {
const line = info.split(/\r?\n/).find((l) => l.toLowerCase().startsWith("working copy root path:"));
if (line) return line.split(":").slice(1).join(":").trim();
}
return null;
})();
const [git, hg, svn] = await Promise.all([gitP, hgP, svnP]);
return git || hg || svn || null;
}
/**
* Attempt to find the project root by walking up from startDir.
* Uses a robust, prioritized set of ecosystem markers (VCS > workspaces/monorepo > lock/build > language config).
* Also recognizes package.json with "workspaces" as a workspace root.
* You can augment markers via env PROJECT_ROOT_MARKERS as a comma-separated list of file/dir names.
* @param {string} startDir
* @returns {Promise<string|null>} project root directory or null if not found
*/
async function findProjectRoot(startDir) {
try {
// Resolve symlinks for robustness (e.g., when invoked from a symlinked path)
let dir = path.resolve(startDir);
try {
dir = await fs.realpath(dir);
} catch {
// ignore if realpath fails; continue with resolved path
}
const startKey = dir; // preserve starting point for caching
if (_cache.has(startKey)) return _cache.get(startKey);
const fsRoot = path.parse(dir).root;
// Helper to safely check for existence
const exists = (p) => fs.pathExists(p);
// Build checks: an array of { makePath: (dir) => string, weight }
const checks = [];
const add = (rel, weight) => {
const makePath = (d) => Array.isArray(rel) ? path.join(d, ...rel) : path.join(d, rel);
checks.push({ makePath, weight });
};
// Highest priority: explicit sentinel markers
add(".project-root", 110);
add(".workspace-root", 110);
add(".repo-root", 110);
// Highest priority: VCS roots
add(".git", 100);
add(".hg", 95);
add(".svn", 95);
// Monorepo/workspace indicators
add("pnpm-workspace.yaml", 90);
add("lerna.json", 90);
add("turbo.json", 90);
add("nx.json", 90);
add("rush.json", 90);
add("go.work", 90);
add("WORKSPACE", 90);
add("WORKSPACE.bazel", 90);
add("MODULE.bazel", 90);
add("pants.toml", 90);
// Lockfiles and package-manager/top-level locks
add("yarn.lock", 85);
add("pnpm-lock.yaml", 85);
add("package-lock.json", 85);
add("bun.lockb", 85);
add("Cargo.lock", 85);
add("composer.lock", 85);
add("poetry.lock", 85);
add("Pipfile.lock", 85);
add("Gemfile.lock", 85);
// Build-system root indicators
add("settings.gradle", 80);
add("settings.gradle.kts", 80);
add("gradlew", 80);
add("pom.xml", 80);
add("build.sbt", 80);
add(["project", "build.properties"], 80);
// Language/project config markers
add("deno.json", 75);
add("deno.jsonc", 75);
add("pyproject.toml", 75);
add("Pipfile", 75);
add("requirements.txt", 75);
add("go.mod", 75);
add("Cargo.toml", 75);
add("composer.json", 75);
add("mix.exs", 75);
add("Gemfile", 75);
add("CMakeLists.txt", 75);
add("stack.yaml", 75);
add("cabal.project", 75);
add("rebar.config", 75);
add("pubspec.yaml", 75);
add("flake.nix", 75);
add("shell.nix", 75);
add("default.nix", 75);
add(".tool-versions", 75);
add("package.json", 74); // generic Node project (lower than lockfiles/workspaces)
// Changesets
add([".changeset", "config.json"], 70);
add(".changeset", 70);
// Custom markers via env (comma-separated names)
if (process.env.PROJECT_ROOT_MARKERS) {
for (const name of process.env.PROJECT_ROOT_MARKERS.split(",").map((s) => s.trim()).filter(Boolean)) {
add(name, 72);
}
}
/** Check for package.json with "workspaces" */
const hasWorkspacePackageJson = async (d) => {
const pkgPath = path.join(d, "package.json");
if (!(await exists(pkgPath))) return false;
try {
const raw = await fs.readFile(pkgPath, "utf8");
const pkg = JSON.parse(raw);
return Boolean(pkg && pkg.workspaces);
} catch {
return false;
}
};
let best = null; // { dir, weight }
// Try to detect VCS toplevel once up-front; treat as authoritative slightly above .git marker
const vcsTop = await _detectVcsTopLevel(dir);
if (vcsTop) {
best = { dir: vcsTop, weight: 101 };
}
while (true) {
// Special check: package.json with "workspaces"
if (await hasWorkspacePackageJson(dir)) {
if (!best || 90 >= best.weight) best = { dir, weight: 90 };
}
// Evaluate all other checks in parallel
const results = await Promise.all(
checks.map(async (c) => ({ c, ok: await exists(c.makePath(dir)) })),
);
for (const { c, ok } of results) {
if (!ok) continue;
if (!best || c.weight >= best.weight) {
best = { dir, weight: c.weight };
}
}
if (dir === fsRoot) break;
dir = path.dirname(dir);
}
const out = best ? best.dir : null;
_cache.set(startKey, out);
return out;
} catch {
return null;
}
}
module.exports = { findProjectRoot };
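
A usage sketch for the resolver above, including the documented PROJECT_ROOT_MARKERS escape hatch (the require path and marker name are illustrative):

```js
const process = require("node:process");
const { findProjectRoot } = require("./projectRoot.js");

(async () => {
  // Optional: add a custom sentinel file name (registered at weight 72, per the code above).
  process.env.PROJECT_ROOT_MARKERS = ".my-root-marker";
  const root = await findProjectRoot(process.cwd());
  console.log(root ?? "no project root found");
})();
```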


@@ -0,0 +1,44 @@
const os = require("node:os");
const path = require("node:path");
const readline = require("node:readline");
const process = require("node:process");

function expandHome(p) {
  if (!p) return p;
  if (p.startsWith("~")) return path.join(os.homedir(), p.slice(1));
  return p;
}

function createRl() {
  return readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });
}

function promptQuestion(question) {
  return new Promise((resolve) => {
    const rl = createRl();
    rl.question(question, (answer) => {
      rl.close();
      resolve(answer);
    });
  });
}

async function promptYesNo(question, defaultYes = true) {
  const suffix = defaultYes ? " [Y/n] " : " [y/N] ";
  const ans = (await promptQuestion(`${question}${suffix}`)).trim().toLowerCase();
  if (!ans) return defaultYes;
  if (["y", "yes"].includes(ans)) return true;
  if (["n", "no"].includes(ans)) return false;
  return promptYesNo(question, defaultYes);
}

async function promptPath(question, defaultValue) {
  const prompt = `${question}${defaultValue ? ` (default: ${defaultValue})` : ""}: `;
  const ans = (await promptQuestion(prompt)).trim();
  return expandHome(ans || defaultValue);
}

module.exports = { promptYesNo, promptPath, promptQuestion, expandHome };
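
A sketch of how these helpers support the confirm-or-override flow described in the README's interactive mode (prompt wording and require path are illustrative):

```js
const { promptYesNo, promptPath } = require("./prompts.js");

(async () => {
  const detected = process.cwd(); // stand-in for the detected project root
  const useDetected = await promptYesNo(`Flatten from detected root ${detected}?`, true);
  const root = useDetected ? detected : await promptPath("Enter project root", detected);
  console.log(`Flattening from: ${root}`);
})();
```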


@@ -0,0 +1,331 @@
"use strict";
const fs = require("node:fs/promises");
const path = require("node:path");
const zlib = require("node:zlib");
const { Buffer } = require("node:buffer");
const crypto = require("node:crypto");
const cp = require("node:child_process");
const KB = 1024;
const MB = 1024 * KB;
const formatSize = (bytes) => {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
if (bytes < 1024 * 1024 * 1024) return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
return `${(bytes / (1024 * 1024 * 1024)).toFixed(2)} GB`;
};
const percentile = (sorted, p) => {
if (sorted.length === 0) return 0;
const idx = Math.min(sorted.length - 1, Math.max(0, Math.ceil((p / 100) * sorted.length) - 1));
return sorted[idx];
};
async function processWithLimit(items, fn, concurrency = 64) {
for (let i = 0; i < items.length; i += concurrency) {
await Promise.all(items.slice(i, i + concurrency).map(fn));
}
}
async function enrichAllFiles(textFiles, binaryFiles) {
/** @type {Array<{ path: string; absolutePath: string; size: number; lines?: number; isBinary: boolean; ext: string; dir: string; depth: number; hidden: boolean; mtimeMs: number; isSymlink: boolean; }>} */
const allFiles = [];
async function enrich(file, isBinary) {
const ext = (path.extname(file.path) || "").toLowerCase();
const dir = path.dirname(file.path) || ".";
const depth = file.path.split(path.sep).filter(Boolean).length;
const hidden = file.path.split(path.sep).some((seg) => seg.startsWith("."));
let mtimeMs = 0;
let isSymlink = false;
try {
const lst = await fs.lstat(file.absolutePath);
mtimeMs = lst.mtimeMs;
isSymlink = lst.isSymbolicLink();
} catch (_) { /* ignore lstat errors during enrichment */ }
allFiles.push({
path: file.path,
absolutePath: file.absolutePath,
size: file.size || 0,
lines: file.lines,
isBinary,
ext,
dir,
depth,
hidden,
mtimeMs,
isSymlink,
});
}
await processWithLimit(textFiles, (f) => enrich(f, false));
await processWithLimit(binaryFiles, (f) => enrich(f, true));
return allFiles;
}
function buildHistogram(allFiles) {
const buckets = [
[1 * KB, "01KB"],
[10 * KB, "110KB"],
[100 * KB, "10100KB"],
[1 * MB, "100KB1MB"],
[10 * MB, "110MB"],
[100 * MB, "10100MB"],
[Infinity, ">=100MB"],
];
const histogram = buckets.map(([_, label]) => ({ label, count: 0, bytes: 0 }));
for (const f of allFiles) {
for (let i = 0; i < buckets.length; i++) {
if (f.size < buckets[i][0]) {
histogram[i].count++;
histogram[i].bytes += f.size;
break;
}
}
}
return histogram;
}
function aggregateByExtension(allFiles) {
const byExtension = new Map();
for (const f of allFiles) {
const key = f.ext || "<none>";
const v = byExtension.get(key) || { ext: key, count: 0, bytes: 0 };
v.count++;
v.bytes += f.size;
byExtension.set(key, v);
}
return Array.from(byExtension.values()).sort((a, b) => b.bytes - a.bytes);
}
function aggregateByDirectory(allFiles) {
const byDirectory = new Map();
function addDirBytes(dir, bytes) {
const v = byDirectory.get(dir) || { dir, count: 0, bytes: 0 };
v.count++;
v.bytes += bytes;
byDirectory.set(dir, v);
}
for (const f of allFiles) {
const parts = f.dir === "." ? [] : f.dir.split(path.sep);
let acc = "";
for (let i = 0; i < parts.length; i++) {
acc = i === 0 ? parts[0] : acc + path.sep + parts[i];
addDirBytes(acc, f.size);
}
if (parts.length === 0) addDirBytes(".", f.size);
}
return Array.from(byDirectory.values()).sort((a, b) => b.bytes - a.bytes);
}
function computeDepthAndLongest(allFiles) {
const depthDistribution = new Map();
for (const f of allFiles) {
depthDistribution.set(f.depth, (depthDistribution.get(f.depth) || 0) + 1);
}
const longestPaths = [...allFiles]
.sort((a, b) => b.path.length - a.path.length)
.slice(0, 25)
.map((f) => ({ path: f.path, length: f.path.length, size: f.size }));
const depthDist = Array.from(depthDistribution.entries())
.sort((a, b) => a[0] - b[0])
.map(([depth, count]) => ({ depth, count }));
return { depthDist, longestPaths };
}
function computeTemporal(allFiles, nowMs) {
let oldest = null, newest = null;
const ageBuckets = [
{ label: "> 1 year", minDays: 365, maxDays: Infinity, count: 0, bytes: 0 },
{ label: "612 months", minDays: 180, maxDays: 365, count: 0, bytes: 0 },
{ label: "16 months", minDays: 30, maxDays: 180, count: 0, bytes: 0 },
{ label: "730 days", minDays: 7, maxDays: 30, count: 0, bytes: 0 },
{ label: "17 days", minDays: 1, maxDays: 7, count: 0, bytes: 0 },
{ label: "< 1 day", minDays: 0, maxDays: 1, count: 0, bytes: 0 },
];
for (const f of allFiles) {
const ageDays = Math.max(0, (nowMs - (f.mtimeMs || nowMs)) / (24 * 60 * 60 * 1000));
for (const b of ageBuckets) {
if (ageDays >= b.minDays && ageDays < b.maxDays) {
b.count++;
b.bytes += f.size;
break;
}
}
if (!oldest || f.mtimeMs < oldest.mtimeMs) oldest = f;
if (!newest || f.mtimeMs > newest.mtimeMs) newest = f;
}
return {
oldest: oldest ? { path: oldest.path, mtime: oldest.mtimeMs ? new Date(oldest.mtimeMs).toISOString() : null } : null,
newest: newest ? { path: newest.path, mtime: newest.mtimeMs ? new Date(newest.mtimeMs).toISOString() : null } : null,
ageBuckets,
};
}
function computeQuality(allFiles, textFiles) {
const zeroByteFiles = allFiles.filter((f) => f.size === 0).length;
const emptyTextFiles = textFiles.filter((f) => (f.size || 0) === 0 || (f.lines || 0) === 0).length;
const hiddenFiles = allFiles.filter((f) => f.hidden).length;
const symlinks = allFiles.filter((f) => f.isSymlink).length;
const largeThreshold = 50 * MB;
const suspiciousThreshold = 100 * MB;
const largeFilesCount = allFiles.filter((f) => f.size >= largeThreshold).length;
const suspiciousLargeFilesCount = allFiles.filter((f) => f.size >= suspiciousThreshold).length;
return {
zeroByteFiles,
emptyTextFiles,
hiddenFiles,
symlinks,
largeFilesCount,
suspiciousLargeFilesCount,
largeThreshold,
};
}
function computeDuplicates(allFiles, textFiles) {
const duplicatesBySize = new Map();
for (const f of allFiles) {
const key = String(f.size);
const arr = duplicatesBySize.get(key) || [];
arr.push(f);
duplicatesBySize.set(key, arr);
}
const duplicateCandidates = [];
for (const [sizeKey, arr] of duplicatesBySize.entries()) {
if (arr.length < 2) continue;
const textGroup = arr.filter((f) => !f.isBinary);
const otherGroup = arr.filter((f) => f.isBinary);
const contentHashGroups = new Map();
for (const tf of textGroup) {
try {
const src = textFiles.find((x) => x.absolutePath === tf.absolutePath);
const content = src ? src.content : "";
const h = crypto.createHash("sha1").update(content).digest("hex");
const g = contentHashGroups.get(h) || [];
g.push(tf);
contentHashGroups.set(h, g);
} catch (_) { /* ignore hashing errors for duplicate detection */ }
}
for (const [_h, g] of contentHashGroups.entries()) {
if (g.length > 1) duplicateCandidates.push({ reason: "same-size+text-hash", size: Number(sizeKey), count: g.length, files: g.map((f) => f.path) });
}
if (otherGroup.length > 1) {
duplicateCandidates.push({ reason: "same-size", size: Number(sizeKey), count: otherGroup.length, files: otherGroup.map((f) => f.path) });
}
}
return duplicateCandidates;
}
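// Gzips up to a 256 KB sample of each text file and returns the aggregate
// compressed/original ratio (e.g. ~0.3 means samples gzip to ~30% of their size),
// or null when nothing was sampled.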
function estimateCompressibility(textFiles) {
let compSampleBytes = 0;
let compCompressedBytes = 0;
for (const tf of textFiles) {
try {
const sampleLen = Math.min(256 * 1024, tf.size || 0);
if (sampleLen <= 0) continue;
const sample = tf.content.slice(0, sampleLen);
const gz = zlib.gzipSync(Buffer.from(sample, "utf8"));
compSampleBytes += sampleLen;
compCompressedBytes += gz.length;
} catch (_) { /* ignore compression errors during sampling */ }
}
return compSampleBytes > 0 ? compCompressedBytes / compSampleBytes : null;
}
function computeGitInfo(allFiles, rootDir, largeThreshold) {
const info = {
isRepo: false,
trackedCount: 0,
trackedBytes: 0,
untrackedCount: 0,
untrackedBytes: 0,
lfsCandidates: [],
};
try {
if (!rootDir) return info;
const top = cp.execFileSync("git", ["rev-parse", "--show-toplevel"], { cwd: rootDir, stdio: ["ignore", "pipe", "ignore"] }).toString().trim();
if (!top) return info;
info.isRepo = true;
const out = cp.execFileSync("git", ["ls-files", "-z"], { cwd: rootDir, stdio: ["ignore", "pipe", "ignore"] });
const tracked = new Set(out.toString().split("\0").filter(Boolean));
let trackedBytes = 0, trackedCount = 0, untrackedBytes = 0, untrackedCount = 0;
const lfsCandidates = [];
for (const f of allFiles) {
const isTracked = tracked.has(f.path);
if (isTracked) {
trackedCount++; trackedBytes += f.size;
if (f.size >= largeThreshold) lfsCandidates.push({ path: f.path, size: f.size });
} else {
untrackedCount++; untrackedBytes += f.size;
}
}
info.trackedCount = trackedCount;
info.trackedBytes = trackedBytes;
info.untrackedCount = untrackedCount;
info.untrackedBytes = untrackedBytes;
info.lfsCandidates = lfsCandidates.sort((a, b) => b.size - a.size).slice(0, 50);
} catch (_) { /* git not available or not a repo, ignore */ }
return info;
}
function computeLargestFiles(allFiles, totalBytes) {
const toPct = (num, den) => (den === 0 ? 0 : (num / den) * 100);
return [...allFiles]
.sort((a, b) => b.size - a.size)
.slice(0, 50)
.map((f) => ({
path: f.path,
size: f.size,
sizeFormatted: formatSize(f.size),
percentOfTotal: toPct(f.size, totalBytes),
ext: f.ext || "",
isBinary: f.isBinary,
mtime: f.mtimeMs ? new Date(f.mtimeMs).toISOString() : null,
}));
}
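// Renders a GitHub-flavored markdown table, e.g.
// mdTable([["a.js", "1.5 KB"]], ["Path", "Size"]) ->
// "| Path | Size |\n| --- | --- |\n| a.js | 1.5 KB |"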
function mdTable(rows, headers) {
const header = `| ${headers.join(" | ")} |`;
const sep = `| ${headers.map(() => "---").join(" | ")} |`;
const body = rows.map((r) => `| ${r.join(" | ")} |`).join("\n");
return `${header}\n${sep}\n${body}`;
}
function buildMarkdownReport(largestFiles, byExtensionArr, byDirectoryArr, totalBytes) {
const toPct = (num, den) => (den === 0 ? 0 : (num / den) * 100);
const md = [];
md.push("\n### Top Largest Files (Top 50)\n");
md.push(mdTable(
largestFiles.map((f) => [f.path, f.sizeFormatted, `${f.percentOfTotal.toFixed(2)}%`, f.ext || "", f.isBinary ? "binary" : "text"]),
["Path", "Size", "% of total", "Ext", "Type"],
));
md.push("\n\n### Top Extensions by Bytes (Top 20)\n");
const topExtRows = byExtensionArr.slice(0, 20).map((e) => [e.ext, String(e.count), formatSize(e.bytes), `${toPct(e.bytes, totalBytes).toFixed(2)}%`]);
md.push(mdTable(topExtRows, ["Ext", "Count", "Bytes", "% of total"]));
md.push("\n\n### Top Directories by Bytes (Top 20)\n");
const topDirRows = byDirectoryArr.slice(0, 20).map((d) => [d.dir, String(d.count), formatSize(d.bytes), `${toPct(d.bytes, totalBytes).toFixed(2)}%`]);
md.push(mdTable(topDirRows, ["Directory", "Files", "Bytes", "% of total"]));
return md.join("\n");
}
module.exports = {
KB,
MB,
formatSize,
percentile,
processWithLimit,
enrichAllFiles,
buildHistogram,
aggregateByExtension,
aggregateByDirectory,
computeDepthAndLongest,
computeTemporal,
computeQuality,
computeDuplicates,
estimateCompressibility,
computeGitInfo,
computeLargestFiles,
buildMarkdownReport,
};

tools/flattener/stats.js Normal file

@@ -0,0 +1,80 @@
const H = require("./stats.helpers.js");
async function calculateStatistics(aggregatedContent, xmlFileSize, rootDir) {
const { textFiles, binaryFiles, errors } = aggregatedContent;
const totalLines = textFiles.reduce((sum, f) => sum + (f.lines || 0), 0);
const estimatedTokens = Math.ceil(xmlFileSize / 4);
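// Rough heuristic: assumes ~4 bytes per token on average for English text and code.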
// Build enriched file list
const allFiles = await H.enrichAllFiles(textFiles, binaryFiles);
const totalBytes = allFiles.reduce((s, f) => s + f.size, 0);
const sizes = allFiles.map((f) => f.size).sort((a, b) => a - b);
const avgSize = sizes.length ? totalBytes / sizes.length : 0;
const medianSize = sizes.length ? H.percentile(sizes, 50) : 0;
const p90 = H.percentile(sizes, 90);
const p95 = H.percentile(sizes, 95);
const p99 = H.percentile(sizes, 99);
const histogram = H.buildHistogram(allFiles);
const byExtensionArr = H.aggregateByExtension(allFiles);
const byDirectoryArr = H.aggregateByDirectory(allFiles);
const { depthDist, longestPaths } = H.computeDepthAndLongest(allFiles);
const temporal = H.computeTemporal(allFiles, Date.now());
const quality = H.computeQuality(allFiles, textFiles);
const duplicateCandidates = H.computeDuplicates(allFiles, textFiles);
const compressibilityRatio = H.estimateCompressibility(textFiles);
const git = H.computeGitInfo(allFiles, rootDir, quality.largeThreshold);
const largestFiles = H.computeLargestFiles(allFiles, totalBytes);
const markdownReport = H.buildMarkdownReport(
largestFiles,
byExtensionArr,
byDirectoryArr,
totalBytes,
);
return {
// Back-compat summary
totalFiles: textFiles.length + binaryFiles.length,
textFiles: textFiles.length,
binaryFiles: binaryFiles.length,
errorFiles: errors.length,
totalSize: H.formatSize(totalBytes),
totalBytes,
xmlSize: H.formatSize(xmlFileSize),
totalLines,
estimatedTokens: estimatedTokens.toLocaleString(),
// Distributions and percentiles
avgFileSize: avgSize,
medianFileSize: medianSize,
p90,
p95,
p99,
histogram,
// Extensions and directories
byExtension: byExtensionArr,
byDirectory: byDirectoryArr,
depthDistribution: depthDist,
longestPaths,
// Temporal
temporal,
// Quality signals
quality,
// Duplicates and compressibility
duplicateCandidates,
compressibilityRatio,
// Git-aware
git,
largestFiles,
markdownReport,
};
}
module.exports = { calculateStatistics };
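A minimal usage sketch of the module above; the shape of aggregatedContent (textFiles/binaryFiles/errors, each text entry carrying path, absolutePath, size, lines, and content) is inferred from how calculateStatistics destructures it, and the paths and XML size here are hypothetical:

const { calculateStatistics } = require("./stats.js");

const aggregatedContent = {
  textFiles: [{ path: "a.js", absolutePath: "/repo/a.js", size: 15, lines: 1, content: "console.log(1)\n" }],
  binaryFiles: [],
  errors: [],
};

// 4096 = hypothetical byte size of the generated XML output
calculateStatistics(aggregatedContent, 4096, "/repo").then((stats) => {
  console.log(stats.totalFiles, stats.estimatedTokens); // 1 "1,024"
  console.log(stats.markdownReport);
});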


@@ -0,0 +1,405 @@
#!/usr/bin/env node
/* deno-lint-ignore-file */
/*
Automatic test matrix for project root detection.
Creates temporary fixtures for various ecosystems and validates findProjectRoot().
No external options or flags required. Safe to run multiple times.
*/
const os = require("node:os");
const path = require("node:path");
const fs = require("fs-extra");
const { promisify } = require("node:util");
const { execFile } = require("node:child_process");
const process = require("node:process");
const execFileAsync = promisify(execFile);
const { findProjectRoot } = require("./projectRoot.js");
async function cmdAvailable(cmd) {
try {
await execFileAsync(cmd, ["--version"], { timeout: 500, windowsHide: true });
return true;
} catch {
return false;
}
}
async function testSvnMarker() {
const root = await mkTmpDir("svn");
const nested = path.join(root, "proj", "code");
await fs.ensureDir(nested);
await fs.ensureDir(path.join(root, ".svn"));
const found = await findProjectRoot(nested);
assertEqual(found, root, ".svn marker should be detected");
return { name: "svn-marker", ok: true };
}
async function testSymlinkStart() {
const root = await mkTmpDir("symlink-start");
const nested = path.join(root, "a", "b");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, ".project-root"), "\n");
const tmp = await mkTmpDir("symlink-tmp");
const link = path.join(tmp, "link-to-b");
try {
await fs.symlink(nested, link);
} catch {
// symlink may not be permitted on some systems; skip
return { name: "symlink-start", ok: true, skipped: true };
}
const found = await findProjectRoot(link);
assertEqual(found, root, "should resolve symlinked start to real root");
return { name: "symlink-start", ok: true };
}
async function testSubmoduleLikeInnerGitFile() {
const root = await mkTmpDir("submodule-like");
const mid = path.join(root, "mid");
const leaf = path.join(mid, "leaf");
await fs.ensureDir(leaf);
// outer repo
await fs.ensureDir(path.join(root, ".git"));
// inner submodule-like .git file
await fs.writeFile(path.join(mid, ".git"), "gitdir: ../.git/modules/mid\n");
const found = await findProjectRoot(leaf);
assertEqual(found, root, "outermost .git should win on tie weight");
return { name: "submodule-like-gitfile", ok: true };
}
async function mkTmpDir(name) {
const base = await fs.realpath(os.tmpdir());
const dir = await fs.mkdtemp(path.join(base, `flattener-${name}-`));
return dir;
}
function assertEqual(actual, expected, msg) {
if (actual !== expected) {
throw new Error(`${msg}: expected=\"${expected}\" actual=\"${actual}\"`);
}
}
async function testSentinel() {
const root = await mkTmpDir("sentinel");
const nested = path.join(root, "a", "b", "c");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, ".project-root"), "\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "sentinel .project-root should win");
return { name: "sentinel", ok: true };
}
async function testOtherSentinels() {
const root = await mkTmpDir("other-sentinels");
const nested = path.join(root, "x", "y");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, ".workspace-root"), "\n");
const found1 = await findProjectRoot(nested);
assertEqual(found1, root, "sentinel .workspace-root should win");
await fs.remove(path.join(root, ".workspace-root"));
await fs.writeFile(path.join(root, ".repo-root"), "\n");
const found2 = await findProjectRoot(nested);
assertEqual(found2, root, "sentinel .repo-root should win");
return { name: "other-sentinels", ok: true };
}
async function testGitCliAndMarker() {
const hasGit = await cmdAvailable("git");
if (!hasGit) return { name: "git-cli", ok: true, skipped: true };
const root = await mkTmpDir("git");
const nested = path.join(root, "pkg", "src");
await fs.ensureDir(nested);
await execFileAsync("git", ["init"], { cwd: root, timeout: 2000 });
const found = await findProjectRoot(nested);
await assertEqual(found, root, "git toplevel should be detected");
return { name: "git-cli", ok: true };
}
async function testHgMarkerOrCli() {
// Prefer simple marker test to avoid requiring Mercurial install
const root = await mkTmpDir("hg");
const nested = path.join(root, "lib");
await fs.ensureDir(nested);
await fs.ensureDir(path.join(root, ".hg"));
const found = await findProjectRoot(nested);
await assertEqual(found, root, ".hg marker should be detected");
return { name: "hg-marker", ok: true };
}
async function testWorkspacePnpm() {
const root = await mkTmpDir("pnpm-workspace");
const pkgA = path.join(root, "packages", "a");
await fs.ensureDir(pkgA);
await fs.writeFile(path.join(root, "pnpm-workspace.yaml"), "packages:\n - packages/*\n");
const found = await findProjectRoot(pkgA);
await assertEqual(found, root, "pnpm-workspace.yaml should be detected");
return { name: "pnpm-workspace", ok: true };
}
async function testPackageJsonWorkspaces() {
const root = await mkTmpDir("package-workspaces");
const pkgA = path.join(root, "packages", "a");
await fs.ensureDir(pkgA);
await fs.writeJson(path.join(root, "package.json"), { private: true, workspaces: ["packages/*"] }, { spaces: 2 });
const found = await findProjectRoot(pkgA);
await assertEqual(found, root, "package.json workspaces should be detected");
return { name: "package.json-workspaces", ok: true };
}
async function testLockfiles() {
const root = await mkTmpDir("lockfiles");
const nested = path.join(root, "src");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "yarn.lock"), "\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "yarn.lock should be detected");
return { name: "lockfiles", ok: true };
}
async function testLanguageConfigs() {
const root = await mkTmpDir("lang-configs");
const nested = path.join(root, "x", "y");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "pyproject.toml"), "[tool.poetry]\nname='tmp'\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "pyproject.toml should be detected");
return { name: "language-configs", ok: true };
}
async function testPreferOuterOnTie() {
const root = await mkTmpDir("tie");
const mid = path.join(root, "mid");
const leaf = path.join(mid, "leaf");
await fs.ensureDir(leaf);
// same weight marker at two levels
await fs.writeFile(path.join(root, "requirements.txt"), "\n");
await fs.writeFile(path.join(mid, "requirements.txt"), "\n");
const found = await findProjectRoot(leaf);
await assertEqual(found, root, "outermost directory should win on equal weight");
return { name: "prefer-outermost-tie", ok: true };
}
// Additional coverage: Bazel, Nx/Turbo/Rush, Go workspaces, Deno, Java/Scala, PHP, Rust, Nix, Changesets, env markers,
// and priority interaction between package.json and lockfiles.
async function testBazelWorkspace() {
const root = await mkTmpDir("bazel");
const nested = path.join(root, "apps", "svc");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "WORKSPACE"), "workspace(name=\"tmp\")\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "Bazel WORKSPACE should be detected");
return { name: "bazel-workspace", ok: true };
}
async function testNx() {
const root = await mkTmpDir("nx");
const nested = path.join(root, "apps", "web");
await fs.ensureDir(nested);
await fs.writeJson(path.join(root, "nx.json"), { npmScope: "tmp" }, { spaces: 2 });
const found = await findProjectRoot(nested);
await assertEqual(found, root, "nx.json should be detected");
return { name: "nx", ok: true };
}
async function testTurbo() {
const root = await mkTmpDir("turbo");
const nested = path.join(root, "packages", "x");
await fs.ensureDir(nested);
await fs.writeJson(path.join(root, "turbo.json"), { pipeline: {} }, { spaces: 2 });
const found = await findProjectRoot(nested);
await assertEqual(found, root, "turbo.json should be detected");
return { name: "turbo", ok: true };
}
async function testRush() {
const root = await mkTmpDir("rush");
const nested = path.join(root, "apps", "a");
await fs.ensureDir(nested);
await fs.writeJson(path.join(root, "rush.json"), { projectFolderMinDepth: 1 }, { spaces: 2 });
const found = await findProjectRoot(nested);
await assertEqual(found, root, "rush.json should be detected");
return { name: "rush", ok: true };
}
async function testGoWorkAndMod() {
const root = await mkTmpDir("gowork");
const mod = path.join(root, "modA");
const nested = path.join(mod, "pkg");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "go.work"), "go 1.22\nuse ./modA\n");
await fs.writeFile(path.join(mod, "go.mod"), "module example.com/a\ngo 1.22\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "go.work should define the workspace root");
return { name: "go-work", ok: true };
}
async function testDenoJson() {
const root = await mkTmpDir("deno");
const nested = path.join(root, "src");
await fs.ensureDir(nested);
await fs.writeJson(path.join(root, "deno.json"), { tasks: {} }, { spaces: 2 });
const found = await findProjectRoot(nested);
await assertEqual(found, root, "deno.json should be detected");
return { name: "deno-json", ok: true };
}
async function testGradleSettings() {
const root = await mkTmpDir("gradle");
const nested = path.join(root, "app");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "settings.gradle"), "rootProject.name='tmp'\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "settings.gradle should be detected");
return { name: "gradle-settings", ok: true };
}
async function testMavenPom() {
const root = await mkTmpDir("maven");
const nested = path.join(root, "module");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "pom.xml"), "<project></project>\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "pom.xml should be detected");
return { name: "maven-pom", ok: true };
}
async function testSbtBuild() {
const root = await mkTmpDir("sbt");
const nested = path.join(root, "sub");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "build.sbt"), "name := \"tmp\"\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "build.sbt should be detected");
return { name: "sbt-build", ok: true };
}
async function testComposer() {
const root = await mkTmpDir("composer");
const nested = path.join(root, "src");
await fs.ensureDir(nested);
await fs.writeJson(path.join(root, "composer.json"), { name: "tmp/pkg" }, { spaces: 2 });
await fs.writeFile(path.join(root, "composer.lock"), "{}\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "composer.{json,lock} should be detected");
return { name: "composer", ok: true };
}
async function testCargo() {
const root = await mkTmpDir("cargo");
const nested = path.join(root, "src");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "Cargo.toml"), "[package]\nname='tmp'\nversion='0.0.0'\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "Cargo.toml should be detected");
return { name: "cargo", ok: true };
}
async function testNixFlake() {
const root = await mkTmpDir("nix");
const nested = path.join(root, "work");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "flake.nix"), "{ }\n");
const found = await findProjectRoot(nested);
await assertEqual(found, root, "flake.nix should be detected");
return { name: "nix-flake", ok: true };
}
async function testChangesetConfig() {
const root = await mkTmpDir("changeset");
const nested = path.join(root, "pkg");
await fs.ensureDir(nested);
await fs.ensureDir(path.join(root, ".changeset"));
await fs.writeJson(path.join(root, ".changeset", "config.json"), { $schema: "https://unpkg.com/@changesets/config@2.3.1/schema.json" }, { spaces: 2 });
const found = await findProjectRoot(nested);
await assertEqual(found, root, ".changeset/config.json should be detected");
return { name: "changesets", ok: true };
}
async function testEnvCustomMarker() {
const root = await mkTmpDir("env-marker");
const nested = path.join(root, "dir");
await fs.ensureDir(nested);
await fs.writeFile(path.join(root, "MY_ROOT"), "\n");
const prev = process.env.PROJECT_ROOT_MARKERS;
process.env.PROJECT_ROOT_MARKERS = "MY_ROOT";
try {
const found = await findProjectRoot(nested);
await assertEqual(found, root, "custom env marker should be honored");
} finally {
if (prev === undefined) delete process.env.PROJECT_ROOT_MARKERS; else process.env.PROJECT_ROOT_MARKERS = prev;
}
return { name: "env-custom-marker", ok: true };
}
async function testPackageLowPriorityVsLock() {
const root = await mkTmpDir("pkg-vs-lock");
const nested = path.join(root, "nested");
await fs.ensureDir(path.join(nested, "deep"));
await fs.writeJson(path.join(nested, "package.json"), { name: "nested" }, { spaces: 2 });
await fs.writeFile(path.join(root, "yarn.lock"), "\n");
const found = await findProjectRoot(path.join(nested, "deep"));
await assertEqual(found, root, "lockfile at root should outrank nested package.json");
return { name: "package-vs-lock-priority", ok: true };
}
async function run() {
const tests = [
testSentinel,
testOtherSentinels,
testGitCliAndMarker,
testHgMarkerOrCli,
testWorkspacePnpm,
testPackageJsonWorkspaces,
testLockfiles,
testLanguageConfigs,
testPreferOuterOnTie,
testBazelWorkspace,
testNx,
testTurbo,
testRush,
testGoWorkAndMod,
testDenoJson,
testGradleSettings,
testMavenPom,
testSbtBuild,
testComposer,
testCargo,
testNixFlake,
testChangesetConfig,
testEnvCustomMarker,
testPackageLowPriorityVsLock,
testSvnMarker,
testSymlinkStart,
testSubmoduleLikeInnerGitFile,
];
const results = [];
for (const t of tests) {
try {
const r = await t();
results.push({ ...r, ok: true });
console.log(`✓ ${r.name}${r.skipped ? " (skipped)" : ""}`);
} catch (err) {
console.error(`✗ ${t.name}:`, err && err.message ? err.message : err);
results.push({ name: t.name, ok: false, error: String(err) });
}
}
const failed = results.filter((r) => !r.ok);
console.log("\nSummary:");
for (const r of results) {
console.log(`- ${r.name}: ${r.ok ? "ok" : "FAIL"}${r.skipped ? " (skipped)" : ""}`);
}
if (failed.length) {
process.exitCode = 1;
}
}
run().catch((e) => {
console.error("Fatal error:", e);
process.exit(1);
});
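The suite takes no flags and sets process.exitCode to 1 when any fixture fails; assuming it is saved as tools/flattener/test-project-root.js (the filename is not shown above), it runs directly under Node:

node tools/flattener/test-project-root.js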

tools/flattener/xml.js Normal file

@@ -0,0 +1,86 @@
const fs = require("fs-extra");
function escapeXml(str) {
if (typeof str !== "string") {
return String(str);
}
return str
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/'/g, "&apos;");
}
function indentFileContent(content) {
if (typeof content !== "string") {
return String(content);
}
return content.split("\n").map((line) => ` ${line}`).join("\n");
}
function generateXMLOutput(aggregatedContent, outputPath) {
const { textFiles } = aggregatedContent;
const writeStream = fs.createWriteStream(outputPath, { encoding: "utf8" });
return new Promise((resolve, reject) => {
writeStream.on("error", reject);
writeStream.on("finish", resolve);
writeStream.write('<?xml version="1.0" encoding="UTF-8"?>\n');
writeStream.write("<files>\n");
// Sort files by path for deterministic order
const filesSorted = [...textFiles].sort((a, b) =>
a.path.localeCompare(b.path)
);
let index = 0;
const writeNext = () => {
if (index >= filesSorted.length) {
writeStream.write("</files>\n");
writeStream.end();
return;
}
const file = filesSorted[index++];
const p = escapeXml(file.path);
const content = typeof file.content === "string" ? file.content : "";
if (content.length === 0) {
writeStream.write(`\t<file path='${p}'/>\n`);
setTimeout(writeNext, 0);
return;
}
const needsCdata = content.includes("<") || content.includes("&") ||
content.includes("]]>");
if (needsCdata) {
// Open tag and CDATA on their own line with tab indent; content lines indented with two tabs
writeStream.write(`\t<file path='${p}'><![CDATA[\n`);
// Safely split any occurrences of "]]>" inside content, trim trailing newlines, indent each line with two tabs
const safe = content.replace(/]]>/g, "]]]]><![CDATA[>");
const trimmed = safe.replace(/[\r\n]+$/, "");
const indented = trimmed.length > 0
? trimmed.split("\n").map((line) => `\t\t${line}`).join("\n")
: "";
writeStream.write(indented);
// Close CDATA and attach closing tag directly after the last content line
writeStream.write("]]></file>\n");
} else {
// Write opening tag then newline; indent content with two tabs; attach closing tag directly after last content char
writeStream.write(`\t<file path='${p}'>\n`);
const trimmed = content.replace(/[\r\n]+$/, "");
const indented = trimmed.length > 0
? trimmed.split("\n").map((line) => `\t\t${line}`).join("\n")
: "";
writeStream.write(indented);
writeStream.write(`</file>\n`);
}
setTimeout(writeNext, 0);
};
writeNext();
});
}
module.exports = { generateXMLOutput };
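For reference, a sketch of the output shape for one small file whose content needs neither CDATA nor escaping (leading whitespace is literal tabs as written by the stream; note the closing tag attaches directly after the last content character):

<?xml version="1.0" encoding="UTF-8"?>
<files>
	<file path='src/a.js'>
		const x = 1;
		console.log(x);</file>
</files>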


@@ -6,13 +6,17 @@ const fs = require('fs').promises;
const yaml = require('js-yaml');
const chalk = require('chalk');
const inquirer = require('inquirer');
const semver = require('semver');
const https = require('https');
// Handle both execution contexts (from root via npx or from installer directory)
let version;
let installer;
let packageName;
try {
// Try installer context first (when run from tools/installer/)
version = require('../package.json').version;
packageName = require('../package.json').name;
installer = require('../lib/installer');
} catch (e) {
// Fall back to root context (when run via npx from GitHub)
@@ -86,6 +90,60 @@ program
}
});
// Command to check if updates are available
program
.command('update-check')
.description('Check for BMad Update')
.action(async () => {
console.log('Checking for updates...');
// Make HTTP request to npm registry for latest version info
const req = https.get(`https://registry.npmjs.org/${packageName}/latest`, res => {
// Check for HTTP errors (non-200 status codes)
if (res.statusCode !== 200) {
console.error(chalk.red(`Update check failed: Received status code ${res.statusCode}`));
return;
}
// Accumulate response data chunks
let data = '';
res.on('data', chunk => data += chunk);
// Process complete response
res.on('end', () => {
try {
// Parse npm registry response and extract version
const latest = JSON.parse(data).version;
// Compare versions using semver
if (semver.gt(latest, version)) {
console.log(chalk.bold.blue(`⚠️ ${packageName} update available: ${version} → ${latest}`));
console.log(chalk.bold.blue('\nInstall latest by running:'));
console.log(chalk.bold.magenta(` npm install ${packageName}@latest`));
console.log(chalk.dim(' or'));
console.log(chalk.bold.magenta(` npx ${packageName}@latest`));
} else {
console.log(chalk.bold.blue(`${packageName} is up to date`));
}
} catch (error) {
// Handle JSON parsing errors
console.error(chalk.red('Failed to parse npm registry data:'), error.message);
}
});
});
// Handle network/connection errors
req.on('error', error => {
console.error(chalk.red('Update check failed:'), error.message);
});
// Set 30 second timeout to prevent hanging
req.setTimeout(30000, () => {
req.destroy();
console.error(chalk.red('Update check timed out'));
});
});
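// For reference, the comparison above is a plain semver check, e.g. with
// hypothetical versions: semver.gt('4.36.2', '4.35.1') === true, which
// triggers the "update available" branch.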
program
.command('list:expansions')
.description('List available expansion packs')


@@ -11,7 +11,7 @@ installation-options:
ide-configurations:
cursor:
name: Cursor
rule-dir: .cursor/rules/
rule-dir: .cursor/rules/bmad/
format: multi-file
command-suffix: .mdc
instructions: |


@@ -68,7 +68,7 @@ class IdeSetup extends BaseIdeSetup {
}
async setupCursor(installDir, selectedAgent) {
const cursorRulesDir = path.join(installDir, ".cursor", "rules");
const cursorRulesDir = path.join(installDir, ".cursor", "rules", "bmad");
const agents = selectedAgent ? [selectedAgent] : await this.getAllAgentIds(installDir);
await fileManager.ensureDirectory(cursorRulesDir);


@@ -237,6 +237,10 @@ class Installer {
// Copy common/ items to .bmad-core
spinner.text = "Copying common utilities...";
await this.copyCommonItems(installDir, ".bmad-core", spinner);
// Copy documentation files from docs/ to .bmad-core
spinner.text = "Copying documentation files...";
await this.copyDocsItems(installDir, ".bmad-core", spinner);
// Get list of all files for manifest
const foundFiles = await resourceLocator.findFiles("**/*", {
@@ -308,6 +312,11 @@ class Installer {
spinner.text = "Copying common utilities...";
const commonFiles = await this.copyCommonItems(installDir, ".bmad-core", spinner);
files.push(...commonFiles);
// Copy documentation files from docs/ to .bmad-core
spinner.text = "Copying documentation files...";
const docFiles = await this.copyDocsItems(installDir, ".bmad-core", spinner);
files.push(...docFiles);
} else if (config.installType === "team") {
// Team installation
spinner.text = `Installing ${config.team} team...`;
@@ -353,6 +362,11 @@ class Installer {
spinner.text = "Copying common utilities...";
const commonFiles = await this.copyCommonItems(installDir, ".bmad-core", spinner);
files.push(...commonFiles);
// Copy documentation files from docs/ to .bmad-core
spinner.text = "Copying documentation files...";
const docFiles = await this.copyDocsItems(installDir, ".bmad-core", spinner);
files.push(...docFiles);
} else if (config.installType === "expansion-only") {
// Expansion-only installation - DO NOT create .bmad-core
// Only install expansion packs
@@ -896,7 +910,7 @@ class Installer {
}
// Important notice to read the user guide
console.log(chalk.red.bold("\n📖 IMPORTANT: Please read the user guide installed at .bmad-core/user-guide.md"));
console.log(chalk.red.bold("\n📖 IMPORTANT: Please read the user guide at docs/user-guide.md (also installed at .bmad-core/user-guide.md)"));
console.log(chalk.red("This guide contains essential information about the BMad workflow and how to use the agents effectively."));
}
@@ -1557,6 +1571,54 @@ class Installer {
return copiedFiles;
}
async copyDocsItems(installDir, targetSubdir, spinner) {
const fs = require('fs').promises;
const sourceBase = path.dirname(path.dirname(path.dirname(path.dirname(__filename)))); // Go up to project root
const docsPath = path.join(sourceBase, 'docs');
const targetPath = path.join(installDir, targetSubdir);
const copiedFiles = [];
// Specific documentation files to copy
const docFiles = [
'enhanced-ide-development-workflow.md',
'user-guide.md',
'working-in-the-brownfield.md'
];
// Check if docs/ exists
if (!(await fileManager.pathExists(docsPath))) {
console.warn('Warning: docs/ folder not found');
return copiedFiles;
}
// Copy specific documentation files from docs/ to target
for (const docFile of docFiles) {
const sourcePath = path.join(docsPath, docFile);
const destPath = path.join(targetPath, docFile);
// Check if the source file exists
if (await fileManager.pathExists(sourcePath)) {
// Read the file content
const content = await fs.readFile(sourcePath, 'utf8');
// Replace {root} with the target subdirectory
const updatedContent = content.replace(/\{root\}/g, targetSubdir);
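// e.g. a doc line "see {root}/agents/pm.md" becomes "see .bmad-core/agents/pm.md"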
// Ensure directory exists
await fileManager.ensureDirectory(path.dirname(destPath));
// Write the updated content
await fs.writeFile(destPath, updatedContent, 'utf8');
copiedFiles.push(path.join(targetSubdir, docFile));
}
}
if (copiedFiles.length > 0) {
console.log(chalk.dim(` Added ${copiedFiles.length} documentation files`));
}
return copiedFiles;
}
async detectExpansionPacks(installDir) {
const expansionPacks = {};
const glob = require("glob");
@@ -1729,7 +1791,7 @@ class Installer {
const manifestPath = path.join(bmadDir, "install-manifest.yaml");
if (await fileManager.pathExists(manifestPath)) {
return bmadDir;
return currentDir; // Return parent directory, not .bmad-core itself
}
currentDir = path.dirname(currentDir);
@@ -1739,7 +1801,7 @@ class Installer {
if (path.basename(process.cwd()) === ".bmad-core") {
const manifestPath = path.join(process.cwd(), "install-manifest.yaml");
if (await fileManager.pathExists(manifestPath)) {
return process.cwd();
return path.dirname(process.cwd()); // Return parent directory
}
}

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
{
"name": "bmad-method",
"version": "4.35.1",
"version": "4.36.2",
"description": "BMad Method installer - AI-powered Agile development framework",
"main": "lib/installer.js",
"bin": {
@@ -22,12 +22,12 @@
"author": "BMad Team",
"license": "MIT",
"dependencies": {
"chalk": "^5.4.1",
"chalk": "^4.1.2",
"commander": "^14.0.0",
"fs-extra": "^11.3.0",
"inquirer": "^12.6.3",
"inquirer": "^8.2.6",
"js-yaml": "^4.1.0",
"ora": "^8.2.0"
"ora": "^5.4.1"
},
"engines": {
"node": ">=20.0.0"

tools/shared/bannerArt.js Normal file

@@ -0,0 +1,105 @@
// ASCII banner art definitions extracted from banners.js to separate art from logic
const BMAD_TITLE = "BMAD-METHOD";
const FLATTENER_TITLE = "FLATTENER";
const INSTALLER_TITLE = "INSTALLER";
// Large ASCII blocks (block-style fonts)
const BMAD_LARGE = `
██████╗ ███╗ ███╗ █████╗ ██████╗ ███╗ ███╗███████╗████████╗██╗ ██╗ ██████╗ ██████╗
██╔══██╗████╗ ████║██╔══██╗██╔══██╗ ████╗ ████║██╔════╝╚══██╔══╝██║ ██║██╔═══██╗██╔══██╗
██████╔╝██╔████╔██║███████║██║ ██║█████╗██╔████╔██║█████╗ ██║ ███████║██║ ██║██║ ██║
██╔══██╗██║╚██╔╝██║██╔══██║██║ ██║╚════╝██║╚██╔╝██║██╔══╝ ██║ ██╔══██║██║ ██║██║ ██║
██████╔╝██║ ╚═╝ ██║██║ ██║██████╔╝ ██║ ╚═╝ ██║███████╗ ██║ ██║ ██║╚██████╔╝██████╔╝
╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═════╝ ╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝
`;
const FLATTENER_LARGE = `
███████╗██╗ █████╗ ████████╗████████╗███████╗███╗ ██╗███████╗██████╗
██╔════╝██║ ██╔══██╗╚══██╔══╝╚══██╔══╝██╔════╝████╗ ██║██╔════╝██╔══██╗
█████╗ ██║ ███████║ ██║ ██║ █████╗ ██╔██╗ ██║█████╗ ██████╔╝
██╔══╝ ██║ ██╔══██║ ██║ ██║ ██╔══╝ ██║╚██╗██║██╔══╝ ██╔══██╗
██║ ███████║██║ ██║ ██║ ██║ ███████╗██║ ╚████║███████╗██║ ██║
╚═╝ ╚══════╝╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝╚══════╝╚═╝ ╚═╝
`;
const INSTALLER_LARGE = `
██╗███╗ ██╗███████╗████████╗ █████╗ ██╗ ██╗ ███████╗██████╗
██║████╗ ██║██╔════╝╚══██╔══╝██╔══██╗██║ ██║ ██╔════╝██╔══██╗
██║██╔██╗ ██║███████╗ ██║ ███████║██║ ██║ █████╗ ██████╔╝
██║██║╚██╗██║╚════██║ ██║ ██╔══██║██║ ██║ ██╔══╝ ██╔══██╗
██║██║ ╚████║███████║ ██║ ██║ ██║███████╗███████╗███████╗██║ ██║
╚═╝╚═╝ ╚═══╝╚══════╝ ╚═╝ ╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝╚═╝ ╚═╝
`;
// Curated medium/small/tiny variants (fixed art, no runtime scaling)
// Medium: bold framed title with heavy fill (high contrast, compact)
const BMAD_MEDIUM = `
███╗ █╗ █╗ ██╗ ███╗ █╗ █╗███╗█████╗█╗ █╗ ██╗ ███╗
█╔═█╗██╗ ██║█╔═█╗█╔═█╗ ██╗ ██║█╔═╝╚═█╔═╝█║ █║█╔═█╗█╔═█╗
███╔╝█╔███╔█║████║█║ █║██╗█╔███╔█║██╗ █║ ████║█║ █║█║ █║
█╔═█╗█║ █╔╝█║█╔═█║█║ █║╚═╝█║ █╔╝█║█╔╝ █║ █╔═█║█║ █║█║ █║
███╔╝█║ ╚╝ █║█║ █║███╔╝ █║ ╚╝ █║███╗ █║ █║ █║╚██╔╝███╔╝
╚══╝ ╚╝ ╚╝╚╝ ╚╝╚══╝ ╚╝ ╚╝╚══╝ ╚╝ ╚╝ ╚╝ ╚═╝ ╚══╝
`;
const FLATTENER_MEDIUM = `
███╗█╗ ██╗ █████╗█████╗███╗█╗ █╗███╗███╗
█╔═╝█║ █╔═█╗╚═█╔═╝╚═█╔═╝█╔═╝██╗ █║█╔═╝█╔═█╗
██╗ █║ ████║ █║ █║ ██╗ █╔█╗█║██╗ ███╔╝
█╔╝ █║ █╔═█║ █║ █║ █╔╝ █║ ██║█╔╝ █╔═█╗
█║ ███║█║ █║ █║ █║ ███╗█║ █║███╗█║ █║
╚╝ ╚══╝╚╝ ╚╝ ╚╝ ╚╝ ╚══╝╚╝ ╚╝╚══╝╚╝ ╚╝
`;
const INSTALLER_MEDIUM = `
█╗█╗ █╗████╗█████╗ ██╗ █╗ █╗ ███╗███╗
█║██╗ █║█╔══╝╚═█╔═╝█╔═█╗█║ █║ █╔═╝█╔═█╗
█║█╔█╗█║████╗ █║ ████║█║ █║ ██╗ ███╔╝
█║█║ ██║╚══█║ █║ █╔═█║█║ █║ █╔╝ █╔═█╗
█║█║ █║████║ █║ █║ █║███╗███╗███╗█║ █║
╚╝╚╝ ╚╝╚═══╝ ╚╝ ╚╝ ╚╝╚══╝╚══╝╚══╝╚╝ ╚╝
`;
// Small: rounded box with bold rule
// Width: 30 columns total (28 inner)
const BMAD_SMALL = `
╭──────────────────────────╮
│ BMAD-METHOD │
╰──────────────────────────╯
`;
const FLATTENER_SMALL = `
╭──────────────────────────╮
│ FLATTENER │
╰──────────────────────────╯
`;
const INSTALLER_SMALL = `
╭──────────────────────────╮
│ INSTALLER │
╰──────────────────────────╯
`;
// Tiny (compact brackets)
const BMAD_TINY = `[ BMAD-METHOD ]`;
const FLATTENER_TINY = `[ FLATTENER ]`;
const INSTALLER_TINY = `[ INSTALLER ]`;
module.exports = {
BMAD_TITLE,
FLATTENER_TITLE,
INSTALLER_TITLE,
BMAD_LARGE,
FLATTENER_LARGE,
INSTALLER_LARGE,
BMAD_MEDIUM,
FLATTENER_MEDIUM,
INSTALLER_MEDIUM,
BMAD_SMALL,
FLATTENER_SMALL,
INSTALLER_SMALL,
BMAD_TINY,
FLATTENER_TINY,
INSTALLER_TINY,
};


@@ -557,7 +557,7 @@ class V3ToV4Upgrader {
try {
const ideMessages = {
cursor: "Rules created in .cursor/rules/",
cursor: "Rules created in .cursor/rules/bmad/",
"claude-code": "Commands created in .claude/commands/BMad/",
windsurf: "Rules created in .windsurf/rules/",
trae: "Rules created in.trae/rules/",