Compare commits

...

18 Commits

Author SHA1 Message Date
semantic-release-bot
85a0d83fc5 chore(release): 4.36.1 [skip ci]
## [4.36.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.0...v4.36.1) (2025-08-09)

### Bug Fixes

* update Node.js version to 20 in release workflow and reduce Discord spam ([3f7e19a](3f7e19a098))
2025-08-09 20:49:42 +00:00
Brian Madison
3f7e19a098 fix: update Node.js version to 20 in release workflow and reduce Discord spam
- Update release workflow Node.js version from 18 to 20 to match package.json requirements
- Remove push trigger from Discord workflow to reduce notification spam

This should resolve the semantic-release content-length header error after org migration.
2025-08-09 15:49:13 -05:00
semantic-release-bot
23df54c955 chore(release): 4.36.0 [skip ci]
# [4.36.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.3...v4.36.0) (2025-08-09)

### Features

* modularize flattener tool into separate components with improved project root detection ([#417](https://github.com/bmadcode/BMAD-METHOD/issues/417)) ([0fdbca7](0fdbca73fc))
2025-08-09 20:33:49 +00:00
manjaroblack
0fdbca73fc feat: modularize flattener tool into separate components with improved project root detection (#417) 2025-08-09 15:33:23 -05:00
Daniel Willitzer
5d7d7c9015 Merge pull request #369 from antmikinka/pr/part-1-gcp-setup
Feat(Expansion Pack): Part 1 - Google Cloud Setup
2025-08-08 19:23:48 -07:00
Brian Madison
dd2b4ed5ac discord PR spam 2025-08-08 20:07:32 -05:00
Lior Assouline
8f40576681 Flatten venv & many other bins dir fix (#408)
* added .venv to ignore list of flattener

* more files pattern to ignore

---------

Co-authored-by: Lior Assouline <Lior.Assouline@harmonicinc.com>
2025-08-08 07:54:47 -05:00
Yanqing Wang
fe86675c5f Update link in README.md (#384) 2025-08-07 07:49:14 -05:00
semantic-release-bot
8211d2daff chore(release): 4.35.3 [skip ci]
## [4.35.3](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.2...v4.35.3) (2025-08-06)

### Bug Fixes

* doc location improvement ([1676f51](1676f5189e))
2025-08-06 05:01:55 +00:00
Brian Madison
1676f5189e fix: doc location improvement 2025-08-06 00:00:26 -05:00
semantic-release-bot
3c3d58939f chore(release): 4.35.2 [skip ci]
## [4.35.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.1...v4.35.2) (2025-08-06)

### Bug Fixes

* npx status check ([f7c2a4f](f7c2a4fb6c))
2025-08-06 03:34:49 +00:00
Brian Madison
2d954d3481 merge 2025-08-05 22:34:21 -05:00
Brian Madison
f7c2a4fb6c fix: npx status check 2025-08-05 22:33:47 -05:00
semantic-release-bot
9df28d5313 chore(release): 4.35.1 [skip ci]
## [4.35.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.0...v4.35.1) (2025-08-06)

### Bug Fixes

* npx hanging commands ([2cf322e](2cf322ee0d))
2025-08-06 03:22:35 +00:00
Brian Madison
2cf322ee0d fix: npx hanging commands 2025-08-05 22:22:04 -05:00
semantic-release-bot
5dc4043577 chore(release): 4.35.0 [skip ci]
# [4.35.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.34.0...v4.35.0) (2025-08-04)

### Features

* add qwen-code ide support to bmad installer. ([#392](https://github.com/bmadcode/BMAD-METHOD/issues/392)) ([a72b790](a72b790f3b))
2025-08-04 01:24:35 +00:00
Houston Zhang
a72b790f3b feat: add qwen-code ide support to bmad installer. (#392)
Co-authored-by: Djanghao <hstnz>
2025-08-03 20:24:09 -05:00
antmikinka
c7fc5d3606 Feat(Expansion Pack): Part 1 - Google Cloud Setup 2025-07-27 12:26:53 -07:00
38 changed files with 14132 additions and 13127 deletions

16
.github/workflows/discord.yaml vendored Normal file

@@ -0,0 +1,16 @@
name: Discord Notification
on: [pull_request, release, create, delete, issue_comment, pull_request_review, pull_request_review_comment]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- name: Notify Discord
uses: sarisia/actions-status-discord@v1
if: always()
with:
webhook: ${{ secrets.DISCORD_WEBHOOK }}
status: ${{ job.status }}
title: "Triggered by ${{ github.event_name }}"
color: 0x5865F2


@@ -32,7 +32,7 @@ jobs:
- name: Setup Node.js
uses: actions/setup-node@v4
with:
-node-version: '18'
+node-version: '20'
cache: npm
registry-url: https://registry.npmjs.org
- name: Install dependencies

4
.gitignore vendored

@@ -3,6 +3,8 @@ node_modules/
pnpm-lock.yaml
bun.lock
deno.lock
+pnpm-workspace.yaml
+package-lock.json
# Logs
logs/
@@ -41,3 +43,5 @@ CLAUDE.md
.bmad-creator-tools
test-project-install/*
sample-project/*
+flattened-codebase.xml


@@ -1,9 +1,45 @@
# [4.34.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.33.1...v4.34.0) (2025-08-03)
## [4.36.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.36.0...v4.36.1) (2025-08-09)
### Bug Fixes
* update Node.js version to 20 in release workflow and reduce Discord spam ([3f7e19a](https://github.com/bmadcode/BMAD-METHOD/commit/3f7e19a098155341a2b89796addc47b0623cb87a))
# [4.36.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.3...v4.36.0) (2025-08-09)
### Features
* add KiloCode integration support to BMAD installer ([#390](https://github.com/bmadcode/BMAD-METHOD/issues/390)) ([dcebe91](https://github.com/bmadcode/BMAD-METHOD/commit/dcebe91d5ea68e69aa27183411a81639d444efd7))
- modularize flattener tool into separate components with improved project root detection ([#417](https://github.com/bmadcode/BMAD-METHOD/issues/417)) ([0fdbca7](https://github.com/bmadcode/BMAD-METHOD/commit/0fdbca73fc60e306109f682f018e105e2b4623a2))
## [4.35.3](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.2...v4.35.3) (2025-08-06)
### Bug Fixes
- doc location improvement ([1676f51](https://github.com/bmadcode/BMAD-METHOD/commit/1676f5189ed057fa2d7facbd6a771fe67cdb6372))
## [4.35.2](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.1...v4.35.2) (2025-08-06)
### Bug Fixes
- npx status check ([f7c2a4f](https://github.com/bmadcode/BMAD-METHOD/commit/f7c2a4fb6c454b17d250b85537129b01ffee6b85))
## [4.35.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.35.0...v4.35.1) (2025-08-06)
### Bug Fixes
- npx hanging commands ([2cf322e](https://github.com/bmadcode/BMAD-METHOD/commit/2cf322ee0d9b563a4998c72b2c5eab259594739b))
# [4.35.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.34.0...v4.35.0) (2025-08-04)
### Features
- add qwen-code ide support to bmad installer. ([#392](https://github.com/bmadcode/BMAD-METHOD/issues/392)) ([a72b790](https://github.com/bmadcode/BMAD-METHOD/commit/a72b790f3be6c77355511ace2d63e6bec4d751f1))
# [4.34.0](https://github.com/bmadcode/BMAD-METHOD/compare/v4.33.1...v4.34.0) (2025-08-03)
### Features
- add KiloCode integration support to BMAD installer ([#390](https://github.com/bmadcode/BMAD-METHOD/issues/390)) ([dcebe91](https://github.com/bmadcode/BMAD-METHOD/commit/dcebe91d5ea68e69aa27183411a81639d444efd7))
## [4.33.1](https://github.com/bmadcode/BMAD-METHOD/compare/v4.33.0...v4.33.1) (2025-07-29)


@@ -23,7 +23,7 @@ Foundations in Agentic Agile Driven Development, known as the Breakthrough Metho
This two-phase approach eliminates both **planning inconsistency** and **context loss** - the biggest problems in AI-assisted development. Your Dev agent opens a story file with complete understanding of what to build, how to build it, and why.
-**📖 [See the complete workflow in the User Guide](bmad-core/user-guide.md)** - Planning phase, development cycle, and all agent roles
+**📖 [See the complete workflow in the User Guide](docs/user-guide.md)** - Planning phase, development cycle, and all agent roles
## Quick Navigation
@@ -31,18 +31,18 @@ This two-phase approach eliminates both **planning inconsistency** and **context
**Before diving in, review these critical workflow diagrams that explain how BMad works:**
-1. **[Planning Workflow (Web UI)](bmad-core/user-guide.md#the-planning-workflow-web-ui)** - How to create PRD and Architecture documents
-2. **[Core Development Cycle (IDE)](bmad-core/user-guide.md#the-core-development-cycle-ide)** - How SM, Dev, and QA agents collaborate through story files
+1. **[Planning Workflow (Web UI)](docs/user-guide.md#the-planning-workflow-web-ui)** - How to create PRD and Architecture documents
+2. **[Core Development Cycle (IDE)](docs/user-guide.md#the-core-development-cycle-ide)** - How SM, Dev, and QA agents collaborate through story files
> ⚠️ **These diagrams explain 90% of BMad Method Agentic Agile flow confusion** - Understanding the PRD+Architecture creation and the SM/Dev/QA workflow and how agents pass notes through story files is essential - and also explains why this is NOT taskmaster or just a simple task runner!
### What would you like to do?
- **[Install and Build software with Full Stack Agile AI Team](#quick-start)** → Quick Start Instruction
-- **[Learn how to use BMad](bmad-core/user-guide.md)** → Complete user guide and walkthrough
+- **[Learn how to use BMad](docs/user-guide.md)** → Complete user guide and walkthrough
- **[See available AI agents](/bmad-core/agents))** → Specialized roles for your team
- **[Explore non-technical uses](#-beyond-software-development---expansion-packs)** → Creative writing, business, wellness, education
-- **[Create my own AI agents](#creating-your-own-expansion-pack)** → Build agents for your domain
+- **[Create my own AI agents](docs/expansion-packs.md)** → Build agents for your domain
- **[Browse ready-made expansion packs](expansion-packs/)** → Game dev, DevOps, infrastructure and get inspired with ideas and examples
- **[Understand the architecture](docs/core-architecture.md)** → Technical deep dive
- **[Join the community](https://discord.gg/gk8jAdXWmj)** → Get help and share ideas
@@ -97,7 +97,7 @@ This single command handles:
3. **Upload & configure**: Upload the file and set instructions: "Your critical operating instructions are attached, do not break character as directed"
4. **Start Ideating and Planning**: Start chatting! Type `*help` to see available commands or pick an agent like `*analyst` to start right in on creating a brief.
5. **CRITICAL**: Talk to BMad Orchestrator in the web at ANY TIME (#bmad-orchestrator command) and ask it questions about how this all works!
-6. **When to move to the IDE**: Once you have your PRD, Architecture, optional UX and Briefs - its time to switch over to the IDE to shard your docs, and start implementing the actual code! See the [User guide](bmad-core/user-guide.md) for more details
+6. **When to move to the IDE**: Once you have your PRD, Architecture, optional UX and Briefs - its time to switch over to the IDE to shard your docs, and start implementing the actual code! See the [User guide](docs/user-guide.md) for more details
### Alternative: Clone and Build
@@ -144,7 +144,7 @@ npx bmad-method flatten --input /path/to/source --output /path/to/output/codebas
The tool will display progress and provide a comprehensive summary:
-```
+```text
📊 Completion Summary:
✅ Successfully processed 156 files into flattened-codebase.xml
📁 Output file: /path/to/your/project/flattened-codebase.xml
@@ -155,13 +155,46 @@ The tool will display progress and provide a comprehensive summary:
📊 File breakdown: 142 text, 14 binary, 0 errors
```
-The generated XML file contains all your project's source code in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.
+The generated XML file contains your project's text-based source files in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.
#### Advanced Usage & Options
- CLI options
- `-i, --input <path>`: Directory to flatten. Default: current working directory or auto-detected project root when run interactively.
- `-o, --output <path>`: Output file path. Default: `flattened-codebase.xml` in the chosen directory.
- Interactive mode
- If you do not pass `--input` and `--output` and the terminal is interactive (TTY), the tool will attempt to detect your project root (by looking for markers like `.git`, `package.json`, etc.) and prompt you to confirm or override the paths.
- In non-interactive contexts (e.g., CI), it will prefer the detected root silently; otherwise it falls back to the current directory and default filename.
- File discovery and ignoring
- Uses `git ls-files` when inside a git repository for speed and correctness; otherwise falls back to a glob-based scan.
- Applies your `.gitignore` plus a curated set of default ignore patterns (e.g., `node_modules`, build outputs, caches, logs, IDE folders, lockfiles, large media/binaries, `.env*`, and previously generated XML outputs).
- Binary handling
- Binary files are detected and excluded from the XML content. They are counted in the final summary but not embedded in the output.
- XML format and safety
- UTF-8 encoded file with root element `<files>`.
- Each text file is emitted as a `<file path="relative/path">` element whose content is wrapped in `<![CDATA[ ... ]]>`.
- The tool safely handles occurrences of `]]>` inside content by splitting the CDATA to preserve correctness.
- File contents are preserved as-is and indented for readability inside the XML.
- Performance
- Concurrency is selected automatically based on your CPU and workload size. No configuration required.
- Running inside a git repo improves discovery performance.
#### Minimal XML example
```xml
<?xml version="1.0" encoding="UTF-8"?>
<files>
<file path="src/index.js"><![CDATA[
// your source content
]]></file>
</files>
```
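As an illustrative sketch (not the tool's exact implementation), the `]]>` splitting described above works by breaking the CDATA terminator across two sections so the parser never sees a stray `]]>`:

```js
// Sketch: keep CDATA well-formed when content contains "]]>".
// Splitting the terminator across two CDATA sections preserves the bytes exactly.
function wrapCDATA(content) {
  const escaped = content.replace(/]]>/g, ']]]]><![CDATA[>');
  return `<![CDATA[${escaped}]]>`;
}

// The "]]>" in the payload is split; reassembling both CDATA sections yields the original text.
console.log(wrapCDATA('const s = "]]>";'));
```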
## Documentation & Resources
### Essential Guides
-- 📖 **[User Guide](bmad-core/user-guide.md)** - Complete walkthrough from project inception to completion
+- 📖 **[User Guide](docs/user-guide.md)** - Complete walkthrough from project inception to completion
- 🏗️ **[Core Architecture](docs/core-architecture.md)** - Technical deep dive and system design
- 🚀 **[Expansion Packs Guide](docs/expansion-packs.md)** - Extend BMad to any domain beyond software development

18
dist/agents/dev.txt vendored

@@ -72,15 +72,15 @@ commands:
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
-develop-story:
-order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete
-story-file-updates-ONLY:
-- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
-- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
-- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
-blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
-ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
-completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
+- develop-story:
+- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete
+- story-file-updates-ONLY:
+- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
+- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
+- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
+- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
+- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
+- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
dependencies:
tasks:
- execute-checklist.md


@@ -352,15 +352,15 @@ commands:
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
-develop-story:
-order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete
-story-file-updates-ONLY:
-- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
-- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
-- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
-blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
-ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
-completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
+- develop-story:
+- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete
+- story-file-updates-ONLY:
+- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
+- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
+- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
+- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
+- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
+- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
dependencies:
tasks:
- execute-checklist.md


@@ -322,15 +322,15 @@ commands:
- run-tests: Execute linting and tests
- explain: teach me what and why you did whatever you just did in detail so I can learn. Explain to me as if you were training a junior engineer.
- exit: Say goodbye as the Developer, and then abandon inhabiting this persona
-develop-story:
-order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete
-story-file-updates-ONLY:
-- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
-- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
-- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
-blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
-ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
-completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
+- develop-story:
+- order-of-execution: Read (first or next) task→Implement Task and its subtasks→Write tests→Execute validations→Only if ALL pass, then update the task checkbox with [x]→Update story section File List to ensure it lists and new or modified or deleted source file→repeat order-of-execution until complete
+- story-file-updates-ONLY:
+- CRITICAL: ONLY UPDATE THE STORY FILE WITH UPDATES TO SECTIONS INDICATED BELOW. DO NOT MODIFY ANY OTHER SECTIONS.
+- CRITICAL: You are ONLY authorized to edit these specific sections of story files - Tasks / Subtasks Checkboxes, Dev Agent Record section and all its subsections, Agent Model Used, Debug Log References, Completion Notes List, File List, Change Log, Status
+- CRITICAL: DO NOT modify Status, Story, Acceptance Criteria, Dev Notes, Testing sections, or any other sections not listed above
+- blocking: 'HALT for: Unapproved deps needed, confirm with user | Ambiguous after story check | 3 failures attempting to implement or fix something repeatedly | Missing config | Failing regression'
+- ready-for-review: Code matches requirements + All validations pass + Follows standards + File List complete
+- completion: 'All Tasks and Subtasks marked [x] and have tests→Validations and full regression passes (DON''T BE LAZY, EXECUTE ALL TESTS and CONFIRM)→Ensure File List is Complete→run the task execute-checklist for the checklist story-dod-checklist→set story status: ''Ready for Review''→HALT'
dependencies:
tasks:
- execute-checklist.md

File diff suppressed because one or more lines are too long

(image changed; new version: 181 KiB)


@@ -0,0 +1,13 @@
# 1. Create new Google Cloud Project
gcloud projects create {{PROJECT_ID}} --name="{{COMPANY_NAME}} AI Agent System"
# 2. Set default project
gcloud config set project {{PROJECT_ID}}
# 3. Enable required APIs
gcloud services enable aiplatform.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable firestore.googleapis.com
gcloud services enable secretmanager.googleapis.com


@@ -0,0 +1,13 @@
# 1. Create new Google Cloud Project
gcloud projects create {{PROJECT_ID}} --name="{{COMPANY_NAME}} AI Agent System"
# 2. Set default project
gcloud config set project {{PROJECT_ID}}
# 3. Enable required APIs
gcloud services enable aiplatform.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable run.googleapis.com
gcloud services enable firestore.googleapis.com
gcloud services enable secretmanager.googleapis.com


@@ -0,0 +1,25 @@
{{company_name}}-ai-agents/
├── agents/
│ ├── __init__.py
│ ├── {{team_1}}/
│ │ ├── __init__.py
│ │ ├── {{agent_1}}.py
│ │ └── {{agent_2}}.py
│ └── {{team_2}}/
├── tasks/
│ ├── __init__.py
│ ├── {{task_category_1}}/
│ └── {{task_category_2}}/
├── templates/
│ ├── {{document_type_1}}/
│ └── {{document_type_2}}/
├── checklists/
├── data/
├── workflows/
├── config/
│ ├── settings.py
│ └── agent_config.yaml
├── main.py
└── deployment/
├── Dockerfile
└── cloudbuild.yaml


@@ -0,0 +1,34 @@
import os
from pydantic import BaseSettings
class Settings(BaseSettings):
# Google Cloud Configuration
project_id: str = "{{PROJECT_ID}}"
location: str = "{{LOCATION}}" # e.g., "us-central1"
# Company Information
company_name: str = "{{COMPANY_NAME}}"
industry: str = "{{INDUSTRY}}"
business_type: str = "{{BUSINESS_TYPE}}"
# Agent Configuration
default_model: str = "gemini-1.5-pro"
max_iterations: int = 10
timeout_seconds: int = 300
# Storage Configuration
bucket_name: str = "{{COMPANY_NAME}}-ai-agents-storage"
database_name: str = "{{COMPANY_NAME}}-ai-agents-db"
# API Configuration
session_service_type: str = "vertex" # or "in_memory" for development
artifact_service_type: str = "gcs" # or "in_memory" for development
memory_service_type: str = "vertex" # or "in_memory" for development
# Security
service_account_path: str = "./{{COMPANY_NAME}}-ai-agents-key.json"
class Config:
env_file = ".env"
settings = Settings()


@@ -0,0 +1,70 @@
import asyncio
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import VertexAiSessionService
from google.adk.artifacts import GcsArtifactService
from google.adk.memory import VertexAiRagMemoryService
from google.adk.models import Gemini
from config.settings import settings
from agents.{{primary_team}}.{{main_orchestrator}} import {{MainOrchestratorClass}}
class {{CompanyName}}AISystem:
def __init__(self):
self.settings = settings
self.runner = None
self.main_orchestrator = None
async def initialize(self):
"""Initialize the AI agent system"""
# Create main orchestrator
self.main_orchestrator = {{MainOrchestratorClass}}()
# Initialize services
session_service = VertexAiSessionService(
project=self.settings.project_id,
location=self.settings.location
)
artifact_service = GcsArtifactService(
bucket_name=self.settings.bucket_name
)
memory_service = VertexAiRagMemoryService(
rag_corpus=f"projects/{self.settings.project_id}/locations/{self.settings.location}/ragCorpora/{{COMPANY_NAME}}-knowledge"
)
# Create runner
self.runner = Runner(
app_name=f"{self.settings.company_name}-AI-System",
agent=self.main_orchestrator,
session_service=session_service,
artifact_service=artifact_service,
memory_service=memory_service
)
print(f"{self.settings.company_name} AI Agent System initialized successfully!")
async def run_agent_interaction(self, user_id: str, session_id: str, message: str):
"""Run agent interaction"""
if not self.runner:
await self.initialize()
async for event in self.runner.run_async(
user_id=user_id,
session_id=session_id,
new_message=message
):
yield event
# Application factory
async def create_app():
ai_system = {{CompanyName}}AISystem()
await ai_system.initialize()
return ai_system
if __name__ == "__main__":
# Development server
import uvicorn
uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)


@@ -0,0 +1,26 @@
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args:
- 'run'
- 'deploy'
- '{{COMPANY_NAME}}-ai-agents'
- '--image'
- 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA'
- '--region'
- '{{LOCATION}}'
- '--platform'
- 'managed'
- '--allow-unauthenticated'
images:
- 'gcr.io/{{PROJECT_ID}}/{{COMPANY_NAME}}-ai-agents:$COMMIT_SHA'


@@ -0,0 +1,109 @@
# BMad Expansion Pack: Google Cloud Vertex AI Agent System
[License: MIT](https://opensource.org/licenses/MIT)
[GitHub: antmikinka/BMAD-METHOD](https://www.google.com/search?q=https://github.com/antmikinka/BMAD-METHOD)
[Google Cloud Platform](https://cloud.google.com/)
This expansion pack provides a complete, deployable starter kit for building and hosting sophisticated AI agent systems on Google Cloud Platform (GCP). It bridges the gap between the BMad Method's natural language framework and a production-ready cloud environment, leveraging Google Vertex AI, Cloud Run, and the Google Agent Development Kit (ADK).
## Features
* **Automated GCP Setup**: `gcloud` scripts to configure your project, service accounts, and required APIs in minutes.
* **Production-Ready Deployment**: Includes a `Dockerfile` and `cloudbuild.yaml` for easy, repeatable deployments to Google Cloud Run.
* **Rich Template Library**: A comprehensive set of BMad-compatible templates for Teams, Agents, Tasks, Workflows, Documents, and Checklists.
* **Pre-configured Agent Roles**: Includes powerful master templates for key agent archetypes like Orchestrators and Specialists.
* **Highly Customizable**: Easily adapt the entire system with company-specific variables and industry-specific configurations.
* **Powered by Google ADK**: Built on the official Google Agent Development Kit for robust and native integration with Vertex AI services.
## Prerequisites
Before you begin, ensure you have the following installed and configured:
* A Google Cloud Platform (GCP) Account with an active billing account.
* The [Google Cloud SDK (`gcloud` CLI)](https://www.google.com/search?q=%5Bhttps://cloud.google.com/sdk/docs/install%5D\(https://cloud.google.com/sdk/docs/install\)) installed and authenticated.
* [Docker](https://www.docker.com/products/docker-desktop/) installed on your local machine.
* Python 3.11+
## Quick Start Guide
Follow these steps to get your own AI agent system running on Google Cloud.
### 1. Configure Setup Variables
The setup scripts use placeholder variables. Before running them, open the files in the `/scripts` directory and replace the following placeholders with your own values:
* `{{PROJECT_ID}}`: Your unique Google Cloud project ID.
* `{{COMPANY_NAME}}`: Your company or project name (used for naming resources).
* `{{LOCATION}}`: The GCP region you want to deploy to (e.g., `us-central1`).
### 2. Run the GCP Setup Scripts
Execute the setup scripts to prepare your Google Cloud environment.
```bash
# Navigate to the scripts directory
cd scripts/
# Run the project configuration script
sh 1-initial-project-config.sh
# Run the service account setup script
sh 2-service-account-setup.sh
```
These scripts will enable the necessary APIs, create a service account, assign permissions, and download a JSON key file required for authentication.
### 3. Install Python Dependencies
Install the required Python packages for the application.
```bash
# From the root of the expansion pack
pip install -r requirements.txt
```
### 4. Deploy to Cloud Run
Deploy the entire agent system as a serverless application using Cloud Build.
```bash
# From the root of the expansion pack
gcloud builds submit --config deployment/cloudbuild.yaml .
```
This command will build the Docker container, push it to the Google Container Registry, and deploy it to Cloud Run. Your agent system is now live!
## How to Use
Once deployed, the power of this system lies in its natural language templates.
1. **Define Your Organization**: Go to `/templates/teams` and use the templates to define your agent teams (e.g., Product Development, Operations).
2. **Customize Your Agents**: In `/templates/agents`, use the `Master-Agent-Template.yaml` to create new agents or customize the existing Orchestrator and Specialist templates. Define their personas, skills, and commands in plain English.
3. **Build Your Workflows**: In `/templates/workflows`, link agents and tasks together to create complex, automated processes.
The deployed application reads these YAML and Markdown files to dynamically construct and run your AI workforce. When you update a template, your live agents automatically adopt the new behaviors.
## What's Included
This expansion pack has a comprehensive structure to get you started:
```
/
├── deployment/ # Dockerfile and cloudbuild.yaml for deployment
├── scripts/ # GCP setup scripts (project config, service accounts)
├── src/ # Python source code (main.py, settings.py)
├── templates/
│ ├── agents/ # Master, Orchestrator, Specialist agent templates
│ ├── teams/ # Team structure templates
│ ├── tasks/ # Generic and specialized task templates
│ ├── documents/ # Document and report templates
│ ├── checklists/ # Quality validation checklists
│ ├── workflows/ # Workflow definition templates
│ └── ...and more
├── config/ # Customization guides and variable files
└── requirements.txt # Python package dependencies
```
## Contributing
Contributions are welcome! Please follow the main project's `CONTRIBUTING.md` guidelines. For major changes or new features for this expansion pack, please open an issue or discussion first.

25202
package-lock.json generated

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
{
"name": "bmad-method",
"version": "4.34.0",
"version": "4.36.1",
"description": "Breakthrough Method of Agile AI-driven Development",
"main": "tools/cli.js",
"bin": {
@@ -40,9 +40,9 @@
"commander": "^14.0.0",
"fs-extra": "^11.3.0",
"glob": "^11.0.3",
"ignore": "^7.0.5",
"inquirer": "^8.2.6",
"js-yaml": "^4.1.0",
"minimatch": "^10.0.3",
"ora": "^5.4.1"
},
"keywords": [


@@ -93,6 +93,7 @@ program
const agents = await builder.resolver.listAgents();
console.log('Available agents:');
agents.forEach(agent => console.log(` - ${agent}`));
+process.exit(0);
});
program
@@ -103,6 +104,7 @@ program
const expansions = await builder.listExpansionPacks();
console.log('Available expansion packs:');
expansions.forEach(expansion => console.log(` - ${expansion}`));
+process.exit(0);
});
program


@@ -0,0 +1,76 @@
const fs = require("fs-extra");
const path = require("node:path");
const os = require("node:os");
const { isBinaryFile } = require("./binary.js");
/**
* Aggregate file contents with bounded concurrency.
* Returns text files, binary files (with size), and errors.
* @param {string[]} files absolute file paths
* @param {string} rootDir
* @param {{ text?: string, warn?: (msg: string) => void } | null} spinner
*/
async function aggregateFileContents(files, rootDir, spinner = null) {
const results = {
textFiles: [],
binaryFiles: [],
errors: [],
totalFiles: files.length,
processedFiles: 0,
};
// Automatic concurrency selection based on CPU count and workload size.
// - Base on 2x logical CPUs, clamped to [2, 64]
// - For very small workloads, avoid excessive parallelism
const cpuCount = (os.cpus && Array.isArray(os.cpus()) ? os.cpus().length : (os.cpus?.length || 4));
let concurrency = Math.min(64, Math.max(2, (Number(cpuCount) || 4) * 2));
if (files.length > 0 && files.length < concurrency) {
concurrency = Math.max(1, Math.min(concurrency, Math.ceil(files.length / 2)));
}
async function processOne(filePath) {
try {
const relativePath = path.relative(rootDir, filePath);
if (spinner) {
spinner.text = `Processing: ${relativePath} (${results.processedFiles + 1}/${results.totalFiles})`;
}
const binary = await isBinaryFile(filePath);
if (binary) {
const size = (await fs.stat(filePath)).size;
results.binaryFiles.push({ path: relativePath, absolutePath: filePath, size });
} else {
const content = await fs.readFile(filePath, "utf8");
results.textFiles.push({
path: relativePath,
absolutePath: filePath,
content,
size: content.length,
lines: content.split("\n").length,
});
}
} catch (error) {
const relativePath = path.relative(rootDir, filePath);
const errorInfo = { path: relativePath, absolutePath: filePath, error: error.message };
results.errors.push(errorInfo);
if (spinner) {
spinner.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
} else {
console.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
}
} finally {
results.processedFiles++;
}
}
for (let i = 0; i < files.length; i += concurrency) {
const slice = files.slice(i, i + concurrency);
await Promise.all(slice.map(processOne));
}
return results;
}
module.exports = {
aggregateFileContents,
};
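A hypothetical usage sketch for this module (the require path and file names here are assumptions for illustration, based on the diff above):

```js
const path = require("node:path");
const { aggregateFileContents } = require("./tools/flattener/aggregate.js");

(async () => {
  const root = process.cwd();
  // Absolute paths, as the function's JSDoc requires; these files are assumed to exist.
  const files = ["README.md", "package.json"].map((f) => path.join(root, f));
  const results = await aggregateFileContents(files, root);
  console.log(`${results.textFiles.length} text, ${results.binaryFiles.length} binary, ${results.errors.length} errors`);
})();
```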

53
tools/flattener/binary.js Normal file

@@ -0,0 +1,53 @@
const fsp = require("node:fs/promises");
const path = require("node:path");
const { Buffer } = require("node:buffer");
/**
* Efficiently determine if a file is binary without reading the whole file.
* - Fast path by extension for common binaries
* - Otherwise read a small prefix and check for NUL bytes
* @param {string} filePath
* @returns {Promise<boolean>}
*/
async function isBinaryFile(filePath) {
try {
const stats = await fsp.stat(filePath);
if (stats.isDirectory()) {
throw new Error("EISDIR: illegal operation on a directory");
}
const binaryExtensions = new Set([
".jpg", ".jpeg", ".png", ".gif", ".bmp", ".ico", ".svg",
".pdf", ".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx",
".zip", ".tar", ".gz", ".rar", ".7z",
".exe", ".dll", ".so", ".dylib",
".mp3", ".mp4", ".avi", ".mov", ".wav",
".ttf", ".otf", ".woff", ".woff2",
".bin", ".dat", ".db", ".sqlite",
]);
const ext = path.extname(filePath).toLowerCase();
if (binaryExtensions.has(ext)) return true;
if (stats.size === 0) return false;
const sampleSize = Math.min(4096, stats.size);
const fd = await fsp.open(filePath, "r");
try {
const buffer = Buffer.allocUnsafe(sampleSize);
const { bytesRead } = await fd.read(buffer, 0, sampleSize, 0);
const slice = bytesRead === sampleSize ? buffer : buffer.subarray(0, bytesRead);
return slice.includes(0);
} finally {
await fd.close();
}
} catch (error) {
console.warn(
`Warning: Could not determine if file is binary: ${filePath} - ${error.message}`,
);
return false;
}
}
module.exports = {
isBinaryFile,
};
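A hypothetical invocation sketch (file names assumed to exist):

```js
const { isBinaryFile } = require("./tools/flattener/binary.js");

// ".png" hits the extension fast path; a markdown file falls through to the NUL-byte probe.
isBinaryFile("./logo.png").then((binary) => console.log("logo.png binary?", binary));
isBinaryFile("./README.md").then((binary) => console.log("README.md binary?", binary));
```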


@@ -0,0 +1,70 @@
const path = require("node:path");
const { execFile } = require("node:child_process");
const { promisify } = require("node:util");
const { glob } = require("glob");
const { loadIgnore } = require("./ignoreRules.js");
const pExecFile = promisify(execFile);
async function isGitRepo(rootDir) {
try {
const { stdout } = await pExecFile("git", [
"rev-parse",
"--is-inside-work-tree",
], { cwd: rootDir });
return String(stdout || "").toString().trim() === "true";
} catch {
return false;
}
}
async function gitListFiles(rootDir) {
try {
const { stdout } = await pExecFile("git", [
"ls-files",
"-co",
"--exclude-standard",
], { cwd: rootDir });
return String(stdout || "")
.split(/\r?\n/)
.map((s) => s.trim())
.filter(Boolean);
} catch {
return [];
}
}
/**
* Discover files under rootDir.
* - Prefer git ls-files when available for speed/correctness
* - Fallback to glob and apply unified ignore rules
* @param {string} rootDir
* @param {object} [options]
* @param {boolean} [options.preferGit=true]
* @returns {Promise<string[]>} absolute file paths
*/
async function discoverFiles(rootDir, options = {}) {
const { preferGit = true } = options;
const { filter } = await loadIgnore(rootDir);
// Try git first
if (preferGit && await isGitRepo(rootDir)) {
const relFiles = await gitListFiles(rootDir);
const filteredRel = relFiles.filter((p) => filter(p));
return filteredRel.map((p) => path.resolve(rootDir, p));
}
// Glob fallback
const globbed = await glob("**/*", {
cwd: rootDir,
nodir: true,
dot: true,
follow: false,
});
const filteredRel = globbed.filter((p) => filter(p));
return filteredRel.map((p) => path.resolve(rootDir, p));
}
module.exports = {
discoverFiles,
};
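A hypothetical usage sketch (require path assumed):

```js
const { discoverFiles } = require("./tools/flattener/discovery.js");

// Inside a git repository this path uses `git ls-files`; elsewhere it falls back to glob.
discoverFiles(process.cwd()).then((files) => {
  console.log(`${files.length} files discovered`);
});
```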

35
tools/flattener/files.js Normal file

@@ -0,0 +1,35 @@
const path = require("node:path");
const discovery = require("./discovery.js");
const ignoreRules = require("./ignoreRules.js");
const { isBinaryFile } = require("./binary.js");
const { aggregateFileContents } = require("./aggregate.js");
// Backward-compatible signature; delegate to central loader
async function parseGitignore(gitignorePath) {
return await ignoreRules.parseGitignore(gitignorePath);
}
async function discoverFiles(rootDir) {
try {
// Delegate to discovery module which respects .gitignore and defaults
return await discovery.discoverFiles(rootDir, { preferGit: true });
} catch (error) {
console.error("Error discovering files:", error.message);
return [];
}
}
async function filterFiles(files, rootDir) {
const { filter } = await ignoreRules.loadIgnore(rootDir);
const relativeFiles = files.map((f) => path.relative(rootDir, f));
const filteredRelative = relativeFiles.filter((p) => filter(p));
return filteredRelative.map((p) => path.resolve(rootDir, p));
}
module.exports = {
parseGitignore,
discoverFiles,
isBinaryFile,
aggregateFileContents,
filterFiles,
};
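A hypothetical end-to-end sketch through this facade (require path assumed):

```js
const { discoverFiles, filterFiles, aggregateFileContents } = require("./tools/flattener/files.js");

(async () => {
  const root = process.cwd();
  const discovered = await discoverFiles(root);     // git or glob discovery
  const kept = await filterFiles(discovered, root); // unified ignore rules
  const agg = await aggregateFileContents(kept, root);
  console.log(`${agg.textFiles.length} text files ready for XML output`);
})();
```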


@@ -0,0 +1,176 @@
const fs = require("fs-extra");
const path = require("node:path");
const ignore = require("ignore");
// Central default ignore patterns for discovery and filtering.
// These complement .gitignore and are applied regardless of VCS presence.
const DEFAULT_PATTERNS = [
// Project/VCS
"**/.bmad-core/**",
"**/.git/**",
"**/.svn/**",
"**/.hg/**",
"**/.bzr/**",
// Package/build outputs
"**/node_modules/**",
"**/bower_components/**",
"**/vendor/**",
"**/packages/**",
"**/build/**",
"**/dist/**",
"**/out/**",
"**/target/**",
"**/bin/**",
"**/obj/**",
"**/release/**",
"**/debug/**",
// Environments
"**/.venv/**",
"**/venv/**",
"**/.virtualenv/**",
"**/virtualenv/**",
"**/env/**",
// Logs & coverage
"**/*.log",
"**/npm-debug.log*",
"**/yarn-debug.log*",
"**/yarn-error.log*",
"**/lerna-debug.log*",
"**/coverage/**",
"**/.nyc_output/**",
"**/.coverage/**",
"**/test-results/**",
// Caches & temp
"**/.cache/**",
"**/.tmp/**",
"**/.temp/**",
"**/tmp/**",
"**/temp/**",
"**/.sass-cache/**",
// IDE/editor
"**/.vscode/**",
"**/.idea/**",
"**/*.swp",
"**/*.swo",
"**/*~",
"**/.project",
"**/.classpath",
"**/.settings/**",
"**/*.sublime-project",
"**/*.sublime-workspace",
// Lockfiles
"**/package-lock.json",
"**/yarn.lock",
"**/pnpm-lock.yaml",
"**/composer.lock",
"**/Pipfile.lock",
// Python/Java/compiled artifacts
"**/*.pyc",
"**/*.pyo",
"**/*.pyd",
"**/__pycache__/**",
"**/*.class",
"**/*.jar",
"**/*.war",
"**/*.ear",
"**/*.o",
"**/*.so",
"**/*.dll",
"**/*.exe",
// System junk
"**/lib64/**",
"**/.venv/lib64/**",
"**/venv/lib64/**",
"**/_site/**",
"**/.jekyll-cache/**",
"**/.jekyll-metadata",
"**/.DS_Store",
"**/.DS_Store?",
"**/._*",
"**/.Spotlight-V100/**",
"**/.Trashes/**",
"**/ehthumbs.db",
"**/Thumbs.db",
"**/desktop.ini",
// XML outputs
"**/flattened-codebase.xml",
"**/repomix-output.xml",
// Images, media, fonts, archives, docs, dylibs
"**/*.jpg",
"**/*.jpeg",
"**/*.png",
"**/*.gif",
"**/*.bmp",
"**/*.ico",
"**/*.svg",
"**/*.pdf",
"**/*.doc",
"**/*.docx",
"**/*.xls",
"**/*.xlsx",
"**/*.ppt",
"**/*.pptx",
"**/*.zip",
"**/*.tar",
"**/*.gz",
"**/*.rar",
"**/*.7z",
"**/*.dylib",
"**/*.mp3",
"**/*.mp4",
"**/*.avi",
"**/*.mov",
"**/*.wav",
"**/*.ttf",
"**/*.otf",
"**/*.woff",
"**/*.woff2",
// Env files
"**/.env",
"**/.env.*",
"**/*.env",
// Misc
"**/junit.xml",
];
async function readIgnoreFile(filePath) {
try {
if (!await fs.pathExists(filePath)) return [];
const content = await fs.readFile(filePath, "utf8");
return content
.split("\n")
.map((l) => l.trim())
.filter((l) => l && !l.startsWith("#"));
} catch (err) {
return [];
}
}
// Backward compatible export matching previous signature
async function parseGitignore(gitignorePath) {
return readIgnoreFile(gitignorePath);
}
async function loadIgnore(rootDir, extraPatterns = []) {
const ig = ignore();
const gitignorePath = path.join(rootDir, ".gitignore");
const patterns = [
...await readIgnoreFile(gitignorePath),
...DEFAULT_PATTERNS,
...extraPatterns,
];
// De-duplicate
const unique = Array.from(new Set(patterns.map((p) => String(p))));
ig.add(unique);
// Include-only filter: return true if path should be included
const filter = (relativePath) => !ig.ignores(relativePath.replace(/\\/g, "/"));
return { ig, filter, patterns: unique };
}
module.exports = {
DEFAULT_PATTERNS,
parseGitignore,
loadIgnore,
};
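A hypothetical usage sketch of the unified filter (require path assumed):

```js
const { loadIgnore } = require("./tools/flattener/ignoreRules.js");

(async () => {
  const { filter } = await loadIgnore(process.cwd());
  console.log(filter("src/index.js"));        // true: kept
  console.log(filter("node_modules/x/y.js")); // false: default pattern
  console.log(filter(".env.local"));          // false: env files are excluded
})();
```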


@@ -1,219 +1,38 @@
#!/usr/bin/env node
-const { Command } = require('commander');
-const fs = require('fs-extra');
-const path = require('node:path');
-const { glob } = require('glob');
-const { minimatch } = require('minimatch');
+const { Command } = require("commander");
+const fs = require("fs-extra");
+const path = require("node:path");
+const process = require("node:process");
// Modularized components
const { findProjectRoot } = require("./projectRoot.js");
const { promptYesNo, promptPath } = require("./prompts.js");
const {
discoverFiles,
filterFiles,
aggregateFileContents,
} = require("./files.js");
const { generateXMLOutput } = require("./xml.js");
const { calculateStatistics } = require("./stats.js");
/**
* Recursively discover all files in a directory
* @param {string} rootDir - The root directory to scan
* @returns {Promise<string[]>} Array of file paths
*/
async function discoverFiles(rootDir) {
try {
const gitignorePath = path.join(rootDir, '.gitignore');
const gitignorePatterns = await parseGitignore(gitignorePath);
// Common gitignore patterns that should always be ignored
const commonIgnorePatterns = [
// Version control
'.git/**',
'.svn/**',
'.hg/**',
'.bzr/**',
// Dependencies
'node_modules/**',
'bower_components/**',
'vendor/**',
'packages/**',
// Build outputs
'build/**',
'dist/**',
'out/**',
'target/**',
'bin/**',
'obj/**',
'release/**',
'debug/**',
// Environment and config
'.env',
'.env.*',
'*.env',
'.config',
// Logs
'logs/**',
'*.log',
'npm-debug.log*',
'yarn-debug.log*',
'yarn-error.log*',
'lerna-debug.log*',
// Coverage and testing
'coverage/**',
'.nyc_output/**',
'.coverage/**',
'test-results/**',
'junit.xml',
// Cache directories
'.cache/**',
'.tmp/**',
'.temp/**',
'tmp/**',
'temp/**',
'.sass-cache/**',
'.eslintcache',
'.stylelintcache',
// OS generated files
'.DS_Store',
'.DS_Store?',
'._*',
'.Spotlight-V100',
'.Trashes',
'ehthumbs.db',
'Thumbs.db',
'desktop.ini',
// IDE and editor files
'.vscode/**',
'.idea/**',
'*.swp',
'*.swo',
'*~',
'.project',
'.classpath',
'.settings/**',
'*.sublime-project',
'*.sublime-workspace',
// Package manager files
'package-lock.json',
'yarn.lock',
'pnpm-lock.yaml',
'composer.lock',
'Pipfile.lock',
// Runtime and compiled files
'*.pyc',
'*.pyo',
'*.pyd',
'__pycache__/**',
'*.class',
'*.jar',
'*.war',
'*.ear',
'*.o',
'*.so',
'*.dll',
'*.exe',
// Documentation build
'_site/**',
'.jekyll-cache/**',
'.jekyll-metadata',
// Flattener specific outputs
'flattened-codebase.xml',
'repomix-output.xml'
];
const combinedIgnores = [
...gitignorePatterns,
...commonIgnorePatterns
];
// Use glob to recursively find all files, excluding common ignore patterns
const files = await glob('**/*', {
cwd: rootDir,
nodir: true, // Only files, not directories
dot: true, // Include hidden files
follow: false, // Don't follow symbolic links
ignore: combinedIgnores
});
return files.map(file => path.resolve(rootDir, file));
} catch (error) {
console.error('Error discovering files:', error.message);
return [];
}
}
/**
* Parse .gitignore file and return ignore patterns
* @param {string} gitignorePath - Path to .gitignore file
* @returns {Promise<string[]>} Array of ignore patterns
*/
async function parseGitignore(gitignorePath) {
try {
if (!await fs.pathExists(gitignorePath)) {
return [];
}
const content = await fs.readFile(gitignorePath, 'utf8');
return content
.split('\n')
.map(line => line.trim())
.filter(line => line && !line.startsWith('#')) // Remove empty lines and comments
.map(pattern => {
// Convert gitignore patterns to glob patterns
if (pattern.endsWith('/')) {
return pattern + '**';
}
return pattern;
});
} catch (error) {
console.error('Error parsing .gitignore:', error.message);
return [];
}
}
/**
* Check if a file is binary using file command and heuristics
* @param {string} filePath - Path to the file
* @returns {Promise<boolean>} True if file is binary
*/
async function isBinaryFile(filePath) {
try {
// First check by file extension
const binaryExtensions = [
'.jpg', '.jpeg', '.png', '.gif', '.bmp', '.ico', '.svg',
'.pdf', '.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx',
'.zip', '.tar', '.gz', '.rar', '.7z',
'.exe', '.dll', '.so', '.dylib',
'.mp3', '.mp4', '.avi', '.mov', '.wav',
'.ttf', '.otf', '.woff', '.woff2',
'.bin', '.dat', '.db', '.sqlite'
];
const ext = path.extname(filePath).toLowerCase();
if (binaryExtensions.includes(ext)) {
return true;
}
// For files without clear extensions, try to read a small sample
const stats = await fs.stat(filePath);
if (stats.size === 0) {
return false; // Empty files are considered text
}
// Read first 1024 bytes to check for null bytes
const sampleSize = Math.min(1024, stats.size);
const buffer = await fs.readFile(filePath, { encoding: null, flag: 'r' });
const sample = buffer.slice(0, sampleSize);
// If we find null bytes, it's likely binary
return sample.includes(0);
} catch (error) {
console.warn(`Warning: Could not determine if file is binary: ${filePath} - ${error.message}`);
return false; // Default to text if we can't determine
}
}
/**
* Read and aggregate content from text files
@@ -222,68 +41,6 @@ async function isBinaryFile(filePath) {
* @param {Object} spinner - Optional spinner instance for progress display
* @returns {Promise<Object>} Object containing file contents and metadata
*/
async function aggregateFileContents(files, rootDir, spinner = null) {
const results = {
textFiles: [],
binaryFiles: [],
errors: [],
totalFiles: files.length,
processedFiles: 0
};
for (const filePath of files) {
try {
const relativePath = path.relative(rootDir, filePath);
// Update progress indicator
if (spinner) {
spinner.text = `Processing file ${results.processedFiles + 1}/${results.totalFiles}: ${relativePath}`;
}
const isBinary = await isBinaryFile(filePath);
if (isBinary) {
results.binaryFiles.push({
path: relativePath,
absolutePath: filePath,
size: (await fs.stat(filePath)).size
});
} else {
// Read text file content
const content = await fs.readFile(filePath, 'utf8');
results.textFiles.push({
path: relativePath,
absolutePath: filePath,
content: content,
size: content.length,
lines: content.split('\n').length
});
}
results.processedFiles++;
} catch (error) {
const relativePath = path.relative(rootDir, filePath);
const errorInfo = {
path: relativePath,
absolutePath: filePath,
error: error.message
};
results.errors.push(errorInfo);
// Log warning without interfering with spinner
if (spinner) {
spinner.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
} else {
console.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
}
results.processedFiles++;
}
}
return results;
}
/**
* Generate XML output with aggregated file contents using streaming
@@ -291,111 +48,6 @@ async function aggregateFileContents(files, rootDir, spinner = null) {
* @param {string} outputPath - The output file path
* @returns {Promise<void>} Promise that resolves when writing is complete
*/
async function generateXMLOutput(aggregatedContent, outputPath) {
const { textFiles } = aggregatedContent;
// Create write stream for efficient memory usage
const writeStream = fs.createWriteStream(outputPath, { encoding: 'utf8' });
return new Promise((resolve, reject) => {
writeStream.on('error', reject);
writeStream.on('finish', resolve);
// Write XML header
writeStream.write('<?xml version="1.0" encoding="UTF-8"?>\n');
writeStream.write('<files>\n');
// Process files one by one to minimize memory usage
let fileIndex = 0;
const writeNextFile = () => {
if (fileIndex >= textFiles.length) {
// All files processed, close XML and stream
writeStream.write('</files>\n');
writeStream.end();
return;
}
const file = textFiles[fileIndex];
fileIndex++;
// Write file opening tag
writeStream.write(` <file path="${escapeXml(file.path)}">`);
// Use CDATA for code content, handling CDATA end sequences properly
if (file.content?.trim()) {
const indentedContent = indentFileContent(file.content);
if (file.content.includes(']]>')) {
// If content contains ]]>, split it and wrap each part in CDATA
writeStream.write(splitAndWrapCDATA(indentedContent));
} else {
writeStream.write(`<![CDATA[\n${indentedContent}\n ]]>`);
}
} else if (file.content) {
// Handle empty or whitespace-only content
const indentedContent = indentFileContent(file.content);
writeStream.write(`<![CDATA[\n${indentedContent}\n ]]>`);
}
// Write file closing tag
writeStream.write('</file>\n');
// Continue with next file on next tick to avoid stack overflow
setImmediate(writeNextFile);
};
// Start processing files
writeNextFile();
});
}
/**
* Escape XML special characters for attributes
* @param {string} str - String to escape
* @returns {string} Escaped string
*/
function escapeXml(str) {
if (typeof str !== 'string') {
return String(str);
}
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&apos;');
}
/**
* Indent file content with 4 spaces for each line
* @param {string} content - Content to indent
* @returns {string} Indented content
*/
function indentFileContent(content) {
if (typeof content !== 'string') {
return String(content);
}
// Split content into lines and add 4 spaces of indentation to each line
return content.split('\n').map(line => ` ${line}`).join('\n');
}
/**
* Split content containing ]]> and wrap each part in CDATA
* @param {string} content - Content to process
* @returns {string} Content with properly wrapped CDATA sections
*/
function splitAndWrapCDATA(content) {
if (typeof content !== 'string') {
return String(content);
}
// Replace ]]> with ]]]]><![CDATA[> to escape it within CDATA
const escapedContent = content.replace(/]]>/g, ']]]]><![CDATA[>');
return `<![CDATA[
${escapedContent}
]]>`;
}
/**
* Calculate statistics for the processed files
@@ -403,38 +55,6 @@ ${escapedContent}
* @param {number} xmlFileSize - The size of the generated XML file in bytes
* @returns {Object} Statistics object
*/
function calculateStatistics(aggregatedContent, xmlFileSize) {
const { textFiles, binaryFiles, errors } = aggregatedContent;
// Calculate total file size in bytes
const totalTextSize = textFiles.reduce((sum, file) => sum + file.size, 0);
const totalBinarySize = binaryFiles.reduce((sum, file) => sum + file.size, 0);
const totalSize = totalTextSize + totalBinarySize;
// Calculate total lines of code
const totalLines = textFiles.reduce((sum, file) => sum + file.lines, 0);
// Estimate token count (rough approximation: 1 token ≈ 4 characters)
const estimatedTokens = Math.ceil(xmlFileSize / 4);
// Format file size
const formatSize = (bytes) => {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
};
return {
totalFiles: textFiles.length + binaryFiles.length,
textFiles: textFiles.length,
binaryFiles: binaryFiles.length,
errorFiles: errors.length,
totalSize: formatSize(totalSize),
xmlSize: formatSize(xmlFileSize),
totalLines,
estimatedTokens: estimatedTokens.toLocaleString()
};
}
/**
* Filter files based on .gitignore patterns
@@ -442,66 +62,81 @@ function calculateStatistics(aggregatedContent, xmlFileSize) {
* @param {string} rootDir - The root directory
* @returns {Promise<string[]>} Filtered array of file paths
*/
async function filterFiles(files, rootDir) {
const gitignorePath = path.join(rootDir, '.gitignore');
const ignorePatterns = await parseGitignore(gitignorePath);
if (ignorePatterns.length === 0) {
return files;
}
// Convert absolute paths to relative for pattern matching
const relativeFiles = files.map(file => path.relative(rootDir, file));
// Separate positive and negative patterns
const positivePatterns = ignorePatterns.filter(p => !p.startsWith('!'));
const negativePatterns = ignorePatterns.filter(p => p.startsWith('!')).map(p => p.slice(1));
// Filter out files that match ignore patterns
const filteredRelative = [];
for (const file of relativeFiles) {
let shouldIgnore = false;
// First check positive patterns (ignore these files)
for (const pattern of positivePatterns) {
if (minimatch(file, pattern)) {
shouldIgnore = true;
break;
}
}
// Then check negative patterns (don't ignore these files even if they match positive patterns)
if (shouldIgnore) {
for (const pattern of negativePatterns) {
if (minimatch(file, pattern)) {
shouldIgnore = false;
break;
}
}
}
if (!shouldIgnore) {
filteredRelative.push(file);
}
}
// Convert back to absolute paths
return filteredRelative.map(file => path.resolve(rootDir, file));
}
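// Example (illustrative): with a .gitignore containing "dist/**" and
// "!dist/keep.txt", "dist/bundle.js" matches the positive pattern and is
// dropped, while "dist/keep.txt" matches the negation and is re-included,
// so it survives filtering.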
const program = new Command();
program
.name("bmad-flatten")
.description("BMad-Method codebase flattener tool")
.version("1.0.0")
.option("-i, --input <path>", "Input directory to flatten", process.cwd())
.option("-o, --output <path>", "Output file path", "flattened-codebase.xml")
.action(async (options) => {
let inputDir = path.resolve(options.input);
let outputPath = path.resolve(options.output);
// Detect if user explicitly provided -i/--input or -o/--output
const argv = process.argv.slice(2);
const userSpecifiedInput = argv.some((a) =>
a === "-i" || a === "--input" || a.startsWith("--input=")
);
const userSpecifiedOutput = argv.some((a) =>
a === "-o" || a === "--output" || a.startsWith("--output=")
);
const noPathArgs = !userSpecifiedInput && !userSpecifiedOutput;
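// Example (illustrative): `npx bmad-flatten` sets neither flag, so noPathArgs
// is true and the tool falls through to root detection and prompts below;
// `npx bmad-flatten --input=src` sets userSpecifiedInput and skips both.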
if (noPathArgs) {
const detectedRoot = await findProjectRoot(process.cwd());
const suggestedOutput = detectedRoot
? path.join(detectedRoot, "flattened-codebase.xml")
: path.resolve("flattened-codebase.xml");
if (detectedRoot) {
const useDefaults = await promptYesNo(
`Detected project root at "${detectedRoot}". Use it as input and write output to "${suggestedOutput}"?`,
true,
);
if (useDefaults) {
inputDir = detectedRoot;
outputPath = suggestedOutput;
} else {
inputDir = await promptPath(
"Enter input directory path",
process.cwd(),
);
outputPath = await promptPath(
"Enter output file path",
path.join(inputDir, "flattened-codebase.xml"),
);
}
} else {
console.log("Could not auto-detect a project root.");
inputDir = await promptPath(
"Enter input directory path",
process.cwd(),
);
outputPath = await promptPath(
"Enter output file path",
path.join(inputDir, "flattened-codebase.xml"),
);
}
}
// When -i/--input or -o/--output was given, the values resolved from the
// options above are used directly and no prompting occurs.
// Ensure output directory exists
await fs.ensureDir(path.dirname(outputPath));
console.log(`Flattening codebase from: ${inputDir}`);
console.log(`Output file: ${outputPath}`);
@@ -513,22 +148,27 @@ program
}
// Import ora dynamically
const { default: ora } = await import("ora");
// Start file discovery with spinner
const discoverySpinner = ora("🔍 Discovering files...").start();
const files = await discoverFiles(inputDir);
const filteredFiles = await filterFiles(files, inputDir);
discoverySpinner.succeed(
`📁 Found ${filteredFiles.length} files to include`,
);
// Process files with progress tracking
console.log("Reading file contents");
const processingSpinner = ora("📄 Processing files...").start();
const aggregatedContent = await aggregateFileContents(
filteredFiles,
inputDir,
processingSpinner,
);
processingSpinner.succeed(
`✅ Processed ${aggregatedContent.processedFiles}/${filteredFiles.length} files`,
);
if (aggregatedContent.errors.length > 0) {
console.log(`Errors: ${aggregatedContent.errors.length}`);
}
@@ -538,27 +178,34 @@ program
}
// Generate XML output using streaming
const xmlSpinner = ora("🔧 Generating XML output...").start();
await generateXMLOutput(aggregatedContent, outputPath);
xmlSpinner.succeed("📝 XML generation completed");
// Calculate and display statistics
const outputStats = await fs.stat(outputPath);
const stats = calculateStatistics(aggregatedContent, outputStats.size);
// Display completion summary
console.log("\n📊 Completion Summary:");
console.log(
`✅ Successfully processed ${filteredFiles.length} files into ${
path.basename(outputPath)
}`,
);
console.log(`📁 Output file: ${outputPath}`);
console.log(`📏 Total source size: ${stats.totalSize}`);
console.log(`📄 Generated XML size: ${stats.xmlSize}`);
console.log(
`📝 Total lines of code: ${stats.totalLines.toLocaleString()}`,
);
console.log(`🔢 Estimated tokens: ${stats.estimatedTokens}`);
console.log(
`📊 File breakdown: ${stats.textFiles} text, ${stats.binaryFiles} binary, ${stats.errorFiles} errors`,
);
} catch (error) {
console.error("❌ Critical error:", error.message);
console.error("An unexpected error occurred.");
process.exit(1);
}
});

View File

@@ -0,0 +1,45 @@
const fs = require("fs-extra");
const path = require("node:path");
/**
* Attempt to find the project root by walking up from startDir
* Looks for common project markers like .git, package.json, pyproject.toml, etc.
* @param {string} startDir
* @returns {Promise<string|null>} project root directory or null if not found
*/
async function findProjectRoot(startDir) {
try {
let dir = path.resolve(startDir);
const root = path.parse(dir).root;
const markers = [
".git",
"package.json",
"pnpm-workspace.yaml",
"yarn.lock",
"pnpm-lock.yaml",
"pyproject.toml",
"requirements.txt",
"go.mod",
"Cargo.toml",
"composer.json",
".hg",
".svn",
];
while (true) {
const exists = await Promise.all(
markers.map((m) => fs.pathExists(path.join(dir, m))),
);
if (exists.some(Boolean)) {
return dir;
}
if (dir === root) break;
dir = path.dirname(dir);
}
return null;
} catch {
return null;
}
}
module.exports = { findProjectRoot };
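// Usage sketch (illustrative; the require path assumes this module sits next
// to the flattener CLI entry point):
//   const { findProjectRoot } = require("./projectRoot");
//   const root = await findProjectRoot(process.cwd());
//   // -> e.g. "/home/user/my-app" when a marker such as .git or package.json
//   //    exists in an ancestor directory; null once the walk reaches the root.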

View File

@@ -0,0 +1,44 @@
const os = require("node:os");
const path = require("node:path");
const readline = require("node:readline");
const process = require("node:process");
function expandHome(p) {
if (!p) return p;
if (p.startsWith("~")) return path.join(os.homedir(), p.slice(1));
return p;
}
function createRl() {
return readline.createInterface({
input: process.stdin,
output: process.stdout,
});
}
function promptQuestion(question) {
return new Promise((resolve) => {
const rl = createRl();
rl.question(question, (answer) => {
rl.close();
resolve(answer);
});
});
}
async function promptYesNo(question, defaultYes = true) {
const suffix = defaultYes ? " [Y/n] " : " [y/N] ";
const ans = (await promptQuestion(`${question}${suffix}`)).trim().toLowerCase();
if (!ans) return defaultYes;
if (["y", "yes"].includes(ans)) return true;
if (["n", "no"].includes(ans)) return false;
return promptYesNo(question, defaultYes);
}
async function promptPath(question, defaultValue) {
const prompt = `${question}${defaultValue ? ` (default: ${defaultValue})` : ""}: `;
const ans = (await promptQuestion(prompt)).trim();
return expandHome(ans || defaultValue);
}
module.exports = { promptYesNo, promptPath, promptQuestion, expandHome };
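// Usage sketch (illustrative):
//   const { promptYesNo, promptPath, expandHome } = require("./prompts");
//   const ok = await promptYesNo("Use detected root?", true); // bare Enter -> true
//   const out = await promptPath("Output file", "flattened-codebase.xml");
//   expandHome("~/out.xml"); // -> path.join(os.homedir(), "/out.xml")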

30
tools/flattener/stats.js Normal file
View File

@@ -0,0 +1,30 @@
function calculateStatistics(aggregatedContent, xmlFileSize) {
const { textFiles, binaryFiles, errors } = aggregatedContent;
const totalTextSize = textFiles.reduce((sum, file) => sum + file.size, 0);
const totalBinarySize = binaryFiles.reduce((sum, file) => sum + file.size, 0);
const totalSize = totalTextSize + totalBinarySize;
const totalLines = textFiles.reduce((sum, file) => sum + file.lines, 0);
const estimatedTokens = Math.ceil(xmlFileSize / 4);
const formatSize = (bytes) => {
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
};
return {
totalFiles: textFiles.length + binaryFiles.length,
textFiles: textFiles.length,
binaryFiles: binaryFiles.length,
errorFiles: errors.length,
totalSize: formatSize(totalSize),
xmlSize: formatSize(xmlFileSize),
totalLines,
estimatedTokens: estimatedTokens.toLocaleString(),
};
}
module.exports = { calculateStatistics };
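// Example (illustrative) of the summary this produces:
//   calculateStatistics(
//     { textFiles: [{ size: 2048, lines: 80 }], binaryFiles: [], errors: [] },
//     10240, // size of the generated XML in bytes
//   );
//   // -> { totalFiles: 1, textFiles: 1, binaryFiles: 0, errorFiles: 0,
//   //      totalSize: '2.0 KB', xmlSize: '10.0 KB', totalLines: 80,
//   //      estimatedTokens: '2,560' } // ceil(10240 / 4), 1 token ≈ 4 chars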

86
tools/flattener/xml.js Normal file
View File

@@ -0,0 +1,86 @@
const fs = require("fs-extra");
function escapeXml(str) {
if (typeof str !== "string") {
return String(str);
}
// Escape all five XML special characters so paths are safe in attribute values
return str
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
.replace(/"/g, "&quot;")
.replace(/'/g, "&apos;");
}
function indentFileContent(content) {
if (typeof content !== "string") {
return String(content);
}
return content.split("\n").map((line) => ` ${line}`);
}
function generateXMLOutput(aggregatedContent, outputPath) {
const { textFiles } = aggregatedContent;
const writeStream = fs.createWriteStream(outputPath, { encoding: "utf8" });
return new Promise((resolve, reject) => {
writeStream.on("error", reject);
writeStream.on("finish", resolve);
writeStream.write('<?xml version="1.0" encoding="UTF-8"?>\n');
writeStream.write("<files>\n");
// Sort files by path for deterministic order
const filesSorted = [...textFiles].sort((a, b) =>
a.path.localeCompare(b.path)
);
let index = 0;
const writeNext = () => {
if (index >= filesSorted.length) {
writeStream.write("</files>\n");
writeStream.end();
return;
}
const file = filesSorted[index++];
const p = escapeXml(file.path);
const content = typeof file.content === "string" ? file.content : "";
if (content.length === 0) {
writeStream.write(`\t<file path='${p}'/>\n`);
setTimeout(writeNext, 0);
return;
}
const needsCdata = content.includes("<") || content.includes("&") ||
content.includes("]]>");
if (needsCdata) {
// Open tag and CDATA on their own line with tab indent; content lines indented with two tabs
writeStream.write(`\t<file path='${p}'><![CDATA[\n`);
// Safely split any occurrences of "]]>" inside content, trim trailing newlines, indent each line with two tabs
const safe = content.replace(/]]>/g, "]]]]><![CDATA[>");
const trimmed = safe.replace(/[\r\n]+$/, "");
const indented = trimmed.length > 0
? trimmed.split("\n").map((line) => `\t\t${line}`).join("\n")
: "";
writeStream.write(indented);
// Close CDATA and attach closing tag directly after the last content line
writeStream.write("]]></file>\n");
} else {
// Write opening tag then newline; indent content with two tabs; attach closing tag directly after last content char
writeStream.write(`\t<file path='${p}'>\n`);
const trimmed = content.replace(/[\r\n]+$/, "");
const indented = trimmed.length > 0
? trimmed.split("\n").map((line) => `\t\t${line}`).join("\n")
: "";
writeStream.write(indented);
writeStream.write(`</file>\n`);
}
setTimeout(writeNext, 0);
};
writeNext();
});
}
module.exports = { generateXMLOutput };
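// Output shape (illustrative): content containing "<", "&", or "]]>" is
// wrapped in CDATA, other content is written as-is, and empty files collapse
// to a self-closing tag (content lines carry a two-tab indent):
//   <file path='src/a.js'><![CDATA[
//       const x = 1 < 2;]]></file>
//   <file path='notes.txt'>
//       plain text</file>
//   <file path='empty.txt'/>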

View File

@@ -41,7 +41,7 @@ program
.option('-f, --full', 'Install complete BMad Method')
.option('-x, --expansion-only', 'Install only expansion packs (no bmad-core)')
.option('-d, --directory <path>', 'Installation directory')
.option('-i, --ide <ide...>', 'Configure for specific IDE(s) - can specify multiple (cursor, claude-code, windsurf, trae, roo, kilo, cline, gemini, qwen-code, github-copilot, other)')
.option('-e, --expansion-packs <packs...>', 'Install specific expansion packs (can specify multiple)')
.action(async (options) => {
try {
@@ -314,6 +314,7 @@ async function promptInstallation() {
{ name: 'Kilo Code', value: 'kilo' },
{ name: 'Cline', value: 'cline' },
{ name: 'Gemini CLI', value: 'gemini' },
{ name: 'Qwen Code', value: 'qwen-code' },
{ name: 'Github Copilot', value: 'github-copilot' }
]
}

View File

@@ -98,4 +98,16 @@ ide-configurations:
# To use BMAD agents in Kilo Code:
# 1. Open the mode selector in VSCode
# 2. Select a bmad-{agent} mode (e.g. "bmad-dev")
# 3. The AI adopts that agent's persona and capabilities
qwen-code:
name: Qwen Code
rule-dir: .qwen/bmad-method/
format: single-file
command-suffix: .md
instructions: |
# To use BMad agents with Qwen Code:
# 1. The installer creates a .qwen/bmad-method/ directory in your project.
# 2. It concatenates all agent files into a single QWEN.md file.
# 3. Simply mention the agent in your prompt (e.g., "As *dev, ...").
# 4. The Qwen Code CLI will automatically have the context for that agent.

View File

@@ -59,6 +59,8 @@ class IdeSetup extends BaseIdeSetup {
return this.setupGeminiCli(installDir, selectedAgent);
case "github-copilot":
return this.setupGitHubCopilot(installDir, selectedAgent, spinner, preConfiguredSettings);
case "qwen-code":
return this.setupQwenCode(installDir, selectedAgent);
default:
console.log(chalk.yellow(`\nIDE ${ide} not yet supported`));
return false;
@@ -977,6 +979,106 @@ class IdeSetup extends BaseIdeSetup {
return true;
}
async setupQwenCode(installDir, selectedAgent) {
const qwenDir = path.join(installDir, ".qwen");
const bmadMethodDir = path.join(qwenDir, "bmad-method");
await fileManager.ensureDirectory(bmadMethodDir);
// Update logic for existing settings.json
const settingsPath = path.join(qwenDir, "settings.json");
if (await fileManager.pathExists(settingsPath)) {
try {
const settingsContent = await fileManager.readFile(settingsPath);
const settings = JSON.parse(settingsContent);
let updated = false;
// Handle contextFileName property
if (settings.contextFileName && Array.isArray(settings.contextFileName)) {
const originalLength = settings.contextFileName.length;
settings.contextFileName = settings.contextFileName.filter(
(fileName) => !fileName.startsWith("agents/")
);
if (settings.contextFileName.length !== originalLength) {
updated = true;
}
}
if (updated) {
await fileManager.writeFile(
settingsPath,
JSON.stringify(settings, null, 2)
);
console.log(chalk.green("✓ Updated .qwen/settings.json - removed agent file references"));
}
} catch (error) {
console.warn(
chalk.yellow("Could not update .qwen/settings.json"),
error
);
}
}
// Remove old agents directory
const agentsDir = path.join(qwenDir, "agents");
if (await fileManager.pathExists(agentsDir)) {
await fileManager.removeDirectory(agentsDir);
console.log(chalk.green("✓ Removed old .qwen/agents directory"));
}
// Get all available agents
const agents = selectedAgent ? [selectedAgent] : await this.getAllAgentIds(installDir);
let concatenatedContent = "";
for (const agentId of agents) {
// Find the source agent file
const agentPath = await this.findAgentPath(agentId, installDir);
if (agentPath) {
const agentContent = await fileManager.readFile(agentPath);
// Create properly formatted agent rule content (similar to gemini)
let agentRuleContent = `# ${agentId.toUpperCase()} Agent Rule\n\n`;
agentRuleContent += `This rule is triggered when the user types \`*${agentId}\` and activates the ${await this.getAgentTitle(
agentId,
installDir
)} agent persona.\n\n`;
agentRuleContent += "## Agent Activation\n\n";
agentRuleContent +=
"CRITICAL: Read the full YAML, start activation to alter your state of being, follow startup section instructions, stay in this being until told to exit this mode:\n\n";
agentRuleContent += "```yaml\n";
// Extract just the YAML content from the agent file
const yamlContent = extractYamlFromAgent(agentContent);
if (yamlContent) {
agentRuleContent += yamlContent;
}
else {
// If no YAML found, include the whole content minus the header
agentRuleContent += agentContent.replace(/^#.*$/m, "").trim();
}
agentRuleContent += "\n```\n\n";
agentRuleContent += "## File Reference\n\n";
const relativePath = path.relative(installDir, agentPath).replace(/\\/g, '/');
agentRuleContent += `The complete agent definition is available in [${relativePath}](${relativePath}).\n\n`;
agentRuleContent += "## Usage\n\n";
agentRuleContent += `When the user types \`*${agentId}\`, activate this ${await this.getAgentTitle(
agentId,
installDir
)} persona and follow all instructions defined in the YAML configuration above.\n`;
// Add to concatenated content with separator
concatenatedContent += agentRuleContent + "\n\n---\n\n";
console.log(chalk.green(`✓ Added context for *${agentId}`));
}
}
// Write the concatenated content to QWEN.md
const qwenMdPath = path.join(bmadMethodDir, "QWEN.md");
await fileManager.writeFile(qwenMdPath, concatenatedContent);
console.log(chalk.green(`\n✓ Created QWEN.md in ${bmadMethodDir}`));
return true;
}
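// Note: extractYamlFromAgent is defined elsewhere in this module (not shown in
// this diff). A minimal sketch of the assumed behavior: extract the first
// fenced yaml block from an agent markdown file.
//   const extractYamlFromAgent = (md) =>
//     (md.match(/```ya?ml\r?\n([\s\S]*?)```/) || [])[1]?.trim() || null;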
async setupGitHubCopilot(installDir, selectedAgent, spinner = null, preConfiguredSettings = null) {
// Configure VS Code workspace settings first to avoid UI conflicts with loading spinners
await this.configureVsCodeSettings(installDir, spinner, preConfiguredSettings);

View File

@@ -237,6 +237,10 @@ class Installer {
// Copy common/ items to .bmad-core
spinner.text = "Copying common utilities...";
await this.copyCommonItems(installDir, ".bmad-core", spinner);
// Copy documentation files from docs/ to .bmad-core
spinner.text = "Copying documentation files...";
await this.copyDocsItems(installDir, ".bmad-core", spinner);
// Get list of all files for manifest
const foundFiles = await resourceLocator.findFiles("**/*", {
@@ -308,6 +312,11 @@ class Installer {
spinner.text = "Copying common utilities...";
const commonFiles = await this.copyCommonItems(installDir, ".bmad-core", spinner);
files.push(...commonFiles);
// Copy documentation files from docs/ to .bmad-core
spinner.text = "Copying documentation files...";
const docFiles = await this.copyDocsItems(installDir, ".bmad-core", spinner);
files.push(...docFiles);
} else if (config.installType === "team") {
// Team installation
spinner.text = `Installing ${config.team} team...`;
@@ -353,6 +362,11 @@ class Installer {
spinner.text = "Copying common utilities...";
const commonFiles = await this.copyCommonItems(installDir, ".bmad-core", spinner);
files.push(...commonFiles);
// Copy documentation files from docs/ to .bmad-core
spinner.text = "Copying documentation files...";
const docFiles = await this.copyDocsItems(installDir, ".bmad-core", spinner);
files.push(...docFiles);
} else if (config.installType === "expansion-only") {
// Expansion-only installation - DO NOT create .bmad-core
// Only install expansion packs
@@ -896,7 +910,7 @@ class Installer {
}
// Important notice to read the user guide
console.log(chalk.red.bold("\n📖 IMPORTANT: Please read the user guide installed at .bmad-core/user-guide.md"));
console.log(chalk.red.bold("\n📖 IMPORTANT: Please read the user guide at docs/user-guide.md (also installed at .bmad-core/user-guide.md)"));
console.log(chalk.red("This guide contains essential information about the BMad workflow and how to use the agents effectively."));
}
@@ -1557,6 +1571,54 @@ class Installer {
return copiedFiles;
}
async copyDocsItems(installDir, targetSubdir, spinner) {
const fs = require('fs').promises;
const sourceBase = path.dirname(path.dirname(path.dirname(path.dirname(__filename)))); // Go up to project root
const docsPath = path.join(sourceBase, 'docs');
const targetPath = path.join(installDir, targetSubdir);
const copiedFiles = [];
// Specific documentation files to copy
const docFiles = [
'enhanced-ide-development-workflow.md',
'user-guide.md',
'working-in-the-brownfield.md'
];
// Check if docs/ exists
if (!(await fileManager.pathExists(docsPath))) {
console.warn('Warning: docs/ folder not found');
return copiedFiles;
}
// Copy specific documentation files from docs/ to target
for (const docFile of docFiles) {
const sourcePath = path.join(docsPath, docFile);
const destPath = path.join(targetPath, docFile);
// Check if the source file exists
if (await fileManager.pathExists(sourcePath)) {
// Read the file content
const content = await fs.readFile(sourcePath, 'utf8');
// Replace {root} with the target subdirectory
const updatedContent = content.replace(/\{root\}/g, targetSubdir);
// Ensure directory exists
await fileManager.ensureDirectory(path.dirname(destPath));
// Write the updated content
await fs.writeFile(destPath, updatedContent, 'utf8');
copiedFiles.push(path.join(targetSubdir, docFile));
}
}
if (copiedFiles.length > 0) {
console.log(chalk.dim(` Added ${copiedFiles.length} documentation files`));
}
return copiedFiles;
}
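// Example (illustrative): a doc line such as "see {root}/core-config.yaml" is
// rewritten to "see .bmad-core/core-config.yaml" when targetSubdir is
// ".bmad-core", so installed guides reference the installed layout.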
async detectExpansionPacks(installDir) {
const expansionPacks = {};
const glob = require("glob");
@@ -1729,7 +1791,7 @@ class Installer {
const manifestPath = path.join(bmadDir, "install-manifest.yaml");
if (await fileManager.pathExists(manifestPath)) {
return currentDir; // Return parent directory, not .bmad-core itself
}
currentDir = path.dirname(currentDir);
@@ -1739,7 +1801,7 @@ class Installer {
if (path.basename(process.cwd()) === ".bmad-core") {
const manifestPath = path.join(process.cwd(), "install-manifest.yaml");
if (await fileManager.pathExists(manifestPath)) {
return path.dirname(process.cwd()); // Return parent directory
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "bmad-method",
"version": "4.34.0",
"version": "4.36.1",
"description": "BMad Method installer - AI-powered Agile development framework",
"main": "lib/installer.js",
"bin": {

105
tools/shared/bannerArt.js Normal file
View File

@@ -0,0 +1,105 @@
// ASCII banner art definitions extracted from banners.js to separate art from logic
const BMAD_TITLE = "BMAD-METHOD";
const FLATTENER_TITLE = "FLATTENER";
const INSTALLER_TITLE = "INSTALLER";
// Large ASCII blocks (block-style fonts)
const BMAD_LARGE = `
██████╗ ███╗ ███╗ █████╗ ██████╗ ███╗ ███╗███████╗████████╗██╗ ██╗ ██████╗ ██████╗
██╔══██╗████╗ ████║██╔══██╗██╔══██╗ ████╗ ████║██╔════╝╚══██╔══╝██║ ██║██╔═══██╗██╔══██╗
██████╔╝██╔████╔██║███████║██║ ██║█████╗██╔████╔██║█████╗ ██║ ███████║██║ ██║██║ ██║
██╔══██╗██║╚██╔╝██║██╔══██║██║ ██║╚════╝██║╚██╔╝██║██╔══╝ ██║ ██╔══██║██║ ██║██║ ██║
██████╔╝██║ ╚═╝ ██║██║ ██║██████╔╝ ██║ ╚═╝ ██║███████╗ ██║ ██║ ██║╚██████╔╝██████╔╝
╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═════╝ ╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝
`;
const FLATTENER_LARGE = `
███████╗██╗ █████╗ ████████╗████████╗███████╗███╗ ██╗███████╗██████╗
██╔════╝██║ ██╔══██╗╚══██╔══╝╚══██╔══╝██╔════╝████╗ ██║██╔════╝██╔══██╗
█████╗ ██║ ███████║ ██║ ██║ █████╗ ██╔██╗ ██║█████╗ ██████╔╝
██╔══╝ ██║ ██╔══██║ ██║ ██║ ██╔══╝ ██║╚██╗██║██╔══╝ ██╔══██╗
██║ ███████║██║ ██║ ██║ ██║ ███████╗██║ ╚████║███████╗██║ ██║
╚═╝ ╚══════╝╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝╚══════╝╚═╝ ╚═╝
`;
const INSTALLER_LARGE = `
██╗███╗ ██╗███████╗████████╗ █████╗ ██╗ ██╗ ███████╗██████╗
██║████╗ ██║██╔════╝╚══██╔══╝██╔══██╗██║ ██║ ██╔════╝██╔══██╗
██║██╔██╗ ██║███████╗ ██║ ███████║██║ ██║ █████╗ ██████╔╝
██║██║╚██╗██║╚════██║ ██║ ██╔══██║██║ ██║ ██╔══╝ ██╔══██╗
██║██║ ╚████║███████║ ██║ ██║ ██║███████╗███████╗███████╗██║ ██║
╚═╝╚═╝ ╚═══╝╚══════╝ ╚═╝ ╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝╚═╝ ╚═╝
`;
// Curated medium/small/tiny variants (fixed art, no runtime scaling)
// Medium: bold framed title with heavy fill (high contrast, compact)
const BMAD_MEDIUM = `
███╗ █╗ █╗ ██╗ ███╗ █╗ █╗███╗█████╗█╗ █╗ ██╗ ███╗
█╔═█╗██╗ ██║█╔═█╗█╔═█╗ ██╗ ██║█╔═╝╚═█╔═╝█║ █║█╔═█╗█╔═█╗
███╔╝█╔███╔█║████║█║ █║██╗█╔███╔█║██╗ █║ ████║█║ █║█║ █║
█╔═█╗█║ █╔╝█║█╔═█║█║ █║╚═╝█║ █╔╝█║█╔╝ █║ █╔═█║█║ █║█║ █║
███╔╝█║ ╚╝ █║█║ █║███╔╝ █║ ╚╝ █║███╗ █║ █║ █║╚██╔╝███╔╝
╚══╝ ╚╝ ╚╝╚╝ ╚╝╚══╝ ╚╝ ╚╝╚══╝ ╚╝ ╚╝ ╚╝ ╚═╝ ╚══╝
`;
const FLATTENER_MEDIUM = `
███╗█╗ ██╗ █████╗█████╗███╗█╗ █╗███╗███╗
█╔═╝█║ █╔═█╗╚═█╔═╝╚═█╔═╝█╔═╝██╗ █║█╔═╝█╔═█╗
██╗ █║ ████║ █║ █║ ██╗ █╔█╗█║██╗ ███╔╝
█╔╝ █║ █╔═█║ █║ █║ █╔╝ █║ ██║█╔╝ █╔═█╗
█║ ███║█║ █║ █║ █║ ███╗█║ █║███╗█║ █║
╚╝ ╚══╝╚╝ ╚╝ ╚╝ ╚╝ ╚══╝╚╝ ╚╝╚══╝╚╝ ╚╝
`;
const INSTALLER_MEDIUM = `
█╗█╗ █╗████╗█████╗ ██╗ █╗ █╗ ███╗███╗
█║██╗ █║█╔══╝╚═█╔═╝█╔═█╗█║ █║ █╔═╝█╔═█╗
█║█╔█╗█║████╗ █║ ████║█║ █║ ██╗ ███╔╝
█║█║ ██║╚══█║ █║ █╔═█║█║ █║ █╔╝ █╔═█╗
█║█║ █║████║ █║ █║ █║███╗███╗███╗█║ █║
╚╝╚╝ ╚╝╚═══╝ ╚╝ ╚╝ ╚╝╚══╝╚══╝╚══╝╚╝ ╚╝
`;
// Small: rounded box with bold rule
// Width: 30 columns total (28 inner)
const BMAD_SMALL = `
╭──────────────────────────╮
│ BMAD-METHOD │
╰──────────────────────────╯
`;
const FLATTENER_SMALL = `
╭──────────────────────────╮
│ FLATTENER │
╰──────────────────────────╯
`;
const INSTALLER_SMALL = `
╭──────────────────────────╮
│ INSTALLER │
╰──────────────────────────╯
`;
// Tiny (compact brackets)
const BMAD_TINY = `[ BMAD-METHOD ]`;
const FLATTENER_TINY = `[ FLATTENER ]`;
const INSTALLER_TINY = `[ INSTALLER ]`;
module.exports = {
BMAD_TITLE,
FLATTENER_TITLE,
INSTALLER_TITLE,
BMAD_LARGE,
FLATTENER_LARGE,
INSTALLER_LARGE,
BMAD_MEDIUM,
FLATTENER_MEDIUM,
INSTALLER_MEDIUM,
BMAD_SMALL,
FLATTENER_SMALL,
INSTALLER_SMALL,
BMAD_TINY,
FLATTENER_TINY,
INSTALLER_TINY,
};