Feat/flattener-tool (#337)
* This PR introduces a powerful new Codebase Flattener Tool that aggregates entire codebases into AI-optimized XML format, making it easy to share project context with AI assistants for analysis, debugging, and development assistance.

  Features:

  - AI-Optimized XML Output: Generates clean, structured XML specifically designed for AI model consumption
  - Smart File Discovery: Recursive file scanning with intelligent filtering using glob patterns
  - Binary File Detection: Automatically identifies and excludes binary files, focusing on source code
  - Progress Tracking: Real-time progress indicators with comprehensive completion statistics
  - Flexible Output: Customizable output file location and naming via CLI arguments
  - Gitignore Integration: Automatically respects .gitignore patterns to exclude unnecessary files
  - CDATA Handling: Proper XML CDATA sections with escape sequence handling for `]]>` patterns
  - Content Indentation: XML formatting with properly indented file content (4-space indentation)
  - Error Handling: Robust error handling with detailed logging for problematic files
  - Hierarchical Formatting: Clean XML structure with proper indentation and formatting
  - File Content Preservation: Maintains original file formatting within indented CDATA sections
  - Exclusion Logic: Prevents self-inclusion of output files (`flattened-codebase.xml`, `repomix-output.xml`)

  Changed files:

  - `tools/flattener/main.js` - Complete flattener implementation with CLI interface
  - `package.json` - Added new dependencies (glob, minimatch, fs-extra, commander, ora, chalk)
  - `package-lock.json` - Updated dependency tree
  - `.gitignore` - Added exclusions for flattener outputs
  - `README.md` - Comprehensive documentation with usage examples
  - `docs/bmad-workflow-guide.md` - Integration guidance
  - `tools/cli.js` - CLI integration
  - `.vscode/settings.json` - SonarLint configuration

  Usage:

  ```bash
  # creates flattened-codebase.xml in the current directory
  npm run flatten
  npm run flatten -- --output my-project.xml
  npm run flatten -- -o /path/to/output/codebase.xml
  ```

  The tool provides comprehensive completion summaries including:

  - File count and breakdown (text/binary/errors)
  - Source code size and generated XML size
  - Total lines of code and estimated token count
  - Processing progress and performance metrics

  Other changes:

  - Bug Fix: Corrected typo in exclusion patterns (`repromix-output.xml` → `repomix-output.xml`)
  - Performance: Efficient file processing with streaming and progress indicators
  - Reliability: Comprehensive error handling and validation
  - Maintainability: Clean, well-documented code with modular functions

  Use cases:

  - AI Integration: Share codebase context with AI assistants
  - Code Reviews: Streamlined code review process with complete project context
  - Documentation: Enhanced project documentation and analysis capabilities
  - Development Workflow: Improved development assistance and debugging support

  This tool significantly enhances the BMad-Method framework's AI integration capabilities, providing developers with a seamless way to share complete project context for enhanced AI-assisted development workflows.

* docs(bmad-core): update documentation for enhanced workflow and user guide

  - Fix typos and improve clarity in user guide
  - Add new enhanced development workflow documentation
  - Update brownfield workflow with flattened codebase instructions
  - Improve consistency in documentation formatting

* chore: remove unused files and configurations

  - Delete deprecated bmad workflow guide and roomodes file
  - Remove sonarlint project configuration
  - Downgrade ora dependency version
  - Remove jest test script

* Update package.json

  Removed jest as it is not needed.
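The `]]>` handling called out above uses the standard CDATA-splitting trick; a minimal sketch of the technique (the function name is hypothetical, not the tool's actual code):

```javascript
// Wrap file content in an XML CDATA section. A literal "]]>" inside the
// content would terminate the section early, so split it across two CDATA
// sections: "]]>" becomes "]]" + "]]>" + "<![CDATA[" + ">".
// Hypothetical sketch of the technique described in the PR.
function wrapInCdata(content) {
  const escaped = content.split("]]>").join("]]]]><![CDATA[>");
  return `<![CDATA[${escaped}]]>`;
}
```

An XML parser reading the result reassembles the original text: the first section ends with `]]` and the next section begins with `>`.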
* Update working-in-the-brownfield.md

  Added documentation for sharding docs.

* perf(flattener): improve memory efficiency by streaming xml output

  - Replace in-memory XML generation with streaming approach
  - Add comprehensive common ignore patterns list
  - Update statistics calculation to use file size instead of content length

* fix/chore: Update console.log for user-guide.md install path. Cleaned up config files/folders and updated .gitignore (#347)

* fix: Update console.log for user-guide.md install path

  Changed "IMPORTANT: Please read the user guide installed at docs/user-guilde.md" to "IMPORTANT: Please read the user guide installed at .bmad-core/user-guide.md".

  WHY: the actual install location of user-guide.md is the .bmad-core directory.

* chore: remove formatting configs and clean up gitignore

  - Delete husky pre-commit hook and prettier config files
  - Remove VS Code chat/copilot settings
  - Reorganize and clean up gitignore entries

* feat: Overhaul and Enhance 2D Unity Game Dev Expansion Pack (#350)

* Updated game-sm agent to match the new core framework patterns

* feat: Created a more comprehensive game story matching the new format system

* feat: Added game-specific course-correct task

* feat: Updated dod-checklist to match the new DoD format

* feat: Added new Architect agent for appropriate architecture doc creation and design

* feat: Overhaul of game-architecture-tmpl template

* feat: Updated the rest of the templates besides level, which doesn't really need it

* feat: Finished extended architecture documentation needed for new game story tasks

* feat: Updated game Developer to new format

* feat: Updated last agent to new format and updated bmad-kb. I did my best with bmad-kb, but I'm not sure its usage in the expansion pack is valid; the AI generated more of the file than I did. I made sure to include it due to the new core-config file.

* feat: Finished updating designer agent to new format and cleaned up template linting errors

* Built dist for web bundle

* Increased expansion pack minor version number

* Updated architect and design for built-in sharding

* chore: bump bmad-2d-unity-game-dev version (minor)

* Updated config.yaml for game-specific pieces to supplement core-config.yaml

* Updated game-core-config and epic processing for game story and game design. Initial implementation was far too generic.

* chore: bump bmad-2d-unity-game-dev version (patch)

* feat: Fixed issue with multi-configs being needed. chore: bump bmad-2d-unity-game-dev version (patch)

* chore: Built web-bundle

* feat: Added the ability to specify the Unity editor install location. chore: bump bmad-2d-unity-game-dev version (patch)

* feat: core-config must be in two places to support inherited tasks at this time, so added instructions to copy and create one in the expansion pack folder as well. chore: bump bmad-2d-unity-game-dev version (patch)

* This PR introduces a powerful new Codebase Flattener Tool that aggregates entire codebases into AI-optimized XML format, making it easy to share project context with AI assistants for analysis, debugging, and development assistance.
* docs: update command names and agent references in documentation

  - Change `*create` to `*draft` in workflow guide
  - Update PM agent commands to use consistent naming
  - Replace `analyst` references with `architect`
  - Fix command examples to match new naming conventions

---------

Co-authored-by: PinkyD <paulbeanjr@gmail.com>
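The `.gitignore` integration described in the flattener commits is glob matching against relative paths (the PR adds `glob` and `minimatch` for this); a dependency-free approximation of the idea, as a simplified sketch rather than the tool's implementation:

```javascript
// Convert a simple gitignore-style glob ("*.log", "node_modules/") into a
// RegExp and test relative paths against it. Simplified illustration; the
// real tool uses the minimatch package and fuller gitignore semantics.
function globToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*/g, "[^/]*") // "*" matches within one path segment
    .replace(/\?/g, "[^/]"); // "?" matches a single character
  // A trailing "/" marks a directory: ignore everything beneath it.
  return pattern.endsWith("/")
    ? new RegExp("^" + escaped + ".*")
    : new RegExp("(^|/)" + escaped + "$");
}

function isIgnored(relPath, patterns) {
  return patterns.some((p) => globToRegExp(p).test(relPath));
}
```

The flattener applies this kind of check during file discovery, alongside its built-in list of common ignore patterns and the self-exclusion of its own output files.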
README.md
@@ -110,6 +110,46 @@ npm run install:bmad # build and install all to a destination folder

BMad's natural language framework works in ANY domain. Expansion packs provide specialized AI agents for creative writing, business strategy, health & wellness, education, and more. Also expansion packs can expand the core BMad-Method with specific functionality that is not generic for all cases. [See the Expansion Packs Guide](docs/expansion-packs.md) and learn to create your own!

## Codebase Flattener Tool

The BMad-Method includes a powerful codebase flattener tool designed to prepare your project files for AI model consumption. This tool aggregates your entire codebase into a single XML file, making it easy to share your project context with AI assistants for analysis, debugging, or development assistance.

### Features

- **AI-Optimized Output**: Generates clean XML format specifically designed for AI model consumption
- **Smart Filtering**: Automatically respects `.gitignore` patterns to exclude unnecessary files
- **Binary File Detection**: Intelligently identifies and excludes binary files, focusing on source code
- **Progress Tracking**: Real-time progress indicators and comprehensive completion statistics
- **Flexible Output**: Customizable output file location and naming

### Usage

```bash
# Basic usage - creates flattened-codebase.xml in current directory
npm run flatten

# Specify custom output file
npm run flatten -- --output my-project.xml
npm run flatten -- -o /path/to/output/codebase.xml
```

### Example Output

The tool will display progress and provide a comprehensive summary:

```
📊 Completion Summary:
✅ Successfully processed 156 files into flattened-codebase.xml
📁 Output file: /path/to/your/project/flattened-codebase.xml
📏 Total source size: 2.3 MB
📄 Generated XML size: 2.1 MB
📝 Total lines of code: 15,847
🔢 Estimated tokens: 542,891
📊 File breakdown: 142 text, 14 binary, 0 errors
```

The generated XML file contains all your project's source code in a structured format that AI models can easily parse and understand, making it perfect for code reviews, architecture discussions, or getting AI assistance with your BMad-Method projects.

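The "Estimated tokens" line in the example summary above has to be a heuristic, since real tokenization is model-specific; a common rule of thumb is roughly four characters per token (an assumption — the tool's actual estimator may differ):

```javascript
// Rough LLM token estimate: ~4 characters per token for typical English
// and source text. This ratio is an assumption; the flattener's actual
// estimator may use a different heuristic.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}
```

For the summary shown above, a 2.3 MB source tree (roughly 2.4 million characters) lands in the same ballpark as the reported ~542k tokens.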
## Documentation & Resources
### Essential Guides
@@ -48,10 +48,12 @@ persona:

- Proactive risk identification
- Strategic thinking & outcome-oriented
# All commands require * prefix when used (e.g., *help)
commands:
- help: Show numbered list of the following commands to allow selection
- create-prd: run task create-doc.md with template prd-tmpl.yaml
- create-brownfield-prd: run task create-doc.md with template brownfield-prd-tmpl.yaml
+- create-brownfield-epic: run task brownfield-create-epic.md
+- create-brownfield-story: run task brownfield-create-story.md
- create-epic: Create epic for brownfield projects (task brownfield-create-epic)
- create-story: Create user story from requirements (task brownfield-create-story)
- doc-out: Output full document to current destination file
bmad-core/bmad-core/user-guide.md (new file)
bmad-core/enhanced-ide-development-workflow.md (new file, +43 lines)
@@ -0,0 +1,43 @@

# Enhanced Development Workflow

This is a simple step-by-step guide to help you efficiently manage your development workflow using the BMad Method. Refer to the **[<ins>User Guide</ins>](user-guide.md)** for any scenario that is not covered here.

## Create new Branch

1. **Start new branch**

## Story Creation (Scrum Master)

1. **Start new chat/conversation**
2. **Load SM agent**
3. **Execute**: `*draft` (runs create-next-story task)
4. **Review generated story** in `docs/stories/`
5. **Update status**: Change from "Draft" to "Approved"

## Story Implementation (Developer)

1. **Start new chat/conversation**
2. **Load Dev agent**
3. **Execute**: `*develop-story {selected-story}` (runs execute-checklist task)
4. **Review generated report** in `{selected-story}`

## Story Review (Quality Assurance)

1. **Start new chat/conversation**
2. **Load QA agent**
3. **Execute**: `*review {selected-story}` (runs review-story task)
4. **Review generated report** in `{selected-story}`

## Commit Changes and Push

1. **Commit changes**
2. **Push to remote**

## Repeat Until Complete

- **SM**: Create next story → Review → Approve
- **Dev**: Implement story → Complete → Mark Ready for Review
- **QA**: Review story → Mark done
- **Commit**: All changes
- **Push**: To remote
- **Continue**: Until all features implemented
@@ -1,12 +1,12 @@

# BMad-Method BMAd Code User Guide

-This guide will help you understand and effectively use the BMad Method for agile ai driven planning and development.
+This guide will help you understand and effectively use the BMad Method for agile AI driven planning and development.

## The BMad Plan and Execute Workflow

-First, here is the full standard Greenfield Planning + Execution Workflow. Brownfield is very similar, but its suggested to understand this greenfield first, even if on a simple project before tackling a brownfield project. The BMad Method needs to be installed to the root of your new project folder. For the planning phase, you can optionally perform it with powerful web agents, potentially resulting in higher quality results at a fraction of the cost it would take to complete if providing your own API key or credits in some Agentic tools. For planning, powerful thinking models and larger context - along with working as a partner with the agents will net the best results.
+First, here is the full standard Greenfield Planning + Execution Workflow. Brownfield is very similar, but it's suggested to understand this greenfield first, even if on a simple project before tackling a brownfield project. The BMad Method needs to be installed to the root of your new project folder. For the planning phase, you can optionally perform it with powerful web agents, potentially resulting in higher quality results at a fraction of the cost it would take to complete if providing your own API key or credits in some Agentic tools. For planning, powerful thinking models and larger context - along with working as a partner with the agents will net the best results.

-If you are going to use the BMad Method with a Brownfield project (an existing project), review [Working in the Brownfield](./working-in-the-brownfield.md)
+If you are going to use the BMad Method with a Brownfield project (an existing project), review **[Working in the Brownfield](./working-in-the-brownfield.md)**.

If you do not see the diagrams that following rendering, you can install Markdown All in One along with the Markdown Preview Mermaid Support plugins to VSCode (or one of the forked clones). With these plugin's, if you right click on the tab when open, there should be a Open Preview option, or check the IDE documentation.
@@ -132,7 +132,7 @@ graph TD

If you want to do the planning in the Web with Claude (Sonnet 4 or Opus), Gemini Gem (2.5 Pro), or Custom GPT's:

1. Navigate to `dist/teams/`
-2. Copy `team-fullstack.txt` content
+2. Copy `team-fullstack.txt`
3. Create new Gemini Gem or CustomGPT
4. Upload file with instructions: "Your critical operating instructions are attached, do not break character as directed"
5. Type `/help` to see available commands
@@ -152,7 +152,7 @@ There are two bmad agents - in the future they will be consolidated into the sin

This agent can do any task or command that all other agents can do, aside from actual story implementation. Additionally, this agent can help explain the BMad Method when in the web by accessing the knowledge base and explaining anything to you about the process.

-If you dont want to bother switching between different agents aside from the dev, this is the agent for you.
+If you don't want to bother switching between different agents aside from the dev, this is the agent for you. Just remember that as the context grows, the performance of the agent degrades, therefore it is important to instruct the agent to compact the conversation and start a new conversation with the compacted conversation as the initial message. Do this often, preferably after each story is implemented.

### BMad-Orchestrator
@@ -210,6 +210,7 @@ dependencies:

- **Agent Selection**: Use appropriate agent for task
- **Iterative Development**: Work in small, focused tasks
- **File Organization**: Maintain clean project structure
+- **Commit Regularly**: Save your work frequently

## Technical Preferences System
@@ -236,7 +237,7 @@ devLoadAlwaysFiles:

You will want to verify from sharding your architecture that these documents exist, that they are as lean as possible, and contain exactly the information you want your dev agent to ALWAYS load into it's context. These are the rules the agent will follow.

-As your project grows and the code starts to build consistent patterns, coding standards should be reduced to just the items that the agent makes mistakes at still - must with the better models, they will look at surrounding code in files and not need a rule from that file to guide them.
+As your project grows and the code starts to build consistent patterns, coding standards should be reduced to include only the standards that the agent still makes mistakes with. The agent will look at surrounding code in files to infer the coding standards that are relevant to the current task.

## Getting Help
@@ -2,10 +2,10 @@

> **HIGHLY RECOMMENDED: Use Gemini Web or Gemini CLI for Brownfield Documentation Generation!**
>
-> Gemini Web's 1M+ token context window or Gemini CLI (when its working) can analyze your ENTIRE codebase or critical sections of it all at once (obviously within reason):
+> Gemini Web's 1M+ token context window or Gemini CLI (when it's working) can analyze your ENTIRE codebase, or critical sections of it, all at once (obviously within reason):
>
> - Upload via GitHub URL or use gemini cli in the project folder
-> - If in the web: Upload up to 1000 files or the zipped project or just give it the github url
+> - If working in the web: use the flattener-tool to flatten your project into a single file, then upload that file to your web agent.

## What is Brownfield Development?
@@ -22,10 +22,13 @@ Brownfield development refers to adding features, fixing bugs, or modernizing ex

## When NOT to use a Brownfield Flow

-If you have just completed an MVP with BMad, and you want to continue with post-MVP, its easier to just talk to the PM and ask him to work with you to create a new epic to add into the PRD, shard out the epic, update any architecture documents with the architect, and just go from there.
+If you have just completed an MVP with BMad, and you want to continue with post-MVP, it's easier to just talk to the PM and ask it to work with you to create a new epic to add into the PRD, shard out the epic, update any architecture documents with the architect, and just go from there.

## The Complete Brownfield Workflow

+1. **Follow the [<ins>User Guide - Installation</ins>](user-guide.md#installation) steps to setup your agent in the web.**
+2. **Generate a 'flattened' single file of your entire codebase** - run: `npm run flatten`

### Choose Your Approach

#### Approach A: PRD-First (Recommended if adding very large and complex new features, single or multiple epics or massive changes)
@@ -48,11 +51,11 @@ If you have just completed an MVP with BMad, and you want to continue with post-

#### Phase 1: Define Requirements First

-**In Gemini Web (with your codebase uploaded):**
+**In Gemini Web (with your flattened-codebase.xml uploaded):**

```bash
@pm
-*create-doc brownfield-prd
+*create-brownfield-prd
```

The PM will:
@@ -69,7 +72,7 @@ The PM will:

**Still in Gemini Web, now with PRD context:**

```bash
-@analyst
+@architect
*document-project
```
@@ -104,22 +107,21 @@ For example, if you say "Add payment processing to user service":

1. **Go to Gemini Web** (gemini.google.com)
2. **Upload your project**:
   - **Option A**: Paste your GitHub repository URL directly
-  - **Option B**: Upload up to 1000 files from your src/project folder
-  - **Option C**: Zip your project and upload the archive
+  - **Option B**: Upload your flattened-codebase.xml file
-3. **Load the analyst agent**: Upload `dist/agents/analyst.txt`
+3. **Load the analyst agent**: Upload `dist/agents/architect.txt`
4. **Run documentation**: Type `*document-project`

The analyst will generate comprehensive documentation of everything.

#### Phase 2: Plan Your Enhancement

-#### Option A: Full Brownfield Workflow (Recommended for Major Changes)
+##### Option A: Full Brownfield Workflow (Recommended for Major Changes)

**1. Create Brownfield PRD**:

```bash
@pm
-*create-doc brownfield-prd
+*create-brownfield-prd
```

The PM agent will:
@@ -146,7 +148,7 @@ The PM agent will:

```bash
@architect
-*create-doc brownfield-architecture
+*create-brownfield-architecture
```

The architect will:
@@ -157,13 +159,13 @@ The architect will:

- **Identify technical risks**
- **Define compatibility requirements**

-#### Option B: Quick Enhancement (For Focused Changes)
+##### Option B: Quick Enhancement (For Focused Changes)

**For Single Epic Without Full PRD**:

```bash
@pm
-*brownfield-create-epic
+*create-brownfield-epic
```

Use when:
@@ -177,7 +179,7 @@ Use when:

```bash
@pm
-*brownfield-create-story
+*create-brownfield-story
```

Use when:
@@ -191,7 +193,7 @@ Use when:

```bash
@po
-*execute-checklist po-master-checklist
+*execute-checklist-po
```

The PO ensures:
@@ -201,26 +203,27 @@ The PO ensures:

- Risk mitigation strategies in place
- Clear integration approach

-### Phase 4: Transition to Development
+### Phase 4: Save and Shard Documents

-Follow the enhanced IDE Development Workflow:
-
-1. **Ensure documents are in project**:
-   - Copy `docs/prd.md` (or brownfield-prd.md)
-   - Copy `docs/architecture.md` (or brownfield-architecture.md)
-2. **Shard documents**:
+1. Save your PRD and Architecture as:
+   docs/brownfield-prd.md
+   docs/brownfield-architecture.md
+2. Shard your docs:
+   In your IDE

```bash
@po
-# Ask to shard docs/prd.md
+shard docs/brownfield-prd.md
```

-3. **Development cycle**:
-   - **SM** creates stories with integration awareness
-   - **Dev** implements with existing code respect
-   - **QA** reviews for compatibility and improvements
+```bash
+@po
+shard docs/brownfield-architecture.md
+```
+
+### Phase 5: Transition to Development
+
+**Follow the [<ins>Enhanced IDE Development Workflow</ins>](enhanced-ide-development-workflow.md)**

## Brownfield Best Practices
@@ -287,7 +290,7 @@ Document:

### Scenario 3: Bug Fix in Complex System

1. Document relevant subsystems
-2. Use `brownfield-create-story` for focused fix
+2. Use `create-brownfield-story` for focused fix
3. Include regression test requirements
4. QA validates no side effects
```diff
@@ -310,7 +313,7 @@ Document:

 ### "Too much boilerplate for small changes"

-**Solution**: Use `brownfield-create-story` instead of full workflow
+**Solution**: Use `create-brownfield-story` instead of full workflow

 ### "Integration points unclear"
```
````diff
@@ -322,19 +325,19 @@ Document:

 ```bash
 # Document existing project
-@analyst → *document-project
+@architect → *document-project

 # Create enhancement PRD
-@pm → *create-doc brownfield-prd
+@pm → *create-brownfield-prd

 # Create architecture with integration focus
-@architect → *create-doc brownfield-architecture
+@architect → *create-brownfield-architecture

 # Quick epic creation
-@pm → *brownfield-create-epic
+@pm → *create-brownfield-epic

 # Single story creation
-@pm → *brownfield-create-story
+@pm → *create-brownfield-story
 ```

 ### Decision Tree
````
package.json

```diff
@@ -1,78 +1,81 @@
 {
   "name": "bmad-method",
   "version": "4.31.0",
   "description": "Breakthrough Method of Agile AI-driven Development",
   "main": "tools/cli.js",
   "bin": {
     "bmad": "tools/bmad-npx-wrapper.js",
     "bmad-method": "tools/bmad-npx-wrapper.js"
   },
   "scripts": {
     "build": "node tools/cli.js build",
     "build:agents": "node tools/cli.js build --agents-only",
     "build:teams": "node tools/cli.js build --teams-only",
     "list:agents": "node tools/cli.js list:agents",
     "validate": "node tools/cli.js validate",
+    "flatten": "node tools/flattener/main.js",
     "install:bmad": "node tools/installer/bin/bmad.js install",
     "format": "prettier --write \"**/*.md\"",
     "version:patch": "node tools/version-bump.js patch",
     "version:minor": "node tools/version-bump.js minor",
     "version:major": "node tools/version-bump.js major",
     "version:expansion": "node tools/bump-expansion-version.js",
     "version:expansion:set": "node tools/update-expansion-version.js",
     "version:all": "node tools/bump-all-versions.js",
     "version:all:minor": "node tools/bump-all-versions.js minor",
     "version:all:major": "node tools/bump-all-versions.js major",
     "version:all:patch": "node tools/bump-all-versions.js patch",
     "version:expansion:all": "node tools/bump-all-versions.js",
     "version:expansion:all:minor": "node tools/bump-all-versions.js minor",
     "version:expansion:all:major": "node tools/bump-all-versions.js major",
     "version:expansion:all:patch": "node tools/bump-all-versions.js patch",
     "release": "semantic-release",
     "release:test": "semantic-release --dry-run --no-ci || echo 'Config test complete - authentication errors are expected locally'",
     "prepare": "husky"
   },
   "dependencies": {
     "@kayvan/markdown-tree-parser": "^1.5.0",
     "bmad-method": "^4.30.3",
     "chalk": "^4.1.2",
     "commander": "^14.0.0",
     "fs-extra": "^11.3.0",
     "glob": "^11.0.3",
     "inquirer": "^8.2.6",
     "js-yaml": "^4.1.0",
+    "minimatch": "^10.0.3",
     "ora": "^5.4.1"
   },
   "keywords": [
     "agile",
     "ai",
     "orchestrator",
     "development",
     "methodology",
     "agents",
     "bmad"
   ],
   "author": "Brian (BMad) Madison",
   "license": "MIT",
   "repository": {
     "type": "git",
     "url": "git+https://github.com/bmadcode/BMAD-METHOD.git"
   },
   "engines": {
     "node": ">=20.0.0"
   },
   "devDependencies": {
     "@semantic-release/changelog": "^6.0.3",
     "@semantic-release/git": "^10.0.1",
     "husky": "^9.1.7",
+    "jest": "^30.0.4",
     "lint-staged": "^16.1.1",
     "prettier": "^3.5.3",
     "semantic-release": "^22.0.0",
     "yaml-lint": "^1.7.0"
   },
   "lint-staged": {
     "**/*.md": [
       "prettier --write"
     ]
   }
 }
```
tools/cli.js

```diff
@@ -149,4 +149,13 @@ program
   });
 });

+program
+  .command('flatten')
+  .description('Flatten codebase to XML format')
+  .option('-o, --output <path>', 'Output file path', 'flattened-codebase.xml')
+  .action(async (options) => {
+    const flattener = require('./flattener/main');
+    await flattener.parseAsync(['flatten', '--output', options.output], { from: 'user' });
+  });
+
 program.parse();
```
tools/flattener/main.js (new file, 559 lines)

```javascript
#!/usr/bin/env node

const { Command } = require('commander');
const fs = require('fs-extra');
const path = require('node:path');
const { glob } = require('glob');
const { minimatch } = require('minimatch');
```
```javascript
/**
 * Recursively discover all files in a directory
 * @param {string} rootDir - The root directory to scan
 * @returns {Promise<string[]>} Array of file paths
 */
async function discoverFiles(rootDir) {
  try {
    const gitignorePath = path.join(rootDir, '.gitignore');
    const gitignorePatterns = await parseGitignore(gitignorePath);

    // Common gitignore patterns that should always be ignored
    const commonIgnorePatterns = [
      // Version control
      '.git/**',
      '.svn/**',
      '.hg/**',
      '.bzr/**',

      // Dependencies
      'node_modules/**',
      'bower_components/**',
      'vendor/**',
      'packages/**',

      // Build outputs
      'build/**',
      'dist/**',
      'out/**',
      'target/**',
      'bin/**',
      'obj/**',
      'release/**',
      'debug/**',

      // Environment and config
      '.env',
      '.env.*',
      '*.env',
      '.config',

      // Logs
      'logs/**',
      '*.log',
      'npm-debug.log*',
      'yarn-debug.log*',
      'yarn-error.log*',
      'lerna-debug.log*',

      // Coverage and testing
      'coverage/**',
      '.nyc_output/**',
      '.coverage/**',
      'test-results/**',
      'junit.xml',

      // Cache directories
      '.cache/**',
      '.tmp/**',
      '.temp/**',
      'tmp/**',
      'temp/**',
      '.sass-cache/**',
      '.eslintcache',
      '.stylelintcache',

      // OS generated files
      '.DS_Store',
      '.DS_Store?',
      '._*',
      '.Spotlight-V100',
      '.Trashes',
      'ehthumbs.db',
      'Thumbs.db',
      'desktop.ini',

      // IDE and editor files
      '.vscode/**',
      '.idea/**',
      '*.swp',
      '*.swo',
      '*~',
      '.project',
      '.classpath',
      '.settings/**',
      '*.sublime-project',
      '*.sublime-workspace',

      // Package manager files
      'package-lock.json',
      'yarn.lock',
      'pnpm-lock.yaml',
      'composer.lock',
      'Pipfile.lock',

      // Runtime and compiled files
      '*.pyc',
      '*.pyo',
      '*.pyd',
      '__pycache__/**',
      '*.class',
      '*.jar',
      '*.war',
      '*.ear',
      '*.o',
      '*.so',
      '*.dll',
      '*.exe',

      // Documentation build
      '_site/**',
      '.jekyll-cache/**',
      '.jekyll-metadata',

      // Flattener specific outputs
      'flattened-codebase.xml',
      'repomix-output.xml'
    ];

    const combinedIgnores = [
      ...gitignorePatterns,
      ...commonIgnorePatterns
    ];

    // Use glob to recursively find all files, excluding common ignore patterns
    const files = await glob('**/*', {
      cwd: rootDir,
      nodir: true,   // Only files, not directories
      dot: true,     // Include hidden files
      follow: false, // Don't follow symbolic links
      ignore: combinedIgnores
    });

    return files.map(file => path.resolve(rootDir, file));
  } catch (error) {
    console.error('Error discovering files:', error.message);
    return [];
  }
}
```
```javascript
/**
 * Parse .gitignore file and return ignore patterns
 * @param {string} gitignorePath - Path to .gitignore file
 * @returns {Promise<string[]>} Array of ignore patterns
 */
async function parseGitignore(gitignorePath) {
  try {
    if (!await fs.pathExists(gitignorePath)) {
      return [];
    }

    const content = await fs.readFile(gitignorePath, 'utf8');
    return content
      .split('\n')
      .map(line => line.trim())
      .filter(line => line && !line.startsWith('#')) // Remove empty lines and comments
      .map(pattern => {
        // Convert gitignore patterns to glob patterns
        if (pattern.endsWith('/')) {
          return pattern + '**';
        }
        return pattern;
      });
  } catch (error) {
    console.error('Error parsing .gitignore:', error.message);
    return [];
  }
}
```
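The trailing-slash conversion can be checked in isolation. This standalone sketch (not part of the PR's file) repeats the same transformation on an illustrative `.gitignore` body:

```javascript
// Same transformation parseGitignore applies per line: trim, drop blanks and
// comments, and turn directory patterns ("dist/") into recursive globs ("dist/**").
function toGlobPatterns(gitignoreText) {
  return gitignoreText
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith('#'))
    .map((pattern) => (pattern.endsWith('/') ? pattern + '**' : pattern));
}

const sample = '# build artifacts\ndist/\n*.log\n\n!keep.log\n';
console.log(toGlobPatterns(sample)); // [ 'dist/**', '*.log', '!keep.log' ]
```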
```javascript
/**
 * Check if a file is binary using extension and content heuristics
 * @param {string} filePath - Path to the file
 * @returns {Promise<boolean>} True if file is binary
 */
async function isBinaryFile(filePath) {
  try {
    // First check by file extension
    const binaryExtensions = [
      '.jpg', '.jpeg', '.png', '.gif', '.bmp', '.ico', '.svg',
      '.pdf', '.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx',
      '.zip', '.tar', '.gz', '.rar', '.7z',
      '.exe', '.dll', '.so', '.dylib',
      '.mp3', '.mp4', '.avi', '.mov', '.wav',
      '.ttf', '.otf', '.woff', '.woff2',
      '.bin', '.dat', '.db', '.sqlite'
    ];

    const ext = path.extname(filePath).toLowerCase();
    if (binaryExtensions.includes(ext)) {
      return true;
    }

    // For files without clear extensions, try to read a small sample
    const stats = await fs.stat(filePath);
    if (stats.size === 0) {
      return false; // Empty files are considered text
    }

    // Read first 1024 bytes to check for null bytes
    const sampleSize = Math.min(1024, stats.size);
    const buffer = await fs.readFile(filePath, { encoding: null, flag: 'r' });
    const sample = buffer.slice(0, sampleSize);
    // If we find null bytes, it's likely binary
    return sample.includes(0);
  } catch (error) {
    console.warn(`Warning: Could not determine if file is binary: ${filePath} - ${error.message}`);
    return false; // Default to text if we can't determine
  }
}
```
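A standalone sketch of the same null-byte heuristic, run on in-memory buffers rather than files (the sample bytes are illustrative):

```javascript
// Mirror of the sampling heuristic: a file is treated as binary
// when its first bytes contain a null (0x00) byte.
function looksBinary(buffer, sampleSize = 1024) {
  return buffer.slice(0, Math.min(sampleSize, buffer.length)).includes(0);
}

const textSample = Buffer.from('const x = 1;\n', 'utf8');
const binarySample = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x00, 0x1a]); // bytes with an embedded null
console.log(looksBinary(textSample));   // false
console.log(looksBinary(binarySample)); // true
```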
```javascript
/**
 * Read and aggregate content from text files
 * @param {string[]} files - Array of file paths
 * @param {string} rootDir - The root directory
 * @param {Object} spinner - Optional spinner instance for progress display
 * @returns {Promise<Object>} Object containing file contents and metadata
 */
async function aggregateFileContents(files, rootDir, spinner = null) {
  const results = {
    textFiles: [],
    binaryFiles: [],
    errors: [],
    totalFiles: files.length,
    processedFiles: 0
  };

  for (const filePath of files) {
    try {
      const relativePath = path.relative(rootDir, filePath);

      // Update progress indicator
      if (spinner) {
        spinner.text = `Processing file ${results.processedFiles + 1}/${results.totalFiles}: ${relativePath}`;
      }

      const isBinary = await isBinaryFile(filePath);

      if (isBinary) {
        results.binaryFiles.push({
          path: relativePath,
          absolutePath: filePath,
          size: (await fs.stat(filePath)).size
        });
      } else {
        // Read text file content
        const content = await fs.readFile(filePath, 'utf8');
        results.textFiles.push({
          path: relativePath,
          absolutePath: filePath,
          content: content,
          size: content.length,
          lines: content.split('\n').length
        });
      }

      results.processedFiles++;
    } catch (error) {
      const relativePath = path.relative(rootDir, filePath);
      const errorInfo = {
        path: relativePath,
        absolutePath: filePath,
        error: error.message
      };

      results.errors.push(errorInfo);

      // Log warning without interfering with spinner
      if (spinner) {
        spinner.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
      } else {
        console.warn(`Warning: Could not read file ${relativePath}: ${error.message}`);
      }

      results.processedFiles++;
    }
  }

  return results;
}
```
```javascript
/**
 * Generate XML output with aggregated file contents using streaming
 * @param {Object} aggregatedContent - The aggregated content object
 * @param {string} outputPath - The output file path
 * @returns {Promise<void>} Promise that resolves when writing is complete
 */
async function generateXMLOutput(aggregatedContent, outputPath) {
  const { textFiles } = aggregatedContent;

  // Create write stream for efficient memory usage
  const writeStream = fs.createWriteStream(outputPath, { encoding: 'utf8' });

  return new Promise((resolve, reject) => {
    writeStream.on('error', reject);
    writeStream.on('finish', resolve);

    // Write XML header
    writeStream.write('<?xml version="1.0" encoding="UTF-8"?>\n');
    writeStream.write('<files>\n');

    // Process files one by one to minimize memory usage
    let fileIndex = 0;

    const writeNextFile = () => {
      if (fileIndex >= textFiles.length) {
        // All files processed, close XML and stream
        writeStream.write('</files>\n');
        writeStream.end();
        return;
      }

      const file = textFiles[fileIndex];
      fileIndex++;

      // Write file opening tag
      writeStream.write(`  <file path="${escapeXml(file.path)}">`);

      // Use CDATA for code content, handling CDATA end sequences properly
      if (file.content?.trim()) {
        const indentedContent = indentFileContent(file.content);
        if (file.content.includes(']]>')) {
          // If content contains ]]>, split it and wrap each part in CDATA
          writeStream.write(splitAndWrapCDATA(indentedContent));
        } else {
          writeStream.write(`<![CDATA[\n${indentedContent}\n  ]]>`);
        }
      } else if (file.content) {
        // Handle empty or whitespace-only content
        const indentedContent = indentFileContent(file.content);
        writeStream.write(`<![CDATA[\n${indentedContent}\n  ]]>`);
      }

      // Write file closing tag
      writeStream.write('</file>\n');

      // Continue with next file on next tick to avoid stack overflow
      setImmediate(writeNextFile);
    };

    // Start processing files
    writeNextFile();
  });
}
```
```javascript
/**
 * Escape XML special characters for attributes
 * @param {string} str - String to escape
 * @returns {string} Escaped string
 */
function escapeXml(str) {
  if (typeof str !== 'string') {
    return String(str);
  }
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```
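Attribute escaping replaces the five XML special characters with entities (ampersand first, so later replacements are not double-escaped). A standalone re-implementation of the same substitutions behaves as:

```javascript
// The five XML special characters replaced for attribute values.
// Order matters: "&" must be escaped before the entities that contain it.
function escapeXmlAttr(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeXmlAttr('a<b & "c"')); // a&lt;b &amp; &quot;c&quot;
```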
```javascript
/**
 * Indent file content with 4 spaces for each line
 * @param {string} content - Content to indent
 * @returns {string} Indented content
 */
function indentFileContent(content) {
  if (typeof content !== 'string') {
    return String(content);
  }

  // Split content into lines and add 4 spaces of indentation to each line
  return content.split('\n').map(line => `    ${line}`).join('\n');
}
```
```javascript
/**
 * Split content containing ]]> and wrap each part in CDATA
 * @param {string} content - Content to process
 * @returns {string} Content with properly wrapped CDATA sections
 */
function splitAndWrapCDATA(content) {
  if (typeof content !== 'string') {
    return String(content);
  }

  // Replace ]]> with ]]]]><![CDATA[> to escape it within CDATA
  const escapedContent = content.replace(/]]>/g, ']]]]><![CDATA[>');
  return `<![CDATA[
${escapedContent}
  ]]>`;
}
```
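The `]]]]><![CDATA[>` replacement is the standard CDATA-splitting trick: the terminator `]]>` is broken after its second `]`, and the section is closed and reopened in between, so the literal sequence never appears intact inside a section. A minimal standalone version:

```javascript
// Splitting "]]>" so it can never terminate the CDATA section early:
// "]]>" becomes "]]" + "]]><![CDATA[" + ">".
function wrapCDATA(content) {
  return `<![CDATA[${content.replace(/]]>/g, ']]]]><![CDATA[>')}]]>`;
}

console.log(wrapCDATA('if (a[0]]> b) {}'));
// <![CDATA[if (a[0]]]]><![CDATA[> b) {}]]>
```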
```javascript
/**
 * Calculate statistics for the processed files
 * @param {Object} aggregatedContent - The aggregated content object
 * @param {number} xmlFileSize - The size of the generated XML file in bytes
 * @returns {Object} Statistics object
 */
function calculateStatistics(aggregatedContent, xmlFileSize) {
  const { textFiles, binaryFiles, errors } = aggregatedContent;

  // Calculate total file size in bytes
  const totalTextSize = textFiles.reduce((sum, file) => sum + file.size, 0);
  const totalBinarySize = binaryFiles.reduce((sum, file) => sum + file.size, 0);
  const totalSize = totalTextSize + totalBinarySize;

  // Calculate total lines of code
  const totalLines = textFiles.reduce((sum, file) => sum + file.lines, 0);

  // Estimate token count (rough approximation: 1 token ≈ 4 characters)
  const estimatedTokens = Math.ceil(xmlFileSize / 4);

  // Format file size
  const formatSize = (bytes) => {
    if (bytes < 1024) return `${bytes} B`;
    if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
    return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
  };

  return {
    totalFiles: textFiles.length + binaryFiles.length,
    textFiles: textFiles.length,
    binaryFiles: binaryFiles.length,
    errorFiles: errors.length,
    totalSize: formatSize(totalSize),
    xmlSize: formatSize(xmlFileSize),
    totalLines,
    estimatedTokens: estimatedTokens.toLocaleString()
  };
}
```
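The token estimate is a rough rule of thumb (about 4 characters per token; real tokenizers vary by model). The arithmetic, plus the size formatting, in a standalone sketch:

```javascript
// Rough token estimate: ~4 characters (bytes of UTF-8 text) per token.
const estimateTokens = (bytes) => Math.ceil(bytes / 4);

// Human-readable sizes with binary (1024) units.
const formatSize = (bytes) => {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
};

console.log(estimateTokens(10000)); // 2500
console.log(formatSize(1536));      // 1.5 KB
```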
```javascript
/**
 * Filter files based on .gitignore patterns
 * @param {string[]} files - Array of file paths
 * @param {string} rootDir - The root directory
 * @returns {Promise<string[]>} Filtered array of file paths
 */
async function filterFiles(files, rootDir) {
  const gitignorePath = path.join(rootDir, '.gitignore');
  const ignorePatterns = await parseGitignore(gitignorePath);

  if (ignorePatterns.length === 0) {
    return files;
  }

  // Convert absolute paths to relative for pattern matching
  const relativeFiles = files.map(file => path.relative(rootDir, file));

  // Separate positive and negative patterns
  const positivePatterns = ignorePatterns.filter(p => !p.startsWith('!'));
  const negativePatterns = ignorePatterns.filter(p => p.startsWith('!')).map(p => p.slice(1));

  // Filter out files that match ignore patterns
  const filteredRelative = [];

  for (const file of relativeFiles) {
    let shouldIgnore = false;

    // First check positive patterns (ignore these files)
    for (const pattern of positivePatterns) {
      if (minimatch(file, pattern)) {
        shouldIgnore = true;
        break;
      }
    }

    // Then check negative patterns (don't ignore these files even if they match positive patterns)
    if (shouldIgnore) {
      for (const pattern of negativePatterns) {
        if (minimatch(file, pattern)) {
          shouldIgnore = false;
          break;
        }
      }
    }

    if (!shouldIgnore) {
      filteredRelative.push(file);
    }
  }

  // Convert back to absolute paths
  return filteredRelative.map(file => path.resolve(rootDir, file));
}
```
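The positive/negative split mirrors gitignore semantics: a `!pattern` re-includes a file that an earlier pattern excluded. A toy standalone version — note it substitutes a deliberately simplified matcher for `minimatch`, handling only `*.ext` globs and literal names:

```javascript
// Stand-in matcher for illustration only (the real code uses minimatch):
// supports "*.ext" suffix globs and exact literal names.
const matches = (file, pattern) =>
  pattern.startsWith('*.') ? file.endsWith(pattern.slice(1)) : file === pattern;

// gitignore-style decision: drop matches of positive patterns unless
// a negative ("!") pattern re-includes them.
function keepFile(file, patterns) {
  const positive = patterns.filter((p) => !p.startsWith('!'));
  const negative = patterns.filter((p) => p.startsWith('!')).map((p) => p.slice(1));
  const ignored = positive.some((p) => matches(file, p));
  return !ignored || negative.some((p) => matches(file, p));
}

const patterns = ['*.log', '!important.log'];
console.log(keepFile('debug.log', patterns));     // false (ignored)
console.log(keepFile('important.log', patterns)); // true (re-included)
console.log(keepFile('src/app.js', patterns));    // true
```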
```javascript
const program = new Command();

program
  .name('bmad-flatten')
  .description('BMad-Method codebase flattener tool')
  .version('1.0.0')
  .option('-o, --output <path>', 'Output file path', 'flattened-codebase.xml')
  .action(async (options) => {
    console.log(`Flattening codebase to: ${options.output}`);

    try {
      // Import ora dynamically
      const { default: ora } = await import('ora');

      // Start file discovery with spinner
      const discoverySpinner = ora('🔍 Discovering files...').start();
      const files = await discoverFiles(process.cwd());
      const filteredFiles = await filterFiles(files, process.cwd());
      discoverySpinner.succeed(`📁 Found ${filteredFiles.length} files to include`);

      // Process files with progress tracking
      console.log('Reading file contents');
      const processingSpinner = ora('📄 Processing files...').start();
      const aggregatedContent = await aggregateFileContents(filteredFiles, process.cwd(), processingSpinner);
      processingSpinner.succeed(`✅ Processed ${aggregatedContent.processedFiles}/${filteredFiles.length} files`);

      // Log processing results for test validation
      console.log(`Processed ${aggregatedContent.processedFiles}/${filteredFiles.length} files`);
      if (aggregatedContent.errors.length > 0) {
        console.log(`Errors: ${aggregatedContent.errors.length}`);
      }
      console.log(`Text files: ${aggregatedContent.textFiles.length}`);
      if (aggregatedContent.binaryFiles.length > 0) {
        console.log(`Binary files: ${aggregatedContent.binaryFiles.length}`);
      }

      // Generate XML output using streaming
      const xmlSpinner = ora('🔧 Generating XML output...').start();
      await generateXMLOutput(aggregatedContent, options.output);
      xmlSpinner.succeed('📝 XML generation completed');

      // Calculate and display statistics
      const outputStats = await fs.stat(options.output);
      const stats = calculateStatistics(aggregatedContent, outputStats.size);

      // Display completion summary
      console.log('\n📊 Completion Summary:');
      console.log(`✅ Successfully processed ${filteredFiles.length} files into ${options.output}`);
      console.log(`📁 Output file: ${path.resolve(options.output)}`);
      console.log(`📏 Total source size: ${stats.totalSize}`);
      console.log(`📄 Generated XML size: ${stats.xmlSize}`);
      console.log(`📝 Total lines of code: ${stats.totalLines.toLocaleString()}`);
      console.log(`🔢 Estimated tokens: ${stats.estimatedTokens}`);
      console.log(`📊 File breakdown: ${stats.textFiles} text, ${stats.binaryFiles} binary, ${stats.errorFiles} errors`);

    } catch (error) {
      console.error('❌ Critical error:', error.message);
      console.error('An unexpected error occurred.');
      process.exit(1);
    }
  });

if (require.main === module) {
  program.parse();
}

module.exports = program;
```