moved bmad-core to a dot folder so that when it is added to a project it is clear it's not part of the project it is added to

This commit is contained in:
Brian Madison
2025-06-13 19:11:17 -05:00
parent 03241a73d6
commit 7c71e1f815
86 changed files with 87 additions and 28247 deletions


@@ -1,92 +0,0 @@
# Advanced Elicitation Task
## Purpose
- Provide optional reflective and brainstorming actions to enhance content quality
- Enable deeper exploration of ideas through structured elicitation techniques
- Support iterative refinement through multiple analytical perspectives
## Task Instructions
### 1. Section Context and Review
[[LLM: When invoked after outputting a section:
1. First, provide a brief 1-2 sentence summary of what the user should look for in the section just presented (e.g., "Please review the technology choices for completeness and alignment with your project needs. Pay special attention to version numbers and any missing categories.")
2. If the section contains Mermaid diagrams, explain each diagram briefly before offering elicitation options (e.g., "The component diagram shows the main system modules and their interactions. Notice how the API Gateway routes requests to different services.")
3. If the section contains multiple distinct items (like multiple components, multiple patterns, etc.), inform the user they can apply elicitation actions to:
- The entire section as a whole
- Individual items within the section (specify which item when selecting an action)
4. Then present the action list as specified below.]]
### 2. Ask for Review and Present Action List
[[LLM: Ask the user to review the drafted section. In the SAME message, inform them that they can suggest additions, removals, or modifications, OR they can select an action by number from the 'Advanced Reflective, Elicitation & Brainstorming Actions'. If there are multiple items in the section, mention they can specify which item(s) to apply the action to. Then, present ONLY the numbered list (0-9) of these actions. Conclude by stating that selecting 9 will proceed to the next section. Await user selection. If an elicitation action (0-8) is chosen, execute it and then re-offer this combined review/elicitation choice. If option 9 is chosen, or if the user provides direct feedback, proceed accordingly.]]
**Present the numbered list (0-9) with this exact format:**
```text
**Advanced Reflective, Elicitation & Brainstorming Actions**
Choose an action (0-9 - 9 to bypass - HELP for explanation of these options):
0. Expand or Contract for Audience
1. Explain Reasoning (CoT Step-by-Step)
2. Critique and Refine
3. Analyze Logical Flow and Dependencies
4. Assess Alignment with Overall Goals
5. Identify Potential Risks and Unforeseen Issues
6. Challenge from Critical Perspective (Self or Other Persona)
7. Explore Diverse Alternatives (ToT-Inspired)
8. Hindsight is 20/20: The 'If Only...' Reflection
9. Proceed / No Further Actions
```
### 3. Processing Guidelines
**Do NOT show:**
- The full protocol text with `[[LLM: ...]]` instructions
- Detailed explanations of each option unless the option is being executed or the user asks; when giving a definition, you may adapt it to tie in its relevance to the current content
- Any internal template markup
**After user selection from the list:**
- Execute the chosen action according to the protocol instructions below
- Ask if they want to select another action or proceed with option 9 once complete
- Continue until user selects option 9 or indicates completion
## Action Definitions
0. Expand or Contract for Audience
[[LLM: Ask the user whether they want to 'expand' on the content (add more detail, elaborate) or 'contract' it (simplify, clarify, make more concise). Also, ask if there's a specific target audience they have in mind. Once clarified, perform the expansion or contraction from your current role's perspective, tailored to the specified audience if provided.]]
1. Explain Reasoning (CoT Step-by-Step)
[[LLM: Explain the step-by-step thinking process, characteristic of your role, that you used to arrive at the current proposal for this content.]]
2. Critique and Refine
[[LLM: From your current role's perspective, review your last output or the current section for flaws, inconsistencies, or areas for improvement, and then suggest a refined version reflecting your expertise.]]
3. Analyze Logical Flow and Dependencies
[[LLM: From your role's standpoint, examine the content's structure for logical progression, internal consistency, and any relevant dependencies. Confirm if elements are presented in an effective order.]]
4. Assess Alignment with Overall Goals
[[LLM: Evaluate how well the current content contributes to the stated overall goals of the document, interpreting this from your specific role's perspective and identifying any misalignments you perceive.]]
5. Identify Potential Risks and Unforeseen Issues
[[LLM: Based on your role's expertise, brainstorm potential risks, overlooked edge cases, or unintended consequences related to the current content or proposal.]]
6. Challenge from Critical Perspective (Self or Other Persona)
[[LLM: Adopt a critical perspective on the current content. If the user specifies another role or persona (e.g., 'as a customer', 'as [Another Persona Name]'), critique the content or play devil's advocate from that specified viewpoint. If no other role is specified, play devil's advocate from your own current persona's viewpoint, arguing against the proposal or current content and highlighting weaknesses or counterarguments specific to your concerns. This can also occasionally invoke YAGNI where appropriate; for example, when trimming MVP scope, the perspective might challenge whether a given element is needed at all.]]
7. Explore Diverse Alternatives (ToT-Inspired)
[[LLM: From your role's perspective, first broadly brainstorm a range of diverse approaches or solutions to the current topic. Then, from this wider exploration, select and present 2 distinct alternatives, detailing the pros, cons, and potential implications you foresee for each.]]
8. Hindsight is 20/20: The 'If Only...' Reflection
[[LLM: In your current persona, imagine it's a retrospective for a project based on the current content. What's the one 'if only we had known/done X...' that your role would humorously or dramatically highlight, along with the imagined consequences?]]
9. Proceed / No Further Actions
[[LLM: Acknowledge the user's choice to finalize the current work, accept the AI's last output as is, or move on to the next step without selecting another action from this list. Prepare to proceed accordingly.]]


@@ -1,216 +0,0 @@
# Brainstorming Techniques Task
This task provides a comprehensive toolkit of creative brainstorming techniques for ideation and innovative thinking. The analyst can use these techniques to facilitate productive brainstorming sessions with users.
## Process
### 1. Session Setup
[[LLM: Begin by understanding the brainstorming context and goals. Ask clarifying questions if needed to determine the best approach.]]
1. **Establish Context**
- Understand the problem space or opportunity area
- Identify any constraints or parameters
- Determine session goals (divergent exploration vs. focused ideation)
2. **Select Technique Approach**
- Option A: User selects specific techniques
- Option B: Analyst recommends techniques based on context
- Option C: Random technique selection for creative variety
- Option D: Progressive technique flow (start broad, narrow down)
### 2. Core Brainstorming Techniques
#### Creative Expansion Techniques
1. **"What If" Scenarios**
[[LLM: Generate provocative what-if questions that challenge assumptions and expand thinking beyond current limitations.]]
- What if we had unlimited resources?
- What if this problem didn't exist?
- What if we approached this from a child's perspective?
- What if we had to solve this in 24 hours?
2. **Analogical Thinking**
[[LLM: Help user draw parallels between their challenge and other domains, industries, or natural systems.]]
- "How might this work like [X] but for [Y]?"
- Nature-inspired solutions (biomimicry)
- Cross-industry pattern matching
- Historical precedent analysis
3. **Reversal/Inversion**
[[LLM: Flip the problem or approach it from the opposite angle to reveal new insights.]]
- What if we did the exact opposite?
- How could we make this problem worse? (then reverse)
- Start from the end goal and work backward
- Reverse roles or perspectives
4. **First Principles Thinking**
[[LLM: Break down to fundamental truths and rebuild from scratch.]]
- What are the absolute fundamentals here?
- What assumptions can we challenge?
- If we started from zero, what would we build?
- What laws of physics/economics/human nature apply?
#### Structured Ideation Frameworks
5. **SCAMPER Method**
[[LLM: Guide through each SCAMPER prompt systematically.]]
- **S**ubstitute: What can be substituted?
- **C**ombine: What can be combined or integrated?
- **A**dapt: What can be adapted from elsewhere?
- **M**odify/Magnify: What can be emphasized or reduced?
- **P**ut to other uses: What else could this be used for?
- **E**liminate: What can be removed or simplified?
- **R**everse/Rearrange: What can be reversed or reordered?
6. **Six Thinking Hats**
[[LLM: Cycle through different thinking modes, spending focused time in each.]]
- White Hat: Facts and information
- Red Hat: Emotions and intuition
- Black Hat: Caution and critical thinking
- Yellow Hat: Optimism and benefits
- Green Hat: Creativity and alternatives
- Blue Hat: Process and control
7. **Mind Mapping**
[[LLM: Create text-based mind maps with clear hierarchical structure.]]
```
Central Concept
├── Branch 1
│ ├── Sub-idea 1.1
│ └── Sub-idea 1.2
├── Branch 2
│ ├── Sub-idea 2.1
│ └── Sub-idea 2.2
└── Branch 3
└── Sub-idea 3.1
```
#### Collaborative Techniques
8. **"Yes, And..." Building**
[[LLM: Accept every idea and build upon it without judgment. Encourage wild ideas and defer criticism.]]
- Accept the premise of each idea
- Add to it with "Yes, and..."
- Build chains of connected ideas
- Explore tangents freely
9. **Brainwriting/Round Robin**
[[LLM: Simulate multiple perspectives by generating ideas from different viewpoints.]]
- Generate ideas from stakeholder perspectives
- Build on previous ideas in rounds
- Combine unrelated ideas
- Cross-pollinate concepts
10. **Random Stimulation**
[[LLM: Use random words, images, or concepts as creative triggers.]]
- Random word association
- Picture/metaphor inspiration
- Forced connections between unrelated items
- Constraint-based creativity
#### Deep Exploration Techniques
11. **Five Whys**
[[LLM: Dig deeper into root causes and underlying motivations.]]
- Why does this problem exist? → Answer → Why? (repeat 5 times)
- Uncover hidden assumptions
- Find root causes, not symptoms
- Identify intervention points
12. **Morphological Analysis**
[[LLM: Break down into parameters and systematically explore combinations.]]
- List key parameters/dimensions
- Identify possible values for each
- Create combination matrix
- Explore unusual combinations
13. **Provocation Technique (PO)**
[[LLM: Make deliberately provocative statements to jar thinking.]]
- PO: Cars have square wheels
- PO: Customers pay us to take products
- PO: The problem solves itself
- Extract useful ideas from provocations
### 3. Technique Selection Guide
[[LLM: Help user select appropriate techniques based on their needs.]]
**For Initial Exploration:**
- What If Scenarios
- First Principles
- Mind Mapping
**For Stuck/Blocked Thinking:**
- Random Stimulation
- Reversal/Inversion
- Provocation Technique
**For Systematic Coverage:**
- SCAMPER
- Morphological Analysis
- Six Thinking Hats
**For Deep Understanding:**
- Five Whys
- Analogical Thinking
- First Principles
**For Team/Collaborative Settings:**
- Brainwriting
- "Yes, And..."
- Six Thinking Hats
### 4. Session Flow Management
[[LLM: Guide the brainstorming session with appropriate pacing and technique transitions.]]
1. **Warm-up Phase** (5-10 min)
- Start with accessible techniques
- Build creative confidence
- Establish "no judgment" atmosphere
2. **Divergent Phase** (20-30 min)
- Use expansion techniques
- Generate quantity over quality
- Encourage wild ideas
3. **Convergent Phase** (15-20 min)
- Group and categorize ideas
- Identify patterns and themes
- Select promising directions
4. **Synthesis Phase** (10-15 min)
- Combine complementary ideas
- Refine and develop concepts
- Prepare summary of insights
### 5. Output Format
[[LLM: Present brainstorming results in an organized, actionable format.]]
**Session Summary:**
- Techniques used
- Number of ideas generated
- Key themes identified
**Idea Categories:**
1. **Immediate Opportunities** - Ideas that could be implemented now
2. **Future Innovations** - Ideas requiring more development
3. **Moonshots** - Ambitious, transformative ideas
4. **Insights & Learnings** - Key realizations from the session
**Next Steps:**
- Which ideas to explore further
- Recommended follow-up techniques
- Suggested research areas
## Important Notes
- Maintain energy and momentum throughout the session
- Defer judgment - all ideas are valid during generation
- Quantity leads to quality - aim for many ideas
- Build on ideas collaboratively
- Document everything - even "silly" ideas can spark breakthroughs
- Take breaks if energy flags
- End with clear next actions


@@ -1,160 +0,0 @@
# Create Brownfield Epic Task
## Purpose
Create a single epic for smaller brownfield enhancements that don't require the full PRD and Architecture documentation process. This task is for isolated features or modifications that can be completed within a focused scope.
## When to Use This Task
**Use this task when:**
- The enhancement can be completed in 1-3 stories
- No significant architectural changes are required
- The enhancement follows existing project patterns
- Integration complexity is minimal
- Risk to existing system is low
**Use the full brownfield PRD/Architecture process when:**
- The enhancement requires multiple coordinated stories
- Architectural planning is needed
- Significant integration work is required
- Risk assessment and mitigation planning is necessary
## Instructions
### 1. Project Analysis (Required)
Before creating the epic, gather essential information about the existing project:
**Existing Project Context:**
- [ ] Project purpose and current functionality understood
- [ ] Existing technology stack identified
- [ ] Current architecture patterns noted
- [ ] Integration points with existing system identified
**Enhancement Scope:**
- [ ] Enhancement clearly defined and scoped
- [ ] Impact on existing functionality assessed
- [ ] Required integration points identified
- [ ] Success criteria established
### 2. Epic Creation
Create a focused epic following this structure:
#### Epic Title
{{Enhancement Name}} - Brownfield Enhancement
#### Epic Goal
{{1-2 sentences describing what the epic will accomplish and why it adds value}}
#### Epic Description
**Existing System Context:**
- Current relevant functionality: {{brief description}}
- Technology stack: {{relevant existing technologies}}
- Integration points: {{where new work connects to existing system}}
**Enhancement Details:**
- What's being added/changed: {{clear description}}
- How it integrates: {{integration approach}}
- Success criteria: {{measurable outcomes}}
#### Stories
List 1-3 focused stories that complete the epic:
1. **Story 1:** {{Story title and brief description}}
2. **Story 2:** {{Story title and brief description}}
3. **Story 3:** {{Story title and brief description}}
#### Compatibility Requirements
- [ ] Existing APIs remain unchanged
- [ ] Database schema changes are backward compatible
- [ ] UI changes follow existing patterns
- [ ] Performance impact is minimal
#### Risk Mitigation
- **Primary Risk:** {{main risk to existing system}}
- **Mitigation:** {{how risk will be addressed}}
- **Rollback Plan:** {{how to undo changes if needed}}
#### Definition of Done
- [ ] All stories completed with acceptance criteria met
- [ ] Existing functionality verified through testing
- [ ] Integration points working correctly
- [ ] Documentation updated appropriately
- [ ] No regression in existing features
### 3. Validation Checklist
Before finalizing the epic, ensure:
**Scope Validation:**
- [ ] Epic can be completed in 1-3 stories maximum
- [ ] No architectural documentation is required
- [ ] Enhancement follows existing patterns
- [ ] Integration complexity is manageable
**Risk Assessment:**
- [ ] Risk to existing system is low
- [ ] Rollback plan is feasible
- [ ] Testing approach covers existing functionality
- [ ] Team has sufficient knowledge of integration points
**Completeness Check:**
- [ ] Epic goal is clear and achievable
- [ ] Stories are properly scoped
- [ ] Success criteria are measurable
- [ ] Dependencies are identified
### 4. Handoff to Story Manager
Once the epic is validated, provide this handoff to the Story Manager:
---
**Story Manager Handoff:**
"Please develop detailed user stories for this brownfield epic. Key considerations:
- This is an enhancement to an existing system running {{technology stack}}
- Integration points: {{list key integration points}}
- Existing patterns to follow: {{relevant existing patterns}}
- Critical compatibility requirements: {{key requirements}}
- Each story must include verification that existing functionality remains intact
The epic should maintain system integrity while delivering {{epic goal}}."
---
## Success Criteria
The epic creation is successful when:
1. Enhancement scope is clearly defined and appropriately sized
2. Integration approach respects existing system architecture
3. Risk to existing functionality is minimized
4. Stories are logically sequenced for safe implementation
5. Compatibility requirements are clearly specified
6. Rollback plan is feasible and documented
## Important Notes
- This task is specifically for SMALL brownfield enhancements
- If the scope grows beyond 3 stories, consider the full brownfield PRD process
- Always prioritize existing system integrity over new functionality
- When in doubt about scope or complexity, escalate to full brownfield planning


@@ -1,147 +0,0 @@
# Create Brownfield Story Task
## Purpose
Create a single user story for very small brownfield enhancements that can be completed in one focused development session. This task is for minimal additions or bug fixes that require existing system integration awareness.
## When to Use This Task
**Use this task when:**
- The enhancement can be completed in a single story
- No new architecture or significant design is required
- The change follows existing patterns exactly
- Integration is straightforward with minimal risk
- Change is isolated with clear boundaries
**Use brownfield-create-epic when:**
- The enhancement requires 2-3 coordinated stories
- Some design work is needed
- Multiple integration points are involved
**Use the full brownfield PRD/Architecture process when:**
- The enhancement requires multiple coordinated stories
- Architectural planning is needed
- Significant integration work is required
## Instructions
### 1. Quick Project Assessment
Gather minimal but essential context about the existing project:
**Current System Context:**
- [ ] Relevant existing functionality identified
- [ ] Technology stack for this area noted
- [ ] Integration point(s) clearly understood
- [ ] Existing patterns for similar work identified
**Change Scope:**
- [ ] Specific change clearly defined
- [ ] Impact boundaries identified
- [ ] Success criteria established
### 2. Story Creation
Create a single focused story following this structure:
#### Story Title
{{Specific Enhancement}} - Brownfield Addition
#### User Story
As a {{user type}},
I want {{specific action/capability}},
So that {{clear benefit/value}}.
#### Story Context
**Existing System Integration:**
- Integrates with: {{existing component/system}}
- Technology: {{relevant tech stack}}
- Follows pattern: {{existing pattern to follow}}
- Touch points: {{specific integration points}}
#### Acceptance Criteria
**Functional Requirements:**
1. {{Primary functional requirement}}
2. {{Secondary functional requirement (if any)}}
3. {{Integration requirement}}
**Integration Requirements:**
4. Existing {{relevant functionality}} continues to work unchanged
5. New functionality follows existing {{pattern}} pattern
6. Integration with {{system/component}} maintains current behavior
**Quality Requirements:**
7. Change is covered by appropriate tests
8. Documentation is updated if needed
9. No regression in existing functionality verified
#### Technical Notes
- **Integration Approach:** {{how it connects to existing system}}
- **Existing Pattern Reference:** {{link or description of pattern to follow}}
- **Key Constraints:** {{any important limitations or requirements}}
#### Definition of Done
- [ ] Functional requirements met
- [ ] Integration requirements verified
- [ ] Existing functionality regression tested
- [ ] Code follows existing patterns and standards
- [ ] Tests pass (existing and new)
- [ ] Documentation updated if applicable
### 3. Risk and Compatibility Check
**Minimal Risk Assessment:**
- **Primary Risk:** {{main risk to existing system}}
- **Mitigation:** {{simple mitigation approach}}
- **Rollback:** {{how to undo if needed}}
**Compatibility Verification:**
- [ ] No breaking changes to existing APIs
- [ ] Database changes (if any) are additive only
- [ ] UI changes follow existing design patterns
- [ ] Performance impact is negligible
### 4. Validation Checklist
Before finalizing the story, confirm:
**Scope Validation:**
- [ ] Story can be completed in one development session
- [ ] Integration approach is straightforward
- [ ] Follows existing patterns exactly
- [ ] No design or architecture work required
**Clarity Check:**
- [ ] Story requirements are unambiguous
- [ ] Integration points are clearly specified
- [ ] Success criteria are testable
- [ ] Rollback approach is simple
## Success Criteria
The story creation is successful when:
1. Enhancement is clearly defined and appropriately scoped for single session
2. Integration approach is straightforward and low-risk
3. Existing system patterns are identified and will be followed
4. Rollback plan is simple and feasible
5. Acceptance criteria include existing functionality verification
## Important Notes
- This task is for VERY SMALL brownfield changes only
- If complexity grows during analysis, escalate to brownfield-create-epic
- Always prioritize existing system integrity
- When in doubt about integration complexity, use brownfield-create-epic instead
- Stories should take no more than 4 hours of focused development work


@@ -1,74 +0,0 @@
# Core Dump Task
## Purpose
To create a concise memory recording file (`.ai/core-dump-n.md`) that captures the essential context of the current agent session, enabling seamless continuation of work in future agent sessions. This task ensures persistent context across agent conversations while maintaining minimal token usage for efficient context loading.
## Inputs for this Task
- Current session conversation history and accomplishments
- Files created, modified, or deleted during the session
- Key decisions made and procedures followed
- Current project state and next logical steps
- User requests and agent responses that shaped the session
## Task Execution Instructions
### 0. Check Existing Core Dump
Before proceeding, check if `.ai/core-dump.md` already exists:
- If file exists, ask user: "Core dump file exists. Should I: 1. Overwrite, 2. Update, 3. Append or 4. Create new?"
- **Overwrite**: Replace entire file with new content
- **Update**: Merge new session info with existing content, updating relevant sections
- **Append**: Add new session as a separate entry while preserving existing content
- **Create New**: Create a new file with the next available `-#` suffix, e.g. `core-dump-3.md` if `core-dump-1.md` and `core-dump-2.md` already exist (a minimal naming sketch follows this list).
- If file doesn't exist, proceed with creation of `core-dump-1.md`
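Below is a minimal TypeScript sketch of the "Create New" naming rule, assuming a Node environment and the `.ai/` directory described above; the helper name is illustrative rather than part of the task.
```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Return the next unused core dump path, e.g. .ai/core-dump-3.md when
// core-dump-1.md and core-dump-2.md already exist (hypothetical helper).
function nextCoreDumpPath(dir = ".ai"): string {
  let n = 1;
  while (fs.existsSync(path.join(dir, `core-dump-${n}.md`))) {
    n += 1;
  }
  return path.join(dir, `core-dump-${n}.md`);
}
```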
### 1. Analyze Session Context
- Review the entire conversation to identify key accomplishments
- Note any specific tasks, procedures, or workflows that were executed
- Identify important decisions made or problems solved
- Capture the user's working style and preferences observed during the session
### 2. Document What Was Accomplished
- **Primary Actions**: List the main tasks completed concisely
- **Story Progress**: For story work, use format "Tasks Complete: 1-6, 8. Next Task Pending: 7, 9"
- **Problem Solving**: Document any challenges encountered and how they were resolved
- **User Communications**: Summarize key user requests, preferences, and discussion points
### 3. Record File System Changes (Concise Format)
- **Files Created**: `filename.ext` (brief purpose/size)
- **Files Modified**: `filename.ext` (what changed)
- **Files Deleted**: `filename.ext` (why removed)
- Focus on essential details, avoid verbose descriptions
### 4. Capture Current Project State
- **Project Progress**: Where the project stands after this session
- **Current Issues**: Any blockers or problems that need resolution
- **Next Logical Steps**: What would be the natural next actions to take
### 5. Create/Update Core Dump File
Based on the user's choice from step 0, handle the file accordingly.
### 6. Optimize for Minimal Context
- Keep descriptions concise but informative
- Use abbreviated formats where possible (file sizes, task numbers)
- Focus on actionable information rather than detailed explanations
- Avoid redundant information that can be found in project documentation
- Prioritize information that would be lost without this recording
- Ensure the file can be quickly scanned and understood
### 7. Validate Completeness
- Verify all significant session activities are captured
- Ensure a future agent could understand the current state
- Check that file changes are accurately recorded
- Confirm next steps are clear and actionable
- Verify user communication style and preferences are noted


@@ -1,73 +0,0 @@
# Correct Course Task
## Purpose
- Guide a structured response to a change trigger using the `change-checklist`.
- Analyze the impacts of the change on epics, project artifacts, and the MVP, guided by the checklist's structure.
- Explore potential solutions (e.g., adjust scope, rollback elements, rescope features) as prompted by the checklist.
- Draft specific, actionable proposed updates to any affected project artifacts (e.g., epics, user stories, PRD sections, architecture document sections) based on the analysis.
- Produce a consolidated "Sprint Change Proposal" document that contains the impact analysis and the clearly drafted proposed edits for user review and approval.
- Ensure a clear handoff path if the nature of the changes necessitates fundamental replanning by other core agents (like PM or Architect).
## Instructions
### 1. Initial Setup & Mode Selection
- **Acknowledge Task & Inputs:**
- Confirm with the user that the "Correct Course Task" (Change Navigation & Integration) is being initiated.
- Verify the change trigger and ensure you have the user's initial explanation of the issue and its perceived impact.
- Confirm access to all relevant project artifacts (e.g., PRD, Epics/Stories, Architecture Documents, UI/UX Specifications) and, critically, the `change-checklist` (e.g., `change-checklist`).
- **Establish Interaction Mode:**
- Ask the user their preferred interaction mode for this task:
- **"Incrementally (Default & Recommended):** Shall we work through the `change-checklist` section by section, discussing findings and collaboratively drafting proposed changes for each relevant part before moving to the next? This allows for detailed, step-by-step refinement."
- **"YOLO Mode (Batch Processing):** Or, would you prefer I conduct a more batched analysis based on the checklist and then present a consolidated set of findings and proposed changes for a broader review? This can be quicker for initial assessment but might require more extensive review of the combined proposals."
- Request the user to select their preferred mode.
- Once the user chooses, confirm the selected mode (e.g., "Okay, we will proceed in Incremental mode."). This chosen mode will govern how subsequent steps in this task are executed.
- **Explain Process:** Briefly inform the user: "We will now use the `change-checklist` to analyze the change and draft proposed updates. I will guide you through the checklist items based on our chosen interaction mode."
<rule>When asking multiple questions or presenting multiple points for user input at once, number them clearly (e.g., 1., 2a., 2b.) to make it easier for the user to provide specific responses.</rule>
### 2. Execute Checklist Analysis (Iteratively or Batched, per Interaction Mode)
- Systematically work through Sections 1-4 of the `change-checklist` (typically covering Change Context, Epic/Story Impact Analysis, Artifact Conflict Resolution, and Path Evaluation/Recommendation).
- For each checklist item or logical group of items (depending on interaction mode):
- Present the relevant prompt(s) or considerations from the checklist to the user.
- Request necessary information and actively analyze the relevant project artifacts (PRD, epics, architecture documents, story history, etc.) to assess the impact.
- Discuss your findings for each item with the user.
- Record the status of each checklist item (e.g., `[x] Addressed`, `[N/A]`, `[!] Further Action Needed`) and any pertinent notes or decisions.
- Collaboratively agree on the "Recommended Path Forward" as prompted by Section 4 of the checklist.
### 3. Draft Proposed Changes (Iteratively or Batched)
- Based on the completed checklist analysis (Sections 1-4) and the agreed "Recommended Path Forward" (excluding scenarios requiring fundamental replans that would necessitate immediate handoff to PM/Architect):
- Identify the specific project artifacts that require updates (e.g., specific epics, user stories, PRD sections, architecture document components, diagrams).
- **Draft the proposed changes directly and explicitly for each identified artifact.** Examples include:
- Revising user story text, acceptance criteria, or priority.
- Adding, removing, reordering, or splitting user stories within epics.
- Proposing modified architecture diagram snippets (e.g., providing an updated Mermaid diagram block or a clear textual description of the change to an existing diagram).
- Updating technology lists, configuration details, or specific sections within the PRD or architecture documents.
- Drafting new, small supporting artifacts if necessary (e.g., a brief addendum for a specific decision).
- If in "Incremental Mode," discuss and refine these proposed edits for each artifact or small group of related artifacts with the user as they are drafted.
- If in "YOLO Mode," compile all drafted edits for presentation in the next step.
### 4. Generate "Sprint Change Proposal" with Edits
- Synthesize the complete `change-checklist` analysis (covering findings from Sections 1-4) and all the agreed-upon proposed edits (from Instruction 3) into a single document titled "Sprint Change Proposal." This proposal should align with the structure suggested by Section 5 of the `change-checklist` (Proposal Components).
- The proposal must clearly present:
- **Analysis Summary:** A concise overview of the original issue, its analyzed impact (on epics, artifacts, MVP scope), and the rationale for the chosen path forward.
- **Specific Proposed Edits:** For each affected artifact, clearly show or describe the exact changes (e.g., "Change Story X.Y from: [old text] To: [new text]", "Add new Acceptance Criterion to Story A.B: [new AC]", "Update Section 3.2 of Architecture Document as follows: [new/modified text or diagram description]").
- Present the complete draft of the "Sprint Change Proposal" to the user for final review and feedback. Incorporate any final adjustments requested by the user.
### 5. Finalize & Determine Next Steps
- Obtain explicit user approval for the "Sprint Change Proposal," including all the specific edits documented within it.
- Provide the finalized "Sprint Change Proposal" document to the user.
- **Based on the nature of the approved changes:**
- **If the approved edits sufficiently address the change and can be implemented directly or organized by a PO/SM:** State that the "Correct Course Task" is complete regarding analysis and change proposal, and the user can now proceed with implementing or logging these changes (e.g., updating actual project documents, backlog items). Suggest handoff to a PO/SM agent for backlog organization if appropriate.
- **If the analysis and proposed path (as per checklist Section 4 and potentially Section 6) indicate that the change requires a more fundamental replan (e.g., significant scope change, major architectural rework):** Clearly state this conclusion. Advise the user that the next step involves engaging the primary PM or Architect agents, using the "Sprint Change Proposal" as critical input and context for that deeper replanning effort.
## Output Deliverables
- **Primary:** A "Sprint Change Proposal" document (in markdown format). This document will contain:
- A summary of the `change-checklist` analysis (issue, impact, rationale for the chosen path).
- Specific, clearly drafted proposed edits for all affected project artifacts.
- **Implicit:** An annotated `change-checklist` (or the record of its completion) reflecting the discussions, findings, and decisions made during the process.


@@ -1,289 +0,0 @@
# Create Deep Research Prompt Task
This task helps create comprehensive research prompts for various types of deep analysis. It can process inputs from brainstorming sessions, project briefs, market research, or specific research questions to generate targeted prompts for deeper investigation.
## Purpose
Generate well-structured research prompts that:
- Define clear research objectives and scope
- Specify appropriate research methodologies
- Outline expected deliverables and formats
- Guide systematic investigation of complex topics
- Ensure actionable insights are captured
## Research Type Selection
[[LLM: First, help the user select the most appropriate research focus based on their needs and any input documents they've provided.]]
### 1. Research Focus Options
Present these numbered options to the user:
1. **Product Validation Research**
- Validate product hypotheses and market fit
- Test assumptions about user needs and solutions
- Assess technical and business feasibility
- Identify risks and mitigation strategies
2. **Market Opportunity Research**
- Analyze market size and growth potential
- Identify market segments and dynamics
- Assess market entry strategies
- Evaluate timing and market readiness
3. **User & Customer Research**
- Deep dive into user personas and behaviors
- Understand jobs-to-be-done and pain points
- Map customer journeys and touchpoints
- Analyze willingness to pay and value perception
4. **Competitive Intelligence Research**
- Detailed competitor analysis and positioning
- Feature and capability comparisons
- Business model and strategy analysis
- Identify competitive advantages and gaps
5. **Technology & Innovation Research**
- Assess technology trends and possibilities
- Evaluate technical approaches and architectures
- Identify emerging technologies and disruptions
- Analyze build vs. buy vs. partner options
6. **Industry & Ecosystem Research**
- Map industry value chains and dynamics
- Identify key players and relationships
- Analyze regulatory and compliance factors
- Understand partnership opportunities
7. **Strategic Options Research**
- Evaluate different strategic directions
- Assess business model alternatives
- Analyze go-to-market strategies
- Consider expansion and scaling paths
8. **Risk & Feasibility Research**
- Identify and assess various risk factors
- Evaluate implementation challenges
- Analyze resource requirements
- Consider regulatory and legal implications
9. **Custom Research Focus**
[[LLM: Allow user to define their own specific research focus.]]
- User-defined research objectives
- Specialized domain investigation
- Cross-functional research needs
### 2. Input Processing
[[LLM: Based on the selected research type and any provided inputs (project brief, brainstorming results, etc.), extract relevant context and constraints.]]
**If Project Brief provided:**
- Extract key product concepts and goals
- Identify target users and use cases
- Note technical constraints and preferences
- Highlight uncertainties and assumptions
**If Brainstorming Results provided:**
- Synthesize main ideas and themes
- Identify areas needing validation
- Extract hypotheses to test
- Note creative directions to explore
**If Market Research provided:**
- Build on identified opportunities
- Deepen specific market insights
- Validate initial findings
- Explore adjacent possibilities
**If Starting Fresh:**
- Gather essential context through questions
- Define the problem space
- Clarify research objectives
- Establish success criteria
## Process
### 3. Research Prompt Structure
[[LLM: Based on the selected research type and context, collaboratively develop a comprehensive research prompt with these components.]]
#### A. Research Objectives
[[LLM: Work with the user to articulate clear, specific objectives for the research.]]
- Primary research goal and purpose
- Key decisions the research will inform
- Success criteria for the research
- Constraints and boundaries
#### B. Research Questions
[[LLM: Develop specific, actionable research questions organized by theme.]]
**Core Questions:**
- Central questions that must be answered
- Priority ranking of questions
- Dependencies between questions
**Supporting Questions:**
- Additional context-building questions
- Nice-to-have insights
- Future-looking considerations
#### C. Research Methodology
[[LLM: Specify appropriate research methods based on the type and objectives.]]
**Data Collection Methods:**
- Secondary research sources
- Primary research approaches (if applicable)
- Data quality requirements
- Source credibility criteria
**Analysis Frameworks:**
- Specific frameworks to apply
- Comparison criteria
- Evaluation methodologies
- Synthesis approaches
#### D. Output Requirements
[[LLM: Define how research findings should be structured and presented.]]
**Format Specifications:**
- Executive summary requirements
- Detailed findings structure
- Visual/tabular presentations
- Supporting documentation
**Key Deliverables:**
- Must-have sections and insights
- Decision-support elements
- Action-oriented recommendations
- Risk and uncertainty documentation
### 4. Prompt Generation
[[LLM: Synthesize all elements into a comprehensive, ready-to-use research prompt.]]
**Research Prompt Template:**
```
## Research Objective
[Clear statement of what this research aims to achieve]
## Background Context
[Relevant information from project brief, brainstorming, or other inputs]
## Research Questions
### Primary Questions (Must Answer)
1. [Specific, actionable question]
2. [Specific, actionable question]
...
### Secondary Questions (Nice to Have)
1. [Supporting question]
2. [Supporting question]
...
## Research Methodology
### Information Sources
- [Specific source types and priorities]
### Analysis Frameworks
- [Specific frameworks to apply]
### Data Requirements
- [Quality, recency, credibility needs]
## Expected Deliverables
### Executive Summary
- Key findings and insights
- Critical implications
- Recommended actions
### Detailed Analysis
[Specific sections needed based on research type]
### Supporting Materials
- Data tables
- Comparison matrices
- Source documentation
## Success Criteria
[How to evaluate if research achieved its objectives]
## Timeline and Priority
[If applicable, any time constraints or phasing]
```
### 5. Review and Refinement
[[LLM: Present the draft research prompt for user review and refinement.]]
1. **Present Complete Prompt**
- Show the full research prompt
- Explain key elements and rationale
- Highlight any assumptions made
2. **Gather Feedback**
- Are the objectives clear and correct?
- Do the questions address all concerns?
- Is the scope appropriate?
- Are output requirements sufficient?
3. **Refine as Needed**
- Incorporate user feedback
- Adjust scope or focus
- Add missing elements
- Clarify ambiguities
### 6. Next Steps Guidance
[[LLM: Provide clear guidance on how to use the research prompt.]]
**Execution Options:**
1. **Use with AI Research Assistant**: Provide this prompt to an AI model with research capabilities
2. **Guide Human Research**: Use as a framework for manual research efforts
3. **Hybrid Approach**: Combine AI and human research using this structure
**Integration Points:**
- How findings will feed into next phases
- Which team members should review results
- How to validate findings
- When to revisit or expand research
## Important Notes
- The quality of the research prompt directly impacts the quality of insights gathered
- Be specific rather than general in research questions
- Consider both current state and future implications
- Balance comprehensiveness with focus
- Document assumptions and limitations clearly
- Plan for iterative refinement based on initial findings


@@ -1,74 +0,0 @@
# Create Document from Template Task
## Purpose
- Generate documents from any specified template following embedded instructions from the perspective of the selected agent persona
## Instructions
### 1. Identify Template and Context
- Determine which template to use (user-provided or list available for selection to user)
- Agent-specific templates are listed in the agent's dependencies under `templates`. For each template listed, consider it a document the agent can create. So if an agent has:
@{example}
dependencies:
  templates:
    - prd-tmpl
    - architecture-tmpl
@{/example}
You would offer to create "PRD" and "Architecture" documents when the user asks what you can help with.
- Gather all relevant inputs, ask for them if missing, or rely on the user to provide the necessary details to complete the document
- Understand the document purpose and target audience
### 2. Determine Interaction Mode
Confirm with the user their preferred interaction style:
- **Incremental:** Work through chunks of the document.
- **YOLO Mode:** Draft the complete document in one shot, making reasonable assumptions. (Can also be entered after starting incremental mode by typing /yolo.)
### 3. Execute Template
- Load specified template from `templates#*` or the /templates directory
- Follow ALL embedded LLM instructions within the template
- Process template markup according to `utils#template-format` conventions
### 4. Template Processing Rules
#### CRITICAL: Never display template markup, LLM instructions, or examples to users
- Replace all {{placeholders}} with actual content
- Execute all [[LLM: instructions]] internally
- Process `<<REPEAT>>` sections as needed
- Evaluate ^^CONDITION^^ blocks and include only if applicable
- Use @{examples} for guidance but never output them
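As a rough illustration only, the sketch below shows one way the markup types listed above could be stripped before presenting content; the regexes and function name are assumptions, and it only removes `<<REPEAT>>` and `^^CONDITION^^` markers rather than evaluating them.
```typescript
// Strip internal template markup so only clean content reaches the user.
// {{placeholders}} are collected so they can be replaced with real content.
function stripTemplateMarkup(template: string): { clean: string; placeholders: string[] } {
  const placeholders = [...template.matchAll(/\{\{([^}]+)\}\}/g)].map((m) => m[1].trim());
  const clean = template
    .replace(/\[\[LLM:[\s\S]*?\]\]/g, "")               // internal LLM instructions
    .replace(/@\{example\}[\s\S]*?@\{\/example\}/g, "") // example blocks, never shown
    .replace(/<<\/?REPEAT[^>]*>>/g, "")                 // repeat section markers
    .replace(/\^\^[^^]*\^\^/g, "");                     // condition markers
  return { clean, placeholders };
}
```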
### 5. Content Generation
- **Incremental Mode**: Present each major section for review before proceeding
- **YOLO Mode**: Generate all sections, then review complete document with user
- Apply any elicitation protocols specified in template
- Incorporate user feedback and iterate as needed
### 6. Validation
If template specifies a checklist:
- Run the appropriate checklist against completed document
- Document completion status for each item
- Address any deficiencies found
- Present validation summary to user
### 7. Final Presentation
- Present clean, formatted content only
- Ensure all sections are complete
- DO NOT truncate or summarize content
- Begin directly with document content (no preamble)
- Include any handoff prompts specified in template
## Important Notes
- Template markup is for AI processing only - never expose to users


@@ -1,431 +0,0 @@
# Create Expansion Pack Task
This task helps you create a comprehensive BMAD expansion pack that can include new agents, tasks, templates, and checklists for a specific domain.
## Understanding Expansion Packs
Expansion packs extend BMAD with domain-specific capabilities. They are self-contained packages that can be installed into any BMAD project. Every expansion pack MUST include a custom BMAD orchestrator agent that manages the domain-specific workflow.
## CRITICAL REQUIREMENTS
1. **Create Planning Document First**: Before any implementation, create a concise task list for user approval
2. **Verify All References**: Any task, template, or data file referenced in an agent MUST exist in the pack
3. **Include Orchestrator**: Every pack needs a custom BMAD-style orchestrator agent
4. **User Data Requirements**: Clearly specify any files users must provide in their data folder
## Process Overview
### Phase 1: Discovery and Planning
#### 1.1 Define the Domain
Ask the user:
- **Pack Name**: Short identifier (e.g., `healthcare`, `fintech`, `gamedev`)
- **Display Name**: Full name (e.g., "Healthcare Compliance Pack")
- **Description**: What domain or industry does this serve?
- **Key Problems**: What specific challenges will this pack solve?
- **Target Users**: Who will benefit from this expansion?
#### 1.2 Gather Examples
Request from the user:
- **Sample Documents**: Any existing documents in this domain
- **Workflow Examples**: How work currently flows in this domain
- **Compliance Needs**: Any regulatory or standards requirements
- **Output Examples**: What final deliverables look like
- **Data Requirements**: What reference data files users will need to provide
#### 1.3 Create Planning Document
**STOP HERE AND CREATE PLAN FIRST**
Create `expansion-packs/{pack-name}/plan.md` with:
```markdown
# {Pack Name} Expansion Pack Plan
## Overview
- Pack Name: {name}
- Description: {description}
- Target Domain: {domain}
## Components to Create
### Agents
- [ ] {pack-name}-orchestrator (REQUIRED: Custom BMAD orchestrator)
- [ ] {agent-1-name}
- [ ] {agent-2-name}
### Tasks
- [ ] {task-1} (referenced by: {agent})
- [ ] {task-2} (referenced by: {agent})
### Templates
- [ ] {template-1} (used by: {agent/task})
- [ ] {template-2} (used by: {agent/task})
### Checklists
- [ ] {checklist-1}
- [ ] {checklist-2}
### Data Files Required from User
- [ ] {filename}.{ext} - {description of content needed}
- [ ] {filename2}.{ext} - {description of content needed}
## Approval
User approval received: [ ] Yes
```
**Wait for user approval before proceeding to Phase 2**
### Phase 2: Component Design
#### 2.1 Create Orchestrator Agent
**FIRST PRIORITY**: Design the custom BMAD orchestrator:
- **Name**: `{pack-name}-orchestrator`
- **Purpose**: Master coordinator for domain-specific workflow
- **Key Commands**: Domain-specific orchestration commands
- **Integration**: How it leverages other pack agents
- **Workflow**: The complete process it manages
#### 2.2 Identify Specialist Agents
For each additional agent:
- **Role**: What specialist is needed?
- **Expertise**: Domain-specific knowledge required
- **Interactions**: How they work with orchestrator and BMAD agents
- **Unique Value**: What can't existing agents handle?
- **Required Tasks**: List ALL tasks this agent references
- **Required Templates**: List ALL templates this agent uses
- **Required Data**: List ALL data files this agent needs
#### 2.3 Design Specialized Tasks
For each task:
- **Purpose**: What specific action does it enable?
- **Inputs**: What information is needed?
- **Process**: Step-by-step instructions
- **Outputs**: What gets produced?
- **Agent Usage**: Which agents will use this task?
#### 2.4 Create Document Templates
For each template:
- **Document Type**: What kind of document?
- **Structure**: Sections and organization
- **Placeholders**: Variable content areas
- **Instructions**: How to complete each section
- **Standards**: Any format requirements
#### 2.5 Define Checklists
For each checklist:
- **Purpose**: What quality aspect does it verify?
- **Scope**: When should it be used?
- **Items**: Specific things to check
- **Criteria**: Pass/fail conditions
### Phase 3: Implementation
**Only proceed after plan.md is approved**
#### 3.1 Create Directory Structure
```text
expansion-packs/
└── {pack-name}/
├── plan.md (ALREADY CREATED)
├── manifest.yml
├── README.md
├── agents/
│ ├── {pack-name}-orchestrator.yml (REQUIRED)
│ └── {agent-id}.yml
├── personas/
│ ├── {pack-name}-orchestrator.md (REQUIRED)
│ └── {agent-id}.md
├── tasks/
│ └── {task-name}.md
├── templates/
│ └── {template-name}.md
├── checklists/
│ └── {checklist-name}.md
└── ide-agents/
├── {pack-name}-orchestrator.ide.md (REQUIRED)
└── {agent-id}.ide.md
```
#### 3.2 Create Manifest
Create `manifest.yml`:
```yaml
name: {pack-name}
version: 1.0.0
description: >-
{Detailed description of the expansion pack}
author: {Your name or organization}
bmad_version: "4.0.0"
# Files to create in the expansion pack
files:
agents:
- {pack-name}-orchestrator.yml
- {agent-name}.yml
personas:
- {pack-name}-orchestrator.md
- {agent-name}.md
ide-agents:
- {pack-name}-orchestrator.ide.md
- {agent-name}.ide.md
tasks:
- {task-name}.md
templates:
- {template-name}.md
checklists:
- {checklist-name}.md
# Data files users must provide
required_data:
- filename: {data-file}.{ext}
description: {What this file should contain}
location: bmad-core/data/
# Dependencies on core BMAD components
dependencies:
- {core-agent-name}
- {core-task-name}
# Post-install message
post_install_message: |
{Pack Name} expansion pack ready!
Required data files:
- {data-file}.{ext}: {description}
To use: npm run agent {pack-name}-orchestrator
```
### Phase 4: Content Creation
**Work through plan.md checklist systematically**
#### 4.1 Create Orchestrator First
1. Create `personas/{pack-name}-orchestrator.md` with BMAD-style commands
2. Create `agents/{pack-name}-orchestrator.yml` configuration
3. Create `ide-agents/{pack-name}-orchestrator.ide.md`
4. Verify ALL referenced tasks exist
5. Verify ALL referenced templates exist
6. Document data file requirements
#### 4.2 Agent Creation Order
For each additional agent:
1. Create persona file with domain expertise
2. Create agent configuration YAML
3. Create IDE-optimized version
4. **STOP** - Verify all referenced tasks/templates exist
5. Create any missing tasks/templates immediately
6. Mark agent as complete in plan.md
#### 4.3 Task Creation Guidelines
Each task should:
1. Have a clear, single purpose
2. Include step-by-step instructions
3. Provide examples when helpful
4. Reference domain standards
5. Be reusable across agents
#### 4.4 Template Best Practices
Templates should:
1. Include clear section headers
2. Provide inline instructions
3. Show example content
4. Mark required vs optional sections
5. Include domain-specific terminology
### Phase 5: Verification and Documentation
#### 5.1 Final Verification Checklist
Before declaring complete:
1. [ ] All items in plan.md marked complete
2. [ ] Orchestrator agent created and tested
3. [ ] All agent references validated
4. [ ] All required data files documented
5. [ ] manifest.yml lists all components
6. [ ] No orphaned tasks or templates
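One way to support checks 3, 5, and 6 is a small script like the sketch below; it assumes the manifest layout shown in Phase 3.2 and the `js-yaml` package, and the function name and field handling are illustrative rather than prescribed.
```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as yaml from "js-yaml"; // assumed dependency for parsing manifest.yml

interface Manifest {
  files?: Record<string, string[]>; // e.g. { agents: [...], tasks: [...], ... }
}

// Report any file listed in manifest.yml that is missing from the pack directory.
function findMissingFiles(packDir: string): string[] {
  const raw = fs.readFileSync(path.join(packDir, "manifest.yml"), "utf8");
  const manifest = yaml.load(raw) as Manifest;
  const missing: string[] = [];
  for (const [folder, names] of Object.entries(manifest.files ?? {})) {
    for (const name of names) {
      const filePath = path.join(packDir, folder, name);
      if (!fs.existsSync(filePath)) missing.push(filePath);
    }
  }
  return missing;
}

console.log(findMissingFiles("expansion-packs/healthcare"));
```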
#### 5.2 Create README
Include:
- Overview of the pack's purpose
- **Orchestrator usage instructions**
- Required data files and formats
- List of all components
- Integration with BMAD workflow
- Example scenarios
#### 5.3 Data File Documentation
For each required data file:
````markdown
## Required Data Files
### {filename}.{ext}
- **Purpose**: {why this file is needed}
- **Format**: {file format and structure}
- **Location**: Place in `bmad-core/data/`
- **Example**:
```
{sample content}
```
````
## Example: Healthcare Expansion Pack
```text
healthcare/
├── plan.md (Created first for approval)
├── manifest.yml
├── README.md
├── agents/
│ ├── healthcare-orchestrator.yml (REQUIRED)
│ ├── clinical-analyst.yml
│ └── compliance-officer.yml
├── personas/
│ ├── healthcare-orchestrator.md (REQUIRED)
│ ├── clinical-analyst.md
│ └── compliance-officer.md
├── ide-agents/
│ ├── healthcare-orchestrator.ide.md (REQUIRED)
│ ├── clinical-analyst.ide.md
│ └── compliance-officer.ide.md
├── tasks/
│ ├── hipaa-assessment.md
│ ├── clinical-protocol-review.md
│ └── patient-data-analysis.md
├── templates/
│ ├── clinical-trial-protocol.md
│ ├── hipaa-compliance-report.md
│ └── patient-outcome-report.md
└── checklists/
├── hipaa-checklist.md
└── clinical-data-quality.md
Required user data files:
- bmad-core/data/medical-terminology.md
- bmad-core/data/hipaa-requirements.md
```
## Interactive Questions Flow
### Initial Discovery
1. "What domain or industry will this expansion pack serve?"
2. "What are the main challenges or workflows in this domain?"
3. "Do you have any example documents or outputs? (Please share)"
4. "What specialized roles/experts exist in this domain?"
5. "What reference data will users need to provide?"
### Planning Phase
6. "Here's the proposed plan. Please review and approve before we continue."
### Orchestrator Design
7. "What key commands should the {pack-name} orchestrator support?"
8. "What's the typical workflow from start to finish?"
9. "How should it integrate with core BMAD agents?"
### Agent Planning
10. "For agent '{name}', what is their specific expertise?"
11. "What tasks will this agent reference? (I'll create them)"
12. "What templates will this agent use? (I'll create them)"
13. "What data files will this agent need? (You'll provide these)"
### Task Design
14. "Describe the '{task}' process step-by-step"
15. "What information is needed to complete this task?"
16. "What should the output look like?"
### Template Creation
17. "What sections should the '{template}' document have?"
18. "Are there any required formats or standards?"
19. "Can you provide an example of a completed document?"
### Data Requirements
20. "For {data-file}, what information should it contain?"
21. "What format should this data be in?"
22. "Can you provide a sample?"
## Important Considerations
- **Plan First**: ALWAYS create and get approval for plan.md before implementing
- **Orchestrator Required**: Every pack MUST have a custom BMAD orchestrator
- **Verify References**: ALL referenced tasks/templates MUST exist
- **Document Data Needs**: Clearly specify what users must provide
- **Domain Expertise**: Ensure accuracy in specialized fields
- **Compliance**: Include necessary regulatory requirements
## Tips for Success
1. **Plan Thoroughly**: The plan.md prevents missing components
2. **Build Orchestrator First**: It defines the overall workflow
3. **Verify As You Go**: Check off items in plan.md
4. **Test References**: Ensure no broken dependencies
5. **Document Data**: Users need clear data file instructions
## Common Mistakes to Avoid
1. **Missing Orchestrator**: Every pack needs its own BMAD-style orchestrator
2. **Orphaned References**: Agent references task that doesn't exist
3. **Unclear Data Needs**: Not specifying required user data files
4. **Skipping Plan**: Going straight to implementation
5. **Generic Orchestrator**: Not making it domain-specific
## Completion Checklist
- [ ] plan.md created and approved
- [ ] All plan.md items checked off
- [ ] Orchestrator agent created
- [ ] All agent references verified
- [ ] Data requirements documented or added
- [ ] README includes all setup instructions
- [ ] manifest.yml reflects actual files


@@ -1,262 +0,0 @@
# Create IDE Agent Task
This task guides you through creating a new BMAD IDE agent that conforms to the IDE agent schema and integrates effectively with workflows and teams.
**Note for User-Created IDE Agents**: If creating a custom IDE agent for your own use (not part of the core BMAD system), prefix the agent ID with a period (e.g., `.api-expert`) to ensure it's gitignored and won't conflict with repository updates.
## Prerequisites
1. Load and understand the IDE agent schema: `/bmad-core/schemas/ide-agent-schema.yml`
2. Review existing IDE agents in `/bmad-core/ide-agents/` for patterns and conventions
3. Review workflows in `/bmad-core/workflows/` to identify integration opportunities
4. Consider if this agent should also have a full agent counterpart
## Process
### 1. Define Agent Core Identity
Based on the schema's required fields:
- **Role**: Must end with "IDE Agent" (pattern: `^.+ IDE Agent$`)
- Example: "API Specialist IDE Agent", "Test Engineer IDE Agent"
- **Agent ID**: Following pattern `^[a-z][a-z0-9-]*$`
- For user agents: prefix with period (`.api-expert`)
- **Primary Purpose**: Define ONE focused capability
### 2. Create File References
All IDE agents must include (per schema):
```yaml
taskroot: "bmad-core/tasks/" # Required constant
templates: "bmad-core/templates/" # Optional but common
checklists: "bmad-core/checklists/" # Optional
default-template: "bmad-core/templates/{template-name}" # If agent creates documents
```
Additional custom references as needed (e.g., `story-path`, `coding-standards`)
### 3. Define Persona (Schema Required Fields)
Create concise persona following schema structure:
- **Name**: Character name (e.g., "Alex", "Dana")
- **Role**: Professional role title
- **Identity**: Extended specialization (20+ chars)
- **Focus**: Primary objectives (20+ chars)
- **Style**: Communication approach (20+ chars)
Keep descriptions brief for IDE efficiency!
### 4. Core Principles (Minimum 3 Required)
Must include these based on schema validation:
1. **Numbered Options Protocol** (REQUIRED): "When presenting multiple options, always use numbered lists for easy selection"
2. **[Domain-Specific Principle]**: Related to agent's expertise
3. **[Quality/Efficiency Principle]**: How they ensure excellence
4. Additional principles as needed (keep concise)
### 5. Critical Startup Operating Instructions
First instruction MUST announce name/role and mention *help (schema requirement):
```markdown
1. Announce your name and role, and let the user know they can say *help at any time to list the available commands. Give this reminder in your first response even if the initial request is a question, wrapping it around your answer. For example: 'I am {role} {name}, {response}... Also remember, you can enter `*help` to see a list of commands at any time.'
```
Add 2-5 additional startup instructions specific to the agent's role.
### 6. Commands (Minimum 2 Required)
Required commands per schema:
```markdown
- `*help` - Show these available commands as a numbered list offering selection
- `*chat-mode` - Enter conversational mode, staying in character while offering `advanced-elicitation` when providing advice or multiple options. Ends if another task or command is given
```
Add role-specific commands:
- Use pattern: `^\\*[a-z][a-z0-9-]*( \\{[^}]+\\})?$`
- Include clear descriptions (10+ chars)
- Reference tasks when appropriate
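A quick sanity check of these patterns can be scripted. The sketch below is illustrative only (not part of the BMAD tooling); the command pattern is shown with the YAML escaping removed, and the helper name is hypothetical.
```typescript
// Hypothetical helper for sanity-checking an IDE agent definition against
// the schema patterns described above. Illustrative only.
const ROLE_PATTERN = /^.+ IDE Agent$/;
const AGENT_ID_PATTERN = /^[a-z][a-z0-9-]*$/;
const COMMAND_PATTERN = /^\*[a-z][a-z0-9-]*( \{[^}]+\})?$/;

function checkAgentBasics(role: string, agentId: string, commands: string[]): string[] {
  const problems: string[] = [];
  if (!ROLE_PATTERN.test(role)) problems.push(`Role "${role}" must end with "IDE Agent"`);
  // User agents may prefix the id with a period; strip it before pattern-checking.
  const id = agentId.startsWith(".") ? agentId.slice(1) : agentId;
  if (!AGENT_ID_PATTERN.test(id)) problems.push(`Agent id "${agentId}" is not lowercase-dash-case`);
  for (const cmd of commands) {
    if (!COMMAND_PATTERN.test(cmd)) problems.push(`Command "${cmd}" does not match the schema pattern`);
  }
  if (!commands.includes("*help") || !commands.includes("*chat-mode")) {
    problems.push("Agents must include the required *help and *chat-mode commands");
  }
  return problems;
}

// Example: checkAgentBasics("API Specialist IDE Agent", ".api-expert", ["*help", "*chat-mode", "*design-api"]);
```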
### 7. Workflow Integration Analysis
Analyze where this IDE agent fits in workflows:
1. **Load workflow definitions** from `/bmad-core/workflows/`
2. **Identify integration points**:
- Which workflow phases benefit from this agent?
- Can they replace or augment existing workflow steps?
- Do they enable new workflow capabilities?
3. **Suggest workflow enhancements**:
- For technical agents → development/implementation phases
- For testing agents → validation phases
- For design agents → planning/design phases
- For specialized agents → specific workflow steps
4. **Document recommendations**:
```markdown
## Workflow Integration
This agent enhances the following workflows:
- `greenfield-service`: API design phase (between architecture and implementation)
- `brownfield-service`: API refactoring and modernization
- User can specify: {custom workflow integration}
```
### 8. Team Integration Suggestions
Consider which teams benefit from this IDE agent:
1. **Analyze team compositions** in `/bmad-core/agent-teams/`
2. **Suggest team additions**:
- Technical specialists → development teams
- Quality specialists → full-stack teams
- Domain experts → relevant specialized teams
3. **Document integration**:
```markdown
## Team Integration
Recommended teams for this agent:
- `team-fullstack`: Provides specialized {domain} expertise
- `team-no-ui`: Enhances backend {capability}
- User proposed: {custom team integration}
```
### 9. Create the IDE Agent File
Create `/bmad-core/ide-agents/{agent-id}.ide.md` following schema structure:
(For user agents: `/bmad-core/ide-agents/.{agent-id}.ide.md`)
```markdown
# Role: {Title} IDE Agent
## File References
`taskroot`: `bmad-core/tasks/`
`templates`: `bmad-core/templates/`
{additional references}
## Persona
- **Name:** {Name}
- **Role:** {Role}
- **Identity:** {20+ char description}
- **Focus:** {20+ char objectives}
- **Style:** {20+ char communication style}
## Core Principles (Always Active)
- **{Principle}:** {Description}
- **{Principle}:** {Description}
- **Numbered Options Protocol:** When presenting multiple options, always use numbered lists for easy selection
## Critical Startup Operating Instructions
1. Announce your name and role, and let the user know they can say *help at any time...
2. {Additional startup instruction}
3. {Additional startup instruction}
## Commands
- `*help` - Show these available commands as a numbered list offering selection
- `*chat-mode` - Enter conversational mode, staying in character while offering `advanced-elicitation`...
- `*{command}` - {Description of what it does}
{additional commands}
{Optional sections like Expertise, Workflow, Protocol, etc.}
```
### 10. Validation and Testing
1. **Schema Validation**: Ensure all required fields are present
2. **Pattern Validation**: Check role name, command patterns
3. **Size Optimization**: Keep concise for IDE efficiency
4. **Command Testing**: Verify all commands are properly formatted
5. **Integration Testing**: Test in actual IDE environment
## Example: API Specialist IDE Agent
```markdown
# Role: API Specialist IDE Agent
## File References
`taskroot`: `bmad-core/tasks/`
`templates`: `bmad-core/templates/`
`default-template`: `bmad-core/templates/api-spec-tmpl`
## Persona
- **Name:** Alex
- **Role:** API Specialist
- **Identity:** REST API design expert specializing in scalable, secure service interfaces
- **Focus:** Creating clean, well-documented APIs that follow industry best practices
- **Style:** Direct, example-driven, focused on practical implementation patterns
## Core Principles (Always Active)
- **API-First Design:** Every endpoint designed with consumer needs in mind
- **Security by Default:** Authentication and authorization built into every design
- **Documentation Excellence:** APIs are only as good as their documentation
- **Numbered Options Protocol:** When presenting multiple options, always use numbered lists for easy selection
## Critical Startup Operating Instructions
1. Announce your name and role, and let the user know they can say *help at any time to list the available commands. Give this reminder in your first response even if the initial request is a question, wrapping it around your answer. For example: 'I am API Specialist Alex, {response}... Also remember, you can enter `*help` to see a list of commands at any time.'
2. Assess the API design context (REST, GraphQL, gRPC)
3. Focus on practical, implementable solutions
## Commands
- `*help` - Show these available commands as a numbered list offering selection
- `*chat-mode` - Enter conversational mode, staying in character while offering `advanced-elicitation` when providing advice or multiple options. Ends if another task or command is given
- `*design-api` - Design REST API endpoints for specified requirements
- `*create-spec` - Create OpenAPI specification using default template
- `*review-api` - Review existing API design for best practices
- `*security-check` - Analyze API security considerations
## Workflow Integration
This agent enhances the following workflows:
- `greenfield-service`: API design phase after architecture
- `brownfield-service`: API modernization and refactoring
- `greenfield-fullstack`: API contract definition between frontend/backend
## Team Integration
Recommended teams for this agent:
- `team-fullstack`: API contract expertise
- `team-no-ui`: Backend API specialization
- Any team building service-oriented architectures
```
## IDE Agent Creation Checklist
- [ ] Role name ends with "IDE Agent"
- [ ] All schema-required fields present
- [ ] Includes required File References
- [ ] Persona has all 5 required fields
- [ ] Minimum 3 Core Principles including Numbered Options Protocol
- [ ] First startup instruction announces name/role with *help
- [ ] Includes *help and *chat-mode commands
- [ ] Commands follow pattern requirements
- [ ] Workflow integration documented
- [ ] Team integration suggestions provided
- [ ] Validates against ide-agent-schema.yml
- [ ] Concise and focused on single expertise
## Best Practices
1. **Stay Focused**: IDE agents should excel at ONE thing
2. **Reference Tasks**: Don't duplicate task content
3. **Minimal Personality**: Just enough to be helpful
4. **Clear Commands**: Make it obvious what each command does
5. **Integration First**: Consider how agent enhances existing workflows
6. **Schema Compliance**: Always validate against the schema
This schema-driven approach ensures IDE agents are consistent, integrated, and valuable additions to the BMAD ecosystem.

View File

@@ -1,206 +0,0 @@
# Create Next Story Task
## Purpose
To identify the next logical story based on project progress and epic definitions, and then to prepare a comprehensive, self-contained, and actionable story file using the `Story Template`. This task ensures the story is enriched with all necessary technical context, requirements, and acceptance criteria, making it ready for efficient implementation by a Developer Agent with minimal need for additional research.
## Inputs for this Task
- Access to the project's documentation repository, specifically:
- `docs/index.md` (hereafter "Index Doc")
- All Epic files - located in one of these locations:
- Primary: `docs/prd/epic-{n}-{description}.md` (e.g., `epic-1-foundation-core-infrastructure.md`)
- Secondary: `docs/epics/epic-{n}-{description}.md`
- User-specified location if not found in above paths
- Existing story files in `docs/stories/`
- Main PRD (hereafter "PRD Doc")
- Main Architecture Document (hereafter "Main Arch Doc")
- Frontend Architecture Document (hereafter "Frontend Arch Doc," if relevant)
- Project Structure Guide (`docs/project-structure.md`)
- Operational Guidelines Document (`docs/operational-guidelines.md`)
- Technology Stack Document (`docs/tech-stack.md`)
- Data Models Document (as referenced in Index Doc)
- API Reference Document (as referenced in Index Doc)
- UI/UX Specifications, Style Guides, Component Guides (if relevant, as referenced in Index Doc)
- The `bmad-core/templates/story-tmpl.md` (hereafter "Story Template")
- The `bmad-core/checklists/story-draft-checklist.md` (hereafter "Story Draft Checklist")
- User confirmation to proceed with story identification and, if needed, to override warnings about incomplete prerequisite stories.
## Task Execution Instructions
### 1. Identify Next Story for Preparation
#### 1.1 Locate Epic Files
- First, determine where epic files are located:
- Check `docs/prd/` for files matching pattern `epic-{n}-*.md`
- If not found, check `docs/epics/` for files matching pattern `epic-{n}-*.md`
- If still not found, ask user: "Unable to locate epic files. Please specify the path where epic files are stored."
- Note: Epic files follow naming convention `epic-{n}-{description}.md` (e.g., `epic-1-foundation-core-infrastructure.md`)
#### 1.2 Review Existing Stories
- Review `docs/stories/` to find the highest-numbered story file.
- **If a highest story file exists (`{lastEpicNum}.{lastStoryNum}.story.md`):**
- Verify its `Status` is 'Done' (or equivalent).
- If not 'Done', present an alert to the user:
```plaintext
ALERT: Found incomplete story:
File: {lastEpicNum}.{lastStoryNum}.story.md
Status: [current status]
Would you like to:
1. View the incomplete story details (instructs user to do so, agent does not display)
2. Cancel new story creation at this time
3. Accept risk & Override to create the next story in draft
Please choose an option (1/2/3):
```
- Proceed only if user selects option 3 (Override) or if the last story was 'Done'.
- If proceeding: Look for the Epic File for `{lastEpicNum}` (e.g., `epic-{lastEpicNum}-*.md`) and check for a story numbered `{lastStoryNum + 1}`. If it exists and its prerequisites (per Epic File) are met, this is the next story.
- Else (story not found or prerequisites not met): The next story is the first story in the next Epic File (e.g., look for `epic-{lastEpicNum + 1}-*.md`, then `epic-{lastEpicNum + 2}-*.md`, etc.) whose prerequisites are met.
- **If no story files exist in `docs/stories/`:**
- The next story is the first story in the first epic file (look for `epic-1-*.md`, then `epic-2-*.md`, etc.) whose prerequisites are met.
- If no suitable story with met prerequisites is found, report to the user that story creation is blocked, specifying what prerequisites are pending. HALT task.
- Announce the identified story to the user: "Identified next story for preparation: {epicNum}.{storyNum} - {Story Title}".
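The identification flow above can be sketched roughly as follows. This is a minimal sketch, assuming the `{epicNum}.{storyNum}.story.md` naming and a `Status:` line inside each story file as described in this task; the helper names are hypothetical.
```typescript
import * as fs from "fs";
import * as path from "path";

// Rough sketch of the story-identification flow described above.
function findLatestStory(storiesDir: string): { epic: number; story: number; file: string } | null {
  if (!fs.existsSync(storiesDir)) return null;
  const stories = fs
    .readdirSync(storiesDir)
    .map((f) => {
      const m = f.match(/^(\d+)\.(\d+)\.story\.md$/);
      return m ? { epic: Number(m[1]), story: Number(m[2]), file: path.join(storiesDir, f) } : null;
    })
    .filter((s): s is { epic: number; story: number; file: string } => s !== null)
    .sort((a, b) => a.epic - b.epic || a.story - b.story);
  return stories.length ? stories[stories.length - 1] : null;
}

function isDone(storyFile: string): boolean {
  // The agent reads the story's Status field; "Done" is the expected completed state.
  return /^\s*Status:\s*Done\b/im.test(fs.readFileSync(storyFile, "utf8"));
}

// If the latest story exists but is not Done, alert the user (view / cancel / override)
// before looking for story {epic}.{story + 1} in the corresponding epic file.
```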
### 2. Gather Core Story Requirements (from Epic File)
- For the identified story, open its parent Epic File (e.g., `epic-{epicNum}-*.md` from the location identified in step 1.1).
- Extract: Exact Title, full Goal/User Story statement, initial list of Requirements, all Acceptance Criteria (ACs), and any predefined high-level Tasks.
- Keep a record of this original epic-defined scope for later deviation analysis.
### 3. Review Previous Story and Extract Dev Notes
[[LLM: This step is CRITICAL for continuity and learning from implementation experience]]
- If this is not the first story (i.e., previous story exists):
- Read the previous story file: `docs/stories/{prevEpicNum}.{prevStoryNum}.story.md`
- Pay special attention to:
- Dev Agent Record sections (especially Completion Notes and Debug Log References)
- Any deviations from planned implementation
- Technical decisions made during implementation
- Challenges encountered and solutions applied
- Any "lessons learned" or notes for future stories
- Extract relevant insights that might inform the current story's preparation
### 4. Gather & Synthesize Architecture Context from Sharded Docs
[[LLM: CRITICAL - You MUST gather technical details from the sharded architecture documents. NEVER make up technical details not found in these documents.]]
#### 4.1 Start with Architecture Index
- Read `docs/architecture/index.md` to understand the full scope of available documentation
- Identify which sharded documents are most relevant to the current story
#### 4.2 Recommended Reading Order Based on Story Type
[[LLM: Read documents in this order, but ALWAYS verify relevance to the specific story. Skip irrelevant sections but NEVER skip documents that contain information needed for the story.]]
**For ALL Stories:**
1. `docs/architecture/tech-stack.md` - Understand technology constraints and versions
2. `docs/architecture/unified-project-structure.md` - Know where code should be placed
3. `docs/architecture/coding-standards.md` - Ensure dev follows project conventions
4. `docs/architecture/testing-strategy.md` - Include testing requirements in tasks
**For Backend/API Stories, additionally read:**
5. `docs/architecture/data-models.md` - Data structures and validation rules
6. `docs/architecture/database-schema.md` - Database design and relationships
7. `docs/architecture/backend-architecture.md` - Service patterns and structure
8. `docs/architecture/rest-api-spec.md` - API endpoint specifications
9. `docs/architecture/external-apis.md` - Third-party integrations (if relevant)
**For Frontend/UI Stories, additionally read:**
5. `docs/architecture/frontend-architecture.md` - Component structure and patterns
6. `docs/architecture/components.md` - Specific component designs
7. `docs/architecture/core-workflows.md` - User interaction flows
8. `docs/architecture/data-models.md` - Frontend data handling
**For Full-Stack Stories:**
- Read both Backend and Frontend sections above
#### 4.3 Extract Story-Specific Technical Details
[[LLM: As you read each document, extract ONLY the information directly relevant to implementing the current story. Do NOT include general information unless it directly impacts the story implementation.]]
For each relevant document, extract:
- Specific data models, schemas, or structures the story will use
- API endpoints the story must implement or consume
- Component specifications for UI elements in the story
- File paths and naming conventions for new code
- Testing requirements specific to the story's features
- Security or performance considerations affecting the story
#### 4.4 Document Source References
[[LLM: ALWAYS cite the source document and section for each technical detail you include. This helps the dev agent verify information if needed.]]
Format references as: `[Source: architecture/{filename}.md#{section}]`
### 5. Verify Project Structure Alignment
- Cross-reference the story's requirements and anticipated file manipulations with the Project Structure Guide from `docs/architecture/unified-project-structure.md`.
- Ensure any file paths, component locations, or module names implied by the story align with defined structures.
- Document any structural conflicts, necessary clarifications, or undefined components/paths in a "Project Structure Notes" section within the story draft.
### 6. Populate Story Template with Full Context
- Create a new story file: `docs/stories/{epicNum}.{storyNum}.story.md`.
- Use the Story Template to structure the file.
- Fill in:
- Story `{EpicNum}.{StoryNum}: {Short Title Copied from Epic File}`
- `Status: Draft`
- `Story` (User Story statement from Epic)
- `Acceptance Criteria (ACs)` (from Epic, to be refined if needed based on context)
- **`Dev Technical Guidance` section (CRITICAL):**
[[LLM: This section MUST contain ONLY information extracted from the architecture shards. NEVER invent or assume technical details.]]
- Include ALL relevant technical details gathered from Steps 3 and 4, organized by category:
- **Previous Story Insights**: Key learnings or considerations from the previous story
- **Data Models**: Specific schemas, validation rules, relationships [with source references]
- **API Specifications**: Endpoint details, request/response formats, auth requirements [with source references]
- **Component Specifications**: UI component details, props, state management [with source references]
- **File Locations**: Exact paths where new code should be created based on project structure
- **Testing Requirements**: Specific test cases or strategies from testing-strategy.md
- **Technical Constraints**: Version requirements, performance considerations, security rules
- Every technical detail MUST include its source reference: `[Source: architecture/{filename}.md#{section}]`
- If information for a category is not found in the architecture docs, explicitly state: "No specific guidance found in architecture docs"
- **`Tasks / Subtasks` section:**
- Generate a detailed, sequential list of technical tasks based ONLY on:
- Requirements from the Epic
- Technical constraints from architecture shards
- Project structure from unified-project-structure.md
- Testing requirements from testing-strategy.md
- Each task must reference relevant architecture documentation
- Include unit testing as explicit subtasks based on testing-strategy.md
- Link tasks to ACs where applicable (e.g., `Task 1 (AC: 1, 3)`)
- Add notes on project structure alignment or discrepancies found in Step 5.
- Prepare content for the "Deviation Analysis" based on any conflicts between epic requirements and architecture constraints.
### 7. Run Story Draft Checklist
- Execute the Story Draft Checklist against the prepared story
- Document any issues or gaps identified
- Make necessary adjustments to meet quality standards
- Ensure all technical guidance is properly sourced from architecture docs
### 8. Finalize Story File
- Review all sections for completeness and accuracy
- Verify all source references are included for technical details
- Ensure tasks align with both epic requirements and architecture constraints
- Update status to "Draft"
- Save the story file to `docs/stories/{epicNum}.{storyNum}.story.md`
### 9. Report Completion
Provide a summary to the user including:
- Story created: `{epicNum}.{storyNum} - {Story Title}`
- Status: Draft
- Key technical components included from architecture docs
- Any deviations or conflicts noted between epic and architecture
- Recommendations for story review before approval
- Next steps: Story should be reviewed by PO for approval before dev work begins
[[LLM: Remember - The success of this task depends on extracting real, specific technical details from the architecture shards. The dev agent should have everything they need in the story file without having to search through multiple documents.]]

View File

@@ -1,223 +0,0 @@
# Create Team Task
This task guides you through creating a new BMAD agent team that conforms to the agent-team schema and effectively combines agents for specific project types.
**Note for User-Created Teams**: If creating a custom team for your own use (not part of the core BMAD system), prefix the team name with a period (e.g., `.team-frontend`) to ensure it's gitignored and won't conflict with repository updates.
## Prerequisites
1. Load and understand the team schema: `/bmad-core/schemas/agent-team-schema.yml`
2. Review existing teams in `/bmad-core/agent-teams/` for patterns and naming conventions
3. List available agents from `/agents/` to understand team composition options
4. Review workflows in `/bmad-core/workflows/` to align team capabilities
## Process
### 1. Define Team Purpose and Scope
Before selecting agents, clarify the team's mission:
- **Team Purpose**: What specific problems will this team solve?
- **Project Types**: Greenfield, brownfield, or both?
- **Technical Scope**: UI-focused, backend-only, or full-stack?
- **Team Size Consideration**: Smaller teams (3-5 agents) for focused work, larger teams (6-8) for comprehensive coverage
### 2. Create Team Metadata
Based on the schema requirements:
- **Team Name**: Must follow pattern `^Team .+$` (e.g., "Team Frontend", "Team Analytics")
- For user teams: prefix with period (e.g., "Team .MyCustom")
- **Description**: 20-500 characters explaining team's purpose, capabilities, and use cases
- **File Name**: `/bmad-core/agent-teams/team-{identifier}.yml`
- For user teams: `/bmad-core/agent-teams/.team-{identifier}.yml`
### 3. Select Agents Based on Purpose
#### Discover Available Agents
1. List all agents from `/agents/` directory
2. Review each agent's role and capabilities
3. Consider agent synergies and coverage
#### Agent Selection Guidelines
Based on team purpose, recommend agents:
**For Planning & Strategy Teams:**
- `bmad` (required orchestrator)
- `analyst` - Requirements gathering and research
- `pm` - Product strategy and documentation
- `po` - Validation and approval
- `architect` - Technical planning (if technical planning needed)
**For Design & UX Teams:**
- `bmad` (required orchestrator)
- `ux-expert` - User experience design
- `architect` - Frontend architecture
- `pm` - Product requirements alignment
- `po` - Design validation
**For Development Teams:**
- `bmad` (required orchestrator)
- `sm` - Sprint coordination
- `dev` - Implementation
- `qa` - Quality assurance
- `architect` - Technical guidance
**For Full-Stack Teams:**
- `bmad` (required orchestrator)
- `analyst` - Initial planning
- `pm` - Product management
- `ux-expert` - UI/UX design (if UI work included)
- `architect` - System architecture
- `po` - Validation
- Additional agents as needed
#### Special Cases
- **Using Wildcard**: If the team needs all agents, use `["bmad", "*"]`
- **Validation**: Schema requires `bmad` in all teams
### 4. Select Workflows
Based on the schema's workflow enum values and team composition:
1. **Analyze team capabilities** against available workflows:
- `brownfield-fullstack` - Requires full team with UX
- `brownfield-service` - Backend-focused team
- `brownfield-ui` - UI/UX-focused team
- `greenfield-fullstack` - Full team for new projects
- `greenfield-service` - Backend team for new services
- `greenfield-ui` - Frontend team for new UIs
2. **Match workflows to agents**:
- UI workflows require `ux-expert`
- Service workflows benefit from `architect` and `dev`
- All workflows benefit from planning agents (`analyst`, `pm`)
3. **Apply schema validation rules**:
- Teams without `ux-expert` shouldn't have UI workflows
- Teams named "Team No UI" can't have UI workflows
### 5. Create Team Configuration
Generate the configuration following the schema:
```yaml
bundle:
name: "{Team Name}" # Must match pattern "^Team .+$"
description: >-
{20-500 character description explaining purpose,
capabilities, and ideal use cases}
agents:
- bmad # Required orchestrator
- {agent-id-1}
- {agent-id-2}
# ... additional agents
workflows:
- {workflow-1} # From enum list
- {workflow-2}
# ... additional workflows
```
### 6. Validate Team Composition
Before finalizing, verify:
1. **Role Coverage**: Does the team have all necessary skills for its workflows?
2. **Size Optimization**:
- Minimum: 2 agents (bmad + 1)
- Recommended: 3-7 agents
- Maximum with wildcard: bmad + "*"
3. **Workflow Alignment**: Can the selected agents execute all workflows?
4. **Schema Compliance**: Configuration matches all schema requirements
### 7. Integration Recommendations
Document how this team integrates with existing system:
1. **Complementary Teams**: Which existing teams complement this one?
2. **Handoff Points**: Where does this team hand off to others?
3. **Use Case Scenarios**: Specific project types ideal for this team
### 8. Validation and Testing
1. **Schema Validation**: Ensure configuration matches agent-team-schema.yml
2. **Build Validation**: Run `npm run validate`
3. **Build Team**: Run `npm run build:team -t {team-name}`
4. **Size Check**: Verify output is appropriate for target platform
5. **Test Scenarios**: Run sample workflows with the team
## Example Team Creation
### Example 1: API Development Team
```yaml
bundle:
name: "Team API"
description: >-
Specialized team for API and backend service development. Focuses on
robust service architecture, implementation, and testing without UI
components. Ideal for microservices, REST APIs, and backend systems.
agents:
- bmad
- analyst
- architect
- dev
- qa
- po
workflows:
- greenfield-service
- brownfield-service
```
### Example 2: Rapid Prototyping Team
```yaml
bundle:
name: "Team Prototype"
description: >-
Agile team for rapid prototyping and proof of concept development.
Combines planning, design, and implementation for quick iterations
on new ideas and experimental features.
agents:
- bmad
- pm
- ux-expert
- architect
- dev
workflows:
- greenfield-ui
- greenfield-fullstack
```
## Team Creation Checklist
- [ ] Team purpose clearly defined
- [ ] Name follows schema pattern "Team {Name}"
- [ ] Description is 20-500 characters
- [ ] Includes bmad orchestrator
- [ ] Agents align with team purpose
- [ ] Workflows match team capabilities
- [ ] No conflicting validations (e.g., no-UI team with UI workflows)
- [ ] Configuration validates against schema
- [ ] Build completes successfully
- [ ] Output size appropriate for platform
## Best Practices
1. **Start Focused**: Create teams with specific purposes rather than general-purpose teams
2. **Consider Workflow**: Order agents by typical workflow sequence
3. **Avoid Redundancy**: Don't duplicate roles unless needed
4. **Document Rationale**: Explain why each agent is included
5. **Test Integration**: Verify team works well with selected workflows
6. **Iterate**: Refine team composition based on usage
This schema-driven approach ensures teams are well-structured, purposeful, and integrate seamlessly with the BMAD ecosystem.

View File

@@ -1,97 +0,0 @@
# Checklist Validation Task
This task provides instructions for validating documentation against checklists. The agent MUST follow these instructions to ensure thorough and systematic validation of documents.
## Context
The BMAD Method uses various checklists to ensure quality and completeness of different artifacts. Each checklist contains embedded prompts and instructions to guide the LLM through thorough validation and advanced elicitation. The checklists automatically identify their required artifacts and guide the validation process.
## Available Checklists
If the user asks for a list or does not specify a checklist, list the checklists available to the agent persona. If the task is not being run with a specific agent, tell the user to check the bmad-core/checklists folder and select the appropriate one to run.
## Instructions
1. **Initial Assessment**
- If user or the task being run provides a checklist name:
- Try fuzzy matching (e.g. "architecture checklist" -> "architect-checklist")
- If multiple matches found, ask user to clarify
- Load the appropriate checklist from bmad-core/checklists/
- If no checklist specified:
- Ask the user which checklist they want to use
- Present the available options from the files in the checklists folder
- Confirm if they want to work through the checklist:
- Section by section (interactive mode - very time consuming)
- All at once (YOLO mode - recommended for checklists; a summary of findings by section will be presented at the end for discussion)
2. **Document and Artifact Gathering**
- Each checklist will specify its required documents/artifacts at the beginning
- Follow the checklist's specific instructions for what to gather; generally a file can be resolved in the docs folder. If it cannot be found, or you are unsure, halt and confirm the location with the user.
3. **Checklist Processing**
If in interactive mode:
- Work through each section of the checklist one at a time
- For each section:
- Review all items in the section following instructions for that section embedded in the checklist
- Check each item against the relevant documentation or artifacts as appropriate
- Present summary of findings for that section, highlighting warnings, errors and non applicable items (rationale for non-applicability).
- Get user confirmation before proceeding to the next section, or halt and take corrective action if anything major is found
If in YOLO mode:
- Process all sections at once
- Create a comprehensive report of all findings
- Present the complete analysis to the user
4. **Validation Approach**
For each checklist item:
- Read and understand the requirement
- Look for evidence in the documentation that satisfies the requirement
- Consider both explicit mentions and implicit coverage
- Aside from this, follow all LLM instructions embedded in the checklist
- Mark items as:
- ✅ PASS: Requirement clearly met
- ❌ FAIL: Requirement not met or insufficient coverage
- ⚠️ PARTIAL: Some aspects covered but needs improvement
- N/A: Not applicable to this case
5. **Section Analysis**
For each section:
- Think step by step to calculate the pass rate
- Identify common themes in failed items
- Provide specific recommendations for improvement
- In interactive mode, discuss findings with user
- Document any user decisions or explanations
6. **Final Report**
Prepare a summary that includes:
- Overall checklist completion status
- Pass rates by section
- List of failed items with context
- Specific recommendations for improvement
- Any sections or items marked as N/A with justification
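For the section analysis in step 5, the pass-rate arithmetic can be sketched as below. Excluding N/A items from the denominator is an assumption; adjust if a specific checklist says otherwise.
```typescript
// Minimal sketch of the per-section pass-rate calculation.
type ItemStatus = "PASS" | "FAIL" | "PARTIAL" | "N/A";

function sectionPassRate(items: ItemStatus[]): number {
  const applicable = items.filter((s) => s !== "N/A"); // N/A items are excluded (assumption)
  if (applicable.length === 0) return 100;
  const passed = applicable.filter((s) => s === "PASS").length;
  return Math.round((passed / applicable.length) * 100);
}

// Example: sectionPassRate(["PASS", "PARTIAL", "FAIL", "N/A", "PASS"]) => 50
```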
## Checklist Execution Methodology
Each checklist now contains embedded LLM prompts and instructions that will:
1. **Guide thorough thinking** - Prompts ensure deep analysis of each section
2. **Request specific artifacts** - Clear instructions on what documents/access is needed
3. **Provide contextual guidance** - Section-specific prompts for better validation
4. **Generate comprehensive reports** - Final summary with detailed findings
The LLM will:
- Execute the complete checklist validation
- Present a final report with pass/fail rates and key findings
- Offer to provide detailed analysis of any section, especially those with warnings or failures

View File

@@ -1,58 +0,0 @@
# Create AI Frontend Prompt Task
## Purpose
To generate a masterful, comprehensive, and optimized prompt that can be used with AI-driven frontend development tools (e.g., Lovable, Vercel v0, or similar) to scaffold or generate significant portions of the frontend application.
## Inputs
- Completed UI/UX Specification (`front-end-spec-tmpl`)
- Completed Frontend Architecture Document (`front-end-architecture`)
- Main System Architecture Document (`architecture` - for API contracts and tech stack)
- Primary Design Files (Figma, Sketch, etc. - for visual context if the tool can accept it or if descriptions are needed)
## Key Activities & Instructions
1. **Confirm Target AI Generation Platform:**
- Ask the user to specify which AI frontend generation tool/platform they intend to use (e.g., "Lovable.ai", "Vercel v0", "GPT-4 with direct code generation instructions", etc.).
- Explain that prompt optimization might differ slightly based on the platform's capabilities and preferred input format.
2. **Synthesize Inputs into a Structured Prompt:**
- **Overall Project Context:**
- Briefly state the project's purpose (from brief/PRD).
- Specify the chosen frontend framework, core libraries, and UI component library (from `front-end-architecture` and main `architecture`).
- Mention the styling approach (e.g., Tailwind CSS, CSS Modules).
- **Design System & Visuals:**
- Reference the primary design files (e.g., Figma link).
- If the tool doesn't directly ingest design files, describe the overall visual style, color palette, typography, and key branding elements (from `front-end-spec-tmpl`).
- List any global UI components or design tokens that should be defined or adhered to.
- **Application Structure & Routing:**
- Describe the main pages/views and their routes (from `front-end-architecture` - Routing Strategy).
- Outline the navigation structure (from `front-end-spec-tmpl`).
- **Key User Flows & Page-Level Interactions:**
- For a few critical user flows (from `front-end-spec-tmpl`):
- Describe the sequence of user actions and expected UI changes on each relevant page.
- Specify API calls to be made (referencing API endpoints from the main `architecture`) and how data should be displayed or used.
- **Component Generation Instructions (Iterative or Key Components):**
- Based on the chosen AI tool's capabilities, decide on a strategy:
- **Option 1 (Scaffolding):** Prompt for the generation of main page structures, layouts, and placeholders for components.
- **Option 2 (Key Component Generation):** Select a few critical or complex components from the `front-end-architecture` (Component Breakdown) and provide detailed specifications for them (props, state, basic behavior, key UI elements).
- **Option 3 (Holistic, if tool supports):** Attempt to describe the entire application structure and key components more broadly.
- <important_note>Advise the user that generating an entire complex application perfectly in one go is rare. Iterative prompting or focusing on sections/key components is often more effective.</important_note>
- **State Management (High-Level Pointers):**
- Mention the chosen state management solution (e.g., "Use Redux Toolkit").
- For key pieces of data, indicate if they should be managed in global state.
- **API Integration Points:**
- For pages/components that fetch or submit data, clearly state the relevant API endpoints (from `architecture`) and the expected data shapes (can reference schemas in `data-models` or `api-reference` sections of the architecture doc).
- **Critical "Don'ts" or Constraints:**
- e.g., "Do not use deprecated libraries." "Ensure all forms have basic client-side validation."
- **Platform-Specific Optimizations:**
- If the chosen AI tool has known best practices for prompting (e.g., specific keywords, structure, level of detail), incorporate them. (This might require the agent to have some general knowledge or to ask the user if they know any such specific prompt modifiers for their chosen tool).
3. **Present and Refine the Master Prompt:**
- Output the generated prompt in a clear, copy-pasteable format (e.g., a large code block).
- Explain the structure of the prompt and why certain information was included.
- Work with the user to refine the prompt based on their knowledge of the target AI tool and any specific nuances they want to emphasize.
- <important_note>Remind the user that the generated code from the AI tool will likely require review, testing, and further refinement by developers.</important_note>
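As a rough illustration of how the sections from step 2 might be assembled into a single copy-pasteable prompt: the field names below are hypothetical, and the real content comes from the input documents and the user's chosen platform.
```typescript
// Hypothetical assembly of the master prompt from the sections described in step 2.
interface PromptInputs {
  projectContext: string;        // purpose, framework, libraries, styling approach
  designSystem: string;          // visual style, palette, typography, design tokens
  structureAndRouting: string;   // pages, routes, navigation
  keyUserFlows: string;          // critical flows with API calls and UI changes
  componentInstructions: string; // scaffolding or key-component specifications
  stateManagement: string;       // chosen solution and global-state pointers
  apiIntegration: string;        // endpoints and expected data shapes
  constraints: string;           // critical "don'ts"
}

function buildMasterPrompt(p: PromptInputs): string {
  return [
    "## Project Context", p.projectContext,
    "## Design System & Visuals", p.designSystem,
    "## Application Structure & Routing", p.structureAndRouting,
    "## Key User Flows", p.keyUserFlows,
    "## Component Generation Instructions", p.componentInstructions,
    "## State Management", p.stateManagement,
    "## API Integration Points", p.apiIntegration,
    "## Critical Constraints", p.constraints,
  ].join("\n\n");
}
```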

View File

@@ -1,177 +0,0 @@
# Index Documentation Task
## Purpose
This task maintains the integrity and completeness of the `docs/index.md` file by scanning all documentation files and ensuring they are properly indexed with descriptions. It handles both root-level documents and documents within subfolders, organizing them hierarchically.
## Task Instructions
You are now operating as a Documentation Indexer. Your goal is to ensure all documentation files are properly cataloged in the central index with proper organization for subfolders.
### Required Steps
1. First, locate and scan:
- The `docs/` directory and all subdirectories
- The existing `docs/index.md` file (create if absent)
- All markdown (`.md`) and text (`.txt`) files in the documentation structure
- Note the folder structure for hierarchical organization
2. For the existing `docs/index.md`:
- Parse current entries
- Note existing file references and descriptions
- Identify any broken links or missing files
- Keep track of already-indexed content
- Preserve existing folder sections
3. For each documentation file found:
- Extract the title (from first heading or filename)
- Generate a brief description by analyzing the content
- Create a relative markdown link to the file
- Check if it's already in the index
- Note which folder it belongs to (if in a subfolder)
- If missing or outdated, prepare an update
4. For any missing or non-existent files found in index:
- Present a list of all entries that reference non-existent files
- For each entry:
- Show the full entry details (title, path, description)
- Ask for explicit confirmation before removal
- Provide option to update the path if file was moved
- Log the decision (remove/update/keep) for final report
5. Update `docs/index.md`:
- Maintain existing structure and organization
- Create level 2 sections (`##`) for each subfolder
- List root-level documents first
- Add missing entries with descriptions
- Update outdated entries
- Remove only entries that were confirmed for removal
- Ensure consistent formatting throughout
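The mechanical parts of these steps (scanning for documentation files and extracting titles) can be sketched as follows; description writing and user confirmations are left to the agent, and the exclusion list and helper names are assumptions.
```typescript
import * as fs from "fs";
import * as path from "path";

// Rough sketch of scanning docs/ and pulling a title from each file.
function listDocFiles(docsDir: string): string[] {
  const out: string[] = [];
  for (const entry of fs.readdirSync(docsDir, { withFileTypes: true })) {
    // Hidden folders and common noise are excluded by default (user-configurable).
    if (entry.name.startsWith(".") || entry.name === "node_modules") continue;
    const full = path.join(docsDir, entry.name);
    if (entry.isDirectory()) out.push(...listDocFiles(full));
    else if (/\.(md|txt)$/i.test(entry.name)) out.push(full);
  }
  return out;
}

function extractTitle(filePath: string): string {
  const text = fs.readFileSync(filePath, "utf8");
  const heading = text.match(/^#\s+(.+)$/m);
  // Fall back to the filename when no top-level heading exists.
  return heading ? heading[1].trim() : path.basename(filePath, path.extname(filePath));
}
```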
### Index Structure Format
The index should be organized as follows:
```markdown
# Documentation Index
## Root Documents
### [Document Title](./document.md)
Brief description of the document's purpose and contents.
### [Another Document](./another.md)
Description here.
## Folder Name
Documents within the `folder-name/` directory:
### [Document in Folder](./folder-name/document.md)
Description of this document.
### [Another in Folder](./folder-name/another.md)
Description here.
## Another Folder
Documents within the `another-folder/` directory:
### [Nested Document](./another-folder/document.md)
Description of nested document.
```
### Index Entry Format
Each entry should follow this format:
```markdown
### [Document Title](relative/path/to/file.md)
Brief description of the document's purpose and contents.
```
### Rules of Operation
1. NEVER modify the content of indexed files
2. Preserve existing descriptions in index.md when they are adequate
3. Maintain any existing categorization or grouping in the index
4. Use relative paths for all links (starting with `./`)
5. Ensure descriptions are concise but informative
6. NEVER remove entries without explicit confirmation
7. Report any broken links or inconsistencies found
8. Allow path updates for moved files before considering removal
9. Create folder sections using level 2 headings (`##`)
10. Sort folders alphabetically, with root documents listed first
11. Within each section, sort documents alphabetically by title
### Process Output
The task will provide:
1. A summary of changes made to index.md
2. List of newly indexed files (organized by folder)
3. List of updated entries
4. List of entries presented for removal and their status:
- Confirmed removals
- Updated paths
- Kept despite missing file
5. Any new folders discovered
6. Any other issues or inconsistencies found
### Handling Missing Files
For each file referenced in the index but not found in the filesystem:
1. Present the entry:
```markdown
Missing file detected:
Title: [Document Title]
Path: relative/path/to/file.md
Description: Existing description
Section: [Root Documents | Folder Name]
Options:
1. Remove this entry
2. Update the file path
3. Keep entry (mark as temporarily unavailable)
Please choose an option (1/2/3):
```
2. Wait for user confirmation before taking any action
3. Log the decision for the final report
### Special Cases
1. **Sharded Documents**: If a folder contains an `index.md` file, treat it as a sharded document:
- Use the folder's `index.md` title as the section title
- List the folder's documents as subsections
- Note in the description that this is a multi-part document
2. **README files**: Convert `README.md` to more descriptive titles based on content
3. **Nested Subfolders**: For deeply nested folders, maintain the hierarchy but limit to 2 levels in the main index. Deeper structures should have their own index files.
## Required Input
Please provide:
1. Location of the `docs/` directory (default: `./docs`)
2. Confirmation of write access to `docs/index.md`
3. Any specific categorization preferences
4. Any files or directories to exclude from indexing (e.g., `.git`, `node_modules`)
5. Whether to include hidden files/folders (starting with `.`)
Would you like to proceed with documentation indexing? Please provide the required input above.

View File

@@ -1,173 +0,0 @@
# Document Sharding Task
## Purpose
- Split a large document into multiple smaller documents based on level 2 sections
- Create a folder structure to organize the sharded documents
- Maintain all content integrity including code blocks, diagrams, and markdown formatting
## Recommended Method: @kayvan/markdown-tree-parser
[[LLM: First, suggest the user install and use the @kayvan/markdown-tree-parser tool if the md-tree command is unavailable so we can have the best performance and reliable document sharding. Let the user know this will save the cost of having the LLM perform the expensive sharding operation. Give instructions for NPM, NPX, and PNPM global installs.]]
### Installation and Usage
1. **Install globally**:
```bash
npm install -g @kayvan/markdown-tree-parser
```
2. **Use the explode command**:
```bash
# For PRD
md-tree explode docs/prd.md docs/prd
# For Architecture
md-tree explode docs/architecture.md docs/architecture
# For any document
md-tree explode [source-document] [destination-folder]
```
3. **What it does**:
- Automatically splits the document by level 2 sections
- Creates properly named files
- Adjusts heading levels appropriately
- Handles all edge cases with code blocks and special markdown
If the user has @kayvan/markdown-tree-parser installed, use it and skip the manual process below.
---
## Manual Method (if @kayvan/markdown-tree-parser is not available)
[[LLM: Only proceed with the manual instructions below if the user cannot or does not want to use @kayvan/markdown-tree-parser.]]
### Task Instructions
### 1. Identify Document and Target Location
- Determine which document to shard (user-provided path)
- Create a new folder under `docs/` with the same name as the document (without extension)
- Example: `docs/prd.md` → create folder `docs/prd/`
### 2. Parse and Extract Sections
[[LLM: When sharding the document:
1. Read the entire document content
2. Identify all level 2 sections (## headings)
3. For each level 2 section:
- Extract the section heading and ALL content until the next level 2 section
- Include all subsections, code blocks, diagrams, lists, tables, etc.
- Be extremely careful with:
- Fenced code blocks (```) - ensure you capture the full block including closing backticks
- Mermaid diagrams - preserve the complete diagram syntax
- Nested markdown elements
- Multi-line content that might contain ## inside code blocks
CRITICAL: Use proper parsing that understands markdown context. A ## inside a code block is NOT a section header.]]
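A minimal sketch of fence-aware level 2 splitting, assuming standard backtick or tilde fences (the helper name is hypothetical):
```typescript
// Splits markdown on level 2 headings while ignoring "##" lines inside fenced code blocks.
function splitLevel2Sections(markdown: string): { heading: string; body: string[] }[] {
  const sections: { heading: string; body: string[] }[] = [];
  let current: { heading: string; body: string[] } | null = null;
  let inFence = false;
  for (const line of markdown.split("\n")) {
    if (/^(`{3}|~{3})/.test(line)) inFence = !inFence; // toggle on fence delimiters
    if (!inFence && /^## /.test(line)) {
      current = { heading: line.replace(/^## /, "").trim(), body: [] };
      sections.push(current);
    } else if (current) {
      current.body.push(line);
    }
    // Content before the first level 2 heading stays with the index.md preamble.
  }
  return sections;
}
```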
### 3. Create Individual Files
For each extracted section:
1. **Generate filename**: Convert the section heading to lowercase-dash-case
- Remove special characters
- Replace spaces with dashes
- Example: "## Tech Stack" → `tech-stack.md`
2. **Adjust heading levels**:
- The level 2 heading becomes level 1 (# instead of ##)
- All subsection levels decrease by 1:
```txt
- ### → ##
- #### → ###
- ##### → ####
- etc.
```
3. **Write content**: Save the adjusted content to the new file
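The filename and heading-level adjustments from steps 1-2 can be sketched as follows (illustrative only; the slug rules are one reasonable reading of step 1, and the helper names are hypothetical):
```typescript
// Sketch of the lowercase-dash-case filename rule from step 1.
function toFileName(sectionHeading: string): string {
  return (
    sectionHeading
      .toLowerCase()
      .replace(/[^a-z0-9\s-]/g, "") // drop special characters (including a leading "##")
      .trim()
      .replace(/\s+/g, "-") + ".md" // spaces become dashes
  );
}

// Sketch of the heading demotion from step 2: shift every heading up one level
// (## -> #, ### -> ##, ...), skipping lines inside fenced code blocks.
function demoteHeadings(sectionText: string): string {
  let inFence = false;
  return sectionText
    .split("\n")
    .map((line) => {
      if (/^(`{3}|~{3})/.test(line)) inFence = !inFence;
      return !inFence && /^#{2,}\s/.test(line) ? line.slice(1) : line;
    })
    .join("\n");
}

// Example: toFileName("## Tech Stack") === "tech-stack.md"
```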
### 4. Create Index File
Create an `index.md` file in the sharded folder that:
1. Contains the original level 1 heading and any content before the first level 2 section
2. Lists all the sharded files with links:
```markdown
# Original Document Title
[Original introduction content if any]
## Sections
- [Section Name 1](./section-name-1.md)
- [Section Name 2](./section-name-2.md)
- [Section Name 3](./section-name-3.md)
...
```
### 5. Preserve Special Content
[[LLM: Pay special attention to preserving:
1. **Code blocks**: Must capture complete blocks including:
```language
content
```
2. **Mermaid diagrams**: Preserve complete syntax:
```mermaid
graph TD
...
```
3. **Tables**: Maintain proper markdown table formatting
4. **Lists**: Preserve indentation and nesting
5. **Inline code**: Preserve backticks
6. **Links and references**: Keep all markdown links intact
7. **Template markup**: If documents contain {{placeholders}} or [[LLM instructions]], preserve exactly]]
### 6. Validation
After sharding:
1. Verify all sections were extracted
2. Check that no content was lost
3. Ensure heading levels were properly adjusted
4. Confirm all files were created successfully
### 7. Report Results
Provide a summary:
```text
Document sharded successfully:
- Source: [original document path]
- Destination: docs/[folder-name]/
- Files created: [count]
- Sections:
- section-name-1.md: "Section Title 1"
- section-name-2.md: "Section Title 2"
...
```
## Important Notes
- Never modify the actual content, only adjust heading levels
- Preserve ALL formatting, including whitespace where significant
- Handle edge cases like sections with code blocks containing ## symbols
- Ensure the sharding is reversible (could reconstruct the original from shards)