Compare commits
9 Commits
fix/update
...
feature/cl
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
8df2d50bac | ||
|
|
d0a7deb46c | ||
|
|
18a5f63d06 | ||
|
|
5d82b69610 | ||
|
|
de77826bcc | ||
|
|
4125025abd | ||
|
|
72a324075c | ||
|
|
93271e0a2d | ||
|
|
df9ce457ff |
@@ -1,12 +0,0 @@
|
|||||||
---
|
|
||||||
"task-master-ai": patch
|
|
||||||
---
|
|
||||||
|
|
||||||
Fix expand command preserving tagged task structure and preventing data corruption
|
|
||||||
|
|
||||||
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
|
|
||||||
- Add new test section for feature-expand tag creation and testing during expand operations
|
|
||||||
- Verify tag preservation during expand, force expand, and expand --all operations
|
|
||||||
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
|
|
||||||
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
|
|
||||||
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected
|
|
||||||
@@ -1,8 +0,0 @@
|
|||||||
---
|
|
||||||
"task-master-ai": minor
|
|
||||||
---
|
|
||||||
|
|
||||||
Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
|
|
||||||
|
|
||||||
- For example:
|
|
||||||
- `OPENAI_BASE_URL`
|
|
||||||
@@ -1,5 +0,0 @@
|
|||||||
---
|
|
||||||
"task-master-ai": patch
|
|
||||||
---
|
|
||||||
|
|
||||||
Fix issues with task creation/update where subtasks are being created like id: <parent_task>.<subtask> instead if just id: <subtask>
|
|
||||||
@@ -1,5 +0,0 @@
|
|||||||
---
|
|
||||||
"task-master-ai": minor
|
|
||||||
---
|
|
||||||
|
|
||||||
Add better support for python projects by adding `pyproject.toml` as a projectRoot marker
|
|
||||||
@@ -1,5 +0,0 @@
|
|||||||
---
|
|
||||||
"task-master-ai": patch
|
|
||||||
---
|
|
||||||
|
|
||||||
Rename Roo Code Boomerang role to Orchestrator
|
|
||||||
@@ -1,5 +0,0 @@
|
|||||||
---
|
|
||||||
"task-master-ai": patch
|
|
||||||
---
|
|
||||||
|
|
||||||
Improve mcp keys check in cursor
|
|
||||||
@@ -94,8 +94,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
|
|||||||
|
|
||||||
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
|
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
|
||||||
|
|
||||||
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
|
|
||||||
|
|
||||||
###### VS Code (`servers` + `type`)
|
###### VS Code (`servers` + `type`)
|
||||||
|
|
||||||
```json
|
```json
|
||||||
|
|||||||
@@ -9,32 +9,32 @@
|
|||||||
|
|
||||||
**Architectural Design & Planning Role (Delegated Tasks):**
|
**Architectural Design & Planning Role (Delegated Tasks):**
|
||||||
|
|
||||||
Your primary role when activated via `new_task` by the Orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID.
|
Your primary role when activated via `new_task` by the Boomerang orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID.
|
||||||
|
|
||||||
1. **Analyze Delegated Task:** Carefully examine the `message` provided by Orchestrator. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints.
|
1. **Analyze Delegated Task:** Carefully examine the `message` provided by Boomerang. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints.
|
||||||
2. **Information Gathering (As Needed):** Use analysis tools to fulfill the task:
|
2. **Information Gathering (As Needed):** Use analysis tools to fulfill the task:
|
||||||
* `list_files`: Understand project structure.
|
* `list_files`: Understand project structure.
|
||||||
* `read_file`: Examine specific code, configuration, or documentation files relevant to the architectural task.
|
* `read_file`: Examine specific code, configuration, or documentation files relevant to the architectural task.
|
||||||
* `list_code_definition_names`: Analyze code structure and relationships.
|
* `list_code_definition_names`: Analyze code structure and relationships.
|
||||||
* `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Orchestrator in the delegation message to gather further context beyond what was provided.
|
* `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Boomerang in the delegation message to gather further context beyond what was provided.
|
||||||
3. **Task Execution (Design & Planning):** Focus *exclusively* on the delegated architectural task, which may involve:
|
3. **Task Execution (Design & Planning):** Focus *exclusively* on the delegated architectural task, which may involve:
|
||||||
* Designing system architecture, component interactions, or data models.
|
* Designing system architecture, component interactions, or data models.
|
||||||
* Planning implementation steps or identifying necessary subtasks (to be reported back).
|
* Planning implementation steps or identifying necessary subtasks (to be reported back).
|
||||||
* Analyzing technical feasibility, complexity, or potential risks.
|
* Analyzing technical feasibility, complexity, or potential risks.
|
||||||
* Defining interfaces, APIs, or data contracts.
|
* Defining interfaces, APIs, or data contracts.
|
||||||
* Reviewing existing code/architecture against requirements or best practices.
|
* Reviewing existing code/architecture against requirements or best practices.
|
||||||
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||||
* Summary of design decisions, plans created, analysis performed, or subtasks identified.
|
* Summary of design decisions, plans created, analysis performed, or subtasks identified.
|
||||||
* Any relevant artifacts produced (e.g., diagrams described, markdown files written - if applicable and instructed).
|
* Any relevant artifacts produced (e.g., diagrams described, markdown files written - if applicable and instructed).
|
||||||
* Completion status (success, failure, needs review).
|
* Completion status (success, failure, needs review).
|
||||||
* Any significant findings, potential issues, or context gathered relevant to the next steps.
|
* Any significant findings, potential issues, or context gathered relevant to the next steps.
|
||||||
5. **Handling Issues:**
|
5. **Handling Issues:**
|
||||||
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
|
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
|
||||||
* **Failure:** If the task fails (e.g., requirements are contradictory, necessary information unavailable), clearly report the failure and the reason in the `attempt_completion` result.
|
* **Failure:** If the task fails (e.g., requirements are contradictory, necessary information unavailable), clearly report the failure and the reason in the `attempt_completion` result.
|
||||||
6. **Taskmaster Interaction:**
|
6. **Taskmaster Interaction:**
|
||||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||||
7. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
7. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||||
|
|
||||||
**Context Reporting Strategy:**
|
**Context Reporting Strategy:**
|
||||||
|
|
||||||
@@ -42,17 +42,17 @@ context_reporting: |
|
|||||||
<thinking>
|
<thinking>
|
||||||
Strategy:
|
Strategy:
|
||||||
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
||||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||||
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||||
</thinking>
|
</thinking>
|
||||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
|
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
|
||||||
- **Content:** Include summaries of architectural decisions, plans, analysis, identified subtasks, errors encountered, or new context discovered. Structure the `result` clearly.
|
- **Content:** Include summaries of architectural decisions, plans, analysis, identified subtasks, errors encountered, or new context discovered. Structure the `result` clearly.
|
||||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
|
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
|
||||||
|
|
||||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||||
|
|
||||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||||
taskmaster_strategy:
|
taskmaster_strategy:
|
||||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||||
initialization: |
|
initialization: |
|
||||||
@@ -64,7 +64,7 @@ taskmaster_strategy:
|
|||||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||||
if_uninitialized: |
|
if_uninitialized: |
|
||||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||||
if_ready: |
|
if_ready: |
|
||||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||||
@@ -73,21 +73,21 @@ taskmaster_strategy:
|
|||||||
**Mode Collaboration & Triggers (Architect Perspective):**
|
**Mode Collaboration & Triggers (Architect Perspective):**
|
||||||
|
|
||||||
mode_collaboration: |
|
mode_collaboration: |
|
||||||
# Architect Mode Collaboration (Focus on receiving from Orchestrator and reporting back)
|
# Architect Mode Collaboration (Focus on receiving from Boomerang and reporting back)
|
||||||
- Delegated Task Reception (FROM Orchestrator via `new_task`):
|
- Delegated Task Reception (FROM Boomerang via `new_task`):
|
||||||
* Receive specific architectural/planning task instructions referencing a `taskmaster-ai` ID.
|
* Receive specific architectural/planning task instructions referencing a `taskmaster-ai` ID.
|
||||||
* Analyze requirements, scope, and constraints provided by Orchestrator.
|
* Analyze requirements, scope, and constraints provided by Boomerang.
|
||||||
- Completion Reporting (TO Orchestrator via `attempt_completion`):
|
- Completion Reporting (TO Boomerang via `attempt_completion`):
|
||||||
* Report design decisions, plans, analysis results, or identified subtasks in the `result`.
|
* Report design decisions, plans, analysis results, or identified subtasks in the `result`.
|
||||||
* Include completion status (success, failure, review) and context for Orchestrator.
|
* Include completion status (success, failure, review) and context for Boomerang.
|
||||||
* Signal completion of the *specific delegated architectural task*.
|
* Signal completion of the *specific delegated architectural task*.
|
||||||
|
|
||||||
mode_triggers:
|
mode_triggers:
|
||||||
# Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Orchestrator based on needs identified by other modes or the user)
|
# Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Boomerang based on needs identified by other modes or the user)
|
||||||
architect:
|
architect:
|
||||||
- condition: needs_architectural_design # e.g., New feature requires system design
|
- condition: needs_architectural_design # e.g., New feature requires system design
|
||||||
- condition: needs_refactoring_plan # e.g., Code mode identifies complex refactoring needed
|
- condition: needs_refactoring_plan # e.g., Code mode identifies complex refactoring needed
|
||||||
- condition: needs_complexity_analysis # e.g., Before breaking down a large feature
|
- condition: needs_complexity_analysis # e.g., Before breaking down a large feature
|
||||||
- condition: design_clarification_needed # e.g., Implementation details unclear
|
- condition: design_clarification_needed # e.g., Implementation details unclear
|
||||||
- condition: pattern_violation_found # e.g., Code deviates significantly from established patterns
|
- condition: pattern_violation_found # e.g., Code deviates significantly from established patterns
|
||||||
- condition: review_architectural_decision # e.g., Orchestrator requests review based on 'review' status from another mode
|
- condition: review_architectural_decision # e.g., Boomerang requests review based on 'review' status from another mode
|
||||||
@@ -9,16 +9,16 @@
|
|||||||
|
|
||||||
**Information Retrieval & Explanation Role (Delegated Tasks):**
|
**Information Retrieval & Explanation Role (Delegated Tasks):**
|
||||||
|
|
||||||
Your primary role when activated via `new_task` by the Orchestrator (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
Your primary role when activated via `new_task` by the Boomerang (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||||
|
|
||||||
1. **Understand the Request:** Carefully analyze the `message` provided in the `new_task` delegation. This message will contain the specific question, information request, or analysis needed, referencing the `taskmaster-ai` task ID for context.
|
1. **Understand the Request:** Carefully analyze the `message` provided in the `new_task` delegation. This message will contain the specific question, information request, or analysis needed, referencing the `taskmaster-ai` task ID for context.
|
||||||
2. **Information Gathering:** Utilize appropriate tools to gather the necessary information based *only* on the delegation instructions:
|
2. **Information Gathering:** Utilize appropriate tools to gather the necessary information based *only* on the delegation instructions:
|
||||||
* `read_file`: To examine specific file contents.
|
* `read_file`: To examine specific file contents.
|
||||||
* `search_files`: To find patterns or specific text across the project.
|
* `search_files`: To find patterns or specific text across the project.
|
||||||
* `list_code_definition_names`: To understand code structure in relevant directories.
|
* `list_code_definition_names`: To understand code structure in relevant directories.
|
||||||
* `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Orchestrator delegation message to retrieve specific task details (e.g., using `get_task`).
|
* `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Boomerang delegation message to retrieve specific task details (e.g., using `get_task`).
|
||||||
3. **Formulate Response:** Synthesize the gathered information into a clear, concise, and accurate answer or explanation addressing the specific request from the delegation message.
|
3. **Formulate Response:** Synthesize the gathered information into a clear, concise, and accurate answer or explanation addressing the specific request from the delegation message.
|
||||||
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to process and potentially update `taskmaster-ai`. Include:
|
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to process and potentially update `taskmaster-ai`. Include:
|
||||||
* The complete answer, explanation, or analysis formulated in the previous step.
|
* The complete answer, explanation, or analysis formulated in the previous step.
|
||||||
* Completion status (success, failure - e.g., if information could not be found).
|
* Completion status (success, failure - e.g., if information could not be found).
|
||||||
* Any significant findings or context gathered relevant to the question.
|
* Any significant findings or context gathered relevant to the question.
|
||||||
@@ -31,22 +31,22 @@ context_reporting: |
|
|||||||
<thinking>
|
<thinking>
|
||||||
Strategy:
|
Strategy:
|
||||||
- Focus on providing comprehensive information (the answer/analysis) within the `attempt_completion` `result` parameter.
|
- Focus on providing comprehensive information (the answer/analysis) within the `attempt_completion` `result` parameter.
|
||||||
- Orchestrator will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
- Boomerang will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||||
- My role is to *report* accurately, not *log* directly to Taskmaster.
|
- My role is to *report* accurately, not *log* directly to Taskmaster.
|
||||||
</thinking>
|
</thinking>
|
||||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Orchestrator.
|
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Boomerang.
|
||||||
- **Content:** Include the full answer, explanation, or analysis results. Cite sources if applicable. Structure the `result` clearly.
|
- **Content:** Include the full answer, explanation, or analysis results. Cite sources if applicable. Structure the `result` clearly.
|
||||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||||
- **Mechanism:** Orchestrator receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step.
|
- **Mechanism:** Boomerang receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step.
|
||||||
|
|
||||||
**Taskmaster Interaction:**
|
**Taskmaster Interaction:**
|
||||||
|
|
||||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||||
* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Orchestrator within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly.
|
* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Boomerang within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly.
|
||||||
|
|
||||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||||
|
|
||||||
# Only relevant if operating autonomously (not delegated by Orchestrator), which is highly exceptional for Ask mode.
|
# Only relevant if operating autonomously (not delegated by Boomerang), which is highly exceptional for Ask mode.
|
||||||
taskmaster_strategy:
|
taskmaster_strategy:
|
||||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||||
initialization: |
|
initialization: |
|
||||||
@@ -58,7 +58,7 @@ taskmaster_strategy:
|
|||||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||||
if_uninitialized: |
|
if_uninitialized: |
|
||||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||||
if_ready: |
|
if_ready: |
|
||||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context (again, very rare for Ask).
|
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context (again, very rare for Ask).
|
||||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||||
@@ -67,13 +67,13 @@ taskmaster_strategy:
|
|||||||
**Mode Collaboration & Triggers:**
|
**Mode Collaboration & Triggers:**
|
||||||
|
|
||||||
mode_collaboration: |
|
mode_collaboration: |
|
||||||
# Ask Mode Collaboration: Focuses on receiving tasks from Orchestrator and reporting back findings.
|
# Ask Mode Collaboration: Focuses on receiving tasks from Boomerang and reporting back findings.
|
||||||
- Delegated Task Reception (FROM Orchestrator via `new_task`):
|
- Delegated Task Reception (FROM Boomerang via `new_task`):
|
||||||
* Understand question/analysis request from Orchestrator (referencing taskmaster-ai task ID).
|
* Understand question/analysis request from Boomerang (referencing taskmaster-ai task ID).
|
||||||
* Research information or analyze provided context using appropriate tools (`read_file`, `search_files`, etc.) as instructed.
|
* Research information or analyze provided context using appropriate tools (`read_file`, `search_files`, etc.) as instructed.
|
||||||
* Formulate answers/explanations strictly within the subtask scope.
|
* Formulate answers/explanations strictly within the subtask scope.
|
||||||
* Use `taskmaster-ai` tools *only* if explicitly instructed in the delegation message for information retrieval.
|
* Use `taskmaster-ai` tools *only* if explicitly instructed in the delegation message for information retrieval.
|
||||||
- Completion Reporting (TO Orchestrator via `attempt_completion`):
|
- Completion Reporting (TO Boomerang via `attempt_completion`):
|
||||||
* Provide the complete answer, explanation, or analysis results in the `result` parameter.
|
* Provide the complete answer, explanation, or analysis results in the `result` parameter.
|
||||||
* Report completion status (success/failure) of the information-gathering subtask.
|
* Report completion status (success/failure) of the information-gathering subtask.
|
||||||
* Cite sources or relevant context found.
|
* Cite sources or relevant context found.
|
||||||
|
|||||||
@@ -70,52 +70,52 @@ taskmaster_strategy:
|
|||||||
**Mode Collaboration & Triggers:**
|
**Mode Collaboration & Triggers:**
|
||||||
|
|
||||||
mode_collaboration: |
|
mode_collaboration: |
|
||||||
# Collaboration definitions for how Orchestrator orchestrates and interacts.
|
# Collaboration definitions for how Boomerang orchestrates and interacts.
|
||||||
# Orchestrator delegates via `new_task` using taskmaster-ai for task context,
|
# Boomerang delegates via `new_task` using taskmaster-ai for task context,
|
||||||
# receives results via `attempt_completion`, processes them, updates taskmaster-ai, and determines the next step.
|
# receives results via `attempt_completion`, processes them, updates taskmaster-ai, and determines the next step.
|
||||||
|
|
||||||
1. Architect Mode Collaboration: # Interaction initiated BY Orchestrator
|
1. Architect Mode Collaboration: # Interaction initiated BY Boomerang
|
||||||
- Delegation via `new_task`:
|
- Delegation via `new_task`:
|
||||||
* Provide clear architectural task scope (referencing taskmaster-ai task ID).
|
* Provide clear architectural task scope (referencing taskmaster-ai task ID).
|
||||||
* Request design, structure, planning based on taskmaster context.
|
* Request design, structure, planning based on taskmaster context.
|
||||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Architect via attempt_completion
|
- Completion Reporting TO Boomerang: # Receiving results FROM Architect via attempt_completion
|
||||||
* Expect design decisions, artifacts created, completion status (taskmaster-ai task ID).
|
* Expect design decisions, artifacts created, completion status (taskmaster-ai task ID).
|
||||||
* Expect context needed for subsequent implementation delegation.
|
* Expect context needed for subsequent implementation delegation.
|
||||||
|
|
||||||
2. Test Mode Collaboration: # Interaction initiated BY Orchestrator
|
2. Test Mode Collaboration: # Interaction initiated BY Boomerang
|
||||||
- Delegation via `new_task`:
|
- Delegation via `new_task`:
|
||||||
* Provide clear testing scope (referencing taskmaster-ai task ID).
|
* Provide clear testing scope (referencing taskmaster-ai task ID).
|
||||||
* Request test plan development, execution, verification based on taskmaster context.
|
* Request test plan development, execution, verification based on taskmaster context.
|
||||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Test via attempt_completion
|
- Completion Reporting TO Boomerang: # Receiving results FROM Test via attempt_completion
|
||||||
* Expect summary of test results (pass/fail, coverage), completion status (taskmaster-ai task ID).
|
* Expect summary of test results (pass/fail, coverage), completion status (taskmaster-ai task ID).
|
||||||
* Expect details on bugs or validation issues.
|
* Expect details on bugs or validation issues.
|
||||||
|
|
||||||
3. Debug Mode Collaboration: # Interaction initiated BY Orchestrator
|
3. Debug Mode Collaboration: # Interaction initiated BY Boomerang
|
||||||
- Delegation via `new_task`:
|
- Delegation via `new_task`:
|
||||||
* Provide clear debugging scope (referencing taskmaster-ai task ID).
|
* Provide clear debugging scope (referencing taskmaster-ai task ID).
|
||||||
* Request investigation, root cause analysis based on taskmaster context.
|
* Request investigation, root cause analysis based on taskmaster context.
|
||||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Debug via attempt_completion
|
- Completion Reporting TO Boomerang: # Receiving results FROM Debug via attempt_completion
|
||||||
* Expect summary of findings (root cause, affected areas), completion status (taskmaster-ai task ID).
|
* Expect summary of findings (root cause, affected areas), completion status (taskmaster-ai task ID).
|
||||||
* Expect recommended fixes or next diagnostic steps.
|
* Expect recommended fixes or next diagnostic steps.
|
||||||
|
|
||||||
4. Ask Mode Collaboration: # Interaction initiated BY Orchestrator
|
4. Ask Mode Collaboration: # Interaction initiated BY Boomerang
|
||||||
- Delegation via `new_task`:
|
- Delegation via `new_task`:
|
||||||
* Provide clear question/analysis request (referencing taskmaster-ai task ID).
|
* Provide clear question/analysis request (referencing taskmaster-ai task ID).
|
||||||
* Request research, context analysis, explanation based on taskmaster context.
|
* Request research, context analysis, explanation based on taskmaster context.
|
||||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Ask via attempt_completion
|
- Completion Reporting TO Boomerang: # Receiving results FROM Ask via attempt_completion
|
||||||
* Expect answers, explanations, analysis results, completion status (taskmaster-ai task ID).
|
* Expect answers, explanations, analysis results, completion status (taskmaster-ai task ID).
|
||||||
* Expect cited sources or relevant context found.
|
* Expect cited sources or relevant context found.
|
||||||
|
|
||||||
5. Code Mode Collaboration: # Interaction initiated BY Orchestrator
|
5. Code Mode Collaboration: # Interaction initiated BY Boomerang
|
||||||
- Delegation via `new_task`:
|
- Delegation via `new_task`:
|
||||||
* Provide clear coding requirements (referencing taskmaster-ai task ID).
|
* Provide clear coding requirements (referencing taskmaster-ai task ID).
|
||||||
* Request implementation, fixes, documentation, command execution based on taskmaster context.
|
* Request implementation, fixes, documentation, command execution based on taskmaster context.
|
||||||
- Completion Reporting TO Orchestrator: # Receiving results FROM Code via attempt_completion
|
- Completion Reporting TO Boomerang: # Receiving results FROM Code via attempt_completion
|
||||||
* Expect outcome of commands/tool usage, summary of code changes/operations, completion status (taskmaster-ai task ID).
|
* Expect outcome of commands/tool usage, summary of code changes/operations, completion status (taskmaster-ai task ID).
|
||||||
* Expect links to commits or relevant code sections if relevant.
|
* Expect links to commits or relevant code sections if relevant.
|
||||||
|
|
||||||
7. Orchestrator Mode Collaboration: # Orchestrator's Internal Orchestration Logic
|
7. Boomerang Mode Collaboration: # Boomerang's Internal Orchestration Logic
|
||||||
# Orchestrator orchestrates via delegation, using taskmaster-ai as the source of truth.
|
# Boomerang orchestrates via delegation, using taskmaster-ai as the source of truth.
|
||||||
- Task Decomposition & Planning:
|
- Task Decomposition & Planning:
|
||||||
* Analyze complex user requests, potentially delegating initial analysis to Architect mode.
|
* Analyze complex user requests, potentially delegating initial analysis to Architect mode.
|
||||||
* Use `taskmaster-ai` (`get_tasks`, `analyze_project_complexity`) to understand current state.
|
* Use `taskmaster-ai` (`get_tasks`, `analyze_project_complexity`) to understand current state.
|
||||||
@@ -141,9 +141,9 @@ mode_collaboration: |
|
|||||||
|
|
||||||
mode_triggers:
|
mode_triggers:
|
||||||
# Conditions that trigger a switch TO the specified mode via switch_mode.
|
# Conditions that trigger a switch TO the specified mode via switch_mode.
|
||||||
# Note: Orchestrator mode is typically initiated for complex tasks or explicitly chosen by the user,
|
# Note: Boomerang mode is typically initiated for complex tasks or explicitly chosen by the user,
|
||||||
# and receives results via attempt_completion, not standard switch_mode triggers from other modes.
|
# and receives results via attempt_completion, not standard switch_mode triggers from other modes.
|
||||||
# These triggers remain the same as they define inter-mode handoffs, not Orchestrator's internal logic.
|
# These triggers remain the same as they define inter-mode handoffs, not Boomerang's internal logic.
|
||||||
|
|
||||||
architect:
|
architect:
|
||||||
- condition: needs_architectural_changes
|
- condition: needs_architectural_changes
|
||||||
@@ -9,22 +9,22 @@
|
|||||||
|
|
||||||
**Execution Role (Delegated Tasks):**
|
**Execution Role (Delegated Tasks):**
|
||||||
|
|
||||||
Your primary role is to **execute** tasks delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
Your primary role is to **execute** tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||||
|
|
||||||
1. **Task Execution:** Implement the requested code changes, run commands, use tools, or perform system operations as specified in the delegated task instructions.
|
1. **Task Execution:** Implement the requested code changes, run commands, use tools, or perform system operations as specified in the delegated task instructions.
|
||||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||||
* Outcome of commands/tool usage.
|
* Outcome of commands/tool usage.
|
||||||
* Summary of code changes made or system operations performed.
|
* Summary of code changes made or system operations performed.
|
||||||
* Completion status (success, failure, needs review).
|
* Completion status (success, failure, needs review).
|
||||||
* Any significant findings, errors encountered, or context gathered.
|
* Any significant findings, errors encountered, or context gathered.
|
||||||
* Links to commits or relevant code sections if applicable.
|
* Links to commits or relevant code sections if applicable.
|
||||||
3. **Handling Issues:**
|
3. **Handling Issues:**
|
||||||
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
|
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
|
||||||
* **Failure:** If the task fails, clearly report the failure and any relevant error information in the `attempt_completion` result.
|
* **Failure:** If the task fails, clearly report the failure and any relevant error information in the `attempt_completion` result.
|
||||||
4. **Taskmaster Interaction:**
|
4. **Taskmaster Interaction:**
|
||||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||||
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||||
|
|
||||||
**Context Reporting Strategy:**
|
**Context Reporting Strategy:**
|
||||||
|
|
||||||
@@ -32,17 +32,17 @@ context_reporting: |
|
|||||||
<thinking>
|
<thinking>
|
||||||
Strategy:
|
Strategy:
|
||||||
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
||||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||||
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||||
</thinking>
|
</thinking>
|
||||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
|
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
|
||||||
- **Content:** Include summaries of actions taken, results achieved, errors encountered, decisions made during execution (if relevant to the outcome), and any new context discovered. Structure the `result` clearly.
|
- **Content:** Include summaries of actions taken, results achieved, errors encountered, decisions made during execution (if relevant to the outcome), and any new context discovered. Structure the `result` clearly.
|
||||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
|
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
|
||||||
|
|
||||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||||
|
|
||||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||||
taskmaster_strategy:
|
taskmaster_strategy:
|
||||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||||
initialization: |
|
initialization: |
|
||||||
@@ -54,7 +54,7 @@ taskmaster_strategy:
|
|||||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||||
if_uninitialized: |
|
if_uninitialized: |
|
||||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||||
if_ready: |
|
if_ready: |
|
||||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||||
|
|||||||
@@ -9,29 +9,29 @@
|
|||||||
|
|
||||||
**Execution Role (Delegated Tasks):**
|
**Execution Role (Delegated Tasks):**
|
||||||
|
|
||||||
Your primary role is to **execute diagnostic tasks** delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
Your primary role is to **execute diagnostic tasks** delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
|
||||||
|
|
||||||
1. **Task Execution:**
|
1. **Task Execution:**
|
||||||
* Carefully analyze the `message` from Orchestrator, noting the `taskmaster-ai` ID, error details, and specific investigation scope.
|
* Carefully analyze the `message` from Boomerang, noting the `taskmaster-ai` ID, error details, and specific investigation scope.
|
||||||
* Perform the requested diagnostics using appropriate tools:
|
* Perform the requested diagnostics using appropriate tools:
|
||||||
* `read_file`: Examine specified code or log files.
|
* `read_file`: Examine specified code or log files.
|
||||||
* `search_files`: Locate relevant code, errors, or patterns.
|
* `search_files`: Locate relevant code, errors, or patterns.
|
||||||
* `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Orchestrator.
|
* `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Boomerang.
|
||||||
* `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Orchestrator.
|
* `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Boomerang.
|
||||||
* Focus on identifying the root cause of the issue described in the delegated task.
|
* Focus on identifying the root cause of the issue described in the delegated task.
|
||||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||||
* Summary of diagnostic steps taken and findings (e.g., identified root cause, affected areas).
|
* Summary of diagnostic steps taken and findings (e.g., identified root cause, affected areas).
|
||||||
* Recommended next steps (e.g., specific code changes for Code mode, further tests for Test mode).
|
* Recommended next steps (e.g., specific code changes for Code mode, further tests for Test mode).
|
||||||
* Completion status (success, failure, needs review). Reference the original `taskmaster-ai` task ID.
|
* Completion status (success, failure, needs review). Reference the original `taskmaster-ai` task ID.
|
||||||
* Any significant context gathered during the investigation.
|
* Any significant context gathered during the investigation.
|
||||||
* **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Orchestrator.
|
* **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Boomerang.
|
||||||
3. **Handling Issues:**
|
3. **Handling Issues:**
|
||||||
* **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
|
* **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
|
||||||
* **Failure:** If the diagnostic task cannot be completed (e.g., required files missing, commands fail), clearly report the failure and any relevant error information in the `attempt_completion` result.
|
* **Failure:** If the diagnostic task cannot be completed (e.g., required files missing, commands fail), clearly report the failure and any relevant error information in the `attempt_completion` result.
|
||||||
4. **Taskmaster Interaction:**
|
4. **Taskmaster Interaction:**
|
||||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||||
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||||
|
|
||||||
**Context Reporting Strategy:**
|
**Context Reporting Strategy:**
|
||||||
|
|
||||||
@@ -39,17 +39,17 @@ context_reporting: |
|
|||||||
<thinking>
|
<thinking>
|
||||||
Strategy:
|
Strategy:
|
||||||
- Focus on providing comprehensive diagnostic findings within the `attempt_completion` `result` parameter.
|
- Focus on providing comprehensive diagnostic findings within the `attempt_completion` `result` parameter.
|
||||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode).
|
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode).
|
||||||
- My role is to *report* diagnostic findings accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
- My role is to *report* diagnostic findings accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||||
</thinking>
|
</thinking>
|
||||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Orchestrator to understand the issue, update Taskmaster, and plan the next action.
|
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Boomerang to understand the issue, update Taskmaster, and plan the next action.
|
||||||
- **Content:** Include summaries of diagnostic actions, root cause analysis, recommended next steps, errors encountered during diagnosis, and any relevant context discovered. Structure the `result` clearly.
|
- **Content:** Include summaries of diagnostic actions, root cause analysis, recommended next steps, errors encountered during diagnosis, and any relevant context discovered. Structure the `result` clearly.
|
||||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates and subsequent delegation.
|
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates and subsequent delegation.
|
||||||
|
|
||||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||||
|
|
||||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||||
taskmaster_strategy:
|
taskmaster_strategy:
|
||||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||||
initialization: |
|
initialization: |
|
||||||
@@ -61,7 +61,7 @@ taskmaster_strategy:
|
|||||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||||
if_uninitialized: |
|
if_uninitialized: |
|
||||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||||
if_ready: |
|
if_ready: |
|
||||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||||
|
|||||||
@@ -9,22 +9,22 @@
|
|||||||
|
|
||||||
**Execution Role (Delegated Tasks):**
|
**Execution Role (Delegated Tasks):**
|
||||||
|
|
||||||
Your primary role is to **execute** testing tasks delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`).
|
Your primary role is to **execute** testing tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`).
|
||||||
|
|
||||||
1. **Task Execution:** Perform the requested testing activities as specified in the delegated task instructions. This involves understanding the scope, retrieving necessary context (like `testStrategy` from the referenced `taskmaster-ai` task), planning/preparing tests if needed, executing tests using appropriate tools (`execute_command`, `read_file`, etc.), and analyzing results, strictly adhering to the work outlined in the `new_task` message.
|
1. **Task Execution:** Perform the requested testing activities as specified in the delegated task instructions. This involves understanding the scope, retrieving necessary context (like `testStrategy` from the referenced `taskmaster-ai` task), planning/preparing tests if needed, executing tests using appropriate tools (`execute_command`, `read_file`, etc.), and analyzing results, strictly adhering to the work outlined in the `new_task` message.
|
||||||
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
|
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
|
||||||
* Summary of testing activities performed (e.g., tests planned, executed).
|
* Summary of testing activities performed (e.g., tests planned, executed).
|
||||||
* Concise results/outcome (e.g., pass/fail counts, overall status, coverage information if applicable).
|
* Concise results/outcome (e.g., pass/fail counts, overall status, coverage information if applicable).
|
||||||
* Completion status (success, failure, needs review - e.g., if tests reveal significant issues needing broader attention).
|
* Completion status (success, failure, needs review - e.g., if tests reveal significant issues needing broader attention).
|
||||||
* Any significant findings (e.g., details of bugs, errors, or validation issues found).
|
* Any significant findings (e.g., details of bugs, errors, or validation issues found).
|
||||||
* Confirmation that the delegated testing subtask (mentioning the taskmaster-ai ID if provided) is complete.
|
* Confirmation that the delegated testing subtask (mentioning the taskmaster-ai ID if provided) is complete.
|
||||||
3. **Handling Issues:**
|
3. **Handling Issues:**
|
||||||
* **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Orchestrator.
|
* **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Boomerang.
|
||||||
* **Failure:** If the testing task itself cannot be completed (e.g., unable to run tests due to environment issues), clearly report the failure and any relevant error information in the `attempt_completion` result.
|
* **Failure:** If the testing task itself cannot be completed (e.g., unable to run tests due to environment issues), clearly report the failure and any relevant error information in the `attempt_completion` result.
|
||||||
4. **Taskmaster Interaction:**
|
4. **Taskmaster Interaction:**
|
||||||
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
|
||||||
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
|
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
|
||||||
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
|
||||||
|
|
||||||
**Context Reporting Strategy:**
|
**Context Reporting Strategy:**
|
||||||
|
|
||||||
@@ -32,17 +32,17 @@ context_reporting: |
|
|||||||
<thinking>
|
<thinking>
|
||||||
Strategy:
|
Strategy:
|
||||||
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
|
||||||
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
|
||||||
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
|
||||||
</thinking>
|
</thinking>
|
||||||
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
|
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
|
||||||
- **Content:** Include summaries of actions taken (test execution), results achieved (pass/fail, bugs found), errors encountered during testing, decisions made (if any), and any new context discovered relevant to the testing task. Structure the `result` clearly.
|
- **Content:** Include summaries of actions taken (test execution), results achieved (pass/fail, bugs found), errors encountered during testing, decisions made (if any), and any new context discovered relevant to the testing task. Structure the `result` clearly.
|
||||||
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
|
||||||
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
|
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
|
||||||
|
|
||||||
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
**Taskmaster-AI Strategy (for Autonomous Operation):**
|
||||||
|
|
||||||
# Only relevant if operating autonomously (not delegated by Orchestrator).
|
# Only relevant if operating autonomously (not delegated by Boomerang).
|
||||||
taskmaster_strategy:
|
taskmaster_strategy:
|
||||||
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
|
||||||
initialization: |
|
initialization: |
|
||||||
@@ -54,7 +54,7 @@ taskmaster_strategy:
|
|||||||
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
|
||||||
if_uninitialized: |
|
if_uninitialized: |
|
||||||
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
|
||||||
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
|
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
|
||||||
if_ready: |
|
if_ready: |
|
||||||
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
|
||||||
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
|
||||||
|
|||||||
@@ -72,7 +72,6 @@ Taskmaster uses two primary methods for configuration:
 - `XAI_API_KEY`: Your X-AI API key.
 - **Optional Endpoint Overrides:**
 - **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
-- **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
 - `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
 - `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
 - `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
@@ -132,14 +131,13 @@ PERPLEXITY_API_KEY=pplx-your-key-here
 # etc.
 
 # Optional Endpoint Overrides
-# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
-# OPENAI_BASE_URL=https://api.third-party.com/v1
-#
 # AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
 # OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
 
 # Google Vertex AI Configuration (Required if using 'vertex' provider)
 # VERTEX_PROJECT_ID=your-gcp-project-id
+# VERTEX_LOCATION=us-central1
+# GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
 ```
 
 ## Troubleshooting
@@ -3,7 +3,7 @@
 ## Main Models
 
 | Provider | Model Name | SWE Score | Input Cost | Output Cost |
-| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
+| ---------- | ---------------------------------------------- | --------- | ---------- | ----------- |
 | bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
 | anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
 | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
@@ -62,13 +62,11 @@
 | openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
 | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
 | openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
-| claude-code | opus | 0.725 | 0 | 0 |
-| claude-code | sonnet | 0.727 | 0 | 0 |
 
 ## Research Models
 
 | Provider | Model Name | SWE Score | Input Cost | Output Cost |
-| ----------- | -------------------------- | --------- | ---------- | ----------- |
+| ---------- | -------------------------- | --------- | ---------- | ----------- |
 | bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
 | openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
 | openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
@@ -79,13 +77,11 @@
 | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
 | xai | grok-3 | — | 3 | 15 |
 | xai | grok-3-fast | — | 5 | 25 |
-| claude-code | opus | 0.725 | 0 | 0 |
-| claude-code | sonnet | 0.727 | 0 | 0 |
 
 ## Fallback Models
 
 | Provider | Model Name | SWE Score | Input Cost | Output Cost |
-| ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
+| ---------- | ---------------------------------------------- | --------- | ---------- | ----------- |
 | bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
 | anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
 | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
@@ -133,5 +129,3 @@
 | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
 | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
 | openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
-| claude-code | opus | 0.725 | 0 | 0 |
-| claude-code | sonnet | 0.727 | 0 | 0 |
@@ -26,7 +26,6 @@ import { createLogWrapper } from '../../tools/utils.js';
  * @param {string} [args.prompt] - Additional context to guide subtask generation.
  * @param {boolean} [args.force] - Force expansion even if subtasks exist.
  * @param {string} [args.projectRoot] - Project root directory.
- * @param {string} [args.tag] - Tag for the task
  * @param {Object} log - Logger object
  * @param {Object} context - Context object containing session
  * @param {Object} [context.session] - MCP Session object
@@ -35,8 +34,7 @@ import { createLogWrapper } from '../../tools/utils.js';
 export async function expandTaskDirect(args, log, context = {}) {
 	const { session } = context; // Extract session
 	// Destructure expected args, including projectRoot
-	const { tasksJsonPath, id, num, research, prompt, force, projectRoot, tag } =
-		args;
+	const { tasksJsonPath, id, num, research, prompt, force, projectRoot } = args;
 
 	// Log session root data for debugging
 	log.info(
@@ -196,8 +194,7 @@ export async function expandTaskDirect(args, log, context = {}) {
 				session,
 				projectRoot,
 				commandName: 'expand-task',
-				outputType: 'mcp',
-				tag
+				outputType: 'mcp'
 			},
 			forceFlag
 		);
@@ -45,8 +45,7 @@ export function registerExpandTaskTool(server) {
 				.boolean()
 				.optional()
 				.default(false)
-				.describe('Force expansion even if subtasks exist'),
-			tag: z.string().optional().describe('Tag context to operate on')
+				.describe('Force expansion even if subtasks exist')
 		}),
 		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
 			try {
@@ -74,8 +73,7 @@ export function registerExpandTaskTool(server) {
 					research: args.research,
 					prompt: args.prompt,
 					force: args.force,
-					projectRoot: args.projectRoot,
-					tag: args.tag || 'master'
+					projectRoot: args.projectRoot
 				},
 				log,
 				{ session }
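The `tag: args.tag || 'master'` line on the left-hand side of the hunk above is a plain default-value fallback. Sketched as a standalone helper (the `resolveTag` name is hypothetical, introduced only for illustration):

```javascript
// Hypothetical helper mirroring the `args.tag || 'master'` fallback shown above.
function resolveTag(args) {
	return args.tag || 'master';
}

console.log(resolveTag({})); // → master
console.log(resolveTag({ tag: 'feature-expand' })); // → feature-expand
```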
@@ -571,11 +571,10 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
 		const mcpConfigRaw = fs.readFileSync(mcpConfigPath, 'utf-8');
 		const mcpConfig = JSON.parse(mcpConfigRaw);
 
-		const mcpEnv =
-			mcpConfig?.mcpServers?.['task-master-ai']?.env ||
-			mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
+		const mcpEnv = mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
 		if (!mcpEnv) {
-			return false;
+			// console.warn(chalk.yellow('Warning: Could not find taskmaster-ai env in mcp.json.'));
+			return false; // Structure missing
 		}
 
 		let apiKeyToCheck = null;
@@ -783,15 +782,9 @@ function getAllProviders() {
 
 function getBaseUrlForRole(role, explicitRoot = null) {
 	const roleConfig = getModelConfigForRole(role, explicitRoot);
-	if (roleConfig && typeof roleConfig.baseURL === 'string') {
-		return roleConfig.baseURL;
-	}
-	const provider = roleConfig?.provider;
-	if (provider) {
-		const envVarName = `${provider.toUpperCase()}_BASE_URL`;
-		return resolveEnvVariable(envVarName, null, explicitRoot);
-	}
-	return undefined;
+	return roleConfig && typeof roleConfig.baseURL === 'string'
+		? roleConfig.baseURL
+		: undefined;
 }
 
 export {
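The left-hand `getBaseUrlForRole` above resolves in two steps: an explicit `baseURL` on the role config wins, otherwise the env var derived from the provider name (`<PROVIDER>_BASE_URL`) is consulted. A self-contained sketch of that logic, with `resolveEnvVariable` simplified to a plain environment lookup (an assumption, not the real helper):

```javascript
// Standalone sketch of the left-hand getBaseUrlForRole resolution order:
// explicit baseURL, then <PROVIDER>_BASE_URL from the environment, then undefined.
function getBaseUrlForRoleSketch(roleConfig, env = process.env) {
	if (roleConfig && typeof roleConfig.baseURL === 'string') {
		return roleConfig.baseURL;
	}
	const provider = roleConfig && roleConfig.provider;
	if (provider) {
		// Mirrors `${provider.toUpperCase()}_BASE_URL` in the diff above.
		return env[`${provider.toUpperCase()}_BASE_URL`];
	}
	return undefined;
}

console.log(getBaseUrlForRoleSketch({ baseURL: 'https://example.com/v1' }, {})); // → https://example.com/v1
console.log(getBaseUrlForRoleSketch({ provider: 'openai' }, { OPENAI_BASE_URL: 'https://proxy.example.com/v1' })); // → https://proxy.example.com/v1
```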
@@ -32,12 +32,7 @@ async function expandAllTasks(
 	context = {},
 	outputFormat = 'text' // Assume text default for CLI
 ) {
-	const {
-		session,
-		mcpLog,
-		projectRoot: providedProjectRoot,
-		tag: contextTag
-	} = context;
+	const { session, mcpLog, projectRoot: providedProjectRoot } = context;
 	const isMCPCall = !!mcpLog; // Determine if called from MCP
 
 	const projectRoot = providedProjectRoot || findProjectRoot();
@@ -79,7 +74,7 @@ async function expandAllTasks(
 
 	try {
 		logger.info(`Reading tasks from ${tasksPath}`);
-		const data = readJSON(tasksPath, projectRoot, contextTag);
+		const data = readJSON(tasksPath, projectRoot);
 		if (!data || !data.tasks) {
 			throw new Error(`Invalid tasks data in ${tasksPath}`);
 		}
@@ -129,7 +124,7 @@ async function expandAllTasks(
 				numSubtasks,
 				useResearch,
 				additionalContext,
-				{ ...context, projectRoot, tag: data.tag || contextTag }, // Pass the whole context object with projectRoot and resolved tag
+				{ ...context, projectRoot }, // Pass the whole context object with projectRoot
 				force
 			);
 			expandedCount++;
@@ -417,7 +417,7 @@ async function expandTask(
 	context = {},
 	force = false
 ) {
-	const { session, mcpLog, projectRoot: contextProjectRoot, tag } = context;
+	const { session, mcpLog, projectRoot: contextProjectRoot } = context;
 	const outputFormat = mcpLog ? 'json' : 'text';
 
 	// Determine projectRoot: Use from context if available, otherwise derive from tasksPath
@@ -439,7 +439,7 @@ async function expandTask(
 	try {
 		// --- Task Loading/Filtering (Unchanged) ---
 		logger.info(`Reading tasks from ${tasksPath}`);
-		const data = readJSON(tasksPath, projectRoot, tag);
+		const data = readJSON(tasksPath, projectRoot);
 		if (!data || !data.tasks)
 			throw new Error(`Invalid tasks data in ${tasksPath}`);
 		const taskIndex = data.tasks.findIndex(
@@ -668,7 +668,7 @@ async function expandTask(
 		// --- End Change: Append instead of replace ---
 
 		data.tasks[taskIndex] = task; // Assign the modified task back
-		writeJSON(tasksPath, data, projectRoot, tag);
+		writeJSON(tasksPath, data);
 		// await generateTaskFiles(tasksPath, path.dirname(tasksPath));
 
 		// Display AI Usage Summary for CLI
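For context on why the left-hand `readJSON`/`writeJSON` calls above carry a `tag` argument: a tagged `tasks.json` keys each task list by tag name, so reads and writes must target one tag without disturbing the others. A minimal sketch of that lookup (the data shape is assumed from the `jq` checks elsewhere in this diff):

```javascript
// Pick one tag's task list out of a tagged tasks.json object.
// `tasksForTag` is an illustrative helper, not Taskmaster's actual readJSON.
function tasksForTag(data, tag = 'master') {
	return (data[tag] && data[tag].tasks) || [];
}

const data = {
	master: { tasks: [{ id: 1 }, { id: 2 }] },
	'feature-expand': { tasks: [{ id: 1 }] }
};

console.log(tasksForTag(data, 'feature-expand').length); // → 1
console.log(tasksForTag(data).length); // → 2
```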
@@ -39,23 +39,7 @@ const updatedTaskSchema = z
 		priority: z.string().optional(),
 		details: z.string().optional(),
 		testStrategy: z.string().optional(),
-		subtasks: z
-			.array(
-				z.object({
-					id: z
-						.number()
-						.int()
-						.positive()
-						.describe('Sequential subtask ID starting from 1'),
-					title: z.string(),
-					description: z.string(),
-					status: z.string(),
-					dependencies: z.array(z.number().int()).optional(),
-					details: z.string().optional(),
-					testStrategy: z.string().optional()
-				})
-			)
-			.optional()
+		subtasks: z.array(z.any()).optional()
 	})
 	.strip(); // Allows parsing even if AI adds extra fields, but validation focuses on schema
 
@@ -457,8 +441,6 @@ Guidelines:
 9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced
 10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted
 11. Ensure any new subtasks have unique IDs that don't conflict with existing ones
-12. CRITICAL: For subtask IDs, use ONLY numeric values (1, 2, 3, etc.) NOT strings ("1", "2", "3")
-13. CRITICAL: Subtask IDs should start from 1 and increment sequentially (1, 2, 3...) - do NOT use parent task ID as prefix
 
 The changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.`;
 
@@ -591,37 +573,6 @@ The changes described in the prompt should be thoughtfully applied to make the t
 			);
 			updatedTask.status = taskToUpdate.status;
 		}
-		// Fix subtask IDs if they exist (ensure they are numeric and sequential)
-		if (updatedTask.subtasks && Array.isArray(updatedTask.subtasks)) {
-			let currentSubtaskId = 1;
-			updatedTask.subtasks = updatedTask.subtasks.map((subtask) => {
-				// Fix AI-generated subtask IDs that might be strings or use parent ID as prefix
-				const correctedSubtask = {
-					...subtask,
-					id: currentSubtaskId, // Override AI-generated ID with correct sequential ID
-					dependencies: Array.isArray(subtask.dependencies)
-						? subtask.dependencies
-								.map((dep) =>
-									typeof dep === 'string' ? parseInt(dep, 10) : dep
-								)
-								.filter(
-									(depId) =>
-										!Number.isNaN(depId) &&
-										depId >= 1 &&
-										depId < currentSubtaskId
-								)
-						: [],
-					status: subtask.status || 'pending'
-				};
-				currentSubtaskId++;
-				return correctedSubtask;
-			});
-			report(
-				'info',
-				`Fixed ${updatedTask.subtasks.length} subtask IDs to be sequential numeric IDs.`
-			);
-		}
-
 		// Preserve completed subtasks (Keep existing logic)
 		if (taskToUpdate.subtasks?.length > 0) {
 			if (!updatedTask.subtasks) {
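The normalization block on the left-hand side of the hunk above boils down to one idea: discard whatever IDs the model produced ("3.1", "2", strings, prefixes) and reassign 1-based sequential numbers. A simplified sketch of that core step; it deliberately omits the dependency remapping and filtering the full version performs:

```javascript
// Reassign subtask IDs sequentially, regardless of what the AI generated.
// Simplified sketch of the removed block above; dependency fixing is omitted.
function normalizeSubtaskIds(subtasks) {
	return subtasks.map((subtask, index) => ({
		...subtask,
		id: index + 1, // Override AI-generated ID with a sequential one
		status: subtask.status || 'pending'
	}));
}

const fixed = normalizeSubtaskIds([{ id: '5.1' }, { id: '5.2', status: 'done' }]);
console.log(JSON.stringify(fixed.map((s) => s.id))); // → [1,2]
```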
@@ -73,7 +73,7 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
  */
 function findProjectRoot(
 	startDir = process.cwd(),
-	markers = ['package.json', 'pyproject.toml', '.git', LEGACY_CONFIG_FILE]
+	markers = ['package.json', '.git', LEGACY_CONFIG_FILE]
 ) {
 	let currentPath = path.resolve(currentPath = startDir);
 	const rootPath = path.parse(currentPath).root;
@@ -333,8 +333,8 @@ log_step() {
 
 log_step "Initializing Task Master project (non-interactive)"
 task-master init -y --name="E2E Test $TIMESTAMP" --description="Automated E2E test run"
-if [ ! -f ".taskmaster/config.json" ]; then
-  log_error "Initialization failed: .taskmaster/config.json not found."
+if [ ! -f ".taskmasterconfig" ]; then
+  log_error "Initialization failed: .taskmasterconfig not found."
   exit 1
 fi
 log_success "Project initialized."
@@ -344,8 +344,8 @@ log_step() {
 exit_status_prd=$?
 echo "$cmd_output_prd"
 extract_and_sum_cost "$cmd_output_prd"
-if [ $exit_status_prd -ne 0 ] || [ ! -s ".taskmaster/tasks/tasks.json" ]; then
-  log_error "Parsing PRD failed: .taskmaster/tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
+if [ $exit_status_prd -ne 0 ] || [ ! -s "tasks/tasks.json" ]; then
+  log_error "Parsing PRD failed: tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
   exit 1
 else
   log_success "PRD parsed successfully."
@@ -386,95 +386,6 @@ log_step() {
 task-master list --with-subtasks > task_list_after_changes.log
 log_success "Task list after changes saved to task_list_after_changes.log"
 
-# === Start New Test Section: Tag-Aware Expand Testing ===
-log_step "Creating additional tag for expand testing"
-task-master add-tag feature-expand --description="Tag for testing expand command with tag preservation"
-log_success "Created feature-expand tag."
-
-log_step "Adding task to feature-expand tag"
-task-master add-task --tag=feature-expand --prompt="Test task for tag-aware expansion" --priority=medium
-# Get the new task ID dynamically
-new_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
-log_success "Added task $new_expand_task_id to feature-expand tag."
-
-log_step "Verifying tags exist before expand test"
-task-master tags > tags_before_expand.log
-tag_count_before=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
-log_success "Tag count before expand: $tag_count_before"
-
-log_step "Expanding task in feature-expand tag (testing tag corruption fix)"
-cmd_output_expand_tagged=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" 2>&1)
-exit_status_expand_tagged=$?
-echo "$cmd_output_expand_tagged"
-extract_and_sum_cost "$cmd_output_expand_tagged"
-if [ $exit_status_expand_tagged -ne 0 ]; then
-  log_error "Tagged expand failed. Exit status: $exit_status_expand_tagged"
-else
-  log_success "Tagged expand completed."
-fi
-
-log_step "Verifying tag preservation after expand"
-task-master tags > tags_after_expand.log
-tag_count_after=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
-
-if [ "$tag_count_before" -eq "$tag_count_after" ]; then
-  log_success "Tag count preserved: $tag_count_after (no corruption detected)"
-else
-  log_error "Tag corruption detected! Before: $tag_count_before, After: $tag_count_after"
-fi
-
-log_step "Verifying master tag still exists and has tasks"
-master_task_count=$(jq -r '.master.tasks | length' .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
-if [ "$master_task_count" -gt "0" ]; then
-  log_success "Master tag preserved with $master_task_count tasks"
-else
-  log_error "Master tag corrupted or empty after tagged expand"
-fi
-
-log_step "Verifying feature-expand tag has expanded subtasks"
-expanded_subtask_count=$(jq -r ".\"feature-expand\".tasks[] | select(.id == $new_expand_task_id) | .subtasks | length" .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
-if [ "$expanded_subtask_count" -gt "0" ]; then
-  log_success "Expand successful: $expanded_subtask_count subtasks created in feature-expand tag"
-else
-  log_error "Expand failed: No subtasks found in feature-expand tag"
-fi
-
-log_step "Testing force expand with tag preservation"
-cmd_output_force_expand=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" --force 2>&1)
-exit_status_force_expand=$?
-echo "$cmd_output_force_expand"
-extract_and_sum_cost "$cmd_output_force_expand"
-
-# Verify tags still preserved after force expand
-tag_count_after_force=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
-if [ "$tag_count_before" -eq "$tag_count_after_force" ]; then
-  log_success "Force expand preserved all tags"
-else
-  log_error "Force expand caused tag corruption"
-fi
-
-log_step "Testing expand --all with tag preservation"
-# Add another task to feature-expand for expand-all testing
-task-master add-task --tag=feature-expand --prompt="Second task for expand-all testing" --priority=low
-second_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
-
-cmd_output_expand_all=$(task-master expand --tag=feature-expand --all 2>&1)
-exit_status_expand_all=$?
-echo "$cmd_output_expand_all"
-extract_and_sum_cost "$cmd_output_expand_all"
-
-# Verify tags preserved after expand-all
-tag_count_after_all=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
-if [ "$tag_count_before" -eq "$tag_count_after_all" ]; then
-  log_success "Expand --all preserved all tags"
-else
-  log_error "Expand --all caused tag corruption"
-fi
-
-log_success "Completed expand --all tag preservation test."
-
-# === End New Test Section: Tag-Aware Expand Testing ===
-
 # === Test Model Commands ===
 log_step "Checking initial model configuration"
 task-master models > models_initial_config.log
@@ -715,7 +626,7 @@ log_step() {
 
 # Find the next available task ID dynamically instead of hardcoding 11, 12
 # Assuming tasks are added sequentially and we didn't remove any core tasks yet
-last_task_id=$(jq '[.master.tasks[].id] | max' .taskmaster/tasks/tasks.json)
+last_task_id=$(jq '[.tasks[].id] | max' tasks/tasks.json)
 manual_task_id=$((last_task_id + 1))
 ai_task_id=$((manual_task_id + 1))
 
@@ -836,30 +747,30 @@ log_step() {
 task-master list --with-subtasks > task_list_after_clear_all.log
 log_success "Task list after clear-all saved. (Manual/LLM check recommended to verify subtasks removed)"
 
-log_step "Expanding Task 3 again (to have subtasks for next test)"
-task-master expand --id=3
-log_success "Attempted to expand Task 3."
-# Verify 3.1 exists
-if ! jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' .taskmaster/tasks/tasks.json > /dev/null; then
-  log_error "Subtask 3.1 not found in tasks.json after expanding Task 3."
+log_step "Expanding Task 1 again (to have subtasks for next test)"
+task-master expand --id=1
+log_success "Attempted to expand Task 1 again."
+# Verify 1.1 exists again
+if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/tasks.json > /dev/null; then
+  log_error "Subtask 1.1 not found in tasks.json after re-expanding Task 1."
   exit 1
 fi
 
-log_step "Adding dependency: Task 4 depends on Subtask 3.1"
-task-master add-dependency --id=4 --depends-on=3.1
-log_success "Added dependency 4 -> 3.1."
+log_step "Adding dependency: Task 3 depends on Subtask 1.1"
+task-master add-dependency --id=3 --depends-on=1.1
+log_success "Added dependency 3 -> 1.1."
 
-log_step "Showing Task 4 details (after adding subtask dependency)"
-task-master show 4 > task_4_details_after_dep_add.log
-log_success "Task 4 details saved. (Manual/LLM check recommended for dependency [3.1])"
+log_step "Showing Task 3 details (after adding subtask dependency)"
+task-master show 3 > task_3_details_after_dep_add.log
+log_success "Task 3 details saved. (Manual/LLM check recommended for dependency [1.1])"
 
-log_step "Removing dependency: Task 4 depends on Subtask 3.1"
-task-master remove-dependency --id=4 --depends-on=3.1
-log_success "Removed dependency 4 -> 3.1."
+log_step "Removing dependency: Task 3 depends on Subtask 1.1"
+task-master remove-dependency --id=3 --depends-on=1.1
+log_success "Removed dependency 3 -> 1.1."
 
-log_step "Showing Task 4 details (after removing subtask dependency)"
-task-master show 4 > task_4_details_after_dep_remove.log
-log_success "Task 4 details saved. (Manual/LLM check recommended to verify dependency removed)"
+log_step "Showing Task 3 details (after removing subtask dependency)"
+task-master show 3 > task_3_details_after_dep_remove.log
+log_success "Task 3 details saved. (Manual/LLM check recommended to verify dependency removed)"
 
 # === End New Test Section ===
@@ -625,38 +625,19 @@ describe('MCP Server Direct Functions', () => {
|
|||||||
// For successful cases, record that functions were called but don't make real calls
|
// For successful cases, record that functions were called but don't make real calls
|
||||||
mockEnableSilentMode();
|
mockEnableSilentMode();
|
||||||
|
|
||||||
// Mock expandAllTasks - now returns a structured object instead of undefined
|
// Mock expandAllTasks
|
||||||
const mockExpandAll = jest.fn().mockImplementation(async () => {
|
const mockExpandAll = jest.fn().mockImplementation(async () => {
|
||||||
// Return the new structured response that matches the actual implementation
|
// Just simulate success without any real operations
|
||||||
return {
|
return undefined; // expandAllTasks doesn't return anything
|
||||||
success: true,
|
|
||||||
expandedCount: 2,
|
|
||||||
failedCount: 0,
|
|
||||||
skippedCount: 1,
|
|
||||||
tasksToExpand: 3,
|
|
||||||
telemetryData: {
|
|
||||||
timestamp: new Date().toISOString(),
|
|
||||||
commandName: 'expand-all-tasks',
|
|
||||||
totalCost: 0.05,
|
|
||||||
totalTokens: 1000,
|
|
||||||
inputTokens: 600,
|
|
||||||
outputTokens: 400
|
|
||||||
}
|
|
||||||
};
|
|
||||||
});
|
});
|
||||||
|
|
||||||
// Call mock expandAllTasks with the correct signature
|
// Call mock expandAllTasks
|
||||||
const result = await mockExpandAll(
|
await mockExpandAll(
|
||||||
args.file, // tasksPath
|
args.num,
|
||||||
args.num, // numSubtasks
|
args.research || false,
|
||||||
args.research || false, // useResearch
|
args.prompt || '',
|
||||||
args.prompt || '', // additionalContext
|
args.force || false,
|
||||||
args.force || false, // force
|
{ mcpLog: mockLogger, session: options.session }
|
||||||
{
|
|
||||||
mcpLog: mockLogger,
|
|
||||||
session: options.session,
|
|
||||||
projectRoot: args.projectRoot
|
|
||||||
}
|
|
||||||
);
|
);
|
||||||
|
|
||||||
mockDisableSilentMode();
|
mockDisableSilentMode();
|
||||||
@@ -664,14 +645,13 @@ describe('MCP Server Direct Functions', () => {
 			return {
 				success: true,
 				data: {
-					message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
+					message: 'Successfully expanded all pending tasks with subtasks',
 					details: {
-						expandedCount: result.expandedCount,
-						failedCount: result.failedCount,
-						skippedCount: result.skippedCount,
-						tasksToExpand: result.tasksToExpand
-					},
-					telemetryData: result.telemetryData
+						numSubtasks: args.num,
+						research: args.research || false,
+						prompt: args.prompt || '',
+						force: args.force || false
+					}
 				}
 			};
 		}
@@ -691,13 +671,10 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
-			expect(result.data.message).toMatch(/Expand all operation completed/);
-			expect(result.data.details.expandedCount).toBe(2);
-			expect(result.data.details.failedCount).toBe(0);
-			expect(result.data.details.skippedCount).toBe(1);
-			expect(result.data.details.tasksToExpand).toBe(3);
-			expect(result.data.telemetryData).toBeDefined();
-			expect(result.data.telemetryData.commandName).toBe('expand-all-tasks');
+			expect(result.data.message).toBe(
+				'Successfully expanded all pending tasks with subtasks'
+			);
+			expect(result.data.details.numSubtasks).toBe(3);
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
@@ -718,8 +695,7 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
-			expect(result.data.details.expandedCount).toBe(2);
-			expect(result.data.telemetryData).toBeDefined();
+			expect(result.data.details.research).toBe(true);
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
@@ -739,8 +715,7 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
-			expect(result.data.details.expandedCount).toBe(2);
-			expect(result.data.telemetryData).toBeDefined();
+			expect(result.data.details.force).toBe(true);
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
@@ -760,77 +735,11 @@ describe('MCP Server Direct Functions', () => {

 			// Assert
 			expect(result.success).toBe(true);
-			expect(result.data.details.expandedCount).toBe(2);
-			expect(result.data.telemetryData).toBeDefined();
+			expect(result.data.details.prompt).toBe(
+				'Additional context for subtasks'
+			);
 			expect(mockEnableSilentMode).toHaveBeenCalled();
 			expect(mockDisableSilentMode).toHaveBeenCalled();
 		});
-
-		test('should handle case with no eligible tasks', async () => {
-			// Arrange
-			const args = {
-				projectRoot: testProjectRoot,
-				file: testTasksPath,
-				num: 3
-			};
-
-			// Act - Mock the scenario where no tasks are eligible for expansion
-			async function testNoEligibleTasks(args, mockLogger, options = {}) {
-				mockEnableSilentMode();
-
-				const mockExpandAll = jest.fn().mockImplementation(async () => {
-					return {
-						success: true,
-						expandedCount: 0,
-						failedCount: 0,
-						skippedCount: 0,
-						tasksToExpand: 0,
-						telemetryData: null,
-						message: 'No tasks eligible for expansion.'
-					};
-				});
-
-				const result = await mockExpandAll(
-					args.file,
-					args.num,
-					false,
-					'',
-					false,
-					{
-						mcpLog: mockLogger,
-						session: options.session,
-						projectRoot: args.projectRoot
-					},
-					'json'
-				);
-
-				mockDisableSilentMode();
-
-				return {
-					success: true,
-					data: {
-						message: result.message,
-						details: {
-							expandedCount: result.expandedCount,
-							failedCount: result.failedCount,
-							skippedCount: result.skippedCount,
-							tasksToExpand: result.tasksToExpand
-						},
-						telemetryData: result.telemetryData
-					}
-				};
-			}
-
-			const result = await testNoEligibleTasks(args, mockLogger, {
-				session: mockSession
-			});
-
-			// Assert
-			expect(result.success).toBe(true);
-			expect(result.data.message).toBe('No tasks eligible for expansion.');
-			expect(result.data.details.expandedCount).toBe(0);
-			expect(result.data.details.tasksToExpand).toBe(0);
-			expect(result.data.telemetryData).toBeNull();
-		});
 	});
 });
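The removed test above builds its result by summing per-task fields and carrying a `telemetryData` object forward. A minimal sketch of that kind of telemetry aggregation, assuming illustrative names (`aggregateTelemetry` here is a stand-in, not the package's actual helper):

```javascript
// Hypothetical sketch: fold per-task telemetry entries into one summary,
// the way the removed tests expect expand-all to combine the results of
// individual expand-task calls.
function aggregateTelemetry(entries, commandName) {
	return entries.reduce(
		(acc, e) => ({
			...acc,
			totalCost: acc.totalCost + (e.totalCost || 0),
			totalTokens: acc.totalTokens + (e.totalTokens || 0)
		}),
		{ commandName, totalCost: 0, totalTokens: 0 }
	);
}

const summary = aggregateTelemetry(
	[
		{ commandName: 'expand-task', totalCost: 0.03, totalTokens: 600 },
		{ commandName: 'expand-task', totalCost: 0.04, totalTokens: 800 }
	],
	'expand-all-tasks'
);

console.log(summary.totalTokens); // 1400
console.log(summary.commandName); // expand-all-tasks
```

The summary keeps the parent command's name (`expand-all-tasks`) rather than the per-task one, matching the assertions in the deleted telemetry tests below.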
@@ -1,324 +0,0 @@
-/**
- * Tests for the expand-all MCP tool
- *
- * Note: This test does NOT test the actual implementation. It tests that:
- * 1. The tool is registered correctly with the correct parameters
- * 2. Arguments are passed correctly to expandAllTasksDirect
- * 3. Error handling works as expected
- *
- * We do NOT import the real implementation - everything is mocked
- */
-
-import { jest } from '@jest/globals';
-
-// Mock EVERYTHING
-const mockExpandAllTasksDirect = jest.fn();
-jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
-	expandAllTasksDirect: mockExpandAllTasksDirect
-}));
-
-const mockHandleApiResult = jest.fn((result) => result);
-const mockGetProjectRootFromSession = jest.fn(() => '/mock/project/root');
-const mockCreateErrorResponse = jest.fn((msg) => ({
-	success: false,
-	error: { code: 'ERROR', message: msg }
-}));
-const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);
-
-jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
-	getProjectRootFromSession: mockGetProjectRootFromSession,
-	handleApiResult: mockHandleApiResult,
-	createErrorResponse: mockCreateErrorResponse,
-	withNormalizedProjectRoot: mockWithNormalizedProjectRoot
-}));
-
-// Mock the z object from zod
-const mockZod = {
-	object: jest.fn(() => mockZod),
-	string: jest.fn(() => mockZod),
-	number: jest.fn(() => mockZod),
-	boolean: jest.fn(() => mockZod),
-	optional: jest.fn(() => mockZod),
-	describe: jest.fn(() => mockZod),
-	_def: {
-		shape: () => ({
-			num: {},
-			research: {},
-			prompt: {},
-			force: {},
-			tag: {},
-			projectRoot: {}
-		})
-	}
-};
-
-jest.mock('zod', () => ({
-	z: mockZod
-}));
-
-// DO NOT import the real module - create a fake implementation
-// This is the fake implementation of registerExpandAllTool
-const registerExpandAllTool = (server) => {
-	// Create simplified version of the tool config
-	const toolConfig = {
-		name: 'expand_all',
-		description: 'Use Taskmaster to expand all eligible pending tasks',
-		parameters: mockZod,
-
-		// Create a simplified mock of the execute function
-		execute: mockWithNormalizedProjectRoot(async (args, context) => {
-			const { log, session } = context;
-
-			try {
-				log.info &&
-					log.info(`Starting expand-all with args: ${JSON.stringify(args)}`);
-
-				// Call expandAllTasksDirect
-				const result = await mockExpandAllTasksDirect(args, log, { session });
-
-				// Handle result
-				return mockHandleApiResult(result, log);
-			} catch (error) {
-				log.error && log.error(`Error in expand-all tool: ${error.message}`);
-				return mockCreateErrorResponse(error.message);
-			}
-		})
-	};
-
-	// Register the tool with the server
-	server.addTool(toolConfig);
-};
-
-describe('MCP Tool: expand-all', () => {
-	// Create mock server
-	let mockServer;
-	let executeFunction;
-
-	// Create mock logger
-	const mockLogger = {
-		debug: jest.fn(),
-		info: jest.fn(),
-		warn: jest.fn(),
-		error: jest.fn()
-	};
-
-	// Test data
-	const validArgs = {
-		num: 3,
-		research: true,
-		prompt: 'additional context',
-		force: false,
-		tag: 'master',
-		projectRoot: '/test/project'
-	};
-
-	// Standard responses
-	const successResponse = {
-		success: true,
-		data: {
-			message:
-				'Expand all operation completed. Expanded: 2, Failed: 0, Skipped: 1',
-			details: {
-				expandedCount: 2,
-				failedCount: 0,
-				skippedCount: 1,
-				tasksToExpand: 3,
-				telemetryData: {
-					commandName: 'expand-all-tasks',
-					totalCost: 0.15,
-					totalTokens: 2500
-				}
-			}
-		}
-	};
-
-	const errorResponse = {
-		success: false,
-		error: {
-			code: 'EXPAND_ALL_ERROR',
-			message: 'Failed to expand tasks'
-		}
-	};
-
-	beforeEach(() => {
-		// Reset all mocks
-		jest.clearAllMocks();
-
-		// Create mock server
-		mockServer = {
-			addTool: jest.fn((config) => {
-				executeFunction = config.execute;
-			})
-		};
-
-		// Setup default successful response
-		mockExpandAllTasksDirect.mockResolvedValue(successResponse);
-
-		// Register the tool
-		registerExpandAllTool(mockServer);
-	});
-
-	test('should register the tool correctly', () => {
-		// Verify tool was registered
-		expect(mockServer.addTool).toHaveBeenCalledWith(
-			expect.objectContaining({
-				name: 'expand_all',
-				description: expect.stringContaining('expand all eligible pending'),
-				parameters: expect.any(Object),
-				execute: expect.any(Function)
-			})
-		);
-
-		// Verify the tool config was passed
-		const toolConfig = mockServer.addTool.mock.calls[0][0];
-		expect(toolConfig).toHaveProperty('parameters');
-		expect(toolConfig).toHaveProperty('execute');
-	});
-
-	test('should execute the tool with valid parameters', async () => {
-		// Setup context
-		const mockContext = {
-			log: mockLogger,
-			session: { workingDirectory: '/mock/dir' }
-		};
-
-		// Execute the function
-		const result = await executeFunction(validArgs, mockContext);
-
-		// Verify expandAllTasksDirect was called with correct arguments
-		expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
-			validArgs,
-			mockLogger,
-			{ session: mockContext.session }
-		);
-
-		// Verify handleApiResult was called
-		expect(mockHandleApiResult).toHaveBeenCalledWith(
-			successResponse,
-			mockLogger
-		);
-		expect(result).toEqual(successResponse);
-	});
-
-	test('should handle expand all with no eligible tasks', async () => {
-		// Arrange
-		const mockDirectResult = {
-			success: true,
-			data: {
-				message:
-					'Expand all operation completed. Expanded: 0, Failed: 0, Skipped: 0',
-				details: {
-					expandedCount: 0,
-					failedCount: 0,
-					skippedCount: 0,
-					tasksToExpand: 0,
-					telemetryData: null
-				}
-			}
-		};
-
-		mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
-		mockHandleApiResult.mockReturnValue({
-			success: true,
-			data: mockDirectResult.data
-		});
-
-		// Act
-		const result = await executeFunction(validArgs, {
-			log: mockLogger,
-			session: { workingDirectory: '/test' }
-		});
-
-		// Assert
-		expect(result.success).toBe(true);
-		expect(result.data.details.expandedCount).toBe(0);
-		expect(result.data.details.tasksToExpand).toBe(0);
-	});
-
-	test('should handle expand all with mixed success/failure', async () => {
-		// Arrange
-		const mockDirectResult = {
-			success: true,
-			data: {
-				message:
-					'Expand all operation completed. Expanded: 2, Failed: 1, Skipped: 0',
-				details: {
-					expandedCount: 2,
-					failedCount: 1,
-					skippedCount: 0,
-					tasksToExpand: 3,
-					telemetryData: {
-						commandName: 'expand-all-tasks',
-						totalCost: 0.1,
-						totalTokens: 1500
-					}
-				}
-			}
-		};
-
-		mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
-		mockHandleApiResult.mockReturnValue({
-			success: true,
-			data: mockDirectResult.data
-		});
-
-		// Act
-		const result = await executeFunction(validArgs, {
-			log: mockLogger,
-			session: { workingDirectory: '/test' }
-		});
-
-		// Assert
-		expect(result.success).toBe(true);
-		expect(result.data.details.expandedCount).toBe(2);
-		expect(result.data.details.failedCount).toBe(1);
-	});
-
-	test('should handle errors from expandAllTasksDirect', async () => {
-		// Arrange
-		mockExpandAllTasksDirect.mockRejectedValue(
-			new Error('Direct function error')
-		);
-
-		// Act
-		const result = await executeFunction(validArgs, {
-			log: mockLogger,
-			session: { workingDirectory: '/test' }
-		});
-
-		// Assert
-		expect(mockLogger.error).toHaveBeenCalledWith(
-			expect.stringContaining('Error in expand-all tool')
-		);
-		expect(mockCreateErrorResponse).toHaveBeenCalledWith(
-			'Direct function error'
-		);
-	});
-
-	test('should handle different argument combinations', async () => {
-		// Test with minimal args
-		const minimalArgs = {
-			projectRoot: '/test/project'
-		};
-
-		// Act
-		await executeFunction(minimalArgs, {
-			log: mockLogger,
-			session: { workingDirectory: '/test' }
-		});
-
-		// Assert
-		expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
-			minimalArgs,
-			mockLogger,
-			expect.any(Object)
-		);
-	});
-
-	test('should use withNormalizedProjectRoot wrapper correctly', () => {
-		// Verify that the execute function is wrapped with withNormalizedProjectRoot
-		expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
-			expect.any(Function)
-		);
-	});
-});
@@ -1,502 +0,0 @@
-/**
- * Tests for the expand-all-tasks.js module
- */
-import { jest } from '@jest/globals';
-
-// Mock the dependencies before importing the module under test
-jest.unstable_mockModule(
-	'../../../../../scripts/modules/task-manager/expand-task.js',
-	() => ({
-		default: jest.fn()
-	})
-);
-
-jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
-	readJSON: jest.fn(),
-	log: jest.fn(),
-	isSilentMode: jest.fn(() => false),
-	findProjectRoot: jest.fn(() => '/test/project'),
-	aggregateTelemetry: jest.fn()
-}));
-
-jest.unstable_mockModule(
-	'../../../../../scripts/modules/config-manager.js',
-	() => ({
-		getDebugFlag: jest.fn(() => false)
-	})
-);
-
-jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
-	startLoadingIndicator: jest.fn(),
-	stopLoadingIndicator: jest.fn(),
-	displayAiUsageSummary: jest.fn()
-}));
-
-jest.unstable_mockModule('chalk', () => ({
-	default: {
-		white: { bold: jest.fn((text) => text) },
-		cyan: jest.fn((text) => text),
-		green: jest.fn((text) => text),
-		gray: jest.fn((text) => text),
-		red: jest.fn((text) => text),
-		bold: jest.fn((text) => text)
-	}
-}));
-
-jest.unstable_mockModule('boxen', () => ({
-	default: jest.fn((text) => text)
-}));
-
-// Import the mocked modules
-const { default: expandTask } = await import(
-	'../../../../../scripts/modules/task-manager/expand-task.js'
-);
-const { readJSON, aggregateTelemetry, findProjectRoot } = await import(
-	'../../../../../scripts/modules/utils.js'
-);
-
-// Import the module under test
-const { default: expandAllTasks } = await import(
-	'../../../../../scripts/modules/task-manager/expand-all-tasks.js'
-);
-
-const mockExpandTask = expandTask;
-const mockReadJSON = readJSON;
-const mockAggregateTelemetry = aggregateTelemetry;
-const mockFindProjectRoot = findProjectRoot;
-
-describe('expandAllTasks', () => {
-	const mockTasksPath = '/test/tasks.json';
-	const mockProjectRoot = '/test/project';
-	const mockSession = { userId: 'test-user' };
-	const mockMcpLog = {
-		info: jest.fn(),
-		warn: jest.fn(),
-		error: jest.fn(),
-		debug: jest.fn()
-	};
-
-	const sampleTasksData = {
-		tag: 'master',
-		tasks: [
-			{
-				id: 1,
-				title: 'Pending Task 1',
-				status: 'pending',
-				subtasks: []
-			},
-			{
-				id: 2,
-				title: 'In Progress Task',
-				status: 'in-progress',
-				subtasks: []
-			},
-			{
-				id: 3,
-				title: 'Done Task',
-				status: 'done',
-				subtasks: []
-			},
-			{
-				id: 4,
-				title: 'Task with Subtasks',
-				status: 'pending',
-				subtasks: [{ id: '4.1', title: 'Existing subtask' }]
-			}
-		]
-	};
-
-	beforeEach(() => {
-		jest.clearAllMocks();
-		mockReadJSON.mockReturnValue(sampleTasksData);
-		mockAggregateTelemetry.mockReturnValue({
-			timestamp: '2024-01-01T00:00:00.000Z',
-			commandName: 'expand-all-tasks',
-			totalCost: 0.1,
-			totalTokens: 2000,
-			inputTokens: 1200,
-			outputTokens: 800
-		});
-	});
-
-	describe('successful expansion', () => {
-		test('should expand all eligible pending tasks', async () => {
-			// Arrange
-			const mockTelemetryData = {
-				timestamp: '2024-01-01T00:00:00.000Z',
-				commandName: 'expand-task',
-				totalCost: 0.05,
-				totalTokens: 1000
-			};
-
-			mockExpandTask.mockResolvedValue({
-				telemetryData: mockTelemetryData
-			});
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3, // numSubtasks
-				false, // useResearch
-				'test context', // additionalContext
-				false, // force
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot,
-					tag: 'master'
-				},
-				'json' // outputFormat
-			);
-
-			// Assert
-			expect(result.success).toBe(true);
-			expect(result.expandedCount).toBe(2); // Tasks 1 and 2 (pending and in-progress)
-			expect(result.failedCount).toBe(0);
-			expect(result.skippedCount).toBe(0);
-			expect(result.tasksToExpand).toBe(2);
-			expect(result.telemetryData).toBeDefined();
-
-			// Verify readJSON was called correctly
-			expect(mockReadJSON).toHaveBeenCalledWith(
-				mockTasksPath,
-				mockProjectRoot,
-				'master'
-			);
-
-			// Verify expandTask was called for eligible tasks
-			expect(mockExpandTask).toHaveBeenCalledTimes(2);
-			expect(mockExpandTask).toHaveBeenCalledWith(
-				mockTasksPath,
-				1,
-				3,
-				false,
-				'test context',
-				expect.objectContaining({
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot,
-					tag: 'master'
-				}),
-				false
-			);
-		});
-
-		test('should handle force flag to expand tasks with existing subtasks', async () => {
-			// Arrange
-			mockExpandTask.mockResolvedValue({
-				telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
-			});
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				2,
-				false,
-				'',
-				true, // force = true
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot
-				},
-				'json'
-			);
-
-			// Assert
-			expect(result.expandedCount).toBe(3); // Tasks 1, 2, and 4 (including task with existing subtasks)
-			expect(mockExpandTask).toHaveBeenCalledTimes(3);
-		});
-
-		test('should handle research flag', async () => {
-			// Arrange
-			mockExpandTask.mockResolvedValue({
-				telemetryData: { commandName: 'expand-task', totalCost: 0.08 }
-			});
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				undefined, // numSubtasks not specified
-				true, // useResearch = true
-				'research context',
-				false,
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot
-				},
-				'json'
-			);
-
-			// Assert
-			expect(result.success).toBe(true);
-			expect(mockExpandTask).toHaveBeenCalledWith(
-				mockTasksPath,
-				expect.any(Number),
-				undefined,
-				true, // research flag passed correctly
-				'research context',
-				expect.any(Object),
-				false
-			);
-		});
-
-		test('should return success with message when no tasks are eligible', async () => {
-			// Arrange - Mock tasks data with no eligible tasks
-			const noEligibleTasksData = {
-				tag: 'master',
-				tasks: [
-					{ id: 1, status: 'done', subtasks: [] },
-					{
-						id: 2,
-						status: 'pending',
-						subtasks: [{ id: '2.1', title: 'existing' }]
-					}
-				]
-			};
-			mockReadJSON.mockReturnValue(noEligibleTasksData);
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3,
-				false,
-				'',
-				false, // force = false, so task with subtasks won't be expanded
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot
-				},
-				'json'
-			);
-
-			// Assert
-			expect(result.success).toBe(true);
-			expect(result.expandedCount).toBe(0);
-			expect(result.failedCount).toBe(0);
-			expect(result.skippedCount).toBe(0);
-			expect(result.tasksToExpand).toBe(0);
-			expect(result.message).toBe('No tasks eligible for expansion.');
-			expect(mockExpandTask).not.toHaveBeenCalled();
-		});
-	});
-
-	describe('error handling', () => {
-		test('should handle expandTask failures gracefully', async () => {
-			// Arrange
-			mockExpandTask
-				.mockResolvedValueOnce({ telemetryData: { totalCost: 0.05 } }) // First task succeeds
-				.mockRejectedValueOnce(new Error('AI service error')); // Second task fails
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3,
-				false,
-				'',
-				false,
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot
-				},
-				'json'
-			);
-
-			// Assert
-			expect(result.success).toBe(true);
-			expect(result.expandedCount).toBe(1);
-			expect(result.failedCount).toBe(1);
-		});
-
-		test('should throw error when tasks.json is invalid', async () => {
-			// Arrange
-			mockReadJSON.mockReturnValue(null);
-
-			// Act & Assert
-			await expect(
-				expandAllTasks(
-					mockTasksPath,
-					3,
-					false,
-					'',
-					false,
-					{
-						session: mockSession,
-						mcpLog: mockMcpLog,
-						projectRoot: mockProjectRoot
-					},
-					'json'
-				)
-			).rejects.toThrow('Invalid tasks data');
-		});
-
-		test('should throw error when project root cannot be determined', async () => {
-			// Arrange - Mock findProjectRoot to return null for this test
-			mockFindProjectRoot.mockReturnValueOnce(null);
-
-			// Act & Assert
-			await expect(
-				expandAllTasks(
-					mockTasksPath,
-					3,
-					false,
-					'',
-					false,
-					{
-						session: mockSession,
-						mcpLog: mockMcpLog
-						// No projectRoot provided, and findProjectRoot will return null
-					},
-					'json'
-				)
-			).rejects.toThrow('Could not determine project root directory');
-		});
-	});
-
-	describe('telemetry aggregation', () => {
-		test('should aggregate telemetry data from multiple expand operations', async () => {
-			// Arrange
-			const telemetryData1 = {
-				commandName: 'expand-task',
-				totalCost: 0.03,
-				totalTokens: 600
-			};
-			const telemetryData2 = {
-				commandName: 'expand-task',
-				totalCost: 0.04,
-				totalTokens: 800
-			};
-
-			mockExpandTask
-				.mockResolvedValueOnce({ telemetryData: telemetryData1 })
-				.mockResolvedValueOnce({ telemetryData: telemetryData2 });
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3,
-				false,
-				'',
-				false,
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot
-				},
-				'json'
-			);
-
-			// Assert
-			expect(mockAggregateTelemetry).toHaveBeenCalledWith(
-				[telemetryData1, telemetryData2],
-				'expand-all-tasks'
-			);
-			expect(result.telemetryData).toBeDefined();
-			expect(result.telemetryData.commandName).toBe('expand-all-tasks');
-		});
-
-		test('should handle missing telemetry data gracefully', async () => {
-			// Arrange
-			mockExpandTask.mockResolvedValue({}); // No telemetryData
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3,
-				false,
-				'',
-				false,
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot
-				},
-				'json'
-			);
-
-			// Assert
-			expect(result.success).toBe(true);
-			expect(mockAggregateTelemetry).toHaveBeenCalledWith(
-				[],
-				'expand-all-tasks'
-			);
-		});
-	});
-
-	describe('output format handling', () => {
-		test('should use text output format for CLI calls', async () => {
-			// Arrange
-			mockExpandTask.mockResolvedValue({
-				telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
-			});
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3,
-				false,
-				'',
-				false,
-				{
-					projectRoot: mockProjectRoot
-					// No mcpLog provided, should use CLI logger
-				},
-				'text' // CLI output format
-			);
-
-			// Assert
-			expect(result.success).toBe(true);
-			// In text mode, loading indicators and console output would be used
-			// This is harder to test directly but we can verify the result structure
-		});
-
-		test('should handle context tag properly', async () => {
-			// Arrange
-			const taggedTasksData = {
-				...sampleTasksData,
-				tag: 'feature-branch'
-			};
-			mockReadJSON.mockReturnValue(taggedTasksData);
-			mockExpandTask.mockResolvedValue({
-				telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
-			});
-
-			// Act
-			const result = await expandAllTasks(
-				mockTasksPath,
-				3,
-				false,
-				'',
-				false,
-				{
-					session: mockSession,
-					mcpLog: mockMcpLog,
-					projectRoot: mockProjectRoot,
-					tag: 'feature-branch'
-				},
-				'json'
-			);
-
-			// Assert
-			expect(mockReadJSON).toHaveBeenCalledWith(
-				mockTasksPath,
-				mockProjectRoot,
-				'feature-branch'
-			);
-			expect(mockExpandTask).toHaveBeenCalledWith(
-				mockTasksPath,
-				expect.any(Number),
-				3,
-				false,
-				'',
-				expect.objectContaining({
-					tag: 'feature-branch'
-				}),
-				false
-			);
-		});
-	});
-});
@@ -1,888 +0,0 @@
/**
 * Tests for the expand-task.js module
 */
import { jest } from '@jest/globals';
import fs from 'fs';

// Mock the dependencies before importing the module under test
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
	readJSON: jest.fn(),
	writeJSON: jest.fn(),
	log: jest.fn(),
	CONFIG: {
		model: 'mock-claude-model',
		maxTokens: 4000,
		temperature: 0.7,
		debug: false
	},
	sanitizePrompt: jest.fn((prompt) => prompt),
	truncate: jest.fn((text) => text),
	isSilentMode: jest.fn(() => false),
	findTaskById: jest.fn(),
	findProjectRoot: jest.fn((tasksPath) => '/mock/project/root'),
	getCurrentTag: jest.fn(() => 'master'),
	ensureTagMetadata: jest.fn((tagObj) => tagObj),
	flattenTasksWithSubtasks: jest.fn((tasks) => {
		const allTasks = [];
		const queue = [...(tasks || [])];
		while (queue.length > 0) {
			const task = queue.shift();
			allTasks.push(task);
			if (task.subtasks) {
				for (const subtask of task.subtasks) {
					queue.push({ ...subtask, id: `${task.id}.${subtask.id}` });
				}
			}
		}
		return allTasks;
	}),
	readComplexityReport: jest.fn(),
	markMigrationForNotice: jest.fn(),
	performCompleteTagMigration: jest.fn(),
	setTasksForTag: jest.fn(),
	getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || [])
}));

jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
	displayBanner: jest.fn(),
	getStatusWithColor: jest.fn((status) => status),
	startLoadingIndicator: jest.fn(),
	stopLoadingIndicator: jest.fn(),
	succeedLoadingIndicator: jest.fn(),
	failLoadingIndicator: jest.fn(),
	warnLoadingIndicator: jest.fn(),
	infoLoadingIndicator: jest.fn(),
	displayAiUsageSummary: jest.fn(),
	displayContextAnalysis: jest.fn()
}));

jest.unstable_mockModule(
	'../../../../../scripts/modules/ai-services-unified.js',
	() => ({
		generateTextService: jest.fn().mockResolvedValue({
			mainResult: JSON.stringify({
				subtasks: [
					{
						id: 1,
						title: 'Set up project structure',
						description:
							'Create the basic project directory structure and configuration files',
						dependencies: [],
						details:
							'Initialize package.json, create src/ and test/ directories, set up linting configuration',
						status: 'pending',
						testStrategy:
							'Verify all expected files and directories are created'
					},
					{
						id: 2,
						title: 'Implement core functionality',
						description: 'Develop the main application logic and core features',
						dependencies: [1],
						details:
							'Create main classes, implement business logic, set up data models',
						status: 'pending',
						testStrategy: 'Unit tests for all core functions and classes'
					},
					{
						id: 3,
						title: 'Add user interface',
						description: 'Create the user interface components and layouts',
						dependencies: [2],
						details:
							'Design UI components, implement responsive layouts, add user interactions',
						status: 'pending',
						testStrategy: 'UI tests and visual regression testing'
					}
				]
			}),
			telemetryData: {
				timestamp: new Date().toISOString(),
				userId: '1234567890',
				commandName: 'expand-task',
				modelUsed: 'claude-3-5-sonnet',
				providerName: 'anthropic',
				inputTokens: 1000,
				outputTokens: 500,
				totalTokens: 1500,
				totalCost: 0.012414,
				currency: 'USD'
			}
		})
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/config-manager.js',
	() => ({
		getDefaultSubtasks: jest.fn(() => 3),
		getDebugFlag: jest.fn(() => false)
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/utils/contextGatherer.js',
	() => ({
		ContextGatherer: jest.fn().mockImplementation(() => ({
			gather: jest.fn().mockResolvedValue({
				contextSummary: 'Mock context summary',
				allRelatedTaskIds: [],
				graphVisualization: 'Mock graph'
			})
		}))
	})
);

jest.unstable_mockModule(
	'../../../../../scripts/modules/task-manager/generate-task-files.js',
	() => ({
		default: jest.fn().mockResolvedValue()
	})
);

// Mock external UI libraries
jest.unstable_mockModule('chalk', () => ({
	default: {
		white: { bold: jest.fn((text) => text) },
		cyan: Object.assign(
			jest.fn((text) => text),
			{
				bold: jest.fn((text) => text)
			}
		),
		green: jest.fn((text) => text),
		yellow: jest.fn((text) => text),
		bold: jest.fn((text) => text)
	}
}));

jest.unstable_mockModule('boxen', () => ({
	default: jest.fn((text) => text)
}));

jest.unstable_mockModule('cli-table3', () => ({
	default: jest.fn().mockImplementation(() => ({
		push: jest.fn(),
		toString: jest.fn(() => 'mocked table')
	}))
}));

// Mock process.exit to prevent Jest worker crashes
const mockExit = jest.spyOn(process, 'exit').mockImplementation((code) => {
	throw new Error(`process.exit called with "${code}"`);
});

// Import the mocked modules
const {
	readJSON,
	writeJSON,
	log,
	findTaskById,
	ensureTagMetadata,
	readComplexityReport,
	findProjectRoot
} = await import('../../../../../scripts/modules/utils.js');

const { generateTextService } = await import(
	'../../../../../scripts/modules/ai-services-unified.js'
);

const generateTaskFiles = (
	await import(
		'../../../../../scripts/modules/task-manager/generate-task-files.js'
	)
).default;

// Import the module under test
const { default: expandTask } = await import(
	'../../../../../scripts/modules/task-manager/expand-task.js'
);

describe('expandTask', () => {
	const sampleTasks = {
		master: {
			tasks: [
				{
					id: 1,
					title: 'Task 1',
					description: 'First task',
					status: 'done',
					dependencies: [],
					details: 'Already completed task',
					subtasks: []
				},
				{
					id: 2,
					title: 'Task 2',
					description: 'Second task',
					status: 'pending',
					dependencies: [],
					details: 'Task ready for expansion',
					subtasks: []
				},
				{
					id: 3,
					title: 'Complex Task',
					description: 'A complex task that needs breakdown',
					status: 'pending',
					dependencies: [1],
					details: 'This task involves multiple steps',
					subtasks: []
				},
				{
					id: 4,
					title: 'Task with existing subtasks',
					description: 'Task that already has subtasks',
					status: 'pending',
					dependencies: [],
					details: 'Has existing subtasks',
					subtasks: [
						{
							id: 1,
							title: 'Existing subtask',
							description: 'Already exists',
							status: 'pending',
							dependencies: []
						}
					]
				}
			]
		},
		'feature-branch': {
			tasks: [
				{
					id: 1,
					title: 'Feature Task 1',
					description: 'Task in feature branch',
					status: 'pending',
					dependencies: [],
					details: 'Feature-specific task',
					subtasks: []
				}
			]
		}
	};

	// Create a helper function for a consistent mcpLog mock
	const createMcpLogMock = () => ({
		info: jest.fn(),
		warn: jest.fn(),
		error: jest.fn(),
		debug: jest.fn(),
		success: jest.fn()
	});

	beforeEach(() => {
		jest.clearAllMocks();
		mockExit.mockClear();

		// Default readJSON implementation - returns the tagged structure
		readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
			const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
			const selectedTag = tag || 'master';
			return {
				...sampleTasksCopy[selectedTag],
				tag: selectedTag,
				_rawTaggedData: sampleTasksCopy
			};
		});

		// Default findTaskById implementation
		findTaskById.mockImplementation((tasks, taskId) => {
			const id = parseInt(taskId, 10);
			return tasks.find((t) => t.id === id);
		});

		// Default complexity report (no report available)
		readComplexityReport.mockReturnValue(null);

		// Mock findProjectRoot to return a consistent path for the complexity report
		findProjectRoot.mockReturnValue('/mock/project/root');

		writeJSON.mockResolvedValue();
		generateTaskFiles.mockResolvedValue();
		log.mockImplementation(() => {});

		// Mock console.log to avoid output during tests
		jest.spyOn(console, 'log').mockImplementation(() => {});
	});

	afterEach(() => {
		console.log.mockRestore();
	});

	describe('Basic Functionality', () => {
		test('should expand a task with AI-generated subtasks', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const numSubtasks = 3;
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			const result = await expandTask(
				tasksPath,
				taskId,
				numSubtasks,
				false,
				'',
				context,
				false
			);

			// Assert
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				undefined
			);
			expect(generateTextService).toHaveBeenCalledWith(expect.any(Object));
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tasks: expect.arrayContaining([
						expect.objectContaining({
							id: 2,
							subtasks: expect.arrayContaining([
								expect.objectContaining({
									id: 1,
									title: 'Set up project structure',
									status: 'pending'
								}),
								expect.objectContaining({
									id: 2,
									title: 'Implement core functionality',
									status: 'pending'
								}),
								expect.objectContaining({
									id: 3,
									title: 'Add user interface',
									status: 'pending'
								})
							])
						})
					]),
					tag: 'master',
					_rawTaggedData: expect.objectContaining({
						master: expect.objectContaining({
							tasks: expect.any(Array)
						})
					})
				}),
				'/mock/project/root',
				undefined
			);
			expect(result).toEqual(
				expect.objectContaining({
					task: expect.objectContaining({
						id: 2,
						subtasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Set up project structure',
								status: 'pending'
							}),
							expect.objectContaining({
								id: 2,
								title: 'Implement core functionality',
								status: 'pending'
							}),
							expect.objectContaining({
								id: 3,
								title: 'Add user interface',
								status: 'pending'
							})
						])
					}),
					telemetryData: expect.any(Object)
				})
			);
		});

		test('should handle research flag correctly', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const numSubtasks = 3;
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(
				tasksPath,
				taskId,
				numSubtasks,
				true, // useResearch = true
				'Additional context for research',
				context,
				false
			);

			// Assert
			expect(generateTextService).toHaveBeenCalledWith(
				expect.objectContaining({
					role: 'research',
					commandName: expect.any(String)
				})
			);
		});

		test('should handle complexity report integration without errors', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act & Assert - Should complete without errors
			const result = await expandTask(
				tasksPath,
				taskId,
				undefined, // numSubtasks not specified
				false,
				'',
				context,
				false
			);

			// Assert - Should successfully expand and return the expected structure
			expect(result).toEqual(
				expect.objectContaining({
					task: expect.objectContaining({
						id: 2,
						subtasks: expect.any(Array)
					}),
					telemetryData: expect.any(Object)
				})
			);
			expect(generateTextService).toHaveBeenCalled();
		});
	});

	describe('Tag Handling (The Critical Bug Fix)', () => {
		test('should preserve tagged structure when expanding with default tag', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root',
				tag: 'master' // Explicit tag context
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - CRITICAL: Check tag is passed to readJSON and writeJSON
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				'master'
			);
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tag: 'master',
					_rawTaggedData: expect.objectContaining({
						master: expect.any(Object),
						'feature-branch': expect.any(Object)
					})
				}),
				'/mock/project/root',
				'master' // CRITICAL: Tag must be passed to writeJSON
			);
		});

		test('should preserve tagged structure when expanding with non-default tag', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '1'; // Task in feature-branch
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root',
				tag: 'feature-branch' // Different tag context
			};

			// Configure readJSON to return feature-branch data
			readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
				const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
				return {
					...sampleTasksCopy['feature-branch'],
					tag: 'feature-branch',
					_rawTaggedData: sampleTasksCopy
				};
			});

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - CRITICAL: Check tag preservation for non-default tag
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				'feature-branch'
			);
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tag: 'feature-branch',
					_rawTaggedData: expect.objectContaining({
						master: expect.any(Object),
						'feature-branch': expect.any(Object)
					})
				}),
				'/mock/project/root',
				'feature-branch' // CRITICAL: Correct tag passed to writeJSON
			);
		});

		test('should NOT corrupt tagged structure when tag is undefined', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
				// No tag specified - should default gracefully
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should still preserve structure with undefined tag
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				undefined
			);
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					_rawTaggedData: expect.objectContaining({
						master: expect.any(Object)
					})
				}),
				'/mock/project/root',
				undefined
			);

			// CRITICAL: Verify structure is NOT flattened to the old format
			const writeCallArgs = writeJSON.mock.calls[0][1];
			expect(writeCallArgs).toHaveProperty('tasks'); // Should have tasks property from readJSON mock
			expect(writeCallArgs).toHaveProperty('_rawTaggedData'); // Should preserve tagged structure
		});
	});

	describe('Force Flag Handling', () => {
		test('should replace existing subtasks when force=true', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '4'; // Task with existing subtasks
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, true);

			// Assert - Should replace existing subtasks
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tasks: expect.arrayContaining([
						expect.objectContaining({
							id: 4,
							subtasks: expect.arrayContaining([
								expect.objectContaining({
									id: 1,
									title: 'Set up project structure'
								})
							])
						})
					])
				}),
				'/mock/project/root',
				undefined
			);
		});

		test('should append to existing subtasks when force=false', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '4'; // Task with existing subtasks
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should append to existing subtasks with proper ID increments
			expect(writeJSON).toHaveBeenCalledWith(
				tasksPath,
				expect.objectContaining({
					tasks: expect.arrayContaining([
						expect.objectContaining({
							id: 4,
							subtasks: expect.arrayContaining([
								// Should contain both existing and new subtasks
								expect.any(Object),
								expect.any(Object),
								expect.any(Object),
								expect.any(Object) // 1 existing + 3 new = 4 total
							])
						})
					])
				}),
				'/mock/project/root',
				undefined
			);
		});
	});

	describe('Error Handling', () => {
		test('should handle non-existent task ID', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '999'; // Non-existent task
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			findTaskById.mockReturnValue(null);

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow('Task 999 not found');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		test('should expand tasks regardless of status (including done tasks)', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '1'; // Task with 'done' status
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			const result = await expandTask(
				tasksPath,
				taskId,
				3,
				false,
				'',
				context,
				false
			);

			// Assert - Should successfully expand even 'done' tasks
			expect(writeJSON).toHaveBeenCalled();
			expect(result).toEqual(
				expect.objectContaining({
					task: expect.objectContaining({
						id: 1,
						status: 'done', // Status unchanged
						subtasks: expect.arrayContaining([
							expect.objectContaining({
								id: 1,
								title: 'Set up project structure',
								status: 'pending'
							})
						])
					}),
					telemetryData: expect.any(Object)
				})
			);
		});

		test('should handle AI service failures', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			generateTextService.mockRejectedValueOnce(new Error('AI service error'));

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow('AI service error');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		test('should handle file read errors', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			readJSON.mockImplementation(() => {
				throw new Error('File read failed');
			});

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow('File read failed');

			expect(writeJSON).not.toHaveBeenCalled();
		});

		test('should handle invalid tasks data', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			readJSON.mockReturnValue(null);

			// Act & Assert
			await expect(
				expandTask(tasksPath, taskId, 3, false, '', context, false)
			).rejects.toThrow();
		});
	});

	describe('Output Format Handling', () => {
		test('should display telemetry for CLI output format', async () => {
			// Arrange
			const { displayAiUsageSummary } = await import(
				'../../../../../scripts/modules/ui.js'
			);
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				projectRoot: '/mock/project/root'
				// No mcpLog - should trigger CLI mode
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should display telemetry for CLI users
			expect(displayAiUsageSummary).toHaveBeenCalledWith(
				expect.objectContaining({
					commandName: 'expand-task',
					modelUsed: 'claude-3-5-sonnet',
					totalCost: 0.012414
				}),
				'cli'
			);
		});

		test('should not display telemetry for MCP output format', async () => {
			// Arrange
			const { displayAiUsageSummary } = await import(
				'../../../../../scripts/modules/ui.js'
			);
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should NOT display telemetry for MCP (handled at a higher level)
			expect(displayAiUsageSummary).not.toHaveBeenCalled();
		});
	});

	describe('Edge Cases', () => {
		test('should handle empty additional context', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should work with empty context (but may include project context)
			expect(generateTextService).toHaveBeenCalledWith(
				expect.objectContaining({
					prompt: expect.stringMatching(/.*/) // Just ensure prompt exists
				})
			);
		});

		test('should handle additional context correctly', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const additionalContext = 'Use React hooks and TypeScript';
			const context = {
				mcpLog: createMcpLogMock(),
				projectRoot: '/mock/project/root'
			};

			// Act
			await expandTask(
				tasksPath,
				taskId,
				3,
				false,
				additionalContext,
				context,
				false
			);

			// Assert - Should include additional context in the prompt
			expect(generateTextService).toHaveBeenCalledWith(
				expect.objectContaining({
					prompt: expect.stringContaining('Use React hooks and TypeScript')
				})
			);
		});

		test('should handle missing project root in context', async () => {
			// Arrange
			const tasksPath = 'tasks/tasks.json';
			const taskId = '2';
			const context = {
				mcpLog: createMcpLogMock()
				// No projectRoot in context
			};

			// Act
			await expandTask(tasksPath, taskId, 3, false, '', context, false);

			// Assert - Should derive project root from tasksPath
			expect(findProjectRoot).toHaveBeenCalledWith(tasksPath);
			expect(readJSON).toHaveBeenCalledWith(
				tasksPath,
				'/mock/project/root',
				undefined
			);
		});
	});
});