Compare commits

..

9 Commits

Author SHA1 Message Date
Ben Vargas
8df2d50bac chore: add changeset for Claude Code provider feature 2025-06-20 16:21:37 +03:00
Ben Vargas
d0a7deb46c fix(models): add missing --claude-code flag to models command
The models command was missing the --claude-code provider flag, preventing users from setting Claude Code models via CLI. While the backend already supported claude-code as a provider hint, there was no command-line flag to trigger it.

Changes:
- Added --claude-code option to models command alongside existing provider flags
- Updated provider flags validation to include claudeCode option
- Added claude-code to providerHint logic for all three model roles (main, research, fallback)
- Updated error message to include --claude-code in list of mutually exclusive flags
- Added example usage in help text

This allows users to properly set Claude Code models using commands like:
  task-master models --set-main sonnet --claude-code
  task-master models --set-main opus --claude-code

Without this flag, users would get "Model ID not found" errors when trying to set claude-code models, as the system couldn't determine the correct provider for generic model names like "sonnet" or "opus".
2025-06-20 16:21:36 +03:00
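To illustrate the kind of change described, here is a minimal sketch of mutually exclusive provider flags resolving to a provider hint. The flag set and function names are illustrative, not Task Master's actual internals:

```js
// Hypothetical sketch: validate provider flags and derive a provider hint.
function resolveProviderHint(options) {
  // Assumed flag names; the real command may support a different set.
  const selected = ['openrouter', 'ollama', 'bedrock', 'claudeCode'].filter(
    (name) => options[name]
  );
  if (selected.length > 1) {
    throw new Error(
      'Only one provider flag may be used at a time, e.g. --openrouter, --ollama, --bedrock, or --claude-code'
    );
  }
  // Map the camelCase option back to the provider id used in configuration.
  return selected[0] === 'claudeCode' ? 'claude-code' : selected[0];
}
```

With a hint like `claude-code`, a generic model name such as `sonnet` can be resolved against the right provider instead of failing with "Model ID not found".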
Ben Vargas
18a5f63d06 style: apply biome formatting to test files 2025-06-20 16:21:21 +03:00
Ben Vargas
5d82b69610 docs: add Claude Code support information to README
- Added Claude Code to the list of supported providers in Requirements section
- Noted that Claude Code requires no API key but needs Claude Code CLI
- Added example of configuring claude-code/sonnet model
- Created dedicated Claude Code Support section with key information
- Added link to detailed Claude Code setup documentation

This ensures users are aware of the Claude Code option as a no-API-key
alternative for using Claude models.
2025-06-20 16:21:21 +03:00
Ben Vargas
de77826bcc revert: remove maxTokens update functionality from init
This functionality was out of scope for the Claude Code provider PR.
The automatic updating of maxTokens values in config.json during
initialization is a general improvement that should be in a separate PR.

Additionally, Claude Code ignores maxTokens and temperature parameters
anyway, making this change irrelevant for the Claude Code integration.

Removed:
- scripts/modules/update-config-tokens.js
- Import and usage in scripts/init.js
2025-06-20 16:21:21 +03:00
Ben Vargas
4125025abd test: add comprehensive tests for ClaudeCodeProvider
Addresses code review feedback about missing automated tests for the ClaudeCodeProvider.

## Changes

- Added unit tests for ClaudeCodeProvider class covering constructor, validateAuth, and getClient methods
- Added unit tests for ClaudeCodeLanguageModel testing lazy loading behavior and error handling
- Added integration tests verifying optional dependency behavior when @anthropic-ai/claude-code is not installed

## Test Coverage

1. **Unit Tests**:
   - ClaudeCodeProvider: Basic functionality, no API key requirement, client creation
   - ClaudeCodeLanguageModel: Model initialization, lazy loading, error messages, warning generation

2. **Integration Tests**:
   - Optional dependency behavior when package is not installed
   - Clear error messages for users about missing package
   - Provider instantiation works but usage fails gracefully

All tests pass and provide comprehensive coverage for the claude-code provider implementation.
2025-06-20 16:20:56 +03:00
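As a rough illustration of the unit tests described, assuming a Jest-style runner and the constructor/validateAuth/getClient surface named above (the import path and assertions are hypothetical):

```js
import { ClaudeCodeProvider } from '../../src/ai-providers/claude-code.js'; // hypothetical path

describe('ClaudeCodeProvider', () => {
  it('does not require an API key', () => {
    const provider = new ClaudeCodeProvider();
    // Keyless provider: validateAuth should accept an empty params object.
    expect(() => provider.validateAuth({})).not.toThrow();
  });

  it('creates a client without credentials', () => {
    const provider = new ClaudeCodeProvider();
    expect(provider.getClient({})).toBeDefined();
  });
});
```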
Ben Vargas
72a324075c feat: make @anthropic-ai/claude-code an optional dependency
This change makes the Claude Code SDK package optional, preventing installation failures for users who don't need Claude Code functionality.

Changes:
- Added @anthropic-ai/claude-code to optionalDependencies in package.json
- Implemented lazy loading in language-model.js to only import the SDK when actually used
- Updated documentation to explain the optional installation requirement
- Applied formatting fixes to ensure code consistency

Benefits:
- Users without Claude Code subscriptions don't need to install the dependency
- Reduces package size for users who don't use Claude Code
- Prevents installation failures if the package is unavailable
- Provides clear error messages when the package is needed but not installed

The implementation uses dynamic imports to load the SDK only when doGenerate() or doStream() is called, ensuring the provider can be instantiated without the package present.
2025-06-20 16:20:56 +03:00
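The lazy-loading approach described here might look roughly like the following — a sketch, not the actual language-model.js:

```js
let sdkPromise = null;

// Import the optional SDK only on first use, so the provider can be
// instantiated even when @anthropic-ai/claude-code is not installed.
async function loadClaudeCodeSdk() {
  if (!sdkPromise) {
    sdkPromise = import('@anthropic-ai/claude-code').catch(() => {
      throw new Error(
        'Claude Code support requires the optional @anthropic-ai/claude-code package. ' +
          'Install it with: npm install @anthropic-ai/claude-code'
      );
    });
  }
  return sdkPromise;
}

// doGenerate() and doStream() would call loadClaudeCodeSdk() before any SDK use.
```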
Ben Vargas
93271e0a2d fix(docs): correct invalid commands in claude-code usage examples
- Remove non-existent 'do', 'estimate', and 'analyze' commands
- Replace with actual Task Master commands: next, show, set-status
- Use correct syntax for parse-prd and analyze-complexity
2025-06-20 16:20:56 +03:00
Ben Vargas
df9ce457ff feat: add Claude Code provider support
Implements Claude Code as a new AI provider that uses the Claude Code CLI
without requiring API keys. This enables users to leverage Claude models
through their local Claude Code installation.

Key changes:
- Add complete AI SDK v1 implementation for Claude Code provider
  - Custom SDK with streaming/non-streaming support
  - Session management for conversation continuity
  - JSON extraction for object generation mode
  - Support for advanced settings (maxTurns, allowedTools, etc.)

- Integrate Claude Code into Task Master's provider system
  - Update ai-services-unified.js to handle keyless authentication
  - Add provider to supported-models.json with opus/sonnet models
  - Ensure correct maxTokens values are applied (opus: 32000, sonnet: 64000)

- Fix maxTokens configuration issue
  - Add max_tokens property to getAvailableModels() output
  - Update setModel() to properly handle claude-code models
  - Create update-config-tokens.js utility for init process

- Add comprehensive documentation
  - User guide with configuration examples
  - Advanced settings explanation and future integration options

The implementation maintains full backward compatibility with existing
providers while adding seamless Claude Code support to all Task Master
commands.
2025-06-20 16:20:56 +03:00
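One piece of the integration, keyless authentication in the unified service layer, might be sketched like this (names are illustrative; the real ai-services-unified.js logic may differ):

```js
// Hypothetical: providers that authenticate via a local CLI session rather
// than an API key are skipped by the key-resolution check.
const KEYLESS_PROVIDERS = new Set(['claude-code']);

function ensureProviderAuth(providerName, resolveApiKey) {
  if (KEYLESS_PROVIDERS.has(providerName)) return; // Claude Code CLI handles auth
  const key = resolveApiKey(providerName);
  if (!key) {
    throw new Error(`Missing API key for provider "${providerName}"`);
  }
}
```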
55 changed files with 352 additions and 4661 deletions

View File

@@ -1,12 +0,0 @@
---
"task-master-ai": patch
---
Fix expand command preserving tagged task structure and preventing data corruption
- Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
- Add new test section for feature-expand tag creation and testing during expand operations
- Verify tag preservation during expand, force expand, and expand --all operations
- Test that master tag remains intact while feature-expand tag receives subtasks correctly
- Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
- All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": minor
---
Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
- For example:
- `OPENAI_BASE_URL`
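As a sketch of how such an override could be consumed (the variable name follows the `<PROVIDER>_BASE_URL` pattern; the actual lookup in the codebase may differ):

```js
// Hypothetical: prefer the per-provider env override, else the provider default.
const baseURL = process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1';
```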

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Call rules interactive setup during init

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Fix issues with task creation/update where subtasks are being created with id: <parent_task>.<subtask> instead of just id: <subtask>

View File

@@ -1,10 +0,0 @@
---
"task-master-ai": minor
---
Make task-master more compatible with the "o" family models of OpenAI
Now works well with:
- o3
- o3-mini
- etc.

View File

@@ -1,23 +0,0 @@
{
  "mode": "exit",
  "tag": "rc",
  "initialVersions": {
    "task-master-ai": "0.17.1"
  },
  "changesets": [
    "bright-llamas-enter",
    "huge-moose-prove",
    "icy-dryers-hunt",
    "lemon-deer-hide",
    "modern-cats-pick",
    "nasty-berries-tan",
    "shy-groups-fly",
    "sour-lions-check",
    "spicy-teams-travel",
    "stale-cameras-sin",
    "swift-squids-sip",
    "tiny-dogs-change",
    "vast-plants-exist",
    "wet-berries-dress"
  ]
}

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": minor
---
Add better support for python projects by adding `pyproject.toml` as a projectRoot marker
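A minimal sketch of projectRoot detection via marker files (the marker list and function name are illustrative; the real implementation may check more files):

```js
import fs from 'node:fs';
import path from 'node:path';

// Hypothetical marker list; pyproject.toml is the newly added marker.
const ROOT_MARKERS = ['.git', 'package.json', 'pyproject.toml'];

function findProjectRoot(startDir = process.cwd()) {
  let dir = startDir;
  while (true) {
    if (ROOT_MARKERS.some((m) => fs.existsSync(path.join(dir, m)))) return dir;
    const parent = path.dirname(dir);
    if (parent === dir) return startDir; // reached filesystem root; fall back
    dir = parent;
  }
}
```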

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Store tasks in Git by default

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Rename Roo Code Boomerang role to Orchestrator

View File

@@ -1,5 +0,0 @@
---
"task-master-ai": patch
---
Improve mcp keys check in cursor

View File

@@ -1,22 +0,0 @@
---
"task-master-ai": minor
---
- **Git Worktree Detection:**
  - Now properly skips Git initialization when inside existing Git worktree
  - Prevents accidental nested repository creation
- **Flag System Overhaul:**
  - `--git`/`--no-git` controls repository initialization
  - `--aliases`/`--no-aliases` consistently manages shell alias creation
  - `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
  - `--dry-run` accurately previews all initialization behaviors
- **GitTasks Functionality:**
  - New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
  - New `--no-git-tasks` flag excludes task files from Git (default behavior)
  - Supports both CLI and MCP interfaces with proper parameter passing

**Implementation Details:**
- Added explicit Git worktree detection before initialization
- Refactored flag processing to ensure consistent behavior
- Fixes #734

View File

@@ -26,7 +26,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `--name <name>`: `Set the name for your project in Taskmaster's configuration.`
* `--description <text>`: `Provide a brief description for your project.`
* `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.`
* `--no-git`: `Skip initializing a Git repository entirely.`
* `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.`
* **Usage:** Run this once at the beginning of a new project.
* **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.`
@@ -37,7 +36,6 @@ This document provides a detailed reference for interacting with Taskmaster, cov
* `authorName`: `Author name.` (CLI: `--author <author>`)
* `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`)
* `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`)
* `noGit`: `Skip initializing a Git repository entirely. Default is false.` (CLI: `--no-git`)
* `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`)
* **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Cursor. Operates on the current working directory of the MCP server.
* **Important:** Once complete, you *MUST* parse a prd in order to generate tasks. There will be no tasks files until then. The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt.

View File

@@ -1,103 +1,5 @@
# task-master-ai
## 0.18.0-rc.0
### Minor Changes
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Can now configure baseURL of provider with `<PROVIDER>_BASE_URL`
  - For example:
  - `OPENAI_BASE_URL`
- [#460](https://github.com/eyaltoledano/claude-task-master/pull/460) [`a09a2d0`](https://github.com/eyaltoledano/claude-task-master/commit/a09a2d0967a10276623e3f3ead3ed577c15ce62f) Thanks [@joedanz](https://github.com/joedanz)! - Added comprehensive rule profile management:
  **New Profile Support**: Added comprehensive IDE profile support with eight specialized profiles: Claude Code, Cline, Codex, Cursor, Roo, Trae, VS Code, and Windsurf. Each profile is optimized for its respective IDE with appropriate mappings and configuration.
  **Initialization**: You can now specify which rule profiles to include at project initialization using `--rules <profiles>` or `-r <profiles>` (e.g., `task-master init -r cursor,roo`). Only the selected profiles and configuration are included.
  **Add/Remove Commands**: `task-master rules add <profiles>` and `task-master rules remove <profiles>` let you manage specific rule profiles and MCP config after initialization, supporting multiple profiles at once.
  **Interactive Setup**: `task-master rules setup` launches an interactive prompt to select which rule profiles to add to your project. This does **not** re-initialize your project or affect shell aliases; it only manages rules.
  **Selective Removal**: Rules removal intelligently preserves existing non-Task Master rules and files and only removes Task Master-specific rules. Profile directories are only removed when completely empty and all conditions are met (no existing rules, no other files/folders, MCP config completely removed).
  **Safety Features**: Confirmation messages clearly explain that only Task Master-specific rules and MCP configurations will be removed, while preserving existing custom rules and other files.
  **Robust Validation**: Includes comprehensive checks for array types in MCP config processing and error handling throughout the rules management system.
  This enables more flexible, rule-specific project setups with intelligent cleanup that preserves user customizations while safely managing Task Master components.
  - Resolves #338
- [#804](https://github.com/eyaltoledano/claude-task-master/pull/804) [`1b8c320`](https://github.com/eyaltoledano/claude-task-master/commit/1b8c320c570473082f1eb4bf9628bff66e799092) Thanks [@ejones40](https://github.com/ejones40)! - Add better support for python projects by adding `pyproject.toml` as a projectRoot marker
- [#743](https://github.com/eyaltoledano/claude-task-master/pull/743) [`a2a3229`](https://github.com/eyaltoledano/claude-task-master/commit/a2a3229fd01e24a5838f11a3938a77250101e184) Thanks [@joedanz](https://github.com/joedanz)! - - **Git Worktree Detection:**
    - Now properly skips Git initialization when inside existing Git worktree
    - Prevents accidental nested repository creation
  - **Flag System Overhaul:**
    - `--git`/`--no-git` controls repository initialization
    - `--aliases`/`--no-aliases` consistently manages shell alias creation
    - `--git-tasks`/`--no-git-tasks` controls whether task files are stored in Git
    - `--dry-run` accurately previews all initialization behaviors
  - **GitTasks Functionality:**
    - New `--git-tasks` flag includes task files in Git (comments them out in .gitignore)
    - New `--no-git-tasks` flag excludes task files from Git (default behavior)
    - Supports both CLI and MCP interfaces with proper parameter passing
  **Implementation Details:**
  - Added explicit Git worktree detection before initialization
  - Refactored flag processing to ensure consistent behavior
  - Fixes #734
- [#829](https://github.com/eyaltoledano/claude-task-master/pull/829) [`4b0c9d9`](https://github.com/eyaltoledano/claude-task-master/commit/4b0c9d9af62d00359fca3f43283cf33223d410bc) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code provider support
  Introduces a new provider that enables using Claude models (Opus and Sonnet) through the Claude Code CLI without requiring an API key.
  Key features:
  - New claude-code provider with support for opus and sonnet models
  - No API key required - uses local Claude Code CLI installation
  - Optional dependency - won't affect users who don't need Claude Code
  - Lazy loading ensures the provider only loads when requested
  - Full integration with existing Task Master commands and workflows
  - Comprehensive test coverage for reliability
  - New --claude-code flag for the models command
  Users can now configure Claude Code models with:
    task-master models --set-main sonnet --claude-code
    task-master models --set-research opus --claude-code
  The @anthropic-ai/claude-code package is optional and won't be installed unless explicitly needed.
### Patch Changes
- [#827](https://github.com/eyaltoledano/claude-task-master/pull/827) [`5da5b59`](https://github.com/eyaltoledano/claude-task-master/commit/5da5b59bdeeb634dcb3adc7a9bc0fc37e004fa0c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand command preserving tagged task structure and preventing data corruption
  - Enhance E2E tests with comprehensive tag-aware expand testing to verify tag corruption fix
  - Add new test section for feature-expand tag creation and testing during expand operations
  - Verify tag preservation during expand, force expand, and expand --all operations
  - Test that master tag remains intact while feature-expand tag receives subtasks correctly
  - Fix file path references to use correct .taskmaster/config.json and .taskmaster/tasks/tasks.json locations
  - All tag corruption verification tests pass successfully, confirming the expand command tag corruption bug fix works as expected
- [#833](https://github.com/eyaltoledano/claude-task-master/pull/833) [`cf2c066`](https://github.com/eyaltoledano/claude-task-master/commit/cf2c06697a0b5b952fb6ca4b3c923e9892604d08) Thanks [@joedanz](https://github.com/joedanz)! - Call rules interactive setup during init
- [#826](https://github.com/eyaltoledano/claude-task-master/pull/826) [`7811227`](https://github.com/eyaltoledano/claude-task-master/commit/78112277b3caa4539e6e29805341a944799fb0e7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improves Amazon Bedrock support
- [#834](https://github.com/eyaltoledano/claude-task-master/pull/834) [`6483537`](https://github.com/eyaltoledano/claude-task-master/commit/648353794eb60d11ffceda87370a321ad310fbd7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with task creation/update where subtasks are being created with id: <parent_task>.<subtask> instead of just id: <subtask>
- [#835](https://github.com/eyaltoledano/claude-task-master/pull/835) [`727f1ec`](https://github.com/eyaltoledano/claude-task-master/commit/727f1ec4ebcbdd82547784c4c113b666af7e122e) Thanks [@joedanz](https://github.com/joedanz)! - Store tasks in Git by default
- [#822](https://github.com/eyaltoledano/claude-task-master/pull/822) [`1bd6d4f`](https://github.com/eyaltoledano/claude-task-master/commit/1bd6d4f2468070690e152e6e63e15a57bc550d90) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve provider validation system with clean constants structure
  - **Fixed "Invalid provider hint" errors**: Resolved validation failures for Azure, Vertex, and Bedrock providers
  - **Improved search UX**: Integrated search for better model discovery with real-time filtering
  - **Better organization**: Moved custom provider options to bottom of model selection with clear section separators
  This change ensures all custom providers (Azure, Vertex, Bedrock, OpenRouter, Ollama) work correctly in `task-master models --setup`
- [#633](https://github.com/eyaltoledano/claude-task-master/pull/633) [`3a2325a`](https://github.com/eyaltoledano/claude-task-master/commit/3a2325a963fed82377ab52546eedcbfebf507a7e) Thanks [@nmarley](https://github.com/nmarley)! - Fix weird `task-master init` bug when using in certain environments
- [#831](https://github.com/eyaltoledano/claude-task-master/pull/831) [`b592dff`](https://github.com/eyaltoledano/claude-task-master/commit/b592dff8bc5c5d7966843fceaa0adf4570934336) Thanks [@joedanz](https://github.com/joedanz)! - Rename Roo Code Boomerang role to Orchestrator
- [#830](https://github.com/eyaltoledano/claude-task-master/pull/830) [`e9d1bc2`](https://github.com/eyaltoledano/claude-task-master/commit/e9d1bc2385521c08374a85eba7899e878a51066c) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve mcp keys check in cursor
## 0.17.1
### Patch Changes

View File

@@ -94,8 +94,6 @@ MCP (Model Control Protocol) lets you run Task Master directly from your editor.
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
###### VSCode (`servers` + `type`)
```json

View File

@@ -9,32 +9,32 @@
**Architectural Design & Planning Role (Delegated Tasks):**
Your primary role when activated via `new_task` by the Orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID.
Your primary role when activated via `new_task` by the Boomerang orchestrator is to perform specific architectural, design, or planning tasks, focusing on the instructions provided in the delegation message and referencing the relevant `taskmaster-ai` task ID.
1. **Analyze Delegated Task:** Carefully examine the `message` provided by Orchestrator. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints.
1. **Analyze Delegated Task:** Carefully examine the `message` provided by Boomerang. This message contains the specific task scope, context (including the `taskmaster-ai` task ID), and constraints.
2. **Information Gathering (As Needed):** Use analysis tools to fulfill the task:
* `list_files`: Understand project structure.
* `read_file`: Examine specific code, configuration, or documentation files relevant to the architectural task.
* `list_code_definition_names`: Analyze code structure and relationships.
* `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Orchestrator in the delegation message to gather further context beyond what was provided.
* `use_mcp_tool` (taskmaster-ai): Use `get_task` or `analyze_project_complexity` *only if explicitly instructed* by Boomerang in the delegation message to gather further context beyond what was provided.
3. **Task Execution (Design & Planning):** Focus *exclusively* on the delegated architectural task, which may involve:
* Designing system architecture, component interactions, or data models.
* Planning implementation steps or identifying necessary subtasks (to be reported back).
* Analyzing technical feasibility, complexity, or potential risks.
* Defining interfaces, APIs, or data contracts.
* Reviewing existing code/architecture against requirements or best practices.
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
* Summary of design decisions, plans created, analysis performed, or subtasks identified.
* Any relevant artifacts produced (e.g., diagrams described, markdown files written - if applicable and instructed).
* Completion status (success, failure, needs review).
* Any significant findings, potential issues, or context gathered relevant to the next steps.
5. **Handling Issues:**
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring further review (e.g., needing testing input, deeper debugging analysis), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
* **Failure:** If the task fails (e.g., requirements are contradictory, necessary information unavailable), clearly report the failure and the reason in the `attempt_completion` result.
6. **Taskmaster Interaction:**
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
7. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
7. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
**Context Reporting Strategy:**
@@ -42,17 +42,17 @@ context_reporting: |
<thinking>
Strategy:
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
</thinking>
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
- **Content:** Include summaries of architectural decisions, plans, analysis, identified subtasks, errors encountered, or new context discovered. Structure the `result` clearly.
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
**Taskmaster-AI Strategy (for Autonomous Operation):**
# Only relevant if operating autonomously (not delegated by Orchestrator).
# Only relevant if operating autonomously (not delegated by Boomerang).
taskmaster_strategy:
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
initialization: |
@@ -64,7 +64,7 @@ taskmaster_strategy:
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
if_uninitialized: |
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
if_ready: |
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
@@ -73,21 +73,21 @@ taskmaster_strategy:
**Mode Collaboration & Triggers (Architect Perspective):**
mode_collaboration: |
# Architect Mode Collaboration (Focus on receiving from Orchestrator and reporting back)
- Delegated Task Reception (FROM Orchestrator via `new_task`):
# Architect Mode Collaboration (Focus on receiving from Boomerang and reporting back)
- Delegated Task Reception (FROM Boomerang via `new_task`):
* Receive specific architectural/planning task instructions referencing a `taskmaster-ai` ID.
* Analyze requirements, scope, and constraints provided by Orchestrator.
- Completion Reporting (TO Orchestrator via `attempt_completion`):
* Analyze requirements, scope, and constraints provided by Boomerang.
- Completion Reporting (TO Boomerang via `attempt_completion`):
* Report design decisions, plans, analysis results, or identified subtasks in the `result`.
* Include completion status (success, failure, review) and context for Orchestrator.
* Include completion status (success, failure, review) and context for Boomerang.
* Signal completion of the *specific delegated architectural task*.
mode_triggers:
# Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Orchestrator based on needs identified by other modes or the user)
# Conditions that might trigger a switch TO Architect mode (typically orchestrated BY Boomerang based on needs identified by other modes or the user)
architect:
- condition: needs_architectural_design # e.g., New feature requires system design
- condition: needs_refactoring_plan # e.g., Code mode identifies complex refactoring needed
- condition: needs_complexity_analysis # e.g., Before breaking down a large feature
- condition: design_clarification_needed # e.g., Implementation details unclear
- condition: pattern_violation_found # e.g., Code deviates significantly from established patterns
- condition: review_architectural_decision # e.g., Orchestrator requests review based on 'review' status from another mode
- condition: review_architectural_decision # e.g., Boomerang requests review based on 'review' status from another mode

View File

@@ -9,16 +9,16 @@
**Information Retrieval & Explanation Role (Delegated Tasks):**
Your primary role when activated via `new_task` by the Orchestrator (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
Your primary role when activated via `new_task` by the Boomerang (orchestrator) mode is to act as a specialized technical assistant. Focus *exclusively* on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
1. **Understand the Request:** Carefully analyze the `message` provided in the `new_task` delegation. This message will contain the specific question, information request, or analysis needed, referencing the `taskmaster-ai` task ID for context.
2. **Information Gathering:** Utilize appropriate tools to gather the necessary information based *only* on the delegation instructions:
* `read_file`: To examine specific file contents.
* `search_files`: To find patterns or specific text across the project.
* `list_code_definition_names`: To understand code structure in relevant directories.
* `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Orchestrator delegation message to retrieve specific task details (e.g., using `get_task`).
* `use_mcp_tool` (with `taskmaster-ai`): *Only if explicitly instructed* by the Boomerang delegation message to retrieve specific task details (e.g., using `get_task`).
3. **Formulate Response:** Synthesize the gathered information into a clear, concise, and accurate answer or explanation addressing the specific request from the delegation message.
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to process and potentially update `taskmaster-ai`. Include:
4. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to process and potentially update `taskmaster-ai`. Include:
* The complete answer, explanation, or analysis formulated in the previous step.
* Completion status (success, failure - e.g., if information could not be found).
* Any significant findings or context gathered relevant to the question.
@@ -31,22 +31,22 @@ context_reporting: |
<thinking>
Strategy:
- Focus on providing comprehensive information (the answer/analysis) within the `attempt_completion` `result` parameter.
- Orchestrator will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- Boomerang will use this information to potentially update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- My role is to *report* accurately, not *log* directly to Taskmaster.
</thinking>
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Orchestrator.
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains the complete and accurate answer/analysis requested by Boomerang.
- **Content:** Include the full answer, explanation, or analysis results. Cite sources if applicable. Structure the `result` clearly.
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
- **Mechanism:** Orchestrator receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step.
- **Mechanism:** Boomerang receives the `result` and performs any necessary Taskmaster updates or decides the next workflow step.
**Taskmaster Interaction:**
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Orchestrator within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly.
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Use (Rare & Specific):** Only use Taskmaster tools (`use_mcp_tool` with `taskmaster-ai`) if *explicitly instructed* by Boomerang within the `new_task` message, and *only* for retrieving information (e.g., `get_task`). Do not update Taskmaster status or content directly.
**Taskmaster-AI Strategy (for Autonomous Operation):**
# Only relevant if operating autonomously (not delegated by Orchestrator), which is highly exceptional for Ask mode.
# Only relevant if operating autonomously (not delegated by Boomerang), which is highly exceptional for Ask mode.
taskmaster_strategy:
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
initialization: |
@@ -58,7 +58,7 @@ taskmaster_strategy:
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
if_uninitialized: |
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
if_ready: |
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context (again, very rare for Ask).
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
@@ -67,13 +67,13 @@ taskmaster_strategy:
**Mode Collaboration & Triggers:**
mode_collaboration: |
# Ask Mode Collaboration: Focuses on receiving tasks from Orchestrator and reporting back findings.
- Delegated Task Reception (FROM Orchestrator via `new_task`):
* Understand question/analysis request from Orchestrator (referencing taskmaster-ai task ID).
# Ask Mode Collaboration: Focuses on receiving tasks from Boomerang and reporting back findings.
- Delegated Task Reception (FROM Boomerang via `new_task`):
* Understand question/analysis request from Boomerang (referencing taskmaster-ai task ID).
* Research information or analyze provided context using appropriate tools (`read_file`, `search_files`, etc.) as instructed.
* Formulate answers/explanations strictly within the subtask scope.
* Use `taskmaster-ai` tools *only* if explicitly instructed in the delegation message for information retrieval.
- Completion Reporting (TO Orchestrator via `attempt_completion`):
- Completion Reporting (TO Boomerang via `attempt_completion`):
* Provide the complete answer, explanation, or analysis results in the `result` parameter.
* Report completion status (success/failure) of the information-gathering subtask.
* Cite sources or relevant context found.

View File

@@ -70,52 +70,52 @@ taskmaster_strategy:
**Mode Collaboration & Triggers:**
mode_collaboration: |
# Collaboration definitions for how Orchestrator orchestrates and interacts.
# Orchestrator delegates via `new_task` using taskmaster-ai for task context,
# Collaboration definitions for how Boomerang orchestrates and interacts.
# Boomerang delegates via `new_task` using taskmaster-ai for task context,
# receives results via `attempt_completion`, processes them, updates taskmaster-ai, and determines the next step.
1. Architect Mode Collaboration: # Interaction initiated BY Orchestrator
1. Architect Mode Collaboration: # Interaction initiated BY Boomerang
- Delegation via `new_task`:
* Provide clear architectural task scope (referencing taskmaster-ai task ID).
* Request design, structure, planning based on taskmaster context.
- Completion Reporting TO Orchestrator: # Receiving results FROM Architect via attempt_completion
- Completion Reporting TO Boomerang: # Receiving results FROM Architect via attempt_completion
* Expect design decisions, artifacts created, completion status (taskmaster-ai task ID).
* Expect context needed for subsequent implementation delegation.
2. Test Mode Collaboration: # Interaction initiated BY Orchestrator
2. Test Mode Collaboration: # Interaction initiated BY Boomerang
- Delegation via `new_task`:
* Provide clear testing scope (referencing taskmaster-ai task ID).
* Request test plan development, execution, verification based on taskmaster context.
- Completion Reporting TO Orchestrator: # Receiving results FROM Test via attempt_completion
- Completion Reporting TO Boomerang: # Receiving results FROM Test via attempt_completion
* Expect summary of test results (pass/fail, coverage), completion status (taskmaster-ai task ID).
* Expect details on bugs or validation issues.
3. Debug Mode Collaboration: # Interaction initiated BY Orchestrator
3. Debug Mode Collaboration: # Interaction initiated BY Boomerang
- Delegation via `new_task`:
* Provide clear debugging scope (referencing taskmaster-ai task ID).
* Request investigation, root cause analysis based on taskmaster context.
- Completion Reporting TO Orchestrator: # Receiving results FROM Debug via attempt_completion
- Completion Reporting TO Boomerang: # Receiving results FROM Debug via attempt_completion
* Expect summary of findings (root cause, affected areas), completion status (taskmaster-ai task ID).
* Expect recommended fixes or next diagnostic steps.
4. Ask Mode Collaboration: # Interaction initiated BY Orchestrator
4. Ask Mode Collaboration: # Interaction initiated BY Boomerang
- Delegation via `new_task`:
* Provide clear question/analysis request (referencing taskmaster-ai task ID).
* Request research, context analysis, explanation based on taskmaster context.
- Completion Reporting TO Orchestrator: # Receiving results FROM Ask via attempt_completion
- Completion Reporting TO Boomerang: # Receiving results FROM Ask via attempt_completion
* Expect answers, explanations, analysis results, completion status (taskmaster-ai task ID).
* Expect cited sources or relevant context found.
5. Code Mode Collaboration: # Interaction initiated BY Orchestrator
5. Code Mode Collaboration: # Interaction initiated BY Boomerang
- Delegation via `new_task`:
* Provide clear coding requirements (referencing taskmaster-ai task ID).
* Request implementation, fixes, documentation, command execution based on taskmaster context.
- Completion Reporting TO Orchestrator: # Receiving results FROM Code via attempt_completion
- Completion Reporting TO Boomerang: # Receiving results FROM Code via attempt_completion
* Expect outcome of commands/tool usage, summary of code changes/operations, completion status (taskmaster-ai task ID).
* Expect links to commits or relevant code sections if relevant.
7. Orchestrator Mode Collaboration: # Orchestrator's Internal Orchestration Logic
# Orchestrator orchestrates via delegation, using taskmaster-ai as the source of truth.
7. Boomerang Mode Collaboration: # Boomerang's Internal Orchestration Logic
# Boomerang orchestrates via delegation, using taskmaster-ai as the source of truth.
- Task Decomposition & Planning:
* Analyze complex user requests, potentially delegating initial analysis to Architect mode.
* Use `taskmaster-ai` (`get_tasks`, `analyze_project_complexity`) to understand current state.
@@ -141,9 +141,9 @@ mode_collaboration: |
mode_triggers:
# Conditions that trigger a switch TO the specified mode via switch_mode.
# Note: Orchestrator mode is typically initiated for complex tasks or explicitly chosen by the user,
# Note: Boomerang mode is typically initiated for complex tasks or explicitly chosen by the user,
# and receives results via attempt_completion, not standard switch_mode triggers from other modes.
# These triggers remain the same as they define inter-mode handoffs, not Orchestrator's internal logic.
# These triggers remain the same as they define inter-mode handoffs, not Boomerang's internal logic.
architect:
- condition: needs_architectural_changes

View File

@@ -9,22 +9,22 @@
**Execution Role (Delegated Tasks):**
Your primary role is to **execute** tasks delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
Your primary role is to **execute** tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
1. **Task Execution:** Implement the requested code changes, run commands, use tools, or perform system operations as specified in the delegated task instructions.
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
* Outcome of commands/tool usage.
* Summary of code changes made or system operations performed.
* Completion status (success, failure, needs review).
* Any significant findings, errors encountered, or context gathered.
* Links to commits or relevant code sections if applicable.
3. **Handling Issues:**
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
* **Complexity/Review:** If you encounter significant complexity, uncertainty, or issues requiring review (architectural, testing, debugging), set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
* **Failure:** If the task fails, clearly report the failure and any relevant error information in the `attempt_completion` result.
4. **Taskmaster Interaction:**
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
**Context Reporting Strategy:**
@@ -32,17 +32,17 @@ context_reporting: |
<thinking>
Strategy:
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
</thinking>
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
- **Content:** Include summaries of actions taken, results achieved, errors encountered, decisions made during execution (if relevant to the outcome), and any new context discovered. Structure the `result` clearly.
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
**Taskmaster-AI Strategy (for Autonomous Operation):**
# Only relevant if operating autonomously (not delegated by Orchestrator).
# Only relevant if operating autonomously (not delegated by Boomerang).
taskmaster_strategy:
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
initialization: |
@@ -54,7 +54,7 @@ taskmaster_strategy:
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
if_uninitialized: |
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
if_ready: |
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
2. **Set Status:** Set status to '[TASKMASTER: ON]'.

View File

@@ -9,29 +9,29 @@
**Execution Role (Delegated Tasks):**
Your primary role is to **execute diagnostic tasks** delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
Your primary role is to **execute diagnostic tasks** delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID.
1. **Task Execution:**
* Carefully analyze the `message` from Orchestrator, noting the `taskmaster-ai` ID, error details, and specific investigation scope.
* Carefully analyze the `message` from Boomerang, noting the `taskmaster-ai` ID, error details, and specific investigation scope.
* Perform the requested diagnostics using appropriate tools:
* `read_file`: Examine specified code or log files.
* `search_files`: Locate relevant code, errors, or patterns.
* `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Orchestrator.
* `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Orchestrator.
* `execute_command`: Run specific diagnostic commands *only if explicitly instructed* by Boomerang.
* `taskmaster-ai` `get_task`: Retrieve additional task context *only if explicitly instructed* by Boomerang.
* Focus on identifying the root cause of the issue described in the delegated task.
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
* Summary of diagnostic steps taken and findings (e.g., identified root cause, affected areas).
* Recommended next steps (e.g., specific code changes for Code mode, further tests for Test mode).
* Completion status (success, failure, needs review). Reference the original `taskmaster-ai` task ID.
* Any significant context gathered during the investigation.
* **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Orchestrator.
* **Crucially:** Execute *only* the delegated diagnostic task. Do *not* attempt to fix code or perform actions outside the scope defined by Boomerang.
3. **Handling Issues:**
* **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Orchestrator.
* **Needs Review:** If the root cause is unclear, requires architectural input, or needs further specialized testing, set the status to 'review' within your `attempt_completion` result and clearly state the reason. **Do not delegate directly.** Report back to Boomerang.
* **Failure:** If the diagnostic task cannot be completed (e.g., required files missing, commands fail), clearly report the failure and any relevant error information in the `attempt_completion` result.
4. **Taskmaster Interaction:**
* **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
* **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
* **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
**Context Reporting Strategy:**
@@ -39,17 +39,17 @@ context_reporting: |
<thinking>
Strategy:
- Focus on providing comprehensive diagnostic findings within the `attempt_completion` `result` parameter.
- Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode).
- Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask` and decide the next step (e.g., delegate fix to Code mode).
- My role is to *report* diagnostic findings accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
</thinking>
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Orchestrator to understand the issue, update Taskmaster, and plan the next action.
- **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary diagnostic information for Boomerang to understand the issue, update Taskmaster, and plan the next action.
- **Content:** Include summaries of diagnostic actions, root cause analysis, recommended next steps, errors encountered during diagnosis, and any relevant context discovered. Structure the `result` clearly.
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
- **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates and subsequent delegation.
- **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates and subsequent delegation.
**Taskmaster-AI Strategy (for Autonomous Operation):**
# Only relevant if operating autonomously (not delegated by Orchestrator).
# Only relevant if operating autonomously (not delegated by Boomerang).
taskmaster_strategy:
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
initialization: |
@@ -61,7 +61,7 @@ taskmaster_strategy:
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
if_uninitialized: |
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
if_ready: |
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
2. **Set Status:** Set status to '[TASKMASTER: ON]'.

View File

@@ -9,22 +9,22 @@
**Execution Role (Delegated Tasks):**
Your primary role is to **execute** testing tasks delegated to you by the Orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`).
Your primary role is to **execute** testing tasks delegated to you by the Boomerang orchestrator mode. Focus on fulfilling the specific instructions provided in the `new_task` message, referencing the relevant `taskmaster-ai` task ID and its associated context (e.g., `testStrategy`).
1. **Task Execution:** Perform the requested testing activities as specified in the delegated task instructions. This involves understanding the scope, retrieving necessary context (like `testStrategy` from the referenced `taskmaster-ai` task), planning/preparing tests if needed, executing tests using appropriate tools (`execute_command`, `read_file`, etc.), and analyzing results, strictly adhering to the work outlined in the `new_task` message.
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Orchestrator to update `taskmaster-ai`. Include:
2. **Reporting Completion:** Signal completion using `attempt_completion`. Provide a concise yet thorough summary of the outcome in the `result` parameter. This summary is **crucial** for Boomerang to update `taskmaster-ai`. Include:
* Summary of testing activities performed (e.g., tests planned, executed).
* Concise results/outcome (e.g., pass/fail counts, overall status, coverage information if applicable).
* Completion status (success, failure, needs review - e.g., if tests reveal significant issues needing broader attention).
* Any significant findings (e.g., details of bugs, errors, or validation issues found).
* Confirmation that the delegated testing subtask (mentioning the taskmaster-ai ID if provided) is complete.
3. **Handling Issues:**
- * **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Orchestrator.
+ * **Review Needed:** If tests reveal significant issues requiring architectural review, further debugging, or broader discussion beyond simple bug fixes, set the status to 'review' within your `attempt_completion` result and clearly state the reason (e.g., "Tests failed due to unexpected interaction with Module X, recommend architectural review"). **Do not delegate directly.** Report back to Boomerang.
* **Failure:** If the testing task itself cannot be completed (e.g., unable to run tests due to environment issues), clearly report the failure and any relevant error information in the `attempt_completion` result.
4. **Taskmaster Interaction:**
- * **Primary Responsibility:** Orchestrator is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
- * **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Orchestrator's delegation) or if *explicitly* instructed by Orchestrator within the `new_task` message.
- 5. **Autonomous Operation (Exceptional):** If operating outside of Orchestrator's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
+ * **Primary Responsibility:** Boomerang is primarily responsible for updating Taskmaster (`set_task_status`, `update_task`, `update_subtask`) after receiving your `attempt_completion` result.
+ * **Direct Updates (Rare):** Only update Taskmaster directly if operating autonomously (not under Boomerang's delegation) or if *explicitly* instructed by Boomerang within the `new_task` message.
+ 5. **Autonomous Operation (Exceptional):** If operating outside of Boomerang's delegation (e.g., direct user request), ensure Taskmaster is initialized before attempting Taskmaster operations (see Taskmaster-AI Strategy below).
**Context Reporting Strategy:**
@@ -32,17 +32,17 @@ context_reporting: |
<thinking>
Strategy:
- Focus on providing comprehensive information within the `attempt_completion` `result` parameter.
- - Orchestrator will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
+ - Boomerang will use this information to update Taskmaster's `description`, `details`, or log via `update_task`/`update_subtask`.
- My role is to *report* accurately, not *log* directly to Taskmaster unless explicitly instructed or operating autonomously.
</thinking>
- - **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Orchestrator to understand the outcome and update Taskmaster effectively.
+ - **Goal:** Ensure the `result` parameter in `attempt_completion` contains all necessary information for Boomerang to understand the outcome and update Taskmaster effectively.
- **Content:** Include summaries of actions taken (test execution), results achieved (pass/fail, bugs found), errors encountered during testing, decisions made (if any), and any new context discovered relevant to the testing task. Structure the `result` clearly.
- **Trigger:** Always provide a detailed `result` upon using `attempt_completion`.
- - **Mechanism:** Orchestrator receives the `result` and performs the necessary Taskmaster updates.
+ - **Mechanism:** Boomerang receives the `result` and performs the necessary Taskmaster updates.
**Taskmaster-AI Strategy (for Autonomous Operation):**
- # Only relevant if operating autonomously (not delegated by Orchestrator).
+ # Only relevant if operating autonomously (not delegated by Boomerang).
taskmaster_strategy:
status_prefix: "Begin autonomous responses with either '[TASKMASTER: ON]' or '[TASKMASTER: OFF]'."
initialization: |
@@ -54,7 +54,7 @@ taskmaster_strategy:
*Execute the plan described above only if autonomous Taskmaster interaction is required.*
if_uninitialized: |
1. **Inform:** "Task Master is not initialized. Autonomous Taskmaster operations cannot proceed."
- 2. **Suggest:** "Consider switching to Orchestrator mode to initialize and manage the project workflow."
+ 2. **Suggest:** "Consider switching to Boomerang mode to initialize and manage the project workflow."
if_ready: |
1. **Verify & Load:** Optionally fetch tasks using `taskmaster-ai`'s `get_tasks` tool if needed for autonomous context.
2. **Set Status:** Set status to '[TASKMASTER: ON]'.
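
To make the reporting contract above concrete, here is an illustrative `result` summary for a delegated testing subtask (the task ID, counts, and findings are invented for illustration; the structure mirrors the bullet list in the "Reporting Completion" step above):

```
Testing subtask 42.3 complete (status: success).
- Activities: planned and executed the unit tests named in the task's testStrategy.
- Results: 17/18 tests passed; 1 failure in the optional-dependency error-message assertion.
- Findings: the failing assertion references an outdated error string; recommend updating the test rather than the provider.
- Subtask 42.3 can be marked done once the assertion is updated.
```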

View File

@@ -6,8 +6,7 @@
".changeset",
"tasks",
"package-lock.json",
"tests/fixture/*.json",
"dist"
"tests/fixture/*.json"
]
},
"formatter": {

View File

@@ -72,7 +72,6 @@ Taskmaster uses two primary methods for configuration:
- `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- - **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
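
For the per-role `baseURL` override described above, a hypothetical `.taskmasterconfig` fragment could look like the following (the provider, model, and URL values are placeholders, not defaults; the exact shape follows the role structure your config already uses):

```json
{
	"models": {
		"main": {
			"provider": "openai",
			"modelId": "gpt-4o",
			"baseURL": "https://api.example-proxy.com/v1"
		}
	}
}
```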
@@ -132,14 +131,13 @@ PERPLEXITY_API_KEY=pplx-your-key-here
# etc.
# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
# VERTEX_LOCATION=us-central1
# GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json
```
## Troubleshooting

View File

@@ -2,136 +2,130 @@
## Main Models
- | Provider | Model Name | SWE Score | Input Cost | Output Cost |
- | ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
- | bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
- | anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
- | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
- | anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
- | anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
- | openai | gpt-4o | 0.332 | 2.5 | 10 |
- | openai | o1 | 0.489 | 15 | 60 |
- | openai | o3 | 0.5 | 2 | 8 |
- | openai | o3-mini | 0.493 | 1.1 | 4.4 |
- | openai | o4-mini | 0.45 | 1.1 | 4.4 |
- | openai | o1-mini | 0.4 | 1.1 | 4.4 |
- | openai | o1-pro | — | 150 | 600 |
- | openai | gpt-4-5-preview | 0.38 | 75 | 150 |
- | openai | gpt-4-1-mini | — | 0.4 | 1.6 |
- | openai | gpt-4-1-nano | — | 0.1 | 0.4 |
- | openai | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
- | google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
- | google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
- | google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
- | google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
- | google | gemini-2.0-flash-lite | — | — | — |
- | perplexity | sonar-pro | — | 3 | 15 |
- | perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
- | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
- | xai | grok-3 | — | 3 | 15 |
- | xai | grok-3-fast | — | 5 | 25 |
- | ollama | devstral:latest | — | 0 | 0 |
- | ollama | qwen3:latest | — | 0 | 0 |
- | ollama | qwen3:14b | — | 0 | 0 |
- | ollama | qwen3:32b | — | 0 | 0 |
- | ollama | mistral-small3.1:latest | — | 0 | 0 |
- | ollama | llama3.3:latest | — | 0 | 0 |
- | ollama | phi4:latest | — | 0 | 0 |
- | openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
- | openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
- | openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
- | openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
- | openrouter | deepseek/deepseek-chat-v3-0324 | — | 0.27 | 1.1 |
- | openrouter | openai/gpt-4.1 | — | 2 | 8 |
- | openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 |
- | openrouter | openai/gpt-4.1-nano | — | 0.1 | 0.4 |
- | openrouter | openai/o3 | — | 10 | 40 |
- | openrouter | openai/codex-mini | — | 1.5 | 6 |
- | openrouter | openai/gpt-4o-mini | — | 0.15 | 0.6 |
- | openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
- | openrouter | openai/o4-mini-high | — | 1.1 | 4.4 |
- | openrouter | openai/o1-pro | — | 150 | 600 |
- | openrouter | meta-llama/llama-3.3-70b-instruct | — | 120 | 600 |
- | openrouter | meta-llama/llama-4-maverick | — | 0.18 | 0.6 |
- | openrouter | meta-llama/llama-4-scout | — | 0.08 | 0.3 |
- | openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
- | openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
- | openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
- | openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
- | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
- | openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
- | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
- | openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
- | claude-code | opus | 0.725 | 0 | 0 |
- | claude-code | sonnet | 0.727 | 0 | 0 |
+ | Provider | Model Name | SWE Score | Input Cost | Output Cost |
+ | ---------- | ---------------------------------------------- | --------- | ---------- | ----------- |
+ | bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
+ | anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
+ | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
+ | anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
+ | anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
+ | openai | gpt-4o | 0.332 | 2.5 | 10 |
+ | openai | o1 | 0.489 | 15 | 60 |
+ | openai | o3 | 0.5 | 2 | 8 |
+ | openai | o3-mini | 0.493 | 1.1 | 4.4 |
+ | openai | o4-mini | 0.45 | 1.1 | 4.4 |
+ | openai | o1-mini | 0.4 | 1.1 | 4.4 |
+ | openai | o1-pro | — | 150 | 600 |
+ | openai | gpt-4-5-preview | 0.38 | 75 | 150 |
+ | openai | gpt-4-1-mini | — | 0.4 | 1.6 |
+ | openai | gpt-4-1-nano | — | 0.1 | 0.4 |
+ | openai | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
+ | google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
+ | google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
+ | google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
+ | google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
+ | google | gemini-2.0-flash-lite | — | — | — |
+ | perplexity | sonar-pro | — | 3 | 15 |
+ | perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
+ | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
+ | xai | grok-3 | — | 3 | 15 |
+ | xai | grok-3-fast | — | 5 | 25 |
+ | ollama | devstral:latest | — | 0 | 0 |
+ | ollama | qwen3:latest | — | 0 | 0 |
+ | ollama | qwen3:14b | — | 0 | 0 |
+ | ollama | qwen3:32b | — | 0 | 0 |
+ | ollama | mistral-small3.1:latest | — | 0 | 0 |
+ | ollama | llama3.3:latest | — | 0 | 0 |
+ | ollama | phi4:latest | — | 0 | 0 |
+ | openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
+ | openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
+ | openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
+ | openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
+ | openrouter | deepseek/deepseek-chat-v3-0324 | — | 0.27 | 1.1 |
+ | openrouter | openai/gpt-4.1 | — | 2 | 8 |
+ | openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 |
+ | openrouter | openai/gpt-4.1-nano | — | 0.1 | 0.4 |
+ | openrouter | openai/o3 | — | 10 | 40 |
+ | openrouter | openai/codex-mini | — | 1.5 | 6 |
+ | openrouter | openai/gpt-4o-mini | — | 0.15 | 0.6 |
+ | openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
+ | openrouter | openai/o4-mini-high | — | 1.1 | 4.4 |
+ | openrouter | openai/o1-pro | — | 150 | 600 |
+ | openrouter | meta-llama/llama-3.3-70b-instruct | — | 120 | 600 |
+ | openrouter | meta-llama/llama-4-maverick | — | 0.18 | 0.6 |
+ | openrouter | meta-llama/llama-4-scout | — | 0.08 | 0.3 |
+ | openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
+ | openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
+ | openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
+ | openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
+ | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
+ | openrouter | mistralai/devstral-small | — | 0.1 | 0.3 |
+ | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
+ | openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
## Research Models
- | Provider | Model Name | SWE Score | Input Cost | Output Cost |
- | ----------- | -------------------------- | --------- | ---------- | ----------- |
- | bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
- | openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
- | openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
- | perplexity | sonar-pro | — | 3 | 15 |
- | perplexity | sonar | — | 1 | 1 |
- | perplexity | deep-research | 0.211 | 2 | 8 |
- | perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
- | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
- | xai | grok-3 | — | 3 | 15 |
- | xai | grok-3-fast | — | 5 | 25 |
- | claude-code | opus | 0.725 | 0 | 0 |
- | claude-code | sonnet | 0.727 | 0 | 0 |
+ | Provider | Model Name | SWE Score | Input Cost | Output Cost |
+ | ---------- | -------------------------- | --------- | ---------- | ----------- |
+ | bedrock | us.deepseek.r1-v1:0 | — | 1.35 | 5.4 |
+ | openai | gpt-4o-search-preview | 0.33 | 2.5 | 10 |
+ | openai | gpt-4o-mini-search-preview | 0.3 | 0.15 | 0.6 |
+ | perplexity | sonar-pro | — | 3 | 15 |
+ | perplexity | sonar | — | 1 | 1 |
+ | perplexity | deep-research | 0.211 | 2 | 8 |
+ | perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
+ | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
+ | xai | grok-3 | — | 3 | 15 |
+ | xai | grok-3-fast | — | 5 | 25 |
## Fallback Models
- | Provider | Model Name | SWE Score | Input Cost | Output Cost |
- | ----------- | ---------------------------------------------- | --------- | ---------- | ----------- |
- | bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
- | anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
- | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
- | anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
- | anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
- | openai | gpt-4o | 0.332 | 2.5 | 10 |
- | openai | o3 | 0.5 | 2 | 8 |
- | openai | o4-mini | 0.45 | 1.1 | 4.4 |
- | google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
- | google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
- | google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
- | google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
- | google | gemini-2.0-flash-lite | — | — | — |
- | perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
- | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
- | xai | grok-3 | — | 3 | 15 |
- | xai | grok-3-fast | — | 5 | 25 |
- | ollama | devstral:latest | — | 0 | 0 |
- | ollama | qwen3:latest | — | 0 | 0 |
- | ollama | qwen3:14b | — | 0 | 0 |
- | ollama | qwen3:32b | — | 0 | 0 |
- | ollama | mistral-small3.1:latest | — | 0 | 0 |
- | ollama | llama3.3:latest | — | 0 | 0 |
- | ollama | phi4:latest | — | 0 | 0 |
- | openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
- | openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
- | openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
- | openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
- | openrouter | openai/gpt-4.1 | — | 2 | 8 |
- | openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 |
- | openrouter | openai/gpt-4.1-nano | — | 0.1 | 0.4 |
- | openrouter | openai/o3 | — | 10 | 40 |
- | openrouter | openai/codex-mini | — | 1.5 | 6 |
- | openrouter | openai/gpt-4o-mini | — | 0.15 | 0.6 |
- | openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
- | openrouter | openai/o4-mini-high | — | 1.1 | 4.4 |
- | openrouter | openai/o1-pro | — | 150 | 600 |
- | openrouter | meta-llama/llama-3.3-70b-instruct | — | 120 | 600 |
- | openrouter | meta-llama/llama-4-maverick | — | 0.18 | 0.6 |
- | openrouter | meta-llama/llama-4-scout | — | 0.08 | 0.3 |
- | openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
- | openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
- | openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
- | openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
- | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
- | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
- | openrouter | thudm/glm-4-32b:free | — | 0 | 0 |
- | claude-code | opus | 0.725 | 0 | 0 |
- | claude-code | sonnet | 0.727 | 0 | 0 |
+ | Provider | Model Name | SWE Score | Input Cost | Output Cost |
+ | ---------- | ---------------------------------------------- | --------- | ---------- | ----------- |
+ | bedrock | us.anthropic.claude-3-7-sonnet-20250219-v1:0 | 0.623 | 3 | 15 |
+ | anthropic | claude-sonnet-4-20250514 | 0.727 | 3 | 15 |
+ | anthropic | claude-opus-4-20250514 | 0.725 | 15 | 75 |
+ | anthropic | claude-3-7-sonnet-20250219 | 0.623 | 3 | 15 |
+ | anthropic | claude-3-5-sonnet-20241022 | 0.49 | 3 | 15 |
+ | openai | gpt-4o | 0.332 | 2.5 | 10 |
+ | openai | o3 | 0.5 | 2 | 8 |
+ | openai | o4-mini | 0.45 | 1.1 | 4.4 |
+ | google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
+ | google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
+ | google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
+ | google | gemini-2.0-flash | 0.518 | 0.15 | 0.6 |
+ | google | gemini-2.0-flash-lite | — | — | — |
+ | perplexity | sonar-reasoning-pro | 0.211 | 2 | 8 |
+ | perplexity | sonar-reasoning | 0.211 | 1 | 5 |
+ | xai | grok-3 | — | 3 | 15 |
+ | xai | grok-3-fast | — | 5 | 25 |
+ | ollama | devstral:latest | — | 0 | 0 |
+ | ollama | qwen3:latest | — | 0 | 0 |
+ | ollama | qwen3:14b | — | 0 | 0 |
+ | ollama | qwen3:32b | — | 0 | 0 |
+ | ollama | mistral-small3.1:latest | — | 0 | 0 |
+ | ollama | llama3.3:latest | — | 0 | 0 |
+ | ollama | phi4:latest | — | 0 | 0 |
+ | openrouter | google/gemini-2.5-flash-preview-05-20 | — | 0.15 | 0.6 |
+ | openrouter | google/gemini-2.5-flash-preview-05-20:thinking | — | 0.15 | 3.5 |
+ | openrouter | google/gemini-2.5-pro-exp-03-25 | — | 0 | 0 |
+ | openrouter | deepseek/deepseek-chat-v3-0324:free | — | 0 | 0 |
+ | openrouter | openai/gpt-4.1 | — | 2 | 8 |
+ | openrouter | openai/gpt-4.1-mini | — | 0.4 | 1.6 |
+ | openrouter | openai/gpt-4.1-nano | — | 0.1 | 0.4 |
+ | openrouter | openai/o3 | — | 10 | 40 |
+ | openrouter | openai/codex-mini | — | 1.5 | 6 |
+ | openrouter | openai/gpt-4o-mini | — | 0.15 | 0.6 |
+ | openrouter | openai/o4-mini | 0.45 | 1.1 | 4.4 |
+ | openrouter | openai/o4-mini-high | — | 1.1 | 4.4 |
+ | openrouter | openai/o1-pro | — | 150 | 600 |
+ | openrouter | meta-llama/llama-3.3-70b-instruct | — | 120 | 600 |
+ | openrouter | meta-llama/llama-4-maverick | — | 0.18 | 0.6 |
+ | openrouter | meta-llama/llama-4-scout | — | 0.08 | 0.3 |
+ | openrouter | qwen/qwen-max | — | 1.6 | 6.4 |
+ | openrouter | qwen/qwen-turbo | — | 0.05 | 0.2 |
+ | openrouter | qwen/qwen3-235b-a22b | — | 0.14 | 2 |
+ | openrouter | mistralai/mistral-small-3.1-24b-instruct:free | — | 0 | 0 |
+ | openrouter | mistralai/mistral-small-3.1-24b-instruct | — | 0.1 | 0.3 |
+ | openrouter | mistralai/mistral-nemo | — | 0.03 | 0.07 |
+ | openrouter | thudm/glm-4-32b:free | — | 0 | 0 |

View File

@@ -83,11 +83,6 @@ if (import.meta.url === `file://${process.argv[1]}`) {
.option('--skip-install', 'Skip installing dependencies')
.option('--dry-run', 'Show what would be done without making changes')
.option('--aliases', 'Add shell aliases (tm, taskmaster)')
- .option('--no-aliases', 'Skip shell aliases (tm, taskmaster)')
- .option('--git', 'Initialize Git repository')
- .option('--no-git', 'Skip Git repository initialization')
- .option('--git-tasks', 'Store tasks in Git')
- .option('--no-git-tasks', 'No Git storage of tasks')
.action(async (cmdOptions) => {
try {
await runInitCLI(cmdOptions);

View File

@@ -26,7 +26,6 @@ import { createLogWrapper } from '../../tools/utils.js';
* @param {string} [args.prompt] - Additional context to guide subtask generation.
* @param {boolean} [args.force] - Force expansion even if subtasks exist.
* @param {string} [args.projectRoot] - Project root directory.
- * @param {string} [args.tag] - Tag for the task
* @param {Object} log - Logger object
* @param {Object} context - Context object containing session
* @param {Object} [context.session] - MCP Session object
@@ -35,8 +34,7 @@ import { createLogWrapper } from '../../tools/utils.js';
export async function expandTaskDirect(args, log, context = {}) {
const { session } = context; // Extract session
// Destructure expected args, including projectRoot
- const { tasksJsonPath, id, num, research, prompt, force, projectRoot, tag } =
- args;
+ const { tasksJsonPath, id, num, research, prompt, force, projectRoot } = args;
// Log session root data for debugging
log.info(
@@ -196,8 +194,7 @@ export async function expandTaskDirect(args, log, context = {}) {
session,
projectRoot,
commandName: 'expand-task',
- outputType: 'mcp',
- tag
+ outputType: 'mcp'
},
forceFlag
);

View File

@@ -11,7 +11,7 @@ import { convertAllRulesToProfileRules } from '../../../../src/utils/rule-transf
/**
* Direct function wrapper for initializing a project.
* Derives target directory from session, sets CWD, and calls core init logic.
- * @param {object} args - Arguments containing initialization options (addAliases, initGit, storeTasksInGit, skipInstall, yes, projectRoot, rules)
+ * @param {object} args - Arguments containing initialization options (addAliases, skipInstall, yes, projectRoot, rules)
* @param {object} log - The FastMCP logger instance.
* @param {object} context - The context object, must contain { session }.
* @returns {Promise<{success: boolean, data?: any, error?: {code: string, message: string}}>} - Standard result object.
@@ -65,9 +65,7 @@ export async function initializeProjectDirect(args, log, context = {}) {
// Construct options ONLY from the relevant flags in args
// The core initializeProject operates in the current CWD, which we just set
const options = {
- addAliases: args.addAliases,
- initGit: args.initGit,
- storeTasksInGit: args.storeTasksInGit,
+ aliases: args.addAliases,
skipInstall: args.skipInstall,
yes: true // Force yes mode
};

View File

@@ -45,8 +45,7 @@ export function registerExpandTaskTool(server) {
.boolean()
.optional()
.default(false)
- .describe('Force expansion even if subtasks exist'),
- tag: z.string().optional().describe('Tag context to operate on')
+ .describe('Force expansion even if subtasks exist')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
@@ -74,8 +73,7 @@ export function registerExpandTaskTool(server) {
research: args.research,
prompt: args.prompt,
force: args.force,
- projectRoot: args.projectRoot,
- tag: args.tag || 'master'
+ projectRoot: args.projectRoot
},
log,
{ session }

View File

@@ -23,18 +23,8 @@ export function registerInitializeProjectTool(server) {
addAliases: z
.boolean()
.optional()
- .default(true)
+ .default(false)
.describe('Add shell aliases (tm, taskmaster) to shell config file.'),
- initGit: z
- .boolean()
- .optional()
- .default(true)
- .describe('Initialize Git repository in project root.'),
- storeTasksInGit: z
- .boolean()
- .optional()
- .default(true)
- .describe('Store tasks in Git (tasks.json and tasks/ directory).'),
yes: z
.boolean()
.optional()

package-lock.json generated
View File

@@ -12317,4 +12317,4 @@
}
}
}
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.18.0-rc.0",
"version": "0.17.1",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",

View File

@@ -23,8 +23,6 @@ import figlet from 'figlet';
import boxen from 'boxen';
import gradient from 'gradient-string';
import { isSilentMode } from './modules/utils.js';
- import { insideGitWorkTree } from './modules/utils/git-utils.js';
- import { manageGitignoreFile } from '../src/utils/manage-gitignore.js';
import { RULE_PROFILES } from '../src/constants/profiles.js';
import {
convertAllRulesToProfileRules,
@@ -322,60 +320,16 @@ async function initializeProject(options = {}) {
// console.log('==================================================');
// }
// Handle boolean aliases flags
- if (options.aliases === true) {
- options.addAliases = true; // --aliases flag provided
- } else if (options.aliases === false) {
- options.addAliases = false; // --no-aliases flag provided
- }
- // If options.aliases and options.noAliases are undefined, we'll prompt for it
- // Handle boolean git flags
- if (options.git === true) {
- options.initGit = true; // --git flag provided
- } else if (options.git === false) {
- options.initGit = false; // --no-git flag provided
- }
- // If options.git and options.noGit are undefined, we'll prompt for it
- // Handle boolean gitTasks flags
- if (options.gitTasks === true) {
- options.storeTasksInGit = true; // --git-tasks flag provided
- } else if (options.gitTasks === false) {
- options.storeTasksInGit = false; // --no-git-tasks flag provided
- }
- // If options.gitTasks and options.noGitTasks are undefined, we'll prompt for it
const skipPrompts = options.yes || (options.name && options.description);
// if (!isSilentMode()) {
// console.log('Skip prompts determined:', skipPrompts);
// }
- let selectedRuleProfiles;
- if (options.rulesExplicitlyProvided) {
- // If --rules flag was used, always respect it.
- log(
- 'info',
- `Using rule profiles provided via command line: ${options.rules.join(', ')}`
- );
- selectedRuleProfiles = options.rules;
- } else if (skipPrompts) {
- // If non-interactive (e.g., --yes) and no rules specified, default to ALL.
- log(
- 'info',
- `No rules specified in non-interactive mode, defaulting to all profiles.`
- );
- selectedRuleProfiles = RULE_PROFILES;
- } else {
- // If interactive and no rules specified, default to NONE.
- // The 'rules --setup' wizard will handle selection.
- log(
- 'info',
- 'No rules specified; interactive setup will be launched to select profiles.'
- );
- selectedRuleProfiles = [];
- }
+ const selectedRuleProfiles =
+ options.rules && Array.isArray(options.rules) && options.rules.length > 0
+ ? options.rules
+ : RULE_PROFILES; // Default to all profiles
if (skipPrompts) {
if (!isSilentMode()) {
@@ -389,44 +343,21 @@ async function initializeProject(options = {}) {
const projectVersion = options.version || '0.1.0';
const authorName = options.author || 'Vibe coder';
const dryRun = options.dryRun || false;
- const addAliases =
- options.addAliases !== undefined ? options.addAliases : true; // Default to true if not specified
- const initGit = options.initGit !== undefined ? options.initGit : true; // Default to true if not specified
- const storeTasksInGit =
- options.storeTasksInGit !== undefined ? options.storeTasksInGit : true; // Default to true if not specified
+ const addAliases = options.aliases || false;
if (dryRun) {
log('info', 'DRY RUN MODE: No files will be modified');
log('info', 'Would initialize Task Master project');
log('info', 'Would create/update necessary project files');
- // Show flag-specific behavior
- log(
- 'info',
- `${addAliases ? 'Would add shell aliases (tm, taskmaster)' : 'Would skip shell aliases'}`
- );
- log(
- 'info',
- `${initGit ? 'Would initialize Git repository' : 'Would skip Git initialization'}`
- );
- log(
- 'info',
- `${storeTasksInGit ? 'Would store tasks in Git' : 'Would exclude tasks from Git'}`
- );
+ if (addAliases) {
+ log('info', 'Would add shell aliases for task-master');
+ }
return {
dryRun: true
};
}
- createProjectStructure(
- addAliases,
- initGit,
- storeTasksInGit,
- dryRun,
- options,
- selectedRuleProfiles
- );
+ createProjectStructure(addAliases, dryRun, options, selectedRuleProfiles);
} else {
// Interactive logic
log('info', 'Required options not provided, proceeding with prompts.');
@@ -436,45 +367,14 @@ async function initializeProject(options = {}) {
input: process.stdin,
output: process.stdout
});
- // Prompt for shell aliases (skip if --aliases or --no-aliases flag was provided)
- let addAliasesPrompted = true; // Default to true
- if (options.addAliases !== undefined) {
- addAliasesPrompted = options.addAliases; // Use flag value if provided
- } else {
- const addAliasesInput = await promptQuestion(
- rl,
- chalk.cyan(
- 'Add shell aliases for task-master? This lets you type "tm" instead of "task-master" (Y/n): '
- )
- );
- addAliasesPrompted = addAliasesInput.trim().toLowerCase() !== 'n';
- }
- // Prompt for Git initialization (skip if --git or --no-git flag was provided)
- let initGitPrompted = true; // Default to true
- if (options.initGit !== undefined) {
- initGitPrompted = options.initGit; // Use flag value if provided
- } else {
- const gitInitInput = await promptQuestion(
- rl,
- chalk.cyan('Initialize a Git repository in project root? (Y/n): ')
- );
- initGitPrompted = gitInitInput.trim().toLowerCase() !== 'n';
- }
- // Prompt for Git tasks storage (skip if --git-tasks or --no-git-tasks flag was provided)
- let storeGitPrompted = true; // Default to true
- if (options.storeTasksInGit !== undefined) {
- storeGitPrompted = options.storeTasksInGit; // Use flag value if provided
- } else {
- const gitTasksInput = await promptQuestion(
- rl,
- chalk.cyan(
- 'Store tasks in Git (tasks.json and tasks/ directory)? (Y/n): '
- )
- );
- storeGitPrompted = gitTasksInput.trim().toLowerCase() !== 'n';
- }
+ // Only prompt for shell aliases
+ const addAliasesInput = await promptQuestion(
+ rl,
+ chalk.cyan(
+ 'Add shell aliases for task-master? This lets you type "tm" instead of "task-master" (Y/n): '
+ )
+ );
+ const addAliasesPrompted = addAliasesInput.trim().toLowerCase() !== 'n';
// Confirm settings...
console.log('\nTask Master Project settings:');
@@ -484,14 +384,6 @@ async function initializeProject(options = {}) {
),
chalk.white(addAliasesPrompted ? 'Yes' : 'No')
);
- console.log(
- chalk.blue('Initialize Git repository in project root:'),
- chalk.white(initGitPrompted ? 'Yes' : 'No')
- );
- console.log(
- chalk.blue('Store tasks in Git (tasks.json and tasks/ directory):'),
- chalk.white(storeGitPrompted ? 'Yes' : 'No')
- );
const confirmInput = await promptQuestion(
rl,
@@ -512,6 +404,16 @@ async function initializeProject(options = {}) {
'info',
`Using rule profiles provided via command line: ${selectedRuleProfiles.join(', ')}`
);
+ } else {
+ try {
+ const targetDir = process.cwd();
+ execSync('npx task-master rules setup', {
+ stdio: 'inherit',
+ cwd: targetDir
+ });
+ } catch (error) {
+ log('error', 'Failed to run interactive rules setup:', error.message);
+ }
}
const dryRun = options.dryRun || false;
@@ -520,21 +422,9 @@ async function initializeProject(options = {}) {
log('info', 'DRY RUN MODE: No files will be modified');
log('info', 'Would initialize Task Master project');
log('info', 'Would create/update necessary project files');
- // Show flag-specific behavior
- log(
- 'info',
- `${addAliasesPrompted ? 'Would add shell aliases (tm, taskmaster)' : 'Would skip shell aliases'}`
- );
- log(
- 'info',
- `${initGitPrompted ? 'Would initialize Git repository' : 'Would skip Git initialization'}`
- );
- log(
- 'info',
- `${storeGitPrompted ? 'Would store tasks in Git' : 'Would exclude tasks from Git'}`
- );
+ if (addAliasesPrompted) {
+ log('info', 'Would add shell aliases for task-master');
+ }
return {
dryRun: true
};
@@ -543,17 +433,13 @@ async function initializeProject(options = {}) {
// Create structure using only necessary values
createProjectStructure(
addAliasesPrompted,
- initGitPrompted,
- storeGitPrompted,
dryRun,
options,
selectedRuleProfiles
);
rl.close();
} catch (error) {
- if (rl) {
- rl.close();
- }
+ rl.close();
log('error', `Error during initialization process: ${error.message}`);
process.exit(1);
}
@@ -572,11 +458,9 @@ function promptQuestion(rl, question) {
// Function to create the project structure
function createProjectStructure(
addAliases,
- initGit,
- storeTasksInGit,
dryRun,
options,
- selectedRuleProfiles = RULE_PROFILES
+ selectedRuleProfiles = RULE_PROFILES // Default to all rule profiles
) {
const targetDir = process.cwd();
log('info', `Initializing project in ${targetDir}`);
@@ -623,67 +507,27 @@ function createProjectStructure(
}
);
- // Copy .gitignore with GitTasks preference
- try {
- const gitignoreTemplatePath = path.join(
- __dirname,
- '..',
- 'assets',
- 'gitignore'
- );
- const templateContent = fs.readFileSync(gitignoreTemplatePath, 'utf8');
- manageGitignoreFile(
- path.join(targetDir, GITIGNORE_FILE),
- templateContent,
- storeTasksInGit,
- log
- );
- } catch (error) {
- log('error', `Failed to create .gitignore: ${error.message}`);
- }
+ // Copy .gitignore
+ copyTemplateFile('gitignore', path.join(targetDir, GITIGNORE_FILE));
// Copy example_prd.txt to NEW location
copyTemplateFile('example_prd.txt', path.join(targetDir, EXAMPLE_PRD_FILE));
// Initialize git repository if git is available
try {
- if (initGit === false) {
- log('info', 'Git initialization skipped due to --no-git flag.');
- } else if (initGit === true) {
- if (insideGitWorkTree()) {
- log(
- 'info',
- 'Existing Git repository detected skipping git init despite --git flag.'
- );
- } else {
- log('info', 'Initializing Git repository due to --git flag...');
- execSync('git init', { cwd: targetDir, stdio: 'ignore' });
- log('success', 'Git repository initialized');
- }
- } else {
- // Default behavior when no flag is provided (from interactive prompt)
- if (insideGitWorkTree()) {
- log('info', 'Existing Git repository detected skipping git init.');
- } else {
- log(
- 'info',
- 'No Git repository detected. Initializing one in project root...'
- );
- execSync('git init', { cwd: targetDir, stdio: 'ignore' });
- log('success', 'Git repository initialized');
- }
+ if (!fs.existsSync(path.join(targetDir, '.git'))) {
+ log('info', 'Initializing git repository...');
+ execSync('git init', { stdio: 'ignore' });
+ log('success', 'Git repository initialized');
+ }
} catch (error) {
log('warn', 'Git not available, skipping repository initialization');
}
- // Only run the manual transformer if rules were provided via flags.
- // The interactive `rules --setup` wizard handles its own installation.
- if (options.rulesExplicitlyProvided || options.yes) {
- log('info', 'Generating profile rules from command-line flags...');
- for (const profileName of selectedRuleProfiles) {
- _processSingleProfile(profileName);
- }
+ // Generate profile rules from assets/rules
+ log('info', 'Generating profile rules from assets/rules...');
+ for (const profileName of selectedRuleProfiles) {
+ _processSingleProfile(profileName);
+ }
// Add shell aliases if requested
@@ -714,49 +558,6 @@ function createProjectStructure(
);
}
- // === Add Rule Profiles Setup Step ===
- if (
- !isSilentMode() &&
- !dryRun &&
- !options?.yes &&
- !options.rulesExplicitlyProvided
- ) {
- console.log(
- boxen(chalk.cyan('Configuring Rule Profiles...'), {
- padding: 0.5,
- margin: { top: 1, bottom: 0.5 },
- borderStyle: 'round',
- borderColor: 'blue'
- })
- );
- log(
- 'info',
- 'Running interactive rules setup. Please select which rule profiles to include.'
- );
- try {
- // Correct command confirmed by you.
- execSync('npx task-master rules --setup', {
- stdio: 'inherit',
- cwd: targetDir
- });
- log('success', 'Rule profiles configured.');
- } catch (error) {
- log('error', 'Failed to configure rule profiles:', error.message);
- log('warn', 'You may need to run "task-master rules --setup" manually.');
- }
- } else if (isSilentMode() || dryRun || options?.yes) {
- // This branch can log why setup was skipped, similar to the model setup logic.
- if (options.rulesExplicitlyProvided) {
- log(
- 'info',
- 'Skipping interactive rules setup because --rules flag was used.'
- );
- } else {
- log('info', 'Skipping interactive rules setup in non-interactive mode.');
- }
- }
- // =====================================
// === Add Model Configuration Step ===
if (!isSilentMode() && !dryRun && !options?.yes) {
console.log(
@@ -798,17 +599,6 @@ function createProjectStructure(
}
// ====================================
- // Add shell aliases if requested
- if (addAliases && !dryRun) {
- log('info', 'Adding shell aliases...');
- const aliasResult = addShellAliases();
- if (aliasResult) {
- log('success', 'Shell aliases added successfully');
- }
- } else if (addAliases && dryRun) {
- log('info', 'DRY RUN: Would add shell aliases (tm, taskmaster)');
- }
// Display success message
if (!isSilentMode()) {
console.log(

View File

@@ -3342,11 +3342,6 @@ ${result.result}
.option('--skip-install', 'Skip installing dependencies')
.option('--dry-run', 'Show what would be done without making changes')
.option('--aliases', 'Add shell aliases (tm, taskmaster)')
- .option('--no-aliases', 'Skip shell aliases (tm, taskmaster)')
- .option('--git', 'Initialize Git repository')
- .option('--no-git', 'Skip Git repository initialization')
- .option('--git-tasks', 'Store tasks in Git')
- .option('--no-git-tasks', 'No Git storage of tasks')
.action(async (cmdOptions) => {
// cmdOptions contains parsed arguments
// Parse rules: accept space or comma separated, default to all available rules
@@ -3828,26 +3823,7 @@ Examples:
if (options[RULES_SETUP_ACTION]) {
// Run interactive rules setup ONLY (no project init)
const selectedRuleProfiles = await runInteractiveProfilesSetup();
- if (!selectedRuleProfiles || selectedRuleProfiles.length === 0) {
- console.log(chalk.yellow('No profiles selected. Exiting.'));
- return;
- }
- console.log(
- chalk.blue(
- `Installing ${selectedRuleProfiles.length} selected profile(s)...`
- )
- );
- for (let i = 0; i < selectedRuleProfiles.length; i++) {
- const profile = selectedRuleProfiles[i];
- console.log(
- chalk.blue(
- `Processing profile ${i + 1}/${selectedRuleProfiles.length}: ${profile}...`
- )
- );
+ for (const profile of selectedRuleProfiles) {
if (!isValidProfile(profile)) {
console.warn(
`Rule profile for "${profile}" not found. Valid profiles: ${RULE_PROFILES.join(', ')}. Skipping.`
@@ -3855,20 +3831,16 @@ Examples:
continue;
}
const profileConfig = getRulesProfile(profile);
const addResult = convertAllRulesToProfileRules(
projectDir,
profileConfig
);
if (typeof profileConfig.onAddRulesProfile === 'function') {
profileConfig.onAddRulesProfile(projectDir);
}
console.log(chalk.green(generateProfileSummary(profile, addResult)));
}
- console.log(
- chalk.green(
- `\nCompleted installation of all ${selectedRuleProfiles.length} profile(s).`
- )
- );
return;
}

View File

@@ -54,7 +54,7 @@ const DEFAULTS = {
// No default fallback provider/model initially
provider: 'anthropic',
modelId: 'claude-3-5-sonnet',
- maxTokens: 8192, // Default parameters if fallback IS configured
+ maxTokens: 64000, // Default parameters if fallback IS configured
temperature: 0.2
}
},
@@ -571,11 +571,10 @@ function getMcpApiKeyStatus(providerName, projectRoot = null) {
const mcpConfigRaw = fs.readFileSync(mcpConfigPath, 'utf-8');
const mcpConfig = JSON.parse(mcpConfigRaw);
- const mcpEnv =
- mcpConfig?.mcpServers?.['task-master-ai']?.env ||
- mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
+ const mcpEnv = mcpConfig?.mcpServers?.['taskmaster-ai']?.env;
if (!mcpEnv) {
- return false;
+ // console.warn(chalk.yellow('Warning: Could not find taskmaster-ai env in mcp.json.'));
+ return false; // Structure missing
}
let apiKeyToCheck = null;
@@ -783,15 +782,9 @@ function getAllProviders() {
function getBaseUrlForRole(role, explicitRoot = null) {
const roleConfig = getModelConfigForRole(role, explicitRoot);
- if (roleConfig && typeof roleConfig.baseURL === 'string') {
- return roleConfig.baseURL;
- }
- const provider = roleConfig?.provider;
- if (provider) {
- const envVarName = `${provider.toUpperCase()}_BASE_URL`;
- return resolveEnvVariable(envVarName, null, explicitRoot);
- }
- return undefined;
+ return roleConfig && typeof roleConfig.baseURL === 'string'
+ ? roleConfig.baseURL
+ : undefined;
}
export {

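The hunk above removes the environment-variable fallback from `getBaseUrlForRole`. As a reference for reviewers, here is a simplified sketch of the resolution order the removed branch implemented (using the same `resolveEnvVariable` helper shown in the deleted lines; this mirrors the deleted logic, not the code that remains after this diff):

```javascript
function resolveBaseUrl(roleConfig, explicitRoot, resolveEnvVariable) {
	// 1. An explicit per-role baseURL in the config file wins.
	if (roleConfig && typeof roleConfig.baseURL === 'string') {
		return roleConfig.baseURL;
	}
	// 2. Otherwise fall back to a <PROVIDER>_BASE_URL environment variable.
	const provider = roleConfig?.provider;
	if (provider) {
		const envVarName = `${provider.toUpperCase()}_BASE_URL`;
		return resolveEnvVariable(envVarName, null, explicitRoot);
	}
	// 3. No override configured: the provider's default endpoint is used.
	return undefined;
}
```

After this change, only step 1 survives, so `OPENAI_BASE_URL`-style variables no longer apply.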
View File

@@ -54,7 +54,7 @@
"output": 15.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 8192
"max_tokens": 64000
}
],
"openai": [
@@ -84,8 +84,7 @@
"input": 2.0,
"output": 8.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000
"allowed_roles": ["main", "fallback"]
},
{
"id": "o3-mini",

View File

@@ -27,6 +27,7 @@ import {
} from '../utils.js';
import { generateObjectService } from '../ai-services-unified.js';
import { getDefaultPriority } from '../config-manager.js';
+ import generateTaskFiles from './generate-task-files.js';
import ContextGatherer from '../utils/contextGatherer.js';
// Define Zod schema for the expected AI output object
@@ -43,7 +44,7 @@ const AiTaskDataSchema = z.object({
.describe('Detailed approach for verifying task completion'),
dependencies: z
.array(z.number())
- .nullable()
+ .optional()
.describe(
'Array of task IDs that this task depends on (must be completed before this task can start)'
)

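The `.nullable()` → `.optional()` swap here (and in the schema hunks below) changes what the AI response may omit. A minimal Zod sketch of the difference, written for illustration rather than copied from this repo:

```javascript
import { z } from 'zod';

// .nullable(): the key must be present, but its value may be null.
const withNullable = z.object({ deps: z.array(z.number()).nullable() });
withNullable.parse({ deps: null }); // ok
// withNullable.parse({});          // throws: "deps" is required

// .optional(): the key may be missing entirely; .default() fills it in.
const withOptional = z.object({
	deps: z.array(z.number()).optional().default([])
});
withOptional.parse({}); // ok -> { deps: [] }
```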
View File

@@ -32,12 +32,7 @@ async function expandAllTasks(
context = {},
outputFormat = 'text' // Assume text default for CLI
) {
- const {
- session,
- mcpLog,
- projectRoot: providedProjectRoot,
- tag: contextTag
- } = context;
+ const { session, mcpLog, projectRoot: providedProjectRoot } = context;
const isMCPCall = !!mcpLog; // Determine if called from MCP
const projectRoot = providedProjectRoot || findProjectRoot();
@@ -79,7 +74,7 @@ async function expandAllTasks(
try {
logger.info(`Reading tasks from ${tasksPath}`);
- const data = readJSON(tasksPath, projectRoot, contextTag);
+ const data = readJSON(tasksPath, projectRoot);
if (!data || !data.tasks) {
throw new Error(`Invalid tasks data in ${tasksPath}`);
}
@@ -129,7 +124,7 @@ async function expandAllTasks(
numSubtasks,
useResearch,
additionalContext,
- { ...context, projectRoot, tag: data.tag || contextTag }, // Pass the whole context object with projectRoot and resolved tag
+ { ...context, projectRoot }, // Pass the whole context object with projectRoot
force
);
expandedCount++;

View File

@@ -43,9 +43,8 @@ const subtaskSchema = z
),
testStrategy: z
.string()
- .nullable()
+ .optional()
.describe('Approach for testing this subtask')
.default('')
})
.strict();
const subtaskArraySchema = z.array(subtaskSchema);
@@ -418,7 +417,7 @@ async function expandTask(
context = {},
force = false
) {
- const { session, mcpLog, projectRoot: contextProjectRoot, tag } = context;
+ const { session, mcpLog, projectRoot: contextProjectRoot } = context;
const outputFormat = mcpLog ? 'json' : 'text';
// Determine projectRoot: Use from context if available, otherwise derive from tasksPath
@@ -440,7 +439,7 @@ async function expandTask(
try {
// --- Task Loading/Filtering (Unchanged) ---
logger.info(`Reading tasks from ${tasksPath}`);
- const data = readJSON(tasksPath, projectRoot, tag);
+ const data = readJSON(tasksPath, projectRoot);
if (!data || !data.tasks)
throw new Error(`Invalid tasks data in ${tasksPath}`);
const taskIndex = data.tasks.findIndex(
@@ -669,7 +668,7 @@ async function expandTask(
// --- End Change: Append instead of replace ---
data.tasks[taskIndex] = task; // Assign the modified task back
- writeJSON(tasksPath, data, projectRoot, tag);
+ writeJSON(tasksPath, data);
// await generateTaskFiles(tasksPath, path.dirname(tasksPath));
// Display AI Usage Summary for CLI

View File

@@ -26,11 +26,11 @@ const prdSingleTaskSchema = z.object({
id: z.number().int().positive(),
title: z.string().min(1),
description: z.string().min(1),
- details: z.string().nullable(),
- testStrategy: z.string().nullable(),
- priority: z.enum(['high', 'medium', 'low']).nullable(),
- dependencies: z.array(z.number().int().positive()).nullable(),
- status: z.string().nullable()
+ details: z.string().optional().default(''),
+ testStrategy: z.string().optional().default(''),
+ priority: z.enum(['high', 'medium', 'low']).default('medium'),
+ dependencies: z.array(z.number().int().positive()).optional().default([]),
+ status: z.string().optional().default('pending')
});
// Define the Zod schema for the ENTIRE expected AI response object

View File

@@ -36,27 +36,10 @@ const updatedTaskSchema = z
description: z.string(),
status: z.string(),
dependencies: z.array(z.union([z.number().int(), z.string()])),
- priority: z.string().nullable().default('medium'),
- details: z.string().nullable().default(''),
- testStrategy: z.string().nullable().default(''),
- subtasks: z
- .array(
- z.object({
- id: z
- .number()
- .int()
- .positive()
- .describe('Sequential subtask ID starting from 1'),
- title: z.string(),
- description: z.string(),
- status: z.string(),
- dependencies: z.array(z.number().int()).nullable().default([]),
- details: z.string().nullable().default(''),
- testStrategy: z.string().nullable().default('')
- })
- )
- .nullable()
- .default([])
+ priority: z.string().optional(),
+ details: z.string().optional(),
+ testStrategy: z.string().optional(),
+ subtasks: z.array(z.any()).optional()
})
.strip(); // Allows parsing even if AI adds extra fields, but validation focuses on schema
@@ -458,8 +441,6 @@ Guidelines:
9. Instead, add a new subtask that clearly indicates what needs to be changed or replaced
10. Use the existence of completed subtasks as an opportunity to make new subtasks more specific and targeted
11. Ensure any new subtasks have unique IDs that don't conflict with existing ones
- 12. CRITICAL: For subtask IDs, use ONLY numeric values (1, 2, 3, etc.) NOT strings ("1", "2", "3")
- 13. CRITICAL: Subtask IDs should start from 1 and increment sequentially (1, 2, 3...) - do NOT use parent task ID as prefix
The changes described in the prompt should be thoughtfully applied to make the task more accurate and actionable.`;
@@ -592,37 +573,6 @@ The changes described in the prompt should be thoughtfully applied to make the t
);
updatedTask.status = taskToUpdate.status;
}
- // Fix subtask IDs if they exist (ensure they are numeric and sequential)
- if (updatedTask.subtasks && Array.isArray(updatedTask.subtasks)) {
- let currentSubtaskId = 1;
- updatedTask.subtasks = updatedTask.subtasks.map((subtask) => {
- // Fix AI-generated subtask IDs that might be strings or use parent ID as prefix
- const correctedSubtask = {
- ...subtask,
- id: currentSubtaskId, // Override AI-generated ID with correct sequential ID
- dependencies: Array.isArray(subtask.dependencies)
- ? subtask.dependencies
- .map((dep) =>
- typeof dep === 'string' ? parseInt(dep, 10) : dep
- )
- .filter(
- (depId) =>
- !Number.isNaN(depId) &&
- depId >= 1 &&
- depId < currentSubtaskId
- )
- : [],
- status: subtask.status || 'pending'
- };
- currentSubtaskId++;
- return correctedSubtask;
- });
- report(
- 'info',
- `Fixed ${updatedTask.subtasks.length} subtask IDs to be sequential numeric IDs.`
- );
- }
// Preserve completed subtasks (Keep existing logic)
if (taskToUpdate.subtasks?.length > 0) {
if (!updatedTask.subtasks) {

View File

@@ -35,10 +35,10 @@ const updatedTaskSchema = z
description: z.string(),
status: z.string(),
dependencies: z.array(z.union([z.number().int(), z.string()])),
- priority: z.string().nullable(),
- details: z.string().nullable(),
- testStrategy: z.string().nullable(),
- subtasks: z.array(z.any()).nullable() // Keep subtasks flexible for now
+ priority: z.string().optional(),
+ details: z.string().optional(),
+ testStrategy: z.string().optional(),
+ subtasks: z.array(z.any()).optional() // Keep subtasks flexible for now
})
.strip(); // Allow potential extra fields during parsing if needed, then validate structure
const updatedTaskArraySchema = z.array(updatedTaskSchema);

View File

@@ -73,7 +73,7 @@ function resolveEnvVariable(key, session = null, projectRoot = null) {
*/
function findProjectRoot(
startDir = process.cwd(),
- markers = ['package.json', 'pyproject.toml', '.git', LEGACY_CONFIG_FILE]
+ markers = ['package.json', '.git', LEGACY_CONFIG_FILE]
) {
let currentPath = path.resolve(startDir);
const rootPath = path.parse(currentPath).root;

View File

@@ -349,25 +349,6 @@ function getCurrentBranchSync(projectRoot) {
}
}
- /**
- * Check if the current working directory is inside a Git work-tree.
- * Uses `git rev-parse --is-inside-work-tree` which is more specific than --git-dir
- * for detecting work-trees (excludes bare repos and .git directories).
- * This is ideal for preventing accidental git init in existing work-trees.
- * @returns {boolean} True if inside a Git work-tree, false otherwise.
- */
- function insideGitWorkTree() {
- try {
- execSync('git rev-parse --is-inside-work-tree', {
- stdio: 'ignore',
- cwd: process.cwd()
- });
- return true;
- } catch {
- return false;
- }
- }
// Export all functions
export {
isGitRepository,
@@ -385,6 +366,5 @@ export {
checkAndAutoSwitchGitTag,
checkAndAutoSwitchGitTagSync,
isGitRepositorySync,
- getCurrentBranchSync,
- insideGitWorkTree
+ getCurrentBranchSync
};

View File

@@ -1,293 +0,0 @@
// Utility to manage .gitignore files with task file preferences and template merging
import fs from 'fs';
import path from 'path';
// Constants
const TASK_FILES_COMMENT = '# Task files';
const TASK_JSON_PATTERN = 'tasks.json';
const TASK_DIR_PATTERN = 'tasks/';
/**
* Normalizes a line by removing comments and trimming whitespace
* @param {string} line - Line to normalize
* @returns {string} Normalized line
*/
function normalizeLine(line) {
return line.trim().replace(/^#/, '').trim();
}
/**
* Checks if a line is task-related (tasks.json or tasks/)
* @param {string} line - Line to check
* @returns {boolean} True if line is task-related
*/
function isTaskLine(line) {
const normalized = normalizeLine(line);
return normalized === TASK_JSON_PATTERN || normalized === TASK_DIR_PATTERN;
}
/**
* Adjusts task-related lines in template based on storage preference
* @param {string[]} templateLines - Array of template lines
* @param {boolean} storeTasksInGit - Whether to comment out task lines
* @returns {string[]} Adjusted template lines
*/
function adjustTaskLinesInTemplate(templateLines, storeTasksInGit) {
return templateLines.map((line) => {
if (isTaskLine(line)) {
const normalized = normalizeLine(line);
// Preserve original trailing whitespace from the line
const originalTrailingSpace = line.match(/\s*$/)[0];
return storeTasksInGit
? `# ${normalized}${originalTrailingSpace}`
: `${normalized}${originalTrailingSpace}`;
}
return line;
});
}
/**
* Removes existing task files section from content
* @param {string[]} existingLines - Existing file lines
* @returns {string[]} Lines with task section removed
*/
function removeExistingTaskSection(existingLines) {
const cleanedLines = [];
let inTaskSection = false;
for (const line of existingLines) {
// Start of task files section
if (line.trim() === TASK_FILES_COMMENT) {
inTaskSection = true;
continue;
}
// Task lines (commented or not)
if (isTaskLine(line)) {
continue;
}
// Empty lines within task section
if (inTaskSection && !line.trim()) {
continue;
}
// End of task section (any non-empty, non-task line)
if (inTaskSection && line.trim() && !isTaskLine(line)) {
inTaskSection = false;
}
// Keep all other lines
if (!inTaskSection) {
cleanedLines.push(line);
}
}
return cleanedLines;
}
/**
* Filters template lines to only include new content not already present
* @param {string[]} templateLines - Template lines
* @param {Set<string>} existingLinesSet - Set of existing trimmed lines
* @returns {string[]} New lines to add
*/
function filterNewTemplateLines(templateLines, existingLinesSet) {
return templateLines.filter((line) => {
const trimmed = line.trim();
if (!trimmed) return false;
// Skip task-related lines (handled separately)
if (isTaskLine(line) || trimmed === TASK_FILES_COMMENT) {
return false;
}
// Include only if not already present
return !existingLinesSet.has(trimmed);
});
}
/**
* Builds the task files section based on storage preference
* @param {boolean} storeTasksInGit - Whether to comment out task lines
* @returns {string[]} Task files section lines
*/
function buildTaskFilesSection(storeTasksInGit) {
const section = [TASK_FILES_COMMENT];
if (storeTasksInGit) {
section.push(`# ${TASK_JSON_PATTERN}`, `# ${TASK_DIR_PATTERN} `);
} else {
section.push(TASK_JSON_PATTERN, `${TASK_DIR_PATTERN} `);
}
return section;
}
/**
* Adds a separator line if needed (avoids double spacing)
* @param {string[]} lines - Current lines array
*/
function addSeparatorIfNeeded(lines) {
if (lines.some((line) => line.trim())) {
const lastLine = lines[lines.length - 1];
if (lastLine && lastLine.trim()) {
lines.push('');
}
}
}
/**
* Validates input parameters
* @param {string} targetPath - Path to .gitignore file
* @param {string} content - Template content
* @param {boolean} storeTasksInGit - Storage preference
* @throws {Error} If validation fails
*/
function validateInputs(targetPath, content, storeTasksInGit) {
if (!targetPath || typeof targetPath !== 'string') {
throw new Error('targetPath must be a non-empty string');
}
if (!targetPath.endsWith('.gitignore')) {
throw new Error('targetPath must end with .gitignore');
}
if (!content || typeof content !== 'string') {
throw new Error('content must be a non-empty string');
}
if (typeof storeTasksInGit !== 'boolean') {
throw new Error('storeTasksInGit must be a boolean');
}
}
/**
* Creates a new .gitignore file from template
* @param {string} targetPath - Path to create file at
* @param {string[]} templateLines - Adjusted template lines
* @param {function} log - Logging function
*/
function createNewGitignoreFile(targetPath, templateLines, log) {
try {
fs.writeFileSync(targetPath, templateLines.join('\n'));
if (typeof log === 'function') {
log('success', `Created ${targetPath} with full template`);
}
} catch (error) {
if (typeof log === 'function') {
log('error', `Failed to create ${targetPath}: ${error.message}`);
}
throw error;
}
}
/**
* Merges template content with existing .gitignore file
* @param {string} targetPath - Path to existing file
* @param {string[]} templateLines - Adjusted template lines
* @param {boolean} storeTasksInGit - Storage preference
* @param {function} log - Logging function
*/
function mergeWithExistingFile(
targetPath,
templateLines,
storeTasksInGit,
log
) {
try {
// Read and process existing file
const existingContent = fs.readFileSync(targetPath, 'utf8');
const existingLines = existingContent.split('\n');
// Remove existing task section
const cleanedExistingLines = removeExistingTaskSection(existingLines);
// Find new template lines to add
const existingLinesSet = new Set(
cleanedExistingLines.map((line) => line.trim()).filter((line) => line)
);
const newLines = filterNewTemplateLines(templateLines, existingLinesSet);
// Build final content
const finalLines = [...cleanedExistingLines];
// Add new template content
if (newLines.length > 0) {
addSeparatorIfNeeded(finalLines);
finalLines.push(...newLines);
}
// Add task files section
addSeparatorIfNeeded(finalLines);
finalLines.push(...buildTaskFilesSection(storeTasksInGit));
// Write result
fs.writeFileSync(targetPath, finalLines.join('\n'));
if (typeof log === 'function') {
const hasNewContent =
newLines.length > 0 ? ' and merged new content' : '';
log(
'success',
`Updated ${targetPath} according to user preference${hasNewContent}`
);
}
} catch (error) {
if (typeof log === 'function') {
log(
'error',
`Failed to merge content with ${targetPath}: ${error.message}`
);
}
throw error;
}
}
/**
* Manages .gitignore file creation and updates with task file preferences
* @param {string} targetPath - Path to the .gitignore file
* @param {string} content - Template content for .gitignore
* @param {boolean} storeTasksInGit - Whether to store tasks in git or not
* @param {function} log - Logging function (level, message)
* @throws {Error} If validation or file operations fail
*/
function manageGitignoreFile(
targetPath,
content,
storeTasksInGit = true,
log = null
) {
// Validate inputs
validateInputs(targetPath, content, storeTasksInGit);
// Process template with task preference
const templateLines = content.split('\n');
const adjustedTemplateLines = adjustTaskLinesInTemplate(
templateLines,
storeTasksInGit
);
// Handle file creation or merging
if (!fs.existsSync(targetPath)) {
createNewGitignoreFile(targetPath, adjustedTemplateLines, log);
} else {
mergeWithExistingFile(
targetPath,
adjustedTemplateLines,
storeTasksInGit,
log
);
}
}
export default manageGitignoreFile;
export {
manageGitignoreFile,
normalizeLine,
isTaskLine,
buildTaskFilesSection,
TASK_FILES_COMMENT,
TASK_JSON_PATTERN,
TASK_DIR_PATTERN
};
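
Since this hunk deletes `manage-gitignore.js` outright, a short usage sketch (mirroring the call removed from `scripts/init.js` earlier in this diff; the paths here are illustrative) shows what callers lose:

```javascript
import fs from 'fs';
import path from 'path';
import manageGitignoreFile from '../src/utils/manage-gitignore.js';

// Merge the bundled template into the project's .gitignore, commenting out
// the tasks.json / tasks/ ignore rules when tasks should stay tracked in Git.
const template = fs.readFileSync(path.join('assets', 'gitignore'), 'utf8');
manageGitignoreFile(
	path.join(process.cwd(), '.gitignore'),
	template,
	true, // storeTasksInGit: keep task files in version control
	(level, message) => console.log(`[${level}] ${message}`)
);
```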

View File

@@ -206,7 +206,6 @@ export function convertAllRulesToProfileRules(projectDir, profile) {
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const assetsDir = path.join(__dirname, '..', '..', 'assets');
if (typeof profile.onPostConvertRulesProfile === 'function') {
profile.onPostConvertRulesProfile(projectDir, assetsDir);
}

View File

@@ -333,8 +333,8 @@ log_step() {
log_step "Initializing Task Master project (non-interactive)"
task-master init -y --name="E2E Test $TIMESTAMP" --description="Automated E2E test run"
if [ ! -f ".taskmaster/config.json" ]; then
log_error "Initialization failed: .taskmaster/config.json not found."
if [ ! -f ".taskmasterconfig" ]; then
log_error "Initialization failed: .taskmasterconfig not found."
exit 1
fi
log_success "Project initialized."
@@ -344,8 +344,8 @@ log_step() {
exit_status_prd=$?
echo "$cmd_output_prd"
extract_and_sum_cost "$cmd_output_prd"
if [ $exit_status_prd -ne 0 ] || [ ! -s ".taskmaster/tasks/tasks.json" ]; then
log_error "Parsing PRD failed: .taskmaster/tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
if [ $exit_status_prd -ne 0 ] || [ ! -s "tasks/tasks.json" ]; then
log_error "Parsing PRD failed: tasks/tasks.json not found or is empty. Exit status: $exit_status_prd"
exit 1
else
log_success "PRD parsed successfully."
@@ -386,95 +386,6 @@ log_step() {
task-master list --with-subtasks > task_list_after_changes.log
log_success "Task list after changes saved to task_list_after_changes.log"
# === Start New Test Section: Tag-Aware Expand Testing ===
log_step "Creating additional tag for expand testing"
task-master add-tag feature-expand --description="Tag for testing expand command with tag preservation"
log_success "Created feature-expand tag."
log_step "Adding task to feature-expand tag"
task-master add-task --tag=feature-expand --prompt="Test task for tag-aware expansion" --priority=medium
# Get the new task ID dynamically
new_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
log_success "Added task $new_expand_task_id to feature-expand tag."
log_step "Verifying tags exist before expand test"
task-master tags > tags_before_expand.log
tag_count_before=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
log_success "Tag count before expand: $tag_count_before"
log_step "Expanding task in feature-expand tag (testing tag corruption fix)"
cmd_output_expand_tagged=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" 2>&1)
exit_status_expand_tagged=$?
echo "$cmd_output_expand_tagged"
extract_and_sum_cost "$cmd_output_expand_tagged"
if [ $exit_status_expand_tagged -ne 0 ]; then
log_error "Tagged expand failed. Exit status: $exit_status_expand_tagged"
else
log_success "Tagged expand completed."
fi
log_step "Verifying tag preservation after expand"
task-master tags > tags_after_expand.log
tag_count_after=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
if [ "$tag_count_before" -eq "$tag_count_after" ]; then
log_success "Tag count preserved: $tag_count_after (no corruption detected)"
else
log_error "Tag corruption detected! Before: $tag_count_before, After: $tag_count_after"
fi
log_step "Verifying master tag still exists and has tasks"
master_task_count=$(jq -r '.master.tasks | length' .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
if [ "$master_task_count" -gt "0" ]; then
log_success "Master tag preserved with $master_task_count tasks"
else
log_error "Master tag corrupted or empty after tagged expand"
fi
log_step "Verifying feature-expand tag has expanded subtasks"
expanded_subtask_count=$(jq -r ".\"feature-expand\".tasks[] | select(.id == $new_expand_task_id) | .subtasks | length" .taskmaster/tasks/tasks.json 2>/dev/null || echo "0")
if [ "$expanded_subtask_count" -gt "0" ]; then
log_success "Expand successful: $expanded_subtask_count subtasks created in feature-expand tag"
else
log_error "Expand failed: No subtasks found in feature-expand tag"
fi
log_step "Testing force expand with tag preservation"
cmd_output_force_expand=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" --force 2>&1)
exit_status_force_expand=$?
echo "$cmd_output_force_expand"
extract_and_sum_cost "$cmd_output_force_expand"
# Verify tags still preserved after force expand
tag_count_after_force=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
if [ "$tag_count_before" -eq "$tag_count_after_force" ]; then
log_success "Force expand preserved all tags"
else
log_error "Force expand caused tag corruption"
fi
log_step "Testing expand --all with tag preservation"
# Add another task to feature-expand for expand-all testing
task-master add-task --tag=feature-expand --prompt="Second task for expand-all testing" --priority=low
second_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json)
cmd_output_expand_all=$(task-master expand --tag=feature-expand --all 2>&1)
exit_status_expand_all=$?
echo "$cmd_output_expand_all"
extract_and_sum_cost "$cmd_output_expand_all"
# Verify tags preserved after expand-all
tag_count_after_all=$(jq 'keys | length' .taskmaster/tasks/tasks.json)
if [ "$tag_count_before" -eq "$tag_count_after_all" ]; then
log_success "Expand --all preserved all tags"
else
log_error "Expand --all caused tag corruption"
fi
log_success "Completed expand --all tag preservation test."
# === End New Test Section: Tag-Aware Expand Testing ===
# === Test Model Commands ===
log_step "Checking initial model configuration"
task-master models > models_initial_config.log
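The tag-preservation checks in the removed test section above all assert a single invariant: the set of top-level tag keys in tasks.json must be unchanged by an expand. A minimal JavaScript sketch of the same check (equivalent to the script's jq 'keys | length' calls; the path assumes the .taskmaster layout used above):

import fs from 'fs';

// Count top-level tags in the tagged tasks.json layout,
// e.g. { master: { tasks: [...] }, 'feature-expand': { tasks: [...] } }
function countTags(tasksJsonPath) {
	return Object.keys(JSON.parse(fs.readFileSync(tasksJsonPath, 'utf8'))).length;
}

const before = countTags('.taskmaster/tasks/tasks.json');
// ... run: task-master expand --tag=feature-expand --id=<task-id> ...
const after = countTags('.taskmaster/tasks/tasks.json');
if (before !== after) {
	throw new Error(`Tag corruption detected! Before: ${before}, After: ${after}`);
}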
@@ -715,7 +626,7 @@ log_step() {
# Find the next available task ID dynamically instead of hardcoding 11, 12
# Assuming tasks are added sequentially and we didn't remove any core tasks yet
-last_task_id=$(jq '[.master.tasks[].id] | max' .taskmaster/tasks/tasks.json)
+last_task_id=$(jq '[.tasks[].id] | max' tasks/tasks.json)
manual_task_id=$((last_task_id + 1))
ai_task_id=$((manual_task_id + 1))
@@ -836,30 +747,30 @@ log_step() {
task-master list --with-subtasks > task_list_after_clear_all.log
log_success "Task list after clear-all saved. (Manual/LLM check recommended to verify subtasks removed)"
log_step "Expanding Task 3 again (to have subtasks for next test)"
task-master expand --id=3
log_success "Attempted to expand Task 3."
# Verify 3.1 exists
if ! jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' .taskmaster/tasks/tasks.json > /dev/null; then
log_error "Subtask 3.1 not found in tasks.json after expanding Task 3."
log_step "Expanding Task 1 again (to have subtasks for next test)"
task-master expand --id=1
log_success "Attempted to expand Task 1 again."
# Verify 1.1 exists again
if ! jq -e '.tasks[] | select(.id == 1) | .subtasks[] | select(.id == 1)' tasks/tasks.json > /dev/null; then
log_error "Subtask 1.1 not found in tasks.json after re-expanding Task 1."
exit 1
fi
log_step "Adding dependency: Task 4 depends on Subtask 3.1"
task-master add-dependency --id=4 --depends-on=3.1
log_success "Added dependency 4 -> 3.1."
log_step "Adding dependency: Task 3 depends on Subtask 1.1"
task-master add-dependency --id=3 --depends-on=1.1
log_success "Added dependency 3 -> 1.1."
log_step "Showing Task 4 details (after adding subtask dependency)"
task-master show 4 > task_4_details_after_dep_add.log
log_success "Task 4 details saved. (Manual/LLM check recommended for dependency [3.1])"
log_step "Showing Task 3 details (after adding subtask dependency)"
task-master show 3 > task_3_details_after_dep_add.log
log_success "Task 3 details saved. (Manual/LLM check recommended for dependency [1.1])"
log_step "Removing dependency: Task 4 depends on Subtask 3.1"
task-master remove-dependency --id=4 --depends-on=3.1
log_success "Removed dependency 4 -> 3.1."
log_step "Removing dependency: Task 3 depends on Subtask 1.1"
task-master remove-dependency --id=3 --depends-on=1.1
log_success "Removed dependency 3 -> 1.1."
log_step "Showing Task 4 details (after removing subtask dependency)"
task-master show 4 > task_4_details_after_dep_remove.log
log_success "Task 4 details saved. (Manual/LLM check recommended to verify dependency removed)"
log_step "Showing Task 3 details (after removing subtask dependency)"
task-master show 3 > task_3_details_after_dep_remove.log
log_success "Task 3 details saved. (Manual/LLM check recommended to verify dependency removed)"
# === End New Test Section ===

View File

@@ -1,581 +0,0 @@
/**
* Integration tests for manage-gitignore.js module
* Tests actual file system operations in a temporary directory
*/
import fs from 'fs';
import path from 'path';
import os from 'os';
import manageGitignoreFile from '../../src/utils/manage-gitignore.js';
describe('manage-gitignore.js Integration Tests', () => {
let tempDir;
let testGitignorePath;
beforeEach(() => {
// Create a temporary directory for each test
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gitignore-test-'));
testGitignorePath = path.join(tempDir, '.gitignore');
});
afterEach(() => {
// Clean up temporary directory after each test
if (fs.existsSync(tempDir)) {
fs.rmSync(tempDir, { recursive: true, force: true });
}
});
describe('New File Creation', () => {
const templateContent = `# Logs
logs
*.log
npm-debug.log*
# Dependencies
node_modules/
jspm_packages/
# Environment variables
.env
.env.local
# Task files
tasks.json
tasks/ `;
test('should create new .gitignore file with commented task lines (storeTasksInGit = true)', () => {
const logs = [];
const mockLog = (level, message) => logs.push({ level, message });
manageGitignoreFile(testGitignorePath, templateContent, true, mockLog);
// Verify file was created
expect(fs.existsSync(testGitignorePath)).toBe(true);
// Verify content
const content = fs.readFileSync(testGitignorePath, 'utf8');
expect(content).toContain('# Logs');
expect(content).toContain('logs');
expect(content).toContain('# Dependencies');
expect(content).toContain('node_modules/');
expect(content).toContain('# Task files');
expect(content).toContain('tasks.json');
expect(content).toContain('tasks/');
// Verify task lines are commented (storeTasksInGit = true)
expect(content).toMatch(
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
);
// Verify log message
expect(logs).toContainEqual({
level: 'success',
message: expect.stringContaining('Created')
});
});
test('should create new .gitignore file with uncommented task lines (storeTasksInGit = false)', () => {
const logs = [];
const mockLog = (level, message) => logs.push({ level, message });
manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
// Verify file was created
expect(fs.existsSync(testGitignorePath)).toBe(true);
// Verify content
const content = fs.readFileSync(testGitignorePath, 'utf8');
expect(content).toContain('# Task files');
// Verify task lines are uncommented (storeTasksInGit = false)
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
// Verify log message
expect(logs).toContainEqual({
level: 'success',
message: expect.stringContaining('Created')
});
});
test('should work without log function', () => {
expect(() => {
manageGitignoreFile(testGitignorePath, templateContent, false);
}).not.toThrow();
expect(fs.existsSync(testGitignorePath)).toBe(true);
});
});
describe('File Merging', () => {
const templateContent = `# Logs
logs
*.log
# Dependencies
node_modules/
# Environment variables
.env
# Task files
tasks.json
tasks/ `;
test('should merge template with existing file content', () => {
// Create existing .gitignore file
const existingContent = `# Existing content
old-files.txt
*.backup
# Old task files (to be replaced)
# Task files
# tasks.json
# tasks/
# More existing content
cache/`;
fs.writeFileSync(testGitignorePath, existingContent);
const logs = [];
const mockLog = (level, message) => logs.push({ level, message });
manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
// Verify file still exists
expect(fs.existsSync(testGitignorePath)).toBe(true);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain existing non-task content
expect(content).toContain('# Existing content');
expect(content).toContain('old-files.txt');
expect(content).toContain('*.backup');
expect(content).toContain('# More existing content');
expect(content).toContain('cache/');
// Should add new template content
expect(content).toContain('# Logs');
expect(content).toContain('logs');
expect(content).toContain('# Dependencies');
expect(content).toContain('node_modules/');
expect(content).toContain('# Environment variables');
expect(content).toContain('.env');
// Should replace task section with new preference (storeTasksInGit = false means uncommented)
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
// Verify log message
expect(logs).toContainEqual({
level: 'success',
message: expect.stringContaining('Updated')
});
});
test('should handle switching task preferences from commented to uncommented', () => {
// Create existing file with commented task lines
const existingContent = `# Existing
existing.txt
# Task files
# tasks.json
# tasks/ `;
fs.writeFileSync(testGitignorePath, existingContent);
// Update with storeTasksInGit = true (commented)
manageGitignoreFile(testGitignorePath, templateContent, true);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain existing content
expect(content).toContain('# Existing');
expect(content).toContain('existing.txt');
// Should have commented task lines (storeTasksInGit = true)
expect(content).toMatch(
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
);
});
test('should handle switching task preferences from uncommented to commented', () => {
// Create existing file with uncommented task lines
const existingContent = `# Existing
existing.txt
# Task files
tasks.json
tasks/ `;
fs.writeFileSync(testGitignorePath, existingContent);
// Update with storeTasksInGit = false (uncommented)
manageGitignoreFile(testGitignorePath, templateContent, false);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain existing content
expect(content).toContain('# Existing');
expect(content).toContain('existing.txt');
// Should have uncommented task lines (storeTasksInGit = false)
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
});
test('should not duplicate existing template content', () => {
// Create existing file that already has some template content
const existingContent = `# Logs
logs
*.log
# Dependencies
node_modules/
# Custom content
custom.txt
# Task files
# tasks.json
# tasks/ `;
fs.writeFileSync(testGitignorePath, existingContent);
manageGitignoreFile(testGitignorePath, templateContent, false);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should not duplicate logs section
const logsMatches = content.match(/# Logs/g);
expect(logsMatches).toHaveLength(1);
// Should not duplicate dependencies section
const depsMatches = content.match(/# Dependencies/g);
expect(depsMatches).toHaveLength(1);
// Should retain custom content
expect(content).toContain('# Custom content');
expect(content).toContain('custom.txt');
// Should add new template content that wasn't present
expect(content).toContain('# Environment variables');
expect(content).toContain('.env');
});
test('should handle empty existing file', () => {
// Create empty file
fs.writeFileSync(testGitignorePath, '');
manageGitignoreFile(testGitignorePath, templateContent, false);
expect(fs.existsSync(testGitignorePath)).toBe(true);
const content = fs.readFileSync(testGitignorePath, 'utf8');
expect(content).toContain('# Logs');
expect(content).toContain('# Task files');
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
});
test('should handle file with only whitespace', () => {
// Create file with only whitespace
fs.writeFileSync(testGitignorePath, ' \n\n \n');
manageGitignoreFile(testGitignorePath, templateContent, true);
const content = fs.readFileSync(testGitignorePath, 'utf8');
expect(content).toContain('# Logs');
expect(content).toContain('# Task files');
expect(content).toMatch(
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
);
});
});
describe('Complex Task Section Handling', () => {
test('should remove task section with mixed comments and spacing', () => {
const existingContent = `# Dependencies
node_modules/
# Task files
# tasks.json
tasks/
# More content
more.txt`;
const templateContent = `# New content
new.txt
# Task files
tasks.json
tasks/ `;
fs.writeFileSync(testGitignorePath, existingContent);
manageGitignoreFile(testGitignorePath, templateContent, false);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain non-task content
expect(content).toContain('# Dependencies');
expect(content).toContain('node_modules/');
expect(content).toContain('# More content');
expect(content).toContain('more.txt');
// Should add new content
expect(content).toContain('# New content');
expect(content).toContain('new.txt');
// Should have clean task section (storeTasksInGit = false means uncommented)
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
});
test('should handle multiple task file variations', () => {
const existingContent = `# Existing
existing.txt
# Task files
tasks.json
# tasks.json
# tasks/
tasks/
#tasks.json
# More content
more.txt`;
const templateContent = `# Task files
tasks.json
tasks/ `;
fs.writeFileSync(testGitignorePath, existingContent);
manageGitignoreFile(testGitignorePath, templateContent, true);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain non-task content
expect(content).toContain('# Existing');
expect(content).toContain('existing.txt');
expect(content).toContain('# More content');
expect(content).toContain('more.txt');
// Should have clean task section with preference applied (storeTasksInGit = true means commented)
expect(content).toMatch(
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
);
// Should not have multiple task sections
const taskFileMatches = content.match(/# Task files/g);
expect(taskFileMatches).toHaveLength(1);
});
});
describe('Error Handling', () => {
test('should handle permission errors gracefully', () => {
// Create a directory where we would create the file, then remove write permissions
const readOnlyDir = path.join(tempDir, 'readonly');
fs.mkdirSync(readOnlyDir);
fs.chmodSync(readOnlyDir, 0o444); // Read-only
const readOnlyGitignorePath = path.join(readOnlyDir, '.gitignore');
const templateContent = `# Test
test.txt
# Task files
tasks.json
tasks/ `;
const logs = [];
const mockLog = (level, message) => logs.push({ level, message });
expect(() => {
manageGitignoreFile(
readOnlyGitignorePath,
templateContent,
false,
mockLog
);
}).toThrow();
// Verify error was logged
expect(logs).toContainEqual({
level: 'error',
message: expect.stringContaining('Failed to create')
});
// Restore permissions for cleanup
fs.chmodSync(readOnlyDir, 0o755);
});
test('should handle read errors on existing files', () => {
// Create a file then remove read permissions
fs.writeFileSync(testGitignorePath, 'existing content');
fs.chmodSync(testGitignorePath, 0o000); // No permissions
const templateContent = `# Test
test.txt
# Task files
tasks.json
tasks/ `;
const logs = [];
const mockLog = (level, message) => logs.push({ level, message });
expect(() => {
manageGitignoreFile(testGitignorePath, templateContent, false, mockLog);
}).toThrow();
// Verify error was logged
expect(logs).toContainEqual({
level: 'error',
message: expect.stringContaining('Failed to merge content')
});
// Restore permissions for cleanup
fs.chmodSync(testGitignorePath, 0o644);
});
});
describe('Real-world Scenarios', () => {
test('should handle typical Node.js project .gitignore', () => {
const existingNodeGitignore = `# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Dependency directories
node_modules/
jspm_packages/
# Optional npm cache directory
.npm
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
# next.js build output
.next`;
const taskMasterTemplate = `# Logs
logs
*.log
# Dependencies
node_modules/
# Environment variables
.env
# Build output
dist/
build/
# Task files
tasks.json
tasks/ `;
fs.writeFileSync(testGitignorePath, existingNodeGitignore);
manageGitignoreFile(testGitignorePath, taskMasterTemplate, false);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain existing Node.js specific entries
expect(content).toContain('npm-debug.log*');
expect(content).toContain('yarn-debug.log*');
expect(content).toContain('*.pid');
expect(content).toContain('jspm_packages/');
expect(content).toContain('.npm');
expect(content).toContain('*.tgz');
expect(content).toContain('.yarn-integrity');
expect(content).toContain('.next');
// Should add new content from template that wasn't present
expect(content).toContain('dist/');
expect(content).toContain('build/');
// Should add task files section with correct preference (storeTasksInGit = false means uncommented)
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
// Should not duplicate common entries
const nodeModulesMatches = content.match(/node_modules\//g);
expect(nodeModulesMatches).toHaveLength(1);
const logsMatches = content.match(/# Logs/g);
expect(logsMatches).toHaveLength(1);
});
test('should handle project with existing task files in git', () => {
const existingContent = `# Dependencies
node_modules/
# Logs
*.log
# Current task setup - keeping in git
# Task files
tasks.json
tasks/
# Build output
dist/`;
const templateContent = `# New template
# Dependencies
node_modules/
# Task files
tasks.json
tasks/ `;
fs.writeFileSync(testGitignorePath, existingContent);
// Change preference to exclude tasks from git (storeTasksInGit = false means uncommented/ignored)
manageGitignoreFile(testGitignorePath, templateContent, false);
const content = fs.readFileSync(testGitignorePath, 'utf8');
// Should retain existing content
expect(content).toContain('# Dependencies');
expect(content).toContain('node_modules/');
expect(content).toContain('# Logs');
expect(content).toContain('*.log');
expect(content).toContain('# Build output');
expect(content).toContain('dist/');
// Should update task preference to uncommented (storeTasksInGit = false)
expect(content).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
});
});
});

View File

@@ -133,7 +133,7 @@ jest.mock('../../../scripts/modules/utils.js', () => ({
readComplexityReport: mockReadComplexityReport,
CONFIG: {
model: 'claude-3-7-sonnet-20250219',
-maxTokens: 8192,
+maxTokens: 64000,
temperature: 0.2,
defaultSubtasks: 5
}
@@ -625,38 +625,19 @@ describe('MCP Server Direct Functions', () => {
// For successful cases, record that functions were called but don't make real calls
mockEnableSilentMode();
-// Mock expandAllTasks - now returns a structured object instead of undefined
+// Mock expandAllTasks
const mockExpandAll = jest.fn().mockImplementation(async () => {
-// Return the new structured response that matches the actual implementation
-return {
-success: true,
-expandedCount: 2,
-failedCount: 0,
-skippedCount: 1,
-tasksToExpand: 3,
-telemetryData: {
-timestamp: new Date().toISOString(),
-commandName: 'expand-all-tasks',
-totalCost: 0.05,
-totalTokens: 1000,
-inputTokens: 600,
-outputTokens: 400
-}
-};
+// Just simulate success without any real operations
+return undefined; // expandAllTasks doesn't return anything
});
-// Call mock expandAllTasks with the correct signature
-const result = await mockExpandAll(
-args.file, // tasksPath
-args.num, // numSubtasks
-args.research || false, // useResearch
-args.prompt || '', // additionalContext
-args.force || false, // force
-{
-mcpLog: mockLogger,
-session: options.session,
-projectRoot: args.projectRoot
-}
+// Call mock expandAllTasks
+await mockExpandAll(
+args.num,
+args.research || false,
+args.prompt || '',
+args.force || false,
+{ mcpLog: mockLogger, session: options.session }
);
mockDisableSilentMode();
@@ -664,14 +645,13 @@ describe('MCP Server Direct Functions', () => {
return {
success: true,
data: {
-message: `Expand all operation completed. Expanded: ${result.expandedCount}, Failed: ${result.failedCount}, Skipped: ${result.skippedCount}`,
+message: 'Successfully expanded all pending tasks with subtasks',
details: {
-expandedCount: result.expandedCount,
-failedCount: result.failedCount,
-skippedCount: result.skippedCount,
-tasksToExpand: result.tasksToExpand
-},
-telemetryData: result.telemetryData
+numSubtasks: args.num,
+research: args.research || false,
+prompt: args.prompt || '',
+force: args.force || false
}
}
};
}
@@ -691,13 +671,10 @@ describe('MCP Server Direct Functions', () => {
// Assert
expect(result.success).toBe(true);
-expect(result.data.message).toMatch(/Expand all operation completed/);
-expect(result.data.details.expandedCount).toBe(2);
-expect(result.data.details.failedCount).toBe(0);
-expect(result.data.details.skippedCount).toBe(1);
-expect(result.data.details.tasksToExpand).toBe(3);
-expect(result.data.telemetryData).toBeDefined();
-expect(result.data.telemetryData.commandName).toBe('expand-all-tasks');
+expect(result.data.message).toBe(
+'Successfully expanded all pending tasks with subtasks'
+);
+expect(result.data.details.numSubtasks).toBe(3);
expect(mockEnableSilentMode).toHaveBeenCalled();
expect(mockDisableSilentMode).toHaveBeenCalled();
});
@@ -718,8 +695,7 @@ describe('MCP Server Direct Functions', () => {
// Assert
expect(result.success).toBe(true);
expect(result.data.details.expandedCount).toBe(2);
-expect(result.data.telemetryData).toBeDefined();
+expect(result.data.details.research).toBe(true);
expect(mockEnableSilentMode).toHaveBeenCalled();
expect(mockDisableSilentMode).toHaveBeenCalled();
});
@@ -739,8 +715,7 @@ describe('MCP Server Direct Functions', () => {
// Assert
expect(result.success).toBe(true);
expect(result.data.details.expandedCount).toBe(2);
-expect(result.data.telemetryData).toBeDefined();
+expect(result.data.details.force).toBe(true);
expect(mockEnableSilentMode).toHaveBeenCalled();
expect(mockDisableSilentMode).toHaveBeenCalled();
});
@@ -760,77 +735,11 @@ describe('MCP Server Direct Functions', () => {
// Assert
expect(result.success).toBe(true);
expect(result.data.details.expandedCount).toBe(2);
-expect(result.data.telemetryData).toBeDefined();
+expect(result.data.details.prompt).toBe(
+'Additional context for subtasks'
+);
expect(mockEnableSilentMode).toHaveBeenCalled();
expect(mockDisableSilentMode).toHaveBeenCalled();
});
test('should handle case with no eligible tasks', async () => {
// Arrange
const args = {
projectRoot: testProjectRoot,
file: testTasksPath,
num: 3
};
// Act - Mock the scenario where no tasks are eligible for expansion
async function testNoEligibleTasks(args, mockLogger, options = {}) {
mockEnableSilentMode();
const mockExpandAll = jest.fn().mockImplementation(async () => {
return {
success: true,
expandedCount: 0,
failedCount: 0,
skippedCount: 0,
tasksToExpand: 0,
telemetryData: null,
message: 'No tasks eligible for expansion.'
};
});
const result = await mockExpandAll(
args.file,
args.num,
false,
'',
false,
{
mcpLog: mockLogger,
session: options.session,
projectRoot: args.projectRoot
},
'json'
);
mockDisableSilentMode();
return {
success: true,
data: {
message: result.message,
details: {
expandedCount: result.expandedCount,
failedCount: result.failedCount,
skippedCount: result.skippedCount,
tasksToExpand: result.tasksToExpand
},
telemetryData: result.telemetryData
}
};
}
const result = await testNoEligibleTasks(args, mockLogger, {
session: mockSession
});
// Assert
expect(result.success).toBe(true);
expect(result.data.message).toBe('No tasks eligible for expansion.');
expect(result.data.details.expandedCount).toBe(0);
expect(result.data.details.tasksToExpand).toBe(0);
expect(result.data.telemetryData).toBeNull();
});
});
});

View File

@@ -129,7 +129,7 @@ const DEFAULT_CONFIG = {
fallback: {
provider: 'anthropic',
modelId: 'claude-3-5-sonnet',
-maxTokens: 8192,
+maxTokens: 64000,
temperature: 0.2
}
},

View File

@@ -75,7 +75,7 @@ const DEFAULT_CONFIG = {
fallback: {
provider: 'anthropic',
modelId: 'claude-3-5-sonnet',
-maxTokens: 8192,
+maxTokens: 64000,
temperature: 0.2
}
},

View File

@@ -1,538 +0,0 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';
// Reduce noise in test output
process.env.TASKMASTER_LOG_LEVEL = 'error';
// === Mock everything early ===
jest.mock('child_process', () => ({ execSync: jest.fn() }));
jest.mock('fs', () => ({
...jest.requireActual('fs'),
mkdirSync: jest.fn(),
writeFileSync: jest.fn(),
readFileSync: jest.fn(),
appendFileSync: jest.fn(),
existsSync: jest.fn(),
mkdtempSync: jest.requireActual('fs').mkdtempSync,
rmSync: jest.requireActual('fs').rmSync
}));
// Mock console methods to suppress output
const consoleMethods = ['log', 'info', 'warn', 'error', 'clear'];
consoleMethods.forEach((method) => {
global.console[method] = jest.fn();
});
// Mock ES modules using unstable_mockModule
jest.unstable_mockModule('../../scripts/modules/utils.js', () => ({
isSilentMode: jest.fn(() => true),
enableSilentMode: jest.fn(),
log: jest.fn(),
findProjectRoot: jest.fn(() => process.cwd())
}));
// Mock git-utils module
jest.unstable_mockModule('../../scripts/modules/utils/git-utils.js', () => ({
insideGitWorkTree: jest.fn(() => false)
}));
// Mock rule transformer
jest.unstable_mockModule('../../src/utils/rule-transformer.js', () => ({
convertAllRulesToProfileRules: jest.fn(),
getRulesProfile: jest.fn(() => ({
conversionConfig: {},
globalReplacements: []
}))
}));
// Mock any other modules that might output or do real operations
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
createDefaultConfig: jest.fn(() => ({ models: {}, project: {} })),
saveConfig: jest.fn()
}));
// Mock display libraries
jest.mock('figlet', () => ({ textSync: jest.fn(() => 'MOCKED BANNER') }));
jest.mock('boxen', () => jest.fn(() => 'MOCKED BOX'));
jest.mock('gradient-string', () => jest.fn(() => jest.fn((text) => text)));
jest.mock('chalk', () => ({
blue: jest.fn((text) => text),
green: jest.fn((text) => text),
red: jest.fn((text) => text),
yellow: jest.fn((text) => text),
cyan: jest.fn((text) => text),
white: jest.fn((text) => text),
dim: jest.fn((text) => text),
bold: jest.fn((text) => text),
underline: jest.fn((text) => text)
}));
const { execSync } = jest.requireMock('child_process');
const mockFs = jest.requireMock('fs');
// Import the mocked modules
const mockUtils = await import('../../scripts/modules/utils.js');
const mockGitUtils = await import('../../scripts/modules/utils/git-utils.js');
const mockRuleTransformer = await import('../../src/utils/rule-transformer.js');
// Import after mocks
const { initializeProject } = await import('../../scripts/init.js');
describe('initializeProject Git / Alias flag logic', () => {
let tmpDir;
const origCwd = process.cwd();
// Standard non-interactive options for all tests
const baseOptions = {
yes: true,
skipInstall: true,
name: 'test-project',
description: 'Test project description',
version: '1.0.0',
author: 'Test Author'
};
beforeEach(() => {
jest.clearAllMocks();
// Set up basic fs mocks
mockFs.mkdirSync.mockImplementation(() => {});
mockFs.writeFileSync.mockImplementation(() => {});
mockFs.readFileSync.mockImplementation((filePath) => {
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
return 'mock template content';
}
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
return '# existing config';
}
return '';
});
mockFs.appendFileSync.mockImplementation(() => {});
mockFs.existsSync.mockImplementation((filePath) => {
// Template source files exist
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
return true;
}
// Shell config files exist by default
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
return true;
}
return false;
});
// Reset utils mocks
mockUtils.isSilentMode.mockReturnValue(true);
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
// Default execSync mock
execSync.mockImplementation(() => '');
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'tm-init-'));
process.chdir(tmpDir);
});
afterEach(() => {
process.chdir(origCwd);
fs.rmSync(tmpDir, { recursive: true, force: true });
});
describe('Git Flag Behavior', () => {
it('completes successfully with git:false in dry run', async () => {
const result = await initializeProject({
...baseOptions,
git: false,
aliases: false,
dryRun: true
});
expect(result.dryRun).toBe(true);
});
it('completes successfully with git:true when not inside repo', async () => {
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
await expect(
initializeProject({
...baseOptions,
git: true,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('completes successfully when already inside repo', async () => {
mockGitUtils.insideGitWorkTree.mockReturnValue(true);
await expect(
initializeProject({
...baseOptions,
git: true,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('uses default git behavior without errors', async () => {
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
await expect(
initializeProject({
...baseOptions,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('handles git command failures gracefully', async () => {
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
execSync.mockImplementation((cmd) => {
if (cmd.includes('git init')) {
throw new Error('git not found');
}
return '';
});
await expect(
initializeProject({
...baseOptions,
git: true,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
});
describe('Alias Flag Behavior', () => {
it('completes successfully when aliases:true and environment is set up', async () => {
const originalShell = process.env.SHELL;
const originalHome = process.env.HOME;
process.env.SHELL = '/bin/zsh';
process.env.HOME = '/mock/home';
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: true,
dryRun: false
})
).resolves.not.toThrow();
process.env.SHELL = originalShell;
process.env.HOME = originalHome;
});
it('completes successfully when aliases:false', async () => {
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('handles missing shell gracefully', async () => {
const originalShell = process.env.SHELL;
const originalHome = process.env.HOME;
delete process.env.SHELL; // Remove shell env var
process.env.HOME = '/mock/home';
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: true,
dryRun: false
})
).resolves.not.toThrow();
process.env.SHELL = originalShell;
process.env.HOME = originalHome;
});
it('handles missing shell config file gracefully', async () => {
const originalShell = process.env.SHELL;
const originalHome = process.env.HOME;
process.env.SHELL = '/bin/zsh';
process.env.HOME = '/mock/home';
// Shell config doesn't exist
mockFs.existsSync.mockImplementation((filePath) => {
if (filePath.includes('.zshrc') || filePath.includes('.bashrc')) {
return false;
}
if (filePath.includes('assets') || filePath.includes('.cursor/rules')) {
return true;
}
return false;
});
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: true,
dryRun: false
})
).resolves.not.toThrow();
process.env.SHELL = originalShell;
process.env.HOME = originalHome;
});
});
describe('Flag Combinations', () => {
it.each`
git | aliases | description
${true} | ${true} | ${'git & aliases enabled'}
${true} | ${false} | ${'git enabled, aliases disabled'}
${false} | ${true} | ${'git disabled, aliases enabled'}
${false} | ${false} | ${'git & aliases disabled'}
`('handles $description without errors', async ({ git, aliases }) => {
const originalShell = process.env.SHELL;
const originalHome = process.env.HOME;
if (aliases) {
process.env.SHELL = '/bin/zsh';
process.env.HOME = '/mock/home';
}
if (git) {
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
}
await expect(
initializeProject({
...baseOptions,
git,
aliases,
dryRun: false
})
).resolves.not.toThrow();
process.env.SHELL = originalShell;
process.env.HOME = originalHome;
});
});
describe('Dry Run Mode', () => {
it('returns dry run result and performs no operations', async () => {
const result = await initializeProject({
...baseOptions,
git: true,
aliases: true,
dryRun: true
});
expect(result.dryRun).toBe(true);
});
it.each`
git | aliases | description
${true} | ${false} | ${'git-specific behavior'}
${false} | ${false} | ${'no-git behavior'}
${false} | ${true} | ${'alias behavior'}
`('shows $description in dry run', async ({ git, aliases }) => {
const result = await initializeProject({
...baseOptions,
git,
aliases,
dryRun: true
});
expect(result.dryRun).toBe(true);
});
});
describe('Error Handling', () => {
it('handles npm install failures gracefully', async () => {
execSync.mockImplementation((cmd) => {
if (cmd.includes('npm install')) {
throw new Error('npm failed');
}
return '';
});
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: false,
skipInstall: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('handles git failures gracefully', async () => {
mockGitUtils.insideGitWorkTree.mockReturnValue(false);
execSync.mockImplementation((cmd) => {
if (cmd.includes('git init')) {
throw new Error('git failed');
}
return '';
});
await expect(
initializeProject({
...baseOptions,
git: true,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('handles file system errors gracefully', async () => {
mockFs.mkdirSync.mockImplementation(() => {
throw new Error('Permission denied');
});
// Should handle file system errors gracefully
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
});
describe('Non-Interactive Mode', () => {
it('bypasses prompts with yes:true', async () => {
const result = await initializeProject({
...baseOptions,
git: true,
aliases: true,
dryRun: true
});
expect(result).toEqual({ dryRun: true });
});
it('completes without hanging', async () => {
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('handles all flag combinations without hanging', async () => {
const flagCombinations = [
{ git: true, aliases: true },
{ git: true, aliases: false },
{ git: false, aliases: true },
{ git: false, aliases: false },
{} // No flags (uses defaults)
];
for (const flags of flagCombinations) {
await expect(
initializeProject({
...baseOptions,
...flags,
dryRun: true // Use dry run for speed
})
).resolves.not.toThrow();
}
});
it('accepts complete project details', async () => {
await expect(
initializeProject({
name: 'test-project',
description: 'test description',
version: '2.0.0',
author: 'Test User',
git: false,
aliases: false,
dryRun: true
})
).resolves.not.toThrow();
});
it('works with skipInstall option', async () => {
await expect(
initializeProject({
...baseOptions,
skipInstall: true,
git: false,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
});
describe('Function Integration', () => {
it('calls utility functions without errors', async () => {
await initializeProject({
...baseOptions,
git: false,
aliases: false,
dryRun: false
});
// Verify that utility functions were called
expect(mockUtils.isSilentMode).toHaveBeenCalled();
expect(
mockRuleTransformer.convertAllRulesToProfileRules
).toHaveBeenCalled();
});
it('handles template operations gracefully', async () => {
// Make file operations throw errors
mockFs.writeFileSync.mockImplementation(() => {
throw new Error('Write failed');
});
// Should complete despite file operation failures
await expect(
initializeProject({
...baseOptions,
git: false,
aliases: false,
dryRun: false
})
).resolves.not.toThrow();
});
it('validates boolean flag conversion', async () => {
// Test the boolean flag handling specifically
await expect(
initializeProject({
...baseOptions,
git: true, // Should convert to initGit: true
aliases: false, // Should convert to addAliases: false
dryRun: true
})
).resolves.not.toThrow();
await expect(
initializeProject({
...baseOptions,
git: false, // Should convert to initGit: false
aliases: true, // Should convert to addAliases: true
dryRun: true
})
).resolves.not.toThrow();
});
});
});

View File

@@ -1,439 +0,0 @@
/**
* Unit tests for manage-gitignore.js module
* Tests the logic with Jest spies instead of mocked modules
*/
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import os from 'os';
// Import the module under test and its exports
import manageGitignoreFile, {
normalizeLine,
isTaskLine,
buildTaskFilesSection,
TASK_FILES_COMMENT,
TASK_JSON_PATTERN,
TASK_DIR_PATTERN
} from '../../src/utils/manage-gitignore.js';
describe('manage-gitignore.js Unit Tests', () => {
let tempDir;
beforeEach(() => {
jest.clearAllMocks();
// Create a temporary directory for testing
tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'manage-gitignore-test-'));
});
afterEach(() => {
// Clean up the temporary directory
try {
fs.rmSync(tempDir, { recursive: true, force: true });
} catch (err) {
// Ignore cleanup errors
}
});
describe('Constants', () => {
test('should have correct constant values', () => {
expect(TASK_FILES_COMMENT).toBe('# Task files');
expect(TASK_JSON_PATTERN).toBe('tasks.json');
expect(TASK_DIR_PATTERN).toBe('tasks/');
});
});
describe('normalizeLine function', () => {
test('should remove leading/trailing whitespace', () => {
expect(normalizeLine(' test ')).toBe('test');
});
test('should remove comment hash and trim', () => {
expect(normalizeLine('# tasks.json')).toBe('tasks.json');
expect(normalizeLine('#tasks/')).toBe('tasks/');
});
test('should handle empty strings', () => {
expect(normalizeLine('')).toBe('');
expect(normalizeLine(' ')).toBe('');
});
test('should handle lines without comments', () => {
expect(normalizeLine('tasks.json')).toBe('tasks.json');
});
});
describe('isTaskLine function', () => {
test('should identify task.json patterns', () => {
expect(isTaskLine('tasks.json')).toBe(true);
expect(isTaskLine('# tasks.json')).toBe(true);
expect(isTaskLine(' # tasks.json ')).toBe(true);
});
test('should identify tasks/ patterns', () => {
expect(isTaskLine('tasks/')).toBe(true);
expect(isTaskLine('# tasks/')).toBe(true);
expect(isTaskLine(' # tasks/ ')).toBe(true);
});
test('should reject non-task patterns', () => {
expect(isTaskLine('node_modules/')).toBe(false);
expect(isTaskLine('# Some comment')).toBe(false);
expect(isTaskLine('')).toBe(false);
expect(isTaskLine('tasks.txt')).toBe(false);
});
});
describe('buildTaskFilesSection function', () => {
test('should build commented section when storeTasksInGit is true (tasks stored in git)', () => {
const result = buildTaskFilesSection(true);
expect(result).toEqual(['# Task files', '# tasks.json', '# tasks/ ']);
});
test('should build uncommented section when storeTasksInGit is false (tasks ignored)', () => {
const result = buildTaskFilesSection(false);
expect(result).toEqual(['# Task files', 'tasks.json', 'tasks/ ']);
});
});
describe('manageGitignoreFile function - Input Validation', () => {
test('should throw error for invalid targetPath', () => {
expect(() => {
manageGitignoreFile('', 'content', false);
}).toThrow('targetPath must be a non-empty string');
expect(() => {
manageGitignoreFile(null, 'content', false);
}).toThrow('targetPath must be a non-empty string');
expect(() => {
manageGitignoreFile('invalid.txt', 'content', false);
}).toThrow('targetPath must end with .gitignore');
});
test('should throw error for invalid content', () => {
expect(() => {
manageGitignoreFile('.gitignore', '', false);
}).toThrow('content must be a non-empty string');
expect(() => {
manageGitignoreFile('.gitignore', null, false);
}).toThrow('content must be a non-empty string');
});
test('should throw error for invalid storeTasksInGit', () => {
expect(() => {
manageGitignoreFile('.gitignore', 'content', 'not-boolean');
}).toThrow('storeTasksInGit must be a boolean');
});
});
describe('manageGitignoreFile function - File Operations with Spies', () => {
let writeFileSyncSpy;
let readFileSyncSpy;
let existsSyncSpy;
let mockLog;
beforeEach(() => {
// Set up spies
writeFileSyncSpy = jest
.spyOn(fs, 'writeFileSync')
.mockImplementation(() => {});
readFileSyncSpy = jest
.spyOn(fs, 'readFileSync')
.mockImplementation(() => '');
existsSyncSpy = jest
.spyOn(fs, 'existsSync')
.mockImplementation(() => false);
mockLog = jest.fn();
});
afterEach(() => {
// Restore original implementations
writeFileSyncSpy.mockRestore();
readFileSyncSpy.mockRestore();
existsSyncSpy.mockRestore();
});
describe('New File Creation', () => {
const templateContent = `# Logs
logs
*.log
# Task files
tasks.json
tasks/ `;
test('should create new file with commented task lines when storeTasksInGit is true', () => {
existsSyncSpy.mockReturnValue(false); // File doesn't exist
manageGitignoreFile('.gitignore', templateContent, true, mockLog);
expect(writeFileSyncSpy).toHaveBeenCalledWith(
'.gitignore',
`# Logs
logs
*.log
# Task files
# tasks.json
# tasks/ `
);
expect(mockLog).toHaveBeenCalledWith(
'success',
'Created .gitignore with full template'
);
});
test('should create new file with uncommented task lines when storeTasksInGit is false', () => {
existsSyncSpy.mockReturnValue(false); // File doesn't exist
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
expect(writeFileSyncSpy).toHaveBeenCalledWith(
'.gitignore',
`# Logs
logs
*.log
# Task files
tasks.json
tasks/ `
);
expect(mockLog).toHaveBeenCalledWith(
'success',
'Created .gitignore with full template'
);
});
test('should handle write errors gracefully', () => {
existsSyncSpy.mockReturnValue(false);
const writeError = new Error('Permission denied');
writeFileSyncSpy.mockImplementation(() => {
throw writeError;
});
expect(() => {
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
}).toThrow('Permission denied');
expect(mockLog).toHaveBeenCalledWith(
'error',
'Failed to create .gitignore: Permission denied'
);
});
});
describe('File Merging', () => {
const templateContent = `# Logs
logs
*.log
# Dependencies
node_modules/
# Task files
tasks.json
tasks/ `;
test('should merge with existing file and add new content', () => {
const existingContent = `# Old content
old-file.txt
# Task files
# tasks.json
# tasks/`;
existsSyncSpy.mockReturnValue(true); // File exists
readFileSyncSpy.mockReturnValue(existingContent);
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
expect(writeFileSyncSpy).toHaveBeenCalledWith(
'.gitignore',
expect.stringContaining('# Old content')
);
expect(writeFileSyncSpy).toHaveBeenCalledWith(
'.gitignore',
expect.stringContaining('# Logs')
);
expect(writeFileSyncSpy).toHaveBeenCalledWith(
'.gitignore',
expect.stringContaining('# Dependencies')
);
expect(writeFileSyncSpy).toHaveBeenCalledWith(
'.gitignore',
expect.stringContaining('# Task files')
);
});
test('should remove existing task section and replace with new preferences', () => {
const existingContent = `# Existing
existing.txt
# Task files
tasks.json
tasks/
# More content
more.txt`;
existsSyncSpy.mockReturnValue(true);
readFileSyncSpy.mockReturnValue(existingContent);
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
// Should contain existing non-task content
expect(writtenContent).toContain('# Existing');
expect(writtenContent).toContain('existing.txt');
expect(writtenContent).toContain('# More content');
expect(writtenContent).toContain('more.txt');
// Should contain new template content
expect(writtenContent).toContain('# Logs');
expect(writtenContent).toContain('# Dependencies');
// Should have uncommented task lines (storeTasksInGit = false means ignore tasks)
expect(writtenContent).toMatch(
/# Task files\s*[\r\n]+tasks\.json\s*[\r\n]+tasks\/ /
);
});
test('should handle different task preferences correctly', () => {
const existingContent = `# Existing
existing.txt
# Task files
# tasks.json
# tasks/`;
existsSyncSpy.mockReturnValue(true);
readFileSyncSpy.mockReturnValue(existingContent);
// Test with storeTasksInGit = true (commented)
manageGitignoreFile('.gitignore', templateContent, true, mockLog);
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
expect(writtenContent).toMatch(
/# Task files\s*[\r\n]+# tasks\.json\s*[\r\n]+# tasks\/ /
);
});
test('should not duplicate existing template content', () => {
const existingContent = `# Logs
logs
*.log
# Dependencies
node_modules/
# Task files
# tasks.json
# tasks/`;
existsSyncSpy.mockReturnValue(true);
readFileSyncSpy.mockReturnValue(existingContent);
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
// Should not duplicate the logs section
const logsCount = (writtenContent.match(/# Logs/g) || []).length;
expect(logsCount).toBe(1);
// Should not duplicate dependencies
const depsCount = (writtenContent.match(/# Dependencies/g) || [])
.length;
expect(depsCount).toBe(1);
});
test('should handle read errors gracefully', () => {
existsSyncSpy.mockReturnValue(true);
const readError = new Error('File not readable');
readFileSyncSpy.mockImplementation(() => {
throw readError;
});
expect(() => {
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
}).toThrow('File not readable');
expect(mockLog).toHaveBeenCalledWith(
'error',
'Failed to merge content with .gitignore: File not readable'
);
});
test('should handle write errors during merge gracefully', () => {
existsSyncSpy.mockReturnValue(true);
readFileSyncSpy.mockReturnValue('existing content');
const writeError = new Error('Disk full');
writeFileSyncSpy.mockImplementation(() => {
throw writeError;
});
expect(() => {
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
}).toThrow('Disk full');
expect(mockLog).toHaveBeenCalledWith(
'error',
'Failed to merge content with .gitignore: Disk full'
);
});
});
describe('Edge Cases', () => {
test('should work without log function', () => {
existsSyncSpy.mockReturnValue(false);
const templateContent = `# Test
test.txt
# Task files
tasks.json
tasks/`;
expect(() => {
manageGitignoreFile('.gitignore', templateContent, false);
}).not.toThrow();
expect(writeFileSyncSpy).toHaveBeenCalled();
});
test('should handle empty existing file', () => {
existsSyncSpy.mockReturnValue(true);
readFileSyncSpy.mockReturnValue('');
const templateContent = `# Task files
tasks.json
tasks/`;
manageGitignoreFile('.gitignore', templateContent, false, mockLog);
expect(writeFileSyncSpy).toHaveBeenCalled();
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
expect(writtenContent).toContain('# Task files');
});
test('should handle template with only task files', () => {
existsSyncSpy.mockReturnValue(false);
const templateContent = `# Task files
tasks.json
tasks/ `;
manageGitignoreFile('.gitignore', templateContent, true, mockLog);
const writtenContent = writeFileSyncSpy.mock.calls[0][1];
expect(writtenContent).toBe(`# Task files
# tasks.json
# tasks/ `);
});
});
});
});

View File

@@ -1,324 +0,0 @@
/**
* Tests for the expand-all MCP tool
*
* Note: This test does NOT test the actual implementation. It tests that:
* 1. The tool is registered correctly with the correct parameters
* 2. Arguments are passed correctly to expandAllTasksDirect
* 3. Error handling works as expected
*
* We do NOT import the real implementation - everything is mocked
*/
import { jest } from '@jest/globals';
// Mock EVERYTHING
const mockExpandAllTasksDirect = jest.fn();
jest.mock('../../../../mcp-server/src/core/task-master-core.js', () => ({
expandAllTasksDirect: mockExpandAllTasksDirect
}));
const mockHandleApiResult = jest.fn((result) => result);
const mockGetProjectRootFromSession = jest.fn(() => '/mock/project/root');
const mockCreateErrorResponse = jest.fn((msg) => ({
success: false,
error: { code: 'ERROR', message: msg }
}));
const mockWithNormalizedProjectRoot = jest.fn((fn) => fn);
jest.mock('../../../../mcp-server/src/tools/utils.js', () => ({
getProjectRootFromSession: mockGetProjectRootFromSession,
handleApiResult: mockHandleApiResult,
createErrorResponse: mockCreateErrorResponse,
withNormalizedProjectRoot: mockWithNormalizedProjectRoot
}));
// Mock the z object from zod
const mockZod = {
object: jest.fn(() => mockZod),
string: jest.fn(() => mockZod),
number: jest.fn(() => mockZod),
boolean: jest.fn(() => mockZod),
optional: jest.fn(() => mockZod),
describe: jest.fn(() => mockZod),
_def: {
shape: () => ({
num: {},
research: {},
prompt: {},
force: {},
tag: {},
projectRoot: {}
})
}
};
jest.mock('zod', () => ({
z: mockZod
}));
// DO NOT import the real module - create a fake implementation
// This is the fake implementation of registerExpandAllTool
const registerExpandAllTool = (server) => {
// Create simplified version of the tool config
const toolConfig = {
name: 'expand_all',
description: 'Use Taskmaster to expand all eligible pending tasks',
parameters: mockZod,
// Create a simplified mock of the execute function
execute: mockWithNormalizedProjectRoot(async (args, context) => {
const { log, session } = context;
try {
log.info &&
log.info(`Starting expand-all with args: ${JSON.stringify(args)}`);
// Call expandAllTasksDirect
const result = await mockExpandAllTasksDirect(args, log, { session });
// Handle result
return mockHandleApiResult(result, log);
} catch (error) {
log.error && log.error(`Error in expand-all tool: ${error.message}`);
return mockCreateErrorResponse(error.message);
}
})
};
// Register the tool with the server
server.addTool(toolConfig);
};
describe('MCP Tool: expand-all', () => {
// Create mock server
let mockServer;
let executeFunction;
// Create mock logger
const mockLogger = {
debug: jest.fn(),
info: jest.fn(),
warn: jest.fn(),
error: jest.fn()
};
// Test data
const validArgs = {
num: 3,
research: true,
prompt: 'additional context',
force: false,
tag: 'master',
projectRoot: '/test/project'
};
// Standard responses
const successResponse = {
success: true,
data: {
message:
'Expand all operation completed. Expanded: 2, Failed: 0, Skipped: 1',
details: {
expandedCount: 2,
failedCount: 0,
skippedCount: 1,
tasksToExpand: 3,
telemetryData: {
commandName: 'expand-all-tasks',
totalCost: 0.15,
totalTokens: 2500
}
}
}
};
const errorResponse = {
success: false,
error: {
code: 'EXPAND_ALL_ERROR',
message: 'Failed to expand tasks'
}
};
beforeEach(() => {
// Reset all mocks
jest.clearAllMocks();
// Create mock server
mockServer = {
addTool: jest.fn((config) => {
executeFunction = config.execute;
})
};
// Setup default successful response
mockExpandAllTasksDirect.mockResolvedValue(successResponse);
// Register the tool
registerExpandAllTool(mockServer);
});
test('should register the tool correctly', () => {
// Verify tool was registered
expect(mockServer.addTool).toHaveBeenCalledWith(
expect.objectContaining({
name: 'expand_all',
description: expect.stringContaining('expand all eligible pending'),
parameters: expect.any(Object),
execute: expect.any(Function)
})
);
// Verify the tool config was passed
const toolConfig = mockServer.addTool.mock.calls[0][0];
expect(toolConfig).toHaveProperty('parameters');
expect(toolConfig).toHaveProperty('execute');
});
test('should execute the tool with valid parameters', async () => {
// Setup context
const mockContext = {
log: mockLogger,
session: { workingDirectory: '/mock/dir' }
};
// Execute the function
const result = await executeFunction(validArgs, mockContext);
// Verify expandAllTasksDirect was called with correct arguments
expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
validArgs,
mockLogger,
{ session: mockContext.session }
);
// Verify handleApiResult was called
expect(mockHandleApiResult).toHaveBeenCalledWith(
successResponse,
mockLogger
);
expect(result).toEqual(successResponse);
});
test('should handle expand all with no eligible tasks', async () => {
// Arrange
const mockDirectResult = {
success: true,
data: {
message:
'Expand all operation completed. Expanded: 0, Failed: 0, Skipped: 0',
details: {
expandedCount: 0,
failedCount: 0,
skippedCount: 0,
tasksToExpand: 0,
telemetryData: null
}
}
};
mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
mockHandleApiResult.mockReturnValue({
success: true,
data: mockDirectResult.data
});
// Act
const result = await executeFunction(validArgs, {
log: mockLogger,
session: { workingDirectory: '/test' }
});
// Assert
expect(result.success).toBe(true);
expect(result.data.details.expandedCount).toBe(0);
expect(result.data.details.tasksToExpand).toBe(0);
});
test('should handle expand all with mixed success/failure', async () => {
// Arrange
const mockDirectResult = {
success: true,
data: {
message:
'Expand all operation completed. Expanded: 2, Failed: 1, Skipped: 0',
details: {
expandedCount: 2,
failedCount: 1,
skippedCount: 0,
tasksToExpand: 3,
telemetryData: {
commandName: 'expand-all-tasks',
totalCost: 0.1,
totalTokens: 1500
}
}
}
};
mockExpandAllTasksDirect.mockResolvedValue(mockDirectResult);
mockHandleApiResult.mockReturnValue({
success: true,
data: mockDirectResult.data
});
// Act
const result = await executeFunction(validArgs, {
log: mockLogger,
session: { workingDirectory: '/test' }
});
// Assert
expect(result.success).toBe(true);
expect(result.data.details.expandedCount).toBe(2);
expect(result.data.details.failedCount).toBe(1);
});
test('should handle errors from expandAllTasksDirect', async () => {
// Arrange
mockExpandAllTasksDirect.mockRejectedValue(
new Error('Direct function error')
);
// Act
const result = await executeFunction(validArgs, {
log: mockLogger,
session: { workingDirectory: '/test' }
});
// Assert
expect(mockLogger.error).toHaveBeenCalledWith(
expect.stringContaining('Error in expand-all tool')
);
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
'Direct function error'
);
});
test('should handle different argument combinations', async () => {
// Test with minimal args
const minimalArgs = {
projectRoot: '/test/project'
};
// Act
await executeFunction(minimalArgs, {
log: mockLogger,
session: { workingDirectory: '/test' }
});
// Assert
expect(mockExpandAllTasksDirect).toHaveBeenCalledWith(
minimalArgs,
mockLogger,
expect.any(Object)
);
});
test('should use withNormalizedProjectRoot wrapper correctly', () => {
// Verify that the execute function is wrapped with withNormalizedProjectRoot
expect(mockWithNormalizedProjectRoot).toHaveBeenCalledWith(
expect.any(Function)
);
});
});
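
The wrapper itself is mocked in this suite; as a rough sketch of the higher-order shape the final test assumes (hypothetical, not the project's actual implementation), it normalizes `args.projectRoot` before delegating:

import path from 'path';

// Hypothetical sketch only: resolve projectRoot to an absolute path, then delegate.
const withNormalizedProjectRoot = (execute) => async (args, context) => {
	const projectRoot = path.resolve(args.projectRoot ?? process.cwd());
	return execute({ ...args, projectRoot }, context);
};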

View File

@@ -1,502 +0,0 @@
/**
* Tests for the expand-all-tasks.js module
*/
import { jest } from '@jest/globals';
// Mock the dependencies before importing the module under test
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager/expand-task.js',
() => ({
default: jest.fn()
})
);
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
readJSON: jest.fn(),
log: jest.fn(),
isSilentMode: jest.fn(() => false),
findProjectRoot: jest.fn(() => '/test/project'),
aggregateTelemetry: jest.fn()
}));
jest.unstable_mockModule(
'../../../../../scripts/modules/config-manager.js',
() => ({
getDebugFlag: jest.fn(() => false)
})
);
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
startLoadingIndicator: jest.fn(),
stopLoadingIndicator: jest.fn(),
displayAiUsageSummary: jest.fn()
}));
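// Stub styling libraries so colorized/boxed output passes through as plain text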
jest.unstable_mockModule('chalk', () => ({
default: {
white: { bold: jest.fn((text) => text) },
cyan: jest.fn((text) => text),
green: jest.fn((text) => text),
gray: jest.fn((text) => text),
red: jest.fn((text) => text),
bold: jest.fn((text) => text)
}
}));
jest.unstable_mockModule('boxen', () => ({
default: jest.fn((text) => text)
}));
// Import the mocked modules
const { default: expandTask } = await import(
'../../../../../scripts/modules/task-manager/expand-task.js'
);
const { readJSON, aggregateTelemetry, findProjectRoot } = await import(
'../../../../../scripts/modules/utils.js'
);
// Import the module under test
const { default: expandAllTasks } = await import(
'../../../../../scripts/modules/task-manager/expand-all-tasks.js'
);
const mockExpandTask = expandTask;
const mockReadJSON = readJSON;
const mockAggregateTelemetry = aggregateTelemetry;
const mockFindProjectRoot = findProjectRoot;
describe('expandAllTasks', () => {
const mockTasksPath = '/test/tasks.json';
const mockProjectRoot = '/test/project';
const mockSession = { userId: 'test-user' };
const mockMcpLog = {
info: jest.fn(),
warn: jest.fn(),
error: jest.fn(),
debug: jest.fn()
};
const sampleTasksData = {
tag: 'master',
tasks: [
{
id: 1,
title: 'Pending Task 1',
status: 'pending',
subtasks: []
},
{
id: 2,
title: 'In Progress Task',
status: 'in-progress',
subtasks: []
},
{
id: 3,
title: 'Done Task',
status: 'done',
subtasks: []
},
{
id: 4,
title: 'Task with Subtasks',
status: 'pending',
subtasks: [{ id: '4.1', title: 'Existing subtask' }]
}
]
};
beforeEach(() => {
jest.clearAllMocks();
mockReadJSON.mockReturnValue(sampleTasksData);
mockAggregateTelemetry.mockReturnValue({
timestamp: '2024-01-01T00:00:00.000Z',
commandName: 'expand-all-tasks',
totalCost: 0.1,
totalTokens: 2000,
inputTokens: 1200,
outputTokens: 800
});
});
describe('successful expansion', () => {
test('should expand all eligible pending tasks', async () => {
// Arrange
const mockTelemetryData = {
timestamp: '2024-01-01T00:00:00.000Z',
commandName: 'expand-task',
totalCost: 0.05,
totalTokens: 1000
};
mockExpandTask.mockResolvedValue({
telemetryData: mockTelemetryData
});
// Act
const result = await expandAllTasks(
mockTasksPath,
3, // numSubtasks
false, // useResearch
'test context', // additionalContext
false, // force
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot,
tag: 'master'
},
'json' // outputFormat
);
// Assert
expect(result.success).toBe(true);
expect(result.expandedCount).toBe(2); // Tasks 1 and 2 (pending and in-progress)
expect(result.failedCount).toBe(0);
expect(result.skippedCount).toBe(0);
expect(result.tasksToExpand).toBe(2);
expect(result.telemetryData).toBeDefined();
// Verify readJSON was called correctly
expect(mockReadJSON).toHaveBeenCalledWith(
mockTasksPath,
mockProjectRoot,
'master'
);
// Verify expandTask was called for eligible tasks
expect(mockExpandTask).toHaveBeenCalledTimes(2);
expect(mockExpandTask).toHaveBeenCalledWith(
mockTasksPath,
1,
3,
false,
'test context',
expect.objectContaining({
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot,
tag: 'master'
}),
false
);
});
test('should handle force flag to expand tasks with existing subtasks', async () => {
// Arrange
mockExpandTask.mockResolvedValue({
telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
});
// Act
const result = await expandAllTasks(
mockTasksPath,
2,
false,
'',
true, // force = true
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
);
// Assert
expect(result.expandedCount).toBe(3); // Tasks 1, 2, and 4 (including task with existing subtasks)
expect(mockExpandTask).toHaveBeenCalledTimes(3);
});
test('should handle research flag', async () => {
// Arrange
mockExpandTask.mockResolvedValue({
telemetryData: { commandName: 'expand-task', totalCost: 0.08 }
});
// Act
const result = await expandAllTasks(
mockTasksPath,
undefined, // numSubtasks not specified
true, // useResearch = true
'research context',
false,
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
);
// Assert
expect(result.success).toBe(true);
expect(mockExpandTask).toHaveBeenCalledWith(
mockTasksPath,
expect.any(Number),
undefined,
true, // research flag passed correctly
'research context',
expect.any(Object),
false
);
});
test('should return success with message when no tasks are eligible', async () => {
// Arrange - Mock tasks data with no eligible tasks
const noEligibleTasksData = {
tag: 'master',
tasks: [
{ id: 1, status: 'done', subtasks: [] },
{
id: 2,
status: 'pending',
subtasks: [{ id: '2.1', title: 'existing' }]
}
]
};
mockReadJSON.mockReturnValue(noEligibleTasksData);
// Act
const result = await expandAllTasks(
mockTasksPath,
3,
false,
'',
false, // force = false, so task with subtasks won't be expanded
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
);
// Assert
expect(result.success).toBe(true);
expect(result.expandedCount).toBe(0);
expect(result.failedCount).toBe(0);
expect(result.skippedCount).toBe(0);
expect(result.tasksToExpand).toBe(0);
expect(result.message).toBe('No tasks eligible for expansion.');
expect(mockExpandTask).not.toHaveBeenCalled();
});
});
describe('error handling', () => {
test('should handle expandTask failures gracefully', async () => {
// Arrange
mockExpandTask
.mockResolvedValueOnce({ telemetryData: { totalCost: 0.05 } }) // First task succeeds
.mockRejectedValueOnce(new Error('AI service error')); // Second task fails
// Act
const result = await expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
);
// Assert
expect(result.success).toBe(true);
expect(result.expandedCount).toBe(1);
expect(result.failedCount).toBe(1);
});
test('should throw error when tasks.json is invalid', async () => {
// Arrange
mockReadJSON.mockReturnValue(null);
// Act & Assert
await expect(
expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
)
).rejects.toThrow('Invalid tasks data');
});
test('should throw error when project root cannot be determined', async () => {
// Arrange - Mock findProjectRoot to return null for this test
mockFindProjectRoot.mockReturnValueOnce(null);
// Act & Assert
await expect(
expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
session: mockSession,
mcpLog: mockMcpLog
// No projectRoot provided, and findProjectRoot will return null
},
'json'
)
).rejects.toThrow('Could not determine project root directory');
});
});
describe('telemetry aggregation', () => {
test('should aggregate telemetry data from multiple expand operations', async () => {
// Arrange
const telemetryData1 = {
commandName: 'expand-task',
totalCost: 0.03,
totalTokens: 600
};
const telemetryData2 = {
commandName: 'expand-task',
totalCost: 0.04,
totalTokens: 800
};
mockExpandTask
.mockResolvedValueOnce({ telemetryData: telemetryData1 })
.mockResolvedValueOnce({ telemetryData: telemetryData2 });
// Act
const result = await expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
);
// Assert
expect(mockAggregateTelemetry).toHaveBeenCalledWith(
[telemetryData1, telemetryData2],
'expand-all-tasks'
);
expect(result.telemetryData).toBeDefined();
expect(result.telemetryData.commandName).toBe('expand-all-tasks');
});
test('should handle missing telemetry data gracefully', async () => {
// Arrange
mockExpandTask.mockResolvedValue({}); // No telemetryData
// Act
const result = await expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot
},
'json'
);
// Assert
expect(result.success).toBe(true);
expect(mockAggregateTelemetry).toHaveBeenCalledWith(
[],
'expand-all-tasks'
);
});
});
describe('output format handling', () => {
test('should use text output format for CLI calls', async () => {
// Arrange
mockExpandTask.mockResolvedValue({
telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
});
// Act
const result = await expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
projectRoot: mockProjectRoot
// No mcpLog provided, should use CLI logger
},
'text' // CLI output format
);
// Assert
expect(result.success).toBe(true);
// In text mode, loading indicators and console output are used; that is
// hard to assert directly, so we verify the result structure instead
});
test('should handle context tag properly', async () => {
// Arrange
const taggedTasksData = {
...sampleTasksData,
tag: 'feature-branch'
};
mockReadJSON.mockReturnValue(taggedTasksData);
mockExpandTask.mockResolvedValue({
telemetryData: { commandName: 'expand-task', totalCost: 0.05 }
});
// Act
const result = await expandAllTasks(
mockTasksPath,
3,
false,
'',
false,
{
session: mockSession,
mcpLog: mockMcpLog,
projectRoot: mockProjectRoot,
tag: 'feature-branch'
},
'json'
);
// Assert
expect(mockReadJSON).toHaveBeenCalledWith(
mockTasksPath,
mockProjectRoot,
'feature-branch'
);
expect(mockExpandTask).toHaveBeenCalledWith(
mockTasksPath,
expect.any(Number),
3,
false,
'',
expect.objectContaining({
tag: 'feature-branch'
}),
false
);
});
});
});
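
Read together, these tests pin down the positional contract of `expandAllTasks`. A minimal usage sketch with illustrative values only (the import path mirrors the one mocked above):

import expandAllTasks from '../../../../../scripts/modules/task-manager/expand-all-tasks.js';

// Illustrative inputs; any object with info/warn/error/debug works as mcpLog.
const session = { userId: 'test-user' };
const mcpLog = { info() {}, warn() {}, error() {}, debug() {} };

const result = await expandAllTasks(
	'/test/tasks.json', // tasksPath
	3,                  // numSubtasks
	false,              // useResearch
	'',                 // additionalContext
	false,              // force: true also expands tasks that already have subtasks
	{ session, mcpLog, projectRoot: '/test/project', tag: 'master' }, // context
	'json'              // outputFormat: 'json' for MCP callers, 'text' for CLI
);
// result: { success, expandedCount, failedCount, skippedCount,
//           tasksToExpand, telemetryData, message? }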

View File

@@ -1,888 +0,0 @@
/**
* Tests for the expand-task.js module
*/
import { jest } from '@jest/globals';
import fs from 'fs';
// Mock the dependencies before importing the module under test
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
readJSON: jest.fn(),
writeJSON: jest.fn(),
log: jest.fn(),
CONFIG: {
model: 'mock-claude-model',
maxTokens: 4000,
temperature: 0.7,
debug: false
},
sanitizePrompt: jest.fn((prompt) => prompt),
truncate: jest.fn((text) => text),
isSilentMode: jest.fn(() => false),
findTaskById: jest.fn(),
findProjectRoot: jest.fn((tasksPath) => '/mock/project/root'),
getCurrentTag: jest.fn(() => 'master'),
ensureTagMetadata: jest.fn((tagObj) => tagObj),
flattenTasksWithSubtasks: jest.fn((tasks) => {
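// Breadth-first flattening: nested subtasks are hoisted with dotted IDs like '4.1'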
const allTasks = [];
const queue = [...(tasks || [])];
while (queue.length > 0) {
const task = queue.shift();
allTasks.push(task);
if (task.subtasks) {
for (const subtask of task.subtasks) {
queue.push({ ...subtask, id: `${task.id}.${subtask.id}` });
}
}
}
return allTasks;
}),
readComplexityReport: jest.fn(),
markMigrationForNotice: jest.fn(),
performCompleteTagMigration: jest.fn(),
setTasksForTag: jest.fn(),
getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || [])
}));
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
displayBanner: jest.fn(),
getStatusWithColor: jest.fn((status) => status),
startLoadingIndicator: jest.fn(),
stopLoadingIndicator: jest.fn(),
succeedLoadingIndicator: jest.fn(),
failLoadingIndicator: jest.fn(),
warnLoadingIndicator: jest.fn(),
infoLoadingIndicator: jest.fn(),
displayAiUsageSummary: jest.fn(),
displayContextAnalysis: jest.fn()
}));
jest.unstable_mockModule(
'../../../../../scripts/modules/ai-services-unified.js',
() => ({
generateTextService: jest.fn().mockResolvedValue({
mainResult: JSON.stringify({
subtasks: [
{
id: 1,
title: 'Set up project structure',
description:
'Create the basic project directory structure and configuration files',
dependencies: [],
details:
'Initialize package.json, create src/ and test/ directories, set up linting configuration',
status: 'pending',
testStrategy:
'Verify all expected files and directories are created'
},
{
id: 2,
title: 'Implement core functionality',
description: 'Develop the main application logic and core features',
dependencies: [1],
details:
'Create main classes, implement business logic, set up data models',
status: 'pending',
testStrategy: 'Unit tests for all core functions and classes'
},
{
id: 3,
title: 'Add user interface',
description: 'Create the user interface components and layouts',
dependencies: [2],
details:
'Design UI components, implement responsive layouts, add user interactions',
status: 'pending',
testStrategy: 'UI tests and visual regression testing'
}
]
}),
telemetryData: {
timestamp: new Date().toISOString(),
userId: '1234567890',
commandName: 'expand-task',
modelUsed: 'claude-3-5-sonnet',
providerName: 'anthropic',
inputTokens: 1000,
outputTokens: 500,
totalTokens: 1500,
totalCost: 0.012414,
currency: 'USD'
}
})
})
);
jest.unstable_mockModule(
'../../../../../scripts/modules/config-manager.js',
() => ({
getDefaultSubtasks: jest.fn(() => 3),
getDebugFlag: jest.fn(() => false)
})
);
jest.unstable_mockModule(
'../../../../../scripts/modules/utils/contextGatherer.js',
() => ({
ContextGatherer: jest.fn().mockImplementation(() => ({
gather: jest.fn().mockResolvedValue({
contextSummary: 'Mock context summary',
allRelatedTaskIds: [],
graphVisualization: 'Mock graph'
})
}))
})
);
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager/generate-task-files.js',
() => ({
default: jest.fn().mockResolvedValue()
})
);
// Mock external UI libraries
jest.unstable_mockModule('chalk', () => ({
default: {
white: { bold: jest.fn((text) => text) },
cyan: Object.assign(
jest.fn((text) => text),
{
bold: jest.fn((text) => text)
}
),
green: jest.fn((text) => text),
yellow: jest.fn((text) => text),
bold: jest.fn((text) => text)
}
}));
jest.unstable_mockModule('boxen', () => ({
default: jest.fn((text) => text)
}));
jest.unstable_mockModule('cli-table3', () => ({
default: jest.fn().mockImplementation(() => ({
push: jest.fn(),
toString: jest.fn(() => 'mocked table')
}))
}));
// Mock process.exit to prevent Jest worker crashes
const mockExit = jest.spyOn(process, 'exit').mockImplementation((code) => {
throw new Error(`process.exit called with "${code}"`);
});
// Import the mocked modules
const {
readJSON,
writeJSON,
log,
findTaskById,
ensureTagMetadata,
readComplexityReport,
findProjectRoot
} = await import('../../../../../scripts/modules/utils.js');
const { generateTextService } = await import(
'../../../../../scripts/modules/ai-services-unified.js'
);
const generateTaskFiles = (
await import(
'../../../../../scripts/modules/task-manager/generate-task-files.js'
)
).default;
// Import the module under test
const { default: expandTask } = await import(
'../../../../../scripts/modules/task-manager/expand-task.js'
);
describe('expandTask', () => {
const sampleTasks = {
master: {
tasks: [
{
id: 1,
title: 'Task 1',
description: 'First task',
status: 'done',
dependencies: [],
details: 'Already completed task',
subtasks: []
},
{
id: 2,
title: 'Task 2',
description: 'Second task',
status: 'pending',
dependencies: [],
details: 'Task ready for expansion',
subtasks: []
},
{
id: 3,
title: 'Complex Task',
description: 'A complex task that needs breakdown',
status: 'pending',
dependencies: [1],
details: 'This task involves multiple steps',
subtasks: []
},
{
id: 4,
title: 'Task with existing subtasks',
description: 'Task that already has subtasks',
status: 'pending',
dependencies: [],
details: 'Has existing subtasks',
subtasks: [
{
id: 1,
title: 'Existing subtask',
description: 'Already exists',
status: 'pending',
dependencies: []
}
]
}
]
},
'feature-branch': {
tasks: [
{
id: 1,
title: 'Feature Task 1',
description: 'Task in feature branch',
status: 'pending',
dependencies: [],
details: 'Feature-specific task',
subtasks: []
}
]
}
};
// Helper to create a consistent mcpLog mock
const createMcpLogMock = () => ({
info: jest.fn(),
warn: jest.fn(),
error: jest.fn(),
debug: jest.fn(),
success: jest.fn()
});
beforeEach(() => {
jest.clearAllMocks();
mockExit.mockClear();
// Default readJSON implementation - returns tagged structure
readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
const selectedTag = tag || 'master';
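// Return the resolved tag's tasks at the top level plus the full tagged
// structure under _rawTaggedData, matching the shape these tests assert on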
return {
...sampleTasksCopy[selectedTag],
tag: selectedTag,
_rawTaggedData: sampleTasksCopy
};
});
// Default findTaskById implementation
findTaskById.mockImplementation((tasks, taskId) => {
const id = parseInt(taskId, 10);
return tasks.find((t) => t.id === id);
});
// Default complexity report (no report available)
readComplexityReport.mockReturnValue(null);
// Mock findProjectRoot to return consistent path for complexity report
findProjectRoot.mockReturnValue('/mock/project/root');
writeJSON.mockResolvedValue();
generateTaskFiles.mockResolvedValue();
log.mockImplementation(() => {});
// Mock console.log to avoid output during tests
jest.spyOn(console, 'log').mockImplementation(() => {});
});
afterEach(() => {
console.log.mockRestore();
});
describe('Basic Functionality', () => {
test('should expand a task with AI-generated subtasks', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const numSubtasks = 3;
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
const result = await expandTask(
tasksPath,
taskId,
numSubtasks,
false,
'',
context,
false
);
// Assert
expect(readJSON).toHaveBeenCalledWith(
tasksPath,
'/mock/project/root',
undefined
);
expect(generateTextService).toHaveBeenCalledWith(expect.any(Object));
expect(writeJSON).toHaveBeenCalledWith(
tasksPath,
expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 2,
subtasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Set up project structure',
status: 'pending'
}),
expect.objectContaining({
id: 2,
title: 'Implement core functionality',
status: 'pending'
}),
expect.objectContaining({
id: 3,
title: 'Add user interface',
status: 'pending'
})
])
})
]),
tag: 'master',
_rawTaggedData: expect.objectContaining({
master: expect.objectContaining({
tasks: expect.any(Array)
})
})
}),
'/mock/project/root',
undefined
);
expect(result).toEqual(
expect.objectContaining({
task: expect.objectContaining({
id: 2,
subtasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Set up project structure',
status: 'pending'
}),
expect.objectContaining({
id: 2,
title: 'Implement core functionality',
status: 'pending'
}),
expect.objectContaining({
id: 3,
title: 'Add user interface',
status: 'pending'
})
])
}),
telemetryData: expect.any(Object)
})
);
});
test('should handle research flag correctly', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const numSubtasks = 3;
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
await expandTask(
tasksPath,
taskId,
numSubtasks,
true, // useResearch = true
'Additional context for research',
context,
false
);
// Assert
expect(generateTextService).toHaveBeenCalledWith(
expect.objectContaining({
role: 'research',
commandName: expect.any(String)
})
);
});
test('should handle complexity report integration without errors', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act & Assert - Should complete without errors
const result = await expandTask(
tasksPath,
taskId,
undefined, // numSubtasks not specified
false,
'',
context,
false
);
// Assert - Should successfully expand and return expected structure
expect(result).toEqual(
expect.objectContaining({
task: expect.objectContaining({
id: 2,
subtasks: expect.any(Array)
}),
telemetryData: expect.any(Object)
})
);
expect(generateTextService).toHaveBeenCalled();
});
});
describe('Tag Handling (The Critical Bug Fix)', () => {
test('should preserve tagged structure when expanding with default tag', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root',
tag: 'master' // Explicit tag context
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - CRITICAL: Check tag is passed to readJSON and writeJSON
expect(readJSON).toHaveBeenCalledWith(
tasksPath,
'/mock/project/root',
'master'
);
expect(writeJSON).toHaveBeenCalledWith(
tasksPath,
expect.objectContaining({
tag: 'master',
_rawTaggedData: expect.objectContaining({
master: expect.any(Object),
'feature-branch': expect.any(Object)
})
}),
'/mock/project/root',
'master' // CRITICAL: Tag must be passed to writeJSON
);
});
test('should preserve tagged structure when expanding with non-default tag', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '1'; // Task in feature-branch
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root',
tag: 'feature-branch' // Different tag context
};
// Configure readJSON to return feature-branch data
readJSON.mockImplementation((tasksPath, projectRoot, tag) => {
const sampleTasksCopy = JSON.parse(JSON.stringify(sampleTasks));
return {
...sampleTasksCopy['feature-branch'],
tag: 'feature-branch',
_rawTaggedData: sampleTasksCopy
};
});
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - CRITICAL: Check tag preservation for non-default tag
expect(readJSON).toHaveBeenCalledWith(
tasksPath,
'/mock/project/root',
'feature-branch'
);
expect(writeJSON).toHaveBeenCalledWith(
tasksPath,
expect.objectContaining({
tag: 'feature-branch',
_rawTaggedData: expect.objectContaining({
master: expect.any(Object),
'feature-branch': expect.any(Object)
})
}),
'/mock/project/root',
'feature-branch' // CRITICAL: Correct tag passed to writeJSON
);
});
test('should NOT corrupt tagged structure when tag is undefined', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
// No tag specified - should default gracefully
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - Should still preserve structure with undefined tag
expect(readJSON).toHaveBeenCalledWith(
tasksPath,
'/mock/project/root',
undefined
);
expect(writeJSON).toHaveBeenCalledWith(
tasksPath,
expect.objectContaining({
_rawTaggedData: expect.objectContaining({
master: expect.any(Object)
})
}),
'/mock/project/root',
undefined
);
// CRITICAL: Verify structure is NOT flattened to old format
const writeCallArgs = writeJSON.mock.calls[0][1];
expect(writeCallArgs).toHaveProperty('tasks'); // Should have tasks property from readJSON mock
expect(writeCallArgs).toHaveProperty('_rawTaggedData'); // Should preserve tagged structure
});
});
describe('Force Flag Handling', () => {
test('should replace existing subtasks when force=true', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '4'; // Task with existing subtasks
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, true);
// Assert - Should replace existing subtasks
expect(writeJSON).toHaveBeenCalledWith(
tasksPath,
expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 4,
subtasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Set up project structure'
})
])
})
])
}),
'/mock/project/root',
undefined
);
});
test('should append to existing subtasks when force=false', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '4'; // Task with existing subtasks
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - Should append to existing subtasks with proper ID increments
expect(writeJSON).toHaveBeenCalledWith(
tasksPath,
expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 4,
subtasks: expect.arrayContaining([
// Should contain both existing and new subtasks
expect.any(Object),
expect.any(Object),
expect.any(Object),
expect.any(Object) // 1 existing + 3 new = 4 total
])
})
])
}),
'/mock/project/root',
undefined
);
});
});
describe('Error Handling', () => {
test('should handle non-existent task ID', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '999'; // Non-existent task
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
findTaskById.mockReturnValue(null);
// Act & Assert
await expect(
expandTask(tasksPath, taskId, 3, false, '', context, false)
).rejects.toThrow('Task 999 not found');
expect(writeJSON).not.toHaveBeenCalled();
});
test('should expand tasks regardless of status (including done tasks)', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '1'; // Task with 'done' status
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
const result = await expandTask(
tasksPath,
taskId,
3,
false,
'',
context,
false
);
// Assert - Should successfully expand even 'done' tasks
expect(writeJSON).toHaveBeenCalled();
expect(result).toEqual(
expect.objectContaining({
task: expect.objectContaining({
id: 1,
status: 'done', // Status unchanged
subtasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Set up project structure',
status: 'pending'
})
])
}),
telemetryData: expect.any(Object)
})
);
});
test('should handle AI service failures', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
generateTextService.mockRejectedValueOnce(new Error('AI service error'));
// Act & Assert
await expect(
expandTask(tasksPath, taskId, 3, false, '', context, false)
).rejects.toThrow('AI service error');
expect(writeJSON).not.toHaveBeenCalled();
});
test('should handle file read errors', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
readJSON.mockImplementation(() => {
throw new Error('File read failed');
});
// Act & Assert
await expect(
expandTask(tasksPath, taskId, 3, false, '', context, false)
).rejects.toThrow('File read failed');
expect(writeJSON).not.toHaveBeenCalled();
});
test('should handle invalid tasks data', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
readJSON.mockReturnValue(null);
// Act & Assert
await expect(
expandTask(tasksPath, taskId, 3, false, '', context, false)
).rejects.toThrow();
});
});
describe('Output Format Handling', () => {
test('should display telemetry for CLI output format', async () => {
// Arrange
const { displayAiUsageSummary } = await import(
'../../../../../scripts/modules/ui.js'
);
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
projectRoot: '/mock/project/root'
// No mcpLog - should trigger CLI mode
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - Should display telemetry for CLI users
expect(displayAiUsageSummary).toHaveBeenCalledWith(
expect.objectContaining({
commandName: 'expand-task',
modelUsed: 'claude-3-5-sonnet',
totalCost: 0.012414
}),
'cli'
);
});
test('should not display telemetry for MCP output format', async () => {
// Arrange
const { displayAiUsageSummary } = await import(
'../../../../../scripts/modules/ui.js'
);
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - Should NOT display telemetry for MCP (handled at higher level)
expect(displayAiUsageSummary).not.toHaveBeenCalled();
});
});
describe('Edge Cases', () => {
test('should handle empty additional context', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - Should work with empty context (but may include project context)
expect(generateTextService).toHaveBeenCalledWith(
expect.objectContaining({
prompt: expect.stringMatching(/.*/) // Only asserts that a prompt string was provided
})
);
});
test('should handle additional context correctly', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const additionalContext = 'Use React hooks and TypeScript';
const context = {
mcpLog: createMcpLogMock(),
projectRoot: '/mock/project/root'
};
// Act
await expandTask(
tasksPath,
taskId,
3,
false,
additionalContext,
context,
false
);
// Assert - Should include additional context in prompt
expect(generateTextService).toHaveBeenCalledWith(
expect.objectContaining({
prompt: expect.stringContaining('Use React hooks and TypeScript')
})
);
});
test('should handle missing project root in context', async () => {
// Arrange
const tasksPath = 'tasks/tasks.json';
const taskId = '2';
const context = {
mcpLog: createMcpLogMock()
// No projectRoot in context
};
// Act
await expandTask(tasksPath, taskId, 3, false, '', context, false);
// Assert - Should derive project root from tasksPath
expect(findProjectRoot).toHaveBeenCalledWith(tasksPath);
expect(readJSON).toHaveBeenCalledWith(
tasksPath,
'/mock/project/root',
undefined
);
});
});
});
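
For quick reference, the call shape these tests exercise, as a sketch with illustrative values:

import expandTask from '../../../../../scripts/modules/task-manager/expand-task.js';

const context = {
	mcpLog: { info() {}, warn() {}, error() {}, debug() {}, success() {} },
	projectRoot: '/mock/project/root',
	tag: 'master' // optional per the tag-handling tests above
};

const { task, telemetryData } = await expandTask(
	'tasks/tasks.json', // tasksPath
	'2',                // taskId
	3,                  // numSubtasks; undefined defers to defaults/complexity report
	false,              // useResearch: true routes the AI call through the research role
	'',                 // additionalContext appended to the prompt
	context,            // omit mcpLog to get CLI output and telemetry display
	false               // force: true replaces existing subtasks, false appends
);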

View File

@@ -123,9 +123,7 @@ describe('updateTasks', () => {
 details: 'New details 2 based on direction',
 description: 'Updated description',
 dependencies: [],
-priority: 'medium',
-testStrategy: 'Unit test the updated functionality',
-subtasks: []
+priority: 'medium'
 },
 {
 id: 3,
@@ -134,9 +132,7 @@ describe('updateTasks', () => {
 details: 'New details 3 based on direction',
 description: 'Updated description',
 dependencies: [],
-priority: 'medium',
-testStrategy: 'Integration test the updated features',
-subtasks: []
+priority: 'medium'
 }
 ];