Compare commits

17 Commits

Author SHA1 Message Date
Ralph Khreish
e77047d08f feat: add gpt-oss models to ollama 2025-08-11 22:33:04 +02:00
Ralph Khreish
311b2433e2 fix: remove claude code clear tm commands (#1123) 2025-08-11 18:59:35 +02:00
Parthy
04e11b5e82 feat: implement cross-tag task movement functionality (#1088)
* feat: enhance move command with cross-tag functionality

- Updated the `move` command to allow moving tasks between different tags, including options for handling dependencies.
- Added new options: `--from-tag`, `--to-tag`, `--with-dependencies`, `--ignore-dependencies`, and `--force`.
- Implemented validation for cross-tag moves and dependency checks.
- Introduced helper functions in the dependency manager for validating and resolving cross-tag dependencies.
- Added integration and unit tests to cover new functionality and edge cases.

* fix: refactor cross-tag move logic and enhance validation

- Moved the import of `moveTasksBetweenTags` to the correct location in `commands.js` for better clarity.
- Added new helper functions in `dependency-manager.js` to improve validation and error handling for cross-tag moves.
- Enhanced existing functions to ensure proper handling of task dependencies and conflicts.
- Updated tests to cover new validation scenarios and ensure robust error messaging for invalid task IDs and tags.

* fix: improve task ID handling and error messaging in cross-tag moves

- Refactored `moveTasksBetweenTags` to normalize task IDs for comparison, ensuring consistent handling of string and numeric IDs.
- Enhanced error messages for cases where source and target tags are the same but no destination is specified.
- Updated tests to validate new behavior, including handling string dependencies correctly during cross-tag moves.
- Cleaned up existing code for better readability and maintainability.
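
Below is a minimal sketch of the kind of normalization this describes, assuming IDs may arrive as either numbers or strings; the helper names are illustrative, not the actual ones in `moveTasksBetweenTags`:

```javascript
// Hypothetical helper: coerce IDs to strings so 5 and "5" compare as equal.
function normalizeId(id) {
  return String(id).trim();
}

// Example: does a dependency point at one of the tasks being moved?
function dependsOnMovedTask(dependencyId, movedIds) {
  const moved = new Set(movedIds.map(normalizeId));
  return moved.has(normalizeId(dependencyId));
}

console.log(dependsOnMovedTask('5', [5, 6, 7])); // true
console.log(dependsOnMovedTask(8, ['5', '6'])); // false
```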

* test: add comprehensive tests for cross-tag move and dependency validation

- Introduced new test files for `move-cross-tag` and `cross-tag-dependencies` to cover various scenarios in cross-tag task movement.
- Implemented tests for handling task movement with and without dependencies, including edge cases for error handling.
- Enhanced existing tests in `fix-dependencies-command` and `move-task` to ensure robust validation of task IDs and dependencies.
- Mocked necessary modules and functions to isolate tests and improve reliability.
- Ensured coverage for both successful and failed cross-tag move operations, validating expected outcomes and error messages.

* test: refactor cross-tag move tests for better clarity and reusability

- Introduced a helper function `simulateCrossTagMove` to streamline cross-tag move test cases, reducing redundancy and improving readability.
- Updated existing tests to utilize the new helper function, ensuring consistent handling of expected messages and options.
- Enhanced test coverage for various scenarios, including handling of dependencies and flags.

* feat: add cross-tag task movement functionality

- Introduced new commands for moving tasks between different tags, enhancing project organization capabilities.
- Updated README with usage examples for cross-tag movement, including options for handling dependencies.
- Created comprehensive documentation for cross-tag task movement, detailing usage, error handling, and best practices.
- Implemented core logic for cross-tag moves, including validation for dependencies and error handling.
- Added integration and unit tests to ensure robust functionality and coverage for various scenarios, including edge cases.

* fix: enhance error handling and logging in cross-tag task movement

- Improved logging in `moveTaskCrossTagDirect` to include detailed arguments for better traceability.
- Refactored error handling to utilize structured error objects, providing clearer suggestions for resolving cross-tag dependency conflicts and subtask movement restrictions.
- Updated documentation to reflect changes in error handling and provide clearer guidance on task movement options.
- Added integration tests for cross-tag movement scenarios, ensuring robust validation of error handling and task movement logic.
- Cleaned up existing tests for clarity and reusability, enhancing overall test coverage.
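
As a rough illustration of what such a structured error object might look like (the class name, error code, and suggestion text below are assumptions, not the project's actual implementation):

```javascript
// Hypothetical structured error for a cross-tag dependency conflict.
class CrossTagDependencyError extends Error {
  constructor(taskId, conflictingIds) {
    super(`Task ${taskId} has dependencies in another tag: ${conflictingIds.join(', ')}`);
    this.code = 'CROSS_TAG_DEPENDENCY_CONFLICT';
    this.suggestions = [
      'Re-run with --with-dependencies to move the dependent tasks too',
      'Re-run with --ignore-dependencies to break the cross-tag links'
    ];
  }
}

try {
  throw new CrossTagDependencyError(5, [2, 3]);
} catch (error) {
  console.error(error.message);
  for (const s of error.suggestions) console.error(`  hint: ${s}`);
}
```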

* feat: enhance dependency resolution and error handling in task movement

- Added recursive dependency resolution for tasks in `moveTasksBetweenTags`, improving handling of complex task relationships.
- Introduced helper functions to find all dependencies and reverse dependencies, ensuring comprehensive coverage during task moves.
- Enhanced error messages in `validateSubtaskMove` and `displaySubtaskMoveError` for better clarity on movement restrictions.
- Updated tests to cover new functionality, including integration tests for complex cross-tag movement scenarios and edge cases.
- Refactored existing code for improved readability and maintainability, ensuring consistent handling of task IDs and dependencies.

* feat: unify dependency traversal and enhance task management utilities

- Introduced `traverseDependencies` utility for unified forward and reverse dependency traversal, improving code reusability and clarity.
- Refactored `findAllDependenciesRecursively` to leverage the new utility, streamlining dependency resolution in task management.
- Added `formatTaskIdForDisplay` helper for better task ID formatting in UI, enhancing user experience during error displays.
- Updated tests to cover new utility functions and ensure robust validation of dependency handling across various scenarios.
- Improved overall code organization and readability, ensuring consistent handling of task dependencies and IDs.
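
A sketch of what a unified traversal utility along these lines could look like; the real `traverseDependencies` signature and option names may differ:

```javascript
// Hypothetical sketch: collect all dependency IDs reachable from sourceTasks.
// direction 'forward' follows task.dependencies; 'reverse' finds tasks that
// depend on the sources.
function traverseDependencies(sourceTasks, allTasks, { direction = 'forward' } = {}) {
  const visited = new Set();
  const queue = sourceTasks.map((t) => String(t.id));
  const result = new Set();

  while (queue.length > 0) {
    const currentId = queue.shift();
    if (visited.has(currentId)) continue;
    visited.add(currentId);

    if (direction === 'forward') {
      const task = allTasks.find((t) => String(t.id) === currentId);
      for (const dep of task?.dependencies ?? []) {
        result.add(String(dep));
        queue.push(String(dep));
      }
    } else {
      // reverse: every task whose dependencies include currentId
      for (const task of allTasks) {
        if ((task.dependencies ?? []).map(String).includes(currentId)) {
          result.add(String(task.id));
          queue.push(String(task.id));
        }
      }
    }
  }
  return [...result];
}
```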

* fix: improve validation for dependency parameters in `findAllDependenciesRecursively`

- Added checks to ensure `sourceTasks` and `allTasks` are arrays, throwing errors if not, to prevent runtime issues.
- Updated documentation comment for clarity on the function's purpose and parameters.

* fix: remove `force` option from task movement parameters

- Eliminated the `force` parameter from the `moveTaskCrossTagDirect` function and related tools, simplifying the task movement logic.
- Updated documentation and tests to reflect the removal of the `force` option, ensuring clarity and consistency across the codebase.
- Adjusted related functions and tests to focus on `ignoreDependencies` as the primary control for handling dependency conflicts during task moves.

* Add cross-tag task movement functionality

- Introduced functionality for organizing tasks across different contexts by enabling cross-tag movement.
- Added `formatTaskIdForDisplay` helper to improve task ID formatting in UI error messages.
- Updated relevant tests to incorporate new functionality and ensure accurate error displays during task movements.

* Update scripts/modules/dependency-manager.js

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* refactor(dependency-manager): Fix subtask resolution and extract helper functions

1. Fix subtask finding logic (lines 1315-1330):
   - Correctly locate parent task by numeric ID
   - Search within parent's subtasks array instead of top-level tasks
   - Properly handle relative subtask references

2. Extract helper functions from getDependentTaskIds (lines 1440-1636):
   - Move findTasksThatDependOn as module-level function
   - Move taskDependsOnSource as module-level function
   - Move subtasksDependOnSource as module-level function
   - Improves readability, maintainability, and testability

Both fixes address architectural issues and improve code organization.
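
A simplified sketch of the corrected subtask lookup described in point 1, with hypothetical names and a dotted ID convention like `"5.2"` assumed:

```javascript
// Hypothetical sketch: resolve a dotted subtask reference such as "5.2" by
// first locating the parent task by its numeric ID, then searching the
// parent's subtasks array (not the top-level task list).
function findDependencyTask(dependencyId, allTasks) {
  const [parentPart, subtaskPart] = String(dependencyId).split('.');
  const parent = allTasks.find((t) => t.id === Number(parentPart));
  if (!parent) return null;
  if (subtaskPart === undefined) return parent;
  return (parent.subtasks ?? []).find((st) => String(st.id) === subtaskPart) ?? null;
}
```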

* refactor(dependency-manager): Enhance subtask resolution and dependency validation

- Improved subtask resolution logic to correctly find parent tasks and their subtasks, ensuring accurate identification of dependencies.
- Filtered out null/undefined dependencies before processing, enhancing robustness in dependency checks.
- Updated comments for clarity on the logic flow and purpose of changes, improving code maintainability.

* refactor(move-task): clarify destination ID description and improve skipped task handling

- Updated the description for the destination ID to clarify its usage in cross-tag moves.
- Simplified the handling of skipped tasks during multiple task movements, improving readability and logging.
- Enhanced the API result response to include detailed information about moved and skipped tasks, ensuring better feedback for users.

* refactor(commands): remove redundant tag validation logic

- Eliminated the check for identical source and target tags in the task movement logic, simplifying the code.
- This change streamlines the flow for within-tag moves, enhancing readability and maintainability.

* refactor(commands): enhance move command logic and error handling

- Introduced helper functions for better organization of cross-tag and within-tag move logic, improving code readability and maintainability.
- Enhanced error handling with structured error objects, providing clearer feedback for dependency conflicts and invalid tag combinations.
- Updated move command help output to include best practices and error resolution tips, ensuring users have comprehensive guidance during task movements.
- Streamlined task movement logic to handle multiple tasks more effectively, including detailed logging of successful and failed moves.

* test(dependency-manager): add subtasks to task structure and mock dependency traversal

- Updated `circular-dependencies.test.js` to include subtasks in task definitions, enhancing test coverage for task structures with nested dependencies.
- Mocked `traverseDependencies` in `fix-dependencies-command.test.js` to ensure consistent behavior during tests, improving reliability of dependency-related tests.

* refactor(dependency-manager): extract subtask finding logic into helper function

- Added `findSubtaskInParent` function to encapsulate subtask resolution within a parent task's subtasks array, improving code organization and readability.
- Updated `findDependencyTask` to utilize the new helper function, streamlining the logic for finding subtasks and enhancing maintainability.
- Enhanced comments for clarity on the purpose and functionality of the new subtask finding logic.

* refactor(ui): enhance subtask ID validation and improve error handling

- Added validation for subtask ID format in `formatDependenciesWithStatus` and `taskExists`, ensuring proper handling of invalid formats.
- Updated error logging in `displaySubtaskMoveError` to provide warnings for unexpected task ID formats, improving user feedback.
- Converted hints to a Set in `displayDependencyValidationHints` to ensure unique hints are displayed, enhancing clarity in the UI.
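
For illustration, a tiny sketch of the two UI-side changes mentioned above (the function names and hint text are assumptions):

```javascript
// Hypothetical: validate a dotted subtask ID like "5.2" before formatting it.
function isValidSubtaskIdFormat(id) {
  return /^\d+\.\d+$/.test(String(id));
}

// Hypothetical: de-duplicate hints with a Set before rendering them.
function displayDependencyValidationHints(hints) {
  for (const hint of new Set(hints)) {
    console.log(`  • ${hint}`);
  }
}

displayDependencyValidationHints([
  'Use --with-dependencies to move dependent tasks together',
  'Use --with-dependencies to move dependent tasks together',
  'Use --ignore-dependencies to break cross-tag links'
]);
// Each hint prints once.
```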

* test(cli): remove redundant timing check in complex cross-tag scenarios

- Eliminated the timing check for task completion within 5 seconds in `complex-cross-tag-scenarios.test.js`, streamlining the test logic.
- This change focuses on verifying task success without unnecessary timing constraints, enhancing test clarity and maintainability.

* test(integration): enhance task movement tests with mock file system

- Added integration tests for moving tasks within the same tag and between different tags using the actual `moveTask` and `moveTasksBetweenTags` functions.
- Implemented `mock-fs` to simulate file system interactions, improving test isolation and reliability.
- Verified task movement success and ensured proper handling of subtasks and dependencies, enhancing overall test coverage for task management functionality.
- Included error handling tests for missing tags and task IDs to ensure robustness in task movement operations.

* test(unit): add comprehensive tests for moveTaskCrossTagDirect functionality

- Introduced new test cases to verify mock functionality, ensuring that mocks for `findTasksPath` and `readJSON` are working as expected.
- Added tests for parameter validation, error handling, and function call flow, including scenarios for missing project roots and identical source/target tags.
- Enhanced coverage for ID parsing and move options, ensuring robust handling of various input conditions and improving overall test reliability.

* test(integration): skip tests for dependency conflict handling and withDependencies option

- Marked tests for handling dependency conflicts and the withDependencies option as skipped due to issues with the mock setup.
- Added TODOs to address the mock-fs setup for complex dependency scenarios, ensuring future improvements in test reliability.

* test(unit): expand cross-tag move command tests with comprehensive mocks

- Added extensive mocks for various modules to enhance the testing of the cross-tag move functionality in `move-cross-tag.test.js`.
- Implemented detailed test cases for handling cross-tag moves, including validation for missing parameters and identical source/target tags.
- Improved error handling tests to ensure robust feedback for invalid operations, enhancing overall test reliability and coverage.

* test(integration): add complex dependency scenarios to task movement tests

- Introduced new integration tests for handling complex dependency scenarios in task movement, utilizing the actual `moveTasksBetweenTags` function.
- Added tests for circular dependencies, nested dependency chains, and cross-tag dependency resolution, enhancing coverage and reliability.
- Documented limitations of the mock-fs setup for complex scenarios and provided warnings in the test output to guide future improvements.
- Skipped tests for dependency conflicts and the withDependencies option due to mock setup issues, with TODOs for resolution.

* test(unit): refactor move-cross-tag tests with focused mock system

- Simplified mocking in `move-cross-tag.test.js` by implementing a configuration-driven mock system, reducing the number of mocked modules from 20+ to 5 core functionalities.
- Introduced a reusable mock factory to streamline the creation of mocks based on configuration, enhancing maintainability and clarity.
- Added documentation for the new mock system, detailing usage examples and benefits, including reduced complexity and improved test focus.
- Implemented tests to validate the mock configuration, ensuring flexibility in enabling/disabling specific mocks.
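
A sketch of what a configuration-driven Jest mock factory along these lines might look like (the mock names and defaults here are illustrative, not the test file's actual configuration):

```javascript
// Hypothetical configuration-driven mock factory for Jest (ESM).
import { jest } from '@jest/globals';

function createMocks(config = {}) {
  const defaults = {
    findTasksPath: true,
    readJSON: true,
    writeJSON: true,
    moveTasksBetweenTags: true,
    generateTaskFiles: false
  };
  const enabled = { ...defaults, ...config };
  const mocks = {};
  for (const [name, isEnabled] of Object.entries(enabled)) {
    // Only create jest.fn() stubs for the functionality a test opts into.
    if (isEnabled) mocks[name] = jest.fn();
  }
  return mocks;
}

// Usage in a test: enable only what this test needs.
const mocks = createMocks({ generateTaskFiles: true });
mocks.readJSON.mockReturnValue({ backlog: { tasks: [] } });
```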

* test(unit): clean up mocks and improve isEmpty function in fix-dependencies-command tests

- Removed the mock for `traverseDependencies` as it was unnecessary, simplifying the test setup.
- Updated the `isEmpty` function to clarify its behavior regarding null and undefined values, enhancing code readability and maintainability.

* test(unit): update traverseDependencies mock for consistency across tests

- Standardized the mock implementation of `traverseDependencies` in both `fix-dependencies-command.test.js` and `complexity-report-tag-isolation.test.js` to accept `sourceTasks`, `allTasks`, and `options` parameters, ensuring uniformity in test setups.
- This change enhances clarity and maintainability of the tests by aligning the mock behavior across different test files.

* fix(core): improve task movement error handling and ID normalization

- Wrapped task movement logic in a try-finally block to ensure console output is restored even on errors, enhancing reliability.
- Normalized source IDs to handle mixed string/number comparisons, preventing potential issues in dependency checks.
- Added tests for ID type consistency to verify that the normalization fix works correctly across various scenarios, improving test coverage and robustness.
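
A minimal sketch of the try/finally pattern referenced above, assuming the move code temporarily silences console output while it runs:

```javascript
// Hypothetical sketch: ensure console.log is restored even if the move throws.
async function runMoveSilently(moveFn) {
  const originalLog = console.log;
  console.log = () => {}; // suppress noisy output during the move
  try {
    return await moveFn();
  } finally {
    console.log = originalLog; // restored on success *and* on error
  }
}
```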

* refactor(task-manager): restructure task movement logic for improved validation and execution

- Renamed and refactored `moveTasksBetweenTags` to streamline the task movement process into distinct phases: validation, data preparation, dependency resolution, execution, and finalization.
- Introduced `validateMove`, `prepareTaskData`, `resolveDependencies`, `executeMoveOperation`, and `finalizeMove` functions to enhance modularity and clarity.
- Updated documentation comments to reflect changes in function responsibilities and parameters.
- Added comprehensive unit tests for the new structure, ensuring robust validation and error handling across various scenarios.
- Improved handling of dependencies and task existence checks during the move operation, enhancing overall reliability.
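
One way the phases listed above could fit together; the function names match the commit message, but the parameters, data shapes, and single-level dependency handling below are simplifying assumptions:

```javascript
// Hypothetical, simplified in-memory version of the phased structure. The real
// functions operate on tasks.json via a tasksPath; this sketch works on a plain
// object keyed by tag name.
function validateMove(data, ids, sourceTag, targetTag) {
  if (!data[sourceTag] || !data[targetTag]) throw new Error('Unknown tag');
  return { data, ids: ids.map(String), sourceTag, targetTag };
}

function prepareTaskData(ctx) {
  const tasks = ctx.data[ctx.sourceTag].tasks.filter((t) => ctx.ids.includes(String(t.id)));
  if (tasks.length !== ctx.ids.length) throw new Error('Some task IDs were not found');
  return { ...ctx, tasks };
}

function resolveDependencies(ctx, { withDependencies = false } = {}) {
  if (!withDependencies) return ctx;
  // Pull direct dependencies along with the moved tasks (one level, for brevity).
  const moving = new Set(ctx.tasks.map((t) => String(t.id)));
  const extra = ctx.data[ctx.sourceTag].tasks.filter(
    (t) =>
      !moving.has(String(t.id)) &&
      ctx.tasks.some((m) => (m.dependencies ?? []).map(String).includes(String(t.id)))
  );
  return { ...ctx, tasks: [...ctx.tasks, ...extra] };
}

function executeMoveOperation(ctx) {
  const moving = new Set(ctx.tasks.map((t) => String(t.id)));
  ctx.data[ctx.sourceTag].tasks = ctx.data[ctx.sourceTag].tasks.filter(
    (t) => !moving.has(String(t.id))
  );
  ctx.data[ctx.targetTag].tasks.push(...ctx.tasks);
  return ctx;
}

function finalizeMove(ctx) {
  return { movedIds: ctx.tasks.map((t) => t.id) };
}

function moveTasksBetweenTags(data, ids, sourceTag, targetTag, options = {}) {
  const ctx = validateMove(data, ids, sourceTag, targetTag);
  return finalizeMove(executeMoveOperation(resolveDependencies(prepareTaskData(ctx), options)));
}
```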

* fix(move-task): streamline task movement logic and improve error handling

- Refactored the task movement process to enhance clarity and maintainability by replacing `forEach` with a `for...of` loop for better async handling.
- Consolidated error handling and result logging to ensure consistent feedback during task moves.
- Updated the logic for generating files only on the last move, improving performance and reducing unnecessary operations.
- Enhanced validation for skipped tasks, ensuring accurate reporting of moved and skipped tasks in the final result.
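
As an aside on the forEach → for...of change, a small sketch of why the loop form matters for async code (the move function here is a placeholder, not the actual implementation):

```javascript
// forEach does not await async callbacks, so moves would race and per-task
// error handling would be lost. A for...of loop awaits each move in sequence
// and lets us generate files only on the last move.
async function moveAll(taskIds, moveOne) {
  const results = { moved: [], skipped: [] };
  for (const [index, id] of taskIds.entries()) {
    try {
      await moveOne(id, { generateFiles: index === taskIds.length - 1 });
      results.moved.push(id);
    } catch (error) {
      results.skipped.push({ id, reason: error.message });
    }
  }
  return results;
}
```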

* fix(docs): update error message formatting and enhance clarity in task movement documentation

- Changed code block syntax from generic to `text` for better readability in error messages related to task movement and dependency conflicts.
- Ensured consistent formatting across all error message examples to improve user understanding of task movement restrictions and resolutions.
- Added a newline at the end of the file for proper formatting.

* Update .changeset/crazy-meals-hope.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: improve changeset

* chore: improve changeset

* fix referenced bug in docs and remove docs

* chore: fix format

---------

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-11 18:58:51 +02:00
Ladi
782728ff95 feat: add --compact flag for minimal task list output (#1054)
* feat: add --compact flag for minimal task list output

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-11 18:35:23 +02:00
Fábio Vedovelli
30ca144231 feat: Add task id to task details UI (#1100)
* Display current task ID on task details page

* Changeset

* Implement CodeRabbit review suggestion.

* chore: fix CI errors

---------

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
2025-08-11 14:42:31 +02:00
Ralph Khreish
0220d0e994 chore: pimp my readme (#1122) 2025-08-11 14:29:49 +02:00
Ralph Khreish
41a8c2406a chore: add docs to monorepo (#1111) 2025-08-09 13:31:45 +02:00
github-actions[bot]
a003041cd8 Version Packages (#1107)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-08-08 22:00:39 +02:00
Ralph Khreish
6b57ead106 Release 0.24.0 #1098 from eyaltoledano/next
Release 0.24.0
2025-08-08 21:58:41 +02:00
Ralph Khreish
7b6e117b1d chore: prepare for release (exit pre mode) 2025-08-08 21:21:25 +02:00
Ralph Khreish
03b045e9cd chore: run format 2025-08-08 21:18:33 +02:00
neonwatty
699afdae59 feat: add task-checker agent to assets directory 2025-08-08 08:47:20 -07:00
github-actions[bot]
80c09802e8 chore: rc version bump 2025-08-08 12:41:29 +00:00
github-actions[bot]
cf8f0f4b1c docs: Auto-update and format models.md 2025-08-08 12:38:58 +00:00
Ralph Khreish
75c514cf5b feat: add gpt-5 support (#1105)
* feat: add gpt-5 support
2025-08-08 14:38:44 +02:00
Ralph Khreish
41d1e671b1 chore: fix CI checker, improve it (#1099) 2025-08-07 15:52:49 +02:00
Ralph Khreish
37fb569a62 Release 0.23.1 #1084 from eyaltoledano/next
chore: exit pre-release mode
2025-08-04 20:43:46 +03:00
103 changed files with 20871 additions and 531 deletions

View File

@@ -2,7 +2,9 @@
"$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json",
"changelog": [
"@changesets/changelog-github",
{ "repo": "eyaltoledano/claude-task-master" }
{
"repo": "eyaltoledano/claude-task-master"
}
],
"commit": false,
"fixed": [],
@@ -10,5 +12,7 @@
"access": "public",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": []
}
"ignore": [
"docs"
]
}

View File

@@ -0,0 +1,27 @@
---
"task-master-ai": minor
---
Add cross-tag task movement functionality for organizing tasks across different contexts.
This feature enables moving tasks between different tags (contexts) in your project, making it easier to organize work across different branches, environments, or project phases.
## CLI Usage Examples
Move a single task from one tag to another:
```bash
# Move task 5 from the backlog tag to the feature-1 tag
task-master move --from=5 --from-tag=backlog --to-tag=feature-1
# Move task with its dependencies
task-master move --from=5 --from-tag=backlog --to-tag=feature-2 --with-dependencies
# Move task without checking dependencies
task-master move --from=5 --from-tag=backlog --to-tag=bug-3 --ignore-dependencies
```
Move multiple tasks at once:
```bash
# Move multiple tasks between tags
task-master move --from=5,6,7 --from-tag=backlog --to-tag=bug-4 --with-dependencies
```

View File

@@ -0,0 +1,12 @@
---
"task-master-ai": minor
---
Add compact mode --compact / -c flag to the `tm list` CLI command
- outputs tasks in a minimal, git-style one-line format. This reduces verbose output from ~30+ lines of dashboards and tables to just 1 line per task, making it much easier to quickly scan available tasks.
- Git-style format: ID STATUS TITLE (PRIORITY) → DEPS
- Color-coded status, priority, and dependencies
- Smart title truncation and dependency abbreviation
- Subtask support with indentation
- Full backward compatibility with existing list options

View File

@@ -1,7 +0,0 @@
---
"task-master-ai": patch
---
Fix expand task generating unrelated generic subtasks
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---
Fix scope-up/down prompts to include all required fields for better AI model compatibility
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
- Ensures generated JSON includes all fields required by the schema

View File

@@ -1,9 +0,0 @@
---
"task-master-ai": minor
---
Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs

View File

@@ -0,0 +1,5 @@
---
"extension": minor
---
Display current task ID on task details page

View File

@@ -1,13 +0,0 @@
{
"mode": "pre",
"tag": "rc",
"initialVersions": {
"task-master-ai": "0.23.0",
"extension": "0.23.0"
},
"changesets": [
"fuzzy-words-count",
"tender-trams-refuse",
"vast-sites-leave"
]
}

View File

@@ -0,0 +1,7 @@
---
"task-master-ai": patch
---
Fix `add-tag --from-branch` command error where `projectRoot` was not properly referenced
The command was failing with "projectRoot is not defined" error because the code was directly referencing `projectRoot` instead of `context.projectRoot` in the git repository checks. This fix corrects the variable references to use the proper context object.

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Add support for ollama `gpt-oss:20b` and `gpt-oss:120b`

View File

@@ -1,8 +0,0 @@
---
"task-master-ai": patch
---
Fix MCP scope-up/down tools not finding tasks
- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly

View File

@@ -1,5 +0,0 @@
---
"extension": patch
---
Fix issues with some users not being able to connect to Taskmaster MCP server while using the extension

View File

@@ -1,11 +0,0 @@
---
"task-master-ai": patch
---
Improve AI provider compatibility for JSON generation
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation

View File

@@ -0,0 +1,5 @@
---
"task-master-ai": minor
---
Remove `clear` Taskmaster claude code commands since they were too close to the claude-code clear command

View File

@@ -1,59 +0,0 @@
---
"task-master-ai": minor
---
Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
## New Claude Code Agents
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks
### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks
### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'
## Installation
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to `.claude/agents/` directory.
## Usage Example
```bash
# In Claude Code, after initializing a project with tasks:
# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete
# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```
## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started

View File

@@ -1,93 +0,0 @@
Clear all subtasks from all tasks globally.
## Global Subtask Clearing
Remove all subtasks across the entire project. Use with extreme caution.
## Execution
```bash
task-master clear-subtasks --all
```
## Pre-Clear Analysis
1. **Project-Wide Summary**
```
Global Subtask Summary
━━━━━━━━━━━━━━━━━━━━
Total parent tasks: 12
Total subtasks: 47
- Completed: 15
- In-progress: 8
- Pending: 24
Work at risk: ~120 hours
```
2. **Critical Warnings**
- In-progress subtasks that will lose work
- Completed subtasks with valuable history
- Complex dependency chains
- Integration test results
## Double Confirmation
```
⚠️ DESTRUCTIVE OPERATION WARNING ⚠️
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will remove ALL 47 subtasks from your project
Including 8 in-progress and 15 completed subtasks
This action CANNOT be undone
Type 'CLEAR ALL SUBTASKS' to confirm:
```
## Smart Safeguards
- Require explicit confirmation phrase
- Create automatic backup
- Log all removed data
- Option to export first
## Use Cases
Valid reasons for global clear:
- Project restructuring
- Major pivot in approach
- Starting fresh breakdown
- Switching to different task organization
## Process
1. Full project analysis
2. Create backup file
3. Show detailed impact
4. Require confirmation
5. Execute removal
6. Generate summary report
## Alternative Suggestions
Before clearing all:
- Export subtasks to file
- Clear only pending subtasks
- Clear by task category
- Archive instead of delete
## Post-Clear Report
```
Global Subtask Clear Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Removed: 47 subtasks from 12 tasks
Backup saved: .taskmaster/backup/subtasks-20240115.json
Parent tasks updated: 12
Time estimates adjusted: Yes
Next steps:
- Review updated task list
- Re-expand complex tasks as needed
- Check project timeline
```

View File

@@ -1,86 +0,0 @@
Clear all subtasks from a specific task.
Arguments: $ARGUMENTS (task ID)
Remove all subtasks from a parent task at once.
## Clearing Subtasks
Bulk removal of all subtasks from a parent task.
## Execution
```bash
task-master clear-subtasks --id=<task-id>
```
## Pre-Clear Analysis
1. **Subtask Summary**
- Number of subtasks
- Completion status of each
- Work already done
- Dependencies affected
2. **Impact Assessment**
- Data that will be lost
- Dependencies to be removed
- Effect on project timeline
- Parent task implications
## Confirmation Required
```
Clear Subtasks Confirmation
━━━━━━━━━━━━━━━━━━━━━━━━━
Parent Task: #5 "Implement user authentication"
Subtasks to remove: 4
- #5.1 "Setup auth framework" (done)
- #5.2 "Create login form" (in-progress)
- #5.3 "Add validation" (pending)
- #5.4 "Write tests" (pending)
⚠️ This will permanently delete all subtask data
Continue? (y/n)
```
## Smart Features
- Option to convert to standalone tasks
- Backup task data before clearing
- Preserve completed work history
- Update parent task appropriately
## Process
1. List all subtasks for confirmation
2. Check for in-progress work
3. Remove all subtasks
4. Update parent task
5. Clean up dependencies
## Alternative Options
Suggest alternatives:
- Convert important subtasks to tasks
- Keep completed subtasks
- Archive instead of delete
- Export subtask data first
## Post-Clear
- Show updated parent task
- Recalculate time estimates
- Update task complexity
- Suggest next steps
## Example
```
/project:tm/clear-subtasks 5
→ Found 4 subtasks to remove
→ Warning: Subtask #5.2 is in-progress
→ Cleared all subtasks from task #5
→ Updated parent task estimates
→ Suggestion: Consider re-expanding with better breakdown
```

View File

@@ -1,5 +1,176 @@
# task-master-ai
## 0.24.0
### Minor Changes
- [#1098](https://github.com/eyaltoledano/claude-task-master/pull/1098) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
## New Claude Code Agents
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks
### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks
### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'
## Installation
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to `.claude/agents/` directory.
## Usage Example
```bash
# In Claude Code, after initializing a project with tasks:
# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete
# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```
## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
### Patch Changes
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix scope-up/down prompts to include all required fields for better AI model compatibility
- Added missing `priority` field to scope adjustment prompts to prevent validation errors with Claude-code and other models
- Ensures generated JSON includes all fields required by the schema
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix MCP scope-up/down tools not finding tasks
- Fixed task ID parsing in MCP layer - now correctly converts string IDs to numbers
- scope_up_task and scope_down_task MCP tools now work properly
- [#1079](https://github.com/eyaltoledano/claude-task-master/pull/1079) [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Improve AI provider compatibility for JSON generation
- Fixed schema compatibility issues between Perplexity and OpenAI o3 models
- Removed nullable/default modifiers from Zod schemas for broader compatibility
- Added automatic JSON repair for malformed AI responses (handles cases like missing array values)
- Perplexity now uses JSON mode for more reliable structured output
- Post-processing handles default values separately from schema validation
## 0.24.0-rc.2
### Minor Changes
- [#1105](https://github.com/eyaltoledano/claude-task-master/pull/1105) [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add GPT-5 support with proper parameter handling
- Added GPT-5 model to supported models configuration with SWE score of 0.749
## 0.24.0-rc.1
### Minor Changes
- [#1093](https://github.com/eyaltoledano/claude-task-master/pull/1093) [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Enhanced Claude Code provider with codebase-aware task generation
- Added automatic codebase analysis for Claude Code provider in `parse-prd`, `expand-task`, and `analyze-complexity` commands
- When using Claude Code as the AI provider, Task Master now instructs the AI to analyze the project structure, existing implementations, and patterns before generating tasks or subtasks
- Tasks and subtasks generated by Claude Code are now informed by actual codebase analysis, resulting in more accurate and contextual outputs
- [#1091](https://github.com/eyaltoledano/claude-task-master/pull/1091) [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Add Claude Code subagent support with task-orchestrator, task-executor, and task-checker
## New Claude Code Agents
Added specialized agents for Claude Code users to enable parallel task execution, intelligent task orchestration, and quality assurance:
### task-orchestrator
Coordinates and manages the execution of Task Master tasks with intelligent dependency analysis:
- Analyzes task dependencies to identify parallelizable work
- Deploys multiple task-executor agents for concurrent execution
- Monitors task completion and updates the dependency graph
- Automatically identifies and starts newly unblocked tasks
### task-executor
Handles the actual implementation of individual tasks:
- Executes specific tasks identified by the orchestrator
- Works on concrete implementation rather than planning
- Updates task status and logs progress
- Can work in parallel with other executors on independent tasks
### task-checker
Verifies that completed tasks meet their specifications:
- Reviews tasks marked as 'review' status
- Validates implementation against requirements
- Runs tests and checks for best practices
- Ensures quality before marking tasks as 'done'
## Installation
When using the Claude profile (`task-master rules add claude`), the agents are automatically installed to `.claude/agents/` directory.
## Usage Example
```bash
# In Claude Code, after initializing a project with tasks:
# Use task-orchestrator to analyze and coordinate work
# The orchestrator will:
# 1. Check task dependencies
# 2. Identify tasks that can run in parallel
# 3. Deploy executors for available work
# 4. Monitor progress and deploy new executors as tasks complete
# Use task-executor for specific task implementation
# When the orchestrator identifies task 2.3 needs work:
# The executor will implement that specific task
```
## Benefits
- **Parallel Execution**: Multiple independent tasks can be worked on simultaneously
- **Intelligent Scheduling**: Orchestrator understands dependencies and optimizes execution order
- **Separation of Concerns**: Planning (orchestrator) is separated from execution (executor)
- **Progress Tracking**: Real-time updates as tasks are completed
- **Automatic Progression**: As tasks complete, newly unblocked tasks are automatically started
### Patch Changes
- [#1094](https://github.com/eyaltoledano/claude-task-master/pull/1094) [`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix expand task generating unrelated generic subtasks
Fixed an issue where `task-master expand` would generate generic authentication-related subtasks regardless of the parent task context when using complexity reports. The expansion now properly includes the parent task details alongside any expansion guidance.
## 0.23.1-rc.0
### Patch Changes

View File

@@ -3,3 +3,7 @@
## Task Master AI Instructions
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md
## Changeset Guidelines
- When creating changesets, remember that it's user-facing, meaning we don't have to get into the specifics of the code, but rather mention what the end-user is getting or fixing from this changeset.

View File

@@ -1,14 +1,39 @@
# Task Master [![GitHub stars](https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social)](https://github.com/eyaltoledano/claude-task-master/stargazers)
<a name="readme-top"></a>
[![CI](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg)](https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml) [![npm version](https://badge.fury.io/js/task-master-ai.svg)](https://badge.fury.io/js/task-master-ai) [![Discord](https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat)](https://discord.gg/taskmasterai) [![License: MIT with Commons Clause](https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg)](LICENSE)
<div align='center'>
<a href="https://trendshift.io/repositories/13971" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13971" alt="eyaltoledano%2Fclaude-task-master | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
[![NPM Downloads](https://img.shields.io/npm/d18m/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai) [![NPM Downloads](https://img.shields.io/npm/dm/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai) [![NPM Downloads](https://img.shields.io/npm/dw/task-master-ai?style=flat)](https://www.npmjs.com/package/task-master-ai)
<p align="center">
<a href="https://task-master.dev"><img src="./images/logo.png?raw=true" alt="Taskmaster logo"></a>
</p>
## By [@eyaltoledano](https://x.com/eyaltoledano), [@RalphEcom](https://x.com/RalphEcom) & [@jasonzhou1993](https://x.com/jasonzhou1993)
<p align="center">
<b>Taskmaster</b>: A task management system for AI-driven development, designed to work seamlessly with any AI chat.
</p>
<p align="center">
<a href="https://discord.gg/taskmasterai" target="_blank"><img src="https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat" alt="Discord"></a> |
<a href="https://docs.task-master.dev" target="_blank">Docs</a>
</p>
<p align="center">
<a href="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml"><img src="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://github.com/eyaltoledano/claude-task-master/stargazers"><img src="https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social" alt="GitHub stars"></a>
<a href="https://badge.fury.io/js/task-master-ai"><img src="https://badge.fury.io/js/task-master-ai.svg" alt="npm version"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg" alt="License"></a>
</p>
<p align="center">
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/d18m/task-master-ai?style=flat" alt="NPM Downloads"></a>
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dm/task-master-ai?style=flat" alt="NPM Downloads"></a>
<a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dw/task-master-ai?style=flat" alt="NPM Downloads"></a>
</p>
## By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom)
[![Twitter Follow](https://img.shields.io/twitter/follow/eyaltoledano)](https://x.com/eyaltoledano)
[![Twitter Follow](https://img.shields.io/twitter/follow/RalphEcom)](https://x.com/RalphEcom)
[![Twitter Follow](https://img.shields.io/twitter/follow/jasonzhou1993)](https://x.com/jasonzhou1993)
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.
@@ -230,6 +255,11 @@ task-master show 1,3,5
# Research fresh information with project context
task-master research "What are the latest best practices for JWT authentication?"
# Move tasks between tags (cross-tag movement)
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
# Generate task files
task-master generate

apps/docs/README.md (new file, 22 lines)
View File

@@ -0,0 +1,22 @@
# Task Master Documentation
Welcome to the Task Master documentation. Use the links below to navigate to the information you need:
## Getting Started
- [Configuration Guide](archive/configuration.md) - Set up environment variables and customize Task Master
- [Tutorial](archive/ctutorial.md) - Step-by-step guide to getting started with Task Master
## Reference
- [Command Reference](archive/ccommand-reference.md) - Complete list of all available commands
- [Task Structure](archive/ctask-structure.md) - Understanding the task format and features
## Examples & Licensing
- [Example Interactions](archive/cexamples.md) - Common Cursor AI interaction examples
- [Licensing Information](archive/clicensing.md) - Detailed information about the license
## Need More Help?
If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master).

View File

@@ -0,0 +1,114 @@
---
title: "Installation(2)"
description: "This guide walks you through setting up Task Master in your development environment."
---
## Initial Setup
<Tip>
MCP (Model Context Protocol) provides the easiest way to get started with Task Master directly in your editor.
</Tip>
<AccordionGroup>
<Accordion title="Option 1: Using MCP (Recommended)" icon="sparkles">
<Steps>
<Step title="Add the MCP config to your editor">
<Link href="https://cursor.sh">Cursor</Link> recommended, but it works with other text editors
```json
{
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package", "task-master-ai", "task-master-mcp"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 128000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
}
}
}
}
```
</Step>
<Step title="Enable the MCP in your editor settings">
</Step>
<Step title="Prompt the AI to initialize Task Master">
> "Can you please initialize taskmaster-ai into my project?"
**The AI will:**
1. Create necessary project structure
2. Set up initial configuration files
3. Guide you through the rest of the process
4. Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
5. **Use natural language commands** to interact with Task Master:
> "Can you parse my PRD at scripts/prd.txt?"
>
> "What's the next task I should work on?"
>
> "Can you help me implement task 3?"
</Step>
</Steps>
</Accordion>
<Accordion title="Option 2: Manual Installation">
If you prefer to use the command line interface directly:
<Steps>
<Step title="Install">
<CodeGroup>
```bash Global
npm install -g task-master-ai
```
```bash Local
npm install task-master-ai
```
</CodeGroup>
</Step>
<Step title="Initialize a new project">
<CodeGroup>
```bash Global
task-master init
```
```bash Local
npx task-master-init
```
</CodeGroup>
</Step>
</Steps>
This will prompt you for project details and set up a new project with the necessary files and structure.
</Accordion>
</AccordionGroup>
## Common Commands
<Tip>
After setting up Task Master, you can use these commands (either via AI prompts or CLI)
</Tip>
```bash
# Parse a PRD and generate tasks
task-master parse-prd your-prd.txt
# List all tasks
task-master list
# Show the next task to work on
task-master next
# Generate task files
task-master generate

View File

@@ -0,0 +1,263 @@
---
title: "AI Client Utilities for MCP Tools"
description: "This document provides examples of how to use the new AI client utilities with AsyncOperationManager in MCP tools."
---
## Examples
<AccordionGroup>
<Accordion title="Basic Usage with Direct Functions">
```javascript
// In your direct function implementation:
import {
getAnthropicClientForMCP,
getModelConfig,
handleClaudeError
} from '../utils/ai-client-utils.js';
export async function someAiOperationDirect(args, log, context) {
try {
// Initialize Anthropic client with session from context
const client = getAnthropicClientForMCP(context.session, log);
// Get model configuration with defaults or session overrides
const modelConfig = getModelConfig(context.session);
// Make API call with proper error handling
try {
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature,
messages: [{ role: 'user', content: 'Your prompt here' }]
});
return {
success: true,
data: response
};
} catch (apiError) {
// Use helper to get user-friendly error message
const friendlyMessage = handleClaudeError(apiError);
return {
success: false,
error: {
code: 'AI_API_ERROR',
message: friendlyMessage
}
};
}
} catch (error) {
// Handle client initialization errors
return {
success: false,
error: {
code: 'AI_CLIENT_ERROR',
message: error.message
}
};
}
}
```
</Accordion>
<Accordion title="Integration with AsyncOperationManager">
```javascript
// In your MCP tool implementation:
import {
AsyncOperationManager,
StatusCodes
} from '../../utils/async-operation-manager.js';
import { someAiOperationDirect } from '../../core/direct-functions/some-ai-operation.js';
export async function someAiOperation(args, context) {
const { session, mcpLog } = context;
const log = mcpLog || console;
try {
// Create operation description
const operationDescription = `AI operation: ${args.someParam}`;
// Start async operation
const operation = AsyncOperationManager.createOperation(
operationDescription,
async (reportProgress) => {
try {
// Initial progress report
reportProgress({
progress: 0,
status: 'Starting AI operation...'
});
// Call direct function with session and progress reporting
const result = await someAiOperationDirect(args, log, {
reportProgress,
mcpLog: log,
session
});
// Final progress update
reportProgress({
progress: 100,
status: result.success ? 'Operation completed' : 'Operation failed',
result: result.data,
error: result.error
});
return result;
} catch (error) {
// Handle errors in the operation
reportProgress({
progress: 100,
status: 'Operation failed',
error: {
message: error.message,
code: error.code || 'OPERATION_FAILED'
}
});
throw error;
}
}
);
// Return immediate response with operation ID
return {
status: StatusCodes.ACCEPTED,
body: {
success: true,
message: 'Operation started',
operationId: operation.id
}
};
} catch (error) {
// Handle errors in the MCP tool
log.error(`Error in someAiOperation: ${error.message}`);
return {
status: StatusCodes.INTERNAL_SERVER_ERROR,
body: {
success: false,
error: {
code: 'OPERATION_FAILED',
message: error.message
}
}
};
}
}
```
</Accordion>
<Accordion title="Using Research Capabilities with Perplexity">
```javascript
// In your direct function:
import {
getPerplexityClientForMCP,
getBestAvailableAIModel
} from '../utils/ai-client-utils.js';
export async function researchOperationDirect(args, log, context) {
try {
// Get the best AI model for this operation based on needs
const { type, client } = await getBestAvailableAIModel(
context.session,
{ requiresResearch: true },
log
);
// Report which model we're using
if (context.reportProgress) {
await context.reportProgress({
progress: 10,
status: `Using ${type} model for research...`
});
}
// Make API call based on the model type
if (type === 'perplexity') {
// Call Perplexity
const response = await client.chat.completions.create({
model: context.session?.env?.PERPLEXITY_MODEL || 'sonar-medium-online',
messages: [{ role: 'user', content: args.researchQuery }],
temperature: 0.1
});
return {
success: true,
data: response.choices[0].message.content
};
} else {
// Call Claude as fallback
// (Implementation depends on specific needs)
// ...
}
} catch (error) {
// Handle errors
return {
success: false,
error: {
code: 'RESEARCH_ERROR',
message: error.message
}
};
}
}
```
</Accordion>
<Accordion title="Model Configuration Override">
```javascript
// In your direct function:
import { getModelConfig } from '../utils/ai-client-utils.js';
// Using custom defaults for a specific operation
const operationDefaults = {
model: 'claude-3-haiku-20240307', // Faster, smaller model
maxTokens: 1000, // Lower token limit
temperature: 0.2 // Lower temperature for more deterministic output
};
// Get model config with operation-specific defaults
const modelConfig = getModelConfig(context.session, operationDefaults);
// Now use modelConfig in your API calls
const response = await client.messages.create({
model: modelConfig.model,
max_tokens: modelConfig.maxTokens,
temperature: modelConfig.temperature
// Other parameters...
});
```
</Accordion>
</AccordionGroup>
## Best Practices
<AccordionGroup>
<Accordion title="Error Handling">
- Always use try/catch blocks around both client initialization and API calls
- Use `handleClaudeError` to provide user-friendly error messages
- Return standardized error objects with code and message
</Accordion>
<Accordion title="Progress Reporting">
- Report progress at key points (starting, processing, completing)
- Include meaningful status messages
- Include error details in progress reports when failures occur
</Accordion>
<Accordion title="Session Handling">
- Always pass the session from the context to the AI client getters
- Use `getModelConfig` to respect user settings from session
</Accordion>
<Accordion title="Model Selection">
- Use `getBestAvailableAIModel` when you need to select between different models
- Set `requiresResearch: true` when you need Perplexity capabilities
</Accordion>
<Accordion title="AsyncOperationManager Integration">
- Create descriptive operation names
- Handle all errors within the operation function
- Return standardized results from direct functions
- Return immediate responses with operation IDs
</Accordion>
</AccordionGroup>

View File

@@ -0,0 +1,180 @@
---
title: "AI Development Workflow"
description: "Learn how Task Master and Cursor AI work together to streamline your development workflow"
---
<Tip>The Cursor agent is pre-configured (via the rules file) to follow this workflow</Tip>
<AccordionGroup>
<Accordion title="1. Task Discovery and Selection">
Ask the agent to list available tasks:
```
What tasks are available to work on next?
```
The agent will:
- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
</Accordion>
<Accordion title="2. Task Implementation">
When implementing a task, the agent will:
- Reference the task's details section for implementation specifics
- Consider dependencies on previous tasks
- Follow the project's coding standards
- Create appropriate tests based on the task's testStrategy
You can ask:
```
Let's implement task 3. What does it involve?
```
</Accordion>
<Accordion title="3. Task Verification">
Before marking a task as complete, verify it according to:
- The task's specified testStrategy
- Any automated tests in the codebase
- Manual verification if required
</Accordion>
<Accordion title="4. Task Completion">
When a task is completed, tell the agent:
```
Task 3 is now complete. Please update its status.
```
The agent will execute:
```bash
task-master set-status --id=3 --status=done
```
</Accordion>
<Accordion title="5. Handling Implementation Drift">
If during implementation, you discover that:
- The current approach differs significantly from what was planned
- Future tasks need to be modified due to current implementation choices
- New dependencies or requirements have emerged
Tell the agent:
```
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
```
The agent will execute:
```bash
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
```
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
</Accordion>
<Accordion title="6. Breaking Down Complex Tasks">
For complex tasks that need more granularity:
```
Task 5 seems complex. Can you break it down into subtasks?
```
The agent will execute:
```bash
task-master expand --id=5 --num=3
```
You can provide additional context:
```
Please break down task 5 with a focus on security considerations.
```
The agent will execute:
```bash
task-master expand --id=5 --prompt="Focus on security aspects"
```
You can also expand all pending tasks:
```
Please break down all pending tasks into subtasks.
```
The agent will execute:
```bash
task-master expand --all
```
For research-backed subtask generation using Perplexity AI:
```
Please break down task 5 using research-backed generation.
```
The agent will execute:
```bash
task-master expand --id=5 --research
```
</Accordion>
</AccordionGroup>
## Example Cursor AI Interactions
<AccordionGroup>
<Accordion title="Starting a new project">
```
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
</Accordion>
<Accordion title="Working on tasks">
```
What's the next task I should work on? Please consider dependencies and priorities.
```
</Accordion>
<Accordion title="Implementing a specific task">
```
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```
</Accordion>
<Accordion title="Managing subtasks">
```
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
```
</Accordion>
<Accordion title="Handling changes">
```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
```
</Accordion>
<Accordion title="Completing work">
```
I've finished implementing the authentication system described in task 2. All tests are passing.
Please mark it as complete and tell me what I should work on next.
```
</Accordion>
<Accordion title="Analyzing complexity">
```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
</Accordion>
<Accordion title="Viewing complexity report">
```
Can you show me the complexity report in a more readable format?
```
</Accordion>
</AccordionGroup>

View File

@@ -0,0 +1,208 @@
---
title: "Task Master Commands"
description: "A comprehensive reference of all available Task Master commands"
---
<AccordionGroup>
<Accordion title="Parse PRD">
```bash
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
```
</Accordion>
<Accordion title="List Tasks">
```bash
# List all tasks
task-master list
# List tasks with a specific status
task-master list --status=<status>
# List tasks with subtasks
task-master list --with-subtasks
# List tasks with a specific status and include subtasks
task-master list --status=<status> --with-subtasks
```
</Accordion>
<Accordion title="Show Next Task">
```bash
# Show the next task to work on based on dependencies and status
task-master next
```
</Accordion>
<Accordion title="Show Specific Task">
```bash
# Show details of a specific task
task-master show <id>
# or
task-master show --id=<id>
# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
```
</Accordion>
<Accordion title="Update Tasks">
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
```
</Accordion>
<Accordion title="Update a Specific Task">
```bash
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"
# Use research-backed updates with Perplexity AI
task-master update-task --id=<id> --prompt="<prompt>" --research
```
</Accordion>
<Accordion title="Update a Subtask">
```bash
# Append additional information to a specific subtask
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
# Use research-backed updates with Perplexity AI
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```
Unlike the `update-task` command which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
</Accordion>
<Accordion title="Generate Task Files">
```bash
# Generate individual task files from tasks.json
task-master generate
```
</Accordion>
<Accordion title="Set Task Status">
```bash
# Set status of a single task
task-master set-status --id=<id> --status=<status>
# Set status for multiple tasks
task-master set-status --id=1,2,3 --status=<status>
# Set status for subtasks
task-master set-status --id=1.1,1.2 --status=<status>
```
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
</Accordion>
<Accordion title="Expand Tasks">
```bash
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"
# Expand all pending tasks
task-master expand --all
# Force regeneration of subtasks for tasks that already have them
task-master expand --all --force
# Research-backed subtask generation for a specific task
task-master expand --id=<id> --research
# Research-backed generation for all tasks
task-master expand --all --research
```
</Accordion>
<Accordion title="Clear Subtasks">
```bash
# Clear subtasks from a specific task
task-master clear-subtasks --id=<id>
# Clear subtasks from multiple tasks
task-master clear-subtasks --id=1,2,3
# Clear subtasks from all tasks
task-master clear-subtasks --all
```
</Accordion>
<Accordion title="Analyze Task Complexity">
```bash
# Analyze complexity of all tasks
task-master analyze-complexity
# Save report to a custom location
task-master analyze-complexity --output=my-report.json
# Use a specific LLM model
task-master analyze-complexity --model=claude-3-opus-20240229
# Set a custom complexity threshold (1-10)
task-master analyze-complexity --threshold=6
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json
# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>
<Accordion title="View Complexity Report">
```bash
# Display the task complexity analysis report
task-master complexity-report
# View a report at a custom location
task-master complexity-report --file=my-report.json
```
</Accordion>
<Accordion title="Managing Task Dependencies">
```bash
# Add a dependency to a task
task-master add-dependency --id=<id> --depends-on=<id>
# Remove a dependency from a task
task-master remove-dependency --id=<id> --depends-on=<id>
# Validate dependencies without fixing them
task-master validate-dependencies
# Find and fix invalid dependencies automatically
task-master fix-dependencies
```
</Accordion>
<Accordion title="Add a New Task">
```bash
# Add a new task using AI
task-master add-task --prompt="Description of the new task"
# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3
# Add a task with priority
task-master add-task --prompt="Description" --priority=high
```
</Accordion>
<Accordion title="Initialize a Project">
```bash
# Initialize a new project with Task Master structure
task-master init
```
</Accordion>
</AccordionGroup>

View File

@@ -0,0 +1,80 @@
---
title: "Configuration"
description: "Configure Task Master through environment variables in a .env file"
---
## Required Configuration
<Note>
Task Master requires an Anthropic API key to function. Add this to your `.env` file:
```bash
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
```
You can obtain an API key from the [Anthropic Console](https://console.anthropic.com/).
</Note>
## Optional Configuration
| Variable | Default Value | Description | Example |
| --- | --- | --- | --- |
| `MODEL` | `"claude-3-7-sonnet-20250219"` | Claude model to use | `MODEL=claude-3-opus-20240229` |
| `MAX_TOKENS` | `"4000"` | Maximum tokens for responses | `MAX_TOKENS=8000` |
| `TEMPERATURE` | `"0.7"` | Temperature for model responses | `TEMPERATURE=0.5` |
| `DEBUG` | `"false"` | Enable debug logging | `DEBUG=true` |
| `LOG_LEVEL` | `"info"` | Console output level | `LOG_LEVEL=debug` |
| `DEFAULT_SUBTASKS` | `"3"` | Default subtask count | `DEFAULT_SUBTASKS=5` |
| `DEFAULT_PRIORITY` | `"medium"` | Default priority | `DEFAULT_PRIORITY=high` |
| `PROJECT_NAME` | `"MCP SaaS MVP"` | Project name in metadata | `PROJECT_NAME=My Awesome Project` |
| `PROJECT_VERSION` | `"1.0.0"` | Version in metadata | `PROJECT_VERSION=2.1.0` |
| `PERPLEXITY_API_KEY` | - | For research-backed features | `PERPLEXITY_API_KEY=pplx-...` |
| `PERPLEXITY_MODEL` | `"sonar-medium-online"` | Perplexity model | `PERPLEXITY_MODEL=sonar-large-online` |
## Example .env File
```
# Required
ANTHROPIC_API_KEY=sk-ant-api03-your-api-key
# Optional - Claude Configuration
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7
# Optional - Perplexity API for Research
PERPLEXITY_API_KEY=pplx-your-api-key
PERPLEXITY_MODEL=sonar-medium-online
# Optional - Project Info
PROJECT_NAME=My Project
PROJECT_VERSION=1.0.0
# Optional - Application Configuration
DEFAULT_SUBTASKS=3
DEFAULT_PRIORITY=medium
DEBUG=false
LOG_LEVEL=info
```
## Troubleshooting
### If `task-master init` doesn't respond:
Try running it with Node directly:
```bash
node node_modules/claude-task-master/scripts/init.js
```
Or clone the repository and run:
```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```
<Note>
For advanced configuration options and detailed customization, see our Advanced Configuration guide.
</Note>

View File

@@ -0,0 +1,95 @@
---
title: "Cursor AI Integration"
description: "Learn how to set up and use Task Master with Cursor AI"
---
## Setting up Cursor AI Integration
<Check>
Task Master is designed to work seamlessly with [Cursor AI](https://www.cursor.so/), providing a structured workflow for AI-driven development.
</Check>
<AccordionGroup>
<Accordion title="Using Cursor with MCP (Recommended)" icon="sparkles">
If you've already set up Task Master with MCP in Cursor, the integration is automatic. You can simply use natural language to interact with Task Master:
```
What tasks are available to work on next?
Can you analyze the complexity of our tasks?
I'd like to implement task 4. What does it involve?
```
</Accordion>
<Accordion title="Manual Cursor Setup">
If you're not using MCP, you can still set up Cursor integration:
<Steps>
<Step title="After initializing your project, open it in Cursor">
The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
</Step>
<Step title="Place your PRD document in the scripts/ directory (e.g., scripts/prd.txt)">
</Step>
<Step title="Open Cursor's AI chat and switch to Agent mode">
</Step>
</Steps>
</Accordion>
<Accordion title="Alternative MCP Setup in Cursor">
<Steps>
<Step title="Go to Cursor settings">
</Step>
<Step title="Navigate to the MCP section">
</Step>
<Step title="Click on 'Add New MCP Server'">
</Step>
<Step title="Configure with the following details:">
- Name: "Task Master"
- Type: "Command"
- Command: "npx -y --package task-master-ai task-master-mcp"
</Step>
<Step title="Save Settings">
</Step>
</Steps>
Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience.
</Accordion>
</AccordionGroup>
## Initial Task Generation
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```
The agent will execute:
```bash
task-master parse-prd scripts/prd.txt
```
This will:
- Parse your PRD document
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies
The agent will understand this process because of the Cursor rules file.
### Generate Individual Task Files
Next, ask the agent to generate individual task files:
```
Please generate individual task files from tasks.json
```
The agent will execute:
```bash
task-master generate
```
This creates individual task files in the `tasks/` directory (e.g., `task_001.txt`, `task_002.txt`), making it easier to reference specific tasks.

View File

@@ -0,0 +1,56 @@
---
title: "Example Cursor AI Interactions"
description: "Below are some common interactions with Cursor AI when using Task Master"
---
<AccordionGroup>
<Accordion title="Starting a new project">
```
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt.
Can you help me parse it and set up the initial tasks?
```
</Accordion>
<Accordion title="Working on tasks">
```
What's the next task I should work on? Please consider dependencies and priorities.
```
</Accordion>
<Accordion title="Implementing a specific task">
```
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```
</Accordion>
<Accordion title="Managing subtasks">
```
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
```
</Accordion>
<Accordion title="Handling changes">
```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
```
</Accordion>
<Accordion title="Completing work">
```
I've finished implementing the authentication system described in task 2. All tests are passing.
Please mark it as complete and tell me what I should work on next.
```
</Accordion>
<Accordion title="Analyzing complexity">
```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
</Accordion>
<Accordion title="Viewing complexity report">
```
Can you show me the complexity report in a more readable format?
```
</Accordion>
</AccordionGroup>

View File

@@ -0,0 +1,210 @@
---
title: Advanced Tasks
sidebarTitle: "Advanced Tasks"
---
## AI-Driven Development Workflow
The Cursor agent is pre-configured (via the rules file) to follow this workflow:
### 1. Task Discovery and Selection
Ask the agent to list available tasks:
```
What tasks are available to work on next?
```
```
Can you show me tasks 1, 3, and 5 to understand their current status?
```
The agent will:
- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Run `task-master show 1,3,5` to display multiple tasks with interactive options
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
### 2. Task Implementation
When implementing a task, the agent will:
- Reference the task's details section for implementation specifics
- Consider dependencies on previous tasks
- Follow the project's coding standards
- Create appropriate tests based on the task's testStrategy
You can ask:
```
Let's implement task 3. What does it involve?
```
### 2.1. Viewing Multiple Tasks
For efficient context gathering and batch operations:
```
Show me tasks 5, 7, and 9 so I can plan my implementation approach.
```
The agent will:
- Run `task-master show 5,7,9` to display a compact summary table
- Show task status, priority, and progress indicators
- Provide an interactive action menu with batch operations
- Allow you to perform group actions like marking multiple tasks as in-progress
### 3. Task Verification
Before marking a task as complete, verify it according to:
- The task's specified testStrategy
- Any automated tests in the codebase
- Manual verification if required
### 4. Task Completion
When a task is completed, tell the agent:
```
Task 3 is now complete. Please update its status.
```
The agent will execute:
```bash
task-master set-status --id=3 --status=done
```
### 5. Handling Implementation Drift
If during implementation, you discover that:
- The current approach differs significantly from what was planned
- Future tasks need to be modified due to current implementation choices
- New dependencies or requirements have emerged
Tell the agent:
```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks (from ID 4) to reflect this change?
```
The agent will execute:
```bash
task-master update --from=4 --prompt="Now we are using MongoDB instead of PostgreSQL."
# OR, if research is needed to find best practices for MongoDB:
task-master update --from=4 --prompt="Update to use MongoDB, researching best practices" --research
```
This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
### 6. Reorganizing Tasks
If you need to reorganize your task structure:
```
I think subtask 5.2 would fit better as part of task 7 instead. Can you move it there?
```
The agent will execute:
```bash
task-master move --from=5.2 --to=7.3
```
You can reorganize tasks in various ways:
- Moving a standalone task to become a subtask: `--from=5 --to=7`
- Moving a subtask to become a standalone task: `--from=5.2 --to=7`
- Moving a subtask to a different parent: `--from=5.2 --to=7.3`
- Reordering subtasks within the same parent: `--from=5.2 --to=5.4`
- Moving a task to a new ID position: `--from=5 --to=25` (even if task 25 doesn't exist yet)
- Moving multiple tasks at once: `--from=10,11,12 --to=16,17,18` (both lists must contain the same number of IDs; Taskmaster maps each source ID to the destination ID at the same position)
When moving tasks to new IDs:
- The system automatically creates placeholder tasks for non-existent destination IDs
- This prevents accidental data loss during reorganization
- Any tasks that depend on moved tasks will have their dependencies updated
- When moving a parent task, all its subtasks are automatically moved with it and renumbered
This is particularly useful as your project understanding evolves and you need to refine your task structure.
### 7. Resolving Merge Conflicts with Tasks
When working with a team, you might encounter merge conflicts in your tasks.json file if multiple team members create tasks on different branches. The move command makes resolving these conflicts straightforward:
```
I just merged the main branch and there's a conflict with tasks.json. My teammates created tasks 10-15 while I created tasks 10-12 on my branch. Can you help me resolve this?
```
The agent will help you:
1. Keep your teammates' tasks (10-15)
2. Move your tasks to new positions to avoid conflicts:
```bash
# Move your tasks to new positions (e.g., 16-18)
task-master move --from=10 --to=16
task-master move --from=11 --to=17
task-master move --from=12 --to=18
```
This approach preserves everyone's work while maintaining a clean task structure, making it much easier to handle task conflicts than trying to manually merge JSON files.
### 8. Breaking Down Complex Tasks
For complex tasks that need more granularity:
```
Task 5 seems complex. Can you break it down into subtasks?
```
The agent will execute:
```bash
task-master expand --id=5 --num=3
```
You can provide additional context:
```
Please break down task 5 with a focus on security considerations.
```
The agent will execute:
```bash
task-master expand --id=5 --prompt="Focus on security aspects"
```
You can also expand all pending tasks:
```
Please break down all pending tasks into subtasks.
```
The agent will execute:
```bash
task-master expand --all
```
For research-backed subtask generation using the configured research model:
```
Please break down task 5 using research-backed generation.
```
The agent will execute:
```bash
task-master expand --id=5 --research
```

View File

@@ -0,0 +1,317 @@
---
title: Advanced Configuration
sidebarTitle: "Advanced Configuration"
---
Taskmaster uses two primary methods for configuration:
1. **`.taskmaster/config.json` File (Recommended - New Structure)**
- This JSON file stores most configuration settings, including AI model selections, parameters, logging levels, and project defaults.
- **Location:** This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`.
- **Migration:** Existing projects with `.taskmasterconfig` in the root will continue to work, but should be migrated to the new structure using `task-master migrate`.
- **Management:** Use the `task-master models --setup` command (or `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.
- **Example Structure:**
```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219",
      "maxTokens": 64000,
      "temperature": 0.2,
      "baseURL": "https://api.anthropic.com/v1"
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1,
      "baseURL": "https://api.perplexity.ai/v1"
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-5-sonnet",
      "maxTokens": 64000,
      "temperature": 0.2
    }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "defaultTag": "master",
    "projectName": "Your Project Name",
    "ollamaBaseURL": "http://localhost:11434/api",
    "azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
    "vertexProjectId": "your-gcp-project-id",
    "vertexLocation": "us-central1"
  }
}
```
2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**
- For projects that haven't migrated to the new structure yet.
- **Location:** Project root directory.
- **Migration:** Use `task-master migrate` to move this to `.taskmaster/config.json`.
- **Deprecation:** While still supported, you'll see warnings encouraging migration to the new structure.
## Environment Variables (`.env` file or MCP `env` block - For API Keys Only)
- Used **exclusively** for sensitive API keys and specific endpoint URLs.
- **Location:**
- For CLI usage: Create a `.env` file in your project root.
- For MCP/Cursor usage: Configure keys in the `env` section of your `.cursor/mcp.json` file.
- **Required API Keys (Depending on configured providers):**
- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key (also used for Vertex AI provider).
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.
- **Optional Endpoint Overrides:**
- **Per-role `baseURL` in `.taskmasterconfig`:** You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- **Environment Variable Overrides (`<PROVIDER>_BASE_URL`):** For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This will override any `baseURL` set in the configuration file for that provider. This is the recommended way to connect to OpenAI-compatible APIs.
- `AZURE_OPENAI_ENDPOINT`: Required if using Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (Default: `http://localhost:11434/api`).
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the 'vertex' provider.
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., 'us-central1'). Default is 'us-central1'.
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to service account credentials JSON file for Google Cloud auth (alternative to API key for Vertex AI).
**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are **managed in `.taskmaster/config.json`** (or `.taskmasterconfig` for unmigrated projects), not environment variables.
## Tagged Task Lists Configuration (v0.17+)
Taskmaster includes a tagged task lists system for multi-context task management.
### Global Tag Settings
```json
"global": {
"defaultTag": "master"
}
```
- **`defaultTag`** (string): Default tag context for new operations (default: "master")
### Git Integration
Task Master provides manual git integration through the `--from-branch` option:
- **Manual Tag Creation**: Use `task-master add-tag --from-branch` to create a tag based on your current git branch name
- **User Control**: No automatic tag switching - you control when and how tags are created
- **Flexible Workflow**: Supports any git workflow without imposing rigid branch-tag mappings
## State Management File
Taskmaster uses `.taskmaster/state.json` to track tagged system runtime information:
```json
{
"currentTag": "master",
"lastSwitched": "2025-06-11T20:26:12.598Z",
"migrationNoticeShown": true
}
```
- **`currentTag`**: Currently active tag context
- **`lastSwitched`**: Timestamp of last tag switch
- **`migrationNoticeShown`**: Whether migration notice has been displayed
This file is automatically created during tagged system migration and should not be manually edited.
## Example `.env` File (for API Keys)
```
# Required API keys for providers configured in .taskmaster/config.json
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.
# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# Azure OpenAI Configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
```
## Troubleshooting
### Configuration Errors
- If Task Master reports errors about missing configuration or cannot find the config file, run `task-master models --setup` in your project root to create or repair the file.
- For new projects, config will be created at `.taskmaster/config.json`. For legacy projects, you may want to use `task-master migrate` to move to the new structure.
- Ensure API keys are correctly placed in your `.env` file (for CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.
### If `task-master init` doesn't respond:
Try running it with Node directly:
```bash
node node_modules/claude-task-master/scripts/init.js
```
Or clone the repository and run:
```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```
## Provider-Specific Configuration
### Google Vertex AI Configuration
Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
1. **Prerequisites**:
- A Google Cloud account with Vertex AI API enabled
- Either a Google API key with Vertex AI permissions OR a service account with appropriate roles
- A Google Cloud project ID
2. **Authentication Options**:
- **API Key**: Set the `GOOGLE_API_KEY` environment variable
- **Service Account**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file
3. **Required Configuration**:
- Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
- Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: us-central1)
4. **Example Setup**:
```bash
# In .env file
GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
VERTEX_PROJECT_ID=my-gcp-project-123
VERTEX_LOCATION=us-central1
```
Or using service account:
```bash
# In .env file
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
VERTEX_PROJECT_ID=my-gcp-project-123
VERTEX_LOCATION=us-central1
```
5. **In .taskmaster/config.json**:
```json
"global": {
"vertexProjectId": "my-gcp-project-123",
"vertexLocation": "us-central1"
}
```
### Azure OpenAI Configuration
Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:
1. **Prerequisites**:
- An Azure account with an active subscription
- Azure OpenAI service resource created in the Azure portal
- Azure OpenAI API key and endpoint URL
- Deployed models (e.g., gpt-4o, gpt-4o-mini, gpt-4.1) in your Azure OpenAI resource
2. **Authentication**:
- Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key
- Configure the endpoint URL using one of the methods below
3. **Configuration Options**:
**Option 1: Using Global Azure Base URL (affects all Azure models)**
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "azure",
"modelId": "gpt-4o",
"maxTokens": 16000,
"temperature": 0.7
},
"fallback": {
"provider": "azure",
"modelId": "gpt-4o-mini",
"maxTokens": 10000,
"temperature": 0.7
}
},
"global": {
"azureBaseURL": "https://your-resource-name.azure.com/openai/deployments"
}
}
```
**Option 2: Using Per-Model Base URLs (recommended for flexibility)**
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "azure",
"modelId": "gpt-4o",
"maxTokens": 16000,
"temperature": 0.7,
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "azure",
"modelId": "gpt-4o-mini",
"maxTokens": 10000,
"temperature": 0.7,
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
}
}
}
```
4. **Environment Variables**:
```bash
# In .env file
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# Optional: Override endpoint for all Azure models
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/openai/deployments
```
5. **Important Notes**:
- **Model Deployment Names**: The `modelId` in your configuration should match the **deployment name** you created in Azure OpenAI Studio, not the underlying model name
- **Base URL Priority**: Per-model `baseURL` settings override the global `azureBaseURL` setting
- **Endpoint Format**: When using per-model `baseURL`, use the full path including `/openai/deployments`
6. **Troubleshooting**:
**"Resource not found" errors:**
- Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
- Verify that your deployment name in `modelId` exactly matches what's configured in Azure OpenAI Studio
- Check that your Azure OpenAI resource is in the correct region and properly deployed
**Authentication errors:**
- Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired
- Ensure your Azure OpenAI resource has the necessary permissions
- Check that your subscription has not been suspended or reached quota limits
**Model availability errors:**
- Confirm the model is deployed in your Azure OpenAI resource
- Verify the deployment name matches your configuration exactly (case-sensitive)
- Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
- Ensure you're not being rate limited: keep `maxTokens` within the Tokens per Minute (TPM) rate limit configured for your deployment

View File

@@ -0,0 +1,8 @@
---
title: Intro to Advanced Usage
sidebarTitle: "Advanced Usage"
---
# Best Practices
Explore advanced tips, recommended workflows, and best practices for getting the most out of Task Master.

View File

@@ -0,0 +1,209 @@
---
title: CLI Commands
sidebarTitle: "CLI Commands"
---
<AccordionGroup>
<Accordion title="Parse PRD">
```bash
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>
# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10
```
</Accordion>
<Accordion title="List Tasks">
```bash
# List all tasks
task-master list
# List tasks with a specific status
task-master list --status=<status>
# List tasks with subtasks
task-master list --with-subtasks
# List tasks with a specific status and include subtasks
task-master list --status=<status> --with-subtasks
```
</Accordion>
<Accordion title="Show Next Task">
```bash
# Show the next task to work on based on dependencies and status
task-master next
```
</Accordion>
<Accordion title="Show Specific Task">
```bash
# Show details of a specific task
task-master show <id>
# or
task-master show --id=<id>
# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2
```
</Accordion>
<Accordion title="Update Tasks">
```bash
# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"
```
</Accordion>
<Accordion title="Update a Specific Task">
```bash
# Update a single task by ID with new information
task-master update-task --id=<id> --prompt="<prompt>"
# Use research-backed updates with Perplexity AI
task-master update-task --id=<id> --prompt="<prompt>" --research
```
</Accordion>
<Accordion title="Update a Subtask">
```bash
# Append additional information to a specific subtask
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>"
# Example: Add details about API rate limiting to subtask 2 of task 5
task-master update-subtask --id=5.2 --prompt="Add rate limiting of 100 requests per minute"
# Use research-backed updates with Perplexity AI
task-master update-subtask --id=<parentId.subtaskId> --prompt="<prompt>" --research
```
Unlike the `update-task` command which replaces task information, the `update-subtask` command _appends_ new information to the existing subtask details, marking it with a timestamp. This is useful for iteratively enhancing subtasks while preserving the original content.
</Accordion>
<Accordion title="Generate Task Files">
```bash
# Generate individual task files from tasks.json
task-master generate
```
</Accordion>
<Accordion title="Set Task Status">
```bash
# Set status of a single task
task-master set-status --id=<id> --status=<status>
# Set status for multiple tasks
task-master set-status --id=1,2,3 --status=<status>
# Set status for subtasks
task-master set-status --id=1.1,1.2 --status=<status>
```
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
</Accordion>
<Accordion title="Expand Tasks">
```bash
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>
# Expand with additional context
task-master expand --id=<id> --prompt="<context>"
# Expand all pending tasks
task-master expand --all
# Force regeneration of subtasks for tasks that already have them
task-master expand --all --force
# Research-backed subtask generation for a specific task
task-master expand --id=<id> --research
# Research-backed generation for all tasks
task-master expand --all --research
```
</Accordion>
<Accordion title="Clear Subtasks">
```bash
# Clear subtasks from a specific task
task-master clear-subtasks --id=<id>
# Clear subtasks from multiple tasks
task-master clear-subtasks --id=1,2,3
# Clear subtasks from all tasks
task-master clear-subtasks --all
```
</Accordion>
<Accordion title="Analyze Task Complexity">
```bash
# Analyze complexity of all tasks
task-master analyze-complexity
# Save report to a custom location
task-master analyze-complexity --output=my-report.json
# Use a specific LLM model
task-master analyze-complexity --model=claude-3-opus-20240229
# Set a custom complexity threshold (1-10)
task-master analyze-complexity --threshold=6
# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json
# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research
```
</Accordion>
<Accordion title="View Complexity Report">
```bash
# Display the task complexity analysis report
task-master complexity-report
# View a report at a custom location
task-master complexity-report --file=my-report.json
```
</Accordion>
<Accordion title="Managing Task Dependencies">
```bash
# Add a dependency to a task
task-master add-dependency --id=<id> --depends-on=<id>
# Remove a dependency from a task
task-master remove-dependency --id=<id> --depends-on=<id>
# Validate dependencies without fixing them
task-master validate-dependencies
# Find and fix invalid dependencies automatically
task-master fix-dependencies
```
</Accordion>
<Accordion title="Add a New Task">
```bash
# Add a new task using AI
task-master add-task --prompt="Description of the new task"
# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3
# Add a task with priority
task-master add-task --prompt="Description" --priority=high
```
</Accordion>
<Accordion title="Initialize a Project">
```bash
# Initialize a new project with Task Master structure
task-master init
```
</Accordion>
</AccordionGroup>

View File

@@ -0,0 +1,241 @@
---
title: Technical Capabilities
sidebarTitle: "Technical Capabilities"
---
# Capabilities (Technical)
Discover the technical capabilities of Task Master, including supported models, integrations, and more.
# CLI Interface Synopsis
This document outlines the command-line interface (CLI) for the Task Master application, as defined in `bin/task-master.js` and `scripts/modules/commands.js`. It is intended to help documentation authors understand how users interact with the application from the command line.
## Entry Point
The main entry point for the CLI is the `task-master` command, which is an executable script that spawns the main application logic in `scripts/dev.js`.
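A simplified sketch of what such an entry point can look like (this is illustrative only and not the actual contents of `bin/task-master.js`):
```javascript
#!/usr/bin/env node
// Illustrative sketch, not the real bin/task-master.js: forwards CLI
// arguments to the main application logic in scripts/dev.js.
import { spawn } from 'node:child_process';
import path from 'node:path';
import { fileURLToPath } from 'node:url';

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const devScript = path.join(__dirname, '..', 'scripts', 'dev.js');

const child = spawn('node', [devScript, ...process.argv.slice(2)], {
  stdio: 'inherit' // stream output straight to the user's terminal
});

child.on('close', (code) => process.exit(code ?? 0));
```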
## Global Options
The following options are available for all commands:
- `-h, --help`: Display help information.
- `--version`: Display the application's version.
## Commands
The CLI is organized into a series of commands, each with its own set of options. The following is a summary of the available commands, categorized by their functionality.
### 1. Task and Subtask Management
- **`add`**: Creates a new task using an AI-powered prompt.
- `--prompt <prompt>`: The prompt to use for generating the task.
- `--dependencies <dependencies>`: A comma-separated list of task IDs that this task depends on.
- `--priority <priority>`: The priority of the task (e.g., `high`, `medium`, `low`).
- **`add-subtask`**: Adds a subtask to a parent task.
- `--parent-id <parentId>`: The ID of the parent task.
- `--task-id <taskId>`: The ID of an existing task to convert to a subtask.
- `--title <title>`: The title of the new subtask.
- **`remove`**: Removes one or more tasks or subtasks.
- `--ids <ids>`: A comma-separated list of task or subtask IDs to remove.
- **`remove-subtask`**: Removes a subtask from its parent.
- `--id <subtaskId>`: The ID of the subtask to remove (in the format `parentId.subtaskId`).
- `--convert-to-task`: Converts the subtask to a standalone task.
- **`update`**: Updates multiple tasks starting from a specific ID.
- `--from <fromId>`: The ID of the task to start updating from.
- `--prompt <prompt>`: The new context to apply to the tasks.
- **`update-task`**: Updates a single task.
- `--id <taskId>`: The ID of the task to update.
- `--prompt <prompt>`: The new context to apply to the task.
- **`update-subtask`**: Appends information to a subtask.
- `--id <subtaskId>`: The ID of the subtask to update (in the format `parentId.subtaskId`).
- `--prompt <prompt>`: The information to append to the subtask.
- **`move`**: Moves a task or subtask.
- `--from <sourceId>`: The ID of the task or subtask to move.
- `--to <destinationId>`: The destination ID.
- **`clear-subtasks`**: Clears all subtasks from one or more tasks.
- `--ids <ids>`: A comma-separated list of task IDs.
### 2. Task Information and Status
- **`list`**: Lists all tasks.
- `--status <status>`: Filters tasks by status.
- `--with-subtasks`: Includes subtasks in the list.
- **`show`**: Shows the details of a specific task.
- `--id <taskId>`: The ID of the task to show.
- **`next`**: Shows the next task to work on.
- **`set-status`**: Sets the status of a task or subtask.
- `--id <id>`: The ID of the task or subtask.
- `--status <status>`: The new status.
### 3. Task Analysis and Expansion
- **`parse-prd`**: Parses a PRD to generate tasks.
- `--file <file>`: The path to the PRD file.
- `--num-tasks <numTasks>`: The number of tasks to generate.
- **`expand`**: Expands a task into subtasks.
- `--id <taskId>`: The ID of the task to expand.
- `--num-subtasks <numSubtasks>`: The number of subtasks to generate.
- **`expand-all`**: Expands all eligible tasks.
- `--num-subtasks <numSubtasks>`: The number of subtasks to generate for each task.
- **`analyze-complexity`**: Analyzes task complexity.
- `--file <file>`: The path to the tasks file.
- **`complexity-report`**: Displays the complexity analysis report.
### 4. Project and Configuration
- **`init`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`migrate`**: Migrates a project to the new directory structure.
- **`research`**: Performs AI-powered research.
- `--query <query>`: The research query.
This synopsis provides a comprehensive overview of the CLI commands and their options, which should be helpful for creating user-facing documentation.
# Core Implementation Synopsis
This document provides a high-level overview of the core implementation of the Task Master application, focusing on the functionalities exposed through `scripts/modules/task-manager.js`. This serves as a guide for understanding the application's capabilities when writing user-facing documentation.
## Core Concepts
The application revolves around the management of tasks and subtasks, which are stored in a `tasks.json` file. The core logic provides functionalities to create, read, update, and delete tasks and subtasks, as well as manage their dependencies and statuses.
### Task Structure
A task is a JSON object with the following key properties:
- `id`: A unique number identifying the task.
- `title`: A string representing the task's title.
- `description`: A string providing a brief description of the task.
- `details`: A string containing detailed information about the task.
- `testStrategy`: A string describing how to test the task.
- `status`: A string representing the task's current status (e.g., `pending`, `in-progress`, `done`).
- `dependencies`: An array of task IDs that this task depends on.
- `priority`: A string representing the task's priority (e.g., `high`, `medium`, `low`).
- `subtasks`: An array of subtask objects.
A subtask has a similar structure to a task but is nested within a parent task.
## Feature Categories
The core functionalities can be categorized as follows:
### 1. Task and Subtask Management
These functions are the bread and butter of the application, allowing for the creation, modification, and deletion of tasks and subtasks.
- **`addTask(prompt, dependencies, priority)`**: Creates a new task using an AI-powered prompt to generate the title, description, details, and test strategy. It can also be used to create a task manually by providing the task data directly.
- **`addSubtask(parentId, existingTaskId, newSubtaskData)`**: Adds a subtask to a parent task. It can either convert an existing task into a subtask or create a new subtask from scratch.
- **`removeTask(taskIds)`**: Removes one or more tasks or subtasks.
- **`removeSubtask(subtaskId, convertToTask)`**: Removes a subtask from its parent. It can optionally convert the subtask into a standalone task.
- **`updateTaskById(taskId, prompt)`**: Updates a task's information based on a prompt.
- **`updateSubtaskById(subtaskId, prompt)`**: Appends additional information to a subtask's details.
- **`updateTasks(fromId, prompt)`**: Updates multiple tasks starting from a specific ID based on a new context.
- **`moveTask(sourceId, destinationId)`**: Moves a task or subtask to a new position.
- **`clearSubtasks(taskIds)`**: Clears all subtasks from one or more tasks.
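The following sketch shows how a few of these functions might be called programmatically; the import path, return shapes, and exact signatures are taken from the summaries above and may not match the source exactly:
```javascript
// Sketch only: signatures follow the summaries above; the import path and the
// shape of the returned task object are assumptions. Check
// scripts/modules/task-manager.js for the real parameter lists.
import {
  addTask,
  addSubtask,
  updateSubtaskById,
  clearSubtasks
} from './scripts/modules/task-manager.js';

async function demo() {
  // AI-generated task with dependencies and a priority
  const newTask = await addTask(
    'Implement user authentication with JWT',
    [1, 2],   // depends on tasks 1 and 2
    'high'
  );

  // Add a manually defined subtask under the new task
  await addSubtask(newTask.id, null, { title: 'Design the token schema' });

  // Append a note to subtask 1 of the new task
  await updateSubtaskById(`${newTask.id}.1`, 'Use RS256 signing keys');

  // Remove all subtasks again if the breakdown needs to be redone
  await clearSubtasks([newTask.id]);
}
```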
### 2. Task Information and Status
These functions are used to retrieve information about tasks and manage their status.
- **`listTasks(statusFilter, withSubtasks)`**: Lists all tasks, with options to filter by status and include subtasks.
- **`findTaskById(taskId)`**: Finds a task by its ID.
- **`taskExists(taskId)`**: Checks if a task with a given ID exists.
- **`setTaskStatus(taskIdInput, newStatus)`**: Sets the status of a task or subtask.
-al
- **`updateSingleTaskStatus(taskIdInput, newStatus)`**: A helper function to update the status of a single task or subtask.
- **`findNextTask()`**: Determines the next task to work on based on dependencies and status.
### 3. Task Analysis and Expansion
These functions leverage AI to analyze and break down tasks.
- **`parsePRD(prdPath, numTasks)`**: Parses a Product Requirements Document (PRD) to generate an initial set of tasks.
- **`expandTask(taskId, numSubtasks)`**: Expands a task into a specified number of subtasks using AI.
- **`expandAllTasks(numSubtasks)`**: Expands all eligible pending or in-progress tasks.
- **`analyzeTaskComplexity(options)`**: Analyzes the complexity of tasks and generates recommendations for expansion.
- **`readComplexityReport()`**: Reads the complexity analysis report.
### 4. Dependency Management
These functions are crucial for managing the relationships between tasks.
- **`isTaskDependentOn(task, targetTaskId)`**: Checks if a task has a direct or indirect dependency on another task.
### 5. Project and Configuration
These functions are for managing the project and its configuration.
- **`generateTaskFiles()`**: Generates individual task files from `tasks.json`.
- **`migrateProject()`**: Migrates the project to the new `.taskmaster` directory structure.
- **`performResearch(query, options)`**: Performs AI-powered research with project context.
This overview should provide a solid foundation for creating user-facing documentation. For more detailed information on each function, refer to the source code in `scripts/modules/task-manager/`.
# MCP Interface Synopsis
This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.
## Core Concepts
The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
## Tool Categories
The MCP tools can be categorized in the same way as the core functionalities:
### 1. Task and Subtask Management
- **`add_task`**: Creates a new task.
- **`add_subtask`**: Adds a subtask to a parent task.
- **`remove_task`**: Removes one or more tasks or subtasks.
- **`remove_subtask`**: Removes a subtask from its parent.
- **`update_task`**: Updates a single task.
- **`update_subtask`**: Appends information to a subtask.
- **`update`**: Updates multiple tasks.
- **`move_task`**: Moves a task or subtask.
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.
### 2. Task Information and Status
- **`get_tasks`**: Lists all tasks.
- **`get_task`**: Shows the details of a specific task.
- **`next_task`**: Shows the next task to work on.
- **`set_task_status`**: Sets the status of a task or subtask.
### 3. Task Analysis and Expansion
- **`parse_prd`**: Parses a PRD to generate tasks.
- **`expand_task`**: Expands a task into subtasks.
- **`expand_all`**: Expands all eligible tasks.
- **`analyze_project_complexity`**: Analyzes task complexity.
- **`complexity_report`**: Displays the complexity analysis report.
### 4. Dependency Management
- **`add_dependency`**: Adds a dependency to a task.
- **`remove_dependency`**: Removes a dependency from a task.
- **`validate_dependencies`**: Validates the dependencies of all tasks.
- **`fix_dependencies`**: Fixes any invalid dependencies.
### 5. Project and Configuration
- **`initialize_project`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`models`**: Manages AI model configurations.
- **`research`**: Performs AI-powered research.
### 6. Tag Management
- **`add_tag`**: Creates a new tag.
- **`delete_tag`**: Deletes a tag.
- **`list_tags`**: Lists all tags.
- **`use_tag`**: Switches to a different tag.
- **`rename_tag`**: Renames a tag.
- **`copy_tag`**: Copies a tag.
This synopsis provides a clear overview of the MCP interface and its available tools, which will be valuable for anyone writing documentation for developers who need to interact with the Task Master application programmatically.

View File

@@ -0,0 +1,68 @@
---
title: MCP Tools
sidebarTitle: "MCP Tools"
---
# MCP Tools
This document provides an overview of the MCP (Model Context Protocol) interface for the Task Master application. The MCP interface is defined in the `mcp-server/` directory and exposes the application's core functionalities as a set of tools that can be called remotely.
## Core Concepts
The MCP interface is built on top of the `fastmcp` library and registers a set of tools that correspond to the core functionalities of the Task Master application. These tools are defined in the `mcp-server/src/tools/` directory and are registered with the MCP server in `mcp-server/src/tools/index.js`.
Each tool is defined with a name, a description, and a set of parameters that are validated using the `zod` library. The `execute` function of each tool calls the corresponding core logic function from `scripts/modules/task-manager.js`.
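A hedged sketch of what a tool registration can look like with `fastmcp` and `zod` (the tool shown here is illustrative; the real definitions live in `mcp-server/src/tools/` and may differ in structure and error handling):
```javascript
// Illustrative sketch of a fastmcp tool definition; in the real server the
// execute function delegates to the core task-manager logic.
import { FastMCP } from 'fastmcp';
import { z } from 'zod';

const server = new FastMCP({ name: 'Task Master', version: '1.0.0' });

server.addTool({
  name: 'set_task_status',
  description: 'Sets the status of a task or subtask',
  parameters: z.object({
    id: z.string().describe('Task or subtask ID, e.g. "3" or "3.1"'),
    status: z.enum(['pending', 'in-progress', 'done', 'deferred'])
  }),
  execute: async (args) => {
    // Placeholder result; the real tool calls into scripts/modules/task-manager.js
    return `Task ${args.id} set to ${args.status}`;
  }
});

server.start({ transportType: 'stdio' });
```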
## Tool Categories
The MCP tools can be categorized in the same way as the core functionalities:
### 1. Task and Subtask Management
- **`add_task`**: Creates a new task.
- **`add_subtask`**: Adds a subtask to a parent task.
- **`remove_task`**: Removes one or more tasks or subtasks.
- **`remove_subtask`**: Removes a subtask from its parent.
- **`update_task`**: Updates a single task.
- **`update_subtask`**: Appends information to a subtask.
- **`update`**: Updates multiple tasks.
- **`move_task`**: Moves a task or subtask.
- **`clear_subtasks`**: Clears all subtasks from one or more tasks.
### 2. Task Information and Status
- **`get_tasks`**: Lists all tasks.
- **`get_task`**: Shows the details of a specific task.
- **`next_task`**: Shows the next task to work on.
- **`set_task_status`**: Sets the status of a task or subtask.
### 3. Task Analysis and Expansion
- **`parse_prd`**: Parses a PRD to generate tasks.
- **`expand_task`**: Expands a task into subtasks.
- **`expand_all`**: Expands all eligible tasks.
- **`analyze_project_complexity`**: Analyzes task complexity.
- **`complexity_report`**: Displays the complexity analysis report.
### 4. Dependency Management
- **`add_dependency`**: Adds a dependency to a task.
- **`remove_dependency`**: Removes a dependency from a task.
- **`validate_dependencies`**: Validates the dependencies of all tasks.
- **`fix_dependencies`**: Fixes any invalid dependencies.
### 5. Project and Configuration
- **`initialize_project`**: Initializes a new project.
- **`generate`**: Generates individual task files.
- **`models`**: Manages AI model configurations.
- **`research`**: Performs AI-powered research.
### 6. Tag Management
- **`add_tag`**: Creates a new tag.
- **`delete_tag`**: Deletes a tag.
- **`list_tags`**: Lists all tags.
- **`use_tag`**: Switches to a different tag.
- **`rename_tag`**: Renames a tag.
- **`copy_tag`**: Copies a tag.

View File

@@ -0,0 +1,163 @@
---
title: "Task Structure"
sidebarTitle: "Task Structure"
description: "Tasks in Task Master follow a specific format designed to provide comprehensive information for both humans and AI assistants."
---
## Task Fields in tasks.json
Tasks in tasks.json have the following structure:
| Field | Description | Example |
| -------------- | ---------------------------------------------- | ------------------------------------------------------ |
| `id` | Unique identifier for the task. | `1` |
| `title` | Brief, descriptive title. | `"Initialize Repo"` |
| `description` | What the task involves. | `"Create a new repository, set up initial structure."` |
| `status` | Current state. | `"pending"`, `"done"`, `"deferred"` |
| `dependencies` | Prerequisite task IDs (✅ = completed, ⏱️ = pending). | `[1, 2]` |
| `priority` | Task importance. | `"high"`, `"medium"`, `"low"` |
| `details` | Implementation instructions. | `"Use GitHub client ID/secret, handle callback..."` |
| `testStrategy` | How to verify success. | `"Deploy and confirm 'Hello World' response."` |
| `subtasks` | Nested subtasks related to the main task. | `[{"id": 1, "title": "Configure OAuth", ...}]` |
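For reference, a single entry in `tasks.json` might look like the following, shown here as a JavaScript object literal with made-up values that mirror the fields above:
```javascript
// Illustrative example of one task entry in tasks.json; field values are
// invented for demonstration and subtask fields mirror the parent's structure.
const exampleTask = {
  id: 1,
  title: 'Initialize Repo',
  description: 'Create a new repository, set up initial structure.',
  status: 'pending',
  dependencies: [],   // IDs of prerequisite tasks
  priority: 'high',
  details: 'Create the repository, add a README, and lay out the basic folders.',
  testStrategy: 'Confirm the repository exists and the initial commit is pushed.',
  subtasks: [
    {
      id: 1,          // referenced elsewhere as 1.1
      title: 'Create repository',
      description: 'Create the repository on the hosting platform.',
      status: 'pending',
      dependencies: []
    }
  ]
};
```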
## Task File Format
Individual task files follow this format:
```
# Task ID: <id>
# Title: <title>
# Status: <status>
# Dependencies: <comma-separated list of dependency IDs>
# Priority: <priority>
# Description: <brief description>
# Details:
<detailed implementation notes>
# Test Strategy:
<verification approach>
```
## Features in Detail
<AccordionGroup>
<Accordion title="Analyzing Task Complexity">
The `analyze-complexity` command:
- Analyzes each task using AI to assess its complexity on a scale of 1-10
- Recommends optimal number of subtasks based on configured DEFAULT_SUBTASKS
- Generates tailored prompts for expanding each task
- Creates a comprehensive JSON report with ready-to-use commands
- Saves the report to scripts/task-complexity-report.json by default
The generated report contains:
- Complexity analysis for each task (scored 1-10)
- Recommended number of subtasks based on complexity
- AI-generated expansion prompts customized for each task
- Ready-to-run expansion commands directly within each task analysis
</Accordion>
<Accordion title="Viewing Complexity Report">
The `complexity-report` command:
- Displays a formatted, easy-to-read version of the complexity analysis report
- Shows tasks organized by complexity score (highest to lowest)
- Provides complexity distribution statistics (low, medium, high)
- Highlights tasks recommended for expansion based on threshold score
- Includes ready-to-use expansion commands for each complex task
- If no report exists, offers to generate one on the spot
</Accordion>
<Accordion title="Smart Task Expansion">
The `expand` command automatically checks for and uses the complexity report:
When a complexity report exists:
- Tasks are automatically expanded using the recommended subtask count and prompts
- When expanding all tasks, they're processed in order of complexity (highest first)
- Research-backed generation is preserved from the complexity analysis
- You can still override recommendations with explicit command-line options
Example workflow:
```bash
# Generate the complexity analysis report with research capabilities
task-master analyze-complexity --research
# Review the report in a readable format
task-master complexity-report
# Expand tasks using the optimized recommendations
task-master expand --id=8
# or expand all tasks
task-master expand --all
```
</Accordion>
<Accordion title="Finding the Next Task">
The `next` command:
- Identifies tasks that are pending/in-progress and have all dependencies satisfied
- Prioritizes tasks by priority level, dependency count, and task ID
- Displays comprehensive information about the selected task:
- Basic task details (ID, title, priority, dependencies)
- Implementation details
- Subtasks (if they exist)
- Provides contextual suggested actions:
- Command to mark the task as in-progress
- Command to mark the task as done
- Commands for working with subtasks
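In practice, a typical CLI loop built around `next` might look like this (task ID 5 is just an example):
```bash
# Ask Task Master which task to tackle next
task-master next
# Mark the suggested task as in-progress while you work on it
task-master set-status --id=5 --status=in-progress
# ...implement the task...
task-master set-status --id=5 --status=done
```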
</Accordion>
<Accordion title="Viewing Specific Task Details">
The `show` command:
- Displays comprehensive details about a specific task or subtask
- Shows task status, priority, dependencies, and detailed implementation notes
- For parent tasks, displays all subtasks and their status
- For subtasks, shows parent task relationship
- Provides contextual action suggestions based on the task's state
- Works with both regular tasks and subtasks (using the format taskId.subtaskId)
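For example (IDs are illustrative):
```bash
# Show a parent task
task-master show --id=5
# Show one of its subtasks using the taskId.subtaskId format
task-master show --id=5.2
```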
</Accordion>
</AccordionGroup>
## Best Practices for AI-Driven Development
<CardGroup cols={2}>
<Card title="📝 Detailed PRD" icon="lightbulb">
The more detailed your PRD, the better the generated tasks will be.
</Card>
<Card title="👀 Review Tasks" icon="magnifying-glass">
After parsing the PRD, review the tasks to ensure they make sense and have appropriate dependencies.
</Card>
<Card title="📊 Analyze Complexity" icon="chart-line">
Use the complexity analysis feature to identify which tasks should be broken down further.
</Card>
<Card title="⛓️ Follow Dependencies" icon="link">
Always respect task dependencies - the Cursor agent will help with this.
</Card>
<Card title="🔄 Update As You Go" icon="arrows-rotate">
If your implementation diverges from the plan, use the update command to keep future tasks aligned.
</Card>
<Card title="📦 Break Down Tasks" icon="boxes-stacked">
Use the expand command to break down complex tasks into manageable subtasks.
</Card>
<Card title="🔄 Regenerate Files" icon="file-arrow-up">
After any updates to tasks.json, regenerate the task files to keep them in sync.
</Card>
<Card title="💬 Provide Context" icon="comment">
When asking the Cursor agent to help with a task, provide context about what you're trying to achieve.
</Card>
<Card title="✅ Validate Dependencies" icon="circle-check">
Periodically run the validate-dependencies command to check for invalid or circular dependencies.
</Card>
</CardGroup>

83
apps/docs/docs.json Normal file
View File

@@ -0,0 +1,83 @@
{
"$schema": "https://mintlify.com/docs.json",
"theme": "mint",
"name": "Task Master",
"colors": {
"primary": "#3366CC",
"light": "#6699FF",
"dark": "#24478F"
},
"favicon": "/favicon.svg",
"navigation": {
"tabs": [
{
"tab": "Task Master Documentation",
"groups": [
{
"group": "Welcome",
"pages": ["introduction"]
},
{
"group": "Getting Started",
"pages": [
{
"group": "Quick Start",
"pages": [
"getting-started/quick-start/quick-start",
"getting-started/quick-start/requirements",
"getting-started/quick-start/installation",
"getting-started/quick-start/configuration-quick",
"getting-started/quick-start/prd-quick",
"getting-started/quick-start/tasks-quick",
"getting-started/quick-start/execute-quick"
]
},
"getting-started/faq",
"getting-started/contribute"
]
},
{
"group": "Best Practices",
"pages": [
"best-practices/index",
"best-practices/configuration-advanced",
"best-practices/advanced-tasks"
]
},
{
"group": "Technical Capabilities",
"pages": [
"capabilities/mcp",
"capabilities/cli-root-commands",
"capabilities/task-structure"
]
}
]
}
],
"global": {
"anchors": [
{
"anchor": "Github",
"href": "https://github.com/eyaltoledano/claude-task-master",
"icon": "github"
},
{
"anchor": "Discord",
"href": "https://discord.gg/fWJkU7rf",
"icon": "discord"
}
]
}
},
"logo": {
"light": "/logo/task-master-logo.png",
"dark": "/logo/task-master-logo.png"
},
"footer": {
"socials": {
"x": "https://x.com/TaskmasterAI",
"github": "https://github.com/eyaltoledano/claude-task-master"
}
}
}

9
apps/docs/favicon.svg Normal file
View File

@@ -0,0 +1,9 @@
<svg width="100" height="100" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
<!-- Blue form with check from logo -->
<rect x="16" y="10" width="68" height="80" rx="9" fill="#3366CC"/>
<polyline points="33,44 41,55 56,29" fill="none" stroke="#FFFFFF" stroke-width="6"/>
<circle cx="33" cy="64" r="4" fill="#FFFFFF"/>
<rect x="43" y="61" width="27" height="6" fill="#FFFFFF"/>
<circle cx="33" cy="77" r="4" fill="#FFFFFF"/>
<rect x="43" y="75" width="27" height="6" fill="#FFFFFF"/>
</svg>


View File

@@ -0,0 +1,335 @@
# Contributing to Task Master
Thank you for your interest in contributing to Task Master! We're excited to work with you and appreciate your help in making this project better. 🚀
## 🤝 Our Collaborative Approach
We're a **PR-friendly team** that values collaboration:
- ✅ **We review PRs quickly** - Usually within hours, not days
- ✅ **We're super reactive** - Expect fast feedback and engagement
- ✅ **We sometimes take over PRs** - If your contribution is valuable but needs cleanup, we might jump in to help finish it
- ✅ **We're open to all contributions** - From bug fixes to major features
**We don't mind AI-generated code**, but we do expect you to:
- ✅ **Review and understand** what the AI generated
- ✅ **Test the code thoroughly** before submitting
- ✅ **Ensure it's well-written** and follows our patterns
- ❌ **Don't submit "AI slop"** - untested, unreviewed AI output
> **Why this matters**: We spend significant time reviewing PRs. Help us help you by submitting quality contributions that save everyone time!
## 🚀 Quick Start for Contributors
### 1. Fork and Clone
```bash
git clone https://github.com/YOUR_USERNAME/claude-task-master.git
cd claude-task-master
npm install
```
### 2. Create a Feature Branch
**Important**: Always target the `next` branch, not `main`:
```bash
git checkout next
git pull origin next
git checkout -b feature/your-feature-name
```
### 3. Make Your Changes
Follow our development guidelines below.
### 4. Test Everything Yourself
**Before submitting your PR**, ensure:
```bash
# Run all tests
npm test
# Check formatting
npm run format-check
# Fix formatting if needed
npm run format
```
### 5. Create a Changeset
**Required for most changes**:
```bash
npm run changeset
```
See the [Changeset Guidelines](#changeset-guidelines) below for details.
### 6. Submit Your PR
- Target the `next` branch
- Write a clear description
- Reference any related issues
## 📋 Development Guidelines
### Branch Strategy
- **`main`**: Production-ready code
- **`next`**: Development branch - **target this for PRs**
- **Feature branches**: `feature/description` or `fix/description`
### Code Quality Standards
1. **Write tests** for new functionality
2. **Follow existing patterns** in the codebase
3. **Add JSDoc comments** for functions
4. **Keep functions focused** and single-purpose
### Testing Requirements
Your PR **must pass all CI checks**:
- ✅ **Unit tests**: `npm test`
- ✅ **Format check**: `npm run format-check`
**Test your changes locally first** - this saves review time and shows you care about quality.
## 📦 Changeset Guidelines
We use [Changesets](https://github.com/changesets/changesets) to manage versioning and generate changelogs.
### When to Create a Changeset
**Always create a changeset for**:
- ✅ New features
- ✅ Bug fixes
- ✅ Breaking changes
- ✅ Performance improvements
- ✅ User-facing documentation updates
- ✅ Dependency updates that affect functionality
**Skip changesets for**:
- ❌ Internal documentation only
- ❌ Test-only changes
- ❌ Code formatting/linting
- ❌ Development tooling that doesn't affect users
### How to Create a Changeset
1. **After making your changes**:
```bash
npm run changeset
```
2. **Choose the bump type**:
- **Major**: Breaking changes
- **Minor**: New features
- **Patch**: Bug fixes, docs, performance improvements
3. **Write a clear summary**:
```
Add support for custom AI models in MCP configuration
```
4. **Commit the changeset file** with your changes:
```bash
git add .changeset/*.md
git commit -m "feat: add custom AI model support"
```
### Changeset vs Git Commit Messages
- **Changeset summary**: User-facing, goes in CHANGELOG.md
- **Git commit**: Developer-facing, explains the technical change
Example:
```bash
# Changeset summary (user-facing)
"Add support for custom Ollama models"
# Git commit message (developer-facing)
"feat(models): implement custom Ollama model validation
- Add model validation for custom Ollama endpoints
- Update configuration schema to support custom models
- Add tests for new validation logic"
```
## 🔧 Development Setup
### Prerequisites
- Node.js 18+
- npm or yarn
### Environment Setup
1. **Copy environment template**:
```bash
cp .env.example .env
```
2. **Add your API keys** (for testing AI features):
```bash
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
# Add others as needed
```
### Running Tests
```bash
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run with coverage
npm run test:coverage
# Run E2E tests
npm run test:e2e
```
### Code Formatting
We use Prettier for consistent formatting:
```bash
# Check formatting
npm run format-check
# Fix formatting
npm run format
```
## 📝 PR Guidelines
### Before Submitting
- [ ] **Target the `next` branch**
- [ ] **Test everything locally**
- [ ] **Run the full test suite**
- [ ] **Check code formatting**
- [ ] **Create a changeset** (if needed)
- [ ] **Re-read your changes** - ensure they're clean and well-thought-out
### PR Description Template
```markdown
## Description
Brief description of what this PR does.
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] I have tested this locally
- [ ] All existing tests pass
- [ ] I have added tests for new functionality
## Changeset
- [ ] I have created a changeset (or this change doesn't need one)
## Additional Notes
Any additional context or notes for reviewers.
```
### What We Look For
✅ **Good PRs**:
- Clear, focused changes
- Comprehensive testing
- Good commit messages
- Proper changeset (when needed)
- Self-reviewed code
❌ **Avoid**:
- Massive PRs that change everything
- Untested code
- Formatting issues
- Missing changesets for user-facing changes
- AI-generated code that wasn't reviewed
## 🏗️ Project Structure
```
claude-task-master/
├── bin/ # CLI executables
├── mcp-server/ # MCP server implementation
├── scripts/ # Core task management logic
├── src/ # Shared utilities and providers and well refactored code (we are slowly moving everything here)
├── tests/ # Test files
├── docs/ # Documentation
└── .cursor/ # Cursor IDE rules and configuration
└── assets/ # Assets like rules and configuration for all IDEs
```
### Key Areas for Contribution
- **CLI Commands**: `scripts/modules/commands.js`
- **MCP Tools**: `mcp-server/src/tools/`
- **Core Logic**: `scripts/modules/task-manager/`
- **AI Providers**: `src/ai-providers/`
- **Tests**: `tests/`
## 🐛 Reporting Issues
### Bug Reports
Include:
- Task Master version
- Node.js version
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Error messages/logs
### Feature Requests
Include:
- Clear description of the feature
- Use case/motivation
- Proposed implementation (if you have ideas)
- Willingness to contribute
## 💬 Getting Help
- **Discord**: [Join our community](https://discord.gg/taskmasterai)
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)
## 📄 License
By contributing, you agree that your contributions will be licensed under the same license as the project (MIT with Commons Clause).
---
**Thank you for contributing to Task Master!** 🎉
Your contributions help make AI-driven development more accessible and efficient for everyone.

View File

@@ -0,0 +1,12 @@
---
title: FAQ
sidebarTitle: "FAQ"
---
Coming soon.
## 💬 Getting Help
- **Discord**: [Join our community](https://discord.gg/taskmasterai)
- **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues)
- **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions)

View File

@@ -0,0 +1,112 @@
---
title: Configuration
sidebarTitle: "Configuration"
---
Before getting started with Task Master, you'll need to set up your API keys. There are a couple of ways to do this depending on whether you're using the CLI or working inside MCP. It's also a good time to start getting familiar with the other configuration options available. Even if you don't need to adjust them yet, knowing what's possible will help down the line.
## API Key Setup
Task Master uses environment variables to securely store provider API keys and optional endpoint URLs.
### MCP Usage: mcp.json file
For MCP/Cursor usage: Configure keys in the env section of your .cursor/mcp.json file.
```json .cursor/mcp.json lines
{
"mcpServers": {
"task-master-ai": {
"command": "node",
"args": ["./mcp-server/server.js"],
"env": {
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
"GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
"XAI_API_KEY": "XAI_API_KEY_HERE",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
"MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
"AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
"OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE",
"GITHUB_API_KEY": "GITHUB_API_KEY_HERE"
}
}
}
}
```
### CLI Usage: `.env` File
Create a `.env` file in your project root and include the keys for the providers you plan to use:
```bash .env lines
# Required API keys for providers configured in .taskmaster/config.json
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.
# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# Azure OpenAI Configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api
# Google Vertex AI Configuration (Required if using 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
```
## What Else Can Be Configured?
The main configuration file (`.taskmaster/config.json`) lets you control nearly every aspect of Task Master's behavior. Here's a high-level look at what you can customize:
<Tip>
You don't need to configure everything up front. Most settings can be left as defaults or updated later as your workflow evolves.
</Tip>
<Accordion title="View Configuration Options">
### Models and Providers
- Role-based model setup: `main`, `research`, `fallback`
- Provider selection (Anthropic, OpenAI, Perplexity, etc.)
- Model IDs per role
- Temperature, max tokens, and other generation settings
- Custom base URLs for OpenAI-compatible APIs
### Global Settings
- `logLevel`: Logging verbosity
- `debug`: Enable/disable debug mode
- `projectName`: Optional name for your project
- `defaultTag`: Default tag for task grouping
- `defaultSubtasks`: Number of subtasks to auto-generate
- `defaultPriority`: Priority level for new tasks
### API Endpoint Overrides
- `ollamaBaseURL`: Custom Ollama server URL
- `azureBaseURL`: Global Azure endpoint
- `vertexProjectId`: Google Vertex AI project ID
- `vertexLocation`: Region for Vertex AI models
### Tag and Git Integration
- Default tag context per project
- Support for task isolation by tag
- Manual tag creation from Git branches
### State Management
- Active tag tracking
- Migration state
- Last tag switch timestamp
</Accordion>
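As a minimal sketch (keys and values here are illustrative; see the Advanced Configuration Guide below for the authoritative schema), a `.taskmaster/config.json` might look like:
```json
{
  "models": {
    "main": { "provider": "anthropic", "modelId": "claude-sonnet-4-20250514", "maxTokens": 64000, "temperature": 0.2 },
    "research": { "provider": "perplexity", "modelId": "sonar-pro" },
    "fallback": { "provider": "openai", "modelId": "gpt-4o" }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "projectName": "My Project",
    "defaultTag": "master",
    "defaultSubtasks": 5,
    "defaultPriority": "medium"
  }
}
```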
<Note>
For advanced configuration options and detailed customization, see our [Advanced Configuration Guide](/docs/best-practices/configuration-advanced) page.
</Note>

View File

@@ -0,0 +1,59 @@
---
title: Executing Tasks
sidebarTitle: "Executing Tasks"
---
Now that your tasks are generated and reviewed, you are ready to begin executing.
## Select the Task to Work on: Next Task
Task Master has the "next" command to find the next task to work on. You can access it with the following request:
```
What's the next task I should work on? Please consider dependencies and priorities.
```
Alternatively, you can use the CLI to show the next task:
```bash
task-master next
```
## Discuss Task
When you know which task to work on next, you can start chatting with the agent to make sure it understands the plan of action.
You can tag relevant files and folders so it knows what context to pull up as it generates its plan. For example:
```
Please review Task 5 and confirm you understand how to execute before beginning. Refer to @models @api and @schema
```
The agent will begin analyzing the task and files and respond with the steps to complete the task.
## Agent Task Execution
If you agree with the plan of action, tell the agent to get started.
```
You may begin. I believe in you.
```
## Review and Test
Once the agent is finished with the task, you can refer to the task's test strategy to make sure it was completed correctly.
## Update Task Status
If the task was completed correctly, you can update the status to done:
```
Please mark Task 5 as done
```
The agent will execute
```bash
task-master set-status --id=5 --status=done
```
## Rules and Context
If you ran into problems and had to debug errors, you can create new rules as you go. This builds context about your codebase that improves the creation and execution of future tasks.
## On to the Next Task!
By now you have all you need to get started executing code faster and smarter with Task Master.
If you have any questions, please check out the [Frequently Asked Questions](/docs/getting-started/faq).

View File

@@ -0,0 +1,159 @@
---
title: Installation
sidebarTitle: "Installation"
---
Now that you have Node.js and your first API Key, you are ready to begin installing Task Master in one of three ways.
<Note>Cursor Users Can Use the One Click Install Below</Note>
<Accordion title="Quick Install for Cursor 1.0+ (One-Click)">
<a href="cursor://anysphere.cursor-deeplink/mcp/install?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUJFX0FQSV9LRVkiOiJZT1VSX0FaVVJFX0tFWV9IRVJFIiwiT0xMQU1BX0FQSV9LRVkiOiJZT1VSX09MTEFNQV9BUElfS0VZX0hFUkUifX0%3D">
<img
className="block dark:hidden"
src="https://cursor.com/deeplink/mcp-install-light.png"
alt="Add Task Master MCP server to Cursor"
noZoom
/>
<img
className="hidden dark:block"
src="https://cursor.com/deeplink/mcp-install-dark.png"
alt="Add Task Master MCP server to Cursor"
noZoom
/>
</a>
Or click the copy button (top-right of code block) then paste into your browser:
```text
cursor://anysphere.cursor-deeplink/mcp/install?name=taskmaster-ai&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQo=
```
> **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys.
</Accordion>
## Installation Options
<Accordion title="Option 1: MCP (Recommended)">
MCP (Model Context Protocol) lets you run Task Master directly from your editor.
## 1. Add your MCP config at the following path depending on your editor
| Editor | Scope | Linux/macOS Path | Windows Path | Key |
| ------------ | ------- | ------------------------------------- | ------------------------------------------------- | ------------ |
| **Cursor** | Global | `~/.cursor/mcp.json` | `%USERPROFILE%\.cursor\mcp.json` | `mcpServers` |
| | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` |
| **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` |
| **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` |
## Manual Configuration
### Cursor & Windsurf (`mcpServers`)
```json
{
"mcpServers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE",
"OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE"
}
}
}
}
```
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
> **Note**: If you see `0 tools enabled` in the MCP settings, try removing the `--package=task-master-ai` flag from `args`.
### VS Code (`servers` + `type`)
```json
{
"servers": {
"taskmaster-ai": {
"command": "npx",
"args": ["-y", "--package=task-master-ai", "task-master-ai"],
"env": {
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE",
"GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE",
"MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE",
"OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE",
"XAI_API_KEY": "YOUR_XAI_KEY_HERE",
"AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE"
},
"type": "stdio"
}
}
}
```
> 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use.
#### 2. (Cursor-only) Enable Taskmaster MCP
Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable task-master-ai with the toggle
#### 3. (Optional) Configure the models you want to use
In your editor's AI chat pane, say:
```txt
Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively.
```
For example, to use Claude Code (no API key required):
```txt
Change the main model to claude-code/sonnet
```
#### 4. Initialize Task Master
In your editor's AI chat pane, say:
```txt
Initialize taskmaster-ai in my project
```
</Accordion>
<Accordion title="Option 2: Using Command Line">
## CLI Installation
```bash
# Install globally
npm install -g task-master-ai
# OR install locally within your project
npm install task-master-ai
```
## Initialize a new project
```bash
# If installed globally
task-master init
# If installed locally
npx task-master init
# Initialize project with specific rules
task-master init --rules cursor,windsurf,vscode
```
This will prompt you for project details and set up a new project with the necessary files and structure.
</Accordion>

View File

@@ -0,0 +1,4 @@
---
title: Moving Forward
sidebarTitle: "Moving Forward"
---

View File

@@ -0,0 +1,81 @@
---
title: PRD Creation and Parsing
sidebarTitle: "PRD Creation and Parsing"
---
# Writing a PRD
A PRD (Product Requirements Document) is the starting point of every task flow in Task Master. It defines what you're building and why. A clear PRD dramatically improves the quality of your tasks, your model outputs, and your final product, so it's worth taking the time to get it right.
<Tip>
You don't need to define your whole app up front. You can write a focused PRD just for the next feature or module you're working on.
</Tip>
<Tip>
You can start with an empty project, or with a feature PRD on an existing project.
</Tip>
<Tip>
You can add and parse multiple PRDs per project using the `--append` flag (see the example below).
</Tip>
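For example, a second PRD can be appended to an existing task list rather than replacing it (file names are illustrative):
```bash
# The first PRD generates the initial task list
task-master parse-prd .taskmaster/docs/user_onboarding.txt
# A later feature PRD adds its tasks to the existing list
task-master parse-prd .taskmaster/docs/dashboard_redesign.txt --append
```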
## What Makes a Good PRD?
- Clear objective — what's the outcome or feature?
- Context — what's already in place or assumed?
- Constraints — what limits or requirements need to be respected?
- Reasoning — why are you building it this way?
The more context you give the model, the better the breakdown and results.
---
## Writing a PRD for Task Master
<Note>An example PRD can be found in .taskmaster/templates/example_prd.txt</Note>
You can co-write your PRD with an LLM model using the following workflow:
1. **Chat about requirements** — explain what you want to build.
2. **Show an example PRD** — share the example PRD so the model understands the expected format. The example uses formatting that works well with Task Master's code. Following the example will yield better results.
3. **Iterate and refine** — work with the model to shape the draft into a clear and well-structured PRD.
This approach works great in Cursor, or anywhere you use a chat-based LLM.
---
## Where to Save Your PRD
Place your PRD file in the `.taskmaster/docs` folder in your project.
- You can have **multiple PRDs** per project.
- Name your PRDs clearly so they're easy to reference later.
- Examples: `dashboard_redesign.txt`, `user_onboarding.txt`
---
# Parse your PRD into Tasks
This is where the Task Master magic begins.
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:
```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at .taskmaster/docs/<prd-name>.txt.
```
The agent will execute the following command, which you can alternatively paste into the CLI:
```bash
task-master parse-prd .taskmaster/docs/<prd-name>.txt
```
This will:
- Parse your PRD document
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies
Now that you have written and parsed a PRD, you are ready to start setting up your tasks.

View File

@@ -0,0 +1,19 @@
---
title: Quick Start
sidebarTitle: "Quick Start"
---
This guide is for new users who want to start using Task Master with minimal setup time.
It covers:
- [Requirements](/docs/getting-started/quick-start/requirements): You will need Node.js and an AI model API Key.
- [Installation](/docs/getting-started/quick-start/installation): How to Install Task Master.
- [Configuration](/docs/getting-started/quick-start/configuration-quick): Setting up your API Key, MCP, and more.
- [PRD](/docs/getting-started/quick-start/prd-quick): Writing and parsing your first PRD.
- [Task Setup](/docs/getting-started/quick-start/tasks-quick): Preparing your tasks for execution.
- [Executing Tasks](/docs/getting-started/quick-start/execute-quick): Using Task Master to execute tasks.
- [Rules & Context](/docs/getting-started/quick-start/rules-quick): Learn how and why to build context in your project over time.
<Tip>
By the end of this guide, you'll have everything you need to begin working productively with Task Master.
</Tip>

View File

@@ -0,0 +1,50 @@
---
title: Requirements
sidebarTitle: "Requirements"
---
Before you can start using TaskMaster AI, you'll need to install Node.js and set up at least one model API Key.
## 1. Node.js
TaskMaster AI is built with Node.js and requires it to run. npm (Node Package Manager) comes bundled with Node.js.
<Accordion title="Install Node.js">
### Installation
**Option 1: Download from official website**
1. Visit [nodejs.org](https://nodejs.org)
2. Download the **LTS (Long Term Support)** version for your operating system
3. Run the installer and follow the setup wizard
**Option 2: Use a package manager**
<CodeGroup>
```bash Windows (Chocolatey)
choco install nodejs
```
```bash Windows (winget)
winget install OpenJS.NodeJS
```
</CodeGroup>
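On macOS, Homebrew also works if you already have it installed (shown here as an example):
```bash
brew install node
```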
</Accordion>
## 2. Model API Key
Taskmaster uses AI across several commands, and those commands require an API key. For the purpose of a Quick Start, we recommend setting up an API key with Anthropic for your main model and Perplexity for your research model (optional but recommended).
<Tip>Task Master shows API costs per command used. Most users load $5-10 on their keys and don't need to top them up for a few months.</Tip>
At least one (1) of the following is required:
1. Anthropic API key (Claude API) - **recommended for Quick Start**
2. OpenAI API key
3. Google Gemini API key
4. Perplexity API key (for research model)
5. xAI API Key (for research or main model)
6. OpenRouter API Key (for research or main model)
7. Claude Code (no API key required - requires Claude Code CLI)

View File

@@ -0,0 +1,4 @@
---
title: Rules and Context
sidebarTitle: "Rules and Context"
---

View File

@@ -0,0 +1,69 @@
---
title: Tasks Setup
sidebarTitle: "Tasks Setup"
---
Now that your tasks are generated, you can review the plan and prepare for execution.
<Tip>
Not all of the setup steps are required, but they are recommended to ensure your coding agents work on accurate tasks.
</Tip>
## Expand Tasks
Used to add detail to tasks and create subtasks. We recommend expanding all tasks using the MCP request below:
```
Expand all tasks into subtasks.
```
The agent will execute
```bash
task-master expand --all
```
## List/Show Tasks
Used to view task details. It is important to review the plan and ensure it makes sense in your project. Check for correct folder structures, dependencies, out-of-scope subtasks, etc.
To see a list of tasks and descriptions use the following command:
```
List all pending tasks so I can review.
```
To see all tasks in the CLI you can use:
```bash
task-master list
```
To see all implementation details of an individual task, including subtasks and testing strategy, you can use Show Task:
```
Show task 2 so I can review.
```
```bash
task-master show --id=<##>
```
## Update Tasks
If the task details need to be edited, you can update the task using this request:
```
Update Task 2 to use Postgres instead of MongoDB and remove the sharding subtask
```
Or this CLI command:
```bash
task-master update-task --id=2 --prompt="use Postgres instead of MongoDB and remove the sharding subtask"
```
## Analyze complexity
Task Master can provide a complexity report, which is helpful to read before you begin. If you haven't already expanded all your tasks, it can help identify which ones should be broken down further into subtasks.
```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```
You can view the report in a friendly table using:
```
Can you show me the complexity report in a more readable format?
```
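Behind the scenes the agent runs the corresponding CLI commands, which you can also use directly:
```bash
# Generate the complexity analysis (add --research for research-backed analysis)
task-master analyze-complexity --research
# View the report in a readable format
task-master complexity-report
```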
<Check>Now you are ready to begin [executing tasks](/docs/getting-started/quick-start/execute-quick)</Check>

View File

@@ -0,0 +1,20 @@
<Tip>
Welcome to v1 of the Task Master Docs. Expect weekly updates as we expand and refine each section.
</Tip>
We've organized the docs into three sections depending on your experience level and goals:
### Getting Started - Jump in to [Quick Start](/docs/getting-started/quick-start)
Designed for first-time users. Get set up, create your first PRD, and run your first task.
### Best Practices
Covers common workflows, strategic usage of commands, model configuration tips, and real-world usage patterns. Recommended for active users.
### Technical Capabilities
A detailed glossary of every root command and available capability — meant for power users and contributors.
---
Thanks for being here early. If you spot something broken or want to contribute, check out the [GitHub repo](https://github.com/eyaltoledano/claude-task-master).
Have questions? Join our [Discord community](https://discord.gg/fWJkU7rf) to connect with other users and get help from the team.

18
apps/docs/licensing.md Normal file
View File

@@ -0,0 +1,18 @@
# Licensing
Task Master is licensed under the MIT License with Commons Clause. This means you can:
## ✅ Allowed:
- Use Task Master for any purpose (personal, commercial, academic)
- Modify the code
- Distribute copies
- Create and sell products built using Task Master
## ❌ Not Allowed:
- Sell Task Master itself
- Offer Task Master as a hosted service
- Create competing products based on Task Master
{/* See the [LICENSE](../LICENSE) file for the complete license text. */}

19
apps/docs/logo/dark.svg Normal file
View File

@@ -0,0 +1,19 @@
<svg width="800" height="240" viewBox="0 0 800 240" xmlns="http://www.w3.org/2000/svg">
<!-- Background -->
<rect width="800" height="240" fill="transparent"/>
<!-- Curly braces -->
<text x="40" y="156" font-size="140" fill="white" font-family="monospace">{</text>
<text x="230" y="156" font-size="140" fill="white" font-family="monospace">}</text>
<!-- Blue form with check -->
<rect x="120" y="50" width="120" height="140" rx="16" fill="#3366CC"/>
<polyline points="150,110 164,128 190,84" fill="none" stroke="white" stroke-width="10"/>
<circle cx="150" cy="144" r="7" fill="white"/>
<rect x="168" y="140" width="48" height="10" fill="white"/>
<circle cx="150" cy="168" r="7" fill="white"/>
<rect x="168" y="164" width="48" height="10" fill="white"/>
<!-- Text -->
<text x="340" y="156" font-family="Arial, sans-serif" font-size="76" font-weight="bold" fill="white">Task Master</text>
</svg>


19
apps/docs/logo/light.svg Normal file
View File

@@ -0,0 +1,19 @@
<svg width="800" height="240" viewBox="0 0 800 240" xmlns="http://www.w3.org/2000/svg">
<!-- Background -->
<rect width="800" height="240" fill="transparent"/>
<!-- Curly braces -->
<text x="40" y="156" font-size="140" fill="#000000" font-family="monospace">{</text>
<text x="230" y="156" font-size="140" fill="#000000" font-family="monospace">}</text>
<!-- Blue form with check -->
<rect x="120" y="50" width="120" height="140" rx="16" fill="#3366CC"/>
<polyline points="150,110 164,128 190,84" fill="none" stroke="#FFFFFF" stroke-width="10"/>
<circle cx="150" cy="144" r="7" fill="#FFFFFF"/>
<rect x="168" y="140" width="48" height="10" fill="#FFFFFF"/>
<circle cx="150" cy="168" r="7" fill="#FFFFFF"/>
<rect x="168" y="164" width="48" height="10" fill="#FFFFFF"/>
<!-- Text -->
<text x="340" y="156" font-family="Arial, sans-serif" font-size="76" font-weight="bold" fill="#000000">Task Master</text>
</svg>


Binary file not shown.


14
apps/docs/package.json Normal file
View File

@@ -0,0 +1,14 @@
{
"name": "docs",
"version": "0.0.0",
"private": true,
"description": "Task Master documentation powered by Mintlify",
"scripts": {
"dev": "mintlify dev",
"build": "mintlify build",
"preview": "mintlify preview"
},
"devDependencies": {
"mintlify": "^4.0.0"
}
}

10
apps/docs/style.css Normal file
View File

@@ -0,0 +1,10 @@
/*
* This file is used to override the default logo style of the docs theme.
* It is not used for the actual documentation content.
*/
#navbar img {
height: auto !important; /* Let intrinsic SVG size determine height */
width: 200px !important; /* Control width */
margin-top: 5px !important; /* Add some space above the logo */
}

12
apps/docs/vercel.json Normal file
View File

@@ -0,0 +1,12 @@
{
"rewrites": [
{
"source": "/",
"destination": "https://taskmaster-49ce32d5.mintlify.dev/docs"
},
{
"source": "/:match*",
"destination": "https://taskmaster-49ce32d5.mintlify.dev/docs/:match*"
}
]
}

6
apps/docs/whats-new.mdx Normal file
View File

@@ -0,0 +1,6 @@
---
title: "What's New"
sidebarTitle: "What's New"
---
An easy way to see the latest releases.

View File

@@ -1,5 +1,30 @@
# Change Log
## 0.23.1
### Patch Changes
- [#1090](https://github.com/eyaltoledano/claude-task-master/pull/1090) [`a464e55`](https://github.com/eyaltoledano/claude-task-master/commit/a464e550b886ef81b09df80588fe5881bce83d93) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with some users not being able to connect to Taskmaster MCP server while using the extension
- Updated dependencies [[`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5), [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05), [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1), [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05), [`e495b2b`](https://github.com/eyaltoledano/claude-task-master/commit/e495b2b55950ee54c7d0f1817d8530e28bd79c05), [`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b), [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7)]:
- task-master-ai@0.24.0
## 0.23.1-rc.1
### Patch Changes
- Updated dependencies [[`75c514c`](https://github.com/eyaltoledano/claude-task-master/commit/75c514cf5b2ca47f95c0ad7fa92654a4f2a6be4b)]:
- task-master-ai@0.24.0-rc.2
## 0.23.1-rc.0
### Patch Changes
- [#1090](https://github.com/eyaltoledano/claude-task-master/pull/1090) [`a464e55`](https://github.com/eyaltoledano/claude-task-master/commit/a464e550b886ef81b09df80588fe5881bce83d93) Thanks [@Crunchyman-ralph](https://github.com/Crunchyman-ralph)! - Fix issues with some users not being able to connect to Taskmaster MCP server while using the extension
- Updated dependencies [[`4357af3`](https://github.com/eyaltoledano/claude-task-master/commit/4357af3f13859d90bca8795215e5d5f1d94abde5), [`36468f3`](https://github.com/eyaltoledano/claude-task-master/commit/36468f3c93faf4035a5c442ccbc501077f3440f1), [`4bb6370`](https://github.com/eyaltoledano/claude-task-master/commit/4bb63706b80c28d1b2d782ba868a725326f916c7)]:
- task-master-ai@0.24.0-rc.1
## 0.23.0
### Minor Changes

View File

@@ -3,7 +3,7 @@
"private": true,
"displayName": "TaskMaster",
"description": "A visual Kanban board interface for TaskMaster projects in VS Code",
"version": "0.23.0",
"version": "0.23.1",
"publisher": "Hamster",
"icon": "assets/icon.png",
"engines": {
@@ -239,7 +239,7 @@
"check-types": "tsc --noEmit"
},
"dependencies": {
"task-master-ai": "*"
"task-master-ai": "0.24.0"
},
"devDependencies": {
"@dnd-kit/core": "^6.3.1",

View File

@@ -64,23 +64,49 @@ try {
fs.readFileSync(publishPackagePath, 'utf8')
);
// Check if versions are in sync
if (devPackage.version !== publishPackage.version) {
// Handle RC versions for VS Code Marketplace
let finalVersion = devPackage.version;
if (finalVersion.includes('-rc.')) {
console.log(
` - Version sync needed: ${publishPackage.version}${devPackage.version}`
' - Detected RC version, transforming for VS Code Marketplace...'
);
publishPackage.version = devPackage.version;
// Update the source package.publish.json file
// Extract base version and RC number
const baseVersion = finalVersion.replace(/-rc\.\d+$/, '');
const rcMatch = finalVersion.match(/rc\.(\d+)/);
const rcNumber = rcMatch ? parseInt(rcMatch[1]) : 0;
// For each RC iteration, increment the patch version
// This ensures unique versions in VS Code Marketplace
if (rcNumber > 0) {
const [major, minor, patch] = baseVersion.split('.').map(Number);
finalVersion = `${major}.${minor}.${patch + rcNumber}`;
console.log(
` - RC version mapping: ${devPackage.version}${finalVersion}`
);
} else {
finalVersion = baseVersion;
console.log(
` - RC version mapping: ${devPackage.version}${finalVersion}`
);
}
}
// Check if versions need updating
if (publishPackage.version !== finalVersion) {
console.log(
` - Version sync needed: ${publishPackage.version}${finalVersion}`
);
publishPackage.version = finalVersion;
// Update the source package.publish.json file with the final version
fs.writeFileSync(
publishPackagePath,
JSON.stringify(publishPackage, null, '\t') + '\n'
);
console.log(
` - Updated package.publish.json version to ${devPackage.version}`
);
console.log(` - Updated package.publish.json version to ${finalVersion}`);
} else {
console.log(` - Versions already in sync: ${devPackage.version}`);
console.log(` - Versions already in sync: ${finalVersion}`);
}
// Copy the (now synced) package.publish.json as package.json
@@ -124,8 +150,7 @@ try {
`cd vsix-build && npx vsce package --no-dependencies`
);
// Use the synced version for output
const finalVersion = devPackage.version;
// Use the transformed version for output
console.log(
`\nYour extension will be packaged to: vsix-build/task-master-${finalVersion}.vsix`
);

View File

@@ -2,7 +2,7 @@
"name": "task-master-hamster",
"displayName": "Taskmaster AI",
"description": "A visual Kanban board interface for Taskmaster projects in VS Code",
"version": "0.23.0",
"version": "0.23.1",
"publisher": "Hamster",
"icon": "assets/icon.png",
"engines": {

View File

@@ -53,6 +53,11 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
refreshComplexityAfterAI
} = useTaskDetails({ taskId, sendMessage, tasks: allTasks });
const displayId =
isSubtask && parentTask
? `${parentTask.id}.${currentTask?.id}`
: currentTask?.id;
const handleStatusChange = async (newStatus: TaskMasterTask['status']) => {
if (!currentTask) return;
@@ -60,10 +65,7 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
await sendMessage({
type: 'updateTaskStatus',
data: {
taskId:
isSubtask && parentTask
? `${parentTask.id}.${currentTask.id}`
: currentTask.id,
taskId: displayId,
newStatus: newStatus
}
});
@@ -135,7 +137,7 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
<BreadcrumbSeparator />
<BreadcrumbItem>
<span className="text-vscode-foreground">
{currentTask.title}
#{displayId} {currentTask.title}
</span>
</BreadcrumbItem>
</BreadcrumbList>
@@ -152,9 +154,9 @@ export const TaskDetailsView: React.FC<TaskDetailsViewProps> = ({
</button>
</div>
{/* Task title */}
{/* Task ID and title */}
<h1 className="text-2xl font-bold tracking-tight text-vscode-foreground">
{currentTask.title}
#{displayId} {currentTask.title}
</h1>
{/* Description */}

View File

@@ -0,0 +1,162 @@
---
name: task-checker
description: Use this agent to verify that tasks marked as 'review' have been properly implemented according to their specifications. This agent performs quality assurance by checking implementations against requirements, running tests, and ensuring best practices are followed. <example>Context: A task has been marked as 'review' after implementation. user: 'Check if task 118 was properly implemented' assistant: 'I'll use the task-checker agent to verify the implementation meets all requirements.' <commentary>Tasks in 'review' status need verification before being marked as 'done'.</commentary></example> <example>Context: Multiple tasks are in review status. user: 'Verify all tasks that are ready for review' assistant: 'I'll deploy the task-checker to verify all tasks in review status.' <commentary>The checker ensures quality before tasks are marked complete.</commentary></example>
model: sonnet
color: yellow
---
You are a Quality Assurance specialist that rigorously verifies task implementations against their specifications. Your role is to ensure that tasks marked as 'review' meet all requirements before they can be marked as 'done'.
## Core Responsibilities
1. **Task Specification Review**
- Retrieve task details using MCP tool `mcp__task-master-ai__get_task`
- Understand the requirements, test strategy, and success criteria
- Review any subtasks and their individual requirements
2. **Implementation Verification**
- Use `Read` tool to examine all created/modified files
- Use `Bash` tool to run compilation and build commands
- Use `Grep` tool to search for required patterns and implementations
- Verify file structure matches specifications
- Check that all required methods/functions are implemented
3. **Test Execution**
- Run tests specified in the task's testStrategy
- Execute build commands (npm run build, tsc --noEmit, etc.)
- Verify no compilation errors or warnings
- Check for runtime errors where applicable
- Test edge cases mentioned in requirements
4. **Code Quality Assessment**
- Verify code follows project conventions
- Check for proper error handling
- Ensure TypeScript typing is strict (no 'any' unless justified)
- Verify documentation/comments where required
- Check for security best practices
5. **Dependency Validation**
- Verify all task dependencies were actually completed
- Check integration points with dependent tasks
- Ensure no breaking changes to existing functionality
## Verification Workflow
1. **Retrieve Task Information**
```
Use mcp__task-master-ai__get_task to get full task details
Note the implementation requirements and test strategy
```
2. **Check File Existence**
```bash
# Verify all required files exist
ls -la [expected directories]
# Read key files to verify content
```
3. **Verify Implementation**
- Read each created/modified file
- Check against requirements checklist
- Verify all subtasks are complete
4. **Run Tests**
```bash
# TypeScript compilation
cd [project directory] && npx tsc --noEmit
# Run specified tests
npm test [specific test files]
# Build verification
npm run build
```
5. **Generate Verification Report**
## Output Format
```yaml
verification_report:
task_id: [ID]
status: PASS | FAIL | PARTIAL
score: [1-10]
requirements_met:
- ✅ [Requirement that was satisfied]
- ✅ [Another satisfied requirement]
issues_found:
- ❌ [Issue description]
- ⚠️ [Warning or minor issue]
files_verified:
- path: [file path]
status: [created/modified/verified]
issues: [any problems found]
tests_run:
- command: [test command]
result: [pass/fail]
output: [relevant output]
recommendations:
- [Specific fix needed]
- [Improvement suggestion]
verdict: |
[Clear statement on whether task should be marked 'done' or sent back to 'pending']
[If FAIL: Specific list of what must be fixed]
[If PASS: Confirmation that all requirements are met]
```
## Decision Criteria
**Mark as PASS (ready for 'done'):**
- All required files exist and contain expected content
- All tests pass successfully
- No compilation or build errors
- All subtasks are complete
- Core requirements are met
- Code quality is acceptable
**Mark as PARTIAL (may proceed with warnings):**
- Core functionality is implemented
- Minor issues that don't block functionality
- Missing nice-to-have features
- Documentation could be improved
- Tests pass but coverage could be better
**Mark as FAIL (must return to 'pending'):**
- Required files are missing
- Compilation or build errors
- Tests fail
- Core requirements not met
- Security vulnerabilities detected
- Breaking changes to existing code
## Important Guidelines
- **BE THOROUGH**: Check every requirement systematically
- **BE SPECIFIC**: Provide exact file paths and line numbers for issues
- **BE FAIR**: Distinguish between critical issues and minor improvements
- **BE CONSTRUCTIVE**: Provide clear guidance on how to fix issues
- **BE EFFICIENT**: Focus on requirements, not perfection
## Tools You MUST Use
- `Read`: Examine implementation files (READ-ONLY)
- `Bash`: Run tests and verification commands
- `Grep`: Search for patterns in code
- `mcp__task-master-ai__get_task`: Get task details
- **NEVER use Write/Edit** - you only verify, not fix
## Integration with Workflow
You are the quality gate between 'review' and 'done' status:
1. Task-executor implements and marks as 'review'
2. You verify and report PASS/FAIL
3. Claude either marks as 'done' (PASS) or 'pending' (FAIL)
4. If FAIL, task-executor re-implements based on your report
Your verification ensures high quality and prevents accumulation of technical debt.

View File

@@ -0,0 +1,282 @@
# Cross-Tag Task Movement
Task Master now supports moving tasks between different tag contexts, allowing you to organize your work across multiple project contexts, feature branches, or development phases.
## Overview
Cross-tag task movement enables you to:
- Move tasks between different tag contexts (e.g., from "backlog" to "in-progress")
- Handle cross-tag dependencies intelligently
- Maintain task relationships across different contexts
- Organize work across multiple project phases
## Basic Usage
### Within-Tag Moves
Move tasks within the same tag context:
```bash
# Move a single task
task-master move --from=5 --to=7
# Move a subtask
task-master move --from=5.2 --to=7.3
# Move multiple tasks
task-master move --from=5,6,7 --to=10,11,12
```
### Cross-Tag Moves
Move tasks between different tag contexts:
```bash
# Basic cross-tag move
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
# Move multiple tasks
task-master move --from=5,6,7 --from-tag=backlog --to-tag=done
```
## Dependency Resolution
When moving tasks between tags, you may encounter cross-tag dependencies. Task Master provides several options to handle these:
### Move with Dependencies
Move the main task along with all its dependent tasks:
```bash
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```
This ensures that all dependent tasks are moved together, maintaining the task relationships.
### Break Dependencies
Break cross-tag dependencies and move only the specified task:
```bash
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
```
This removes the dependency relationships and moves only the specified task.
### Force Move
Force the move even with dependency conflicts:
```bash
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force
```
⚠️ **Warning**: This may break dependency relationships and should be used with caution.
## Error Handling
Task Master provides enhanced error messages with specific resolution suggestions:
### Cross-Tag Dependency Conflicts
When you encounter dependency conflicts, you'll see:
```text
❌ Cannot move tasks from "backlog" to "in-progress"
Cross-tag dependency conflicts detected:
• Task 5 depends on 2 (in backlog)
• Task 6 depends on 3 (in done)
Resolution options:
1. Move with dependencies: task-master move --from=5,6 --from-tag=backlog --to-tag=in-progress --with-dependencies
2. Break dependencies: task-master move --from=5,6 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
3. Validate and fix dependencies: task-master validate-dependencies && task-master fix-dependencies
4. Move dependencies first: task-master move --from=2,3 --from-tag=backlog --to-tag=in-progress
5. Force move (may break dependencies): task-master move --from=5,6 --from-tag=backlog --to-tag=in-progress --force
```
### Subtask Movement Restrictions
Subtasks cannot be moved directly between tags:
```text
❌ Cannot move subtask 5.2 directly between tags
Subtask movement restriction:
• Subtasks cannot be moved directly between tags
• They must be promoted to full tasks first
Resolution options:
1. Promote subtask to full task: task-master remove-subtask --id=5.2 --convert
2. Then move the promoted task: task-master move --from=5 --from-tag=backlog --to-tag=in-progress
3. Or move the parent task with all subtasks: task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```
### Invalid Tag Combinations
When source and target tags are the same:
```text
❌ Invalid tag combination
Error details:
• Source tag: "backlog"
• Target tag: "backlog"
• Reason: Source and target tags are identical
Resolution options:
1. Use different tags for cross-tag moves
2. Use within-tag move: task-master move --from=<id> --to=<id> --tag=backlog
3. Check available tags: task-master tags
```
## Best Practices
### 1. Check Dependencies First
Before moving tasks, validate your dependencies:
```bash
# Check for dependency issues
task-master validate-dependencies
# Fix common dependency problems
task-master fix-dependencies
```
### 2. Use Appropriate Flags
- **`--with-dependencies`**: When you want to maintain task relationships
- **`--ignore-dependencies`**: When you want to break cross-tag dependencies
- **`--force`**: Only when you understand the consequences
### 3. Organize by Context
Use tags to organize work by:
- **Development phases**: `backlog`, `in-progress`, `review`, `done`
- **Feature branches**: `feature-auth`, `feature-dashboard`
- **Team members**: `alice-tasks`, `bob-tasks`
- **Project versions**: `v1.0`, `v2.0`
### 4. Handle Subtasks Properly
For subtasks, either:
1. Promote the subtask to a full task first
2. Move the parent task with all subtasks using `--with-dependencies`
## Advanced Usage
### Multiple Task Movement
Move multiple tasks at once:
```bash
# Move multiple tasks with dependencies
task-master move --from=5,6,7 --from-tag=backlog --to-tag=in-progress --with-dependencies
# Move multiple tasks, breaking dependencies
task-master move --from=5,6,7 --from-tag=backlog --to-tag=in-progress --ignore-dependencies
```
### Tag Creation
Target tags are created automatically if they don't exist:
```bash
# This will create the "new-feature" tag if it doesn't exist
task-master move --from=5 --from-tag=backlog --to-tag=new-feature
```
### Current Tag Fallback
If `--from-tag` is not provided, the current tag is used:
```bash
# Uses current tag as source
task-master move --from=5 --to-tag=in-progress
```
## MCP Integration
The cross-tag move functionality is also available through MCP tools:
```javascript
// Move task with dependencies
await moveTask({
from: "5",
fromTag: "backlog",
toTag: "in-progress",
withDependencies: true
});
// Break dependencies
await moveTask({
from: "5",
fromTag: "backlog",
toTag: "in-progress",
ignoreDependencies: true
});
```
## Troubleshooting
### Common Issues
1. **"Source tag not found"**: Check available tags with `task-master tags`
2. **"Task not found"**: Verify task IDs with `task-master list`
3. **"Cross-tag dependency conflicts"**: Use dependency resolution flags
4. **"Cannot move subtask"**: Promote subtask first or move parent task
### Getting Help
```bash
# Show move command help
task-master move --help
# Check available tags
task-master tags
# Validate dependencies
task-master validate-dependencies
# Fix dependency issues
task-master fix-dependencies
```
## Examples
### Scenario 1: Moving from Backlog to In-Progress
```bash
# Check for dependencies first
task-master validate-dependencies
# Move with dependencies
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```
### Scenario 2: Breaking Dependencies
```bash
# Move task, breaking cross-tag dependencies
task-master move --from=5 --from-tag=backlog --to-tag=done --ignore-dependencies
```
### Scenario 3: Force Move
```bash
# Force move despite conflicts
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force
```
### Scenario 4: Moving Subtasks
```bash
# Option 1: Promote subtask first
task-master remove-subtask --id=5.2 --convert
task-master move --from=5 --from-tag=backlog --to-tag=in-progress
# Option 2: Move parent with all subtasks
task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies
```

View File

@@ -1,4 +1,4 @@
# Available Models as of July 23, 2025
# Available Models as of August 11, 2025
## Main Models
@@ -24,6 +24,7 @@
| openai | gpt-4-1-mini | — | 0.4 | 1.6 |
| openai | gpt-4-1-nano | — | 0.1 | 0.4 |
| openai | gpt-4o-mini | 0.3 | 0.15 | 0.6 |
| openai | gpt-5 | 0.749 | 5 | 20 |
| google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
| google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
| google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |
@@ -134,6 +135,7 @@
| openai | gpt-4o | 0.332 | 2.5 | 10 |
| openai | o3 | 0.5 | 2 | 8 |
| openai | o4-mini | 0.45 | 1.1 | 4.4 |
| openai | gpt-5 | 0.749 | 5 | 20 |
| google | gemini-2.5-pro-preview-05-06 | 0.638 | — | — |
| google | gemini-2.5-pro-preview-03-25 | 0.638 | — | — |
| google | gemini-2.5-flash-preview-04-17 | 0.604 | — | — |

BIN
images/logo.png Normal file

Binary file not shown.


View File

@@ -0,0 +1,203 @@
/**
* Direct function wrapper for cross-tag task moves
*/
import { moveTasksBetweenTags } from '../../../../scripts/modules/task-manager/move-task.js';
import { findTasksPath } from '../utils/path-utils.js';
import {
enableSilentMode,
disableSilentMode
} from '../../../../scripts/modules/utils.js';
/**
* Move tasks between tags
* @param {Object} args - Function arguments
* @param {string} args.tasksJsonPath - Explicit path to the tasks.json file
* @param {string} args.sourceIds - Comma-separated IDs of tasks to move
* @param {string} args.sourceTag - Source tag name
* @param {string} args.targetTag - Target tag name
* @param {boolean} args.withDependencies - Move dependent tasks along with main task
* @param {boolean} args.ignoreDependencies - Break cross-tag dependencies during move
* @param {string} args.file - Alternative path to the tasks.json file
* @param {string} args.projectRoot - Project root directory
* @param {Object} log - Logger object
* @returns {Promise<{success: boolean, data?: Object, error?: Object}>}
*/
export async function moveTaskCrossTagDirect(args, log, context = {}) {
const { session } = context;
const { projectRoot } = args;
log.info(`moveTaskCrossTagDirect called with args: ${JSON.stringify(args)}`);
// Validate required parameters
if (!args.sourceIds) {
return {
success: false,
error: {
message: 'Source IDs are required',
code: 'MISSING_SOURCE_IDS'
}
};
}
if (!args.sourceTag) {
return {
success: false,
error: {
message: 'Source tag is required for cross-tag moves',
code: 'MISSING_SOURCE_TAG'
}
};
}
if (!args.targetTag) {
return {
success: false,
error: {
message: 'Target tag is required for cross-tag moves',
code: 'MISSING_TARGET_TAG'
}
};
}
// Validate that source and target tags are different
if (args.sourceTag === args.targetTag) {
return {
success: false,
error: {
message: `Source and target tags are the same ("${args.sourceTag}")`,
code: 'SAME_SOURCE_TARGET_TAG',
suggestions: [
'Use different tags for cross-tag moves',
'Use within-tag move: task-master move --from=<id> --to=<id> --tag=<tag>',
'Check available tags: task-master tags'
]
}
};
}
try {
// Find tasks.json path if not provided
let tasksPath = args.tasksJsonPath || args.file;
if (!tasksPath) {
if (!args.projectRoot) {
return {
success: false,
error: {
message:
'Project root is required if tasksJsonPath is not provided',
code: 'MISSING_PROJECT_ROOT'
}
};
}
tasksPath = findTasksPath(args, log);
}
// Enable silent mode to prevent console output during MCP operation
enableSilentMode();
try {
// Parse source IDs
const sourceIds = args.sourceIds.split(',').map((id) => id.trim());
// Prepare move options
const moveOptions = {
withDependencies: args.withDependencies || false,
ignoreDependencies: args.ignoreDependencies || false
};
// Call the core moveTasksBetweenTags function
const result = await moveTasksBetweenTags(
tasksPath,
sourceIds,
args.sourceTag,
args.targetTag,
moveOptions,
{ projectRoot }
);
return {
success: true,
data: {
...result,
message: `Successfully moved ${sourceIds.length} task(s) from "${args.sourceTag}" to "${args.targetTag}"`,
moveOptions,
sourceTag: args.sourceTag,
targetTag: args.targetTag
}
};
} finally {
// Restore console output - always executed regardless of success or error
disableSilentMode();
}
} catch (error) {
log.error(`Failed to move tasks between tags: ${error.message}`);
log.error(`Error code: ${error.code}, Error name: ${error.name}`);
// Enhanced error handling with structured error objects
let errorCode = 'MOVE_TASK_CROSS_TAG_ERROR';
let suggestions = [];
// Handle structured errors first
if (error.code === 'CROSS_TAG_DEPENDENCY_CONFLICTS') {
errorCode = 'CROSS_TAG_DEPENDENCY_CONFLICT';
suggestions = [
'Use --with-dependencies to move dependent tasks together',
'Use --ignore-dependencies to break cross-tag dependencies',
'Run task-master validate-dependencies to check for issues',
'Move dependencies first, then move the main task'
];
} else if (error.code === 'CANNOT_MOVE_SUBTASK') {
errorCode = 'SUBTASK_MOVE_RESTRICTION';
suggestions = [
'Promote subtask to full task first: task-master remove-subtask --id=<subtaskId> --convert',
'Move the parent task with all subtasks using --with-dependencies'
];
} else if (
error.code === 'TASK_NOT_FOUND' ||
error.code === 'INVALID_SOURCE_TAG' ||
error.code === 'INVALID_TARGET_TAG'
) {
errorCode = 'TAG_OR_TASK_NOT_FOUND';
suggestions = [
'Check available tags: task-master tags',
'Verify task IDs exist: task-master list',
'Check task details: task-master show <id>'
];
} else if (error.message.includes('cross-tag dependency conflicts')) {
// Fallback for legacy error messages
errorCode = 'CROSS_TAG_DEPENDENCY_CONFLICT';
suggestions = [
'Use --with-dependencies to move dependent tasks together',
'Use --ignore-dependencies to break cross-tag dependencies',
'Run task-master validate-dependencies to check for issues',
'Move dependencies first, then move the main task'
];
} else if (error.message.includes('Cannot move subtask')) {
// Fallback for legacy error messages
errorCode = 'SUBTASK_MOVE_RESTRICTION';
suggestions = [
'Promote subtask to full task first: task-master remove-subtask --id=<subtaskId> --convert',
'Move the parent task with all subtasks using --with-dependencies'
];
} else if (error.message.includes('not found')) {
// Fallback for legacy error messages
errorCode = 'TAG_OR_TASK_NOT_FOUND';
suggestions = [
'Check available tags: task-master tags',
'Verify task IDs exist: task-master list',
'Check task details: task-master show <id>'
];
}
return {
success: false,
error: {
message: error.message,
code: errorCode,
suggestions
}
};
}
}

View File

@@ -31,6 +31,7 @@ import { removeTaskDirect } from './direct-functions/remove-task.js';
import { initializeProjectDirect } from './direct-functions/initialize-project.js';
import { modelsDirect } from './direct-functions/models.js';
import { moveTaskDirect } from './direct-functions/move-task.js';
import { moveTaskCrossTagDirect } from './direct-functions/move-task-cross-tag.js';
import { researchDirect } from './direct-functions/research.js';
import { addTagDirect } from './direct-functions/add-tag.js';
import { deleteTagDirect } from './direct-functions/delete-tag.js';
@@ -72,6 +73,7 @@ export const directFunctions = new Map([
['initializeProjectDirect', initializeProjectDirect],
['modelsDirect', modelsDirect],
['moveTaskDirect', moveTaskDirect],
['moveTaskCrossTagDirect', moveTaskCrossTagDirect],
['researchDirect', researchDirect],
['addTagDirect', addTagDirect],
['deleteTagDirect', deleteTagDirect],
@@ -111,6 +113,7 @@ export {
initializeProjectDirect,
modelsDirect,
moveTaskDirect,
moveTaskCrossTagDirect,
researchDirect,
addTagDirect,
deleteTagDirect,

View File

@@ -9,7 +9,10 @@ import {
createErrorResponse,
withNormalizedProjectRoot
} from './utils.js';
import { moveTaskDirect } from '../core/task-master-core.js';
import {
moveTaskDirect,
moveTaskCrossTagDirect
} from '../core/task-master-core.js';
import { findTasksPath } from '../core/utils/path-utils.js';
import { resolveTag } from '../../../scripts/modules/utils.js';
@@ -29,8 +32,9 @@ export function registerMoveTaskTool(server) {
),
to: z
.string()
.optional()
.describe(
'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
'ID of the destination (e.g., "7" or "7.3"). Required for within-tag moves. For cross-tag moves, if omitted, task will be moved to the target tag maintaining its ID'
),
file: z.string().optional().describe('Custom path to tasks.json file'),
projectRoot: z
@@ -38,101 +42,180 @@ export function registerMoveTaskTool(server) {
.describe(
'Root directory of the project (typically derived from session)'
),
tag: z.string().optional().describe('Tag context to operate on')
tag: z.string().optional().describe('Tag context to operate on'),
fromTag: z.string().optional().describe('Source tag for cross-tag moves'),
toTag: z.string().optional().describe('Target tag for cross-tag moves'),
withDependencies: z
.boolean()
.optional()
.describe('Move dependent tasks along with main task'),
ignoreDependencies: z
.boolean()
.optional()
.describe('Break cross-tag dependencies during move')
}),
execute: withNormalizedProjectRoot(async (args, { log, session }) => {
try {
const resolvedTag = resolveTag({
projectRoot: args.projectRoot,
tag: args.tag
});
// Find tasks.json path if not provided
let tasksJsonPath = args.file;
// Check if this is a cross-tag move
const isCrossTagMove =
args.fromTag && args.toTag && args.fromTag !== args.toTag;
if (!tasksJsonPath) {
tasksJsonPath = findTasksPath(args, log);
}
// Parse comma-separated IDs
const fromIds = args.from.split(',').map((id) => id.trim());
const toIds = args.to.split(',').map((id) => id.trim());
// Validate matching IDs count
if (fromIds.length !== toIds.length) {
return createErrorResponse(
'The number of source and destination IDs must match',
'MISMATCHED_ID_COUNT'
);
}
// If moving multiple tasks
if (fromIds.length > 1) {
const results = [];
// Move tasks one by one, only generate files on the last move
for (let i = 0; i < fromIds.length; i++) {
const fromId = fromIds[i];
const toId = toIds[i];
// Skip if source and destination are the same
if (fromId === toId) {
log.info(`Skipping ${fromId} -> ${toId} (same ID)`);
continue;
}
const shouldGenerateFiles = i === fromIds.length - 1;
const result = await moveTaskDirect(
{
sourceId: fromId,
destinationId: toId,
tasksJsonPath,
projectRoot: args.projectRoot,
tag: resolvedTag
},
log,
{ session }
if (isCrossTagMove) {
// Cross-tag move logic
if (!args.from) {
return createErrorResponse(
'Source IDs are required for cross-tag moves',
'MISSING_SOURCE_IDS'
);
if (!result.success) {
log.error(
`Failed to move ${fromId} to ${toId}: ${result.error.message}`
);
} else {
results.push(result.data);
}
}
// Warn if 'to' parameter is provided for cross-tag moves
if (args.to) {
log.warn(
'The "to" parameter is not used for cross-tag moves and will be ignored. Tasks retain their original IDs in the target tag.'
);
}
// Find tasks.json path if not provided
let tasksJsonPath = args.file;
if (!tasksJsonPath) {
tasksJsonPath = findTasksPath(args, log);
}
// Use cross-tag move function
return handleApiResult(
{
success: true,
data: {
moves: results,
message: `Successfully moved ${results.length} tasks`
}
},
log,
'Error moving multiple tasks',
undefined,
args.projectRoot
);
} else {
// Moving a single task
return handleApiResult(
await moveTaskDirect(
await moveTaskCrossTagDirect(
{
sourceId: args.from,
destinationId: args.to,
sourceIds: args.from,
sourceTag: args.fromTag,
targetTag: args.toTag,
withDependencies: args.withDependencies || false,
ignoreDependencies: args.ignoreDependencies || false,
tasksJsonPath,
projectRoot: args.projectRoot,
tag: resolvedTag
projectRoot: args.projectRoot
},
log,
{ session }
),
log,
'Error moving task',
'Error moving tasks between tags',
undefined,
args.projectRoot
);
} else {
// Within-tag move logic (existing functionality)
if (!args.to) {
return createErrorResponse(
'Destination ID is required for within-tag moves',
'MISSING_DESTINATION_ID'
);
}
const resolvedTag = resolveTag({
projectRoot: args.projectRoot,
tag: args.tag
});
// Find tasks.json path if not provided
let tasksJsonPath = args.file;
if (!tasksJsonPath) {
tasksJsonPath = findTasksPath(args, log);
}
// Parse comma-separated IDs
const fromIds = args.from.split(',').map((id) => id.trim());
const toIds = args.to.split(',').map((id) => id.trim());
// Validate matching IDs count
if (fromIds.length !== toIds.length) {
if (fromIds.length > 1) {
const results = [];
const skipped = [];
// Move tasks one by one, only generate files on the last move
for (let i = 0; i < fromIds.length; i++) {
const fromId = fromIds[i];
const toId = toIds[i];
// Skip if source and destination are the same
if (fromId === toId) {
log.info(`Skipping ${fromId} -> ${toId} (same ID)`);
skipped.push({ fromId, toId, reason: 'same ID' });
continue;
}
const shouldGenerateFiles = i === fromIds.length - 1;
const result = await moveTaskDirect(
{
sourceId: fromId,
destinationId: toId,
tasksJsonPath,
projectRoot: args.projectRoot,
tag: resolvedTag,
generateFiles: shouldGenerateFiles
},
log,
{ session }
);
if (!result.success) {
log.error(
`Failed to move ${fromId} to ${toId}: ${result.error.message}`
);
} else {
results.push(result.data);
}
}
return handleApiResult(
{
success: true,
data: {
moves: results,
skipped: skipped.length > 0 ? skipped : undefined,
message: `Successfully moved ${results.length} tasks${skipped.length > 0 ? `, skipped ${skipped.length}` : ''}`
}
},
log,
'Error moving multiple tasks',
undefined,
args.projectRoot
);
}
return handleApiResult(
{
success: true,
data: {
moves: results,
skippedMoves: skippedMoves,
message: `Successfully moved ${results.length} tasks${skippedMoves.length > 0 ? `, skipped ${skippedMoves.length} moves` : ''}`
}
},
log,
'Error moving multiple tasks',
undefined,
args.projectRoot
);
} else {
// Moving a single task
return handleApiResult(
await moveTaskDirect(
{
sourceId: args.from,
destinationId: args.to,
tasksJsonPath,
projectRoot: args.projectRoot,
tag: resolvedTag,
generateFiles: true
},
log,
{ session }
),
log,
'Error moving task',
undefined,
args.projectRoot
);
}
}
} catch (error) {
return createErrorResponse(

8077
package-lock.json generated

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
{
"name": "task-master-ai",
"version": "0.23.1-rc.0",
"version": "0.24.0",
"description": "A task management system for ambitious AI-driven development that doesn't overwhelm and confuse Cursor.",
"main": "index.js",
"type": "module",

View File

@@ -48,6 +48,12 @@ import {
validateStrength
} from './task-manager.js';
import {
moveTasksBetweenTags,
MoveTaskError,
MOVE_ERROR_CODES
} from './task-manager/move-task.js';
import {
createTag,
deleteTag,
@@ -61,7 +67,9 @@ import {
addDependency,
removeDependency,
validateDependenciesCommand,
fixDependenciesCommand
fixDependenciesCommand,
DependencyError,
DEPENDENCY_ERROR_CODES
} from './dependency-manager.js';
import {
@@ -103,7 +111,11 @@ import {
displayAiUsageSummary,
displayMultipleTasksSummary,
displayTaggedTasksFYI,
displayCurrentTagIndicator
displayCurrentTagIndicator,
displayCrossTagDependencyError,
displaySubtaskMoveError,
displayInvalidTagCombinationError,
displayDependencyValidationHints
} from './ui.js';
import {
confirmProfilesRemove,
@@ -1753,6 +1765,7 @@ function registerCommands(programInstance) {
)
.option('-s, --status <status>', 'Filter by status')
.option('--with-subtasks', 'Show subtasks for each task')
.option('-c, --compact', 'Display tasks in compact one-line format')
.option('--tag <tag>', 'Specify tag context for task operations')
.action(async (options) => {
// Initialize TaskMaster
@@ -1770,18 +1783,21 @@ function registerCommands(programInstance) {
const statusFilter = options.status;
const withSubtasks = options.withSubtasks || false;
const compact = options.compact || false;
const tag = taskMaster.getCurrentTag();
// Show current tag context
displayCurrentTagIndicator(tag);
console.log(
chalk.blue(`Listing tasks from: ${taskMaster.getTasksPath()}`)
);
if (statusFilter) {
console.log(chalk.blue(`Filtering by status: ${statusFilter}`));
}
if (withSubtasks) {
console.log(chalk.blue('Including subtasks in listing'));
if (!compact) {
console.log(
chalk.blue(`Listing tasks from: ${taskMaster.getTasksPath()}`)
);
if (statusFilter) {
console.log(chalk.blue(`Filtering by status: ${statusFilter}`));
}
if (withSubtasks) {
console.log(chalk.blue('Including subtasks in listing'));
}
}
await listTasks(
@@ -1789,7 +1805,7 @@ function registerCommands(programInstance) {
statusFilter,
taskMaster.getComplexityReportPath(),
withSubtasks,
'text',
compact ? 'compact' : 'text',
{ projectRoot: taskMaster.getProjectRoot(), tag }
);
});
@@ -4034,7 +4050,9 @@ Examples:
// move-task command
programInstance
.command('move')
.description('Move a task or subtask to a new position')
.description(
'Move tasks between tags or reorder within tags. Supports cross-tag moves with dependency resolution options.'
)
.option(
'-f, --file <file>',
'Path to the tasks file',
@@ -4049,55 +4067,202 @@ Examples:
'ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated'
)
.option('--tag <tag>', 'Specify tag context for task operations')
.option('--from-tag <tag>', 'Source tag for cross-tag moves')
.option('--to-tag <tag>', 'Target tag for cross-tag moves')
.option('--with-dependencies', 'Move dependent tasks along with main task')
.option('--ignore-dependencies', 'Break cross-tag dependencies during move')
.action(async (options) => {
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
});
const sourceId = options.from;
const destinationId = options.to;
const tag = taskMaster.getCurrentTag();
if (!sourceId || !destinationId) {
console.error(
chalk.red('Error: Both --from and --to parameters are required')
);
// Helper function to show move command help - defined in scope for proper encapsulation
function showMoveHelp() {
console.log(
chalk.yellow(
'Usage: task-master move --from=<sourceId> --to=<destinationId>'
)
chalk.white.bold('Move Command Help') +
'\n\n' +
chalk.cyan('Move tasks between tags or reorder within tags.') +
'\n\n' +
chalk.yellow.bold('Within-Tag Moves:') +
'\n' +
chalk.white(' task-master move --from=5 --to=7') +
'\n' +
chalk.white(' task-master move --from=5.2 --to=7.3') +
'\n' +
chalk.white(' task-master move --from=5,6,7 --to=10,11,12') +
'\n\n' +
chalk.yellow.bold('Cross-Tag Moves:') +
'\n' +
chalk.white(
' task-master move --from=5 --from-tag=backlog --to-tag=in-progress'
) +
'\n' +
chalk.white(
' task-master move --from=5,6 --from-tag=backlog --to-tag=done'
) +
'\n\n' +
chalk.yellow.bold('Dependency Resolution:') +
'\n' +
chalk.white(' # Move with dependencies') +
'\n' +
chalk.white(
' task-master move --from=5 --from-tag=backlog --to-tag=in-progress --with-dependencies'
) +
'\n\n' +
chalk.white(' # Break dependencies') +
'\n' +
chalk.white(
' task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies'
) +
'\n\n' +
chalk.white(' # Force move (may break dependencies)') +
'\n' +
chalk.white(
' task-master move --from=5 --from-tag=backlog --to-tag=in-progress --force'
) +
'\n\n' +
chalk.yellow.bold('Best Practices:') +
'\n' +
chalk.white(
' • Use --with-dependencies to move dependent tasks together'
) +
'\n' +
chalk.white(
' • Use --ignore-dependencies to break cross-tag dependencies'
) +
'\n' +
chalk.white(
' • Use --force only when you understand the consequences'
) +
'\n' +
chalk.white(
' • Check dependencies first: task-master validate-dependencies'
) +
'\n' +
chalk.white(
' • Fix dependency issues: task-master fix-dependencies'
) +
'\n\n' +
chalk.yellow.bold('Error Resolution:') +
'\n' +
chalk.white(
' • Cross-tag dependency conflicts: Use --with-dependencies or --ignore-dependencies'
) +
'\n' +
chalk.white(
' • Subtask movement: Promote subtask first with remove-subtask --convert'
) +
'\n' +
chalk.white(
' • Invalid tags: Check available tags with task-master tags'
) +
'\n\n' +
chalk.gray('For more help, run: task-master move --help')
);
process.exit(1);
}
// Check if we're moving multiple tasks (comma-separated IDs)
const sourceIds = sourceId.split(',').map((id) => id.trim());
const destinationIds = destinationId.split(',').map((id) => id.trim());
// Helper function to handle cross-tag move logic
async function handleCrossTagMove(moveContext, options) {
const { sourceId, sourceTag, toTag, taskMaster } = moveContext;
// Validate that the number of source and destination IDs match
if (sourceIds.length !== destinationIds.length) {
console.error(
chalk.red(
'Error: The number of source and destination IDs must match'
)
);
console.log(
chalk.yellow('Example: task-master move --from=5,6,7 --to=10,11,12')
);
process.exit(1);
}
if (!sourceId) {
console.error(
chalk.red('Error: --from parameter is required for cross-tag moves')
);
showMoveHelp();
process.exit(1);
}
const sourceIds = sourceId.split(',').map((id) => id.trim());
const moveOptions = {
withDependencies: options.withDependencies || false,
ignoreDependencies: options.ignoreDependencies || false
};
// If moving multiple tasks
if (sourceIds.length > 1) {
console.log(
chalk.blue(
`Moving multiple tasks: ${sourceIds.join(', ')} to ${destinationIds.join(', ')}...`
`Moving tasks ${sourceIds.join(', ')} from "${sourceTag}" to "${toTag}"...`
)
);
try {
const result = await moveTasksBetweenTags(
taskMaster.getTasksPath(),
sourceIds,
sourceTag,
toTag,
moveOptions,
{ projectRoot: taskMaster.getProjectRoot() }
);
console.log(chalk.green(`${result.message}`));
// Check if source tag still contains tasks before regenerating files
const tasksData = readJSON(
taskMaster.getTasksPath(),
taskMaster.getProjectRoot(),
sourceTag
);
const sourceTagHasTasks =
tasksData &&
Array.isArray(tasksData.tasks) &&
tasksData.tasks.length > 0;
// Generate task files for the affected tags
await generateTaskFiles(
taskMaster.getTasksPath(),
path.dirname(taskMaster.getTasksPath()),
{ tag: toTag, projectRoot: taskMaster.getProjectRoot() }
);
// Only regenerate source tag files if it still contains tasks
if (sourceTagHasTasks) {
await generateTaskFiles(
taskMaster.getTasksPath(),
path.dirname(taskMaster.getTasksPath()),
{ tag: sourceTag, projectRoot: taskMaster.getProjectRoot() }
);
}
}
// Helper function to handle within-tag move logic
async function handleWithinTagMove(moveContext) {
const { sourceId, destinationId, tag, taskMaster } = moveContext;
if (!sourceId || !destinationId) {
console.error(
chalk.red(
'Error: Both --from and --to parameters are required for within-tag moves'
)
);
console.log(
chalk.yellow(
'Usage: task-master move --from=<sourceId> --to=<destinationId>'
)
);
process.exit(1);
}
// Check if we're moving multiple tasks (comma-separated IDs)
const sourceIds = sourceId.split(',').map((id) => id.trim());
const destinationIds = destinationId.split(',').map((id) => id.trim());
// Validate that the number of source and destination IDs match
if (sourceIds.length !== destinationIds.length) {
console.error(
chalk.red(
'Error: The number of source and destination IDs must match'
)
);
console.log(
chalk.yellow('Example: task-master move --from=5,6,7 --to=10,11,12')
);
process.exit(1);
}
// If moving multiple tasks
if (sourceIds.length > 1) {
console.log(
chalk.blue(
`Moving multiple tasks: ${sourceIds.join(', ')} to ${destinationIds.join(', ')}...`
)
);
// Read tasks data once to validate destination IDs
const tasksData = readJSON(
taskMaster.getTasksPath(),
@@ -4106,11 +4271,17 @@ Examples:
);
if (!tasksData || !tasksData.tasks) {
console.error(
chalk.red(`Error: Invalid or missing tasks file at ${tasksPath}`)
chalk.red(
`Error: Invalid or missing tasks file at ${taskMaster.getTasksPath()}`
)
);
process.exit(1);
}
// Collect errors during move attempts
const moveErrors = [];
const successfulMoves = [];
// Move tasks one by one
for (let i = 0; i < sourceIds.length; i++) {
const fromId = sourceIds[i];
@@ -4140,24 +4311,59 @@ Examples:
`✓ Successfully moved task/subtask ${fromId} to ${toId}`
)
);
successfulMoves.push({ fromId, toId });
} catch (error) {
const errorInfo = {
fromId,
toId,
error: error.message
};
moveErrors.push(errorInfo);
console.error(
chalk.red(`Error moving ${fromId} to ${toId}: ${error.message}`)
);
// Continue with the next task rather than exiting
}
}
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
process.exit(1);
}
} else {
// Moving a single task (existing logic)
console.log(
chalk.blue(`Moving task/subtask ${sourceId} to ${destinationId}...`)
);
try {
// Display summary after all moves are attempted
if (moveErrors.length > 0) {
console.log(chalk.yellow('\n--- Move Operation Summary ---'));
console.log(
chalk.green(
`✓ Successfully moved: ${successfulMoves.length} tasks`
)
);
console.log(
chalk.red(`✗ Failed to move: ${moveErrors.length} tasks`)
);
if (successfulMoves.length > 0) {
console.log(chalk.cyan('\nSuccessful moves:'));
successfulMoves.forEach(({ fromId, toId }) => {
console.log(chalk.cyan(`  ${fromId} → ${toId}`));
});
}
console.log(chalk.red('\nFailed moves:'));
moveErrors.forEach(({ fromId, toId, error }) => {
console.log(chalk.red(`  ${fromId} → ${toId}: ${error}`));
});
console.log(
chalk.yellow(
'\nNote: Some tasks were moved successfully. Check the errors above for failed moves.'
)
);
} else {
console.log(chalk.green('\n✓ All tasks moved successfully!'));
}
} else {
// Moving a single task (existing logic)
console.log(
chalk.blue(`Moving task/subtask ${sourceId} to ${destinationId}...`)
);
const result = await moveTask(
taskMaster.getTasksPath(),
sourceId,
@@ -4170,11 +4376,90 @@ Examples:
`✓ Successfully moved task/subtask ${sourceId} to ${destinationId}`
)
);
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
process.exit(1);
}
}
// Helper function to handle move errors
function handleMoveError(error, moveContext) {
console.error(chalk.red(`Error: ${error.message}`));
// Enhanced error handling with structured error objects
if (error.code === 'CROSS_TAG_DEPENDENCY_CONFLICTS') {
// Use structured error data
const conflicts = error.data.conflicts || [];
const taskIds = error.data.taskIds || [];
displayCrossTagDependencyError(
conflicts,
moveContext.sourceTag,
moveContext.toTag,
taskIds.join(', ')
);
} else if (error.code === 'CANNOT_MOVE_SUBTASK') {
// Use structured error data
const taskId =
error.data.taskId || moveContext.sourceId?.split(',')[0];
displaySubtaskMoveError(
taskId,
moveContext.sourceTag,
moveContext.toTag
);
} else if (
error.code === 'SOURCE_TARGET_TAGS_SAME' ||
error.code === 'SAME_SOURCE_TARGET_TAG'
) {
displayInvalidTagCombinationError(
moveContext.sourceTag,
moveContext.toTag,
'Source and target tags are identical'
);
} else {
// General error - show dependency validation hints
displayDependencyValidationHints('after-error');
}
process.exit(1);
}
// Initialize TaskMaster
const taskMaster = initTaskMaster({
tasksPath: options.file || true,
tag: options.tag
});
const sourceId = options.from;
const destinationId = options.to;
const fromTag = options.fromTag;
const toTag = options.toTag;
const tag = taskMaster.getCurrentTag();
// Get the source tag - fallback to current tag if not provided
const sourceTag = fromTag || taskMaster.getCurrentTag();
// Check if this is a cross-tag move (different tags)
const isCrossTagMove = sourceTag && toTag && sourceTag !== toTag;
// Initialize move context with all relevant data
const moveContext = {
sourceId,
destinationId,
sourceTag,
toTag,
tag,
taskMaster
};
try {
if (isCrossTagMove) {
// Cross-tag move logic
await handleCrossTagMove(moveContext, options);
} else {
// Within-tag move logic
await handleWithinTagMove(moveContext);
}
} catch (error) {
handleMoveError(error, moveContext);
}
});
// Add/remove profile rules command
@@ -4594,7 +4879,7 @@ Examples:
const gitUtils = await import('./utils/git-utils.js');
// Check if we're in a git repository
if (!(await gitUtils.isGitRepository(projectRoot))) {
if (!(await gitUtils.isGitRepository(context.projectRoot))) {
console.error(
chalk.red(
'Error: Not in a git repository. Cannot use --from-branch option.'
@@ -4604,7 +4889,9 @@ Examples:
}
// Get current git branch
const currentBranch = await gitUtils.getCurrentBranch(projectRoot);
const currentBranch = await gitUtils.getCurrentBranch(
context.projectRoot
);
if (!currentBranch) {
console.error(
chalk.red('Error: Could not determine current git branch.')

View File

@@ -557,6 +557,7 @@ function getParametersForRole(role, explicitRoot = null) {
const providerName = roleConfig.provider;
let effectiveMaxTokens = roleMaxTokens; // Start with the role's default
let effectiveTemperature = roleTemperature; // Start with the role's default
try {
// Find the model definition in MODEL_MAP
@@ -583,6 +584,20 @@ function getParametersForRole(role, explicitRoot = null) {
`No valid model-specific max_tokens override found for ${modelId}. Using role default: ${roleMaxTokens}`
);
}
// Check if a model-specific temperature is defined
if (
modelDefinition &&
typeof modelDefinition.temperature === 'number' &&
modelDefinition.temperature >= 0 &&
modelDefinition.temperature <= 1
) {
effectiveTemperature = modelDefinition.temperature;
log(
'debug',
`Applying model-specific temperature (${modelDefinition.temperature}) for ${modelId}`
);
}
} else {
// Special handling for custom OpenRouter models
if (providerName === CUSTOM_PROVIDERS.OPENROUTER) {
@@ -603,15 +618,16 @@ function getParametersForRole(role, explicitRoot = null) {
} catch (lookupError) {
log(
'warn',
`Error looking up model-specific max_tokens for ${modelId}: ${lookupError.message}. Using role default: ${roleMaxTokens}`
`Error looking up model-specific parameters for ${modelId}: ${lookupError.message}. Using role defaults.`
);
// Fallback to role default on error
// Fallback to role defaults on error
effectiveMaxTokens = roleMaxTokens;
effectiveTemperature = roleTemperature;
}
return {
maxTokens: effectiveMaxTokens,
temperature: roleTemperature
temperature: effectiveTemperature
};
}

View File

@@ -14,12 +14,35 @@ import {
taskExists,
formatTaskId,
findCycles,
traverseDependencies,
isSilentMode
} from './utils.js';
import { displayBanner } from './ui.js';
import { generateTaskFiles } from './task-manager.js';
import generateTaskFiles from './task-manager/generate-task-files.js';
/**
* Structured error class for dependency operations
*/
class DependencyError extends Error {
constructor(code, message, data = {}) {
super(message);
this.name = 'DependencyError';
this.code = code;
this.data = data;
}
}
/**
* Error codes for dependency operations
*/
const DEPENDENCY_ERROR_CODES = {
CANNOT_MOVE_SUBTASK: 'CANNOT_MOVE_SUBTASK',
INVALID_TASK_ID: 'INVALID_TASK_ID',
INVALID_SOURCE_TAG: 'INVALID_SOURCE_TAG',
INVALID_TARGET_TAG: 'INVALID_TARGET_TAG'
};
/**
* Add a dependency to a task
@@ -1235,6 +1258,580 @@ function validateAndFixDependencies(
return changesDetected;
}
/**
* Recursively find all dependencies for a set of tasks with depth limiting
*
* @note This function depends on the traverseDependencies utility from utils.js
* for the actual dependency traversal logic.
*
* @param {Array} sourceTasks - Array of source tasks to find dependencies for
* @param {Array} allTasks - Array of all available tasks
* @param {Object} options - Options object
* @param {number} options.maxDepth - Maximum recursion depth (default: 50)
* @param {boolean} options.includeSelf - Whether to include self-references (default: false)
* @returns {Array} Array of all dependency task IDs
*/
function findAllDependenciesRecursively(sourceTasks, allTasks, options = {}) {
if (!Array.isArray(sourceTasks)) {
throw new Error('Source tasks parameter must be an array');
}
if (!Array.isArray(allTasks)) {
throw new Error('All tasks parameter must be an array');
}
return traverseDependencies(sourceTasks, allTasks, {
...options,
direction: 'forward',
logger: { warn: log.warn || console.warn }
});
}
/**
* Find a subtask within a parent task's subtasks array
* @param {string} parentId - The parent task ID
* @param {string|number} subtaskId - The subtask ID to find
* @param {Array} allTasks - Array of all tasks to search in
* @param {boolean} useStringComparison - Whether to use string comparison for subtaskId
* @returns {Object|null} The found subtask with full ID or null if not found
*/
function findSubtaskInParent(
parentId,
subtaskId,
allTasks,
useStringComparison = false
) {
// Convert parentId to numeric for proper comparison with top-level task IDs
const numericParentId = parseInt(parentId, 10);
const parentTask = allTasks.find((t) => t.id === numericParentId);
if (parentTask && parentTask.subtasks && Array.isArray(parentTask.subtasks)) {
const foundSubtask = parentTask.subtasks.find((subtask) =>
useStringComparison
? String(subtask.id) === String(subtaskId)
: subtask.id === subtaskId
);
if (foundSubtask) {
// Return a task-like object that represents the subtask with full ID
return {
...foundSubtask,
id: `${parentId}.${foundSubtask.id}`
};
}
}
return null;
}
/**
* Find dependency task by ID, handling various ID formats
* @param {string|number} depId - Dependency ID to find
* @param {string} taskId - ID of the task that has this dependency
* @param {Array} allTasks - Array of all tasks to search
* @returns {Object|null} Found dependency task or null
*/
function findDependencyTask(depId, taskId, allTasks) {
if (!depId) {
return null;
}
// Convert depId to string for consistent comparison
const depIdStr = String(depId);
// Find the dependency task - handle both top-level and subtask IDs
let depTask = null;
// First try exact match (for top-level tasks)
depTask = allTasks.find((t) => String(t.id) === depIdStr);
// If not found and it's a subtask reference (contains dot), find the parent task first
if (!depTask && depIdStr.includes('.')) {
const [parentId, subtaskId] = depIdStr.split('.');
depTask = findSubtaskInParent(parentId, subtaskId, allTasks, true);
}
// If still not found, try numeric comparison for relative subtask references
if (!depTask && !isNaN(depId)) {
const numericId = parseInt(depId, 10);
// For subtasks, this might be a relative reference within the same parent
if (taskId && typeof taskId === 'string' && taskId.includes('.')) {
const [parentId] = taskId.split('.');
depTask = findSubtaskInParent(parentId, numericId, allTasks, false);
}
}
return depTask;
}
/**
* Check if a task has cross-tag dependencies
* @param {Object} task - Task to check
* @param {string} targetTag - Target tag name
* @param {Array} allTasks - Array of all tasks from all tags
* @returns {Array} Array of cross-tag dependency conflicts
*/
function findTaskCrossTagConflicts(task, targetTag, allTasks) {
const conflicts = [];
// Validate task.dependencies is an array before processing
if (!Array.isArray(task.dependencies) || task.dependencies.length === 0) {
return conflicts;
}
// Filter out null/undefined dependencies and check each valid dependency
const validDependencies = task.dependencies.filter((depId) => depId != null);
validDependencies.forEach((depId) => {
const depTask = findDependencyTask(depId, task.id, allTasks);
if (depTask && depTask.tag !== targetTag) {
conflicts.push({
taskId: task.id,
dependencyId: depId,
dependencyTag: depTask.tag,
message: `Task ${task.id} depends on ${depId} (in ${depTask.tag})`
});
}
});
return conflicts;
}
function validateCrossTagMove(task, sourceTag, targetTag, allTasks) {
// Parameter validation
if (!task || typeof task !== 'object') {
throw new Error('Task parameter must be a valid object');
}
if (!sourceTag || typeof sourceTag !== 'string') {
throw new Error('Source tag must be a valid string');
}
if (!targetTag || typeof targetTag !== 'string') {
throw new Error('Target tag must be a valid string');
}
if (!Array.isArray(allTasks)) {
throw new Error('All tasks parameter must be an array');
}
const conflicts = findTaskCrossTagConflicts(task, targetTag, allTasks);
return {
canMove: conflicts.length === 0,
conflicts
};
}
/**
* Find all cross-tag dependencies for a set of tasks
* @param {Array} sourceTasks - Array of tasks to check
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {Array} allTasks - Array of all tasks from all tags
* @returns {Array} Array of cross-tag dependency conflicts
*/
function findCrossTagDependencies(sourceTasks, sourceTag, targetTag, allTasks) {
// Parameter validation
if (!Array.isArray(sourceTasks)) {
throw new Error('Source tasks parameter must be an array');
}
if (!sourceTag || typeof sourceTag !== 'string') {
throw new Error('Source tag must be a valid string');
}
if (!targetTag || typeof targetTag !== 'string') {
throw new Error('Target tag must be a valid string');
}
if (!Array.isArray(allTasks)) {
throw new Error('All tasks parameter must be an array');
}
const conflicts = [];
sourceTasks.forEach((task) => {
// Validate task object and dependencies array
if (
!task ||
typeof task !== 'object' ||
!Array.isArray(task.dependencies) ||
task.dependencies.length === 0
) {
return;
}
// Use the shared helper function to find conflicts for this task
const taskConflicts = findTaskCrossTagConflicts(task, targetTag, allTasks);
conflicts.push(...taskConflicts);
});
return conflicts;
}
/**
* Helper function to find all tasks that depend on a given task (reverse dependencies)
* @param {string|number} taskId - The task ID to find dependencies for
* @param {Array} allTasks - Array of all tasks to search
* @param {Set} dependentTaskIds - Set to add found dependencies to
*/
function findTasksThatDependOn(taskId, allTasks, dependentTaskIds) {
// Find the task object for the given ID
const sourceTask = allTasks.find((t) => t.id === taskId);
if (!sourceTask) {
return;
}
// Use the shared utility for reverse dependency traversal
const reverseDeps = traverseDependencies([sourceTask], allTasks, {
direction: 'reverse',
includeSelf: false,
logger: { warn: log.warn || console.warn }
});
// Add all found reverse dependencies to the dependentTaskIds set
reverseDeps.forEach((depId) => dependentTaskIds.add(depId));
}
/**
* Helper function to check if a task depends on a source task
* @param {Object} task - Task to check for dependencies
* @param {Object} sourceTask - Source task to check dependency against
* @returns {boolean} True if task depends on source task
*/
function taskDependsOnSource(task, sourceTask) {
if (!task || !Array.isArray(task.dependencies)) {
return false;
}
const sourceTaskIdStr = String(sourceTask.id);
return task.dependencies.some((depId) => {
if (!depId) return false;
const depIdStr = String(depId);
// Exact match
if (depIdStr === sourceTaskIdStr) {
return true;
}
// Handle subtask references
if (
sourceTaskIdStr &&
typeof sourceTaskIdStr === 'string' &&
sourceTaskIdStr.includes('.')
) {
// If source is a subtask, check if dependency references the parent
const [parentId] = sourceTaskIdStr.split('.');
if (depIdStr === parentId) {
return true;
}
}
// Handle relative subtask references
if (
depIdStr &&
typeof depIdStr === 'string' &&
depIdStr.includes('.') &&
sourceTaskIdStr &&
typeof sourceTaskIdStr === 'string' &&
sourceTaskIdStr.includes('.')
) {
const [depParentId] = depIdStr.split('.');
const [sourceParentId] = sourceTaskIdStr.split('.');
if (depParentId === sourceParentId) {
// Both are subtasks of the same parent, check if they reference each other
const depSubtaskNum = parseInt(depIdStr.split('.')[1], 10);
const sourceSubtaskNum = parseInt(sourceTaskIdStr.split('.')[1], 10);
if (depSubtaskNum === sourceSubtaskNum) {
return true;
}
}
}
return false;
});
}
/**
* Helper function to check if any subtasks of a task depend on source tasks
* @param {Object} task - Task to check subtasks of
* @param {Array} sourceTasks - Array of source tasks to check dependencies against
* @returns {boolean} True if any subtasks depend on source tasks
*/
function subtasksDependOnSource(task, sourceTasks) {
if (!task.subtasks || !Array.isArray(task.subtasks)) {
return false;
}
return task.subtasks.some((subtask) => {
// Check if this subtask depends on any source task
const subtaskDependsOnSource = sourceTasks.some((sourceTask) =>
taskDependsOnSource(subtask, sourceTask)
);
if (subtaskDependsOnSource) {
return true;
}
// Recursively check if any nested subtasks depend on source tasks
if (subtask.subtasks && Array.isArray(subtask.subtasks)) {
return subtasksDependOnSource(subtask, sourceTasks);
}
return false;
});
}
/**
* Get all dependent task IDs for a set of cross-tag dependencies
* @param {Array} sourceTasks - Array of source tasks
* @param {Array} crossTagDependencies - Array of cross-tag dependency conflicts
* @param {Array} allTasks - Array of all tasks from all tags
* @returns {Array} Array of dependent task IDs to move
*/
function getDependentTaskIds(sourceTasks, crossTagDependencies, allTasks) {
// Enhanced parameter validation
if (!Array.isArray(sourceTasks)) {
throw new Error('Source tasks parameter must be an array');
}
if (!Array.isArray(crossTagDependencies)) {
throw new Error('Cross tag dependencies parameter must be an array');
}
if (!Array.isArray(allTasks)) {
throw new Error('All tasks parameter must be an array');
}
// Use the shared recursive dependency finder
const dependentTaskIds = new Set(
findAllDependenciesRecursively(sourceTasks, allTasks, {
includeSelf: false
})
);
// Add immediate dependency IDs from conflicts and find their dependencies recursively
const conflictTasksToProcess = [];
crossTagDependencies.forEach((conflict) => {
if (conflict && conflict.dependencyId) {
const depId =
typeof conflict.dependencyId === 'string'
? parseInt(conflict.dependencyId, 10)
: conflict.dependencyId;
if (!isNaN(depId)) {
dependentTaskIds.add(depId);
// Find the task object for recursive dependency finding
const depTask = allTasks.find((t) => t.id === depId);
if (depTask) {
conflictTasksToProcess.push(depTask);
}
}
}
});
// Find dependencies of conflict tasks
if (conflictTasksToProcess.length > 0) {
const conflictDependencies = findAllDependenciesRecursively(
conflictTasksToProcess,
allTasks,
{ includeSelf: false }
);
conflictDependencies.forEach((depId) => dependentTaskIds.add(depId));
}
// For --with-dependencies, we also need to find all dependencies of the source tasks
sourceTasks.forEach((sourceTask) => {
if (sourceTask && sourceTask.id) {
// Find all tasks that this source task depends on (forward dependencies) - already handled above
// Find all tasks that depend on this source task (reverse dependencies)
findTasksThatDependOn(sourceTask.id, allTasks, dependentTaskIds);
}
});
// Also include any tasks that depend on the source tasks
sourceTasks.forEach((sourceTask) => {
if (!sourceTask || typeof sourceTask !== 'object' || !sourceTask.id) {
return; // Skip invalid source tasks
}
allTasks.forEach((task) => {
// Validate task and dependencies array
if (
!task ||
typeof task !== 'object' ||
!Array.isArray(task.dependencies)
) {
return;
}
// Check if this task depends on the source task
const hasDependency = taskDependsOnSource(task, sourceTask);
// Check if any subtasks of this task depend on the source task
const subtasksHaveDependency = subtasksDependOnSource(task, [sourceTask]);
if (hasDependency || subtasksHaveDependency) {
dependentTaskIds.add(task.id);
}
});
});
return Array.from(dependentTaskIds);
}
/**
* Validate subtask movement - block direct cross-tag subtask moves
* @param {string} taskId - Task ID to validate
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @throws {Error} If subtask movement is attempted
*/
function validateSubtaskMove(taskId, sourceTag, targetTag) {
// Parameter validation
if (!taskId || typeof taskId !== 'string') {
throw new DependencyError(
DEPENDENCY_ERROR_CODES.INVALID_TASK_ID,
'Task ID must be a valid string'
);
}
if (!sourceTag || typeof sourceTag !== 'string') {
throw new DependencyError(
DEPENDENCY_ERROR_CODES.INVALID_SOURCE_TAG,
'Source tag must be a valid string'
);
}
if (!targetTag || typeof targetTag !== 'string') {
throw new DependencyError(
DEPENDENCY_ERROR_CODES.INVALID_TARGET_TAG,
'Target tag must be a valid string'
);
}
if (taskId.includes('.')) {
throw new DependencyError(
DEPENDENCY_ERROR_CODES.CANNOT_MOVE_SUBTASK,
`Cannot move subtask ${taskId} directly between tags.
First promote it to a full task using:
task-master remove-subtask --id=${taskId} --convert`,
{
taskId,
sourceTag,
targetTag
}
);
}
}
/**
* Check if a task can be moved with its dependencies
* @param {string} taskId - Task ID to check
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {Array} allTasks - Array of all tasks from all tags
* @returns {Object} Object with canMove boolean and dependentTaskIds array
*/
function canMoveWithDependencies(taskId, sourceTag, targetTag, allTasks) {
// Parameter validation
if (!taskId || typeof taskId !== 'string') {
throw new Error('Task ID must be a valid string');
}
if (!sourceTag || typeof sourceTag !== 'string') {
throw new Error('Source tag must be a valid string');
}
if (!targetTag || typeof targetTag !== 'string') {
throw new Error('Target tag must be a valid string');
}
if (!Array.isArray(allTasks)) {
throw new Error('All tasks parameter must be an array');
}
// Enhanced task lookup to handle subtasks properly
let sourceTask = null;
// Check if it's a subtask ID (e.g., "1.2")
if (taskId.includes('.')) {
const [parentId, subtaskId] = taskId
.split('.')
.map((id) => parseInt(id, 10));
const parentTask = allTasks.find(
(t) => t.id === parentId && t.tag === sourceTag
);
if (
parentTask &&
parentTask.subtasks &&
Array.isArray(parentTask.subtasks)
) {
const subtask = parentTask.subtasks.find((st) => st.id === subtaskId);
if (subtask) {
// Create a copy of the subtask with parent context
sourceTask = {
...subtask,
parentTask: {
id: parentTask.id,
title: parentTask.title,
status: parentTask.status
},
isSubtask: true
};
}
}
} else {
// Regular task lookup - handle both string and numeric IDs
sourceTask = allTasks.find((t) => {
const taskIdNum = parseInt(taskId, 10);
return (t.id === taskIdNum || t.id === taskId) && t.tag === sourceTag;
});
}
if (!sourceTask) {
return {
canMove: false,
dependentTaskIds: [],
conflicts: [],
error: 'Task not found'
};
}
const validation = validateCrossTagMove(
sourceTask,
sourceTag,
targetTag,
allTasks
);
// Fix contradictory logic: return canMove: false when conflicts exist
if (validation.canMove) {
return {
canMove: true,
dependentTaskIds: [],
conflicts: []
};
}
// When conflicts exist, return canMove: false with conflicts and dependent task IDs
const dependentTaskIds = getDependentTaskIds(
[sourceTask],
validation.conflicts,
allTasks
);
return {
canMove: false,
dependentTaskIds,
conflicts: validation.conflicts
};
}
export {
addDependency,
removeDependency,
@@ -1245,5 +1842,15 @@ export {
removeDuplicateDependencies,
cleanupSubtaskDependencies,
ensureAtLeastOneIndependentSubtask,
validateAndFixDependencies
validateAndFixDependencies,
findDependencyTask,
findTaskCrossTagConflicts,
validateCrossTagMove,
findCrossTagDependencies,
getDependentTaskIds,
validateSubtaskMove,
canMoveWithDependencies,
findAllDependenciesRecursively,
DependencyError,
DEPENDENCY_ERROR_CODES
};

View File

@@ -239,6 +239,18 @@
},
"allowed_roles": ["research"],
"supported": true
},
{
"id": "gpt-5",
"swe_score": 0.749,
"cost_per_1m_tokens": {
"input": 5.0,
"output": 20.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000,
"temperature": 1,
"supported": true
}
],
"google": [
@@ -774,6 +786,39 @@
}
],
"ollama": [
{
"id": "gpt-oss:latest",
"swe_score": 0.607,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 128000,
"supported": true
},
{
"id": "gpt-oss:20b",
"swe_score": 0.607,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 128000,
"supported": true
},
{
"id": "gpt-oss:120b",
"swe_score": 0.624,
"cost_per_1m_tokens": {
"input": 0,
"output": 0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 128000,
"supported": true
},
{
"id": "devstral:latest",
"swe_score": 0,

View File

@@ -294,6 +294,11 @@ function listTasks(
});
}
// For compact output, return minimal one-line format
if (outputFormat === 'compact') {
return renderCompactOutput(filteredTasks, withSubtasks);
}
// ... existing code for text output ...
// Calculate status breakdowns as percentages of total
@@ -962,4 +967,98 @@ function generateMarkdownOutput(data, filteredTasks, stats) {
return markdown;
}
/**
* Format dependencies for compact output with truncation and coloring
* @param {Array} dependencies - Array of dependency IDs
* @returns {string} - Formatted dependency string with arrow prefix
*/
function formatCompactDependencies(dependencies) {
if (!dependencies || dependencies.length === 0) {
return '';
}
if (dependencies.length > 5) {
const visible = dependencies.slice(0, 5).join(',');
const remaining = dependencies.length - 5;
return ` → ${chalk.cyan(visible)}${chalk.gray('... (+' + remaining + ' more)')}`;
} else {
return ` → ${chalk.cyan(dependencies.join(','))}`;
}
}
/**
* Format a single task in compact one-line format
* @param {Object} task - Task object
* @param {number} maxTitleLength - Maximum title length before truncation
* @returns {string} - Formatted task line
*/
function formatCompactTask(task, maxTitleLength = 50) {
const status = task.status || 'pending';
const priority = task.priority || 'medium';
const title = truncate(task.title || 'Untitled', maxTitleLength);
// Use colored status from existing function
const coloredStatus = getStatusWithColor(status, true);
// Color priority based on level
const priorityColors = {
high: chalk.red,
medium: chalk.yellow,
low: chalk.gray
};
const priorityColor = priorityColors[priority] || chalk.white;
// Format dependencies using shared helper
const depsText = formatCompactDependencies(task.dependencies);
return `${chalk.cyan(task.id)} ${coloredStatus} ${chalk.white(title)} ${priorityColor('(' + priority + ')')}${depsText}`;
}
/**
* Format a subtask in compact format with indentation
* @param {Object} subtask - Subtask object
* @param {string|number} parentId - Parent task ID
* @param {number} maxTitleLength - Maximum title length before truncation
* @returns {string} - Formatted subtask line
*/
function formatCompactSubtask(subtask, parentId, maxTitleLength = 47) {
const status = subtask.status || 'pending';
const title = truncate(subtask.title || 'Untitled', maxTitleLength);
// Use colored status from existing function
const coloredStatus = getStatusWithColor(status, true);
// Format dependencies using shared helper
const depsText = formatCompactDependencies(subtask.dependencies);
return ` ${chalk.cyan(parentId + '.' + subtask.id)} ${coloredStatus} ${chalk.dim(title)}${depsText}`;
}
/**
* Render complete compact output
* @param {Array} filteredTasks - Tasks to display
* @param {boolean} withSubtasks - Whether to include subtasks
* @returns {void} - Outputs directly to console
*/
function renderCompactOutput(filteredTasks, withSubtasks) {
if (filteredTasks.length === 0) {
console.log('No tasks found');
return;
}
const output = [];
filteredTasks.forEach((task) => {
output.push(formatCompactTask(task));
if (withSubtasks && task.subtasks && task.subtasks.length > 0) {
task.subtasks.forEach((subtask) => {
output.push(formatCompactSubtask(subtask, task.id));
});
}
});
console.log(output.join('\n'));
}
export default listTasks;

View File

@@ -1,7 +1,65 @@
import path from 'path';
import { log, readJSON, writeJSON, setTasksForTag } from '../utils.js';
import { isTaskDependentOn } from '../task-manager.js';
import {
log,
readJSON,
writeJSON,
setTasksForTag,
traverseDependencies
} from '../utils.js';
import generateTaskFiles from './generate-task-files.js';
import {
findCrossTagDependencies,
getDependentTaskIds,
validateSubtaskMove
} from '../dependency-manager.js';
/**
* Find all dependencies recursively for a set of source tasks with depth limiting
* @param {Array} sourceTasks - The source tasks to find dependencies for
* @param {Array} allTasks - All available tasks from all tags
* @param {Object} options - Options object
* @param {number} options.maxDepth - Maximum recursion depth (default: 50)
* @param {boolean} options.includeSelf - Whether to include self-references (default: false)
* @returns {Array} Array of all dependency task IDs
*/
function findAllDependenciesRecursively(sourceTasks, allTasks, options = {}) {
return traverseDependencies(sourceTasks, allTasks, {
...options,
direction: 'forward',
logger: { warn: console.warn }
});
}
/**
* Structured error class for move operations
*/
class MoveTaskError extends Error {
constructor(code, message, data = {}) {
super(message);
this.name = 'MoveTaskError';
this.code = code;
this.data = data;
}
}
/**
* Error codes for move operations
*/
const MOVE_ERROR_CODES = {
CROSS_TAG_DEPENDENCY_CONFLICTS: 'CROSS_TAG_DEPENDENCY_CONFLICTS',
CANNOT_MOVE_SUBTASK: 'CANNOT_MOVE_SUBTASK',
SOURCE_TARGET_TAGS_SAME: 'SOURCE_TARGET_TAGS_SAME',
TASK_NOT_FOUND: 'TASK_NOT_FOUND',
SUBTASK_NOT_FOUND: 'SUBTASK_NOT_FOUND',
PARENT_TASK_NOT_FOUND: 'PARENT_TASK_NOT_FOUND',
PARENT_TASK_NO_SUBTASKS: 'PARENT_TASK_NO_SUBTASKS',
DESTINATION_TASK_NOT_FOUND: 'DESTINATION_TASK_NOT_FOUND',
TASK_ALREADY_EXISTS: 'TASK_ALREADY_EXISTS',
INVALID_TASKS_FILE: 'INVALID_TASKS_FILE',
ID_COUNT_MISMATCH: 'ID_COUNT_MISMATCH',
INVALID_SOURCE_TAG: 'INVALID_SOURCE_TAG',
INVALID_TARGET_TAG: 'INVALID_TARGET_TAG'
};
/**
* Move one or more tasks/subtasks to new positions
@@ -27,7 +85,8 @@ async function moveTask(
const destinationIds = destinationId.split(',').map((id) => id.trim());
if (sourceIds.length !== destinationIds.length) {
throw new Error(
throw new MoveTaskError(
MOVE_ERROR_CODES.ID_COUNT_MISMATCH,
`Number of source IDs (${sourceIds.length}) must match number of destination IDs (${destinationIds.length})`
);
}
@@ -72,7 +131,8 @@ async function moveTask(
// Ensure the tag exists in the raw data
if (!rawData || !rawData[tag] || !Array.isArray(rawData[tag].tasks)) {
throw new Error(
throw new MoveTaskError(
MOVE_ERROR_CODES.INVALID_TASKS_FILE,
`Invalid tasks file or tag "${tag}" not found at ${tasksPath}`
);
}
@@ -137,10 +197,14 @@ function moveSubtaskToSubtask(tasks, sourceId, destinationId) {
const destParentTask = tasks.find((t) => t.id === destParentId);
if (!sourceParentTask) {
throw new Error(`Source parent task with ID ${sourceParentId} not found`);
throw new MoveTaskError(
MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
`Source parent task with ID ${sourceParentId} not found`
);
}
if (!destParentTask) {
throw new Error(
throw new MoveTaskError(
MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
`Destination parent task with ID ${destParentId} not found`
);
}
@@ -158,7 +222,10 @@ function moveSubtaskToSubtask(tasks, sourceId, destinationId) {
(st) => st.id === sourceSubtaskId
);
if (sourceSubtaskIndex === -1) {
throw new Error(`Source subtask ${sourceId} not found`);
throw new MoveTaskError(
MOVE_ERROR_CODES.SUBTASK_NOT_FOUND,
`Source subtask ${sourceId} not found`
);
}
const sourceSubtask = sourceParentTask.subtasks[sourceSubtaskIndex];
@@ -216,10 +283,16 @@ function moveSubtaskToTask(tasks, sourceId, destinationId) {
const sourceParentTask = tasks.find((t) => t.id === sourceParentId);
if (!sourceParentTask) {
throw new Error(`Source parent task with ID ${sourceParentId} not found`);
throw new MoveTaskError(
MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
`Source parent task with ID ${sourceParentId} not found`
);
}
if (!sourceParentTask.subtasks) {
throw new Error(`Source parent task ${sourceParentId} has no subtasks`);
throw new MoveTaskError(
MOVE_ERROR_CODES.PARENT_TASK_NO_SUBTASKS,
`Source parent task ${sourceParentId} has no subtasks`
);
}
// Find source subtask
@@ -227,7 +300,10 @@ function moveSubtaskToTask(tasks, sourceId, destinationId) {
(st) => st.id === sourceSubtaskId
);
if (sourceSubtaskIndex === -1) {
throw new Error(`Source subtask ${sourceId} not found`);
throw new MoveTaskError(
MOVE_ERROR_CODES.SUBTASK_NOT_FOUND,
`Source subtask ${sourceId} not found`
);
}
const sourceSubtask = sourceParentTask.subtasks[sourceSubtaskIndex];
@@ -235,7 +311,8 @@ function moveSubtaskToTask(tasks, sourceId, destinationId) {
// Check if destination task exists
const existingDestTask = tasks.find((t) => t.id === destTaskId);
if (existingDestTask) {
throw new Error(
throw new MoveTaskError(
MOVE_ERROR_CODES.TASK_ALREADY_EXISTS,
`Cannot move to existing task ID ${destTaskId}. Choose a different ID or use subtask destination.`
);
}
@@ -282,10 +359,14 @@ function moveTaskToSubtask(tasks, sourceId, destinationId) {
const destParentTask = tasks.find((t) => t.id === destParentId);
if (sourceTaskIndex === -1) {
throw new Error(`Source task with ID ${sourceTaskId} not found`);
throw new MoveTaskError(
MOVE_ERROR_CODES.TASK_NOT_FOUND,
`Source task with ID ${sourceTaskId} not found`
);
}
if (!destParentTask) {
throw new Error(
throw new MoveTaskError(
MOVE_ERROR_CODES.PARENT_TASK_NOT_FOUND,
`Destination parent task with ID ${destParentId} not found`
);
}
@@ -340,7 +421,10 @@ function moveTaskToTask(tasks, sourceId, destinationId) {
// Find source task
const sourceTaskIndex = tasks.findIndex((t) => t.id === sourceTaskId);
if (sourceTaskIndex === -1) {
throw new Error(`Source task with ID ${sourceTaskId} not found`);
throw new MoveTaskError(
MOVE_ERROR_CODES.TASK_NOT_FOUND,
`Source task with ID ${sourceTaskId} not found`
);
}
const sourceTask = tasks[sourceTaskIndex];
@@ -353,7 +437,8 @@ function moveTaskToTask(tasks, sourceId, destinationId) {
const destTask = tasks[destTaskIndex];
// For now, throw an error to avoid accidental overwrites
throw new Error(
throw new MoveTaskError(
MOVE_ERROR_CODES.TASK_ALREADY_EXISTS,
`Task with ID ${destTaskId} already exists. Use a different destination ID.`
);
} else {
@@ -478,4 +563,434 @@ function moveTaskToNewId(tasks, sourceTaskIndex, sourceTask, destTaskId) {
};
}
/**
* Get all tasks from all tags with tag information
* @param {Object} rawData - The raw tagged data object
* @returns {Array} A flat array of all task objects with tag property
*/
function getAllTasksWithTags(rawData) {
let allTasks = [];
for (const tagName in rawData) {
if (
Object.prototype.hasOwnProperty.call(rawData, tagName) &&
rawData[tagName] &&
Array.isArray(rawData[tagName].tasks)
) {
const tasksWithTag = rawData[tagName].tasks.map((task) => ({
...task,
tag: tagName
}));
allTasks = allTasks.concat(tasksWithTag);
}
}
return allTasks;
}
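// Editorial sketch (not part of the diff): getAllTasksWithTags flattens every
// tag's task list and stamps each task with its tag name. The data below is
// made up purely for illustration.
const sampleRawData = {
  master: { tasks: [{ id: 1, title: 'Setup' }] },
  backlog: { tasks: [{ id: 1, title: 'Research' }, { id: 2, title: 'Spike' }] }
};
const flattened = getAllTasksWithTags(sampleRawData);
console.log(flattened.length); // 3
console.log(flattened[1]);     // { id: 1, title: 'Research', tag: 'backlog' }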
/**
* Validate move operation parameters and data
* @param {string} tasksPath - Path to tasks.json file
* @param {Array} taskIds - Array of task IDs to move
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {Object} context - Context object
* @returns {Object} Validation result with rawData and sourceTasks
*/
async function validateMove(tasksPath, taskIds, sourceTag, targetTag, context) {
const { projectRoot } = context;
// Read the raw data without tag resolution to preserve tagged structure
let rawData = readJSON(tasksPath, projectRoot, sourceTag);
// Handle the case where readJSON returns resolved data with _rawTaggedData
if (rawData && rawData._rawTaggedData) {
rawData = rawData._rawTaggedData;
}
// Validate source tag exists
if (
!rawData ||
!rawData[sourceTag] ||
!Array.isArray(rawData[sourceTag].tasks)
) {
throw new MoveTaskError(
MOVE_ERROR_CODES.INVALID_SOURCE_TAG,
`Source tag "${sourceTag}" not found or invalid`
);
}
// Create target tag if it doesn't exist
if (!rawData[targetTag]) {
rawData[targetTag] = { tasks: [] };
log('info', `Created new tag "${targetTag}"`);
}
// Normalize all IDs to strings once for consistent comparison
const normalizedSearchIds = taskIds.map((id) => String(id));
const sourceTasks = rawData[sourceTag].tasks.filter((t) => {
const normalizedTaskId = String(t.id);
return normalizedSearchIds.includes(normalizedTaskId);
});
// Validate subtask movement
taskIds.forEach((taskId) => {
validateSubtaskMove(taskId, sourceTag, targetTag);
});
return { rawData, sourceTasks };
}
/**
* Load and prepare task data for move operation
* @param {Object} validation - Validation result from validateMove
* @returns {Object} Prepared data with rawData, sourceTasks, and allTasks
*/
async function prepareTaskData(validation) {
const { rawData, sourceTasks } = validation;
// Get all tasks for validation
const allTasks = getAllTasksWithTags(rawData);
return { rawData, sourceTasks, allTasks };
}
/**
* Resolve dependencies and determine tasks to move
* @param {Array} sourceTasks - Source tasks to move
* @param {Array} allTasks - All available tasks from all tags
* @param {Object} options - Move options
* @param {Array} taskIds - Original task IDs
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @returns {Object} Tasks to move and dependency resolution info
*/
async function resolveDependencies(
sourceTasks,
allTasks,
options,
taskIds,
sourceTag,
targetTag
) {
const { withDependencies = false, ignoreDependencies = false } = options;
// Handle --with-dependencies flag first (regardless of cross-tag dependencies)
if (withDependencies) {
// Move dependent tasks along with main tasks
// Find ALL dependencies recursively within the same tag
const allDependentTaskIds = findAllDependenciesRecursively(
sourceTasks,
allTasks,
{ maxDepth: 100, includeSelf: false }
);
const allTaskIdsToMove = [...new Set([...taskIds, ...allDependentTaskIds])];
log(
'info',
`Moving ${allTaskIdsToMove.length} tasks (including dependencies): ${allTaskIdsToMove.join(', ')}`
);
return {
tasksToMove: allTaskIdsToMove,
dependencyResolution: {
type: 'with-dependencies',
dependentTasks: allDependentTaskIds
}
};
}
// Find cross-tag dependencies (these shouldn't exist since dependencies are only within tags)
const crossTagDependencies = findCrossTagDependencies(
sourceTasks,
sourceTag,
targetTag,
allTasks
);
if (crossTagDependencies.length > 0) {
if (ignoreDependencies) {
// Break cross-tag dependencies (edge case - shouldn't normally happen)
sourceTasks.forEach((task) => {
task.dependencies = task.dependencies.filter((depId) => {
// Handle both task IDs and subtask IDs (e.g., "1.2")
let depTask = null;
if (typeof depId === 'string' && depId.includes('.')) {
// It's a subtask ID - extract parent task ID and find the parent task
const [parentId, subtaskId] = depId
.split('.')
.map((id) => parseInt(id, 10));
depTask = allTasks.find((t) => t.id === parentId);
} else {
// It's a regular task ID - normalize to number for comparison
const normalizedDepId =
typeof depId === 'string' ? parseInt(depId, 10) : depId;
depTask = allTasks.find((t) => t.id === normalizedDepId);
}
return !depTask || depTask.tag === targetTag;
});
});
log(
'warn',
`Removed ${crossTagDependencies.length} cross-tag dependencies`
);
return {
tasksToMove: taskIds,
dependencyResolution: {
type: 'ignored-dependencies',
conflicts: crossTagDependencies
}
};
} else {
// Block move and show error
throw new MoveTaskError(
MOVE_ERROR_CODES.CROSS_TAG_DEPENDENCY_CONFLICTS,
`Cannot move tasks: ${crossTagDependencies.length} cross-tag dependency conflicts found`,
{
conflicts: crossTagDependencies,
sourceTag,
targetTag,
taskIds
}
);
}
}
return {
tasksToMove: taskIds,
dependencyResolution: { type: 'no-conflicts' }
};
}
/**
* Execute the actual move operation
* @param {Array} tasksToMove - Array of task IDs to move
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {Object} rawData - Raw data object
* @param {Object} context - Context object
* @param {string} tasksPath - Path to tasks.json file
* @returns {Object} Move operation result
*/
async function executeMoveOperation(
tasksToMove,
sourceTag,
targetTag,
rawData,
context,
tasksPath
) {
const { projectRoot } = context;
const movedTasks = [];
// Move each task from source to target tag
for (const taskId of tasksToMove) {
// Normalize taskId to number for comparison
const normalizedTaskId =
typeof taskId === 'string' ? parseInt(taskId, 10) : taskId;
const sourceTaskIndex = rawData[sourceTag].tasks.findIndex(
(t) => t.id === normalizedTaskId
);
if (sourceTaskIndex === -1) {
throw new MoveTaskError(
MOVE_ERROR_CODES.TASK_NOT_FOUND,
`Task ${taskId} not found in source tag "${sourceTag}"`
);
}
const taskToMove = rawData[sourceTag].tasks[sourceTaskIndex];
// Check for ID conflicts in target tag
const existingTaskIndex = rawData[targetTag].tasks.findIndex(
(t) => t.id === normalizedTaskId
);
if (existingTaskIndex !== -1) {
throw new MoveTaskError(
MOVE_ERROR_CODES.TASK_ALREADY_EXISTS,
`Task ${taskId} already exists in target tag "${targetTag}"`
);
}
// Remove from source tag
rawData[sourceTag].tasks.splice(sourceTaskIndex, 1);
// Preserve task metadata and add to target tag
const taskWithPreservedMetadata = preserveTaskMetadata(
taskToMove,
sourceTag,
targetTag
);
rawData[targetTag].tasks.push(taskWithPreservedMetadata);
movedTasks.push({
id: taskId,
fromTag: sourceTag,
toTag: targetTag
});
log('info', `Moved task ${taskId} from "${sourceTag}" to "${targetTag}"`);
}
return { rawData, movedTasks };
}
/**
* Finalize the move operation by saving data and returning result
* @param {Object} moveResult - Result from executeMoveOperation
* @param {string} tasksPath - Path to tasks.json file
* @param {Object} context - Context object
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @returns {Object} Final result object
*/
async function finalizeMove(
moveResult,
tasksPath,
context,
sourceTag,
targetTag
) {
const { projectRoot } = context;
const { rawData, movedTasks } = moveResult;
// Write the updated data
writeJSON(tasksPath, rawData, projectRoot, null);
return {
message: `Successfully moved ${movedTasks.length} tasks from "${sourceTag}" to "${targetTag}"`,
movedTasks
};
}
/**
* Move tasks between different tags with dependency handling
* @param {string} tasksPath - Path to tasks.json file
* @param {Array} taskIds - Array of task IDs to move
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {Object} options - Move options
* @param {boolean} options.withDependencies - Move dependent tasks along with main task
* @param {boolean} options.ignoreDependencies - Break cross-tag dependencies during move
* @param {Object} context - Context object containing projectRoot and tag information
* @returns {Object} Result object with moved task details
*/
async function moveTasksBetweenTags(
tasksPath,
taskIds,
sourceTag,
targetTag,
options = {},
context = {}
) {
// 1. Validation phase
const validation = await validateMove(
tasksPath,
taskIds,
sourceTag,
targetTag,
context
);
// 2. Load and prepare data
const { rawData, sourceTasks, allTasks } = await prepareTaskData(validation);
// 3. Handle dependencies
const { tasksToMove } = await resolveDependencies(
sourceTasks,
allTasks,
options,
taskIds,
sourceTag,
targetTag
);
// 4. Execute move
const moveResult = await executeMoveOperation(
tasksToMove,
sourceTag,
targetTag,
rawData,
context,
tasksPath
);
// 5. Save and return
return await finalizeMove(
moveResult,
tasksPath,
context,
sourceTag,
targetTag
);
}
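// Editorial usage sketch (paths, tags and IDs are illustrative; call from an
// async context). The options mirror the CLI flags: withDependencies pulls the
// recursive dependency chain along, ignoreDependencies strips cross-tag links.
const result = await moveTasksBetweenTags(
  '.taskmaster/tasks/tasks.json',
  ['2', 5], // string or numeric IDs are both accepted; they are normalized internally
  'backlog',
  'in-progress',
  { withDependencies: true },
  { projectRoot: process.cwd() }
);
console.log(result.message);    // e.g. Successfully moved 3 tasks from "backlog" to "in-progress"
console.log(result.movedTasks); // e.g. [{ id: '2', fromTag: 'backlog', toTag: 'in-progress' }, ...]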
/**
* Detect ID conflicts in target tag
* @param {Array} taskIds - Array of task IDs to check
* @param {string} targetTag - Target tag name
* @param {Object} rawData - Raw data object
* @returns {Array} Array of conflicting task IDs
*/
function detectIdConflicts(taskIds, targetTag, rawData) {
const conflicts = [];
if (!rawData[targetTag] || !Array.isArray(rawData[targetTag].tasks)) {
return conflicts;
}
taskIds.forEach((taskId) => {
// Normalize taskId to number for comparison
const normalizedTaskId =
typeof taskId === 'string' ? parseInt(taskId, 10) : taskId;
const existingTask = rawData[targetTag].tasks.find(
(t) => t.id === normalizedTaskId
);
if (existingTask) {
conflicts.push(taskId);
}
});
return conflicts;
}
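// Editorial sketch: detectIdConflicts only inspects the target tag, so it can
// serve as a pre-flight check before moveTasksBetweenTags. Data is illustrative.
const raw = { 'in-progress': { tasks: [{ id: 2 }, { id: 7 }] } };
detectIdConflicts([1, '2', 3], 'in-progress', raw); // => ['2'] (ID 2 already exists in the target tag)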
/**
* Preserve task metadata during cross-tag moves
* @param {Object} task - Task object
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @returns {Object} Task object with preserved metadata
*/
function preserveTaskMetadata(task, sourceTag, targetTag) {
// Update the tag property to reflect the new location
task.tag = targetTag;
// Add move history to task metadata
if (!task.metadata) {
task.metadata = {};
}
if (!task.metadata.moveHistory) {
task.metadata.moveHistory = [];
}
task.metadata.moveHistory.push({
fromTag: sourceTag,
toTag: targetTag,
timestamp: new Date().toISOString()
});
return task;
}
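// Editorial sketch: the task object is mutated in place; each cross-tag move
// appends an entry to metadata.moveHistory.
const moved = preserveTaskMetadata({ id: 3, title: 'API work' }, 'backlog', 'in-progress');
console.log(moved.tag);                     // 'in-progress'
console.log(moved.metadata.moveHistory[0]); // { fromTag: 'backlog', toTag: 'in-progress', timestamp: '2025-...' }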
export default moveTask;
export {
moveTasksBetweenTags,
getAllTasksWithTags,
detectIdConflicts,
preserveTaskMetadata,
MoveTaskError,
MOVE_ERROR_CODES
};

View File

@@ -7,7 +7,15 @@
function taskExists(tasks, taskId) {
// Handle subtask IDs (e.g., "1.2")
if (typeof taskId === 'string' && taskId.includes('.')) {
const [parentIdStr, subtaskIdStr] = taskId.split('.');
const parts = taskId.split('.');
// Validate that it's a proper subtask format (parentId.subtaskId)
if (parts.length !== 2 || !parts[0] || !parts[1]) {
// Invalid format - treat as regular task ID
const id = parseInt(taskId, 10);
return tasks.some((t) => t.id === id);
}
const [parentIdStr, subtaskIdStr] = parts;
const parentId = parseInt(parentIdStr, 10);
const subtaskId = parseInt(subtaskIdStr, 10);
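// Editorial sketch of the stricter check above (task list is made up): anything
// that is not exactly "parentId.subtaskId" falls through to a plain task lookup.
const sampleTasks = [{ id: 2, subtasks: [{ id: 1 }] }];
taskExists(sampleTasks, '2.1.3'); // 3 parts -> parseInt('2.1.3', 10) === 2 -> matches task 2 -> true
taskExists(sampleTasks, '5.');    // empty subtask part -> parseInt('5.', 10) === 5 -> no task 5 -> false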

View File

@@ -15,7 +15,8 @@ import {
findTaskById,
readJSON,
truncate,
isSilentMode
isSilentMode,
formatTaskId
} from './utils.js';
import fs from 'fs';
import {
@@ -405,9 +406,44 @@ function formatDependenciesWithStatus(
// Check if it's already a fully qualified subtask ID (like "22.1")
if (depIdStr.includes('.')) {
const [parentId, subtaskId] = depIdStr
.split('.')
.map((id) => parseInt(id, 10));
const parts = depIdStr.split('.');
// Validate that it's a proper subtask format (parentId.subtaskId)
if (parts.length !== 2 || !parts[0] || !parts[1]) {
// Invalid format - treat as regular dependency
const numericDepId =
typeof depId === 'string' ? parseInt(depId, 10) : depId;
const depTaskResult = findTaskById(
allTasks,
numericDepId,
complexityReport
);
const depTask = depTaskResult.task;
if (!depTask) {
return forConsole
? chalk.red(`${depIdStr} (Not found)`)
: `${depIdStr} (Not found)`;
}
const status = depTask.status || 'pending';
const isDone =
status.toLowerCase() === 'done' ||
status.toLowerCase() === 'completed';
const isInProgress = status.toLowerCase() === 'in-progress';
if (forConsole) {
if (isDone) {
return chalk.green.bold(depIdStr);
} else if (isInProgress) {
return chalk.yellow.bold(depIdStr);
} else {
return chalk.red.bold(depIdStr);
}
}
return depIdStr;
}
const [parentId, subtaskId] = parts.map((id) => parseInt(id, 10));
// Find the parent task
const parentTask = allTasks.find((t) => t.id === parentId);
@@ -2797,5 +2833,176 @@ export {
warnLoadingIndicator,
infoLoadingIndicator,
displayContextAnalysis,
displayCurrentTagIndicator
displayCurrentTagIndicator,
formatTaskIdForDisplay
};
/**
* Display enhanced error message for cross-tag dependency conflicts
* @param {Array} conflicts - Array of cross-tag dependency conflicts
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {string} sourceIds - Source task IDs (comma-separated)
*/
export function displayCrossTagDependencyError(
conflicts,
sourceTag,
targetTag,
sourceIds
) {
console.log(
chalk.red(`\n❌ Cannot move tasks from "${sourceTag}" to "${targetTag}"`)
);
console.log(chalk.yellow(`\nCross-tag dependency conflicts detected:`));
if (conflicts.length > 0) {
conflicts.forEach((conflict) => {
console.log(`${conflict.message}`);
});
}
console.log(chalk.cyan(`\nResolution options:`));
console.log(
` 1. Move with dependencies: task-master move --from=${sourceIds} --from-tag=${sourceTag} --to-tag=${targetTag} --with-dependencies`
);
console.log(
` 2. Break dependencies: task-master move --from=${sourceIds} --from-tag=${sourceTag} --to-tag=${targetTag} --ignore-dependencies`
);
console.log(
` 3. Validate and fix dependencies: task-master validate-dependencies && task-master fix-dependencies`
);
if (conflicts.length > 0) {
console.log(
` 4. Move dependencies first: task-master move --from=${conflicts.map((c) => c.dependencyId).join(',')} --from-tag=${conflicts[0].dependencyTag} --to-tag=${targetTag}`
);
}
console.log(
` 5. Force move (may break dependencies): task-master move --from=${sourceIds} --from-tag=${sourceTag} --to-tag=${targetTag} --force`
);
}
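// Editorial usage sketch; the conflict shape ({ message, dependencyId, dependencyTag })
// is inferred from how each entry is read above.
displayCrossTagDependencyError(
  [{ message: ' • Task 5 depends on 12 (in backlog)', dependencyId: 12, dependencyTag: 'backlog' }],
  'in-progress',
  'done',
  '5'
);
// Prints the ❌ header, the conflict line, and the numbered resolution commands.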
/**
* Format a task ID for display, labeling edge cases (null, undefined, empty) explicitly
* Builds on the existing formatTaskId utility with a user-friendly fallback
* @param {*} taskId - The task ID to format
* @returns {string} Formatted task ID for display
*/
function formatTaskIdForDisplay(taskId) {
if (taskId === null) return 'null';
if (taskId === undefined) return 'undefined';
if (taskId === '') return '(empty)';
// Use existing formatTaskId for normal cases, with fallback to 'unknown'
return formatTaskId(taskId) || 'unknown';
}
/**
* Display enhanced error message for subtask movement restriction
* @param {string} taskId - The subtask ID that cannot be moved
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
*/
export function displaySubtaskMoveError(taskId, sourceTag, targetTag) {
// Handle null/undefined taskId but preserve the actual value for display
const displayTaskId = formatTaskIdForDisplay(taskId);
// Safe taskId for operations that need a valid string
const safeTaskId = taskId || 'unknown';
// Validate taskId format before splitting
let parentId = safeTaskId;
if (safeTaskId.includes('.')) {
const parts = safeTaskId.split('.');
// Check if it's a valid subtask format (parentId.subtaskId)
if (parts.length === 2 && parts[0] && parts[1]) {
parentId = parts[0];
} else {
// Invalid format - log warning and use the original taskId
console.log(
chalk.yellow(
`\n⚠️ Warning: Unexpected taskId format "${safeTaskId}". Using as-is for command suggestions.`
)
);
parentId = safeTaskId;
}
}
console.log(
chalk.red(`\n❌ Cannot move subtask ${displayTaskId} directly between tags`)
);
console.log(chalk.yellow(`\nSubtask movement restriction:`));
console.log(` • Subtasks cannot be moved directly between tags`);
console.log(` • They must be promoted to full tasks first`);
console.log(` • Source tag: "${sourceTag}"`);
console.log(` • Target tag: "${targetTag}"`);
console.log(chalk.cyan(`\nResolution options:`));
console.log(
` 1. Promote subtask to full task: task-master remove-subtask --id=${displayTaskId} --convert`
);
console.log(
` 2. Then move the promoted task: task-master move --from=${parentId} --from-tag=${sourceTag} --to-tag=${targetTag}`
);
console.log(
` 3. Or move the parent task with all subtasks: task-master move --from=${parentId} --from-tag=${sourceTag} --to-tag=${targetTag} --with-dependencies`
);
}
/**
* Display enhanced error message for invalid tag combinations
* @param {string} sourceTag - Source tag name
* @param {string} targetTag - Target tag name
* @param {string} reason - Reason for the error
*/
export function displayInvalidTagCombinationError(
sourceTag,
targetTag,
reason
) {
console.log(chalk.red(`\n❌ Invalid tag combination`));
console.log(chalk.yellow(`\nError details:`));
console.log(` • Source tag: "${sourceTag}"`);
console.log(` • Target tag: "${targetTag}"`);
console.log(` • Reason: ${reason}`);
console.log(chalk.cyan(`\nResolution options:`));
console.log(` 1. Use different tags for cross-tag moves`);
console.log(
` 2. Use within-tag move: task-master move --from=<id> --to=<id> --tag=${sourceTag}`
);
console.log(` 3. Check available tags: task-master tags`);
}
/**
* Display helpful hints for dependency validation commands
* @param {string} context - Context for the hints (e.g., 'before-move', 'after-error')
*/
export function displayDependencyValidationHints(context = 'general') {
const hints = {
'before-move': [
'💡 Tip: Run "task-master validate-dependencies" to check for dependency issues before moving tasks',
'💡 Tip: Use "task-master fix-dependencies" to automatically resolve common dependency problems',
'💡 Tip: Consider using --with-dependencies flag to move dependent tasks together'
],
'after-error': [
'🔧 Quick fix: Run "task-master validate-dependencies" to identify specific issues',
'🔧 Quick fix: Use "task-master fix-dependencies" to automatically resolve problems',
'🔧 Quick fix: Check "task-master show <id>" to see task dependencies before moving'
],
general: [
'💡 Use "task-master validate-dependencies" to check for dependency issues',
'💡 Use "task-master fix-dependencies" to automatically resolve problems',
'💡 Use "task-master show <id>" to view task dependencies',
'💡 Use --with-dependencies flag to move dependent tasks together'
]
};
const relevantHints = hints[context] || hints.general;
console.log(chalk.cyan(`\nHelpful hints:`));
// Convert to Set to ensure only unique hints are displayed
const uniqueHints = new Set(relevantHints);
uniqueHints.forEach((hint) => {
console.log(` ${hint}`);
});
}
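// Editorial sketch: the context argument picks one of the hint sets above;
// unrecognized values fall back to the 'general' list.
displayDependencyValidationHints('before-move'); // three 💡 tips
displayDependencyValidationHints('unknown');     // falls back to the general hints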

View File

@@ -1132,6 +1132,139 @@ function findCycles(
return cyclesToBreak;
}
/**
* Unified dependency traversal utility that supports both forward and reverse dependency traversal
* @param {Array} sourceTasks - Array of source tasks to start traversal from
* @param {Array} allTasks - Array of all tasks to search within
* @param {Object} options - Configuration options
* @param {number} options.maxDepth - Maximum recursion depth (default: 50)
* @param {boolean} options.includeSelf - Whether to include self-references (default: false)
* @param {'forward'|'reverse'} options.direction - Direction of traversal (default: 'forward')
* @param {Function} options.logger - Optional logger function for warnings
* @returns {Array} Array of all dependency task IDs found through traversal
*/
function traverseDependencies(sourceTasks, allTasks, options = {}) {
const {
maxDepth = 50,
includeSelf = false,
direction = 'forward',
logger = null
} = options;
const dependentTaskIds = new Set();
const processedIds = new Set();
// Helper function to normalize dependency IDs while preserving subtask format
function normalizeDependencyId(depId) {
if (typeof depId === 'string') {
// Preserve string format for subtask IDs like "1.2"
if (depId.includes('.')) {
return depId;
}
// Convert simple string numbers to numbers for consistency
const parsed = parseInt(depId, 10);
return isNaN(parsed) ? depId : parsed;
}
return depId;
}
// Helper function for forward dependency traversal
function findForwardDependencies(taskId, currentDepth = 0) {
// Check depth limit
if (currentDepth >= maxDepth) {
const warnMsg = `Maximum recursion depth (${maxDepth}) reached for task ${taskId}`;
if (logger && typeof logger.warn === 'function') {
logger.warn(warnMsg);
} else if (typeof log !== 'undefined' && log.warn) {
log.warn(warnMsg);
} else {
console.warn(warnMsg);
}
return;
}
if (processedIds.has(taskId)) {
return; // Avoid infinite loops
}
processedIds.add(taskId);
const task = allTasks.find((t) => t.id === taskId);
if (!task || !Array.isArray(task.dependencies)) {
return;
}
task.dependencies.forEach((depId) => {
const normalizedDepId = normalizeDependencyId(depId);
// Skip invalid dependencies and optionally skip self-references
if (
normalizedDepId == null ||
(!includeSelf && normalizedDepId === taskId)
) {
return;
}
dependentTaskIds.add(normalizedDepId);
// Recursively find dependencies of this dependency
findForwardDependencies(normalizedDepId, currentDepth + 1);
});
}
// Helper function for reverse dependency traversal
function findReverseDependencies(taskId, currentDepth = 0) {
// Check depth limit
if (currentDepth >= maxDepth) {
const warnMsg = `Maximum recursion depth (${maxDepth}) reached for task ${taskId}`;
if (logger && typeof logger.warn === 'function') {
logger.warn(warnMsg);
} else if (typeof log !== 'undefined' && log.warn) {
log.warn(warnMsg);
} else {
console.warn(warnMsg);
}
return;
}
if (processedIds.has(taskId)) {
return; // Avoid infinite loops
}
processedIds.add(taskId);
allTasks.forEach((task) => {
if (task.dependencies && Array.isArray(task.dependencies)) {
const dependsOnTaskId = task.dependencies.some((depId) => {
const normalizedDepId = normalizeDependencyId(depId);
return normalizedDepId === taskId;
});
if (dependsOnTaskId) {
// Skip invalid dependencies and optionally skip self-references
if (task.id == null || (!includeSelf && task.id === taskId)) {
return;
}
dependentTaskIds.add(task.id);
// Recursively find tasks that depend on this task
findReverseDependencies(task.id, currentDepth + 1);
}
}
});
}
// Choose traversal function based on direction
const traversalFunc =
direction === 'reverse' ? findReverseDependencies : findForwardDependencies;
// Start traversal from each source task
sourceTasks.forEach((sourceTask) => {
if (sourceTask && sourceTask.id) {
traversalFunc(sourceTask.id);
}
});
return Array.from(dependentTaskIds);
}
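// Editorial sketch with made-up tasks: forward traversal collects what a task
// depends on (transitively), reverse traversal collects what depends on it.
const sampleAll = [
  { id: 1, dependencies: [2] },
  { id: 2, dependencies: [3] },
  { id: 3, dependencies: [] }
];
traverseDependencies([sampleAll[0]], sampleAll);                           // => [2, 3]
traverseDependencies([sampleAll[2]], sampleAll, { direction: 'reverse' }); // => [2, 1]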
/**
* Convert a string from camelCase to kebab-case
* @param {string} str - The string to convert
@@ -1430,6 +1563,20 @@ function ensureTagMetadata(tagObj, opts = {}) {
return tagObj;
}
/**
* Strip ANSI color codes from a string
* Useful for testing, logging to files, or when clean text output is needed
* @param {string} text - The text that may contain ANSI color codes
* @returns {string} - The text with ANSI color codes removed
*/
function stripAnsiCodes(text) {
if (typeof text !== 'string') {
return text;
}
// Remove ANSI escape sequences (color codes, cursor movements, etc.)
return text.replace(/\x1b\[[0-9;]*m/g, '');
}
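// Editorial sketch: handy in tests that assert on chalk-colored output.
stripAnsiCodes('\x1b[31mError:\x1b[0m failed'); // => 'Error: failed'
stripAnsiCodes(42);                             // non-strings pass through unchanged => 42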
// Export all utility functions and configuration
export {
LOG_LEVELS,
@@ -1445,6 +1592,7 @@ export {
truncate,
isEmpty,
findCycles,
traverseDependencies,
toKebabCase,
detectCamelCaseFlags,
disableSilentMode,
@@ -1467,5 +1615,6 @@ export {
markMigrationForNotice,
flattenTasksWithSubtasks,
ensureTagMetadata,
stripAnsiCodes,
normalizeTaskIds
};

View File

@@ -61,8 +61,11 @@ export class BaseAIProvider {
) {
throw new Error('Temperature must be between 0 and 1');
}
if (params.maxTokens !== undefined && params.maxTokens <= 0) {
throw new Error('maxTokens must be greater than 0');
if (params.maxTokens !== undefined) {
const maxTokens = Number(params.maxTokens);
if (!Number.isFinite(maxTokens) || maxTokens <= 0) {
throw new Error('maxTokens must be a finite number greater than 0');
}
}
}
@@ -122,6 +125,37 @@ export class BaseAIProvider {
throw new Error('getRequiredApiKeyName must be implemented by provider');
}
/**
* Determines if a model requires max_completion_tokens instead of maxTokens
* Can be overridden by providers to specify their model requirements
* @param {string} modelId - The model ID to check
* @returns {boolean} True if the model requires max_completion_tokens
*/
requiresMaxCompletionTokens(modelId) {
return false; // Default behavior - most models use maxTokens
}
/**
* Prepares token limit parameter based on model requirements
* @param {string} modelId - The model ID
* @param {number} maxTokens - The maximum tokens value
* @returns {object} Object with either maxTokens or max_completion_tokens
*/
prepareTokenParam(modelId, maxTokens) {
if (maxTokens === undefined) {
return {};
}
// Ensure maxTokens is an integer
const tokenValue = Math.floor(Number(maxTokens));
if (this.requiresMaxCompletionTokens(modelId)) {
return { max_completion_tokens: tokenValue };
} else {
return { maxTokens: tokenValue };
}
}
/**
* Generates text using the provider's model
*/
@@ -139,7 +173,7 @@ export class BaseAIProvider {
const result = await generateText({
model: client(params.modelId),
messages: params.messages,
maxTokens: params.maxTokens,
...this.prepareTokenParam(params.modelId, params.maxTokens),
temperature: params.temperature
});
@@ -175,7 +209,7 @@ export class BaseAIProvider {
const stream = await streamText({
model: client(params.modelId),
messages: params.messages,
maxTokens: params.maxTokens,
...this.prepareTokenParam(params.modelId, params.maxTokens),
temperature: params.temperature
});
@@ -216,7 +250,7 @@ export class BaseAIProvider {
messages: params.messages,
schema: zodSchema(params.schema),
mode: params.mode || 'auto',
maxTokens: params.maxTokens,
...this.prepareTokenParam(params.modelId, params.maxTokens),
temperature: params.temperature
});
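// Editorial sketch of the new token handling (assumes the base constructor takes
// no required arguments, which this diff does not show). A hypothetical provider
// overrides requiresMaxCompletionTokens for its newer models; the tightened
// validation above also rejects NaN/Infinity token limits.
class ExampleProvider extends BaseAIProvider {
  requiresMaxCompletionTokens(modelId) {
    return typeof modelId === 'string' && modelId.startsWith('example-next');
  }
}
const provider = new ExampleProvider();
provider.prepareTokenParam('example-next-1', 2048);       // => { max_completion_tokens: 2048 }
provider.prepareTokenParam('example-classic', 2048.9);    // => { maxTokens: 2048 } (floored)
provider.prepareTokenParam('example-classic', undefined); // => {} (parameter omitted)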

View File

@@ -20,6 +20,16 @@ export class OpenAIProvider extends BaseAIProvider {
return 'OPENAI_API_KEY';
}
/**
* Determines if a model requires max_completion_tokens instead of maxTokens
* GPT-5 models require max_completion_tokens parameter
* @param {string} modelId - The model ID to check
* @returns {boolean} True if the model requires max_completion_tokens
*/
requiresMaxCompletionTokens(modelId) {
return modelId && modelId.startsWith('gpt-5');
}
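// Editorial sketch (constructor arguments are not shown in this diff, so the
// instantiation below is illustrative):
const openai = new OpenAIProvider();
openai.requiresMaxCompletionTokens('gpt-5-mini'); // true  - matches the "gpt-5" prefix
openai.requiresMaxCompletionTokens('gpt-4o');     // false - keeps the maxTokens parameter
openai.prepareTokenParam('gpt-5-mini', 2048);     // { max_completion_tokens: 2048 } via BaseAIProvider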
/**
* Creates and returns an OpenAI client instance.
* @param {object} params - Parameters for client initialization

View File

@@ -0,0 +1,496 @@
import { jest } from '@jest/globals';
import { execSync } from 'child_process';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { fileURLToPath } from 'url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
describe('Complex Cross-Tag Scenarios', () => {
let testDir;
let tasksPath;
// Define binPath once for the entire test suite
const binPath = path.join(
__dirname,
'..',
'..',
'..',
'bin',
'task-master.js'
);
beforeEach(() => {
// Create test directory
testDir = fs.mkdtempSync(path.join(__dirname, 'test-'));
process.chdir(testDir);
// Initialize task-master
execSync(`node ${binPath} init --yes`, {
stdio: 'pipe'
});
// Create test tasks with complex dependencies in the correct tagged format
const complexTasks = {
master: {
tasks: [
{
id: 1,
title: 'Setup Project',
description: 'Initialize the project structure',
status: 'done',
priority: 'high',
dependencies: [],
details: 'Create basic project structure',
testStrategy: 'Verify project structure exists',
subtasks: []
},
{
id: 2,
title: 'Database Schema',
description: 'Design and implement database schema',
status: 'pending',
priority: 'high',
dependencies: [1],
details: 'Create database tables and relationships',
testStrategy: 'Run database migrations',
subtasks: [
{
id: '2.1',
title: 'User Table',
description: 'Create user table',
status: 'pending',
priority: 'medium',
dependencies: [],
details: 'Design user table schema',
testStrategy: 'Test user creation'
},
{
id: '2.2',
title: 'Product Table',
description: 'Create product table',
status: 'pending',
priority: 'medium',
dependencies: ['2.1'],
details: 'Design product table schema',
testStrategy: 'Test product creation'
}
]
},
{
id: 3,
title: 'API Development',
description: 'Develop REST API endpoints',
status: 'pending',
priority: 'high',
dependencies: [2],
details: 'Create API endpoints for CRUD operations',
testStrategy: 'Test API endpoints',
subtasks: []
},
{
id: 4,
title: 'Frontend Development',
description: 'Develop user interface',
status: 'pending',
priority: 'medium',
dependencies: [3],
details: 'Create React components and pages',
testStrategy: 'Test UI components',
subtasks: []
},
{
id: 5,
title: 'Testing',
description: 'Comprehensive testing',
status: 'pending',
priority: 'medium',
dependencies: [4],
details: 'Write unit and integration tests',
testStrategy: 'Run test suite',
subtasks: []
}
],
metadata: {
created: new Date().toISOString(),
description: 'Test tasks for complex cross-tag scenarios'
}
}
};
// Write tasks to file
tasksPath = path.join(testDir, '.taskmaster', 'tasks', 'tasks.json');
fs.writeFileSync(tasksPath, JSON.stringify(complexTasks, null, 2));
});
afterEach(() => {
// Change back to project root before cleanup
try {
process.chdir(global.projectRoot || path.resolve(__dirname, '../../..'));
} catch (error) {
// If we can't change directory, try a known safe directory
process.chdir(os.homedir());
}
// Cleanup test directory
if (testDir && fs.existsSync(testDir)) {
fs.rmSync(testDir, { recursive: true, force: true });
}
});
describe('Circular Dependency Detection', () => {
it('should detect and prevent circular dependencies', () => {
// Create a circular dependency scenario
const circularTasks = {
backlog: {
tasks: [
{
id: 1,
title: 'Task 1',
status: 'pending',
dependencies: [2],
subtasks: []
},
{
id: 2,
title: 'Task 2',
status: 'pending',
dependencies: [3],
subtasks: []
},
{
id: 3,
title: 'Task 3',
status: 'pending',
dependencies: [1],
subtasks: []
}
],
metadata: {
created: new Date().toISOString(),
description: 'Backlog tasks with circular dependencies'
}
},
'in-progress': {
tasks: [],
metadata: {
created: new Date().toISOString(),
description: 'In-progress tasks'
}
}
};
fs.writeFileSync(tasksPath, JSON.stringify(circularTasks, null, 2));
// Try to move task 1 - should fail due to circular dependency
expect(() => {
execSync(
`node ${binPath} move --from=1 --from-tag=backlog --to-tag=in-progress`,
{ stdio: 'pipe' }
);
}).toThrow();
// Check that the move was not performed
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
expect(tasksAfter.backlog.tasks.find((t) => t.id === 1)).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 1)
).toBeUndefined();
});
});
describe('Complex Dependency Chains', () => {
it('should handle deep dependency chains correctly', () => {
// Create a deep dependency chain
const deepChainTasks = {
master: {
tasks: [
{
id: 1,
title: 'Task 1',
status: 'pending',
dependencies: [2],
subtasks: []
},
{
id: 2,
title: 'Task 2',
status: 'pending',
dependencies: [3],
subtasks: []
},
{
id: 3,
title: 'Task 3',
status: 'pending',
dependencies: [4],
subtasks: []
},
{
id: 4,
title: 'Task 4',
status: 'pending',
dependencies: [5],
subtasks: []
},
{
id: 5,
title: 'Task 5',
status: 'pending',
dependencies: [],
subtasks: []
}
],
metadata: {
created: new Date().toISOString(),
description: 'Deep dependency chain tasks'
}
},
'in-progress': {
tasks: [],
metadata: {
created: new Date().toISOString(),
description: 'In-progress tasks'
}
}
};
fs.writeFileSync(tasksPath, JSON.stringify(deepChainTasks, null, 2));
// Move task 1 with dependencies - should move entire chain
execSync(
`node ${binPath} move --from=1 --from-tag=master --to-tag=in-progress --with-dependencies`,
{ stdio: 'pipe' }
);
// Verify all tasks in the chain were moved
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
expect(tasksAfter.master.tasks.find((t) => t.id === 1)).toBeUndefined();
expect(tasksAfter.master.tasks.find((t) => t.id === 2)).toBeUndefined();
expect(tasksAfter.master.tasks.find((t) => t.id === 3)).toBeUndefined();
expect(tasksAfter.master.tasks.find((t) => t.id === 4)).toBeUndefined();
expect(tasksAfter.master.tasks.find((t) => t.id === 5)).toBeUndefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 1)
).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 2)
).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 3)
).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 4)
).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 5)
).toBeDefined();
});
});
describe('Subtask Movement Restrictions', () => {
it('should prevent direct subtask movement between tags', () => {
// Try to move a subtask directly
expect(() => {
execSync(
`node ${binPath} move --from=2.1 --from-tag=master --to-tag=in-progress`,
{ stdio: 'pipe' }
);
}).toThrow();
// Verify subtask was not moved
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
const task2 = tasksAfter.master.tasks.find((t) => t.id === 2);
expect(task2).toBeDefined();
expect(task2.subtasks.find((s) => s.id === '2.1')).toBeDefined();
});
it('should allow moving parent task with all subtasks', () => {
// Move parent task with dependencies (includes subtasks)
execSync(
`node ${binPath} move --from=2 --from-tag=master --to-tag=in-progress --with-dependencies`,
{ stdio: 'pipe' }
);
// Verify parent and subtasks were moved
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
expect(tasksAfter.master.tasks.find((t) => t.id === 2)).toBeUndefined();
const movedTask2 = tasksAfter['in-progress'].tasks.find(
(t) => t.id === 2
);
expect(movedTask2).toBeDefined();
expect(movedTask2.subtasks).toHaveLength(2);
});
});
describe('Large Task Set Performance', () => {
it('should handle large task sets efficiently', () => {
// Create a large task set (100 tasks)
const largeTaskSet = {
master: {
tasks: [],
metadata: {
created: new Date().toISOString(),
description: 'Large task set for performance testing'
}
},
'in-progress': {
tasks: [],
metadata: {
created: new Date().toISOString(),
description: 'In-progress tasks'
}
}
};
// Add 50 tasks to master with dependencies
for (let i = 1; i <= 50; i++) {
largeTaskSet.master.tasks.push({
id: i,
title: `Task ${i}`,
status: 'pending',
dependencies: i > 1 ? [i - 1] : [],
subtasks: []
});
}
// Add 50 tasks to in-progress
for (let i = 51; i <= 100; i++) {
largeTaskSet['in-progress'].tasks.push({
id: i,
title: `Task ${i}`,
status: 'in-progress',
dependencies: [],
subtasks: []
});
}
fs.writeFileSync(tasksPath, JSON.stringify(largeTaskSet, null, 2));
// Should complete within reasonable time
const timeout = process.env.CI ? 10000 : 5000;
const startTime = Date.now();
execSync(
`node ${binPath} move --from=50 --from-tag=master --to-tag=in-progress --with-dependencies`,
{ stdio: 'pipe' }
);
const endTime = Date.now();
expect(endTime - startTime).toBeLessThan(timeout);
// Verify the move was successful
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 50)
).toBeDefined();
});
});
describe('Error Recovery and Edge Cases', () => {
it('should handle invalid task IDs gracefully', () => {
expect(() => {
execSync(
`node ${binPath} move --from=999 --from-tag=master --to-tag=in-progress`,
{ stdio: 'pipe' }
);
}).toThrow();
});
it('should handle invalid tag names gracefully', () => {
expect(() => {
execSync(
`node ${binPath} move --from=1 --from-tag=invalid-tag --to-tag=in-progress`,
{ stdio: 'pipe' }
);
}).toThrow();
});
it('should handle same source and target tags', () => {
expect(() => {
execSync(
`node ${binPath} move --from=1 --from-tag=master --to-tag=master`,
{ stdio: 'pipe' }
);
}).toThrow();
});
it('should create target tag if it does not exist', () => {
execSync(
`node ${binPath} move --from=1 --from-tag=master --to-tag=new-tag`,
{ stdio: 'pipe' }
);
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
expect(tasksAfter['new-tag']).toBeDefined();
expect(tasksAfter['new-tag'].tasks.find((t) => t.id === 1)).toBeDefined();
});
});
describe('Multiple Task Movement', () => {
it('should move multiple tasks simultaneously', () => {
// Create tasks for multiple movement test
const multiTaskSet = {
master: {
tasks: [
{
id: 1,
title: 'Task 1',
status: 'pending',
dependencies: [],
subtasks: []
},
{
id: 2,
title: 'Task 2',
status: 'pending',
dependencies: [],
subtasks: []
},
{
id: 3,
title: 'Task 3',
status: 'pending',
dependencies: [],
subtasks: []
}
],
metadata: {
created: new Date().toISOString(),
description: 'Tasks for multiple movement test'
}
},
'in-progress': {
tasks: [],
metadata: {
created: new Date().toISOString(),
description: 'In-progress tasks'
}
}
};
fs.writeFileSync(tasksPath, JSON.stringify(multiTaskSet, null, 2));
// Move multiple tasks
execSync(
`node ${binPath} move --from=1,2,3 --from-tag=master --to-tag=in-progress`,
{ stdio: 'pipe' }
);
// Verify all tasks were moved
const tasksAfter = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
expect(tasksAfter.master.tasks.find((t) => t.id === 1)).toBeUndefined();
expect(tasksAfter.master.tasks.find((t) => t.id === 2)).toBeUndefined();
expect(tasksAfter.master.tasks.find((t) => t.id === 3)).toBeUndefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 1)
).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 2)
).toBeDefined();
expect(
tasksAfter['in-progress'].tasks.find((t) => t.id === 3)
).toBeDefined();
});
});
});

View File

@@ -0,0 +1,882 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
// --- Define mock functions ---
const mockMoveTasksBetweenTags = jest.fn();
const mockMoveTask = jest.fn();
const mockGenerateTaskFiles = jest.fn();
const mockLog = jest.fn();
// --- Setup mocks using unstable_mockModule ---
jest.unstable_mockModule(
'../../../scripts/modules/task-manager/move-task.js',
() => ({
default: mockMoveTask,
moveTasksBetweenTags: mockMoveTasksBetweenTags
})
);
jest.unstable_mockModule(
'../../../scripts/modules/task-manager/generate-task-files.js',
() => ({
default: mockGenerateTaskFiles
})
);
jest.unstable_mockModule('../../../scripts/modules/utils.js', () => ({
log: mockLog,
readJSON: jest.fn(),
writeJSON: jest.fn(),
findProjectRoot: jest.fn(() => '/test/project/root'),
getCurrentTag: jest.fn(() => 'master')
}));
// --- Mock chalk for consistent output formatting ---
const mockChalk = {
red: jest.fn((text) => text),
yellow: jest.fn((text) => text),
blue: jest.fn((text) => text),
green: jest.fn((text) => text),
gray: jest.fn((text) => text),
dim: jest.fn((text) => text),
bold: {
cyan: jest.fn((text) => text),
white: jest.fn((text) => text),
red: jest.fn((text) => text)
},
cyan: {
bold: jest.fn((text) => text)
},
white: {
bold: jest.fn((text) => text)
}
};
jest.unstable_mockModule('chalk', () => ({
default: mockChalk
}));
// --- Import modules (AFTER mock setup) ---
let moveTaskModule, generateTaskFilesModule, utilsModule, chalk;
describe('Cross-Tag Move CLI Integration', () => {
// Setup dynamic imports before tests run
beforeAll(async () => {
moveTaskModule = await import(
'../../../scripts/modules/task-manager/move-task.js'
);
generateTaskFilesModule = await import(
'../../../scripts/modules/task-manager/generate-task-files.js'
);
utilsModule = await import('../../../scripts/modules/utils.js');
chalk = (await import('chalk')).default;
});
beforeEach(() => {
jest.clearAllMocks();
});
// Helper function to capture console output and process.exit calls
function captureConsoleAndExit() {
const originalConsoleError = console.error;
const originalConsoleLog = console.log;
const originalProcessExit = process.exit;
const errorMessages = [];
const logMessages = [];
const exitCodes = [];
console.error = jest.fn((...args) => {
errorMessages.push(args.join(' '));
});
console.log = jest.fn((...args) => {
logMessages.push(args.join(' '));
});
process.exit = jest.fn((code) => {
exitCodes.push(code);
});
return {
errorMessages,
logMessages,
exitCodes,
restore: () => {
console.error = originalConsoleError;
console.log = originalConsoleLog;
process.exit = originalProcessExit;
}
};
}
// --- Replicate the move command action handler logic from commands.js ---
async function moveAction(options) {
const sourceId = options.from;
const destinationId = options.to;
const fromTag = options.fromTag;
const toTag = options.toTag;
const withDependencies = options.withDependencies;
const ignoreDependencies = options.ignoreDependencies;
const force = options.force;
// Get the source tag - fallback to current tag if not provided
const sourceTag = fromTag || utilsModule.getCurrentTag();
// Check if this is a cross-tag move (different tags)
const isCrossTagMove = sourceTag && toTag && sourceTag !== toTag;
if (isCrossTagMove) {
// Cross-tag move logic
if (!sourceId) {
const error = new Error(
'--from parameter is required for cross-tag moves'
);
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
const taskIds = sourceId.split(',').map((id) => parseInt(id.trim(), 10));
// Validate parsed task IDs
for (let i = 0; i < taskIds.length; i++) {
if (isNaN(taskIds[i])) {
const error = new Error(
`Invalid task ID at position ${i + 1}: "${sourceId.split(',')[i].trim()}" is not a valid number`
);
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
}
const tasksPath = path.join(
utilsModule.findProjectRoot(),
'.taskmaster',
'tasks',
'tasks.json'
);
try {
await moveTaskModule.moveTasksBetweenTags(
tasksPath,
taskIds,
sourceTag,
toTag,
{
withDependencies,
ignoreDependencies,
force
}
);
console.log(chalk.green('Successfully moved task(s) between tags'));
// Generate task files for both tags
await generateTaskFilesModule.default(
tasksPath,
path.dirname(tasksPath),
{ tag: sourceTag }
);
await generateTaskFilesModule.default(
tasksPath,
path.dirname(tasksPath),
{ tag: toTag }
);
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
} else {
// Handle case where both tags are provided but are the same
if (sourceTag && toTag && sourceTag === toTag) {
// If both tags are the same and we have destinationId, treat as within-tag move
if (destinationId) {
if (!sourceId) {
const error = new Error(
'Both --from and --to parameters are required for within-tag moves'
);
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
// Call the existing moveTask function for within-tag moves
try {
await moveTaskModule.default(sourceId, destinationId);
console.log(chalk.green('Successfully moved task'));
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
} else {
// Same tags but no destinationId - this is an error
const error = new Error(
`Source and target tags are the same ("${sourceTag}") but no destination specified`
);
console.error(chalk.red(`Error: ${error.message}`));
console.log(
chalk.yellow(
'For within-tag moves, use: task-master move --from=<sourceId> --to=<destinationId>'
)
);
console.log(
chalk.yellow(
'For cross-tag moves, use different tags: task-master move --from=<sourceId> --from-tag=<sourceTag> --to-tag=<targetTag>'
)
);
throw error;
}
} else {
// Within-tag move logic (existing functionality)
if (!sourceId || !destinationId) {
const error = new Error(
'Both --from and --to parameters are required for within-tag moves'
);
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
// Call the existing moveTask function for within-tag moves
try {
await moveTaskModule.default(sourceId, destinationId);
console.log(chalk.green('Successfully moved task'));
} catch (error) {
console.error(chalk.red(`Error: ${error.message}`));
throw error;
}
}
}
}
it('should move task without dependencies successfully', async () => {
// Mock successful cross-tag move
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '2',
fromTag: 'backlog',
toTag: 'in-progress'
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[2],
'backlog',
'in-progress',
{
withDependencies: undefined,
ignoreDependencies: undefined,
force: undefined
}
);
});
it('should fail to move task with cross-tag dependencies', async () => {
// Mock dependency conflict error
mockMoveTasksBetweenTags.mockRejectedValue(
new Error('Cannot move task due to cross-tag dependency conflicts')
);
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Cannot move task due to cross-tag dependency conflicts'
);
expect(mockMoveTasksBetweenTags).toHaveBeenCalled();
expect(
errorMessages.some((msg) =>
msg.includes('cross-tag dependency conflicts')
)
).toBe(true);
restore();
});
it('should move task with dependencies when --with-dependencies is used', async () => {
// Mock successful cross-tag move with dependencies
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
withDependencies: true
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1],
'backlog',
'in-progress',
{
withDependencies: true,
ignoreDependencies: undefined,
force: undefined
}
);
});
it('should break dependencies when --ignore-dependencies is used', async () => {
// Mock successful cross-tag move with dependency breaking
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
ignoreDependencies: true
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1],
'backlog',
'in-progress',
{
withDependencies: undefined,
ignoreDependencies: true,
force: undefined
}
);
});
it('should create target tag if it does not exist', async () => {
// Mock successful cross-tag move to new tag
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '2',
fromTag: 'backlog',
toTag: 'new-tag'
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[2],
'backlog',
'new-tag',
{
withDependencies: undefined,
ignoreDependencies: undefined,
force: undefined
}
);
});
it('should fail to move a subtask directly', async () => {
// Mock subtask movement error
mockMoveTasksBetweenTags.mockRejectedValue(
new Error(
'Cannot move subtasks directly between tags. Please promote the subtask to a full task first.'
)
);
const options = {
from: '1.2',
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Cannot move subtasks directly between tags. Please promote the subtask to a full task first.'
);
expect(mockMoveTasksBetweenTags).toHaveBeenCalled();
expect(errorMessages.some((msg) => msg.includes('subtasks directly'))).toBe(
true
);
restore();
});
it('should provide helpful error messages for dependency conflicts', async () => {
// Mock dependency conflict with detailed error
mockMoveTasksBetweenTags.mockRejectedValue(
new Error(
'Cross-tag dependency conflicts detected. Task 1 depends on Task 2 which is in a different tag.'
)
);
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Cross-tag dependency conflicts detected. Task 1 depends on Task 2 which is in a different tag.'
);
expect(mockMoveTasksBetweenTags).toHaveBeenCalled();
expect(
errorMessages.some((msg) =>
msg.includes('Cross-tag dependency conflicts detected')
)
).toBe(true);
restore();
});
it('should handle same tag error correctly', async () => {
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'backlog' // Same tag but no destination
};
const { errorMessages, logMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Source and target tags are the same ("backlog") but no destination specified'
);
expect(
errorMessages.some((msg) =>
msg.includes(
'Source and target tags are the same ("backlog") but no destination specified'
)
)
).toBe(true);
expect(
logMessages.some((msg) => msg.includes('For within-tag moves'))
).toBe(true);
expect(logMessages.some((msg) => msg.includes('For cross-tag moves'))).toBe(
true
);
restore();
});
it('should use current tag when --from-tag is not provided', async () => {
// Mock successful move with current tag fallback
mockMoveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved task(s) between tags'
});
// Mock getCurrentTag to return 'master'
utilsModule.getCurrentTag.mockReturnValue('master');
// Simulate command: task-master move --from=1 --to-tag=in-progress
// (no --from-tag provided, should use current tag 'master')
await moveAction({
from: '1',
toTag: 'in-progress',
withDependencies: false,
ignoreDependencies: false,
force: false
// fromTag is intentionally not provided to test fallback
});
// Verify that moveTasksBetweenTags was called with 'master' as source tag
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('.taskmaster/tasks/tasks.json'),
[1], // parseInt converts string to number
'master', // Should use current tag as fallback
'in-progress',
{
withDependencies: false,
ignoreDependencies: false,
force: false
}
);
// Verify that generateTaskFiles was called for both tags
expect(generateTaskFilesModule.default).toHaveBeenCalledWith(
expect.stringContaining('.taskmaster/tasks/tasks.json'),
expect.stringContaining('.taskmaster/tasks'),
{ tag: 'master' }
);
expect(generateTaskFilesModule.default).toHaveBeenCalledWith(
expect.stringContaining('.taskmaster/tasks/tasks.json'),
expect.stringContaining('.taskmaster/tasks'),
{ tag: 'in-progress' }
);
});
it('should move multiple tasks with comma-separated IDs successfully', async () => {
// Mock successful cross-tag move for multiple tasks
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1,2,3',
fromTag: 'backlog',
toTag: 'in-progress'
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1, 2, 3], // Should parse comma-separated string to array of integers
'backlog',
'in-progress',
{
withDependencies: undefined,
ignoreDependencies: undefined,
force: undefined
}
);
// Verify task files are generated for both tags
expect(mockGenerateTaskFiles).toHaveBeenCalledTimes(2);
expect(mockGenerateTaskFiles).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
expect.stringContaining('.taskmaster/tasks'),
{ tag: 'backlog' }
);
expect(mockGenerateTaskFiles).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
expect.stringContaining('.taskmaster/tasks'),
{ tag: 'in-progress' }
);
});
it('should handle --force flag correctly', async () => {
// Mock successful cross-tag move with force flag
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
force: true
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1],
'backlog',
'in-progress',
{
withDependencies: undefined,
ignoreDependencies: undefined,
force: true // Force flag should be passed through
}
);
});
it('should fail when invalid task ID is provided', async () => {
const options = {
from: '1,abc,3', // Invalid ID in middle
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Invalid task ID at position 2: "abc" is not a valid number'
);
expect(
errorMessages.some((msg) => msg.includes('Invalid task ID at position 2'))
).toBe(true);
restore();
});
it('should fail when first task ID is invalid', async () => {
const options = {
from: 'abc,2,3', // Invalid ID at start
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Invalid task ID at position 1: "abc" is not a valid number'
);
expect(
errorMessages.some((msg) => msg.includes('Invalid task ID at position 1'))
).toBe(true);
restore();
});
it('should fail when last task ID is invalid', async () => {
const options = {
from: '1,2,xyz', // Invalid ID at end
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Invalid task ID at position 3: "xyz" is not a valid number'
);
expect(
errorMessages.some((msg) => msg.includes('Invalid task ID at position 3'))
).toBe(true);
restore();
});
it('should fail when single invalid task ID is provided', async () => {
const options = {
from: 'invalid',
fromTag: 'backlog',
toTag: 'in-progress'
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Invalid task ID at position 1: "invalid" is not a valid number'
);
expect(
errorMessages.some((msg) => msg.includes('Invalid task ID at position 1'))
).toBe(true);
restore();
});
it('should combine --with-dependencies and --force flags correctly', async () => {
// Mock successful cross-tag move with both flags
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1,2',
fromTag: 'backlog',
toTag: 'in-progress',
withDependencies: true,
force: true
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1, 2],
'backlog',
'in-progress',
{
withDependencies: true,
ignoreDependencies: undefined,
force: true // Both flags should be passed
}
);
});
it('should combine --ignore-dependencies and --force flags correctly', async () => {
// Mock successful cross-tag move with both flags
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
ignoreDependencies: true,
force: true
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1],
'backlog',
'in-progress',
{
withDependencies: undefined,
ignoreDependencies: true,
force: true // Both flags should be passed
}
);
});
it('should handle all three flags combined correctly', async () => {
// Mock successful cross-tag move with all flags
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: '1,2,3',
fromTag: 'backlog',
toTag: 'in-progress',
withDependencies: true,
ignoreDependencies: true,
force: true
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1, 2, 3],
'backlog',
'in-progress',
{
withDependencies: true,
ignoreDependencies: true,
force: true // All three flags should be passed
}
);
});
it('should handle whitespace in comma-separated task IDs', async () => {
// Mock successful cross-tag move with whitespace
mockMoveTasksBetweenTags.mockResolvedValue(undefined);
mockGenerateTaskFiles.mockResolvedValue(undefined);
const options = {
from: ' 1 , 2 , 3 ', // Whitespace around IDs and commas
fromTag: 'backlog',
toTag: 'in-progress'
};
await moveAction(options);
expect(mockMoveTasksBetweenTags).toHaveBeenCalledWith(
expect.stringContaining('tasks.json'),
[1, 2, 3], // Should trim whitespace and parse as integers
'backlog',
'in-progress',
{
withDependencies: undefined,
ignoreDependencies: undefined,
force: undefined
}
);
});
it('should fail when --from parameter is missing for cross-tag move', async () => {
const options = {
fromTag: 'backlog',
toTag: 'in-progress'
// from is intentionally missing
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'--from parameter is required for cross-tag moves'
);
expect(
errorMessages.some((msg) =>
msg.includes('--from parameter is required for cross-tag moves')
)
).toBe(true);
restore();
});
it('should fail when both --from and --to are missing for within-tag move', async () => {
const options = {
// Both from and to are missing for within-tag move
};
const { errorMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Both --from and --to parameters are required for within-tag moves'
);
expect(
errorMessages.some((msg) =>
msg.includes(
'Both --from and --to parameters are required for within-tag moves'
)
)
).toBe(true);
restore();
});
it('should handle within-tag move when no tags are specified', async () => {
// Mock successful within-tag move
mockMoveTask.mockResolvedValue(undefined);
const options = {
from: '1',
to: '2'
// No tags specified, should use within-tag logic
};
await moveAction(options);
expect(mockMoveTask).toHaveBeenCalledWith('1', '2');
expect(mockMoveTasksBetweenTags).not.toHaveBeenCalled();
});
it('should handle within-tag move when both tags are the same', async () => {
// Mock successful within-tag move
mockMoveTask.mockResolvedValue(undefined);
const options = {
from: '1',
to: '2',
fromTag: 'master',
toTag: 'master' // Same tag, should use within-tag logic
};
await moveAction(options);
expect(mockMoveTask).toHaveBeenCalledWith('1', '2');
expect(mockMoveTasksBetweenTags).not.toHaveBeenCalled();
});
it('should fail when both tags are the same but no destination is provided', async () => {
const options = {
from: '1',
fromTag: 'master',
toTag: 'master' // Same tag but no destination
};
const { errorMessages, logMessages, restore } = captureConsoleAndExit();
await expect(moveAction(options)).rejects.toThrow(
'Source and target tags are the same ("master") but no destination specified'
);
expect(
errorMessages.some((msg) =>
msg.includes(
'Source and target tags are the same ("master") but no destination specified'
)
)
).toBe(true);
expect(
logMessages.some((msg) => msg.includes('For within-tag moves'))
).toBe(true);
expect(logMessages.some((msg) => msg.includes('For cross-tag moves'))).toBe(
true
);
restore();
});
});

View File

@@ -0,0 +1,772 @@
import { jest } from '@jest/globals';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// Mock dependencies before importing
const mockUtils = {
readJSON: jest.fn(),
writeJSON: jest.fn(),
findProjectRoot: jest.fn(() => '/test/project/root'),
log: jest.fn(),
setTasksForTag: jest.fn(),
traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => {
// Mock realistic dependency behavior for testing
const { direction = 'forward' } = options;
if (direction === 'forward') {
// Return dependencies that tasks have
const result = [];
sourceTasks.forEach((task) => {
if (task.dependencies && Array.isArray(task.dependencies)) {
result.push(...task.dependencies);
}
});
return result;
} else if (direction === 'reverse') {
// Return tasks that depend on the source tasks
const sourceIds = sourceTasks.map((t) => t.id);
const normalizedSourceIds = sourceIds.map((id) => String(id));
const result = [];
allTasks.forEach((task) => {
if (task.dependencies && Array.isArray(task.dependencies)) {
const hasDependency = task.dependencies.some((depId) =>
normalizedSourceIds.includes(String(depId))
);
if (hasDependency) {
result.push(task.id);
}
}
});
return result;
}
return [];
})
};
// Mock the utils module
jest.unstable_mockModule('../../scripts/modules/utils.js', () => mockUtils);
// Mock other dependencies
jest.unstable_mockModule(
'../../scripts/modules/task-manager/is-task-dependent.js',
() => ({
default: jest.fn(() => false)
})
);
jest.unstable_mockModule('../../scripts/modules/dependency-manager.js', () => ({
findCrossTagDependencies: jest.fn(() => {
// Since dependencies can only exist within the same tag,
// this function should never find any cross-tag conflicts
return [];
}),
getDependentTaskIds: jest.fn(
(sourceTasks, crossTagDependencies, allTasks) => {
// Since we now use findAllDependenciesRecursively in the actual implementation,
// this mock simulates finding all dependencies recursively within the same tag
const dependentIds = new Set();
const processedIds = new Set();
function findAllDependencies(taskId) {
if (processedIds.has(taskId)) return;
processedIds.add(taskId);
const task = allTasks.find((t) => t.id === taskId);
if (!task || !Array.isArray(task.dependencies)) return;
task.dependencies.forEach((depId) => {
const normalizedDepId =
typeof depId === 'string' ? parseInt(depId, 10) : depId;
if (!isNaN(normalizedDepId) && normalizedDepId !== taskId) {
dependentIds.add(normalizedDepId);
findAllDependencies(normalizedDepId);
}
});
}
sourceTasks.forEach((sourceTask) => {
if (sourceTask && sourceTask.id) {
findAllDependencies(sourceTask.id);
}
});
return Array.from(dependentIds);
}
),
validateSubtaskMove: jest.fn((taskId, sourceTag, targetTag) => {
// Throw error for subtask IDs
const taskIdStr = String(taskId);
if (taskIdStr.includes('.')) {
throw new Error('Cannot move subtasks directly between tags');
}
})
}));
jest.unstable_mockModule(
'../../scripts/modules/task-manager/generate-task-files.js',
() => ({
default: jest.fn().mockResolvedValue()
})
);
// Import the modules we'll be testing after mocking
const { moveTasksBetweenTags } = await import(
'../../scripts/modules/task-manager/move-task.js'
);
describe('Cross-Tag Task Movement Integration Tests', () => {
let testDataPath;
let mockTasksData;
beforeEach(() => {
// Setup test data path
testDataPath = path.join(__dirname, 'temp-test-tasks.json');
// Initialize mock data with multiple tags
mockTasksData = {
backlog: {
tasks: [
{
id: 1,
title: 'Backlog Task 1',
description: 'A task in backlog',
status: 'pending',
dependencies: [],
priority: 'medium',
tag: 'backlog'
},
{
id: 2,
title: 'Backlog Task 2',
description: 'Another task in backlog',
status: 'pending',
dependencies: [1],
priority: 'high',
tag: 'backlog'
},
{
id: 3,
title: 'Backlog Task 3',
description: 'Independent task',
status: 'pending',
dependencies: [],
priority: 'low',
tag: 'backlog'
}
]
},
'in-progress': {
tasks: [
{
id: 4,
title: 'In Progress Task 1',
description: 'A task being worked on',
status: 'in-progress',
dependencies: [],
priority: 'high',
tag: 'in-progress'
}
]
},
done: {
tasks: [
{
id: 5,
title: 'Completed Task 1',
description: 'A completed task',
status: 'done',
dependencies: [],
priority: 'medium',
tag: 'done'
}
]
}
};
// Setup mock utils
mockUtils.readJSON.mockReturnValue(mockTasksData);
mockUtils.writeJSON.mockImplementation((path, data, projectRoot, tag) => {
// Simulate writing to file
return Promise.resolve();
});
});
afterEach(() => {
jest.clearAllMocks();
// Clean up temp file if it exists
if (fs.existsSync(testDataPath)) {
fs.unlinkSync(testDataPath);
}
});
describe('Basic Cross-Tag Movement', () => {
it('should move a single task between tags successfully', async () => {
const taskIds = [1];
const sourceTag = 'backlog';
const targetTag = 'in-progress';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
);
// Verify readJSON was called with correct parameters
expect(mockUtils.readJSON).toHaveBeenCalledWith(
testDataPath,
'/test/project',
sourceTag
);
// Verify writeJSON was called with updated data
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 2 }),
expect.objectContaining({ id: 3 })
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 4 }),
expect.objectContaining({
id: 1,
tag: 'in-progress'
})
])
})
}),
'/test/project',
null
);
// Verify result structure
expect(result).toEqual({
message: 'Successfully moved 1 tasks from "backlog" to "in-progress"',
movedTasks: [
{
id: 1,
fromTag: 'backlog',
toTag: 'in-progress'
}
]
});
});
it('should move multiple tasks between tags', async () => {
const taskIds = [1, 3];
const sourceTag = 'backlog';
const targetTag = 'done';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
);
// Verify the moved tasks are in the target tag
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([expect.objectContaining({ id: 2 })])
}),
done: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 5 }),
expect.objectContaining({
id: 1,
tag: 'done'
}),
expect.objectContaining({
id: 3,
tag: 'done'
})
])
})
}),
'/test/project',
null
);
// Verify result structure
expect(result.movedTasks).toHaveLength(2);
expect(result.movedTasks).toEqual(
expect.arrayContaining([
{ id: 1, fromTag: 'backlog', toTag: 'done' },
{ id: 3, fromTag: 'backlog', toTag: 'done' }
])
);
});
it('should create target tag if it does not exist', async () => {
const taskIds = [1];
const sourceTag = 'backlog';
const targetTag = 'new-tag';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
);
// Verify new tag was created
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
'new-tag': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
tag: 'new-tag'
})
])
})
}),
'/test/project',
null
);
});
});
describe('Dependency Handling', () => {
it('should move task with dependencies when withDependencies is true', async () => {
const taskIds = [2]; // Task 2 depends on Task 1
const sourceTag = 'backlog';
const targetTag = 'in-progress';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{ withDependencies: true },
{ projectRoot: '/test/project' }
);
// Verify both task 2 and its dependency (task 1) were moved
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([expect.objectContaining({ id: 3 })])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 4 }),
expect.objectContaining({
id: 1,
tag: 'in-progress'
}),
expect.objectContaining({
id: 2,
tag: 'in-progress'
})
])
})
}),
'/test/project',
null
);
});
it('should move task normally when ignoreDependencies is true (no cross-tag conflicts to ignore)', async () => {
const taskIds = [2]; // Task 2 depends on Task 1
const sourceTag = 'backlog';
const targetTag = 'in-progress';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{ ignoreDependencies: true },
{ projectRoot: '/test/project' }
);
// Since dependencies only exist within tags, there are no cross-tag conflicts to ignore
// Task 2 moves with its dependencies intact
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 1 }),
expect.objectContaining({ id: 3 })
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 4 }),
expect.objectContaining({
id: 2,
tag: 'in-progress',
dependencies: [1] // Dependencies preserved since no cross-tag conflicts
})
])
})
}),
'/test/project',
null
);
});
it('should move task without cross-tag dependency conflicts (since dependencies only exist within tags)', async () => {
const taskIds = [2]; // Task 2 depends on Task 1 (both in same tag)
const sourceTag = 'backlog';
const targetTag = 'in-progress';
// Since dependencies can only exist within the same tag,
// there should be no cross-tag conflicts
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
);
// Verify the task moved on its own, while its same-tag dependency (Task 1) stayed in backlog
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 1 }), // Task 1 stays in backlog
expect.objectContaining({ id: 3 })
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 4 }),
expect.objectContaining({
id: 2,
tag: 'in-progress'
})
])
})
}),
'/test/project',
null
);
});
});
describe('Error Handling', () => {
it('should throw error for invalid source tag', async () => {
const taskIds = [1];
const sourceTag = 'nonexistent-tag';
const targetTag = 'in-progress';
// Mock readJSON to return data without the source tag
mockUtils.readJSON.mockReturnValue({
'in-progress': { tasks: [] }
});
await expect(
moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
)
).rejects.toThrow('Source tag "nonexistent-tag" not found or invalid');
});
it('should throw error for invalid task IDs', async () => {
const taskIds = [999]; // Non-existent task ID
const sourceTag = 'backlog';
const targetTag = 'in-progress';
await expect(
moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
)
).rejects.toThrow('Task 999 not found in source tag "backlog"');
});
it('should throw error for subtask movement', async () => {
const taskIds = ['1.1']; // Subtask ID
const sourceTag = 'backlog';
const targetTag = 'in-progress';
await expect(
moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
)
).rejects.toThrow('Cannot move subtasks directly between tags');
});
it('should handle ID conflicts in target tag', async () => {
// Setup data with conflicting IDs
const conflictingData = {
backlog: {
tasks: [
{
id: 1,
title: 'Backlog Task',
tag: 'backlog'
}
]
},
'in-progress': {
tasks: [
{
id: 1, // Same ID as in backlog
title: 'In Progress Task',
tag: 'in-progress'
}
]
}
};
mockUtils.readJSON.mockReturnValue(conflictingData);
const taskIds = [1];
const sourceTag = 'backlog';
const targetTag = 'in-progress';
await expect(
moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
)
).rejects.toThrow('Task 1 already exists in target tag "in-progress"');
});
});
describe('Edge Cases', () => {
it('should handle empty task list in source tag', async () => {
const emptyData = {
backlog: { tasks: [] },
'in-progress': { tasks: [] }
};
mockUtils.readJSON.mockReturnValue(emptyData);
const taskIds = [1];
const sourceTag = 'backlog';
const targetTag = 'in-progress';
await expect(
moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
)
).rejects.toThrow('Task 1 not found in source tag "backlog"');
});
it('should preserve task metadata during move', async () => {
const taskIds = [1];
const sourceTag = 'backlog';
const targetTag = 'in-progress';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
);
// Verify task metadata is preserved
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Backlog Task 1',
description: 'A task in backlog',
status: 'pending',
priority: 'medium',
tag: 'in-progress', // Tag should be updated
metadata: expect.objectContaining({
moveHistory: expect.arrayContaining([
expect.objectContaining({
fromTag: 'backlog',
toTag: 'in-progress',
timestamp: expect.any(String)
})
])
})
})
])
})
}),
'/test/project',
null
);
});
it('should handle force flag for dependency conflicts', async () => {
const taskIds = [2]; // Task 2 depends on Task 1
const sourceTag = 'backlog';
const targetTag = 'in-progress';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{ force: true },
{ projectRoot: '/test/project' }
);
// Verify task was moved despite dependency conflicts
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 2,
tag: 'in-progress'
})
])
})
}),
'/test/project',
null
);
});
});
describe('Complex Scenarios', () => {
it('should handle complex moves without cross-tag conflicts (dependencies only within tags)', async () => {
// Setup data with valid within-tag dependencies
const validData = {
backlog: {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [], // No dependencies
tag: 'backlog'
},
{
id: 3,
title: 'Task 3',
dependencies: [1], // Depends on Task 1 (same tag)
tag: 'backlog'
}
]
},
'in-progress': {
tasks: [
{
id: 2,
title: 'Task 2',
dependencies: [], // No dependencies
tag: 'in-progress'
}
]
}
};
mockUtils.readJSON.mockReturnValue(validData);
const taskIds = [3];
const sourceTag = 'backlog';
const targetTag = 'in-progress';
// Should succeed since there are no cross-tag conflicts
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{},
{ projectRoot: '/test/project' }
);
expect(result).toEqual({
message: 'Successfully moved 1 tasks from "backlog" to "in-progress"',
movedTasks: [{ id: 3, fromTag: 'backlog', toTag: 'in-progress' }]
});
});
it('should handle bulk move with mixed dependency scenarios', async () => {
const taskIds = [1, 2, 3]; // Multiple tasks with dependencies
const sourceTag = 'backlog';
const targetTag = 'in-progress';
const result = await moveTasksBetweenTags(
testDataPath,
taskIds,
sourceTag,
targetTag,
{ withDependencies: true },
{ projectRoot: '/test/project' }
);
// Verify all tasks were moved
expect(mockUtils.writeJSON).toHaveBeenCalledWith(
testDataPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: [] // All tasks should be moved
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({ id: 4 }),
expect.objectContaining({ id: 1, tag: 'in-progress' }),
expect.objectContaining({ id: 2, tag: 'in-progress' }),
expect.objectContaining({ id: 3, tag: 'in-progress' })
])
})
}),
'/test/project',
null
);
// Verify result structure
expect(result.movedTasks).toHaveLength(3);
expect(result.movedTasks).toEqual(
expect.arrayContaining([
{ id: 1, fromTag: 'backlog', toTag: 'in-progress' },
{ id: 2, fromTag: 'backlog', toTag: 'in-progress' },
{ id: 3, fromTag: 'backlog', toTag: 'in-progress' }
])
);
});
});
});

View File

@@ -0,0 +1,537 @@
import { jest } from '@jest/globals';
import path from 'path';
import mockFs from 'mock-fs';
import fs from 'fs';
import { fileURLToPath } from 'url';
// Import the actual move task functionality
import moveTask, {
moveTasksBetweenTags
} from '../../scripts/modules/task-manager/move-task.js';
import { readJSON, writeJSON } from '../../scripts/modules/utils.js';
// Mock console to avoid conflicts with mock-fs
const originalConsole = { ...console };
beforeAll(() => {
global.console = {
...console,
log: jest.fn(),
error: jest.fn(),
warn: jest.fn(),
info: jest.fn()
};
});
afterAll(() => {
global.console = originalConsole;
});
// Get __dirname equivalent for ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
describe('Cross-Tag Task Movement Simple Integration Tests', () => {
const testDataDir = path.join(__dirname, 'fixtures');
const testTasksPath = path.join(testDataDir, 'tasks.json');
// Test data structure with proper tagged format
const testData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [], status: 'pending' },
{ id: 2, title: 'Task 2', dependencies: [], status: 'pending' }
]
},
'in-progress': {
tasks: [
{ id: 3, title: 'Task 3', dependencies: [], status: 'in-progress' }
]
}
};
beforeEach(() => {
// Set up mock file system with test data
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(testData, null, 2)
}
});
});
afterEach(() => {
// Clean up mock file system
mockFs.restore();
});
describe('Real Module Integration Tests', () => {
it('should move task within same tag using actual moveTask function', async () => {
// Test moving Task 1 from position 1 to position 5 within backlog tag
const result = await moveTask(
testTasksPath,
'1',
'5',
false, // Don't generate files for this test
{ tag: 'backlog' }
);
// Verify the move operation was successful
expect(result).toBeDefined();
expect(result.message).toContain('Moved task 1 to new ID 5');
// Read the updated data to verify the move actually happened
const updatedData = readJSON(testTasksPath, null, 'backlog');
const rawData = updatedData._rawTaggedData || updatedData;
const backlogTasks = rawData.backlog.tasks;
// Verify Task 1 is no longer at position 1
const taskAtPosition1 = backlogTasks.find((t) => t.id === 1);
expect(taskAtPosition1).toBeUndefined();
// Verify Task 1 is now at position 5
const taskAtPosition5 = backlogTasks.find((t) => t.id === 5);
expect(taskAtPosition5).toBeDefined();
expect(taskAtPosition5.title).toBe('Task 1');
expect(taskAtPosition5.status).toBe('pending');
});
it('should move tasks between tags using moveTasksBetweenTags function', async () => {
// Test moving Task 1 from backlog to in-progress tag
const result = await moveTasksBetweenTags(
testTasksPath,
['1'], // Task IDs to move (as strings)
'backlog', // Source tag
'in-progress', // Target tag
{ withDependencies: false, ignoreDependencies: false },
{ projectRoot: testDataDir }
);
// Verify the cross-tag move operation was successful
expect(result).toBeDefined();
expect(result.message).toContain(
'Successfully moved 1 tasks from "backlog" to "in-progress"'
);
expect(result.movedTasks).toHaveLength(1);
expect(result.movedTasks[0].id).toBe('1');
expect(result.movedTasks[0].fromTag).toBe('backlog');
expect(result.movedTasks[0].toTag).toBe('in-progress');
// Read the updated data to verify the move actually happened
const updatedData = readJSON(testTasksPath, null, 'backlog');
// readJSON returns resolved data, so we need to access the raw tagged data
const rawData = updatedData._rawTaggedData || updatedData;
const backlogTasks = rawData.backlog?.tasks || [];
const inProgressTasks = rawData['in-progress']?.tasks || [];
// Verify Task 1 is no longer in backlog
const taskInBacklog = backlogTasks.find((t) => t.id === 1);
expect(taskInBacklog).toBeUndefined();
// Verify Task 1 is now in in-progress
const taskInProgress = inProgressTasks.find((t) => t.id === 1);
expect(taskInProgress).toBeDefined();
expect(taskInProgress.title).toBe('Task 1');
expect(taskInProgress.status).toBe('pending');
});
it('should handle subtask movement restrictions', async () => {
// Create data with subtasks
const dataWithSubtasks = {
backlog: {
tasks: [
{
id: 1,
title: 'Task 1',
dependencies: [],
status: 'pending',
subtasks: [
{ id: '1.1', title: 'Subtask 1.1', status: 'pending' },
{ id: '1.2', title: 'Subtask 1.2', status: 'pending' }
]
}
]
},
'in-progress': {
tasks: [
{ id: 2, title: 'Task 2', dependencies: [], status: 'in-progress' }
]
}
};
// Write subtask data to mock file system
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(dataWithSubtasks, null, 2)
}
});
// Try to move a subtask directly - this should actually work (converts subtask to task)
const result = await moveTask(
testTasksPath,
'1.1', // Subtask ID
'5', // New task ID
false,
{ tag: 'backlog' }
);
// Verify the subtask was converted to a task
expect(result).toBeDefined();
expect(result.message).toContain('Converted subtask 1.1 to task 5');
// Verify the subtask was removed from the parent and converted to a standalone task
const updatedData = readJSON(testTasksPath, null, 'backlog');
const rawData = updatedData._rawTaggedData || updatedData;
const task1 = rawData.backlog?.tasks?.find((t) => t.id === 1);
const newTask5 = rawData.backlog?.tasks?.find((t) => t.id === 5);
expect(task1).toBeDefined();
expect(task1.subtasks).toHaveLength(1); // Only 1.2 remains
expect(task1.subtasks[0].id).toBe(2);
expect(newTask5).toBeDefined();
expect(newTask5.title).toBe('Subtask 1.1');
expect(newTask5.status).toBe('pending');
});
it('should handle missing source tag errors', async () => {
// Try to move from a non-existent tag
await expect(
moveTasksBetweenTags(
testTasksPath,
['1'],
'non-existent-tag', // Source tag doesn't exist
'in-progress',
{ withDependencies: false, ignoreDependencies: false },
{ projectRoot: testDataDir }
)
).rejects.toThrow();
});
it('should handle missing task ID errors', async () => {
// Try to move a non-existent task
await expect(
moveTask(
testTasksPath,
'999', // Non-existent task ID
'5',
false,
{ tag: 'backlog' }
)
).rejects.toThrow();
});
it('should handle ignoreDependencies option correctly', async () => {
// Create data with dependencies
const dataWithDependencies = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [2], status: 'pending' },
{ id: 2, title: 'Task 2', dependencies: [], status: 'pending' }
]
},
'in-progress': {
tasks: [
{ id: 3, title: 'Task 3', dependencies: [], status: 'in-progress' }
]
}
};
// Write dependency data to mock file system
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(dataWithDependencies, null, 2)
}
});
// Move Task 1 while ignoring its dependencies
const result = await moveTasksBetweenTags(
testTasksPath,
['1'], // Only Task 1
'backlog',
'in-progress',
{ withDependencies: false, ignoreDependencies: true },
{ projectRoot: testDataDir }
);
expect(result).toBeDefined();
expect(result.movedTasks).toHaveLength(1);
// Verify Task 1 moved but Task 2 stayed
const updatedData = readJSON(testTasksPath, null, 'backlog');
const rawData = updatedData._rawTaggedData || updatedData;
expect(rawData.backlog.tasks).toHaveLength(1); // Task 2 remains
expect(rawData['in-progress'].tasks).toHaveLength(2); // Task 3 + Task 1
// Verify Task 1 has no dependencies (they were ignored)
const movedTask = rawData['in-progress'].tasks.find((t) => t.id === 1);
expect(movedTask.dependencies).toEqual([]);
});
});
describe('Complex Dependency Scenarios', () => {
beforeAll(() => {
// Document the mock-fs limitation for complex dependency scenarios
console.warn(
'⚠️ Complex dependency tests are skipped due to mock-fs limitations. ' +
'These tests require real filesystem operations for proper dependency resolution. ' +
'Consider using real temporary filesystem setup for these scenarios.'
);
});
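// A possible alternative, sketched here only as a comment and not used by this suite:
// run these scenarios against a real temporary directory instead of mock-fs
// (assumes an additional `os` import; `fs` and `path` are already imported above).
//
//   let tmpDir;
//   beforeEach(() => {
//     tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'taskmaster-move-test-'));
//     fs.writeFileSync(
//       path.join(tmpDir, 'tasks.json'),
//       JSON.stringify(testData, null, 2)
//     );
//   });
//   afterEach(() => {
//     fs.rmSync(tmpDir, { recursive: true, force: true });
//   });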
it.skip('should handle dependency conflicts during cross-tag moves', async () => {
// For now, skip this test as the mock setup is not working correctly
// TODO: Fix mock-fs setup for complex dependency scenarios
});
it.skip('should handle withDependencies option correctly', async () => {
// For now, skip this test as the mock setup is not working correctly
// TODO: Fix mock-fs setup for complex dependency scenarios
});
});
describe('Complex Dependency Integration Tests with Mock-fs', () => {
const complexTestData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [2, 3], status: 'pending' },
{ id: 2, title: 'Task 2', dependencies: [4], status: 'pending' },
{ id: 3, title: 'Task 3', dependencies: [], status: 'pending' },
{ id: 4, title: 'Task 4', dependencies: [], status: 'pending' }
]
},
'in-progress': {
tasks: [
{ id: 5, title: 'Task 5', dependencies: [], status: 'in-progress' }
]
}
};
beforeEach(() => {
// Set up mock file system with complex dependency data
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(complexTestData, null, 2)
}
});
});
afterEach(() => {
// Clean up mock file system
mockFs.restore();
});
it('should handle dependency conflicts during cross-tag moves using actual move functions', async () => {
// Test moving Task 1 which has dependencies on Tasks 2 and 3
// This should fail because Task 1 depends on Tasks 2 and 3 which are in the same tag
await expect(
moveTasksBetweenTags(
testTasksPath,
['1'], // Task 1 with dependencies
'backlog',
'in-progress',
{ withDependencies: false, ignoreDependencies: false },
{ projectRoot: testDataDir }
)
).rejects.toThrow(
'Cannot move tasks: 2 cross-tag dependency conflicts found'
);
});
it('should handle withDependencies option correctly using actual move functions', async () => {
// Test moving Task 1 with its dependencies (Tasks 2 and 3)
// Task 2 also depends on Task 4, so all 4 tasks should move
const result = await moveTasksBetweenTags(
testTasksPath,
['1'], // Task 1
'backlog',
'in-progress',
{ withDependencies: true, ignoreDependencies: false },
{ projectRoot: testDataDir }
);
// Verify the move operation was successful
expect(result).toBeDefined();
expect(result.message).toContain(
'Successfully moved 4 tasks from "backlog" to "in-progress"'
);
expect(result.movedTasks).toHaveLength(4); // Task 1 + Tasks 2, 3, 4
// Read the updated data to verify all dependent tasks moved
const updatedData = readJSON(testTasksPath, null, 'backlog');
const rawData = updatedData._rawTaggedData || updatedData;
// Verify all tasks moved from backlog
expect(rawData.backlog?.tasks || []).toHaveLength(0); // All tasks moved
// Verify all tasks are now in in-progress
expect(rawData['in-progress']?.tasks || []).toHaveLength(5); // Task 5 + Tasks 1, 2, 3, 4
// Verify dependency relationships are preserved
const task1 = rawData['in-progress']?.tasks?.find((t) => t.id === 1);
const task2 = rawData['in-progress']?.tasks?.find((t) => t.id === 2);
const task3 = rawData['in-progress']?.tasks?.find((t) => t.id === 3);
const task4 = rawData['in-progress']?.tasks?.find((t) => t.id === 4);
expect(task1?.dependencies).toEqual([2, 3]);
expect(task2?.dependencies).toEqual([4]);
expect(task3?.dependencies).toEqual([]);
expect(task4?.dependencies).toEqual([]);
});
it('should handle circular dependency detection using actual move functions', async () => {
// Create data with circular dependencies
const circularData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [2], status: 'pending' },
{ id: 2, title: 'Task 2', dependencies: [3], status: 'pending' },
{ id: 3, title: 'Task 3', dependencies: [1], status: 'pending' } // Circular dependency
]
},
'in-progress': {
tasks: [
{ id: 4, title: 'Task 4', dependencies: [], status: 'in-progress' }
]
}
};
// Set up mock file system with circular dependency data
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(circularData, null, 2)
}
});
// Attempt to move Task 1 with dependencies should fail due to circular dependency
await expect(
moveTasksBetweenTags(
testTasksPath,
['1'],
'backlog',
'in-progress',
{ withDependencies: true, ignoreDependencies: false },
{ projectRoot: testDataDir }
)
).rejects.toThrow();
});
it('should handle nested dependency chains using actual move functions', async () => {
// Create data with nested dependency chains
const nestedData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [2], status: 'pending' },
{ id: 2, title: 'Task 2', dependencies: [3], status: 'pending' },
{ id: 3, title: 'Task 3', dependencies: [4], status: 'pending' },
{ id: 4, title: 'Task 4', dependencies: [], status: 'pending' }
]
},
'in-progress': {
tasks: [
{ id: 5, title: 'Task 5', dependencies: [], status: 'in-progress' }
]
}
};
// Set up mock file system with nested dependency data
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(nestedData, null, 2)
}
});
// Test moving Task 1 with all its nested dependencies
const result = await moveTasksBetweenTags(
testTasksPath,
['1'], // Task 1
'backlog',
'in-progress',
{ withDependencies: true, ignoreDependencies: false },
{ projectRoot: testDataDir }
);
// Verify the move operation was successful
expect(result).toBeDefined();
expect(result.message).toContain(
'Successfully moved 4 tasks from "backlog" to "in-progress"'
);
expect(result.movedTasks).toHaveLength(4); // Tasks 1, 2, 3, 4
// Read the updated data to verify all tasks moved
const updatedData = readJSON(testTasksPath, null, 'backlog');
const rawData = updatedData._rawTaggedData || updatedData;
// Verify all tasks moved from backlog
expect(rawData.backlog?.tasks || []).toHaveLength(0); // All tasks moved
// Verify all tasks are now in in-progress
expect(rawData['in-progress']?.tasks || []).toHaveLength(5); // Task 5 + Tasks 1, 2, 3, 4
// Verify dependency relationships are preserved
const task1 = rawData['in-progress']?.tasks?.find((t) => t.id === 1);
const task2 = rawData['in-progress']?.tasks?.find((t) => t.id === 2);
const task3 = rawData['in-progress']?.tasks?.find((t) => t.id === 3);
const task4 = rawData['in-progress']?.tasks?.find((t) => t.id === 4);
expect(task1?.dependencies).toEqual([2]);
expect(task2?.dependencies).toEqual([3]);
expect(task3?.dependencies).toEqual([4]);
expect(task4?.dependencies).toEqual([]);
});
it('should handle cross-tag dependency resolution using actual move functions', async () => {
// Create data with cross-tag dependencies
const crossTagData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [5], status: 'pending' }, // Depends on task in in-progress
{ id: 2, title: 'Task 2', dependencies: [], status: 'pending' }
]
},
'in-progress': {
tasks: [
{ id: 5, title: 'Task 5', dependencies: [], status: 'in-progress' }
]
}
};
// Set up mock file system with cross-tag dependency data
mockFs({
[testDataDir]: {
'tasks.json': JSON.stringify(crossTagData, null, 2)
}
});
// Test moving Task 1 which depends on Task 5 in another tag
const result = await moveTasksBetweenTags(
testTasksPath,
['1'], // Task 1
'backlog',
'in-progress',
{ withDependencies: false, ignoreDependencies: false },
{ projectRoot: testDataDir }
);
// Verify the move operation was successful
expect(result).toBeDefined();
expect(result.message).toContain(
'Successfully moved 1 tasks from "backlog" to "in-progress"'
);
// Read the updated data to verify the move actually happened
const updatedData = readJSON(testTasksPath, null, 'backlog');
const rawData = updatedData._rawTaggedData || updatedData;
// Verify Task 1 is no longer in backlog
const taskInBacklog = rawData.backlog?.tasks?.find((t) => t.id === 1);
expect(taskInBacklog).toBeUndefined();
// Verify Task 1 is now in in-progress with its dependency preserved
const taskInProgress = rawData['in-progress']?.tasks?.find(
(t) => t.id === 1
);
expect(taskInProgress).toBeDefined();
expect(taskInProgress.title).toBe('Task 1');
expect(taskInProgress.dependencies).toEqual([5]); // Cross-tag dependency preserved
});
});
});

View File

@@ -4,6 +4,22 @@
* This file is run before each test suite to set up the test environment.
*/
import path from 'path';
import { fileURLToPath } from 'url';
// Capture the actual original working directory before any changes
const originalWorkingDirectory = process.cwd();
// Store original working directory and project root
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const projectRoot = path.resolve(__dirname, '..');
// Ensure we're always starting from the project root
if (process.cwd() !== projectRoot) {
process.chdir(projectRoot);
}
// Mock environment variables
process.env.MODEL = 'sonar-pro';
process.env.MAX_TOKENS = '64000';
@@ -21,6 +37,10 @@ process.env.PERPLEXITY_API_KEY = 'test-mock-perplexity-key-for-tests';
// Add global test helpers if needed
global.wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
// Store original working directory for tests that need it
global.originalWorkingDirectory = originalWorkingDirectory;
global.projectRoot = projectRoot;
// If needed, silence console during tests
if (process.env.SILENCE_CONSOLE === 'true') {
global.console = {

View File

@@ -0,0 +1,238 @@
/**
* Tests for OpenAI Provider - Token parameter handling for GPT-5
*
* This test suite covers:
* 1. Correct identification of GPT-5 models requiring max_completion_tokens
* 2. Token parameter preparation for different model types
* 3. Validation of maxTokens parameter
* 4. Integer coercion of token values
*/
import { jest } from '@jest/globals';
// Mock the utils module to prevent logging during tests
jest.mock('../../../scripts/modules/utils.js', () => ({
log: jest.fn()
}));
// Import the provider
import { OpenAIProvider } from '../../../src/ai-providers/openai.js';
describe('OpenAIProvider', () => {
let provider;
beforeEach(() => {
provider = new OpenAIProvider();
jest.clearAllMocks();
});
describe('requiresMaxCompletionTokens', () => {
it('should return true for GPT-5 models', () => {
expect(provider.requiresMaxCompletionTokens('gpt-5')).toBe(true);
expect(provider.requiresMaxCompletionTokens('gpt-5-mini')).toBe(true);
expect(provider.requiresMaxCompletionTokens('gpt-5-nano')).toBe(true);
expect(provider.requiresMaxCompletionTokens('gpt-5-turbo')).toBe(true);
});
it('should return false for non-GPT-5 models', () => {
expect(provider.requiresMaxCompletionTokens('gpt-4')).toBe(false);
expect(provider.requiresMaxCompletionTokens('gpt-4o')).toBe(false);
expect(provider.requiresMaxCompletionTokens('gpt-3.5-turbo')).toBe(false);
expect(provider.requiresMaxCompletionTokens('o1')).toBe(false);
expect(provider.requiresMaxCompletionTokens('o1-mini')).toBe(false);
});
it('should handle null/undefined modelId', () => {
expect(provider.requiresMaxCompletionTokens(null)).toBeFalsy();
expect(provider.requiresMaxCompletionTokens(undefined)).toBeFalsy();
expect(provider.requiresMaxCompletionTokens('')).toBeFalsy();
});
});
describe('prepareTokenParam', () => {
it('should return max_completion_tokens for GPT-5 models', () => {
const result = provider.prepareTokenParam('gpt-5', 1000);
expect(result).toEqual({ max_completion_tokens: 1000 });
});
it('should return maxTokens for non-GPT-5 models', () => {
const result = provider.prepareTokenParam('gpt-4', 1000);
expect(result).toEqual({ maxTokens: 1000 });
});
it('should coerce token value to integer', () => {
// Float values
const result1 = provider.prepareTokenParam('gpt-5', 1000.7);
expect(result1).toEqual({ max_completion_tokens: 1000 });
const result2 = provider.prepareTokenParam('gpt-4', 1000.7);
expect(result2).toEqual({ maxTokens: 1000 });
// String float
const result3 = provider.prepareTokenParam('gpt-5', '1000.7');
expect(result3).toEqual({ max_completion_tokens: 1000 });
// String integers (common CLI input path)
expect(provider.prepareTokenParam('gpt-5', '1000')).toEqual({
max_completion_tokens: 1000
});
expect(provider.prepareTokenParam('gpt-4', '1000')).toEqual({
maxTokens: 1000
});
});
it('should return empty object for undefined maxTokens', () => {
const result = provider.prepareTokenParam('gpt-5', undefined);
expect(result).toEqual({});
});
it('should handle edge cases', () => {
// Test with 0 (should still pass through as 0)
const result1 = provider.prepareTokenParam('gpt-5', 0);
expect(result1).toEqual({ max_completion_tokens: 0 });
// Test with string number
const result2 = provider.prepareTokenParam('gpt-5', '100');
expect(result2).toEqual({ max_completion_tokens: 100 });
// Test with negative number (will be floored, validation happens elsewhere)
const result3 = provider.prepareTokenParam('gpt-4', -10.5);
expect(result3).toEqual({ maxTokens: -11 });
});
});
describe('validateOptionalParams', () => {
it('should accept valid maxTokens values', () => {
expect(() =>
provider.validateOptionalParams({ maxTokens: 1000 })
).not.toThrow();
expect(() =>
provider.validateOptionalParams({ maxTokens: 1 })
).not.toThrow();
expect(() =>
provider.validateOptionalParams({ maxTokens: '1000' })
).not.toThrow();
});
it('should reject invalid maxTokens values', () => {
expect(() => provider.validateOptionalParams({ maxTokens: 0 })).toThrow(
Error
);
expect(() => provider.validateOptionalParams({ maxTokens: -1 })).toThrow(
Error
);
expect(() => provider.validateOptionalParams({ maxTokens: NaN })).toThrow(
Error
);
expect(() =>
provider.validateOptionalParams({ maxTokens: Infinity })
).toThrow(Error);
expect(() =>
provider.validateOptionalParams({ maxTokens: 'invalid' })
).toThrow(Error);
});
it('should accept valid temperature values', () => {
expect(() =>
provider.validateOptionalParams({ temperature: 0 })
).not.toThrow();
expect(() =>
provider.validateOptionalParams({ temperature: 0.5 })
).not.toThrow();
expect(() =>
provider.validateOptionalParams({ temperature: 1 })
).not.toThrow();
});
it('should reject invalid temperature values', () => {
expect(() =>
provider.validateOptionalParams({ temperature: -0.1 })
).toThrow(Error);
expect(() =>
provider.validateOptionalParams({ temperature: 1.1 })
).toThrow(Error);
});
});
describe('getRequiredApiKeyName', () => {
it('should return OPENAI_API_KEY', () => {
expect(provider.getRequiredApiKeyName()).toBe('OPENAI_API_KEY');
});
});
describe('getClient', () => {
it('should throw error if API key is missing', () => {
expect(() => provider.getClient({})).toThrow(Error);
});
it('should create client with apiKey only', () => {
const params = {
apiKey: 'sk-test-123'
};
// The getClient method should return a function
const client = provider.getClient(params);
expect(typeof client).toBe('function');
// The client function should be callable and return a model object
const model = client('gpt-4');
expect(model).toBeDefined();
expect(model.modelId).toBe('gpt-4');
});
it('should create client with apiKey and baseURL', () => {
const params = {
apiKey: 'sk-test-456',
baseURL: 'https://api.openai.example'
};
// Should not throw when baseURL is provided
const client = provider.getClient(params);
expect(typeof client).toBe('function');
// The client function should be callable and return a model object
const model = client('gpt-5');
expect(model).toBeDefined();
expect(model.modelId).toBe('gpt-5');
});
it('should return the same client instance for the same parameters', () => {
const params = {
apiKey: 'sk-test-789'
};
// Multiple calls with same params should work
const client1 = provider.getClient(params);
const client2 = provider.getClient(params);
expect(typeof client1).toBe('function');
expect(typeof client2).toBe('function');
// Both clients should be able to create models
const model1 = client1('gpt-4');
const model2 = client2('gpt-4');
expect(model1.modelId).toBe('gpt-4');
expect(model2.modelId).toBe('gpt-4');
});
it('should handle different model IDs correctly', () => {
const client = provider.getClient({ apiKey: 'sk-test-models' });
// Test with different models
const gpt4 = client('gpt-4');
expect(gpt4.modelId).toBe('gpt-4');
const gpt5 = client('gpt-5');
expect(gpt5.modelId).toBe('gpt-5');
const gpt35 = client('gpt-3.5-turbo');
expect(gpt35.modelId).toBe('gpt-3.5-turbo');
});
});
describe('name property', () => {
it('should have OpenAI as the provider name', () => {
expect(provider.name).toBe('OpenAI');
});
});
});

View File

@@ -9,7 +9,8 @@ import {
removeDuplicateDependencies,
cleanupSubtaskDependencies,
ensureAtLeastOneIndependentSubtask,
validateAndFixDependencies
validateAndFixDependencies,
canMoveWithDependencies
} from '../../scripts/modules/dependency-manager.js';
import * as utils from '../../scripts/modules/utils.js';
import { sampleTasks } from '../fixtures/sample-tasks.js';
@@ -810,4 +811,113 @@ describe('Dependency Manager Module', () => {
);
});
});
describe('canMoveWithDependencies', () => {
it('should return canMove: false when conflicts exist', () => {
const allTasks = [
{
id: 1,
tag: 'source',
dependencies: [2],
title: 'Task 1'
},
{
id: 2,
tag: 'other',
dependencies: [],
title: 'Task 2'
}
];
const result = canMoveWithDependencies('1', 'source', 'target', allTasks);
expect(result.canMove).toBe(false);
expect(result.conflicts).toBeDefined();
expect(result.conflicts.length).toBeGreaterThan(0);
expect(result.dependentTaskIds).toBeDefined();
});
it('should return canMove: true when no conflicts exist', () => {
const allTasks = [
{
id: 1,
tag: 'source',
dependencies: [],
title: 'Task 1'
},
{
id: 2,
tag: 'target',
dependencies: [],
title: 'Task 2'
}
];
const result = canMoveWithDependencies('1', 'source', 'target', allTasks);
expect(result.canMove).toBe(true);
expect(result.conflicts).toBeDefined();
expect(result.conflicts.length).toBe(0);
expect(result.dependentTaskIds).toBeDefined();
expect(result.dependentTaskIds.length).toBe(0);
});
it('should handle subtask lookup correctly', () => {
const allTasks = [
{
id: 1,
tag: 'source',
dependencies: [],
title: 'Parent Task',
subtasks: [
{
id: 1,
dependencies: [2],
title: 'Subtask 1'
}
]
},
{
id: 2,
tag: 'other',
dependencies: [],
title: 'Task 2'
}
];
const result = canMoveWithDependencies(
'1.1',
'source',
'target',
allTasks
);
expect(result.canMove).toBe(false);
expect(result.conflicts).toBeDefined();
expect(result.conflicts.length).toBeGreaterThan(0);
});
it('should return error when task not found', () => {
const allTasks = [
{
id: 1,
tag: 'source',
dependencies: [],
title: 'Task 1'
}
];
const result = canMoveWithDependencies(
'999',
'source',
'target',
allTasks
);
expect(result.canMove).toBe(false);
expect(result.error).toBe('Task not found');
expect(result.dependentTaskIds).toEqual([]);
expect(result.conflicts).toEqual([]);
});
});
});

View File

@@ -0,0 +1,139 @@
/**
* Mock for move-task module
* Provides mock implementations for testing scenarios
*/
import { jest } from '@jest/globals';
// Mock the moveTask function from the core module
const mockMoveTask = jest
.fn()
.mockImplementation(
async (tasksPath, sourceId, destinationId, generateFiles, options) => {
// Simulate successful move operation
return {
success: true,
sourceId,
destinationId,
message: `Successfully moved task ${sourceId} to ${destinationId}`,
...options
};
}
);
// Mock the moveTaskDirect function
const mockMoveTaskDirect = jest
.fn()
.mockImplementation(async (args, log, context = {}) => {
// Validate required parameters
if (!args.sourceId) {
return {
success: false,
error: {
message: 'Source ID is required',
code: 'MISSING_SOURCE_ID'
}
};
}
if (!args.destinationId) {
return {
success: false,
error: {
message: 'Destination ID is required',
code: 'MISSING_DESTINATION_ID'
}
};
}
// Simulate successful move
return {
success: true,
data: {
sourceId: args.sourceId,
destinationId: args.destinationId,
message: `Successfully moved task/subtask ${args.sourceId} to ${args.destinationId}`,
tag: args.tag,
projectRoot: args.projectRoot
}
};
});
// Mock the moveTaskCrossTagDirect function
const mockMoveTaskCrossTagDirect = jest
.fn()
.mockImplementation(async (args, log, context = {}) => {
// Validate required parameters
if (!args.sourceIds) {
return {
success: false,
error: {
message: 'Source IDs are required',
code: 'MISSING_SOURCE_IDS'
}
};
}
if (!args.sourceTag) {
return {
success: false,
error: {
message: 'Source tag is required for cross-tag moves',
code: 'MISSING_SOURCE_TAG'
}
};
}
if (!args.targetTag) {
return {
success: false,
error: {
message: 'Target tag is required for cross-tag moves',
code: 'MISSING_TARGET_TAG'
}
};
}
if (args.sourceTag === args.targetTag) {
return {
success: false,
error: {
message: `Source and target tags are the same ("${args.sourceTag}")`,
code: 'SAME_SOURCE_TARGET_TAG'
}
};
}
// Simulate successful cross-tag move
return {
success: true,
data: {
sourceIds: args.sourceIds,
sourceTag: args.sourceTag,
targetTag: args.targetTag,
message: `Successfully moved tasks ${args.sourceIds} from ${args.sourceTag} to ${args.targetTag}`,
withDependencies: args.withDependencies || false,
ignoreDependencies: args.ignoreDependencies || false
}
};
});
// Mock the registerMoveTaskTool function
const mockRegisterMoveTaskTool = jest.fn().mockImplementation((server) => {
// Simulate tool registration
server.addTool({
name: 'move_task',
description: 'Move a task or subtask to a new position',
parameters: {},
execute: jest.fn()
});
});
// Export the mock functions
export {
mockMoveTask,
mockMoveTaskDirect,
mockMoveTaskCrossTagDirect,
mockRegisterMoveTaskTool
};
// Default export for the main moveTask function
export default mockMoveTask;

View File

@@ -0,0 +1,291 @@
import { jest } from '@jest/globals';
// Mock the utils functions
const mockFindTasksPath = jest
.fn()
.mockReturnValue('/test/path/.taskmaster/tasks/tasks.json');
jest.mock('../../../../mcp-server/src/core/utils/path-utils.js', () => ({
findTasksPath: mockFindTasksPath
}));
const mockEnableSilentMode = jest.fn();
const mockDisableSilentMode = jest.fn();
const mockReadJSON = jest.fn();
const mockWriteJSON = jest.fn();
jest.mock('../../../../scripts/modules/utils.js', () => ({
enableSilentMode: mockEnableSilentMode,
disableSilentMode: mockDisableSilentMode,
readJSON: mockReadJSON,
writeJSON: mockWriteJSON
}));
// Import the direct function after setting up mocks
import { moveTaskCrossTagDirect } from '../../../../mcp-server/src/core/direct-functions/move-task-cross-tag.js';
describe('MCP Cross-Tag Move Direct Function', () => {
const mockLog = {
info: jest.fn(),
error: jest.fn(),
warn: jest.fn()
};
beforeEach(() => {
jest.clearAllMocks();
});
describe('Mock Verification', () => {
it('should verify that mocks are working', () => {
// Test that findTasksPath mock is working
expect(mockFindTasksPath()).toBe(
'/test/path/.taskmaster/tasks/tasks.json'
);
// Test that readJSON mock is working
mockReadJSON.mockReturnValue('test');
expect(mockReadJSON()).toBe('test');
});
});
describe('Parameter Validation', () => {
it('should return error when source IDs are missing', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceTag: 'backlog',
targetTag: 'in-progress',
projectRoot: '/test'
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('MISSING_SOURCE_IDS');
expect(result.error.message).toBe('Source IDs are required');
});
it('should return error when source tag is missing', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1,2',
targetTag: 'in-progress',
projectRoot: '/test'
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('MISSING_SOURCE_TAG');
expect(result.error.message).toBe(
'Source tag is required for cross-tag moves'
);
});
it('should return error when target tag is missing', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1,2',
sourceTag: 'backlog',
projectRoot: '/test'
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('MISSING_TARGET_TAG');
expect(result.error.message).toBe(
'Target tag is required for cross-tag moves'
);
});
it('should return error when source and target tags are the same', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1,2',
sourceTag: 'backlog',
targetTag: 'backlog',
projectRoot: '/test'
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('SAME_SOURCE_TARGET_TAG');
expect(result.error.message).toBe(
'Source and target tags are the same ("backlog")'
);
expect(result.error.suggestions).toHaveLength(3);
});
});
describe('Error Code Mapping', () => {
it('should map tag not found errors correctly', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'invalid',
targetTag: 'in-progress',
projectRoot: '/test'
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
expect(result.error.message).toBe(
'Source tag "invalid" not found or invalid'
);
expect(result.error.suggestions).toHaveLength(3);
});
it('should map missing project root errors correctly', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'in-progress'
// Missing projectRoot
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('MISSING_PROJECT_ROOT');
expect(result.error.message).toBe(
'Project root is required if tasksJsonPath is not provided'
);
});
});
describe('Move Options Handling', () => {
it('should handle move options correctly', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'in-progress',
withDependencies: true,
ignoreDependencies: false,
projectRoot: '/test'
},
mockLog
);
// The function should fail due to missing tag, but options should be processed
expect(result.success).toBe(false);
expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
});
});
describe('Function Call Flow', () => {
it('should not reach findTasksPath when tag validation fails first', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'in-progress',
projectRoot: '/test'
},
mockLog
);
// The function should fail due to tag validation before reaching path resolution
expect(result.success).toBe(false);
expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
// Since the function fails early, findTasksPath is not called
expect(mockFindTasksPath).toHaveBeenCalledTimes(0);
});
it('should not toggle silent mode when tag validation fails early', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'in-progress',
projectRoot: '/test'
},
mockLog
);
// The function should fail due to tag validation before reaching silent mode calls
expect(result.success).toBe(false);
expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
// Since the function fails early, silent mode is not called
expect(mockEnableSilentMode).toHaveBeenCalledTimes(0);
expect(mockDisableSilentMode).toHaveBeenCalledTimes(0);
});
it('should parse source IDs correctly', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1, 2, 3', // With spaces
sourceTag: 'backlog',
targetTag: 'in-progress',
projectRoot: '/test'
},
mockLog
);
// Should fail due to tag validation, but ID parsing should work
expect(result.success).toBe(false);
expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
});
it('should handle move options correctly', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'in-progress',
withDependencies: true,
ignoreDependencies: false,
projectRoot: '/test'
},
mockLog
);
// Should fail due to tag validation, but option processing should work
expect(result.success).toBe(false);
expect(result.error.code).toBe('TAG_OR_TASK_NOT_FOUND');
});
});
describe('Error Handling', () => {
it('should handle missing project root correctly', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'in-progress'
// Missing projectRoot
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('MISSING_PROJECT_ROOT');
expect(result.error.message).toBe(
'Project root is required if tasksJsonPath is not provided'
);
});
it('should handle same source and target tags', async () => {
const result = await moveTaskCrossTagDirect(
{
sourceIds: '1',
sourceTag: 'backlog',
targetTag: 'backlog',
projectRoot: '/test'
},
mockLog
);
expect(result.success).toBe(false);
expect(result.error.code).toBe('SAME_SOURCE_TARGET_TAG');
expect(result.error.message).toBe(
'Source and target tags are the same ("backlog")'
);
expect(result.error.suggestions).toHaveLength(3);
});
});
});

View File

@@ -0,0 +1,134 @@
# Mock System Documentation
## Overview
The `move-cross-tag.test.js` file has been refactored to use a focused, maintainable mock system that addresses the brittleness and complexity of the original implementation.
## Key Improvements
### 1. **Focused Mocking**
- **Before**: Mocked 20+ modules, many irrelevant to cross-tag functionality
- **After**: Only mocks 5 core modules actually used in cross-tag moves
### 2. **Configuration-Driven Mocking**
```javascript
const mockConfig = {
core: {
moveTasksBetweenTags: true,
generateTaskFiles: true,
readJSON: true,
initTaskMaster: true,
findProjectRoot: true
}
};
```
### 3. **Reusable Mock Factory**
```javascript
function createMockFactory(config = mockConfig) {
const mocks = {};
if (config.core?.moveTasksBetweenTags) {
mocks.moveTasksBetweenTags = createMock('moveTasksBetweenTags');
}
// ... other mocks
return mocks;
}
```
## Mock Configuration
### Core Mocks (Required for Cross-Tag Functionality)
- `moveTasksBetweenTags`: Core move functionality
- `generateTaskFiles`: File generation after moves
- `readJSON`: Reading task data
- `initTaskMaster`: TaskMaster initialization
- `findProjectRoot`: Project path resolution
### Optional Mocks
- Console methods: `error`, `log`, `exit`
- TaskMaster instance methods: `getCurrentTag`, `getTasksPath`, `getProjectRoot`
## Usage Examples
### Default Configuration
```javascript
const mocks = setupMocks(); // Uses default mockConfig
```
### Minimal Configuration
```javascript
const minimalConfig = {
core: {
moveTasksBetweenTags: true,
generateTaskFiles: true,
readJSON: true
}
};
const mocks = setupMocks(minimalConfig);
```
### Selective Mocking
```javascript
const selectiveConfig = {
core: {
moveTasksBetweenTags: true,
generateTaskFiles: false, // Disabled
readJSON: true
}
};
const mocks = setupMocks(selectiveConfig);
```
## Benefits
1. **Reduced Complexity**: From 150+ lines of mock setup to 50 lines
2. **Better Maintainability**: Clear configuration object shows dependencies
3. **Focused Testing**: Only mocks what's actually used
4. **Flexible Configuration**: Easy to enable/disable specific mocks
5. **Consistent Naming**: All mocks use `createMock()` with descriptive names
## Migration Guide
### For Other Test Files
1. Identify actual module dependencies
2. Create configuration object for required mocks
3. Use `createMockFactory()` and `setupMocks()`
4. Remove unnecessary mocks
### Example Migration
```javascript
// Before: 20+ jest.mock() calls
jest.mock('module1', () => ({ ... }));
jest.mock('module2', () => ({ ... }));
// ... many more
// After: Configuration-driven
const mockConfig = {
core: {
requiredFunction1: true,
requiredFunction2: true
}
};
const mocks = setupMocks(mockConfig);
```
## Testing the Mock System
The test suite includes validation tests:
- `should work with minimal mock configuration`
- `should allow disabling specific mocks`
These ensure the mock factory works correctly and can be configured flexibly.
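As a concrete illustration, the second of these tests (condensed from the suite shown later in this diff) verifies that a disabled entry produces no mock at all:
```javascript
it('should allow disabling specific mocks', () => {
	const selectiveConfig = {
		core: {
			moveTasksBetweenTags: true,
			generateTaskFiles: false, // Disabled
			readJSON: true
		}
	};
	const selectiveMocks = createMockFactory(selectiveConfig);
	expect(selectiveMocks.moveTasksBetweenTags).toBeDefined();
	expect(selectiveMocks.generateTaskFiles).toBeUndefined();
	expect(selectiveMocks.readJSON).toBeDefined();
});
```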

View File

@@ -0,0 +1,512 @@
import { jest } from '@jest/globals';
import chalk from 'chalk';
// ============================================================================
// MOCK FACTORY & CONFIGURATION SYSTEM
// ============================================================================
/**
* Mock configuration object to enable/disable specific mocks per test
*/
const mockConfig = {
// Core functionality mocks (always needed)
core: {
moveTasksBetweenTags: true,
generateTaskFiles: true,
readJSON: true,
initTaskMaster: true,
findProjectRoot: true
},
// Console and process mocks
console: {
error: true,
log: true,
exit: true
},
// TaskMaster instance mocks
taskMaster: {
getCurrentTag: true,
getTasksPath: true,
getProjectRoot: true
}
};
/**
* Creates mock functions with consistent naming
*/
function createMock(name) {
return jest.fn().mockName(name);
}
/**
* Mock factory for creating focused mocks based on configuration
*/
function createMockFactory(config = mockConfig) {
const mocks = {};
// Core functionality mocks
if (config.core?.moveTasksBetweenTags) {
mocks.moveTasksBetweenTags = createMock('moveTasksBetweenTags');
}
if (config.core?.generateTaskFiles) {
mocks.generateTaskFiles = createMock('generateTaskFiles');
}
if (config.core?.readJSON) {
mocks.readJSON = createMock('readJSON');
}
if (config.core?.initTaskMaster) {
mocks.initTaskMaster = createMock('initTaskMaster');
}
if (config.core?.findProjectRoot) {
mocks.findProjectRoot = createMock('findProjectRoot');
}
return mocks;
}
/**
* Sets up mocks based on configuration
*/
function setupMocks(config = mockConfig) {
const mocks = createMockFactory(config);
// Only mock the modules that are actually used in cross-tag move functionality
if (config.core?.moveTasksBetweenTags) {
jest.mock(
'../../../../../scripts/modules/task-manager/move-task.js',
() => ({
moveTasksBetweenTags: mocks.moveTasksBetweenTags
})
);
}
if (
config.core?.generateTaskFiles ||
config.core?.readJSON ||
config.core?.findProjectRoot
) {
jest.mock('../../../../../scripts/modules/utils.js', () => ({
findProjectRoot: mocks.findProjectRoot,
generateTaskFiles: mocks.generateTaskFiles,
readJSON: mocks.readJSON,
// Minimal set of utils that might be used
log: jest.fn(),
writeJSON: jest.fn(),
getCurrentTag: jest.fn(() => 'master')
}));
}
if (config.core?.initTaskMaster) {
jest.mock('../../../../../scripts/modules/config-manager.js', () => ({
initTaskMaster: mocks.initTaskMaster,
isApiKeySet: jest.fn(() => true),
getConfig: jest.fn(() => ({}))
}));
}
// Mock chalk for consistent output testing
jest.mock('chalk', () => ({
red: jest.fn((text) => text),
blue: jest.fn((text) => text),
green: jest.fn((text) => text),
yellow: jest.fn((text) => text),
white: jest.fn((text) => ({
bold: jest.fn((text) => text)
})),
reset: jest.fn((text) => text)
}));
return mocks;
}
// ============================================================================
// TEST SETUP
// ============================================================================
// Set up mocks with default configuration
const mocks = setupMocks();
// Import the actual command handler functions
import { registerCommands } from '../../../../../scripts/modules/commands.js';
// A simplified, local re-implementation of the handleCrossTagMove logic
// from commands.js, defined here so the tests can exercise it in isolation
async function handleCrossTagMove(moveContext, options) {
const { sourceId, sourceTag, toTag, taskMaster } = moveContext;
if (!sourceId) {
console.error('Error: --from parameter is required for cross-tag moves');
process.exit(1);
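// process.exit is mocked in these tests, so execution continues; throw so callers can observe the failure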
throw new Error('--from parameter is required for cross-tag moves');
}
if (sourceTag === toTag) {
console.error(
`Error: Source and target tags are the same ("${sourceTag}")`
);
process.exit(1);
throw new Error(`Source and target tags are the same ("${sourceTag}")`);
}
const sourceIds = sourceId.split(',').map((id) => id.trim());
const moveOptions = {
withDependencies: options.withDependencies || false,
ignoreDependencies: options.ignoreDependencies || false
};
const result = await mocks.moveTasksBetweenTags(
taskMaster.getTasksPath(),
sourceIds,
sourceTag,
toTag,
moveOptions,
{ projectRoot: taskMaster.getProjectRoot() }
);
// Check if source tag still contains tasks before regenerating files
const tasksData = mocks.readJSON(
taskMaster.getTasksPath(),
taskMaster.getProjectRoot(),
sourceTag
);
const sourceTagHasTasks =
tasksData && Array.isArray(tasksData.tasks) && tasksData.tasks.length > 0;
// Generate task files for the affected tags
await mocks.generateTaskFiles(taskMaster.getTasksPath(), 'tasks', {
tag: toTag,
projectRoot: taskMaster.getProjectRoot()
});
// Only regenerate source tag files if it still contains tasks
if (sourceTagHasTasks) {
await mocks.generateTaskFiles(taskMaster.getTasksPath(), 'tasks', {
tag: sourceTag,
projectRoot: taskMaster.getProjectRoot()
});
}
return result;
}
// ============================================================================
// TEST SUITE
// ============================================================================
describe('CLI Move Command Cross-Tag Functionality', () => {
let mockTaskMaster;
let mockConsoleError;
let mockConsoleLog;
let mockProcessExit;
beforeEach(() => {
jest.clearAllMocks();
// Mock console methods
mockConsoleError = jest.spyOn(console, 'error').mockImplementation();
mockConsoleLog = jest.spyOn(console, 'log').mockImplementation();
mockProcessExit = jest.spyOn(process, 'exit').mockImplementation();
// Mock TaskMaster instance
mockTaskMaster = {
getCurrentTag: jest.fn().mockReturnValue('master'),
getTasksPath: jest.fn().mockReturnValue('/test/path/tasks.json'),
getProjectRoot: jest.fn().mockReturnValue('/test/project')
};
mocks.initTaskMaster.mockReturnValue(mockTaskMaster);
mocks.findProjectRoot.mockReturnValue('/test/project');
mocks.generateTaskFiles.mockResolvedValue();
mocks.readJSON.mockReturnValue({
tasks: [
{ id: 1, title: 'Test Task 1' },
{ id: 2, title: 'Test Task 2' }
]
});
});
afterEach(() => {
jest.restoreAllMocks();
});
describe('Cross-Tag Move Logic', () => {
it('should handle basic cross-tag move', async () => {
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
withDependencies: false,
ignoreDependencies: false
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
mocks.moveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved 1 tasks from "backlog" to "in-progress"'
});
await handleCrossTagMove(moveContext, options);
expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
'/test/path/tasks.json',
['1'],
'backlog',
'in-progress',
{
withDependencies: false,
ignoreDependencies: false
},
{ projectRoot: '/test/project' }
);
});
it('should handle --with-dependencies flag', async () => {
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
withDependencies: true,
ignoreDependencies: false
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
mocks.moveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved 2 tasks from "backlog" to "in-progress"'
});
await handleCrossTagMove(moveContext, options);
expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
'/test/path/tasks.json',
['1'],
'backlog',
'in-progress',
{
withDependencies: true,
ignoreDependencies: false
},
{ projectRoot: '/test/project' }
);
});
it('should handle --ignore-dependencies flag', async () => {
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'in-progress',
withDependencies: false,
ignoreDependencies: true
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
mocks.moveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved 1 tasks from "backlog" to "in-progress"'
});
await handleCrossTagMove(moveContext, options);
expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
'/test/path/tasks.json',
['1'],
'backlog',
'in-progress',
{
withDependencies: false,
ignoreDependencies: true
},
{ projectRoot: '/test/project' }
);
});
});
describe('Error Handling', () => {
it('should handle missing --from parameter', async () => {
const options = {
from: undefined,
fromTag: 'backlog',
toTag: 'in-progress'
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
await expect(handleCrossTagMove(moveContext, options)).rejects.toThrow();
expect(mockConsoleError).toHaveBeenCalledWith(
'Error: --from parameter is required for cross-tag moves'
);
expect(mockProcessExit).toHaveBeenCalledWith(1);
});
it('should handle same source and target tags', async () => {
const options = {
from: '1',
fromTag: 'backlog',
toTag: 'backlog'
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
await expect(handleCrossTagMove(moveContext, options)).rejects.toThrow();
expect(mockConsoleError).toHaveBeenCalledWith(
'Error: Source and target tags are the same ("backlog")'
);
expect(mockProcessExit).toHaveBeenCalledWith(1);
});
});
describe('Fallback to Current Tag', () => {
it('should use current tag when --from-tag is not provided', async () => {
const options = {
from: '1',
fromTag: undefined,
toTag: 'in-progress'
};
const moveContext = {
sourceId: options.from,
sourceTag: 'master', // Should use current tag
toTag: options.toTag,
taskMaster: mockTaskMaster
};
mocks.moveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved 1 tasks from "master" to "in-progress"'
});
await handleCrossTagMove(moveContext, options);
expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
'/test/path/tasks.json',
['1'],
'master',
'in-progress',
expect.any(Object),
{ projectRoot: '/test/project' }
);
});
});
describe('Multiple Task Movement', () => {
it('should handle comma-separated task IDs', async () => {
const options = {
from: '1,2,3',
fromTag: 'backlog',
toTag: 'in-progress'
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
mocks.moveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved 3 tasks from "backlog" to "in-progress"'
});
await handleCrossTagMove(moveContext, options);
expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
'/test/path/tasks.json',
['1', '2', '3'],
'backlog',
'in-progress',
expect.any(Object),
{ projectRoot: '/test/project' }
);
});
it('should handle whitespace in comma-separated task IDs', async () => {
const options = {
from: '1, 2, 3',
fromTag: 'backlog',
toTag: 'in-progress'
};
const moveContext = {
sourceId: options.from,
sourceTag: options.fromTag,
toTag: options.toTag,
taskMaster: mockTaskMaster
};
mocks.moveTasksBetweenTags.mockResolvedValue({
message: 'Successfully moved 3 tasks from "backlog" to "in-progress"'
});
await handleCrossTagMove(moveContext, options);
expect(mocks.moveTasksBetweenTags).toHaveBeenCalledWith(
'/test/path/tasks.json',
['1', '2', '3'],
'backlog',
'in-progress',
expect.any(Object),
{ projectRoot: '/test/project' }
);
});
});
describe('Mock Configuration Tests', () => {
it('should work with minimal mock configuration', async () => {
// Test that the mock factory works with minimal config
const minimalConfig = {
core: {
moveTasksBetweenTags: true,
generateTaskFiles: true,
readJSON: true
}
};
const minimalMocks = createMockFactory(minimalConfig);
expect(minimalMocks.moveTasksBetweenTags).toBeDefined();
expect(minimalMocks.generateTaskFiles).toBeDefined();
expect(minimalMocks.readJSON).toBeDefined();
});
it('should allow disabling specific mocks', async () => {
// Test that mocks can be selectively disabled
const selectiveConfig = {
core: {
moveTasksBetweenTags: true,
generateTaskFiles: false, // Disabled
readJSON: true
}
};
const selectiveMocks = createMockFactory(selectiveConfig);
expect(selectiveMocks.moveTasksBetweenTags).toBeDefined();
expect(selectiveMocks.generateTaskFiles).toBeUndefined();
expect(selectiveMocks.readJSON).toBeDefined();
});
});
});

View File

@@ -0,0 +1,330 @@
import { jest } from '@jest/globals';
import {
validateCrossTagMove,
findCrossTagDependencies,
getDependentTaskIds,
validateSubtaskMove,
canMoveWithDependencies
} from '../../../../../scripts/modules/dependency-manager.js';
describe('Circular Dependency Scenarios', () => {
describe('Circular Cross-Tag Dependencies', () => {
const allTasks = [
{
id: 1,
title: 'Task 1',
dependencies: [2],
status: 'pending',
tag: 'backlog'
},
{
id: 2,
title: 'Task 2',
dependencies: [3],
status: 'pending',
tag: 'backlog'
},
{
id: 3,
title: 'Task 3',
dependencies: [1],
status: 'pending',
tag: 'backlog'
}
];
it('should detect circular dependencies across tags', () => {
// Task 1 depends on 2, 2 depends on 3, 3 depends on 1 (circular)
// Only task 1 is being moved, and the check inspects just its direct
// dependencies outside the target tag, so only 1 -> 2 is reported
const conflicts = findCrossTagDependencies(
[allTasks[0]],
'backlog',
'in-progress',
allTasks
);
// Only direct dependencies of task 1 that are not in target tag
expect(conflicts).toHaveLength(1);
expect(
conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
).toBe(true);
});
it('should block move with circular dependencies', () => {
// Since task 1 has dependencies in the same tag, validateCrossTagMove should not throw
// The function only checks direct dependencies, not circular chains
expect(() => {
validateCrossTagMove(allTasks[0], 'backlog', 'in-progress', allTasks);
}).not.toThrow();
});
it('should return canMove: false for circular dependencies', () => {
const result = canMoveWithDependencies(
'1',
'backlog',
'in-progress',
allTasks
);
expect(result.canMove).toBe(false);
expect(result.conflicts).toHaveLength(1);
});
});
describe('Complex Dependency Chains', () => {
const allTasks = [
{
id: 1,
title: 'Task 1',
dependencies: [2, 3],
status: 'pending',
tag: 'backlog'
},
{
id: 2,
title: 'Task 2',
dependencies: [4],
status: 'pending',
tag: 'backlog'
},
{
id: 3,
title: 'Task 3',
dependencies: [5],
status: 'pending',
tag: 'backlog'
},
{
id: 4,
title: 'Task 4',
dependencies: [],
status: 'pending',
tag: 'backlog'
},
{
id: 5,
title: 'Task 5',
dependencies: [6],
status: 'pending',
tag: 'backlog'
},
{
id: 6,
title: 'Task 6',
dependencies: [],
status: 'pending',
tag: 'backlog'
},
{
id: 7,
title: 'Task 7',
dependencies: [],
status: 'in-progress',
tag: 'in-progress'
}
];
it('should find all dependencies in complex chain', () => {
const conflicts = findCrossTagDependencies(
[allTasks[0]],
'backlog',
'in-progress',
allTasks
);
// Only direct dependencies of task 1 that are not in target tag
expect(conflicts).toHaveLength(2);
expect(
conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
).toBe(true);
expect(
conflicts.some((c) => c.taskId === 1 && c.dependencyId === 3)
).toBe(true);
});
it('should get all dependent task IDs in complex chain', () => {
const conflicts = findCrossTagDependencies(
[allTasks[0]],
'backlog',
'in-progress',
allTasks
);
const dependentIds = getDependentTaskIds(
[allTasks[0]],
conflicts,
allTasks
);
// Should include only the direct dependency IDs from conflicts
expect(dependentIds).toContain(2);
expect(dependentIds).toContain(3);
// Should not include the source task or tasks not in conflicts
expect(dependentIds).not.toContain(1);
});
});
describe('Mixed Dependency Types', () => {
const allTasks = [
{
id: 1,
title: 'Task 1',
dependencies: [2, '3.1'],
status: 'pending',
tag: 'backlog'
},
{
id: 2,
title: 'Task 2',
dependencies: [4],
status: 'pending',
tag: 'backlog'
},
{
id: 3,
title: 'Task 3',
dependencies: [5],
status: 'pending',
tag: 'backlog',
subtasks: [
{
id: 1,
title: 'Subtask 3.1',
dependencies: [],
status: 'pending',
tag: 'backlog'
}
]
},
{
id: 4,
title: 'Task 4',
dependencies: [],
status: 'pending',
tag: 'backlog'
},
{
id: 5,
title: 'Task 5',
dependencies: [],
status: 'pending',
tag: 'backlog'
}
];
it('should handle mixed task and subtask dependencies', () => {
const conflicts = findCrossTagDependencies(
[allTasks[0]],
'backlog',
'in-progress',
allTasks
);
expect(conflicts).toHaveLength(2);
expect(
conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
).toBe(true);
expect(
conflicts.some((c) => c.taskId === 1 && c.dependencyId === '3.1')
).toBe(true);
});
});
describe('Large Task Set Performance', () => {
const allTasks = [];
for (let i = 1; i <= 100; i++) {
allTasks.push({
id: i,
title: `Task ${i}`,
dependencies: i < 100 ? [i + 1] : [],
status: 'pending',
tag: 'backlog'
});
}
it('should handle large task sets efficiently', () => {
const conflicts = findCrossTagDependencies(
[allTasks[0]],
'backlog',
'in-progress',
allTasks
);
expect(conflicts.length).toBeGreaterThan(0);
expect(conflicts[0]).toHaveProperty('taskId');
expect(conflicts[0]).toHaveProperty('dependencyId');
});
});
describe('Edge Cases and Error Conditions', () => {
const allTasks = [
{
id: 1,
title: 'Task 1',
dependencies: [2],
status: 'pending',
tag: 'backlog'
},
{
id: 2,
title: 'Task 2',
dependencies: [],
status: 'pending',
tag: 'backlog'
}
];
it('should handle empty task arrays', () => {
expect(() => {
findCrossTagDependencies([], 'backlog', 'in-progress', allTasks);
}).not.toThrow();
});
it('should handle non-existent tasks gracefully', () => {
expect(() => {
findCrossTagDependencies(
[{ id: 999, dependencies: [] }],
'backlog',
'in-progress',
allTasks
);
}).not.toThrow();
});
it('should handle invalid tag names', () => {
expect(() => {
findCrossTagDependencies(
[allTasks[0]],
'invalid-tag',
'in-progress',
allTasks
);
}).not.toThrow();
});
it('should handle null/undefined dependencies', () => {
const taskWithNullDeps = {
...allTasks[0],
dependencies: [null, undefined, 2]
};
expect(() => {
findCrossTagDependencies(
[taskWithNullDeps],
'backlog',
'in-progress',
allTasks
);
}).not.toThrow();
});
it('should handle string dependencies correctly', () => {
const taskWithStringDeps = { ...allTasks[0], dependencies: ['2', '3'] };
const conflicts = findCrossTagDependencies(
[taskWithStringDeps],
'backlog',
'in-progress',
allTasks
);
expect(conflicts.length).toBeGreaterThanOrEqual(0);
});
});
});

View File

@@ -0,0 +1,397 @@
import { jest } from '@jest/globals';
import {
validateCrossTagMove,
findCrossTagDependencies,
getDependentTaskIds,
validateSubtaskMove,
canMoveWithDependencies
} from '../../../../../scripts/modules/dependency-manager.js';
describe('Cross-Tag Dependency Validation', () => {
describe('validateCrossTagMove', () => {
const mockAllTasks = [
{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
];
it('should allow move when no dependencies exist', () => {
const task = { id: 2, dependencies: [], title: 'Task 2' };
const result = validateCrossTagMove(
task,
'backlog',
'in-progress',
mockAllTasks
);
expect(result.canMove).toBe(true);
expect(result.conflicts).toHaveLength(0);
});
it('should block move when cross-tag dependencies exist', () => {
const task = { id: 1, dependencies: [2], title: 'Task 1' };
const result = validateCrossTagMove(
task,
'backlog',
'in-progress',
mockAllTasks
);
expect(result.canMove).toBe(false);
expect(result.conflicts).toHaveLength(1);
expect(result.conflicts[0]).toMatchObject({
taskId: 1,
dependencyId: 2,
dependencyTag: 'backlog'
});
});
it('should allow move when dependencies are in target tag', () => {
const task = { id: 3, dependencies: [1], title: 'Task 3' };
// Move both task 1 and task 3 to in-progress, then move task 1 to done
const updatedTasks = mockAllTasks.map((t) => {
if (t.id === 1) return { ...t, tag: 'in-progress' };
if (t.id === 3) return { ...t, tag: 'in-progress' };
return t;
});
// Now move task 1 to done
const updatedTasks2 = updatedTasks.map((t) =>
t.id === 1 ? { ...t, tag: 'done' } : t
);
const result = validateCrossTagMove(
task,
'in-progress',
'done',
updatedTasks2
);
expect(result.canMove).toBe(true);
expect(result.conflicts).toHaveLength(0);
});
it('should handle multiple dependencies correctly', () => {
const task = { id: 5, dependencies: [1, 3], title: 'Task 5' };
const result = validateCrossTagMove(
task,
'backlog',
'done',
mockAllTasks
);
expect(result.canMove).toBe(false);
expect(result.conflicts).toHaveLength(2);
expect(result.conflicts[0].dependencyId).toBe(1);
expect(result.conflicts[1].dependencyId).toBe(3);
});
it('should throw error for invalid task parameter', () => {
expect(() =>
validateCrossTagMove(null, 'backlog', 'in-progress', mockAllTasks)
).toThrow('Task parameter must be a valid object');
});
it('should throw error for invalid source tag', () => {
const task = { id: 1, dependencies: [], title: 'Task 1' };
expect(() =>
validateCrossTagMove(task, '', 'in-progress', mockAllTasks)
).toThrow('Source tag must be a valid string');
});
it('should throw error for invalid target tag', () => {
const task = { id: 1, dependencies: [], title: 'Task 1' };
expect(() =>
validateCrossTagMove(task, 'backlog', null, mockAllTasks)
).toThrow('Target tag must be a valid string');
});
it('should throw error for invalid allTasks parameter', () => {
const task = { id: 1, dependencies: [], title: 'Task 1' };
expect(() =>
validateCrossTagMove(task, 'backlog', 'in-progress', 'not-an-array')
).toThrow('All tasks parameter must be an array');
});
});
describe('findCrossTagDependencies', () => {
const mockAllTasks = [
{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
];
it('should find cross-tag dependencies for multiple tasks', () => {
const sourceTasks = [
{ id: 1, dependencies: [2], title: 'Task 1' },
{ id: 3, dependencies: [1], title: 'Task 3' }
];
const conflicts = findCrossTagDependencies(
sourceTasks,
'backlog',
'done',
mockAllTasks
);
expect(conflicts).toHaveLength(2);
expect(conflicts[0].taskId).toBe(1);
expect(conflicts[0].dependencyId).toBe(2);
expect(conflicts[1].taskId).toBe(3);
expect(conflicts[1].dependencyId).toBe(1);
});
it('should return empty array when no cross-tag dependencies exist', () => {
const sourceTasks = [
{ id: 2, dependencies: [], title: 'Task 2' },
{ id: 4, dependencies: [], title: 'Task 4' }
];
const conflicts = findCrossTagDependencies(
sourceTasks,
'backlog',
'done',
mockAllTasks
);
expect(conflicts).toHaveLength(0);
});
it('should handle tasks without dependencies', () => {
const sourceTasks = [{ id: 2, dependencies: [], title: 'Task 2' }];
const conflicts = findCrossTagDependencies(
sourceTasks,
'backlog',
'done',
mockAllTasks
);
expect(conflicts).toHaveLength(0);
});
it('should throw error for invalid sourceTasks parameter', () => {
expect(() =>
findCrossTagDependencies(
'not-an-array',
'backlog',
'done',
mockAllTasks
)
).toThrow('Source tasks parameter must be an array');
});
it('should throw error for invalid source tag', () => {
const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
expect(() =>
findCrossTagDependencies(sourceTasks, '', 'done', mockAllTasks)
).toThrow('Source tag must be a valid string');
});
it('should throw error for invalid target tag', () => {
const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
expect(() =>
findCrossTagDependencies(sourceTasks, 'backlog', null, mockAllTasks)
).toThrow('Target tag must be a valid string');
});
it('should throw error for invalid allTasks parameter', () => {
const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
expect(() =>
findCrossTagDependencies(sourceTasks, 'backlog', 'done', 'not-an-array')
).toThrow('All tasks parameter must be an array');
});
});
describe('getDependentTaskIds', () => {
const mockAllTasks = [
{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
];
it('should return dependent task IDs', () => {
const sourceTasks = [{ id: 1, dependencies: [2], title: 'Task 1' }];
const crossTagDependencies = [
{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' }
];
const dependentIds = getDependentTaskIds(
sourceTasks,
crossTagDependencies,
mockAllTasks
);
expect(dependentIds).toContain(2);
// The function also finds tasks that depend on the source task, so we expect more than just the dependency
expect(dependentIds.length).toBeGreaterThan(0);
});
it('should handle multiple dependencies with recursive resolution', () => {
const sourceTasks = [{ id: 5, dependencies: [1, 3], title: 'Task 5' }];
const crossTagDependencies = [
{ taskId: 5, dependencyId: 1, dependencyTag: 'backlog' },
{ taskId: 5, dependencyId: 3, dependencyTag: 'in-progress' }
];
const dependentIds = getDependentTaskIds(
sourceTasks,
crossTagDependencies,
mockAllTasks
);
// Should find all dependencies recursively:
// Task 5 → [1, 3], Task 1 → [2], so total is [1, 2, 3]
expect(dependentIds).toContain(1);
expect(dependentIds).toContain(2); // Task 1's dependency
expect(dependentIds).toContain(3);
expect(dependentIds).toHaveLength(3);
});
it('should return empty array when no dependencies', () => {
const sourceTasks = [{ id: 2, dependencies: [], title: 'Task 2' }];
const crossTagDependencies = [];
const dependentIds = getDependentTaskIds(
sourceTasks,
crossTagDependencies,
mockAllTasks
);
// The function finds tasks that depend on source tasks, so even with no cross-tag dependencies,
// it might find tasks that depend on the source task
expect(Array.isArray(dependentIds)).toBe(true);
});
it('should throw error for invalid sourceTasks parameter', () => {
const crossTagDependencies = [];
expect(() =>
getDependentTaskIds('not-an-array', crossTagDependencies, mockAllTasks)
).toThrow('Source tasks parameter must be an array');
});
it('should throw error for invalid crossTagDependencies parameter', () => {
const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
expect(() =>
getDependentTaskIds(sourceTasks, 'not-an-array', mockAllTasks)
).toThrow('Cross tag dependencies parameter must be an array');
});
it('should throw error for invalid allTasks parameter', () => {
const sourceTasks = [{ id: 1, dependencies: [], title: 'Task 1' }];
const crossTagDependencies = [];
expect(() =>
getDependentTaskIds(sourceTasks, crossTagDependencies, 'not-an-array')
).toThrow('All tasks parameter must be an array');
});
});
describe('validateSubtaskMove', () => {
it('should throw error for subtask movement', () => {
expect(() =>
validateSubtaskMove('1.2', 'backlog', 'in-progress')
).toThrow('Cannot move subtask 1.2 directly between tags');
});
it('should allow regular task movement', () => {
expect(() =>
validateSubtaskMove('1', 'backlog', 'in-progress')
).not.toThrow();
});
it('should throw error for invalid taskId parameter', () => {
expect(() => validateSubtaskMove(null, 'backlog', 'in-progress')).toThrow(
'Task ID must be a valid string'
);
});
it('should throw error for invalid source tag', () => {
expect(() => validateSubtaskMove('1', '', 'in-progress')).toThrow(
'Source tag must be a valid string'
);
});
it('should throw error for invalid target tag', () => {
expect(() => validateSubtaskMove('1', 'backlog', null)).toThrow(
'Target tag must be a valid string'
);
});
});
describe('canMoveWithDependencies', () => {
const mockAllTasks = [
{ id: 1, tag: 'backlog', dependencies: [2], title: 'Task 1' },
{ id: 2, tag: 'backlog', dependencies: [], title: 'Task 2' },
{ id: 3, tag: 'in-progress', dependencies: [1], title: 'Task 3' },
{ id: 4, tag: 'done', dependencies: [], title: 'Task 4' }
];
it('should return canMove: true when no conflicts exist', () => {
const result = canMoveWithDependencies(
'2',
'backlog',
'in-progress',
mockAllTasks
);
expect(result.canMove).toBe(true);
expect(result.dependentTaskIds).toHaveLength(0);
expect(result.conflicts).toHaveLength(0);
});
it('should return canMove: false when conflicts exist', () => {
const result = canMoveWithDependencies(
'1',
'backlog',
'in-progress',
mockAllTasks
);
expect(result.canMove).toBe(false);
expect(result.dependentTaskIds).toContain(2);
expect(result.conflicts).toHaveLength(1);
});
it('should return canMove: false when task not found', () => {
const result = canMoveWithDependencies(
'999',
'backlog',
'in-progress',
mockAllTasks
);
expect(result.canMove).toBe(false);
expect(result.error).toBe('Task not found');
});
it('should handle string task IDs', () => {
const result = canMoveWithDependencies(
'2',
'backlog',
'in-progress',
mockAllTasks
);
expect(result.canMove).toBe(true);
});
it('should throw error for invalid taskId parameter', () => {
expect(() =>
canMoveWithDependencies(null, 'backlog', 'in-progress', mockAllTasks)
).toThrow('Task ID must be a valid string');
});
it('should throw error for invalid source tag', () => {
expect(() =>
canMoveWithDependencies('1', '', 'in-progress', mockAllTasks)
).toThrow('Source tag must be a valid string');
});
it('should throw error for invalid target tag', () => {
expect(() =>
canMoveWithDependencies('1', 'backlog', null, mockAllTasks)
).toThrow('Target tag must be a valid string');
});
it('should throw error for invalid allTasks parameter', () => {
expect(() =>
canMoveWithDependencies('1', 'backlog', 'in-progress', 'not-an-array')
).toThrow('All tasks parameter must be an array');
});
});
});

View File

@@ -20,17 +20,27 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
taskExists: jest.fn(() => true),
formatTaskId: jest.fn((id) => id),
findCycles: jest.fn(() => []),
traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => []),
isSilentMode: jest.fn(() => true),
resolveTag: jest.fn(() => 'master'),
getTasksForTag: jest.fn(() => []),
setTasksForTag: jest.fn(),
enableSilentMode: jest.fn(),
disableSilentMode: jest.fn()
disableSilentMode: jest.fn(),
isEmpty: jest.fn((value) => {
if (value === null || value === undefined) return true;
if (Array.isArray(value)) return value.length === 0;
if (typeof value === 'object' && value !== null)
return Object.keys(value).length === 0;
return false; // Not an array or object
}),
resolveEnvVariable: jest.fn()
}));
// Mock ui.js
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
displayBanner: jest.fn()
displayBanner: jest.fn(),
formatDependenciesWithStatus: jest.fn()
}));
// Mock task-manager.js

View File

@@ -41,7 +41,8 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
markMigrationForNotice: jest.fn(),
performCompleteTagMigration: jest.fn(),
setTasksForTag: jest.fn(),
getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || [])
getTasksForTag: jest.fn((data, tag) => data[tag]?.tasks || []),
traverseDependencies: jest.fn((tasks, taskId, visited) => [])
}));
jest.unstable_mockModule(

View File

@@ -90,6 +90,7 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
}
return path.join(projectRoot || '.', basePath);
}),
traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => []),
CONFIG: {
defaultSubtasks: 3
}

View File

@@ -22,7 +22,10 @@ jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
),
addComplexityToTask: jest.fn(),
readComplexityReport: jest.fn(() => null),
getTagAwareFilePath: jest.fn((tag, path) => '/mock/tagged/report.json')
getTagAwareFilePath: jest.fn((tag, path) => '/mock/tagged/report.json'),
stripAnsiCodes: jest.fn((text) =>
text ? text.replace(/\x1b\[[0-9;]*m/g, '') : text
)
}));
jest.unstable_mockModule('../../../../../scripts/modules/ui.js', () => ({
@@ -45,8 +48,13 @@ jest.unstable_mockModule(
);
// Import the mocked modules
const { readJSON, log, readComplexityReport, addComplexityToTask } =
await import('../../../../../scripts/modules/utils.js');
const {
readJSON,
log,
readComplexityReport,
addComplexityToTask,
stripAnsiCodes
} = await import('../../../../../scripts/modules/utils.js');
const { displayTaskList } = await import(
'../../../../../scripts/modules/ui.js'
);
@@ -584,4 +592,140 @@ describe('listTasks', () => {
expect(taskIds).toContain(5); // review task
});
});
describe('Compact output format', () => {
test('should output compact format when outputFormat is compact', async () => {
const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
const tasksPath = 'tasks/tasks.json';
await listTasks(tasksPath, null, null, false, 'compact', {
tag: 'master'
});
expect(consoleSpy).toHaveBeenCalled();
const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
// Strip ANSI color codes for testing
const cleanOutput = stripAnsiCodes(output);
// Should contain compact format elements: ID status title (priority) [→ dependencies]
expect(cleanOutput).toContain('1 done Setup Project (high)');
expect(cleanOutput).toContain(
'2 pending Implement Core Features (high) → 1'
);
consoleSpy.mockRestore();
});
test('should format single task compactly', async () => {
const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
const tasksPath = 'tasks/tasks.json';
await listTasks(tasksPath, null, null, false, 'compact', {
tag: 'master'
});
expect(consoleSpy).toHaveBeenCalled();
const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
// Should be compact (no verbose headers)
expect(output).not.toContain('Project Dashboard');
expect(output).not.toContain('Progress:');
consoleSpy.mockRestore();
});
test('should handle compact format with subtasks', async () => {
const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
const tasksPath = 'tasks/tasks.json';
await listTasks(
tasksPath,
null,
null,
true, // withSubtasks = true
'compact',
{ tag: 'master' }
);
expect(consoleSpy).toHaveBeenCalled();
const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
// Strip ANSI color codes for testing
const cleanOutput = stripAnsiCodes(output);
// Should handle both tasks and subtasks
expect(cleanOutput).toContain('1 done Setup Project (high)');
expect(cleanOutput).toContain('3.1 done Create Header Component');
consoleSpy.mockRestore();
});
test('should handle empty task list in compact format', async () => {
readJSON.mockReturnValue({ tasks: [] });
const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
const tasksPath = 'tasks/tasks.json';
await listTasks(tasksPath, null, null, false, 'compact', {
tag: 'master'
});
expect(consoleSpy).toHaveBeenCalledWith('No tasks found');
consoleSpy.mockRestore();
});
test('should format dependencies correctly with shared helper', async () => {
// Create mock tasks with various dependency scenarios
const tasksWithDeps = {
tasks: [
{
id: 1,
title: 'Task with no dependencies',
status: 'pending',
priority: 'medium',
dependencies: []
},
{
id: 2,
title: 'Task with few dependencies',
status: 'pending',
priority: 'high',
dependencies: [1, 3]
},
{
id: 3,
title: 'Task with many dependencies',
status: 'pending',
priority: 'low',
dependencies: [1, 2, 4, 5, 6, 7, 8, 9]
}
]
};
readJSON.mockReturnValue(tasksWithDeps);
const consoleSpy = jest.spyOn(console, 'log').mockImplementation();
const tasksPath = 'tasks/tasks.json';
await listTasks(tasksPath, null, null, false, 'compact', {
tag: 'master'
});
expect(consoleSpy).toHaveBeenCalled();
const output = consoleSpy.mock.calls.map((call) => call[0]).join('\n');
// Strip ANSI color codes for testing
const cleanOutput = stripAnsiCodes(output);
// Should format tasks correctly with compact output including priority
expect(cleanOutput).toContain(
'1 pending Task with no dependencies (medium)'
);
expect(cleanOutput).toContain('Task with few dependencies');
expect(cleanOutput).toContain('Task with many dependencies');
// Should show dependencies with arrow when they exist
expect(cleanOutput).toMatch(/2.*→.*1,3/);
// Should truncate many dependencies with "+X more" format
expect(cleanOutput).toMatch(/3.*→.*1,2,4,5,6.*\(\+\d+ more\)/);
consoleSpy.mockRestore();
});
});
});

View File

@@ -0,0 +1,633 @@
import { jest } from '@jest/globals';
// --- Mocks ---
jest.unstable_mockModule('../../../../../scripts/modules/utils.js', () => ({
readJSON: jest.fn(),
writeJSON: jest.fn(),
log: jest.fn(),
setTasksForTag: jest.fn(),
truncate: jest.fn((t) => t),
isSilentMode: jest.fn(() => false),
traverseDependencies: jest.fn((sourceTasks, allTasks, options = {}) => {
// Mock realistic dependency behavior for testing
const { direction = 'forward' } = options;
if (direction === 'forward') {
// For forward dependencies: return tasks that the source tasks depend on
const result = [];
sourceTasks.forEach((task) => {
if (task.dependencies && Array.isArray(task.dependencies)) {
result.push(...task.dependencies);
}
});
return result;
} else if (direction === 'reverse') {
// For reverse dependencies: return tasks that depend on the source tasks
const sourceIds = sourceTasks.map((t) => t.id);
const normalizedSourceIds = sourceIds.map((id) => String(id));
const result = [];
allTasks.forEach((task) => {
if (task.dependencies && Array.isArray(task.dependencies)) {
const hasDependency = task.dependencies.some((depId) =>
normalizedSourceIds.includes(String(depId))
);
if (hasDependency) {
result.push(task.id);
}
}
});
return result;
}
return [];
})
}));
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager/generate-task-files.js',
() => ({
default: jest.fn().mockResolvedValue()
})
);
jest.unstable_mockModule(
'../../../../../scripts/modules/task-manager.js',
() => ({
isTaskDependentOn: jest.fn(() => false)
})
);
jest.unstable_mockModule(
'../../../../../scripts/modules/dependency-manager.js',
() => ({
validateCrossTagMove: jest.fn(),
findCrossTagDependencies: jest.fn(),
getDependentTaskIds: jest.fn(),
validateSubtaskMove: jest.fn()
})
);
const { readJSON, writeJSON, log } = await import(
'../../../../../scripts/modules/utils.js'
);
const {
validateCrossTagMove,
findCrossTagDependencies,
getDependentTaskIds,
validateSubtaskMove
} = await import('../../../../../scripts/modules/dependency-manager.js');
const { moveTasksBetweenTags, getAllTasksWithTags } = await import(
'../../../../../scripts/modules/task-manager/move-task.js'
);
describe('Cross-Tag Task Movement', () => {
let mockRawData;
let mockTasksPath;
let mockContext;
beforeEach(() => {
jest.clearAllMocks();
// Setup mock data
mockRawData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: [2] },
{ id: 2, title: 'Task 2', dependencies: [] },
{ id: 3, title: 'Task 3', dependencies: [1] }
]
},
'in-progress': {
tasks: [{ id: 4, title: 'Task 4', dependencies: [] }]
},
done: {
tasks: [{ id: 5, title: 'Task 5', dependencies: [4] }]
}
};
mockTasksPath = '/test/path/tasks.json';
mockContext = { projectRoot: '/test/project' };
// Mock readJSON to return our test data
readJSON.mockImplementation((path, projectRoot, tag) => {
return { ...mockRawData[tag], tag, _rawTaggedData: mockRawData };
});
writeJSON.mockResolvedValue();
log.mockImplementation(() => {});
});
afterEach(() => {
jest.clearAllMocks();
});
describe('getAllTasksWithTags', () => {
it('should return all tasks with tag information', () => {
const allTasks = getAllTasksWithTags(mockRawData);
expect(allTasks).toHaveLength(5);
expect(allTasks.find((t) => t.id === 1).tag).toBe('backlog');
expect(allTasks.find((t) => t.id === 4).tag).toBe('in-progress');
expect(allTasks.find((t) => t.id === 5).tag).toBe('done');
});
});
describe('validateCrossTagMove', () => {
it('should allow move when no dependencies exist', () => {
const task = { id: 2, dependencies: [] };
const allTasks = getAllTasksWithTags(mockRawData);
validateCrossTagMove.mockReturnValue({ canMove: true, conflicts: [] });
const result = validateCrossTagMove(
task,
'backlog',
'in-progress',
allTasks
);
expect(result.canMove).toBe(true);
expect(result.conflicts).toHaveLength(0);
});
it('should block move when cross-tag dependencies exist', () => {
const task = { id: 1, dependencies: [2] };
const allTasks = getAllTasksWithTags(mockRawData);
validateCrossTagMove.mockReturnValue({
canMove: false,
conflicts: [{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' }]
});
const result = validateCrossTagMove(
task,
'backlog',
'in-progress',
allTasks
);
expect(result.canMove).toBe(false);
expect(result.conflicts).toHaveLength(1);
expect(result.conflicts[0].dependencyId).toBe(2);
});
});
describe('findCrossTagDependencies', () => {
it('should find cross-tag dependencies for multiple tasks', () => {
const sourceTasks = [
{ id: 1, dependencies: [2] },
{ id: 3, dependencies: [1] }
];
const allTasks = getAllTasksWithTags(mockRawData);
findCrossTagDependencies.mockReturnValue([
{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' },
{ taskId: 3, dependencyId: 1, dependencyTag: 'backlog' }
]);
const conflicts = findCrossTagDependencies(
sourceTasks,
'backlog',
'in-progress',
allTasks
);
expect(conflicts).toHaveLength(2);
expect(
conflicts.some((c) => c.taskId === 1 && c.dependencyId === 2)
).toBe(true);
expect(
conflicts.some((c) => c.taskId === 3 && c.dependencyId === 1)
).toBe(true);
});
});
describe('getDependentTaskIds', () => {
it('should return dependent task IDs', () => {
const sourceTasks = [{ id: 1, dependencies: [2] }];
const crossTagDependencies = [
{ taskId: 1, dependencyId: 2, dependencyTag: 'backlog' }
];
const allTasks = getAllTasksWithTags(mockRawData);
getDependentTaskIds.mockReturnValue([2]);
const dependentTaskIds = getDependentTaskIds(
sourceTasks,
crossTagDependencies,
allTasks
);
expect(dependentTaskIds).toContain(2);
});
});
describe('moveTasksBetweenTags', () => {
it('should move tasks without dependencies successfully', async () => {
// Mock the dependency functions to return no conflicts
findCrossTagDependencies.mockReturnValue([]);
validateSubtaskMove.mockImplementation(() => {});
const result = await moveTasksBetweenTags(
mockTasksPath,
[2],
'backlog',
'in-progress',
{},
mockContext
);
expect(result.message).toContain('Successfully moved 1 tasks');
expect(writeJSON).toHaveBeenCalledWith(
mockTasksPath,
expect.any(Object),
mockContext.projectRoot,
null
);
});
it('should throw error for cross-tag dependencies by default', async () => {
const mockDependency = {
taskId: 1,
dependencyId: 2,
dependencyTag: 'backlog'
};
findCrossTagDependencies.mockReturnValue([mockDependency]);
validateSubtaskMove.mockImplementation(() => {});
await expect(
moveTasksBetweenTags(
mockTasksPath,
[1],
'backlog',
'in-progress',
{},
mockContext
)
).rejects.toThrow(
'Cannot move tasks: 1 cross-tag dependency conflicts found'
);
expect(writeJSON).not.toHaveBeenCalled();
});
it('should move with dependencies when --with-dependencies is used', async () => {
const mockDependency = {
taskId: 1,
dependencyId: 2,
dependencyTag: 'backlog'
};
findCrossTagDependencies.mockReturnValue([mockDependency]);
getDependentTaskIds.mockReturnValue([2]);
validateSubtaskMove.mockImplementation(() => {});
const result = await moveTasksBetweenTags(
mockTasksPath,
[1],
'backlog',
'in-progress',
{ withDependencies: true },
mockContext
);
expect(result.message).toContain('Successfully moved 2 tasks');
expect(writeJSON).toHaveBeenCalledWith(
mockTasksPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 3,
title: 'Task 3',
dependencies: [1]
})
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 4,
title: 'Task 4',
dependencies: []
}),
expect.objectContaining({
id: 1,
title: 'Task 1',
dependencies: [2],
metadata: expect.objectContaining({
moveHistory: expect.arrayContaining([
expect.objectContaining({
fromTag: 'backlog',
toTag: 'in-progress',
timestamp: expect.any(String)
})
])
})
}),
expect.objectContaining({
id: 2,
title: 'Task 2',
dependencies: [],
metadata: expect.objectContaining({
moveHistory: expect.arrayContaining([
expect.objectContaining({
fromTag: 'backlog',
toTag: 'in-progress',
timestamp: expect.any(String)
})
])
})
})
])
}),
done: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 5,
title: 'Task 5',
dependencies: [4]
})
])
})
}),
mockContext.projectRoot,
null
);
});
it('should break dependencies when --ignore-dependencies is used', async () => {
const mockDependency = {
taskId: 1,
dependencyId: 2,
dependencyTag: 'backlog'
};
findCrossTagDependencies.mockReturnValue([mockDependency]);
validateSubtaskMove.mockImplementation(() => {});
const result = await moveTasksBetweenTags(
mockTasksPath,
[2],
'backlog',
'in-progress',
{ ignoreDependencies: true },
mockContext
);
expect(result.message).toContain('Successfully moved 1 tasks');
expect(writeJSON).toHaveBeenCalledWith(
mockTasksPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Task 1',
dependencies: [2] // Dependencies not actually removed in current implementation
}),
expect.objectContaining({
id: 3,
title: 'Task 3',
dependencies: [1]
})
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 4,
title: 'Task 4',
dependencies: []
}),
expect.objectContaining({
id: 2,
title: 'Task 2',
dependencies: [],
metadata: expect.objectContaining({
moveHistory: expect.arrayContaining([
expect.objectContaining({
fromTag: 'backlog',
toTag: 'in-progress',
timestamp: expect.any(String)
})
])
})
})
])
}),
done: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 5,
title: 'Task 5',
dependencies: [4]
})
])
})
}),
mockContext.projectRoot,
null
);
});
it('should create target tag if it does not exist', async () => {
findCrossTagDependencies.mockReturnValue([]);
validateSubtaskMove.mockImplementation(() => {});
const result = await moveTasksBetweenTags(
mockTasksPath,
[2],
'backlog',
'new-tag',
{},
mockContext
);
expect(result.message).toContain('Successfully moved 1 tasks');
expect(result.message).toContain('new-tag');
expect(writeJSON).toHaveBeenCalledWith(
mockTasksPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Task 1',
dependencies: [2]
}),
expect.objectContaining({
id: 3,
title: 'Task 3',
dependencies: [1]
})
])
}),
'new-tag': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 2,
title: 'Task 2',
dependencies: [],
metadata: expect.objectContaining({
moveHistory: expect.arrayContaining([
expect.objectContaining({
fromTag: 'backlog',
toTag: 'new-tag',
timestamp: expect.any(String)
})
])
})
})
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 4,
title: 'Task 4',
dependencies: []
})
])
}),
done: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 5,
title: 'Task 5',
dependencies: [4]
})
])
})
}),
mockContext.projectRoot,
null
);
});
it('should throw error for subtask movement', async () => {
const subtaskError = 'Cannot move subtask 1.2 directly between tags';
validateSubtaskMove.mockImplementation(() => {
throw new Error(subtaskError);
});
await expect(
moveTasksBetweenTags(
mockTasksPath,
['1.2'],
'backlog',
'in-progress',
{},
mockContext
)
).rejects.toThrow(subtaskError);
expect(writeJSON).not.toHaveBeenCalled();
});
it('should throw error for invalid task IDs', async () => {
findCrossTagDependencies.mockReturnValue([]);
validateSubtaskMove.mockImplementation(() => {});
await expect(
moveTasksBetweenTags(
mockTasksPath,
[999], // Non-existent task
'backlog',
'in-progress',
{},
mockContext
)
).rejects.toThrow('Task 999 not found in source tag "backlog"');
expect(writeJSON).not.toHaveBeenCalled();
});
it('should throw error for invalid source tag', async () => {
findCrossTagDependencies.mockReturnValue([]);
validateSubtaskMove.mockImplementation(() => {});
await expect(
moveTasksBetweenTags(
mockTasksPath,
[1],
'non-existent-tag',
'in-progress',
{},
mockContext
)
).rejects.toThrow('Source tag "non-existent-tag" not found or invalid');
expect(writeJSON).not.toHaveBeenCalled();
});
it('should handle string dependencies correctly during cross-tag move', async () => {
// Setup mock data with string dependencies
mockRawData = {
backlog: {
tasks: [
{ id: 1, title: 'Task 1', dependencies: ['2'] }, // String dependency
{ id: 2, title: 'Task 2', dependencies: [] },
{ id: 3, title: 'Task 3', dependencies: ['1'] } // String dependency
]
},
'in-progress': {
tasks: [{ id: 4, title: 'Task 4', dependencies: [] }]
}
};
// Mock readJSON to return our test data
readJSON.mockImplementation((path, projectRoot, tag) => {
return { ...mockRawData[tag], tag, _rawTaggedData: mockRawData };
});
findCrossTagDependencies.mockReturnValue([]);
validateSubtaskMove.mockImplementation(() => {});
const result = await moveTasksBetweenTags(
mockTasksPath,
['1'], // String task ID
'backlog',
'in-progress',
{},
mockContext
);
expect(result.message).toContain('Successfully moved 1 tasks');
expect(writeJSON).toHaveBeenCalledWith(
mockTasksPath,
expect.objectContaining({
backlog: expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 2,
title: 'Task 2',
dependencies: []
}),
expect.objectContaining({
id: 3,
title: 'Task 3',
dependencies: ['1'] // Should remain as string
})
])
}),
'in-progress': expect.objectContaining({
tasks: expect.arrayContaining([
expect.objectContaining({
id: 1,
title: 'Task 1',
dependencies: ['2'], // Should remain as string
metadata: expect.objectContaining({
moveHistory: expect.arrayContaining([
expect.objectContaining({
fromTag: 'backlog',
toTag: 'in-progress',
timestamp: expect.any(String)
})
])
})
})
])
})
}),
mockContext.projectRoot,
null
);
});
});
});

Some files were not shown because too many files have changed in this diff.