Refactors the initialize_project MCP tool to call a dedicated direct function (initializeProjectDirect) instead of executing the CLI command. This improves reliability and aligns it with other MCP tools.
Key changes include:
- Modified initialize-project.js to call initializeProjectDirect and made projectRoot a required parameter.
- Implemented handleApiResult for MCP response formatting.
- Enhanced the direct function to prioritize args.projectRoot over session-derived paths.
- Added validation to prevent initialization in invalid directories.
- Forced yes:true for non-interactive use.
- Ensured process.chdir() targets the validated directory.
- Added isSilentMode() checks to suppress console output during MCP operations.
This resolves issues where the tool previously failed due to incorrect fallback directory resolution when session context was incomplete.
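A minimal sketch of the shape such a direct function might take, assuming the names mentioned above (initializeProjectDirect, isSilentMode) and illustrative module paths and option handling; this is not the exact implementation:

```js
// Illustrative sketch only -- module paths, signatures, and option names are assumptions.
import fs from 'fs';
import { initializeProject } from '../../../scripts/init.js'; // assumed location of the init module
import { isSilentMode } from '../../tools/utils.js';          // assumed location of the helper

export async function initializeProjectDirect(args, log, { session } = {}) {
  // Prefer the explicitly supplied projectRoot over any session-derived path.
  const projectRoot = args.projectRoot || session?.roots?.[0]?.path;

  // Refuse to initialize in a missing or invalid directory.
  if (!projectRoot || !fs.existsSync(projectRoot)) {
    return { success: false, error: { message: `Invalid project root: ${projectRoot}` } };
  }

  process.chdir(projectRoot); // chdir only after the directory has been validated
  if (!isSilentMode()) {
    log.info(`Initializing project in ${projectRoot}`);
  }

  // MCP calls are non-interactive, so force confirmation of all prompts.
  const result = await initializeProject({ ...args, yes: true });
  return { success: true, data: result };
}
```

The MCP tool would then pass this result through handleApiResult to produce the standard MCP response shape.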
Centralizes init command logic within the main CLI structure. The action handler in commands.js now directly calls initializeProject from the init.js module, resolving issues with argument parsing (like -y) and removing the need for the separate bin/task-master-init.js executable. Updates package.json and bin/task-master.js accordingly.
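For comparison, the centralized action handler might look roughly like this (a sketch assuming Commander.js and an exported initializeProject(options); the paths and option wiring are illustrative):

```js
// scripts/modules/commands.js (illustrative excerpt)
import { Command } from 'commander';
import { initializeProject } from '../init.js'; // assumed path to the init module

const program = new Command();

program
  .command('init')
  .description('Initialize a new Task Master project')
  .option('-y, --yes', 'Skip prompts and use default answers')
  .action(async (options) => {
    // Call the init module directly instead of spawning a separate
    // bin/task-master-init.js process, so flags such as -y are parsed
    // by the same Commander instance as every other command.
    await initializeProject(options);
  });
```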
This commit fixes several issues with command line interface error handling:
1. Fix inconsistent behavior between --no-generate and --skip-generate:
- Standardized on --skip-generate across all commands
- Updated bin/task-master.js to use --skip-generate instead of --no-generate
- Modified add-subtask and remove-subtask commands to use --skip-generate
2. Enhance error handling for unknown options:
- Removed .allowUnknownOption() from commands to properly detect unknown options
- Added a global error handler in bin/task-master.js for unknown commands/options (see the sketch below)
- Added command-specific error handlers with helpful error messages
3. Improve user experience with better help messages:
- Added helper functions to display formatted command help on errors
- Created command-specific help displays for add-subtask and remove-subtask
- Show available options when encountering unknown options
4. Update MCP server configuration:
- Modified .cursor/mcp.json to use node ./mcp-server/server.js directly
- Removed npx -y usage for more reliable execution
5. Other minor improvements:
- Adjusted column width for task ID display in UI
- Updated version number in package-lock.json to 0.9.30
This resolves issues where users would see confusing error messages like
'error: unknown option --generate' when using an incorrect flag.
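A sketch of the error-handling approach behind points 1–3 above, assuming Commander.js; the messages and wiring are illustrative rather than the exact code:

```js
// bin/task-master.js (illustrative excerpt)
import { Command } from 'commander';

const program = new Command();

// Commands no longer call .allowUnknownOption(), so Commander rejects
// unrecognized flags instead of silently passing them through. Show the
// relevant help text after any such error.
program.showHelpAfterError('(run the command with --help to see its available options)');

// Catch unknown commands at the top level and print the global help.
program.on('command:*', (operands) => {
  console.error(`Error: unknown command '${operands[0]}'`);
  program.outputHelp();
  process.exitCode = 1;
});
```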
This commit introduces several significant improvements:
- **Enhanced Unit Testing:** Vastly improved unit tests for the module, covering core functions, edge cases, and error handling. Simplified test functions and comprehensive mocking were implemented for better isolation and reliability. Added a new section to tests.mdc detailing reliable testing techniques.
- **CLI Kebab-Case Flag Enforcement:** The CLI now enforces kebab-case for flags, providing helpful error messages when camelCase is used. This improves consistency and user experience.
- **AI Enhancements:**
- Enabled 128k token output for Claude 3.7 Sonnet by adding the required beta header.
- Added a new task to document this change and its testing strategy.
- Added unit tests to verify the Anthropic client configuration.
- Added supporting utility functions.
- **Improved Test Coverage:** Added tests for the new CLI flag validation logic.
- Fixed CLI wrapper to convert camelCase options to kebab-case when passing to dev.js
- Added explicit support for --input option in parse-prd command
- Updated commands.mdc to clarify Commander.js camelCase/kebab-case behavior
- Enhanced the flag-handling function in the CLI wrapper to correctly handle kebab-case flags by:
- Converting camelCase options back to kebab-case for command-line arguments (see the sketch after this list).
- Checking the original CLI arguments to determine the format used by the user.
- Preserving the original flag format when passing it to the underlying script.
- Special handling for certain flags to ensure they are correctly interpreted.
- Updated boolean flag handling to correctly manage negated options and preserve user-specified formats.
- Marked task 022 as done and updated the status of its subtasks.
- Added tasks 26, 27 and 28 for context improvements related to task generation
This commit ensures that all kebab-case flags are handled consistently across the CLI, improving user experience and command reliability.
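A sketch of the camelCase/kebab-case handling described above; the helper names are illustrative and the actual implementation in the CLI wrapper may differ:

```js
// Convert a camelCase option name (as Commander stores it) back to the
// kebab-case flag expected by the underlying script.
function toKebabCase(optionName) {
  return optionName.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase();
}

// Detect camelCase flags typed by the user so a helpful error can be shown.
function detectCamelCaseFlags(argv) {
  return argv
    .filter((arg) => arg.startsWith('--'))
    .map((arg) => arg.slice(2).split('=')[0])
    .filter((name) => /[A-Z]/.test(name))
    .map((name) => ({ original: `--${name}`, suggestion: `--${toKebabCase(name)}` }));
}

// Rebuild command-line flags from parsed options before forwarding them.
function buildForwardedFlags(options) {
  return Object.entries(options).flatMap(([key, value]) => {
    const flag = `--${toKebabCase(key)}`;
    if (value === true) return [flag];   // plain boolean flag
    if (value === false) return [];      // negated/unset boolean: omitted in this sketch
    return [`${flag}=${value}`];         // value-carrying flag
  });
}

// Example: { skipGenerate: true, numTasks: 10 } -> ['--skip-generate', '--num-tasks=10']
```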
This commit introduces a comprehensive set of skipped tests to both task-manager.test.js and utils.test.js. These skipped tests serve as a blueprint for future test implementation, outlining the necessary test cases for currently untested functionalities.
- Ensures the bin/ folder stays in sync by adding -r/--research to the command
- Fixes an issue where command-line arguments were improperly parsed
- Ensures a confirmation card is shown on dependency add/remove
- Properly formats some sub-task dependencies
**Potentially addressed issues:**
While primarily focused on adding test coverage, this commit also implicitly addresses potential issues by:
- **Improving error handling coverage:** The addition of skipped tests for error scenarios across several core functions highlights areas where error handling needs to be robustly tested and potentially improved in the codebase.
- **Enhancing dependency validation:** Several of the skipped tests cover validation of dependencies, prompting a review of the dependency validation logic and ensuring its correctness.
- **Standardizing test coverage:** By creating a clear roadmap for testing all functions, this commit contributes to a more standardized and complete test suite, reducing the likelihood of undiscovered bugs in the future.
**task-manager.test.js:**
- Added skipped test blocks for the following functions:
- Includes tests for handling valid JSON responses, malformed JSON, missing tasks in responses, Perplexity AI research integration, Claude fallback, and parallel task processing.
- Covers tests for updating tasks based on context, handling Claude streaming, Perplexity AI integration, scenarios with no tasks to update, and error handling during updates.
- Includes tests for generating task files, formatting dependencies with status indicators, handling tasks without subtasks, empty task arrays, and dependency validation before file generation.
- Covers tests for updating task status, subtask status using dot notation, updating multiple tasks, automatic subtask status updates, parent task update suggestions, and handling non-existent task IDs.
- Includes tests for updating regular and subtask statuses, handling parent tasks without subtasks, and non-existent subtask IDs.
- Covers tests for displaying all tasks, filtering by status, displaying subtasks, showing completion statistics, identifying the next task, and handling empty task arrays.
- Includes tests for generating subtasks, using complexity reports for subtask counts, Perplexity AI integration, appending subtasks, skipping completed tasks, and error handling during subtask generation.
- Covers tests for expanding all pending tasks, sorting by complexity, skipping tasks with existing subtasks (unless forced), using task-specific parameters from complexity reports, handling empty task arrays, and error handling for individual tasks.
- Includes tests for clearing subtasks from specific and multiple tasks, handling tasks without subtasks, non-existent task IDs, and regenerating task files after clearing subtasks.
- Covers tests for adding new tasks using AI, handling Claude streaming, validating dependencies, handling malformed AI responses, and using existing task context for generation.
**utils.test.js:**
- Added skipped test blocks for the following functions:
- Tests for logging messages according to log levels and filtering messages below configured levels.
- Tests for reading and parsing valid JSON files, handling file not found errors, and invalid JSON formats.
- Tests for writing JSON data to files and handling file write errors.
- Tests for escaping double quotes in prompts and handling prompts without special characters.
- Tests for reading and parsing complexity reports, handling missing report files, and custom report paths.
- Tests for finding tasks in reports by ID, handling non-existent task IDs, and invalid report structures.
- Tests for verifying existing task and subtask IDs, handling non-existent IDs, and invalid inputs.
- Tests for formatting numeric and string task IDs and preserving dot notation for subtasks.
- Tests for detecting simple and complex cycles in dependency graphs, handling acyclic graphs, and empty dependency maps.
These skipped tests provide a clear roadmap for future test development, ensuring comprehensive coverage for core functionalities in both modules. They document the intended behavior of each function and outline various scenarios, including happy paths, edge cases, and error conditions, thereby improving the overall test strategy and maintainability of the Task Master CLI.
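As an illustration of the blueprint pattern, one skipped block in task-manager.test.js might look like the following (assuming Jest; the describe label and scenario wording are drawn from the list above, not copied from the actual file):

```js
// tests/unit/task-manager.test.js (illustrative excerpt)
describe('clearing subtasks', () => {
  // Skipped placeholders documenting intended behaviour; each is filled in
  // (and un-skipped) once the necessary mocks are in place.
  test.skip('clears subtasks from a specific task', () => {});
  test.skip('clears subtasks from multiple tasks', () => {});
  test.skip('handles tasks without subtasks gracefully', () => {});
  test.skip('handles non-existent task IDs gracefully', () => {});
  test.skip('regenerates task files after clearing subtasks', () => {});
});
```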