Revert "Release 0.13.0"

Ralph Khreish
2025-05-03 14:38:33 +02:00
committed by GitHub
parent 8dace2186c
commit 6f5ddabc96
177 changed files with 13894 additions and 26358 deletions


@@ -46,20 +46,3 @@ Generate task files from sample tasks.json data and verify the content matches t
### Details:
<info added on 2025-05-01T21:59:10.551Z>
{
"id": 5,
"title": "Implement Change Detection and Update Handling",
"description": "Create a system to detect changes in task files and tasks.json, and handle updates bidirectionally. This includes implementing file watching or comparison mechanisms, determining which version is newer, and applying changes in the appropriate direction. Ensure the system handles edge cases like deleted files, new tasks, and conflicting changes.",
"status": "done",
"dependencies": [
1,
3,
4,
2
],
"acceptanceCriteria": "- Detects changes in both task files and tasks.json\n- Determines which version is newer based on modification timestamps or content\n- Applies changes in the appropriate direction (file to JSON or JSON to file)\n- Handles edge cases like deleted files, new tasks, and renamed tasks\n- Provides options for manual conflict resolution when necessary\n- Maintains data integrity during the synchronization process\n- Includes a command to force synchronization in either direction\n- Logs all synchronization activities for troubleshooting\n\nEach of these subtasks addresses a specific component of the task file generation system, following a logical progression from template design to bidirectional synchronization. The dependencies ensure that prerequisites are completed before dependent work begins, and the acceptance criteria provide clear guidelines for verifying each subtask's completion.",
"details": "[2025-05-01 21:59:07] Adding another note via MCP test."
}
</info added on 2025-05-01T21:59:10.551Z>
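As a rough illustration of the bidirectional handling described above: a minimal sketch in Node, assuming modification times decide the sync direction and that hypothetical `applyFileToJson`/`applyJsonToFile` helpers perform the actual writes.

```js
import fs from 'fs';

// Decide sync direction by comparing modification times. This is a
// simplification: content hashing would be more robust against clock skew.
function syncTask(taskFilePath, tasksJsonPath, applyFileToJson, applyJsonToFile) {
  if (!fs.existsSync(taskFilePath)) {
    // Edge case from the acceptance criteria: the task file was deleted,
    // so regenerate it from tasks.json.
    applyJsonToFile(tasksJsonPath, taskFilePath);
    return 'json-to-file';
  }
  const fileTime = fs.statSync(taskFilePath).mtimeMs;
  const jsonTime = fs.statSync(tasksJsonPath).mtimeMs;
  if (fileTime > jsonTime) {
    applyFileToJson(taskFilePath, tasksJsonPath);
    return 'file-to-json';
  }
  applyJsonToFile(tasksJsonPath, taskFilePath);
  return 'json-to-file';
}
```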


@@ -1,6 +1,6 @@
# Task ID: 23
# Title: Complete MCP Server Implementation for Task Master using FastMCP
-# Status: done
+# Status: in-progress
# Dependencies: 22
# Priority: medium
# Description: Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices.
@@ -221,7 +221,7 @@ Testing approach:
- Test error handling with invalid inputs
- Benchmark endpoint performance
-## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [done]
+## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [cancelled]
### Dependencies: 23.1, 23.2, 23.3
### Description: Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling.
### Details:
@@ -329,7 +329,7 @@ function listTasks(tasksPath, statusFilter, withSubtasks = false, outputFormat =
7. Add cache statistics for monitoring performance
8. Create unit tests for context management and caching functionality
-## 10. Enhance Tool Registration and Resource Management [done]
+## 10. Enhance Tool Registration and Resource Management [deferred]
### Dependencies: 23.1, 23.8
### Description: Refactor tool registration to follow FastMCP best practices, using decorators and improving the overall structure. Implement proper resource management for task templates and other shared resources.
### Details:
@@ -412,7 +412,7 @@ Best practices for integrating resources with Task Master functionality:
By properly implementing these resources and resource templates, we can provide rich, contextual data to LLM clients, enhancing the Task Master's capabilities and user experience.
</info added on 2025-03-31T18:35:21.513Z>
-## 11. Implement Comprehensive Error Handling [done]
+## 11. Implement Comprehensive Error Handling [deferred]
### Dependencies: 23.1, 23.3
### Description: Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.
### Details:
@@ -424,7 +424,7 @@ By properly implementing these resources and resource templates, we can provide
### Details:
1. Design structured log format for consistent parsing
2. Implement different log levels (debug, info, warn, error)
3. Add request/response logging middleware
4. Implement correlation IDs for request tracking
5. Add performance metrics logging
6. Configure log output destinations (console, file)
7. Document logging patterns and usage
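A minimal sketch of what this structured logging could look like, assuming JSON-lines output on stderr (keeping stdout free for the stdio transport) and `crypto.randomUUID` for correlation IDs; the middleware wiring is omitted.

```js
import { randomUUID } from 'crypto';

const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };
const threshold = LEVELS.info; // configurable log level

// One JSON object per line keeps the logs trivially machine-parseable.
// stderr is used so stdout stays clean for the MCP stdio transport.
function log(level, message, fields = {}) {
  if (LEVELS[level] < threshold) return;
  console.error(
    JSON.stringify({ ts: new Date().toISOString(), level, msg: message, ...fields })
  );
}

// Attach a correlation ID so all lines for one request can be grouped.
const ctx = { correlationId: randomUUID() };
log('info', 'tool invoked', { tool: 'listTasks', ...ctx });
```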
-## 13. Create Testing Framework and Test Suite [done]
+## 13. Create Testing Framework and Test Suite [deferred]
### Dependencies: 23.1, 23.3
### Description: Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.
### Details:
@@ -436,7 +436,7 @@ By properly implementing these resources and resource templates, we can provide
### Details:
1. Create functionality to detect if .cursor/mcp.json exists in the project
2. Implement logic to create a new mcp.json file with proper structure if it doesn't exist
3. Add functionality to read and parse existing mcp.json if it exists
4. Create method to add a new taskmaster-ai server entry to the mcpServers object
5. Implement intelligent JSON merging that avoids trailing commas and syntax errors
6. Ensure proper formatting and indentation in the generated/updated JSON
7. Add validation to verify the updated configuration is valid JSON
8. Include this functionality in the init workflow
9. Add error handling for file system operations and JSON parsing
10. Document the mcp.json structure and integration process
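A minimal sketch of steps 1-7 above; the `command`/`args` values for the server entry are illustrative, not necessarily the project's actual ones.

```js
import fs from 'fs';
import path from 'path';

// Create or update .cursor/mcp.json with a taskmaster-ai server entry.
function ensureMcpConfig(projectRoot) {
  const configPath = path.join(projectRoot, '.cursor', 'mcp.json');
  let config = { mcpServers: {} };
  if (fs.existsSync(configPath)) {
    // JSON.parse doubles as validation; a syntax error surfaces here.
    config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
    config.mcpServers = config.mcpServers || {};
  } else {
    fs.mkdirSync(path.dirname(configPath), { recursive: true });
  }
  config.mcpServers['taskmaster-ai'] = {
    command: 'npx', // illustrative invocation
    args: ['-y', 'task-master-mcp'],
  };
  // Re-serializing with JSON.stringify guarantees valid JSON with
  // consistent indentation and no trailing commas.
  fs.writeFileSync(configPath, JSON.stringify(config, null, 2) + '\n');
}
```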
-## 15. Implement SSE Support for Real-time Updates [done]
+## 15. Implement SSE Support for Real-time Updates [deferred]
### Dependencies: 23.1, 23.3, 23.11
### Description: Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients
### Details:
@@ -923,7 +923,7 @@ Following MCP implementation standards:
8. Update tests to reflect the new naming conventions
9. Create a linting rule to enforce naming conventions in future development
-## 34. Review functionality of all MCP direct functions [done]
+## 34. Review functionality of all MCP direct functions [in-progress]
### Dependencies: None
### Description: Verify that all implemented MCP direct functions work correctly with edge cases
### Details:
@@ -1130,13 +1130,13 @@ By implementing these advanced techniques, task-master can achieve robust path h
### Details:
-## 44. Implement init MCP command [done]
+## 44. Implement init MCP command [deferred]
### Dependencies: None
### Description: Create MCP tool implementation for the init command
### Details:
-## 45. Support setting env variables through mcp server [done]
+## 45. Support setting env variables through mcp server [pending]
### Dependencies: None
### Description: Currently we access env variables through the env file in the project (which we either create, or find and append to). We could abstract this by letting users define env vars directly in mcp.json, as many already do; in that case mcp.json should be added to .gitignore. For this, FastMCP likely just needs to read the environment in a specific way; we need to identify that mechanism and implement it.
### Details:
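For context: Cursor-style `mcp.json` server entries accept an `env` map that the client injects into the spawned server's environment, so on the server side this plausibly reduces to a small resolution helper. A sketch assuming a `dotenv` fallback to the project's env file:

```js
// In mcp.json the user would declare, e.g.:
//   "taskmaster-ai": { "command": "...", "args": [...],
//                      "env": { "ANTHROPIC_API_KEY": "..." } }
// The client injects that map into the server process's environment.
import dotenv from 'dotenv';

function resolveEnv(key, projectRoot) {
  if (process.env[key]) return process.env[key]; // set via mcp.json "env"
  const parsed = dotenv.config({ path: `${projectRoot}/.env` }).parsed || {};
  return parsed[key]; // fallback: the project's .env file
}
```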


@@ -1,6 +1,6 @@
# Task ID: 35
# Title: Integrate Grok3 API for Research Capabilities
-# Status: cancelled
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Replace the current Perplexity API integration with Grok3 API for all research-related functionalities while maintaining existing feature parity.


@@ -1,6 +1,6 @@
# Task ID: 36
# Title: Add Ollama Support for AI Services as Claude Alternative
-# Status: deferred
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement Ollama integration as an alternative to Claude for all main AI services, allowing users to run local language models instead of relying on cloud-based Claude API.


@@ -1,6 +1,6 @@
# Task ID: 37
# Title: Add Gemini Support for Main AI Services as Claude Alternative
-# Status: done
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement Google's Gemini API integration as an alternative to Claude for all main AI services, allowing users to switch between different LLM providers.


@@ -37,29 +37,3 @@ Test cases should include:
- Running the command on tasks with existing implementation plans to ensure proper appending
Manually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements.
# Subtasks:
## 1. Retrieve Task Content [in-progress]
### Dependencies: None
### Description: Fetch the content of the specified task from the task management system. This includes the task title, description, and any associated details.
### Details:
Implement a function to retrieve task details based on a task ID. Handle cases where the task does not exist.
## 2. Generate Implementation Plan with AI [pending]
### Dependencies: 40.1
### Description: Use an AI model (Claude or Perplexity) to generate an implementation plan based on the retrieved task content. The plan should outline the steps required to complete the task.
### Details:
Implement logic to switch between Claude and Perplexity APIs. Handle API authentication and rate limiting. Prompt the AI model with the task content and request a detailed implementation plan.
## 3. Format Plan in XML [pending]
### Dependencies: 40.2
### Description: Format the generated implementation plan within XML tags. Each step in the plan should be represented as an XML element with appropriate attributes.
### Details:
Define the XML schema for the implementation plan. Implement a function to convert the AI-generated plan into the defined XML format. Ensure proper XML syntax and validation.
## 4. Error Handling and Output [pending]
### Dependencies: 40.3
### Description: Implement error handling for all steps, including API failures and XML formatting errors. Output the formatted XML plan to the console or a file.
### Details:
Add try/catch blocks to handle potential exceptions. Log errors for debugging. Provide informative error messages to the user. Output the XML plan in a user-friendly format.
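A minimal sketch of the XML formatting from subtask 40.3, assuming a flat list of plan steps and a hypothetical `<implementationPlan>` schema; escaping is handled explicitly since it is the easiest part to get wrong.

```js
// Escape the five characters XML reserves in text content and attributes.
function escapeXml(text) {
  return text.replace(/[<>&'"]/g, (c) => ({
    '<': '&lt;', '>': '&gt;', '&': '&amp;', "'": '&apos;', '"': '&quot;',
  }[c]));
}

function formatPlanAsXml(steps) {
  const body = steps
    .map((step, i) => `  <step number="${i + 1}">${escapeXml(step)}</step>`)
    .join('\n');
  return `<implementationPlan>\n${body}\n</implementationPlan>`;
}
```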


@@ -70,65 +70,3 @@ This implementation should include:
6. Performance Testing:
- Measure rendering time for large projects
- Ensure reasonable performance with 100+ interconnected tasks
# Subtasks:
## 1. CLI Command Setup [pending]
### Dependencies: None
### Description: Design and implement the command-line interface for the dependency graph tool, including argument parsing and help documentation.
### Details:
Define commands for input file specification, output options, filtering, and other user-configurable parameters.
## 2. Graph Layout Algorithms [pending]
### Dependencies: 41.1
### Description: Develop or integrate algorithms to compute optimal node and edge placement for clear and readable graph layouts in a terminal environment.
### Details:
Consider topological sorting, hierarchical, and force-directed layouts suitable for ASCII/Unicode rendering.
## 3. ASCII/Unicode Rendering Engine [pending]
### Dependencies: 41.2
### Description: Implement rendering logic to display the dependency graph using ASCII and Unicode characters in the terminal.
### Details:
Support for various node and edge styles, and ensure compatibility with different terminal types.
## 4. Color Coding Support [pending]
### Dependencies: 41.3
### Description: Add color coding to nodes and edges to visually distinguish types, statuses, or other attributes in the graph.
### Details:
Use ANSI escape codes for color; provide options for colorblind-friendly palettes.
## 5. Circular Dependency Detection [pending]
### Dependencies: 41.2
### Description: Implement algorithms to detect and highlight circular dependencies within the graph.
### Details:
Clearly mark cycles in the rendered output and provide warnings or errors as appropriate.
## 6. Filtering and Search Functionality [pending]
### Dependencies: 41.1, 41.2
### Description: Enable users to filter nodes and edges by criteria such as name, type, or dependency depth.
### Details:
Support command-line flags for filtering and interactive search if feasible.
## 7. Accessibility Features [pending]
### Dependencies: 41.3, 41.4
### Description: Ensure the tool is accessible, including support for screen readers, high-contrast modes, and keyboard navigation.
### Details:
Provide alternative text output and ensure color is not the sole means of conveying information.
## 8. Performance Optimization [pending]
### Dependencies: 41.2, 41.3, 41.4, 41.5, 41.6
### Description: Profile and optimize the tool for large graphs to ensure responsive rendering and low memory usage.
### Details:
Implement lazy loading, efficient data structures, and parallel processing where appropriate.
## 9. Documentation [pending]
### Dependencies: 41.1, 41.2, 41.3, 41.4, 41.5, 41.6, 41.7, 41.8
### Description: Write comprehensive user and developer documentation covering installation, usage, configuration, and extension.
### Details:
Include examples, troubleshooting, and contribution guidelines.
## 10. Testing and Validation [pending]
### Dependencies: 41.1, 41.2, 41.3, 41.4, 41.5, 41.6, 41.7, 41.8, 41.9
### Description: Develop automated tests for all major features, including CLI parsing, layout correctness, rendering, color coding, filtering, and cycle detection.
### Details:
Include unit, integration, and regression tests; validate accessibility and performance claims.
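The circular-dependency detection from subtask 41.5 might be sketched as a depth-first search; `graph` here is an assumed map from a task ID to the IDs it depends on.

```js
// Returns every cycle found, each as a path that starts and ends
// on the same node, e.g. ['41.1', '41.2', '41.1'].
function findCycles(graph) {
  const cycles = [];
  const state = new Map(); // 'visiting' = on current path, 'done' = finished

  function visit(node, path) {
    if (state.get(node) === 'visiting') {
      cycles.push([...path.slice(path.indexOf(node)), node]);
      return;
    }
    if (state.get(node) === 'done') return;
    state.set(node, 'visiting');
    for (const dep of graph[node] || []) visit(dep, [...path, node]);
    state.set(node, 'done');
  }

  for (const node of Object.keys(graph)) visit(node, []);
  return cycles;
}

// Example: findCycles({ '41.1': ['41.2'], '41.2': ['41.1'] })
// yields [['41.1', '41.2', '41.1']].
```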


@@ -51,41 +51,3 @@ Testing should verify both the functionality and the quality of suggestions:
- Test with a parent task that has no description
- Test with a parent task that already has many subtasks
- Test with a newly created system with minimal task history
# Subtasks:
## 1. Implement parent task validation [pending]
### Dependencies: None
### Description: Create validation logic to ensure subtasks are being added to valid parent tasks
### Details:
Develop functions to verify that the parent task exists in the system before allowing subtask creation. Handle error cases gracefully with informative messages. Include validation for task ID format and existence in the database.
## 2. Build context gathering mechanism [pending]
### Dependencies: 53.1
### Description: Develop a system to collect relevant context from parent task and existing subtasks
### Details:
Create functions to extract information from the parent task including title, description, and metadata. Also gather information about any existing subtasks to provide context for AI suggestions. Format this data appropriately for the AI prompt.
## 3. Develop AI suggestion logic for subtasks [pending]
### Dependencies: 53.2
### Description: Create the core AI integration to generate relevant subtask suggestions
### Details:
Implement the AI prompt engineering and response handling for subtask generation. Ensure the AI provides structured output with appropriate fields for subtasks. Include error handling for API failures and malformed responses.
## 4. Create interactive CLI interface [pending]
### Dependencies: 53.3
### Description: Build a user-friendly command-line interface for the subtask suggestion feature
### Details:
Develop CLI commands and options for requesting subtask suggestions. Include interactive elements for selecting, modifying, or rejecting suggested subtasks. Ensure clear user feedback throughout the process.
## 5. Implement subtask linking functionality [pending]
### Dependencies: 53.4
### Description: Create system to properly link suggested subtasks to their parent task
### Details:
Develop the database operations to save accepted subtasks and link them to the parent task. Include functionality for setting dependencies between subtasks. Ensure proper transaction handling to maintain data integrity.
## 6. Perform comprehensive testing [pending]
### Dependencies: 53.5
### Description: Test the subtask suggestion feature across various scenarios
### Details:
Create unit tests for each component. Develop integration tests for the full feature workflow. Test edge cases including invalid inputs, API failures, and unusual task structures. Document test results and fix any identified issues.
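A minimal sketch of the context gathering in subtask 53.2, assuming the tasks.json field names (`title`, `description`, `status`, `subtasks`) visible elsewhere in this diff:

```js
// Build the prompt context for AI subtask suggestions from a parent task.
function buildSubtaskContext(parentTask) {
  const existing = (parentTask.subtasks || [])
    .map((st) => `- ${st.title} [${st.status}]`)
    .join('\n');
  return [
    `Parent task: ${parentTask.title}`,
    `Description: ${parentTask.description}`,
    existing ? `Existing subtasks:\n${existing}` : 'No subtasks yet.',
  ].join('\n\n');
}
```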


@@ -1,6 +1,6 @@
# Task ID: 54
# Title: Add Research Flag to Add-Task Command
-# Status: done
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Enhance the add-task command with a --research flag that allows users to perform quick research on the task topic before finalizing task creation.


@@ -1,6 +1,6 @@
# Task ID: 56
# Title: Refactor Task-Master Files into Node Module Structure
-# Status: done
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Restructure the task-master files by moving them from the project root into a proper node module structure to improve organization and maintainability.


@@ -1,6 +1,6 @@
# Task ID: 58
# Title: Implement Elegant Package Update Mechanism for Task-Master
-# Status: done
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a robust update mechanism that handles package updates gracefully, ensuring all necessary files are updated when the global package is upgraded.


@@ -1,6 +1,6 @@
# Task ID: 59
# Title: Remove Manual Package.json Modifications and Implement Automatic Dependency Management
-# Status: done
+# Status: pending
# Dependencies: None
# Priority: medium
# Description: Eliminate code that manually modifies users' package.json files and implement proper npm dependency management that automatically handles package requirements when users install task-master-ai.
@@ -28,41 +28,3 @@ This change will make the package more reliable, follow npm best practices, and
7. Test the uninstall process to verify it cleanly removes the package without leaving unwanted modifications
8. Verify the package works in different npm environments (npm 6, 7, 8) and with different Node.js versions
9. Create an integration test that simulates a real user workflow from installation through usage
# Subtasks:
## 1. Conduct Code Audit for Dependency Management [done]
### Dependencies: None
### Description: Review the current codebase to identify all areas where dependencies are manually managed, modified, or referenced outside of npm best practices.
### Details:
Focus on scripts, configuration files, and any custom logic related to dependency installation or versioning.
## 2. Remove Manual Dependency Modifications [done]
### Dependencies: 59.1
### Description: Eliminate any custom scripts or manual steps that alter dependencies outside of npm's standard workflow.
### Details:
Refactor or delete code that manually installs, updates, or modifies dependencies, ensuring all dependency management is handled via npm.
## 3. Update npm Dependencies [done]
### Dependencies: 59.2
### Description: Update all project dependencies using npm, ensuring versions are current and compatible, and resolve any conflicts.
### Details:
Run npm update, audit for vulnerabilities, and adjust package.json and package-lock.json as needed.
## 4. Update Initialization and Installation Commands [done]
### Dependencies: 59.3
### Description: Revise project setup scripts and documentation to reflect the new npm-based dependency management approach.
### Details:
Ensure that all initialization commands (e.g., npm install) are up-to-date and remove references to deprecated manual steps.
## 5. Update Documentation [done]
### Dependencies: 59.4
### Description: Revise project documentation to describe the new dependency management process and provide clear setup instructions.
### Details:
Update README, onboarding guides, and any developer documentation to align with npm best practices.
## 6. Perform Regression Testing [done]
### Dependencies: 59.5
### Description: Run comprehensive tests to ensure that the refactor has not introduced any regressions or broken existing functionality.
### Details:
Execute automated and manual tests, focusing on areas affected by dependency management changes.

File diff suppressed because it is too large


@@ -1,90 +0,0 @@
# Task ID: 62
# Title: Add --simple Flag to Update Commands for Direct Text Input
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement a --simple flag for update-task and update-subtask commands that allows users to add timestamped notes without AI processing, directly using the text from the prompt.
# Details:
This task involves modifying the update-task and update-subtask commands to accept a new --simple flag option. When this flag is present, the system should bypass the AI processing pipeline and directly use the text provided by the user as the update content. The implementation should:
1. Update the command parsers for both update-task and update-subtask to recognize the --simple flag
2. Modify the update logic to check for this flag and conditionally skip AI processing
3. When the flag is present, format the user's input text with a timestamp in the same format as AI-processed updates
4. Ensure the update is properly saved to the task or subtask's history
5. Update the help documentation to include information about this new flag
6. The timestamp format should match the existing format used for AI-generated updates
7. The simple update should be visually distinguishable from AI updates in the display (consider adding a 'manual update' indicator)
8. Maintain all existing functionality when the flag is not used
# Test Strategy:
Testing should verify both the functionality and user experience of the new feature:
1. Unit tests:
- Test that the command parser correctly recognizes the --simple flag
- Verify that AI processing is bypassed when the flag is present
- Ensure timestamps are correctly formatted and added
2. Integration tests:
- Update a task with --simple flag and verify the exact text is saved
- Update a subtask with --simple flag and verify the exact text is saved
- Compare the output format with AI-processed updates to ensure consistency
3. User experience tests:
- Verify help documentation correctly explains the new flag
- Test with various input lengths to ensure proper formatting
- Ensure the update appears correctly when viewing task history
4. Edge cases:
- Test with empty input text
- Test with very long input text
- Test with special characters and formatting in the input
# Subtasks:
## 1. Update command parsers to recognize --simple flag [pending]
### Dependencies: None
### Description: Modify the command parsers for both update-task and update-subtask commands to recognize and process the new --simple flag option.
### Details:
Add the --simple flag option to the command parser configurations in the CLI module. This should be implemented as a boolean flag that doesn't require any additional arguments. Update both the update-task and update-subtask command definitions to include this new option.
## 2. Implement conditional logic to bypass AI processing [pending]
### Dependencies: 62.1
### Description: Modify the update logic to check for the --simple flag and conditionally skip the AI processing pipeline when the flag is present.
### Details:
In the update handlers for both commands, add a condition to check if the --simple flag is set. If it is, create a path that bypasses the normal AI processing flow. This will require modifying the update functions to accept the flag parameter and branch the execution flow accordingly.
## 3. Format user input with timestamp for simple updates [pending]
### Dependencies: 62.2
### Description: Implement functionality to format the user's direct text input with a timestamp in the same format as AI-processed updates when the --simple flag is used.
### Details:
Create a utility function that takes the user's raw input text and prepends a timestamp in the same format used for AI-generated updates. This function should be called when the --simple flag is active. Ensure the timestamp format is consistent with the existing format used throughout the application.
## 4. Add visual indicator for manual updates [pending]
### Dependencies: 62.3
### Description: Make simple updates visually distinguishable from AI-processed updates by adding a 'manual update' indicator or other visual differentiation.
### Details:
Modify the update formatting to include a visual indicator (such as '[Manual Update]' prefix or different styling) when displaying updates that were created using the --simple flag. This will help users distinguish between AI-processed and manually entered updates.
## 5. Implement storage of simple updates in history [pending]
### Dependencies: 62.3, 62.4
### Description: Ensure that updates made with the --simple flag are properly saved to the task or subtask's history in the same way as AI-processed updates.
### Details:
Modify the storage logic to save the formatted simple updates to the task or subtask history. The storage format should be consistent with AI-processed updates, but include the manual indicator. Ensure that the update is properly associated with the correct task or subtask.
## 6. Update help documentation for the new flag [pending]
### Dependencies: 62.1
### Description: Update the help documentation for both update-task and update-subtask commands to include information about the new --simple flag.
### Details:
Add clear descriptions of the --simple flag to the help text for both commands. The documentation should explain that the flag allows users to add timestamped notes without AI processing, directly using the text from the prompt. Include examples of how to use the flag.
## 7. Implement integration tests for the simple update feature [pending]
### Dependencies: 62.1, 62.2, 62.3, 62.4, 62.5
### Description: Create comprehensive integration tests to verify that the --simple flag works correctly in both commands and integrates properly with the rest of the system.
### Details:
Develop integration tests that verify the entire flow of using the --simple flag with both update commands. Tests should confirm that updates are correctly formatted, stored, and displayed. Include edge cases such as empty input, very long input, and special characters.
## 8. Perform final validation and documentation [pending]
### Dependencies: 62.1, 62.2, 62.3, 62.4, 62.5, 62.6, 62.7
### Description: Conduct final validation of the feature across all use cases and update the user documentation to include the new functionality.
### Details:
Perform end-to-end testing of the feature to ensure it works correctly in all scenarios. Update the user documentation with detailed information about the new --simple flag, including its purpose, how to use it, and examples. Ensure that the documentation clearly explains the difference between AI-processed updates and simple updates.
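A minimal end-to-end sketch of subtasks 62.1-62.4, assuming a Commander-style CLI; `runAiUpdatePipeline` is a hypothetical stand-in for the existing AI path, and persistence (62.5) is omitted.

```js
import { program } from 'commander';

// Timestamp format matches the updates visible elsewhere in this diff,
// e.g. "[2025-05-01 21:59:07] ...", plus the manual-update indicator (62.4).
function formatSimpleUpdate(text) {
  const ts = new Date().toISOString().replace('T', ' ').slice(0, 19);
  return `[${ts}] [Manual Update] ${text}`;
}

// Hypothetical stand-in for the existing AI update path.
async function runAiUpdatePipeline(text) {
  return `[AI-processed placeholder] ${text}`;
}

program
  .command('update-task')
  .option('--simple', 'store the prompt text verbatim, skipping AI processing')
  .requiredOption('-p, --prompt <text>', 'update text')
  .action(async (opts) => {
    const update = opts.simple
      ? formatSimpleUpdate(opts.prompt) // bypass the AI pipeline (62.2)
      : await runAiUpdatePipeline(opts.prompt);
    console.log(update); // saving to task history (62.5) omitted here
  });

program.parse();
```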


@@ -1,138 +0,0 @@
# Task ID: 63
# Title: Add pnpm Support for the Taskmaster Package
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement full support for pnpm as an alternative package manager in the Taskmaster application, ensuring users have the exact same experience as with npm when installing and managing the package. The installation process, including any CLI prompts or web interfaces, must serve the exact same content and user experience regardless of whether npm or pnpm is used. The project uses 'module' as the package type, defines binaries 'task-master' and 'task-master-mcp', and its core logic resides in 'scripts/modules/'. The 'init' command (via scripts/init.js) creates the directory structure (.cursor/rules, scripts, tasks), copies templates (.env.example, .gitignore, rule files, dev.js), manages package.json merging, and sets up MCP config (.cursor/mcp.json). All dependencies are standard npm dependencies listed in package.json, and manual modifications are being removed.
# Details:
This task involves:
1. Update the installation documentation to include pnpm installation commands (e.g., `pnpm add taskmaster`).
2. Ensure all package scripts are compatible with pnpm's execution model:
- Review and modify package.json scripts if necessary
- Test script execution with pnpm syntax (`pnpm run <script>`)
- Address any pnpm-specific path or execution differences
- Confirm that scripts responsible for showing a website or prompt during install behave identically with pnpm and npm
3. Create a pnpm-lock.yaml file by installing dependencies with pnpm.
4. Test the application's installation and operation when installed via pnpm:
- Global installation (`pnpm add -g taskmaster`)
- Local project installation
- Verify CLI commands work correctly when installed with pnpm
- Verify binaries `task-master` and `task-master-mcp` are properly linked
- Ensure the `init` command (scripts/init.js) correctly creates directory structure and copies templates as described
5. Update CI/CD pipelines to include testing with pnpm:
- Add a pnpm test matrix to GitHub Actions workflows
- Ensure tests pass when dependencies are installed with pnpm
6. Handle any pnpm-specific dependency resolution issues:
- Address potential hoisting differences between npm and pnpm
- Test with pnpm's strict mode to ensure compatibility
- Verify proper handling of 'module' package type
7. Document any pnpm-specific considerations or commands in the README and documentation.
8. Verify that the `scripts/init.js` file works correctly with pnpm:
- Ensure it properly creates `.cursor/rules`, `scripts`, and `tasks` directories
- Verify template copying (`.env.example`, `.gitignore`, rule files, `dev.js`)
- Confirm `package.json` merging works correctly
- Test MCP config setup (`.cursor/mcp.json`)
9. Ensure core logic in `scripts/modules/` works correctly when installed via pnpm.
This implementation should maintain full feature parity and identical user experience regardless of which package manager is used to install Taskmaster.
# Test Strategy:
1. Manual Testing:
- Install Taskmaster globally using pnpm: `pnpm add -g taskmaster`
- Install Taskmaster locally in a test project: `pnpm add taskmaster`
- Verify all CLI commands function correctly with both installation methods
- Test all major features to ensure they work identically to npm installations
- Verify binaries `task-master` and `task-master-mcp` are properly linked and executable
- Test the `init` command to ensure it correctly sets up the directory structure and files as defined in scripts/init.js
2. Automated Testing:
- Create a dedicated test workflow in GitHub Actions that uses pnpm
- Run the full test suite using pnpm to install dependencies
- Verify all tests pass with the same results as npm
3. Documentation Testing:
- Review all documentation to ensure pnpm commands are correctly documented
- Verify installation instructions work as written
- Test any pnpm-specific instructions or notes
4. Compatibility Testing:
- Test on different operating systems (Windows, macOS, Linux)
- Verify compatibility with different pnpm versions (latest stable and LTS)
- Test in environments with multiple package managers installed
- Verify proper handling of 'module' package type
5. Edge Case Testing:
- Test installation in a project that uses pnpm workspaces
- Verify behavior when upgrading from an npm installation to pnpm
- Test with pnpm's various flags and modes (--frozen-lockfile, --strict-peer-dependencies)
6. Performance Comparison:
- Measure and document any performance differences between package managers
- Compare installation times and disk space usage
7. Structure Testing:
- Verify that the core logic in `scripts/modules/` is accessible and functions correctly
- Confirm that the `init` command properly creates all required directories and files as per scripts/init.js
- Test package.json merging functionality
- Verify MCP config setup
Success criteria: Taskmaster should install and function identically regardless of whether it was installed via npm or pnpm, with no degradation in functionality, performance, or user experience. All binaries should be properly linked, and the directory structure should be correctly created.
# Subtasks:
## 1. Update Documentation for pnpm Support [pending]
### Dependencies: None
### Description: Revise installation and usage documentation to include pnpm commands and instructions for installing and managing Taskmaster with pnpm. Clearly state that the installation process, including any website or UI shown, is identical to npm. Ensure documentation reflects the use of 'module' package type, binaries, and the init process as defined in scripts/init.js.
### Details:
Add pnpm installation commands (e.g., `pnpm add taskmaster`) and update all relevant sections in the README and official docs to reflect pnpm as a supported package manager. Document that any installation website or prompt is the same as with npm. Include notes on the 'module' package type, binaries, and the directory/template setup performed by scripts/init.js.
## 2. Ensure Package Scripts Compatibility with pnpm [pending]
### Dependencies: 63.1
### Description: Review and update package.json scripts to ensure they work seamlessly with pnpm's execution model. Confirm that any scripts responsible for showing a website or prompt during install behave identically with pnpm and npm. Ensure compatibility with 'module' package type and correct binary definitions.
### Details:
Test all scripts using `pnpm run <script>`, address any pnpm-specific path or execution differences, and modify scripts as needed for compatibility. Pay special attention to any scripts that trigger a website or prompt during installation, ensuring they serve the same content as npm. Validate that scripts/init.js and binaries are referenced correctly for ESM ('module') projects.
## 3. Generate and Validate pnpm Lockfile [pending]
### Dependencies: 63.2
### Description: Install dependencies using pnpm to create a pnpm-lock.yaml file and ensure it accurately reflects the project's dependency tree, considering the 'module' package type.
### Details:
Run `pnpm install` to generate the lockfile, check it into version control, and verify that dependency resolution is correct and consistent. Ensure that all dependencies listed in package.json are resolved as expected for an ESM project.
## 4. Test Taskmaster Installation and Operation with pnpm [pending]
### Dependencies: 63.3
### Description: Thoroughly test Taskmaster's installation and CLI operation when installed via pnpm, both globally and locally. Confirm that any website or UI shown during installation is identical to npm. Validate that binaries and the init process (scripts/init.js) work as expected.
### Details:
Perform global (`pnpm add -g taskmaster`) and local installations, verify CLI commands, and check for any pnpm-specific issues or incompatibilities. Ensure any installation UIs or websites appear identical to npm installations, including any website or prompt shown during install. Test that binaries 'task-master' and 'task-master-mcp' are linked and that scripts/init.js creates the correct structure and templates.
## 5. Integrate pnpm into CI/CD Pipeline [pending]
### Dependencies: 63.4
### Description: Update CI/CD workflows to include pnpm in the test matrix, ensuring all tests pass when dependencies are installed with pnpm. Confirm that tests cover the 'module' package type, binaries, and init process.
### Details:
Modify GitHub Actions or other CI configurations to use pnpm/action-setup, run tests with pnpm, and cache pnpm dependencies for efficiency. Ensure that CI covers CLI commands, binary linking, and the directory/template setup performed by scripts/init.js.
## 6. Verify Installation UI/Website Consistency [pending]
### Dependencies: 63.4
### Description: Ensure any installation UIs, websites, or interactive prompts—including any website or prompt shown during install—appear and function identically when installing with pnpm compared to npm. Confirm that the experience is consistent for the 'module' package type and the init process.
### Details:
Identify all user-facing elements during the installation process, including any website or prompt shown during install, and verify they are consistent across package managers. If a website is shown during installation, ensure it appears the same regardless of package manager used. Validate that any prompts or UIs triggered by scripts/init.js are identical.
## 7. Test init.js Script with pnpm [pending]
### Dependencies: 63.4
### Description: Verify that the scripts/init.js file works correctly when Taskmaster is installed via pnpm, creating the proper directory structure and copying all required templates as defined in the project structure.
### Details:
Test the init command to ensure it properly creates .cursor/rules, scripts, and tasks directories, copies templates (.env.example, .gitignore, rule files, dev.js), handles package.json merging, and sets up MCP config (.cursor/mcp.json) as per scripts/init.js.
## 8. Verify Binary Links with pnpm [pending]
### Dependencies: 63.4
### Description: Ensure that the task-master and task-master-mcp binaries are properly defined in package.json, linked, and executable when installed via pnpm, in both global and local installations.
### Details:
Check that the binaries defined in package.json are correctly linked in node_modules/.bin when installed with pnpm, and that they can be executed without errors. Validate that binaries work for ESM ('module') projects and are accessible after both global and local installs.
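The binary-link verification from subtask 63.8 could be sketched as a simple probe of `node_modules/.bin` after a local install:

```js
import fs from 'fs';
import path from 'path';

// Report whether pnpm linked both binaries declared in package.json.
function checkBinaryLinks(projectRoot) {
  return ['task-master', 'task-master-mcp'].map((bin) => ({
    bin,
    linked: fs.existsSync(path.join(projectRoot, 'node_modules', '.bin', bin)),
  }));
}
```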


@@ -1,202 +0,0 @@
# Task ID: 64
# Title: Add Yarn Support for Taskmaster Installation
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement full support for installing and managing Taskmaster using Yarn package manager, ensuring users have the exact same experience as with npm or pnpm. The installation process, including any CLI prompts or web interfaces, must serve the exact same content and user experience regardless of whether npm, pnpm, or Yarn is used. The project uses 'module' as the package type, defines binaries 'task-master' and 'task-master-mcp', and its core logic resides in 'scripts/modules/'. The 'init' command (via scripts/init.js) creates the directory structure (.cursor/rules, scripts, tasks), copies templates (.env.example, .gitignore, rule files, dev.js), manages package.json merging, and sets up MCP config (.cursor/mcp.json). All dependencies are standard npm dependencies listed in package.json, and manual modifications are being removed.
If the installation process includes a website component (such as for account setup or registration), ensure that any required website actions (e.g., creating an account, logging in, or configuring user settings) are clearly documented and tested for parity between Yarn and other package managers. If no website or account setup is required, confirm and document this explicitly.
# Details:
This task involves adding comprehensive Yarn support to the Taskmaster package to ensure it can be properly installed and managed using Yarn. Implementation should include:
1. Update package.json to ensure compatibility with Yarn installation methods, considering the 'module' package type and binary definitions
2. Verify all scripts and dependencies work correctly with Yarn
3. Add Yarn-specific configuration files (e.g., .yarnrc.yml if needed)
4. Update installation documentation to include Yarn installation instructions
5. Ensure all post-install scripts work correctly with Yarn
6. Verify that all CLI commands function properly when installed via Yarn
7. Ensure binaries `task-master` and `task-master-mcp` are properly linked
8. Test the `scripts/init.js` file with Yarn to verify it correctly:
- Creates directory structure (`.cursor/rules`, `scripts`, `tasks`)
- Copies templates (`.env.example`, `.gitignore`, rule files, `dev.js`)
- Manages `package.json` merging
- Sets up MCP config (`.cursor/mcp.json`)
9. Handle any Yarn-specific package resolution or hoisting issues
10. Test compatibility with different Yarn versions (classic and berry/v2+)
11. Ensure proper lockfile generation and management
12. Update any package manager detection logic in the codebase to recognize Yarn installations
13. Verify that core logic in `scripts/modules/` works correctly when installed via Yarn
14. If the installation process includes a website component, verify that any account setup or user registration flows work identically with Yarn as they do with npm or pnpm. If website actions are required, document the steps and ensure they are tested for parity. If not, confirm and document that no website or account setup is needed.
The implementation should maintain feature parity and identical user experience regardless of which package manager (npm, pnpm, or Yarn) is used to install Taskmaster.
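Item 12's package-manager detection is usually done via `npm_config_user_agent`, which npm, pnpm, Yarn, and Bun all set when running lifecycle scripts; a sketch:

```js
// e.g. npm_config_user_agent = "pnpm/8.6.0 npm/? node/v18.16.0 linux x64"
function detectPackageManager() {
  const ua = process.env.npm_config_user_agent || '';
  if (ua.startsWith('pnpm')) return 'pnpm';
  if (ua.startsWith('yarn')) return 'yarn';
  if (ua.startsWith('bun')) return 'bun';
  return 'npm'; // default when the agent string is absent or npm's
}
```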
# Test Strategy:
Testing should verify complete Yarn support through the following steps:
1. Fresh installation tests:
- Install Taskmaster using `yarn add taskmaster` (global and local installations)
- Verify installation completes without errors
- Check that binaries `task-master` and `task-master-mcp` are properly linked
- Test the `init` command to ensure it correctly sets up the directory structure and files as defined in scripts/init.js
2. Functionality tests:
- Run all Taskmaster commands on a Yarn-installed version
- Verify all features work identically to npm installations
- Test with both Yarn v1 (classic) and Yarn v2+ (berry)
- Verify proper handling of 'module' package type
3. Update/uninstall tests:
- Test updating the package using Yarn commands
- Verify clean uninstallation using Yarn
4. CI integration:
- Add Yarn installation tests to CI pipeline
- Test on different operating systems (Windows, macOS, Linux)
5. Documentation verification:
- Ensure all documentation accurately reflects Yarn installation methods
- Verify any Yarn-specific commands or configurations are properly documented
6. Edge cases:
- Test installation in monorepo setups using Yarn workspaces
- Verify compatibility with other Yarn-specific features (plug'n'play, zero-installs)
7. Structure Testing:
- Verify that the core logic in `scripts/modules/` is accessible and functions correctly
- Confirm that the `init` command properly creates all required directories and files as per scripts/init.js
- Test package.json merging functionality
- Verify MCP config setup
8. Website/Account Setup Testing:
- If the installation process includes a website component, test the complete user flow including account setup, registration, or configuration steps. Ensure these work identically with Yarn as with npm. If no website or account setup is required, confirm and document this in the test results.
- Document any website-specific steps that users need to complete during installation.
All tests should pass with the same results as when using npm, with identical user experience throughout the installation and usage process.
# Subtasks:
## 1. Update package.json for Yarn Compatibility [pending]
### Dependencies: None
### Description: Modify the package.json file to ensure all dependencies, scripts, and configurations are compatible with Yarn's installation and resolution methods. Confirm that any scripts responsible for showing a website or prompt during install behave identically with Yarn and npm. Ensure compatibility with 'module' package type and correct binary definitions.
### Details:
Review and update dependency declarations, script syntax, and any package manager-specific fields to avoid conflicts or unsupported features when using Yarn. Pay special attention to any scripts that trigger a website or prompt during installation, ensuring they serve the same content as npm. Validate that scripts/init.js and binaries are referenced correctly for ESM ('module') projects.
## 2. Add Yarn-Specific Configuration Files [pending]
### Dependencies: 64.1
### Description: Introduce Yarn-specific configuration files such as .yarnrc.yml if needed to optimize Yarn behavior and ensure consistent installs for 'module' package type and binary definitions.
### Details:
Determine if Yarn v2+ (Berry) or classic requires additional configuration for the project, and add or update .yarnrc.yml or .yarnrc files accordingly. Ensure configuration supports ESM and binary linking.
## 3. Test and Fix Yarn Compatibility for Scripts and CLI [pending]
### Dependencies: 64.2
### Description: Ensure all scripts, post-install hooks, and CLI commands function correctly when Taskmaster is installed and managed via Yarn. Confirm that any website or UI shown during installation is identical to npm. Validate that binaries and the init process (scripts/init.js) work as expected.
### Details:
Test all lifecycle scripts, post-install actions, and CLI commands using Yarn. Address any issues related to environment variables, script execution, or dependency hoisting. Ensure any website or prompt shown during install is the same as with npm. Validate that binaries 'task-master' and 'task-master-mcp' are linked and that scripts/init.js creates the correct structure and templates.
## 4. Update Documentation for Yarn Installation and Usage [pending]
### Dependencies: 64.3
### Description: Revise installation and usage documentation to include clear instructions for installing and managing Taskmaster with Yarn. Clearly state that the installation process, including any website or UI shown, is identical to npm. Ensure documentation reflects the use of 'module' package type, binaries, and the init process as defined in scripts/init.js. If the installation process includes a website component or requires account setup, document the steps users must follow. If not, explicitly state that no website or account setup is required.
### Details:
Add Yarn-specific installation commands, troubleshooting tips, and notes on version compatibility to the README and any relevant docs. Document that any installation website or prompt is the same as with npm. Include notes on the 'module' package type, binaries, and the directory/template setup performed by scripts/init.js. If website or account setup is required during installation, provide clear instructions; otherwise, confirm and document that no such steps are needed.
## 5. Implement and Test Package Manager Detection Logic [pending]
### Dependencies: 64.4
### Description: Update or add logic in the codebase to detect Yarn installations and handle Yarn-specific behaviors, ensuring feature parity across package managers. Ensure detection logic works for 'module' package type and binary definitions.
### Details:
Modify detection logic to recognize Yarn (classic and berry), handle lockfile generation, and resolve any Yarn-specific package resolution or hoisting issues. Ensure detection logic supports ESM and binary linking.
## 6. Verify Installation UI/Website Consistency [pending]
### Dependencies: 64.3
### Description: Ensure any installation UIs, websites, or interactive prompts—including any website or prompt shown during install—appear and function identically when installing with Yarn compared to npm. Confirm that the experience is consistent for the 'module' package type and the init process. If the installation process includes a website or account setup, verify that all required website actions (e.g., account creation, login) are consistent and documented. If not, confirm and document that no website or account setup is needed.
### Details:
Identify all user-facing elements during the installation process, including any website or prompt shown during install, and verify they are consistent across package managers. If a website is shown during installation or account setup is required, ensure it appears and functions the same regardless of package manager used, and document the steps. If not, confirm and document that no website or account setup is needed. Validate that any prompts or UIs triggered by scripts/init.js are identical.
## 7. Test init.js Script with Yarn [pending]
### Dependencies: 64.3
### Description: Verify that the scripts/init.js file works correctly when Taskmaster is installed via Yarn, creating the proper directory structure and copying all required templates as defined in the project structure.
### Details:
Test the init command to ensure it properly creates .cursor/rules, scripts, and tasks directories, copies templates (.env.example, .gitignore, rule files, dev.js), handles package.json merging, and sets up MCP config (.cursor/mcp.json) as per scripts/init.js.
## 8. Verify Binary Links with Yarn [pending]
### Dependencies: 64.3
### Description: Ensure that the task-master and task-master-mcp binaries are properly defined in package.json, linked, and executable when installed via Yarn, in both global and local installations.
### Details:
Check that the binaries defined in package.json are correctly linked in node_modules/.bin when installed with Yarn, and that they can be executed without errors. Validate that binaries work for ESM ('module') projects and are accessible after both global and local installs.
## 9. Test Website Account Setup with Yarn [pending]
### Dependencies: 64.6
### Description: If the installation process includes a website component, verify that account setup, registration, or any other user-specific configurations work correctly when Taskmaster is installed via Yarn. If no website or account setup is required, confirm and document this explicitly.
### Details:
Test the complete user flow for any website component that appears during installation, including account creation, login, and configuration steps. Ensure that all website interactions work identically with Yarn as they do with npm or pnpm. Document any website-specific steps that users need to complete during the installation process. If no website or account setup is required, confirm and document this.
<info added on 2025-04-25T08:45:48.709Z>
Since the request is vague, I'll provide helpful implementation details for testing website account setup with Yarn:
For thorough testing, create a test matrix covering different browsers (Chrome, Firefox, Safari) and operating systems (Windows, macOS, Linux). Document specific Yarn-related environment variables that might affect website connectivity. Use tools like Playwright or Cypress to automate the account setup flow testing, capturing screenshots at each step for documentation. Implement network throttling tests to verify behavior under poor connectivity. Create a checklist of all UI elements that should be verified during the account setup process, including form validation, error messages, and success states. If no website component exists, explicitly document this in the project README and installation guides to prevent user confusion.
</info added on 2025-04-25T08:45:48.709Z>
<info added on 2025-04-25T08:46:08.651Z>
- For environments where the website component requires integration with external authentication providers (such as OAuth, SSO, or LDAP), ensure that these flows are tested specifically when Taskmaster is installed via Yarn. Validate that redirect URIs, token exchanges, and session persistence behave as expected across all supported browsers.
- If the website setup involves configuring application pools or web server settings (e.g., with IIS), document any Yarn-specific considerations, such as environment variable propagation or file permission differences, that could affect the web service's availability or configuration[2].
- When automating tests, include validation for accessibility compliance (e.g., using axe-core or Lighthouse) during the account setup process to ensure the UI is usable for all users.
- Capture and log all HTTP requests and responses during the account setup flow to help diagnose any discrepancies between Yarn and other package managers. This can be achieved by enabling network logging in Playwright or Cypress test runs.
- If the website component supports batch operations or automated uploads (such as uploading user data or configuration files), verify that these automation features function identically after installation with Yarn[3].
- For documentation, provide annotated screenshots or screen recordings of the account setup process, highlighting any Yarn-specific prompts, warnings, or differences encountered.
- If the website component is not required, add a badge or prominent note in the README and installation guides stating "No website or account setup required," and reference the test results confirming this.
</info added on 2025-04-25T08:46:08.651Z>
<info added on 2025-04-25T17:04:12.550Z>
For clarity, this task does not involve setting up a Yarn account. Yarn itself is just a package manager that doesn't require any account creation. The task is about testing whether any website component that is part of Taskmaster (if one exists) works correctly when Taskmaster is installed using Yarn as the package manager.
To be specific:
- You don't need to create a Yarn account
- Yarn is simply the tool used to install Taskmaster (`yarn add taskmaster` instead of `npm install taskmaster`)
- The testing focuses on whether any web interfaces or account setup processes that are part of Taskmaster itself function correctly when the installation was done via Yarn
- If Taskmaster includes a web dashboard or requires users to create accounts within the Taskmaster system, those features should be tested
If you're uncertain whether Taskmaster includes a website component at all, the first step would be to check the project documentation or perform an initial installation to determine if any web interface exists.
</info added on 2025-04-25T17:04:12.550Z>
<info added on 2025-04-25T17:19:03.256Z>
When testing website account setup with Yarn after the codebase refactor, pay special attention to:
- Verify that any environment-specific configuration files (like `.env` or config JSON files) are properly loaded when the application is installed via Yarn
- Test the session management implementation to ensure user sessions persist correctly across page refreshes and browser restarts
- Check that any database migrations or schema updates required for account setup execute properly when installed via Yarn
- Validate that client-side form validation logic works consistently with server-side validation
- Ensure that any WebSocket connections for real-time features initialize correctly after the refactor
- Test account deletion and data export functionality to verify GDPR compliance remains intact
- Document any changes to the authentication flow that resulted from the refactor and confirm they work identically with Yarn installation
</info added on 2025-04-25T17:19:03.256Z>
<info added on 2025-04-25T17:22:05.951Z>
When testing website account setup with Yarn after the logging fix, implement these additional verification steps:
1. Verify that all account-related actions are properly logged with the correct log levels (debug, info, warn, error) according to the updated logging framework
2. Test the error handling paths specifically - force authentication failures and verify the logs contain sufficient diagnostic information
3. Check that sensitive user information is properly redacted in logs according to privacy requirements
4. Confirm that log rotation and persistence work correctly when high volumes of authentication attempts occur
5. Validate that any custom logging middleware correctly captures HTTP request/response data for account operations
6. Test that log aggregation tools (if used) can properly parse and display the account setup logs in their expected format
7. Verify that performance metrics for account setup flows are correctly captured in logs for monitoring purposes
8. Document any Yarn-specific environment variables that affect the logging configuration for the website component
</info added on 2025-04-25T17:22:05.951Z>
<info added on 2025-04-25T17:22:46.293Z>
When testing website account setup with Yarn, consider implementing a positive user experience validation:
1. Measure and document time-to-completion for the account setup process to ensure it meets usability standards
2. Create a satisfaction survey for test users to rate the account setup experience on a 1-5 scale
3. Implement A/B testing for different account setup flows to identify the most user-friendly approach
4. Add delightful micro-interactions or success animations that make the setup process feel rewarding
5. Test the "welcome" or "onboarding" experience that follows successful account creation
6. Ensure helpful tooltips and contextual help are displayed at appropriate moments during setup
7. Verify that error messages are friendly, clear, and provide actionable guidance rather than technical jargon
8. Test the account recovery flow to ensure users have a smooth experience if they forget credentials
</info added on 2025-04-25T17:22:46.293Z>

View File

@@ -1,11 +0,0 @@
# Task ID: 65
# Title: Add Bun Support for Taskmaster Installation
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement full support for installing and managing Taskmaster using the Bun package manager, ensuring the installation process and user experience are identical to npm, pnpm, and Yarn.
# Details:
Update the Taskmaster installation scripts and documentation to support Bun as a first-class package manager. Ensure that users can install Taskmaster and run all CLI commands (including 'init' via scripts/init.js) using Bun, with the same directory structure, template copying, package.json merging, and MCP config setup as with npm, pnpm, and Yarn. Verify that all dependencies are compatible with Bun and that any Bun-specific configuration (such as lockfile handling or binary linking) is handled correctly. If the installation process includes a website or account setup, document and test these flows for parity; if not, explicitly confirm and document that no such steps are required. Update all relevant documentation and installation guides to include Bun instructions for macOS, Linux, and Windows (including WSL and PowerShell). Address any known Bun-specific issues (e.g., sporadic install hangs) with clear troubleshooting guidance.
# Test Strategy:
1. Install Taskmaster using Bun on macOS, Linux, and Windows (including WSL and PowerShell), following the updated documentation. 2. Run the full installation and initialization process, verifying that the directory structure, templates, and MCP config are set up identically to npm, pnpm, and Yarn. 3. Execute all CLI commands (including 'init') and confirm functional parity. 4. If a website or account setup is required, test these flows for consistency; if not, confirm and document this. 5. Check for Bun-specific issues (e.g., install hangs) and verify that troubleshooting steps are effective. 6. Ensure the documentation is clear, accurate, and up to date for all supported platforms.

View File

@@ -1,61 +0,0 @@
# Task ID: 66
# Title: Support Status Filtering in Show Command for Subtasks
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Enhance the 'show' command to accept a status parameter that filters subtasks by their current status, allowing users to view only subtasks matching a specific status.
# Details:
This task involves modifying the existing 'show' command functionality to support status-based filtering of subtasks. Implementation details include:
1. Update the command parser to accept a new '--status' or '-s' flag followed by a status value (e.g., 'task-master show --status=in-progress' or 'task-master show -s completed').
2. Modify the show command handler in the appropriate module (likely in scripts/modules/) to:
- Parse and validate the status parameter
- Filter the subtasks collection based on the provided status before displaying results (see the sketch after this list)
- Handle invalid status values gracefully with appropriate error messages
- Support standard status values (e.g., 'not-started', 'in-progress', 'completed', 'blocked')
- Consider supporting multiple status values (comma-separated or multiple flags)
3. Update the help documentation to include information about the new status filtering option.
4. Ensure backward compatibility - the show command should function as before when no status parameter is provided.
5. Consider adding a '--status-list' option to display all available status values for reference.
6. Update any relevant unit tests to cover the new functionality.
7. If the application uses a database or persistent storage, ensure the filtering happens at the query level for performance when possible.
8. Maintain consistent formatting and styling of output regardless of filtering.
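As a minimal sketch of the filtering logic, assuming a plain task object with a `subtasks` array (the names here are illustrative, not Taskmaster's actual internals):

```javascript
// Hypothetical sketch of subtask filtering for the `show` command.
const VALID_STATUSES = ['not-started', 'in-progress', 'completed', 'blocked'];

function filterSubtasksByStatus(task, statusArg) {
  if (!statusArg) return task.subtasks || []; // backward compatible: no filter applied

  // Support comma-separated values; compare case-insensitively.
  const wanted = statusArg.split(',').map((s) => s.trim().toLowerCase());
  const invalid = wanted.filter((s) => !VALID_STATUSES.includes(s));
  if (invalid.length > 0) {
    throw new Error(
      `Invalid status value(s): ${invalid.join(', ')}. ` +
        `Valid values: ${VALID_STATUSES.join(', ')}`
    );
  }
  return (task.subtasks || []).filter((st) =>
    wanted.includes(String(st.status).toLowerCase())
  );
}
```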
# Test Strategy:
Testing for this feature should include:
1. Unit tests:
- Test parsing of the status parameter in various formats (--status=value, -s value)
- Test filtering logic with different status values
- Test error handling for invalid status values
- Test backward compatibility (no status parameter)
- Test edge cases (empty status, case sensitivity, etc.)
2. Integration tests:
- Verify that the command correctly filters subtasks when a valid status is provided
- Verify that all subtasks are shown when no status filter is applied
- Test with a project containing subtasks of various statuses
3. Manual testing:
- Create a test project with multiple subtasks having different statuses
- Run the show command with different status filters and verify results
- Test with both long-form (--status) and short-form (-s) parameters
- Verify help documentation correctly explains the new parameter
4. Edge case testing:
- Test with non-existent status values
- Test with empty project (no subtasks)
- Test with a project where all subtasks have the same status
5. Documentation verification:
- Ensure the README or help documentation is updated to include the new parameter
- Verify examples in documentation work as expected
All tests should pass before considering this task complete.

View File

@@ -1,43 +0,0 @@
# Task ID: 67
# Title: Add CLI JSON output and Cursor keybindings integration
# Status: pending
# Dependencies: None
# Priority: high
# Description: Enhance Taskmaster CLI with JSON output option and add a new command to install pre-configured Cursor keybindings
# Details:
This task has two main components:

1. Add `--json` flag to all relevant CLI commands:
   - Modify the CLI command handlers to check for a `--json` flag
   - When the flag is present, output the raw data from the MCP tools in JSON format instead of formatting for human readability
   - Ensure consistent JSON schema across all commands
   - Add documentation for this feature in the help text for each command
   - Test with common scenarios like `task-master next --json` and `task-master show <id> --json`

2. Create a new `install-keybindings` command:
   - Create a new CLI command that installs pre-configured Taskmaster keybindings to Cursor
   - Detect the user's OS to determine the correct path to Cursor's keybindings.json
   - Check if the file exists; create it if it doesn't
   - Add useful Taskmaster keybindings like:
     - Quick access to next task with output to clipboard
     - Task status updates
     - Opening new agent chat with context from the current task
   - Implement safeguards to prevent duplicate keybindings
   - Add undo functionality or backup of previous keybindings
   - Support custom key combinations via command flags
# Test Strategy:
1. JSON output testing:
   - Unit tests for each command with the --json flag
   - Verify JSON schema consistency across commands
   - Validate that all necessary task data is included in the JSON output
   - Test piping output to other commands like jq

2. Keybindings command testing:
   - Test on different OSes (macOS, Windows, Linux)
   - Verify correct path detection for Cursor's keybindings.json
   - Test behavior when file doesn't exist
   - Test behavior when existing keybindings conflict
   - Validate the installed keybindings work as expected
   - Test uninstall/restore functionality
# Subtasks:
## 1. Implement Core JSON Output Logic for `next` and `show` Commands [pending]
### Dependencies: None
### Description: Modify the command handlers for `task-master next` and `task-master show <id>` to recognize and handle a `--json` flag. When the flag is present, output the raw data received from MCP tools directly as JSON.
### Details:
Use a CLI argument parsing library (e.g., argparse, click, commander) to add the `--json` boolean flag. In the command execution logic, check if the flag is set. If true, serialize the data object (before any human-readable formatting) into a JSON string and print it to stdout. If false, proceed with the existing formatting logic. Focus on these two commands first to establish the pattern.
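A sketch of the pattern, assuming the CLI uses commander (one of the libraries suggested above); `getNextTask` and `renderNextTaskForHumans` are placeholder names for the real data-fetching and formatting paths:

```javascript
import { Command } from 'commander';

// Placeholder data source; the real command would call into the MCP-backed core.
async function getNextTask() {
  return { id: 42, title: 'Example task', status: 'pending' };
}

function renderNextTaskForHumans(task) {
  console.log(`Next up: #${task.id} ${task.title} [${task.status}]`);
}

const program = new Command();
program
  .command('next')
  .description('Show the next task to work on')
  .option('--json', 'Output raw data as JSON instead of formatted text')
  .action(async (options) => {
    const data = await getNextTask();
    if (options.json) {
      // Serialize the raw object before any human-readable formatting is applied.
      console.log(JSON.stringify(data, null, 2));
      return;
    }
    renderNextTaskForHumans(data);
  });

program.parse();
```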
## 2. Extend JSON Output to All Relevant Commands and Ensure Schema Consistency [pending]
### Dependencies: 67.1
### Description: Apply the JSON output pattern established in subtask 1 to all other relevant Taskmaster CLI commands that display data (e.g., `list`, `status`, etc.). Ensure the JSON structure is consistent where applicable (e.g., task objects should have the same fields). Add help text mentioning the `--json` flag for each modified command.
### Details:
Identify all commands that output structured data. Refactor the JSON output logic into a reusable utility function if possible. Define a standard schema for common data types like tasks. Update the help documentation for each command to include the `--json` flag description. Ensure error outputs are also handled appropriately (e.g., potentially outputting JSON error objects).
## 3. Create `install-keybindings` Command Structure and OS Detection [pending]
### Dependencies: None
### Description: Set up the basic structure for the new `task-master install-keybindings` command. Implement logic to detect the user's operating system (Linux, macOS, Windows) and determine the default path to Cursor's `keybindings.json` file.
### Details:
Add a new command entry point using the CLI framework. Use standard library functions (e.g., `os.platform()` in Node, `platform.system()` in Python) to detect the OS. Define constants or a configuration map for the default `keybindings.json` paths for each supported OS. Handle cases where the path might vary (e.g., different installation methods for Cursor). Add basic help text for the new command.
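One possible shape for the path lookup, with the caveat that the actual locations depend on how Cursor was installed; these defaults mirror the usual VS Code layout and are assumptions:

```javascript
import os from 'os';
import path from 'path';

// Best-guess default locations for Cursor's keybindings.json per platform.
function getCursorKeybindingsPath() {
  const home = os.homedir();
  switch (os.platform()) {
    case 'darwin':
      return path.join(home, 'Library', 'Application Support', 'Cursor', 'User', 'keybindings.json');
    case 'win32':
      return path.join(
        process.env.APPDATA ?? path.join(home, 'AppData', 'Roaming'),
        'Cursor', 'User', 'keybindings.json'
      );
    default: // linux and other unix-likes
      return path.join(home, '.config', 'Cursor', 'User', 'keybindings.json');
  }
}
```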
## 4. Implement Keybinding File Handling and Backup Logic [pending]
### Dependencies: 67.3
### Description: Implement the core logic within the `install-keybindings` command to read the target `keybindings.json` file. If it exists, create a backup. If it doesn't exist, create a new file with an empty JSON array `[]`. Prepare the structure to add new keybindings.
### Details:
Use file system modules to check for file existence, read, write, and copy files. Implement a backup mechanism (e.g., copy `keybindings.json` to `keybindings.json.bak`). Handle potential file I/O errors gracefully (e.g., permissions issues). Parse the existing JSON content; if parsing fails, report an error and potentially abort. Ensure the file is created with `[]` if it's missing.
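A minimal sketch of the read/backup/create flow. Note that Cursor, like VS Code, permits comments in keybindings.json, so a production implementation may need a JSONC-tolerant parser rather than plain `JSON.parse`:

```javascript
import fs from 'fs';

// Read keybindings.json, backing it up first if it exists; create it with []
// if missing. Returns the parsed array of keybinding entries.
function loadKeybindingsWithBackup(filePath) {
  if (!fs.existsSync(filePath)) {
    fs.writeFileSync(filePath, '[]\n', 'utf8');
    return [];
  }
  fs.copyFileSync(filePath, `${filePath}.bak`); // simple backup for undo/restore
  const raw = fs.readFileSync(filePath, 'utf8');
  try {
    const parsed = JSON.parse(raw);
    if (!Array.isArray(parsed)) throw new Error('expected a JSON array');
    return parsed;
  } catch (err) {
    // Abort rather than risk clobbering a file we could not parse.
    throw new Error(`Could not parse ${filePath}: ${err.message}`);
  }
}
```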
## 5. Add Taskmaster Keybindings, Prevent Duplicates, and Support Customization [pending]
### Dependencies: 67.4
### Description: Define the specific Taskmaster keybindings (e.g., next task to clipboard, status update, open agent chat) and implement the logic to merge them into the user's `keybindings.json` data. Prevent adding duplicate keybindings (based on command ID or key combination). Add support for custom key combinations via command flags.
### Details:
Define the desired keybindings as a list of JSON objects following Cursor's format. Before adding, iterate through the existing keybindings (parsed in subtask 4) to check if a Taskmaster keybinding with the same command or key combination already exists. If not, append the new keybinding to the list. Add command-line flags (e.g., `--next-key='ctrl+alt+n'`) to allow users to override default key combinations. Serialize the updated list back to JSON and write it to the `keybindings.json` file.
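A sketch of the duplicate-safe merge; the command IDs and default key combinations below are hypothetical placeholders, not real Taskmaster commands:

```javascript
// Candidate Taskmaster keybindings in Cursor's format (defaults that could be
// overridden via flags like --next-key).
const TASKMASTER_KEYBINDINGS = [
  { key: 'ctrl+alt+n', command: 'taskmaster.copyNextTask' },
  { key: 'ctrl+alt+s', command: 'taskmaster.setTaskStatus' }
];

// Append only bindings whose command and key combination are not already taken.
function mergeKeybindings(existing, additions) {
  const usedCommands = new Set(existing.map((b) => b.command));
  const usedKeys = new Set(existing.map((b) => b.key));
  const merged = [...existing];
  for (const binding of additions) {
    if (usedCommands.has(binding.command) || usedKeys.has(binding.key)) continue;
    merged.push(binding);
  }
  return merged;
}
```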

View File

@@ -1,11 +0,0 @@
# Task ID: 68
# Title: Ability to create tasks without parsing PRD
# Status: pending
# Dependencies: None
# Priority: medium
# Description: When a task is created and no tasks.json exists yet, create it by calling the same function that parse-prd uses. This lets Taskmaster be used without a PRD as a starting point.
# Details:
# Test Strategy:

View File

@@ -1,59 +0,0 @@
# Task ID: 69
# Title: Enhance Analyze Complexity for Specific Task IDs
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Modify the analyze-complexity feature (CLI and MCP) to allow analyzing only specified task IDs and append/update results in the report.
# Details:
Implementation Plan:
1. **Core Logic (`scripts/modules/task-manager/analyze-task-complexity.js`):**
* Modify the function signature to accept an optional `options.ids` parameter (string, comma-separated IDs).
* If `options.ids` is present:
* Parse the `ids` string into an array of target IDs.
* Filter `tasksData.tasks` to *only* include tasks matching the target IDs. Use this filtered list for analysis.
* Handle cases where provided IDs don't exist in `tasks.json`.
* If `options.ids` is *not* present: Continue with existing logic (filtering by active status).
* **Report Handling:**
* Before generating the analysis, check if the `outputPath` report file exists.
* If it exists, read the existing `complexityAnalysis` array.
* Generate the new analysis *only* for the target tasks (filtered by ID or status).
* Merge the results: Remove any entries from the *existing* array that match the IDs analyzed in the *current run*. Then, append the *new* analysis results to the array (see the merge sketch after this plan).
* Update the `meta` section (`generatedAt`, `tasksAnalyzed` should reflect *this run*).
* Write the *merged* `complexityAnalysis` array and updated `meta` back to the report file.
* If the report file doesn't exist, create it as usual.
* **Prompt Generation:** Ensure `generateInternalComplexityAnalysisPrompt` receives the correctly filtered list of tasks.
2. **CLI (`scripts/modules/commands.js`):**
* Add a new option `--id <ids>` to the `analyze-complexity` command definition. Description: "Comma-separated list of specific task IDs to analyze".
* In the `.action` handler:
* Check if `options.id` is provided.
* If yes, pass `options.id` (as the comma-separated string) to the `analyzeTaskComplexity` core function via the `options` object.
* Update user feedback messages to indicate specific task analysis.
3. **MCP Tool (`mcp-server/src/tools/analyze.js`):**
* Add a new optional parameter `ids: z.string().optional().describe("Comma-separated list of task IDs to analyze specifically")` to the Zod schema for the `analyze_project_complexity` tool.
* In the `execute` method, pass `args.ids` to the `analyzeTaskComplexityDirect` function within its `args` object.
4. **Direct Function (`mcp-server/src/core/direct-functions/analyze-task-complexity.js`):**
* Update the function to receive the `ids` string within the `args` object.
* Pass the `ids` string along to the core `analyzeTaskComplexity` function within its `options` object.
5. **Documentation:** Update relevant rule files (`commands.mdc`, `taskmaster.mdc`) to reflect the new `--id` option/parameter.
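A sketch of the report-merging step from the plan above, assuming each analysis entry carries a `taskId` field (the exact field names in the real report may differ):

```javascript
import fs from 'fs';

// Merge this run's analysis entries into an existing report, replacing any
// previous entries for the same task IDs.
function mergeComplexityReport(outputPath, newEntries, meta) {
  let existing = [];
  if (fs.existsSync(outputPath)) {
    existing = JSON.parse(fs.readFileSync(outputPath, 'utf8')).complexityAnalysis ?? [];
  }
  const analyzedIds = new Set(newEntries.map((e) => e.taskId));
  const kept = existing.filter((e) => !analyzedIds.has(e.taskId));
  const report = {
    // meta reflects *this run*, per the plan above.
    meta: { ...meta, generatedAt: new Date().toISOString(), tasksAnalyzed: newEntries.length },
    complexityAnalysis: [...kept, ...newEntries]
  };
  fs.writeFileSync(outputPath, JSON.stringify(report, null, 2));
  return report;
}
```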
# Test Strategy:
1. **CLI:**
* Run `task-master analyze-complexity --id=<id1>` (where report doesn't exist). Verify report created with only task id1.
* Run `task-master analyze-complexity --id=<id2>` (where report exists). Verify report updated, containing analysis for both id1 and id2 (id2 replaces any previous id2 analysis).
* Run `task-master analyze-complexity --id=<id1>,<id3>`. Verify report updated, containing id1, id2, id3.
* Run `task-master analyze-complexity` (no id). Verify it analyzes *all* active tasks and updates the report accordingly, merging with previous specific analyses.
* Test with invalid/non-existent IDs.
2. **MCP:**
* Call `analyze_project_complexity` tool with `ids: "<id1>"`. Verify report creation/update.
* Call `analyze_project_complexity` tool with `ids: "<id1>,<id2>"`. Verify report merging.
* Call `analyze_project_complexity` tool without `ids`. Verify full analysis and merging.
3. Verify report `meta` section is updated correctly on each run.

View File

@@ -1,11 +0,0 @@
# Task ID: 70
# Title: Implement 'diagram' command for Mermaid diagram generation
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Develop a CLI command named 'diagram' that generates Mermaid diagrams to visualize task dependencies and workflows, with options to target specific tasks or generate comprehensive diagrams for all tasks.
# Details:
The task involves implementing a new command that accepts an optional '--id' parameter: if provided, the command generates a diagram illustrating the chosen task and its dependencies; if omitted, it produces a diagram that includes all tasks. The diagrams should use color coding to reflect task status and arrows to denote dependencies. In addition to CLI rendering, the command should offer an option to save the output as a Markdown (.md) file. Consider integrating with the existing task management system to pull task details and status. Pay attention to formatting consistency and error handling for invalid or missing task IDs. Comments should be added to the code to improve maintainability, and unit tests should cover edge cases such as cyclic dependencies, missing tasks, and invalid input formats.
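One plausible shape for the diagram builder, assuming tasks expose `id`, `title`, `status`, and `dependencies`; the status-to-class mapping is illustrative:

```javascript
// Build a Mermaid flowchart from tasks, coloring nodes by status and drawing
// arrows for dependencies.
function buildMermaidDiagram(tasks) {
  const lines = ['graph TD'];
  for (const task of tasks) {
    lines.push(`  t${task.id}["#${task.id} ${task.title}"]:::${task.status}`);
    for (const dep of task.dependencies ?? []) {
      lines.push(`  t${dep} --> t${task.id}`);
    }
  }
  // Status-based styling via Mermaid class definitions.
  lines.push('  classDef done fill:#9f9');
  lines.push('  classDef pending fill:#ff9');
  lines.push('  classDef blocked fill:#f99');
  return lines.join('\n');
}

const diagram = buildMermaidDiagram([
  { id: 1, title: 'Parse PRD', status: 'done', dependencies: [] },
  { id: 2, title: 'Generate tasks', status: 'pending', dependencies: [1] }
]);
console.log(diagram);
```

When saving to Markdown, the returned string would be wrapped in a fenced mermaid code block so it renders as a diagram.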
# Test Strategy:
Verify the command functionality by testing with both specific task IDs and general invocation: 1) Run the command with a valid '--id' and ensure the resulting diagram accurately depicts the specified task's dependencies with correct color coding for statuses. 2) Execute the command without '--id' to ensure a complete workflow diagram is generated for all tasks. 3) Check that arrows correctly represent dependency relationships. 4) Validate the Markdown (.md) file export option by confirming the file format and content after saving. 5) Test error responses for non-existent task IDs and malformed inputs.

View File

@@ -1,23 +0,0 @@
# Task ID: 71
# Title: Add Model-Specific maxTokens Override Configuration
# Status: done
# Dependencies: None
# Priority: high
# Description: Implement functionality to allow specifying a maximum token limit for individual AI models within .taskmasterconfig, overriding the role-based maxTokens if the model-specific limit is lower.
# Details:
1. **Modify `.taskmasterconfig` Structure:** Add a new top-level section `modelOverrides` (e.g., `"modelOverrides": { "o3-mini": { "maxTokens": 100000 } }`).
2. **Update `config-manager.js`:**
- Modify config loading to read the new `modelOverrides` section.
- Update `getParametersForRole(role)` logic: Fetch role defaults (roleMaxTokens, temperature). Get the modelId for the role. Look up `modelOverrides[modelId].maxTokens` (modelSpecificMaxTokens). Calculate `effectiveMaxTokens = Math.min(roleMaxTokens, modelSpecificMaxTokens ?? Infinity)`. Return `{ maxTokens: effectiveMaxTokens, temperature }` (see the sketch below).
3. **Update Documentation:** Add an example of `modelOverrides` to `.taskmasterconfig.example` or relevant documentation.
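A sketch of the resolution logic; the config accessor shape is an assumption beyond the `modelOverrides` example above:

```javascript
// Resolve effective parameters for a role, honoring a model-specific cap.
function getParametersForRole(config, role) {
  const roleDefaults = config.roles[role]; // e.g. { modelId, maxTokens, temperature }
  const override = config.modelOverrides?.[roleDefaults.modelId];
  const effectiveMaxTokens = Math.min(
    roleDefaults.maxTokens,
    override?.maxTokens ?? Infinity // no override keeps the role limit
  );
  return { maxTokens: effectiveMaxTokens, temperature: roleDefaults.temperature };
}

// Example: the role allows 120k tokens but o3-mini is capped at 100k.
const config = {
  roles: { main: { modelId: 'o3-mini', maxTokens: 120000, temperature: 0.2 } },
  modelOverrides: { 'o3-mini': { maxTokens: 100000 } }
};
console.log(getParametersForRole(config, 'main')); // { maxTokens: 100000, temperature: 0.2 }
```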
# Test Strategy:
1. **Unit Tests (`config-manager.js`):**
- Verify `getParametersForRole` returns role defaults when no override exists.
- Verify `getParametersForRole` returns the lower model-specific limit when an override exists and is lower.
- Verify `getParametersForRole` returns the role limit when an override exists but is higher.
- Verify handling of missing `modelOverrides` section.
2. **Integration Tests (`ai-services-unified.js`):**
- Call an AI service (e.g., `generateTextService`) with a config having a model override.
- Mock the underlying provider function.
- Assert that the `maxTokens` value passed to the mocked provider function matches the expected (potentially overridden) minimum value.

View File

@@ -1,11 +0,0 @@
# Task ID: 72
# Title: Implement PDF Generation for Project Progress and Dependency Overview
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Develop a feature to generate a PDF report summarizing the current project progress and visualizing the dependency chain of tasks.
# Details:
This task involves creating a new CLI command named 'progress-pdf' within the existing project framework to generate a PDF document. The PDF should include: 1) A summary of project progress, detailing completed, in-progress, and pending tasks with their respective statuses and completion percentages if applicable. 2) A visual representation of the task dependency chain, leveraging the output format from the 'diagram' command (Task 70) to include Mermaid diagrams or similar visualizations converted to image format for PDF embedding. Use a suitable PDF generation library (e.g., jsPDF for JavaScript environments or ReportLab for Python) compatible with the project's tech stack. Ensure the command accepts optional parameters to filter tasks by status or ID for customized reports. Handle large dependency chains by implementing pagination or zoomable image sections in the PDF. Provide error handling for cases where diagram generation or PDF creation fails, logging detailed error messages for debugging. Consider accessibility by ensuring text in the PDF is selectable and images have alt text descriptions. Integrate this feature with the existing CLI structure, ensuring it aligns with the project's configuration settings (e.g., output directory for generated files). Document the command usage and parameters in the project's help or README file.
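As one possible shape for the command's core, a sketch using PDFKit (a different Node PDF library than the jsPDF example named above); the diagram is assumed to have already been rendered to a PNG by the Task 70 pipeline:

```javascript
import fs from 'fs';
import PDFDocument from 'pdfkit';

// Write a progress summary page plus a pre-rendered dependency diagram image.
function writeProgressPdf(tasks, diagramPngPath, outPath) {
  const doc = new PDFDocument({ margin: 50 });
  doc.pipe(fs.createWriteStream(outPath));

  doc.fontSize(18).text('Project Progress', { underline: true });
  const byStatus = {};
  for (const t of tasks) byStatus[t.status] = (byStatus[t.status] ?? 0) + 1;
  for (const [status, count] of Object.entries(byStatus)) {
    doc.fontSize(12).text(`${status}: ${count} task(s)`);
  }

  if (fs.existsSync(diagramPngPath)) {
    doc.addPage().fontSize(14).text('Dependency Overview');
    doc.image(diagramPngPath, { fit: [500, 600] }); // Mermaid output converted to PNG
  }
  doc.end();
}
```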
# Test Strategy:
Verify the completion of this task through a multi-step testing approach: 1) Unit Tests: Create tests for the PDF generation logic to ensure data (task statuses and dependencies) is correctly fetched and formatted. Mock the PDF library to test edge cases like empty task lists or broken dependency links. 2) Integration Tests: Run the 'progress-pdf' command via CLI to confirm it generates a PDF file without errors under normal conditions, with filtered task IDs, and with various status filters. Validate that the output file exists in the specified directory and can be opened. 3) Content Validation: Manually or via automated script, check the generated PDF content to ensure it accurately reflects the current project state (compare task counts and statuses against a known project state) and includes dependency diagrams as images. 4) Error Handling Tests: Simulate failures in diagram generation or PDF creation (e.g., invalid output path, library errors) and verify that appropriate error messages are logged and the command exits gracefully. 5) Accessibility Checks: Use a PDF accessibility tool or manual inspection to confirm that text is selectable and images have alt text. Run these tests across different project sizes (small with few tasks, large with complex dependencies) to ensure scalability. Document test results and include a sample PDF output in the project repository for reference.

View File

@@ -1,44 +0,0 @@
# Task ID: 73
# Title: Implement Custom Model ID Support for Ollama/OpenRouter
# Status: in-progress
# Dependencies: None
# Priority: medium
# Description: Allow users to specify custom model IDs for Ollama and OpenRouter providers via CLI flag and interactive setup, with appropriate validation and warnings.
# Details:
**CLI (`task-master models --set-<role> <id> --custom`):**
- Modify `scripts/modules/task-manager/models.js`: `setModel` function.
- Check internal `available_models.json` first.
- If not found and `--custom` is provided:
- Fetch `https://openrouter.ai/api/v1/models`. (Need to add `https` import).
- If ID found in OpenRouter list: Set `provider: 'openrouter'`, `modelId: <id>`. Warn user about lack of official validation.
- If ID not found in OpenRouter: Assume Ollama. Set `provider: 'ollama'`, `modelId: <id>`. Warn user strongly (model must be pulled, compatibility not guaranteed).
- If not found and `--custom` is *not* provided: Fail with an error message guiding the user to use `--custom` (the resolution flow is sketched after these details).
**Interactive Setup (`task-master models --setup`):**
- Modify `scripts/modules/commands.js`: `runInteractiveSetup` function.
- Add options to `inquirer` choices for each role: `OpenRouter (Enter Custom ID)` and `Ollama (Enter Custom ID)`.
- If `__CUSTOM_OPENROUTER__` selected:
- Prompt for custom ID.
- Fetch OpenRouter list and validate ID exists. Fail setup for that role if not found.
- Update config and show warning if found.
- If `__CUSTOM_OLLAMA__` selected:
- Prompt for custom ID.
- Update config directly (no live validation).
- Show strong Ollama warning.
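A sketch of the resolution flow, using the global `fetch` available in Node 18+ rather than the `https` module noted above; the warning copy is illustrative:

```javascript
// Check OpenRouter's public model list; fall back to Ollama with a strong warning.
async function resolveCustomModel(modelId) {
  const res = await fetch('https://openrouter.ai/api/v1/models');
  if (!res.ok) throw new Error(`OpenRouter lookup failed: ${res.status}`);
  const { data } = await res.json(); // OpenRouter returns { data: [{ id, ... }] }
  if (data.some((m) => m.id === modelId)) {
    console.warn(`Warning: '${modelId}' is not officially validated by Taskmaster.`);
    return { provider: 'openrouter', modelId };
  }
  console.warn(
    `Warning: '${modelId}' not found on OpenRouter; assuming Ollama. ` +
      'Make sure the model is pulled locally; compatibility is not guaranteed.'
  );
  return { provider: 'ollama', modelId };
}
```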
# Test Strategy:
**Unit Tests:**
- Test `setModel` logic for internal models, custom OpenRouter (valid/invalid), custom Ollama, missing `--custom` flag.
- Test `runInteractiveSetup` for new custom options flow, including OpenRouter validation success/failure.
**Integration Tests:**
- Test the `task-master models` command with `--custom` flag variations.
- Test the `task-master models --setup` interactive flow for custom options.
**Manual Testing:**
- Run `task-master models --setup` and select custom options.
- Run `task-master models --set-main <valid_openrouter_id> --custom`. Verify config and warning.
- Run `task-master models --set-main <invalid_openrouter_id> --custom`. Verify error.
- Run `task-master models --set-main <ollama_model_id> --custom`. Verify config and warning.
- Run `task-master models --set-main <custom_id>` (without `--custom`). Verify error.
- Check `getModelConfiguration` output reflects custom models correctly.

View File

@@ -1,19 +0,0 @@
# Task ID: 74
# Title: PR Review: better-model-management
# Status: done
# Dependencies: None
# Priority: medium
# Description: will add subtasks
# Details:
# Test Strategy:
# Subtasks:
## 1. pull out logWrapper into utils [done]
### Dependencies: None
### Description: its being used a lot across direct functions and repeated right now
### Details:

View File

@@ -1,11 +0,0 @@
# Task ID: 75
# Title: Integrate Google Search Grounding for Research Role
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Update the AI service layer to enable Google Search Grounding specifically when a Google model is used in the 'research' role.
# Details:
**Goal:** Conditionally enable Google Search Grounding based on the AI role.

**Implementation Plan:**

1. **Modify `ai-services-unified.js`:** Update `generateTextService`, `streamTextService`, and `generateObjectService`.
2. **Conditional Logic:** Inside these functions, check if `providerName === 'google'` AND `role === 'research'`.
3. **Construct `providerOptions`:** If the condition is met, create an options object:
   ```javascript
   let providerSpecificOptions = {};
   if (providerName === 'google' && role === 'research') {
     log('info', 'Enabling Google Search Grounding for research role.');
     providerSpecificOptions = {
       google: {
         useSearchGrounding: true,
         // Optional: Add dynamic retrieval for compatible models
         // dynamicRetrievalConfig: { mode: 'MODE_DYNAMIC' }
       }
     };
   }
   ```
4. **Pass Options to SDK:** Pass `providerSpecificOptions` to the Vercel AI SDK functions (`generateText`, `streamText`, `generateObject`) via the `providerOptions` parameter:
   ```javascript
   const { text, ...rest } = await generateText({
     // ... other params
     providerOptions: providerSpecificOptions
   });
   ```
5. **Update `supported-models.json`:** Ensure Google models intended for research (e.g., `gemini-1.5-pro-latest`, `gemini-1.5-flash-latest`) include `'research'` in their `allowed_roles` array.

**Rationale:** This approach maintains the clear separation between 'main' and 'research' roles, ensuring grounding is only activated when explicitly requested via the `--research` flag or when the research model is invoked.

**Clarification:** The Search Grounding feature is specifically designed to provide up-to-date information from the web when using Google models. This implementation ensures that grounding is only activated in research contexts where current information is needed, while preserving normal operation for standard tasks. The `useSearchGrounding: true` flag instructs the Google API to augment the model's knowledge with recent web search results relevant to the query.
# Test Strategy:
1. Configure a Google model (e.g., gemini-1.5-flash-latest) as the 'research' model in `.taskmasterconfig`.
2. Run a command with the `--research` flag (e.g., `task-master add-task --prompt='Latest news on AI SDK 4.2' --research`).
3. Verify logs show 'Enabling Google Search Grounding'.
4. Check if the task output incorporates recent information.
5. Configure the same Google model as the 'main' model.
6. Run a command *without* the `--research` flag.
7. Verify logs *do not* show grounding being enabled.
8. Add unit tests to `ai-services-unified.test.js` to verify the conditional logic for adding `providerOptions`. Ensure mocks correctly simulate different roles and providers.

View File

@@ -1,59 +0,0 @@
# Task ID: 76
# Title: Develop E2E Test Framework for Taskmaster MCP Server (FastMCP over stdio)
# Status: pending
# Dependencies: None
# Priority: high
# Description: Design and implement an end-to-end (E2E) test framework for the Taskmaster MCP server, enabling programmatic interaction with the FastMCP server over stdio by sending and receiving JSON tool request/response messages.
# Details:
Research existing E2E testing approaches for MCP servers, referencing examples such as the MCP Server E2E Testing Example. Architect a test harness (preferably in Python or Node.js) that can launch the FastMCP server as a subprocess, establish stdio communication, and send well-formed JSON tool request messages.
Implementation details:
1. Use `subprocess.Popen` (Python) or `child_process.spawn` (Node.js) to launch the FastMCP server with appropriate stdin/stdout pipes
2. Implement a message protocol handler that formats JSON requests with proper line endings and message boundaries
3. Create a buffered reader for stdout that correctly handles chunked responses and reconstructs complete JSON objects
4. Develop a request/response correlation mechanism using unique IDs for each request (see the harness sketch at the end of this section)
5. Implement timeout handling for requests that don't receive responses
Implement robust parsing of JSON responses, including error handling for malformed or unexpected output. The framework should support defining test cases as scripts or data files, allowing for easy addition of new scenarios.
Test case structure should include:
- Setup phase for environment preparation
- Sequence of tool requests with expected responses
- Validation functions for response verification
- Teardown phase for cleanup
Ensure the framework can assert on both the structure and content of responses, and provide clear logging for debugging. Document setup, usage, and extension instructions. Consider cross-platform compatibility and CI integration.
**Clarification:** The E2E test framework should focus on testing the FastMCP server's ability to correctly process tool requests and return appropriate responses. This includes verifying that the server properly handles different types of tool calls (e.g., file operations, web requests, task management), validates input parameters, and returns well-structured responses. The framework should be designed to be extensible, allowing new test cases to be added as the server's capabilities evolve. Tests should cover both happy paths and error conditions to ensure robust server behavior under various scenarios.
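A minimal Node.js sketch of such a harness, assuming newline-delimited JSON-RPC style framing; the actual FastMCP message framing should be confirmed before relying on this:

```javascript
import { spawn } from 'child_process';
import readline from 'readline';

// Launch the server as a subprocess, correlate line-delimited JSON responses
// to requests by id, and time out requests that never receive a response.
class McpStdioClient {
  constructor(command, args = []) {
    this.proc = spawn(command, args, { stdio: ['pipe', 'pipe', 'inherit'] });
    this.pending = new Map(); // id -> { resolve, reject, timer }
    this.nextId = 1;
    readline.createInterface({ input: this.proc.stdout }).on('line', (line) => {
      let msg;
      try {
        msg = JSON.parse(line);
      } catch {
        return; // ignore non-JSON output rather than crashing the suite
      }
      const entry = this.pending.get(msg.id);
      if (!entry) return;
      clearTimeout(entry.timer);
      this.pending.delete(msg.id);
      entry.resolve(msg);
    });
  }

  request(method, params, timeoutMs = 10000) {
    const id = this.nextId++;
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(id);
        reject(new Error(`Request ${id} (${method}) timed out`));
      }, timeoutMs);
      this.pending.set(id, { resolve, reject, timer });
      this.proc.stdin.write(JSON.stringify({ jsonrpc: '2.0', id, method, params }) + '\n');
    });
  }

  close() {
    this.proc.kill();
  }
}
```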
# Test Strategy:
Verify the framework by implementing a suite of representative E2E tests that cover typical tool requests and edge cases. Specific test cases should include:
1. Basic tool request/response validation
- Send a simple file_read request and verify response structure
- Test with valid and invalid file paths
- Verify error handling for non-existent files
2. Concurrent request handling
- Send multiple requests in rapid succession
- Verify all responses are received and correlated correctly
3. Large payload testing
- Test with large file contents (>1MB)
- Verify correct handling of chunked responses
4. Error condition testing
- Malformed JSON requests
- Invalid tool names
- Missing required parameters
- Server crash recovery
Confirm that tests can start and stop the FastMCP server, send requests, and accurately parse and validate responses. Implement specific assertions for response timing, structure validation using JSON schema, and content verification. Intentionally introduce malformed requests and simulate server errors to ensure robust error handling.
Implement detailed logging with different verbosity levels:
- ERROR: Failed tests and critical issues
- WARNING: Unexpected but non-fatal conditions
- INFO: Test progress and results
- DEBUG: Raw request/response data
Run the test suite in a clean environment and confirm all expected assertions and logs are produced. Validate that new test cases can be added with minimal effort and that the framework integrates with CI pipelines. Create a CI configuration that runs tests on each commit.

File diff suppressed because one or more lines are too long