docs: update changeset with model config while preserving existing changes

commit 9a66db0309 (parent b7580e038d)
Author: Eyal Toledano
Date: 2025-04-08 15:39:25 -04:00

16 changed files with 3566 additions and 20 deletions


@@ -4,6 +4,23 @@
- Adjusts the MCP server invocation in the mcp.json we ship with `task-master init`. Fully functional now.
- Rename the npx -y command. It's now `npx -y task-master-ai task-master-mcp`
- Add additional binary alias: `task-master-mcp-server` pointing to the same MCP server script
- **Significant improvements to model configuration:**
  - Increase context window from 64k to 128k tokens (MAX_TOKENS=128000) for handling larger codebases
  - Reduce temperature from 0.4 to 0.2 for more consistent, deterministic outputs
  - Set default model to "claude-3-7-sonnet-20250219" in configuration
  - Update Perplexity model to "sonar-pro" for research operations
  - Increase the default number of generated subtasks from 4 to 5 for more granular task breakdown
  - Set a consistent default priority of "medium" for all new tasks
- **Clarify environment configuration approaches:**
  - For direct MCP usage: configure API keys directly in `.cursor/mcp.json`
  - For npm package usage: configure API keys in a `.env` file
  - Update templates with clearer placeholder values and formatting
  - Provide explicit documentation about configuration methods in both environments
  - Use the consistent placeholder format "YOUR_ANTHROPIC_API_KEY_HERE" in mcp.json
- Rename MCP tools to better align with API conventions and natural language in client chat:
  - Rename `list-tasks` to `get-tasks` for more intuitive client requests like "get my tasks"
  - Rename `show-task` to `get-task` for consistency with GET-based API naming conventions


@@ -6,12 +6,12 @@
"./mcp-server/server.js"
],
"env": {
"ANTHROPIC_API_KEY": "%ANTHROPIC_API_KEY%",
"PERPLEXITY_API_KEY": "%PERPLEXITY_API_KEY%",
"MODEL": "%MODEL%",
"PERPLEXITY_MODEL": "%PERPLEXITY_MODEL%",
"MAX_TOKENS": 64000,
"TEMPERATURE": 0.4,
"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE",
"MODEL": "claude-3-7-sonnet-20250219",
"PERPLEXITY_MODEL": "sonar-pro",
"MAX_TOKENS": 128000,
"TEMPERATURE": 0.2,
"DEFAULT_SUBTASKS": 5,
"DEFAULT_PRIORITY": "medium"
}


@@ -1,20 +1,20 @@
# API Keys (Required)
ANTHROPIC_API_KEY=your_anthropic_api_key_here    # Format: sk-ant-api03-...
PERPLEXITY_API_KEY=your_perplexity_api_key_here  # Format: pplx-...

# Model Configuration
MODEL=claude-3-7-sonnet-20250219  # Recommended models: claude-3-7-sonnet-20250219, claude-3-opus-20240229
PERPLEXITY_MODEL=sonar-pro        # Perplexity model for research-backed subtasks
-MAX_TOKENS=64000                 # Maximum tokens for model responses
+MAX_TOKENS=128000                # Maximum tokens for model responses
-TEMPERATURE=0.4                  # Temperature for model responses (0.0-1.0)
+TEMPERATURE=0.2                  # Temperature for model responses (0.0-1.0)

# Logging Configuration
DEBUG=false                       # Enable debug logging (true/false)
LOG_LEVEL=info                    # Log level (debug, info, warn, error)

# Task Generation Settings
-DEFAULT_SUBTASKS=4               # Default number of subtasks when expanding
+DEFAULT_SUBTASKS=5               # Default number of subtasks when expanding
DEFAULT_PRIORITY=medium           # Default priority for generated tasks (high, medium, low)

# Project Metadata (Optional)
PROJECT_NAME=Your Project Name    # Override default project name in tasks.json

package-lock.json (generated)

@@ -32,7 +32,8 @@
"bin": {
"task-master": "bin/task-master.js",
"task-master-init": "bin/task-master-init.js",
"task-master-mcp": "mcp-server/server.js"
"task-master-mcp": "mcp-server/server.js",
"task-master-mcp-server": "mcp-server/server.js"
},
"devDependencies": {
"@changesets/changelog-github": "^0.5.1",

tasks/task_046.txt (new file)

@@ -0,0 +1,55 @@
# Task ID: 46
# Title: Implement ICE Analysis Command for Task Prioritization
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a new command that analyzes and ranks tasks based on Impact, Confidence, and Ease (ICE) scoring methodology, generating a comprehensive prioritization report.
# Details:
Develop a new command called `analyze-ice` that evaluates non-completed tasks (excluding those marked as done, cancelled, or deferred) and ranks them according to the ICE methodology:
1. Core functionality:
- Calculate an Impact score (how much value the task will deliver)
- Calculate a Confidence score (how certain we are about the impact)
- Calculate an Ease score (how easy it is to implement)
- Compute a total ICE score (sum or product of the three components)
2. Implementation details:
- Reuse the filtering logic from `analyze-complexity` to select relevant tasks
- Leverage the LLM to generate scores for each dimension on a scale of 1-10
- For each task, prompt the LLM to evaluate and justify each score based on task description and details
- Create an `ice_report.md` file similar to the complexity report
- Sort tasks by total ICE score in descending order
3. CLI rendering:
- Implement a sister command `show-ice-report` that displays the report in the terminal
- Format the output with colorized scores and rankings
- Include options to sort by individual components (impact, confidence, or ease)
4. Integration:
- If a complexity report exists, reference it in the ICE report for additional context
- Consider adding a combined view that shows both complexity and ICE scores
The command should follow the same design patterns as `analyze-complexity` for consistency and code reuse.
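A minimal sketch of the scoring and ranking step, assuming each non-completed task has already received 1-10 impact, confidence, and ease scores from the LLM; the function and field names below are illustrative, not the actual Task Master implementation:
```js
// Illustrative only: computes ICE totals and ranks tasks that are still actionable.
const ICE_EXCLUDED_STATUSES = new Set(['done', 'cancelled', 'deferred']);

function rankTasksByIce(tasks) {
  return tasks
    .filter((task) => !ICE_EXCLUDED_STATUSES.has(task.status))
    .map((task) => ({
      id: task.id,
      title: task.title,
      // Additive variant; a multiplicative score (I * C * E) is the other common option.
      iceScore: task.impact + task.confidence + task.ease,
    }))
    .sort((a, b) => b.iceScore - a.iceScore);
}

// Example usage with mock LLM scores:
const ranked = rankTasksByIce([
  { id: 46, title: 'ICE analysis command', status: 'pending', impact: 8, confidence: 7, ease: 5 },
  { id: 48, title: 'Centralize prompts', status: 'pending', impact: 6, confidence: 9, ease: 8 },
  { id: 40, title: 'Old task', status: 'done', impact: 9, confidence: 9, ease: 9 },
]);
console.log(ranked.map((t) => `${t.id}: ${t.iceScore}`).join('\n'));
```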
# Test Strategy:
1. Unit tests:
- Test the ICE scoring algorithm with various mock task inputs
- Verify correct filtering of tasks based on status
- Test the sorting functionality with different ranking criteria
2. Integration tests:
- Create a test project with diverse tasks and verify the generated ICE report
- Test the integration with existing complexity reports
- Verify that changes to task statuses correctly update the ICE analysis
3. CLI tests:
- Verify the `analyze-ice` command generates the expected report file
- Test the `show-ice-report` command renders correctly in the terminal
- Test with various flag combinations and sorting options
4. Validation criteria:
- The ICE scores should be reasonable and consistent
- The report should clearly explain the rationale behind each score
- The ranking should prioritize high-impact, high-confidence, easy-to-implement tasks
- Performance should be acceptable even with a large number of tasks
- The command should handle edge cases gracefully (empty projects, missing data)

tasks/task_047.txt (new file)

@@ -0,0 +1,66 @@
# Task ID: 47
# Title: Enhance Task Suggestion Actions Card Workflow
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Redesign the suggestion actions card to implement a structured workflow for task expansion, subtask creation, context addition, and task management.
# Details:
Implement a new workflow for the suggestion actions card that guides users through a logical sequence when working with tasks and subtasks:
1. Task Expansion Phase:
- Add a prominent 'Expand Task' button at the top of the suggestion card
- Implement an 'Add Subtask' button that becomes active after task expansion
- Allow users to add multiple subtasks sequentially
- Provide visual indication of the current phase (expansion phase)
2. Context Addition Phase:
- After subtasks are created, transition to the context phase
- Implement an 'Update Subtask' action that allows appending context to each subtask
- Create a UI element showing which subtask is currently being updated
- Provide a progress indicator showing which subtasks have received context
- Include a mechanism to navigate between subtasks for context addition
3. Task Management Phase:
- Once all subtasks have context, enable the 'Set as In Progress' button
- Add a 'Start Working' button that directs the agent to begin with the first subtask
- Implement an 'Update Task' action that consolidates all notes and reorganizes them into improved subtask details
- Provide a confirmation dialog when restructuring task content
4. UI/UX Considerations:
- Use visual cues (colors, icons) to indicate the current phase
- Implement tooltips explaining each action's purpose
- Add a progress tracker showing completion status across all phases
- Ensure the UI adapts responsively to different screen sizes
The implementation should maintain all existing functionality while guiding users through this more structured approach to task management.
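As a rough illustration of the three phases described above, the card could be modeled as a small state machine; the phase names, action names, and readiness checks are hypothetical:
```js
// Hypothetical sketch of the three-phase card workflow as a simple state machine.
const WORKFLOW = {
  expansion: {
    actions: ['expand-task', 'add-subtask'],
    // Move on once the task has at least one subtask.
    next: (task) => (task.subtasks.length > 0 ? 'context' : 'expansion'),
  },
  context: {
    actions: ['update-subtask'],
    // Move on once every subtask has appended context/notes.
    next: (task) => (task.subtasks.every((s) => s.notes?.length) ? 'management' : 'context'),
  },
  management: {
    actions: ['set-in-progress', 'start-working', 'update-task'],
    next: () => 'management',
  },
};

function currentPhase(task, phase = 'expansion') {
  const resolved = WORKFLOW[phase].next(task);
  return resolved === phase ? phase : currentPhase(task, resolved);
}

// Example: a task with subtasks but no notes yet sits in the context phase.
console.log(currentPhase({ subtasks: [{ notes: [] }] })); // "context"
```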
# Test Strategy:
Testing should verify the complete workflow functions correctly:
1. Unit Tests:
- Test each button/action individually to ensure it performs its specific function
- Verify state transitions between phases work correctly
- Test edge cases (e.g., attempting to set a task in progress before adding context)
2. Integration Tests:
- Verify the complete workflow from task expansion to starting work
- Test that context added to subtasks is properly saved and displayed
- Ensure the 'Update Task' functionality correctly consolidates and restructures content
3. UI/UX Testing:
- Verify visual indicators correctly show the current phase
- Test responsive design on various screen sizes
- Ensure tooltips and help text are displayed correctly
4. User Acceptance Testing:
- Create test scenarios covering the complete workflow:
a. Expand a task and add 3 subtasks
b. Add context to each subtask
c. Set the task as in progress
d. Use update-task to restructure the content
e. Verify the agent correctly begins work on the first subtask
- Test with both simple and complex tasks to ensure scalability
5. Regression Testing:
- Verify that existing functionality continues to work
- Ensure compatibility with keyboard shortcuts and accessibility features

tasks/task_048.txt (new file)

@@ -0,0 +1,44 @@
# Task ID: 48
# Title: Refactor Prompts into Centralized Structure
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a dedicated 'prompts' folder and move all prompt definitions from inline function implementations to individual files, establishing a centralized prompt management system.
# Details:
This task involves restructuring how prompts are managed in the codebase:
1. Create a new 'prompts' directory at the appropriate level in the project structure
2. For each existing prompt currently embedded in functions:
- Create a dedicated file with a descriptive name (e.g., 'task_suggestion_prompt.js')
- Extract the prompt text/object into this file
- Export the prompt using the appropriate module pattern
3. Modify all functions that currently contain inline prompts to import them from the new centralized location
4. Establish a consistent naming convention for prompt files (e.g., feature_action_prompt.js)
5. Consider creating an index.js file in the prompts directory to provide a clean import interface
6. Document the new prompt structure in the project documentation
7. Ensure that any prompt that requires dynamic content insertion maintains this capability after refactoring
This refactoring will improve maintainability by making prompts easier to find, update, and reuse across the application.
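A sketch of what one extracted prompt file and the optional index re-export could look like, assuming ESM modules and template functions to preserve dynamic content insertion; the file and function names are placeholders:
```js
// prompts/task_suggestion_prompt.js (illustrative name and shape)
export function taskSuggestionPrompt({ existingTitles, extraContext = '' }) {
  // Keeping prompts as template functions preserves dynamic content insertion.
  return `You are helping plan a software project.
Existing tasks:
${existingTitles.map((t) => `- ${t}`).join('\n')}
${extraContext}
Suggest one new, non-duplicate task with a title and a short description.`;
}

// prompts/index.js (illustrative): a single import surface for consumers
// export { taskSuggestionPrompt } from './task_suggestion_prompt.js';
// export { subtaskExpansionPrompt } from './subtask_expansion_prompt.js';
```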
# Test Strategy:
Testing should verify that the refactoring maintains identical functionality while improving code organization:
1. Automated Tests:
- Run existing test suite to ensure no functionality is broken
- Create unit tests for the new prompt import mechanism
- Verify that dynamically constructed prompts still receive their parameters correctly
2. Manual Testing:
- Execute each feature that uses prompts and compare outputs before and after refactoring
- Verify that all prompts are properly loaded from their new locations
- Check that no prompt text is accidentally modified during the migration
3. Code Review:
- Confirm all prompts have been moved to the new structure
- Verify consistent naming conventions are followed
- Check that no duplicate prompts exist
- Ensure imports are correctly implemented in all files that previously contained inline prompts
4. Documentation:
- Verify documentation is updated to reflect the new prompt organization
- Confirm the index.js export pattern works as expected for importing prompts

tasks/task_049.txt (new file)

@@ -0,0 +1,66 @@
# Task ID: 49
# Title: Implement Code Quality Analysis Command
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a command that analyzes the codebase to identify patterns and verify functions against current best practices, generating improvement recommendations and potential refactoring tasks.
# Details:
Develop a new command called `analyze-code-quality` that performs the following functions:
1. **Pattern Recognition**:
- Scan the codebase to identify recurring patterns in code structure, function design, and architecture
- Categorize patterns by frequency and impact on maintainability
- Generate a report of common patterns with examples from the codebase
2. **Best Practice Verification**:
- For each function in specified files, extract its purpose, parameters, and implementation details
- Create a verification checklist for each function that includes:
- Function naming conventions
- Parameter handling
- Error handling
- Return value consistency
- Documentation quality
- Complexity metrics
- Use an API integration with Perplexity or similar AI service to evaluate each function against current best practices
3. **Improvement Recommendations**:
- Generate specific refactoring suggestions for functions that don't align with best practices
- Include code examples of the recommended improvements
- Estimate the effort required for each refactoring suggestion
4. **Task Integration**:
- Create a mechanism to convert high-value improvement recommendations into Taskmaster tasks
- Allow users to select which recommendations to convert to tasks
- Generate properly formatted task descriptions that include the current implementation, recommended changes, and justification
The command should accept parameters for targeting specific directories or files, setting the depth of analysis, and filtering by improvement impact level.
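The per-function checklist could be a simple list of criteria whose scores come back from the AI review; a hedged sketch in which the criterion names follow the list above while the threshold and field names are assumptions:
```js
// Illustrative checklist: each criterion receives a 0-10 score from the LLM review.
const BEST_PRACTICE_CHECKLIST = [
  'naming',          // function naming conventions
  'parameters',      // parameter handling and validation
  'errorHandling',   // error handling
  'returnValues',    // return value consistency
  'documentation',   // documentation quality
  'complexity',      // complexity metrics
];

// Aggregates scores and flags low-scoring criteria so they can become refactoring suggestions.
function summarizeFunctionReview(functionName, scores, threshold = 6) {
  const flagged = BEST_PRACTICE_CHECKLIST.filter((c) => (scores[c] ?? 0) < threshold);
  const average =
    BEST_PRACTICE_CHECKLIST.reduce((sum, c) => sum + (scores[c] ?? 0), 0) /
    BEST_PRACTICE_CHECKLIST.length;
  return { functionName, average, flagged };
}

console.log(summarizeFunctionReview('parseTasks', {
  naming: 8, parameters: 5, errorHandling: 4, returnValues: 7, documentation: 6, complexity: 7,
}));
// → flags 'parameters' and 'errorHandling'; average ≈ 6.17
```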
# Test Strategy:
Testing should verify all aspects of the code analysis command:
1. **Functionality Testing**:
- Create a test codebase with known patterns and anti-patterns
- Verify the command correctly identifies all patterns in the test codebase
- Check that function verification correctly flags issues in deliberately non-compliant functions
- Confirm recommendations are relevant and implementable
2. **Integration Testing**:
- Test the AI service integration with mock responses to ensure proper handling of API calls
- Verify the task creation workflow correctly generates well-formed tasks
- Test integration with existing Taskmaster commands and workflows
3. **Performance Testing**:
- Measure execution time on codebases of various sizes
- Ensure memory usage remains reasonable even on large codebases
- Test with rate limiting on API calls to ensure graceful handling
4. **User Experience Testing**:
- Have developers use the command on real projects and provide feedback
- Verify the output is actionable and clear
- Test the command with different parameter combinations
5. **Validation Criteria**:
- Command successfully analyzes at least 95% of functions in the codebase
- Generated recommendations are specific and actionable
- Created tasks follow the project's task format standards
- Analysis results are consistent across multiple runs on the same codebase

tasks/task_050.txt (new file)

@@ -0,0 +1,131 @@
# Task ID: 50
# Title: Implement Test Coverage Tracking System by Task
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a system that maps test coverage to specific tasks and subtasks, enabling targeted test generation and tracking of code coverage at the task level.
# Details:
Develop a comprehensive test coverage tracking system with the following components:
1. Create a `tests.json` file structure in the `tasks/` directory that associates test suites and individual tests with specific task IDs or subtask IDs.
2. Build a generator that processes code coverage reports and updates the `tests.json` file to maintain an accurate mapping between tests and tasks.
3. Implement a parser that can extract code coverage information from standard coverage tools (like Istanbul/nyc, Jest coverage reports) and convert it to the task-based format.
4. Create CLI commands that can:
- Display test coverage for a specific task/subtask
- Identify untested code related to a particular task
- Generate test suggestions for uncovered code using LLMs
5. Extend the MCP (Mission Control Panel) to visualize test coverage by task, showing percentage covered and highlighting areas needing tests.
6. Develop an automated test generation system that uses LLMs to create targeted tests for specific uncovered code sections within a task.
7. Implement a workflow that integrates with the existing task management system, allowing developers to see test requirements alongside implementation requirements.
The system should maintain bidirectional relationships: from tests to tasks and from tasks to the code they affect, enabling precise tracking of what needs testing for each development task.
# Test Strategy:
Testing should verify all components of the test coverage tracking system:
1. **File Structure Tests**: Verify the `tests.json` file is correctly created and follows the expected schema with proper task/test relationships.
2. **Coverage Report Processing**: Create mock coverage reports and verify they are correctly parsed and integrated into the `tests.json` file.
3. **CLI Command Tests**: Test each CLI command with various inputs:
- Test coverage display for existing tasks
- Edge cases like tasks with no tests
- Tasks with partial coverage
4. **Integration Tests**: Verify the entire workflow from code changes to coverage reporting to task-based test suggestions.
5. **LLM Test Generation**: Validate that generated tests actually cover the intended code paths by running them against the codebase.
6. **UI/UX Tests**: Ensure the MCP correctly displays coverage information and that the interface for viewing and managing test coverage is intuitive.
7. **Performance Tests**: Measure the performance impact of the coverage tracking system, especially for large codebases.
Create a test suite that can run in CI/CD to ensure the test coverage tracking system itself maintains high coverage and reliability.
# Subtasks:
## 1. Design and implement tests.json data structure [pending]
### Dependencies: None
### Description: Create a comprehensive data structure that maps tests to tasks/subtasks and tracks coverage metrics. This structure will serve as the foundation for the entire test coverage tracking system.
### Details:
1. Design a JSON schema for tests.json that includes: test IDs, associated task/subtask IDs, coverage percentages, test types (unit/integration/e2e), file paths, and timestamps.
2. Implement bidirectional relationships by creating references between tests.json and tasks.json.
3. Define fields for tracking statement coverage, branch coverage, and function coverage per task.
4. Add metadata fields for test quality metrics beyond coverage (complexity, mutation score).
5. Create utility functions to read/write/update the tests.json file.
6. Implement validation logic to ensure data integrity between tasks and tests.
7. Add version control compatibility by using relative paths and stable identifiers.
8. Test the data structure with sample data representing various test scenarios.
9. Document the schema with examples and usage guidelines.
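One possible shape for tests.json, written as a JavaScript literal here for readability (the real file would be plain JSON); the field names track the schema points above but are assumptions rather than a finalized schema:
```js
// Illustrative tests.json contents.
const exampleTestsJson = {
  schemaVersion: 1,
  generatedAt: '2025-04-08T19:00:00Z',
  tests: [
    {
      id: 'T50-001',
      file: 'tests/unit/coverage-parser.test.js',
      type: 'unit',                 // unit | integration | e2e
      taskId: 50,
      subtaskId: 2,
      coverage: { statements: 92.3, branches: 81.0, functions: 100 },
    },
  ],
  tasks: {
    50: {
      testIds: ['T50-001'],
      aggregateCoverage: { statements: 92.3, branches: 81.0, functions: 100 },
    },
  },
};

console.log(JSON.stringify(exampleTestsJson, null, 2));
```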
## 2. Develop coverage report parser and adapter system [pending]
### Dependencies: 50.1
### Description: Create a framework-agnostic system that can parse coverage reports from various testing tools and convert them to the standardized task-based format in tests.json.
### Details:
1. Research and document output formats for major coverage tools (Istanbul/nyc, Jest, Pytest, JaCoCo).
2. Design a normalized intermediate coverage format that any test tool can map to.
3. Implement adapter classes for each major testing framework that convert their reports to the intermediate format.
4. Create a parser registry that can automatically detect and use the appropriate parser based on input format.
5. Develop a mapping algorithm that associates coverage data with specific tasks based on file paths and code blocks.
6. Implement file path normalization to handle different operating systems and environments.
7. Add error handling for malformed or incomplete coverage reports.
8. Create unit tests for each adapter using sample coverage reports.
9. Implement a command-line interface for manual parsing and testing.
10. Document the extension points for adding custom coverage tool adapters.
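A sketch of the parser-registry idea: adapters register a detect/parse pair and every report is normalized into one intermediate shape. The Istanbul/nyc example assumes that tool's coverage-summary.json layout:
```js
// Illustrative adapter registry producing { file, statements, branches, functions }[] entries.
const adapters = new Map();

function registerAdapter(name, { detect, parse }) {
  adapters.set(name, { detect, parse });
}

function parseCoverageReport(rawReport) {
  for (const [name, adapter] of adapters) {
    if (adapter.detect(rawReport)) return { tool: name, entries: adapter.parse(rawReport) };
  }
  throw new Error('No registered adapter recognizes this coverage report format');
}

// Minimal adapter for Istanbul/nyc json-summary output (assumed shape).
registerAdapter('istanbul-json-summary', {
  detect: (r) => typeof r === 'object' && r !== null && 'total' in r,
  parse: (r) =>
    Object.entries(r)
      .filter(([file]) => file !== 'total')
      .map(([file, c]) => ({
        file,
        statements: c.statements.pct,
        branches: c.branches.pct,
        functions: c.functions.pct,
      })),
});
```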
## 3. Build coverage tracking and update generator [pending]
### Dependencies: 50.1, 50.2
### Description: Create a system that processes code coverage reports, maps them to tasks, and updates the tests.json file to maintain accurate coverage tracking over time.
### Details:
1. Implement a coverage processor that takes parsed coverage data and maps it to task IDs.
2. Create algorithms to calculate aggregate coverage metrics at the task and subtask levels.
3. Develop a change detection system that identifies when tests or code have changed and require updates.
4. Implement incremental update logic to avoid reprocessing unchanged tests.
5. Create a task-code association system that maps specific code blocks to tasks for granular tracking.
6. Add historical tracking to monitor coverage trends over time.
7. Implement hooks for CI/CD integration to automatically update coverage after test runs.
8. Create a conflict resolution strategy for when multiple tests cover the same code areas.
9. Add performance optimizations for large codebases and test suites.
10. Develop unit tests that verify correct aggregation and mapping of coverage data.
11. Document the update workflow with sequence diagrams and examples.
## 4. Implement CLI commands for coverage operations [pending]
### Dependencies: 50.1, 50.2, 50.3
### Description: Create a set of command-line interface tools that allow developers to view, analyze, and manage test coverage at the task level.
### Details:
1. Design a cohesive CLI command structure with subcommands for different coverage operations.
2. Implement 'coverage show' command to display test coverage for a specific task/subtask.
3. Create 'coverage gaps' command to identify untested code related to a particular task.
4. Develop 'coverage history' command to show how coverage has changed over time.
5. Implement 'coverage generate' command that uses LLMs to suggest tests for uncovered code.
6. Add filtering options to focus on specific test types or coverage thresholds.
7. Create formatted output options (JSON, CSV, markdown tables) for integration with other tools.
8. Implement colorized terminal output for better readability of coverage reports.
9. Add batch processing capabilities for running operations across multiple tasks.
10. Create comprehensive help documentation and examples for each command.
11. Develop unit and integration tests for CLI commands.
12. Document command usage patterns and example workflows.
## 5. Develop AI-powered test generation system [pending]
### Dependencies: 50.1, 50.2, 50.3, 50.4
### Description: Create an intelligent system that uses LLMs to generate targeted tests for uncovered code sections within tasks, integrating with the existing task management workflow.
### Details:
1. Design prompt templates for different test types (unit, integration, E2E) that incorporate task descriptions and code context.
2. Implement code analysis to extract relevant context from uncovered code sections.
3. Create a test generation pipeline that combines task metadata, code context, and coverage gaps.
4. Develop strategies for maintaining test context across task changes and updates.
5. Implement test quality evaluation to ensure generated tests are meaningful and effective.
6. Create a feedback mechanism to improve prompts based on acceptance or rejection of generated tests.
7. Add support for different testing frameworks and languages through templating.
8. Implement caching to avoid regenerating similar tests.
9. Create a workflow that integrates with the task management system to suggest tests alongside implementation requirements.
10. Develop specialized generation modes for edge cases, regression tests, and performance tests.
11. Add configuration options for controlling test generation style and coverage goals.
12. Create comprehensive documentation on how to use and extend the test generation system.
13. Implement evaluation metrics to track the effectiveness of AI-generated tests.

tasks/task_051.txt (new file)

@@ -0,0 +1,176 @@
# Task ID: 51
# Title: Implement Perplexity Research Command
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a command that allows users to quickly research topics using Perplexity AI, with options to include task context or custom prompts.
# Details:
Develop a new command called 'research' that integrates with Perplexity AI's API to fetch information on specified topics. The command should:
1. Accept the following parameters:
- A search query string (required)
- A task or subtask ID for context (optional)
- A custom prompt to guide the research (optional)
2. When a task/subtask ID is provided, extract relevant information from it to enrich the research query with context.
3. Implement proper API integration with Perplexity, including authentication and rate limiting handling.
4. Format and display the research results in a readable format in the terminal, with options to:
- Save the results to a file
- Copy results to clipboard
- Generate a summary of key points
5. Cache research results to avoid redundant API calls for the same queries.
6. Provide a configuration option to set the depth/detail level of research (quick overview vs. comprehensive).
7. Handle errors gracefully, especially network issues or API limitations.
The command should follow the existing CLI structure and maintain consistency with other commands in the system.
# Test Strategy:
1. Unit tests:
- Test the command with various combinations of parameters (query only, query+task, query+custom prompt, all parameters)
- Mock the Perplexity API responses to test different scenarios (successful response, error response, rate limiting)
- Verify that task context is correctly extracted and incorporated into the research query
2. Integration tests:
- Test actual API calls to Perplexity with valid credentials (using a test account)
- Verify the caching mechanism works correctly for repeated queries
- Test error handling with intentionally invalid requests
3. User acceptance testing:
- Have team members use the command for real research needs and provide feedback
- Verify the command works in different network environments
- Test the command with very long queries and responses
4. Performance testing:
- Measure and optimize response time for queries
- Test behavior under poor network conditions
Validate that the research results are properly formatted, readable, and that all output options (save, copy) function correctly.
# Subtasks:
## 1. Create Perplexity API Client Service [pending]
### Dependencies: None
### Description: Develop a service module that handles all interactions with the Perplexity AI API, including authentication, request formatting, and response handling.
### Details:
Implementation details:
1. Create a new service file `services/perplexityService.js`
2. Implement authentication using the PERPLEXITY_API_KEY from environment variables
3. Create functions for making API requests to Perplexity with proper error handling:
- `queryPerplexity(searchQuery, options)` - Main function to query the API
- `handleRateLimiting(response)` - Logic to handle rate limits with exponential backoff
4. Implement response parsing and formatting functions
5. Add proper error handling for network issues, authentication problems, and API limitations
6. Create a simple caching mechanism using a Map or object to store recent query results
7. Add configuration options for different detail levels (quick vs comprehensive)
Testing approach:
- Write unit tests using Jest to verify API client functionality with mocked responses
- Test error handling with simulated network failures
- Verify caching mechanism works correctly
- Test with various query types and options
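A minimal sketch of the service module, assuming Perplexity's OpenAI-compatible chat completions endpoint and Node 18+ global fetch; rate-limit handling is reduced to a placeholder rather than the full exponential backoff described above:
```js
// services/perplexityService.js — sketch only, not the real implementation.
const PERPLEXITY_URL = 'https://api.perplexity.ai/chat/completions';
const cache = new Map(); // naive in-memory cache keyed by detail level + query

export async function queryPerplexity(searchQuery, { detail = 'quick' } = {}) {
  const cacheKey = `${detail}:${searchQuery}`;
  if (cache.has(cacheKey)) return cache.get(cacheKey);

  const response = await fetch(PERPLEXITY_URL, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: process.env.PERPLEXITY_MODEL || 'sonar-pro',
      messages: [
        {
          role: 'system',
          content: detail === 'comprehensive'
            ? 'Give a detailed, well-sourced answer.'
            : 'Give a concise overview.',
        },
        { role: 'user', content: searchQuery },
      ],
    }),
  });

  if (response.status === 429) {
    // Real code would retry with exponential backoff instead of failing outright.
    throw new Error('Perplexity rate limit hit');
  }
  if (!response.ok) throw new Error(`Perplexity request failed: ${response.status}`);

  const data = await response.json();
  const answer = data.choices[0].message.content;
  cache.set(cacheKey, answer);
  return answer;
}
```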
## 2. Implement Task Context Extraction Logic [pending]
### Dependencies: None
### Description: Create utility functions to extract relevant context from tasks and subtasks to enhance research queries with project-specific information.
### Details:
Implementation details:
1. Create a new utility file `utils/contextExtractor.js`
2. Implement a function `extractTaskContext(taskId)` that:
- Loads the task/subtask data from tasks.json
- Extracts relevant information (title, description, details)
- Formats the extracted information into a context string for research
3. Add logic to handle both task and subtask IDs
4. Implement a function to combine extracted context with the user's search query
5. Create a function to identify and extract key terminology from tasks
6. Add functionality to include parent task context when a subtask ID is provided
7. Implement proper error handling for invalid task IDs
Testing approach:
- Write unit tests to verify context extraction from sample tasks
- Test with various task structures and content types
- Verify error handling for missing or invalid tasks
- Test the quality of extracted context with sample queries
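A hedged sketch of the extractor, assuming the tasks.json layout used elsewhere in this commit (a top-level tasks array whose entries may carry subtasks) and dotted IDs such as 51.2 for subtasks:
```js
// utils/contextExtractor.js — illustrative only.
import { readFile } from 'node:fs/promises';

export async function extractTaskContext(taskId, tasksPath = 'tasks/tasks.json') {
  const { tasks } = JSON.parse(await readFile(tasksPath, 'utf8'));
  const [parentId, subtaskId] = String(taskId).split('.').map(Number);

  const parent = tasks.find((t) => t.id === parentId);
  if (!parent) throw new Error(`Task ${taskId} not found`);

  // For a subtask ID like "51.2", include the parent context plus the subtask's own details.
  const subtask = subtaskId ? parent.subtasks?.find((s) => s.id === subtaskId) : null;
  if (subtaskId && !subtask) throw new Error(`Subtask ${taskId} not found`);

  const target = subtask ?? parent;
  return [
    `Task ${taskId}: ${target.title}`,
    target.description,
    target.details,
    subtask ? `Parent task: ${parent.title} — ${parent.description}` : '',
  ]
    .filter(Boolean)
    .join('\n');
}
```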
## 3. Build Research Command CLI Interface [pending]
### Dependencies: 51.1, 51.2
### Description: Implement the Commander.js command structure for the 'research' command with all required options and parameters.
### Details:
Implementation details:
1. Create a new command file `commands/research.js`
2. Set up the Commander.js command structure with the following options:
- Required search query parameter
- `--task` or `-t` option for task/subtask ID
- `--prompt` or `-p` option for custom research prompt
- `--save` or `-s` option to save results to a file
- `--copy` or `-c` option to copy results to clipboard
- `--summary` or `-m` option to generate a summary
- `--detail` or `-d` option to set research depth (default: medium)
3. Implement command validation logic
4. Connect the command to the Perplexity service created in subtask 1
5. Integrate the context extraction logic from subtask 2
6. Register the command in the main CLI application
7. Add help text and examples
Testing approach:
- Test command registration and option parsing
- Verify command validation logic works correctly
- Test with various combinations of options
- Ensure proper error messages for invalid inputs
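The Commander wiring for the options listed above could look roughly like this; the action body is a placeholder and the service import assumes the module sketched in subtask 51.1:
```js
// commands/research.js — sketch of the command registration.
import { Command } from 'commander';
import { queryPerplexity } from '../services/perplexityService.js'; // assumed module from subtask 51.1

export function registerResearchCommand(program) {
  program
    .command('research <query>')
    .description('Research a topic with Perplexity AI, optionally using task context')
    .option('-t, --task <id>', 'task or subtask ID to use as context')
    .option('-p, --prompt <text>', 'custom prompt to guide the research')
    .option('-s, --save', 'save results to a file')
    .option('-c, --copy', 'copy results to the clipboard')
    .option('-m, --summary', 'generate a summary of key points')
    .option('-d, --detail <level>', 'research depth (quick|medium|comprehensive)', 'medium')
    .action(async (query, options) => {
      const result = await queryPerplexity(query, { detail: options.detail });
      console.log(result);
      // Saving, clipboard copy, and summarization would hang off `options` here.
    });
}

// Usage in the CLI entry point (illustrative):
// const program = new Command();
// registerResearchCommand(program);
// program.parse(process.argv);
```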
## 4. Implement Results Processing and Output Formatting [pending]
### Dependencies: 51.1, 51.3
### Description: Create functionality to process, format, and display research results in the terminal with options for saving, copying, and summarizing.
### Details:
Implementation details:
1. Create a new module `utils/researchFormatter.js`
2. Implement terminal output formatting with:
- Color-coded sections for better readability
- Proper text wrapping for terminal width
- Highlighting of key points
3. Add functionality to save results to a file:
- Create a `research-results` directory if it doesn't exist
- Save results with timestamp and query in filename
- Support multiple formats (text, markdown, JSON)
4. Implement clipboard copying using a library like `clipboardy`
5. Create a summarization function that extracts key points from research results
6. Add progress indicators during API calls
7. Implement pagination for long results
Testing approach:
- Test output formatting with various result lengths and content types
- Verify file saving functionality creates proper files with correct content
- Test clipboard functionality
- Verify summarization produces useful results
## 5. Implement Caching and Results Management System [pending]
### Dependencies: 51.1, 51.4
### Description: Create a persistent caching system for research results and implement functionality to manage, retrieve, and reference previous research.
### Details:
Implementation details:
1. Create a research results database using a simple JSON file or SQLite:
- Store queries, timestamps, and results
- Index by query and related task IDs
2. Implement cache retrieval and validation:
- Check for cached results before making API calls
- Validate cache freshness with configurable TTL
3. Add commands to manage research history:
- List recent research queries
- Retrieve past research by ID or search term
- Clear cache or delete specific entries
4. Create functionality to associate research results with tasks:
- Add metadata linking research to specific tasks
- Implement command to show all research related to a task
5. Add configuration options for cache behavior in user settings
6. Implement export/import functionality for research data
Testing approach:
- Test cache storage and retrieval with various queries
- Verify cache invalidation works correctly
- Test history management commands
- Verify task association functionality
- Test with large cache sizes to ensure performance
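A minimal sketch of the persistent cache backed by a single JSON file with a TTL check; the cache path, field names, and default TTL are assumptions:
```js
// Illustrative persistent research cache keyed by query.
import { readFile, writeFile, mkdir } from 'node:fs/promises';
import path from 'node:path';

const CACHE_FILE = path.join('.taskmaster-cache', 'research.json');
const DEFAULT_TTL_MS = 24 * 60 * 60 * 1000; // 24 hours

async function loadCache() {
  try {
    return JSON.parse(await readFile(CACHE_FILE, 'utf8'));
  } catch {
    return {}; // first run or unreadable cache: start fresh
  }
}

export async function getCachedResearch(query, ttlMs = DEFAULT_TTL_MS) {
  const cache = await loadCache();
  const entry = cache[query];
  if (!entry || Date.now() - entry.timestamp > ttlMs) return null; // miss or stale
  return entry.result;
}

export async function storeResearch(query, result, taskIds = []) {
  const cache = await loadCache();
  cache[query] = { result, taskIds, timestamp: Date.now() };
  await mkdir(path.dirname(CACHE_FILE), { recursive: true });
  await writeFile(CACHE_FILE, JSON.stringify(cache, null, 2));
}
```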

tasks/task_052.txt (new file)

@@ -0,0 +1,51 @@
# Task ID: 52
# Title: Implement Task Suggestion Command for CLI
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a new CLI command 'suggest-task' that generates contextually relevant task suggestions based on existing tasks and allows users to accept, decline, or regenerate suggestions.
# Details:
Implement a new command 'suggest-task' that can be invoked from the CLI to generate intelligent task suggestions. The command should:
1. Collect a snapshot of all existing tasks including their titles, descriptions, statuses, and dependencies
2. Extract parent task subtask titles (not full objects) to provide context
3. Use this information to generate a contextually appropriate new task suggestion
4. Present the suggestion to the user in a clear format
5. Provide an interactive interface with options to:
- Accept the suggestion (creating a new task with the suggested details)
- Decline the suggestion (exiting without creating a task)
- Regenerate a new suggestion (requesting an alternative)
The implementation should follow a similar pattern to the 'generate-subtask' command but operate at the task level rather than subtask level. The command should use the project's existing AI integration to analyze the current task structure and generate relevant suggestions. Ensure proper error handling for API failures and implement a timeout mechanism for suggestion generation.
The command should accept optional flags to customize the suggestion process, such as:
- `--parent=<task-id>` to suggest a task related to a specific parent task
- `--type=<task-type>` to suggest a specific type of task (feature, bugfix, refactor, etc.)
- `--context=<additional-context>` to provide additional information for the suggestion
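For the snapshot step, a small helper could strip each parent task's subtasks down to their titles before the data is handed to the suggestion prompt; the shape below is illustrative:
```js
// Illustrative: lightweight snapshot — full fields for top-level tasks,
// but only the titles of their subtasks — to keep the prompt compact.
function buildTaskSnapshot(tasks) {
  return tasks.map((task) => ({
    id: task.id,
    title: task.title,
    description: task.description,
    status: task.status,
    dependencies: task.dependencies,
    subtaskTitles: (task.subtasks ?? []).map((s) => s.title),
  }));
}
```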
# Test Strategy:
Testing should verify both the functionality and user experience of the suggest-task command:
1. Unit tests:
- Test the task collection mechanism to ensure it correctly gathers existing task data
- Test the context extraction logic to verify it properly isolates relevant subtask titles
- Test the suggestion generation with mocked AI responses
- Test the command's parsing of various flag combinations
2. Integration tests:
- Test the end-to-end flow with a mock project structure
- Verify the command correctly interacts with the AI service
- Test the task creation process when a suggestion is accepted
3. User interaction tests:
- Test the accept/decline/regenerate interface works correctly
- Verify appropriate feedback is displayed to the user
- Test handling of unexpected user inputs
4. Edge cases:
- Test behavior when run in an empty project with no existing tasks
- Test with malformed task data
- Test with API timeouts or failures
- Test with extremely large numbers of existing tasks
Manually verify the command produces contextually appropriate suggestions that align with the project's current state and needs.

tasks/task_053.txt (new file)

@@ -0,0 +1,53 @@
# Task ID: 53
# Title: Implement Subtask Suggestion Feature for Parent Tasks
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create a new CLI command that suggests contextually relevant subtasks for existing parent tasks, allowing users to accept, decline, or regenerate suggestions before adding them to the system.
# Details:
Develop a new command `suggest-subtask <task-id>` that generates intelligent subtask suggestions for a specified parent task. The implementation should:
1. Accept a parent task ID as input and validate it exists
2. Gather a snapshot of all existing tasks in the system (titles only, with their statuses and dependencies)
3. Retrieve the full details of the specified parent task
4. Use this context to generate a relevant subtask suggestion that would logically help complete the parent task
5. Present the suggestion to the user in the CLI with options to:
- Accept (a): Add the subtask to the system under the parent task
- Decline (d): Reject the suggestion without adding anything
- Regenerate (r): Generate a new alternative subtask suggestion
- Edit (e): Accept but allow editing the title/description before adding
The suggestion algorithm should consider:
- The parent task's description and requirements
- Current progress (% complete) of the parent task
- Existing subtasks already created for this parent
- Similar patterns from other tasks in the system
- Logical next steps based on software development best practices
When a subtask is accepted, it should be properly linked to the parent task and assigned appropriate default values for priority and status.
# Test Strategy:
Testing should verify both the functionality and the quality of suggestions:
1. Unit tests:
- Test command parsing and validation of task IDs
- Test snapshot creation of existing tasks
- Test the suggestion generation with mocked data
- Test the user interaction flow with simulated inputs
2. Integration tests:
- Create a test parent task and verify subtask suggestions are contextually relevant
- Test the accept/decline/regenerate workflow end-to-end
- Verify proper linking of accepted subtasks to parent tasks
- Test with various types of parent tasks (frontend, backend, documentation, etc.)
3. Quality assessment:
- Create a benchmark set of 10 diverse parent tasks
- Generate 3 subtask suggestions for each and have team members rate relevance on 1-5 scale
- Ensure average relevance score exceeds 3.5/5
- Verify suggestions don't duplicate existing subtasks
4. Edge cases:
- Test with a parent task that has no description
- Test with a parent task that already has many subtasks
- Test with a newly created system with minimal task history

tasks/task_054.txt (new file)

@@ -0,0 +1,43 @@
# Task ID: 54
# Title: Add Research Flag to Add-Task Command
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Enhance the add-task command with a --research flag that allows users to perform quick research on the task topic before finalizing task creation.
# Details:
Modify the existing add-task command to accept a new optional flag '--research'. When this flag is provided, the system should pause the task creation process and invoke the Perplexity research functionality (similar to Task #51) to help users gather information about the task topic before finalizing the task details. The implementation should:
1. Update the command parser to recognize the new --research flag
2. When the flag is present, extract the task title/description as the research topic
3. Call the Perplexity research functionality with this topic
4. Display research results to the user
5. Allow the user to refine their task based on the research (modify title, description, etc.)
6. Continue with normal task creation flow after research is complete
7. Ensure the research results can be optionally attached to the task as reference material
8. Add appropriate help text explaining this feature in the command help
The implementation should leverage the existing Perplexity research command from Task #51, ensuring code reuse where possible.
# Test Strategy:
Testing should verify both the functionality and usability of the new feature:
1. Unit tests:
- Verify the command parser correctly recognizes the --research flag
- Test that the research functionality is properly invoked with the correct topic
- Ensure task creation proceeds correctly after research is complete
2. Integration tests:
- Test the complete flow from command invocation to task creation with research
- Verify research results are properly attached to the task when requested
- Test error handling when research API is unavailable
3. Manual testing:
- Run the command with --research flag and verify the user experience
- Test with various task topics to ensure research is relevant
- Verify the help documentation correctly explains the feature
- Test the command without the flag to ensure backward compatibility
4. Edge cases:
- Test with very short/vague task descriptions
- Test with complex technical topics
- Test cancellation of task creation during the research phase

tasks/task_055.txt (new file)

@@ -0,0 +1,50 @@
# Task ID: 55
# Title: Implement Positional Arguments Support for CLI Commands
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Upgrade CLI commands to support positional arguments alongside the existing flag-based syntax, allowing for more intuitive command usage.
# Details:
This task involves modifying the command parsing logic in commands.js to support positional arguments as an alternative to the current flag-based approach. The implementation should:
1. Update the argument parsing logic to detect when arguments are provided without flag prefixes (--)
2. Map positional arguments to their corresponding parameters based on their order
3. For each command in commands.js, define a consistent positional argument order (e.g., for set-status: first arg = id, second arg = status)
4. Maintain backward compatibility with the existing flag-based syntax
5. Handle edge cases such as:
- Commands with optional parameters
- Commands with multiple parameters
- Commands that accept arrays or complex data types
6. Update the help text for each command to show both usage patterns
7. Modify the cursor rules to work with both input styles
8. Ensure error messages are clear when positional arguments are provided incorrectly
Example implementations:
- `task-master set-status 25 done` should be equivalent to `task-master set-status --id=25 --status=done`
- `task-master add-task "New task name" "Task description"` should be equivalent to `task-master add-task --name="New task name" --description="Task description"`
The code should prioritize maintaining the existing functionality while adding this new capability.
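A sketch of how positional arguments could be mapped onto the existing flag-based options; the per-command orders are illustrative, and explicit flags keep precedence to preserve backward compatibility:
```js
// Illustrative positional-argument mapping; real orders would live next to each
// command definition in commands.js.
const POSITIONAL_ORDER = {
  'set-status': ['id', 'status'],
  'add-task': ['name', 'description'],
};

function applyPositionalArgs(command, argv, options = {}) {
  const order = POSITIONAL_ORDER[command] ?? [];
  const positionals = argv.filter((arg) => !arg.startsWith('--'));
  positionals.forEach((value, index) => {
    const key = order[index];
    // Explicit flags win over positionals.
    if (key && options[key] === undefined) options[key] = value;
  });
  return options;
}

// `task-master set-status 25 done` and `task-master set-status --id=25 --status=done`
// both end up as { id: '25', status: 'done' }.
console.log(applyPositionalArgs('set-status', ['25', 'done']));
```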
# Test Strategy:
Testing should verify both the new positional argument functionality and continued support for flag-based syntax:
1. Unit tests:
- Create tests for each command that verify it works with both positional and flag-based arguments
- Test edge cases like missing arguments, extra arguments, and mixed usage (some positional, some flags)
- Verify help text correctly displays both usage patterns
2. Integration tests:
- Test the full CLI with various commands using both syntax styles
- Verify that output is identical regardless of which syntax is used
- Test commands with different numbers of arguments
3. Manual testing:
- Run through a comprehensive set of real-world usage scenarios with both syntax styles
- Verify cursor behavior works correctly with both input methods
- Check that error messages are helpful when incorrect positional arguments are provided
4. Documentation verification:
- Ensure README and help text accurately reflect the new dual syntax support
- Verify examples in documentation show both styles where appropriate
All tests should pass with 100% of commands supporting both argument styles without any regression in existing functionality.


@@ -2518,7 +2518,68 @@
"dependencies": [],
"priority": "medium",
"details": "Develop a comprehensive test coverage tracking system with the following components:\n\n1. Create a `tests.json` file structure in the `tasks/` directory that associates test suites and individual tests with specific task IDs or subtask IDs.\n\n2. Build a generator that processes code coverage reports and updates the `tests.json` file to maintain an accurate mapping between tests and tasks.\n\n3. Implement a parser that can extract code coverage information from standard coverage tools (like Istanbul/nyc, Jest coverage reports) and convert it to the task-based format.\n\n4. Create CLI commands that can:\n - Display test coverage for a specific task/subtask\n - Identify untested code related to a particular task\n - Generate test suggestions for uncovered code using LLMs\n\n5. Extend the MCP (Mission Control Panel) to visualize test coverage by task, showing percentage covered and highlighting areas needing tests.\n\n6. Develop an automated test generation system that uses LLMs to create targeted tests for specific uncovered code sections within a task.\n\n7. Implement a workflow that integrates with the existing task management system, allowing developers to see test requirements alongside implementation requirements.\n\nThe system should maintain bidirectional relationships: from tests to tasks and from tasks to the code they affect, enabling precise tracking of what needs testing for each development task.",
"testStrategy": "Testing should verify all components of the test coverage tracking system:\n\n1. **File Structure Tests**: Verify the `tests.json` file is correctly created and follows the expected schema with proper task/test relationships.\n\n2. **Coverage Report Processing**: Create mock coverage reports and verify they are correctly parsed and integrated into the `tests.json` file.\n\n3. **CLI Command Tests**: Test each CLI command with various inputs:\n - Test coverage display for existing tasks\n - Edge cases like tasks with no tests\n - Tasks with partial coverage\n\n4. **Integration Tests**: Verify the entire workflow from code changes to coverage reporting to task-based test suggestions.\n\n5. **LLM Test Generation**: Validate that generated tests actually cover the intended code paths by running them against the codebase.\n\n6. **UI/UX Tests**: Ensure the MCP correctly displays coverage information and that the interface for viewing and managing test coverage is intuitive.\n\n7. **Performance Tests**: Measure the performance impact of the coverage tracking system, especially for large codebases.\n\nCreate a test suite that can run in CI/CD to ensure the test coverage tracking system itself maintains high coverage and reliability."
"testStrategy": "Testing should verify all components of the test coverage tracking system:\n\n1. **File Structure Tests**: Verify the `tests.json` file is correctly created and follows the expected schema with proper task/test relationships.\n\n2. **Coverage Report Processing**: Create mock coverage reports and verify they are correctly parsed and integrated into the `tests.json` file.\n\n3. **CLI Command Tests**: Test each CLI command with various inputs:\n - Test coverage display for existing tasks\n - Edge cases like tasks with no tests\n - Tasks with partial coverage\n\n4. **Integration Tests**: Verify the entire workflow from code changes to coverage reporting to task-based test suggestions.\n\n5. **LLM Test Generation**: Validate that generated tests actually cover the intended code paths by running them against the codebase.\n\n6. **UI/UX Tests**: Ensure the MCP correctly displays coverage information and that the interface for viewing and managing test coverage is intuitive.\n\n7. **Performance Tests**: Measure the performance impact of the coverage tracking system, especially for large codebases.\n\nCreate a test suite that can run in CI/CD to ensure the test coverage tracking system itself maintains high coverage and reliability.",
"subtasks": [
{
"id": 1,
"title": "Design and implement tests.json data structure",
"description": "Create a comprehensive data structure that maps tests to tasks/subtasks and tracks coverage metrics. This structure will serve as the foundation for the entire test coverage tracking system.",
"dependencies": [],
"details": "1. Design a JSON schema for tests.json that includes: test IDs, associated task/subtask IDs, coverage percentages, test types (unit/integration/e2e), file paths, and timestamps.\n2. Implement bidirectional relationships by creating references between tests.json and tasks.json.\n3. Define fields for tracking statement coverage, branch coverage, and function coverage per task.\n4. Add metadata fields for test quality metrics beyond coverage (complexity, mutation score).\n5. Create utility functions to read/write/update the tests.json file.\n6. Implement validation logic to ensure data integrity between tasks and tests.\n7. Add version control compatibility by using relative paths and stable identifiers.\n8. Test the data structure with sample data representing various test scenarios.\n9. Document the schema with examples and usage guidelines.",
"status": "pending",
"parentTaskId": 50
},
{
"id": 2,
"title": "Develop coverage report parser and adapter system",
"description": "Create a framework-agnostic system that can parse coverage reports from various testing tools and convert them to the standardized task-based format in tests.json.",
"dependencies": [
1
],
"details": "1. Research and document output formats for major coverage tools (Istanbul/nyc, Jest, Pytest, JaCoCo).\n2. Design a normalized intermediate coverage format that any test tool can map to.\n3. Implement adapter classes for each major testing framework that convert their reports to the intermediate format.\n4. Create a parser registry that can automatically detect and use the appropriate parser based on input format.\n5. Develop a mapping algorithm that associates coverage data with specific tasks based on file paths and code blocks.\n6. Implement file path normalization to handle different operating systems and environments.\n7. Add error handling for malformed or incomplete coverage reports.\n8. Create unit tests for each adapter using sample coverage reports.\n9. Implement a command-line interface for manual parsing and testing.\n10. Document the extension points for adding custom coverage tool adapters.",
"status": "pending",
"parentTaskId": 50
},
{
"id": 3,
"title": "Build coverage tracking and update generator",
"description": "Create a system that processes code coverage reports, maps them to tasks, and updates the tests.json file to maintain accurate coverage tracking over time.",
"dependencies": [
1,
2
],
"details": "1. Implement a coverage processor that takes parsed coverage data and maps it to task IDs.\n2. Create algorithms to calculate aggregate coverage metrics at the task and subtask levels.\n3. Develop a change detection system that identifies when tests or code have changed and require updates.\n4. Implement incremental update logic to avoid reprocessing unchanged tests.\n5. Create a task-code association system that maps specific code blocks to tasks for granular tracking.\n6. Add historical tracking to monitor coverage trends over time.\n7. Implement hooks for CI/CD integration to automatically update coverage after test runs.\n8. Create a conflict resolution strategy for when multiple tests cover the same code areas.\n9. Add performance optimizations for large codebases and test suites.\n10. Develop unit tests that verify correct aggregation and mapping of coverage data.\n11. Document the update workflow with sequence diagrams and examples.",
"status": "pending",
"parentTaskId": 50
},
{
"id": 4,
"title": "Implement CLI commands for coverage operations",
"description": "Create a set of command-line interface tools that allow developers to view, analyze, and manage test coverage at the task level.",
"dependencies": [
1,
2,
3
],
"details": "1. Design a cohesive CLI command structure with subcommands for different coverage operations.\n2. Implement 'coverage show' command to display test coverage for a specific task/subtask.\n3. Create 'coverage gaps' command to identify untested code related to a particular task.\n4. Develop 'coverage history' command to show how coverage has changed over time.\n5. Implement 'coverage generate' command that uses LLMs to suggest tests for uncovered code.\n6. Add filtering options to focus on specific test types or coverage thresholds.\n7. Create formatted output options (JSON, CSV, markdown tables) for integration with other tools.\n8. Implement colorized terminal output for better readability of coverage reports.\n9. Add batch processing capabilities for running operations across multiple tasks.\n10. Create comprehensive help documentation and examples for each command.\n11. Develop unit and integration tests for CLI commands.\n12. Document command usage patterns and example workflows.",
"status": "pending",
"parentTaskId": 50
},
{
"id": 5,
"title": "Develop AI-powered test generation system",
"description": "Create an intelligent system that uses LLMs to generate targeted tests for uncovered code sections within tasks, integrating with the existing task management workflow.",
"dependencies": [
1,
2,
3,
4
],
"details": "1. Design prompt templates for different test types (unit, integration, E2E) that incorporate task descriptions and code context.\n2. Implement code analysis to extract relevant context from uncovered code sections.\n3. Create a test generation pipeline that combines task metadata, code context, and coverage gaps.\n4. Develop strategies for maintaining test context across task changes and updates.\n5. Implement test quality evaluation to ensure generated tests are meaningful and effective.\n6. Create a feedback mechanism to improve prompts based on acceptance or rejection of generated tests.\n7. Add support for different testing frameworks and languages through templating.\n8. Implement caching to avoid regenerating similar tests.\n9. Create a workflow that integrates with the task management system to suggest tests alongside implementation requirements.\n10. Develop specialized generation modes for edge cases, regression tests, and performance tests.\n11. Add configuration options for controlling test generation style and coverage goals.\n12. Create comprehensive documentation on how to use and extend the test generation system.\n13. Implement evaluation metrics to track the effectiveness of AI-generated tests.",
"status": "pending",
"parentTaskId": 50
}
]
},
{
"id": 51,
@@ -2528,7 +2589,63 @@
"dependencies": [],
"priority": "medium",
"details": "Develop a new command called 'research' that integrates with Perplexity AI's API to fetch information on specified topics. The command should:\n\n1. Accept the following parameters:\n - A search query string (required)\n - A task or subtask ID for context (optional)\n - A custom prompt to guide the research (optional)\n\n2. When a task/subtask ID is provided, extract relevant information from it to enrich the research query with context.\n\n3. Implement proper API integration with Perplexity, including authentication and rate limiting handling.\n\n4. Format and display the research results in a readable format in the terminal, with options to:\n - Save the results to a file\n - Copy results to clipboard\n - Generate a summary of key points\n\n5. Cache research results to avoid redundant API calls for the same queries.\n\n6. Provide a configuration option to set the depth/detail level of research (quick overview vs. comprehensive).\n\n7. Handle errors gracefully, especially network issues or API limitations.\n\nThe command should follow the existing CLI structure and maintain consistency with other commands in the system.",
"testStrategy": "1. Unit tests:\n - Test the command with various combinations of parameters (query only, query+task, query+custom prompt, all parameters)\n - Mock the Perplexity API responses to test different scenarios (successful response, error response, rate limiting)\n - Verify that task context is correctly extracted and incorporated into the research query\n\n2. Integration tests:\n - Test actual API calls to Perplexity with valid credentials (using a test account)\n - Verify the caching mechanism works correctly for repeated queries\n - Test error handling with intentionally invalid requests\n\n3. User acceptance testing:\n - Have team members use the command for real research needs and provide feedback\n - Verify the command works in different network environments\n - Test the command with very long queries and responses\n\n4. Performance testing:\n - Measure and optimize response time for queries\n - Test behavior under poor network conditions\n\nValidate that the research results are properly formatted, readable, and that all output options (save, copy) function correctly."
"testStrategy": "1. Unit tests:\n - Test the command with various combinations of parameters (query only, query+task, query+custom prompt, all parameters)\n - Mock the Perplexity API responses to test different scenarios (successful response, error response, rate limiting)\n - Verify that task context is correctly extracted and incorporated into the research query\n\n2. Integration tests:\n - Test actual API calls to Perplexity with valid credentials (using a test account)\n - Verify the caching mechanism works correctly for repeated queries\n - Test error handling with intentionally invalid requests\n\n3. User acceptance testing:\n - Have team members use the command for real research needs and provide feedback\n - Verify the command works in different network environments\n - Test the command with very long queries and responses\n\n4. Performance testing:\n - Measure and optimize response time for queries\n - Test behavior under poor network conditions\n\nValidate that the research results are properly formatted, readable, and that all output options (save, copy) function correctly.",
"subtasks": [
{
"id": 1,
"title": "Create Perplexity API Client Service",
"description": "Develop a service module that handles all interactions with the Perplexity AI API, including authentication, request formatting, and response handling.",
"dependencies": [],
"details": "Implementation details:\n1. Create a new service file `services/perplexityService.js`\n2. Implement authentication using the PERPLEXITY_API_KEY from environment variables\n3. Create functions for making API requests to Perplexity with proper error handling:\n - `queryPerplexity(searchQuery, options)` - Main function to query the API\n - `handleRateLimiting(response)` - Logic to handle rate limits with exponential backoff\n4. Implement response parsing and formatting functions\n5. Add proper error handling for network issues, authentication problems, and API limitations\n6. Create a simple caching mechanism using a Map or object to store recent query results\n7. Add configuration options for different detail levels (quick vs comprehensive)\n\nTesting approach:\n- Write unit tests using Jest to verify API client functionality with mocked responses\n- Test error handling with simulated network failures\n- Verify caching mechanism works correctly\n- Test with various query types and options",
"status": "pending",
"parentTaskId": 51
},
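A minimal sketch of the service module described in the subtask above. The Perplexity endpoint and payload shape below are assumptions based on its OpenAI-compatible chat API, and global `fetch` assumes Node 18+; only `PERPLEXITY_API_KEY` and `PERPLEXITY_MODEL` come from the existing configuration.

```js
// Sketch of services/perplexityService.js: query + in-memory cache + retry on 429.
// Endpoint/payload shape assumes Perplexity's OpenAI-compatible chat API; Node 18+ fetch.
const cache = new Map();

async function queryPerplexity(searchQuery, { detail = 'medium', retries = 3 } = {}) {
  const cacheKey = `${detail}:${searchQuery}`;
  if (cache.has(cacheKey)) return cache.get(cacheKey);

  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch('https://api.perplexity.ai/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: process.env.PERPLEXITY_MODEL || 'sonar-pro',
        messages: [{ role: 'user', content: searchQuery }],
      }),
    });

    if (res.status === 429) {
      // Rate limited: back off exponentially before retrying.
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1000));
      continue;
    }
    if (!res.ok) throw new Error(`Perplexity request failed: ${res.status}`);

    const data = await res.json();
    const answer = data.choices?.[0]?.message?.content ?? '';
    cache.set(cacheKey, answer);
    return answer;
  }
  throw new Error('Perplexity request failed: rate limit retries exhausted');
}

module.exports = { queryPerplexity };
```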
{
"id": 2,
"title": "Implement Task Context Extraction Logic",
"description": "Create utility functions to extract relevant context from tasks and subtasks to enhance research queries with project-specific information.",
"dependencies": [],
"details": "Implementation details:\n1. Create a new utility file `utils/contextExtractor.js`\n2. Implement a function `extractTaskContext(taskId)` that:\n - Loads the task/subtask data from tasks.json\n - Extracts relevant information (title, description, details)\n - Formats the extracted information into a context string for research\n3. Add logic to handle both task and subtask IDs\n4. Implement a function to combine extracted context with the user's search query\n5. Create a function to identify and extract key terminology from tasks\n6. Add functionality to include parent task context when a subtask ID is provided\n7. Implement proper error handling for invalid task IDs\n\nTesting approach:\n- Write unit tests to verify context extraction from sample tasks\n- Test with various task structures and content types\n- Verify error handling for missing or invalid tasks\n- Test the quality of extracted context with sample queries",
"status": "pending",
"parentTaskId": 51
},
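A sketch of the context-extraction utility described in the subtask above, assuming the tasks.json shape shown in this diff (a top-level `tasks` array, with subtasks referenced as `<taskId>.<subtaskId>`). Function names mirror the ones proposed in the subtask; the formatting of the context string is illustrative.

```js
// Sketch of utils/contextExtractor.js. Assumes a top-level "tasks" array in tasks.json
// and dotted "<taskId>.<subtaskId>" references.
const fs = require('fs');

function extractTaskContext(taskId, tasksPath = 'tasks/tasks.json') {
  const { tasks } = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
  const [parentId, subId] = String(taskId).split('.');

  const task = tasks.find((t) => String(t.id) === parentId);
  if (!task) throw new Error(`Task ${taskId} not found`);

  // When a subtask id is given, include the parent task for extra context.
  const subtask = subId
    ? (task.subtasks || []).find((s) => String(s.id) === subId)
    : null;
  if (subId && !subtask) throw new Error(`Subtask ${taskId} not found`);

  const target = subtask || task;
  return [
    `Title: ${target.title}`,
    `Description: ${target.description}`,
    subtask ? `Parent task: ${task.title}` : '',
    `Details: ${target.details}`,
  ].filter(Boolean).join('\n');
}

function buildResearchQuery(userQuery, taskId) {
  // Combine the user's query with extracted task context, when a task id is provided.
  return taskId ? `${userQuery}\n\nProject context:\n${extractTaskContext(taskId)}` : userQuery;
}

module.exports = { extractTaskContext, buildResearchQuery };
```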
{
"id": 3,
"title": "Build Research Command CLI Interface",
"description": "Implement the Commander.js command structure for the 'research' command with all required options and parameters.",
"dependencies": [
1,
2
],
"details": "Implementation details:\n1. Create a new command file `commands/research.js`\n2. Set up the Commander.js command structure with the following options:\n - Required search query parameter\n - `--task` or `-t` option for task/subtask ID\n - `--prompt` or `-p` option for custom research prompt\n - `--save` or `-s` option to save results to a file\n - `--copy` or `-c` option to copy results to clipboard\n - `--summary` or `-m` option to generate a summary\n - `--detail` or `-d` option to set research depth (default: medium)\n3. Implement command validation logic\n4. Connect the command to the Perplexity service created in subtask 1\n5. Integrate the context extraction logic from subtask 2\n6. Register the command in the main CLI application\n7. Add help text and examples\n\nTesting approach:\n- Test command registration and option parsing\n- Verify command validation logic works correctly\n- Test with various combinations of options\n- Ensure proper error messages for invalid inputs",
"status": "pending",
"parentTaskId": 51
},
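A minimal sketch of the Commander.js registration described in the subtask above. The option flags come from the subtask text; the imported modules are the hypothetical sketches shown earlier, not existing files.

```js
// Sketch of commands/research.js wiring the listed options into Commander.js.
// The required modules refer to the hypothetical sketches above.
const { queryPerplexity } = require('../services/perplexityService');
const { buildResearchQuery } = require('../utils/contextExtractor');

function registerResearchCommand(program) {
  program
    .command('research <query>')
    .description('Research a topic with Perplexity AI, optionally using task context')
    .option('-t, --task <id>', 'task or subtask id to use as context')
    .option('-p, --prompt <text>', 'custom prompt to guide the research')
    .option('-s, --save', 'save results to a file')
    .option('-c, --copy', 'copy results to the clipboard')
    .option('-m, --summary', 'generate a summary of key points')
    .option('-d, --detail <level>', 'research depth (quick, medium, comprehensive)', 'medium')
    .action(async (query, options) => {
      const fullQuery = options.prompt
        ? `${options.prompt}\n\n${buildResearchQuery(query, options.task)}`
        : buildResearchQuery(query, options.task);
      const result = await queryPerplexity(fullQuery, { detail: options.detail });
      console.log(result);
    });
}

module.exports = { registerResearchCommand };
```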
{
"id": 4,
"title": "Implement Results Processing and Output Formatting",
"description": "Create functionality to process, format, and display research results in the terminal with options for saving, copying, and summarizing.",
"dependencies": [
1,
3
],
"details": "Implementation details:\n1. Create a new module `utils/researchFormatter.js`\n2. Implement terminal output formatting with:\n - Color-coded sections for better readability\n - Proper text wrapping for terminal width\n - Highlighting of key points\n3. Add functionality to save results to a file:\n - Create a `research-results` directory if it doesn't exist\n - Save results with timestamp and query in filename\n - Support multiple formats (text, markdown, JSON)\n4. Implement clipboard copying using a library like `clipboardy`\n5. Create a summarization function that extracts key points from research results\n6. Add progress indicators during API calls\n7. Implement pagination for long results\n\nTesting approach:\n- Test output formatting with various result lengths and content types\n- Verify file saving functionality creates proper files with correct content\n- Test clipboard functionality\n- Verify summarization produces useful results",
"status": "pending",
"parentTaskId": 51
},
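A small sketch of the formatting and save-to-file behavior described in the subtask above. It assumes chalk v4 (CommonJS require; newer chalk releases are ESM-only); the `research-results` directory and file-name convention follow the subtask text, while the slug/timestamp scheme is an assumption.

```js
// Sketch of utils/researchFormatter.js: colorized terminal output plus save-to-file.
// Assumes chalk v4 (CommonJS); file naming scheme is illustrative.
const fs = require('fs');
const path = require('path');
const chalk = require('chalk');

function formatForTerminal(query, result) {
  return [
    chalk.bold.cyan(`Research: ${query}`),
    chalk.gray('-'.repeat(60)),
    result,
  ].join('\n');
}

function saveResults(query, result, dir = 'research-results') {
  fs.mkdirSync(dir, { recursive: true });
  const stamp = new Date().toISOString().replace(/[:.]/g, '-');
  const slug = query.toLowerCase().replace(/[^a-z0-9]+/g, '-').slice(0, 40);
  const file = path.join(dir, `${stamp}-${slug}.md`);
  fs.writeFileSync(file, `# ${query}\n\n${result}\n`);
  return file;
}

module.exports = { formatForTerminal, saveResults };
```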
{
"id": 5,
"title": "Implement Caching and Results Management System",
"description": "Create a persistent caching system for research results and implement functionality to manage, retrieve, and reference previous research.",
"dependencies": [
1,
4
],
"details": "Implementation details:\n1. Create a research results database using a simple JSON file or SQLite:\n - Store queries, timestamps, and results\n - Index by query and related task IDs\n2. Implement cache retrieval and validation:\n - Check for cached results before making API calls\n - Validate cache freshness with configurable TTL\n3. Add commands to manage research history:\n - List recent research queries\n - Retrieve past research by ID or search term\n - Clear cache or delete specific entries\n4. Create functionality to associate research results with tasks:\n - Add metadata linking research to specific tasks\n - Implement command to show all research related to a task\n5. Add configuration options for cache behavior in user settings\n6. Implement export/import functionality for research data\n\nTesting approach:\n- Test cache storage and retrieval with various queries\n- Verify cache invalidation works correctly\n- Test history management commands\n- Verify task association functionality\n- Test with large cache sizes to ensure performance",
"status": "pending",
"parentTaskId": 51
}
]
},
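For the persistent cache in subtask 5 of the task above, a JSON-file store with a TTL check is one simple option. The store path and record shape below are illustrative assumptions, not the final design.

```js
// Sketch of a JSON-file research cache with TTL (subtask 51.5).
// Store path and record shape are illustrative assumptions.
const fs = require('fs');

const STORE = '.taskmaster-research-cache.json';
const DEFAULT_TTL_MS = 24 * 60 * 60 * 1000; // one day

function loadStore() {
  return fs.existsSync(STORE) ? JSON.parse(fs.readFileSync(STORE, 'utf8')) : {};
}

function getCached(query, ttlMs = DEFAULT_TTL_MS) {
  const entry = loadStore()[query];
  if (!entry) return null;
  // Treat entries older than the TTL as stale.
  return Date.now() - entry.timestamp < ttlMs ? entry.result : null;
}

function setCached(query, result, taskId = null) {
  const store = loadStore();
  store[query] = { result, taskId, timestamp: Date.now() };
  fs.writeFileSync(STORE, JSON.stringify(store, null, 2));
}

module.exports = { getCached, setCached };
```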
{
"id": 52,
@@ -2569,6 +2686,46 @@
"priority": "medium",
"details": "This task involves modifying the command parsing logic in commands.js to support positional arguments as an alternative to the current flag-based approach. The implementation should:\n\n1. Update the argument parsing logic to detect when arguments are provided without flag prefixes (--)\n2. Map positional arguments to their corresponding parameters based on their order\n3. For each command in commands.js, define a consistent positional argument order (e.g., for set-status: first arg = id, second arg = status)\n4. Maintain backward compatibility with the existing flag-based syntax\n5. Handle edge cases such as:\n - Commands with optional parameters\n - Commands with multiple parameters\n - Commands that accept arrays or complex data types\n6. Update the help text for each command to show both usage patterns\n7. Modify the cursor rules to work with both input styles\n8. Ensure error messages are clear when positional arguments are provided incorrectly\n\nExample implementations:\n- `task-master set-status 25 done` should be equivalent to `task-master set-status --id=25 --status=done`\n- `task-master add-task \"New task name\" \"Task description\"` should be equivalent to `task-master add-task --name=\"New task name\" --description=\"Task description\"`\n\nThe code should prioritize maintaining the existing functionality while adding this new capability.",
"testStrategy": "Testing should verify both the new positional argument functionality and continued support for flag-based syntax:\n\n1. Unit tests:\n - Create tests for each command that verify it works with both positional and flag-based arguments\n - Test edge cases like missing arguments, extra arguments, and mixed usage (some positional, some flags)\n - Verify help text correctly displays both usage patterns\n\n2. Integration tests:\n - Test the full CLI with various commands using both syntax styles\n - Verify that output is identical regardless of which syntax is used\n - Test commands with different numbers of arguments\n\n3. Manual testing:\n - Run through a comprehensive set of real-world usage scenarios with both syntax styles\n - Verify cursor behavior works correctly with both input methods\n - Check that error messages are helpful when incorrect positional arguments are provided\n\n4. Documentation verification:\n - Ensure README and help text accurately reflect the new dual syntax support\n - Verify examples in documentation show both styles where appropriate\n\nAll tests should pass with 100% of commands supporting both argument styles without any regression in existing functionality."
},
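A minimal sketch of the positional-to-flag mapping described in the task above. The per-command argument order table is illustrative; the real orders would live alongside the command definitions in commands.js.

```js
// Sketch of mapping positional arguments onto the existing flag parameters.
// The per-command order table is illustrative.
const POSITIONAL_ORDER = {
  'set-status': ['id', 'status'],
  'add-task': ['name', 'description'],
};

function normalizeArgs(command, rawArgs) {
  const flags = {};
  const positionals = [];

  for (const arg of rawArgs) {
    if (arg.startsWith('--')) {
      const [key, value = true] = arg.slice(2).split('=');
      flags[key] = value; // existing flag-based syntax still takes precedence
    } else {
      positionals.push(arg);
    }
  }

  // Map remaining positional values onto parameters by their defined order.
  (POSITIONAL_ORDER[command] || []).forEach((name, i) => {
    if (flags[name] === undefined && positionals[i] !== undefined) {
      flags[name] = positionals[i];
    }
  });
  return flags;
}

// normalizeArgs('set-status', ['25', 'done'])              -> { id: '25', status: 'done' }
// normalizeArgs('set-status', ['--id=25', '--status=done']) -> same result
module.exports = { normalizeArgs };
```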
{
"id": 56,
"title": "Refactor Task-Master Files into Node Module Structure",
"description": "Restructure the task-master files by moving them from the project root into a proper node module structure to improve organization and maintainability.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "This task involves a significant refactoring of the task-master system to follow better Node.js module practices. Currently, task-master files are located in the project root, which creates clutter and doesn't follow best practices for Node.js applications. The refactoring should:\n\n1. Create a dedicated directory structure within node_modules or as a local package\n2. Update all import/require paths throughout the codebase to reference the new module location\n3. Reorganize the files into a logical structure (lib/, utils/, commands/, etc.)\n4. Ensure the module has a proper package.json with dependencies and exports\n5. Update any build processes, scripts, or configuration files to reflect the new structure\n6. Maintain backward compatibility where possible to minimize disruption\n7. Document the new structure and any changes to usage patterns\n\nThis is a high-risk refactoring as it touches many parts of the system, so it should be approached methodically with frequent testing. Consider using a feature branch and implementing the changes incrementally rather than all at once.",
"testStrategy": "Testing for this refactoring should be comprehensive to ensure nothing breaks during the restructuring:\n\n1. Create a complete inventory of existing functionality through automated tests before starting\n2. Implement unit tests for each module to verify they function correctly in the new structure\n3. Create integration tests that verify the interactions between modules work as expected\n4. Test all CLI commands to ensure they continue to function with the new module structure\n5. Verify that all import/require statements resolve correctly\n6. Test on different environments (development, staging) to ensure compatibility\n7. Perform regression testing on all features that depend on task-master functionality\n8. Create a rollback plan and test it to ensure we can revert changes if critical issues arise\n9. Conduct performance testing to ensure the refactoring doesn't introduce overhead\n10. Have multiple developers test the changes on their local environments before merging"
},
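One way the restructured package described above could expose its pieces is through a single entry point that re-exports from the proposed lib/, utils/, and commands/ directories. The module paths below are hypothetical suggestions, not the final layout.

```js
// Illustrative index.js for the proposed layout. Consumers would then do
//   const { registerCommands } = require('task-master-ai');
// instead of requiring files from the project root. All paths are hypothetical.
module.exports = {
  ...require('./lib/taskManager'),
  ...require('./utils/contextExtractor'),
  registerCommands: require('./commands').registerCommands,
};
```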
{
"id": 57,
"title": "Enhance Task-Master CLI User Experience and Interface",
"description": "Improve the Task-Master CLI's user experience by refining the interface, reducing verbose logging, and adding visual polish to create a more professional and intuitive tool.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "The current Task-Master CLI interface is functional but lacks polish and produces excessive log output. This task involves several key improvements:\n\n1. Log Management:\n - Implement log levels (ERROR, WARN, INFO, DEBUG, TRACE)\n - Only show INFO and above by default\n - Add a --verbose flag to show all logs\n - Create a dedicated log file for detailed logs\n\n2. Visual Enhancements:\n - Add a clean, branded header when the tool starts\n - Implement color-coding for different types of messages (success in green, errors in red, etc.)\n - Use spinners or progress indicators for operations that take time\n - Add clear visual separation between command input and output\n\n3. Interactive Elements:\n - Add loading animations for longer operations\n - Implement interactive prompts for complex inputs instead of requiring all parameters upfront\n - Add confirmation dialogs for destructive operations\n\n4. Output Formatting:\n - Format task listings in tables with consistent spacing\n - Implement a compact mode and a detailed mode for viewing tasks\n - Add visual indicators for task status (icons or colors)\n\n5. Help and Documentation:\n - Enhance help text with examples and clearer descriptions\n - Add contextual hints for common next steps after commands\n\nUse libraries like chalk, ora, inquirer, and boxen to implement these improvements. Ensure the interface remains functional in CI/CD environments where interactive elements might not be supported.",
"testStrategy": "Testing should verify both functionality and user experience improvements:\n\n1. Automated Tests:\n - Create unit tests for log level filtering functionality\n - Test that all commands still function correctly with the new UI\n - Verify that non-interactive mode works in CI environments\n - Test that verbose and quiet modes function as expected\n\n2. User Experience Testing:\n - Create a test script that runs through common user flows\n - Capture before/after screenshots for visual comparison\n - Measure and compare the number of lines output for common operations\n\n3. Usability Testing:\n - Have 3-5 team members perform specific tasks using the new interface\n - Collect feedback on clarity, ease of use, and visual appeal\n - Identify any confusion points or areas for improvement\n\n4. Edge Case Testing:\n - Test in terminals with different color schemes and sizes\n - Verify functionality in environments without color support\n - Test with very large task lists to ensure formatting remains clean\n\nAcceptance Criteria:\n- Log output is reduced by at least 50% in normal operation\n- All commands provide clear visual feedback about their progress and completion\n- Help text is comprehensive and includes examples\n- Interface is visually consistent across all commands\n- Tool remains fully functional in non-interactive environments"
},
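A small sketch of the log-level filtering from point 1 of the task above. Level names follow the task text; the chalk v4 CommonJS require and the `--verbose` handling are assumptions.

```js
// Sketch of log-level filtering with color-coded output (chalk v4 assumed).
const chalk = require('chalk');

const LEVELS = { ERROR: 0, WARN: 1, INFO: 2, DEBUG: 3, TRACE: 4 };
const COLORS = { ERROR: chalk.red, WARN: chalk.yellow, INFO: chalk.white, DEBUG: chalk.gray, TRACE: chalk.gray };

// --verbose shows everything; otherwise only INFO and above are printed by default.
const threshold = process.argv.includes('--verbose')
  ? LEVELS.TRACE
  : LEVELS[(process.env.LOG_LEVEL || 'INFO').toUpperCase()] ?? LEVELS.INFO;

function log(level, message) {
  if (LEVELS[level] <= threshold) {
    console.log(COLORS[level](`[${level}] ${message}`));
  }
}

log('INFO', 'Task list loaded');  // printed by default
log('DEBUG', 'Raw API payload');  // printed only with --verbose or LOG_LEVEL=debug
```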
{
"id": 58,
"title": "Implement Elegant Package Update Mechanism for Task-Master",
"description": "Create a robust update mechanism that handles package updates gracefully, ensuring all necessary files are updated when the global package is upgraded.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "Develop a comprehensive update system with these components:\n\n1. **Update Detection**: When task-master runs, check if the current version matches the installed version. If not, notify the user an update is available.\n\n2. **Update Command**: Implement a dedicated `task-master update` command that:\n - Updates the global package (`npm -g task-master-ai@latest`)\n - Automatically runs necessary initialization steps\n - Preserves user configurations while updating system files\n\n3. **Smart File Management**:\n - Create a manifest of core files with checksums\n - During updates, compare existing files with the manifest\n - Only overwrite files that have changed in the update\n - Preserve user-modified files with an option to merge changes\n\n4. **Configuration Versioning**:\n - Add version tracking to configuration files\n - Implement migration paths for configuration changes between versions\n - Provide backward compatibility for older configurations\n\n5. **Update Notifications**:\n - Add a non-intrusive notification when updates are available\n - Include a changelog summary of what's new\n\nThis system should work seamlessly with the existing `task-master init` command but provide a more automated and user-friendly update experience.",
"testStrategy": "Test the update mechanism with these specific scenarios:\n\n1. **Version Detection Test**:\n - Install an older version, then verify the system correctly detects when a newer version is available\n - Test with minor and major version changes\n\n2. **Update Command Test**:\n - Verify `task-master update` successfully updates the global package\n - Confirm all necessary files are updated correctly\n - Test with and without user-modified files present\n\n3. **File Preservation Test**:\n - Modify configuration files, then update\n - Verify user changes are preserved while system files are updated\n - Test with conflicts between user changes and system updates\n\n4. **Rollback Test**:\n - Implement and test a rollback mechanism if updates fail\n - Verify system returns to previous working state\n\n5. **Integration Test**:\n - Create a test project with the current version\n - Run through the update process\n - Verify all functionality continues to work after update\n\n6. **Edge Case Tests**:\n - Test updating with insufficient permissions\n - Test updating with network interruptions\n - Test updating from very old versions to latest"
},
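A minimal sketch of the update-detection step from point 1 of the task above. `npm view <pkg> version` prints the latest published version; the `semver` dependency used for comparison is an assumption, as is the silent skip when the registry is unreachable.

```js
// Sketch of update detection: compare the installed version against the npm registry.
const { execSync } = require('child_process');
const semver = require('semver'); // assumed dependency for safe version comparison
const { version: installed } = require('./package.json');

function checkForUpdate() {
  try {
    const latest = execSync('npm view task-master-ai version', { encoding: 'utf8' }).trim();
    if (semver.gt(latest, installed)) {
      console.log(`Update available: ${installed} -> ${latest}. Run \`task-master update\`.`);
      return true;
    }
  } catch {
    // Offline or registry unreachable: skip the check silently.
  }
  return false;
}

module.exports = { checkForUpdate };
```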
{
"id": 59,
"title": "Remove Manual Package.json Modifications and Implement Automatic Dependency Management",
"description": "Eliminate code that manually modifies users' package.json files and implement proper npm dependency management that automatically handles package requirements when users install task-master-ai.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "Currently, the application is attempting to manually modify users' package.json files, which is not the recommended approach for npm packages. Instead:\n\n1. Review all code that directly manipulates package.json files in users' projects\n2. Remove these manual modifications\n3. Properly define all dependencies in the package.json of task-master-ai itself\n4. Ensure all peer dependencies are correctly specified\n5. For any scripts that need to be available to users, use proper npm bin linking or npx commands\n6. Update the installation process to leverage npm's built-in dependency management\n7. If configuration is needed in users' projects, implement a proper initialization command that creates config files rather than modifying package.json\n8. Document the new approach in the README and any other relevant documentation\n\nThis change will make the package more reliable, follow npm best practices, and prevent potential conflicts or errors when modifying users' project files.",
"testStrategy": "1. Create a fresh test project directory\n2. Install the updated task-master-ai package using npm install task-master-ai\n3. Verify that no code attempts to modify the test project's package.json\n4. Confirm all dependencies are properly installed in node_modules\n5. Test all commands to ensure they work without the previous manual package.json modifications\n6. Try installing in projects with various existing configurations to ensure no conflicts occur\n7. Test the uninstall process to verify it cleanly removes the package without leaving unwanted modifications\n8. Verify the package works in different npm environments (npm 6, 7, 8) and with different Node.js versions\n9. Create an integration test that simulates a real user workflow from installation through usage"
}
]
}
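For point 7 of task 59 above (write a config file instead of modifying the user's package.json), a minimal sketch of an init step follows. The `.taskmasterconfig` file name is hypothetical; the default values mirror the model configuration introduced in this commit.

```js
// Sketch of an init step that writes its own config file rather than touching package.json.
// The config file name is hypothetical; defaults mirror this commit's model configuration.
const fs = require('fs');
const path = require('path');

function writeDefaultConfig(projectRoot = process.cwd()) {
  const configPath = path.join(projectRoot, '.taskmasterconfig');
  if (fs.existsSync(configPath)) return configPath; // never clobber user settings

  const defaults = {
    model: 'claude-3-7-sonnet-20250219',
    maxTokens: 128000,
    temperature: 0.2,
    defaultSubtasks: 5,
    defaultPriority: 'medium',
  };
  fs.writeFileSync(configPath, JSON.stringify(defaults, null, 2) + '\n');
  return configPath;
}

module.exports = { writeDefaultConfig };
```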

2636
tasks/tasks.json.bak Normal file

File diff suppressed because one or more lines are too long