Merge crunchyman/feat.add.mcp.2 into next
tasks/task_001.txt
@@ -1,6 +1,6 @@
 # Task ID: 1
 # Title: Implement Task Data Structure
-# Status: done
+# Status: in-progress
 # Dependencies: None
 # Priority: high
 # Description: Design and implement the core tasks.json structure that will serve as the single source of truth for the system.

tasks/task_034.txt (new file, 156 lines)
@@ -0,0 +1,156 @@
# Task ID: 34
# Title: Implement updateTask Command for Single Task Updates
# Status: done
# Dependencies: None
# Priority: high
# Description: Create a new command that allows updating a specific task by ID using AI-driven refinement while preserving completed subtasks and supporting all existing update command options.

# Details:
Implement a new command called 'updateTask' that focuses on updating a single task rather than all tasks from an ID onwards. The implementation should:

1. Accept a single task ID as a required parameter
2. Use the same AI-driven approach as the existing update command to refine the task
3. Preserve the completion status of any subtasks that were previously marked as complete
4. Support all options from the existing update command including:
   - The research flag for Perplexity integration
   - Any formatting or refinement options
   - Task context options
5. Update the CLI help documentation to include this new command
6. Ensure the command follows the same pattern as other commands in the codebase
7. Add appropriate error handling for cases where the specified task ID doesn't exist
8. Implement the ability to update task title, description, and details separately if needed
9. Ensure the command returns appropriate success/failure messages
10. Optimize the implementation to only process the single task rather than scanning through all tasks

The command should reuse existing AI prompt templates where possible but modify them to focus on refining a single task rather than multiple tasks.
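
As a rough illustration of points 1, 3, 7, and 10 above, a minimal sketch of the core flow follows. It assumes tasks.json holds a top-level `tasks` array; `generateRefinedTask` stands in for whatever AI helper the existing update command already uses and is not the project's actual API.

```js
// task-manager.js (sketch) -- update a single task by ID, preserving completed subtasks
import fs from 'fs';

async function updateTaskById(tasksPath, taskId, prompt, { useResearch = false } = {}) {
  const data = JSON.parse(fs.readFileSync(tasksPath, 'utf8'));
  const task = data.tasks.find((t) => t.id === taskId);
  if (!task) {
    // Reported to the user as a friendly error; no need to scan or rewrite other tasks.
    throw new Error(`Task with ID ${taskId} not found`);
  }

  // Remember which subtasks are already done so the AI refinement cannot regress them.
  const completed = new Map(
    (task.subtasks || []).filter((s) => s.status === 'done').map((s) => [s.id, s])
  );

  // Hypothetical helper: sends the single task plus the user's prompt to the AI
  // (with Perplexity-backed research when useResearch is true) and returns a refined task.
  const refined = await generateRefinedTask(task, prompt, { useResearch });

  // Restore completed subtasks verbatim in the refined result.
  refined.subtasks = (refined.subtasks || []).map((s) =>
    completed.has(s.id) ? completed.get(s.id) : s
  );

  data.tasks = data.tasks.map((t) => (t.id === taskId ? refined : t));
  fs.writeFileSync(tasksPath, JSON.stringify(data, null, 2));
  return refined;
}

export { updateTaskById };
```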

# Test Strategy:
Testing should verify the following aspects:

1. **Basic Functionality Test**: Verify that the command successfully updates a single task when given a valid task ID
2. **Preservation Test**: Create a task with completed subtasks, update it, and verify the completion status remains intact
3. **Research Flag Test**: Test the command with the research flag and verify it correctly integrates with Perplexity
4. **Error Handling Tests**:
   - Test with non-existent task ID and verify appropriate error message
   - Test with invalid parameters and verify helpful error messages
5. **Integration Test**: Run a complete workflow that creates a task, updates it with updateTask, and then verifies the changes are persisted
6. **Comparison Test**: Compare the results of updating a single task with updateTask versus using the original update command on the same task to ensure consistent quality
7. **Performance Test**: Measure execution time compared to the full update command to verify efficiency gains
8. **CLI Help Test**: Verify the command appears correctly in help documentation with appropriate descriptions

Create unit tests for the core functionality and integration tests for the complete workflow. Document any edge cases discovered during testing.

# Subtasks:
## 1. Create updateTaskById function in task-manager.js [done]
### Dependencies: None
### Description: Implement a new function in task-manager.js that focuses on updating a single task by ID using AI-driven refinement while preserving completed subtasks.
### Details:
Implementation steps:
1. Create a new `updateTaskById` function in task-manager.js that accepts parameters: taskId and an options object (containing research flag, formatting options, etc.)
2. Implement logic to find a specific task by ID in the tasks array
3. Add appropriate error handling for cases where the task ID doesn't exist (throw a custom error)
4. Reuse existing AI prompt templates but modify them to focus on refining a single task
5. Implement logic to preserve completion status of subtasks that were previously marked as complete
6. Add support for updating task title, description, and details separately based on options
7. Optimize the implementation to only process the single task rather than scanning through all tasks
8. Return the updated task and appropriate success/failure messages

Testing approach:
- Unit test the function with various scenarios including:
  - Valid task ID with different update options
  - Non-existent task ID
  - Task with completed subtasks to verify preservation
  - Different combinations of update options

## 2. Implement updateTask command in commands.js [done]
### Dependencies: 34.1
### Description: Create a new command called 'updateTask' in commands.js that leverages the updateTaskById function to update a specific task by ID.
### Details:
Implementation steps:
1. Create a new command object for 'updateTask' in commands.js following the Command pattern
2. Define command parameters including a required taskId parameter
3. Support all options from the existing update command:
   - Research flag for Perplexity integration
   - Formatting and refinement options
   - Task context options
4. Implement the command handler function that calls the updateTaskById function from task-manager.js
5. Add appropriate error handling to catch and display user-friendly error messages
6. Ensure the command follows the same pattern as other commands in the codebase
7. Implement proper validation of input parameters
8. Format and return appropriate success/failure messages to the user

Testing approach:
- Unit test the command handler with various input combinations
- Test error handling scenarios
- Verify command options are correctly passed to the updateTaskById function
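
A minimal sketch of the command registration, assuming the CLI is built on Commander; the flag names below mirror the options listed in this task but are illustrative placeholders until checked against the existing update command.

```js
// commands.js (sketch) -- register the updateTask command
import { program } from 'commander';
import { updateTaskById } from './task-manager.js';

program
  .command('updateTask')
  .description('Update a single task by ID with AI-driven refinement')
  .requiredOption('-i, --id <id>', 'ID of the task to update')
  .option('-p, --prompt <text>', 'Context describing the desired changes')
  .option('-r, --research', 'Use Perplexity-backed research for the refinement')
  .option('-f, --file <path>', 'Path to tasks.json', 'tasks/tasks.json')
  .action(async (opts) => {
    try {
      const task = await updateTaskById(opts.file, parseInt(opts.id, 10), opts.prompt, {
        useResearch: Boolean(opts.research),
      });
      console.log(`Updated task ${task.id}: ${task.title}`);
    } catch (err) {
      console.error(`Error: ${err.message}`); // user-friendly failure message
      process.exitCode = 1;
    }
  });
```

The existing update command's option names should be mirrored exactly in the final implementation so users can switch between the two commands without relearning flags.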

## 3. Add comprehensive error handling and validation [done]
### Dependencies: 34.1, 34.2
### Description: Implement robust error handling and validation for the updateTask command to ensure proper user feedback and system stability.
### Details:
Implementation steps:
1. Create custom error types for different failure scenarios (TaskNotFoundError, ValidationError, etc.)
2. Implement input validation for the taskId parameter and all options
3. Add proper error handling for AI service failures with appropriate fallback mechanisms
4. Implement concurrency handling to prevent conflicts when multiple updates occur simultaneously
5. Add comprehensive logging for debugging and auditing purposes
6. Ensure all error messages are user-friendly and actionable
7. Implement proper HTTP status codes for API responses if applicable
8. Add validation to ensure the task exists before attempting updates

Testing approach:
- Test various error scenarios including invalid inputs, non-existent tasks, and API failures
- Verify error messages are clear and helpful
- Test concurrency scenarios with multiple simultaneous updates
- Verify logging captures appropriate information for troubleshooting
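
A minimal sketch of the custom error types named in step 1, plus a small input-validation helper; the module path and field names are illustrative.

```js
// errors.js (sketch) -- custom error types for the updateTask command
class TaskNotFoundError extends Error {
  constructor(taskId) {
    super(`Task with ID ${taskId} not found`);
    this.name = 'TaskNotFoundError';
    this.taskId = taskId;
  }
}

class ValidationError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ValidationError';
  }
}

// Validate the raw CLI input before touching tasks.json.
function validateTaskId(rawId) {
  const id = Number(rawId);
  if (!Number.isInteger(id) || id <= 0) {
    throw new ValidationError(`Invalid task ID "${rawId}": expected a positive integer`);
  }
  return id;
}

export { TaskNotFoundError, ValidationError, validateTaskId };
```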

## 4. Write comprehensive tests for updateTask command [done]
### Dependencies: 34.1, 34.2, 34.3
### Description: Create a comprehensive test suite for the updateTask command to ensure it works correctly in all scenarios and maintains backward compatibility.
### Details:
Implementation steps:
1. Create unit tests for the updateTaskById function in task-manager.js
   - Test finding and updating tasks with various IDs
   - Test preservation of completed subtasks
   - Test different update options combinations
   - Test error handling for non-existent tasks
2. Create unit tests for the updateTask command in commands.js
   - Test command parameter parsing
   - Test option handling
   - Test error scenarios and messages
3. Create integration tests that verify the end-to-end flow
   - Test the command with actual AI service integration
   - Test with mock AI responses for predictable testing
4. Implement test fixtures and mocks for consistent testing
5. Add performance tests to ensure the command is efficient
6. Test edge cases such as empty tasks, tasks with many subtasks, etc.

Testing approach:
- Use Jest or similar testing framework
- Implement mocks for external dependencies like AI services
- Create test fixtures for consistent test data
- Use snapshot testing for command output verification
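
A minimal Jest sketch of the non-existent-task unit test from step 1, written against the updateTaskById sketch earlier in this task; module paths, the fixture shape, and the fs-spying approach are illustrative assumptions.

```js
// task-manager.test.js (sketch) -- error handling for a missing task ID
import { jest } from '@jest/globals';
import fs from 'fs';
import { updateTaskById } from './task-manager.js';

describe('updateTaskById', () => {
  afterEach(() => jest.restoreAllMocks());

  test('rejects when the task ID does not exist', async () => {
    const fixture = { tasks: [{ id: 1, title: 'Implement Task Data Structure', subtasks: [] }] };
    // Stub file access so the test never touches a real tasks.json.
    jest.spyOn(fs, 'readFileSync').mockReturnValue(JSON.stringify(fixture));
    jest.spyOn(fs, 'writeFileSync').mockImplementation(() => {});

    await expect(
      updateTaskById('tasks/tasks.json', 999, 'irrelevant prompt', { useResearch: false })
    ).rejects.toThrow(/not found/i);
  });
});
```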

## 5. Update CLI documentation and help text [done]
### Dependencies: 34.2
### Description: Update the CLI help documentation to include the new updateTask command and ensure users understand its purpose and options.
### Details:
Implementation steps:
1. Add comprehensive help text for the updateTask command including:
   - Command description
   - Required and optional parameters
   - Examples of usage
   - Description of all supported options
2. Update the main CLI help documentation to include the new command
3. Add the command to any relevant command groups or categories
4. Create usage examples that demonstrate common scenarios
5. Update README.md and other documentation files to include information about the new command
6. Add inline code comments explaining the implementation details
7. Update any API documentation if applicable
8. Create or update user guides with the new functionality

Testing approach:
- Verify help text is displayed correctly when running `--help`
- Review documentation for clarity and completeness
- Have team members review the documentation for usability
- Test examples to ensure they work as documented

tasks/task_035.txt (new file, 48 lines)
@@ -0,0 +1,48 @@
# Task ID: 35
# Title: Integrate Grok3 API for Research Capabilities
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Replace the current Perplexity API integration with Grok3 API for all research-related functionalities while maintaining existing feature parity.

# Details:
This task involves migrating from Perplexity to Grok3 API for research capabilities throughout the application. Implementation steps include:

1. Create a new API client module for Grok3 in `src/api/grok3.ts` that handles authentication, request formatting, and response parsing
2. Update the research service layer to use the new Grok3 client instead of Perplexity
3. Modify the request payload structure to match Grok3's expected format (parameters like temperature, max_tokens, etc.)
4. Update response handling to properly parse and extract Grok3's response format
5. Implement proper error handling for Grok3-specific error codes and messages
6. Update environment variables and configuration files to include Grok3 API keys and endpoints
7. Ensure rate limiting and quota management are properly implemented according to Grok3's specifications
8. Update any UI components that display research provider information to show Grok3 instead of Perplexity
9. Maintain backward compatibility for any stored research results from Perplexity
10. Document the new API integration in the developer documentation

Grok3 API has different parameter requirements and response formats compared to Perplexity, so careful attention must be paid to these differences during implementation.
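
A minimal sketch of the client module from step 1. The endpoint URL, payload fields, environment variable names, and response extraction are placeholders for illustration only and must be replaced with the values from Grok3's actual API documentation; the sketch is written in plain Node JavaScript even though the task names a `.ts` module.

```js
// src/api/grok3 client (sketch) -- authentication, request formatting, response parsing
// ASSUMPTIONS: endpoint, payload shape, and GROK3_API_KEY are illustrative, not official.
const GROK3_ENDPOINT = process.env.GROK3_ENDPOINT || 'https://api.example.com/grok3/chat';

async function grok3Research(query, { temperature = 0.2, maxTokens = 1024 } = {}) {
  const res = await fetch(GROK3_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.GROK3_API_KEY}`,
    },
    body: JSON.stringify({
      // Field names mirror the "temperature, max_tokens" parameters mentioned above.
      messages: [{ role: 'user', content: query }],
      temperature,
      max_tokens: maxTokens,
    }),
  });

  if (!res.ok) {
    // Map provider errors (rate limits, auth failures) to a single error surfaced upstream.
    throw new Error(`Grok3 request failed: ${res.status} ${res.statusText}`);
  }

  const data = await res.json();
  // Exact response shape depends on Grok3; adjust the extraction accordingly.
  return data.choices?.[0]?.message?.content ?? '';
}

export { grok3Research };
```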

# Test Strategy:
Testing should verify that the Grok3 API integration works correctly and maintains feature parity with the previous Perplexity implementation:

1. Unit tests:
   - Test the Grok3 API client with mocked responses
   - Verify proper error handling for various error scenarios (rate limits, authentication failures, etc.)
   - Test the transformation of application requests to Grok3-compatible format

2. Integration tests:
   - Perform actual API calls to Grok3 with test credentials
   - Verify that research results are correctly parsed and returned
   - Test with various types of research queries to ensure broad compatibility

3. End-to-end tests:
   - Test the complete research flow from UI input to displayed results
   - Verify that all existing research features work with the new API

4. Performance tests:
   - Compare response times between Perplexity and Grok3
   - Ensure the application handles any differences in response time appropriately

5. Regression tests:
   - Verify that existing features dependent on research capabilities continue to work
   - Test that stored research results from Perplexity are still accessible and displayed correctly

Create a test environment with both APIs available to compare results and ensure quality before fully replacing Perplexity with Grok3.

tasks/task_036.txt (new file, 48 lines)
@@ -0,0 +1,48 @@
# Task ID: 36
# Title: Add Ollama Support for AI Services as Claude Alternative
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement Ollama integration as an alternative to Claude for all main AI services, allowing users to run local language models instead of relying on cloud-based Claude API.

# Details:
This task involves creating a comprehensive Ollama integration that can replace Claude across all main AI services in the application. Implementation should include:

1. Create an OllamaService class that implements the same interface as the ClaudeService to ensure compatibility
2. Add configuration options to specify Ollama endpoint URL (default: http://localhost:11434)
3. Implement model selection functionality to allow users to choose which Ollama model to use (e.g., llama3, mistral, etc.)
4. Handle prompt formatting specific to Ollama models, ensuring proper system/user message separation
5. Implement proper error handling for cases where Ollama server is unavailable or returns errors
6. Add fallback mechanism to Claude when Ollama fails or isn't configured
7. Update the AI service factory to conditionally create either Claude or Ollama service based on configuration
8. Ensure token counting and rate limiting are appropriately handled for Ollama models
9. Add documentation for users explaining how to set up and use Ollama with the application
10. Optimize prompt templates specifically for Ollama models if needed

The implementation should be toggled through a configuration option (useOllama: true/false) and should maintain all existing functionality currently provided by Claude.
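
A minimal sketch of the service class and the factory toggle. It targets Ollama's local /api/chat endpoint; the assumed ClaudeService interface (a single `chat(messages)` method returning text) and config field names are illustrative and should be matched to the real service layer.

```js
// ollama-service.js (sketch) -- local alternative to the Claude service
class OllamaService {
  constructor({ baseUrl = 'http://localhost:11434', model = 'llama3' } = {}) {
    this.baseUrl = baseUrl;
    this.model = model;
  }

  // Same shape as the assumed ClaudeService interface: chat messages in, text out.
  async chat(messages) {
    const res = await fetch(`${this.baseUrl}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: this.model, messages, stream: false }),
    });
    if (!res.ok) {
      throw new Error(`Ollama server error: ${res.status} ${res.statusText}`);
    }
    const data = await res.json();
    return data.message?.content ?? '';
  }
}

// Factory toggle: use Ollama when enabled, otherwise (or on failure) fall back to Claude.
function createAIService(config, claudeService) {
  if (!config.useOllama) return claudeService;
  const ollama = new OllamaService(config.ollama);
  return {
    async chat(messages) {
      try {
        return await ollama.chat(messages);
      } catch (err) {
        console.warn(`Ollama unavailable (${err.message}); falling back to Claude`);
        return claudeService.chat(messages);
      }
    },
  };
}

export { OllamaService, createAIService };
```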

# Test Strategy:
Testing should verify that Ollama integration works correctly as a drop-in replacement for Claude:

1. Unit tests:
   - Test OllamaService class methods in isolation with mocked responses
   - Verify proper error handling when Ollama server is unavailable
   - Test fallback mechanism to Claude when configured

2. Integration tests:
   - Test with actual Ollama server running locally with at least two different models
   - Verify all AI service functions work correctly with Ollama
   - Compare outputs between Claude and Ollama for quality assessment

3. Configuration tests:
   - Verify toggling between Claude and Ollama works as expected
   - Test with various model configurations

4. Performance tests:
   - Measure and compare response times between Claude and Ollama
   - Test with different load scenarios

5. Manual testing:
   - Verify all main AI features work correctly with Ollama
   - Test edge cases like very long inputs or specialized tasks

Create a test document comparing output quality between Claude and various Ollama models to help users understand the tradeoffs.

tasks/task_037.txt (new file, 49 lines)
@@ -0,0 +1,49 @@
# Task ID: 37
# Title: Add Gemini Support for Main AI Services as Claude Alternative
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Implement Google's Gemini API integration as an alternative to Claude for all main AI services, allowing users to switch between different LLM providers.

# Details:
This task involves integrating Google's Gemini API across all main AI services that currently use Claude:

1. Create a new GeminiService class that implements the same interface as the existing ClaudeService
2. Implement authentication and API key management for Gemini API
3. Map our internal prompt formats to Gemini's expected input format
4. Handle Gemini-specific parameters (temperature, top_p, etc.) and response parsing
5. Update the AI service factory/provider to support selecting Gemini as an alternative
6. Add configuration options in settings to allow users to select Gemini as their preferred provider
7. Implement proper error handling for Gemini-specific API errors
8. Ensure streaming responses are properly supported if Gemini offers this capability
9. Update documentation to reflect the new Gemini option
10. Consider implementing model selection if Gemini offers multiple models (e.g., Gemini Pro, Gemini Ultra)
11. Ensure all existing AI capabilities (summarization, code generation, etc.) maintain feature parity when using Gemini

The implementation should follow the same pattern as the recent Ollama integration (Task #36) to maintain consistency in how alternative AI providers are supported.
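
A minimal sketch of the service class, mirroring the OllamaService shape from Task #36. It targets the public Generative Language API `generateContent` endpoint; the model name, environment variable, generation parameters, and response extraction are assumptions to be verified against Google's current documentation.

```js
// gemini-service.js (sketch) -- Gemini provider following the Ollama pattern
class GeminiService {
  constructor({ apiKey = process.env.GEMINI_API_KEY, model = 'gemini-pro' } = {}) {
    this.apiKey = apiKey;
    this.model = model;
    this.baseUrl = 'https://generativelanguage.googleapis.com/v1beta';
  }

  // Same assumed interface as ClaudeService/OllamaService: messages in, text out.
  async chat(messages) {
    const url = `${this.baseUrl}/models/${this.model}:generateContent?key=${this.apiKey}`;
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        // Map internal {role, content} messages to Gemini's contents/parts format.
        contents: messages.map((m) => ({
          role: m.role === 'assistant' ? 'model' : 'user',
          parts: [{ text: m.content }],
        })),
        generationConfig: { temperature: 0.2, topP: 0.95 },
      }),
    });
    if (!res.ok) {
      throw new Error(`Gemini request failed: ${res.status} ${res.statusText}`);
    }
    const data = await res.json();
    return data.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
  }
}

export { GeminiService };
```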

# Test Strategy:
Testing should verify Gemini integration works correctly across all AI services:

1. Unit tests:
   - Test GeminiService class methods with mocked API responses
   - Verify proper error handling for common API errors
   - Test configuration and model selection functionality

2. Integration tests:
   - Verify authentication and API connection with valid credentials
   - Test each AI service with Gemini to ensure proper functionality
   - Compare outputs between Claude and Gemini for the same inputs to verify quality

3. End-to-end tests:
   - Test the complete user flow of switching to Gemini and using various AI features
   - Verify streaming responses work correctly if supported

4. Performance tests:
   - Measure and compare response times between Claude and Gemini
   - Test with various input lengths to verify handling of context limits

5. Manual testing:
   - Verify the quality of Gemini responses across different use cases
   - Test edge cases like very long inputs or specialized domain knowledge

All tests should pass with Gemini selected as the provider, and the user experience should be consistent regardless of which provider is selected.

tasks/task_038.txt (new file, 56 lines)
@@ -0,0 +1,56 @@
# Task ID: 38
# Title: Implement Version Check System with Upgrade Notifications
# Status: done
# Dependencies: None
# Priority: high
# Description: Create a system that checks for newer package versions and displays upgrade notifications when users run any command, informing them to update to the latest version.

# Details:
Implement a version check mechanism that runs automatically with every command execution:

1. Create a new module (e.g., `versionChecker.js`) that will:
   - Fetch the latest version from the npm registry using the npm registry API (https://registry.npmjs.org/task-master-ai/latest)
   - Compare it with the current installed version (from package.json)
   - Store the last check timestamp to avoid excessive API calls (check once per day)
   - Cache the result to minimize network requests

2. The notification should:
   - Use colored text (e.g., yellow background with black text) to be noticeable
   - Include the current version and latest version
   - Show the exact upgrade command: 'npm i task-master-ai@latest'
   - Be displayed at the beginning or end of command output, not interrupting the main content
   - Include a small separator line to distinguish it from command output

3. Implementation considerations:
   - Handle network failures gracefully (don't block command execution if the version check fails)
   - Add a configuration option to disable update checks if needed
   - Ensure the check is lightweight and doesn't significantly impact command performance
   - Consider using a package like 'semver' for proper version comparison
   - Implement a cooldown period (e.g., only check once per day) to avoid excessive API calls

4. The version check should be integrated into the main command execution flow so it runs for all commands automatically.
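
A minimal sketch covering the registry fetch, semver comparison, daily cooldown, and styled notification. The cache file location, the package.json path, and the raw ANSI styling are illustrative choices, not the project's settled implementation.

```js
// versionChecker.js (sketch) -- notify users when a newer release is on npm
import fs from 'fs';
import path from 'path';
import os from 'os';
import semver from 'semver';
import { createRequire } from 'module';

const require = createRequire(import.meta.url);
// Path is relative to this module; adjust to where package.json actually lives.
const { version: currentVersion } = require('./package.json');

const CACHE_FILE = path.join(os.tmpdir(), 'task-master-version-check.json'); // illustrative cache location
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

async function checkForUpdate() {
  try {
    // Daily cooldown: reuse the cached result if the last check was under 24 hours ago.
    let cache = {};
    try {
      cache = JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
    } catch {
      /* no cache yet */
    }
    let latest = cache.latest;
    if (!latest || Date.now() - (cache.checkedAt || 0) > ONE_DAY_MS) {
      const res = await fetch('https://registry.npmjs.org/task-master-ai/latest');
      latest = (await res.json()).version;
      fs.writeFileSync(CACHE_FILE, JSON.stringify({ latest, checkedAt: Date.now() }));
    }

    if (semver.gt(latest, currentVersion)) {
      // Separator plus yellow-background/black-text notice, kept apart from command output.
      console.log('\n----------------------------------------');
      console.log(
        `\x1b[30;43m Update available: ${currentVersion} -> ${latest}. ` +
          `Run 'npm i task-master-ai@latest' to upgrade. \x1b[0m`
      );
    }
  } catch {
    // Never block or fail the command because the version check failed.
  }
}

export { checkForUpdate };
```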

# Test Strategy:
1. Manual testing:
   - Install an older version of the package
   - Run various commands and verify the update notification appears
   - Update to the latest version and confirm the notification no longer appears
   - Test with network disconnected to ensure graceful handling of failures

2. Unit tests:
   - Mock the npm registry response to test different scenarios:
     - When a newer version exists
     - When using the latest version
     - When the registry is unavailable
   - Test the version comparison logic with various version strings
   - Test the cooldown/caching mechanism works correctly

3. Integration tests:
   - Create a test that runs a command and verifies the notification appears in the expected format
   - Test that the notification appears for all commands
   - Verify the notification doesn't interfere with normal command output

4. Edge cases to test:
   - Pre-release versions (alpha/beta)
   - Very old versions
   - When package.json is missing or malformed
   - When npm registry returns unexpected data
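
A minimal Jest sketch of the mocked-registry unit test; it stubs the global fetch and assumes the `checkForUpdate` sketch above, so the names and the console assertion are illustrative.

```js
// versionChecker.test.js (sketch) -- newer version on the registry triggers a notification
import { jest } from '@jest/globals';
import { checkForUpdate } from './versionChecker.js';

test('prints an upgrade notice when the registry has a newer version', async () => {
  // Stub the npm registry call instead of hitting the network.
  global.fetch = jest.fn().mockResolvedValue({
    ok: true,
    json: async () => ({ version: '99.0.0' }),
  });
  const log = jest.spyOn(console, 'log').mockImplementation(() => {});

  await checkForUpdate();

  expect(log).toHaveBeenCalledWith(expect.stringContaining('npm i task-master-ai@latest'));
  log.mockRestore();
});
```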