chore: expands some tasks and adds an 'inspector' command to the scripts in package.json to easily get the MCP Inspector up for our MCP server at http://localhost:8888/?proxyPort=9000, which should play nice for those of us who have shit running on 3000

This commit is contained in:
Eyal Toledano
2025-03-29 17:52:11 -04:00
parent 4e45a09279
commit 3a2d9670f1
5 changed files with 534 additions and 132 deletions

View File

@@ -0,0 +1 @@

View File

@@ -17,7 +17,8 @@
"prepublishOnly": "npm run prepare-package",
"prepare": "chmod +x bin/task-master.js bin/task-master-init.js",
"changeset": "changeset",
"release": "changeset publish"
"release": "changeset publish",
"inspector": "CLIENT_PORT=8888 SERVER_PORT=9000 npx @modelcontextprotocol/inspector node mcp-server/server.js"
},
"keywords": [
"claude",

View File

@@ -1,61 +1,142 @@
# Task ID: 23
# Title: Implement MCP Server Functionality for Task Master using FastMCP
# Status: pending
# Title: Complete MCP Server Implementation for Task Master using FastMCP
# Status: in-progress
# Dependencies: 22
# Priority: medium
# Description: Extend Task Master to function as an MCP server by leveraging FastMCP's JavaScript/TypeScript implementation for efficient context management services.
# Description: Finalize the MCP server functionality for Task Master by leveraging FastMCP's capabilities, transitioning from CLI-based execution to direct function imports, and optimizing performance, authentication, and context management. Ensure the server integrates seamlessly with Cursor via `mcp.json` and supports proper tool registration, efficient context handling, and transport type handling (focusing on stdio). Additionally, ensure the server can be instantiated properly when installed via `npx` or `npm i -g`. Evaluate and address gaps in the current implementation, including function imports, context management, caching, tool registration, and adherence to FastMCP best practices.
# Details:
This task involves implementing the Model Context Protocol server capabilities within Task Master using FastMCP. The implementation should:
This task involves completing the Model Context Protocol (MCP) server implementation for Task Master using FastMCP. Key updates include:
1. Use FastMCP to create the MCP server module (`mcp-server.ts` or equivalent)
2. Implement the required MCP endpoints using FastMCP:
- `/context` - For retrieving and updating context
- `/models` - For listing available models
- `/execute` - For executing operations with context
3. Utilize FastMCP's built-in features for context management, including:
- Efficient context storage and retrieval
- Context windowing and truncation
- Metadata and tagging support
4. Add authentication and authorization mechanisms using FastMCP capabilities
5. Implement error handling and response formatting as per MCP specifications
6. Configure Task Master to enable/disable MCP server functionality via FastMCP settings
7. Add documentation on using Task Master as an MCP server with FastMCP
8. Ensure compatibility with existing MCP clients by adhering to FastMCP's compliance features
9. Optimize performance using FastMCP tools, especially for context retrieval operations
10. Add logging for MCP server operations using FastMCP's logging utilities
1. Transition from CLI-based execution (currently using `child_process.spawnSync`) to direct Task Master function imports for improved performance and reliability.
2. Implement caching mechanisms for frequently accessed contexts to enhance performance, leveraging FastMCP's efficient transport mechanisms (e.g., stdio).
3. Refactor context management to align with best practices for handling large context windows, metadata, and tagging.
4. Refactor tool registration in `tools/index.js` to include clear descriptions and parameter definitions, leveraging FastMCP's decorator-based patterns for better integration.
5. Enhance transport type handling to ensure proper stdio communication and compatibility with FastMCP.
6. Ensure the MCP server can be instantiated and run correctly when installed globally via `npx` or `npm i -g`.
7. Integrate the ModelContextProtocol SDK directly to streamline resource and tool registration, ensuring compatibility with FastMCP's transport mechanisms.
8. Identify and address missing components or functionalities to meet FastMCP best practices, such as robust error handling, monitoring endpoints, and concurrency support.
9. Update documentation to include examples of using the MCP server with FastMCP, detailed setup instructions, and client integration guides.
The implementation should follow RESTful API design principles and leverage FastMCP's concurrency handling for multiple client requests. Consider using TypeScript for better type safety and integration with FastMCP[1][2].
The implementation must ensure compatibility with existing MCP clients and follow RESTful API design principles, while supporting concurrent requests and maintaining robust error handling.
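As a rough illustration of item 1 above (swapping `child_process.spawnSync` calls for direct imports), a minimal sketch; the module path and the `listTasks` signature are assumptions rather than the actual Task Master layout:
```javascript
// Before: shelling out to the CLI for every request and re-parsing stdout.
// const result = spawnSync('task-master', ['list', '--json'], { encoding: 'utf8' });

// After: call the core function directly (path and signature are hypothetical).
import { listTasks } from '../../scripts/modules/task-manager.js';

export async function listTasksDirect(args) {
  try {
    // Structured data comes back without spawning a process.
    const tasks = await listTasks(args.tasksPath, { status: args.status });
    return { success: true, data: tasks };
  } catch (error) {
    // Shape errors so the tool layer can translate them into MCP error responses.
    return { success: false, error: { message: error.message } };
  }
}
```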
# Test Strategy:
Testing for the MCP server functionality should include:
Testing for the MCP server implementation will follow a comprehensive approach based on our established testing guidelines:
1. Unit tests:
- Test each MCP endpoint handler function independently using FastMCP
- Verify context storage and retrieval mechanisms provided by FastMCP
- Test authentication and authorization logic
- Validate error handling for various failure scenarios
## Test Organization
2. Integration tests:
- Set up a test MCP server instance using FastMCP
- Test complete request/response cycles for each endpoint
- Verify context persistence across multiple requests
- Test with various payload sizes and content types
1. **Unit Tests** (`tests/unit/mcp-server/`):
- Test individual MCP server components in isolation
- Mock all external dependencies including FastMCP SDK
- Test each tool implementation separately
- Verify direct function imports work correctly
- Test context management and caching mechanisms
- Example files: `context-manager.test.js`, `tool-registration.test.js`, `direct-imports.test.js`
3. Compatibility tests:
- Test with existing MCP client libraries
- Verify compliance with the MCP specification
- Ensure backward compatibility with any MCP versions supported by FastMCP
2. **Integration Tests** (`tests/integration/mcp-server/`):
- Test interactions between MCP server components
- Verify proper tool registration with FastMCP
- Test context flow between components
- Validate error handling across module boundaries
- Example files: `server-tool-integration.test.js`, `context-flow.test.js`
4. Performance tests:
- Measure response times for context operations with various context sizes
- Test concurrent request handling using FastMCP's concurrency tools
- Verify memory usage remains within acceptable limits during extended operation
3. **End-to-End Tests** (`tests/e2e/mcp-server/`):
- Test complete MCP server workflows
- Verify server instantiation via different methods (direct, npx, global install)
- Test actual stdio communication with mock clients
- Example files: `server-startup.e2e.test.js`, `client-communication.e2e.test.js`
5. Security tests:
- Verify authentication mechanisms cannot be bypassed
- Test for common API vulnerabilities (injection, CSRF, etc.)
4. **Test Fixtures** (`tests/fixtures/mcp-server/`):
- Sample context data
- Mock tool definitions
- Sample MCP requests and responses
All tests should be automated and included in the CI/CD pipeline. Documentation should include examples of how to test the MCP server functionality manually using tools like curl or Postman.
## Testing Approach
### Module Mocking Strategy
```javascript
// Mock the FastMCP SDK
jest.mock('@modelcontextprotocol/sdk', () => ({
MCPServer: jest.fn().mockImplementation(() => ({
registerTool: jest.fn(),
registerResource: jest.fn(),
start: jest.fn().mockResolvedValue(undefined),
stop: jest.fn().mockResolvedValue(undefined)
})),
MCPError: jest.fn().mockImplementation(function(message, code) {
this.message = message;
this.code = code;
})
}));
// Import modules after mocks
import { MCPServer, MCPError } from '@modelcontextprotocol/sdk';
import { initMCPServer } from '../../scripts/mcp-server.js';
```
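Building on the mocks above, a short example of what a unit test might look like; `initMCPServer`'s behavior (constructing the server, registering tools, calling `start()`) is assumed for illustration:
```javascript
describe('initMCPServer', () => {
  it('registers tools and starts the server', async () => {
    // Assumes initMCPServer constructs an MCPServer, registers tools, then calls start().
    await initMCPServer({ transport: 'stdio' });

    // The mocked constructor records the instance it produced.
    const serverInstance = MCPServer.mock.results[0].value;
    expect(MCPServer).toHaveBeenCalledTimes(1);
    expect(serverInstance.registerTool).toHaveBeenCalled();
    expect(serverInstance.start).toHaveBeenCalledTimes(1);
  });
});
```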
### Context Management Testing
- Test context creation, retrieval, and manipulation
- Verify caching mechanisms work correctly
- Test context windowing and metadata handling
- Validate context persistence across server restarts
### Direct Function Import Testing
- Verify Task Master functions are imported correctly
- Test performance improvements compared to CLI execution
- Validate error handling with direct imports
### Tool Registration Testing
- Verify tools are registered with proper descriptions and parameters
- Test decorator-based registration patterns
- Validate tool execution with different input types
### Error Handling Testing
- Test all error paths with appropriate MCPError types
- Verify error propagation to clients
- Test recovery from various error conditions
### Performance Testing
- Benchmark response times with and without caching
- Test memory usage under load
- Verify concurrent request handling
## Test Quality Guidelines
- Follow TDD approach when possible
- Maintain test independence and isolation
- Use descriptive test names explaining expected behavior
- Aim for 80%+ code coverage, with critical paths at 100%
- Follow the mock-first-then-import pattern for all Jest mocks
- Avoid testing implementation details that might change
- Ensure tests don't depend on execution order
## Specific Test Cases
1. **Server Initialization**
- Test server creation with various configuration options
- Verify proper tool and resource registration
- Test server startup and shutdown procedures
2. **Context Operations**
- Test context creation, retrieval, update, and deletion
- Verify context windowing and truncation
- Test context metadata and tagging
3. **Tool Execution**
- Test each tool with various input parameters
- Verify proper error handling for invalid inputs
- Test tool execution performance
4. **MCP.json Integration**
- Test creation and updating of .cursor/mcp.json
- Verify proper server registration in mcp.json
- Test handling of existing mcp.json files
5. **Transport Handling**
- Test stdio communication
- Verify proper message formatting
- Test error handling in transport layer
All tests will be automated and integrated into the CI/CD pipeline to ensure consistent quality.
# Subtasks:
## 1. Create Core MCP Server Module and Basic Structure [done]
@@ -79,7 +160,7 @@ Testing approach:
- Test basic error handling with invalid requests
## 2. Implement Context Management System [done]
### Dependencies: 23.1
### Dependencies: 23.1
### Description: Develop a robust context management system that can efficiently store, retrieve, and manipulate context data according to the MCP specification.
### Details:
Implementation steps:
@@ -100,7 +181,7 @@ Testing approach:
- Test persistence mechanisms with simulated failures
## 3. Implement MCP Endpoints and API Handlers [done]
### Dependencies: 23.1, 23.2
### Dependencies: 23.1, 23.2
### Description: Develop the complete API handlers for all required MCP endpoints, ensuring they follow the protocol specification and integrate with the context management system.
### Details:
Implementation steps:
@@ -125,49 +206,86 @@ Testing approach:
- Test error handling with invalid inputs
- Benchmark endpoint performance
## 4. Implement Authentication and Authorization System [pending]
### Dependencies: 23.1, 23.3
### Description: Create a secure authentication and authorization mechanism for MCP clients to ensure only authorized applications can access the MCP server functionality.
## 6. Refactor MCP Server to Leverage ModelContextProtocol SDK [deferred]
### Dependencies: 23.1, 23.2, 23.3
### Description: Integrate the ModelContextProtocol SDK directly into the MCP server implementation to streamline tool registration and resource handling.
### Details:
Implementation steps:
1. Design authentication scheme (API keys, OAuth, JWT, etc.)
2. Implement authentication middleware for all MCP endpoints
3. Create an API key management system for client applications
4. Develop role-based access control for different operations
5. Implement rate limiting to prevent abuse
6. Add secure token validation and handling
7. Create endpoints for managing client credentials
8. Implement audit logging for authentication events
1. Replace manual tool registration with ModelContextProtocol SDK methods.
2. Use SDK utilities to simplify resource and template management.
3. Ensure compatibility with FastMCP's transport mechanisms.
4. Update server initialization to include SDK-based configurations.
Testing approach:
- Security testing for authentication mechanisms
- Test access control with various permission levels
- Verify rate limiting functionality
- Test token validation with valid and invalid tokens
- Simulate unauthorized access attempts
- Verify audit logs contain appropriate information
- Verify SDK integration with all MCP endpoints.
- Test resource and template registration using SDK methods.
- Validate compatibility with existing MCP clients.
- Benchmark performance improvements from SDK integration.
## 5. Optimize Performance and Finalize Documentation [pending]
### Dependencies: 23.1, 23.2, 23.3, 23.4
### Description: Optimize the MCP server implementation for performance, especially for context retrieval operations, and create comprehensive documentation for users.
## 8. Implement Direct Function Imports and Replace CLI-based Execution [in-progress]
### Dependencies: None
### Description: Refactor the MCP server implementation to use direct Task Master function imports instead of the current CLI-based execution using child_process.spawnSync. This will improve performance, reliability, and enable better error handling.
### Details:
Implementation steps:
1. Profile the MCP server to identify performance bottlenecks
2. Implement caching mechanisms for frequently accessed contexts
3. Optimize context serialization and deserialization
4. Add connection pooling for database operations (if applicable)
5. Implement request batching for bulk operations
6. Create comprehensive API documentation with examples
7. Add setup and configuration guides to the Task Master documentation
8. Create example client implementations
9. Add monitoring endpoints for server health and metrics
10. Implement graceful degradation under high load
1. Create a new module to import and expose Task Master core functions directly
2. Modify tools/utils.js to remove executeTaskMasterCommand and replace with direct function calls
3. Update each tool implementation (listTasks.js, showTask.js, etc.) to use the direct function imports
4. Implement proper error handling with try/catch blocks and FastMCP's MCPError
5. Add unit tests to verify the function imports work correctly
6. Test performance improvements by comparing response times between CLI and function import approaches
Testing approach:
- Load testing with simulated concurrent clients
- Measure response times for various operations
- Test with large context sizes to verify performance
- Verify documentation accuracy with sample requests
- Test monitoring endpoints
- Perform stress testing to identify failure points
## 9. Implement Context Management and Caching Mechanisms [deferred]
### Dependencies: 23.1
### Description: Enhance the MCP server with proper context management and caching to improve performance and user experience, especially for frequently accessed data and contexts.
### Details:
1. Implement a context manager class that leverages FastMCP's Context object
2. Add caching for frequently accessed task data with configurable TTL settings
3. Implement context tagging for better organization of context data
4. Add methods to efficiently handle large context windows
5. Create helper functions for storing and retrieving context data
6. Implement cache invalidation strategies for task updates
7. Add cache statistics for monitoring performance
8. Create unit tests for context management and caching functionality
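A minimal sketch of the caching idea outlined above (an in-memory map with per-entry TTL and prefix-based invalidation); class and method names are placeholders, not an existing Task Master API:
```javascript
// Simple in-memory cache with per-entry TTL, suitable for frequently read task data.
export class ContextCache {
  constructor({ ttlMs = 30_000 } = {}) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
    this.stats = { hits: 0, misses: 0 };
  }

  get(key) {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt < Date.now()) {
      this.entries.delete(key);
      this.stats.misses += 1;
      return undefined;
    }
    this.stats.hits += 1;
    return entry.value;
  }

  set(key, value, ttlMs = this.ttlMs) {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  // Invalidate cached entries when tasks change (e.g. after set-status or update).
  invalidate(prefix) {
    for (const key of this.entries.keys()) {
      if (key.startsWith(prefix)) this.entries.delete(key);
    }
  }
}
```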
## 10. Enhance Tool Registration and Resource Management [in-progress]
### Dependencies: 23.1
### Description: Refactor tool registration to follow FastMCP best practices, using decorators and improving the overall structure. Implement proper resource management for task templates and other shared resources.
### Details:
1. Update registerTaskMasterTools function to use FastMCP's decorator pattern
2. Implement @mcp.tool() decorators for all existing tools
3. Add proper type annotations and documentation for all tools
4. Create resource handlers for task templates using @mcp.resource()
5. Implement resource templates for common task patterns
6. Update the server initialization to properly register all tools and resources
7. Add validation for tool inputs using FastMCP's built-in validation
8. Create comprehensive tests for tool registration and resource access
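Decorator syntax (`@mcp.tool()`) is a Python FastMCP idiom; in the JavaScript server the same structure can be expressed declaratively. The `addTool()` call and zod-based parameters below are assumptions about the FastMCP API, and `listTasksDirect` refers to the hypothetical direct-import layer from subtask 23.8:
```javascript
import { z } from 'zod'; // assumed dependency for parameter schemas
import { listTasksDirect } from './direct-functions.js'; // hypothetical module from subtask 23.8

// Declarative registration; the exact addTool() signature depends on the FastMCP version in use.
export function registerTaskMasterTools(server) {
  server.addTool({
    name: 'listTasks',
    description: 'List all Task Master tasks, optionally filtered by status.',
    parameters: z.object({
      status: z.string().optional().describe('Filter tasks by status (e.g. pending, done)'),
      withSubtasks: z.boolean().optional().describe('Include subtasks in the response')
    }),
    execute: async (args) => {
      const result = await listTasksDirect(args);
      if (!result.success) throw new Error(result.error.message);
      return JSON.stringify(result.data);
    }
  });
}
```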
## 11. Implement Comprehensive Error Handling [pending]
### Dependencies: 23.1, 23.3
### Description: Implement robust error handling using FastMCP's MCPError, including custom error types for different categories and standardized error responses.
### Details:
1. Create custom error types extending MCPError for different categories (validation, auth, etc.)
2. Implement standardized error responses following the MCP protocol
3. Add error handling middleware for all MCP endpoints
4. Ensure proper error propagation from tools to the client
5. Add a debug mode with detailed error information
6. Document error types and handling patterns
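A minimal sketch of the error hierarchy; whether the base class should be FastMCP's MCPError or a plain Error depends on what the SDK actually exports, so the base shown here is an assumption:
```javascript
// Base class is assumed; swap for the SDK's error type if it exposes one.
class TaskMasterMCPError extends Error {
  constructor(message, code = 'INTERNAL_ERROR', details = {}) {
    super(message);
    this.name = this.constructor.name;
    this.code = code;
    this.details = details;
  }

  // Standardized shape returned to MCP clients.
  toResponse() {
    return { error: { code: this.code, message: this.message, details: this.details } };
  }
}

class ValidationError extends TaskMasterMCPError {
  constructor(message, details) {
    super(message, 'VALIDATION_ERROR', details);
  }
}

class TaskNotFoundError extends TaskMasterMCPError {
  constructor(taskId) {
    super(`Task ${taskId} not found`, 'TASK_NOT_FOUND', { taskId });
  }
}

export { TaskMasterMCPError, ValidationError, TaskNotFoundError };
```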
## 12. Implement Structured Logging System [pending]
### Dependencies: 23.1, 23.3
### Description: Implement a comprehensive logging system for the MCP server with different log levels, structured logging format, and request/response tracking.
### Details:
1. Design a structured log format for consistent parsing
2. Implement different log levels (debug, info, warn, error)
3. Add request/response logging middleware
4. Implement correlation IDs for request tracking
5. Add performance metrics logging
6. Configure log output destinations (console, file)
7. Document logging patterns and usage
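One possible shape for the structured logger: JSON lines written to stderr so stdout stays free for the stdio transport. The function and level names are illustrative:
```javascript
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

export function createLogger({ level = 'info', correlationId = null } = {}) {
  const threshold = LEVELS[level] ?? LEVELS.info;

  function log(logLevel, message, meta = {}) {
    if (LEVELS[logLevel] < threshold) return;
    // Write to stderr: with the stdio transport, stdout is reserved for MCP messages.
    process.stderr.write(JSON.stringify({
      timestamp: new Date().toISOString(),
      level: logLevel,
      correlationId,
      message,
      ...meta
    }) + '\n');
  }

  return {
    debug: (msg, meta) => log('debug', msg, meta),
    info: (msg, meta) => log('info', msg, meta),
    warn: (msg, meta) => log('warn', msg, meta),
    error: (msg, meta) => log('error', msg, meta),
    // Child logger carrying a per-request correlation ID.
    child: (correlation) => createLogger({ level, correlationId: correlation })
  };
}
```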
## 13. Create Testing Framework and Test Suite [pending]
### Dependencies: 23.1, 23.3, 23.8
### Description: Implement a comprehensive testing framework for the MCP server, including unit tests, integration tests, and end-to-end tests.
### Details:
1. Set up the Jest testing framework with proper configuration
2. Create an MCPTestClient for testing FastMCP server interaction
3. Implement unit tests for individual tool functions
4. Create integration tests for end-to-end request/response cycles
5. Set up test fixtures and mock data
6. Implement test coverage reporting
7. Document testing guidelines and examples
## 14. Add MCP.json to the Init Workflow [done]
### Dependencies: 23.1, 23.3
### Description: Implement functionality to create or update .cursor/mcp.json during project initialization, handling cases where: 1) If there's no mcp.json, create it with the appropriate configuration; 2) If there is an mcp.json, intelligently append to it without syntax errors like trailing commas
### Details:
1. Create functionality to detect whether .cursor/mcp.json exists in the project
2. Implement logic to create a new mcp.json file with the proper structure if it doesn't exist
3. Add functionality to read and parse the existing mcp.json if it exists
4. Create a method to add a new taskmaster-ai server entry to the mcpServers object
5. Implement intelligent JSON merging that avoids trailing commas and syntax errors
6. Ensure proper formatting and indentation in the generated/updated JSON
7. Add validation to verify the updated configuration is valid JSON
8. Include this functionality in the init workflow
9. Add error handling for file system operations and JSON parsing
10. Document the mcp.json structure and integration process
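A sketch of the create-or-merge logic; parsing with JSON.parse and re-serializing with JSON.stringify sidesteps trailing commas entirely. The taskmaster-ai entry's command and args are assumptions:
```javascript
import fs from 'fs';
import path from 'path';

export function ensureMcpJson(projectRoot) {
  const cursorDir = path.join(projectRoot, '.cursor');
  const mcpPath = path.join(cursorDir, 'mcp.json');
  fs.mkdirSync(cursorDir, { recursive: true });

  // Read and parse the existing file if present; start fresh otherwise.
  let config = { mcpServers: {} };
  if (fs.existsSync(mcpPath)) {
    config = JSON.parse(fs.readFileSync(mcpPath, 'utf8'));
    config.mcpServers = config.mcpServers || {};
  }

  // Add or update the taskmaster-ai entry (command/args here are illustrative).
  config.mcpServers['taskmaster-ai'] = {
    command: 'npx',
    args: ['-y', 'task-master-ai', 'mcp-server']
  };

  // JSON.stringify always emits valid JSON, so no trailing-comma handling is needed.
  fs.writeFileSync(mcpPath, JSON.stringify(config, null, 2) + '\n');
}
```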
## 15. Implement SSE Support for Real-time Updates [deferred]
### Dependencies: 23.1, 23.3, 23.11
### Description: Add Server-Sent Events (SSE) capabilities to the MCP server to enable real-time updates and streaming of task execution progress, logs, and status changes to clients
### Details:
1. Research and implement the SSE protocol for the MCP server
2. Create dedicated SSE endpoints for event streaming
3. Implement an event emitter pattern for internal event management
4. Add support for different event types (task status, logs, errors)
5. Implement client connection management with proper keep-alive handling
6. Add filtering capabilities to allow subscribing to specific event types
7. Create an in-memory event buffer for reconnecting clients
8. Document SSE endpoint usage and client implementation examples
9. Add robust error handling for dropped connections
10. Implement rate limiting and backpressure mechanisms
11. Add authentication for SSE connections
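For reference, a bare-bones SSE endpoint using only Node's built-in http module (FastMCP may ship its own SSE transport, in which case that should be preferred); the route, event names, and port are placeholders:
```javascript
import http from 'http';

const clients = new Set();

// Broadcast an event (e.g. a task status change) to every connected client.
export function broadcast(eventType, data) {
  const payload = `event: ${eventType}\ndata: ${JSON.stringify(data)}\n\n`;
  for (const res of clients) res.write(payload);
}

http.createServer((req, res) => {
  if (req.url !== '/events') {
    res.writeHead(404);
    res.end();
    return;
  }
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  res.write('retry: 3000\n\n'); // reconnect hint for clients
  clients.add(res);
  req.on('close', () => clients.delete(res));
}).listen(9001); // port is arbitrary for this sketch
```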

View File

@@ -1,56 +1,231 @@
# Task ID: 32
# Title: Implement 'learn' Command for Automatic Cursor Rule Generation
# Title: Implement "learn" Command for Automatic Cursor Rule Generation
# Status: pending
# Dependencies: None
# Priority: high
# Description: Create a new 'learn' command that analyzes code changes and chat history to automatically generate or update Cursor rules in the .cursor/rules directory based on successful implementation patterns.
# Description: Create a new "learn" command that analyzes Cursor's chat history and code changes to automatically generate or update rule files in the .cursor/rules directory, following the cursor_rules.mdc template format. This command will help Cursor autonomously improve its ability to follow development standards by learning from successful implementations.
# Details:
Implement a new command in the task-master CLI that enables Cursor to learn from successful coding patterns:
Implement a new command in the task-master CLI that enables Cursor to learn from successful coding patterns and chat interactions:
1. Create a new module `commands/learn.js` that implements the command logic
2. Update `index.js` to register the new command
3. The command should:
- Accept an optional parameter for specifying which patterns to focus on
- Use git diff to extract code changes since the last commit
- Access the Cursor chat history if possible (investigate API or file storage location)
- Call Claude via ai-services.js with the following context:
* Code diffs
* Chat history excerpts showing challenges and solutions
* Existing rules from .cursor/rules if present
- Parse Claude's response to extract rule definitions
- Create or update .mdc files in the .cursor/rules directory
- Provide a summary of what was learned and which rules were updated
Key Components:
1. Cursor Data Analysis
- Access and parse Cursor's chat history from ~/Library/Application Support/Cursor/User/History
- Extract relevant patterns, corrections, and successful implementations
- Track file changes and their associated chat context
4. Create helper functions to:
- Extract relevant patterns from diffs
- Format the prompt for Claude to focus on identifying reusable patterns
- Parse Claude's response into valid rule definitions
- Handle rule conflicts or duplications
2. Rule Management
- Use cursor_rules.mdc as the template for all rule file formatting
- Manage rule files in .cursor/rules directory
- Support both creation and updates of rule files
- Categorize rules based on context (testing, components, API, etc.)
5. Ensure the command handles errors gracefully, especially if chat history is inaccessible
6. Add appropriate logging to show the learning process
7. Document the command in the README.md file
3. AI Integration
- Utilize ai-services.js to interact with Claude
- Provide comprehensive context including:
* Relevant chat history showing the evolution of solutions
* Code changes and their outcomes
* Existing rules and template structure
- Generate or update rules while maintaining template consistency
4. Implementation Requirements:
- Automatic triggering after task completion (configurable)
- Manual triggering via CLI command
- Proper error handling for missing or corrupt files
- Validation against cursor_rules.mdc template
- Performance optimization for large histories
- Clear logging and progress indication
5. Key Files:
- commands/learn.js: Main command implementation
- rules/cursor-rules-manager.js: Rule file management
- utils/chat-history-analyzer.js: Cursor chat analysis
- index.js: Command registration
6. Security Considerations:
- Safe file system operations
- Proper error handling for inaccessible files
- Validation of generated rules
- Backup of existing rules before updates
# Test Strategy:
1. Unit tests:
- Create tests for each helper function in isolation
- Mock git diff responses and chat history data
- Verify rule extraction logic works with different input patterns
- Test error handling for various failure scenarios
1. Unit Tests:
- Test each component in isolation:
* Chat history extraction and analysis
* Rule file management and validation
* Pattern detection and categorization
* Template validation logic
- Mock file system operations and AI responses
- Test error handling and edge cases
2. Integration tests:
- Test the command in a repository with actual code changes
- Verify it correctly generates .mdc files in the .cursor/rules directory
- Check that generated rules follow the correct format
- Verify the command correctly updates existing rules without losing custom modifications
2. Integration Tests:
- End-to-end command execution
- File system interactions
- AI service integration
- Rule generation and updates
- Template compliance validation
3. Manual testing scenarios:
- Run the command after implementing a feature with specific patterns
- Verify the generated rules capture the intended patterns
- Test the command with and without existing rules
- Verify the command works when chat history is available and when it isn't
- Test with large diffs to ensure performance remains acceptable
3. Manual Testing:
- Test after completing actual development tasks
- Verify rule quality and usefulness
- Check template compliance
- Validate performance with large histories
- Test automatic and manual triggering
4. Validation Criteria:
- Generated rules follow cursor_rules.mdc format
- Rules capture meaningful patterns
- Performance remains acceptable
- Error handling works as expected
- Generated rules improve Cursor's effectiveness
# Subtasks:
## 1. Create Initial File Structure [pending]
### Dependencies: None
### Description: Set up the basic file structure for the learn command implementation
### Details:
Create the following files with basic exports:
- commands/learn.js
- rules/cursor-rules-manager.js
- utils/chat-history-analyzer.js
- utils/cursor-path-helper.js
## 2. Implement Cursor Path Helper [pending]
### Dependencies: None
### Description: Create utility functions to handle Cursor's application data paths
### Details:
In utils/cursor-path-helper.js implement:
- getCursorAppDir(): Returns ~/Library/Application Support/Cursor
- getCursorHistoryDir(): Returns User/History path
- getCursorLogsDir(): Returns logs directory path
- validatePaths(): Ensures required directories exist
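A sketch of the path helper; the ~/Library/Application Support/Cursor location is macOS-specific, so a cross-platform version would need to branch on process.platform:
```javascript
import os from 'os';
import path from 'path';
import fs from 'fs';

// macOS default; Windows/Linux would use APPDATA / ~/.config respectively (not handled here).
export function getCursorAppDir() {
  return path.join(os.homedir(), 'Library', 'Application Support', 'Cursor');
}

export function getCursorHistoryDir() {
  return path.join(getCursorAppDir(), 'User', 'History');
}

export function getCursorLogsDir() {
  return path.join(getCursorAppDir(), 'logs');
}

// Confirms the directories the analyzer depends on actually exist.
export function validatePaths() {
  return [getCursorAppDir(), getCursorHistoryDir()].every((dir) => fs.existsSync(dir));
}
```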
## 3. Create Chat History Analyzer Base [pending]
### Dependencies: None
### Description: Create the base structure for analyzing Cursor's chat history
### Details:
In utils/chat-history-analyzer.js create:
- ChatHistoryAnalyzer class
- readHistoryDir(): Lists all history directories
- readEntriesJson(): Parses entries.json files
- parseHistoryEntry(): Extracts relevant data from .js files
## 4. Implement Chat History Extraction [pending]
### Dependencies: None
### Description: Add core functionality to extract relevant chat history
### Details:
In ChatHistoryAnalyzer add:
- extractChatHistory(startTime): Gets history since task start
- parseFileChanges(): Extracts code changes
- parseAIInteractions(): Extracts AI responses
- filterRelevantHistory(): Removes irrelevant entries
## 5. Create CursorRulesManager Base [pending]
### Dependencies: None
### Description: Set up the base structure for managing Cursor rules
### Details:
In rules/cursor-rules-manager.js create:
- CursorRulesManager class
- readTemplate(): Reads cursor_rules.mdc
- listRuleFiles(): Lists all .mdc files
- readRuleFile(): Reads specific rule file
## 6. Implement Template Validation [pending]
### Dependencies: None
### Description: Add validation logic for rule files against cursor_rules.mdc
### Details:
In CursorRulesManager add:
- validateRuleFormat(): Checks against template
- parseTemplateStructure(): Extracts template sections
- validateAgainstTemplate(): Validates content structure
- getRequiredSections(): Lists mandatory sections
## 7. Add Rule Categorization Logic [pending]
### Dependencies: None
### Description: Implement logic to categorize changes into rule files
### Details:
In CursorRulesManager add:
- categorizeChanges(): Maps changes to rule files
- detectRuleCategories(): Identifies relevant categories
- getRuleFileForPattern(): Maps patterns to files
- createNewRuleFile(): Initializes new rule files
## 8. Implement Pattern Analysis [pending]
### Dependencies: None
### Description: Create functions to analyze implementation patterns
### Details:
In ChatHistoryAnalyzer add:
- extractPatterns(): Finds success patterns
- extractCorrections(): Finds error corrections
- findSuccessfulPaths(): Tracks successful implementations
- analyzeDecisions(): Extracts key decisions
## 9. Create AI Prompt Builder [pending]
### Dependencies: None
### Description: Implement prompt construction for Claude
### Details:
In learn.js create:
- buildRuleUpdatePrompt(): Builds Claude prompt
- formatHistoryContext(): Formats chat history
- formatRuleContext(): Formats current rules
- buildInstructions(): Creates specific instructions
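A rough sketch of prompt assembly; the section headings and truncation limit are arbitrary choices, not a fixed format:
```javascript
// Builds the Claude prompt from chat history, diffs, and existing rules (all plain strings here).
export function buildRuleUpdatePrompt({ historyExcerpts, codeChanges, existingRules, template }) {
  const clip = (text, max = 8000) =>
    text.length > max ? text.slice(0, max) + '\n[truncated]' : text;

  return [
    'You are updating Cursor rule files (.mdc) based on recent successful work.',
    '## Rule template (cursor_rules.mdc)',
    template,
    '## Existing rules',
    clip(existingRules),
    '## Relevant chat history',
    clip(historyExcerpts),
    '## Code changes',
    clip(codeChanges),
    'Return each rule file as a fenced block preceded by its target filename.'
  ].join('\n\n');
}
```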
## 10. Implement Learn Command Core [pending]
### Dependencies: None
### Description: Create the main learn command implementation
### Details:
In commands/learn.js implement:
- learnCommand(): Main command function
- processRuleUpdates(): Handles rule updates
- generateSummary(): Creates learning summary
- handleErrors(): Manages error cases
## 11. Add Auto-trigger Support [pending]
### Dependencies: None
### Description: Implement automatic learning after task completion
### Details:
Update task-manager.js:
- Add autoLearnConfig handling
- Modify completeTask() to trigger learning
- Add learning status tracking
- Implement learning queue
## 12. Implement CLI Integration [pending]
### Dependencies: None
### Description: Add the learn command to the CLI
### Details:
Update index.js to:
- Register learn command
- Add command options
- Handle manual triggers
- Process command flags
## 13. Add Progress Logging [pending]
### Dependencies: None
### Description: Implement detailed progress logging
### Details:
Create utils/learn-logger.js with:
- logLearningProgress(): Tracks overall progress
- logRuleUpdates(): Tracks rule changes
- logErrors(): Handles error logging
- createSummary(): Generates final report
## 14. Implement Error Recovery [pending]
### Dependencies: None
### Description: Add robust error handling throughout the system
### Details:
Create utils/error-handler.js with:
- handleFileErrors(): Manages file system errors
- handleParsingErrors(): Manages parsing failures
- handleAIErrors(): Manages Claude API errors
- implementRecoveryStrategies(): Adds recovery logic
## 15. Add Performance Optimization [pending]
### Dependencies: None
### Description: Optimize performance for large histories
### Details:
Add to utils/performance-optimizer.js:
- implementCaching(): Adds result caching
- optimizeFileReading(): Improves file reading
- addProgressiveLoading(): Implements lazy loading
- addMemoryManagement(): Manages memory usage
4. Validation:
- After generating rules, use them in Cursor to verify they correctly guide future implementations
- Have multiple team members test the command to ensure consistent results

View File

@@ -1795,13 +1795,120 @@
},
{
"id": 32,
"title": "Implement 'learn' Command for Automatic Cursor Rule Generation",
"description": "Create a new 'learn' command that analyzes code changes and chat history to automatically generate or update Cursor rules in the .cursor/rules directory based on successful implementation patterns.",
"title": "Implement \"learn\" Command for Automatic Cursor Rule Generation",
"description": "Create a new \"learn\" command that analyzes Cursor's chat history and code changes to automatically generate or update rule files in the .cursor/rules directory, following the cursor_rules.mdc template format. This command will help Cursor autonomously improve its ability to follow development standards by learning from successful implementations.",
"status": "pending",
"dependencies": [],
"priority": "high",
"details": "Implement a new command in the task-master CLI that enables Cursor to learn from successful coding patterns:\n\n1. Create a new module `commands/learn.js` that implements the command logic\n2. Update `index.js` to register the new command\n3. The command should:\n - Accept an optional parameter for specifying which patterns to focus on\n - Use git diff to extract code changes since the last commit\n - Access the Cursor chat history if possible (investigate API or file storage location)\n - Call Claude via ai-services.js with the following context:\n * Code diffs\n * Chat history excerpts showing challenges and solutions\n * Existing rules from .cursor/rules if present\n - Parse Claude's response to extract rule definitions\n - Create or update .mdc files in the .cursor/rules directory\n - Provide a summary of what was learned and which rules were updated\n\n4. Create helper functions to:\n - Extract relevant patterns from diffs\n - Format the prompt for Claude to focus on identifying reusable patterns\n - Parse Claude's response into valid rule definitions\n - Handle rule conflicts or duplications\n\n5. Ensure the command handles errors gracefully, especially if chat history is inaccessible\n6. Add appropriate logging to show the learning process\n7. Document the command in the README.md file",
"testStrategy": "1. Unit tests:\n - Create tests for each helper function in isolation\n - Mock git diff responses and chat history data\n - Verify rule extraction logic works with different input patterns\n - Test error handling for various failure scenarios\n\n2. Integration tests:\n - Test the command in a repository with actual code changes\n - Verify it correctly generates .mdc files in the .cursor/rules directory\n - Check that generated rules follow the correct format\n - Verify the command correctly updates existing rules without losing custom modifications\n\n3. Manual testing scenarios:\n - Run the command after implementing a feature with specific patterns\n - Verify the generated rules capture the intended patterns\n - Test the command with and without existing rules\n - Verify the command works when chat history is available and when it isn't\n - Test with large diffs to ensure performance remains acceptable\n\n4. Validation:\n - After generating rules, use them in Cursor to verify they correctly guide future implementations\n - Have multiple team members test the command to ensure consistent results"
"details": "Implement a new command in the task-master CLI that enables Cursor to learn from successful coding patterns and chat interactions:\n\nKey Components:\n1. Cursor Data Analysis\n - Access and parse Cursor's chat history from ~/Library/Application Support/Cursor/User/History\n - Extract relevant patterns, corrections, and successful implementations\n - Track file changes and their associated chat context\n\n2. Rule Management\n - Use cursor_rules.mdc as the template for all rule file formatting\n - Manage rule files in .cursor/rules directory\n - Support both creation and updates of rule files\n - Categorize rules based on context (testing, components, API, etc.)\n\n3. AI Integration\n - Utilize ai-services.js to interact with Claude\n - Provide comprehensive context including:\n * Relevant chat history showing the evolution of solutions\n * Code changes and their outcomes\n * Existing rules and template structure\n - Generate or update rules while maintaining template consistency\n\n4. Implementation Requirements:\n - Automatic triggering after task completion (configurable)\n - Manual triggering via CLI command\n - Proper error handling for missing or corrupt files\n - Validation against cursor_rules.mdc template\n - Performance optimization for large histories\n - Clear logging and progress indication\n\n5. Key Files:\n - commands/learn.js: Main command implementation\n - rules/cursor-rules-manager.js: Rule file management\n - utils/chat-history-analyzer.js: Cursor chat analysis\n - index.js: Command registration\n\n6. Security Considerations:\n - Safe file system operations\n - Proper error handling for inaccessible files\n - Validation of generated rules\n - Backup of existing rules before updates",
"testStrategy": "1. Unit Tests:\n - Test each component in isolation:\n * Chat history extraction and analysis\n * Rule file management and validation\n * Pattern detection and categorization\n * Template validation logic\n - Mock file system operations and AI responses\n - Test error handling and edge cases\n\n2. Integration Tests:\n - End-to-end command execution\n - File system interactions\n - AI service integration\n - Rule generation and updates\n - Template compliance validation\n\n3. Manual Testing:\n - Test after completing actual development tasks\n - Verify rule quality and usefulness\n - Check template compliance\n - Validate performance with large histories\n - Test automatic and manual triggering\n\n4. Validation Criteria:\n - Generated rules follow cursor_rules.mdc format\n - Rules capture meaningful patterns\n - Performance remains acceptable\n - Error handling works as expected\n - Generated rules improve Cursor's effectiveness",
"subtasks": [
{
"id": 1,
"title": "Create Initial File Structure",
"description": "Set up the basic file structure for the learn command implementation",
"details": "Create the following files with basic exports:\n- commands/learn.js\n- rules/cursor-rules-manager.js\n- utils/chat-history-analyzer.js\n- utils/cursor-path-helper.js",
"status": "pending"
},
{
"id": 2,
"title": "Implement Cursor Path Helper",
"description": "Create utility functions to handle Cursor's application data paths",
"details": "In utils/cursor-path-helper.js implement:\n- getCursorAppDir(): Returns ~/Library/Application Support/Cursor\n- getCursorHistoryDir(): Returns User/History path\n- getCursorLogsDir(): Returns logs directory path\n- validatePaths(): Ensures required directories exist",
"status": "pending"
},
{
"id": 3,
"title": "Create Chat History Analyzer Base",
"description": "Create the base structure for analyzing Cursor's chat history",
"details": "In utils/chat-history-analyzer.js create:\n- ChatHistoryAnalyzer class\n- readHistoryDir(): Lists all history directories\n- readEntriesJson(): Parses entries.json files\n- parseHistoryEntry(): Extracts relevant data from .js files",
"status": "pending"
},
{
"id": 4,
"title": "Implement Chat History Extraction",
"description": "Add core functionality to extract relevant chat history",
"details": "In ChatHistoryAnalyzer add:\n- extractChatHistory(startTime): Gets history since task start\n- parseFileChanges(): Extracts code changes\n- parseAIInteractions(): Extracts AI responses\n- filterRelevantHistory(): Removes irrelevant entries",
"status": "pending"
},
{
"id": 5,
"title": "Create CursorRulesManager Base",
"description": "Set up the base structure for managing Cursor rules",
"details": "In rules/cursor-rules-manager.js create:\n- CursorRulesManager class\n- readTemplate(): Reads cursor_rules.mdc\n- listRuleFiles(): Lists all .mdc files\n- readRuleFile(): Reads specific rule file",
"status": "pending"
},
{
"id": 6,
"title": "Implement Template Validation",
"description": "Add validation logic for rule files against cursor_rules.mdc",
"details": "In CursorRulesManager add:\n- validateRuleFormat(): Checks against template\n- parseTemplateStructure(): Extracts template sections\n- validateAgainstTemplate(): Validates content structure\n- getRequiredSections(): Lists mandatory sections",
"status": "pending"
},
{
"id": 7,
"title": "Add Rule Categorization Logic",
"description": "Implement logic to categorize changes into rule files",
"details": "In CursorRulesManager add:\n- categorizeChanges(): Maps changes to rule files\n- detectRuleCategories(): Identifies relevant categories\n- getRuleFileForPattern(): Maps patterns to files\n- createNewRuleFile(): Initializes new rule files",
"status": "pending"
},
{
"id": 8,
"title": "Implement Pattern Analysis",
"description": "Create functions to analyze implementation patterns",
"details": "In ChatHistoryAnalyzer add:\n- extractPatterns(): Finds success patterns\n- extractCorrections(): Finds error corrections\n- findSuccessfulPaths(): Tracks successful implementations\n- analyzeDecisions(): Extracts key decisions",
"status": "pending"
},
{
"id": 9,
"title": "Create AI Prompt Builder",
"description": "Implement prompt construction for Claude",
"details": "In learn.js create:\n- buildRuleUpdatePrompt(): Builds Claude prompt\n- formatHistoryContext(): Formats chat history\n- formatRuleContext(): Formats current rules\n- buildInstructions(): Creates specific instructions",
"status": "pending"
},
{
"id": 10,
"title": "Implement Learn Command Core",
"description": "Create the main learn command implementation",
"details": "In commands/learn.js implement:\n- learnCommand(): Main command function\n- processRuleUpdates(): Handles rule updates\n- generateSummary(): Creates learning summary\n- handleErrors(): Manages error cases",
"status": "pending"
},
{
"id": 11,
"title": "Add Auto-trigger Support",
"description": "Implement automatic learning after task completion",
"details": "Update task-manager.js:\n- Add autoLearnConfig handling\n- Modify completeTask() to trigger learning\n- Add learning status tracking\n- Implement learning queue",
"status": "pending"
},
{
"id": 12,
"title": "Implement CLI Integration",
"description": "Add the learn command to the CLI",
"details": "Update index.js to:\n- Register learn command\n- Add command options\n- Handle manual triggers\n- Process command flags",
"status": "pending"
},
{
"id": 13,
"title": "Add Progress Logging",
"description": "Implement detailed progress logging",
"details": "Create utils/learn-logger.js with:\n- logLearningProgress(): Tracks overall progress\n- logRuleUpdates(): Tracks rule changes\n- logErrors(): Handles error logging\n- createSummary(): Generates final report",
"status": "pending"
},
{
"id": 14,
"title": "Implement Error Recovery",
"description": "Add robust error handling throughout the system",
"details": "Create utils/error-handler.js with:\n- handleFileErrors(): Manages file system errors\n- handleParsingErrors(): Manages parsing failures\n- handleAIErrors(): Manages Claude API errors\n- implementRecoveryStrategies(): Adds recovery logic",
"status": "pending"
},
{
"id": 15,
"title": "Add Performance Optimization",
"description": "Optimize performance for large histories",
"details": "Add to utils/performance-optimizer.js:\n- implementCaching(): Adds result caching\n- optimizeFileReading(): Improves file reading\n- addProgressiveLoading(): Implements lazy loading\n- addMemoryManagement(): Manages memory usage",
"status": "pending"
}
]
},
{
"id": 33,