chore: task mgmt

Eyal Toledano
2025-04-03 01:45:19 -04:00
parent f11e00a026
commit 87b1eb61ee
3 changed files with 208 additions and 72 deletions

tasks/task_041.txt

@@ -1,89 +1,72 @@
 # Task ID: 41
-# Title: Implement GitHub Actions CI Workflow for Task Master
+# Title: Implement Visual Task Dependency Graph in Terminal
 # Status: pending
 # Dependencies: None
-# Priority: high
-# Description: Create a streamlined CI workflow file (ci.yml) that efficiently tests the Task Master codebase using GitHub Actions.
+# Priority: medium
+# Description: Create a feature that renders task dependencies as a visual graph using ASCII/Unicode characters in the terminal, with color-coded nodes representing tasks and connecting lines showing dependency relationships.
 # Details:
-Create a GitHub Actions workflow file at `.github/workflows/ci.yml` with the following specifications:
-1. Configure the workflow to trigger on:
-   - Push events to any branch
-   - Pull request events targeting any branch
-2. Core workflow configuration:
-   - Use Ubuntu latest as the primary testing environment
-   - Use Node.js 20.x (LTS) for consistency with the project
-   - Focus on single environment for speed and simplicity
-3. Configure workflow steps to:
-   - Checkout the repository using actions/checkout@v4
-   - Set up Node.js using actions/setup-node@v4 with npm caching
-   - Install dependencies with 'npm ci'
-   - Run tests with 'npm run test:coverage'
-4. Implement efficient caching:
-   - Cache node_modules using actions/cache@v4
-   - Use package-lock.json hash for cache key
-   - Implement proper cache restoration keys
-5. Ensure proper timeouts:
-   - 2 minutes for dependency installation
-   - Appropriate timeout for test execution
-6. Artifact handling:
-   - Upload test results and coverage reports
-   - Use consistent naming for artifacts
-   - Retain artifacts for 30 days
+This implementation should include:
+
+1. Create a new command `graph` or `visualize` that displays the dependency graph.
+
+2. Design an ASCII/Unicode-based graph rendering system that:
+   - Represents each task as a node with its ID and abbreviated title
+   - Shows dependencies as directional lines between nodes (→, ↑, ↓, etc.)
+   - Uses color coding for different task statuses (e.g., green for completed, yellow for in-progress, red for blocked)
+   - Handles complex dependency chains with proper spacing and alignment
+
+3. Implement layout algorithms to:
+   - Minimize crossing lines for better readability
+   - Properly space nodes to avoid overlapping
+   - Support both vertical and horizontal graph orientations (as a configurable option)
+
+4. Add detection and highlighting of circular dependencies with a distinct color/pattern
+
+5. Include a legend explaining the color coding and symbols used
+
+6. Ensure the graph is responsive to terminal width, with options to:
+   - Automatically scale to fit the current terminal size
+   - Allow zooming in/out of specific sections for large graphs
+   - Support pagination or scrolling for very large dependency networks
+
+7. Add options to filter the graph by:
+   - Specific task IDs or ranges
+   - Task status
+   - Dependency depth (e.g., show only direct dependencies or N levels deep)
+
+8. Ensure accessibility by using distinct patterns in addition to colors for users with color vision deficiencies
+
+9. Optimize performance for projects with many tasks and complex dependency relationships
 # Test Strategy:
-To verify correct implementation of the GitHub Actions CI workflow:
-1. Manual verification:
-   - Check that the file is correctly placed at `.github/workflows/ci.yml`
-   - Verify the YAML syntax is valid
-   - Confirm all required configurations are present
-2. Functional testing:
-   - Push a commit to verify the workflow triggers
-   - Create a PR to verify the workflow runs on pull requests
-   - Verify test coverage reports are generated and uploaded
-   - Confirm caching is working effectively
-3. Performance testing:
-   - Verify cache hits reduce installation time
-   - Confirm workflow completes within expected timeframe
-   - Check artifact upload and download speeds
-# Subtasks:
-## 1. Create Basic GitHub Actions Workflow [pending]
-### Dependencies: None
-### Description: Set up the foundational GitHub Actions workflow file with proper triggers and Node.js setup
-### Details:
-1. Create `.github/workflows/ci.yml`
-2. Configure workflow name and triggers
-3. Set up Ubuntu runner and Node.js 20.x
-4. Implement checkout and Node.js setup actions
-5. Configure npm caching
-6. Test basic workflow functionality
-## 2. Implement Test and Coverage Steps [pending]
-### Dependencies: 41.1
-### Description: Add test execution and coverage reporting to the workflow
-### Details:
-1. Add dependency installation with proper timeout
-2. Configure test execution with coverage
-3. Set up test results and coverage artifacts
-4. Verify artifact upload functionality
-5. Test the complete workflow
-## 3. Optimize Workflow Performance [pending]
-### Dependencies: 41.1, 41.2
-### Description: Implement caching and performance optimizations
-### Details:
-1. Set up node_modules caching
-2. Configure cache key strategy
-3. Implement proper timeout values
-4. Test caching effectiveness
-5. Document performance improvements
+1. Unit Tests:
+   - Test the graph generation algorithm with various dependency structures
+   - Verify correct node placement and connection rendering
+   - Test circular dependency detection
+   - Verify color coding matches task statuses
+
+2. Integration Tests:
+   - Test the command with projects of varying sizes (small, medium, large)
+   - Verify correct handling of different terminal sizes
+   - Test all filtering options
+
+3. Visual Verification:
+   - Create test cases with predefined dependency structures and verify the visual output matches expected patterns
+   - Test with terminals of different sizes, including very narrow terminals
+   - Verify readability of complex graphs
+
+4. Edge Cases:
+   - Test with no dependencies (single nodes only)
+   - Test with circular dependencies
+   - Test with very deep dependency chains
+   - Test with wide dependency networks (many parallel tasks)
+   - Test with the maximum supported number of tasks
+
+5. Usability Testing:
+   - Have team members use the feature and provide feedback on readability and usefulness
+   - Test in different terminal emulators to ensure compatibility
+   - Verify the feature works in terminals with limited color support
+
+6. Performance Testing:
+   - Measure rendering time for large projects
+   - Ensure reasonable performance with 100+ interconnected tasks
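As a rough illustration of the rendering and cycle-detection behaviour described in the new task 41 above, a minimal Node.js sketch might look like the following. It assumes the tasks.json shape added later in this commit (`id`, `title`, `status`, `dependencies`) and a `tasks/tasks.json` path; the status names in the colour map and the helper names (`findCycles`, `renderGraph`) are illustrative and not part of the existing Task Master codebase.

```js
// Illustrative sketch only -- not the actual Task Master implementation.
// Assumes the tasks.json schema added later in this commit:
// { "tasks": [ { "id": 41, "title": "...", "status": "pending", "dependencies": [] }, ... ] }
import fs from 'node:fs';

const COLORS = { done: '\x1b[32m', 'in-progress': '\x1b[33m', blocked: '\x1b[31m', pending: '\x1b[90m' };
const RESET = '\x1b[0m';
const width = process.stdout.columns || 80; // respond to the current terminal width

function loadTasks(path = 'tasks/tasks.json') {
  return JSON.parse(fs.readFileSync(path, 'utf8')).tasks;
}

// Detect circular dependencies with a depth-first search over the dependency edges.
function findCycles(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.dependencies || []]));
  const state = new Map(); // undefined = unvisited, 1 = on the current path, 2 = finished
  const cycles = [];
  function visit(id, trail) {
    if (state.get(id) === 1) { cycles.push([...trail, id]); return; }
    if (state.get(id) === 2) return;
    state.set(id, 1);
    for (const dep of deps.get(id) || []) visit(dep, [...trail, id]);
    state.set(id, 2);
  }
  for (const t of tasks) visit(t.id, []);
  return cycles;
}

// Render each task as a colour-coded node with its direct dependencies one level below.
function renderGraph(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const maxTitle = Math.max(10, width - 20); // abbreviate titles to fit the terminal
  const lines = [];
  for (const t of tasks) {
    const color = COLORS[t.status] || RESET;
    lines.push(`${color}[${t.id}] ${t.title.slice(0, maxTitle)} (${t.status})${RESET}`);
    for (const dep of t.dependencies || []) {
      const d = byId.get(dep);
      lines.push(`  └─→ [${dep}] ${d ? d.title.slice(0, maxTitle) : 'unknown task'}`);
    }
  }
  return lines.join('\n');
}

const tasks = loadTasks();
console.log(renderGraph(tasks));
const cycles = findCycles(tasks);
if (cycles.length) console.error('Circular dependencies detected:', cycles);
```

A real implementation would still need the layout, legend, filtering, and pagination concerns from items 3–7, but an adjacency list plus a depth-first search already covers node rendering and circular-dependency detection.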

tasks/task_042.txt (new file, 91 lines)

@@ -0,0 +1,91 @@
# Task ID: 42
# Title: Implement MCP-to-MCP Communication Protocol
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Design and implement a communication protocol that allows Taskmaster to interact with external MCP (Model Context Protocol) tools and servers, enabling programmatic operations across these tools without requiring custom integration code. The system should dynamically connect to MCP servers chosen by the user for task storage and management (e.g., GitHub-MCP or Postgres-MCP). This eliminates the need for separate APIs or SDKs for each service. The goal is to create a standardized, agnostic system that facilitates seamless task execution and interaction with external systems. Additionally, the system should support two operational modes: **solo/local mode**, where tasks are managed locally using a `tasks.json` file, and **multiplayer/remote mode**, where tasks are managed via external MCP integrations. The core modules of Taskmaster should dynamically adapt their operations based on the selected mode, with multiplayer/remote mode leveraging MCP servers for all task management operations.
# Details:
This task involves creating a standardized way for Taskmaster to communicate with external MCP implementations and tools. The implementation should:
1. Define a standard protocol for communication with MCP servers, including authentication, request/response formats, and error handling.
2. Leverage the existing `fastmcp` server logic to enable interaction with external MCP tools programmatically, focusing on creating a modular and reusable system.
3. Implement an adapter pattern that allows Taskmaster to connect to any MCP-compliant tool or server.
4. Build a client module capable of discovering, connecting to, and exchanging data with external MCP tools, ensuring compatibility with various implementations.
5. Provide a reference implementation for interacting with a specific MCP tool (e.g., GitHub-MCP or Postgres-MCP) to demonstrate the protocol's functionality.
6. Ensure the protocol supports versioning to maintain compatibility as MCP tools evolve.
7. Implement rate limiting and backoff strategies to prevent overwhelming external MCP tools.
8. Create a configuration system that allows users to specify connection details for external MCP tools and servers.
9. Add support for two operational modes:
- **Solo/Local Mode**: Tasks are managed locally using a `tasks.json` file.
- **Multiplayer/Remote Mode**: Tasks are managed via external MCP integrations (e.g., GitHub-MCP or Postgres-MCP). The system should dynamically switch between these modes based on user configuration.
10. Update core modules to perform task operations on the appropriate system (local or remote) based on the selected mode, with remote mode relying entirely on MCP servers for task management.
11. Document the protocol thoroughly to enable other developers to implement it in their MCP tools.
The implementation should prioritize asynchronous communication where appropriate and handle network failures gracefully. Security considerations, including encryption and robust authentication mechanisms, should be integral to the design.
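A minimal sketch of the adapter idea from points 3, 9, and 10 above, assuming a common task-store interface selected by configuration; the class and tool names (`LocalTaskStore`, `McpTaskStore`, `list_tasks`, `update_task`) are hypothetical and not Taskmaster's actual API.

```js
// Hypothetical sketch of the adapter/dual-mode idea; not Taskmaster's real implementation.
import fs from 'node:fs/promises';

// Common interface both modes implement, so core modules never care where tasks live.
class TaskStore {
  async listTasks() { throw new Error('not implemented'); }
  async updateTask(id, patch) { throw new Error('not implemented'); }
}

// Solo/local mode: tasks are read from and written to tasks.json on disk.
class LocalTaskStore extends TaskStore {
  constructor(path = 'tasks/tasks.json') { super(); this.path = path; }
  async listTasks() {
    return JSON.parse(await fs.readFile(this.path, 'utf8')).tasks;
  }
  async updateTask(id, patch) {
    const data = JSON.parse(await fs.readFile(this.path, 'utf8'));
    data.tasks = data.tasks.map(t => (t.id === id ? { ...t, ...patch } : t));
    await fs.writeFile(this.path, JSON.stringify(data, null, 2));
  }
}

// Multiplayer/remote mode: every operation is delegated to an external MCP server
// (e.g. GitHub-MCP or Postgres-MCP); `client` and its callTool() surface are placeholders.
class McpTaskStore extends TaskStore {
  constructor(client) { super(); this.client = client; }
  async listTasks() { return this.client.callTool('list_tasks', {}); }
  async updateTask(id, patch) { return this.client.callTool('update_task', { id, ...patch }); }
}

// Core modules ask a factory for the store; the mode comes from user configuration.
export function getTaskStore(config, mcpClient) {
  return config.mode === 'remote' ? new McpTaskStore(mcpClient) : new LocalTaskStore(config.tasksPath);
}
```

The point of the factory is that switching between solo/local and multiplayer/remote modes becomes a configuration change rather than a code change in the core modules.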
# Test Strategy:
Testing should verify both the protocol design and implementation:
1. Unit tests for the adapter pattern, ensuring it correctly translates between Taskmaster's internal models and the MCP protocol.
2. Integration tests with a mock MCP tool or server to validate the full request/response cycle.
3. Specific tests for the reference implementation (e.g., GitHub-MCP or Postgres-MCP), including authentication flows.
4. Error handling tests that simulate network failures, timeouts, and malformed responses.
5. Performance tests to ensure the communication does not introduce significant latency.
6. Security tests to verify that authentication and encryption mechanisms are functioning correctly.
7. End-to-end tests demonstrating Taskmaster's ability to programmatically interact with external MCP tools and execute tasks.
8. Compatibility tests with different versions of the protocol to ensure backward compatibility.
9. Tests for mode switching:
- Validate that Taskmaster correctly operates in solo/local mode using the `tasks.json` file.
- Validate that Taskmaster correctly operates in multiplayer/remote mode with external MCP integrations (e.g., GitHub-MCP or Postgres-MCP).
- Ensure seamless switching between modes without data loss or corruption.
10. A test harness should be created to simulate an MCP tool or server for testing purposes without relying on external dependencies. Test cases should be documented thoroughly to serve as examples for other implementations.
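For point 10 above, a test harness can be as small as an in-memory fake that records calls; the sketch below pairs such a fake with a stripped-down adapter and a Jest-style test (it assumes the project's `npm run test:coverage` script provides `test`/`expect` globals). All names are illustrative.

```js
// Minimal in-memory fake standing in for an external MCP server (test strategy, point 10).
// It only mimics the placeholder callTool() surface from the adapter sketch earlier;
// it is not a real MCP transport.
class FakeMcpServer {
  constructor(tasks = []) { this.tasks = tasks; this.calls = []; }
  async callTool(name, args) {
    this.calls.push({ name, args }); // record traffic so tests can assert on it
    switch (name) {
      case 'list_tasks': return this.tasks;
      case 'update_task':
        this.tasks = this.tasks.map(t => (t.id === args.id ? { ...t, ...args } : t));
        return { ok: true };
      default: throw new Error(`unknown tool: ${name}`);
    }
  }
}

// Stripped-down stand-in for the adapter under test.
class McpTaskStore {
  constructor(client) { this.client = client; }
  listTasks() { return this.client.callTool('list_tasks', {}); }
}

test('remote mode lists tasks through the MCP client', async () => {
  const fake = new FakeMcpServer([{ id: 42, title: 'MCP protocol', status: 'pending' }]);
  const store = new McpTaskStore(fake);
  const tasks = await store.listTasks();
  expect(tasks).toHaveLength(1);
  expect(fake.calls[0].name).toBe('list_tasks');
});
```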
# Subtasks:
## 42-1. Define MCP-to-MCP communication protocol [pending]
### Dependencies: None
### Description:
### Details:
## 42-2. Implement adapter pattern for MCP integration [pending]
### Dependencies: None
### Description:
### Details:
## 42-3. Develop client module for MCP tool discovery and interaction [pending]
### Dependencies: None
### Description:
### Details:
## 42-4. Provide reference implementation for GitHub-MCP integration [pending]
### Dependencies: None
### Description:
### Details:
## 42-5. Add support for solo/local and multiplayer/remote modes [pending]
### Dependencies: None
### Description:
### Details:
## 42-6. Update core modules to support dynamic mode-based operations [pending]
### Dependencies: None
### Description:
### Details:
## 42-7. Document protocol and mode-switching functionality [pending]
### Dependencies: None
### Description:
### Details:
## 42-8. Update terminology to reflect MCP server-based communication [pending]
### Dependencies: None
### Description:
### Details:
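Points 1 and 6 of the details call for a versioned request/response format with explicit error handling. One possible envelope shape is sketched below; it is purely illustrative and not a published MCP wire format.

```js
// Purely illustrative envelope shapes for a versioned request/response exchange;
// field names and values are assumptions, not part of any MCP specification.
const exampleRequest = {
  protocolVersion: '1.0',            // versioning so peers can negotiate compatibility
  id: 'req-7f3a',                    // correlation id for matching the response
  auth: { type: 'bearer', token: '<redacted>' },
  operation: 'update_task',
  params: { taskId: 42, status: 'in-progress' },
};

const exampleSuccessResponse = {
  protocolVersion: '1.0',
  id: 'req-7f3a',
  result: { taskId: 42, status: 'in-progress' },
};

const exampleErrorResponse = {
  protocolVersion: '1.0',
  id: 'req-7f3a',
  error: { code: 'RATE_LIMITED', message: 'Too many requests', retryAfterMs: 2000 },
};
```

On an error such as the rate-limit case, the client-side backoff strategy from point 7 would retry with increasing delays rather than failing immediately.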

tasks/tasks.json

@@ -2377,6 +2377,68 @@
"priority": "medium", "priority": "medium",
"details": "Implement a new 'plan' command that will append a structured implementation plan to existing tasks or subtasks. The implementation should:\n\n1. Accept an '--id' parameter that can reference either a task or subtask ID\n2. Determine whether the ID refers to a task or subtask and retrieve the appropriate content from tasks.json and/or individual task files\n3. Generate a step-by-step implementation plan using AI (Claude by default)\n4. Support a '--research' flag to use Perplexity instead of Claude when needed\n5. Format the generated plan within XML tags like `<implementation_plan as of timestamp>...</implementation_plan>`\n6. Append this plan to the implementation details section of the task/subtask\n7. Display a confirmation card indicating the implementation plan was successfully created\n\nThe implementation plan should be detailed and actionable, containing specific steps such as searching for files, creating new files, modifying existing files, etc. The goal is to frontload planning work into the task/subtask so execution can begin immediately.\n\nReference the existing 'update-subtask' command implementation as a starting point, as it uses a similar approach for appending content to tasks. Ensure proper error handling for cases where the specified ID doesn't exist or when API calls fail.", "details": "Implement a new 'plan' command that will append a structured implementation plan to existing tasks or subtasks. The implementation should:\n\n1. Accept an '--id' parameter that can reference either a task or subtask ID\n2. Determine whether the ID refers to a task or subtask and retrieve the appropriate content from tasks.json and/or individual task files\n3. Generate a step-by-step implementation plan using AI (Claude by default)\n4. Support a '--research' flag to use Perplexity instead of Claude when needed\n5. Format the generated plan within XML tags like `<implementation_plan as of timestamp>...</implementation_plan>`\n6. Append this plan to the implementation details section of the task/subtask\n7. Display a confirmation card indicating the implementation plan was successfully created\n\nThe implementation plan should be detailed and actionable, containing specific steps such as searching for files, creating new files, modifying existing files, etc. The goal is to frontload planning work into the task/subtask so execution can begin immediately.\n\nReference the existing 'update-subtask' command implementation as a starting point, as it uses a similar approach for appending content to tasks. Ensure proper error handling for cases where the specified ID doesn't exist or when API calls fail.",
"testStrategy": "Testing should verify:\n\n1. Command correctly identifies and retrieves content for both task and subtask IDs\n2. Implementation plans are properly generated and formatted with XML tags and timestamps\n3. Plans are correctly appended to the implementation details section without overwriting existing content\n4. The '--research' flag successfully switches the backend from Claude to Perplexity\n5. Appropriate error messages are displayed for invalid IDs or API failures\n6. Confirmation card is displayed after successful plan creation\n\nTest cases should include:\n- Running 'plan --id 123' on an existing task\n- Running 'plan --id 123.1' on an existing subtask\n- Running 'plan --id 123 --research' to test the Perplexity integration\n- Running 'plan --id 999' with a non-existent ID to verify error handling\n- Running the command on tasks with existing implementation plans to ensure proper appending\n\nManually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements." "testStrategy": "Testing should verify:\n\n1. Command correctly identifies and retrieves content for both task and subtask IDs\n2. Implementation plans are properly generated and formatted with XML tags and timestamps\n3. Plans are correctly appended to the implementation details section without overwriting existing content\n4. The '--research' flag successfully switches the backend from Claude to Perplexity\n5. Appropriate error messages are displayed for invalid IDs or API failures\n6. Confirmation card is displayed after successful plan creation\n\nTest cases should include:\n- Running 'plan --id 123' on an existing task\n- Running 'plan --id 123.1' on an existing subtask\n- Running 'plan --id 123 --research' to test the Perplexity integration\n- Running 'plan --id 999' with a non-existent ID to verify error handling\n- Running the command on tasks with existing implementation plans to ensure proper appending\n\nManually review the quality of generated plans to ensure they provide actionable, step-by-step guidance that accurately reflects the task requirements."
},
{
"id": 41,
"title": "Implement Visual Task Dependency Graph in Terminal",
"description": "Create a feature that renders task dependencies as a visual graph using ASCII/Unicode characters in the terminal, with color-coded nodes representing tasks and connecting lines showing dependency relationships.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "This implementation should include:\n\n1. Create a new command `graph` or `visualize` that displays the dependency graph.\n\n2. Design an ASCII/Unicode-based graph rendering system that:\n - Represents each task as a node with its ID and abbreviated title\n - Shows dependencies as directional lines between nodes (→, ↑, ↓, etc.)\n - Uses color coding for different task statuses (e.g., green for completed, yellow for in-progress, red for blocked)\n - Handles complex dependency chains with proper spacing and alignment\n\n3. Implement layout algorithms to:\n - Minimize crossing lines for better readability\n - Properly space nodes to avoid overlapping\n - Support both vertical and horizontal graph orientations (as a configurable option)\n\n4. Add detection and highlighting of circular dependencies with a distinct color/pattern\n\n5. Include a legend explaining the color coding and symbols used\n\n6. Ensure the graph is responsive to terminal width, with options to:\n - Automatically scale to fit the current terminal size\n - Allow zooming in/out of specific sections for large graphs\n - Support pagination or scrolling for very large dependency networks\n\n7. Add options to filter the graph by:\n - Specific task IDs or ranges\n - Task status\n - Dependency depth (e.g., show only direct dependencies or N levels deep)\n\n8. Ensure accessibility by using distinct patterns in addition to colors for users with color vision deficiencies\n\n9. Optimize performance for projects with many tasks and complex dependency relationships",
"testStrategy": "1. Unit Tests:\n - Test the graph generation algorithm with various dependency structures\n - Verify correct node placement and connection rendering\n - Test circular dependency detection\n - Verify color coding matches task statuses\n\n2. Integration Tests:\n - Test the command with projects of varying sizes (small, medium, large)\n - Verify correct handling of different terminal sizes\n - Test all filtering options\n\n3. Visual Verification:\n - Create test cases with predefined dependency structures and verify the visual output matches expected patterns\n - Test with terminals of different sizes, including very narrow terminals\n - Verify readability of complex graphs\n\n4. Edge Cases:\n - Test with no dependencies (single nodes only)\n - Test with circular dependencies\n - Test with very deep dependency chains\n - Test with wide dependency networks (many parallel tasks)\n - Test with the maximum supported number of tasks\n\n5. Usability Testing:\n - Have team members use the feature and provide feedback on readability and usefulness\n - Test in different terminal emulators to ensure compatibility\n - Verify the feature works in terminals with limited color support\n\n6. Performance Testing:\n - Measure rendering time for large projects\n - Ensure reasonable performance with 100+ interconnected tasks"
},
{
"id": 42,
"title": "Implement MCP-to-MCP Communication Protocol",
"description": "Design and implement a communication protocol that allows Taskmaster to interact with external MCP (Model Context Protocol) tools and servers, enabling programmatic operations across these tools without requiring custom integration code. The system should dynamically connect to MCP servers chosen by the user for task storage and management (e.g., GitHub-MCP or Postgres-MCP). This eliminates the need for separate APIs or SDKs for each service. The goal is to create a standardized, agnostic system that facilitates seamless task execution and interaction with external systems. Additionally, the system should support two operational modes: **solo/local mode**, where tasks are managed locally using a `tasks.json` file, and **multiplayer/remote mode**, where tasks are managed via external MCP integrations. The core modules of Taskmaster should dynamically adapt their operations based on the selected mode, with multiplayer/remote mode leveraging MCP servers for all task management operations.",
"status": "pending",
"dependencies": [],
"priority": "medium",
"details": "This task involves creating a standardized way for Taskmaster to communicate with external MCP implementations and tools. The implementation should:\n\n1. Define a standard protocol for communication with MCP servers, including authentication, request/response formats, and error handling.\n2. Leverage the existing `fastmcp` server logic to enable interaction with external MCP tools programmatically, focusing on creating a modular and reusable system.\n3. Implement an adapter pattern that allows Taskmaster to connect to any MCP-compliant tool or server.\n4. Build a client module capable of discovering, connecting to, and exchanging data with external MCP tools, ensuring compatibility with various implementations.\n5. Provide a reference implementation for interacting with a specific MCP tool (e.g., GitHub-MCP or Postgres-MCP) to demonstrate the protocol's functionality.\n6. Ensure the protocol supports versioning to maintain compatibility as MCP tools evolve.\n7. Implement rate limiting and backoff strategies to prevent overwhelming external MCP tools.\n8. Create a configuration system that allows users to specify connection details for external MCP tools and servers.\n9. Add support for two operational modes:\n - **Solo/Local Mode**: Tasks are managed locally using a `tasks.json` file.\n - **Multiplayer/Remote Mode**: Tasks are managed via external MCP integrations (e.g., GitHub-MCP or Postgres-MCP). The system should dynamically switch between these modes based on user configuration.\n10. Update core modules to perform task operations on the appropriate system (local or remote) based on the selected mode, with remote mode relying entirely on MCP servers for task management.\n11. Document the protocol thoroughly to enable other developers to implement it in their MCP tools.\n\nThe implementation should prioritize asynchronous communication where appropriate and handle network failures gracefully. Security considerations, including encryption and robust authentication mechanisms, should be integral to the design.",
"testStrategy": "Testing should verify both the protocol design and implementation:\n\n1. Unit tests for the adapter pattern, ensuring it correctly translates between Taskmaster's internal models and the MCP protocol.\n2. Integration tests with a mock MCP tool or server to validate the full request/response cycle.\n3. Specific tests for the reference implementation (e.g., GitHub-MCP or Postgres-MCP), including authentication flows.\n4. Error handling tests that simulate network failures, timeouts, and malformed responses.\n5. Performance tests to ensure the communication does not introduce significant latency.\n6. Security tests to verify that authentication and encryption mechanisms are functioning correctly.\n7. End-to-end tests demonstrating Taskmaster's ability to programmatically interact with external MCP tools and execute tasks.\n8. Compatibility tests with different versions of the protocol to ensure backward compatibility.\n9. Tests for mode switching:\n - Validate that Taskmaster correctly operates in solo/local mode using the `tasks.json` file.\n - Validate that Taskmaster correctly operates in multiplayer/remote mode with external MCP integrations (e.g., GitHub-MCP or Postgres-MCP).\n - Ensure seamless switching between modes without data loss or corruption.\n10. A test harness should be created to simulate an MCP tool or server for testing purposes without relying on external dependencies. Test cases should be documented thoroughly to serve as examples for other implementations.",
"subtasks": [
{
"id": "42-1",
"title": "Define MCP-to-MCP communication protocol",
"status": "pending"
},
{
"id": "42-2",
"title": "Implement adapter pattern for MCP integration",
"status": "pending"
},
{
"id": "42-3",
"title": "Develop client module for MCP tool discovery and interaction",
"status": "pending"
},
{
"id": "42-4",
"title": "Provide reference implementation for GitHub-MCP integration",
"status": "pending"
},
{
"id": "42-5",
"title": "Add support for solo/local and multiplayer/remote modes",
"status": "pending"
},
{
"id": "42-6",
"title": "Update core modules to support dynamic mode-based operations",
"status": "pending"
},
{
"id": "42-7",
"title": "Document protocol and mode-switching functionality",
"status": "pending"
},
{
"id": "42-8",
"title": "Update terminology to reflect MCP server-based communication",
"status": "pending"
}
]
}
]
}
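Tying the two new entries together: the `dependencies` arrays in the JSON above are what task 41's dependency-depth filter (detail item 7) would walk. A hypothetical helper, assuming the same schema:

```js
// Hypothetical helper for task 41's "dependency depth" filter (detail item 7):
// starting from a root task, keep everything reachable within N dependency hops.
function filterByDepth(tasks, rootId, maxDepth) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const keep = new Set();
  let frontier = [rootId];
  for (let depth = 0; depth <= maxDepth && frontier.length; depth++) {
    const next = [];
    for (const id of frontier) {
      if (keep.has(id) || !byId.has(id)) continue;
      keep.add(id);
      next.push(...(byId.get(id).dependencies || []));
    }
    frontier = next;
  }
  return tasks.filter(t => keep.has(t.id));
}

// e.g. show task 42 plus only its direct dependencies:
// filterByDepth(allTasks, 42, 1);
```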