Adjusts subtasks for tasks 24 and 26.

@@ -1,16 +1,16 @@
# Task ID: 24
# Title: Implement AI-Powered Test Generation Command using FastMCP
# Title: Implement AI-Powered Test Generation Command
# Status: pending
# Dependencies: 22
# Priority: high
# Description: Create a new 'generate-test' command in Task Master that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks, utilizing FastMCP for AI integration.
# Description: Create a new 'generate-test' command in Task Master that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks, utilizing Claude API for AI integration.
# Details:
Implement a new command in the Task Master CLI that generates comprehensive Jest test files for tasks. The command should be callable as 'task-master generate-test --id=1' and should:

1. Accept a task ID parameter to identify which task to generate tests for
2. Retrieve the task and its subtasks from the task store
3. Analyze the task description, details, and subtasks to understand implementation requirements
4. Construct an appropriate prompt for the AI service using FastMCP
4. Construct an appropriate prompt for the AI service using Claude API
5. Process the AI response to create a well-formatted test file named 'task_XXX.test.ts' where XXX is the zero-padded task ID
6. Include appropriate test cases that cover the main functionality described in the task
7. Generate mocks for external dependencies identified in the task description
@@ -19,14 +19,14 @@ Implement a new command in the Task Master CLI that generates comprehensive Jest
10. Include error handling for API failures, invalid task IDs, etc.
11. Add appropriate documentation for the command in the help system

The implementation should utilize FastMCP for AI service integration and maintain consistency with the current command structure and error handling patterns. Consider using TypeScript for better type safety and integration with FastMCP[1][2].
The implementation should utilize the Claude API for AI service integration and maintain consistency with the current command structure and error handling patterns. Consider using TypeScript for better type safety and integration with the Claude API.
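A minimal sketch of how the command registration described above could look, assuming Commander.js (already used for the CLI foundation) and TypeScript. The tasks.json path, the task shape, and the loadTask/generateTestsWithAi helpers are placeholders, not the real Task Master modules.

```ts
import { Command } from 'commander';
import { promises as fs } from 'fs';

// Placeholder task lookup; assumes tasks.json lives at tasks/tasks.json with a top-level "tasks" array.
async function loadTask(id: number): Promise<{ id: number; title: string; details: string }> {
  const data = JSON.parse(await fs.readFile('tasks/tasks.json', 'utf8'));
  const task = data.tasks.find((t: { id: number }) => t.id === id);
  if (!task) throw new Error(`Task ${id} not found`);
  return task;
}

// Placeholder AI call; the real command would go through the Claude API integration.
async function generateTestsWithAi(prompt: string): Promise<string> {
  return `describe('generated', () => { it.todo(${JSON.stringify(prompt.slice(0, 40))}); });`;
}

const program = new Command();

program
  .command('generate-test')
  .description('Generate a Jest test file for a task using AI')
  .requiredOption('--id <id>', 'ID of the task to generate tests for')
  .action(async (options: { id: string }) => {
    const taskId = Number.parseInt(options.id, 10);
    if (Number.isNaN(taskId)) {
      console.error(`Invalid task ID: ${options.id}`);
      process.exitCode = 1;
      return;
    }
    try {
      const task = await loadTask(taskId);
      const prompt = `Write Jest tests for: ${task.title}\n\n${task.details}`;
      const testCode = await generateTestsWithAi(prompt);
      // Requirement 5: zero-padded file name, e.g. task_001.test.ts for --id=1.
      const fileName = `task_${String(taskId).padStart(3, '0')}.test.ts`;
      await fs.writeFile(fileName, testCode, 'utf8');
      console.log(`Generated ${fileName}`);
    } catch (err) {
      console.error(`generate-test failed: ${(err as Error).message}`);
      process.exitCode = 1;
    }
  });

program.parseAsync(process.argv);
```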

# Test Strategy:
Testing for this feature should include:

1. Unit tests for the command handler function to verify it correctly processes arguments and options
2. Mock tests for the FastMCP integration to ensure proper prompt construction and response handling
3. Integration tests that verify the end-to-end flow using a mock FastMCP response
2. Mock tests for the Claude API integration to ensure proper prompt construction and response handling
3. Integration tests that verify the end-to-end flow using a mock Claude API response
4. Tests for error conditions including:
   - Invalid task IDs
   - Network failures when contacting the AI service
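Point 2 of the test strategy above can be exercised with plain Jest mocks. A rough sketch follows; the AiClient interface and generateTest function are stand-ins defined inline so the mocking pattern is visible, rather than the actual command module.

```ts
// Stand-in AI client and command logic; real tests would import the generate-test module instead.
interface AiClient {
  complete(prompt: string): Promise<string>;
}

async function generateTest(taskTitle: string, ai: AiClient): Promise<string> {
  const response = await ai.complete(`Write Jest tests for: ${taskTitle}`);
  if (!response.includes('describe(')) {
    throw new Error('Malformed AI response');
  }
  return response;
}

describe('generate-test AI integration', () => {
  it('builds a prompt from the task and returns the generated test code', async () => {
    const mockAi: AiClient = {
      complete: jest.fn().mockResolvedValue("describe('task 24', () => { it.todo('works'); });"),
    };
    const code = await generateTest('Implement AI-Powered Test Generation Command', mockAi);
    expect(mockAi.complete).toHaveBeenCalledWith(
      expect.stringContaining('Implement AI-Powered Test Generation Command')
    );
    expect(code).toContain('describe(');
  });

  it('rejects a malformed AI response', async () => {
    const mockAi: AiClient = { complete: jest.fn().mockResolvedValue('not test code') };
    await expect(generateTest('any task', mockAi)).rejects.toThrow('Malformed AI response');
  });
});
```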
@@ -59,30 +59,30 @@ Testing approach:
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store

## 2. Implement AI prompt construction and FastMCP integration [pending]
## 2. Implement AI prompt construction [pending]
### Dependencies: 24.1
### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service using FastMCP to generate test content.
### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service using the existing ai-services.js to generate test content.
### Details:
Implementation steps:
1. Create a utility function to analyze task descriptions and subtasks for test requirements
2. Implement a prompt builder that formats task information into an effective AI prompt
3. Use FastMCP to send the prompt and receive the response
4. Process the FastMCP response to extract the generated test code
5. Implement error handling for FastMCP failures, rate limits, and malformed responses
6. Add appropriate logging for the FastMCP interaction process
3. Use ai-services.js as needed to send the prompt and receive the response (streaming)
4. Process the response to extract the generated test code
5. Implement error handling for failures, rate limits, and malformed responses
6. Add appropriate logging for the test generation process

Testing approach:
- Test prompt construction with various task types
- Test FastMCP integration with mocked responses
- Test error handling for FastMCP failures
- Test response processing with sample FastMCP outputs
- Test ai-services.js integration with mocked responses
- Test error handling for AI service failures
- Test response processing with sample ai-services.js outputs
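A possible shape for the prompt builder and response post-processing in this subtask, sketched under the assumption that the AI reply may wrap the code in a markdown fence. The Task interface here is illustrative, not the exact task store schema.

```ts
// Illustrative task shape; the real task store schema may differ.
interface Task {
  id: number;
  title: string;
  description: string;
  details?: string;
  subtasks?: { id: number; title: string; description: string }[];
}

// Build a single prompt from the task's description, details, and subtasks (step 2 above).
function buildTestPrompt(task: Task): string {
  const subtaskLines = (task.subtasks ?? [])
    .map((s) => `- ${s.id}: ${s.title}: ${s.description}`)
    .join('\n');
  return [
    'Write a Jest test file (TypeScript) for the following task.',
    `Title: ${task.title}`,
    `Description: ${task.description}`,
    task.details ? `Details:\n${task.details}` : '',
    subtaskLines ? `Subtasks:\n${subtaskLines}` : '',
    'Return only the test code inside a single fenced code block.',
  ].filter(Boolean).join('\n\n');
}

// Post-process the AI reply (step 4 above): unwrap a markdown fence and sanity-check the result.
function extractTestCode(response: string): string {
  const fence = '`'.repeat(3); // markdown code fence marker
  const match = response.match(new RegExp(`${fence}\\w*\\n([\\s\\S]*?)${fence}`));
  const code = (match ? match[1] : response).trim();
  if (!/\b(describe|test|it)\s*\(/.test(code)) {
    throw new Error('AI response does not look like a Jest test file');
  }
  return code;
}
```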

## 3. Implement test file generation and output [pending]
### Dependencies: 24.2
### Description: Create functionality to format AI-generated tests into proper Jest test files and save them to the appropriate location.
### Details:
Implementation steps:
1. Create a utility to format the FastMCP response into a well-structured Jest test file
1. Create a utility to format the ai-services.js response into a well-structured Jest test file
2. Implement naming logic for test files (task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks)
3. Add logic to determine the appropriate file path for saving the test
4. Implement file system operations to write the test file
@@ -93,7 +93,7 @@ Implementation steps:

Testing approach:
- Test file naming logic for various task/subtask combinations
- Test file content formatting with sample FastMCP outputs
- Test file content formatting with sample ai-services.js outputs
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
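The naming rules in this subtask reduce to a small amount of path logic. A sketch, assuming the subtask ID is zero-padded like the task ID and that the output directory is passed in by the caller.

```ts
import { promises as fs } from 'fs';
import path from 'path';

// task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks, IDs zero-padded.
function testFileName(taskId: number, subtaskId?: number): string {
  const pad = (n: number) => String(n).padStart(3, '0');
  return subtaskId === undefined
    ? `task_${pad(taskId)}.test.ts`
    : `task_${pad(taskId)}_${pad(subtaskId)}.test.ts`;
}

// Write the formatted test content to the chosen directory, creating it if needed.
async function writeTestFile(outDir: string, taskId: number, code: string, subtaskId?: number): Promise<string> {
  const filePath = path.join(outDir, testFileName(taskId, subtaskId));
  await fs.mkdir(outDir, { recursive: true });
  await fs.writeFile(filePath, code, 'utf8');
  return filePath;
}

// testFileName(24)    -> 'task_024.test.ts'
// testFileName(24, 2) -> 'task_024_002.test.ts'
```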

@@ -88,9 +88,3 @@ The testing focus should be on proving immediate value to users while ensuring r
### Details:
1. Create utility functions in utils.js for reading context from files\n2. Add proper error handling and logging for file access issues\n3. Implement content validation to ensure reasonable size limits\n4. Add content truncation if files exceed token limits\n5. Create helper functions for formatting context additions properly\n6. Document the utility functions with clear examples
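A rough sketch of the utils.js helpers described in this subtask, written in TypeScript to match the other examples. The token limit and the 4-characters-per-token estimate are assumptions, not values from the codebase.

```ts
import { promises as fs } from 'fs';

const MAX_CONTEXT_TOKENS = 4000; // assumed limit, not taken from the real configuration

// Read a context file, logging (not throwing) on access errors and truncating oversized content.
async function readContextFile(filePath: string, maxTokens = MAX_CONTEXT_TOKENS): Promise<string> {
  let content: string;
  try {
    content = await fs.readFile(filePath, 'utf8');
  } catch (err) {
    console.warn(`Could not read context file ${filePath}: ${(err as Error).message}`);
    return '';
  }
  const maxChars = maxTokens * 4; // rough estimate of ~4 characters per token
  if (content.length > maxChars) {
    console.warn(`Context from ${filePath} truncated to roughly ${maxTokens} tokens`);
    content = content.slice(0, maxChars) + '\n[...truncated...]';
  }
  return content;
}

// Format a context addition for inclusion in an AI prompt.
function formatContext(label: string, content: string): string {
  return content ? `\n--- ${label} ---\n${content}\n` : '';
}
```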

## 5. Test Subtask [pending]
### Dependencies: None
### Description: This is a test subtask
### Details:

@@ -19,6 +19,19 @@
"testStrategy": "Verify that the tasks.json structure can be created, read, and validated. Test with sample data to ensure all fields are properly handled and that validation correctly identifies invalid structures.",
"subtasks": []
},
{
"id": 2,
"title": "Develop Command Line Interface Foundation",
"description": "Create the basic CLI structure using Commander.js with command parsing and help documentation.",
"status": "done",
"dependencies": [
1
],
"priority": "high",
"details": "Implement the CLI foundation including:\n- Set up Commander.js for command parsing\n- Create help documentation for all commands\n- Implement colorized console output for better readability\n- Add logging system with configurable levels\n- Handle global options (--help, --version, --file, --quiet, --debug, --json)",
"testStrategy": "Test each command with various parameters to ensure proper parsing. Verify help documentation is comprehensive and accurate. Test logging at different verbosity levels.",
"subtasks": []
},
{
"id": 3,
"title": "Implement Basic Task Operations",
@@ -1335,15 +1348,15 @@
},
{
"id": 24,
"title": "Implement AI-Powered Test Generation Command using FastMCP",
"description": "Create a new 'generate-test' command in Task Master that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks, utilizing FastMCP for AI integration.",
"title": "Implement AI-Powered Test Generation Command",
"description": "Create a new 'generate-test' command in Task Master that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks, utilizing Claude API for AI integration.",
"status": "pending",
"dependencies": [
22
],
"priority": "high",
"details": "Implement a new command in the Task Master CLI that generates comprehensive Jest test files for tasks. The command should be callable as 'task-master generate-test --id=1' and should:\n\n1. Accept a task ID parameter to identify which task to generate tests for\n2. Retrieve the task and its subtasks from the task store\n3. Analyze the task description, details, and subtasks to understand implementation requirements\n4. Construct an appropriate prompt for the AI service using FastMCP\n5. Process the AI response to create a well-formatted test file named 'task_XXX.test.ts' where XXX is the zero-padded task ID\n6. Include appropriate test cases that cover the main functionality described in the task\n7. Generate mocks for external dependencies identified in the task description\n8. Create assertions that validate the expected behavior\n9. Handle both parent tasks and subtasks appropriately (for subtasks, name the file 'task_XXX_YYY.test.ts' where YYY is the subtask ID)\n10. Include error handling for API failures, invalid task IDs, etc.\n11. Add appropriate documentation for the command in the help system\n\nThe implementation should utilize FastMCP for AI service integration and maintain consistency with the current command structure and error handling patterns. Consider using TypeScript for better type safety and integration with FastMCP[1][2].",
"testStrategy": "Testing for this feature should include:\n\n1. Unit tests for the command handler function to verify it correctly processes arguments and options\n2. Mock tests for the FastMCP integration to ensure proper prompt construction and response handling\n3. Integration tests that verify the end-to-end flow using a mock FastMCP response\n4. Tests for error conditions including:\n - Invalid task IDs\n - Network failures when contacting the AI service\n - Malformed AI responses\n - File system permission issues\n5. Verification that generated test files follow Jest conventions and can be executed\n6. Tests for both parent task and subtask handling\n7. Manual verification of the quality of generated tests by running them against actual task implementations\n\nCreate a test fixture with sample tasks of varying complexity to evaluate the test generation capabilities across different scenarios. The tests should verify that the command outputs appropriate success/error messages to the console and creates files in the expected location with proper content structure.",
"details": "Implement a new command in the Task Master CLI that generates comprehensive Jest test files for tasks. The command should be callable as 'task-master generate-test --id=1' and should:\n\n1. Accept a task ID parameter to identify which task to generate tests for\n2. Retrieve the task and its subtasks from the task store\n3. Analyze the task description, details, and subtasks to understand implementation requirements\n4. Construct an appropriate prompt for the AI service using Claude API\n5. Process the AI response to create a well-formatted test file named 'task_XXX.test.ts' where XXX is the zero-padded task ID\n6. Include appropriate test cases that cover the main functionality described in the task\n7. Generate mocks for external dependencies identified in the task description\n8. Create assertions that validate the expected behavior\n9. Handle both parent tasks and subtasks appropriately (for subtasks, name the file 'task_XXX_YYY.test.ts' where YYY is the subtask ID)\n10. Include error handling for API failures, invalid task IDs, etc.\n11. Add appropriate documentation for the command in the help system\n\nThe implementation should utilize the Claude API for AI service integration and maintain consistency with the current command structure and error handling patterns. Consider using TypeScript for better type safety and integration with the Claude API.",
"testStrategy": "Testing for this feature should include:\n\n1. Unit tests for the command handler function to verify it correctly processes arguments and options\n2. Mock tests for the Claude API integration to ensure proper prompt construction and response handling\n3. Integration tests that verify the end-to-end flow using a mock Claude API response\n4. Tests for error conditions including:\n - Invalid task IDs\n - Network failures when contacting the AI service\n - Malformed AI responses\n - File system permission issues\n5. Verification that generated test files follow Jest conventions and can be executed\n6. Tests for both parent task and subtask handling\n7. Manual verification of the quality of generated tests by running them against actual task implementations\n\nCreate a test fixture with sample tasks of varying complexity to evaluate the test generation capabilities across different scenarios. The tests should verify that the command outputs appropriate success/error messages to the console and creates files in the expected location with proper content structure.",
"subtasks": [
|
||||
{
|
||||
"id": 1,
|
||||
@@ -1496,15 +1509,6 @@
"status": "pending",
"dependencies": [],
"parentTaskId": 26
},
{
"id": 5,
"title": "Test Subtask",
"description": "This is a test subtask",
"details": "",
"status": "pending",
"dependencies": [],
"parentTaskId": 26
}
]
},