This commit introduces a comprehensive set of skipped tests to both task-manager.test.js and utils.test.js. These skipped tests serve as a blueprint for future test implementation, outlining the necessary test cases for currently untested functionality.
- Ensures sync with the bin/ folder by adding -r/--research to the command
- Fixes an issue where command-line arguments were parsed improperly
- Ensures a confirmation card is shown on dependency add/remove
- Properly formats some subtask dependencies
**Potentially addressed issues:**
While primarily focused on adding test coverage, this commit also implicitly addresses potential issues by:
- **Improving error handling coverage:** The addition of skipped tests for error scenarios across several task-manager functions highlights areas where error handling needs to be robustly tested and potentially improved in the codebase.
- **Enhancing dependency validation:** The skipped tests that cover dependency validation prompt a review of the dependency validation logic and help ensure its correctness.
- **Standardizing test coverage:** By creating a clear roadmap for testing all functions, this commit contributes to a more standardized and complete test suite, reducing the likelihood of undiscovered bugs in the future.
**task-manager.test.js:**
- Added skipped test blocks for the following functions:
- : Includes tests for handling valid JSON responses, malformed JSON, missing tasks in responses, Perplexity AI research integration, Claude fallback, and parallel task processing.
- : Covers tests for updating tasks based on context, handling Claude streaming, Perplexity AI integration, scenarios with no tasks to update, and error handling during updates.
- : Includes tests for generating task files from , formatting dependencies with status indicators, handling tasks without subtasks, empty task arrays, and dependency validation before file generation.
- : Covers tests for updating task status, subtask status using dot notation, updating multiple tasks, automatic subtask status updates, parent task update suggestions, and handling non-existent task IDs.
- : Includes tests for updating regular and subtask statuses, handling parent tasks without subtasks, and non-existent subtask IDs.
- : Covers tests for displaying all tasks, filtering by status, displaying subtasks, showing completion statistics, identifying the next task, and handling empty task arrays.
- : Includes tests for generating subtasks, using complexity reports for subtask counts, Perplexity AI integration, appending subtasks, skipping completed tasks, and error handling during subtask generation.
- : Covers tests for expanding all pending tasks, sorting by complexity, skipping tasks with existing subtasks (unless forced), using task-specific parameters from complexity reports, handling empty task arrays, and error handling for individual tasks.
- : Includes tests for clearing subtasks from specific and multiple tasks, handling tasks without subtasks, non-existent task IDs, and regenerating task files after clearing subtasks.
- : Covers tests for adding new tasks using AI, handling Claude streaming, validating dependencies, handling malformed AI responses, and using existing task context for generation.
**utils.test.js:**
- Added skipped test blocks for the following functions:
- : Tests for logging messages according to log levels and filtering messages below configured levels.
- : Tests for reading and parsing valid JSON files, handling file not found errors, and invalid JSON formats.
- : Tests for writing JSON data to files and handling file write errors.
- : Tests for escaping double quotes in prompts and handling prompts without special characters.
- : Tests for reading and parsing complexity reports, handling missing report files, and custom report paths.
- : Tests for finding tasks in reports by ID, handling non-existent task IDs, and invalid report structures.
- : Tests for verifying existing task and subtask IDs, handling non-existent IDs, and invalid inputs.
- : Tests for formatting numeric and string task IDs and preserving dot notation for subtasks.
- : Tests for detecting simple and complex cycles in dependency graphs, handling acyclic graphs, and empty dependency maps.
These skipped tests provide a clear roadmap for future test development, ensuring comprehensive coverage for core functionalities in both modules. They document the intended behavior of each function and outline various scenarios, including happy paths, edge cases, and error conditions, thereby improving the overall test strategy and maintainability of the Task Master CLI.
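For reference, a skipped block of the kind this commit adds might look like the sketch below. The describe/it names are placeholders for illustration, not the exact blocks added by the commit.

```typescript
// Placeholder names -- the real skipped blocks cover the functions listed above
// in task-manager.test.js and utils.test.js.
describe.skip('setTaskStatus (placeholder name)', () => {
  it('updates the status of a regular task', () => {
    // TODO: implement once fixtures and mocks are in place
  });

  it("updates a subtask's status using dot notation (e.g. '3.2')", () => {
    // TODO: implement
  });

  it('handles non-existent task IDs', () => {
    // TODO: implement
  });
});
```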
# Task ID: 24
# Title: Implement AI-Powered Test Generation Command using FastMCP
# Status: pending
# Dependencies: 22
# Priority: high
# Description: Create a new 'generate-test' command in Task Master that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks, utilizing FastMCP for AI integration.
# Details:
Implement a new command in the Task Master CLI that generates comprehensive Jest test files for tasks. The command should be callable as 'task-master generate-test --id=1' and should:

1. Accept a task ID parameter to identify which task to generate tests for
2. Retrieve the task and its subtasks from the task store
3. Analyze the task description, details, and subtasks to understand implementation requirements
4. Construct an appropriate prompt for the AI service using FastMCP
5. Process the AI response to create a well-formatted test file named 'task_XXX.test.ts', where XXX is the zero-padded task ID
6. Include appropriate test cases that cover the main functionality described in the task
7. Generate mocks for external dependencies identified in the task description
8. Create assertions that validate the expected behavior
9. Handle both parent tasks and subtasks appropriately (for subtasks, name the file 'task_XXX_YYY.test.ts', where YYY is the subtask ID)
10. Include error handling for API failures, invalid task IDs, etc.
11. Add appropriate documentation for the command in the help system

The implementation should utilize FastMCP for AI service integration and maintain consistency with the current command structure and error handling patterns. Consider using TypeScript for better type safety and integration with FastMCP[1][2].
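As a rough illustration of how the steps above could hang together, the sketch below uses hypothetical helpers (getTaskById, buildTestPrompt, callAiModel, writeTestFile). None of these names come from the task itself, and the FastMCP-backed call is hidden behind callAiModel.

```typescript
// Hypothetical helper modules -- names and paths are assumptions for illustration only.
import { getTaskById } from '../task-store';        // steps 1-2: look up the task and its subtasks
import { buildTestPrompt } from './prompt-builder'; // steps 3-4: turn the task into an AI prompt
import { callAiModel } from './ai-client';          // step 5: stand-in for the FastMCP-backed call
import { writeTestFile } from './test-writer';      // steps 5 and 9: naming and file output

export async function generateTest(taskId: number, subtaskId?: number): Promise<string> {
  const task = await getTaskById(taskId);
  if (!task) {
    throw new Error(`Task ${taskId} not found`); // step 10: invalid task IDs
  }
  const prompt = buildTestPrompt(task, subtaskId);
  const testSource = await callAiModel(prompt); // step 10 also applies here: API failures
  return writeTestFile(testSource, taskId, subtaskId);
}
```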
# Test Strategy:
Testing for this feature should include:

1. Unit tests for the command handler function to verify it correctly processes arguments and options
2. Mock tests for the FastMCP integration to ensure proper prompt construction and response handling
3. Integration tests that verify the end-to-end flow using a mock FastMCP response
4. Tests for error conditions including:
   - Invalid task IDs
   - Network failures when contacting the AI service
   - Malformed AI responses
   - File system permission issues
5. Verification that generated test files follow Jest conventions and can be executed
6. Tests for both parent task and subtask handling
7. Manual verification of the quality of generated tests by running them against actual task implementations

Create a test fixture with sample tasks of varying complexity to evaluate the test generation capabilities across different scenarios. The tests should verify that the command outputs appropriate success/error messages to the console and creates files in the expected location with proper content structure.
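A hedged sketch of item 1 above: unit-testing the handler with the AI layer, task store, and file writer all mocked. Module paths and helper names follow the earlier sketch and are assumptions, not the project's real layout.

```typescript
import { generateTest } from '../src/commands/generate-test';

// All collaborators are mocked so the handler can be exercised without network or disk access.
jest.mock('../src/commands/ai-client', () => ({
  callAiModel: jest.fn().mockResolvedValue("describe('generated', () => { it.todo('todo'); });"),
}));
jest.mock('../src/task-store', () => ({
  getTaskById: jest.fn(async (id: number) =>
    id === 1 ? { id: 1, title: 'Sample task', description: 'Does something', subtasks: [] } : null,
  ),
}));
jest.mock('../src/commands/test-writer', () => ({
  writeTestFile: jest.fn(async (_source: string, id: number) =>
    `tests/task_${String(id).padStart(3, '0')}.test.ts`,
  ),
}));

describe('generate-test command handler', () => {
  it('resolves with a zero-padded file path for a valid task ID', async () => {
    await expect(generateTest(1)).resolves.toMatch(/task_001\.test\.ts$/);
  });

  it('rejects with a clear error for a non-existent task ID', async () => {
    await expect(generateTest(9999)).rejects.toThrow(/not found/);
  });
});
```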
# Subtasks:
## 1. Create command structure for 'generate-test' [pending]
### Dependencies: None
### Description: Implement the basic structure for the 'generate-test' command, including command registration, parameter validation, and help documentation.
### Details:
Implementation steps:
1. Create a new file `src/commands/generate-test.ts`
2. Implement the command structure following the pattern of existing commands
3. Register the new command in the CLI framework
4. Add command options for the task ID (--id=X) parameter
5. Implement parameter validation to ensure a valid task ID is provided
6. Add help documentation for the command
7. Create the basic command flow that retrieves the task from the task store
8. Implement error handling for invalid task IDs and other basic errors

Testing approach:
- Test command registration
- Test parameter validation (missing ID, invalid ID format)
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store
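One possible shape for this subtask, assuming a Commander-style CLI (an assumption; the task only says "the CLI framework") and the generateTest handler sketched earlier:

```typescript
import { Command } from 'commander';
import { generateTest } from './generate-test-handler'; // hypothetical handler module

export function registerGenerateTestCommand(program: Command): void {
  program
    .command('generate-test')
    .description('Generate a Jest test file for a task using AI (FastMCP)')
    .requiredOption('--id <id>', 'ID of the task to generate tests for')
    .action(async (options: { id: string }) => {
      const taskId = Number.parseInt(options.id, 10);
      // Step 5: validate the task ID before doing any work.
      if (Number.isNaN(taskId) || taskId <= 0) {
        console.error(`Invalid task ID: ${options.id}`);
        process.exitCode = 1;
        return;
      }
      try {
        const outPath = await generateTest(taskId);
        console.log(`Generated test file: ${outPath}`);
      } catch (err) {
        // Step 8: surface invalid-ID and other basic errors clearly.
        console.error(err instanceof Error ? err.message : String(err));
        process.exitCode = 1;
      }
    });
}
```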
## 2. Implement AI prompt construction and FastMCP integration [pending]
### Dependencies: 24.1
### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service using FastMCP to generate test content.
### Details:
Implementation steps:
1. Create a utility function to analyze task descriptions and subtasks for test requirements
2. Implement a prompt builder that formats task information into an effective AI prompt
3. Use FastMCP to send the prompt and receive the response
4. Process the FastMCP response to extract the generated test code
5. Implement error handling for FastMCP failures, rate limits, and malformed responses
6. Add appropriate logging for the FastMCP interaction process

Testing approach:
- Test prompt construction with various task types
- Test FastMCP integration with mocked responses
- Test error handling for FastMCP failures
- Test response processing with sample FastMCP outputs
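A minimal sketch of step 2 (the prompt builder), with an assumed Task shape; the FastMCP send/receive of steps 3-4 is deliberately omitted because its client API is not specified by this task.

```typescript
// Hypothetical Task shape -- field names are assumptions based on the task description above.
interface Subtask { id: number; title: string; description: string; }
interface Task { id: number; title: string; description: string; details?: string; subtasks?: Subtask[]; }

// Step 2: format task information into a prompt string for the AI service.
export function buildTestPrompt(task: Task, subtaskId?: number): string {
  const target = subtaskId !== undefined
    ? task.subtasks?.find((s) => s.id === subtaskId)
    : undefined;

  const lines = [
    'Write a Jest test file in TypeScript for the following Task Master task.',
    `Task ${task.id}: ${task.title}`,
    `Description: ${target ? target.description : task.description}`,
  ];
  if (!target && task.details) lines.push(`Details: ${task.details}`);
  if (!target && task.subtasks?.length) {
    lines.push('Subtasks:');
    for (const s of task.subtasks) lines.push(`- ${s.id}. ${s.title}: ${s.description}`);
  }
  lines.push('Mock external dependencies, cover error cases, and return only the test code.');
  return lines.join('\n');
}
```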
## 3. Implement test file generation and output [pending]
### Dependencies: 24.2
### Description: Create functionality to format AI-generated tests into proper Jest test files and save them to the appropriate location.
### Details:
Implementation steps:
1. Create a utility to format the FastMCP response into a well-structured Jest test file
2. Implement naming logic for test files (task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks)
3. Add logic to determine the appropriate file path for saving the test
4. Implement file system operations to write the test file
5. Add validation to ensure the generated test follows Jest conventions
6. Implement formatting of the test file for consistency with project coding standards
7. Add user feedback about successful test generation and file location
8. Implement handling for both parent tasks and subtasks

Testing approach:
- Test file naming logic for various task/subtask combinations
- Test file content formatting with sample FastMCP outputs
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
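A minimal sketch of steps 2-4 and 7, under the assumption that generated tests land in a tests/ directory (the real output location is not specified by this subtask):

```typescript
import fs from 'node:fs/promises';
import path from 'node:path';

// Step 2: zero-padded naming -- task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks.
export function testFileName(taskId: number, subtaskId?: number): string {
  const pad = (n: number) => String(n).padStart(3, '0');
  return subtaskId === undefined
    ? `task_${pad(taskId)}.test.ts`
    : `task_${pad(taskId)}_${pad(subtaskId)}.test.ts`;
}

// Steps 3-4 and 7: write the generated source to disk and report the location.
// The tests/ output directory is an assumption for illustration.
export async function writeTestFile(source: string, taskId: number, subtaskId?: number): Promise<string> {
  const outDir = path.join(process.cwd(), 'tests');
  await fs.mkdir(outDir, { recursive: true });
  const outPath = path.join(outDir, testFileName(taskId, subtaskId));
  await fs.writeFile(outPath, source, 'utf8');
  console.log(`Generated test file: ${outPath}`);
  return outPath;
}
```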