Simplified the Task Master CLI by organizing code into modules within a dedicated directory.

**Why:**

- **Better organization:** Code is now grouped by function (AI, commands, dependencies, tasks, UI, utilities).
- **Easier to maintain:** Smaller modules are simpler to update and fix.
- **Scalable:** New features can be added more easily in a structured way.

**What changed:**

- Moved code from the single script file into these new modules:
  - AI interactions (Claude, Perplexity)
  - CLI command definitions (Commander.js)
  - Task dependency handling
  - Core task operations (create, list, update, etc.)
  - User interface elements (display, formatting)
  - Utility functions and configuration
  - An index module that exports all of the above
- Replaced direct invocation of the old script with the global command.
- Updated the documentation to reflect the new command.

**Benefits:** Code is now cleaner, easier to work with, and ready for future growth. Use the global command to run the CLI; its help screen, reproduced below, lists every command and environment variable.

```text
 _____         _      __  __           _
|_   _|_ _ ___| | __ |  \/  | __ _ ___| |_ ___ _ __
  | |/ _` / __| |/ / | |\/| |/ _` / __| __/ _ \ '__|
  | | (_| \__ \   <  | |  | | (_| \__ \ ||  __/ |
  |_|\__,_|___/_|\_\ |_|  |_|\__,_|___/\__\___|_|

by https://x.com/eyaltoledano

Version: 0.9.16    Project: Task Master

Task Master CLI

Task Generation
  parse-prd --input=<file.txt> [--tasks=10]        Generate tasks from a PRD document
  generate                                         Create individual task files from tasks…

Task Management
  list [--status=<status>] [--with-subtas…         List all tasks with their status
  set-status --id=<id> --status=<status>           Update task status (done, pending, etc.)
  update --from=<id> --prompt="<context>"          Update tasks based on new requirements
  add-task --prompt="<text>" [--dependencies=…     Add a new task using AI
  add-dependency --id=<id> --depends-on=<id>       Add a dependency to a task
  remove-dependency --id=<id> --depends-on=<id>    Remove a dependency from a task

Task Analysis & Detail
  analyze-complexity [--research] [--threshold=5]  Analyze tasks and generate expansion re…
  complexity-report [--file=<path>]                Display the complexity analysis report
  expand --id=<id> [--num=5] [--research] […       Break down tasks into detailed subtasks
  expand --all [--force] [--research]              Expand all pending tasks with subtasks
  clear-subtasks --id=<id>                         Remove subtasks from specified tasks

Task Navigation & Viewing
  next                                             Show the next task to work on based on …
  show <id>                                        Display detailed information about a sp…

Dependency Management
  validate-dependenci…                             Identify invalid dependencies without f…
  fix-dependencies                                 Fix invalid dependencies automatically

Environment Variables
  ANTHROPIC_API_KEY     Your Anthropic API key                          Required
  MODEL                 Claude model to use                             Default: claude-3-7-sonn…
  MAX_TOKENS            Maximum tokens for responses                    Default: 4000
  TEMPERATURE           Temperature for model responses                 Default: 0.7
  PERPLEXITY_API_KEY    Perplexity API key for research                 Optional
  PERPLEXITY_MODEL      Perplexity model to use                         Default: sonar-small-onl…
  DEBUG                 Enable debug logging                            Default: false
  LOG_LEVEL             Console output level (debug,info,warn,error)    Default: info
  DEFAULT_SUBTASKS      Default number of subtasks to generate          Default: 3
  DEFAULT_PRIORITY      Default task priority                           Default: medium
  PROJECT_NAME          Project name displayed in UI                    Default: Task Master
```
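For illustration, the index module mentioned above typically follows the barrel pattern, re-exporting each functional area so the rest of the code imports from one place. A minimal sketch, assuming hypothetical file names (not necessarily the repository's actual ones):

```js
// index.js - hypothetical barrel module re-exporting each functional area.
// Consumers write `import { tasks, ui } from './modules/index.js'` instead of
// reaching into individual files.
export * as ai from './ai-services.js';                  // AI interactions (Claude, Perplexity)
export * as commands from './commands.js';               // CLI command definitions (Commander.js)
export * as dependencies from './dependency-manager.js'; // task dependency handling
export * as tasks from './task-manager.js';              // core task operations
export * as ui from './ui.js';                           // display and formatting helpers
export * as utils from './utils.js';                     // utility functions and configuration
```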
# Task ID: 24
# Title: Implement AI-Powered Test Generation Command
# Status: pending
# Dependencies: ⏱️ 22 (pending)
# Priority: high
# Description: Create a new 'generate-test' command that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks.
# Details:
Implement a new command in the Task Master CLI that generates comprehensive Jest test files for tasks. The command should be callable as 'task-master generate-test --id=1' and should:
1. Accept a task ID parameter to identify which task to generate tests for
2. Retrieve the task and its subtasks from the task store
3. Analyze the task description, details, and subtasks to understand implementation requirements
4. Construct an appropriate prompt for an AI service (e.g., the OpenAI API) that requests generation of Jest tests
5. Process the AI response to create a well-formatted test file named 'task_XXX.test.js', where XXX is the zero-padded task ID
6. Include appropriate test cases that cover the main functionality described in the task
7. Generate mocks for external dependencies identified in the task description
8. Create assertions that validate the expected behavior
9. Handle both parent tasks and subtasks appropriately (for subtasks, name the file 'task_XXX_YYY.test.js', where YYY is the subtask ID)
10. Include error handling for API failures, invalid task IDs, etc.
11. Add appropriate documentation for the command in the help system

The implementation should utilize the existing AI service integration in the codebase and maintain consistency with the current command structure and error handling patterns.
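As a rough illustration of how such a command might be wired up, here is a minimal Commander.js sketch; `generateTestForTask` and its import path are hypothetical placeholders, not the codebase's actual API:

```js
import { Command } from 'commander';
import { generateTestForTask } from './task-manager.js'; // hypothetical handler for steps 2-9

const program = new Command();

program
  .command('generate-test')
  .description('Generate a Jest test file for a task using AI')
  .requiredOption('--id <id>', 'ID of the task to generate tests for')
  .action(async (options) => {
    // Validate the ID before doing any work (step 10: invalid task IDs).
    const id = parseInt(options.id, 10);
    if (Number.isNaN(id) || id <= 0) {
      console.error(`Invalid task ID: ${options.id}`);
      process.exit(1);
    }
    try {
      await generateTestForTask(id);
    } catch (err) {
      // Step 10: surface API and file system failures as a clean CLI error.
      console.error(`Test generation failed: ${err.message}`);
      process.exit(1);
    }
  });

program.parse(process.argv);
```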
# Test Strategy:
Testing for this feature should include:

1. Unit tests for the command handler function to verify it correctly processes arguments and options
2. Mock tests for the AI service integration to ensure proper prompt construction and response handling (see the sketch after this section)
3. Integration tests that verify the end-to-end flow using a mock AI response
4. Tests for error conditions, including:
   - Invalid task IDs
   - Network failures when contacting the AI service
   - Malformed AI responses
   - File system permission issues
5. Verification that generated test files follow Jest conventions and can be executed
6. Tests for both parent task and subtask handling
7. Manual verification of the quality of generated tests by running them against actual task implementations

Create a test fixture with sample tasks of varying complexity to evaluate the test generation capabilities across different scenarios. The tests should verify that the command outputs appropriate success/error messages to the console and creates files in the expected location with proper content structure.
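For instance, the AI-service mock test (item 2) might look like the following Jest sketch; the module paths and the `complete`/`generateTestForTask` names are assumptions, not the project's actual API:

```js
// Hypothetical Jest test: verifies the handler builds a prompt and writes a file,
// without hitting the real AI service or the real file system.
jest.mock('../src/ai-service', () => ({
  complete: jest.fn().mockResolvedValue('describe("task 1", () => { it("works", () => {}); });'),
}));
jest.mock('fs/promises', () => ({
  writeFile: jest.fn().mockResolvedValue(undefined),
}));

const { complete } = require('../src/ai-service');
const { writeFile } = require('fs/promises');
const { generateTestForTask } = require('../src/commands/generate-test');

test('generates a Jest test file from a mocked AI response', async () => {
  await generateTestForTask(1);
  // The prompt should ask for Jest tests (assumption about prompt wording).
  expect(complete).toHaveBeenCalledWith(expect.stringContaining('Jest'));
  // The file name should follow the task_XXX.test.js convention.
  expect(writeFile).toHaveBeenCalledWith(
    expect.stringContaining('task_001.test.js'),
    expect.stringContaining('describe'),
  );
});
```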
# Subtasks:
## 1. Create command structure for 'generate-test' [pending]
### Dependencies: None
### Description: Implement the basic structure for the 'generate-test' command, including command registration, parameter validation, and help documentation
### Details:
Implementation steps:
1. Create a new file `src/commands/generate-test.js`
2. Implement the command structure following the pattern of existing commands
3. Register the new command in the CLI framework
4. Add a command option for the task ID (--id=X) parameter
5. Implement parameter validation to ensure a valid task ID is provided
6. Add help documentation for the command
7. Create the basic command flow that retrieves the task from the task store
8. Implement error handling for invalid task IDs and other basic errors

Testing approach:
- Test command registration
- Test parameter validation (missing ID, invalid ID format)
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store
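As a sketch of the parameter-validation tests listed above, assuming a hypothetical `validateTaskId` helper and a plain `Map` standing in for the mock task store:

```js
// Hypothetical validation helper of the kind step 5 describes.
function validateTaskId(raw) {
  const id = Number(raw);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error(`Invalid task ID: ${raw}`);
  }
  return id;
}

test('rejects a missing or malformed ID', () => {
  expect(() => validateTaskId(undefined)).toThrow('Invalid task ID');
  expect(() => validateTaskId('abc')).toThrow('Invalid task ID');
});

test('detects IDs absent from the task store', () => {
  const store = new Map([[1, { id: 1, title: 'Sample task' }]]); // mock task store
  const id = validateTaskId('42');
  expect(store.has(id)).toBe(false); // the command should surface a clear error here
});
```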
## 2. Implement AI prompt construction and API integration [pending]
### Dependencies: 1 (pending)
### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service to generate test content
### Details:
Implementation steps:
1. Create a utility function to analyze task descriptions and subtasks for test requirements
2. Implement a prompt builder that formats task information into an effective AI prompt (a sketch follows this subtask)
3. Ensure the prompt requests Jest test generation with specifics about mocking dependencies and creating assertions
4. Integrate with the existing AI service in the codebase to send the prompt
5. Process the AI response to extract the generated test code
6. Implement error handling for API failures, rate limits, and malformed responses
7. Add appropriate logging for the AI interaction process

Testing approach:
- Test prompt construction with various task types
- Test AI service integration with mocked responses
- Test error handling for API failures
- Test response processing with sample AI outputs
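A minimal sketch of the prompt builder from step 2; the shape of the task object and the prompt wording are assumptions rather than the project's actual format:

```js
// Hypothetical prompt builder: flattens a task and its subtasks into a
// single instruction for the AI service.
function buildTestPrompt(task) {
  const subtaskLines = (task.subtasks ?? [])
    .map((st) => `- [${st.id}] ${st.title}: ${st.description}`)
    .join('\n');
  return [
    'Write a Jest test file for the following task.',
    `Title: ${task.title}`,
    `Description: ${task.description}`,
    `Details: ${task.details}`,
    subtaskLines ? `Subtasks:\n${subtaskLines}` : '',
    'Mock all external dependencies and include assertions for the expected behavior.',
    'Return only JavaScript code, with no surrounding prose.',
  ].filter(Boolean).join('\n\n');
}
```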
## 3. Implement test file generation and output [pending]
### Dependencies: 2 (pending)
### Description: Create functionality to format AI-generated tests into proper Jest test files and save them to the appropriate location
### Details:
Implementation steps:
1. Create a utility to format the AI response into a well-structured Jest test file
2. Implement naming logic for test files (task_XXX.test.js for parent tasks, task_XXX_YYY.test.js for subtasks; see the sketch after this subtask)
3. Add logic to determine the appropriate file path for saving the test
4. Implement file system operations to write the test file
5. Add validation to ensure the generated test follows Jest conventions
6. Implement formatting of the test file for consistency with project coding standards
7. Add user feedback about successful test generation and file location
8. Implement handling for both parent tasks and subtasks

Testing approach:
- Test file naming logic for various task/subtask combinations
- Test file content formatting with sample AI outputs
- Test file system operations with a mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
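A sketch of the naming logic from step 2, assuming three-digit zero-padding as in the 'task_XXX' examples; the helper name is hypothetical:

```js
// Hypothetical naming helper: task 7 -> task_007.test.js,
// subtask 7.2 -> task_007_002.test.js.
function testFileName(taskId, subtaskId) {
  const pad = (n) => String(n).padStart(3, '0');
  return subtaskId == null
    ? `task_${pad(taskId)}.test.js`
    : `task_${pad(taskId)}_${pad(subtaskId)}.test.js`;
}

console.log(testFileName(7));    // task_007.test.js
console.log(testFileName(7, 2)); // task_007_002.test.js
```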