# Task ID: 24
# Title: Implement AI-Powered Test Generation Command
# Status: pending
# Dependencies: 22
# Priority: high
# Description: Create a new 'generate-test' command in Task Master that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks, using the Claude API for AI integration.
# Details:
Implement a new command in the Task Master CLI that generates comprehensive Jest test files for tasks. The command should be callable as 'task-master generate-test --id=1' and should:
1. Accept a task ID parameter to identify which task to generate tests for
2. Retrieve the task and its subtasks from the task store
3. Analyze the task description, details, and subtasks to understand implementation requirements
4. Construct an appropriate prompt for the AI service using the Claude API
5. Process the AI response to create a well-formatted test file named 'task_XXX.test.ts' where XXX is the zero-padded task ID
6. Include appropriate test cases that cover the main functionality described in the task
7. Generate mocks for external dependencies identified in the task description
8. Create assertions that validate the expected behavior
9. Handle both parent tasks and subtasks appropriately (for subtasks, name the file 'task_XXX_YYY.test.ts' where YYY is the subtask ID)
10. Include error handling for API failures, invalid task IDs, etc.
11. Add appropriate documentation for the command in the help system
The implementation should utilize the Claude API for AI service integration and maintain consistency with the current command structure and error handling patterns. Consider using TypeScript for better type safety and integration with the Claude API.
# Test Strategy:
Testing for this feature should include:
1. Unit tests for the command handler function to verify it correctly processes arguments and options
2. Mock tests for the Claude API integration to ensure proper prompt construction and response handling
3. Integration tests that verify the end-to-end flow using a mock Claude API response
4. Tests for error conditions including:
- Invalid task IDs
- Network failures when contacting the AI service
- Malformed AI responses
- File system permission issues
5. Verification that generated test files follow Jest conventions and can be executed
6. Tests for both parent task and subtask handling
7. Manual verification of the quality of generated tests by running them against actual task implementations
Create a test fixture with sample tasks of varying complexity to evaluate the test generation capabilities across different scenarios. The tests should verify that the command outputs appropriate success/error messages to the console and creates files in the expected location with proper content structure.
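As an illustrative sketch of the mocking approach (the handler name and dependency-injection style below are assumptions, not the actual command implementation), a unit test could stub the AI service and the file writer and assert on the resulting file name and content:
```javascript
import { jest } from '@jest/globals';

describe('generate-test command (sketch)', () => {
  test('writes a Jest file built from a mocked AI response', async () => {
    // Mocked collaborators: the AI service and the file writer.
    const aiService = jest.fn().mockResolvedValue({
      testContent: "describe('Task 1', () => { test('works', () => {}); });",
      fileName: 'task_001.test.js'
    });
    const writeFile = jest.fn();

    // Hypothetical handler shown only to illustrate the injection/mocking strategy.
    const generateTestForTask = async (task, deps) => {
      const { testContent, fileName } = await deps.aiService(task);
      deps.writeFile(fileName, testContent);
      return fileName;
    };

    const fileName = await generateTestForTask(
      { id: 1, title: 'Sample task' },
      { aiService, writeFile }
    );

    expect(fileName).toBe('task_001.test.js');
    expect(writeFile).toHaveBeenCalledWith(
      'task_001.test.js',
      expect.stringContaining('describe')
    );
  });
});
```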
# Subtasks:
## 1. Create command structure for 'generate-test' [pending]
### Dependencies: None
### Description: Implement the basic structure for the 'generate-test' command, including command registration, parameter validation, and help documentation.
### Details:
Implementation steps:
1. Create a new file `src/commands/generate-test.ts`
2. Implement the command structure following the pattern of existing commands
3. Register the new command in the CLI framework
4. Add command options for task ID (--id=X) parameter
5. Implement parameter validation to ensure a valid task ID is provided
6. Add help documentation for the command
7. Create the basic command flow that retrieves the task from the task store
8. Implement error handling for invalid task IDs and other basic errors
Testing approach:
- Test command registration
- Test parameter validation (missing ID, invalid ID format)
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store
<info added on 2025-05-23T21:02:03.909Z>
## Updated Implementation Approach
Based on code review findings, the implementation approach needs to be revised:
1. Implement the command in `scripts/modules/commands.js` instead of creating a new file
2. Add command registration in the `registerCommands()` function (around line 482)
3. Follow existing command structure pattern:
```javascript
programInstance
  .command('generate-test')
  .description('Generate test cases for a task using AI')
  .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
  .option('-i, --id <id>', 'Task ID parameter')
  .option('-p, --prompt <text>', 'Additional prompt context')
  .option('-r, --research', 'Use research model')
  .action(async (options) => {
    // Implementation
  });
```
4. Use the following utilities:
- `findProjectRoot()` for resolving project paths
- `findTaskById()` for retrieving task data
- `chalk` for formatted console output
5. Implement error handling following the pattern:
```javascript
try {
  // Implementation
} catch (error) {
  console.error(chalk.red(`Error generating test: ${error.message}`));
  if (error.details) {
    console.error(chalk.red(error.details));
  }
  process.exit(1);
}
```
6. Required imports:
- chalk for colored output
- path for file path operations
- findProjectRoot and findTaskById from './utils.js'
</info added on 2025-05-23T21:02:03.909Z>
## 2. Implement AI prompt construction and FastMCP integration [pending]
### Dependencies: 24.1
### Description: Develop the logic to analyze tasks, construct appropriate AI prompts, and interact with the AI service using FastMCP to generate test content.
### Details:
Implementation steps:
1. Create a utility function to analyze task descriptions and subtasks for test requirements
2. Implement a prompt builder that formats task information into an effective AI prompt
3. Use FastMCP to send the prompt and receive the response
4. Process the FastMCP response to extract the generated test code
5. Implement error handling for FastMCP failures, rate limits, and malformed responses
6. Add appropriate logging for the FastMCP interaction process
Testing approach:
- Test prompt construction with various task types
- Test FastMCP integration with mocked responses
- Test error handling for FastMCP failures
- Test response processing with sample FastMCP outputs
<info added on 2025-05-23T21:04:33.890Z>
## AI Integration Implementation
### AI Service Integration
- Use the unified AI service layer, not FastMCP directly
- Implement with `generateObjectService` from '../ai-services-unified.js'
- Define Zod schema for structured test generation output:
- testContent: Complete Jest test file content
- fileName: Suggested filename for the test file
- mockRequirements: External dependencies that need mocking
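A minimal sketch of this schema and the service call, assuming `generateObjectService` accepts a role, schema, and prompt options similar to the other commands (the exact option names should be verified against ai-services-unified.js):
```javascript
import { z } from 'zod';
import { generateObjectService } from '../ai-services-unified.js';

// Structured output expected from the AI: the test file body, a suggested
// filename, and any external dependencies the test will need to mock.
const testGenerationSchema = z.object({
  testContent: z.string().describe('Complete Jest test file content'),
  fileName: z.string().describe('Suggested filename for the test file'),
  mockRequirements: z
    .array(z.string())
    .describe('External dependencies that need mocking')
});

// Assumed call shape, mirroring other commands that use the unified AI layer.
async function requestGeneratedTest(systemPrompt, userPrompt, useResearch) {
  return generateObjectService({
    role: useResearch ? 'research' : 'main',
    schema: testGenerationSchema,
    objectName: 'generatedTest',
    systemPrompt,
    prompt: userPrompt,
    commandName: 'generate-test',
    outputType: 'cli'
  });
}
```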
### Prompt Construction
- Create system prompt defining AI's role as test generator
- Build user prompt with task context (ID, title, description, details)
- Include test strategy and subtasks context in the prompt
- Follow patterns from add-task.js for prompt structure
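A sketch of the prompt builder; the field names follow the tasks.json structure described above, while the exact wording is illustrative:
```javascript
// Build system and user prompts from the task record (title, description,
// details, testStrategy, subtasks), following the add-task.js prompt pattern.
function buildTestGenerationPrompts(task) {
  const systemPrompt =
    'You are an expert test engineer. Generate a complete, runnable Jest test ' +
    'file for the task described by the user, including mocks for external dependencies.';

  const subtaskContext = (task.subtasks || [])
    .map((st) => `- [${st.id}] ${st.title}: ${st.description}`)
    .join('\n');

  const userPrompt = [
    `Task ${task.id}: ${task.title}`,
    `Description: ${task.description}`,
    `Details: ${task.details || 'n/a'}`,
    `Test strategy: ${task.testStrategy || 'n/a'}`,
    subtaskContext ? `Subtasks:\n${subtaskContext}` : ''
  ]
    .filter(Boolean)
    .join('\n\n');

  return { systemPrompt, userPrompt };
}
```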
### Task Analysis
- Retrieve task data using `findTaskById()` from utils.js
- Build context by analyzing task description, details, and testStrategy
- Examine project structure for import patterns
- Parse specific testing requirements from task.testStrategy field
### File System Operations
- Determine output path in same directory as tasks.json
- Generate standardized filename based on task ID
- Use fs.writeFileSync for writing test content to file
### Error Handling & UI
- Implement try/catch blocks for AI service calls
- Display user-friendly error messages with chalk
- Use loading indicators during AI processing
- Support both research and main AI models
### Telemetry
- Pass through telemetryData from AI service response
- Display AI usage summary for CLI output
### Required Dependencies
- generateObjectService from ai-services-unified.js
- UI components (loading indicators, display functions)
- Zod for schema validation
- Chalk for formatted console output
</info added on 2025-05-23T21:04:33.890Z>
## 3. Implement test file generation and output [pending]
### Dependencies: 24.2
### Description: Create functionality to format AI-generated tests into proper Jest test files and save them to the appropriate location.
### Details:
Implementation steps:
1. Create a utility to format the FastMCP response into a well-structured Jest test file
2. Implement naming logic for test files (task_XXX.test.ts for parent tasks, task_XXX_YYY.test.ts for subtasks)
3. Add logic to determine the appropriate file path for saving the test
4. Implement file system operations to write the test file
5. Add validation to ensure the generated test follows Jest conventions
6. Implement formatting of the test file for consistency with project coding standards
7. Add user feedback about successful test generation and file location
8. Implement handling for both parent tasks and subtasks
Testing approach:
- Test file naming logic for various task/subtask combinations
- Test file content formatting with sample FastMCP outputs
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
<info added on 2025-05-23T21:06:32.457Z>
## Detailed Implementation Guidelines
### File Naming Convention Implementation
```javascript
function generateTestFileName(taskId, isSubtask = false) {
  if (isSubtask) {
    // For subtasks like "24.1", generate "task_024_001.test.js"
    const [parentId, subtaskId] = taskId.split('.');
    return `task_${parentId.padStart(3, '0')}_${subtaskId.padStart(3, '0')}.test.js`;
  } else {
    // For parent tasks like "24", generate "task_024.test.js"
    return `task_${taskId.toString().padStart(3, '0')}.test.js`;
  }
}
```
### File Location Strategy
- Place generated test files in the `tasks/` directory alongside task files
- This ensures co-location with task documentation and simplifies implementation
### File Content Structure Template
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */
import { jest } from '@jest/globals';
// Additional imports based on task requirements

describe('Task ${taskId}: ${taskTitle}', () => {
  beforeEach(() => {
    // Setup code
  });

  afterEach(() => {
    // Cleanup code
  });

  test('should ${testDescription}', () => {
    // Test implementation
  });
});
```
### Code Formatting Standards
- Follow project's .prettierrc configuration:
- Indentation: tabs, tab width 2 (useTabs: true)
- Print width: 80 characters
- Semicolons: Required (semi: true)
- Quotes: Single quotes (singleQuote: true)
- Trailing commas: None (trailingComma: "none")
- Bracket spacing: True
- Arrow parens: Always
### File System Operations Implementation
```javascript
import fs from 'fs';
import path from 'path';

// Determine output path
const tasksDir = path.dirname(tasksPath); // Same directory as tasks.json
const fileName = generateTestFileName(task.id, isSubtask);
const filePath = path.join(tasksDir, fileName);

// Ensure directory exists
if (!fs.existsSync(tasksDir)) {
  fs.mkdirSync(tasksDir, { recursive: true });
}

// Write test file with proper error handling
try {
  fs.writeFileSync(filePath, formattedTestContent, 'utf8');
} catch (error) {
  throw new Error(`Failed to write test file: ${error.message}`);
}
```
### Error Handling for File Operations
```javascript
try {
  // File writing operation
  fs.writeFileSync(filePath, testContent, 'utf8');
} catch (error) {
  if (error.code === 'ENOENT') {
    throw new Error(`Directory does not exist: ${path.dirname(filePath)}`);
  } else if (error.code === 'EACCES') {
    throw new Error(`Permission denied writing to: ${filePath}`);
  } else if (error.code === 'ENOSPC') {
    throw new Error('Insufficient disk space to write test file');
  } else {
    throw new Error(`Failed to write test file: ${error.message}`);
  }
}
```
### User Feedback Implementation
```javascript
// Success feedback
console.log(chalk.green('✅ Test file generated successfully:'));
console.log(chalk.cyan(`  File: ${fileName}`));
console.log(chalk.cyan(`  Location: ${filePath}`));
console.log(chalk.gray(`  Size: ${testContent.length} characters`));

// Additional info
if (mockRequirements && mockRequirements.length > 0) {
  console.log(chalk.yellow(`  Mocks needed: ${mockRequirements.join(', ')}`));
}
```
### Content Validation Requirements
1. Jest Syntax Validation:
- Ensure proper describe/test structure
- Validate import statements
- Check for balanced brackets and parentheses
2. Code Quality Checks:
- Verify no syntax errors
- Ensure proper indentation
- Check for required imports
3. Test Completeness:
- At least one test case
- Proper test descriptions
- Appropriate assertions
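A lightweight sketch of these checks; it is a heuristic (not a parser) and the function name is illustrative:
```javascript
// Heuristic validation of generated test content: Jest structure, at least one
// test case with an assertion, and roughly balanced brackets/parentheses.
function validateGeneratedTest(content) {
  const problems = [];

  if (!/describe\s*\(/.test(content)) problems.push('Missing describe() block');
  if (!/\b(test|it)\s*\(/.test(content)) problems.push('Missing test()/it() case');
  if (!/expect\s*\(/.test(content)) problems.push('No expect() assertions found');

  // Rough balance check (ignores brackets inside strings and comments).
  for (const [open, close] of [['{', '}'], ['(', ')'], ['[', ']']]) {
    const opens = content.split(open).length - 1;
    const closes = content.split(close).length - 1;
    if (opens !== closes) problems.push(`Unbalanced ${open} ${close}`);
  }

  return { valid: problems.length === 0, problems };
}
```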
### Required Dependencies
```javascript
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { log } from '../utils.js';
```
### Integration with Existing Patterns
Follow the pattern from `generate-task-files.js`:
1. Read task data using existing utilities
2. Process content with proper formatting
3. Write files with error handling
4. Provide feedback to user
5. Return success data for MCP integration
</info added on 2025-05-23T21:06:32.457Z>
<info added on 2025-05-23T21:18:25.369Z>
## Corrected Implementation Approach
### Updated File Location Strategy
**CORRECTION**: Tests should go in `/tests/` directory, not `/tasks/` directory.
Based on Jest configuration analysis:
- Jest is configured with `roots: ['<rootDir>/tests']`
- Test pattern: `**/?(*.)+(spec|test).js`
- Current test structure has `/tests/unit/`, `/tests/integration/`, etc.
### Recommended Directory Structure:
```
tests/
├── unit/            # Manual unit tests
├── integration/     # Manual integration tests
├── generated/       # AI-generated tests
│   ├── tasks/       # Generated task tests
│   │   ├── task_024.test.js
│   │   └── task_024_001.test.js
│   └── README.md    # Explains generated tests
└── fixtures/        # Test fixtures
```
### Updated File Path Logic:
```javascript
// Determine output path - place in tests/generated/tasks/
const projectRoot = findProjectRoot() || '.';
const testsDir = path.join(projectRoot, 'tests', 'generated', 'tasks');
const fileName = generateTestFileName(task.id, isSubtask);
const filePath = path.join(testsDir, fileName);

// Ensure directory structure exists
if (!fs.existsSync(testsDir)) {
  fs.mkdirSync(testsDir, { recursive: true });
}
```
### Testing Framework Configuration
The generate-test command should read the configured testing framework from `.taskmasterconfig`:
```javascript
// Read testing framework from config
const config = getConfig(projectRoot);
const testingFramework = config.testingFramework || 'jest'; // Default to Jest

// Generate different templates based on framework
switch (testingFramework) {
  case 'jest':
    return generateJestTest(task, context);
  case 'mocha':
    return generateMochaTest(task, context);
  case 'vitest':
    return generateVitestTest(task, context);
  default:
    throw new Error(`Unsupported testing framework: ${testingFramework}`);
}
}
```
### Framework-Specific Templates
**Jest Template** (current):
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */
import { jest } from '@jest/globals';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  test('should ${testDescription}', () => {
    // Test implementation
  });
});
```
**Mocha Template**:
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */
import { expect } from 'chai';
import sinon from 'sinon';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
  beforeEach(() => {
    sinon.restore();
  });

  it('should ${testDescription}', () => {
    // Test implementation
  });
});
```
**Vitest Template**:
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */
import { describe, test, expect, vi, beforeEach } from 'vitest';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  test('should ${testDescription}', () => {
    // Test implementation
  });
});
```
### AI Prompt Enhancement for Mocking
To address the mocking challenge, enhance the AI prompt with project context:
```javascript
const systemPrompt = `You are an expert at generating comprehensive test files. When generating tests, pay special attention to mocking external dependencies correctly.
CRITICAL MOCKING GUIDELINES:
1. Analyze the task requirements to identify external dependencies (APIs, databases, file system, etc.)
2. Mock external dependencies at the module level, not inline
3. Use the testing framework's mocking utilities (jest.mock(), sinon.stub(), vi.mock())
4. Create realistic mock data that matches the expected API responses
5. Test both success and error scenarios for mocked dependencies
6. Ensure mocks are cleared between tests to prevent test pollution
Testing Framework: ${testingFramework}
Project Structure: ${projectStructureContext}
`;
```
### Integration with Future Features
This primitive command design enables:
1. **Automatic test generation**: `task-master add-task --with-test`
2. **Batch test generation**: `task-master generate-tests --all`
3. **Framework-agnostic**: Support multiple testing frameworks
4. **Smart mocking**: LLM analyzes dependencies and generates appropriate mocks
### Updated Implementation Requirements:
1. **Read testing framework** from `.taskmasterconfig`
2. **Create tests directory structure** if it doesn't exist
3. **Generate framework-specific templates** based on configuration
4. **Enhanced AI prompts** with mocking best practices
5. **Project structure analysis** for better import resolution
6. **Mock dependency detection** from task requirements
</info added on 2025-05-23T21:18:25.369Z>
## 4. Implement MCP tool integration for generate-test command [pending]
### Dependencies: 24.3
### Description: Create MCP server tool support for the generate-test command to enable integration with Claude Code and other MCP clients.
### Details:
Implementation steps:
1. Create direct function wrapper in mcp-server/src/core/direct-functions/
2. Create MCP tool registration in mcp-server/src/tools/
3. Add tool to the main tools index
4. Implement proper parameter validation and error handling
5. Ensure telemetry data is properly passed through
6. Add tool to MCP server registration
The MCP tool should support the same parameters as the CLI command:
- id: Task ID to generate tests for
- file: Path to tasks.json file
- research: Whether to use research model
- prompt: Additional context for test generation
Follow the existing pattern from other MCP tools like add-task.js and expand-task.js.
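A rough sketch of the tool registration, modeled on the FastMCP `addTool` pattern used by the existing tools; the direct-function and helper names (`generateTestDirect`, `handleApiResult`, `createErrorResponse`) are assumptions and should be aligned with the actual utilities used by add-task.js:
```javascript
import { z } from 'zod';
// Hypothetical imports; match these to the project's actual direct-function
// and response helpers before use.
import { generateTestDirect } from '../core/task-master-core.js';
import { handleApiResult, createErrorResponse } from './utils.js';

export function registerGenerateTestTool(server) {
  server.addTool({
    name: 'generate_test',
    description: 'Generate a test file for a task using AI',
    parameters: z.object({
      id: z.string().describe('Task ID to generate tests for'),
      file: z.string().optional().describe('Path to the tasks.json file'),
      research: z.boolean().optional().describe('Use the research model'),
      prompt: z.string().optional().describe('Additional context for test generation')
    }),
    execute: async (args, { log, session }) => {
      try {
        const result = await generateTestDirect(args, log, { session });
        return handleApiResult(result, log, 'Error generating test');
      } catch (error) {
        return createErrorResponse(`Failed to generate test: ${error.message}`);
      }
    }
  });
}
```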
## 5. Add testing framework configuration to project initialization [pending]
### Dependencies: 24.3
### Description: Enhance the init.js process to let users choose their preferred testing framework (Jest, Mocha, Vitest, etc.) and store this choice in .taskmasterconfig for use by the generate-test command.
### Details:
Implementation requirements:
1. **Add Testing Framework Prompt to init.js**:
- Add interactive prompt asking users to choose testing framework
- Support Jest (default), Mocha + Chai, Vitest, Ava, Jasmine
- Include brief descriptions of each framework
- Allow --testing-framework flag for non-interactive mode
2. **Update .taskmasterconfig Template**:
- Add testingFramework field to configuration file
- Include default dependencies for each framework
- Store framework-specific configuration options
3. **Framework-Specific Setup**:
- Generate appropriate config files (jest.config.js, vitest.config.ts, etc.)
- Add framework dependencies to package.json suggestions
- Create sample test file for the chosen framework
4. **Integration Points**:
- Ensure generate-test command reads testingFramework from config
- Add validation to prevent conflicts between framework choices
- Support switching frameworks later via models command or separate config command
This makes the generate-test command truly framework-agnostic and sets up the foundation for --with-test flags in other commands.
<info added on 2025-05-23T21:22:02.048Z>
# Implementation Plan for Testing Framework Integration
## Code Structure
### 1. Update init.js
- Add testing framework prompt after addAliases prompt
- Implement framework selection with descriptions
- Support non-interactive mode with --testing-framework flag
- Create setupTestingFramework() function to handle framework-specific setup
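A sketch of the interactive selection, shown here with inquirer purely for illustration; init.js may use a different prompting mechanism, and the choice descriptions are placeholders:
```javascript
import inquirer from 'inquirer';

const SUPPORTED_FRAMEWORKS = ['jest', 'mocha', 'vitest', 'ava', 'jasmine'];

// Ask for a framework unless one was supplied via --testing-framework.
async function promptForTestingFramework(cliFlagValue) {
  if (cliFlagValue) {
    if (!SUPPORTED_FRAMEWORKS.includes(cliFlagValue)) {
      throw new Error(`Unsupported testing framework: ${cliFlagValue}`);
    }
    return cliFlagValue;
  }
  const { testingFramework } = await inquirer.prompt([
    {
      type: 'list',
      name: 'testingFramework',
      message: 'Which testing framework should generated tests target?',
      choices: [
        { name: 'Jest (default, widely used)', value: 'jest' },
        { name: 'Mocha + Chai (flexible, BDD-style assertions)', value: 'mocha' },
        { name: 'Vitest (fast, Vite-native)', value: 'vitest' },
        { name: 'Ava (minimal, concurrent runner)', value: 'ava' },
        { name: 'Jasmine (batteries-included BDD)', value: 'jasmine' }
      ],
      default: 'jest'
    }
  ]);
  return testingFramework;
}
```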
### 2. Create New Module Files
- Create `scripts/modules/testing-frameworks.js` for framework templates and setup
- Add sample test generators for each supported framework
- Implement config file generation for each framework
### 3. Update Configuration Templates
- Modify `assets/.taskmasterconfig` to include testing fields:
```json
"testingFramework": "{{testingFramework}}",
"testingConfig": {
"framework": "{{testingFramework}}",
"setupFiles": [],
"testDirectory": "tests",
"testPattern": "**/*.test.js",
"coverage": {
"enabled": false,
"threshold": 80
}
}
```
### 4. Create Framework-Specific Templates
- `assets/jest.config.template.js`
- `assets/vitest.config.template.ts`
- `assets/.mocharc.template.json`
- `assets/ava.config.template.js`
- `assets/jasmine.json.template`
### 5. Update commands.js
- Add `--testing-framework <framework>` option to init command
- Add validation for supported frameworks
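A sketch of the option wiring and validation; the surrounding init registration is abbreviated and the supported list mirrors the frameworks named above:
```javascript
programInstance
  .command('init')
  .option(
    '--testing-framework <framework>',
    'Testing framework to configure (jest, mocha, vitest, ava, jasmine)',
    'jest'
  )
  .action(async (options) => {
    const supported = ['jest', 'mocha', 'vitest', 'ava', 'jasmine'];
    if (!supported.includes(options.testingFramework)) {
      console.error(`Unsupported testing framework: ${options.testingFramework}`);
      process.exit(1);
    }
    // Continue with the normal init flow, passing the validated choice through.
  });
```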
## Error Handling
- Validate selected framework against supported list
- Handle existing config files gracefully with warning/overwrite prompt
- Provide recovery options if framework setup fails
- Add conflict detection for multiple testing frameworks
## Integration Points
- Ensure generate-test command reads testingFramework from config
- Prepare for future --with-test flag in other commands
- Support framework switching via config command
## Testing Requirements
- Unit tests for framework selection logic
- Integration tests for config file generation
- Validation tests for each supported framework
</info added on 2025-05-23T21:22:02.048Z>