Task management, research, improvements for 24, 41 and 51
@@ -58,6 +58,50 @@ Testing approach:
- Test parameter validation (missing ID, invalid ID format)
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store
<info added on 2025-05-23T21:02:03.909Z>
## Updated Implementation Approach

Based on code review findings, the implementation approach needs to be revised:

1. Implement the command in `scripts/modules/commands.js` instead of creating a new file
2. Add command registration in the `registerCommands()` function (around line 482)
3. Follow the existing command structure pattern:

```javascript
programInstance
	.command('generate-test')
	.description('Generate test cases for a task using AI')
	.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
	.option('-i, --id <id>', 'ID of the task to generate tests for')
	.option('-p, --prompt <text>', 'Additional prompt context')
	.option('-r, --research', 'Use the research model')
	.action(async (options) => {
		// Implementation
	});
```

4. Use the following utilities:
   - `findProjectRoot()` for resolving project paths
   - `findTaskById()` for retrieving task data
   - `chalk` for formatted console output

5. Implement error handling following the pattern:

```javascript
try {
	// Implementation
} catch (error) {
	console.error(chalk.red(`Error generating test: ${error.message}`));
	if (error.details) {
		console.error(chalk.red(error.details));
	}
	process.exit(1);
}
```

6. Required imports:
   - `chalk` for colored output
   - `path` for file path operations
   - `findProjectRoot` and `findTaskById` from './utils.js'
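
As import statements, those amount to the following sketch (the relative path assumes the command lives in `scripts/modules/commands.js`, as noted in step 1):

```javascript
// Imports required by the generate-test command (modules as listed above)
import path from 'path';
import chalk from 'chalk';
import { findProjectRoot, findTaskById } from './utils.js';
```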
</info added on 2025-05-23T21:02:03.909Z>

## 2. Implement AI prompt construction and FastMCP integration [pending]
### Dependencies: 24.1
@@ -76,6 +120,50 @@ Testing approach:
- Test FastMCP integration with mocked responses
- Test error handling for FastMCP failures
- Test response processing with sample FastMCP outputs
<info added on 2025-05-23T21:04:33.890Z>
## AI Integration Implementation

### AI Service Integration
- Use the unified AI service layer, not FastMCP directly
- Implement with `generateObjectService` from '../ai-services-unified.js'
- Define a Zod schema for structured test generation output (a sketch follows this list):
  - testContent: Complete Jest test file content
  - fileName: Suggested filename for the test file
  - mockRequirements: External dependencies that need mocking
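
A minimal sketch of that schema, using the field names from the list above (the `.describe()` strings are illustrative):

```javascript
import { z } from 'zod';

// Structured output expected from generateObjectService
const testGenerationSchema = z.object({
	testContent: z.string().describe('Complete Jest test file content'),
	fileName: z.string().describe('Suggested filename for the test file'),
	mockRequirements: z
		.array(z.string())
		.describe('External dependencies that need mocking')
});
```
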
### Prompt Construction
- Create system prompt defining AI's role as test generator
- Build user prompt with task context (ID, title, description, details)
- Include test strategy and subtasks context in the prompt
- Follow patterns from add-task.js for prompt structure

### Task Analysis
- Retrieve task data using `findTaskById()` from utils.js
- Build context by analyzing task description, details, and testStrategy
- Examine project structure for import patterns
- Parse specific testing requirements from task.testStrategy field

### File System Operations
- Determine output path in same directory as tasks.json
- Generate standardized filename based on task ID
- Use fs.writeFileSync for writing test content to file

### Error Handling & UI
- Implement try/catch blocks for AI service calls
- Display user-friendly error messages with chalk
- Use loading indicators during AI processing
- Support both research and main AI models

### Telemetry
- Pass through telemetryData from AI service response
- Display AI usage summary for CLI output

### Required Dependencies
- generateObjectService from ai-services-unified.js
- UI components (loading indicators, display functions)
- Zod for schema validation
- Chalk for formatted console output
</info added on 2025-05-23T21:04:33.890Z>

## 3. Implement test file generation and output [pending]
### Dependencies: 24.2
@@ -97,4 +185,419 @@ Testing approach:
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest
<info added on 2025-05-23T21:06:32.457Z>
## Detailed Implementation Guidelines

### File Naming Convention Implementation
```javascript
function generateTestFileName(taskId, isSubtask = false) {
	if (isSubtask) {
		// For subtasks like "24.1", generate "task_024_001.test.js"
		const [parentId, subtaskId] = taskId.split('.');
		return `task_${parentId.padStart(3, '0')}_${subtaskId.padStart(3, '0')}.test.js`;
	} else {
		// For parent tasks like "24", generate "task_024.test.js"
		return `task_${taskId.toString().padStart(3, '0')}.test.js`;
	}
}
```

For example, `generateTestFileName('24.1', true)` returns `task_024_001.test.js`, and `generateTestFileName(24)` returns `task_024.test.js`.

### File Location Strategy
- Place generated test files in the `tasks/` directory alongside task files (note: superseded by the correction in the 2025-05-23T21:18 update below, which moves generated tests under `tests/generated/`)
- This ensures co-location with task documentation and simplifies implementation

### File Content Structure Template
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { jest } from '@jest/globals';
// Additional imports based on task requirements

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		// Setup code
	});

	afterEach(() => {
		// Cleanup code
	});

	test('should ${testDescription}', () => {
		// Test implementation
	});
});
```

### Code Formatting Standards
- Follow the project's .prettierrc configuration:
  - Indentation: tabs (useTabs: true), rendered at a tab width of 2
  - Print width: 80 characters
  - Semicolons: required (semi: true)
  - Quotes: single quotes (singleQuote: true)
  - Trailing commas: none (trailingComma: "none")
  - Bracket spacing: true
  - Arrow parens: always

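Collected into a .prettierrc for reference (reconstructed from the list above; the project's actual file may differ in layout):

```json
{
	"useTabs": true,
	"tabWidth": 2,
	"printWidth": 80,
	"semi": true,
	"singleQuote": true,
	"trailingComma": "none",
	"bracketSpacing": true,
	"arrowParens": "always"
}
```
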
### File System Operations Implementation
```javascript
import fs from 'fs';
import path from 'path';

// Determine output path
const tasksDir = path.dirname(tasksPath); // Same directory as tasks.json
const fileName = generateTestFileName(task.id, isSubtask);
const filePath = path.join(tasksDir, fileName);

// Ensure directory exists
if (!fs.existsSync(tasksDir)) {
	fs.mkdirSync(tasksDir, { recursive: true });
}

// Write test file with proper error handling
try {
	fs.writeFileSync(filePath, formattedTestContent, 'utf8');
} catch (error) {
	throw new Error(`Failed to write test file: ${error.message}`);
}
```

### Error Handling for File Operations
```javascript
try {
	// File writing operation
	fs.writeFileSync(filePath, testContent, 'utf8');
} catch (error) {
	if (error.code === 'ENOENT') {
		throw new Error(`Directory does not exist: ${path.dirname(filePath)}`);
	} else if (error.code === 'EACCES') {
		throw new Error(`Permission denied writing to: ${filePath}`);
	} else if (error.code === 'ENOSPC') {
		throw new Error('Insufficient disk space to write test file');
	} else {
		throw new Error(`Failed to write test file: ${error.message}`);
	}
}
```

### User Feedback Implementation
```javascript
// Success feedback
console.log(chalk.green('✅ Test file generated successfully:'));
console.log(chalk.cyan(`  File: ${fileName}`));
console.log(chalk.cyan(`  Location: ${filePath}`));
console.log(chalk.gray(`  Size: ${testContent.length} characters`));

// Additional info
if (mockRequirements && mockRequirements.length > 0) {
	console.log(chalk.yellow(`  Mocks needed: ${mockRequirements.join(', ')}`));
}
```

### Content Validation Requirements
1. Jest Syntax Validation (see the sketch after this list):
   - Ensure proper describe/test structure
   - Validate import statements
   - Check for balanced brackets and parentheses

2. Code Quality Checks:
   - Verify no syntax errors
   - Ensure proper indentation
   - Check for required imports

3. Test Completeness:
   - At least one test case
   - Proper test descriptions
   - Appropriate assertions

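A lightweight validator sketch approximating the checks above (illustrative only; the regex checks heuristically scan the generated content rather than fully parsing it):

```javascript
// Returns a list of problems found in generated test content (empty = passes)
function validateTestContent(content) {
	const checks = [
		{ ok: /\bdescribe\(/.test(content), msg: 'missing describe() block' },
		{ ok: /\b(test|it)\(/.test(content), msg: 'no test cases found' },
		{ ok: /\bexpect\(|assert\./.test(content), msg: 'no assertions found' },
		{
			ok:
				(content.match(/\{/g) || []).length ===
				(content.match(/\}/g) || []).length,
			msg: 'unbalanced curly braces'
		},
		{
			ok:
				(content.match(/\(/g) || []).length ===
				(content.match(/\)/g) || []).length,
			msg: 'unbalanced parentheses'
		}
	];
	return checks.filter((c) => !c.ok).map((c) => c.msg);
}
```
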
### Required Dependencies
```javascript
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { log } from '../utils.js';
```

### Integration with Existing Patterns
Follow the pattern from `generate-task-files.js`:
1. Read task data using existing utilities
2. Process content with proper formatting
3. Write files with error handling
4. Provide feedback to user
5. Return success data for MCP integration
</info added on 2025-05-23T21:06:32.457Z>
<info added on 2025-05-23T21:18:25.369Z>
## Corrected Implementation Approach

### Updated File Location Strategy

**CORRECTION**: Tests should go in the `/tests/` directory, not the `/tasks/` directory.

Based on Jest configuration analysis:
- Jest is configured with `roots: ['<rootDir>/tests']`
- Test pattern: `**/?(*.)+(spec|test).js`
- Current test structure has `/tests/unit/`, `/tests/integration/`, etc.

### Recommended Directory Structure:
```
tests/
├── unit/           # Manual unit tests
├── integration/    # Manual integration tests
├── generated/      # AI-generated tests
│   ├── tasks/      # Generated task tests
│   │   ├── task_024.test.js
│   │   └── task_024_001.test.js
│   └── README.md   # Explains generated tests
└── fixtures/       # Test fixtures
```

### Updated File Path Logic:
```javascript
// Determine output path - place in tests/generated/tasks/
const projectRoot = findProjectRoot() || '.';
const testsDir = path.join(projectRoot, 'tests', 'generated', 'tasks');
const fileName = generateTestFileName(task.id, isSubtask);
const filePath = path.join(testsDir, fileName);

// Ensure directory structure exists
if (!fs.existsSync(testsDir)) {
	fs.mkdirSync(testsDir, { recursive: true });
}
```

### Testing Framework Configuration

The generate-test command should read the configured testing framework from `.taskmasterconfig`:

```javascript
// Read testing framework from config
const config = getConfig(projectRoot);
const testingFramework = config.testingFramework || 'jest'; // Default to Jest

// Generate different templates based on framework
switch (testingFramework) {
	case 'jest':
		return generateJestTest(task, context);
	case 'mocha':
		return generateMochaTest(task, context);
	case 'vitest':
		return generateVitestTest(task, context);
	default:
		throw new Error(`Unsupported testing framework: ${testingFramework}`);
}
```

### Framework-Specific Templates

**Jest Template** (current):
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { jest } from '@jest/globals';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		jest.clearAllMocks();
	});

	test('should ${testDescription}', () => {
		// Test implementation
	});
});
```

**Mocha Template**:
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { expect } from 'chai';
import sinon from 'sinon';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		sinon.restore();
	});

	it('should ${testDescription}', () => {
		// Test implementation
	});
});
```

**Vitest Template**:
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { describe, test, expect, vi, beforeEach } from 'vitest';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		vi.clearAllMocks();
	});

	test('should ${testDescription}', () => {
		// Test implementation
	});
});
```

### AI Prompt Enhancement for Mocking

To address the mocking challenge, enhance the AI prompt with project context:

```javascript
const systemPrompt = `You are an expert at generating comprehensive test files. When generating tests, pay special attention to mocking external dependencies correctly.

CRITICAL MOCKING GUIDELINES:
1. Analyze the task requirements to identify external dependencies (APIs, databases, file system, etc.)
2. Mock external dependencies at the module level, not inline
3. Use the testing framework's mocking utilities (jest.mock(), sinon.stub(), vi.mock())
4. Create realistic mock data that matches the expected API responses
5. Test both success and error scenarios for mocked dependencies
6. Ensure mocks are cleared between tests to prevent test pollution

Testing Framework: ${testingFramework}
Project Structure: ${projectStructureContext}
`;
```

### Integration with Future Features

This primitive command design enables:
1. **Automatic test generation**: `task-master add-task --with-test`
2. **Batch test generation**: `task-master generate-tests --all`
3. **Framework-agnostic**: Support multiple testing frameworks
4. **Smart mocking**: LLM analyzes dependencies and generates appropriate mocks

### Updated Implementation Requirements:

1. **Read testing framework** from `.taskmasterconfig`
2. **Create tests directory structure** if it doesn't exist
3. **Generate framework-specific templates** based on configuration
4. **Enhanced AI prompts** with mocking best practices
5. **Project structure analysis** for better import resolution
6. **Mock dependency detection** from task requirements
</info added on 2025-05-23T21:18:25.369Z>

## 4. Implement MCP tool integration for generate-test command [pending]
### Dependencies: 24.3
### Description: Create MCP server tool support for the generate-test command to enable integration with Claude Code and other MCP clients.
### Details:
Implementation steps:
1. Create direct function wrapper in mcp-server/src/core/direct-functions/
2. Create MCP tool registration in mcp-server/src/tools/
3. Add tool to the main tools index
4. Implement proper parameter validation and error handling
5. Ensure telemetry data is properly passed through
6. Add tool to MCP server registration

The MCP tool should support the same parameters as the CLI command:
- id: Task ID to generate tests for
- file: Path to tasks.json file
- research: Whether to use research model
- prompt: Additional context for test generation

Follow the existing pattern from other MCP tools like add-task.js and expand-task.js.
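
A registration sketch following that pattern, under stated assumptions: the server exposes FastMCP's `server.addTool()` as the existing tools do, and the file path, function name, and tool name here are illustrative. The execute body would delegate to the direct function wrapper from step 1.

```javascript
import { z } from 'zod';

// Hypothetical sketch for mcp-server/src/tools/generate-test.js
export function registerGenerateTestTool(server) {
	server.addTool({
		name: 'generate_test',
		description: 'Generate test cases for a task using AI',
		parameters: z.object({
			id: z.string().describe('Task ID to generate tests for'),
			file: z.string().optional().describe('Path to tasks.json file'),
			research: z
				.boolean()
				.optional()
				.describe('Whether to use research model'),
			prompt: z
				.string()
				.optional()
				.describe('Additional context for test generation')
		}),
		execute: async (args) => {
			// Delegate to the direct function wrapper (step 1 above) and
			// return its result in the format expected by MCP clients.
		}
	});
}
```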

## 5. Add testing framework configuration to project initialization [pending]
### Dependencies: 24.3
### Description: Enhance the init.js process to let users choose their preferred testing framework (Jest, Mocha, Vitest, etc.) and store this choice in .taskmasterconfig for use by the generate-test command.
### Details:
Implementation requirements:

1. **Add Testing Framework Prompt to init.js**:
   - Add interactive prompt asking users to choose testing framework (see the sketch at the end of this section)
   - Support Jest (default), Mocha + Chai, Vitest, Ava, Jasmine
   - Include brief descriptions of each framework
   - Allow --testing-framework flag for non-interactive mode

2. **Update .taskmasterconfig Template**:
   - Add testingFramework field to configuration file
   - Include default dependencies for each framework
   - Store framework-specific configuration options

3. **Framework-Specific Setup**:
   - Generate appropriate config files (jest.config.js, vitest.config.ts, etc.)
   - Add framework dependencies to package.json suggestions
   - Create sample test file for the chosen framework

4. **Integration Points**:
   - Ensure generate-test command reads testingFramework from config
   - Add validation to prevent conflicts between framework choices
   - Support switching frameworks later via models command or separate config command

This makes the generate-test command truly framework-agnostic and sets up the foundation for --with-test flags in other commands.
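A minimal sketch of the framework prompt, assuming `inquirer` (init.js may use a different prompt library; choices mirror the list above and the descriptions are illustrative):

```javascript
import inquirer from 'inquirer';

// Hypothetical framework-selection prompt for init.js (interactive mode only)
const { testingFramework } = await inquirer.prompt([
	{
		type: 'list',
		name: 'testingFramework',
		message: 'Which testing framework should generated tests use?',
		choices: [
			{ name: 'Jest - zero-config, built-in mocking (default)', value: 'jest' },
			{ name: 'Mocha + Chai - flexible, bring-your-own assertions', value: 'mocha' },
			{ name: 'Vitest - fast, Vite-native, Jest-compatible API', value: 'vitest' },
			{ name: 'Ava - minimal, concurrent test runner', value: 'ava' },
			{ name: 'Jasmine - classic BDD framework', value: 'jasmine' }
		],
		default: 'jest'
	}
]);
```
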
<info added on 2025-05-23T21:22:02.048Z>
# Implementation Plan for Testing Framework Integration

## Code Structure

### 1. Update init.js
- Add testing framework prompt after addAliases prompt
- Implement framework selection with descriptions
- Support non-interactive mode with --testing-framework flag
- Create setupTestingFramework() function to handle framework-specific setup

### 2. Create New Module Files
- Create `scripts/modules/testing-frameworks.js` for framework templates and setup
- Add sample test generators for each supported framework
- Implement config file generation for each framework

### 3. Update Configuration Templates
- Modify `assets/.taskmasterconfig` to include testing fields:

```json
"testingFramework": "{{testingFramework}}",
"testingConfig": {
	"framework": "{{testingFramework}}",
	"setupFiles": [],
	"testDirectory": "tests",
	"testPattern": "**/*.test.js",
	"coverage": {
		"enabled": false,
		"threshold": 80
	}
}
```

### 4. Create Framework-Specific Templates
- `assets/jest.config.template.js`
- `assets/vitest.config.template.ts`
- `assets/.mocharc.template.json`
- `assets/ava.config.template.js`
- `assets/jasmine.json.template`

### 5. Update commands.js
- Add `--testing-framework <framework>` option to init command
- Add validation for supported frameworks

## Error Handling
- Validate selected framework against supported list
- Handle existing config files gracefully with warning/overwrite prompt
- Provide recovery options if framework setup fails
- Add conflict detection for multiple testing frameworks

## Integration Points
- Ensure generate-test command reads testingFramework from config
- Prepare for future --with-test flag in other commands
- Support framework switching via config command

## Testing Requirements
- Unit tests for framework selection logic
- Integration tests for config file generation
- Validation tests for each supported framework
</info added on 2025-05-23T21:22:02.048Z>