Task management, research, improvements for 24, 41 and 51
@@ -58,6 +58,50 @@ Testing approach:
- Test parameter validation (missing ID, invalid ID format)
- Test error handling for non-existent task IDs
- Test basic command flow with a mock task store

<info added on 2025-05-23T21:02:03.909Z>
## Updated Implementation Approach

Based on code review findings, the implementation approach needs to be revised:

1. Implement the command in `scripts/modules/commands.js` instead of creating a new file
2. Add command registration in the `registerCommands()` function (around line 482)
3. Follow existing command structure pattern:

```javascript
programInstance
	.command('generate-test')
	.description('Generate test cases for a task using AI')
	.option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
	.option('-i, --id <id>', 'Task ID to generate tests for')
	.option('-p, --prompt <text>', 'Additional prompt context')
	.option('-r, --research', 'Use research model')
	.action(async (options) => {
		// Implementation
	});
```

4. Use the following utilities:
   - `findProjectRoot()` for resolving project paths
   - `findTaskById()` for retrieving task data
   - `chalk` for formatted console output

5. Implement error handling following the pattern:

```javascript
try {
	// Implementation
} catch (error) {
	console.error(chalk.red(`Error generating test: ${error.message}`));
	if (error.details) {
		console.error(chalk.red(error.details));
	}
	process.exit(1);
}
```

6. Required imports:
   - chalk for colored output
   - path for file path operations
   - findProjectRoot and findTaskById from './utils.js'

</info added on 2025-05-23T21:02:03.909Z>

## 2. Implement AI prompt construction and FastMCP integration [pending]
### Dependencies: 24.1
@@ -76,6 +120,50 @@ Testing approach:
- Test FastMCP integration with mocked responses
- Test error handling for FastMCP failures
- Test response processing with sample FastMCP outputs

<info added on 2025-05-23T21:04:33.890Z>
## AI Integration Implementation

### AI Service Integration
- Use the unified AI service layer, not FastMCP directly
- Implement with `generateObjectService` from '../ai-services-unified.js'
- Define Zod schema for structured test generation output:
  - testContent: Complete Jest test file content
  - fileName: Suggested filename for the test file
  - mockRequirements: External dependencies that need mocking
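As an illustration, a structured response matching that schema might look like the following (the values are invented for the example):

```json
{
	"testContent": "import { jest } from '@jest/globals';\n\ndescribe('Task 24', () => { /* ... */ });",
	"fileName": "task_024.test.js",
	"mockRequirements": ["fs", "ai-services-unified"]
}
```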

### Prompt Construction
- Create system prompt defining AI's role as test generator
- Build user prompt with task context (ID, title, description, details)
- Include test strategy and subtasks context in the prompt
- Follow patterns from add-task.js for prompt structure

### Task Analysis
- Retrieve task data using `findTaskById()` from utils.js
- Build context by analyzing task description, details, and testStrategy
- Examine project structure for import patterns
- Parse specific testing requirements from task.testStrategy field

### File System Operations
- Determine output path in same directory as tasks.json
- Generate standardized filename based on task ID
- Use fs.writeFileSync for writing test content to file

### Error Handling & UI
- Implement try/catch blocks for AI service calls
- Display user-friendly error messages with chalk
- Use loading indicators during AI processing
- Support both research and main AI models

### Telemetry
- Pass through telemetryData from AI service response
- Display AI usage summary for CLI output

### Required Dependencies
- generateObjectService from ai-services-unified.js
- UI components (loading indicators, display functions)
- Zod for schema validation
- Chalk for formatted console output

</info added on 2025-05-23T21:04:33.890Z>

## 3. Implement test file generation and output [pending]
### Dependencies: 24.2
@@ -97,4 +185,419 @@ Testing approach:
- Test file system operations with mocked fs module
- Test the complete flow from command input to file output
- Verify generated tests can be executed by Jest

<info added on 2025-05-23T21:06:32.457Z>
## Detailed Implementation Guidelines

### File Naming Convention Implementation
```javascript
function generateTestFileName(taskId, isSubtask = false) {
	if (isSubtask) {
		// For subtasks like "24.1", generate "task_024_001.test.js"
		const [parentId, subtaskId] = taskId.split('.');
		return `task_${parentId.padStart(3, '0')}_${subtaskId.padStart(3, '0')}.test.js`;
	} else {
		// For parent tasks like "24", generate "task_024.test.js"
		return `task_${taskId.toString().padStart(3, '0')}.test.js`;
	}
}
```
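A quick sanity check of the naming convention (the helper is re-declared so the snippet runs standalone):

```javascript
// Re-declaration of generateTestFileName so this check runs on its own.
function generateTestFileName(taskId, isSubtask = false) {
	if (isSubtask) {
		const [parentId, subtaskId] = taskId.split('.');
		return `task_${parentId.padStart(3, '0')}_${subtaskId.padStart(3, '0')}.test.js`;
	}
	return `task_${taskId.toString().padStart(3, '0')}.test.js`;
}

console.log(generateTestFileName('24'));         // task_024.test.js
console.log(generateTestFileName('24.1', true)); // task_024_001.test.js
```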

### File Location Strategy
- Place generated test files in the `tasks/` directory alongside task files
- This ensures co-location with task documentation and simplifies implementation

### File Content Structure Template
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { jest } from '@jest/globals';
// Additional imports based on task requirements

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		// Setup code
	});

	afterEach(() => {
		// Cleanup code
	});

	test('should ${testDescription}', () => {
		// Test implementation
	});
});
```
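The `${...}` placeholders above are filled in from task data; a minimal sketch of that substitution follows (the `renderTestTemplate` name and the task shape are assumptions for illustration):

```javascript
// Hypothetical helper that instantiates the template with task data.
function renderTestTemplate(task, testDescription) {
	return [
		'/**',
		` * Test file for Task ${task.id}: ${task.title}`,
		' * Generated automatically by Task Master',
		' */',
		'',
		"import { jest } from '@jest/globals';",
		'',
		`describe('Task ${task.id}: ${task.title}', () => {`,
		`\ttest('should ${testDescription}', () => {`,
		'\t\t// Test implementation',
		'\t});',
		'});'
	].join('\n');
}

const rendered = renderTestTemplate(
	{ id: 24, title: 'Generate tests' },
	'create a test file'
);
```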

### Code Formatting Standards
- Follow project's .prettierrc configuration:
  - Indentation: tabs (useTabs: true, tab width 2)
  - Print width: 80 characters
  - Semicolons: required (semi: true)
  - Quotes: single (singleQuote: true)
  - Trailing commas: none (trailingComma: "none")
  - Bracket spacing: true
  - Arrow parens: always

### File System Operations Implementation
```javascript
import fs from 'fs';
import path from 'path';

// Determine output path
const tasksDir = path.dirname(tasksPath); // Same directory as tasks.json
const fileName = generateTestFileName(task.id, isSubtask);
const filePath = path.join(tasksDir, fileName);

// Ensure directory exists
if (!fs.existsSync(tasksDir)) {
	fs.mkdirSync(tasksDir, { recursive: true });
}

// Write test file with proper error handling
try {
	fs.writeFileSync(filePath, formattedTestContent, 'utf8');
} catch (error) {
	throw new Error(`Failed to write test file: ${error.message}`);
}
```

### Error Handling for File Operations
```javascript
try {
	// File writing operation
	fs.writeFileSync(filePath, testContent, 'utf8');
} catch (error) {
	if (error.code === 'ENOENT') {
		throw new Error(`Directory does not exist: ${path.dirname(filePath)}`);
	} else if (error.code === 'EACCES') {
		throw new Error(`Permission denied writing to: ${filePath}`);
	} else if (error.code === 'ENOSPC') {
		throw new Error('Insufficient disk space to write test file');
	} else {
		throw new Error(`Failed to write test file: ${error.message}`);
	}
}
```

### User Feedback Implementation
```javascript
// Success feedback
console.log(chalk.green('✅ Test file generated successfully:'));
console.log(chalk.cyan(`  File: ${fileName}`));
console.log(chalk.cyan(`  Location: ${filePath}`));
console.log(chalk.gray(`  Size: ${testContent.length} characters`));

// Additional info
if (mockRequirements && mockRequirements.length > 0) {
	console.log(chalk.yellow(`  Mocks needed: ${mockRequirements.join(', ')}`));
}
```

### Content Validation Requirements
1. Jest Syntax Validation:
   - Ensure proper describe/test structure
   - Validate import statements
   - Check for balanced brackets and parentheses
2. Code Quality Checks:
   - Verify no syntax errors
   - Ensure proper indentation
   - Check for required imports
3. Test Completeness:
   - At least one test case
   - Proper test descriptions
   - Appropriate assertions

### Required Dependencies
```javascript
import fs from 'fs';
import path from 'path';
import chalk from 'chalk';
import { log } from '../utils.js';
```

### Integration with Existing Patterns
Follow the pattern from `generate-task-files.js`:
1. Read task data using existing utilities
2. Process content with proper formatting
3. Write files with error handling
4. Provide feedback to user
5. Return success data for MCP integration

</info added on 2025-05-23T21:06:32.457Z>

<info added on 2025-05-23T21:18:25.369Z>
## Corrected Implementation Approach

### Updated File Location Strategy

**CORRECTION**: Tests should go in the `/tests/` directory, not the `/tasks/` directory.

Based on Jest configuration analysis:
- Jest is configured with `roots: ['<rootDir>/tests']`
- Test pattern: `**/?(*.)+(spec|test).js`
- Current test structure has `/tests/unit/`, `/tests/integration/`, etc.

### Recommended Directory Structure:
```
tests/
├── unit/           # Manual unit tests
├── integration/    # Manual integration tests
├── generated/      # AI-generated tests
│   ├── tasks/      # Generated task tests
│   │   ├── task_024.test.js
│   │   └── task_024_001.test.js
│   └── README.md   # Explains generated tests
└── fixtures/       # Test fixtures
```

### Updated File Path Logic:
```javascript
// Determine output path - place in tests/generated/tasks/
const projectRoot = findProjectRoot() || '.';
const testsDir = path.join(projectRoot, 'tests', 'generated', 'tasks');
const fileName = generateTestFileName(task.id, isSubtask);
const filePath = path.join(testsDir, fileName);

// Ensure directory structure exists
if (!fs.existsSync(testsDir)) {
	fs.mkdirSync(testsDir, { recursive: true });
}
```

### Testing Framework Configuration

The generate-test command should read the configured testing framework from `.taskmasterconfig`:

```javascript
// Read testing framework from config
const config = getConfig(projectRoot);
const testingFramework = config.testingFramework || 'jest'; // Default to Jest

// Generate different templates based on framework
switch (testingFramework) {
	case 'jest':
		return generateJestTest(task, context);
	case 'mocha':
		return generateMochaTest(task, context);
	case 'vitest':
		return generateVitestTest(task, context);
	default:
		throw new Error(`Unsupported testing framework: ${testingFramework}`);
}
```
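The same dispatch can be sketched as a lookup table, which keeps the supported-framework list in one place; the generator functions below are stubs standing in for the real template builders:

```javascript
// Stub generators standing in for the real per-framework builders.
const generators = {
	jest: (task) => `// jest test for task ${task.id}`,
	mocha: (task) => `// mocha test for task ${task.id}`,
	vitest: (task) => `// vitest test for task ${task.id}`
};

function generateTest(testingFramework, task) {
	const generator = generators[testingFramework];
	if (!generator) {
		throw new Error(`Unsupported testing framework: ${testingFramework}`);
	}
	return generator(task);
}

console.log(generateTest('jest', { id: 24 })); // // jest test for task 24
```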

### Framework-Specific Templates

**Jest Template** (current):
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { jest } from '@jest/globals';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		jest.clearAllMocks();
	});

	test('should ${testDescription}', () => {
		// Test implementation
	});
});
```

**Mocha Template**:
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { expect } from 'chai';
import sinon from 'sinon';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		sinon.restore();
	});

	it('should ${testDescription}', () => {
		// Test implementation
	});
});
```

**Vitest Template**:
```javascript
/**
 * Test file for Task ${taskId}: ${taskTitle}
 * Generated automatically by Task Master
 */

import { describe, test, expect, vi, beforeEach } from 'vitest';
// Task-specific imports

describe('Task ${taskId}: ${taskTitle}', () => {
	beforeEach(() => {
		vi.clearAllMocks();
	});

	test('should ${testDescription}', () => {
		// Test implementation
	});
});
```

### AI Prompt Enhancement for Mocking

To address the mocking challenge, enhance the AI prompt with project context:

```javascript
const systemPrompt = `You are an expert at generating comprehensive test files. When generating tests, pay special attention to mocking external dependencies correctly.

CRITICAL MOCKING GUIDELINES:
1. Analyze the task requirements to identify external dependencies (APIs, databases, file system, etc.)
2. Mock external dependencies at the module level, not inline
3. Use the testing framework's mocking utilities (jest.mock(), sinon.stub(), vi.mock())
4. Create realistic mock data that matches the expected API responses
5. Test both success and error scenarios for mocked dependencies
6. Ensure mocks are cleared between tests to prevent test pollution

Testing Framework: ${testingFramework}
Project Structure: ${projectStructureContext}
`;
```

### Integration with Future Features

This primitive command design enables:
1. **Automatic test generation**: `task-master add-task --with-test`
2. **Batch test generation**: `task-master generate-tests --all`
3. **Framework-agnostic**: Support multiple testing frameworks
4. **Smart mocking**: LLM analyzes dependencies and generates appropriate mocks

### Updated Implementation Requirements:
1. **Read testing framework** from `.taskmasterconfig`
2. **Create tests directory structure** if it doesn't exist
3. **Generate framework-specific templates** based on configuration
4. **Enhanced AI prompts** with mocking best practices
5. **Project structure analysis** for better import resolution
6. **Mock dependency detection** from task requirements

</info added on 2025-05-23T21:18:25.369Z>

## 4. Implement MCP tool integration for generate-test command [pending]
### Dependencies: 24.3
### Description: Create MCP server tool support for the generate-test command to enable integration with Claude Code and other MCP clients.
### Details:
Implementation steps:
1. Create direct function wrapper in mcp-server/src/core/direct-functions/
2. Create MCP tool registration in mcp-server/src/tools/
3. Add tool to the main tools index
4. Implement proper parameter validation and error handling
5. Ensure telemetry data is properly passed through
6. Add tool to MCP server registration

The MCP tool should support the same parameters as the CLI command:
- id: Task ID to generate tests for
- file: Path to tasks.json file
- research: Whether to use research model
- prompt: Additional context for test generation

Follow the existing pattern from other MCP tools like add-task.js and expand-task.js.

## 5. Add testing framework configuration to project initialization [pending]
### Dependencies: 24.3
### Description: Enhance the init.js process to let users choose their preferred testing framework (Jest, Mocha, Vitest, etc.) and store this choice in .taskmasterconfig for use by the generate-test command.
### Details:
Implementation requirements:

1. **Add Testing Framework Prompt to init.js**:
   - Add interactive prompt asking users to choose testing framework
   - Support Jest (default), Mocha + Chai, Vitest, Ava, Jasmine
   - Include brief descriptions of each framework
   - Allow --testing-framework flag for non-interactive mode

2. **Update .taskmasterconfig Template**:
   - Add testingFramework field to configuration file
   - Include default dependencies for each framework
   - Store framework-specific configuration options

3. **Framework-Specific Setup**:
   - Generate appropriate config files (jest.config.js, vitest.config.ts, etc.)
   - Add framework dependencies to package.json suggestions
   - Create sample test file for the chosen framework

4. **Integration Points**:
   - Ensure generate-test command reads testingFramework from config
   - Add validation to prevent conflicts between framework choices
   - Support switching frameworks later via models command or separate config command

This makes the generate-test command truly framework-agnostic and sets up the foundation for --with-test flags in other commands.

<info added on 2025-05-23T21:22:02.048Z>
# Implementation Plan for Testing Framework Integration

## Code Structure

### 1. Update init.js
- Add testing framework prompt after addAliases prompt
- Implement framework selection with descriptions
- Support non-interactive mode with --testing-framework flag
- Create setupTestingFramework() function to handle framework-specific setup

### 2. Create New Module Files
- Create `scripts/modules/testing-frameworks.js` for framework templates and setup
- Add sample test generators for each supported framework
- Implement config file generation for each framework

### 3. Update Configuration Templates
- Modify `assets/.taskmasterconfig` to include testing fields:

```json
"testingFramework": "{{testingFramework}}",
"testingConfig": {
	"framework": "{{testingFramework}}",
	"setupFiles": [],
	"testDirectory": "tests",
	"testPattern": "**/*.test.js",
	"coverage": {
		"enabled": false,
		"threshold": 80
	}
}
```

### 4. Create Framework-Specific Templates
- `assets/jest.config.template.js`
- `assets/vitest.config.template.ts`
- `assets/.mocharc.template.json`
- `assets/ava.config.template.js`
- `assets/jasmine.json.template`

### 5. Update commands.js
- Add `--testing-framework <framework>` option to init command
- Add validation for supported frameworks

## Error Handling
- Validate selected framework against supported list
- Handle existing config files gracefully with warning/overwrite prompt
- Provide recovery options if framework setup fails
- Add conflict detection for multiple testing frameworks

## Integration Points
- Ensure generate-test command reads testingFramework from config
- Prepare for future --with-test flag in other commands
- Support framework switching via config command

## Testing Requirements
- Unit tests for framework selection logic
- Integration tests for config file generation
- Validation tests for each supported framework

</info added on 2025-05-23T21:22:02.048Z>
@@ -77,48 +77,263 @@ This implementation should include:
### Description: Design and implement the command-line interface for the dependency graph tool, including argument parsing and help documentation.
### Details:
Define commands for input file specification, output options, filtering, and other user-configurable parameters.

<info added on 2025-05-23T21:02:26.442Z>
Implement a new 'diagram' command (with 'graph' alias) in commands.js following the Commander.js pattern. The command should:

1. Import diagram-generator.js module functions for generating visual representations

2. Support multiple visualization types with --type option:
   - dependencies: show task dependency relationships
   - subtasks: show task/subtask hierarchy
   - flow: show task workflow
   - gantt: show timeline visualization

3. Include the following options:
   - --task <id>: Filter diagram to show only specified task and its relationships
   - --mermaid: Output raw Mermaid markdown for external rendering
   - --visual: Render diagram directly in terminal
   - --format <format>: Output format (text, svg, png)

4. Implement proper error handling and validation:
   - Validate task IDs using existing taskExists() function
   - Handle invalid option combinations
   - Provide descriptive error messages

5. Integrate with UI components:
   - Use ui.js display functions for consistent output formatting
   - Apply chalk coloring for terminal output
   - Use boxen formatting consistent with other commands

6. Handle file operations:
   - Resolve file paths using findProjectRoot() pattern
   - Support saving diagrams to files when appropriate

7. Include comprehensive help text following the established pattern in other commands

</info added on 2025-05-23T21:02:26.442Z>
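Independent of the Commander wiring, the `--type` validation described in step 4 can be sketched as follows (the names `DIAGRAM_TYPES` and `validateDiagramType` are assumptions for illustration):

```javascript
// Allowed visualization types for the --type option described above.
const DIAGRAM_TYPES = ['dependencies', 'subtasks', 'flow', 'gantt'];

function validateDiagramType(type) {
	if (!DIAGRAM_TYPES.includes(type)) {
		throw new Error(
			`Invalid diagram type "${type}". Expected one of: ${DIAGRAM_TYPES.join(', ')}`
		);
	}
	return type;
}
```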

## 2. Graph Layout Algorithms [pending]
### Dependencies: 41.1
### Description: Develop or integrate algorithms to compute optimal node and edge placement for clear and readable graph layouts in a terminal environment.
### Details:
Consider topological sorting, hierarchical, and force-directed layouts suitable for ASCII/Unicode rendering.

<info added on 2025-05-23T21:02:49.434Z>
Create a new diagram-generator.js module in the scripts/modules/ directory following Task Master's module architecture pattern. The module should include:

1. Core functions for generating Mermaid diagrams:
   - generateDependencyGraph(tasks, options) - creates flowchart showing task dependencies
   - generateSubtaskDiagram(task, options) - creates hierarchy diagram for subtasks
   - generateProjectFlow(tasks, options) - creates overall project workflow
   - generateGanttChart(tasks, options) - creates timeline visualization
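A minimal sketch of what generateDependencyGraph could emit, assuming the `{ id, title, dependencies }` task shape used elsewhere in Task Master:

```javascript
// Emits Mermaid flowchart markup: one node per task, one edge per dependency.
function generateDependencyGraph(tasks) {
	const lines = ['flowchart TD'];
	for (const task of tasks) {
		lines.push(`\t${task.id}["${task.id}: ${task.title}"]`);
		for (const dep of task.dependencies || []) {
			lines.push(`\t${dep} --> ${task.id}`);
		}
	}
	return lines.join('\n');
}

const mermaid = generateDependencyGraph([
	{ id: 1, title: 'Setup', dependencies: [] },
	{ id: 2, title: 'CLI', dependencies: [1] }
]);
```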

2. Integration with existing Task Master data structures:
   - Use the same task object format from task-manager.js
   - Leverage dependency analysis from dependency-manager.js
   - Support complexity scores from analyze-complexity functionality
   - Handle both main tasks and subtasks with proper ID notation (parentId.subtaskId)

3. Layout algorithm considerations for Mermaid:
   - Topological sorting for dependency flows
   - Hierarchical layouts for subtask trees
   - Circular dependency detection and highlighting
   - Terminal width-aware formatting for ASCII fallback
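The topological-sorting point can be sketched with Kahn's algorithm over the same task shape; the sketch returns null when a cycle blocks a complete ordering:

```javascript
// Kahn's algorithm: repeatedly emit tasks whose dependencies are satisfied.
function topologicalOrder(tasks) {
	const inDegree = new Map(tasks.map((t) => [t.id, 0]));
	for (const t of tasks) {
		for (const dep of t.dependencies || []) {
			inDegree.set(t.id, inDegree.get(t.id) + 1);
		}
	}
	const queue = tasks.filter((t) => inDegree.get(t.id) === 0).map((t) => t.id);
	const order = [];
	while (queue.length) {
		const id = queue.shift();
		order.push(id);
		for (const t of tasks) {
			if ((t.dependencies || []).includes(id)) {
				inDegree.set(t.id, inDegree.get(t.id) - 1);
				if (inDegree.get(t.id) === 0) queue.push(t.id);
			}
		}
	}
	// Incomplete order means a cycle was present.
	return order.length === tasks.length ? order : null;
}
```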

4. Export functions following the existing module pattern at the bottom of the file

</info added on 2025-05-23T21:02:49.434Z>

## 3. ASCII/Unicode Rendering Engine [pending]
### Dependencies: 41.2
### Description: Implement rendering logic to display the dependency graph using ASCII and Unicode characters in the terminal.
### Details:
Support various node and edge styles, and ensure compatibility with different terminal types.

<info added on 2025-05-23T21:03:10.001Z>
Extend ui.js with diagram display functions that integrate with Task Master's existing UI patterns:

1. Implement core diagram display functions:
   - displayTaskDiagram(tasksPath, diagramType, options) as the main entry point
   - displayMermaidCode(mermaidCode, title) for formatted code output with boxen
   - displayDiagramLegend() to explain symbols and colors

2. Ensure UI consistency by:
   - Using established chalk color schemes (blue/green/yellow/red)
   - Applying boxen for consistent component formatting
   - Following existing display function patterns (displayTaskById, displayComplexityReport)
   - Utilizing cli-table3 for any diagram metadata tables

3. Address terminal rendering challenges:
   - Implement ASCII/Unicode fallback when Mermaid rendering isn't available
   - Respect terminal width constraints using process.stdout.columns
   - Integrate with loading indicators via startLoadingIndicator/stopLoadingIndicator

4. Update task file generation to include Mermaid diagram sections in individual task files

5. Support both CLI and MCP output formats through the outputFormat parameter

</info added on 2025-05-23T21:03:10.001Z>

## 4. Color Coding Support [pending]
### Dependencies: 41.3
### Description: Add color coding to nodes and edges to visually distinguish types, statuses, or other attributes in the graph.
### Details:
Use ANSI escape codes for color; provide options for colorblind-friendly palettes.

<info added on 2025-05-23T21:03:35.762Z>
Integrate color coding with Task Master's existing status system:

1. Extend getStatusWithColor() in ui.js to support diagram contexts:
   - Add 'diagram' parameter to determine rendering context
   - Modify color intensity for better visibility in graph elements

2. Implement Task Master's established color scheme using ANSI codes:
   - Green (\x1b[32m) for 'done'/'completed' tasks
   - Yellow (\x1b[33m) for 'pending' tasks
   - Orange (\x1b[38;5;208m) for 'in-progress' tasks
   - Red (\x1b[31m) for 'blocked' tasks
   - Gray (\x1b[90m) for 'deferred'/'cancelled' tasks
   - Magenta (\x1b[35m) for 'review' tasks
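One way to centralize that scheme is a status → style table with a monochrome fallback; the table below is a sketch, and the symbols are illustrative choices, not the project's fixed set:

```javascript
// Status → ANSI color and fallback symbol, per the scheme above.
const STATUS_STYLES = {
	done: { color: '\x1b[32m', symbol: '✓' },
	pending: { color: '\x1b[33m', symbol: '⏳' },
	'in-progress': { color: '\x1b[38;5;208m', symbol: '▶' },
	blocked: { color: '\x1b[31m', symbol: '⚠' },
	deferred: { color: '\x1b[90m', symbol: '–' },
	review: { color: '\x1b[35m', symbol: '?' }
};

function colorizeStatus(status, useColor = true) {
	const style = STATUS_STYLES[status] || { color: '', symbol: '' };
	return useColor
		? `${style.color}${style.symbol} ${status}\x1b[0m`
		: `${style.symbol} ${status}`;
}
```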

3. Create diagram-specific color functions:
   - getDependencyLineColor(fromTaskStatus, toTaskStatus) - color dependency arrows based on relationship status
   - getNodeBorderColor(task) - style node borders using priority/complexity indicators
   - getSubtaskGroupColor(parentTask) - visually group related subtasks

4. Integrate complexity visualization:
   - Use getComplexityWithColor() for node background or border thickness
   - Map complexity scores to visual weight in the graph

5. Ensure accessibility:
   - Add text-based indicators (symbols like ✓, ⚠, ⏳) alongside colors
   - Implement colorblind-friendly palettes as user-selectable option
   - Include shape variations for different statuses

6. Follow existing ANSI patterns:
   - Maintain consistency with terminal UI color usage
   - Reuse color constants from the codebase

7. Support graceful degradation:
   - Check terminal capabilities using existing detection
   - Provide monochrome fallbacks with distinctive patterns
   - Use bold/underline as alternatives when colors unavailable

</info added on 2025-05-23T21:03:35.762Z>
|
||||||
|
|
||||||
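The status-to-ANSI mapping and the monochrome fallback can be sketched as below. `colorizeStatus` is a hypothetical stand-in for extending the real `getStatusWithColor()` in ui.js; the escape codes are the ones listed above.

```javascript
// Hypothetical helper: maps Task Master statuses to the ANSI codes above,
// with text symbols so color is never the only signal (accessibility).
const STATUS_COLORS = {
  done: '\x1b[32m', completed: '\x1b[32m',
  pending: '\x1b[33m',
  'in-progress': '\x1b[38;5;208m',
  blocked: '\x1b[31m',
  deferred: '\x1b[90m', cancelled: '\x1b[90m',
  review: '\x1b[35m',
};
const RESET = '\x1b[0m';
const STATUS_SYMBOLS = { done: '✓', completed: '✓', pending: '⏳', blocked: '⚠' };

function colorizeStatus(status, { useColor = true } = {}) {
  const symbol = STATUS_SYMBOLS[status] ? `${STATUS_SYMBOLS[status]} ` : '';
  if (!useColor) return `${symbol}${status}`; // monochrome fallback path
  const color = STATUS_COLORS[status] ?? '';
  return `${color}${symbol}${status}${RESET}`;
}
```

Capability detection (item 7) would decide the `useColor` flag before rendering the graph.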
## 5. Circular Dependency Detection [pending]
### Dependencies: 41.2
### Description: Implement algorithms to detect and highlight circular dependencies within the graph.
### Details:
Clearly mark cycles in the rendered output and provide warnings or errors as appropriate.
<info added on 2025-05-23T21:04:20.125Z>
Integrate with Task Master's existing circular dependency detection:

1. Import the dependency detection logic from the dependency-manager.js module
2. Utilize the findCycles function from utils.js or dependency-manager.js
3. Extend validateDependenciesCommand functionality to highlight cycles in diagrams

Visual representation in Mermaid diagrams:
- Apply red/bold styling to nodes involved in dependency cycles
- Add warning annotations to cyclic edges
- Implement cycle path highlighting with distinctive line styles

Integration with validation workflow:
- Execute dependency validation before diagram generation
- Display cycle warnings consistent with existing CLI error messaging
- Utilize chalk.red and boxen for error highlighting following established patterns

Add diagram legend entries that explain cycle notation and warnings.

Ensure detection of cycles in both:
- Main task dependencies
- Subtask dependencies within parent tasks

Follow Task Master's error handling patterns for graceful cycle reporting and user notification.
</info added on 2025-05-23T21:04:20.125Z>
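For illustration, cycle detection over the task dependency graph is a standard colored DFS; in practice the existing `findCycles` in dependency-manager.js/utils.js would be reused, so this sketch only shows the technique and the cycle-path output the diagram layer can highlight.

```javascript
// Illustrative DFS cycle finder: returns each cycle as the path of task ids,
// closed on the repeated id, so the renderer can style those nodes/edges red.
function findCycles(tasks) {
  const deps = new Map(tasks.map((t) => [t.id, t.dependencies || []]));
  const WHITE = 0, GRAY = 1, BLACK = 2; // unvisited / on stack / finished
  const color = new Map([...deps.keys()].map((id) => [id, WHITE]));
  const stack = [];
  const cycles = [];

  function visit(id) {
    color.set(id, GRAY);
    stack.push(id);
    for (const dep of deps.get(id) || []) {
      if (color.get(dep) === GRAY) {
        // Back edge: the slice of the stack from dep onward is the cycle.
        cycles.push(stack.slice(stack.indexOf(dep)).concat(dep));
      } else if (color.get(dep) === WHITE) {
        visit(dep);
      }
    }
    stack.pop();
    color.set(id, BLACK);
  }

  for (const id of deps.keys()) if (color.get(id) === WHITE) visit(id);
  return cycles;
}
```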
## 6. Filtering and Search Functionality [pending]
### Dependencies: 41.1, 41.2
### Description: Enable users to filter nodes and edges by criteria such as name, type, or dependency depth.
### Details:
Support command-line flags for filtering and interactive search if feasible.
<info added on 2025-05-23T21:04:57.811Z>
Implement MCP tool integration for task dependency visualization:

1. Create task_diagram.js in mcp-server/src/tools/ following existing tool patterns
2. Implement taskDiagramDirect.js in mcp-server/src/core/direct-functions/
3. Use a Zod schema for parameter validation:
- diagramType (dependencies, subtasks, flow, gantt)
- taskId (optional string)
- format (mermaid, text, json)
- includeComplexity (boolean)

4. Structure response data with:
- mermaidCode for client-side rendering
- metadata (nodeCount, edgeCount, cycleWarnings)
- support for both task-specific and project-wide diagrams

5. Integrate with session management and project root handling
6. Implement error handling using the handleApiResult pattern
7. Register the tool in tools/index.js

Maintain compatibility with existing command-line flags for filtering and interactive search.
</info added on 2025-05-23T21:04:57.811Z>
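The parameter shape from step 3 can be made concrete with a small dependency-free validator; this is a hand-rolled stand-in for the Zod schema (the real tool would declare it with `z.object({...})` and `z.enum([...])`).

```javascript
// Stand-in for the Zod schema: same fields and constraints, plain JS.
const DIAGRAM_TYPES = ['dependencies', 'subtasks', 'flow', 'gantt'];
const FORMATS = ['mermaid', 'text', 'json'];

function validateDiagramParams(params) {
  const errors = [];
  if (!DIAGRAM_TYPES.includes(params.diagramType)) {
    errors.push(`diagramType must be one of: ${DIAGRAM_TYPES.join(', ')}`);
  }
  if (params.taskId !== undefined && typeof params.taskId !== 'string') {
    errors.push('taskId must be a string when provided');
  }
  if (!FORMATS.includes(params.format)) {
    errors.push(`format must be one of: ${FORMATS.join(', ')}`);
  }
  if (typeof params.includeComplexity !== 'boolean') {
    errors.push('includeComplexity must be a boolean');
  }
  return { ok: errors.length === 0, errors };
}
```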
## 7. Accessibility Features [pending]
### Dependencies: 41.3, 41.4
### Description: Ensure the tool is accessible, including support for screen readers, high-contrast modes, and keyboard navigation.
### Details:
Provide alternative text output and ensure color is not the sole means of conveying information.
<info added on 2025-05-23T21:05:54.584Z>
# Accessibility and Export Integration

## Accessibility Features
- Provide alternative text output for visual elements
- Ensure color is not the sole means of conveying information
- Support keyboard navigation through the dependency graph
- Add screen-reader-compatible node descriptions

## Export Integration
- Extend the generateTaskFiles function in task-manager.js to include Mermaid diagram sections
- Add Mermaid code blocks to task markdown files under a ## Diagrams header
- Follow existing task file generation patterns and markdown structure
- Support multiple diagram types per task file:
  * Task dependencies (prerequisite relationships)
  * Subtask hierarchy visualization
  * Task flow context in the project workflow
- Integrate with existing fs module file-writing operations
- Add diagram export options to the generate command in commands.js
- Support SVG and PNG export using the Mermaid CLI when available
- Implement error handling for diagram generation failures
- Reference exported diagrams in task markdown with proper paths
- Update the CLI generate command with options like --include-diagrams
</info added on 2025-05-23T21:05:54.584Z>
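The export-integration idea above can be sketched as appending a `## Diagrams` section with a Mermaid dependency graph to a task's markdown. Function names here are illustrative; the real hook would live inside `generateTaskFiles` in task-manager.js.

```javascript
// Builds a Mermaid "graph TD" block from a task's dependency list and appends
// it under a "## Diagrams" header, matching the export notes above.
const FENCE = '`'.repeat(3); // triple backtick, built to avoid a literal fence here

function buildDependencyDiagram(task, allTasks) {
  const lines = [`${FENCE}mermaid`, 'graph TD'];
  for (const depId of task.dependencies || []) {
    const dep = allTasks.find((t) => t.id === depId);
    const label = dep ? dep.title : `Task ${depId}`;
    // Edge points from prerequisite to dependent task.
    lines.push(`  T${depId}["${label}"] --> T${task.id}["${task.title}"]`);
  }
  lines.push(FENCE);
  return lines.join('\n');
}

function appendDiagramSection(markdown, task, allTasks) {
  return `${markdown}\n\n## Diagrams\n\n${buildDependencyDiagram(task, allTasks)}\n`;
}
```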
## 8. Performance Optimization [pending]
### Dependencies: 41.2, 41.3, 41.4, 41.5, 41.6
### Description: Profile and optimize the tool for large graphs to ensure responsive rendering and low memory usage.
### Details:
Implement lazy loading, efficient data structures, and parallel processing where appropriate.
<info added on 2025-05-23T21:06:14.533Z>
# Mermaid Library Integration and Terminal-Specific Handling

## Package Dependencies
- Add the mermaid package as an optional dependency in package.json for generating raw Mermaid diagram code
- Consider mermaid-cli for SVG/PNG conversion capabilities
- Evaluate terminal-image or similar libraries for terminals with image support
- Explore ascii-art-ansi or box-drawing character libraries for text-only terminals

## Terminal Capability Detection
- Leverage existing terminal detection from ui.js to assess rendering capabilities
- Implement detection for:
  - iTerm2 and other terminals with image protocol support
  - Terminals with Unicode/extended character support
  - Basic terminals requiring pure ASCII output

## Rendering Strategy with Fallbacks
1. Primary: Generate raw Mermaid code for user copy/paste
2. Secondary: Render a simplified ASCII tree/flow representation using box characters
3. Tertiary: Present dependencies in tabular format for minimal terminals

## Implementation Approach
- Use dynamic imports for optional rendering libraries to keep the core lightweight
- Implement graceful degradation when optional packages aren't available
- Follow Task Master's philosophy of minimal dependencies
- Ensure performance optimization through lazy loading where appropriate
- Design modular rendering components that can be swapped based on terminal capabilities
</info added on 2025-05-23T21:06:14.533Z>
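The capability-detection and fallback tiers above can be sketched as follows; the real implementation would reuse the detection helpers already in ui.js, and the env checks here (TERM_PROGRAM, LANG) are illustrative heuristics, not an exhaustive probe.

```javascript
// Env and TTY state are injected so the selection logic is testable.
function detectTerminalCapabilities(env = process.env, isTTY = process.stdout.isTTY) {
  return {
    isTTY: Boolean(isTTY),
    supportsImages: env.TERM_PROGRAM === 'iTerm.app', // iTerm2 inline-image protocol
    supportsUnicode: /UTF-?8/i.test(env.LANG || '') || /UTF-?8/i.test(env.LC_ALL || ''),
  };
}

function pickRenderer(caps) {
  if (!caps.isTTY) return 'mermaid-code'; // piped output: emit raw Mermaid for copy/paste
  if (caps.supportsUnicode) return 'ascii-tree'; // box-drawing tree representation
  return 'table'; // plain tabular fallback for minimal terminals
}
```

Optional renderers would then be loaded with dynamic `import()` only for the tier actually selected.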
## 9. Documentation [pending]
### Dependencies: 41.1, 41.2, 41.3, 41.4, 41.5, 41.6, 41.7, 41.8
@@ -131,4 +346,28 @@ Include examples, troubleshooting, and contribution guidelines.
### Description: Develop automated tests for all major features, including CLI parsing, layout correctness, rendering, color coding, filtering, and cycle detection.
### Details:
Include unit, integration, and regression tests; validate accessibility and performance claims.
<info added on 2025-05-23T21:08:36.329Z>
# Documentation Tasks for Visual Task Dependency Graph

## User Documentation
1. Update README.md with diagram command documentation following the existing command reference format
2. Add examples to CLI command help text in commands.js, matching patterns from other commands
3. Create docs/diagrams.md with a detailed usage guide including:
- Command examples for each diagram type
- Mermaid code samples and output
- Terminal compatibility notes
- Integration with task workflow examples
- Troubleshooting section for common diagram rendering issues
- Accessibility features and terminal fallback options

## Developer Documentation
1. Update MCP tool documentation to include the new task_diagram tool
2. Add JSDoc comments to all new functions following existing code standards
3. Create contributor documentation for extending diagram types
4. Update API documentation for any new MCP interface endpoints

## Integration Documentation
1. Document integration with existing commands (analyze-complexity, generate, etc.)
2. Provide examples showing how diagrams complement other Task Master features
</info added on 2025-05-23T21:08:36.329Z>
@@ -3,56 +3,89 @@
# Status: pending
# Dependencies: None
# Priority: medium
# Description: Create an interactive REPL-style chat interface for AI-powered research that maintains conversation context, integrates project information, and provides session management capabilities.
# Details:
Develop an interactive REPL-style chat interface for AI-powered research that allows users to have ongoing research conversations with context awareness. The system should:
1. Create an interactive REPL using inquirer that:
- Maintains conversation history and context
- Provides a natural chat-like experience
- Supports special commands with the '/' prefix

2. Integrate with the existing ai-services-unified.js using research mode:
- Leverage our unified AI service architecture
- Configure appropriate system prompts for research context
- Handle streaming responses for real-time feedback

3. Support multiple context sources:
- Task/subtask IDs for project context
- File paths for code or document context
- Custom prompts for specific research directions
- Project file tree for system context

4. Implement chat commands including:
- `/save` - Save conversation to file
- `/task` - Associate with or load context from a task
- `/help` - Show available commands and usage
- `/exit` - End the research session
- `/copy` - Copy last response to clipboard
- `/summary` - Generate summary of conversation
- `/detail` - Adjust research depth level

5. Create session management capabilities:
- Generate and track unique session IDs
- Save/load sessions automatically
- Browse and switch between previous sessions
- Export sessions to portable formats

6. Design a consistent UI using ui.js patterns:
- Color-coded messages for user/AI distinction
- Support for markdown rendering in terminal
- Progressive display of AI responses
- Clear visual hierarchy and readability

7. Follow the "taskmaster way":
- Create something new and exciting
- Focus on usefulness and practicality
- Avoid over-engineering
- Maintain consistency with existing patterns

The REPL should feel like a natural conversation while providing powerful research capabilities that integrate seamlessly with the rest of the system.
# Test Strategy:
1. Unit tests:
- Test the REPL command parsing and execution
- Mock AI service responses to test different scenarios
- Verify context extraction and integration from various sources
- Test session serialization and deserialization

2. Integration tests:
- Test actual AI service integration with the REPL
- Verify session persistence across application restarts
- Test conversation state management with long interactions
- Verify context switching between different tasks and files

3. User acceptance testing:
- Have team members use the REPL for real research needs
- Test the conversation flow and command usability
- Verify the UI is intuitive and responsive
- Test with various terminal sizes and environments

4. Performance testing:
- Measure and optimize response time for queries
- Test behavior with large conversation histories
- Verify performance with complex context sources
- Test under poor network conditions

5. Specific test scenarios:
- Verify markdown rendering for complex formatting
- Test streaming display with various response lengths
- Verify export features create properly formatted files
- Test session recovery from simulated crashes
- Validate handling of special characters and unicode
# Subtasks:
## 1. Create Perplexity API Client Service [cancelled]
### Dependencies: None
### Description: Develop a service module that handles all interactions with the Perplexity AI API, including authentication, request formatting, and response handling.
### Details:
@@ -72,6 +105,9 @@ Testing approach:
- Test error handling with simulated network failures
- Verify caching mechanism works correctly
- Test with various query types and options
<info added on 2025-05-23T21:06:45.726Z>
DEPRECATION NOTICE: This subtask is no longer needed and has been marked for removal. Instead of creating a new Perplexity service, we will leverage the existing ai-services-unified.js with research mode. This approach allows us to maintain a unified architecture for AI services rather than implementing a separate service specifically for Perplexity.
</info added on 2025-05-23T21:06:45.726Z>
## 2. Implement Task Context Extraction Logic [pending]
### Dependencies: None
@@ -94,6 +130,37 @@ Testing approach:
- Test with various task structures and content types
- Verify error handling for missing or invalid tasks
- Test the quality of extracted context with sample queries
<info added on 2025-05-23T21:11:44.560Z>
Updated Implementation Approach (refactored):

1. Extract the fuzzy search logic from add-task.js (lines ~240-400) into `utils/contextExtractor.js`
2. Implement a reusable `TaskContextExtractor` class with the following methods:
- `extractTaskContext(taskId)` - Base context extraction
- `performFuzzySearch(query, options)` - Enhanced Fuse.js implementation
- `getRelevanceScore(task, query)` - Scoring mechanism from add-task.js
- `detectPurposeCategories(task)` - Category classification logic
- `findRelatedTasks(taskId)` - Identify dependencies and relationships
- `aggregateMultiQueryContext(queries)` - Support for multiple search terms

3. Add configurable context depth levels:
- Minimal: just the task title and description
- Standard: include details and immediate relationships
- Comprehensive: full context with all dependencies and related tasks

4. Implement context formatters:
- `formatForSystemPrompt(context)` - Structured for AI system instructions
- `formatForChatContext(context)` - Conversational format for chat
- `formatForResearchQuery(context, query)` - Optimized for research commands

5. Add a caching layer for performance optimization:
- Implement an LRU cache for expensive fuzzy search results
- Invalidate the cache on task updates

6. Ensure backward compatibility with existing context extraction requirements

This approach leverages our existing sophisticated search logic rather than rebuilding from scratch, while making it more flexible and reusable across the application.
</info added on 2025-05-23T21:11:44.560Z>
|
## 3. Build Research Command CLI Interface [pending]
|
||||||
### Dependencies: 51.1, 51.2
|
### Dependencies: 51.1, 51.2
|
||||||
@@ -120,6 +187,40 @@ Testing approach:
- Verify command validation logic works correctly
- Test with various combinations of options
- Ensure proper error messages for invalid inputs
<info added on 2025-05-23T21:09:08.478Z>
Implementation details:
1. Create a new module `repl/research-chat.js` for the interactive research experience
2. Implement the REPL-style chat interface using inquirer with:
- Persistent conversation history management
- Context-aware prompting system
- Command parsing for special instructions
3. Implement REPL commands:
- `/save` - Save conversation to file
- `/task` - Associate with or load context from a task
- `/help` - Show available commands and usage
- `/exit` - End the research session
- `/copy` - Copy last response to clipboard
- `/summary` - Generate summary of conversation
- `/detail` - Adjust research depth level
4. Create a context initialization system:
- Task/subtask context loading
- File content integration
- System prompt configuration
5. Integrate with ai-services-unified.js research mode
6. Implement conversation state management:
- Track message history
- Maintain the context window
- Handle context pruning for long conversations
7. Design consistent UI patterns using the ui.js library
8. Add an entry point in the main CLI application

Testing approach:
- Test REPL command parsing and execution
- Verify context initialization with various inputs
- Test conversation state management
- Ensure proper error handling and recovery
- Validate UI consistency across different terminal environments
</info added on 2025-05-23T21:09:08.478Z>
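The context pruning in item 6 can be sketched as keeping system messages plus the newest turns that fit a rough token budget; the 4-characters-per-token estimate is a placeholder assumption, not the real tokenizer.

```javascript
// Keeps all system messages, then admits conversation turns newest-first
// until the token budget is exhausted; older turns are dropped.
function pruneHistory(messages, maxTokens, estimateTokens = (m) => Math.ceil(m.content.length / 4)) {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  let budget = maxTokens - system.reduce((n, m) => n + estimateTokens(m), 0);
  const kept = [];
  for (let i = rest.length - 1; i >= 0; i--) { // walk newest-first
    const cost = estimateTokens(rest[i]);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(rest[i]); // restore chronological order
  }
  return [...system, ...kept];
}
```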
## 4. Implement Results Processing and Output Formatting [pending]
### Dependencies: 51.1, 51.3
@@ -145,8 +246,45 @@ Testing approach:
- Verify file saving functionality creates proper files with correct content
- Test clipboard functionality
- Verify summarization produces useful results
<info added on 2025-05-23T21:10:00.181Z>
Implementation details:
1. Create a new module `utils/chatFormatter.js` for REPL interface formatting
2. Implement terminal output formatting for conversational display:
- Color-coded messages distinguishing user inputs and AI responses
- Proper text wrapping and indentation for readability
- Support for markdown rendering in terminal
- Visual indicators for system messages and status updates
3. Implement streaming/progressive display of AI responses:
- Character-by-character or chunk-by-chunk display
- Cursor animations during response generation
- Ability to interrupt long responses
4. Design chat history visualization:
- Scrollable history with clear message boundaries
- Timestamp display options
- Session identification
5. Create specialized formatters for different content types:
- Code blocks with syntax highlighting
- Bulleted and numbered lists
- Tables and structured data
- Citations and references
6. Implement export functionality:
- Save conversations to markdown or text files
- Export individual responses
- Copy responses to clipboard
7. Adapt existing ui.js patterns for conversational context:
- Maintain consistent styling while supporting chat flow
- Handle multi-turn context appropriately

Testing approach:
- Test streaming display with various response lengths and speeds
- Verify markdown rendering accuracy for complex formatting
- Test history navigation and scrolling functionality
- Verify export features create properly formatted files
- Test display on various terminal sizes and configurations
- Verify handling of special characters and unicode
</info added on 2025-05-23T21:10:00.181Z>
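The chunked progressive display from item 3 can be sketched with stdlib only; the write function is injected so the chunking logic is testable, and in the real REPL each chunk would be paced with a small delay.

```javascript
// Emits the response text in fixed-size chunks through the supplied write
// function (stdout by default) and returns the chunks for inspection.
function streamResponse(text, { chunkSize = 8, write = (s) => process.stdout.write(s) } = {}) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    const chunk = text.slice(i, i + chunkSize);
    chunks.push(chunk);
    write(chunk);
  }
  return chunks;
}
```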
## 5. Implement Caching and Results Management System [cancelled]
### Dependencies: 51.1, 51.4
### Description: Create a persistent caching system for research results and implement functionality to manage, retrieve, and reference previous research.
### Details:
@@ -173,4 +311,142 @@ Testing approach:
- Test history management commands
- Verify task association functionality
- Test with large cache sizes to ensure performance
<info added on 2025-05-23T21:10:28.544Z>
Implementation details:
1. Create a session management system for the REPL experience:
- Generate and track unique session IDs
- Store conversation history with timestamps
- Maintain context and state between interactions
2. Implement session persistence:
- Save sessions to disk automatically
- Load previous sessions on startup
- Handle graceful recovery from crashes
3. Build a session browser and selector:
- List available sessions with previews
- Filter sessions by date, topic, or content
- Enable quick switching between sessions
4. Implement conversation state serialization:
- Capture full conversation context
- Preserve user preferences per session
- Handle state migration during updates
5. Add session sharing capabilities:
- Export sessions to portable formats
- Import sessions from files
- Generate shareable links (if applicable)
6. Create session management commands:
- Create new sessions
- Clone existing sessions
- Archive or delete old sessions

Testing approach:
- Verify session persistence across application restarts
- Test session recovery from simulated crashes
- Validate state serialization with complex conversations
- Ensure session switching maintains proper context
- Test session import/export functionality
- Verify performance with large conversation histories
</info added on 2025-05-23T21:10:28.544Z>
## 6. Implement Project Context Generation [pending]
### Dependencies: 51.2
### Description: Create functionality to generate and include project-level context such as file trees, repository structure, and codebase insights for more informed research.
### Details:
Implementation details:
1. Create a new module `utils/projectContextGenerator.js` for project-level context extraction
2. Implement file tree generation functionality:
- Scan project directory structure recursively
- Filter out irrelevant files (node_modules, .git, etc.)
- Format file tree for AI consumption
- Include file counts and structure statistics
3. Add code analysis capabilities:
- Extract key imports and dependencies
- Identify main modules and their relationships
- Generate high-level architecture overview
4. Implement context summarization:
- Create concise project overview
- Identify key technologies and patterns
- Summarize project purpose and structure
5. Add caching for expensive operations:
- Cache file tree with invalidation on changes
- Store analysis results with TTL
6. Create integration with research REPL:
- Add project context to system prompts
- Support `/project` command to refresh context
- Allow selective inclusion of project components

Testing approach:
- Test file tree generation with various project structures
- Verify filtering logic works correctly
- Test context summarization quality
- Measure performance impact of context generation
- Verify caching mechanism effectiveness
## 7. Create REPL Command System [pending]
### Dependencies: 51.3
### Description: Implement a flexible command system for the research REPL that allows users to control the conversation flow, manage sessions, and access additional functionality.
### Details:
Implementation details:

1. Create a new module `repl/commands.js` for REPL command handling
2. Implement a command parser that:
   - Detects commands starting with `/`
   - Parses arguments and options
   - Handles quoted strings and special characters
3. Create a command registry system:
   - Register command handlers with descriptions
   - Support command aliases
   - Enable command discovery and help
4. Implement core commands:
   - `/save [filename]` - Save the conversation
   - `/task <taskId>` - Load task context
   - `/file <path>` - Include file content
   - `/help [command]` - Show help
   - `/exit` - End the session
   - `/copy [n]` - Copy the nth response
   - `/summary` - Generate a conversation summary
   - `/detail <level>` - Set the detail level
   - `/clear` - Clear the conversation
   - `/project` - Refresh project context
   - `/session <id|new>` - Switch to or create a session
5. Add command completion and suggestions
6. Implement error handling for invalid commands
7. Create a help system with examples
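The parser and registry from steps 2-3 could be sketched as below. This is a rough shape under assumed names (`parseCommand`, `registerCommand`, `dispatch` are hypothetical), and the quoting support is deliberately minimal:

```javascript
// Parse a REPL input line. Lines starting with `/` are commands;
// arguments split on whitespace, with double-quoted strings kept whole.
function parseCommand(line) {
  if (!line.startsWith('/')) return null; // plain chat input
  const tokens = line.slice(1).match(/"[^"]*"|\S+/g) || [];
  const [name, ...args] = tokens.map((t) => t.replace(/^"|"$/g, ''));
  return { name, args };
}

// Registry mapping command names (and aliases) to handlers.
const registry = new Map();

function registerCommand(name, { aliases = [], description, handler }) {
  for (const key of [name, ...aliases]) {
    registry.set(key, { name, description, handler });
  }
}

function dispatch(line, context) {
  const parsed = parseCommand(line);
  if (!parsed) return null; // not a command; send to the AI instead
  const command = registry.get(parsed.name);
  if (!command) throw new Error(`Unknown command: /${parsed.name}`);
  return command.handler(parsed.args, context);
}
```

Keeping descriptions in the registry means `/help` and command completion can be generated from the same table rather than maintained separately.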
Testing approach:
- Test command parsing with various inputs
- Verify command execution and error handling
- Test command completion functionality
- Verify the help system provides useful information
- Test with complex command sequences
## 8. Integrate with AI Services Unified [pending]
### Dependencies: 51.3, 51.4
### Description: Integrate the research REPL with the existing ai-services-unified.js to leverage the unified AI service architecture with research mode.
### Details:
Implementation details:

1. Update `repl/research-chat.js` to integrate with ai-services-unified.js
2. Configure research mode in the AI service:
   - Set appropriate system prompts
   - Configure temperature and other parameters
   - Enable streaming responses
3. Implement context management:
   - Format conversation history for the AI context
   - Include task and project context
   - Handle context window limitations
4. Add support for different research styles:
   - Exploratory research with broader context
   - Focused research with specific questions
   - Comparative analysis between concepts
5. Implement response handling:
   - Process streaming chunks
   - Format and display responses
   - Handle errors and retries
6. Add configuration options for AI service selection
7. Implement fallback mechanisms for service unavailability
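Steps 5 and 7 could take roughly the shape below. The actual streaming interface of ai-services-unified.js is not shown here, so this sketch assumes only a generic async iterable of text chunks and a list of provider functions; all names are illustrative.

```javascript
// Consume a streaming response: render each chunk as it arrives and
// return the accumulated text for the conversation history.
async function consumeStream(stream, onChunk) {
  let fullText = '';
  for await (const chunk of stream) {
    fullText += chunk;
    onChunk(chunk); // e.g. process.stdout.write(chunk)
  }
  return fullText;
}

// Fallback mechanism: try each provider function in order, moving to
// the next on failure, and rethrow the last error if all fail.
async function withFallback(providers, request) {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider(request);
    } catch (error) {
      lastError = error; // try the next provider
    }
  }
  throw lastError;
}
```

Accumulating the full text while streaming lets commands like `/copy` and `/save` work on complete responses without re-requesting them.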
Testing approach:
- Test integration with mocked AI services
- Verify context formatting and management
- Test streaming response handling
- Verify error handling and recovery
- Test with various research styles and queries