claude-task-master/.cursor/rules/new_features.mdc
Eyal Toledano 0eec95323c feat: Enhance Task Master CLI with Testing Framework, Perplexity AI Integration, and Refactored Core Logic
This commit introduces significant enhancements and refactoring to the Task Master CLI, focusing on improved testing, integration with Perplexity AI for research-backed task updates, and core logic refactoring for better maintainability and functionality.

**Testing Infrastructure Setup:**
- Implemented Jest as the primary testing framework, setting up a comprehensive testing environment.
- Added new test scripts to `package.json` for streamlined testing workflows.
- Integrated the devDependencies needed for testing (Jest and its supporting packages) to support unit, integration, and end-to-end testing.

**Dependency Updates:**
- Updated `package.json` and `package-lock.json` to reflect the latest dependency versions, ensuring project stability and access to the newest features and security patches.
- Upgraded an existing dependency to version 0.9.16 and `openai` to 4.89.0.
- Added a new dependency at version 2.3.0 and updated related dependencies to their latest versions.

**Perplexity AI Integration for Research-Backed Updates:**
- Introduced an option to leverage Perplexity AI for task updates, enabling research-backed enhancements to task details.
- Implemented logic to initialize a Perplexity AI client when the Perplexity API key environment variable is available.
- Modified the task update function to accept a parameter that dynamically selects between Perplexity AI and Claude AI for task updates, based on API key availability and user preference (sketched below).
- Enhanced response handling to process Perplexity AI output and update tasks accordingly, including improved error handling and logging for robust operation.
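
A minimal sketch of the selection logic this enables (the environment variable name, function names, and helpers are illustrative assumptions, not the shipped code):

```javascript
// Illustrative sketch only - identifiers are assumptions, not the actual implementation
import OpenAI from 'openai';

// Perplexity exposes an OpenAI-compatible API, so a client is created only
// when an API key is present in the environment (variable name assumed here)
const perplexity = process.env.PERPLEXITY_API_KEY
  ? new OpenAI({ apiKey: process.env.PERPLEXITY_API_KEY, baseURL: 'https://api.perplexity.ai' })
  : null;

async function updateTasks(tasksPath, prompt, { useResearch = false } = {}) {
  if (useResearch && perplexity) {
    return updateTasksWithPerplexity(perplexity, tasksPath, prompt); // research-backed path (hypothetical helper)
  }
  return updateTasksWithClaude(tasksPath, prompt); // default Claude path (hypothetical helper)
}
```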

**Core Logic Refactoring and Improvements:**
- Refactored the dependency-handling function to use task IDs instead of dependency IDs, ensuring consistency and clarity in dependency management.
- Implemented a new validation function that rigorously checks for both circular dependencies and self-dependencies within tasks, improving task relationship integrity (see the sketch after this list).
- Enhanced UI elements in the UI module (`ui.js`):
    - Refactored status formatting to incorporate icons for different task statuses and use a color-mapping object, improving the visual representation of task status.
    - Updated the complexity display to show colored complexity scores with emojis, providing a more intuitive and visually appealing representation of task complexity.
- Refactored the task data structure creation and validation process:
    - Updated the JSON Schema for task data to reflect a more streamlined and efficient task structure.
    - Implemented Task Model Classes for better data modeling and type safety.
    - Improved File System Operations for task data management.
    - Developed robust Validation Functions and an Error Handling System to ensure data integrity and application stability.
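
A minimal sketch of how such a dependency check might be structured (the function name and task shape are assumptions for illustration, not the shipped implementation):

```javascript
// Illustrative sketch - names and structure are assumptions, not the actual code
function findDependencyIssues(tasks) {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  const issues = [];

  // Self-dependencies: a task that lists its own ID as a dependency
  for (const task of tasks) {
    if ((task.dependencies || []).includes(task.id)) {
      issues.push({ type: 'self', taskId: task.id });
    }
  }

  // Circular dependencies: depth-first search looking for a back edge
  const visiting = new Set();
  const visited = new Set();
  const visit = (id) => {
    if (visiting.has(id)) {
      issues.push({ type: 'circular', taskId: id });
      return;
    }
    if (visited.has(id) || !byId.has(id)) return;
    visiting.add(id);
    for (const depId of byId.get(id).dependencies || []) visit(depId);
    visiting.delete(id);
    visited.add(id);
  };
  tasks.forEach((t) => visit(t.id));

  return issues;
}
```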

**Testing Guidelines Implementation:**
- Implemented guidelines for writing testable code when developing new features, promoting a test-driven development approach.
- Added testing requirements and best practices for unit, integration, and edge case testing to ensure comprehensive test coverage.
- Updated the development workflow to mandate writing tests before proceeding with configuration and documentation updates, reinforcing the importance of testing throughout the development lifecycle.

This commit collectively enhances the Task Master CLI's reliability, functionality, and developer experience through improved testing practices, AI-powered research capabilities, and a more robust and maintainable codebase.
2025-03-24 13:28:08 -04:00


---
description: Guidelines for integrating new features into the Task Master CLI
globs: scripts/modules/*.js
alwaysApply: false
---
# Task Master Feature Integration Guidelines
## Feature Placement Decision Process
- **Identify Feature Type**:
  - **Data Manipulation**: Features that create, read, update, or delete tasks belong in [`task-manager.js`](mdc:scripts/modules/task-manager.js)
  - **Dependency Management**: Features that handle task relationships belong in [`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js)
  - **User Interface**: Features that display information to users belong in [`ui.js`](mdc:scripts/modules/ui.js)
  - **AI Integration**: Features that use AI models belong in [`ai-services.js`](mdc:scripts/modules/ai-services.js)
  - **Cross-Cutting**: Features that don't fit one category may need components in multiple modules
- **Command-Line Interface**:
  - All new user-facing commands should be added to [`commands.js`](mdc:scripts/modules/commands.js)
  - Use consistent patterns for option naming and help text
  - Follow the Commander.js model for subcommand structure
## Implementation Pattern
The standard pattern for adding a feature follows this workflow:
1. **Core Logic**: Implement the business logic in the appropriate module
2. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js)
3. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js)
4. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
5. **Configuration**: Update any configuration in [`utils.js`](mdc:scripts/modules/utils.js) if needed
6. **Documentation**: Update help text and documentation in [dev_workflow.mdc](mdc:scripts/modules/dev_workflow.mdc)
```javascript
// 1. CORE LOGIC: Add function to appropriate module (example in task-manager.js)
/**
 * Archives completed tasks to archive.json
 * @param {string} tasksPath - Path to the tasks.json file
 * @param {string} archivePath - Path to the archive.json file
 * @returns {number} Number of tasks archived
 */
async function archiveTasks(tasksPath, archivePath = 'tasks/archive.json') {
  // Implementation...
  return archivedCount;
}

// Export from the module
export {
  // ... existing exports ...
  archiveTasks,
};
```
```javascript
// 2. UI COMPONENTS: Add display function to ui.js
/**
 * Display archive operation results
 * @param {string} archivePath - Path to the archive file
 * @param {number} count - Number of tasks archived
 */
function displayArchiveResults(archivePath, count) {
  console.log(boxen(
    chalk.green(`Successfully archived ${count} tasks to ${archivePath}`),
    { padding: 1, borderColor: 'green', borderStyle: 'round' }
  ));
}

// Export from the module
export {
  // ... existing exports ...
  displayArchiveResults,
};
```
```javascript
// 3. COMMAND INTEGRATION: Add to commands.js
import { archiveTasks } from './task-manager.js';
import { displayArchiveResults } from './ui.js';

// In registerCommands function
programInstance
  .command('archive')
  .description('Archive completed tasks to separate file')
  .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
  .option('-o, --output <file>', 'Archive output file', 'tasks/archive.json')
  .action(async (options) => {
    const tasksPath = options.file;
    const archivePath = options.output;

    console.log(chalk.blue(`Archiving completed tasks from ${tasksPath} to ${archivePath}...`));

    const archivedCount = await archiveTasks(tasksPath, archivePath);
    displayArchiveResults(archivePath, archivedCount);
  });
```
## Cross-Module Features
For features requiring components in multiple modules:
- ✅ **DO**: Create a clear unidirectional flow of dependencies
```javascript
// In task-manager.js
function analyzeTasksDifficulty(tasks) {
  // Implementation...
  return difficultyScores;
}

// In ui.js - depends on task-manager.js
import { analyzeTasksDifficulty } from './task-manager.js';

function displayDifficultyReport(tasks) {
  const scores = analyzeTasksDifficulty(tasks);
  // Render the scores...
}
```
- ❌ **DON'T**: Create circular dependencies between modules
```javascript
// In task-manager.js - depends on ui.js
import { displayDifficultyReport } from './ui.js';

function analyzeTasks() {
  // Implementation...
  displayDifficultyReport(tasks); // WRONG! Don't call UI functions from task-manager
}

// In ui.js - depends on task-manager.js
import { analyzeTasks } from './task-manager.js';
```
## Command-Line Interface Standards
- **Naming Conventions**:
  - Use kebab-case for command names (`analyze-complexity`, not `analyzeComplexity`)
  - Use camelCase for option names (`--outputFormat`, not `--output-format`)
  - Use the same option names across commands when they represent the same concept
- **Command Structure**:
```javascript
programInstance
  .command('command-name')
  .description('Clear, concise description of what the command does')
  .option('-s, --short-option <value>', 'Option description', 'default value')
  .option('--long-option <value>', 'Option description')
  .action(async (options) => {
    // Command implementation
  });
```
## Utility Function Guidelines
When adding utilities to [`utils.js`](mdc:scripts/modules/utils.js):
- Only add functions that could be used by multiple modules
- Keep utilities single-purpose and purely functional
- Document parameters and return values
```javascript
/**
 * Formats a duration in milliseconds to a human-readable string
 * @param {number} ms - Duration in milliseconds
 * @returns {string} Formatted duration string (e.g., "2h 30m 15s")
 */
function formatDuration(ms) {
  // Implementation...
  return formatted;
}
```
## Writing Testable Code
When implementing new features, follow these guidelines to ensure your code is testable:
- **Dependency Injection**
  - Design functions to accept dependencies as parameters
  - Avoid hard-coded dependencies that are difficult to mock
```javascript
// ✅ DO: Accept dependencies as parameters
function processTask(task, fileSystem, logger) {
  fileSystem.writeFile('task.json', JSON.stringify(task));
  logger.info('Task processed');
}

// ❌ DON'T: Use hard-coded dependencies
function processTask(task) {
  fs.writeFile('task.json', JSON.stringify(task));
  console.log('Task processed');
}
```
- **Separate Logic from Side Effects**
  - Keep pure logic separate from I/O operations or UI rendering
  - This allows testing the logic without mocking complex dependencies
```javascript
// ✅ DO: Separate logic from side effects
function calculateTaskPriority(task, dependencies) {
  // Pure logic that returns a value
  return computedPriority;
}

function displayTaskPriority(task, dependencies) {
  const priority = calculateTaskPriority(task, dependencies);
  console.log(`Task priority: ${priority}`);
}
```
- **Callback Functions and Testing**
  - When using callbacks (like in Commander.js commands), define them separately
  - This allows testing the callback logic independently
```javascript
// ✅ DO: Define callbacks separately for testing
function getVersionString() {
  // Logic to determine version
  return version;
}

// In setupCLI
programInstance.version(getVersionString);

// In tests
test('getVersionString returns correct version', () => {
  expect(getVersionString()).toBe('1.5.0');
});
```
- **UI Output Testing**
  - For UI components, focus on testing conditional logic rather than exact output
  - Use string pattern matching (like `expect(result).toContain('text')`)
  - Pay attention to emojis and formatting which can make exact string matching difficult
```javascript
// ✅ DO: Test the essence of the output, not exact formatting
test('statusFormatter shows done status correctly', () => {
  const result = formatStatus('done');
  expect(result).toContain('done');
  expect(result).toContain('✅');
});
```
## Testing Requirements
Every new feature **must** include comprehensive tests following the guidelines in [`tests.mdc`](mdc:.cursor/rules/tests.mdc). Testing should include:
1. **Unit Tests**: Test individual functions and components in isolation
```javascript
// Example unit test for a new utility function
describe('newFeatureUtil', () => {
  test('should perform expected operation with valid input', () => {
    expect(newFeatureUtil('valid input')).toBe('expected result');
  });

  test('should handle edge cases appropriately', () => {
    expect(newFeatureUtil('')).toBeNull();
  });
});
```
2. **Integration Tests**: Verify the feature works correctly with other components
```javascript
// Example integration test for a new command
describe('newCommand integration', () => {
  test('should call the correct service functions with parsed arguments', () => {
    const mockService = jest.fn().mockResolvedValue('success');
    // Set up test with mocked dependencies
    // Call the command handler
    // Verify service was called with expected arguments
  });
});
```
3. **Edge Cases**: Test boundary conditions and error handling
   - Invalid inputs
   - Missing dependencies
   - File system errors
   - API failures
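A hedged sketch of one such edge case test, reusing the dependency-injected `processTask` shape from the example above (the error message and task shape are illustrative):
```javascript
// Example edge case test: file system failures should surface, not be swallowed
test('processTask propagates file system errors', () => {
  const failingFs = {
    writeFile: jest.fn(() => { throw new Error('EACCES: permission denied'); })
  };
  const logger = { info: jest.fn(), error: jest.fn() };

  expect(() => processTask({ id: 1, title: 'Example task' }, failingFs, logger))
    .toThrow('EACCES');
});
```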
4. **Test Coverage**: Aim for at least 80% coverage for all new code
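One way to enforce this target is Jest's `coverageThreshold` option (a minimal sketch; adjust to match the project's actual Jest configuration):
```javascript
// jest.config.js - fail the test run if coverage drops below the 80% target
export default {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  }
};
```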
5. **Jest Mocking Best Practices**
   - Follow the mock-first-then-import pattern as described in [`tests.mdc`](mdc:.cursor/rules/tests.mdc)
   - Use jest.spyOn() to create spy functions for testing
   - Clear mocks between tests to prevent interference
   - See the Jest Module Mocking Best Practices section in [`tests.mdc`](mdc:.cursor/rules/tests.mdc) for details
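A minimal sketch of this pattern combined with `jest.spyOn()` and mock clearing (module paths are illustrative, and Babel-based hoisting of `jest.mock` is assumed; see [`tests.mdc`](mdc:.cursor/rules/tests.mdc) for the authoritative pattern):
```javascript
// Mock dependencies BEFORE importing the modules that use them
// (with a babel-jest setup, jest.mock calls are hoisted above the imports)
jest.mock('fs');

import fs from 'fs';
import { archiveTasks } from './task-manager.js'; // illustrative module path

describe('archiveTasks', () => {
  beforeEach(() => {
    // Clear mocks between tests to prevent interference
    jest.clearAllMocks();
  });

  test('checks for the tasks file before archiving', async () => {
    const existsSpy = jest.spyOn(fs, 'existsSync').mockReturnValue(true);
    await archiveTasks('tasks/tasks.json');
    expect(existsSpy).toHaveBeenCalledWith('tasks/tasks.json');
  });
});
```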
When submitting a new feature, always run the full test suite to ensure nothing was broken:
```bash
npm test
```
## Documentation Requirements
For each new feature:
1. Add help text to the command definition
2. Update [`dev_workflow.mdc`](mdc:scripts/modules/dev_workflow.mdc) with command reference
3. Add examples to the appropriate sections in [`MODULE_PLAN.md`](mdc:scripts/modules/MODULE_PLAN.md)
Follow the existing command reference format:
```markdown
- **Command Reference: your-command**
  - CLI Syntax: `task-master your-command [options]`
  - Description: Brief explanation of what the command does
  - Parameters:
    - `--option1=<value>`: Description of option1 (default: 'default')
    - `--option2=<value>`: Description of option2 (required)
  - Example: `task-master your-command --option1=value --option2=value2`
  - Notes: Additional details, limitations, or special considerations
```
For more information on module structure, see [`MODULE_PLAN.md`](mdc:scripts/modules/MODULE_PLAN.md) and follow [`self_improve.mdc`](mdc:scripts/modules/self_improve.mdc) for best practices on updating documentation.