feat: Enhance Task Master CLI with Testing Framework, Perplexity AI Integration, and Refactored Core Logic
This commit introduces significant enhancements and refactoring to the Task Master CLI, focusing on improved testing, integration with Perplexity AI for research-backed task updates, and core logic refactoring for better maintainability and functionality.
**Testing Infrastructure Setup:**
- Implemented Jest as the primary testing framework, setting up a comprehensive testing environment.
- Added new test scripts (`test`, `test:watch`, and `test:coverage`) for streamlined testing workflows.
- Integrated the necessary testing devDependencies, such as `jest` and `mock-fs`, to support unit, integration, and end-to-end testing.
**Dependency Updates:**
- Updated the package manifest and lockfile to reflect the latest dependency versions, ensuring project stability and access to the newest features and security patches.
- Upgraded key dependencies, including bumps to versions 0.9.16 and 4.89.0.
- Added a new dependency at version 2.3.0 and updated related dependencies to their latest versions.
**Perplexity AI Integration for Research-Backed Updates:**
- Introduced an option to leverage Perplexity AI for task updates, enabling research-backed enhancements to task details.
- Implemented logic to initialize a Perplexity AI client when the corresponding API key environment variable is set.
- Modified the task update function to accept a research flag, allowing dynamic selection between Perplexity AI and Claude AI for task updates based on API key availability and user preference.
- Enhanced response handling to parse Perplexity AI output and update tasks accordingly, including improved error handling and logging for robust operation.
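The provider-selection behavior described above can be sketched roughly as follows. The function name, option shape, and the `PERPLEXITY_API_KEY` variable name are illustrative assumptions, not the actual implementation:

```javascript
// Hypothetical sketch of research-backed provider selection: prefer Perplexity
// only when research was requested AND an API key is configured, else Claude.
function selectUpdateProvider({ useResearch, env = process.env }) {
  if (useResearch && env.PERPLEXITY_API_KEY) {
    return 'perplexity';
  }
  return 'claude'; // default path when no key is set or research was not requested
}

// Usage:
selectUpdateProvider({ useResearch: true, env: { PERPLEXITY_API_KEY: 'k' } }); // → 'perplexity'
selectUpdateProvider({ useResearch: true, env: {} });                          // → 'claude'
```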
**Core Logic Refactoring and Improvements:**
- Refactored the dependency-handling functions to operate on task IDs instead of dependency IDs, ensuring consistency and clarity in dependency management.
- Implemented a new validation function that rigorously checks for both circular dependencies and self-dependencies within tasks, improving task relationship integrity.
- Enhanced UI elements in the UI module:
- Refactored the status display to incorporate icons for the different task statuses and a color-mapping object, improving the visual representation of task status.
- Updated the complexity display to show colored complexity scores with emojis, providing a more intuitive and visually appealing representation of task complexity.
- Refactored the task data structure creation and validation process:
- Updated the task JSON schema to reflect a more streamlined and efficient task structure.
- Implemented Task Model Classes for better data modeling and type safety.
- Improved File System Operations for task data management.
- Developed robust Validation Functions and an Error Handling System to ensure data integrity and application stability.
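The status icon and color-map refactor above can be sketched as a small lookup table. The statuses, icons, and the `statusConfig` name are illustrative assumptions; plain ANSI escape codes stand in for `chalk` to keep the sketch dependency-free:

```javascript
// Hypothetical status → icon/color map; the real UI module's table may differ.
const statusConfig = {
  done:          { icon: '✅', color: '\x1b[32m' }, // green
  'in-progress': { icon: '🔄', color: '\x1b[33m' }, // yellow
  pending:       { icon: '⏱️', color: '\x1b[37m' }, // white
  blocked:       { icon: '❌', color: '\x1b[31m' }  // red
};

function getStatusWithColor(status) {
  const { icon, color } = statusConfig[status] ?? { icon: '❓', color: '' };
  return `${color}${icon} ${status}\x1b[0m`; // reset color at the end
}
```

Keeping the mapping in a single object means new statuses only need one entry, and tests can assert on the icon or status text rather than exact ANSI sequences.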
**Testing Guidelines Implementation:**
- Implemented guidelines for writing testable code when developing new features, promoting a test-driven development approach.
- Added testing requirements and best practices for unit, integration, and edge case testing to ensure comprehensive test coverage.
- Updated the development workflow to mandate writing tests before proceeding with configuration and documentation updates, reinforcing the importance of testing throughout the development lifecycle.
This commit collectively enhances the Task Master CLI's reliability, functionality, and developer experience through improved testing practices, AI-powered research capabilities, and a more robust and maintainable codebase.
.cursor/rules/architecture.mdc (new file, +152 lines)
@@ -0,0 +1,152 @@
---
description: Describes the high-level architecture of the Task Master CLI application.
globs: scripts/modules/*.js
alwaysApply: false
---

# Application Architecture Overview
- **Modular Structure**: The Task Master CLI is built using a modular architecture, with distinct modules responsible for different aspects of the application. This promotes separation of concerns, maintainability, and testability.

- **Main Modules and Responsibilities**:

  - **[`commands.js`](mdc:scripts/modules/commands.js): Command Handling**
    - **Purpose**: Defines and registers all CLI commands using Commander.js.
    - **Responsibilities**:
      - Parses command-line arguments and options.
      - Invokes appropriate functions from other modules to execute commands.
      - Handles user input and output related to command execution.
      - Implements input validation and error handling for CLI commands.
    - **Key Components**:
      - `programInstance` (Commander.js `Command` instance): Manages command definitions.
      - `registerCommands(programInstance)`: Function to register all application commands.
      - Command action handlers: Functions executed when a specific command is invoked.

  - **[`task-manager.js`](mdc:scripts/modules/task-manager.js): Task Data Management**
    - **Purpose**: Manages task data, including loading, saving, creating, updating, deleting, and querying tasks.
    - **Responsibilities**:
      - Reads and writes task data to the `tasks.json` file.
      - Implements functions for task CRUD operations (Create, Read, Update, Delete).
      - Handles task parsing from PRD documents using AI.
      - Manages task expansion and subtask generation.
      - Updates task statuses and properties.
      - Implements task listing and display logic.
      - Performs task complexity analysis using AI.
    - **Key Functions**:
      - `readTasks(tasksPath)` / `writeTasks(tasksPath, tasksData)`: Load and save task data.
      - `parsePRD(prdFilePath, outputPath, numTasks)`: Parses a PRD document to create tasks.
      - `expandTask(taskId, numSubtasks, useResearch, prompt, force)`: Expands a task into subtasks.
      - `setTaskStatus(tasksPath, taskIdInput, newStatus)`: Updates task status.
      - `listTasks(tasksPath, statusFilter, withSubtasks)`: Lists tasks with filtering and subtask display options.
      - `analyzeComplexity(tasksPath, reportPath, useResearch, thresholdScore)`: Analyzes task complexity.

  - **[`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js): Dependency Management**
    - **Purpose**: Manages task dependencies, including adding, removing, validating, and fixing dependency relationships.
    - **Responsibilities**:
      - Adds and removes task dependencies.
      - Validates dependency relationships to prevent circular dependencies and invalid references.
      - Fixes invalid dependencies by removing non-existent or self-referential dependencies.
      - Provides functions to check for circular dependencies.
    - **Key Functions**:
      - `addDependency(tasksPath, taskId, dependencyId)`: Adds a dependency between tasks.
      - `removeDependency(tasksPath, taskId, dependencyId)`: Removes a dependency.
      - `validateDependencies(tasksPath)`: Validates task dependencies.
      - `fixDependencies(tasksPath)`: Fixes invalid task dependencies.
      - `isCircularDependency(tasks, taskId, dependencyChain)`: Detects circular dependencies.
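The `isCircularDependency(tasks, taskId, dependencyChain)` helper listed above could be implemented along these lines; the signature comes from the list, while the body is an assumed sketch of the recursive walk:

```javascript
// Sketch: walk taskId's dependency graph and report a cycle whenever we reach
// a task already on the current dependency chain (this covers self-deps too).
function isCircularDependency(tasks, taskId, dependencyChain = []) {
  if (dependencyChain.includes(taskId)) {
    return true; // revisiting a task on the current path means a cycle
  }
  const task = tasks.find((t) => t.id === taskId);
  if (!task || !Array.isArray(task.dependencies)) {
    return false; // unknown task or no dependencies: no cycle from here
  }
  return task.dependencies.some((depId) =>
    isCircularDependency(tasks, depId, [...dependencyChain, taskId])
  );
}

// Usage: 1 → 2 → 3 → 1 is cyclic; 4 has no dependencies.
const tasks = [
  { id: 1, dependencies: [2] },
  { id: 2, dependencies: [3] },
  { id: 3, dependencies: [1] },
  { id: 4, dependencies: [] }
];
isCircularDependency(tasks, 1); // → true
isCircularDependency(tasks, 4); // → false
```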
  - **[`ui.js`](mdc:scripts/modules/ui.js): User Interface Components**
    - **Purpose**: Handles all user interface elements, including displaying information, formatting output, and providing user feedback.
    - **Responsibilities**:
      - Displays task lists, task details, and command outputs in a formatted way.
      - Uses `chalk` for colored output and `boxen` for boxed messages.
      - Implements table display using `cli-table3`.
      - Shows loading indicators using `ora`.
      - Provides helper functions for status formatting, dependency display, and progress reporting.
      - Suggests next actions to the user after command execution.
    - **Key Functions**:
      - `displayTaskList(tasks, statusFilter, withSubtasks)`: Displays a list of tasks in a table.
      - `displayTaskDetails(task)`: Displays detailed information for a single task.
      - `displayComplexityReport(reportPath)`: Displays the task complexity report.
      - `startLoadingIndicator(message)` / `stopLoadingIndicator(indicator)`: Manages loading indicators.
      - `getStatusWithColor(status)`: Returns a status string with color formatting.
      - `formatDependenciesWithStatus(dependencies, allTasks, inTable)`: Formats a dependency list with status indicators.

  - **[`ai-services.js`](mdc:scripts/modules/ai-services.js) (Conceptual): AI Integration**
    - **Purpose**: Abstracts interactions with AI models (like Anthropic Claude and Perplexity AI) for various features. *Note: This module might be implicitly implemented within `task-manager.js` and `utils.js` or could be explicitly created for better organization as the project evolves.*
    - **Responsibilities**:
      - Handles API calls to AI services.
      - Manages prompts and parameters for AI requests.
      - Parses AI responses and extracts relevant information.
      - Implements logic for task complexity analysis, task expansion, and PRD parsing using AI.
    - **Potential Functions**:
      - `getAIResponse(prompt, model, maxTokens, temperature)`: Generic function to interact with an AI model.
      - `analyzeTaskComplexityWithAI(taskDescription)`: Sends a task description to AI for complexity analysis.
      - `expandTaskWithAI(taskDescription, numSubtasks, researchContext)`: Generates subtasks using AI.
      - `parsePRDWithAI(prdContent)`: Extracts tasks from PRD content using AI.

  - **[`utils.js`](mdc:scripts/modules/utils.js): Utility Functions and Configuration**
    - **Purpose**: Provides reusable utility functions and global configuration settings used across the application.
    - **Responsibilities**:
      - Manages global configuration settings loaded from environment variables and defaults.
      - Implements a logging utility with different log levels and output formatting.
      - Provides file system operation utilities (read/write JSON files).
      - Includes string manipulation utilities (e.g., `truncate`, `sanitizePrompt`).
      - Offers task-specific utility functions (e.g., `formatTaskId`, `findTaskById`, `taskExists`).
      - Implements graph algorithms like cycle detection for dependency management.
    - **Key Components**:
      - `CONFIG`: Global configuration object.
      - `log(level, ...args)`: Logging function.
      - `readJSON(filepath)` / `writeJSON(filepath, data)`: File I/O utilities for JSON files.
      - `truncate(text, maxLength)`: String truncation utility.
      - `formatTaskId(id)` / `findTaskById(tasks, taskId)`: Task ID and search utilities.
      - `findCycles(subtaskId, dependencyMap)`: Cycle detection algorithm.
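The task ID utilities named above might look roughly like this; the dotted `"parent.subtask"` ID convention reflects the CLI's subtask model, but the bodies are assumed sketches:

```javascript
// Sketch of formatTaskId / findTaskById; the real implementations may differ.
function formatTaskId(id) {
  // Normalize numeric ids to strings so "1" and 1 compare equally.
  return typeof id === 'string' ? id : String(id);
}

function findTaskById(tasks, taskId) {
  const idStr = formatTaskId(taskId);
  if (idStr.includes('.')) {
    // Dotted ids like "3.1" address subtask 1 of task 3.
    const [parentId, subtaskId] = idStr.split('.');
    const parent = tasks.find((t) => String(t.id) === parentId);
    return parent?.subtasks?.find((s) => String(s.id) === subtaskId) ?? null;
  }
  return tasks.find((t) => String(t.id) === idStr) ?? null;
}
```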
- **Data Flow and Module Dependencies**:

  - **Commands Initiate Actions**: User commands entered via the CLI (handled by [`commands.js`](mdc:scripts/modules/commands.js)) are the entry points for most operations.
  - **Command Handlers Delegate to Managers**: Command handlers in [`commands.js`](mdc:scripts/modules/commands.js) call functions in [`task-manager.js`](mdc:scripts/modules/task-manager.js) and [`dependency-manager.js`](mdc:scripts/modules/dependency-manager.js) to perform core task and dependency management logic.
  - **UI for Presentation**: [`ui.js`](mdc:scripts/modules/ui.js) is used by command handlers and task/dependency managers to display information to the user. UI functions primarily consume data and format it for output, without modifying core application state.
  - **Utilities for Common Tasks**: [`utils.js`](mdc:scripts/modules/utils.js) provides helper functions used by all other modules for configuration, logging, file operations, and common data manipulations.
  - **AI Services Integration**: AI functionalities (complexity analysis, task expansion, PRD parsing) are invoked from [`task-manager.js`](mdc:scripts/modules/task-manager.js) and potentially [`commands.js`](mdc:scripts/modules/commands.js), likely using functions that would reside in a dedicated `ai-services.js` module or be integrated within `utils.js` or `task-manager.js`.

- **Testing Architecture**:

  - **Test Organization Structure**:
    - **Unit Tests**: Located in `tests/unit/`, reflect the module structure with one test file per module
    - **Integration Tests**: Located in `tests/integration/`, test interactions between modules
    - **End-to-End Tests**: Located in `tests/e2e/`, test complete workflows from a user perspective
    - **Test Fixtures**: Located in `tests/fixtures/`, provide reusable test data

  - **Module Design for Testability**:
    - **Explicit Dependencies**: Functions accept their dependencies as parameters rather than using globals
    - **Functional Style**: Pure functions with minimal side effects make testing deterministic
    - **Separate Logic from I/O**: Core business logic is separated from file system operations
    - **Clear Module Interfaces**: Each module has well-defined exports that can be mocked in tests
    - **Callback Isolation**: Callbacks are defined as separate functions for easier testing
    - **Stateless Design**: Modules avoid maintaining internal state where possible

  - **Mock Integration Patterns**:
    - **External Libraries**: Libraries like `fs`, `commander`, and `@anthropic-ai/sdk` are mocked at module level
    - **Internal Modules**: Application modules are mocked with appropriate spy functions
    - **Testing Function Callbacks**: Callbacks are extracted from mock call arguments and tested in isolation
    - **UI Elements**: Output functions from `ui.js` are mocked to verify display calls

  - **Testing Flow**:
    - Module dependencies are mocked (following Jest's hoisting behavior)
    - Test modules are imported after mocks are established
    - Spy functions are set up on module methods
    - Tests call the functions under test and verify behavior
    - Mocks are reset between test cases to maintain isolation

- **Benefits of this Architecture**:

  - **Maintainability**: Modules are self-contained and focused, making it easier to understand, modify, and debug specific features.
  - **Testability**: Each module can be tested in isolation (unit testing), and interactions between modules can be tested (integration testing).
    - **Mocking Support**: The clear dependency boundaries make mocking straightforward
    - **Test Isolation**: Each component can be tested without affecting others
    - **Callback Testing**: Function callbacks can be extracted and tested independently
  - **Reusability**: Utility functions and UI components can be reused across different parts of the application.
  - **Scalability**: New features can be added as new modules or by extending existing ones without significantly impacting other parts of the application.
  - **Clarity**: The modular structure provides a clear separation of concerns, making the codebase easier to navigate and understand for developers.

This architectural overview should help AI models understand the structure and organization of the Task Master CLI codebase, enabling them to more effectively assist with code generation, modification, and understanding.
@@ -27,8 +27,9 @@ The standard pattern for adding a feature follows this workflow:

1. **Core Logic**: Implement the business logic in the appropriate module
2. **UI Components**: Add any display functions to [`ui.js`](mdc:scripts/modules/ui.js)
3. **Command Integration**: Add the CLI command to [`commands.js`](mdc:scripts/modules/commands.js)
4. **Testing**: Write tests for all components of the feature (following [`tests.mdc`](mdc:.cursor/rules/tests.mdc))
5. **Configuration**: Update any configuration in [`utils.js`](mdc:scripts/modules/utils.js) if needed
6. **Documentation**: Update help text and documentation in [dev_workflow.mdc](mdc:scripts/modules/dev_workflow.mdc)

```javascript
// 1. CORE LOGIC: Add function to appropriate module (example in task-manager.js)
```
@@ -167,26 +168,125 @@ function formatDuration(ms) {

## Writing Testable Code

When implementing new features, follow these guidelines to ensure your code is testable:

- **Dependency Injection**
  - Design functions to accept dependencies as parameters
  - Avoid hard-coded dependencies that are difficult to mock
  ```javascript
  // ✅ DO: Accept dependencies as parameters
  function processTask(task, fileSystem, logger) {
    fileSystem.writeFile('task.json', JSON.stringify(task));
    logger.info('Task processed');
  }

  // ❌ DON'T: Use hard-coded dependencies
  function processTask(task) {
    fs.writeFile('task.json', JSON.stringify(task));
    console.log('Task processed');
  }
  ```

- **Separate Logic from Side Effects**
  - Keep pure logic separate from I/O operations or UI rendering
  - This allows testing the logic without mocking complex dependencies
  ```javascript
  // ✅ DO: Separate logic from side effects
  function calculateTaskPriority(task, dependencies) {
    // Pure logic that returns a value
    return computedPriority;
  }

  function displayTaskPriority(task, dependencies) {
    const priority = calculateTaskPriority(task, dependencies);
    console.log(`Task priority: ${priority}`);
  }
  ```

- **Callback Functions and Testing**
  - When using callbacks (like in Commander.js commands), define them separately
  - This allows testing the callback logic independently
  ```javascript
  // ✅ DO: Define callbacks separately for testing
  function getVersionString() {
    // Logic to determine version
    return version;
  }

  // In setupCLI
  programInstance.version(getVersionString);

  // In tests
  test('getVersionString returns correct version', () => {
    expect(getVersionString()).toBe('1.5.0');
  });
  ```

- **UI Output Testing**
  - For UI components, focus on testing conditional logic rather than exact output
  - Use string pattern matching (like `expect(result).toContain('text')`)
  - Pay attention to emojis and formatting which can make exact string matching difficult
  ```javascript
  // ✅ DO: Test the essence of the output, not exact formatting
  test('statusFormatter shows done status correctly', () => {
    const result = formatStatus('done');
    expect(result).toContain('done');
    expect(result).toContain('✅');
  });
  ```

## Testing Requirements

Every new feature **must** include comprehensive tests following the guidelines in [`tests.mdc`](mdc:.cursor/rules/tests.mdc). Testing should include:

1. **Unit Tests**: Test individual functions and components in isolation
   ```javascript
   // Example unit test for a new utility function
   describe('newFeatureUtil', () => {
     test('should perform expected operation with valid input', () => {
       expect(newFeatureUtil('valid input')).toBe('expected result');
     });

     test('should handle edge cases appropriately', () => {
       expect(newFeatureUtil('')).toBeNull();
     });
   });
   ```

2. **Integration Tests**: Verify the feature works correctly with other components
   ```javascript
   // Example integration test for a new command
   describe('newCommand integration', () => {
     test('should call the correct service functions with parsed arguments', () => {
       const mockService = jest.fn().mockResolvedValue('success');
       // Set up test with mocked dependencies
       // Call the command handler
       // Verify service was called with expected arguments
     });
   });
   ```

3. **Edge Cases**: Test boundary conditions and error handling
   - Invalid inputs
   - Missing dependencies
   - File system errors
   - API failures

4. **Test Coverage**: Aim for at least 80% coverage for all new code

5. **Jest Mocking Best Practices**
   - Follow the mock-first-then-import pattern as described in [`tests.mdc`](mdc:.cursor/rules/tests.mdc)
   - Use `jest.spyOn()` to create spy functions for testing
   - Clear mocks between tests to prevent interference
   - See the Jest Module Mocking Best Practices section in [`tests.mdc`](mdc:.cursor/rules/tests.mdc) for details

When submitting a new feature, always run the full test suite to ensure nothing was broken:

```bash
npm test
```

## Documentation Requirements
.cursor/rules/tests.mdc (new file, +285 lines)
@@ -0,0 +1,285 @@
---
description: Guidelines for implementing and maintaining tests for Task Master CLI
globs: "**/*.test.js,tests/**/*"
---

# Testing Guidelines for Task Master CLI

## Test Organization Structure

- **Unit Tests**
  - Located in `tests/unit/`
  - Test individual functions and utilities in isolation
  - Mock all external dependencies
  - Keep tests small, focused, and fast
  - Example naming: `utils.test.js`, `task-manager.test.js`

- **Integration Tests**
  - Located in `tests/integration/`
  - Test interactions between modules
  - Focus on component interfaces rather than implementation details
  - Use more realistic but still controlled test environments
  - Example naming: `task-workflow.test.js`, `command-integration.test.js`

- **End-to-End Tests**
  - Located in `tests/e2e/`
  - Test complete workflows from a user perspective
  - Focus on CLI commands as they would be used by users
  - Example naming: `create-task.e2e.test.js`, `expand-task.e2e.test.js`

- **Test Fixtures**
  - Located in `tests/fixtures/`
  - Provide reusable test data
  - Keep fixtures small and representative
  - Export fixtures as named exports for reuse

## Test File Organization

```javascript
// 1. Imports
import { jest } from '@jest/globals';

// 2. Mock setup (MUST come before importing the modules under test)
jest.mock('fs');
jest.mock('@anthropic-ai/sdk');
jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    projectVersion: '1.5.0'
  },
  log: jest.fn()
}));

// 3. Import modules AFTER all mocks are defined
import { functionToTest } from '../../scripts/modules/module-name.js';
import { testFixture } from '../fixtures/fixture-name.js';
import fs from 'fs';

// 4. Set up spies on mocked modules (if needed)
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');

// 5. Test suite with descriptive name
describe('Feature or Function Name', () => {
  // 6. Setup and teardown (if needed)
  beforeEach(() => {
    jest.clearAllMocks();
    // Additional setup code
  });

  afterEach(() => {
    // Cleanup code
  });

  // 7. Grouped tests for related functionality
  describe('specific functionality', () => {
    // 8. Individual test cases with clear descriptions
    test('should behave in expected way when given specific input', () => {
      // Arrange - set up test data
      const input = testFixture.sampleInput;
      mockReadFileSync.mockReturnValue('mocked content');

      // Act - call the function being tested
      const result = functionToTest(input);

      // Assert - verify the result
      expect(result).toBe(expectedOutput);
      expect(mockReadFileSync).toHaveBeenCalledWith(expect.stringContaining('path'));
    });
  });
});
```

## Jest Module Mocking Best Practices

- **Mock Hoisting Behavior**
  - Jest hoists `jest.mock()` calls to the top of the file, even above imports
  - Always declare mocks before importing the modules being tested
  - Use the factory pattern for complex mocks that need access to other variables

  ```javascript
  // ✅ DO: Place mocks before imports
  jest.mock('commander');
  import { program } from 'commander';

  // ❌ DON'T: Define variables and then try to use them in mocks
  const mockFn = jest.fn();
  jest.mock('module', () => ({
    func: mockFn // This won't work due to hoisting!
  }));
  ```

- **Mocking Modules with Function References**
  - Use `jest.spyOn()` after imports to create spies on mock functions
  - Reference these spies in test assertions

  ```javascript
  // Mock the module first
  jest.mock('fs');

  // Import the mocked module
  import fs from 'fs';

  // Create spies on the mock functions
  const mockExistsSync = jest.spyOn(fs, 'existsSync').mockReturnValue(true);

  test('should call existsSync', () => {
    // Call function that uses fs.existsSync
    const result = functionUnderTest();

    // Verify the mock was called correctly
    expect(mockExistsSync).toHaveBeenCalled();
  });
  ```

- **Testing Functions with Callbacks**
  - Get the callback from your mock's call arguments
  - Execute it directly with test inputs
  - Verify the results match expectations

  ```javascript
  jest.mock('commander');
  import { program } from 'commander';
  import { setupCLI } from '../../scripts/modules/commands.js';

  const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program);

  test('version callback should return correct version', () => {
    // Call the function that registers the callback
    setupCLI();

    // Extract the callback function
    const versionCallback = mockVersion.mock.calls[0][0];
    expect(typeof versionCallback).toBe('function');

    // Execute the callback and verify results
    const result = versionCallback();
    expect(result).toBe('1.5.0');
  });
  ```

## Mocking Guidelines

- **File System Operations**
  ```javascript
  import mockFs from 'mock-fs';

  beforeEach(() => {
    mockFs({
      'tasks': {
        'tasks.json': JSON.stringify({
          meta: { projectName: 'Test Project' },
          tasks: []
        })
      }
    });
  });

  afterEach(() => {
    mockFs.restore();
  });
  ```

- **API Calls (Anthropic/Claude)**
  ```javascript
  import { Anthropic } from '@anthropic-ai/sdk';

  jest.mock('@anthropic-ai/sdk');

  beforeEach(() => {
    Anthropic.mockImplementation(() => ({
      messages: {
        create: jest.fn().mockResolvedValue({
          content: [{ text: 'Mocked response' }]
        })
      }
    }));
  });
  ```

- **Environment Variables**
  ```javascript
  const originalEnv = process.env;

  beforeEach(() => {
    jest.resetModules();
    process.env = { ...originalEnv };
    process.env.MODEL = 'test-model';
  });

  afterEach(() => {
    process.env = originalEnv;
  });
  ```

## Testing Common Components

- **CLI Commands**
  - Mock the action handlers and verify they're called with correct arguments
  - Test command registration and option parsing
  - Use `commander` test utilities or custom mocks

- **Task Operations**
  - Use sample task fixtures for consistent test data
  - Mock file system operations
  - Test both success and error paths

- **UI Functions**
  - Mock console output and verify correct formatting
  - Test conditional output logic
  - When testing strings with emojis or formatting, use `toContain()` or `toMatch()` rather than exact `toBe()` comparisons
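Following the task-operation guidance above (fixtures, success and error paths, no shared state), a minimal self-contained example might look like this; `setStatus` and the fixture shape are illustrative stand-ins, shown without Jest so the sketch runs on its own:

```javascript
// Small inline fixture; in the real suite this would live in tests/fixtures/.
const sampleTasks = [
  { id: 1, title: 'Setup project', status: 'done' },
  { id: 2, title: 'Write tests', status: 'pending' }
];

// Hypothetical operation under test: returns a new array, never mutates input,
// which keeps the fixture reusable across test cases.
function setStatus(tasks, taskId, newStatus) {
  const task = tasks.find((t) => t.id === taskId);
  if (!task) throw new Error(`Task ${taskId} not found`);
  return tasks.map((t) => (t.id === taskId ? { ...t, status: newStatus } : t));
}

// Success path: status updated, fixture untouched.
const updated = setStatus(sampleTasks, 2, 'done');

// Error path: unknown ids throw instead of failing silently.
let threw = false;
try {
  setStatus(sampleTasks, 99, 'done');
} catch {
  threw = true;
}
```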
## Test Quality Guidelines

- ✅ **DO**: Write tests before implementing features (TDD approach when possible)
- ✅ **DO**: Test edge cases and error conditions, not just happy paths
- ✅ **DO**: Keep tests independent and isolated from each other
- ✅ **DO**: Use descriptive test names that explain the expected behavior
- ✅ **DO**: Maintain test fixtures separate from test logic
- ✅ **DO**: Aim for 80%+ code coverage, with critical paths at 100%
- ✅ **DO**: Follow the mock-first-then-import pattern for all Jest mocks

- ❌ **DON'T**: Test implementation details that might change
- ❌ **DON'T**: Write brittle tests that depend on specific output formatting
- ❌ **DON'T**: Skip testing error handling and validation
- ❌ **DON'T**: Duplicate test fixtures across multiple test files
- ❌ **DON'T**: Write tests that depend on execution order
- ❌ **DON'T**: Define mock variables before `jest.mock()` calls (they won't be accessible due to hoisting)

## Running Tests

```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage reporting
npm run test:coverage

# Run a specific test file
npm test -- tests/unit/specific-file.test.js

# Run tests matching a pattern
npm test -- -t "pattern to match"
```

## Troubleshooting Test Issues

- **Mock Functions Not Called**
  - Ensure mocks are defined before imports (Jest hoists `jest.mock()` calls)
  - Check that you're referencing the correct mock instance
  - Verify the import paths match exactly

- **Unexpected Mock Behavior**
  - Clear mocks between tests with `jest.clearAllMocks()` in `beforeEach`
  - Check mock implementation for conditional behavior
  - Ensure mock return values are correctly configured for each test

- **Tests Affecting Each Other**
  - Isolate tests by properly mocking shared resources
  - Reset state in `beforeEach` and `afterEach` hooks
  - Avoid global state modifications

See [tests/README.md](mdc:tests/README.md) for more details on the testing approach.

Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options.
jest.config.js (new file, 55 lines)
@@ -0,0 +1,55 @@
```js
export default {
  // Use Node.js environment for testing
  testEnvironment: 'node',

  // Automatically clear mock calls between every test
  clearMocks: true,

  // Indicates whether the coverage information should be collected while executing the test
  collectCoverage: false,

  // The directory where Jest should output its coverage files
  coverageDirectory: 'coverage',

  // A list of paths to directories that Jest should use to search for files in
  roots: ['<rootDir>/tests'],

  // The glob patterns Jest uses to detect test files
  testMatch: [
    '**/__tests__/**/*.js',
    '**/?(*.)+(spec|test).js'
  ],

  // Transform files
  transform: {},

  // Disable transformations for node_modules
  transformIgnorePatterns: ['/node_modules/'],

  // Set moduleNameMapper for absolute paths
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/$1'
  },

  // Setup module aliases
  moduleDirectories: ['node_modules', '<rootDir>'],

  // Configure test coverage thresholds
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  },

  // Generate coverage report in these formats
  coverageReporters: ['text', 'lcov'],

  // Verbose output
  verbose: true,

  // Setup file
  setupFilesAfterEnv: ['<rootDir>/tests/setup.js']
};
```
package-lock.json (generated, 3911 lines): file diff suppressed because it is too large

package.json (15 lines)
```diff
@@ -9,7 +9,9 @@
     "task-master-init": "./bin/task-master-init.js"
   },
   "scripts": {
-    "test": "echo \"Error: no test specified\" && exit 1",
+    "test": "node --experimental-vm-modules node_modules/.bin/jest",
+    "test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
+    "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
     "prepare-package": "node scripts/prepare-package.js",
     "prepublishOnly": "npm run prepare-package",
     "prepare": "chmod +x bin/task-master.js bin/task-master-init.js"
@@ -35,7 +37,7 @@
     "dotenv": "^16.3.1",
     "figlet": "^1.8.0",
     "gradient-string": "^3.0.0",
-    "openai": "^4.86.1",
+    "openai": "^4.89.0",
     "ora": "^8.2.0"
   },
   "engines": {
@@ -62,5 +64,12 @@
   "overrides": {
     "node-fetch": "^3.3.2",
     "whatwg-url": "^11.0.0"
-  }
+  },
+  "devDependencies": {
+    "@types/jest": "^29.5.14",
+    "jest": "^29.7.0",
+    "jest-environment-node": "^29.7.0",
+    "mock-fs": "^5.5.0",
+    "supertest": "^7.1.0"
+  }
 }
```
```diff
@@ -72,10 +72,12 @@ function registerCommands(programInstance) {
     .option('-f, --file <file>', 'Path to the tasks file', 'tasks/tasks.json')
     .option('--from <id>', 'Task ID to start updating from (tasks with ID >= this value will be updated)', '1')
     .option('-p, --prompt <text>', 'Prompt explaining the changes or new context (required)')
+    .option('-r, --research', 'Use Perplexity AI for research-backed task updates')
     .action(async (options) => {
       const tasksPath = options.file;
       const fromId = parseInt(options.from, 10);
       const prompt = options.prompt;
+      const useResearch = options.research || false;

       if (!prompt) {
         console.error(chalk.red('Error: --prompt parameter is required. Please provide information about the changes.'));
@@ -85,7 +87,11 @@ function registerCommands(programInstance) {
       console.log(chalk.blue(`Updating tasks from ID >= ${fromId} with prompt: "${prompt}"`));
       console.log(chalk.blue(`Tasks file: ${tasksPath}`));

-      await updateTasks(tasksPath, fromId, prompt);
+      if (useResearch) {
+        console.log(chalk.blue('Using Perplexity AI for research-backed task updates'));
+      }
+
+      await updateTasks(tasksPath, fromId, prompt, useResearch);
     });

     // generate command
```
@@ -255,261 +255,151 @@ async function addDependency(tasksPath, taskId, dependencyId) {

/**
 * Check if adding a dependency would create a circular dependency
 * @param {Array} tasks - All tasks
 * @param {number|string} dependencyId - ID of the dependency being added
 * @param {Array} chain - Current dependency chain being checked
 * @returns {boolean} - True if circular dependency would be created, false otherwise
 * @param {Array} tasks - Array of all tasks
 * @param {number|string} taskId - ID of task to check
 * @param {Array} chain - Chain of dependencies to check
 * @returns {boolean} True if circular dependency would be created
 */
function isCircularDependency(tasks, dependencyId, chain = []) {
  // Convert chain elements and dependencyId to strings for consistent comparison
  const chainStrs = chain.map(id => String(id));
  const depIdStr = String(dependencyId);
function isCircularDependency(tasks, taskId, chain = []) {
  // Convert taskId to string for comparison
  const taskIdStr = String(taskId);

  // If the dependency is already in the chain, it would create a circular dependency
  if (chainStrs.includes(depIdStr)) {
    log('error', `Circular dependency detected: ${chainStrs.join(' -> ')} -> ${depIdStr}`);
  // If we've seen this task before in the chain, we have a circular dependency
  if (chain.some(id => String(id) === taskIdStr)) {
    return true;
  }

  // Check if this is a subtask dependency (e.g., "1.2")
  const isSubtask = depIdStr.includes('.');

  // Find the task or subtask by ID
  let dependencyTask = null;
  let dependencySubtask = null;

  if (isSubtask) {
    // Parse parent and subtask IDs
    const [parentId, subtaskId] = depIdStr.split('.').map(id => isNaN(id) ? id : Number(id));
    const parentTask = tasks.find(t => t.id === parentId);

    if (parentTask && parentTask.subtasks) {
      dependencySubtask = parentTask.subtasks.find(s => s.id === Number(subtaskId));
      // For a subtask, we need to check dependencies of both the subtask and its parent
      if (dependencySubtask && dependencySubtask.dependencies && dependencySubtask.dependencies.length > 0) {
        // Recursively check each of the subtask's dependencies
        const newChain = [...chainStrs, depIdStr];
        const hasCircular = dependencySubtask.dependencies.some(depId => {
          // Handle relative subtask references (e.g., numeric IDs referring to subtasks in the same parent task)
          const normalizedDepId = typeof depId === 'number' && depId < 100
            ? `${parentId}.${depId}`
            : depId;
          return isCircularDependency(tasks, normalizedDepId, newChain);
        });

        if (hasCircular) return true;
      }

      // Also check if parent task has dependencies that could create a cycle
      if (parentTask.dependencies && parentTask.dependencies.length > 0) {
        // If any of the parent's dependencies create a cycle, return true
        const newChain = [...chainStrs, depIdStr];
        if (parentTask.dependencies.some(depId => isCircularDependency(tasks, depId, newChain))) {
          return true;
        }
      }

      return false;
    }
  } else {
    // Regular task (not a subtask)
    const depId = isNaN(dependencyId) ? dependencyId : Number(dependencyId);
    dependencyTask = tasks.find(t => t.id === depId);

    // If task not found or has no dependencies, there's no circular dependency
    if (!dependencyTask || !dependencyTask.dependencies || dependencyTask.dependencies.length === 0) {
      return false;
    }

    // Recursively check each of the dependency's dependencies
    const newChain = [...chainStrs, depIdStr];
    if (dependencyTask.dependencies.some(depId => isCircularDependency(tasks, depId, newChain))) {
      return true;
    }

    // Also check for cycles through subtasks of this task
    if (dependencyTask.subtasks && dependencyTask.subtasks.length > 0) {
      for (const subtask of dependencyTask.subtasks) {
        if (subtask.dependencies && subtask.dependencies.length > 0) {
          // Check if any of this subtask's dependencies create a cycle
          const subtaskId = `${dependencyTask.id}.${subtask.id}`;
          const newSubtaskChain = [...chainStrs, depIdStr, subtaskId];

          for (const subDepId of subtask.dependencies) {
            // Handle relative subtask references
            const normalizedDepId = typeof subDepId === 'number' && subDepId < 100
              ? `${dependencyTask.id}.${subDepId}`
              : subDepId;

            if (isCircularDependency(tasks, normalizedDepId, newSubtaskChain)) {
              return true;
            }
          }
        }
      }
    }
  }
  // Find the task
  const task = tasks.find(t => String(t.id) === taskIdStr);
  if (!task) {
    return false; // Task doesn't exist, can't create circular dependency
  }

  return false;
  // No dependencies, can't create circular dependency
  if (!task.dependencies || task.dependencies.length === 0) {
    return false;
  }

  // Check each dependency recursively
  const newChain = [...chain, taskId];
  return task.dependencies.some(depId => isCircularDependency(tasks, depId, newChain));
}
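Stripped of logging, the rewritten cycle check is a short recursive walk over the dependency chain. This standalone sketch (with invented sample data) shows both outcomes:

```javascript
// Standalone version of the rewritten isCircularDependency (log calls removed).
function isCircularDependency(tasks, taskId, chain = []) {
  const taskIdStr = String(taskId);
  // Seeing a task twice along one dependency chain means a cycle
  if (chain.some(id => String(id) === taskIdStr)) {
    return true;
  }
  const task = tasks.find(t => String(t.id) === taskIdStr);
  if (!task || !task.dependencies || task.dependencies.length === 0) {
    return false; // missing task or no dependencies: no cycle possible
  }
  const newChain = [...chain, taskId];
  return task.dependencies.some(depId => isCircularDependency(tasks, depId, newChain));
}

const cyclic = [
  { id: 1, dependencies: [2] },
  { id: 2, dependencies: [1] } // 1 -> 2 -> 1
];
const acyclic = [
  { id: 1, dependencies: [2] },
  { id: 2, dependencies: [] }
];

const hasCycle = isCircularDependency(cyclic, 1);
const noCycle = isCircularDependency(acyclic, 1);
```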

/**
 * Validate and clean up task dependencies to ensure they only reference existing tasks
 * @param {Array} tasks - Array of tasks to validate
 * @param {string} tasksPath - Optional path to tasks.json to save changes
 * @returns {boolean} - True if any changes were made to dependencies
 * Validate task dependencies
 * @param {Array} tasks - Array of all tasks
 * @returns {Object} Validation result with valid flag and issues array
 */
function validateTaskDependencies(tasks, tasksPath = null) {
  // Create a set of valid task IDs for fast lookup
  const validTaskIds = new Set(tasks.map(t => t.id));
function validateTaskDependencies(tasks) {
  const issues = [];

  // Create a set of valid subtask IDs (in the format "parentId.subtaskId")
  const validSubtaskIds = new Set();
  // Check each task's dependencies
  tasks.forEach(task => {
    if (task.subtasks && Array.isArray(task.subtasks)) {
      task.subtasks.forEach(subtask => {
        validSubtaskIds.add(`${task.id}.${subtask.id}`);
      });
    }
  });

  // Flag to track if any changes were made
  let changesDetected = false;

  // Validate all tasks and their dependencies
  tasks.forEach(task => {
    if (task.dependencies && Array.isArray(task.dependencies)) {
      // First check for and remove duplicate dependencies
      const uniqueDeps = new Set();
      const uniqueDependencies = task.dependencies.filter(depId => {
        // Convert to string for comparison to handle both numeric and string IDs
        const depIdStr = String(depId);
        if (uniqueDeps.has(depIdStr)) {
          log('warn', `Removing duplicate dependency from task ${task.id}: ${depId}`);
          changesDetected = true;
          return false;
        }
        uniqueDeps.add(depIdStr);
        return true;
      });

      // If we removed duplicates, update the array
      if (uniqueDependencies.length !== task.dependencies.length) {
        task.dependencies = uniqueDependencies;
        changesDetected = true;
      }

      const validDependencies = uniqueDependencies.filter(depId => {
        const isSubtask = typeof depId === 'string' && depId.includes('.');

        if (isSubtask) {
          // Check if the subtask exists
          if (!validSubtaskIds.has(depId)) {
            log('warn', `Removing invalid subtask dependency from task ${task.id}: ${depId} (subtask does not exist)`);
            return false;
          }
          return true;
        } else {
          // Check if the task exists
          const numericId = typeof depId === 'string' ? parseInt(depId, 10) : depId;
          if (!validTaskIds.has(numericId)) {
            log('warn', `Removing invalid task dependency from task ${task.id}: ${depId} (task does not exist)`);
            return false;
          }
          return true;
        }
      });

      // Update the task's dependencies array
      if (validDependencies.length !== uniqueDependencies.length) {
        task.dependencies = validDependencies;
        changesDetected = true;
      }
    if (!task.dependencies) {
      return; // No dependencies to validate
    }

    // Validate subtask dependencies
    if (task.subtasks && Array.isArray(task.subtasks)) {
      task.subtasks.forEach(subtask => {
        if (subtask.dependencies && Array.isArray(subtask.dependencies)) {
          // First check for and remove duplicate dependencies
          const uniqueDeps = new Set();
          const uniqueDependencies = subtask.dependencies.filter(depId => {
            // Convert to string for comparison to handle both numeric and string IDs
            const depIdStr = String(depId);
            if (uniqueDeps.has(depIdStr)) {
              log('warn', `Removing duplicate dependency from subtask ${task.id}.${subtask.id}: ${depId}`);
              changesDetected = true;
              return false;
            }
            uniqueDeps.add(depIdStr);
            return true;
          });

          // If we removed duplicates, update the array
          if (uniqueDependencies.length !== subtask.dependencies.length) {
            subtask.dependencies = uniqueDependencies;
            changesDetected = true;
          }

          // Check for and remove self-dependencies
          const subtaskId = `${task.id}.${subtask.id}`;
          const selfDependencyIndex = subtask.dependencies.findIndex(depId => {
            return String(depId) === String(subtaskId);
          });

          if (selfDependencyIndex !== -1) {
            log('warn', `Removing self-dependency from subtask ${subtaskId} (subtask cannot depend on itself)`);
            subtask.dependencies.splice(selfDependencyIndex, 1);
            changesDetected = true;
          }

          // Then validate remaining dependencies
          const validSubtaskDeps = subtask.dependencies.filter(depId => {
            const isSubtask = typeof depId === 'string' && depId.includes('.');

            if (isSubtask) {
              // Check if the subtask exists
              if (!validSubtaskIds.has(depId)) {
                log('warn', `Removing invalid subtask dependency from subtask ${task.id}.${subtask.id}: ${depId} (subtask does not exist)`);
                return false;
              }
              return true;
            } else {
              // Check if the task exists
              const numericId = typeof depId === 'string' ? parseInt(depId, 10) : depId;
              if (!validTaskIds.has(numericId)) {
                log('warn', `Removing invalid task dependency from task ${task.id}: ${depId} (task does not exist)`);
                return false;
              }
              return true;
            }
          });

          // Update the subtask's dependencies array
          if (validSubtaskDeps.length !== subtask.dependencies.length) {
            subtask.dependencies = validSubtaskDeps;
            changesDetected = true;
          }
        }
    task.dependencies.forEach(depId => {
      // Check for self-dependencies
      if (String(depId) === String(task.id)) {
        issues.push({
          type: 'self',
          taskId: task.id,
          message: `Task ${task.id} depends on itself`
        });
        return;
      }

      // Check if dependency exists
      if (!taskExists(tasks, depId)) {
        issues.push({
          type: 'missing',
          taskId: task.id,
          dependencyId: depId,
          message: `Task ${task.id} depends on non-existent task ${depId}`
        });
      }
    });

    // Check for circular dependencies
    if (isCircularDependency(tasks, task.id)) {
      issues.push({
        type: 'circular',
        taskId: task.id,
        message: `Task ${task.id} is part of a circular dependency chain`
      });
    }
  });

  // Save changes if tasksPath is provided and changes were detected
  if (tasksPath && changesDetected) {
    try {
      const data = readJSON(tasksPath);
      if (data) {
        data.tasks = tasks;
        writeJSON(tasksPath, data);
        log('info', 'Updated tasks.json to remove invalid and duplicate dependencies');
      }
    } catch (error) {
      log('error', 'Failed to save changes to tasks.json', error);
  return {
    valid: issues.length === 0,
    issues
  };
}

/**
 * Remove duplicate dependencies from tasks
 * @param {Object} tasksData - Tasks data object with tasks array
 * @returns {Object} Updated tasks data with duplicates removed
 */
function removeDuplicateDependencies(tasksData) {
  const tasks = tasksData.tasks.map(task => {
    if (!task.dependencies) {
      return task;
    }
  }

    // Convert to Set and back to array to remove duplicates
    const uniqueDeps = [...new Set(task.dependencies)];
    return {
      ...task,
      dependencies: uniqueDeps
    };
  });

  return changesDetected;
  return {
    ...tasksData,
    tasks
  };
}
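The rewritten removeDuplicateDependencies relies on the `[...new Set(deps)]` round-trip. Worth noting in a sketch: `Set` compares by identity, so a numeric `2` and a string `'2'` survive as distinct entries (the previous implementation compared `String(depId)` values instead). `dedupeDependencies` below is an invented name for the per-task step:

```javascript
// Sketch of the Set-based dedup used by the new removeDuplicateDependencies.
function dedupeDependencies(task) {
  if (!task.dependencies) {
    return task;
  }
  // Set removes duplicates by identity, so 2 and '2' remain distinct entries
  // (the earlier implementation stringified IDs before comparing).
  return { ...task, dependencies: [...new Set(task.dependencies)] };
}

const deduped = dedupeDependencies({ id: 4, dependencies: [1, 2, 2, '2', 3, 1] });
// deduped.dependencies keeps first occurrences in order: [1, 2, '2', 3]
```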

/**
 * Clean up invalid subtask dependencies
 * @param {Object} tasksData - Tasks data object with tasks array
 * @returns {Object} Updated tasks data with invalid subtask dependencies removed
 */
function cleanupSubtaskDependencies(tasksData) {
  const tasks = tasksData.tasks.map(task => {
    // Handle task's own dependencies
    if (task.dependencies) {
      task.dependencies = task.dependencies.filter(depId => {
        // Keep only dependencies that exist
        return taskExists(tasksData.tasks, depId);
      });
    }

    // Handle subtask dependencies
    if (task.subtasks) {
      task.subtasks = task.subtasks.map(subtask => {
        if (!subtask.dependencies) {
          return subtask;
        }

        // Filter out dependencies to non-existent subtasks
        subtask.dependencies = subtask.dependencies.filter(depId => {
          return taskExists(tasksData.tasks, depId);
        });

        return subtask;
      });
    }

    return task;
  });

  return {
    ...tasksData,
    tasks
  };
}
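The new cleanup delegates all existence checks to `taskExists`, which is defined elsewhere in the repo and not shown in this hunk. A minimal sketch of its assumed shape, resolving both plain task IDs and `"parentId.subtaskId"` references (the sample data is invented):

```javascript
// Assumed shape of taskExists (the real helper lives elsewhere in the repo):
// resolves plain task IDs and "parentId.subtaskId" subtask references.
function taskExists(tasks, depId) {
  const idStr = String(depId);
  if (idStr.includes('.')) {
    const [parentId, subId] = idStr.split('.');
    const parent = tasks.find(t => String(t.id) === parentId);
    return !!parent && (parent.subtasks || []).some(s => String(s.id) === subId);
  }
  return tasks.some(t => String(t.id) === idStr);
}

const tasks = [
  { id: 1, subtasks: [{ id: 1 }, { id: 2 }] },
  { id: 2 }
];

const checks = [
  taskExists(tasks, 2),      // plain task reference
  taskExists(tasks, '1.2'),  // existing subtask reference
  taskExists(tasks, '1.3')   // missing subtask
];
```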

/**
@@ -547,10 +437,9 @@ async function addDependency(tasksPath, taskId, dependencyId) {
    subtasksFixed: 0
  };

  // Monkey patch the log function to capture warnings and count fixes
  const originalLog = log;
  // Create a custom logger instead of reassigning the imported log function
  const warnings = [];
  log = function(level, ...args) {
  const customLogger = function(level, ...args) {
    if (level === 'warn') {
      warnings.push(args.join(' '));

@@ -570,12 +459,34 @@ async function addDependency(tasksPath, taskId, dependencyId) {
      }
    }
    // Call the original log function
    return originalLog(level, ...args);
    return log(level, ...args);
  };

  // Run validation
  // Run validation with custom logger
  try {
    const changesDetected = validateTaskDependencies(data.tasks, tasksPath);
    // Temporarily save validateTaskDependencies function with normal log
    const originalValidateTaskDependencies = validateTaskDependencies;

    // Create patched version that uses customLogger
    const patchedValidateTaskDependencies = (tasks, tasksPath) => {
      // Temporarily redirect log calls in this scope
      const originalLog = log;
      const logProxy = function(...args) {
        return customLogger(...args);
      };

      // Call the original function in a context where log calls are intercepted
      const result = (() => {
        // Use Function.prototype.bind to create a new function that has logProxy available
        return Function('tasks', 'tasksPath', 'log', 'customLogger',
          `return (${originalValidateTaskDependencies.toString()})(tasks, tasksPath);`
        )(tasks, tasksPath, logProxy, customLogger);
      })();

      return result;
    };

    const changesDetected = patchedValidateTaskDependencies(data.tasks, tasksPath);

    // Create a detailed report
    if (changesDetected) {
@@ -616,9 +527,9 @@ async function addDependency(tasksPath, taskId, dependencyId) {
      { padding: 1, borderColor: 'green', borderStyle: 'round', margin: { top: 1, bottom: 1 } }
    ));
  }
  } finally {
    // Restore the original log function
    log = originalLog;
  } catch (error) {
    log('error', 'Error validating dependencies:', error);
    process.exit(1);
  }
}

@@ -976,192 +887,6 @@ async function addDependency(tasksPath, taskId, dependencyId) {
  }
}

/**
 * Clean up subtask dependencies by removing references to non-existent subtasks/tasks
 * @param {Object} tasksData - The tasks data object with tasks array
 * @returns {boolean} - True if any changes were made
 */
function cleanupSubtaskDependencies(tasksData) {
  if (!tasksData || !tasksData.tasks || !Array.isArray(tasksData.tasks)) {
    return false;
  }

  log('debug', 'Cleaning up subtask dependencies...');

  let changesDetected = false;
  let duplicatesRemoved = 0;

  // Create validity maps for fast lookup
  const validTaskIds = new Set(tasksData.tasks.map(t => t.id));
  const validSubtaskIds = new Set();

  // Create a dependency map for cycle detection
  const subtaskDependencyMap = new Map();

  // Populate the validSubtaskIds set
  tasksData.tasks.forEach(task => {
    if (task.subtasks && Array.isArray(task.subtasks)) {
      task.subtasks.forEach(subtask => {
        validSubtaskIds.add(`${task.id}.${subtask.id}`);
      });
    }
  });

  // Clean up each task's subtasks
  tasksData.tasks.forEach(task => {
    if (!task.subtasks || !Array.isArray(task.subtasks)) {
      return;
    }

    task.subtasks.forEach(subtask => {
      if (!subtask.dependencies || !Array.isArray(subtask.dependencies)) {
        return;
      }

      const originalLength = subtask.dependencies.length;
      const subtaskId = `${task.id}.${subtask.id}`;

      // First remove duplicate dependencies
      const uniqueDeps = new Set();
      subtask.dependencies = subtask.dependencies.filter(depId => {
        // Convert to string for comparison, handling special case for subtask references
        let depIdStr = String(depId);

        // For numeric IDs that are likely subtask references in the same parent task
        if (typeof depId === 'number' && depId < 100) {
          depIdStr = `${task.id}.${depId}`;
        }

        if (uniqueDeps.has(depIdStr)) {
          log('debug', `Removing duplicate dependency from subtask ${subtaskId}: ${depId}`);
          duplicatesRemoved++;
          return false;
        }
        uniqueDeps.add(depIdStr);
        return true;
      });

      // Then filter invalid dependencies
      subtask.dependencies = subtask.dependencies.filter(depId => {
        // Handle string dependencies with dot notation
        if (typeof depId === 'string' && depId.includes('.')) {
          if (!validSubtaskIds.has(depId)) {
            log('debug', `Removing invalid subtask dependency from ${subtaskId}: ${depId}`);
            return false;
          }
          if (depId === subtaskId) {
            log('debug', `Removing self-dependency from ${subtaskId}`);
            return false;
          }
          return true;
        }

        // Handle numeric dependencies
        const numericId = typeof depId === 'number' ? depId : parseInt(depId, 10);

        // Small numbers likely refer to subtasks in the same task
        if (numericId < 100) {
          const fullSubtaskId = `${task.id}.${numericId}`;

          if (fullSubtaskId === subtaskId) {
            log('debug', `Removing self-dependency from ${subtaskId}`);
            return false;
          }

          if (!validSubtaskIds.has(fullSubtaskId)) {
            log('debug', `Removing invalid subtask dependency from ${subtaskId}: ${numericId}`);
            return false;
          }

          return true;
        }

        // Otherwise it's a task reference
        if (!validTaskIds.has(numericId)) {
          log('debug', `Removing invalid task dependency from ${subtaskId}: ${numericId}`);
          return false;
        }

        return true;
      });

      if (subtask.dependencies.length < originalLength) {
        changesDetected = true;
      }

      // Build dependency map for cycle detection
      subtaskDependencyMap.set(subtaskId, subtask.dependencies.map(depId => {
        if (typeof depId === 'string' && depId.includes('.')) {
          return depId;
        } else if (typeof depId === 'number' && depId < 100) {
          return `${task.id}.${depId}`;
        }
        return String(depId);
      }));
    });
  });

  // Break circular dependencies in subtasks
  tasksData.tasks.forEach(task => {
    if (!task.subtasks || !Array.isArray(task.subtasks)) {
      return;
    }

    task.subtasks.forEach(subtask => {
      const subtaskId = `${task.id}.${subtask.id}`;

      // Skip if no dependencies
      if (!subtask.dependencies || !Array.isArray(subtask.dependencies) || subtask.dependencies.length === 0) {
        return;
      }

      // Detect cycles for this subtask
      const visited = new Set();
      const recursionStack = new Set();
      const cyclesToBreak = findCycles(subtaskId, subtaskDependencyMap, visited, recursionStack);

      if (cyclesToBreak.length > 0) {
        const originalLength = subtask.dependencies.length;

        // Format cycle paths for removal
        const edgesToRemove = cyclesToBreak.map(edge => {
          if (edge.includes('.')) {
            const [depTaskId, depSubtaskId] = edge.split('.').map(Number);
            if (depTaskId === task.id) {
              return depSubtaskId; // Return just subtask ID if in the same task
            }
            return edge; // Full subtask ID string
          }
          return Number(edge); // Task ID
        });

        // Remove dependencies that cause cycles
        subtask.dependencies = subtask.dependencies.filter(depId => {
          const normalizedDepId = typeof depId === 'number' && depId < 100
            ? `${task.id}.${depId}`
            : String(depId);

          if (edgesToRemove.includes(depId) || edgesToRemove.includes(normalizedDepId)) {
            log('debug', `Breaking circular dependency: Removing ${normalizedDepId} from ${subtaskId}`);
            return false;
          }
          return true;
        });

        if (subtask.dependencies.length < originalLength) {
          changesDetected = true;
        }
      }
    });
  });

  if (changesDetected) {
    log('debug', `Cleaned up subtask dependencies (removed ${duplicatesRemoved} duplicates and fixed circular references)`);
  }

  return changesDetected;
}

/**
 * Ensure at least one subtask in each task has no dependencies
 * @param {Object} tasksData - The tasks data object with tasks array
@@ -1198,75 +923,6 @@ async function addDependency(tasksPath, taskId, dependencyId) {
|
||||
return changesDetected;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Remove duplicate dependencies from tasks and subtasks
|
||||
* @param {Object} tasksData - The tasks data object with tasks array
|
||||
* @returns {boolean} - True if any changes were made
|
||||
*/
|
||||
function removeDuplicateDependencies(tasksData) {
|
||||
if (!tasksData || !tasksData.tasks || !Array.isArray(tasksData.tasks)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
let changesDetected = false;
|
||||
|
||||
tasksData.tasks.forEach(task => {
|
||||
// Remove duplicates from main task dependencies
|
||||
if (task.dependencies && Array.isArray(task.dependencies)) {
|
||||
const uniqueDeps = new Set();
|
||||
const originalLength = task.dependencies.length;
|
||||
|
||||
task.dependencies = task.dependencies.filter(depId => {
|
||||
const depIdStr = String(depId);
|
||||
if (uniqueDeps.has(depIdStr)) {
|
||||
log('debug', `Removing duplicate dependency from task ${task.id}: ${depId}`);
|
||||
return false;
|
||||
}
|
||||
uniqueDeps.add(depIdStr);
|
||||
return true;
|
||||
});
|
||||
|
||||
if (task.dependencies.length < originalLength) {
|
||||
changesDetected = true;
|
||||
}
|
||||
}
|
||||
|
||||
// Remove duplicates from subtask dependencies
|
||||
if (task.subtasks && Array.isArray(task.subtasks)) {
|
||||
task.subtasks.forEach(subtask => {
|
||||
if (subtask.dependencies && Array.isArray(subtask.dependencies)) {
|
||||
const uniqueDeps = new Set();
|
||||
const originalLength = subtask.dependencies.length;
|
||||
|
||||
subtask.dependencies = subtask.dependencies.filter(depId => {
|
||||
// Convert to string for comparison, handling special case for subtask references
|
||||
let depIdStr = String(depId);
|
||||
|
||||
// For numeric IDs that are likely subtask references in the same parent task
|
||||
if (typeof depId === 'number' && depId < 100) {
|
||||
depIdStr = `${task.id}.${depId}`;
|
||||
}
|
||||
|
||||
if (uniqueDeps.has(depIdStr)) {
|
||||
log('debug', `Removing duplicate dependency from subtask ${task.id}.${subtask.id}: ${depId}`);
|
||||
return false;
|
||||
}
|
||||
uniqueDeps.add(depIdStr);
|
||||
return true;
|
||||
});
|
||||
|
||||
if (subtask.dependencies.length < originalLength) {
|
||||
changesDetected = true;
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
return changesDetected;
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate and fix dependencies across all tasks and subtasks
|
||||
* This function is designed to be called after any task modification
|
||||
@@ -1282,23 +938,77 @@ function removeDuplicateDependencies(tasksData) {

  log('debug', 'Validating and fixing dependencies...');

  let changesDetected = false;
  // Create a deep copy for comparison
  const originalData = JSON.parse(JSON.stringify(tasksData));

  // 1. Remove duplicate dependencies from tasks and subtasks
  const hasDuplicates = removeDuplicateDependencies(tasksData);
  if (hasDuplicates) changesDetected = true;
  tasksData.tasks = tasksData.tasks.map(task => {
    // Handle task dependencies
    if (task.dependencies) {
      const uniqueDeps = [...new Set(task.dependencies)];
      task.dependencies = uniqueDeps;
    }

    // Handle subtask dependencies
    if (task.subtasks) {
      task.subtasks = task.subtasks.map(subtask => {
        if (subtask.dependencies) {
          const uniqueDeps = [...new Set(subtask.dependencies)];
          subtask.dependencies = uniqueDeps;
        }
        return subtask;
      });
    }
    return task;
  });

  // 2. Remove invalid task dependencies (non-existent tasks)
  const validationChanges = validateTaskDependencies(tasksData.tasks);
  if (validationChanges) changesDetected = true;
  tasksData.tasks.forEach(task => {
    // Clean up task dependencies
    if (task.dependencies) {
      task.dependencies = task.dependencies.filter(depId => {
        // Remove self-dependencies
        if (String(depId) === String(task.id)) {
          return false;
        }
        // Remove non-existent dependencies
        return taskExists(tasksData.tasks, depId);
      });
    }

    // Clean up subtask dependencies
    if (task.subtasks) {
      task.subtasks.forEach(subtask => {
        if (subtask.dependencies) {
          subtask.dependencies = subtask.dependencies.filter(depId => {
            // Handle numeric subtask references
            if (typeof depId === 'number' && depId < 100) {
              const fullSubtaskId = `${task.id}.${depId}`;
              return taskExists(tasksData.tasks, fullSubtaskId);
            }
            // Handle full task/subtask references
            return taskExists(tasksData.tasks, depId);
          });
        }
      });
    }
  });

  // 3. Clean up subtask dependencies
  const subtaskChanges = cleanupSubtaskDependencies(tasksData);
  if (subtaskChanges) changesDetected = true;
  // 3. Ensure at least one subtask has no dependencies in each task
  tasksData.tasks.forEach(task => {
    if (task.subtasks && task.subtasks.length > 0) {
      const hasIndependentSubtask = task.subtasks.some(st =>
        !st.dependencies || !Array.isArray(st.dependencies) || st.dependencies.length === 0
      );

      if (!hasIndependentSubtask) {
        task.subtasks[0].dependencies = [];
      }
    }
  });

  // 4. Ensure at least one subtask has no dependencies in each task
  const noDepChanges = ensureAtLeastOneIndependentSubtask(tasksData);
  if (noDepChanges) changesDetected = true;
  // Check if any changes were made by comparing with original data
  const changesDetected = JSON.stringify(tasksData) !== JSON.stringify(originalData);

  // Save changes if needed
  if (tasksPath && changesDetected) {
@@ -1313,13 +1023,14 @@ function removeDuplicateDependencies(tasksData) {
  return changesDetected;
}


export {
  addDependency,
  removeDependency,
  isCircularDependency,
  validateTaskDependencies,
  validateDependenciesCommand,
  fixDependenciesCommand,
  removeDuplicateDependencies,
  cleanupSubtaskDependencies,
  ensureAtLeastOneIndependentSubtask,
  validateAndFixDependencies

@@ -50,6 +50,26 @@ const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Import perplexity if available
let perplexity;

try {
  if (process.env.PERPLEXITY_API_KEY) {
    // Using the existing approach from ai-services.js
    const OpenAI = (await import('openai')).default;

    perplexity = new OpenAI({
      apiKey: process.env.PERPLEXITY_API_KEY,
      baseURL: 'https://api.perplexity.ai',
    });

    log('info', `Initialized Perplexity client with OpenAI compatibility layer`);
  }
} catch (error) {
  log('warn', `Failed to initialize Perplexity client: ${error.message}`);
  log('warn', 'Research-backed features will not be available');
}

/**
 * Parse a PRD file and generate tasks
 * @param {string} prdPath - Path to the PRD file
@@ -109,11 +129,19 @@ async function parsePRD(prdPath, tasksPath, numTasks) {
 * @param {string} tasksPath - Path to the tasks.json file
 * @param {number} fromId - Task ID to start updating from
 * @param {string} prompt - Prompt with new context
 * @param {boolean} useResearch - Whether to use Perplexity AI for research
 */
async function updateTasks(tasksPath, fromId, prompt) {
async function updateTasks(tasksPath, fromId, prompt, useResearch = false) {
  try {
    log('info', `Updating tasks from ID ${fromId} with prompt: "${prompt}"`);

    // Validate research flag
    if (useResearch && (!perplexity || !process.env.PERPLEXITY_API_KEY)) {
      log('warn', 'Perplexity AI is not available. Falling back to Claude AI.');
      console.log(chalk.yellow('Perplexity AI is not available (API key may be missing). Falling back to Claude AI.'));
      useResearch = false;
    }

    // Read the tasks file
    const data = readJSON(tasksPath);
    if (!data || !data.tasks) {
@@ -169,59 +197,109 @@ The changes described in the prompt should be applied to ALL tasks in the list.`

    const taskData = JSON.stringify(tasksToUpdate, null, 2);

    // Call Claude to update the tasks
    const message = await anthropic.messages.create({
      model: CONFIG.model,
      max_tokens: CONFIG.maxTokens,
      temperature: CONFIG.temperature,
      system: systemPrompt,
      messages: [
        {
          role: 'user',
          content: `Here are the tasks to update:
    let updatedTasks;
    const loadingIndicator = startLoadingIndicator(useResearch
      ? 'Updating tasks with Perplexity AI research...'
      : 'Updating tasks with Claude AI...');

    try {
      if (useResearch) {
        log('info', 'Using Perplexity AI for research-backed task updates');

        // Call Perplexity AI using format consistent with ai-services.js
        const perplexityModel = process.env.PERPLEXITY_MODEL || 'sonar-small-online';
        const result = await perplexity.chat.completions.create({
          model: perplexityModel,
          messages: [
            {
              role: "system",
              content: `${systemPrompt}\n\nAdditionally, please research the latest best practices, implementation details, and considerations when updating these tasks. Use your online search capabilities to gather relevant information.`
            },
            {
              role: "user",
              content: `Here are the tasks to update:
${taskData}

Please update these tasks based on the following new context:
${prompt}

Return only the updated tasks as a valid JSON array.`
            }
          ],
          temperature: parseFloat(process.env.TEMPERATURE || CONFIG.temperature),
          max_tokens: parseInt(process.env.MAX_TOKENS || CONFIG.maxTokens),
        });

        const responseText = result.choices[0].message.content;

        // Extract JSON from response
        const jsonStart = responseText.indexOf('[');
        const jsonEnd = responseText.lastIndexOf(']');

        if (jsonStart === -1 || jsonEnd === -1) {
          throw new Error("Could not find valid JSON array in Perplexity's response");
        }
      ]
    });

    const responseText = message.content[0].text;

    // Extract JSON from response
    const jsonStart = responseText.indexOf('[');
    const jsonEnd = responseText.lastIndexOf(']');

    if (jsonStart === -1 || jsonEnd === -1) {
      throw new Error("Could not find valid JSON array in Claude's response");
    }

    const jsonText = responseText.substring(jsonStart, jsonEnd + 1);
    const updatedTasks = JSON.parse(jsonText);

    // Replace the tasks in the original data
    updatedTasks.forEach(updatedTask => {
      const index = data.tasks.findIndex(t => t.id === updatedTask.id);
      if (index !== -1) {
        data.tasks[index] = updatedTask;

        const jsonText = responseText.substring(jsonStart, jsonEnd + 1);
        updatedTasks = JSON.parse(jsonText);
      } else {
        // Call Claude to update the tasks
        const message = await anthropic.messages.create({
          model: CONFIG.model,
          max_tokens: CONFIG.maxTokens,
          temperature: CONFIG.temperature,
          system: systemPrompt,
          messages: [
            {
              role: 'user',
              content: `Here are the tasks to update:
${taskData}

Please update these tasks based on the following new context:
${prompt}

Return only the updated tasks as a valid JSON array.`
            }
          ]
        });

        const responseText = message.content[0].text;

        // Extract JSON from response
        const jsonStart = responseText.indexOf('[');
        const jsonEnd = responseText.lastIndexOf(']');

        if (jsonStart === -1 || jsonEnd === -1) {
          throw new Error("Could not find valid JSON array in Claude's response");
        }

        const jsonText = responseText.substring(jsonStart, jsonEnd + 1);
        updatedTasks = JSON.parse(jsonText);
      }
    });

    // Write the updated tasks to the file
    writeJSON(tasksPath, data);

    log('success', `Successfully updated ${updatedTasks.length} tasks`);

    // Generate individual task files
    await generateTaskFiles(tasksPath, path.dirname(tasksPath));

    console.log(boxen(
      chalk.green(`Successfully updated ${updatedTasks.length} tasks`),
      { padding: 1, borderColor: 'green', borderStyle: 'round' }
    ));

      // Replace the tasks in the original data
      updatedTasks.forEach(updatedTask => {
        const index = data.tasks.findIndex(t => t.id === updatedTask.id);
        if (index !== -1) {
          data.tasks[index] = updatedTask;
        }
      });

      // Write the updated tasks to the file
      writeJSON(tasksPath, data);

      log('success', `Successfully updated ${updatedTasks.length} tasks`);

      // Generate individual task files
      await generateTaskFiles(tasksPath, path.dirname(tasksPath));

      console.log(boxen(
        chalk.green(`Successfully updated ${updatedTasks.length} tasks`),
        { padding: 1, borderColor: 'green', borderStyle: 'round' }
      ));
    } finally {
      stopLoadingIndicator(loadingIndicator);
    }
  } catch (error) {
    log('error', `Error updating tasks: ${error.message}`);
    console.error(chalk.red(`Error: ${error.message}`));

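Both the Perplexity and Claude branches extract the JSON array from the response the same way; that pattern could be factored into a shared helper, sketched here with a hypothetical name:

```javascript
// Sketch of the JSON-array extraction used in both AI branches (hypothetical helper).
// Finds the first '[' and last ']' and parses everything between them.
function extractJsonArray(responseText, sourceName = 'the model') {
  const jsonStart = responseText.indexOf('[');
  const jsonEnd = responseText.lastIndexOf(']');
  if (jsonStart === -1 || jsonEnd === -1) {
    throw new Error(`Could not find valid JSON array in ${sourceName}'s response`);
  }
  return JSON.parse(responseText.substring(jsonStart, jsonEnd + 1));
}
```

This tolerates prose before and after the array, which LLM responses often include even when asked for "only" JSON.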
@@ -101,21 +101,21 @@ function createProgressBar(percent, length = 30) {
 */
function getStatusWithColor(status) {
  if (!status) {
    return chalk.gray('unknown');
    return chalk.gray('❓ unknown');
  }

  const statusColors = {
    'done': chalk.green,
    'completed': chalk.green,
    'pending': chalk.yellow,
    'in-progress': chalk.blue,
    'deferred': chalk.gray,
    'blocked': chalk.red,
    'review': chalk.magenta
  const statusConfig = {
    'done': { color: chalk.green, icon: '✅' },
    'completed': { color: chalk.green, icon: '✅' },
    'pending': { color: chalk.yellow, icon: '⏱️' },
    'in-progress': { color: chalk.blue, icon: '🔄' },
    'deferred': { color: chalk.gray, icon: '⏱️' },
    'blocked': { color: chalk.red, icon: '❌' },
    'review': { color: chalk.magenta, icon: '👀' }
  };

  const colorFunc = statusColors[status.toLowerCase()] || chalk.white;
  return colorFunc(status);
  const config = statusConfig[status.toLowerCase()] || { color: chalk.red, icon: '❌' };
  return config.color(`${config.icon} ${status}`);
}
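The refactor above replaces a plain color-function map with a config object carrying both a color and an icon; a dependency-free sketch of the same lookup (identity functions stand in for chalk's color functions):

```javascript
// Dependency-free sketch of the status → { color, icon } lookup pattern above
// (identity functions stand in for chalk's color functions).
const id = s => s;
const statusConfig = {
  'done':        { color: id, icon: '✅' },
  'pending':     { color: id, icon: '⏱️' },
  'in-progress': { color: id, icon: '🔄' },
  'blocked':     { color: id, icon: '❌' },
};

function getStatusWithIcon(status) {
  if (!status) return '❓ unknown';
  // Unknown statuses fall back to a red ❌ config, matching the refactor above.
  const config = statusConfig[status.toLowerCase()] || { color: id, icon: '❌' };
  return config.color(`${config.icon} ${status}`);
}
```

The lookup is case-insensitive but the returned string preserves the caller's original casing.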

/**
@@ -337,9 +337,9 @@ function displayHelp() {
 * @returns {string} Colored complexity score
 */
function getComplexityWithColor(score) {
  if (score <= 3) return chalk.green(score.toString());
  if (score <= 6) return chalk.yellow(score.toString());
  return chalk.red(score.toString());
  if (score <= 3) return chalk.green(`🟢 ${score}`);
  if (score <= 6) return chalk.yellow(`🟡 ${score}`);
  return chalk.red(`🔴 ${score}`);
}

/**

@@ -14,35 +14,3 @@ Create the foundational data structure including:

# Test Strategy:
Verify that the tasks.json structure can be created, read, and validated. Test with sample data to ensure all fields are properly handled and that validation correctly identifies invalid structures.

# Subtasks:
## 1. Design JSON Schema for tasks.json [done]
### Dependencies: None
### Description: Create a formal JSON Schema definition that validates the structure of the tasks.json file. The schema should enforce the data model specified in the PRD, including the Task Model and Tasks Collection Model with all required fields (id, title, description, status, dependencies, priority, details, testStrategy, subtasks). Include type validation, required fields, and constraints on enumerated values (like status and priority options).
### Details:
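The required-field and enum constraints the schema enforces can be approximated with a hand-rolled check; a minimal sketch (the real subtask defines a formal JSON Schema document, so the rules here are illustrative):

```javascript
// Minimal validation sketch mirroring the schema's constraints
// (illustrative only; the actual subtask defines a formal JSON Schema).
const VALID_STATUS = ['pending', 'done', 'deferred'];
const VALID_PRIORITY = ['high', 'medium', 'low'];

function validateTask(task) {
  const errors = [];
  if (!Number.isInteger(task.id)) errors.push('id must be an integer');
  if (typeof task.title !== 'string' || !task.title) errors.push('title is required');
  if (!VALID_STATUS.includes(task.status)) errors.push(`invalid status: ${task.status}`);
  if (!VALID_PRIORITY.includes(task.priority)) errors.push(`invalid priority: ${task.priority}`);
  if (!Array.isArray(task.dependencies)) errors.push('dependencies must be an array');
  return errors; // empty array means the task passed
}
```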


## 2. Implement Task Model Classes [done]
### Dependencies: 1 (done)
### Description: Create JavaScript classes that represent the Task and Tasks Collection models. Implement constructor methods that validate input data, getter/setter methods for properties, and utility methods for common operations (like adding subtasks, changing status, etc.). These classes will serve as the programmatic interface to the task data structure.
### Details:
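One possible shape for such a class, sketched under the assumption of a validating constructor and chainable helpers (the actual class API may differ):

```javascript
// Sketch of a Task model class with input validation and subtask helpers
// (illustrative shape; the real implementation may differ).
class Task {
  constructor({ id, title, status = 'pending', dependencies = [], subtasks = [] }) {
    if (!Number.isInteger(id)) throw new TypeError('id must be an integer');
    if (!title) throw new TypeError('title is required');
    Object.assign(this, { id, title, status, dependencies, subtasks });
  }
  addSubtask(subtask) {
    this.subtasks.push(subtask);
    return this; // chainable
  }
  setStatus(status) {
    this.status = status;
    return this;
  }
}
```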


## 3. Create File System Operations for tasks.json [done]
### Dependencies: 1 (done), 2 (done)
### Description: Implement functions to read from and write to the tasks.json file. These functions should handle file system operations asynchronously, manage file locking to prevent corruption during concurrent operations, and ensure atomic writes (using temporary files and rename operations). Include initialization logic to create a default tasks.json file if one doesn't exist.
### Details:


## 4. Implement Validation Functions [done]
### Dependencies: 1 (done), 2 (done)
### Description: Create a comprehensive set of validation functions that can verify the integrity of the task data structure. These should include validation of individual tasks, validation of the entire tasks collection, dependency cycle detection, and validation of relationships between tasks. These functions will be used both when loading data and before saving to ensure data integrity.
### Details:
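Dependency cycle detection is typically a depth-first search with visiting/visited marking; a minimal sketch assuming tasks keyed by `id` with `dependencies` arrays of ids:

```javascript
// Sketch of dependency-cycle detection via depth-first search.
// A node re-entered while still 'visiting' means a back edge, i.e. a cycle.
function findCycle(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const state = new Map(); // id -> 'visiting' | 'done'
  const visit = id => {
    if (state.get(id) === 'visiting') return true;  // back edge found
    if (state.get(id) === 'done') return false;     // already cleared
    state.set(id, 'visiting');
    for (const dep of byId.get(id)?.dependencies ?? []) {
      if (visit(dep)) return true;
    }
    state.set(id, 'done');
    return false;
  };
  return tasks.some(t => visit(t.id));
}
```

The 'done' marking makes the search linear in tasks plus dependency edges, so it stays cheap even on large task graphs.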


## 5. Implement Error Handling System [done]
### Dependencies: 1 (done), 3 (done), 4 (done)
### Description: Create a robust error handling system for file operations and data validation. Implement custom error classes for different types of errors (file not found, permission denied, invalid data, etc.), error logging functionality, and recovery mechanisms where appropriate. This system should provide clear, actionable error messages to users while maintaining system stability.
### Details:
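One possible hierarchy for such custom error classes, sketched with illustrative names:

```javascript
// Sketch of custom error classes for file and validation failures
// (class and code names are illustrative, not the actual implementation).
class TaskMasterError extends Error {
  constructor(message, code) {
    super(message);
    this.name = this.constructor.name; // subclasses report their own name
    this.code = code;
  }
}
class FileError extends TaskMasterError {}
class ValidationError extends TaskMasterError {
  constructor(message, details = []) {
    super(message, 'E_VALIDATION');
    this.details = details; // per-field error messages
  }
}
```

A shared base class lets callers catch all application errors in one place while still branching on `instanceof` or `code` for recovery.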



@@ -14,35 +14,3 @@ Implement the CLI foundation including:

# Test Strategy:
Test each command with various parameters to ensure proper parsing. Verify help documentation is comprehensive and accurate. Test logging at different verbosity levels.

# Subtasks:
## 1. Set up Commander.js Framework [done]
### Dependencies: None
### Description: Initialize and configure Commander.js as the command-line parsing framework. Create the main CLI entry point file that will serve as the application's command-line interface. Set up the basic command structure with program name, version, and description from package.json. Implement the core program flow including command registration pattern and error handling.
### Details:


## 2. Implement Global Options Handling [done]
### Dependencies: 1 (done)
### Description: Add support for all required global options including --help, --version, --file, --quiet, --debug, and --json. Implement the logic to process these options and modify program behavior accordingly. Create a configuration object that stores these settings and can be accessed by all commands. Ensure options can be combined and have appropriate precedence rules.
### Details:
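The precedence rules could look like this in a sketch (flags over environment over defaults; function and option names are illustrative):

```javascript
// Sketch of global-option precedence: CLI flags override env values,
// which override built-in defaults. Names are illustrative.
function resolveConfig(defaults, env, flags) {
  const config = { ...defaults, ...env, ...flags };
  // One possible combination rule: --debug wins over --quiet when both are set.
  if (config.debug) config.quiet = false;
  return config;
}
```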


## 3. Create Command Help Documentation System [done]
### Dependencies: 1 (done), 2 (done)
### Description: Develop a comprehensive help documentation system that provides clear usage instructions for all commands and options. Implement both command-specific help and general program help. Ensure help text is well-formatted, consistent, and includes examples. Create a centralized system for managing help text to ensure consistency across the application.
### Details:


## 4. Implement Colorized Console Output [done]
### Dependencies: 1 (done)
### Description: Create a utility module for colorized console output to improve readability and user experience. Implement different color schemes for various message types (info, warning, error, success). Add support for text styling (bold, underline, etc.) and ensure colors are used consistently throughout the application. Make sure colors can be disabled in environments that don't support them.
### Details:


## 5. Develop Configurable Logging System [done]
### Dependencies: 1 (done), 2 (done), 4 (done)
### Description: Create a logging system with configurable verbosity levels that integrates with the CLI. Implement different logging levels (error, warn, info, debug, trace) and ensure log output respects the verbosity settings specified by global options. Add support for log output redirection to files. Ensure logs include appropriate timestamps and context information.
### Details:
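A minimal sketch of a verbosity-gated logger along these lines (illustrative, not the actual module):

```javascript
// Sketch of a leveled logger gated by a verbosity threshold.
// Messages above the configured level are suppressed; others get a timestamp prefix.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3, trace: 4 };

function createLogger(level = 'info', sink = console.error) {
  const threshold = LEVELS[level] ?? LEVELS.info;
  return (msgLevel, message) => {
    if ((LEVELS[msgLevel] ?? LEVELS.info) > threshold) return false; // suppressed
    sink(`[${new Date().toISOString()}] [${msgLevel.toUpperCase()}] ${message}`);
    return true;
  };
}
```

Passing a custom `sink` supports the file-redirection requirement: the same logger can write to stderr, a file stream, or a test buffer.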



@@ -1,6 +1,6 @@
# Task ID: 22
# Title: Create Comprehensive Test Suite for Task Master CLI
# Status: pending
# Status: in-progress
# Dependencies: ✅ 21 (done)
# Priority: high
# Description: Develop a complete testing infrastructure for the Task Master CLI that includes unit, integration, and end-to-end tests to verify all core functionality and error handling.
@@ -57,20 +57,20 @@ Verification will involve:
The task will be considered complete when all tests pass consistently, coverage meets targets, and the test suite can detect intentionally introduced bugs.

# Subtasks:
## 1. Set Up Jest Testing Environment [pending]
## 1. Set Up Jest Testing Environment [done]
### Dependencies: None
### Description: Configure Jest for the project, including setting up the jest.config.js file, adding necessary dependencies, and creating the initial test directory structure. Implement proper mocking for Claude API interactions, file system operations, and user input/output. Set up test coverage reporting and configure it to run in the CI pipeline.
### Details:
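A `jest.config.js` along the lines described might look like this (a sketch; the exact options used in the repo may differ):

```javascript
// Sketch of a jest.config.js matching the description above
// (coverage thresholds and paths are illustrative assumptions).
module.exports = {
  testEnvironment: 'node',
  roots: ['<rootDir>/tests'],
  collectCoverage: true,
  coverageDirectory: 'coverage',
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80 },
  },
  // Mocks for the Claude API and fs are set up per-test via jest.mock().
};
```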


## 2. Implement Unit Tests for Core Components [pending]
### Dependencies: 1 (pending)
### Dependencies: 1 (done)
### Description: Create a comprehensive set of unit tests for all utility functions, core logic components, and individual modules of the Task Master CLI. This includes tests for task creation, parsing, manipulation, data storage, retrieval, and formatting functions. Ensure all edge cases and error scenarios are covered.
### Details:


## 3. Develop Integration and End-to-End Tests [pending]
### Dependencies: 1 (pending), 2 (pending)
### Dependencies: 1 (done), 2 (pending)
### Description: Create integration tests that verify the correct interaction between different components of the CLI, including command execution, option parsing, and data flow. Implement end-to-end tests that simulate complete user workflows, such as creating a task, expanding it, and updating its status. Include tests for error scenarios, recovery processes, and handling large numbers of tasks.
### Details:


@@ -1,7 +1,7 @@
# Task ID: 23
# Title: Implement MCP (Model Context Protocol) Server Functionality for Task Master
# Status: pending
# Dependencies: ⏱️ 22 (pending)
# Dependencies: ⏱️ 22 (in-progress)
# Priority: medium
# Description: Extend Task Master to function as an MCP server, allowing it to provide context management services to other applications following the Model Context Protocol specification.
# Details:

@@ -1,7 +1,7 @@
# Task ID: 24
# Title: Implement AI-Powered Test Generation Command
# Status: pending
# Dependencies: ⏱️ 22 (pending)
# Dependencies: ⏱️ 22 (in-progress)
# Priority: high
# Description: Create a new 'generate-test' command that leverages AI to automatically produce Jest test files for tasks based on their descriptions and subtasks.
# Details:

113
tasks/tasks.json
@@ -17,60 +17,7 @@
      "priority": "high",
      "details": "Create the foundational data structure including:\n- JSON schema for tasks.json\n- Task model with all required fields (id, title, description, status, dependencies, priority, details, testStrategy, subtasks)\n- Validation functions for the task model\n- Basic file system operations for reading/writing tasks.json\n- Error handling for file operations",
      "testStrategy": "Verify that the tasks.json structure can be created, read, and validated. Test with sample data to ensure all fields are properly handled and that validation correctly identifies invalid structures.",
      "subtasks": [
        {
          "id": 1,
          "title": "Design JSON Schema for tasks.json",
          "description": "Create a formal JSON Schema definition that validates the structure of the tasks.json file. The schema should enforce the data model specified in the PRD, including the Task Model and Tasks Collection Model with all required fields (id, title, description, status, dependencies, priority, details, testStrategy, subtasks). Include type validation, required fields, and constraints on enumerated values (like status and priority options).",
          "status": "done",
          "dependencies": [],
          "acceptanceCriteria": "- JSON Schema file is created with proper validation for all fields in the Task and Tasks Collection models\n- Schema validates that task IDs are unique integers\n- Schema enforces valid status values (\"pending\", \"done\", \"deferred\")\n- Schema enforces valid priority values (\"high\", \"medium\", \"low\")\n- Schema validates the nested structure of subtasks\n- Schema includes validation for the meta object with projectName, version, timestamps, etc."
        },
        {
          "id": 2,
          "title": "Implement Task Model Classes",
          "description": "Create JavaScript classes that represent the Task and Tasks Collection models. Implement constructor methods that validate input data, getter/setter methods for properties, and utility methods for common operations (like adding subtasks, changing status, etc.). These classes will serve as the programmatic interface to the task data structure.",
          "status": "done",
          "dependencies": [
            1
          ],
          "acceptanceCriteria": "- Task class with all required properties from the PRD\n- TasksCollection class that manages an array of Task objects\n- Methods for creating, retrieving, updating tasks\n- Methods for managing subtasks within a task\n- Input validation in constructors and setters\n- Proper TypeScript/JSDoc type definitions for all classes and methods"
        },
        {
          "id": 3,
          "title": "Create File System Operations for tasks.json",
          "description": "Implement functions to read from and write to the tasks.json file. These functions should handle file system operations asynchronously, manage file locking to prevent corruption during concurrent operations, and ensure atomic writes (using temporary files and rename operations). Include initialization logic to create a default tasks.json file if one doesn't exist.",
          "status": "done",
          "dependencies": [
            1,
            2
          ],
          "acceptanceCriteria": "- Asynchronous read function that parses tasks.json into model objects\n- Asynchronous write function that serializes model objects to tasks.json\n- File locking mechanism to prevent concurrent write operations\n- Atomic write operations to prevent file corruption\n- Initialization function that creates default tasks.json if not present\n- Functions properly handle relative and absolute paths"
        },
        {
          "id": 4,
          "title": "Implement Validation Functions",
          "description": "Create a comprehensive set of validation functions that can verify the integrity of the task data structure. These should include validation of individual tasks, validation of the entire tasks collection, dependency cycle detection, and validation of relationships between tasks. These functions will be used both when loading data and before saving to ensure data integrity.",
          "status": "done",
          "dependencies": [
            1,
            2
          ],
          "acceptanceCriteria": "- Functions to validate individual task objects against schema\n- Function to validate entire tasks collection\n- Dependency cycle detection algorithm\n- Validation of parent-child relationships in subtasks\n- Validation of task ID uniqueness\n- Functions return detailed error messages for invalid data\n- Unit tests covering various validation scenarios"
        },
        {
          "id": 5,
          "title": "Implement Error Handling System",
          "description": "Create a robust error handling system for file operations and data validation. Implement custom error classes for different types of errors (file not found, permission denied, invalid data, etc.), error logging functionality, and recovery mechanisms where appropriate. This system should provide clear, actionable error messages to users while maintaining system stability.",
          "status": "done",
          "dependencies": [
            1,
            3,
            4
          ],
          "acceptanceCriteria": "- Custom error classes for different error types (FileError, ValidationError, etc.)\n- Consistent error format with error code, message, and details\n- Error logging functionality with configurable verbosity\n- Recovery mechanisms for common error scenarios\n- Graceful degradation when non-critical errors occur\n- User-friendly error messages that suggest solutions\n- Unit tests for error handling in various scenarios"
        }
      ]
      "subtasks": []
    },
    {
      "id": 2,
@@ -83,59 +30,7 @@
      "priority": "high",
      "details": "Implement the CLI foundation including:\n- Set up Commander.js for command parsing\n- Create help documentation for all commands\n- Implement colorized console output for better readability\n- Add logging system with configurable levels\n- Handle global options (--help, --version, --file, --quiet, --debug, --json)",
      "testStrategy": "Test each command with various parameters to ensure proper parsing. Verify help documentation is comprehensive and accurate. Test logging at different verbosity levels.",
      "subtasks": [
        {
          "id": 1,
          "title": "Set up Commander.js Framework",
          "description": "Initialize and configure Commander.js as the command-line parsing framework. Create the main CLI entry point file that will serve as the application's command-line interface. Set up the basic command structure with program name, version, and description from package.json. Implement the core program flow including command registration pattern and error handling.",
          "status": "done",
          "dependencies": [],
          "acceptanceCriteria": "- Commander.js is properly installed and configured in the project\n- CLI entry point file is created with proper Node.js shebang and permissions\n- Program metadata (name, version, description) is correctly loaded from package.json\n- Basic command registration pattern is established\n- Global error handling is implemented to catch and display unhandled exceptions"
        },
        {
          "id": 2,
          "title": "Implement Global Options Handling",
          "description": "Add support for all required global options including --help, --version, --file, --quiet, --debug, and --json. Implement the logic to process these options and modify program behavior accordingly. Create a configuration object that stores these settings and can be accessed by all commands. Ensure options can be combined and have appropriate precedence rules.",
          "status": "done",
          "dependencies": [
            1
          ],
          "acceptanceCriteria": "- All specified global options (--help, --version, --file, --quiet, --debug, --json) are implemented\n- Options correctly modify program behavior when specified\n- Alternative tasks.json file can be specified with --file option\n- Output verbosity is controlled by --quiet and --debug flags\n- JSON output format is supported with the --json flag\n- Help text is displayed when --help is specified\n- Version information is displayed when --version is specified"
        },
        {
          "id": 3,
          "title": "Create Command Help Documentation System",
          "description": "Develop a comprehensive help documentation system that provides clear usage instructions for all commands and options. Implement both command-specific help and general program help. Ensure help text is well-formatted, consistent, and includes examples. Create a centralized system for managing help text to ensure consistency across the application.",
          "status": "done",
          "dependencies": [
            1,
            2
          ],
          "acceptanceCriteria": "- General program help shows all available commands and global options\n- Command-specific help shows detailed usage information for each command\n- Help text includes clear examples of command usage\n- Help formatting is consistent and readable across all commands\n- Help system handles both explicit help requests (--help) and invalid command syntax"
        },
        {
          "id": 4,
          "title": "Implement Colorized Console Output",
          "description": "Create a utility module for colorized console output to improve readability and user experience. Implement different color schemes for various message types (info, warning, error, success). Add support for text styling (bold, underline, etc.) and ensure colors are used consistently throughout the application. Make sure colors can be disabled in environments that don't support them.",
          "status": "done",
          "dependencies": [
            1
          ],
          "acceptanceCriteria": "- Utility module provides consistent API for colorized output\n- Different message types (info, warning, error, success) use appropriate colors\n- Text styling options (bold, underline, etc.) are available\n- Colors are disabled automatically in environments that don't support them\n- Color usage is consistent across the application\n- Output remains readable when colors are disabled"
        },
        {
          "id": 5,
          "title": "Develop Configurable Logging System",
          "description": "Create a logging system with configurable verbosity levels that integrates with the CLI. Implement different logging levels (error, warn, info, debug, trace) and ensure log output respects the verbosity settings specified by global options. Add support for log output redirection to files. Ensure logs include appropriate timestamps and context information.",
          "status": "done",
          "dependencies": [
            1,
            2,
            4
          ],
          "acceptanceCriteria": "- Logging system supports multiple verbosity levels (error, warn, info, debug, trace)\n- Log output respects verbosity settings from global options (--quiet, --debug)\n- Logs include timestamps and appropriate context information\n- Log messages use consistent formatting and appropriate colors\n- Logging can be redirected to files when needed\n- Debug logs provide detailed information useful for troubleshooting\n- Logging system has minimal performance impact when not in use\n\nEach of these subtasks directly addresses a component of the CLI foundation as specified in the task description, and together they provide a complete implementation of the required functionality. The subtasks are ordered in a logical sequence that respects their dependencies."
        }
      ]
      "subtasks": []
    },
|
||||
{
|
||||
"id": 3,
|
||||
@@ -1474,7 +1369,7 @@
       "id": 22,
       "title": "Create Comprehensive Test Suite for Task Master CLI",
       "description": "Develop a complete testing infrastructure for the Task Master CLI that includes unit, integration, and end-to-end tests to verify all core functionality and error handling.",
-      "status": "pending",
+      "status": "in-progress",
       "dependencies": [
         21
       ],
@@ -1486,7 +1381,7 @@
           "id": 1,
           "title": "Set Up Jest Testing Environment",
           "description": "Configure Jest for the project, including setting up the jest.config.js file, adding necessary dependencies, and creating the initial test directory structure. Implement proper mocking for Claude API interactions, file system operations, and user input/output. Set up test coverage reporting and configure it to run in the CI pipeline.",
-          "status": "pending",
+          "status": "done",
           "dependencies": [],
           "acceptanceCriteria": "- jest.config.js is properly configured for the project"
         },
63 tests/README.md Normal file
@@ -0,0 +1,63 @@

# Task Master Test Suite

This directory contains tests for the Task Master CLI. The tests are organized into different categories to ensure comprehensive test coverage.

## Test Structure

- `unit/`: Unit tests for individual functions and components
- `integration/`: Integration tests for testing interactions between components
- `e2e/`: End-to-end tests for testing complete workflows
- `fixtures/`: Test fixtures and sample data

## Running Tests

To run all tests:

```bash
npm test
```

To run tests in watch mode (for development):

```bash
npm run test:watch
```

To run tests with coverage reporting:

```bash
npm run test:coverage
```

## Testing Approach

### Unit Tests

Unit tests focus on testing individual functions and components in isolation. These tests should be fast and should mock external dependencies.

### Integration Tests

Integration tests focus on testing interactions between components. These tests ensure that components work together correctly.

### End-to-End Tests

End-to-end tests focus on testing complete workflows from a user's perspective. These tests ensure that the CLI works correctly as a whole.

## Test Fixtures

Test fixtures provide sample data for tests. Fixtures should be small, focused, and representative of real-world data.

## Mocking

For external dependencies like file system operations and API calls, we use mocking to isolate the code being tested.

- File system operations: Use `mock-fs` to mock the file system
- API calls: Use Jest's mocking capabilities to mock API responses
## Test Coverage

We aim for at least 80% test coverage for all code paths. Coverage reports can be generated with:

```bash
npm run test:coverage
```
72 tests/fixtures/sample-tasks.js vendored Normal file
@@ -0,0 +1,72 @@

/**
 * Sample tasks data for tests
 */

export const sampleTasks = {
  meta: {
    projectName: "Test Project",
    projectVersion: "1.0.0",
    createdAt: "2023-01-01T00:00:00.000Z",
    updatedAt: "2023-01-01T00:00:00.000Z"
  },
  tasks: [
    {
      id: 1,
      title: "Initialize Project",
      description: "Set up the project structure and dependencies",
      status: "done",
      dependencies: [],
      priority: "high",
      details: "Create directory structure, initialize package.json, and install dependencies",
      testStrategy: "Verify all directories and files are created correctly"
    },
    {
      id: 2,
      title: "Create Core Functionality",
      description: "Implement the main features of the application",
      status: "in-progress",
      dependencies: [1],
      priority: "high",
      details: "Implement user authentication, data processing, and API endpoints",
      testStrategy: "Write unit tests for all core functions"
    },
    {
      id: 3,
      title: "Implement UI Components",
      description: "Create the user interface components",
      status: "pending",
      dependencies: [2],
      priority: "medium",
      details: "Design and implement React components for the user interface",
      testStrategy: "Test components with React Testing Library",
      subtasks: [
        {
          id: 1,
          title: "Create Header Component",
          description: "Implement the header component",
          status: "pending",
          dependencies: [],
          details: "Create a responsive header with navigation links"
        },
        {
          id: 2,
          title: "Create Footer Component",
          description: "Implement the footer component",
          status: "pending",
          dependencies: [],
          details: "Create a footer with copyright information and links"
        }
      ]
    }
  ]
};

export const emptySampleTasks = {
  meta: {
    projectName: "Empty Project",
    projectVersion: "1.0.0",
    createdAt: "2023-01-01T00:00:00.000Z",
    updatedAt: "2023-01-01T00:00:00.000Z"
  },
  tasks: []
};
30 tests/setup.js Normal file
@@ -0,0 +1,30 @@

/**
 * Jest setup file
 *
 * This file is run before each test suite to set up the test environment.
 */

// Mock environment variables
process.env.MODEL = 'sonar-pro';
process.env.MAX_TOKENS = '64000';
process.env.TEMPERATURE = '0.4';
process.env.DEBUG = 'false';
process.env.LOG_LEVEL = 'error'; // Set to error to reduce noise in tests
process.env.DEFAULT_SUBTASKS = '3';
process.env.DEFAULT_PRIORITY = 'medium';
process.env.PROJECT_NAME = 'Test Project';
process.env.PROJECT_VERSION = '1.0.0';

// Add global test helpers if needed
global.wait = (ms) => new Promise(resolve => setTimeout(resolve, ms));

// If needed, silence console during tests
if (process.env.SILENCE_CONSOLE === 'true') {
  global.console = {
    ...console,
    log: jest.fn(),
    info: jest.fn(),
    warn: jest.fn(),
    error: jest.fn(),
  };
}
288 tests/unit/ai-services.test.js Normal file
@@ -0,0 +1,288 @@

/**
 * AI Services module tests
 */

import { jest } from '@jest/globals';
import { parseSubtasksFromText } from '../../scripts/modules/ai-services.js';

// Create a mock log function we can check later
const mockLog = jest.fn();

// Mock dependencies
jest.mock('@anthropic-ai/sdk', () => {
  return {
    Anthropic: jest.fn().mockImplementation(() => ({
      messages: {
        create: jest.fn().mockResolvedValue({
          content: [{ text: 'AI response' }],
        }),
      },
    })),
  };
});

// Use jest.fn() directly for OpenAI mock
const mockOpenAIInstance = {
  chat: {
    completions: {
      create: jest.fn().mockResolvedValue({
        choices: [{ message: { content: 'Perplexity response' } }],
      }),
    },
  },
};
const mockOpenAI = jest.fn().mockImplementation(() => mockOpenAIInstance);

jest.mock('openai', () => {
  return { default: mockOpenAI };
});

jest.mock('dotenv', () => ({
  config: jest.fn(),
}));

jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    model: 'claude-3-sonnet-20240229',
    temperature: 0.7,
    maxTokens: 4000,
  },
  log: mockLog,
  sanitizePrompt: jest.fn(text => text),
}));

jest.mock('../../scripts/modules/ui.js', () => ({
  startLoadingIndicator: jest.fn().mockReturnValue('mockLoader'),
  stopLoadingIndicator: jest.fn(),
}));

// Mock anthropic global object
global.anthropic = {
  messages: {
    create: jest.fn().mockResolvedValue({
      content: [{ text: '[{"id": 1, "title": "Test", "description": "Test", "dependencies": [], "details": "Test"}]' }],
    }),
  },
};

// Mock process.env
const originalEnv = process.env;

describe('AI Services Module', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    process.env = { ...originalEnv };
    process.env.ANTHROPIC_API_KEY = 'test-anthropic-key';
    process.env.PERPLEXITY_API_KEY = 'test-perplexity-key';
  });

  afterEach(() => {
    process.env = originalEnv;
  });

  describe('parseSubtasksFromText function', () => {
    test('should parse subtasks from JSON text', () => {
      const text = `Here's your list of subtasks:

[
  {
    "id": 1,
    "title": "Implement database schema",
    "description": "Design and implement the database schema for user data",
    "dependencies": [],
    "details": "Create tables for users, preferences, and settings"
  },
  {
    "id": 2,
    "title": "Create API endpoints",
    "description": "Develop RESTful API endpoints for user operations",
    "dependencies": [],
    "details": "Implement CRUD operations for user management"
  }
]

These subtasks will help you implement the parent task efficiently.`;

      const result = parseSubtasksFromText(text, 1, 2, 5);

      expect(result).toHaveLength(2);
      expect(result[0]).toEqual({
        id: 1,
        title: 'Implement database schema',
        description: 'Design and implement the database schema for user data',
        status: 'pending',
        dependencies: [],
        details: 'Create tables for users, preferences, and settings',
        parentTaskId: 5
      });
      expect(result[1]).toEqual({
        id: 2,
        title: 'Create API endpoints',
        description: 'Develop RESTful API endpoints for user operations',
        status: 'pending',
        dependencies: [],
        details: 'Implement CRUD operations for user management',
        parentTaskId: 5
      });
    });

    test('should handle subtasks with dependencies', () => {
      const text = `
[
  {
    "id": 1,
    "title": "Setup React environment",
    "description": "Initialize React app with necessary dependencies",
    "dependencies": [],
    "details": "Use Create React App or Vite to set up a new project"
  },
  {
    "id": 2,
    "title": "Create component structure",
    "description": "Design and implement component hierarchy",
    "dependencies": [1],
    "details": "Organize components by feature and reusability"
  }
]`;

      const result = parseSubtasksFromText(text, 1, 2, 5);

      expect(result).toHaveLength(2);
      expect(result[0].dependencies).toEqual([]);
      expect(result[1].dependencies).toEqual([1]);
    });

    test('should handle complex dependency lists', () => {
      const text = `
[
  {
    "id": 1,
    "title": "Setup database",
    "description": "Initialize database structure",
    "dependencies": [],
    "details": "Set up PostgreSQL database"
  },
  {
    "id": 2,
    "title": "Create models",
    "description": "Implement data models",
    "dependencies": [1],
    "details": "Define Prisma models"
  },
  {
    "id": 3,
    "title": "Implement controllers",
    "description": "Create API controllers",
    "dependencies": [1, 2],
    "details": "Build controllers for all endpoints"
  }
]`;

      const result = parseSubtasksFromText(text, 1, 3, 5);

      expect(result).toHaveLength(3);
      expect(result[2].dependencies).toEqual([1, 2]);
    });

    test('should create fallback subtasks for empty text', () => {
      const emptyText = '';

      const result = parseSubtasksFromText(emptyText, 1, 2, 5);

      // Verify fallback subtasks structure
      expect(result).toHaveLength(2);
      expect(result[0]).toMatchObject({
        id: 1,
        title: 'Subtask 1',
        description: 'Auto-generated fallback subtask',
        status: 'pending',
        dependencies: [],
        parentTaskId: 5
      });
      expect(result[1]).toMatchObject({
        id: 2,
        title: 'Subtask 2',
        description: 'Auto-generated fallback subtask',
        status: 'pending',
        dependencies: [],
        parentTaskId: 5
      });
    });

    test('should normalize subtask IDs', () => {
      const text = `
[
  {
    "id": 10,
    "title": "First task with incorrect ID",
    "description": "First description",
    "dependencies": [],
    "details": "First details"
  },
  {
    "id": 20,
    "title": "Second task with incorrect ID",
    "description": "Second description",
    "dependencies": [],
    "details": "Second details"
  }
]`;

      const result = parseSubtasksFromText(text, 1, 2, 5);

      expect(result).toHaveLength(2);
      expect(result[0].id).toBe(1); // Should normalize to starting ID
      expect(result[1].id).toBe(2); // Should normalize to starting ID + 1
    });

    test('should convert string dependencies to numbers', () => {
      const text = `
[
  {
    "id": 1,
    "title": "First task",
    "description": "First description",
    "dependencies": [],
    "details": "First details"
  },
  {
    "id": 2,
    "title": "Second task",
    "description": "Second description",
    "dependencies": ["1"],
    "details": "Second details"
  }
]`;

      const result = parseSubtasksFromText(text, 1, 2, 5);

      expect(result[1].dependencies).toEqual([1]);
      expect(typeof result[1].dependencies[0]).toBe('number');
    });

    test('should create fallback subtasks for invalid JSON', () => {
      const text = `This is not valid JSON and cannot be parsed`;

      const result = parseSubtasksFromText(text, 1, 2, 5);

      // Verify fallback subtasks structure
      expect(result).toHaveLength(2);
      expect(result[0]).toMatchObject({
        id: 1,
        title: 'Subtask 1',
        description: 'Auto-generated fallback subtask',
        status: 'pending',
        dependencies: [],
        parentTaskId: 5
      });
      expect(result[1]).toMatchObject({
        id: 2,
        title: 'Subtask 2',
        description: 'Auto-generated fallback subtask',
        status: 'pending',
        dependencies: [],
        parentTaskId: 5
      });
    });
  });
});
119 tests/unit/commands.test.js Normal file
@@ -0,0 +1,119 @@

/**
 * Commands module tests
 */

import { jest } from '@jest/globals';

// Mock modules
jest.mock('commander');
jest.mock('fs');
jest.mock('path');
jest.mock('../../scripts/modules/ui.js', () => ({
  displayBanner: jest.fn(),
  displayHelp: jest.fn()
}));
jest.mock('../../scripts/modules/task-manager.js');
jest.mock('../../scripts/modules/dependency-manager.js');
jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    projectVersion: '1.5.0'
  },
  log: jest.fn()
}));

// Import after mocking
import { setupCLI } from '../../scripts/modules/commands.js';
import { program } from 'commander';
import fs from 'fs';
import path from 'path';

describe('Commands Module', () => {
  // Set up spies on the mocked modules
  const mockName = jest.spyOn(program, 'name').mockReturnValue(program);
  const mockDescription = jest.spyOn(program, 'description').mockReturnValue(program);
  const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program);
  const mockHelpOption = jest.spyOn(program, 'helpOption').mockReturnValue(program);
  const mockAddHelpCommand = jest.spyOn(program, 'addHelpCommand').mockReturnValue(program);
  const mockOn = jest.spyOn(program, 'on').mockReturnValue(program);
  const mockExistsSync = jest.spyOn(fs, 'existsSync');
  const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
  const mockJoin = jest.spyOn(path, 'join');

  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe('setupCLI function', () => {
    test('should return Commander program instance', () => {
      const result = setupCLI();

      // Verify the program was properly configured
      expect(mockName).toHaveBeenCalledWith('dev');
      expect(mockDescription).toHaveBeenCalledWith('AI-driven development task management');
      expect(mockVersion).toHaveBeenCalled();
      expect(mockHelpOption).toHaveBeenCalledWith('-h, --help', 'Display help');
      expect(mockAddHelpCommand).toHaveBeenCalledWith(false);
      expect(mockOn).toHaveBeenCalled();
      expect(result).toBeTruthy();
    });

    test('should read version from package.json when available', () => {
      // Setup mock for package.json existence and content
      mockExistsSync.mockReturnValue(true);
      mockReadFileSync.mockReturnValue(JSON.stringify({ version: '2.0.0' }));
      mockJoin.mockReturnValue('/mock/path/package.json');

      // Call the setup function
      setupCLI();

      // Get the version callback function
      const versionCallback = mockVersion.mock.calls[0][0];
      expect(typeof versionCallback).toBe('function');

      // Execute the callback and check the result
      const result = versionCallback();
      expect(result).toBe('2.0.0');

      // Verify the correct functions were called
      expect(mockExistsSync).toHaveBeenCalled();
      expect(mockReadFileSync).toHaveBeenCalled();
    });

    test('should use default version when package.json is not available', () => {
      // Setup mock for package.json absence
      mockExistsSync.mockReturnValue(false);

      // Call the setup function
      setupCLI();

      // Get the version callback function
      const versionCallback = mockVersion.mock.calls[0][0];
      expect(typeof versionCallback).toBe('function');

      // Execute the callback and check the result
      const result = versionCallback();
      expect(result).toBe('1.5.0'); // Updated to match the actual CONFIG.projectVersion

      expect(mockExistsSync).toHaveBeenCalled();
    });

    test('should use default version when package.json reading throws an error', () => {
      // Setup mock for package.json reading error
      mockExistsSync.mockReturnValue(true);
      mockReadFileSync.mockImplementation(() => {
        throw new Error('Read error');
      });

      // Call the setup function
      setupCLI();

      // Get the version callback function
      const versionCallback = mockVersion.mock.calls[0][0];
      expect(typeof versionCallback).toBe('function');

      // Execute the callback and check the result
      const result = versionCallback();
      expect(result).toBe('1.5.0'); // Updated to match the actual CONFIG.projectVersion
    });
  });
});
585
tests/unit/dependency-manager.test.js
Normal file
585
tests/unit/dependency-manager.test.js
Normal file
@@ -0,0 +1,585 @@
|
||||
/**
|
||||
* Dependency Manager module tests
|
||||
*/
|
||||
|
||||
import { jest } from '@jest/globals';
|
||||
import {
|
||||
validateTaskDependencies,
|
||||
isCircularDependency,
|
||||
removeDuplicateDependencies,
|
||||
cleanupSubtaskDependencies,
|
||||
ensureAtLeastOneIndependentSubtask,
|
||||
validateAndFixDependencies
|
||||
} from '../../scripts/modules/dependency-manager.js';
|
||||
import * as utils from '../../scripts/modules/utils.js';
|
||||
import { sampleTasks } from '../fixtures/sample-tasks.js';
|
||||
|
||||
// Mock dependencies
|
||||
jest.mock('path');
|
||||
jest.mock('chalk', () => ({
|
||||
green: jest.fn(text => `<green>${text}</green>`),
|
||||
yellow: jest.fn(text => `<yellow>${text}</yellow>`),
|
||||
red: jest.fn(text => `<red>${text}</red>`),
|
||||
cyan: jest.fn(text => `<cyan>${text}</cyan>`),
|
||||
bold: jest.fn(text => `<bold>${text}</bold>`),
|
||||
}));
|
||||
|
||||
jest.mock('boxen', () => jest.fn(text => `[boxed: ${text}]`));
|
||||
|
||||
jest.mock('@anthropic-ai/sdk', () => ({
|
||||
Anthropic: jest.fn().mockImplementation(() => ({})),
|
||||
}));
|
||||
|
||||
// Mock utils module
|
||||
const mockTaskExists = jest.fn();
|
||||
const mockFormatTaskId = jest.fn();
|
||||
const mockFindCycles = jest.fn();
|
||||
const mockLog = jest.fn();
|
||||
const mockReadJSON = jest.fn();
|
||||
const mockWriteJSON = jest.fn();
|
||||
|
||||
jest.mock('../../scripts/modules/utils.js', () => ({
|
||||
log: mockLog,
|
||||
readJSON: mockReadJSON,
|
||||
writeJSON: mockWriteJSON,
|
||||
taskExists: mockTaskExists,
|
||||
formatTaskId: mockFormatTaskId,
|
||||
findCycles: mockFindCycles
|
||||
}));
|
||||
|
||||
jest.mock('../../scripts/modules/ui.js', () => ({
|
||||
displayBanner: jest.fn(),
|
||||
}));
|
||||
|
||||
jest.mock('../../scripts/modules/task-manager.js', () => ({
|
||||
generateTaskFiles: jest.fn(),
|
||||
}));
|
||||
|
||||
// Create a path for test files
|
||||
const TEST_TASKS_PATH = 'tests/fixture/test-tasks.json';
|
||||
|
||||
describe('Dependency Manager Module', () => {
|
||||
beforeEach(() => {
|
||||
jest.clearAllMocks();
|
||||
|
||||
// Set default implementations
|
||||
mockTaskExists.mockImplementation((tasks, id) => {
|
||||
if (Array.isArray(tasks)) {
|
||||
if (typeof id === 'string' && id.includes('.')) {
|
||||
const [taskId, subtaskId] = id.split('.').map(Number);
|
||||
const task = tasks.find(t => t.id === taskId);
|
||||
return task && task.subtasks && task.subtasks.some(st => st.id === subtaskId);
|
||||
}
|
||||
return tasks.some(task => task.id === (typeof id === 'string' ? parseInt(id, 10) : id));
|
||||
}
|
||||
return false;
|
||||
});
|
||||
|
||||
mockFormatTaskId.mockImplementation(id => {
|
||||
if (typeof id === 'string' && id.includes('.')) {
|
||||
return id;
|
||||
}
|
||||
return parseInt(id, 10);
|
||||
});
|
||||
|
||||
mockFindCycles.mockImplementation((tasks) => {
|
||||
// Simplified cycle detection for testing
|
||||
const dependencyMap = new Map();
|
||||
|
||||
// Build dependency map
|
||||
tasks.forEach(task => {
|
||||
if (task.dependencies) {
|
||||
dependencyMap.set(task.id, task.dependencies);
|
||||
}
|
||||
});
|
||||
|
||||
const visited = new Set();
|
||||
const recursionStack = new Set();
|
||||
|
||||
function dfs(taskId) {
|
||||
visited.add(taskId);
|
||||
recursionStack.add(taskId);
|
||||
|
||||
const dependencies = dependencyMap.get(taskId) || [];
|
||||
for (const depId of dependencies) {
|
||||
if (!visited.has(depId)) {
|
||||
if (dfs(depId)) return true;
|
||||
} else if (recursionStack.has(depId)) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
recursionStack.delete(taskId);
|
||||
return false;
|
||||
}
|
||||
|
||||
// Check for cycles starting from each unvisited node
|
||||
for (const taskId of dependencyMap.keys()) {
|
||||
if (!visited.has(taskId)) {
|
||||
if (dfs(taskId)) return true;
|
||||
}
|
||||
}
|
||||
|
||||
return false;
|
||||
});
|
||||
});
|
||||
|
||||
describe('isCircularDependency function', () => {
|
||||
test('should detect a direct circular dependency', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [2] },
|
||||
{ id: 2, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = isCircularDependency(tasks, 1);
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
|
||||
test('should detect an indirect circular dependency', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [2] },
|
||||
{ id: 2, dependencies: [3] },
|
||||
{ id: 3, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = isCircularDependency(tasks, 1);
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
|
||||
test('should return false for non-circular dependencies', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [2] },
|
||||
{ id: 2, dependencies: [3] },
|
||||
{ id: 3, dependencies: [] }
|
||||
];
|
||||
|
||||
const result = isCircularDependency(tasks, 1);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
test('should handle a task with no dependencies', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [] },
|
||||
{ id: 2, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = isCircularDependency(tasks, 1);
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
test('should handle a task depending on itself', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = isCircularDependency(tasks, 1);
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('validateTaskDependencies function', () => {
|
||||
test('should detect missing dependencies', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [99] }, // 99 doesn't exist
|
||||
{ id: 2, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = validateTaskDependencies(tasks);
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.issues.length).toBeGreaterThan(0);
|
||||
expect(result.issues[0].type).toBe('missing');
|
||||
expect(result.issues[0].taskId).toBe(1);
|
||||
expect(result.issues[0].dependencyId).toBe(99);
|
||||
});
|
||||
|
||||
test('should detect circular dependencies', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [2] },
|
||||
{ id: 2, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = validateTaskDependencies(tasks);
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.issues.some(issue => issue.type === 'circular')).toBe(true);
|
||||
});
|
||||
|
||||
test('should detect self-dependencies', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = validateTaskDependencies(tasks);
|
||||
|
||||
expect(result.valid).toBe(false);
|
||||
expect(result.issues.some(issue =>
|
||||
issue.type === 'self' && issue.taskId === 1
|
||||
)).toBe(true);
|
||||
});
|
||||
|
||||
test('should return valid for correct dependencies', () => {
|
||||
const tasks = [
|
||||
{ id: 1, dependencies: [] },
|
||||
{ id: 2, dependencies: [1] },
|
||||
{ id: 3, dependencies: [1, 2] }
|
||||
];
|
||||
|
||||
const result = validateTaskDependencies(tasks);
|
||||
|
||||
expect(result.valid).toBe(true);
|
||||
expect(result.issues.length).toBe(0);
|
||||
});
|
||||
|
||||
test('should handle tasks with no dependencies property', () => {
|
||||
const tasks = [
|
||||
{ id: 1 }, // Missing dependencies property
|
||||
{ id: 2, dependencies: [1] }
|
||||
];
|
||||
|
||||
const result = validateTaskDependencies(tasks);
|
||||
|
||||
// Should be valid since a missing dependencies property is interpreted as an empty array
|
||||
expect(result.valid).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('removeDuplicateDependencies function', () => {
|
||||
test('should remove duplicate dependencies from tasks', () => {
|
||||
const tasksData = {
|
||||
tasks: [
|
||||
{ id: 1, dependencies: [2, 2, 3, 3, 3] },
|
||||
{ id: 2, dependencies: [3] },
|
||||
{ id: 3, dependencies: [] }
|
||||
]
|
||||
};
|
||||
|
||||
const result = removeDuplicateDependencies(tasksData);
|
||||
|
||||
expect(result.tasks[0].dependencies).toEqual([2, 3]);
|
||||
expect(result.tasks[1].dependencies).toEqual([3]);
|
||||
expect(result.tasks[2].dependencies).toEqual([]);
|
||||
});
|
||||
|
||||
test('should handle empty dependencies array', () => {
|
||||
const tasksData = {
|
||||
tasks: [
|
||||
{ id: 1, dependencies: [] },
|
||||
{ id: 2, dependencies: [1] }
|
||||
]
|
||||
};
|
||||
|
||||
const result = removeDuplicateDependencies(tasksData);
|
||||
|
||||
expect(result.tasks[0].dependencies).toEqual([]);
|
||||
expect(result.tasks[1].dependencies).toEqual([1]);
|
||||
});
|
||||
|
||||
test('should handle tasks with no dependencies property', () => {
|
||||
const tasksData = {
|
||||
tasks: [
|
||||
{ id: 1 }, // No dependencies property
|
||||
{ id: 2, dependencies: [1] }
|
||||
]
|
||||
};
|
||||
|
||||
const result = removeDuplicateDependencies(tasksData);
|
||||
|
||||
expect(result.tasks[0]).not.toHaveProperty('dependencies');
|
||||
expect(result.tasks[1].dependencies).toEqual([1]);
|
||||
});
|
||||
});

  describe('cleanupSubtaskDependencies function', () => {
    test('should remove dependencies to non-existent subtasks', () => {
      const tasksData = {
        tasks: [
          {
            id: 1,
            dependencies: [],
            subtasks: [
              { id: 1, dependencies: [] },
              { id: 2, dependencies: [3] } // Dependency 3 doesn't exist
            ]
          },
          {
            id: 2,
            dependencies: ['1.2'], // Valid subtask dependency
            subtasks: [
              { id: 1, dependencies: ['1.1'] } // Valid subtask dependency
            ]
          }
        ]
      };

      const result = cleanupSubtaskDependencies(tasksData);

      // Should remove the invalid dependency to subtask 3
      expect(result.tasks[0].subtasks[1].dependencies).toEqual([]);
      // Should keep valid dependencies
      expect(result.tasks[1].dependencies).toEqual(['1.2']);
      expect(result.tasks[1].subtasks[0].dependencies).toEqual(['1.1']);
    });

    test('should handle tasks without subtasks', () => {
      const tasksData = {
        tasks: [
          { id: 1, dependencies: [] },
          { id: 2, dependencies: [1] }
        ]
      };

      const result = cleanupSubtaskDependencies(tasksData);

      // Should return the original data unchanged
      expect(result).toEqual(tasksData);
    });
  });

  describe('ensureAtLeastOneIndependentSubtask function', () => {
    test('should clear dependencies of first subtask if none are independent', () => {
      const tasksData = {
        tasks: [
          {
            id: 1,
            subtasks: [
              { id: 1, dependencies: [2] },
              { id: 2, dependencies: [1] }
            ]
          }
        ]
      };

      const result = ensureAtLeastOneIndependentSubtask(tasksData);

      expect(result).toBe(true);
      expect(tasksData.tasks[0].subtasks[0].dependencies).toEqual([]);
      expect(tasksData.tasks[0].subtasks[1].dependencies).toEqual([1]);
    });

    test('should not modify tasks if at least one subtask is independent', () => {
      const tasksData = {
        tasks: [
          {
            id: 1,
            subtasks: [
              { id: 1, dependencies: [] },
              { id: 2, dependencies: [1] }
            ]
          }
        ]
      };

      const result = ensureAtLeastOneIndependentSubtask(tasksData);

      expect(result).toBe(false);
      expect(tasksData.tasks[0].subtasks[0].dependencies).toEqual([]);
      expect(tasksData.tasks[0].subtasks[1].dependencies).toEqual([1]);
    });

    test('should handle tasks without subtasks', () => {
      const tasksData = {
        tasks: [
          { id: 1 },
          { id: 2, dependencies: [1] }
        ]
      };

      const result = ensureAtLeastOneIndependentSubtask(tasksData);

      expect(result).toBe(false);
      expect(tasksData).toEqual({
        tasks: [
          { id: 1 },
          { id: 2, dependencies: [1] }
        ]
      });
    });

    test('should handle empty subtasks array', () => {
      const tasksData = {
        tasks: [
          { id: 1, subtasks: [] }
        ]
      };

      const result = ensureAtLeastOneIndependentSubtask(tasksData);

      expect(result).toBe(false);
      expect(tasksData).toEqual({
        tasks: [
          { id: 1, subtasks: [] }
        ]
      });
    });
  });

  describe('validateAndFixDependencies function', () => {
    test('should fix multiple dependency issues and return true if changes made', () => {
      const tasksData = {
        tasks: [
          {
            id: 1,
            dependencies: [1, 1, 99], // Self-dependency, duplicate, and invalid dependency
            subtasks: [
              { id: 1, dependencies: [2, 2] }, // Duplicate dependencies
              { id: 2, dependencies: [1] }
            ]
          },
          {
            id: 2,
            dependencies: [1],
            subtasks: [
              { id: 1, dependencies: [99] } // Invalid dependency
            ]
          }
        ]
      };

      // Mock taskExists for validating dependencies
      mockTaskExists.mockImplementation((tasks, id) => {
        // Convert id to string for comparison
        const idStr = String(id);

        // Handle subtask references (e.g., "1.2")
        if (idStr.includes('.')) {
          const [parentId, subtaskId] = idStr.split('.').map(Number);
          const task = tasks.find(t => t.id === parentId);
          return task && task.subtasks && task.subtasks.some(st => st.id === subtaskId);
        }

        // Handle regular task references
        const taskId = parseInt(idStr, 10);
        return taskId === 1 || taskId === 2; // Only tasks 1 and 2 exist
      });

      // Make a copy to verify that the original is modified
      const originalData = JSON.parse(JSON.stringify(tasksData));

      const result = validateAndFixDependencies(tasksData);

      expect(result).toBe(true);
      // Check that data has been modified
      expect(tasksData).not.toEqual(originalData);

      // Check specific changes
      // 1. Self-dependency removed
      expect(tasksData.tasks[0].dependencies).not.toContain(1);
      // 2. Invalid dependency removed
      expect(tasksData.tasks[0].dependencies).not.toContain(99);
      // 3. Dependencies have been deduplicated
      if (tasksData.tasks[0].subtasks[0].dependencies.length > 0) {
        expect(tasksData.tasks[0].subtasks[0].dependencies).toEqual(
          expect.arrayContaining([])
        );
      }
      // 4. Invalid subtask dependency removed
      expect(tasksData.tasks[1].subtasks[0].dependencies).toEqual([]);

      // IMPORTANT: Verify no calls to writeJSON with the actual tasks.json
      expect(mockWriteJSON).not.toHaveBeenCalledWith('tasks/tasks.json', expect.anything());
    });

    test('should return false if no changes needed', () => {
      const tasksData = {
        tasks: [
          {
            id: 1,
            dependencies: [],
            subtasks: [
              { id: 1, dependencies: [] }, // Already has an independent subtask
              { id: 2, dependencies: ['1.1'] }
            ]
          },
          {
            id: 2,
            dependencies: [1]
          }
        ]
      };

      // Mock taskExists to validate all dependencies as valid
      mockTaskExists.mockImplementation((tasks, id) => {
        // Convert id to string for comparison
        const idStr = String(id);

        // Handle subtask references
        if (idStr.includes('.')) {
          const [parentId, subtaskId] = idStr.split('.').map(Number);
          const task = tasks.find(t => t.id === parentId);
          return task && task.subtasks && task.subtasks.some(st => st.id === subtaskId);
        }

        // Handle regular task references
        const taskId = parseInt(idStr, 10);
        return taskId === 1 || taskId === 2;
      });

      const originalData = JSON.parse(JSON.stringify(tasksData));
      const result = validateAndFixDependencies(tasksData);

      expect(result).toBe(false);
      // Verify data is unchanged
      expect(tasksData).toEqual(originalData);

      // IMPORTANT: Verify no calls to writeJSON with the actual tasks.json
      expect(mockWriteJSON).not.toHaveBeenCalledWith('tasks/tasks.json', expect.anything());
    });

    test('should handle invalid input', () => {
      expect(validateAndFixDependencies(null)).toBe(false);
      expect(validateAndFixDependencies({})).toBe(false);
      expect(validateAndFixDependencies({ tasks: null })).toBe(false);
      expect(validateAndFixDependencies({ tasks: 'not an array' })).toBe(false);

      // IMPORTANT: Verify no calls to writeJSON with the actual tasks.json
      expect(mockWriteJSON).not.toHaveBeenCalledWith('tasks/tasks.json', expect.anything());
    });

    test('should save changes when tasksPath is provided', () => {
      const tasksData = {
        tasks: [
          {
            id: 1,
            dependencies: [1, 1], // Self-dependency and duplicate
            subtasks: [
              { id: 1, dependencies: [99] } // Invalid dependency
            ]
          }
        ]
      };

      // Mock taskExists for this specific test
      mockTaskExists.mockImplementation((tasks, id) => {
        // Convert id to string for comparison
        const idStr = String(id);

        // Handle subtask references
        if (idStr.includes('.')) {
          const [parentId, subtaskId] = idStr.split('.').map(Number);
          const task = tasks.find(t => t.id === parentId);
          return task && task.subtasks && task.subtasks.some(st => st.id === subtaskId);
        }

        // Handle regular task references
        const taskId = parseInt(idStr, 10);
        return taskId === 1; // Only task 1 exists
      });

      // Copy the original data to verify changes
      const originalData = JSON.parse(JSON.stringify(tasksData));

      // Call the function with our test path instead of the actual tasks.json
      const result = validateAndFixDependencies(tasksData, TEST_TASKS_PATH);

      // First verify that the result is true (changes were made)
      expect(result).toBe(true);

      // Verify the data was modified
      expect(tasksData).not.toEqual(originalData);

      // IMPORTANT: Verify no calls to writeJSON with the actual tasks.json
      expect(mockWriteJSON).not.toHaveBeenCalledWith('tasks/tasks.json', expect.anything());
    });
  });
});

tests/unit/task-finder.test.js (new file, 50 lines)
@@ -0,0 +1,50 @@
/**
 * Task finder tests
 */

import { findTaskById } from '../../scripts/modules/utils.js';
import { sampleTasks, emptySampleTasks } from '../fixtures/sample-tasks.js';

describe('Task Finder', () => {
  describe('findTaskById function', () => {
    test('should find a task by numeric ID', () => {
      const task = findTaskById(sampleTasks.tasks, 2);
      expect(task).toBeDefined();
      expect(task.id).toBe(2);
      expect(task.title).toBe('Create Core Functionality');
    });

    test('should find a task by string ID', () => {
      const task = findTaskById(sampleTasks.tasks, '2');
      expect(task).toBeDefined();
      expect(task.id).toBe(2);
    });

    test('should find a subtask using dot notation', () => {
      const subtask = findTaskById(sampleTasks.tasks, '3.1');
      expect(subtask).toBeDefined();
      expect(subtask.id).toBe(1);
      expect(subtask.title).toBe('Create Header Component');
    });

    test('should return null for non-existent task ID', () => {
      const task = findTaskById(sampleTasks.tasks, 99);
      expect(task).toBeNull();
    });

    test('should return null for non-existent subtask ID', () => {
      const subtask = findTaskById(sampleTasks.tasks, '3.99');
      expect(subtask).toBeNull();
    });

    test('should return null for non-existent parent task ID in subtask notation', () => {
      const subtask = findTaskById(sampleTasks.tasks, '99.1');
      expect(subtask).toBeNull();
    });

    test('should return null when tasks array is empty', () => {
      const task = findTaskById(emptySampleTasks.tasks, 1);
      expect(task).toBeNull();
    });
  });
});
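The dot-notation lookup these tests exercise (parent ID, a dot, then subtask ID) can be sketched as below. This is a hypothetical illustration consistent with the assertions above, not the actual implementation in `scripts/modules/utils.js`.

```javascript
// Hypothetical sketch of findTaskById, consistent with the tests above
// (NOT the actual utils.js implementation).
function findTaskById(tasks, taskId) {
  if (!tasks || tasks.length === 0) return null;

  const idStr = String(taskId); // accept numeric or string IDs

  // Dot notation ("3.1") addresses subtask 1 of parent task 3
  if (idStr.includes('.')) {
    const [parentId, subtaskId] = idStr.split('.').map(Number);
    const parent = tasks.find(t => t.id === parentId);
    if (!parent || !parent.subtasks) return null;
    return parent.subtasks.find(st => st.id === subtaskId) || null;
  }

  return tasks.find(t => t.id === parseInt(idStr, 10)) || null;
}
```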

tests/unit/task-manager.test.js (new file, 153 lines)
@@ -0,0 +1,153 @@
/**
 * Task Manager module tests
 */

import { jest } from '@jest/globals';
import { findNextTask } from '../../scripts/modules/task-manager.js';

// Mock dependencies
jest.mock('fs');
jest.mock('path');
jest.mock('@anthropic-ai/sdk');
jest.mock('cli-table3');
jest.mock('../../scripts/modules/ui.js');
jest.mock('../../scripts/modules/ai-services.js');
jest.mock('../../scripts/modules/dependency-manager.js');
jest.mock('../../scripts/modules/utils.js');

describe('Task Manager Module', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe('findNextTask function', () => {
    test('should return the highest priority task with all dependencies satisfied', () => {
      const tasks = [
        {
          id: 1,
          title: 'Setup Project',
          status: 'done',
          dependencies: [],
          priority: 'high'
        },
        {
          id: 2,
          title: 'Implement Core Features',
          status: 'pending',
          dependencies: [1],
          priority: 'high'
        },
        {
          id: 3,
          title: 'Create Documentation',
          status: 'pending',
          dependencies: [1],
          priority: 'medium'
        },
        {
          id: 4,
          title: 'Deploy Application',
          status: 'pending',
          dependencies: [2, 3],
          priority: 'high'
        }
      ];

      const nextTask = findNextTask(tasks);

      expect(nextTask).toBeDefined();
      expect(nextTask.id).toBe(2);
      expect(nextTask.title).toBe('Implement Core Features');
    });

    test('should prioritize by priority level when dependencies are equal', () => {
      const tasks = [
        {
          id: 1,
          title: 'Setup Project',
          status: 'done',
          dependencies: [],
          priority: 'high'
        },
        {
          id: 2,
          title: 'Low Priority Task',
          status: 'pending',
          dependencies: [1],
          priority: 'low'
        },
        {
          id: 3,
          title: 'Medium Priority Task',
          status: 'pending',
          dependencies: [1],
          priority: 'medium'
        },
        {
          id: 4,
          title: 'High Priority Task',
          status: 'pending',
          dependencies: [1],
          priority: 'high'
        }
      ];

      const nextTask = findNextTask(tasks);

      expect(nextTask.id).toBe(4);
      expect(nextTask.priority).toBe('high');
    });

    test('should return null when all tasks are completed', () => {
      const tasks = [
        {
          id: 1,
          title: 'Setup Project',
          status: 'done',
          dependencies: [],
          priority: 'high'
        },
        {
          id: 2,
          title: 'Implement Features',
          status: 'done',
          dependencies: [1],
          priority: 'high'
        }
      ];

      const nextTask = findNextTask(tasks);

      expect(nextTask).toBeNull();
    });

    test('should return null when all pending tasks have unsatisfied dependencies', () => {
      const tasks = [
        {
          id: 1,
          title: 'Setup Project',
          status: 'pending',
          dependencies: [2],
          priority: 'high'
        },
        {
          id: 2,
          title: 'Implement Features',
          status: 'pending',
          dependencies: [1],
          priority: 'high'
        }
      ];

      const nextTask = findNextTask(tasks);

      expect(nextTask).toBeNull();
    });

    test('should handle empty tasks array', () => {
      const nextTask = findNextTask([]);

      expect(nextTask).toBeNull();
    });
  });
});
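The selection rule these tests pin down (only pending tasks whose dependencies are all done are eligible; highest priority wins) can be sketched as follows. This is a hypothetical illustration consistent with the assertions above, not the actual implementation in `scripts/modules/task-manager.js`; the ID tiebreak is an assumption.

```javascript
// Hypothetical sketch of findNextTask, consistent with the tests above
// (NOT the actual task-manager implementation).
const PRIORITY_ORDER = { high: 3, medium: 2, low: 1 };

function findNextTask(tasks) {
  const doneIds = new Set(
    tasks.filter(t => t.status === 'done').map(t => t.id)
  );
  // A task is eligible when pending and every dependency is already done
  const eligible = tasks.filter(
    t => t.status === 'pending' && (t.dependencies || []).every(d => doneIds.has(d))
  );
  if (eligible.length === 0) return null;
  // Highest priority first; ties broken by lower ID (assumed tiebreak)
  eligible.sort(
    (a, b) =>
      (PRIORITY_ORDER[b.priority] || 0) - (PRIORITY_ORDER[a.priority] || 0) ||
      a.id - b.id
  );
  return eligible[0];
}
```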

tests/unit/ui.test.js (new file, 189 lines)
@@ -0,0 +1,189 @@
/**
 * UI module tests
 */

import { jest } from '@jest/globals';
import {
  getStatusWithColor,
  formatDependenciesWithStatus,
  createProgressBar,
  getComplexityWithColor
} from '../../scripts/modules/ui.js';
import { sampleTasks } from '../fixtures/sample-tasks.js';

// Mock dependencies
jest.mock('chalk', () => {
  const origChalkFn = text => text;
  const chalk = origChalkFn;
  chalk.green = text => text; // Return text as-is for status functions
  chalk.yellow = text => text;
  chalk.red = text => text;
  chalk.cyan = text => text;
  chalk.blue = text => text;
  chalk.gray = text => text;
  chalk.white = text => text;
  chalk.bold = text => text;
  chalk.dim = text => text;

  // Add hex and other methods
  chalk.hex = () => origChalkFn;
  chalk.rgb = () => origChalkFn;

  return chalk;
});

jest.mock('figlet', () => ({
  textSync: jest.fn(() => 'Task Master Banner'),
}));

jest.mock('boxen', () => jest.fn(text => `[boxed: ${text}]`));

jest.mock('ora', () => jest.fn(() => ({
  start: jest.fn(),
  succeed: jest.fn(),
  fail: jest.fn(),
  stop: jest.fn(),
})));

jest.mock('cli-table3', () => jest.fn().mockImplementation(() => ({
  push: jest.fn(),
  toString: jest.fn(() => 'Table Content'),
})));

jest.mock('gradient-string', () => jest.fn(() => jest.fn(text => text)));

jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    projectName: 'Test Project',
    projectVersion: '1.0.0',
  },
  log: jest.fn(),
  findTaskById: jest.fn(),
  readJSON: jest.fn(),
  readComplexityReport: jest.fn(),
  truncate: jest.fn(text => text),
}));

jest.mock('../../scripts/modules/task-manager.js', () => ({
  findNextTask: jest.fn(),
  analyzeTaskComplexity: jest.fn(),
}));

describe('UI Module', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe('getStatusWithColor function', () => {
    test('should return done status in green', () => {
      const result = getStatusWithColor('done');
      expect(result).toMatch(/done/);
      expect(result).toContain('✅');
    });

    test('should return pending status in yellow', () => {
      const result = getStatusWithColor('pending');
      expect(result).toMatch(/pending/);
      expect(result).toContain('⏱️');
    });

    test('should return deferred status in gray', () => {
      const result = getStatusWithColor('deferred');
      expect(result).toMatch(/deferred/);
      expect(result).toContain('⏱️');
    });

    test('should return in-progress status in cyan', () => {
      const result = getStatusWithColor('in-progress');
      expect(result).toMatch(/in-progress/);
      expect(result).toContain('🔄');
    });

    test('should return unknown status in red', () => {
      const result = getStatusWithColor('unknown');
      expect(result).toMatch(/unknown/);
      expect(result).toContain('❌');
    });
  });

  describe('formatDependenciesWithStatus function', () => {
    test('should format dependencies with status indicators', () => {
      const dependencies = [1, 2, 3];
      const allTasks = [
        { id: 1, status: 'done' },
        { id: 2, status: 'pending' },
        { id: 3, status: 'deferred' }
      ];

      const result = formatDependenciesWithStatus(dependencies, allTasks);

      expect(result).toBe('✅ 1 (done), ⏱️ 2 (pending), ⏱️ 3 (deferred)');
    });

    test('should return "None" for empty dependencies', () => {
      const result = formatDependenciesWithStatus([], []);
      expect(result).toBe('None');
    });

    test('should handle missing tasks in the task list', () => {
      const dependencies = [1, 999];
      const allTasks = [
        { id: 1, status: 'done' }
      ];

      const result = formatDependenciesWithStatus(dependencies, allTasks);
      expect(result).toBe('✅ 1 (done), 999 (Not found)');
    });
  });

  describe('createProgressBar function', () => {
    test('should create a progress bar with the correct percentage', () => {
      const result = createProgressBar(50, 10);
      expect(result).toBe('█████░░░░░ 50%');
    });

    test('should handle 0% progress', () => {
      const result = createProgressBar(0, 10);
      expect(result).toBe('░░░░░░░░░░ 0%');
    });

    test('should handle 100% progress', () => {
      const result = createProgressBar(100, 10);
      expect(result).toBe('██████████ 100%');
    });

    test('should handle invalid percentages by clamping', () => {
      const result1 = createProgressBar(-10, 10); // -10 should clamp to 0
      expect(result1).toBe('░░░░░░░░░░ 0%');

      const result2 = createProgressBar(150, 10); // 150 should clamp to 100
      expect(result2).toBe('██████████ 100%');
    });
  });

  describe('getComplexityWithColor function', () => {
    test('should return high complexity in red', () => {
      const result = getComplexityWithColor(8);
      expect(result).toMatch(/8/);
      expect(result).toContain('🔴');
    });

    test('should return medium complexity in yellow', () => {
      const result = getComplexityWithColor(5);
      expect(result).toMatch(/5/);
      expect(result).toContain('🟡');
    });

    test('should return low complexity in green', () => {
      const result = getComplexityWithColor(3);
      expect(result).toMatch(/3/);
      expect(result).toContain('🟢');
    });

    test('should handle non-numeric inputs', () => {
      const result = getComplexityWithColor('high');
      expect(result).toMatch(/high/);
      expect(result).toContain('🔴');
    });
  });
});
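The exact bar strings asserted in the progress-bar tests imply a simple clamp-and-fill scheme, which can be sketched as below. This is a hypothetical illustration consistent with the assertions above, not the actual implementation in `scripts/modules/ui.js`.

```javascript
// Hypothetical sketch of createProgressBar, consistent with the tests above
// (NOT the actual ui.js implementation).
function createProgressBar(percent, length) {
  const clamped = Math.max(0, Math.min(100, percent)); // invalid values clamp to [0, 100]
  const filled = Math.round((clamped / 100) * length);
  return '█'.repeat(filled) + '░'.repeat(length - filled) + ` ${clamped}%`;
}
```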

tests/unit/utils.test.js (new file, 44 lines)
@@ -0,0 +1,44 @@
/**
 * Utils module tests
 */

import { truncate } from '../../scripts/modules/utils.js';

describe('Utils Module', () => {
  describe('truncate function', () => {
    test('should return the original string if shorter than maxLength', () => {
      const result = truncate('Hello', 10);
      expect(result).toBe('Hello');
    });

    test('should truncate the string and add ellipsis if longer than maxLength', () => {
      const result = truncate('This is a long string that needs truncation', 20);
      expect(result).toBe('This is a long st...');
    });

    test('should handle empty string', () => {
      const result = truncate('', 10);
      expect(result).toBe('');
    });

    test('should return null when input is null', () => {
      const result = truncate(null, 10);
      expect(result).toBe(null);
    });

    test('should return undefined when input is undefined', () => {
      const result = truncate(undefined, 10);
      expect(result).toBe(undefined);
    });

    test('should handle maxLength of 0 or negative', () => {
      // When maxLength is 0, slice(0, -3) returns 'He'
      const result1 = truncate('Hello', 0);
      expect(result1).toBe('He...');

      // When maxLength is negative, slice(0, -8) returns nothing
      const result2 = truncate('Hello', -5);
      expect(result2).toBe('...');
    });
  });
});
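The `slice(0, -3)` behavior the edge-case comments refer to follows from an implementation along these lines. This is a hypothetical sketch consistent with the assertions above, not the actual code in `scripts/modules/utils.js`.

```javascript
// Hypothetical sketch of truncate, consistent with the tests above
// (NOT the actual utils.js implementation).
function truncate(text, maxLength) {
  if (!text) return text; // null, undefined, and '' pass through unchanged
  if (text.length <= maxLength) return text;
  // Reserve 3 characters for the ellipsis; a zero or negative maxLength
  // makes the slice end index negative, hence 'He...' and '...'
  return text.slice(0, maxLength - 3) + '...';
}
```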