claude-task-master/.cursor/rules/tests.mdc
Eyal Toledano de5e22e8bd feat: Add skipped tests for task-manager and utils modules, and address potential issues
This commit introduces a comprehensive set of skipped tests to both `task-manager.test.js` and `utils.test.js`. These skipped tests serve as a blueprint for future test implementation, outlining the necessary test cases for currently untested functionality.

- Keeps the CLI in sync with the bin/ folder by adding the -r/--research flag
- Fixes an issue where command-line arguments were parsed incorrectly
- Ensures a confirmation card is shown when adding or removing dependencies
- Properly formats certain subtask dependencies

**Potentially addressed issues:**

While primarily focused on adding test coverage, this commit also implicitly addresses potential issues by:

- **Improving error handling coverage:** The addition of skipped tests for error scenarios across several functions highlights areas where error handling needs to be robustly tested and potentially improved in the codebase.
- **Enhancing dependency validation:** Skipped tests that exercise dependency validation prompt a review of the validation logic and help ensure its correctness.
- **Standardizing test coverage:** By creating a clear roadmap for testing all functions, this commit contributes to a more standardized and complete test suite, reducing the likelihood of undiscovered bugs in the future.

**task-manager.test.js:**

- Added skipped test blocks for the following functions:
    - `parsePRD`: Includes tests for handling valid JSON responses, malformed JSON, missing tasks in responses, Perplexity AI research integration, Claude fallback, and parallel task processing.
    - `updateTasks`: Covers tests for updating tasks based on context, handling Claude streaming, Perplexity AI integration, scenarios with no tasks to update, and error handling during updates.
    - `generateTaskFiles`: Includes tests for generating task files from tasks.json, formatting dependencies with status indicators, handling tasks without subtasks, empty task arrays, and dependency validation before file generation.
    - `setTaskStatus`: Covers tests for updating task status, subtask status using dot notation, updating multiple tasks, automatic subtask status updates, parent task update suggestions, and handling non-existent task IDs.
    - `updateSingleTaskStatus`: Includes tests for updating regular and subtask statuses, handling parent tasks without subtasks, and non-existent subtask IDs.
    - `listTasks`: Covers tests for displaying all tasks, filtering by status, displaying subtasks, showing completion statistics, identifying the next task, and handling empty task arrays.
    - `expandTask`: Includes tests for generating subtasks, using complexity reports for subtask counts, Perplexity AI integration, appending subtasks, skipping completed tasks, and error handling during subtask generation.
    - `expandAllTasks`: Covers tests for expanding all pending tasks, sorting by complexity, skipping tasks with existing subtasks (unless forced), using task-specific parameters from complexity reports, handling empty task arrays, and error handling for individual tasks.
    - `clearSubtasks`: Includes tests for clearing subtasks from specific and multiple tasks, handling tasks without subtasks, non-existent task IDs, and regenerating task files after clearing subtasks.
    - `addTask`: Covers tests for adding new tasks using AI, handling Claude streaming, validating dependencies, handling malformed AI responses, and using existing task context for generation.

**utils.test.js:**

- Added skipped test blocks for the following functions (a sketch of the skipped-block pattern follows this list):
    - `log`: Tests for logging messages according to log levels and filtering messages below configured levels.
    - `readJSON`: Tests for reading and parsing valid JSON files, handling file not found errors, and invalid JSON formats.
    - `writeJSON`: Tests for writing JSON data to files and handling file write errors.
    - `sanitizePrompt`: Tests for escaping double quotes in prompts and handling prompts without special characters.
    - `readComplexityReport`: Tests for reading and parsing complexity reports, handling missing report files, and custom report paths.
    - `findTaskInComplexityReport`: Tests for finding tasks in reports by ID, handling non-existent task IDs, and invalid report structures.
    - `taskExists`: Tests for verifying existing task and subtask IDs, handling non-existent IDs, and invalid inputs.
    - `formatTaskId`: Tests for formatting numeric and string task IDs and preserving dot notation for subtasks.
    - `findCycles`: Tests for detecting simple and complex cycles in dependency graphs, handling acyclic graphs, and empty dependency maps.
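
As an illustration of the pattern these entries follow, a skipped block in Jest can be written with `describe.skip` and `test.todo`; a minimal sketch for `readJSON` (the test bodies are placeholders, not the exact ones from this commit):

```javascript
// Skipped blueprint for readJSON: documents intended behavior without running.
describe.skip('readJSON', () => {
  test('should read and parse a valid JSON file', () => {
    // Arrange: mock fs.readFileSync to return valid JSON
    // Act: call readJSON with a sample path
    // Assert: the parsed object matches the fixture
  });

  test.todo('should handle file not found errors');
  test.todo('should handle invalid JSON formats');
});
```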

These skipped tests provide a clear roadmap for future test development, ensuring comprehensive coverage for core functionalities in both modules. They document the intended behavior of each function and outline various scenarios, including happy paths, edge cases, and error conditions, thereby improving the overall test strategy and maintainability of the Task Master CLI.
2025-03-24 18:54:35 -04:00


---
description: Guidelines for implementing and maintaining tests for Task Master CLI
globs: "**/*.test.js,tests/**/*"
---
# Testing Guidelines for Task Master CLI
## Test Organization Structure
- **Unit Tests**
  - Located in `tests/unit/`
  - Test individual functions and utilities in isolation
  - Mock all external dependencies
  - Keep tests small, focused, and fast
  - Example naming: `utils.test.js`, `task-manager.test.js`
- **Integration Tests**
  - Located in `tests/integration/`
  - Test interactions between modules
  - Focus on component interfaces rather than implementation details
  - Use more realistic but still controlled test environments
  - Example naming: `task-workflow.test.js`, `command-integration.test.js`
- **End-to-End Tests**
  - Located in `tests/e2e/`
  - Test complete workflows from a user perspective
  - Focus on CLI commands as they would be used by users
  - Example naming: `create-task.e2e.test.js`, `expand-task.e2e.test.js`
- **Test Fixtures**
  - Located in `tests/fixtures/`
  - Provide reusable test data
  - Keep fixtures small and representative
  - Export fixtures as named exports for reuse (see the sketch after this list)
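For instance, a fixture module might look like the following minimal sketch (the file name and task shape are illustrative, not taken from the repository):
```javascript
// tests/fixtures/sample-tasks.js (hypothetical fixture module)
// Named exports let individual tests import only the data they need.
export const emptyTasksFile = {
  meta: { projectName: 'Test Project' },
  tasks: []
};

export const singleTask = {
  id: 1,
  title: 'Initialize project',
  status: 'pending',
  dependencies: [],
  subtasks: []
};
```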
## Test File Organization
```javascript
// 1. Imports
import { jest } from '@jest/globals';

// 2. Mock setup (MUST come before importing the modules under test)
jest.mock('fs');
jest.mock('@anthropic-ai/sdk');
jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    projectVersion: '1.5.0'
  },
  log: jest.fn()
}));

// 3. Import modules AFTER all mocks are defined
import { functionToTest } from '../../scripts/modules/module-name.js';
import { testFixture } from '../fixtures/fixture-name.js';
import fs from 'fs';

// 4. Set up spies on mocked modules (if needed)
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');

// 5. Test suite with descriptive name
describe('Feature or Function Name', () => {
  // 6. Setup and teardown (if needed)
  beforeEach(() => {
    jest.clearAllMocks();
    // Additional setup code
  });

  afterEach(() => {
    // Cleanup code
  });

  // 7. Grouped tests for related functionality
  describe('specific functionality', () => {
    // 8. Individual test cases with clear descriptions
    test('should behave in expected way when given specific input', () => {
      // Arrange - set up test data
      const input = testFixture.sampleInput;
      mockReadFileSync.mockReturnValue('mocked content');

      // Act - call the function being tested
      const result = functionToTest(input);

      // Assert - verify the result
      expect(result).toBe(expectedOutput);
      expect(mockReadFileSync).toHaveBeenCalledWith(expect.stringContaining('path'));
    });
  });
});
```
## Jest Module Mocking Best Practices
- **Mock Hoisting Behavior**
  - Jest hoists `jest.mock()` calls to the top of the file, even above imports
  - Always declare mocks before importing the modules being tested
  - Use the factory pattern for complex mocks that need access to other variables (see the corrected sketch after the example below)
```javascript
// ✅ DO: Place mocks before imports
jest.mock('commander');
import { program } from 'commander';

// ❌ DON'T: Define variables and then try to use them in mocks
const mockFn = jest.fn();
jest.mock('module', () => ({
  func: mockFn // This won't work due to hoisting!
}));
```
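A working variant of the factory pattern creates the mock function inside the factory and retrieves it through the import afterwards; a minimal sketch (the module path is illustrative, and `functionUnderTest` stands in for any code that calls `log`):
```javascript
// ✅ DO: Create the mock inside the factory, then access it via the import
jest.mock('../../scripts/modules/utils.js', () => ({
  log: jest.fn()
}));
import { log } from '../../scripts/modules/utils.js';

test('should log through the mocked module', () => {
  functionUnderTest(); // assumed to call log() internally
  expect(log).toHaveBeenCalled();
});
```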
- **Mocking Modules with Function References**
  - Use `jest.spyOn()` after imports to create spies on mock functions
  - Reference these spies in test assertions
```javascript
// Mock the module first
jest.mock('fs');

// Import the mocked module
import fs from 'fs';

// Create spies on the mock functions
const mockExistsSync = jest.spyOn(fs, 'existsSync').mockReturnValue(true);

test('should call existsSync', () => {
  // Call function that uses fs.existsSync
  const result = functionUnderTest();
  // Verify the mock was called correctly
  expect(mockExistsSync).toHaveBeenCalled();
});
```
- **Testing Functions with Callbacks**
  - Get the callback from your mock's call arguments
  - Execute it directly with test inputs
  - Verify the results match expectations
```javascript
jest.mock('commander');
import { program } from 'commander';
import { setupCLI } from '../../scripts/modules/commands.js';

const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program);

test('version callback should return correct version', () => {
  // Call the function that registers the callback
  setupCLI();

  // Extract the callback function
  const versionCallback = mockVersion.mock.calls[0][0];
  expect(typeof versionCallback).toBe('function');

  // Execute the callback and verify results
  const result = versionCallback();
  expect(result).toBe('1.5.0');
});
```
## Mocking Guidelines
- **File System Operations**
```javascript
import mockFs from 'mock-fs';

beforeEach(() => {
  mockFs({
    'tasks': {
      'tasks.json': JSON.stringify({
        meta: { projectName: 'Test Project' },
        tasks: []
      })
    }
  });
});

afterEach(() => {
  mockFs.restore();
});
```
- **API Calls (Anthropic/Claude)**
```javascript
import { Anthropic } from '@anthropic-ai/sdk';

jest.mock('@anthropic-ai/sdk');

beforeEach(() => {
  Anthropic.mockImplementation(() => ({
    messages: {
      create: jest.fn().mockResolvedValue({
        content: [{ text: 'Mocked response' }]
      })
    }
  }));
});
```
- **Environment Variables**
```javascript
const originalEnv = process.env;

beforeEach(() => {
  jest.resetModules();
  process.env = { ...originalEnv };
  process.env.MODEL = 'test-model';
});

afterEach(() => {
  process.env = originalEnv;
});
```
## Testing Common Components
- **CLI Commands**
  - Mock the action handlers and verify they're called with correct arguments
  - Test command registration and option parsing
  - Use `commander` test utilities or custom mocks
- **Task Operations**
  - Use sample task fixtures for consistent test data
  - Mock file system operations
  - Test both success and error paths
- **UI Functions**
  - Mock console output and verify correct formatting
  - Test conditional output logic
  - When testing strings with emojis or formatting, use `toContain()` or `toMatch()` rather than exact `toBe()` comparisons
  - For functions with different behavior modes (e.g., `forConsole`, `forTable` parameters), create separate tests for each mode
  - Test the structure of formatted output (e.g., check that it's a comma-separated list with the right number of items) rather than exact string matching
  - When testing chalk-formatted output, remember that strict equality comparison (`toBe()`) can fail even when the visible output looks identical
  - Consider using more flexible assertions like checking for the presence of key elements when working with styled text
  - Mock chalk functions to return the input text to make testing easier while still verifying correct function calls (see the sketch after this list)
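One way to implement that chalk mock is a simplified pass-through, sketched below; only the chalk methods the code under test actually uses need to be stubbed, and real chalk methods are also chainable, which this sketch ignores:
```javascript
// Mock chalk so styled output is plain text while calls remain observable.
jest.mock('chalk', () => ({
  red: jest.fn((text) => text),
  green: jest.fn((text) => text),
  yellow: jest.fn((text) => text),
  bold: jest.fn((text) => text)
}));
import chalk from 'chalk';

test('should keep the raw text available for assertions', () => {
  const message = chalk.red('Something failed');
  expect(message).toBe('Something failed');
  expect(chalk.red).toHaveBeenCalledWith('Something failed');
});
```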
## Test Quality Guidelines
- ✅ **DO**: Write tests before implementing features (TDD approach when possible)
- ✅ **DO**: Test edge cases and error conditions, not just happy paths
- ✅ **DO**: Keep tests independent and isolated from each other
- ✅ **DO**: Use descriptive test names that explain the expected behavior
- ✅ **DO**: Maintain test fixtures separate from test logic
- ✅ **DO**: Aim for 80%+ code coverage, with critical paths at 100%
- ✅ **DO**: Follow the mock-first-then-import pattern for all Jest mocks
- ❌ **DON'T**: Test implementation details that might change
- ❌ **DON'T**: Write brittle tests that depend on specific output formatting
- ❌ **DON'T**: Skip testing error handling and validation
- ❌ **DON'T**: Duplicate test fixtures across multiple test files
- ❌ **DON'T**: Write tests that depend on execution order
- ❌ **DON'T**: Define mock variables before `jest.mock()` calls (they won't be accessible due to hoisting)
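As a small illustration of the descriptive-name and error-path guidelines above, an error-condition test might look like this minimal sketch (it reuses the hypothetical `mockReadFileSync` and `functionToTest` names from the template earlier in this document):
```javascript
test('should throw a helpful error when the tasks file does not exist', () => {
  // Simulate a missing file at the fs boundary
  mockReadFileSync.mockImplementation(() => {
    throw new Error('ENOENT: no such file or directory');
  });

  expect(() => functionToTest('missing/tasks.json')).toThrow(/ENOENT/);
});
```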
## Running Tests
```bash
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run tests with coverage reporting
npm run test:coverage
# Run a specific test file
npm test -- tests/unit/specific-file.test.js
# Run tests matching a pattern
npm test -- -t "pattern to match"
```
## Troubleshooting Test Issues
- **Mock Functions Not Called**
  - Ensure mocks are defined before imports (Jest hoists `jest.mock()` calls)
  - Check that you're referencing the correct mock instance
  - Verify the import paths match exactly
- **Unexpected Mock Behavior**
  - Clear mocks between tests with `jest.clearAllMocks()` in `beforeEach`
  - Check mock implementation for conditional behavior
  - Ensure mock return values are correctly configured for each test
- **Tests Affecting Each Other**
  - Isolate tests by properly mocking shared resources
  - Reset state in `beforeEach` and `afterEach` hooks
  - Avoid global state modifications
See [tests/README.md](mdc:tests/README.md) for more details on the testing approach.
Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options.