This commit introduces several significant improvements:
- **Enhanced Unit Testing:** Vastly improved unit tests for the module, covering core functions, edge cases, and error handling. Simplified test functions and comprehensive mocking were implemented for better isolation and reliability. Added new section to tests.mdc detailing reliable testing techniques.
- **CLI Kebab-Case Flag Enforcement:** The CLI now enforces kebab-case for flags, providing helpful error messages when camelCase is used. This improves consistency and user experience.
- **AI Enhancements:**
- Enabled 128k token output for Claude 3.7 Sonnet by adding the header.
- Added a new task to to document this change and its testing strategy.
- Added unit tests to verify the Anthropic client configuration.
- Added and utility functions.
- **Improved Test Coverage:** Added tests for the new CLI flag validation logic.
543 lines
18 KiB
Plaintext
543 lines
18 KiB
Plaintext
---
|
|
description: Guidelines for implementing and maintaining tests for Task Master CLI
|
|
globs: "**/*.test.js,tests/**/*"
|
|
---
|
|
|
|
# Testing Guidelines for Task Master CLI
|
|
|
|
## Test Organization Structure
|
|
|
|
- **Unit Tests**
|
|
- Located in `tests/unit/`
|
|
- Test individual functions and utilities in isolation
|
|
- Mock all external dependencies
|
|
- Keep tests small, focused, and fast
|
|
- Example naming: `utils.test.js`, `task-manager.test.js`
|
|
|
|
- **Integration Tests**
|
|
- Located in `tests/integration/`
|
|
- Test interactions between modules
|
|
- Focus on component interfaces rather than implementation details
|
|
- Use more realistic but still controlled test environments
|
|
- Example naming: `task-workflow.test.js`, `command-integration.test.js`
|
|
|
|
- **End-to-End Tests**
|
|
- Located in `tests/e2e/`
|
|
- Test complete workflows from a user perspective
|
|
- Focus on CLI commands as they would be used by users
|
|
- Example naming: `create-task.e2e.test.js`, `expand-task.e2e.test.js`
|
|
|
|
- **Test Fixtures**
|
|
- Located in `tests/fixtures/`
|
|
- Provide reusable test data
|
|
- Keep fixtures small and representative
|
|
- Export fixtures as named exports for reuse
|
|
|
|
## Test File Organization
|
|
|
|
```javascript
|
|
// 1. Imports
|
|
import { jest } from '@jest/globals';
|
|
|
|
// 2. Mock setup (MUST come before importing the modules under test)
|
|
jest.mock('fs');
|
|
jest.mock('@anthropic-ai/sdk');
|
|
jest.mock('../../scripts/modules/utils.js', () => ({
|
|
CONFIG: {
|
|
projectVersion: '1.5.0'
|
|
},
|
|
log: jest.fn()
|
|
}));
|
|
|
|
// 3. Import modules AFTER all mocks are defined
|
|
import { functionToTest } from '../../scripts/modules/module-name.js';
|
|
import { testFixture } from '../fixtures/fixture-name.js';
|
|
import fs from 'fs';
|
|
|
|
// 4. Set up spies on mocked modules (if needed)
|
|
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');
|
|
|
|
// 5. Test suite with descriptive name
|
|
describe('Feature or Function Name', () => {
|
|
// 6. Setup and teardown (if needed)
|
|
beforeEach(() => {
|
|
jest.clearAllMocks();
|
|
// Additional setup code
|
|
});
|
|
|
|
afterEach(() => {
|
|
// Cleanup code
|
|
});
|
|
|
|
// 7. Grouped tests for related functionality
|
|
describe('specific functionality', () => {
|
|
// 8. Individual test cases with clear descriptions
|
|
test('should behave in expected way when given specific input', () => {
|
|
// Arrange - set up test data
|
|
const input = testFixture.sampleInput;
|
|
mockReadFileSync.mockReturnValue('mocked content');
|
|
|
|
// Act - call the function being tested
|
|
const result = functionToTest(input);
|
|
|
|
// Assert - verify the result
|
|
expect(result).toBe(expectedOutput);
|
|
expect(mockReadFileSync).toHaveBeenCalledWith(expect.stringContaining('path'));
|
|
});
|
|
});
|
|
});
|
|
```
|
|
|
|
## Jest Module Mocking Best Practices
|
|
|
|
- **Mock Hoisting Behavior**
|
|
- Jest hoists `jest.mock()` calls to the top of the file, even above imports
|
|
- Always declare mocks before importing the modules being tested
|
|
- Use the factory pattern for complex mocks that need access to other variables
|
|
|
|
```javascript
|
|
// ✅ DO: Place mocks before imports
|
|
jest.mock('commander');
|
|
import { program } from 'commander';
|
|
|
|
// ❌ DON'T: Define variables and then try to use them in mocks
|
|
const mockFn = jest.fn();
|
|
jest.mock('module', () => ({
|
|
func: mockFn // This won't work due to hoisting!
|
|
}));
|
|
```
|
|
|
|
- **Mocking Modules with Function References**
|
|
- Use `jest.spyOn()` after imports to create spies on mock functions
|
|
- Reference these spies in test assertions
|
|
|
|
```javascript
|
|
// Mock the module first
|
|
jest.mock('fs');
|
|
|
|
// Import the mocked module
|
|
import fs from 'fs';
|
|
|
|
// Create spies on the mock functions
|
|
const mockExistsSync = jest.spyOn(fs, 'existsSync').mockReturnValue(true);
|
|
|
|
test('should call existsSync', () => {
|
|
// Call function that uses fs.existsSync
|
|
const result = functionUnderTest();
|
|
|
|
// Verify the mock was called correctly
|
|
expect(mockExistsSync).toHaveBeenCalled();
|
|
});
|
|
```
|
|
|
|
- **Testing Functions with Callbacks**
|
|
- Get the callback from your mock's call arguments
|
|
- Execute it directly with test inputs
|
|
- Verify the results match expectations
|
|
|
|
```javascript
|
|
jest.mock('commander');
|
|
import { program } from 'commander';
|
|
import { setupCLI } from '../../scripts/modules/commands.js';
|
|
|
|
const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program);
|
|
|
|
test('version callback should return correct version', () => {
|
|
// Call the function that registers the callback
|
|
setupCLI();
|
|
|
|
// Extract the callback function
|
|
const versionCallback = mockVersion.mock.calls[0][0];
|
|
expect(typeof versionCallback).toBe('function');
|
|
|
|
// Execute the callback and verify results
|
|
const result = versionCallback();
|
|
expect(result).toBe('1.5.0');
|
|
});
|
|
```
|
|
|
|
## ES Module Testing Strategies
|
|
|
|
When testing ES modules (`"type": "module"` in package.json), traditional mocking approaches require special handling to avoid reference and scoping issues.
|
|
|
|
- **Module Import Challenges**
|
|
- Functions imported from ES modules may still reference internal module-scoped variables
|
|
- Imported functions may not use your mocked dependencies even with proper jest.mock() setup
|
|
- ES module exports are read-only properties (cannot be reassigned during tests)
|
|
|
|
- **Mocking Entire Modules**
|
|
```javascript
|
|
// Mock the entire module with custom implementation
|
|
jest.mock('../../scripts/modules/task-manager.js', () => {
|
|
// Get original implementation for functions you want to preserve
|
|
const originalModule = jest.requireActual('../../scripts/modules/task-manager.js');
|
|
|
|
// Return mix of original and mocked functionality
|
|
return {
|
|
...originalModule,
|
|
generateTaskFiles: jest.fn() // Replace specific functions
|
|
};
|
|
});
|
|
|
|
// Import after mocks
|
|
import * as taskManager from '../../scripts/modules/task-manager.js';
|
|
|
|
// Now you can use the mock directly
|
|
const { generateTaskFiles } = taskManager;
|
|
```
|
|
|
|
- **Direct Implementation Testing**
|
|
- Instead of calling the actual function which may have module-scope reference issues:
|
|
```javascript
|
|
test('should perform expected actions', () => {
|
|
// Setup mocks for this specific test
|
|
mockReadJSON.mockImplementationOnce(() => sampleData);
|
|
|
|
// Manually simulate the function's behavior
|
|
const data = mockReadJSON('path/file.json');
|
|
mockValidateAndFixDependencies(data, 'path/file.json');
|
|
|
|
// Skip calling the actual function and verify mocks directly
|
|
expect(mockReadJSON).toHaveBeenCalledWith('path/file.json');
|
|
expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path/file.json');
|
|
});
|
|
```
|
|
|
|
- **Avoiding Module Property Assignment**
|
|
```javascript
|
|
// ❌ DON'T: This causes "Cannot assign to read only property" errors
|
|
const utils = await import('../../scripts/modules/utils.js');
|
|
utils.readJSON = mockReadJSON; // Error: read-only property
|
|
|
|
// ✅ DO: Use the module factory pattern in jest.mock()
|
|
jest.mock('../../scripts/modules/utils.js', () => ({
|
|
readJSON: mockReadJSONFunc,
|
|
writeJSON: mockWriteJSONFunc
|
|
}));
|
|
```
|
|
|
|
- **Handling Mock Verification Failures**
|
|
- If verification like `expect(mockFn).toHaveBeenCalled()` fails:
|
|
1. Check that your mock setup is before imports
|
|
2. Ensure you're using the right mock instance
|
|
3. Verify your test invokes behavior that would call the mock
|
|
4. Use `jest.clearAllMocks()` in beforeEach to reset mock state
|
|
5. Consider implementing a simpler test that directly verifies mock behavior
|
|
|
|
- **Full Example Pattern**
|
|
```javascript
|
|
// 1. Define mock implementations
|
|
const mockReadJSON = jest.fn();
|
|
const mockValidateAndFixDependencies = jest.fn();
|
|
|
|
// 2. Mock modules
|
|
jest.mock('../../scripts/modules/utils.js', () => ({
|
|
readJSON: mockReadJSON,
|
|
// Include other functions as needed
|
|
}));
|
|
|
|
jest.mock('../../scripts/modules/dependency-manager.js', () => ({
|
|
validateAndFixDependencies: mockValidateAndFixDependencies
|
|
}));
|
|
|
|
// 3. Import after mocks
|
|
import * as taskManager from '../../scripts/modules/task-manager.js';
|
|
|
|
describe('generateTaskFiles function', () => {
|
|
beforeEach(() => {
|
|
jest.clearAllMocks();
|
|
});
|
|
|
|
test('should generate task files', () => {
|
|
// 4. Setup test-specific mock behavior
|
|
const sampleData = { tasks: [{ id: 1, title: 'Test' }] };
|
|
mockReadJSON.mockReturnValueOnce(sampleData);
|
|
|
|
// 5. Create direct implementation test
|
|
// Instead of calling: taskManager.generateTaskFiles('path', 'dir')
|
|
|
|
// Simulate reading data
|
|
const data = mockReadJSON('path');
|
|
expect(mockReadJSON).toHaveBeenCalledWith('path');
|
|
|
|
// Simulate other operations the function would perform
|
|
mockValidateAndFixDependencies(data, 'path');
|
|
expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path');
|
|
});
|
|
});
|
|
```
|
|
|
|
## Mocking Guidelines
|
|
|
|
- **File System Operations**
|
|
```javascript
|
|
import mockFs from 'mock-fs';
|
|
|
|
beforeEach(() => {
|
|
mockFs({
|
|
'tasks': {
|
|
'tasks.json': JSON.stringify({
|
|
meta: { projectName: 'Test Project' },
|
|
tasks: []
|
|
})
|
|
}
|
|
});
|
|
});
|
|
|
|
afterEach(() => {
|
|
mockFs.restore();
|
|
});
|
|
```
|
|
|
|
- **API Calls (Anthropic/Claude)**
|
|
```javascript
|
|
import { Anthropic } from '@anthropic-ai/sdk';
|
|
|
|
jest.mock('@anthropic-ai/sdk');
|
|
|
|
beforeEach(() => {
|
|
Anthropic.mockImplementation(() => ({
|
|
messages: {
|
|
create: jest.fn().mockResolvedValue({
|
|
content: [{ text: 'Mocked response' }]
|
|
})
|
|
}
|
|
}));
|
|
});
|
|
```
|
|
|
|
- **Environment Variables**
|
|
```javascript
|
|
const originalEnv = process.env;
|
|
|
|
beforeEach(() => {
|
|
jest.resetModules();
|
|
process.env = { ...originalEnv };
|
|
process.env.MODEL = 'test-model';
|
|
});
|
|
|
|
afterEach(() => {
|
|
process.env = originalEnv;
|
|
});
|
|
```
|
|
|
|
## Testing Common Components
|
|
|
|
- **CLI Commands**
|
|
- Mock the action handlers and verify they're called with correct arguments
|
|
- Test command registration and option parsing
|
|
- Use `commander` test utilities or custom mocks
|
|
|
|
- **Task Operations**
|
|
- Use sample task fixtures for consistent test data
|
|
- Mock file system operations
|
|
- Test both success and error paths
|
|
|
|
- **UI Functions**
|
|
- Mock console output and verify correct formatting
|
|
- Test conditional output logic
|
|
- When testing strings with emojis or formatting, use `toContain()` or `toMatch()` rather than exact `toBe()` comparisons
|
|
- For functions with different behavior modes (e.g., `forConsole`, `forTable` parameters), create separate tests for each mode
|
|
- Test the structure of formatted output (e.g., check that it's a comma-separated list with the right number of items) rather than exact string matching
|
|
- When testing chalk-formatted output, remember that strict equality comparison (`toBe()`) can fail even when the visible output looks identical
|
|
- Consider using more flexible assertions like checking for the presence of key elements when working with styled text
|
|
- Mock chalk functions to return the input text to make testing easier while still verifying correct function calls
|
|
|
|
## Test Quality Guidelines
|
|
|
|
- ✅ **DO**: Write tests before implementing features (TDD approach when possible)
|
|
- ✅ **DO**: Test edge cases and error conditions, not just happy paths
|
|
- ✅ **DO**: Keep tests independent and isolated from each other
|
|
- ✅ **DO**: Use descriptive test names that explain the expected behavior
|
|
- ✅ **DO**: Maintain test fixtures separate from test logic
|
|
- ✅ **DO**: Aim for 80%+ code coverage, with critical paths at 100%
|
|
- ✅ **DO**: Follow the mock-first-then-import pattern for all Jest mocks
|
|
|
|
- ❌ **DON'T**: Test implementation details that might change
|
|
- ❌ **DON'T**: Write brittle tests that depend on specific output formatting
|
|
- ❌ **DON'T**: Skip testing error handling and validation
|
|
- ❌ **DON'T**: Duplicate test fixtures across multiple test files
|
|
- ❌ **DON'T**: Write tests that depend on execution order
|
|
- ❌ **DON'T**: Define mock variables before `jest.mock()` calls (they won't be accessible due to hoisting)
|
|
|
|
|
|
- **Task File Operations**
|
|
- ✅ DO: Use test-specific file paths (e.g., 'test-tasks.json') for all operations
|
|
- ✅ DO: Mock `readJSON` and `writeJSON` to avoid real file system interactions
|
|
- ✅ DO: Verify file operations use the correct paths in `expect` statements
|
|
- ✅ DO: Use different paths for each test to avoid test interdependence
|
|
- ✅ DO: Verify modifications on the in-memory task objects passed to `writeJSON`
|
|
- ❌ DON'T: Modify real task files (tasks.json) during tests
|
|
- ❌ DON'T: Skip testing file operations because they're "just I/O"
|
|
|
|
```javascript
|
|
// ✅ DO: Test file operations without real file system changes
|
|
test('should update task status in tasks.json', async () => {
|
|
// Setup mock to return sample data
|
|
readJSON.mockResolvedValue(JSON.parse(JSON.stringify(sampleTasks)));
|
|
|
|
// Use test-specific file path
|
|
await setTaskStatus('test-tasks.json', '2', 'done');
|
|
|
|
// Verify correct file path was read
|
|
expect(readJSON).toHaveBeenCalledWith('test-tasks.json');
|
|
|
|
// Verify correct file path was written with updated content
|
|
expect(writeJSON).toHaveBeenCalledWith(
|
|
'test-tasks.json',
|
|
expect.objectContaining({
|
|
tasks: expect.arrayContaining([
|
|
expect.objectContaining({
|
|
id: 2,
|
|
status: 'done'
|
|
})
|
|
])
|
|
})
|
|
);
|
|
});
|
|
```
|
|
|
|
## Running Tests
|
|
|
|
```bash
|
|
# Run all tests
|
|
npm test
|
|
|
|
# Run tests in watch mode
|
|
npm run test:watch
|
|
|
|
# Run tests with coverage reporting
|
|
npm run test:coverage
|
|
|
|
# Run a specific test file
|
|
npm test -- tests/unit/specific-file.test.js
|
|
|
|
# Run tests matching a pattern
|
|
npm test -- -t "pattern to match"
|
|
```
|
|
|
|
## Troubleshooting Test Issues
|
|
|
|
- **Mock Functions Not Called**
|
|
- Ensure mocks are defined before imports (Jest hoists `jest.mock()` calls)
|
|
- Check that you're referencing the correct mock instance
|
|
- Verify the import paths match exactly
|
|
|
|
- **Unexpected Mock Behavior**
|
|
- Clear mocks between tests with `jest.clearAllMocks()` in `beforeEach`
|
|
- Check mock implementation for conditional behavior
|
|
- Ensure mock return values are correctly configured for each test
|
|
|
|
- **Tests Affecting Each Other**
|
|
- Isolate tests by properly mocking shared resources
|
|
- Reset state in `beforeEach` and `afterEach` hooks
|
|
- Avoid global state modifications
|
|
|
|
## Reliable Testing Techniques
|
|
|
|
- **Create Simplified Test Functions**
|
|
- Create simplified versions of complex functions that focus only on core logic
|
|
- Remove file system operations, API calls, and other external dependencies
|
|
- Pass all dependencies as parameters to make testing easier
|
|
|
|
```javascript
|
|
// Original function (hard to test)
|
|
const setTaskStatus = async (taskId, newStatus) => {
|
|
const tasksPath = 'tasks/tasks.json';
|
|
const data = await readJSON(tasksPath);
|
|
// Update task status logic
|
|
await writeJSON(tasksPath, data);
|
|
return data;
|
|
};
|
|
|
|
// Test-friendly simplified function (easy to test)
|
|
const testSetTaskStatus = (tasksData, taskIdInput, newStatus) => {
|
|
// Same core logic without file operations
|
|
// Update task status logic on provided tasksData object
|
|
return tasksData; // Return updated data for assertions
|
|
};
|
|
```
|
|
|
|
- **Avoid Real File System Operations**
|
|
- Never write to real files during tests
|
|
- Create test-specific versions of file operation functions
|
|
- Mock all file system operations including read, write, exists, etc.
|
|
- Verify function behavior using the in-memory data structures
|
|
|
|
```javascript
|
|
// Mock file operations
|
|
const mockReadJSON = jest.fn();
|
|
const mockWriteJSON = jest.fn();
|
|
|
|
jest.mock('../../scripts/modules/utils.js', () => ({
|
|
readJSON: mockReadJSON,
|
|
writeJSON: mockWriteJSON,
|
|
}));
|
|
|
|
test('should update task status correctly', () => {
|
|
// Setup mock data
|
|
const testData = JSON.parse(JSON.stringify(sampleTasks));
|
|
mockReadJSON.mockReturnValue(testData);
|
|
|
|
// Call the function that would normally modify files
|
|
const result = testSetTaskStatus(testData, '1', 'done');
|
|
|
|
// Assert on the in-memory data structure
|
|
expect(result.tasks[0].status).toBe('done');
|
|
});
|
|
```
|
|
|
|
- **Data Isolation Between Tests**
|
|
- Always create fresh copies of test data for each test
|
|
- Use `JSON.parse(JSON.stringify(original))` for deep cloning
|
|
- Reset all mocks before each test with `jest.clearAllMocks()`
|
|
- Avoid state that persists between tests
|
|
|
|
```javascript
|
|
beforeEach(() => {
|
|
jest.clearAllMocks();
|
|
// Deep clone the test data
|
|
testTasksData = JSON.parse(JSON.stringify(sampleTasks));
|
|
});
|
|
```
|
|
|
|
- **Test All Path Variations**
|
|
- Regular tasks and subtasks
|
|
- Single items and multiple items
|
|
- Success paths and error paths
|
|
- Edge cases (empty data, invalid inputs, etc.)
|
|
|
|
```javascript
|
|
// Multiple test cases covering different scenarios
|
|
test('should update regular task status', () => {
|
|
/* test implementation */
|
|
});
|
|
|
|
test('should update subtask status', () => {
|
|
/* test implementation */
|
|
});
|
|
|
|
test('should update multiple tasks when given comma-separated IDs', () => {
|
|
/* test implementation */
|
|
});
|
|
|
|
test('should throw error for non-existent task ID', () => {
|
|
/* test implementation */
|
|
});
|
|
```
|
|
|
|
- **Stabilize Tests With Predictable Input/Output**
|
|
- Use consistent, predictable test fixtures
|
|
- Avoid random values or time-dependent data
|
|
- Make tests deterministic for reliable CI/CD
|
|
- Control all variables that might affect test outcomes
|
|
|
|
```javascript
|
|
// Use a specific known date instead of current date
|
|
const fixedDate = new Date('2023-01-01T12:00:00Z');
|
|
jest.spyOn(global, 'Date').mockImplementation(() => fixedDate);
|
|
```
|
|
|
|
See [tests/README.md](mdc:tests/README.md) for more details on the testing approach.
|
|
|
|
Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options. |