This commit introduces a major refactoring of the MCP server implementation to prioritize direct function calls over CLI execution, enhancing performance and reliability. It also includes substantial updates to documentation for consistency and interlinking.
**MCP Server & Core Logic Refactoring:**
1. **Introduce Direct Function Wrappers ():**
* Created to house direct wrappers for core Task Master functions (imported from ).
* Implemented as the first wrapper, calling .
* Added utility within to centralize file location logic, removing duplication.
* Established the map for registering these wrappers.
2. **Enhance MCP Utilities ():**
* Added : A primary utility function that streamlines MCP tool methods. It handles logging, argument processing (including project root normalization), calling the direct action function (e.g., ), processing results via , and formatting the final MCP response (sketched below).
* Added : Standardizes processing of objects returned by direct function wrappers.
* Added : Filters sensitive/large fields (like , ) from responses sent to the MCP client.
* Added , , to support the new workflow.
* Refactored to use internally, simplifying its usage (though it's now primarily a fallback).
3. **Update MCP Tools (, ):**
* Refactored to use with , significantly simplifying the tool's method.
* Updated (initially) to use the improved utilities (further refactoring to use a direct wrapper would follow the same pattern).
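
For illustration, a minimal sketch of the direct-function pattern described above. All names here (`listTasksDirect`, `handleApiResult`, `findTasksJsonPath`, `listTasksCore`) are hypothetical stand-ins, since the actual identifiers are elided in this summary:

```javascript
// Hypothetical sketch of the direct-function-wrapper pattern; names are
// illustrative, not the actual identifiers from this commit.

// Stand-ins for a core Task Master function and the centralized
// file-location utility.
async function listTasksCore(tasksPath, status) {
  return { tasksPath, status, tasks: [] }; // real logic lives in the core modules
}
function findTasksJsonPath(args) {
  return `${args.projectRoot ?? '.'}/tasks/tasks.json`;
}

// Direct wrapper: calls the core function in-process (no CLI child process)
// and returns a normalized { success, data | error } result object.
async function listTasksDirect(args, log) {
  try {
    const data = await listTasksCore(findTasksJsonPath(args), args.status);
    return { success: true, data };
  } catch (error) {
    log.error(`listTasks failed: ${error.message}`);
    return { success: false, error: { message: error.message } };
  }
}

// Shared MCP-side helper: invokes a direct wrapper, then formats the
// result (or the error) as an MCP tool response.
async function handleApiResult(directFn, args, log) {
  const result = await directFn(args, log);
  return result.success
    ? { content: [{ type: 'text', text: JSON.stringify(result.data) }] }
    : { isError: true, content: [{ type: 'text', text: result.error.message }] };
}
```
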
**Documentation Enhancements:**
4. **Comprehensive Interlinking:** Added links across rule files (, , , , , ) to connect related guidelines, improving navigation.
5. **Standardize CLI Syntax ():** Removed legacy command examples, reinforcing the current syntax as the primary CLI.
6. **Add MCP Architecture & Workflow Sections:** Added detailed sections in and explaining MCP server structure and the workflow for adding new MCP tool integrations using the direct function pattern.
7. **Clarify MCP Role:** Updated and to better explain the MCP server's role and its specific utilities.
Overall, this establishes a cleaner, more performant, and maintainable pattern for integrating Task Master functionality with the MCP server, supported by improved documentation.

---
description: Guidelines for implementing and maintaining tests for Task Master CLI
globs: "**/*.test.js,tests/**/*"
---

# Testing Guidelines for Task Master CLI

## Test Organization Structure

- **Unit Tests** (See [`architecture.mdc`](mdc:.cursor/rules/architecture.mdc) for module breakdown)
  - Located in `tests/unit/`
  - Test individual functions and utilities in isolation
  - Mock all external dependencies
  - Keep tests small, focused, and fast
  - Example naming: `utils.test.js`, `task-manager.test.js`

- **Integration Tests**
  - Located in `tests/integration/`
  - Test interactions between modules
  - Focus on component interfaces rather than implementation details
  - Use more realistic but still controlled test environments
  - Example naming: `task-workflow.test.js`, `command-integration.test.js`

- **End-to-End Tests**
  - Located in `tests/e2e/`
  - Test complete workflows from a user perspective
  - Focus on CLI commands as they would be used by users
  - Example naming: `create-task.e2e.test.js`, `expand-task.e2e.test.js`

- **Test Fixtures**
  - Located in `tests/fixtures/`
  - Provide reusable test data
  - Keep fixtures small and representative
  - Export fixtures as named exports for reuse

## Test File Organization

```javascript
// 1. Imports
import { jest } from '@jest/globals';

// 2. Mock setup (MUST come before importing the modules under test)
jest.mock('fs');
jest.mock('@anthropic-ai/sdk');
jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    projectVersion: '1.5.0'
  },
  log: jest.fn()
}));

// 3. Import modules AFTER all mocks are defined
import { functionToTest } from '../../scripts/modules/module-name.js';
import { testFixture } from '../fixtures/fixture-name.js';
import fs from 'fs';

// 4. Set up spies on mocked modules (if needed)
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');

// 5. Test suite with descriptive name
describe('Feature or Function Name', () => {
  // 6. Setup and teardown (if needed)
  beforeEach(() => {
    jest.clearAllMocks();
    // Additional setup code
  });

  afterEach(() => {
    // Cleanup code
  });

  // 7. Grouped tests for related functionality
  describe('specific functionality', () => {
    // 8. Individual test cases with clear descriptions
    test('should behave in expected way when given specific input', () => {
      // Arrange - set up test data
      const input = testFixture.sampleInput;
      mockReadFileSync.mockReturnValue('mocked content');

      // Act - call the function being tested
      const result = functionToTest(input);

      // Assert - verify the result
      expect(result).toBe(expectedOutput);
      expect(mockReadFileSync).toHaveBeenCalledWith(expect.stringContaining('path'));
    });
  });
});
```

## Jest Module Mocking Best Practices

- **Mock Hoisting Behavior**
  - Jest hoists `jest.mock()` calls to the top of the file, even above imports
  - Always declare mocks before importing the modules being tested
  - Use the factory pattern for complex mocks that need access to other variables

```javascript
// ✅ DO: Place mocks before imports
jest.mock('commander');
import { program } from 'commander';

// ❌ DON'T: Define variables and then try to use them in mocks
const mockFn = jest.fn();
jest.mock('module', () => ({
  func: mockFn // This won't work due to hoisting!
}));
```

- **Mocking Modules with Function References**
  - Use `jest.spyOn()` after imports to create spies on mock functions
  - Reference these spies in test assertions

```javascript
// Mock the module first
jest.mock('fs');

// Import the mocked module
import fs from 'fs';

// Create spies on the mock functions
const mockExistsSync = jest.spyOn(fs, 'existsSync').mockReturnValue(true);

test('should call existsSync', () => {
  // Call function that uses fs.existsSync
  const result = functionUnderTest();

  // Verify the mock was called correctly
  expect(mockExistsSync).toHaveBeenCalled();
});
```

- **Testing Functions with Callbacks**
  - Get the callback from your mock's call arguments
  - Execute it directly with test inputs
  - Verify the results match expectations

```javascript
jest.mock('commander');
import { program } from 'commander';
import { setupCLI } from '../../scripts/modules/commands.js';

const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program);

test('version callback should return correct version', () => {
  // Call the function that registers the callback
  setupCLI();

  // Extract the callback function
  const versionCallback = mockVersion.mock.calls[0][0];
  expect(typeof versionCallback).toBe('function');

  // Execute the callback and verify results
  const result = versionCallback();
  expect(result).toBe('1.5.0');
});
```

## ES Module Testing Strategies

When testing ES modules (`"type": "module"` in package.json), traditional mocking approaches require special handling to avoid reference and scoping issues.

- **Module Import Challenges**
  - Functions imported from ES modules may still reference internal module-scoped variables
  - Imported functions may not use your mocked dependencies even with proper `jest.mock()` setup
  - ES module exports are read-only properties (cannot be reassigned during tests)

- **Mocking Entire Modules**

```javascript
// Mock the entire module with custom implementation
jest.mock('../../scripts/modules/task-manager.js', () => {
  // Get original implementation for functions you want to preserve
  const originalModule = jest.requireActual('../../scripts/modules/task-manager.js');

  // Return mix of original and mocked functionality
  return {
    ...originalModule,
    generateTaskFiles: jest.fn() // Replace specific functions
  };
});

// Import after mocks
import * as taskManager from '../../scripts/modules/task-manager.js';

// Now you can use the mock directly
const { generateTaskFiles } = taskManager;
```

- **Direct Implementation Testing**
  - Instead of calling the actual function, which may have module-scope reference issues:

```javascript
test('should perform expected actions', () => {
  // Setup mocks for this specific test
  mockReadJSON.mockImplementationOnce(() => sampleData);

  // Manually simulate the function's behavior
  const data = mockReadJSON('path/file.json');
  mockValidateAndFixDependencies(data, 'path/file.json');

  // Skip calling the actual function and verify mocks directly
  expect(mockReadJSON).toHaveBeenCalledWith('path/file.json');
  expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path/file.json');
});
```

- **Avoiding Module Property Assignment**

```javascript
// ❌ DON'T: This causes "Cannot assign to read only property" errors
const utils = await import('../../scripts/modules/utils.js');
utils.readJSON = mockReadJSON; // Error: read-only property

// ✅ DO: Use the module factory pattern in jest.mock()
jest.mock('../../scripts/modules/utils.js', () => ({
  readJSON: mockReadJSONFunc,
  writeJSON: mockWriteJSONFunc
}));
```

- **Handling Mock Verification Failures**
  - If verification like `expect(mockFn).toHaveBeenCalled()` fails:
    1. Check that your mock setup is before imports
    2. Ensure you're using the right mock instance
    3. Verify your test invokes behavior that would call the mock
    4. Use `jest.clearAllMocks()` in `beforeEach` to reset mock state
    5. Consider implementing a simpler test that directly verifies mock behavior

- **Full Example Pattern**

```javascript
// 1. Define mock implementations
const mockReadJSON = jest.fn();
const mockValidateAndFixDependencies = jest.fn();

// 2. Mock modules
jest.mock('../../scripts/modules/utils.js', () => ({
  readJSON: mockReadJSON,
  // Include other functions as needed
}));

jest.mock('../../scripts/modules/dependency-manager.js', () => ({
  validateAndFixDependencies: mockValidateAndFixDependencies
}));

// 3. Import after mocks
import * as taskManager from '../../scripts/modules/task-manager.js';

describe('generateTaskFiles function', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  test('should generate task files', () => {
    // 4. Setup test-specific mock behavior
    const sampleData = { tasks: [{ id: 1, title: 'Test' }] };
    mockReadJSON.mockReturnValueOnce(sampleData);

    // 5. Create direct implementation test
    // Instead of calling: taskManager.generateTaskFiles('path', 'dir')

    // Simulate reading data
    const data = mockReadJSON('path');
    expect(mockReadJSON).toHaveBeenCalledWith('path');

    // Simulate other operations the function would perform
    mockValidateAndFixDependencies(data, 'path');
    expect(mockValidateAndFixDependencies).toHaveBeenCalledWith(data, 'path');
  });
});
```

## Mocking Guidelines

- **File System Operations**

```javascript
import mockFs from 'mock-fs';

beforeEach(() => {
  mockFs({
    'tasks': {
      'tasks.json': JSON.stringify({
        meta: { projectName: 'Test Project' },
        tasks: []
      })
    }
  });
});

afterEach(() => {
  mockFs.restore();
});
```

- **API Calls (Anthropic/Claude)**

```javascript
import { Anthropic } from '@anthropic-ai/sdk';

jest.mock('@anthropic-ai/sdk');

beforeEach(() => {
  Anthropic.mockImplementation(() => ({
    messages: {
      create: jest.fn().mockResolvedValue({
        content: [{ text: 'Mocked response' }]
      })
    }
  }));
});
```

- **Environment Variables**

```javascript
const originalEnv = process.env;

beforeEach(() => {
  jest.resetModules();
  process.env = { ...originalEnv };
  process.env.MODEL = 'test-model';
});

afterEach(() => {
  process.env = originalEnv;
});
```

## Testing Common Components

- **CLI Commands**
  - Mock the action handlers (defined in [`commands.js`](mdc:scripts/modules/commands.js)) and verify they're called with correct arguments
  - Test command registration and option parsing
  - Use `commander` test utilities or custom mocks

- **Task Operations**
  - Use sample task fixtures for consistent test data
  - Mock file system operations
  - Test both success and error paths (see the sketch below)

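As a sketch of covering both paths, assuming `setTaskStatus` rejects for unknown IDs and assuming a hypothetical `sample-tasks.js` fixture file:

```javascript
// Sketch: exercising success and error paths of a task operation with
// mocked file I/O. The fixture path and rejection behavior are assumptions.
import { jest } from '@jest/globals';

const mockReadJSON = jest.fn();
jest.mock('../../scripts/modules/utils.js', () => ({
  readJSON: mockReadJSON,
  writeJSON: jest.fn()
}));

import { setTaskStatus } from '../../scripts/modules/task-manager.js';
import { sampleTasks } from '../fixtures/sample-tasks.js';

test('should resolve when the task exists', async () => {
  mockReadJSON.mockReturnValue(JSON.parse(JSON.stringify(sampleTasks)));
  await expect(setTaskStatus('test-tasks.json', '1', 'done')).resolves.toBeDefined();
});

test('should reject for a non-existent task ID', async () => {
  mockReadJSON.mockReturnValue({ tasks: [] });
  await expect(setTaskStatus('test-tasks.json', '999', 'done')).rejects.toThrow();
});
```
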
- **UI Functions**
  - Mock console output and verify correct formatting
  - Test conditional output logic
  - When testing strings with emojis or formatting, use `toContain()` or `toMatch()` rather than exact `toBe()` comparisons
  - For functions with different behavior modes (e.g., `forConsole`, `forTable` parameters), create separate tests for each mode
  - Test the structure of formatted output (e.g., check that it's a comma-separated list with the right number of items) rather than exact string matching
  - When testing chalk-formatted output, remember that strict equality comparison (`toBe()`) can fail even when the visible output looks identical
  - Consider using more flexible assertions like checking for the presence of key elements when working with styled text
  - Mock chalk functions to return the input text to make testing easier while still verifying correct function calls (see the sketch below)

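A brief sketch of that chalk-mocking approach, using a hypothetical `formatStatus` UI helper:

```javascript
// Sketch: chalk functions are mocked to return their input unchanged, so
// assertions can target content rather than ANSI styling. `formatStatus`
// is a hypothetical helper used for illustration.
import { jest } from '@jest/globals';

jest.mock('chalk', () => ({
  green: jest.fn((text) => text),
  yellow: jest.fn((text) => text)
}));

import chalk from 'chalk';
import { formatStatus } from '../../scripts/modules/ui.js';

test('should include the status text regardless of styling', () => {
  const output = formatStatus('done');
  expect(output).toContain('done'); // flexible assertion instead of exact toBe()
  expect(chalk.green).toHaveBeenCalledWith(expect.stringContaining('done'));
});
```
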
## Test Quality Guidelines

- ✅ **DO**: Write tests before implementing features (TDD approach when possible)
- ✅ **DO**: Test edge cases and error conditions, not just happy paths
- ✅ **DO**: Keep tests independent and isolated from each other
- ✅ **DO**: Use descriptive test names that explain the expected behavior
- ✅ **DO**: Maintain test fixtures separate from test logic
- ✅ **DO**: Aim for 80%+ code coverage, with critical paths at 100%
- ✅ **DO**: Follow the mock-first-then-import pattern for all Jest mocks

- ❌ **DON'T**: Test implementation details that might change
- ❌ **DON'T**: Write brittle tests that depend on specific output formatting
- ❌ **DON'T**: Skip testing error handling and validation
- ❌ **DON'T**: Duplicate test fixtures across multiple test files
- ❌ **DON'T**: Write tests that depend on execution order
- ❌ **DON'T**: Define mock variables before `jest.mock()` calls (they won't be accessible due to hoisting)

- **Task File Operations**
  - ✅ DO: Use test-specific file paths (e.g., 'test-tasks.json') for all operations
  - ✅ DO: Mock `readJSON` and `writeJSON` to avoid real file system interactions
  - ✅ DO: Verify file operations use the correct paths in `expect` statements
  - ✅ DO: Use different paths for each test to avoid test interdependence
  - ✅ DO: Verify modifications on the in-memory task objects passed to `writeJSON`
  - ❌ DON'T: Modify real task files (tasks.json) during tests
  - ❌ DON'T: Skip testing file operations because they're "just I/O"

```javascript
// ✅ DO: Test file operations without real file system changes
test('should update task status in tasks.json', async () => {
  // Setup mock to return sample data
  readJSON.mockResolvedValue(JSON.parse(JSON.stringify(sampleTasks)));

  // Use test-specific file path
  await setTaskStatus('test-tasks.json', '2', 'done');

  // Verify correct file path was read
  expect(readJSON).toHaveBeenCalledWith('test-tasks.json');

  // Verify correct file path was written with updated content
  expect(writeJSON).toHaveBeenCalledWith(
    'test-tasks.json',
    expect.objectContaining({
      tasks: expect.arrayContaining([
        expect.objectContaining({
          id: 2,
          status: 'done'
        })
      ])
    })
  );
});
```

## Running Tests

```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage reporting
npm run test:coverage

# Run a specific test file
npm test -- tests/unit/specific-file.test.js

# Run tests matching a pattern
npm test -- -t "pattern to match"
```

## Troubleshooting Test Issues

- **Mock Functions Not Called**
  - Ensure mocks are defined before imports (Jest hoists `jest.mock()` calls)
  - Check that you're referencing the correct mock instance
  - Verify the import paths match exactly

- **Unexpected Mock Behavior**
  - Clear mocks between tests with `jest.clearAllMocks()` in `beforeEach`
  - Check mock implementation for conditional behavior
  - Ensure mock return values are correctly configured for each test (see the sketch below)

- **Tests Affecting Each Other**
  - Isolate tests by properly mocking shared resources
  - Reset state in `beforeEach` and `afterEach` hooks
  - Avoid global state modifications

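A minimal sketch of the reset-and-reconfigure pattern these points describe (`mockFetchTasks` is a hypothetical mock):

```javascript
// Sketch: isolating mock state between tests.
import { jest } from '@jest/globals';

const mockFetchTasks = jest.fn();

beforeEach(() => {
  jest.clearAllMocks();               // clear call history from previous tests
  mockFetchTasks.mockReturnValue([]); // re-establish a known default per test
});

test('uses a test-specific return value', () => {
  mockFetchTasks.mockReturnValueOnce([{ id: 1 }]); // one-off override
  expect(mockFetchTasks()).toEqual([{ id: 1 }]);
  expect(mockFetchTasks()).toEqual([]);            // falls back to the default
});
```
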
## Common Testing Pitfalls and Solutions

- **Complex Library Mocking**
  - **Problem**: Trying to create full mocks of complex libraries like Commander.js can be error-prone
  - **Solution**: Instead of mocking the entire library, test the command handlers directly by calling your action handlers with the expected arguments

```javascript
// ❌ DON'T: Create complex mocks of Commander.js
class MockCommand {
  constructor() { /* Complex mock implementation */ }
  option() { /* ... */ }
  action() { /* ... */ }
  // Many methods to implement
}

// ✅ DO: Test the command handlers directly
test('should use default PRD path when no arguments provided', async () => {
  // Call the action handler directly with the right params
  await parsePrdAction(undefined, { numTasks: '10', output: 'tasks/tasks.json' });

  // Assert on behavior
  expect(mockParsePRD).toHaveBeenCalledWith('scripts/prd.txt', 'tasks/tasks.json', 10);
});
```

- **ES Module Mocking Challenges**
  - **Problem**: ES modules don't support `require()` and imports are read-only
  - **Solution**: Use Jest's module factory pattern and ensure mocks are defined before imports

```javascript
// ❌ DON'T: Try to modify imported modules
import { detectCamelCaseFlags } from '../../scripts/modules/utils.js';
detectCamelCaseFlags = jest.fn(); // Error: Assignment to constant variable

// ❌ DON'T: Try to use require with ES modules
const utils = require('../../scripts/modules/utils.js'); // Error in ES modules

// ✅ DO: Use Jest module factory pattern
jest.mock('../../scripts/modules/utils.js', () => ({
  detectCamelCaseFlags: jest.fn(),
  toKebabCase: jest.fn()
}));

// Import after mocks are defined
import { detectCamelCaseFlags } from '../../scripts/modules/utils.js';
```

- **Function Redeclaration Errors**
  - **Problem**: Declaring the same function twice in a test file causes errors
  - **Solution**: Use different function names or create local test-specific implementations

```javascript
// ❌ DON'T: Redefine imported functions with the same name
import { detectCamelCaseFlags } from '../../scripts/modules/utils.js';

function detectCamelCaseFlags() { /* Test implementation */ }
// Error: Identifier has already been declared

// ✅ DO: Use a different name for test implementations
function testDetectCamelCaseFlags() { /* Test implementation */ }
```

- **Console.log Circular References**
  - **Problem**: Creating infinite recursion by spying on console.log while also allowing it to log
  - **Solution**: Implement a mock that doesn't call the original function

```javascript
// ❌ DON'T: Create circular references with console.log
const mockConsoleLog = jest.spyOn(console, 'log');
mockConsoleLog.mockImplementation(console.log); // Creates infinite recursion

// ✅ DO: Use a non-recursive mock implementation
const mockConsoleLog = jest.spyOn(console, 'log').mockImplementation(() => {});
```

- **Mock Function Method Issues**
  - **Problem**: Trying to use `jest.fn()` methods on imported functions that aren't properly mocked
  - **Solution**: Create explicit `jest.fn()` mocks for functions you need to call jest methods on

```javascript
// ❌ DON'T: Try to use jest methods on imported functions without proper mocking
import { parsePRD } from '../../scripts/modules/task-manager.js';
parsePRD.mockClear(); // Error: parsePRD.mockClear is not a function

// ✅ DO: Create proper jest.fn() mocks
const mockParsePRD = jest.fn().mockResolvedValue(undefined);
jest.mock('../../scripts/modules/task-manager.js', () => ({
  parsePRD: mockParsePRD
}));
// Now you can use:
mockParsePRD.mockClear();
```

- **EventEmitter Max Listeners Warning**
  - **Problem**: Commander.js adds many listeners in complex mocks, causing warnings
  - **Solution**: Either increase the max listeners limit or avoid deep mocking

```javascript
// Option 1: Increase max listeners if you must mock Commander
class MockCommand extends EventEmitter {
  constructor() {
    super();
    this.setMaxListeners(20); // Avoid MaxListenersExceededWarning
  }
}

// Option 2 (preferred): Test command handlers directly instead
// (as shown in the first example)
```

- **Test Isolation Issues**
  - **Problem**: Tests affecting each other due to shared mock state
  - **Solution**: Reset all mocks in `beforeEach` and use separate test-specific mocks

```javascript
// ❌ DON'T: Allow mock state to persist between tests
const globalMock = jest.fn().mockReturnValue('test');

// ✅ DO: Clear mocks before each test
beforeEach(() => {
  jest.clearAllMocks();
  // Set up test-specific mock behavior
  mockFunction.mockReturnValue('test-specific value');
});
```

## Reliable Testing Techniques

- **Create Simplified Test Functions**
  - Create simplified versions of complex functions that focus only on core logic
  - Remove file system operations, API calls, and other external dependencies
  - Pass all dependencies as parameters to make testing easier

```javascript
// Original function (hard to test)
const setTaskStatus = async (taskId, newStatus) => {
  const tasksPath = 'tasks/tasks.json';
  const data = await readJSON(tasksPath);
  // Update task status logic
  await writeJSON(tasksPath, data);
  return data;
};

// Test-friendly simplified function (easy to test)
const testSetTaskStatus = (tasksData, taskIdInput, newStatus) => {
  // Same core logic without file operations
  // Update task status logic on provided tasksData object
  return tasksData; // Return updated data for assertions
};
```

- **Avoid Real File System Operations**
  - Never write to real files during tests
  - Create test-specific versions of file operation functions
  - Mock all file system operations including read, write, exists, etc.
  - Verify function behavior using the in-memory data structures

```javascript
// Mock file operations
const mockReadJSON = jest.fn();
const mockWriteJSON = jest.fn();

jest.mock('../../scripts/modules/utils.js', () => ({
  readJSON: mockReadJSON,
  writeJSON: mockWriteJSON,
}));

test('should update task status correctly', () => {
  // Setup mock data
  const testData = JSON.parse(JSON.stringify(sampleTasks));
  mockReadJSON.mockReturnValue(testData);

  // Call the function that would normally modify files
  const result = testSetTaskStatus(testData, '1', 'done');

  // Assert on the in-memory data structure
  expect(result.tasks[0].status).toBe('done');
});
```

- **Data Isolation Between Tests**
  - Always create fresh copies of test data for each test
  - Use `JSON.parse(JSON.stringify(original))` for deep cloning
  - Reset all mocks before each test with `jest.clearAllMocks()`
  - Avoid state that persists between tests

```javascript
beforeEach(() => {
  jest.clearAllMocks();
  // Deep clone the test data
  testTasksData = JSON.parse(JSON.stringify(sampleTasks));
});
```

- **Test All Path Variations**
  - Regular tasks and subtasks
  - Single items and multiple items
  - Success paths and error paths
  - Edge cases (empty data, invalid inputs, etc.)

```javascript
// Multiple test cases covering different scenarios
test('should update regular task status', () => {
  /* test implementation */
});

test('should update subtask status', () => {
  /* test implementation */
});

test('should update multiple tasks when given comma-separated IDs', () => {
  /* test implementation */
});

test('should throw error for non-existent task ID', () => {
  /* test implementation */
});
```

- **Stabilize Tests With Predictable Input/Output**
  - Use consistent, predictable test fixtures
  - Avoid random values or time-dependent data
  - Make tests deterministic for reliable CI/CD
  - Control all variables that might affect test outcomes

```javascript
// Use a specific known date instead of current date
const fixedDate = new Date('2023-01-01T12:00:00Z');
jest.spyOn(global, 'Date').mockImplementation(() => fixedDate);
```

See [tests/README.md](mdc:tests/README.md) for more details on the testing approach.

Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options.