This commit introduces significant enhancements to the Task Master CLI, focusing on improved testing infrastructure, Perplexity AI integration for research-backed task updates, and core logic refactoring for better maintainability and functionality.
**Testing Infrastructure Setup:**
- Implemented Jest as the primary testing framework, setting up a comprehensive testing environment.
- Added new test scripts to `package.json`, including `test`, `test:watch`, and `test:coverage`, for streamlined testing workflows (see the sketch after this list).
- Integrated necessary devDependencies for testing, such as `jest` and `mock-fs`, to support unit, integration, and end-to-end testing.
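
These script names match the commands documented in the Running Tests section later in this document; a `package.json` excerpt consistent with them might look like the following (the exact Jest flags are an assumption):

```json
{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage"
  }
}
```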
**Dependency Updates:**
- Updated `package.json` and `package-lock.json` to reflect the latest dependency versions, ensuring project stability and access to the newest features and security patches.
- Upgraded one dependency to version 0.9.16 and `openai` to 4.89.0.
- Added a new dependency (version 2.3.0) and updated related dependencies to their latest versions.
**Perplexity AI Integration for Research-Backed Updates:**
- Introduced an option to leverage Perplexity AI for task updates, enabling research-backed enhancements to task details.
- Implemented logic to initialize a Perplexity AI client when the `PERPLEXITY_API_KEY` environment variable is available.
- Modified the task update function to accept a research flag, allowing dynamic selection between Perplexity AI and Claude AI for task updates based on API key availability and user preference.
- Enhanced the update flow to handle responses from Perplexity AI and update tasks accordingly, including improved error handling and logging for robust operation (a sketch of the selection logic follows below).
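
As an illustration of that flow, the selection logic might resemble the following sketch. The function name `updateTasks`, the `useResearch` flag, and the model IDs are hypothetical, though Perplexity's API is genuinely OpenAI-compatible:

```javascript
// Hypothetical sketch of the provider-selection flow described above.
// Names (updateTasks, useResearch) and model IDs are illustrative.
import OpenAI from 'openai';
import { Anthropic } from '@anthropic-ai/sdk';

// Perplexity exposes an OpenAI-compatible API, so the OpenAI client can be
// pointed at it whenever the key is present.
const perplexity = process.env.PERPLEXITY_API_KEY
  ? new OpenAI({
      apiKey: process.env.PERPLEXITY_API_KEY,
      baseURL: 'https://api.perplexity.ai'
    })
  : null;

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function updateTasks(prompt, { useResearch = false } = {}) {
  // Use Perplexity only when research mode is requested AND the key exists.
  if (useResearch && perplexity) {
    const res = await perplexity.chat.completions.create({
      model: 'sonar-pro', // illustrative model name
      messages: [{ role: 'user', content: prompt }]
    });
    return res.choices[0].message.content;
  }

  // Otherwise fall back to Claude.
  const res = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-latest', // illustrative model name
    max_tokens: 4000,
    messages: [{ role: 'user', content: prompt }]
  });
  return res.content[0].text;
}
```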
**Core Logic Refactoring and Improvements:**
- Refactored the dependency-handling function to use task IDs instead of dependency IDs, ensuring consistency and clarity in dependency management.
- Implemented a new function that rigorously checks for both circular dependencies and self-dependencies within tasks, improving task relationship integrity (see the sketch after this list).
- Enhanced UI elements:
  - Refactored the status display to incorporate icons for different task statuses and a color-mapping object, improving the visual representation of task status.
  - Updated the complexity display to show colored complexity scores with emojis, providing a more intuitive and visually appealing representation of task complexity.
- Refactored the task data structure creation and validation process:
  - Updated the JSON schema for task files to reflect a more streamlined and efficient task structure.
  - Implemented task model classes for better data modeling and type safety.
  - Improved file system operations for task data management.
  - Developed robust validation functions and an error handling system to ensure data integrity and application stability.
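
A minimal sketch of such a dependency check, assuming tasks shaped like `{ id, dependencies: [...] }` (the function and field names are illustrative, not the actual implementation):

```javascript
// Illustrative sketch: detect self-dependencies and circular dependencies
// with a depth-first traversal over the dependency graph.
function findDependencyCycles(tasks) {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  const issues = [];

  // Self-dependency is the trivial cycle; report it directly.
  for (const task of tasks) {
    if ((task.dependencies || []).includes(task.id)) {
      issues.push({ type: 'self', taskId: task.id });
    }
  }

  // DFS with a path set to detect longer cycles.
  const visited = new Set();
  function visit(id, path) {
    if (path.has(id)) {
      issues.push({ type: 'circular', chain: [...path, id] });
      return;
    }
    if (visited.has(id)) return;
    visited.add(id);
    path.add(id);
    for (const dep of byId.get(id)?.dependencies || []) {
      if (dep === id) continue; // already reported as a self-dependency
      visit(dep, path);
    }
    path.delete(id);
  }
  for (const task of tasks) visit(task.id, new Set());
  return issues;
}
```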
**Testing Guidelines Implementation:**
- Implemented guidelines for writing testable code when developing new features, promoting a test-driven development approach.
- Added testing requirements and best practices for unit, integration, and edge case testing to ensure comprehensive test coverage.
- Updated the development workflow to mandate writing tests before proceeding with configuration and documentation updates, reinforcing the importance of testing throughout the development lifecycle.
This commit collectively enhances the Task Master CLI's reliability, functionality, and developer experience through improved testing practices, AI-powered research capabilities, and a more robust and maintainable codebase.
---
description: Guidelines for implementing and maintaining tests for Task Master CLI
globs: "**/*.test.js,tests/**/*"
---

# Testing Guidelines for Task Master CLI

## Test Organization Structure

- **Unit Tests**
  - Located in `tests/unit/`
  - Test individual functions and utilities in isolation
  - Mock all external dependencies
  - Keep tests small, focused, and fast
  - Example naming: `utils.test.js`, `task-manager.test.js`

- **Integration Tests**
  - Located in `tests/integration/`
  - Test interactions between modules
  - Focus on component interfaces rather than implementation details
  - Use more realistic but still controlled test environments
  - Example naming: `task-workflow.test.js`, `command-integration.test.js`

- **End-to-End Tests**
  - Located in `tests/e2e/`
  - Test complete workflows from a user perspective
  - Focus on CLI commands as they would be used by users
  - Example naming: `create-task.e2e.test.js`, `expand-task.e2e.test.js`

- **Test Fixtures**
  - Located in `tests/fixtures/`
  - Provide reusable test data
  - Keep fixtures small and representative
  - Export fixtures as named exports for reuse
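
As an example, a fixture module consistent with the task shape used elsewhere in these guidelines might look like this (file name and contents are illustrative):

```javascript
// tests/fixtures/sample-tasks.js (illustrative)
// Named exports keep fixtures small, representative, and reusable across tests.
export const sampleTasks = {
  meta: { projectName: 'Test Project' },
  tasks: [
    { id: 1, title: 'Set up project', status: 'done', dependencies: [] },
    { id: 2, title: 'Add CLI commands', status: 'pending', dependencies: [1] }
  ]
};

export const emptyTasks = { meta: { projectName: 'Test Project' }, tasks: [] };
```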

## Test File Organization

```javascript
// 1. Imports
import { jest } from '@jest/globals';

// 2. Mock setup (MUST come before importing the modules under test)
jest.mock('fs');
jest.mock('@anthropic-ai/sdk');
jest.mock('../../scripts/modules/utils.js', () => ({
  CONFIG: {
    projectVersion: '1.5.0'
  },
  log: jest.fn()
}));

// 3. Import modules AFTER all mocks are defined
import { functionToTest } from '../../scripts/modules/module-name.js';
import { testFixture } from '../fixtures/fixture-name.js';
import fs from 'fs';

// 4. Set up spies on mocked modules (if needed)
const mockReadFileSync = jest.spyOn(fs, 'readFileSync');

// 5. Test suite with descriptive name
describe('Feature or Function Name', () => {
  // 6. Setup and teardown (if needed)
  beforeEach(() => {
    jest.clearAllMocks();
    // Additional setup code
  });

  afterEach(() => {
    // Cleanup code
  });

  // 7. Grouped tests for related functionality
  describe('specific functionality', () => {
    // 8. Individual test cases with clear descriptions
    test('should behave in expected way when given specific input', () => {
      // Arrange - set up test data
      const input = testFixture.sampleInput;
      mockReadFileSync.mockReturnValue('mocked content');

      // Act - call the function being tested
      const result = functionToTest(input);

      // Assert - verify the result
      expect(result).toBe(expectedOutput);
      expect(mockReadFileSync).toHaveBeenCalledWith(expect.stringContaining('path'));
    });
  });
});
```

## Jest Module Mocking Best Practices

- **Mock Hoisting Behavior**
  - Jest hoists `jest.mock()` calls to the top of the file, even above imports
  - Always declare mocks before importing the modules being tested
  - Use the factory pattern for complex mocks that need access to other variables

```javascript
// ✅ DO: Place mocks before imports
jest.mock('commander');
import { program } from 'commander';

// ❌ DON'T: Define variables and then try to use them in mocks
const mockFn = jest.fn();
jest.mock('module', () => ({
  func: mockFn // This won't work due to hoisting!
}));
```

- **Mocking Modules with Function References**
  - Use `jest.spyOn()` after imports to create spies on mock functions
  - Reference these spies in test assertions

```javascript
// Mock the module first
jest.mock('fs');

// Import the mocked module
import fs from 'fs';

// Create spies on the mock functions
const mockExistsSync = jest.spyOn(fs, 'existsSync').mockReturnValue(true);

test('should call existsSync', () => {
  // Call function that uses fs.existsSync
  const result = functionUnderTest();

  // Verify the mock was called correctly
  expect(mockExistsSync).toHaveBeenCalled();
});
```

- **Testing Functions with Callbacks**
  - Get the callback from your mock's call arguments
  - Execute it directly with test inputs
  - Verify the results match expectations

```javascript
jest.mock('commander');
import { program } from 'commander';
import { setupCLI } from '../../scripts/modules/commands.js';

const mockVersion = jest.spyOn(program, 'version').mockReturnValue(program);

test('version callback should return correct version', () => {
  // Call the function that registers the callback
  setupCLI();

  // Extract the callback function
  const versionCallback = mockVersion.mock.calls[0][0];
  expect(typeof versionCallback).toBe('function');

  // Execute the callback and verify results
  const result = versionCallback();
  expect(result).toBe('1.5.0');
});
```

## Mocking Guidelines

- **File System Operations**

```javascript
import mockFs from 'mock-fs';

beforeEach(() => {
  mockFs({
    'tasks': {
      'tasks.json': JSON.stringify({
        meta: { projectName: 'Test Project' },
        tasks: []
      })
    }
  });
});

afterEach(() => {
  mockFs.restore();
});
```

- **API Calls (Anthropic/Claude)**

```javascript
import { Anthropic } from '@anthropic-ai/sdk';

jest.mock('@anthropic-ai/sdk');

beforeEach(() => {
  Anthropic.mockImplementation(() => ({
    messages: {
      create: jest.fn().mockResolvedValue({
        content: [{ text: 'Mocked response' }]
      })
    }
  }));
});
```

- **Environment Variables**

```javascript
const originalEnv = process.env;

beforeEach(() => {
  jest.resetModules();
  process.env = { ...originalEnv };
  process.env.MODEL = 'test-model';
});

afterEach(() => {
  process.env = originalEnv;
});
```

## Testing Common Components

- **CLI Commands**
  - Mock the action handlers and verify they're called with correct arguments
  - Test command registration and option parsing
  - Use `commander` test utilities or custom mocks

- **Task Operations**
  - Use sample task fixtures for consistent test data
  - Mock file system operations
  - Test both success and error paths

- **UI Functions**
  - Mock console output and verify correct formatting
  - Test conditional output logic
  - When testing strings with emojis or formatting, use `toContain()` or `toMatch()` rather than exact `toBe()` comparisons (see the sketch below)
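
For example, a UI test might capture console output and assert loosely on its content; `displayTaskStatus` and its module path are hypothetical here:

```javascript
// Illustrative UI output test; displayTaskStatus is a hypothetical function.
import { jest } from '@jest/globals';
import { displayTaskStatus } from '../../scripts/modules/ui.js'; // hypothetical path

test('should include the task title and status in output', () => {
  const logSpy = jest.spyOn(console, 'log').mockImplementation(() => {});

  displayTaskStatus({ id: 1, title: 'Set up project', status: 'done' });

  // Join all console.log calls into one string for loose matching.
  const output = logSpy.mock.calls.map((args) => args.join(' ')).join('\n');

  // toContain() tolerates emojis and ANSI color codes; toBe() would not.
  expect(output).toContain('Set up project');
  expect(output).toContain('done');

  logSpy.mockRestore();
});
```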

## Test Quality Guidelines

- ✅ **DO**: Write tests before implementing features (TDD approach when possible)
- ✅ **DO**: Test edge cases and error conditions, not just happy paths
- ✅ **DO**: Keep tests independent and isolated from each other
- ✅ **DO**: Use descriptive test names that explain the expected behavior
- ✅ **DO**: Maintain test fixtures separate from test logic
- ✅ **DO**: Aim for 80%+ code coverage, with critical paths at 100%
- ✅ **DO**: Follow the mock-first-then-import pattern for all Jest mocks

- ❌ **DON'T**: Test implementation details that might change
- ❌ **DON'T**: Write brittle tests that depend on specific output formatting
- ❌ **DON'T**: Skip testing error handling and validation
- ❌ **DON'T**: Duplicate test fixtures across multiple test files
- ❌ **DON'T**: Write tests that depend on execution order
- ❌ **DON'T**: Define mock variables before `jest.mock()` calls (they won't be accessible due to hoisting)
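
To make the 80% bar enforceable rather than aspirational, Jest's `coverageThreshold` option can fail the run when coverage drops. An illustrative `jest.config.js` excerpt (the numbers come from the guideline above; the rest is an assumption about the project's config):

```javascript
// jest.config.js (illustrative excerpt)
export default {
  collectCoverage: true,
  coverageDirectory: 'coverage',
  // Fail the test run if global coverage drops below 80%.
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  }
};
```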

## Running Tests

```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage reporting
npm run test:coverage

# Run a specific test file
npm test -- tests/unit/specific-file.test.js

# Run tests matching a pattern
npm test -- -t "pattern to match"
```

## Troubleshooting Test Issues

- **Mock Functions Not Called**
  - Ensure mocks are defined before imports (Jest hoists `jest.mock()` calls)
  - Check that you're referencing the correct mock instance
  - Verify the import paths match exactly

- **Unexpected Mock Behavior**
  - Clear mocks between tests with `jest.clearAllMocks()` in `beforeEach`
  - Check mock implementation for conditional behavior
  - Ensure mock return values are correctly configured for each test

- **Tests Affecting Each Other**
  - Isolate tests by properly mocking shared resources
  - Reset state in `beforeEach` and `afterEach` hooks (see the sketch below)
  - Avoid global state modifications
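
A typical isolation pattern combines both hooks; in this sketch, `createFreshState` is a hypothetical per-test factory:

```javascript
import { jest } from '@jest/globals';

let tempState;

beforeEach(() => {
  jest.clearAllMocks();           // wipe recorded calls on every mock
  tempState = createFreshState(); // hypothetical factory for per-test state
});

afterEach(() => {
  tempState = undefined; // drop references so no state leaks between tests
});
```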

See [tests/README.md](mdc:tests/README.md) for more details on the testing approach.

Refer to [jest.config.js](mdc:jest.config.js) for Jest configuration options.